a collection of technical fixes and other random stuff



Speeding up SQL Server for millions of records

Recently I've been required to insert 6 million database records into the latest version of one of our products.

Actually, it's a lot more than that, because it is 6 million business objects, which of course tend to have their related lookup tables, so the real figure is probably much higher.

The problem I have is that, for reliable data, the information needs to be inserted through the interface to guarantee it is like a production environment, and this is slow... especially once you get over 1 million records in your table.


As an experienced database developer, my first thought was to look at Profiler and gather some information on improving the performance with code tweaks or better indexing. Being an almost mature product, there was little to do here, but a few changes to the indexes and stored procedures helped a bit. This gave me about a 50% speed improvement, but it didn't last for long.

Soon the process of inserting records, and the background logic doing lookups to check for duplicates etc., was slowing down to three or four records per second, which was going to take forever. At 2 million records, it was clear that this needed a different approach.


To rub salt into the wound, there were other factors stopping me from doing my task.

  • I had a brilliant load-testing app I had written that could reduce call time/latency to the website by doing all this multithreaded. Unfortunately the code wasn't very thread-safe, and performance got worse with multiple threads due to all the locking.
  • My super-fast servers from last year had been hijacked by IT to run SharePoint.
  • I could have run several databases at low volumes and pieced them all together. This involved a big manual manipulation of the data (a risk) and also... I didn't have time. At tables of this size, even SQL Server's interface was struggling.


The solution was based on previous tests where we found a super-fast SAN storage system sped up performance significantly. However, I didn't have access to one at such short notice.

I decided to improvise and create a RAM disk to store the SQL database on, and hopefully speed things up. In theory this should be much faster at seeking, reading and writing than a standard mechanical hard drive.

I used a PC with 12GB of RAM and created a RAM drive.

I downloaded a trial version of this software: http://www.superspeed.com/servers/ramdisk.php and created an 8GB drive within 5 mins.

I then 'detached' my database using the SQL Server interface and moved the .mdf and .ldf files to the RAM drive.

I then re-attached the database, making sure SQL Server had permission to the new files and their storage location.
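The same detach/move/re-attach can also be done in T-SQL rather than through the interface. A minimal sketch, assuming a hypothetical database called MyProduct and a RAM drive mounted as R:\ (both names are placeholders, not from my actual setup):

```sql
USE master;

-- Detach the database so its files can be moved.
EXEC sp_detach_db @dbname = N'MyProduct';

-- (Copy MyProduct.mdf and MyProduct_log.ldf to the RAM drive here.)

-- Re-attach from the new location.
CREATE DATABASE MyProduct
    ON (FILENAME = N'R:\MyProduct.mdf'),
       (FILENAME = N'R:\MyProduct_log.ldf')
    FOR ATTACH;
```

Make sure the SQL Server service account has read/write permissions on the new location before attaching, or the attach will fail.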

After a database shrink and an index rebuild on all tables, I restarted my task.
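The shrink and rebuild step can be scripted; a sketch, again assuming the hypothetical database name MyProduct (sp_MSforeachtable is an undocumented but widely used helper):

```sql
-- Shrink the database files to reclaim free space.
DBCC SHRINKDATABASE (N'MyProduct');

-- Rebuild every index on every user table.
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';
```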

Insertion continued at 50-100 inserts per second, which was much faster than before. And the slowdown has stopped, or at least the rate is degrading far more slowly than before.


You can also move the system tempdb file to the RAM disk, which can speed things up further. However, bear in mind that if you need to do any large data deletion or modification across large numbers of rows, you may find tempdb grows and fills your RAM disk while trying to do this. For this reason, I left this out on my final run.
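If you do want to try it, moving tempdb is a file relocation like any other; a sketch, assuming the R:\ RAM drive again (the change only takes effect after the SQL Server service restarts):

```sql
-- Point tempdb's data and log files at the RAM drive.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = N'R:\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = N'R:\templog.ldf');
```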


Don't forget that a large database will probably need several maintenance tasks carried out on it over growth like this, so it may be handy to set up an hourly index rebuild task while your insertion run is executing.
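One way to schedule that is a SQL Server Agent job. A sketch of what that could look like; the job name, database name, and the use of the undocumented sp_MSforeachtable are all assumptions for illustration:

```sql
USE msdb;

-- Hypothetical job that rebuilds all indexes every hour.
EXEC sp_add_job @job_name = N'Hourly index rebuild';

EXEC sp_add_jobstep
    @job_name = N'Hourly index rebuild',
    @step_name = N'Rebuild all indexes',
    @database_name = N'MyProduct',   -- assumed database name
    @command = N'EXEC sp_MSforeachtable ''ALTER INDEX ALL ON ? REBUILD''';

EXEC sp_add_jobschedule
    @job_name = N'Hourly index rebuild',
    @name = N'Every hour',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 8,       -- repeat on an hourly basis
    @freq_subday_interval = 1;

EXEC sp_add_jobserver @job_name = N'Hourly index rebuild';
```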

Annoyingly, this should all be ready just before our much more efficient mass-insertion feature passes testing.

If you have the resources, I'd try a solid-state drive or SAN system instead, as you must remember your data will be lost as soon as the power goes off.

Don't forget to back it up once you've finished.
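The backup should of course go to durable storage, not the RAM drive itself. A sketch, with an assumed database name and backup path:

```sql
-- Back up the database to a physical disk, not the RAM drive.
BACKUP DATABASE MyProduct
    TO DISK = N'D:\Backups\MyProduct.bak'
    WITH INIT;
```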

SQL management interface locked the database and remote logins

Today there was a problem.

The site stopped working, no one could log into the interface, and Remote Desktop onto the server failed...

The problem was someone doing a backup of the database server. They had connected over Remote Desktop and started a backup; halfway through, an error message popped up, and the system halted while SQL Server waited for the response.

Every now and then, something you do on MS SQL Server throws up an error message. I'd advise always acting on it in a timely manner, as sometimes it keeps tables in the database locked while waiting for a reply.

Keep focused, and don't do backups during critical times of the day. :)


If anyone knows why this happens, I'd love to know, so do leave a comment.


Reducing transaction log growth in Microsoft SQL Server 2005/2008

A situation that you may find yourself in when developing or managing a database with high throughput is that the transaction log tends to grow faster than you would like.


The easiest way out is to switch the database to the simple recovery model. But this might not be an option... for instance, you may need to ensure that you can roll back data and provide full recovery capability.
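For reference, the switch is a one-liner (MyProduct is a hypothetical database name):

```sql
-- Switch to the SIMPLE recovery model; the log is truncated
-- automatically at checkpoints, but point-in-time restore is lost.
ALTER DATABASE MyProduct SET RECOVERY SIMPLE;
```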

The second option is to increase the frequency of your full backups and clear your transaction log out more often. This may keep you running for a little longer, but can be a lot more work.
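Under the full recovery model, it's the log backups between full backups that let the log space be reused; a sketch, with an assumed database name and path:

```sql
-- Backing up the log marks its inactive portion as reusable,
-- which keeps the log file from growing between full backups.
BACKUP LOG MyProduct
    TO DISK = N'D:\Backups\MyProduct_log.trn';
```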

The third option is to look at your Stored Procedure code and reduce unwanted transactions.

Transactions - good or bad

There's really no argument that transactions are a bad thing on the whole. Without them, databases would be as reliable as a bin-man on strike. But you can have too much of a good thing on busy, data-critical databases.

A developer will go around ensuring all essential logic is encased in a try-catch and enclosed in BEGIN TRAN / COMMIT TRAN / ROLLBACK TRAN statements.
This can be bad if all the code is doing is a SELECT statement.
Transactions are only needed when something gets UPDATED, DELETED or INSERTED, or the database gets modified in any other way.

Step 1: Remove explicitly declared transactions where they're not needed.
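A contrived example of the pattern to look for (table and column names are made up); the transaction around a pure SELECT adds log activity for no benefit:

```sql
-- Before: a read-only query needlessly wrapped in a transaction.
BEGIN TRAN;
SELECT CustomerId, Name FROM dbo.Customers WHERE Region = 'UK';
COMMIT TRAN;

-- After: the same read needs no explicit transaction at all.
SELECT CustomerId, Name FROM dbo.Customers WHERE Region = 'UK';
```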

Transactions where you don't expect them, or didn't think to look.

Next comes a more sneaky culprit: temporary tables. These are often added to help process some complex logic or extract data from an XML file.
The act of creating a temporary table and doing INSERT/UPDATE statements against it creates a transaction for each one.

Step 2: Look for stored procedures that should just be doing SELECT statements and check for temporary tables. If they're being used, try to remove them completely or replace them with table variables. These won't modify the database structure and therefore reduce the transactions.
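The swap is usually mechanical; a sketch with made-up table and column names:

```sql
-- Before: a temp table created and populated inside the procedure.
CREATE TABLE #Totals (CustomerId int, Total money);
INSERT INTO #Totals
SELECT CustomerId, SUM(Amount) FROM dbo.Orders GROUP BY CustomerId;

-- After: a table variable doing the same job.
DECLARE @Totals TABLE (CustomerId int, Total money);
INSERT INTO @Totals
SELECT CustomerId, SUM(Amount) FROM dbo.Orders GROUP BY CustomerId;
```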

More transactions means fewer!!!?

Table variables may not be possible in your solution, or you may not have time to re-factor your code.

An alternative method is to re-instate or add explicit transactions enclosing the whole stored procedure, and then perform a deliberate ROLLBACK to clear the unnecessary transactions from the log.
This essentially doesn't change your code much, but gives you control over the many unexpected additional transactions that are present in your code.

How to spot them

Use SQL Profiler and add the "Transactions" events to those you are monitoring. You will find your stored procedures execute, but there are many additional entries without any SQL text displayed. These are the hidden transactions being created.

Also use this stored procedure to help find uses of temporary tables. It searches the stored procedure definitions for some text:


CREATE PROCEDURE [dbo].[Find_Text_In_StoredProcedure]
   @StringToSearch varchar(100)
AS
BEGIN
   SET @StringToSearch = '%' + @StringToSearch + '%'

   SELECT DISTINCT SO.Name
   FROM sysobjects SO (NOLOCK)
   INNER JOIN syscomments SC (NOLOCK) ON SO.Id = SC.Id
   WHERE SO.Type = 'P'
     AND SC.Text LIKE @StringToSearch
END
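For example, to hunt for temp-table creation across all procedures (the search string is just one plausible pattern; temp tables can also be created via SELECT ... INTO #name):

```sql
EXEC dbo.Find_Text_In_StoredProcedure @StringToSearch = 'CREATE TABLE #';
```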