How Do I Keep My Argent SQL Backend Running Smoothly?

Here is a cookbook of what to do.

While this applies to the Argent backend, it is applicable to just about any SQL Server database.

There are lots of performance counters you can watch.

In Argent’s experience with over 2,000 Customers worldwide, here’s what we suggest.

SQLServer : Memory Manager : Total Server Memory

SQLServer : Memory Manager : Target Server Memory

These two counters are the best way to know if SQL Server has enough RAM to operate properly.

Total Server Memory indicates the amount of RAM currently in use by the SQL Server service. This value includes all the buffers committed to SQL Server.

Target Server Memory indicates how much RAM the SQL Server Service would ideally like to have.

Generally, if the Total Server Memory counter is less than the Target Server Memory, then SQL Server has enough memory to run efficiently.

If, over time, the Total Server Memory value is equal to or higher than the Target Server Memory, the SQL Server Service may be under serious memory pressure and an increase in RAM is needed.
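The comparison above can be sketched as a simple rule. This is an illustration only; the function name and sample values are hypothetical and not part of any Argent or Microsoft tooling:

```python
# Hypothetical sketch of the Total vs. Target Server Memory rule described
# above. Values are in KB, as PerfMon reports them.

def memory_pressure(total_server_memory_kb: int, target_server_memory_kb: int) -> str:
    """Total at or above Target, sustained over time, suggests memory pressure."""
    if total_server_memory_kb >= target_server_memory_kb:
        return "possible memory pressure - consider adding RAM"
    return "OK - SQL Server has room to grow"

print(memory_pressure(3_900_000, 4_000_000))  # OK - SQL Server has room to grow
print(memory_pressure(4_000_000, 4_000_000))  # possible memory pressure
```

Remember this rule only matters over time; a single sample where the two counters happen to be equal is not conclusive.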

By the way, don’t just add physical RAM to the server. (You didn’t think it was going to be that easy, did you?)

You need to have the Server Administrator look at the properties of the SQL Server (using SQL Server Enterprise Manager in SQL Server 2000 or SQL Management Studio in SQL Server 2005).

Check that the service is configured to use the newly installed RAM once it is added. If SQL Server is configured to use a fixed amount of memory, adding physical RAM is useless – you need to raise the limit once the memory has been installed.
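A minimal sketch of that check, assuming you have already read the configured memory cap (in MB) from the server properties. The function name and sample figures are illustrative:

```python
# SQL Server's default "max server memory" setting is 2,147,483,647 MB,
# which effectively means "no fixed limit".
DEFAULT_MAX_MB = 2_147_483_647

def new_ram_usable(configured_max_mb: int, installed_ram_mb: int) -> bool:
    """Newly installed RAM only helps if the configured cap does not fence it off."""
    return configured_max_mb >= installed_ram_mb

print(new_ram_usable(4_096, 8_192))          # False - raise the limit first
print(new_ram_usable(DEFAULT_MAX_MB, 8_192)) # True
```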

SQLServer : Access Methods : Page Splits/Sec

Page splitting causes excessive I/O on your SQL Server machine. A page split generally occurs when a page of SQL Server data becomes full and its contents are then split between the current page and a newly allocated page.

Occasional page splitting is a normal condition in SQL Server, but when this particular counter is high, excessive disk I/O is generally the result – with subsequent slow performance to the clients of the SQL Server, in this case the Argent screens, web products and services.

If this counter exceeds 100 more than 25% of the time, you have a problem. The solution is to lower the FILL FACTOR value in SQL Server Properties – Database Settings, so each page is written with free space left over for new rows.
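The "exceeds 100 more than 25% of the time" rule can be expressed over a set of sampled counter values. A hypothetical sketch, with the thresholds taken from the text above:

```python
# Evaluate sampled Page Splits/Sec values against the rule of thumb:
# a problem exists if more than 25% of samples exceed 100 splits per second.

def page_split_problem(samples: list[float]) -> bool:
    over_threshold = sum(1 for s in samples if s > 100)
    return over_threshold / len(samples) > 0.25

# 30 of 100 samples over 100 splits/sec -> 30% of the time -> problem
print(page_split_problem([150] * 30 + [50] * 70))  # True
# 20 of 100 samples over -> 20% of the time -> acceptable
print(page_split_problem([150] * 20 + [50] * 80))  # False
```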

SQLServer : SQL Statistics : Batch Requests/Sec

SQLServer : Databases: Transactions/Sec

Getting a feel for how busy SQL Server actually is can be complicated. A good starting point is Batch Requests/Sec.

A DBA may state that Transactions/Sec is a more accurate measurement, but this is incorrect because Transactions/Sec only measures activity that is inside an actual transaction and not all activity. This can produce seriously skewed results.

Batch Requests/Sec measures all SQL Server activity.

In general, a Batch Requests/Sec value over 1,500 shows a very busy SQL Server environment.

But this is only a general guide: if the server is running on a modern SMP machine sitting on a gigabit network segment, this figure can be as high as 3,000; a typical 100 megabit network card can handle about 4,000 batch requests per second.

If the machine is a uni-processor and the counter is over 1,500, tell your boss you need a bigger machine…
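Putting the thresholds above together, a hypothetical classifier might look like the following. The function name and return strings are illustrative only:

```python
# Classify a sampled Batch Requests/Sec value using the rules of thumb above.

def busy_level(batch_requests_per_sec: float, uniprocessor: bool = False) -> str:
    if uniprocessor and batch_requests_per_sec > 1500:
        return "need a bigger machine"
    if batch_requests_per_sec > 1500:
        return "very busy"
    return "normal"

print(busy_level(2000, uniprocessor=True))  # need a bigger machine
print(busy_level(2000))                     # very busy
print(busy_level(800))                      # normal
```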

SQLServer : Databases : Log Flushes/Sec

SQL Server writes transaction logging information to a data cache before the information is written to the actual transaction log file.

A log flush is forced any time data is transferred from the cache to the physical log file; this happens every time a transaction is completed.

If you’re using the Argent Data Consolidator you may notice this value is very high when a large amount of data is being consolidated through the Transfer Engine.

If this is true, I/O can be significantly reduced by changing the value for the total number of records archived into SQL Server for each batch transaction.

You should consider changing the value for Process SQL Update In Batch Of Records from 100 to 300 in your Transfer Engine settings. If the Transfer Engine is local to the SQL Server and plenty of RAM is available, this value can be even higher.

Changing this value simply tells the Transfer Engine it can push more data to the SQL Server database with each transaction. This results in fewer transactions and, therefore, fewer log flushes.
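The arithmetic behind this is straightforward. Assuming roughly one log flush per batch transaction (a simplification for illustration), the flush count falls in direct proportion to the batch size:

```python
import math

# Approximate number of log flushes for an archive run, assuming one flush
# per batch transaction (a simplifying assumption for illustration).

def flushes_for_archive(total_records: int, batch_size: int) -> int:
    return math.ceil(total_records / batch_size)

# Archiving 30,000 records:
print(flushes_for_archive(30_000, 100))  # 300 flushes at the default batch size
print(flushes_for_archive(30_000, 300))  # 100 flushes at the larger batch size
```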

SQLServer : Buffer Manager : Buffer Cache Hit Ratio

This counter is an indication of how often SQL Server is able to get data from the RAM buffer instead of the hard disk. A higher ratio is an indication of better performance.

A good target for the Buffer Cache Hit Ratio is anything above 85 percent. Anything less than 85 percent indicates the server needs more RAM.

Remember the counter is an accumulation from the time the SQL Server Service is started. The SQL Server Service should be restarted and watched for several hours before an accurate picture of the Buffer Cache Hit Ratio can be gathered.
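Because the counter accumulates from service start, an alternative to restarting the service is to compute the ratio over a window, assuming you can sample the underlying hit and lookup counts at two points in time. The function and figures below are a hypothetical sketch:

```python
# Compute the hit ratio over an interval from two samples of cumulative
# hit and lookup counts, rather than the ratio since service start.

def interval_hit_ratio(hits_start: int, lookups_start: int,
                       hits_end: int, lookups_end: int) -> float:
    return (hits_end - hits_start) / (lookups_end - lookups_start)

# 80 hits out of 100 lookups during the window -> 0.8
print(interval_hit_ratio(900, 1_000, 980, 1_100))
```

A windowed ratio reflects current behaviour; the cumulative counter can mask a recent drop behind months of good history.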

Memory : Pages Per Second

This value should always be less than 100 on a production SQL Server.

This particular counter is one of the most overlooked values in the entire Windows operating system. If the value for Pages Per Second is above 100 on a regular basis, the operating system itself may not have enough RAM.

Check the properties of the SQL Server and ensure SQL Server is not using 100 percent of the RAM on the server. Ensure enough RAM is allocated to Windows for other tasks.

A quick and dirty way of measuring this is to simply use Task Manager to view the value for Available Physical Memory on the Performance tab.
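The "above 100 on a regular basis" rule can also be reduced to code. A hypothetical sketch using the average of sampled values as a crude proxy for "regular basis":

```python
# Flag sustained paging: average of sampled Memory\Pages/Sec values above 100
# suggests the operating system itself is short of RAM.

def paging_pressure(pages_per_sec_samples: list[float]) -> bool:
    average = sum(pages_per_sec_samples) / len(pages_per_sec_samples)
    return average > 100

print(paging_pressure([120, 130, 140]))  # True - sustained paging
print(paging_pressure([10, 20, 30]))     # False - healthy
```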

Do not confuse this counter with Page Faults Per Second.

A Page Fault means the required page is not currently paged in, but it may still be in the discarded page list in memory. If the page is in the discarded page buffer (that is, it’s in another buffer still in RAM), a physical page-in from disk is not needed and the Pages Per Second counter is not incremented.

Is Defragmentation Necessary?

If ODBC timeouts are an issue and reports are slow to display in the Argent GUI, fragmentation may be the cause.

You can buy a third-party defragger.

Because SQL Server data files and transaction logs tend to be very large, extensive fragmentation may be a real issue. I/O access can be slowed significantly.

Fragmentation is particularly evident when range scans and table scans take place, as these operations read a large amount of data. If the drive head has to move all over the drive platter to locate the required data, time is wasted and timeouts can occur.

Remember to schedule defragmentation at night when the server is not busy, but don’t run it while backups or SQL Server maintenance jobs are running.

Don’t run the defragger when Automatic Reports are being distributed by Argent.

RAID arrays make fragmentation more of a problem.