SQL Server Performance Monitoring Guidelines


Abstract
This chapter discusses the most important counters to monitor (in both Windows NT and SQL Server), considers time intervals, and recommends a long-term strategy for monitoring performance. A summary section lists which counters to consider for particular problems.



PERFORMANCE MONITOR 

Performance Monitor collects data from counters that measure system activity, such as memory use. Performance Monitor can show you the data in graphical format in real time, or you can save the data to log files. Pay particular attention to the discussion of log files in this section, because you will also use log files for long-term performance monitoring, which we will discuss in detail later. Working with a log file can be difficult to learn on your own because the options are not intuitive and are hidden on different screens.

You can choose between two Performance Monitors: one in the Administrative Tools group and one in the SQL Server group. They are the same basic program, but you need to run the one in the SQL Server group because it automatically loads the SQL Server-related counters. You run this version of the program with the following command: Perfmon.exe C:\Mssql\Binn\Sqlctrs.pmc, where the .pmc file is the Performance Monitor counter file that contains the SQL counters. You can write applications that provide your own counters, and you can modify the new system stored procedures called sp_user_counter1 through sp_user_counter10 and track them, too.

When you run the program from the SQL Server group, Performance Monitor starts with five SQL Server counters already loaded at the bottom of the window. The five counters are

·     Cache Hit Ratio

·     I/O — Transactions per second

·     I/O — Page Reads per second

·     I/O — Single Page Writes per second

·     User Connections

These counters will be explained in more detail later, but first, let’s learn how to navigate in Performance Monitor.

Changing Menu Options
The first set of buttons on the toolbar at the top of the window corresponds to the four views of the monitor: chart, alert, log, and report views. You can get to the same options using the View menu.

The menu options change depending upon which view is currently active. Without going into too much detail about the View menu options, their basic purpose is to let you set up and save standard viewing templates for each of the four views.

Understanding Counters
Windows NT lets you watch the performance of the system by “counting” the activity associated with any of its objects. Examples of objects in Windows NT are processors, disk drives, and processes. Each object has specific counters associated with it; for example, the % User Time counter is associated with a CPU or processor to designate what percent of the CPU is taken up by user programs (as opposed to system processes). This chapter gives you enough information to help you choose the right counters at the right time.

SQL Server includes many predefined counters, most of which you aren’t likely to use except in special cases. It can be difficult to know which counters are the basic ones to watch. If you have chosen the SQL Server Performance Monitor, several counters have been set up as default counters, such as Cache Hit Ratio and User Connections. You can create your own defaults by creating a .pmc file.

The counters are hooks into the operating system and other programs, like SQL Server, that have been built into the software to let Performance Monitor get data. Data collection is performed efficiently so that the additional load on the system is minimized. Windows NT needs most of the information gathered for managing memory, processes, and threads, and Performance Monitor is a good program to display the results.

On the toolbar, the button next to the four view buttons is a large plus sign, which you use to add counters to monitor. Click the + button, and the Add to Chart dialog box appears. The first field, Computer, has a search button at the end of the field. You can click this button to bring up a list of all computers in your domain and choose a computer from the list, or you can type the name of a server you want to monitor. To monitor other servers, you need Windows NT administrative privileges on them.

In the next field, Object, you choose an object to monitor. The default is Processor, and the default counter shown in the field below is % Processor Time. The box on the right is the Instance. Any particular resource may have more than one instance; that is, more than one of that particular resource — in this case, processors — may exist. A computer with a single processor (CPU) shows only instance 0; instance 3 would refer to the fourth CPU.

From the fields along the bottom, you can pick the color, scale, line width, and line style of the information that will be displayed about the counter you are adding. These options let you choose a different look for each counter you add to the window. The only display choice that may need explanation is scale. The scale field is a multiplier that helps you fit the values on the screen in the range you have set on the y-axis, which by default is 0–100. For example, a counter whose values stay below 1.0 becomes readable at a scale of 100.

After you choose the Object, Counter, and Instance you want to monitor and determine how you want the information to appear, click Add. The counter is added at the bottom of the list on the main window and starts graphing the next time your data is refreshed.

If you click the Explain button, a brief explanation of the counter you specified will appear. Sometimes, though, it uses abbreviations and acronyms that require further research, unless you are a Windows NT internals guru.

Setting Up Alerts
An alert is the warning the computer sends you when a resource such as memory or the network becomes a bottleneck. When an alert occurs, it is written to a log file, along with the date and time it occurred. The log file is a circular file, allowing at most 1,000 entries before it starts overwriting the oldest alerts. The alert can also be written to the Windows NT event log.

To add a new alert, click the second button on the toolbar, then click the + button. The Add to Alert dialog box appears. Choose the counters you want to create alerts for, then click Add. For example, you can create an alert that fires when the Cache Hit Ratio drops below 85 percent.

Notice the Run Program option in the lower right portion of the screen. You can use it to execute a program when the alert occurs. For example, you can choose SQL Server — Log in the Object field, Log Space Used (%) for the Counter, and the database you want to monitor from the Instance list. When the log file for that database gets above 90 percent, you can execute a batch file that runs an ISQL script to dump the transaction log. In this way you can reduce your chances of running out of log space.
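
As a sketch of that approach (the server name, password, database name, and paths here are all hypothetical), the batch file could be a single isql call:

isql -S MYSERVER -U sa -P SaPassword -Q "DUMP TRANSACTION MyDatabase TO DISK = 'C:\Mssql\Backup\MyDatabase_log.dat'"

DUMP TRANSACTION backs up the log to the named disk file and truncates its inactive portion, so log space is freed and the alert stops firing once the dump completes.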

Starting Log Files
Learning how to establish log files is very important, because log files are a critical part of the long-term strategy recommended later in this chapter. It can be a bit confusing, so let’s go through the steps.

1.  Click the third button on the toolbar — View Output Log File Status. Notice that the Log File entry at the top is blank, the status is closed, the file size is zero, and the log interval is 15.00 (seconds).

2.  Click +, and the list of objects will appear. Select the ones you want to add to the log and click Add. If you hold down the Ctrl key while selecting, you can choose more than one object, and holding down Shift lets you highlight all the objects in a range. All counters in the objects you pick will be tracked in the log file. We will discuss what to monitor later.

3.  Now we need to specify a log file. From the Options menu, choose Log. The Log Options dialog box appears.

4.  This dialog box looks almost like the standard file dialog box, but it has two very important additions. At the bottom of the screen, the Update Time section shows the refresh interval. For short-term tracking, keep it at 15 seconds. For long-term tracking, set it at 300 seconds (5 minutes). The other important difference between this dialog box and the standard file name dialog box is the Start Log button. Nothing happens until you click this button to start collecting data. Once you do, the text of the button will change to Stop Log.

Type a log file name in the File Name box at the top. Then click Start Log.

5.  Click OK to close this dialog box, then minimize the window and let the log run for a while.

6.  Maximize the window and click the Stop Log button. Then switch to the Chart view by clicking the first button on the toolbar.

7.  From the Options menu, choose Data From. Select the log file you named earlier. You can then choose the counters you want to view from the log.

The best part about using log files is that you can view a few counters at a time to avoid overcrowding the window. You can also mix and match the counters you want to analyze at the same time. This feature is important because many of the counters depend on other counters.

Special Note: The log file does not do anything until you click the Start Log button in the Log Options dialog box (also available by choosing Log in the Options menu).


Reports
The fourth button on the toolbar, the Reports button, lets you print customized reports of the data collected in your log file. Experiment with the available reports when you have a chance; we won’t cover this option here.

TIME INTERVALS

The default refresh interval for Performance Monitor is one second. Every second, you get new information about your system’s performance. This interval is good for a very short-term examination of the system, but it can be a drain on the server. A five-second interval causes much less overhead, probably in the neighborhood of five percent extra activity. However, for long-term monitoring, 5 seconds produces a very large log file.

Setting the interval to 5 minutes creates a log file of reasonable size: a 15-second interval records 5,760 snapshots per day, while a 5-minute interval records only 288. The drawback is that an interval this large can mask performance peaks. However, because each entry in the log file stores the minimum, average, and maximum values for each counter you are tracking, you can discover the peaks with a little extra analysis. Five minutes is a good setting for long-term logging. You can always fire up another copy of Performance Monitor and look at one- to five-second intervals if you want a short-term peek into the system’s performance.

To determine the amount of drain on the system from Performance Monitor, shut down all the services and start the Monitor again. Add the CPU usage and watch it for about 30 seconds at the default interval of one second. Then change the interval to 0.1 seconds. Your CPU usage will jump dramatically. One odd observation is that the effect of changing from one second to 0.1 seconds is different on different computers, and it is different between Windows NT 4.0 and Windows NT 3.51. For example, when changing the interval on two 133 MHz computers — a laptop and a tower box — the tower machine has the better performance at the shorter interval, showing about 55 percent utilization, while the laptop shows about 60 percent utilization.

Special Note: The faster your refresh option, the more the drain on the system. The default one-second refresh interval creates less than 5 percent overhead on a single-processor machine. For multiprocessor machines, the overhead is negligible. With the refresh interval set to 0.01 seconds, Performance Monitor takes about 60 percent of the resources. At 10 seconds per refresh, the drain is almost too small to measure, even with a lot of counters turned on.

WHAT TO MONITOR

Now that you know how to use the program, let’s get to the section you’ve been waiting for: How do you know what to monitor? Of the hundreds of Windows NT counters and 50 or so SQL counters, how do you choose? Should you monitor everything? How long should you monitor the system?

Monitoring performance helps you perform two related tasks: identifying bottlenecks and planning for your future hardware and software needs (capacity planning). Learning about the important counters will help you identify potential bottlenecks. The strategy section later in this chapter will help you put together a general monitoring plan.

What do you want to monitor? Everything! Well, monitoring everything may be a good idea for a short period, but the results will show that many of the counters are always at or near zero; monitoring them all the time may be a waste of time and resources. You need to establish a baseline for your system. This baseline lets you know what results are normal and what results indicate a problem. Once you establish a baseline, you don’t need to track everything.

The key categories to monitor can be split into two major sections: Windows NT categories and SQL Server categories. Categories in this sense are groups of objects that contain counters.

·     Windows NT

o    Memory

o    Processor

o    Disk I/O

o    Network

·     SQL Server

o    Cache

o    Disk I/O

o    Log

o    Locks

o    Users

o    Other Predefined Counters

o    User-Defined Counters

When monitoring both categories of data, look for trends of high and low activity. For example, particular times during the day, certain days of the week, or certain weeks of the month might show more activity than others. After you identify highs and lows, try to redistribute the workload. These peaks and valleys are especially good to know when something new is added to the schedule. If the peak loads are causing problems, identify which things can be scheduled at a later time when the system is not so busy. Knowing the load patterns is also helpful when problems occur, so that you can re-run a particular job or report when the load is low.

Get to know your users — find out which reports they need first thing in the morning. Perhaps you can schedule these reports to run at night in a batch mode, instead of having users start them during a busy time.

Monitoring Windows NT
The purpose of monitoring the Windows NT categories is to answer one of two questions: “What resource is my bottleneck?” or “Do I see any upward usage trends that tell me what resource I might run low on first?” SQL Server 6.5 introduced several highwater marks, such as Max Tempdb space used, which make it easier to identify potential long-term problems.

Memory
The Memory: Pages/sec counter is the number of pages read from or written to disk when the system can’t find the page in memory. This page management process is referred to as paging. If the average value for this counter is 5 or more, you need to tune the system. If it is 10 or more, put tuning the server high on your priority list. Before SQL Server 6.0, the value for this counter was an important flag to tell you whether memory was the bottleneck. Now, with SQL Server’s parallel read-ahead feature, this counter will give you only an indication of how busy the read-ahead manager is. However, we will discuss other counters that are better at tracking the read-ahead manager. In other words, this counter may have been one of the most significant counters to track in the past, and it still is on machines without SQL Server, but better ones are available to track memory.

The Memory: Available Bytes counter displays the amount of free physical memory. If the value for this counter is consistently less than 10 percent of your total memory, paging is probably occurring. You have too much memory allocated to SQL Server and not enough to Windows NT.

Processor
Before we start talking about the counters in the processor category, it is important to know that Windows NT assigns certain responsibilities to certain processors if you have four or more CPUs. Processor 0 is the default CPU for the I/O subsystem. Network interface cards (NICs) are assigned to the remaining CPUs, starting from the highest-numbered CPU. If you have four processors and one NIC, that card is assigned to Processor 3. The next NIC gets Processor 2. Windows NT does a good job of spreading out processor use. You can also set which processors SQL Server uses. See Chapter 16, “Performance Tuning,” particularly the notes on the Affinity Mask, for more information about allocating processors.

You can monitor each processor individually or all the processors together. For monitoring individual processors, use the Processor: % Processor Time counter, one instance per CPU. This counter lets you see which processors are the busiest.

A better counter to monitor over the long term is the System: % Total Processor Time counter, which groups all the processors to tell you the average percentage of time that all processors were busy executing non-idle threads.

Who (or what) is consuming the CPU time? Is it the users, system interrupts, or other system processes? The Processor: Interrupts/sec counter will tell you if it is the system interrupts. A value of more than 1,000 indicates that you should get better network cards, disk controllers, or both. If the Processor: % Privileged Time is greater than 20 percent (of the total processor time) and Processor: % User Time is consistently less than 80 percent, then SQL Server is probably generating excessive I/O requests to the system. If your machine is not a dedicated SQL Server machine, make it so. If none of these situations is occurring, user processes are consuming the CPU. We will look at how to monitor user processes when we consider SQL Server-specific counters in the next section.

Disk I/O
As discussed in Chapter 16, “Performance Tuning,” having many smaller drives is better than having one large drive for SQL Server machines. Let’s say that you need 4 GB of disk space to support your application with SQL Server. Buy four 1-GB drives instead of one 4-GB drive. Even though the seek time is faster on the larger drive, you will still get a tremendous performance improvement by spreading files, tables, and logs among more than one drive.

Special Note: The single best performance increase on a SQL Server box comes from spreading I/O among multiple drives (adding memory is a close second).


Monitor the disk counters to see whether the I/O subsystem is the bottleneck, and if it is, to determine which disk is the culprit. The problem may be the disk controller board. The first thing to know about monitoring disk I/O is that to get accurate readings from the Physical Disk counters, you must go to a command prompt window and type DISKPERF -y, then reboot. This procedure turns on the operating system hooks into the disk subsystem. However, this setup also causes a small performance decrease of 3 to 5 percent, so you want to turn this on only periodically and only for a short period. Use the Diskperf -n command to turn it off, then restart your system. 

Track Physical Disk: % Disk Time to see how much time each disk is busy servicing I/O, including time spent waiting in the disk driver queue. If this counter is near 100 percent on a consistent basis, then the physical disk is the bottleneck. Do you rush out and buy another disk? Perhaps that is the best strategy if the other drives are also busy, but you have other options. You may get more benefit from buying another controller and splitting the I/O load between the different controllers. Find out what files or SQL Server tables reside on that disk, and move the busy ones to another drive. If the bottleneck is the system drive, split the virtual memory swap file to another drive, or move the whole file to a less busy drive. You should already have split the swap file, unless you only have one drive (which is very silly on a SQL Server machine).

LogicalDisk: Disk Queue Length and PhysicalDisk: Disk Queue Length can reveal whether particular drives are too busy. These counters track how many requests are waiting in line for the disk to become available. Values of less than 2 are good; if the value is any higher, it’s too high.

Network
Redirector: Read Bytes Network/Sec gives the actual rate at which bytes are being read from the network. Dividing this value by the value for the Redirector: Bytes Received/Sec counter gives the efficiency with which the bytes are being processed.

If this ratio is 1:1, your system is processing network packets as fast as it gets them. If this ratio is below 0.8, then the network packets are coming in faster than your system can process them. To correct this problem on a multiprocessor system, use the Affinity Mask and SMP Concurrency options in the SQL Configuration dialog box to allocate the last processor to the network card, and don’t let SQL Server use that processor. For example, if you have four CPUs, set the Affinity Mask to 7 (binary 0111) and SMP Concurrency to 3. This setup gives three CPUs to SQL Server and the fourth processor to the network card, which Windows NT assigns to that processor by default. If I/O is also a problem, set the Affinity Mask to 6 (binary 0110) and SMP Concurrency to 2, because Windows NT assigns the I/O subsystem to the first processor by default.

Monitoring SQL Server
The questions to ask yourself when monitoring the SQL Server categories are “Do I have the optimal configuration values for SQL Server?” and “Who is consistently using the most resources?”

If any of the counters considered in this section indicates a problem, the problem is related to SQL Server. If the problem is I/O, memory, CPU, or locks, you can dig deeper and find out who the culprits are. However, if you are using a long-term logging strategy for monitoring, you must monitor every session to be sure you have the necessary historical data when you want to see what was happening at a particular time.

If you are watching the monitor when a problem occurs, go to the SQL Server-Users object and turn on the counter for all instances. The instances in this case are the sessions currently logged on. You can see the login ID and the session number. If you see one or more sessions causing the problem, you can spy on them to find the last command sent. Go to the Enterprise Manager, click the Current Activity button on the toolbar, and double-click the line in the display corresponding to the session number. You will see the last command received from the session. To trace commands in more depth, use the SQLTrace utility that is new with version 6.5. (See Chapter 3, “Administrative and Programming Tools,” for details.)

The five main categories of SQL Server counters to monitor are cache, disk I/O, log, locks, and users. We will consider each of these categories separately as well as a mix of other important predefined counters. The final part of this section discusses the new user-defined counters.

Cache
To monitor your cache, watch SQL Server — Cache Hit Ratio. It monitors the rate at which the system finds pages in memory without having to go to disk. The cache hit ratio is the number of logical reads divided by the total of logical plus physical reads. If the value for this counter is consistently less than 80 percent, you should allocate more memory to SQL Server, buy more system memory, or both. However, before you buy more memory, you can try changing the read-ahead configuration options. Also look at the discussion of free buffers in the next chapter to determine whether the number of free buffers is approaching zero. Changing the free buffers configuration parameter may increase the cache hit ratio.
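
You can also sample the same statistic from Transact-SQL with DBCC SQLPERF, one of the commands covered at the end of this chapter. A quick check might look like this:

DBCC SQLPERF(LRUSTATS)

The LRUSTATS report includes the cache hit ratio (cumulative since SQL Server started) along with related buffer statistics such as free buffers.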

To find out if you have configured SQL Server properly, you should monitor SQL Server-Procedure Cache: Max Procedure Cache Used (%). If this counter approaches or exceeds 90 percent during normal usage, increase the procedure cache in the SQL Server configuration options. If the maximum cache used is less than 50 percent, you can decrease the configuration value and give more memory to the data cache. Rumor has it that SQL Server 7.0 will have a floating-point number for the procedure cache configuration parameter so that you can give the procedure cache less than 1 percent of your SQL Server memory. For a super server with gigabytes of memory, even 1 percent is too much for procedure cache.

If a 2K data page has been swapped to the Windows NT virtual memory file and read in again later, SQL Server still counts the page as already in memory for the purposes of the Cache Hit Ratio counter. Therefore, a system bogged down by heavy swapping to virtual memory could still show a good cache hit ratio. To find out if your system is in this category, monitor the Memory: Page Faults/Sec counter.

The Memory: Page Faults/Sec counter watches the number of times a page was fetched from virtual memory, meaning that the page had been swapped to the Windows NT swap file. It also adds to the counter the number of pages shared by other processes. This value can be high while system services, including SQL Server, are starting up. If it is consistently high, you may have given too much memory to SQL Server. The network and operating system may not have enough memory to operate efficiently.

Warning: This counter is a strange one to figure out. Running this counter on four different types of machines gave widely different results. To try to get a baseline value, we turned off all services, including SQL Server, unplugged the boxes from the network, and ran Performance Monitor with only the Memory: Page Faults/Sec counter turned on. The lowest measurement of page faults per second was from the system we least expected — a 50 MHz 486 with 16 MB of memory and one disk drive. It settled in at about five to seven page faults per second. The DEC Alpha with 4 processors, 10 GB RAID 5 striping on 5 drives, and 256 MB of memory was up in the 35 to 40 page faults per second range. So was a similarly configured Compaq ProLiant. The laptop performed in the middle, at about 15 page faults per second. It is a 90 MHz Pentium with 1 disk drive and 40 MB of memory. All were running Microsoft Windows NT version 3.51 service pack 4. All services except Server and Workstation were turned off. Running the same experiment with Windows NT 4.0 service pack 1 showed approximately the same results, except that the page faults per second numbers ran consistently 10 percent less than in Windows NT 3.51.

The result of this experiment is that we can’t recommend a range to gauge the performance of your machine. The best you can do is turn off all services for a brief period to get a baseline measurement on your machine, then use this value as a guide for your regular usage.

Disk I/O
Several counters measure how busy your disk drives are and which disk drives are the busiest. Remember that for any I/O measurements to be effective, you must run the Windows NT Diskperf -y command and reboot the system.

Even though the SQL Server: I/O Transactions Per Second counter is a bit misleading, it is still useful, especially for capacity planning. This counter measures the number of Transact-SQL batches processed since the last refresh period. You should not compare these results with standard TPC benchmark results reported in transactions per second — the counter does not refer to Begin/Commit transactions, just to batches of commands. Watch this number over a span of several months, because an increase in this counter can indicate that the use of SQL Server is growing.

The SQL Server: I/O — Lazy Writes/Sec counter monitors the number of pages per second that the lazy writer is flushing to disk. The lazy writer is a background process that moves pages from the data cache in memory out to disk; some disk controllers perform a similar function in hardware with their own on-board write cache. A sustained high rate of lazy writes per second could indicate any of three possible problems:

·     the Recovery Interval configuration parameter is too short, causing many checkpoints

·     too little memory is available for page caching

·     the Free Buffers parameter is set too low

Normally this rate is zero until the least-recently used (LRU) threshold is reached, the point at which the oldest pages in cache are released for use by other processes. Buying more memory may be the best solution if the configuration parameters seem to be in line for your server size.

The SQL Server: I/O Outstanding Reads counter and the I/O Outstanding Writes counter measure the number of physical reads and writes pending. They are similar to the PhysicalDisk: Disk Queue Length counter. High values for these counters over a sustained period may point to the disk drives as a bottleneck. Adding memory to the data cache and tuning the read-ahead parameters can decrease the physical reads.

The SQL Server: I/O Page Reads per Second counter is the number of pages not found in the SQL Server data cache, which indicates physical reads of data pages from disk. This value does not count pages that are read from the Windows NT virtual memory disk file. There is no way to watch only the logical page reads per second. According to sources in the SQL development team, counters for logical page reads are hidden in a structure that is not available in this version of SQL Server. However, you can figure out the logical page reads per second by taking the total page reads per second and subtracting the physical page reads per second.


You should occasionally turn on the I/O Single Page Writes counter. A lot of single page writes means you need to tune SQL Server, because it is writing single pages to disk instead of its normal block of pages. Most writes consist of an entire extent (eight pages) and are performed at a checkpoint; the lazy writer handles writing an entire extent at a time. When SQL Server is forced to hunt for free pages, it starts finding and writing the LRU pages to disk — one page at a time. A high number of single page writes means that SQL Server does not have enough memory to keep a normal number of pages in the data cache. Your choices are to give more memory to SQL Server by taking memory away from the static buffers, by decreasing the procedure cache, or by decreasing the amount of memory allocated to Windows NT.

Log
Tie the SQL Server — Log: Log space used (%) counter to an alert. When the value goes over 80 percent, send a message to the administrator and to the Windows NT event log. When it goes over 90 percent, dump the transaction log to a disk file (not the diskdump device), which will back up the log and truncate it. You want to track this counter for all your application databases, for Tempdb, and for the Distribution database if you are running replication.

Locks
To check out locking, turn on the SQL Server Locks: Total Locks and Total Blocking Locks counters. If you notice a period of heavy locking, turn on some of the other lock counters to get a better breakdown of the problem. The value for Total Blocking Locks should be zero or close to it as often as possible.

One counter to turn on to see if you have configured the system correctly is SQL Server Licensing: Max Client Count. Once you have established that your licensing choice is correct, turn it off. You should turn it back on occasionally to check the connections. If you do exceed the license count, you will know because users will be denied access.

Users
When you suspect that one particular user is the cause of any performance problems, turn on the counters in the Users section. However, with many users on the system, it is difficult to guess which counters to use, and it is difficult to turn on all counters for all sessions. One shortcut is to go into the Current Activity screen of the SQL Enterprise Manager and look at the locks in the Locks tab as well as the changes in CPU and Disk I/O activity in the Detail tab.

Monitor the SQL Server — Users: CPU Time counter for each user. Users for whom this counter returns high values may use inefficient queries. If the query appears reasonable, a high value may indicate an indexing problem or poor database design. Use Showplan to determine if the database’s indexes are optimal. Look for wide tables (long row sizes), which indicate a non-normalized database. Wide tables and inefficient indexes can cause more I/O than table scans.

Other Predefined Counters
A new counter in SQL Server 6.5, SQL Server: Max Tempdb Space Used, indicates how well you have estimated the size of Tempdb. If the value for this counter is very small, you know you have overestimated the size of Tempdb. Be sure to watch this counter frequently, especially during the busiest times and when your nightly jobs run. If it approaches the size of Tempdb, then you should probably increase Tempdb’s size.

Compare the SQL Server network counters (SQL Server: NET — Bytes Received/Sec and NET — Bytes Transmitted/Sec) to the Windows NT Server object’s Bytes Received/Sec and Bytes Transmitted/Sec counters. If the SQL Server counters are significantly lower than the Server counters, your server is busy processing network packets for applications other than SQL Server. This reading indicates that you are using the server for purposes other than SQL Server, perhaps as a primary or backup domain controller, or as a print server, file server, Internet server, or mail server. To get the best performance, make this machine a dedicated SQL Server and put all the other services on another box.

If you are using replication, you should focus on the publishing machine. You should monitor the distribution machine and the subscriber as well, but the publisher will show the first signs of trouble. Turn on all counters in the SQL Server Replication-Publishing DB object. The three counters will tell you how many transactions are held in the log waiting to be replicated, how many milliseconds each transaction is taking to replicate, and how many transactions per second are being replicated.

User-Defined Counters
Last but not least, you can define counters of your own. The user-defined counters are in the SQL Server User-Defined Counters object in the Master database. The 10 counters correspond to 10 new stored procedures called sp_User_Counter1 through sp_User_Counter10. These stored procedures are the only system stored procedures you should change. If you look at the code of these procedures, each performs a Select 0, which, when tracked in Performance Monitor, draws a flat line at the bottom of the screen. Replace the Select 0 with a Select statement that returns one number; an integer is preferable, but float, real, and decimal numbers also work. These queries should be quick, not ones that take minutes to run.
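
For example, here is one way (a hypothetical counter that uses the blocked column of the sysprocesses table) to make the first counter chart the number of sessions currently blocked by a lock:

USE master
go
DROP PROCEDURE sp_user_counter1
go
CREATE PROCEDURE sp_user_counter1 AS
/* One number per call: how many sessions are waiting on a lock right now */
SELECT COUNT(*) FROM master.dbo.sysprocesses WHERE blocked <> 0
go

Because sysprocesses is a small system table, the query returns almost instantly, which is a good property for a procedure that Performance Monitor executes at every refresh interval.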

Please note that these counters are different from the user counters mentioned earlier, which track the specific activity of a particular person logged in to SQL Server.

The current version of Performance Monitor contains a bug: if User Counter 1 contains an error, none of the 10 counters will show up in Performance Monitor. However, this bug is not the only reason that you might not see these user-defined counters in Performance Monitor. The Probe login account, added when you install SQL Server, must have both Select and Execute permission on these 10 stored procedures for them to appear.

It would be nice to be able to change the names of these stored procedures so you could more easily remember what you are tracking. Maybe this feature will be included in version 7.0.

Here is a trick: Suppose you want to count the number of rows in a table. You could put the following statement in sp_User_Counter1:

SELECT COUNT(*) FROM MyDatabase.dbo.MyTable

If MyTable had 40 million rows, the stored procedure would take a lot of time to execute, even though it scans the smallest index to get an accurate count. Instead, you could get an approximate number by using the following command:

SELECT rows FROM MyDatabase.dbo.sysindexes WHERE id = OBJECT_ID('MyTable') AND indid IN (0,1)

This way is much faster, even though SQL Server does not keep the value in sysindexes up-to-date. Sometimes the counters tracked in sysindexes get out of sync with the actual table, and the only way to get them updated accurately is with DBCC. But most of the time the value in sysindexes is accurate enough.

LONG-TERM PERFORMANCE MONITORING

The concept behind a good long-term strategy for monitoring performance is simple to explain: Use log files to track as many items as you can without affecting performance. We break this discussion into three sections: establishing a baseline, monitoring performance over the long term, and tracking problems.

Establishing a Baseline
First, go to a command prompt and turn on the disk counters using the command Diskperf -y, then reboot. Then establish a new log file, click the + button, add all the options, and start the logging process. Choosing all the options tracks every instance of every counter in every object. You are tracking a lot of information, especially with the physical disk counters turned on.

Run Performance Monitor with this setup for a week; if you wish, you can manually stop and restart the log file every night so that each day is contained in a different log file. These measurements become your baseline; all your trend measurements will be based on this baseline. This method is not perfect if many special activities take place on your server that week. But you may never experience a “typical” week, and it’s better to get some baseline measurement than to wait.

We also recommend that you start a performance notebook. In this notebook, keep a page where you log special activities and events. For instance, an entry in your log might say, “Ran a special query for the big boss to show what a Cartesian product between two million-record tables does to the system.” In your performance notebook, be sure to record changes to the hardware, along with dates and times. You should also schedule actions like backups and transaction log dumps regularly so that when you look at system performance for one night last week, you do not have to wonder whether the backup was running.

We recommend that you run your long-term monitoring from another computer on the network. This way, you are not skewing the results by running it on the server you are trying to monitor. Also, avoid running Perfmon.exe to capture the long-term baseline, because someone must be logged on for it to run, and leaving a machine logged on with administrative privileges for long periods is not a good idea. Instead, run the command-line version of Performance Monitor, called Monitor.exe. It is essentially the same program as Perfmon.exe without the screens. All output can be directed to the log files. To further simplify your life, get Srvany.exe from the Windows NT resource kit and make Monitor.exe into a Windows NT service. This way you can manage Monitor.exe like any other network service.
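
Here is a sketch of that setup, assuming you also have Instsrv.exe from the resource kit (the service name and paths are hypothetical):

instsrv PerfLog C:\Reskit\Srvany.exe

Then, under the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PerfLog, add a Parameters subkey containing a string value named Application that points to C:\Reskit\Monitor.exe. After that, net start PerfLog launches the monitor like any other service.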

Periodically, perhaps once every six months, repeat this baseline process with all the counters turned on. Then compare your baselines to establish a trend.

Monitoring Performance over the Long Term
Once you have established your baseline, start another series of log files for your everyday use. First, turn off the physical disk counters with the Diskperf -n command from a command prompt and reboot the system. You can still track everything else if you want to because turning off the physical disk counters reduces the performance problems caused by monitoring. However, it is not necessary to track all the counters. We recommend you track the following objects:

·     Logical Disk

·     Memory

·     Paging File

·     Processor

·     Server

·     SQL Server

·     SQL Server — Replication (only if you are running replication)

·     SQL Server — Locks

·     SQL Server — Log

·     SQL Server — Procedure Cache

·     SQL Server — Users

·     System


Tracking Problems
When you experience performance problems, leave your Performance Monitor running with the log file so you continue to collect long-term data. Then start Performance Monitor again to track the particular problem. Turn on whatever counters you need to look at, using this chapter as a guide for the key counters to monitor in the disk, memory, network, and processors categories.

Start with the high-level counters — look for the words “total” or “percent” (or the % sign). When one of these counters indicates a problem, you usually have the option of watching counters that give you more detail. Learn which counters in different sections are related to each other. The relationships can tell you a lot. For example, the I/O Transactions Per Second counter in the SQL Server section is closely related to the % Processor Time counter in the Processor section: if the number of I/O transactions per second goes up, so does the processor usage.

Concentrate on finding out which resource is causing the problem. Is it the system or a user process? Is it Windows NT or SQL Server? Before you purchase more hardware, try to find a configuration option related to the problem. Don’t hesitate to change hardware configuration or move data to different servers to balance the work among the available resources.

For specific examples of tuning performance, see Chapter 16, “Performance Tuning.”

Special Note: Use log files to track as many items as you can without affecting performance.


Monitoring with Transact-SQL
You can also use three Transact-SQL commands to do your own monitoring:

·     DBCC MEMUSAGE

·     DBCC SQLPERF — cumulative from the start of SQL Server; use the IOSTATS, LRUSTATS, and NETSTATS parameters

·     DBCC PROCCACHE — six values used by Performance Monitor to monitor procedure cache

The output from these commands can be inserted into a table for long-term tracking and customized reporting. Tracking the MEMUSAGE output calls for some tricky programming because different sections have different output formats. The other two commands are more straightforward.

The example below shows how to capture the DBCC PROCCACHE output. This command displays the same six values that you can display in Performance Monitor to watch the procedure cache usage in SQL Server.

CREATE TABLE PerfTracking (
    date_added datetime DEFAULT (getdate()),
    num_proc_buffs int,
    num_proc_buffs_used int,
    num_proc_buffs_active int,
    proc_cache_size int,
    proc_cache_used int,
    proc_cache_active int)
go
INSERT PerfTracking (num_proc_buffs, num_proc_buffs_used, num_proc_buffs_active,
    proc_cache_size, proc_cache_used, proc_cache_active)
EXEC ('dbcc proccache')
go

After running this command, you can use any SQL Server-compliant report writer or graphing program to create your own fancy graphs. 
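
Even without a report writer, an ordinary query against the table shows the trend (PerfTracking is the table created above):

SELECT date_added, proc_cache_used, proc_cache_active
FROM PerfTracking
ORDER BY date_added

If you schedule the INSERT as a recurring task under SQL Executive, the table becomes an automatic long-term collector for procedure cache usage.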

COUNTERS: A SUMMARY

The list below is a quick reference to the information about counters we’ve presented in this chapter. After the performance questions you may ask, we list related counters.

Is CPU the bottleneck?

·     system: % total processor time

·     system: processor queue length

What is SQL Server’s contribution to CPU usage?

·     SQL Server — Users: CPU Time (all instances)

·     process: % Processor Time (SQL Server)

Is memory the bottleneck?

·     memory: page faults/sec (pages not in working set)

·     memory: pages/sec (physical page faults)

·     memory: cache faults/sec

What is SQL Server’s contribution to memory usage?

·     SQL Server: cache hit ratio

·     SQL Server: RA (all read ahead counters)

·     process: working set (SQL Server)

Is disk the bottleneck? (Remember that disk counters must be enabled for a true picture.)

·     physical disk: % disk time

·     physical disk: avg disk queue length

·     disk counters: monitor logical disk counters to see which disks are getting the most activity

What is SQL Server’s contribution to disk usage?

·     SQL Server-users: physical I/O (all instances)

·     SQL Server: I/O log writes/sec

·     SQL Server: I/O batch writes/sec

·     SQL Server: I/O single-page writes

Is the network the bottleneck?

·     server: bytes received/sec

·     server: bytes transmitted/sec

What is SQL Server’s contribution to network usage?

·     SQL Server: NET — Network reads/sec

·     SQL Server: NET — Network writes/sec

Did I make Tempdb the right size?

·     SQL Server: Max Tempdb space used (MB)

Is the procedure cache configured properly? (The highwater marks for the percentages are more important than the actual values.)

·     Max Procedure buffers active %

·     Max Procedure buffers used %

·     Max Procedure cache active %

·     Max Procedure cache used %


SUMMARY

SQL Server 6.5 gives you new configuration and tuning options. It also adds new counters to help you track the use of SQL Server on your system. Use Performance Monitor to see if your system is configured properly. Performance Monitor is one of the best tools you can use to identify current bottlenecks and prevent future problems.