
HP SiteScope vs HP Diagnostics

HP SiteScope:

(1) Agentless monitoring tool.

(2) Consumes no memory on the monitored servers.

HP Diagnostics:

(1) Agent-based monitoring tool.

(2) Consumes memory on the monitored servers; this may even affect the performance of the web application.


Issue: After a scenario has been set up with certain load generators, errors related to the load generators occur when the scenario is started.

Solution: Note which scripts use the affected load generator and open the .prm file for each of those scripts. You may find a mismatch between the parameter name used in the script and the name defined in the corresponding .prm file. Modify the parameter name in the .prm file so it matches the name that was given during parameterization in the script.

Once the name is corrected, restart the scenario; it will then run successfully.
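As a hedged sketch of what such a mismatch looks like (the script, parameter, and file names here are invented, and .prm keys vary by parameter type): the name inside the braces in the script must match the section name in the .prm file exactly.

```
In the action file, the parameter is referenced by name:

    lr_output_message("User: %s", lr_eval_string("{UserName}"));

In PerformanceEngineer.prm, the section name must match exactly:

    [parameter:UserName]
    Type="File"
    Table="UserName.dat"
```

If the .prm section were named, say, User_Name while the script still references {UserName}, the scenario would fail on that generator until the names agree.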

LoadRunner: Anatomy of a Script

When you record and save a LoadRunner script in Vugen, a number of files are created. Here’s what they are, what they do, and which of them you can safely delete.

Files Required for Playback
During the course of recording and playback of scripts, the Vugen application will create many files, but only some of them are necessary for playback (either in Vugen or the Controller).

For example, say you have a script named PerformanceEngineer with two Actions, Home and Forums. The required files in the PerformanceEngineer script directory would be:

* PerformanceEngineer.usr
* default.usp
* default.cfg
* globals.h
* Home.c
* Forums.c
* vuser_init.c
* vuser_end.c
* PerformanceEngineer.prm

Here’s what is in each file:

PerformanceEngineer.usr: Primarily, the .usr file defines which actions are used by the script. There are other properties which define which protocols are used and other settings, but the list of actions is the key information the .usr file provides.

default.usp: Contains the run logic for the script

default.cfg: Contains the run-time settings (except for run-logic)
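As a rough sketch of what a default.cfg looks like (it is a plain INI-style file; the section and key names below are illustrative from memory and vary by protocol and LoadRunner version, so treat them as assumptions rather than a definitive layout):

```
[General]
DefaultRunLogic=default.usp

[ThinkTime]
Options=NOTHINK

[Log]
LogOptions=LogBrief
MsgClassData=0
```

Because it is plain text, run-time settings can be diffed or version-controlled alongside the script sources.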

globals.h: The global headers file, visible and editable in Vugen.

*.c (Action files): These are the action files containing your script code. You can edit these files in any text editor if you want; sometimes that is easier than starting up Vugen.
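For illustration, here is what a minimal action file such as Home.c typically contains (the step name and URL are made up, and this fragment compiles only inside the LoadRunner environment, which supplies web_url and lr_think_time):

```c
Home()
{
	/* A recorded web-protocol step: fetch the home page */
	web_url("Home",
		"URL=http://www.example.com/",
		"Resource=0",
		"RecContentType=text/html",
		LAST);

	/* Pace the Vuser between steps */
	lr_think_time(3);

	return 0;
}
```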

PerformanceEngineer.prm: Contains the parameter definitions.

*.dat: Your data files. You can save these in the script directory or somewhere else, even on a mapped network drive on a different server.
Files Created During Vugen Playback

All of the files listed below can safely be deleted without affecting your ability to use the script.

result1: One or more result directories are created which contain script playback results

*.idx: The .idx files are binary “index” files created by Vugen for holding parameter values

combined_PerformanceEngineer.c: A list of #includes for all of your Actions

logfile.log, mdrv.log: Miscellaneous log files which you will probably never need to look at.

mdrv_cmd.txt, options.txt: These text files contain commands and arguments for the script compiler and driver (mdrv) and are created dynamically, so you can safely delete them.

output.txt: This one is important. This file contains all of the log messages generated during script playback. The contents of this file appear in the “Output Window” section of Vugen

output.bak: A backup of the above file

pre_cci.c: Output from the C pre-processor, which contains all of the functions used in your script, from all of the Actions and header files.

In summary, you can delete: *.txt, *.log, *.idx, *.bak, result*, pre_cci.c, combined_*, *.ci
Files Created During Recording

The ‘data’ directory in your script directory contains the script recording data. I usually delete this so it doesn’t get checked into my version control system, but you may want to keep it around if you use the graphical scripting mode and/or you want to compare playback vs. recording. The auto-correlation feature makes use of this data, too, but I haven’t had much success using that feature.

(This section has been referred from an external site.)

LoadRunner: Running Vusers and Monitoring Over a Firewall

Working with a firewall means that you can prevent unauthorized access to or from a private network, on specific port numbers.

In a regular LoadRunner load test scenario (not over a firewall), the Controller has direct access to the LoadRunner agents running on remote machines. This enables the Controller to connect directly to those machines. When running Vusers or monitoring applications over a firewall, this direct connection is blocked by the firewall. The connection cannot be established by the Controller, because it does not have permissions to open the firewall.

LoadRunner solves this problem by using a communication configuration based on HTTPS or secured TCP/IP. This configuration uses the standard SSL port on the firewall (port 443).

A LoadRunner agent is installed on load generators running Vusers over a firewall, and on Monitor Over Firewall machines that monitor the servers that are located over a firewall. The agent communicates with the MI Listener machine through port 443 in the firewall. The MI Listener is a component that serves as router between the Controller and the LoadRunner agent.

When the LoadRunner agent connects to the MI Listener, the MI Listener keeps a listing of the connection to the agent using a symbolic name that the agent passed to it. When the Controller connects to the MI Listener, it communicates to the MI Listener through port 50500.
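The port openings described above can be sketched as Windows Firewall rules on the MI Listener machine. This is only illustrative: the rule names are arbitrary, and in practice your site may use a hardware firewall configured by its administrator instead.

```
rem Agents outside the firewall reach the MI Listener over the standard SSL port
netsh advfirewall firewall add rule name="LR MI Listener (agent, HTTPS)" dir=in action=allow protocol=TCP localport=443

rem The Controller connects to the MI Listener on port 50500
netsh advfirewall firewall add rule name="LR MI Listener (Controller)" dir=in action=allow protocol=TCP localport=50500
```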

Setting Up your System to Use Firewalls:

Basic Steps

Setting up your system to use firewalls involves the following stages of configuration:

* Installation and initial configuration
* Running Vusers over a firewall

Installation and initial configuration

To enable over-firewall communication, ensure that you have installed the following LoadRunner components:

* MI Listener
* Monitor Over Firewall component

To perform initial configuration of your over-firewall system:

1 Configure your system according to TCP or HTTPS.

2 Modify your firewall settings to enable communication between the machines on either side of the firewall.

3 Configure the MI Listener.

Configuring the MI Listener

To configure the MI Listener:

1 Open incoming HTTPS service for port 443. The port settings are set by your system administrator.

2 Stop the LoadRunner agent on the MI Listener machine by right-clicking its icon in the system tray and selecting Close from the popup menu.

3 Run MI Listener Configuration from Start > Programs > LoadRunner > Advanced Settings, or run \launch_service\bin\MILsnConfig.exe.

4 Set each option as described in the MI Listener configuration documentation.

5 Click OK to save your changes, Cancel to cancel them, or Use Defaults.

6 Restart the LoadRunner agent by double-clicking the shortcut on the desktop, or choosing Start > Programs > LoadRunner.

7 Make sure that port 443 is free on the MI Listener machine.

Running Vusers over a firewall

To set up your system to run Vusers over a firewall:

1 On each load generator machine that will be running over a firewall, configure the LoadRunner agent to communicate with the MI Listener.

2 Configure the Controller machine to recognize the load generator and MI Listener machines.

Configuring LoadRunner Agents Over the Firewall

1 Stop the LoadRunner agent by right-clicking its icon in the system tray and selecting Close.

2 Run Agent Configuration from Start > Programs > LoadRunner > Advanced Settings, or run \launch_service\bin\AgentConfig.exe.

3 Select the Enable Firewall Agent check box, and then click Settings. The Agent Configuration dialog box opens.

4 Set each option as described in “Agent Configuration Settings”

5 Click OK to save your changes, or Cancel to cancel them.

6 Restart the LoadRunner agent by double-clicking the shortcut on the desktop, or select Start > Programs > LoadRunner > LoadRunner Agent Service/Process.

7 Check the connection status between the LoadRunner agent and the MI Listener.

Configuring the Controller for Running over a Firewall

1 Run the Controller from Start > Programs > LoadRunner > Applications > Controller and create a new scenario, or load an existing one.

2 Click Generators to display the Load Generators window. In the Name field, enter the symbolic name of the server. This is the same name that you entered in the Local Machine Key setting in the Agent Configuration.

3 Select the Load Generator, and click Details to display the Load Generator Information.

4 In the Security tab, enter the MI Listener machine’s name in the MI Listener field. This is the same name that you entered in the MI Listener Name setting of the Agent Configuration dialog box.

5 In the Firewall Settings section, select one of the following options:

* Enable running Vusers over Firewall: to run Vusers over the firewall.
* Enable Monitoring over Firewall: to monitor Vusers over the firewall.

6 Click OK to return to the Load Generators dialog box.

7 Select the load generator and click Connect. This completes the setup required to run your test over the firewall.

ViewState in ASP.NET


Microsoft ASP.NET Web Forms pages are capable of maintaining their own state across multiple client round trips. When a property is set for a control, ASP.NET saves the property value as part of the control’s state. To the application, this makes it appear that the page’s lifetime spans multiple client requests. This page-level state is known as the view state of the page. In Web Forms pages, the view state is sent by the server as a hidden variable in a form, as part of every response to the client, and is returned to the server by the client as part of a postback. In this article we will see how view state is implemented in ASP.NET for state management, and how you can use this object effectively in your web form.

Problems with ViewState

ViewState has advantages as well as disadvantages, so you need to weigh them carefully before deciding to use it. As mentioned earlier, view state doesn’t require any server resources for its operation. It is passed to the client during every postback as a hidden element, and since it is added to every page, it adds a few kilobytes to the page, which affects how quickly the page loads on the client. The other main problem with view state is that the client can tamper with the value, so you shouldn’t store any sensitive data in it.

View state is one of the most important features of ASP.NET, not so much because of its technical relevance, but more because it makes the magic of the Web Forms model possible. However, if used carelessly, view state can easily become a burden. Although view state is freely accessible in a hidden field called __VIEWSTATE, the information is not clear text. By default, a machine-specific authentication code is calculated on the data and appended to the view state string. The resulting text is then Base64 encoded, but not encrypted. To make the view state more secure, the ASP.NET @Page directive supports an attribute called EnableViewStateMac, whose purpose is to detect any attempt at corrupting the original data.
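As a minimal sketch of the directive mentioned above (both attributes shown default to true, so the directive here only makes the defaults explicit):

```
<%@ Page Language="VB" EnableViewState="true" EnableViewStateMac="true" %>
```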

Implementation of ViewState

StateBag implements the view state and manages the information that ASP.NET pages and embedded controls persist across successive posts of the same page instance. The class works like a dictionary object and implements the IStateManager interface. The Page and the Control base classes expose the view state through the ViewState property. So you can add or remove items from StateBag as you would with any dictionary object:

ViewState("FontSize") = value
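A slightly fuller sketch in VB.NET (the key name and values are invented for illustration; this runs only inside an ASP.NET page or control):

```
' Write a value into the page's view state; StateBag works like a dictionary
ViewState("FontSize") = 12

' Entries come back typed as Object, so cast when reading on a later postback
Dim size As Integer = CInt(ViewState("FontSize"))

' Remove entries you no longer need, to keep the hidden field small
ViewState.Remove("FontSize")
```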

You should start writing to the view state only after the Init event is fired for a page request. You can read from the view state during any stage of the page lifecycle, but not after the page enters rendering mode—that is, after the PreRender event is fired.
The contents of the StateBag collection are first serialized to a string, then Base64 encoded, and finally assigned to a hidden field in the page that is served to the client. The view state for the page is a cumulative property that results from the contents of the ViewState property of the page plus the view state of all the controls hosted in the page.

Decision on ViewState Usage

As we’ve discussed, the view state represents the state of the page and its controls just before the page is rendered in HTML. When the page posts back, the view state is recovered from the hidden field, deserialized, and used to initialize the server controls in the page and the page itself. However, this is only half the story.

After loading the view state, the page reads client-side information through the Request object and uses those values to override most of the settings for the server controls. In general, the two operations are neatly separated and take place independently. In many situations, though, the second operation, reading from Request.Form, ends up simply overriding the settings read out of the view state. In such cases the view state is only extra overhead. For example, consider a page with one TextBox and a LinkButton. If you type some value into the TextBox and post the page using the LinkButton, the value in the TextBox is retained after postback whether view state is enabled or disabled. In this case you shouldn’t enable view state for the TextBox: the view state value is overridden by the Request.Form value, since LoadPostData fires after LoadViewState in the page lifecycle.

But consider that the ReadOnly property of a TextBox is set to False by default. Suppose that in Page_Load you change its ReadOnly property to True based on some condition, and the page is then posted back by clicking the LinkButton. To retain the ReadOnly setting across the postback, you need to enable view state for that control; otherwise the property won’t be retained across postbacks.

Viewstate in DataGrid

If you set EnableViewState to True for a DataGrid that holds thousands of records, you will end up with a view state larger than 10 kilobytes. But if you disable view state, you will not be able to fire any events in the DataGrid, because postback handling relies on view state. So if it is a read-only DataGrid and you are not going to use the paging and sorting the DataGrid provides, you can disable view state. If you do want those features, you cannot disable view state on the DataGrid itself. To reduce the load on the client machine, you can instead disable view state for each item in the DataGrid. This can be done in two ways. One way is to disable view state on the controls in each template column:

<asp:TemplateColumn HeaderText="ProductID">
  <ItemTemplate>
    <asp:TextBox id="ProductID" runat="server" EnableViewState="False" />
  </ItemTemplate>
</asp:TemplateColumn>

The other way is to disable view state for each DataGrid item in the PreRender event handler:

Private Sub Page_PreRender(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.PreRender
    Dim dgi As DataGridItem
    For Each dgi In DataGrid1.Items
        dgi.EnableViewState = False
    Next
End Sub


The view state is a key element of an ASP.NET page because it is the primary means to persist the state of the Web server controls. Whenever the page posts back, the state is restored, updated using the current form parameters, and then used to run the postback event handler. Normally, the view state is a hashed string encoded as Base64 and stored in a hidden field called __VIEWSTATE. In this way, the view state is not cached on the client, but simply transported back and forth, with potential issues for both security and performance. Since it adds overhead, you need to decide carefully when and where to use view state in your web form.

(The above notes have been referred from an external site.)

The trouble with performance bottlenecks is that they can be tough to identify. Is it the CPU? The network? A clumsy bit of code? Often, the most obvious culprit is actually downstream of something larger and more mystifying. And when performance riddles remain unsolved, IT management may find itself faced with a Hobson’s choice between admitting ignorance and making up excuses.

Fortunately, as with medical diagnoses or detective work, experience helps. Drawing on our years of sleuthing and experimentation, we’ve collected 15 of the most likely ailments — and suggested remedies — to help your ICT operation track down and crack performance problems.

Some of these bottlenecks are more obvious than others. But by identifying common speed killers across ICT disciplines, we hope to jumpstart your quest to create the highest performing infrastructure your resources will allow.

No. 1: It’s probably not the servers

Server upgrades used to make all the difference, which is why the old saw “When all else fails, throw more hardware at it” persists today. That’s still true in some cases. But how much of IT is really that compute-intensive? Generally, you can save a lot of time and money by turning your hairy eyeball away from server hardware. The lower end of the server spectrum has more than enough horsepower to handle everyday tasks.

Here’s one concrete example. On a network of over 125 users, an elderly Windows domain controller appeared to be ripe for replacement. This server originally ran Windows 2000 Server and was upgraded to Windows Server 2003 some time ago, but the hardware remained unchanged. This HP ML330 with a 1GHz CPU and 128MB of RAM was functioning as an Active Directory domain controller carrying all the AD FSMO roles, running DHCP and DNS services as well as running IAS (Internet Authentication Services).

Molasses, right? In fact, it actually did the job just fine. Its replacement was an HP DL360 G4 with a 3GHz CPU, 1GB of RAM, and mirrored 72GB SCSI drives. Carrying all those services, it runs hardly any load at all — and the performance difference is unnoticeable.

It’s easy to identify applications that will eat all your CPU and memory, but they tend to be pretty specialised. For almost everything else, the humble commodity box will do the trick.

No. 2: Speed up those queries

You can create the niftiest application in the world, but if access to back-end database servers creates a bottleneck, your end users or customers won’t be happy. So fine-tune those database queries and maximise performance.

Three basic measures can help you improve query performance. First, most database products include tools (such as DB2 UDB for iSeries’ Visual Explain) that can dissect your query during development, providing feedback on syntax and the approximate timing of the various sections of the SQL statements. Using this information, locate the lengthiest portions of the query and break those down further to see how you might shorten the execution time.

Some database products also include performance advice tools, like Oracle’s Automatic Database Diagnostic Monitor, that provide recommendations (such as suggesting you create a new index) to speed up queries.

Next, turn on database monitoring tools on a staging server. You might use a third-party monitoring product, such as Fidelia’s NetVigil, if your database lacks monitoring support. With the monitors enabled, generate traffic against the database server using load-testing scripts. Examine the data gathered to see how your queries performed while under load; this information may lead you to some further query tweaking.

If you have enough server resources to mimic your mixed workload production environment fairly closely, you can execute a third round of query tuning using a load testing tool, such as OpenSTA, plus database monitoring to see how your queries perform alongside other applications that hit the database.

As database conditions change — with volume growth, record deletions, and so on — keep testing and tuning. It’s often well worth the effort.

No. 3: What cost, virus protection?

Virus protection on critical servers is a basic requirement, especially for Windows servers. The impact can be painful, however. Some virus scanners are more obtrusive than others and can reduce server performance significantly.

Try running performance tests with and without your virus scanner running to determine the impact. If you see a marked improvement without the scanner, it’s time to look for another vendor. Also check specific features. Disable real-time scans, and quite often you’ll bring performance up.

No. 4: Maximising the middle tier

No matter how well written your business logic, when you deploy it to the middle tier, you will need to tune the application server runtime environment to maximise performance.

Like a vintage stereo with oodles of knobs for tweaking sound quality, application servers from vendors such as BEA, IBM and Oracle supply a dizzying set of controls. The trick is to turn the knobs just the right way, depending on the attributes of your application.

For example, if your application is servlet-heavy, you’ll want to enable servlet caching. Likewise, if your application uses many SQL statements to support a large user base, you’ll want to enable prepared statement caching and set the maximum size of the cache so it’s large enough to support the intended workload.

One of the major areas where performance tuning can really help is with the database connection pool. Set the minimum or maximum connections too low and you’re certain to create a bottleneck. Set them too high and you’ll likely see a slowdown resulting from the added overhead needed to maintain the larger connection pool.

If you know the intended workload, tune the application server runtime by turning on performance monitoring tools such as IBM’s Tivoli Performance Viewer for WebSphere on a staging application server. Generate the amount of workload that you expect by using a load-generation tool, then save the monitoring results and play them back to analyse which knobs need adjusting.

When in production, it’s a good idea to turn on low-overhead, passive monitoring to keep tabs on the runtime. If your workload changes over time, you’ll want to execute a fresh performance review.

No. 5: Optimise network connectivity

Most mid-level enterprise servers now have dual gigabit NICs — but most of them don’t use the second pipe. Moreover, gigabit switch prices have dropped through the floor. With a 120MBps link to your fileserver, a number of 100-megabit clients can achieve wire-rate file access simultaneously.


Even without gigabit switching, NIC bonding should be a staple. At its simplest, bonding two NICs gives you redundancy, but add transmit load-balancing, and you can effectively double the outbound bandwidth. Using switch-assisted teaming will provide the same effect on inbound traffic. Almost every major server vendor offers NIC teaming drivers — and there are third-party utilities, too. It’s a big, cheap bandwidth boost.

No. 6: Winding up your Web servers

Is there really that much you can do to tune a Web server and maximise performance? In fact, there is — mainly by adjusting a handful of critical settings to match the production traffic you expect.

For Web servers already in production, begin by collecting real-time Web server statistics (most major Web servers have that functionality built in). Then move to staging in order to determine which parameters, if any, need adjustment.

Activate the Web server’s performance-monitoring tools on the staging server. Execute a load test and inspect relevant parameters, such as response time, bytes sent and received, and the number of requests and responses.

Key parameters you’ll want to tune depending on the volume of traffic include caching, threading, and connection settings.

Enable caching for frequently used content; some Web servers allow you to cache files dynamically based on usage, whereas others demand that you specify what will be cached. Make certain your maximum cache size is sufficient for the traffic you expect. And if your Web server supports cache acceleration, enable that, too.

For the threading and connection settings, set the minimums and maximums in accord with expected workload. For connections, you’ll also need to define the maximum number of requests per connection and the connection time-out setting. Don’t set either of these values too small or too large, or slowdowns may result.

No. 7: The woe of the WAN

Think you need to reclaim WAN bandwidth? You can easily spend a bundle on traffic-shaping appliances or caching engines in an attempt to rein in WAN bandwidth utilisation. But what if it’s not the pipe?

First things first: before you buy anything, get a solid idea of what traffic is crossing the WAN. Network analysis tools such as Ethereal, ntop, Network Instrument’s Observer or WildPacket’s EtherPeek NX can give you a fresh look at what’s really on the wire.

You may find that replication times for your Active Directory are set far too low and simply configuring longer replication intervals can buy you breathing room during the workday. Are some users in remote locations mapping shares to the wrong servers and pulling large files across the WAN without realising it? Are the vestiges of a long-disabled IPX network still floating around? Some WAN problems boil down to application mis-configuration, where traffic is directed across the WAN when it should have stayed local. Regular reports on WAN traffic patterns will save money and headaches.

No. 8: Let’s play nice

All too often, applications, Web services and Web sites from multiple departments across the enterprise compete for server resources. Although each of these components may be well-tuned in its own right, an application from another department that is also using the same production clusters may have a poorly tuned query or some other issue, which in turn affects your users or customers.


In the near term, all you can do is work with your system administrators and the department that is having the performance problem to obtain resolution for your users or customers. Longer term, create a community across all of the departments that use the production clusters where your objects are deployed.

Work across teams to ensure that there is adequate funding for a staging environment that is truly representative of the mixed workload production environment. Ultimately, you’ll want to develop a series of benchmarks that can be used to validate mixed workload performance in the staging environment.

No. 9: Caching, shaping, limiting, oh my!

If your WAN is truly undersized — and you can’t afford a long-haul frame-relay network — traffic shaping and caching can help unclog the pipe.

Traffic-shaping configurations are more art than science. Prioritising apps is often more political than technical but may have tremendous effects on perceived network performance.

Caching is a different beast altogether. It requires less work than traffic shaping, but the impact will likely be smaller. Caching engines store and serve up local copies of commonly accessed data to reduce WAN traffic. The downside is that dynamic content can’t truly be cached, so e-mail won’t enjoy the same performance bump.

No. 10: Predictive patching

You arrive at work on Monday only to learn that a bunch of desktops are hung or that the performance of a critical application has slowed to a crawl. After investigating, you determine that a patch that was applied over the weekend is the cause.

That’s why you need tools that support patch rollbacks. Even better, include patch testing as part of your patch-management strategy. First, you must take regular inventory of the applications and technologies in play on desktops and servers. Most systems-management tools, such as Microsoft’s SMS, have the capability to take inventory for you automatically.

Next, replicate the applications and technologies into a staging environment. If your operating system and infrastructure software do not include patch testing tools, get a third-party tool such as FLEXnet AdminStudio or Wise Package Studio.

Alternatively, you can write some scripts to functionally exercise the platform or technology with the latest patches in play. You will need to repeat this scenario (and adjust scripts) as new patches arrive and as software changes are made.

The final, crucial step is to execute a round of stress tests with the new patches in play. Quite often, patches prove problematic only under heavy loads.

No. 11: Keeping your cool

It might seem obvious, but environmental problems in server rooms and wiring closets can have a deleterious effect on overall network performance. Hot servers aren’t happy servers, and the same goes for network hardware. Make sure your datacentre isn’t making the mercury sing, and install environmental sensors to keep it that way.

On a related note, it’s always a good idea to leverage the server-management utilities that may have come with your servers. Dell’s OpenManage and HP’s Insight Manager provide very pertinent information regarding the health and well-being of your server infrastructure, including whether or not it has a fever.


No. 12: Reining in mirroring and replication

Slowdowns often plague enterprises that use mirroring or replication for high availability or disaster recovery between locations. If you have many locations or many database tables — or a lot of transactions or journalling that needs to stay in sync between multiple locations — watch out, because the performance loss can be dramatic.

If possible, run your mirroring and replication activity across separate WAN “pipes” to keep it isolated from production traffic. Your network design team can assist with a viable topology. At the same time, go over the configuration (star topology, for example) you intend to use to support mirroring and replication. Vendor representatives and the network design team can provide useful input on constructing a configuration that will prevent network saturation.

Configuration aside, mirroring and replication products — such as XOsoft’s WANSyncHA Oracle or High Availability Linux Project’s Heartbeat — usually provide options to control timing and traffic flow between sites. Some products enable you to schedule syncing activity during off hours, whereas others let you activate syncing tasks only if a particular threshold is reached. Use those controls; that’s what they’re there for.

No. 13: ‘My cable modem is faster’

A shared T1 was once immeasurably fast to most folks. Now, it’s less than half the bandwidth most of us have at home.

Are users complaining that their broadband hookup at home is faster? Well, guess what? A cable connection may be just what you need at work. Buying an asymmetric business-class cable circuit to coexist with a T1 can ease the Internet connection burden substantially. Source-routing at the network core directs regular Internet traffic from within the network to the cable connection while pushing higher-priority traffic through the T1 circuit, giving you the best of both worlds.

Traffic-shaping at the Internet circuit is definitely worthwhile, and many firewalls are integrating this capability, removing the need for more hardware. Blocking or limiting applications deemed “frivolous” at the firewall can help, and deploying a proxy server — even a transparent proxy — can also be beneficial, albeit a bit of a chore. Implementation of an AUP enforcement product such as Websense can also control unchecked Internet usage.

If your corporate Web site runs within your network, moving it to a co-location or managed-hosting facility might be a viable option to increase throughput of your Internet circuit. Co-location costs are dropping, and monthly rates on servers suitable to run most sites are quite reasonable. Removing that draw from your link may produce noticeable performance improvements. Of course, gathering metrics on your Web site’s used bandwidth is a must before you make any decisions.

No. 14: Estimating future speed

One of the trickier tasks is projecting infrastructure requirements in the face of constantly evolving business demands. Most people start with capacity planning tools, such as TeamQuest’s View and Modelling products. No tool alone, however, can accurately predict the nature of tomorrow’s workload.

A little sleuthing is needed. Put on that detective hat in a sandbox, staging or fail-over environment and do some proof-of-concept work.

Create a virtualised, grid-type environment and replicate a representative sample of workload. From this exercise you’ll obtain numbers you can use to extrapolate the overall change in infrastructure performance if the topology were to undergo a wholesale change. Execute baseline performance tests that detail today’s workload. Then execute load tests that mimic the addition of the expected number of transactions in six months or an increase in the number of users over time.


Even if testing resources are not as plentiful as those found in production, you can still run tests that mimic the type of infrastructure changes required to meet business demands and get reasonably good numbers. With this information you should be able to project when to add resources to keep performance on track.
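As a toy illustration of that projection step, here is a hedged sketch (the utilisation and growth figures are hypothetical) that estimates how many months remain before utilisation crosses a chosen ceiling, assuming roughly linear growth measured from your baseline and load tests:

```python
def months_until_ceiling(current_pct, growth_pct_per_month, ceiling_pct=75):
    """Estimate months until utilisation reaches the ceiling,
    assuming linear growth (integer percentages keep the arithmetic exact)."""
    months = 0
    while current_pct < ceiling_pct:
        current_pct += growth_pct_per_month
        months += 1
    return months

# Hypothetical numbers: baseline shows 45% utilisation, and load tests
# suggest about 5 points of growth per month.
print(months_until_ceiling(45, 5))  # 6 months until the 75% ceiling
```

Real workloads rarely grow linearly, so treat the result as a planning horizon, not a deadline.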

No. 15: Profiling to maximise performance

Be proactive in preventing performance bottlenecks by including a mandatory profiling step late in the software development lifecycle, before code reaches production. Profiling your applications and Web services will reveal whether your code is efficient.

A profiling tool analyses and reports on a number of attributes of your code at run time. Among other things, it can tell you which sections of code are taking the longest, how effectively you are using processor resources, and how your code is using memory, including reporting that illustrates where you may have a memory leak in your code.

If you use any of the major IDEs, such as Visual Studio or WebSphere Studio, profiling tools are included. If your development tools do not currently include profiling facilities, there are a number of third-party offerings, including Compuware’s DevPartner, YourKit, InfraRED, and J-Sprint.

During development, you’ll likely want to execute profiling locally to gain a general understanding of how your code is performing. Although profiling tools may differ slightly in their approach, profiling locally entails starting an agent or collection process on your machine. Then, exercise the application as your users would, stop the agent or collection process, and analyse the results.

If your code will be distributed across multiple platforms, consider profiling in a distributed fashion during the staging process.

To accomplish this, you’ll need to start agents or collection processes on each of the machines where your code will run. After exercising the code, most profiling tools will combine data from all of the agents or collection processes identified in the profiling session. Analysing the data from all the distributed agents or collection processes will yield an end-to-end illustration of your code’s efficiency. Use that map as a basis for your tweaking, and better performance will result.


To get Concurrent Sessions in Remote Desktop working, follow the steps below exactly:

  1. Download the file below and extract it somewhere. (You have to be registered to see the file)
  2. Reboot into Safe Mode. This is necessary to remove Windows File Protection.
  3. Copy the termserv.dll in the zip to %windir%\System32 and %windir%\ServicePackFiles\i386. If the second folder doesn’t exist, don’t copy it there. Delete termserv.dll from the dllcache folder: %windir%\system32\dllcache
  4. Merge the contents of Concurrent Sessions SP2.reg file into the registry.
  5. Make sure Fast User Switching is turned on. Go Control Panel -> User Accounts -> Change the way users log on or off and turn on Fast User Switching.
  6. Open up the Group Policy Editor: Start Menu > Run > ‘gpedit.msc’. Navigate to Computer Configuration > Administrative Templates > Windows Components > Terminal Services. Enable ‘Limit Number of Connections’ and set the number of connections to 3 (or more). This enables you to have more than one person remotely logged on.
  7. Now reboot back into normal Windows and try out whether Concurrent Sessions in Remote Desktop works. It should!

If anything goes wrong, termserv_sp2.dll is a backup of the original file you replaced. Just rename it to termserv.dll, reboot into Safe Mode, and copy it back.

The termserv.dl_ file provided in the zip is for the slipstreamers out there. Just swap it in for the corresponding file in the Windows installation disks.

I’ve added in extra information from the comments below… thanks to all who researched! 🙂

If anything doesn’t work with my procedure above though, post a comment here and I’ll try to help!


For further details, see vishnu’s blog, which these notes follow.

All files located in %temp% and %tmp% folders of the Controller, VuGen, and LoadGenerator machines can be safely removed after a script has been saved, the scenario run, and the results collated, analyzed, and saved.


The following are temporary files created by LoadRunner:


The Brr_xxxx folder contains raw data collected by the Controller during runtime. These data are collated and copied to the result folder after the scenario completes and are safe to delete once collation completes.


The folder contains runtime information of the load test and is safe to delete.


The folder is another runtime information folder; it contains information about monitors and is safe to delete.


Res is the default result folder; move it to a different location for safekeeping. You can also specify the result folder by selecting Result -> Result Settings in the Controller.


Aes_log.txt, drv_log.txt, ResmonLog.txt, and ResmonLogBrief.txt are all log files, needed only for debugging purposes in case of a problem.


Noname*, vuac*, vxml*, and unq* are all script-related files and can be removed once the script has been saved.


JET1858.tmp is an MS Jet temp file used during the collation process. It is safe to delete once collation finishes.
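A small housekeeping script can sweep these patterns out of the temp folder once a run is fully saved. This sketch (patterns taken from the list above; dry-run only, so nothing is actually deleted) shows the idea:

```python
import glob
import os
import tempfile

# Script-related temp file patterns named above; extend as needed.
PATTERNS = ["noname*", "vuac*", "vxml*", "unq*", "JET*.tmp"]

def find_loadrunner_temp_files(temp_dir=None):
    """Return temp files matching the LoadRunner-related patterns."""
    temp_dir = temp_dir or tempfile.gettempdir()  # %temp% on Windows
    matches = []
    for pattern in PATTERNS:
        matches.extend(glob.glob(os.path.join(temp_dir, pattern)))
    return matches

# Dry run: list candidates first; delete only after the script is saved
# and results are collated, analysed, and saved.
for path in find_loadrunner_temp_files():
    print("candidate for deletion:", path)
```

Replacing the print with os.remove(path) would perform the cleanup, but keep the dry run until you are confident nothing in the list is still needed.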

There is a simple way to check whether the load test we conducted was realistic.


The factors we have to capture are:


Hits per second,

Average response time,

Think time,

Number of users simulated.


Thus substituting all the gathered data in the formula,


Number of users supported = (Response time + Think time) * Requests per second



Thus, when calculating with the formula, the result should be equal to or greater than the total number of users executed.


From this we can find out how many users the servers can handle. If the result falls below the total number of users executed, the servers are not capable of serving that many users.


The page size can also be verified using:


Page size = Throughput / Hits per second.
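Both formulas are easy to sanity-check in a few lines. The figures below are hypothetical examples, and the first relation is the familiar Little’s Law (concurrency = throughput × total cycle time):

```python
def users_supported(response_time_s, think_time_s, requests_per_sec):
    # Little's Law: users = (response time + think time) * requests/sec
    return (response_time_s + think_time_s) * requests_per_sec

def page_size(throughput_bytes_per_sec, hits_per_sec):
    # Average page size = throughput / hits per second
    return throughput_bytes_per_sec / hits_per_sec

# Hypothetical run: 2 s response time, 8 s think time, 50 requests/sec.
print(users_supported(2.0, 8.0, 50.0))   # 500.0 users sustained
# Hypothetical: 1,250,000 bytes/sec throughput at 50 hits/sec.
print(page_size(1_250_000, 50))          # 25000.0 bytes per page
```

If the test actually simulated more than 500 users against those measured numbers, the servers were not keeping up with the offered load.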

This chapter discusses the most important counters to monitor (in both Windows NT and SQL Server), considers time intervals, and recommends a long-term strategy for monitoring performance. A table summarizes which counters to consider for particular problems.


Performance Monitor collects data about different counters, such as memory use. Performance Monitor can show you data in graphical format in real time, or you can save the data to log files. Pay particular attention to the discussion of log files in this section, because you will also use log files for long-term performance monitoring, which we will discuss in detail later. Working with a log file can be difficult to learn on your own because the options are not intuitive and are hidden on different screens.

You can choose between two Performance Monitors: one in the Administrative Tools group and one in the SQL Server group. They are the same basic program, but you need to run the one in the SQL Server group because it automatically loads the SQL Server-related counters. You run this version of the program with the following command: Perfmon.exe C:\Mssql\Binn\Sqlctrs.pmc, where the .pmc file is the Performance Monitor counter file that contains the SQL counters. You can write applications that provide your own counters, and you can modify the new system stored procedures called sp_user_counter1 through sp_user_counter10 and track them, too.

When you run the program from the SQL Server group, the window appears. Because you started the program that includes the set of SQL counters, five counters appear at the bottom of the window when Performance Monitor starts. The five counters are

·     Cache Hit Ratio

·     I/O — Transactions per second

·     I/O — Page Reads per second

·     I/O — Single Page Writes per second

·     User Connections

These counters will be explained in more detail later, but first, let’s learn how to navigate in Performance Monitor.

Changing Menu Options
The first set of buttons on the tool bar at the top of the window corresponds to the four views of the monitor: chart, alert, log, and report views. You can get to the same options using the View menu.

The menu options change depending upon which view is currently active. Without going into too much detail about the View menu options, their basic purpose is to let you set up and save standard viewing templates for each of the four views.

Understanding Counters
Windows NT lets you watch the performance of the system by “counting” the activity associated with any of its objects. Examples of objects in Windows NT are processors, disk drives, and processes. Each object has specific counters associated with it; for example, the % User Time counter is associated with a CPU or processor to designate what percent of the CPU is taken up by user programs (as opposed to system processes). This chapter gives you enough information to help you choose the right counters at the right time.

SQL Server includes many predefined counters, most of which you aren’t likely to use except in special cases. It can be difficult to know which counters are the basic ones to watch. If you have chosen the SQL Server Performance Monitor, several counters have been set up as default counters, such as Cache Hit Ratio and User Connections. You can create your own defaults by creating a .pmc file.

The counters are hooks into the operating system and other programs, like SQL Server, that have been built into the software to let Performance Monitor get data. Data collection is performed efficiently so that the additional load on the system is minimized. Windows NT needs most of the information gathered for managing memory, processes, and threads, and Performance Monitor is a good program to display the results.

On the tool bar, the button next to the four view buttons at the top of the window is a big plus sign, which you use to add counters to monitor. Click the + button, and the window will appear. The first field, Computer, has a search button at the end of the field. You can click on this field to bring up a list of all computers in your domain and choose a computer from the list, or you can type the name of a server you want to monitor. To monitor other servers, you need Windows NT administrative privileges on them.

In the next field, Object, you choose an object to monitor. The default is Processor, and the default counter shown in the field below is % Processor Time. The box on the right is the Instance. Any particular resource may have more than one instance; that is, more than one of that particular resource — in this case, processors — may exist. This computer has only one processor (CPU) because the only instance in the box is 0. Instance 3 would refer to the fourth CPU.

From the fields along the bottom, you can pick the color, scale, line width, and line style of the information that will be displayed about the counter you are adding. These options let you choose a different look for each counter you add to the window. The only display choice that may need explanation is scale. The scale field is a multiplier that helps you fit the values on the screen in the range you have set on the y-axis, which by default is 0–100.

After you choose the Object, Counter, and Instance you want to monitor and determine how you want the information to appear, click Add. The counter is added at the bottom of the list on the main window and starts graphing the next time your data is refreshed.

If you click the Explain button, a brief explanation of the counter you specified will appear. Sometimes, though, it uses abbreviations and acronyms that require further research, unless you are a Windows NT internals guru.

Setting Up Alerts
An alert is the warning the computer sends you when a resource such as memory or the network becomes a bottleneck. When an alert occurs, it is written to a log file, along with the date and time it occurred. The log file is a circular file, allowing at most 1,000 entries before it starts overwriting the oldest alerts. The alert can also be written to the Windows NT event log.

To add a new alert, click the second button on the toolbar, then click the + button. The dialog box will appear. Choose the counters you want to create alerts for, then click Add. The example on the screen will create an alert when the Cache Hit Ratio drops below 85 percent.

Notice the Run Program option in the lower right portion of the screen. You can use it to execute a program when the alert occurs. For example, you can choose SQL Server — Log in the Object field, Log Space Used (%) for the Counter, and the database you want to monitor from the Instance list. When the log file for that database gets above 90 percent, you can execute a batch file that runs an ISQL script to dump the transaction log. In this way you can reduce your chances of running out of log space.
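As a sketch of that Run Program hook, the alert could launch a one-line batch file that runs an ISQL command like the one built below. All names here (server, database, dump device) are hypothetical placeholders; DUMP TRANSACTION is the SQL Server 6.x statement for backing up the transaction log:

```python
def dump_log_command(server, database, dump_device):
    """Build the ISQL command line that a Performance Monitor alert's
    Run Program action could launch (via a small batch file) when the
    Log Space Used (%) counter for the database crosses 90 percent."""
    sql = f"DUMP TRANSACTION {database} TO {dump_device}"
    # -S server, -E trusted connection, -Q run the query and exit
    return ["isql", "-S", server, "-E", "-Q", sql]

# Hypothetical names throughout:
print(" ".join(dump_log_command("SQLSERVER1", "sales", "logdump_dev")))
```

The point is simply that the alert threshold triggers the dump automatically, reducing your chances of running out of log space overnight.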

Starting Log Files
Learning how to establish log files is very important, because log files are a critical part of the long-term strategy recommended later in this chapter. It can be a bit confusing, so let’s go through the steps.

1.  Click the third button on the toolbar — View Output Log File Status. Notice that the Log File entry at the top is blank, the status is closed, the file size is zero, and the log interval is 15.00 (seconds).

2.  Click +, and the list of objects will appear. Select the ones you want to add to the log and click Add. If you hold down the Ctrl key while selecting, you can choose more than one counter, and holding down Shift lets you highlight all the items in a range. All counters in the objects you pick will be tracked in the log file. We will discuss what to monitor later.

3.  Now we need to specify a log file. From the Options menu, choose Log. The dialog box will appear.

4.  This dialog box looks almost like the standard file dialog box, but it has two very important additions. At the bottom of the screen, the Update Time section shows the refresh interval. For short-term tracking, keep it at 15 seconds. For long-term tracking, set it at 300 seconds (5 minutes). The other important difference between this dialog box and the standard file name dialog box is the Start Log button. Nothing happens until you click this button to start collecting data. Once you do, the text of the button will change to Stop Log.

Type a log file name in the File Name box at the top. Then click Start Log.

5.  Click OK to close this dialog box, then minimize the window and let the log run for a while.

6.  Maximize the window and click the Stop Log button. Then switch to the Chart view by clicking the first button on the toolbar.

7.  From the Options menu, choose Data From. Select the log file you named earlier. You can then choose the counters you want to view from the log.

The best part about using log files is that you can view a few counters at a time to avoid overcrowding the window. You can also mix and match the counters you want to analyze at the same time. This feature is important because many of the counters depend on other counters.

Special Note: The log file does not do anything until you click the Start Log button in the Log Options dialog box (also available by choosing Log in the Options menu).

The fourth button on the toolbar, the Reports button, lets you print customized reports of the data collected in your log file. Experiment with the available reports when you have a chance; we won’t cover this option here.


The default refresh interval for Performance Monitor is one second. Every second, you get new information about your system’s performance. This interval is good for a very short-term examination of the system, but it can be a drain on the server. A five-second interval causes much less overhead, probably in the neighborhood of five percent extra activity. However, for long-term monitoring, 5 seconds produces a very large log file.

Setting the interval to 5 minutes creates a reasonable size log file, but this large an interval can mask performance peaks. However, because each entry in the log file stores the minimum, average, and maximum values for each counter, or aspect of SQL Server you want to monitor, you can discover the peaks with a little extra analysis. Five minutes is a good setting for long-term logging. You can always fire up another copy of Performance Monitor and look at one- to five-second intervals if you want a short-term peek into the system’s performance.
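The trade-off between interval and log size is easy to quantify: the number of samples the log accumulates per counter per day is just seconds-per-day divided by the refresh interval. A quick check:

```python
def samples_per_day(interval_seconds):
    """Log entries written per counter per day at a given refresh interval."""
    return 24 * 60 * 60 // interval_seconds

print(samples_per_day(5))    # 17280 entries/day -- a very large log
print(samples_per_day(300))  # 288 entries/day -- reasonable for long-term use
```

Multiply by the number of counters you track to estimate the total growth of the log file.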

To determine the amount of drain on the system from Performance Monitor, shut down all the services and start the Monitor again. Add the CPU usage and watch it for about 30 seconds at the default interval of one second. Then change the interval to 0.1 seconds. Your CPU usage will jump dramatically. One odd observation is that the effect of changing from one second to 0.1 seconds is different on different computers, and it is different between Windows NT 4.0 and Windows NT 3.51. For example, when changing the interval on two 133 MHz computers — a laptop and a tower box — the tower machine has the better performance at the shorter interval, showing about 55 percent utilization, while the laptop shows about 60 percent utilization.

Special Note: The faster your refresh option, the more the drain on the system. The default one-second refresh interval creates less than 5 percent overhead on a single-processor machine. For multiprocessor machines, the overhead is negligible. With the refresh interval set to 0.01 seconds, Performance Monitor takes about 60 percent of the resources. At 10 seconds per refresh, the drain is almost too small to measure, even with a lot of counters turned on.


Now that you know how to use the program, let’s get to the section you’ve been waiting for: How do you know what to monitor? Of the hundreds of Windows NT counters and 50 or so SQL counters, how do you choose? Should you monitor everything? How long should you monitor the system?

Monitoring performance helps you perform two related tasks: identifying bottlenecks and planning for your future hardware and software needs (capacity planning). Learning about the important counters will help identify potential bottlenecks. The strategy section later in this chapter will help you put together a good plan for creating a general monitoring strategy.

What do you want to monitor? Everything! Well, monitoring everything may be a good idea for a short period, but the results will show that many of the counters are always at or near zero; monitoring them all the time may be a waste of time and resources. You need to establish a baseline for your system. This baseline lets you know what results are normal and what results indicate a problem. Once you establish a baseline, you don’t need to track everything.

The key categories to monitor can be split into two major sections: Windows NT categories and SQL Server categories. Categories in this sense are groups of objects that contain counters.

·     Windows NT

o    Memory

o    Processor

o    Disk I/O

o    Network

·     SQL Server

o    Cache

o    Disk I/O

o    Log

o    Locks

o    Users

o    Other Predefined Counters

o    User-Defined Counters

When monitoring both categories of data, look for trends of high and low activity. For example, particular times during the day, certain days of the week, or certain weeks of the month might show more activity than others. After you identify highs and lows, try to redistribute the workload. These peaks and valleys are especially good to know when something new is added to the schedule. If the peak loads are causing problems, identify which things can be scheduled at a later time when the system is not so busy. Knowing the load patterns is also helpful when problems occur, so that you can re-run a particular job or report when the load is low.

Get to know your users — find out which reports they need first thing in the morning. Perhaps you can schedule these reports to run at night in a batch mode, instead of having the user starting them during a busy time.

Monitoring Windows NT
The purpose of monitoring the Windows NT categories is to answer one of two questions: “What resource is my bottleneck?” or “Do I see any upward usage trends that tell me what resource I might run low on first?” SQL Server 6.5 introduced several highwater markers, such as Max Tempdb space used, which make it easier to identify potential long-term problems.


The Memory: Pages/sec counter is the number of pages read or written to the disk when the system can’t find the page in memory. This page management process is referred to as paging. If the average value for this counter is five, you need to tune the system. If this value is 10 or more, put tuning the server high on your priority list. Before SQL Server 6.0, the value for this counter was an important flag to tell you whether memory was the bottleneck. Now, with SQL Server’s parallel read-ahead feature, this counter will give you only an indication of how busy the read-ahead manager is. However, we will discuss other counters that are better at tracking the read-ahead manager. In other words, this counter may have been one of the most significant counters to track in the past, and it still is on machines without SQL Server, but better ones are available to track memory. 

The Memory: Available Bytes counter displays the amount of free physical memory. If the value for this counter is consistently less than 10 percent of your total memory, paging is probably occurring. You have too much memory allocated to SQL Server and not enough to Windows NT.

Before we start talking about the counters in the processor category, it is important to know that Windows NT assigns certain responsibilities to certain processors if you have four or more CPUs. Processor 0 is the default CPU for the I/O subsystem. Network interface cards (NICs) are assigned to the remaining CPUs, starting from the highest-numbered CPU. If you have four processors and one NIC, that card is assigned Processor 3. The next NIC gets Processor 2. Windows NT does a good job of spreading out processor use. You can also set which processors SQL Server uses. See Chapter 16, “Performance Tuning,” particularly the notes on the Affinity Mask, for more information about allocating processors.

You can monitor each processor individually or all the processors together. For monitoring individual processors, use the Processor: % Process Time counter. This counter lets you see which processors are the busiest.

A better counter to monitor over the long term is the System: % Total Processor Time counter, which groups all the processors to tell you the average percentage of time that all processors were busy executing non-idle threads.

Who (or what) is consuming the CPU time? Is it the users, system interrupts, or other system processes? The Processor: Interrupts/sec counter will tell you if it is the system interrupts. A value of more than 1,000 indicates that you should get better network cards, disk controllers, or both. If the Processor: % Privileged Time is greater than 20 percent (of the total processor time) and Processor: % User Time is consistently less than 80 percent, then SQL Server is probably generating excessive I/O requests to the system. If your machine is not a dedicated SQL Server machine, make it so. If none of these situations is occurring, user processes are consuming the CPU. We will look at how to monitor user processes when we consider SQL Server-specific counters in the next section.
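The decision rules in this paragraph can be written down directly. This sketch just encodes the thresholds above; the returned labels are illustrative, not a tool’s actual output:

```python
def diagnose_cpu(interrupts_per_sec, privileged_pct, user_pct):
    """Apply the rules above to sampled counter values."""
    if interrupts_per_sec > 1000:
        # Excessive interrupts point at network cards or disk controllers.
        return "hardware interrupts"
    if privileged_pct > 20 and user_pct < 80:
        # SQL Server is probably generating excessive I/O requests.
        return "system I/O"
    return "user processes"

print(diagnose_cpu(1500, 10, 85))   # hardware interrupts
print(diagnose_cpu(400, 25, 70))    # system I/O
print(diagnose_cpu(400, 10, 90))    # user processes
```

In practice you would feed it averages over a sampling window rather than single readings, since momentary spikes are normal.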

Disk I/O
As discussed in Chapter 16, “Performance Tuning,” having many smaller drives is better than having one large drive for SQL Server machines. Let’s say that you need 4 GB of disk space to support your application with SQL Server. Buy four 1-GB drives instead of one 4-GB drive. Even though the seek time is faster on the larger drive, you will still get a tremendous performance improvement by spreading files, tables, and logs among more than one drive.

Special Note: The single best performance increase on a SQL Server box comes from spreading I/O among multiple drives (adding memory is a close second).

Monitor the disk counters to see whether the I/O subsystem is the bottleneck, and if it is, to determine which disk is the culprit. The problem may be the disk controller board. The first thing to know about monitoring disk I/O is that to get accurate readings from the Physical Disk counters, you must go to a command prompt window and type DISKPERF -y, then reboot. This procedure turns on the operating system hooks into the disk subsystem. However, this setup also causes a small performance decrease of 3 to 5 percent, so you want to turn this on only periodically and only for a short period. Use the Diskperf -n command to turn it off, then restart your system. 

Track Physical Disk: % Disk Time to see how much time each disk is busy servicing I/O, including time spent waiting in the disk driver queue. If this counter is near 100 percent on a consistent basis, then the physical disk is the bottleneck. Do you rush out and buy another disk? Perhaps that is the best strategy if the other drives are also busy, but you have other options. You may get more benefit from buying another controller and splitting the I/O load between the different controllers. Find out what files or SQL Server tables reside on that disk, and move the busy ones to another drive. If the bottleneck is the system drive, split the virtual memory swap file to another drive, or move the whole file to a less busy drive. You should already have split the swap file, unless you only have one drive (which is very silly on a SQL Server machine).

LogicalDisk: Disk Queue Length and PhysicalDisk: Disk Queue Length can reveal whether particular drives are too busy. These counters track how many requests are waiting in line for the disk to become available. Values of less than 2 are good; anything higher means requests are queuing faster than the disk can service them.

Redirector: Read Bytes Network/Sec gives the actual rate at which bytes are being read from the network. Dividing this value by the value for the Redirector: Bytes Received/Sec counter gives the efficiency with which the bytes are being processed.

If this ratio is 1:1, your system is processing network packets as fast as it gets them. If this ratio is below 0.8, then the network packets are coming in faster than your system can process them. To correct this problem on a multiprocessor system, use the Affinity Mask and SMP Concurrency options in the SQL Configuration dialog box to allocate the last processor to the network card, and don’t let SQL Server use that processor. For example, if you have four CPUs, set the Affinity Mask to 7 (binary 0111) and SMP Concurrency to 3. This setup gives three CPUs to SQL Server and the fourth processor to the network card, which Windows NT assigns to that processor by default. If I/O is also a problem, set the Affinity Mask to 6 (binary 0110) and SMP Concurrency to 2, because Windows NT assigns the I/O subsystem to the first processor by default.
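The mask arithmetic in that example is just one bit per CPU; a quick sketch to verify the values:

```python
def affinity_mask(cpus):
    """Build an affinity mask: bit i set means SQL Server may use CPU i."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# Leave CPU 3 to the NIC: SQL Server gets CPUs 0-2, SMP Concurrency 3.
print(affinity_mask([0, 1, 2]))  # 7 (binary 0111)
# Also leave CPU 0 to the I/O subsystem: SQL Server gets CPUs 1-2.
print(affinity_mask([1, 2]))     # 6 (binary 0110)
```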

Monitoring SQL Server
The questions to ask yourself when monitoring the SQL Server categories are “Do I have the optimal configuration values for SQL Server?” and “Who is consistently using the most resources?”

If any of the counters considered in this section indicate a problem, the problem is somewhere related to SQL Server. If the problem is I/O, memory, CPU, or locks, you can dig deeper and find out who the culprits are. However, if you are using a long-term logging strategy for monitoring, you must monitor every session to be sure you have the necessary historical data when you want to see what was happening at a particular time.

If you are watching the monitor when a problem occurs, go to the SQL Server-Users object and turn on the counter for all instances. The instances in this case are the sessions currently logged on. You can see the login ID and the session number. If you see one or more sessions causing the problem, you can spy on them to find the last command sent. Go to the Enterprise Manager, click the Current Activity button on the toolbar, and double-click the line in the display corresponding to the session number. You will see the last command received from the session. To trace commands in more depth, use the SQLTrace utility that is new with version 6.5. (See Chapter 3, “Administrative and Programming Tools,” for details.)

The five main categories of SQL Server counters to monitor are cache, disk I/O, log, locks, and users. We will consider each of these categories separately as well as a mix of other important predefined counters. The final part of this section discusses the new user-defined counters.

To monitor your cache, watch SQL Server — Cache Hit Ratio. It monitors the rate at which the system finds pages in memory without having to go to disk. The cache hit ratio is the number of logical reads divided by the total of logical plus physical reads. If the value for this counter is consistently less than 80 percent, you should allocate more memory to SQL Server, buy more system memory, or both. However, before you buy more memory, you can try changing the read-ahead configuration options. Also look at the discussion of free buffers in the next chapter to determine whether the number of free buffers is approaching zero. Changing the free buffers configuration parameter may increase the cache hit ratio.
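The ratio itself is simple to compute from read counts; a quick sketch with hypothetical numbers:

```python
def cache_hit_ratio(logical_reads, physical_reads):
    """Cache hit ratio = logical reads / (logical + physical reads)."""
    total = logical_reads + physical_reads
    return logical_reads / total if total else 0.0

# Hypothetical sample: 9,000 logical reads, 3,000 physical reads.
ratio = cache_hit_ratio(9000, 3000)
print(f"{ratio:.0%}")   # 75% -- below the 80 percent guideline, so consider
                        # giving SQL Server more memory
```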

To find out if you have configured SQL Server properly, you should monitor SQL Server-Procedure Cache: Max Procedure Cache Used (%). If this counter approaches or exceeds 90 percent during normal usage, increase the procedure cache in the SQL Server configuration options. If the maximum cache used is less than 50 percent, you can decrease the configuration value and give more memory to the data cache. Rumor has it that SQL Server 7.0 will have a floating-point number for the procedure cache configuration parameter so that you can give the procedure cache less than 1 percent of your SQL Server memory. For a super server with gigabytes of memory, even 1 percent is too much for procedure cache.

If a 2K data page has been swapped to the Windows NT virtual memory file and read in again later, SQL Server still counts the page as already in memory for the purposes of the Cache Hit Ratio counter. Therefore, a system bogged down by heavy swapping to virtual memory could still show a good cache hit ratio. To find out if your system is in this category, monitor the Memory: Page Faults/Sec counter.

The Memory: Page Faults/Sec counter watches the number of times a page was fetched from virtual memory, meaning that the page had been swapped to the Windows NT swap file. It also adds to the counter the number of pages shared by other processes. This value can be high while system services, including SQL Server, are starting up. If it is consistently high, you may have given too much memory to SQL Server. The network and operating system may not have enough memory to operate efficiently.

Warning: This counter is a strange one to figure out. Running this counter on four different types of machines gave widely different results. To try to get a baseline value, we turned off all services, including SQL Server, unplugged the boxes from the network, and ran Performance Monitor with only the Memory: Page Faults/Sec counter turned on. The lowest measurement of page faults per second was from the system we least expected — a 50 MHz 486 with 16 MB of memory and one disk drive. It settled in at about five to seven page faults per second. The DEC Alpha with 4 processors, 10 GB RAID 5 striping on 5 drives, and 256 MB of memory was up in the 35 to 40 page faults per second range. So was a similarly configured Compaq ProLiant. The laptop performed in the middle, at about 15 page faults per second. It is a 90 MHz Pentium with 1 disk drive and 40 MB of memory. All were running Microsoft Windows NT version 3.51 service pack 4. All services except Server and Workstation were turned off. Running the same experiment with Windows NT 4.0 service pack 1 showed approximately the same results, except that the page faults per second numbers ran consistently 10 percent less than in Windows NT 3.51.

The result of this experiment is that we can’t recommend a range to gauge the performance of your machine. The best you can do is turn off all services for a brief period to get a baseline measurement on your machine, then use this value as a guide for your regular usage.

Disk I/O
Several counters measure how busy your disk drives are and which disk drives are the busiest. Remember that for any I/O measurements to be effective, you must run the Windows NT Diskperf -y command and reboot the system.

Even though the SQL Server: I/O Transactions Per Second counter is a bit misleading, it is still good, especially for capacity planning. This counter measures the number of Transact-SQL batches processed since the last refresh period. You should not use these results against any standard TPC benchmark tests that give results in transactions per second — it is not referring to a Begin/Commit transaction, just to batches of commands. Watch this number over a span of several months, because an increase in this counter can indicate that the use of SQL Server is growing.

The SQL Server: I/O — Lazy Writes/Sec counter monitors the number of pages per second that the lazy writer is flushing to disk. The lazy writer is the background Windows NT process that takes the data from cached memory and writes it to disk, although sometimes a lazy writer is hardware that reads the cached memory on the disk drive and is managed by the disk controller. A sustained high rate of lazy writes per second could indicate any of three possible problems:

·     the Recovery Interval configuration parameter is too short, causing many checkpoints

·     too little memory is available for page caching

·     the Free Buffers parameter is set too low

Normally this rate is zero until the least-recently used (LRU) threshold is reached. LRU is the indicator by which memory is released for use by other processes. Buying more memory may be the best solution if the configuration parameters seem to be in line for your server size.

The SQL Server: I/O Outstanding Reads counter and the I/O Outstanding Writes counter measure the number of physical reads and writes pending. These counters are similar to the PhysicalDisk: Disk Queue Length counter. A high value for this counter for a sustained period may point to the disk drives as a bottleneck. Adding memory to the data cache and tuning the read-ahead parameters can decrease the physical reads.

The SQL Server: I/O Page Reads per Second counter is the number of pages not found in the SQL Server data cache, which indicates physical reads of data pages from disk. This value does not count pages that are read from the Windows NT virtual memory disk file. There is no way to watch only the logical page reads per second. According to sources in the SQL development team, counters for logical page reads are hidden in a structure that is not available in this version of SQL Server. However, you can figure out the logical page reads per second by taking the total page reads per second and subtracting the physical page reads per second.


You should occasionally turn on the I/O Single Page Writes counter. A lot of single-page writes means you need to tune SQL Server, because it is writing single pages to disk instead of its normal block of pages. Most writes consist of an entire extent (eight pages) and are performed at a checkpoint. The lazy writer handles all the writing of an extent at a time. When SQL Server is forced to hunt for free pages, it starts finding and writing the LRU pages to disk, one page at a time. A high number of single-page writes means that SQL Server does not have enough memory to keep a normal amount of pages in data cache. Your choices are to give more memory to SQL Server by taking memory away from the static buffers, by decreasing the procedure cache, or by decreasing the amount of memory allocated to Windows NT.

Tie the SQL Server — Log: Log space used (%) counter to an alert. When the value goes over 80 percent, send a message to the administrator and to the Windows NT event log. When it goes over 90 percent, dump the transaction log to a disk file (not the diskdump device), which will back up the log and truncate it. You want to track this counter for all your application databases, for Tempdb, and for the Distribution database if you are running replication.
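The over-90-percent action can be scripted with the SQL 6.5 dump syntax. A sketch, where the database name and file path are hypothetical placeholders:

```sql
-- Back up the transaction log to a disk file and truncate it.
-- 'MyDatabase' and the path are placeholders for your own names.
DUMP TRANSACTION MyDatabase TO DISK = 'C:\MSSQL\BACKUP\MyDatabase_log.dat'
go
```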

To check out locking, turn on the SQL Server Locks: Total Locks and Total Blocking Locks counters. If you notice a period of heavy locking, turn on some of the other lock counters to get a better breakdown of the problem. The value for Total Blocking Locks should be zero or close to it as often as possible.

One counter to turn on to see if you have configured the system correctly is SQL Server Licensing: Max Client Count. Once you have established that your licensing choice is correct, turn it off. You should turn it back on occasionally to check the connections. If you do exceed the license count, you will know because users will be denied access.

When you suspect that one particular user is the cause of any performance problems, turn on the counters in the Users section. However, with many users on the system, it is difficult to guess which counters to use, and it is difficult to turn on all counters for all sessions. One shortcut is to go into the Current Activity screen of the SQL Enterprise Manager and look at the locks in the Locks tab as well as the changes in CPU and Disk I/O activity in the Detail tab.

Monitor the SQL Server — Users: CPU Time counter for each user. Users for whom this counter returns high values may use inefficient queries. If the query appears reasonable, a high value may indicate an indexing problem or poor database design. Use Showplan to determine if the database’s indexes are optimal. Look for wide tables (long row sizes), which indicate a non-normalized database. Wide tables and inefficient indexes can cause more I/O than table scans.

Other Predefined Counters
A new counter in SQL Server 6.5, SQL Server: Max Tempdb Space Used, indicates how well you have estimated the size of Tempdb. If the value for this counter is very small, you know you have overestimated the size of Tempdb. Be sure to watch this counter frequently, especially during the busiest times and when your nightly jobs run. If it approaches the size of Tempdb, then you should probably increase Tempdb’s size.
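If the counter shows Tempdb running out of room, you can enlarge it. A sketch using the 6.5 device syntax, where TempDev is a hypothetical database device you have already created with DISK INIT:

```sql
-- Expand Tempdb by 50 MB on an existing device named TempDev
-- (hypothetical; substitute one of your own devices).
ALTER DATABASE tempdb ON TempDev = 50
```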

Compare SQL Server: NET — Network Reads/Sec to Server: Bytes Received/Sec (or SQL Server: NET — Network Writes/Sec to Server: Bytes Transmitted/Sec). If the SQL Server network counters are significantly lower than the Server object counters, your server is busy processing network packets for applications other than SQL Server. This reading indicates that you are using the server for uses other than SQL Server, perhaps as a primary or backup domain controller, or as a print server, file server, Internet server, or mail server. To get the best performance, make this server a dedicated SQL server and put all the other services on another box.

If you are using replication, you should focus on the publishing machine. You should monitor the distribution machine and the subscriber as well, but the publisher will show the first signs of trouble. Turn on all counters in the SQL Server Replication-Publishing DB object. The three counters will tell you how many transactions are held in the log waiting to be replicated, how many milliseconds each transaction is taking to replicate, and how many transactions per second are being replicated.

User-Defined Counters
Last but not least, you can define counters. The user-defined counters are in the SQL Server User-Defined Counters object in the Master database. The 10 counters correspond to 10 new stored procedures called sp_User_Counter1 through sp_User_Counter10. These stored procedures are the only system stored procedures you should change. If you look at the code of these procedures, you will see that each performs a Select 0, which, when tracked in Performance Monitor, draws a flat line at the bottom of the screen. Replace the Select 0 with a Select statement that returns one number; an integer is preferable, but float, real, and decimal numbers also work. These queries should be quick, not ones that take minutes to run.

Please note that these counters are different from the user counters mentioned earlier, which track the specific activity of a particular person logged in to SQL Server.

The current version of Performance Monitor contains a bug: if User Counter 1 contains an error, none of the 10 counters will show up in Performance Monitor. However, this bug is not the only reason you might not see these user-defined counters in Performance Monitor. The Probe login account, added when you install SQL Server, must have both Select and Execute permission on these 10 stored procedures for them to appear.
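Putting the pieces together, here is a sketch of customizing the first counter. SQL Server 6.5 has no ALTER PROCEDURE, so you drop and re-create the procedure; tracking the number of server processes is just an illustrative choice of metric:

```sql
USE master
go
-- Re-create sp_User_Counter1 so it charts the current number of
-- server processes instead of the default flat line (Select 0).
DROP PROCEDURE sp_User_Counter1
go
CREATE PROCEDURE sp_User_Counter1 AS
SELECT COUNT(*) FROM master.dbo.sysprocesses
go
-- Make sure the Probe account can still run the procedure, or the
-- counter disappears from Performance Monitor.
GRANT EXECUTE ON sp_User_Counter1 TO probe
go
```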

It would be nice to be able to change the names of these stored procedures so you could more easily remember what you are tracking. Maybe this feature will be included in version 7.0.

Here is a trick: Suppose you want to count the number of transactions you have in a table. You could put the following statement in sp_User_Counter1:

SELECT COUNT(*) FROM MyDatabase.dbo.MyTable

If MyTable had 40 million rows, the stored procedure would take a lot of time to execute, even though it scans the smallest index to get an accurate count. Instead, you could get an approximate number by using the following command:

SELECT rows FROM MyDatabase.dbo.sysindexes WHERE id = OBJECT_ID('MyTable') AND indid IN (0,1)

This way is much faster, even though SQL Server does not keep the value in sysindexes up-to-date. Sometimes the counters tracked in sysindexes get out of sync with the actual table, and the only way to get them updated accurately is with DBCC. But most of the time the value in sysindexes is accurate enough.


The concept behind a good long-term strategy for monitoring performance is simple to explain: Use log files to track as many items as you can without affecting performance. We break this discussion into three sections: establishing a baseline, monitoring performance over the long term, and tracking problems.

Establishing a Baseline
First, go to a command prompt and turn on the disk counters using the command Diskperf -y, then reboot. Then establish a new log file, click the + button, add all the options, and start the logging process. Choosing all the options tracks every instance of every counter in every object. You are tracking a lot of information, especially with the physical disk counters turned on.

Run Performance Monitor with this setup for a week; if you wish, you can manually stop and restart the log file every night so that each day is contained in a different log file. These measurements become your baseline; all your trend measurements will be based on this baseline. This method is not a perfect way to establish a baseline if you have very many special activities taking place on your server that week. But you may never experience a “typical” week, and it’s better to get some baseline measurement than wait.

We also recommend that you start a performance notebook. In this notebook, keep a page where you log special activities and events. For instance, an entry in your log might say, “Ran a special query for the big boss to show what a Cartesian product between two million-record tables does to the system.” In your performance notebook, be sure to record changes to the hardware, along with dates and times. You should also schedule actions like backups and transaction log dumps regularly so that when you look at system performance for one night last week, you do not have to wonder whether the backup was running.

We recommend that you run your long-term monitoring from another computer on the network. This way, you are not skewing the results by running it on the server you are trying to monitor. Also, avoid running Perfmon.exe to capture the long-term baseline, because someone must be logged on for it to run, and leaving an administrator machine logged on for long time periods is not a good idea. Instead, run the command-line version of Performance Monitor, called Monitor.exe. It is essentially the same program as Perfmon.exe without the screens. All output can be directed to the log files. To further simplify your life, get Srvany.exe from the Windows NT resource kit and make Monitor.exe into a Windows NT service. This way you can manage Monitor.exe like any other network service.

Periodically, perhaps once every six months, repeat this baseline process with all the counters turned on. Then compare your baselines to establish a trend.

Monitoring Performance over the Long Term
Once you have established your baseline, start another series of log files for your everyday use. First, turn off the physical disk counters with the Diskperf -n command from a command prompt and reboot the system. You can still track everything else if you want to because turning off the physical disk counters reduces the performance problems caused by monitoring. However, it is not necessary to track all the counters. We recommend you track the following objects:

·     Logical Disk

·     Memory

·     Paging File

·     Processor

·     Server

·     SQL Server

·     SQL Server — Replication (only if you are running replication)

·     SQL Server — Locks

·     SQL Server — Log

·     SQL Server — Procedure Cache

·     SQL Server — Users

·     System

Tracking Problems
When you experience performance problems, leave your Performance Monitor running with the log file so you continue to collect long-term data. Then start Performance Monitor again to track the particular problem. Turn on whatever counters you need to look at, using this chapter as a guide for the key counters to monitor in the disk, memory, network, and processors categories.

Start with the high-level counters — look for the words “total” or “percent” (or the % sign). When one of these counters indicates a problem, you usually have the option of watching counters that give you more detail. Learn which counters in different sections are related to each other. The relationships can tell you a lot. For example, the I/O Transactions Per Second counter in the SQL Server section is closely related to the CPU % counter in the processor section. If the number of I/O transactions per second goes up, so does the processor usage.

Concentrate on finding out which resource is causing the problem. Is it the system or a user process? Is it Windows NT or SQL Server? Before you purchase more hardware, try to find a configuration option related to the problem. Don’t hesitate to change hardware configuration or move data to different servers to balance the work among the available resources.

For specific examples of tuning performance, see Chapter 16, “Performance Tuning.”

Special Note: Use log files to track as many items as you can without affecting performance.

Monitoring with Transact-SQL
You can also use three Transact-SQL commands to do your own monitoring:


·     DBCC SQLPERF — cumulative from the start of SQL server; use iostats, lru stats, and netstats parameters

·     DBCC PROCCACHE — six values used by Performance Monitor to monitor procedure cache

·     DBCC MEMUSAGE — a snapshot of how SQL Server memory is divided among configuration, the data cache, and the procedure cache

The output from these commands can be inserted into a table for long-term tracking and customized reporting. Tracking the MEMUSAGE output calls for some tricky programming because different sections have different output formats. The other two commands are more straightforward.

The example below shows how to capture the DBCC PROCCACHE output. This command displays the same six values that you can display in Performance Monitor to watch the procedure cache usage in SQL Server.

CREATE TABLE PerfTracking
    (date_added datetime default (getdate()),
    num_proc_buffs int,
    num_proc_buffs_used int,
    num_proc_buffs_active int,
    proc_cache_size int,
    proc_cache_used int,
    proc_cache_active int)
go
INSERT PerfTracking (num_proc_buffs, num_proc_buffs_used, num_proc_buffs_active,
    proc_cache_size, proc_cache_used, proc_cache_active)
EXEC ('dbcc proccache')
go

After running this command, you can use any SQL Server-compliant report writer or graphing program to create your own fancy graphs. 
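For example, a daily summary from the PerfTracking table built above could feed such a graph. A sketch, with illustrative column choices:

```sql
-- Average and peak procedure cache usage per day, from the
-- PerfTracking table populated by DBCC PROCCACHE.
SELECT CONVERT(char(8), date_added, 112) AS sample_day,
       AVG(proc_cache_used) AS avg_cache_used,
       MAX(proc_cache_used) AS max_cache_used
FROM PerfTracking
GROUP BY CONVERT(char(8), date_added, 112)
ORDER BY 1
```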


The list below is a quick reference to the information about counters we’ve presented in this chapter. After the performance questions you may ask, we list related counters.

Is CPU the bottleneck?

·     system: % total processor time

·     system: processor queue length

What is SQL Server’s contribution to CPU usage?

·     SQL Server: CPU Time (all instances)

·     process: % Processor Time (SQL Server)

Is memory the bottleneck?

·     memory: page faults/sec (pages not in working set)

·     memory: pages/sec (physical page faults)

·     memory: cache faults/sec

What is SQL Server’s contribution to memory usage?

·     SQL Server: cache hit ratio

·     SQL Server: RA (all read ahead counters)

·     process: working set (SQL Server)

Is disk the bottleneck? (Remember that disk counters must be enabled for a true picture.)

·     physical disk: % disk time

·     physical disk: avg disk queue length

·     disk counters: monitor logical disk counters to see which disks are getting the most activity

What is SQL Server’s contribution to disk usage?

·     SQL Server-users: physical I/O (all instances)

·     SQL Server: I/O log writes/sec

·     SQL Server: I/O batch writes/sec

·     SQL Server: I/O single-page writes

Is the network the bottleneck?

·     server: bytes received/sec

·     server: bytes transmitted/sec

What is SQL Server’s contribution to network usage?

·     SQL Server: NET — Network reads/sec

·     SQL Server: NET — Network writes/sec

Did I make Tempdb the right size?

·     SQL Server: Max Tempdb space used (MB)

Is the procedure cache configured properly? (The highwater marks for the percentages are more important than the actual values.)

·     Max Procedure buffers active %

·     Max Procedure buffers used %

·     Max Procedure cache active %

·     Max Procedure cache used %


SQL Server 6.5 gives you new configuration and tuning options. It also adds new counters to help you track the use of SQL Server on your system. Use Performance Monitor to see if your system is configured properly. Performance Monitor is one of the best tools you can use to identify current bottlenecks and prevent future problems.