
What is ViewState in .NET Technology?


ViewState in ASP.NET

Introduction

Microsoft ASP.NET Web Forms pages are capable of maintaining their own state across multiple client round trips. When a property is set for a control, ASP.NET saves the property value as part of the control's state. To the application, this makes it appear that the page's lifetime spans multiple client requests. This page-level state is known as the view state of the page. The view state of a Web Forms page is sent by the server as a hidden form field in every response to the client, and is returned to the server by the client as part of a postback. In this article we will see how view state is implemented in ASP.NET for state management, and how you can use it effectively in your web form.

Problems with ViewState

ViewState has advantages as well as disadvantages, so you need to weigh them carefully before deciding to use it. As mentioned earlier, view state doesn't require any server resources: it is passed to the client on every postback as a hidden form element. Because it is added to every page, it adds a few kilobytes to the response, which affects how quickly the page loads on the client. The other main problem with view state is that it travels to the client in a form the client can read and send back, so a user can tamper with the value; for this reason you shouldn't store any sensitive data in the view state.

View state is one of the most important features of ASP.NET, not so much because of its technical relevance, but because it makes the magic of the Web Forms model possible. However, if used carelessly, view state can easily become a burden. Although view state travels in a hidden field called __VIEWSTATE, the information is not clear text: by default, a machine-specific authentication code is calculated on the data and appended to the view state string, and the resulting text is then Base64 encoded (but not encrypted). To make the view state more tamper-resistant, the ASP.NET @Page directive supports an attribute called EnableViewStateMac whose only purpose is detecting any attempt at corrupting the original data.
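If you want to be explicit about it, the attribute sits right on the @Page directive (it is normally on by default); a minimal sketch, with the Language attribute shown only as an example:

<%@ Page Language="vb" EnableViewStateMac="true" %>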

Implementation of ViewState

StateBag implements the view state and manages the information that ASP.NET pages and embedded controls persist across successive posts of the same page instance. The class works like a dictionary object and implements the IStateManager interface. The Page and the Control base classes expose the view state through the ViewState property. So you can add or remove items from StateBag as you would with any dictionary object:

ViewState("FontSize") = value

You should start writing to the view state only after the Init event is fired for a page request. You can read from the view state during any stage of the page lifecycle, but not after the page enters rendering mode—that is, after the PreRender event is fired.
The contents of the StateBag collection are first serialized to a string, then Base64 encoded, and finally assigned to a hidden field in the page that is served to the client. The view state for the page is a cumulative property that results from the contents of the ViewState property of the page plus the view state of all the controls hosted in the page.
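As an illustration, here is a minimal sketch of writing to and reading from the view state inside a page's Load handler; the key name and value are purely illustrative:

Private Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyBase.Load
    If Not IsPostBack Then
        ' First request: store an illustrative value in the page's view state
        ViewState("FontSize") = 12
    Else
        ' Postback: the entry has been restored from the __VIEWSTATE hidden field.
        ' It may be Nothing if it was never set, so check before casting.
        If Not ViewState("FontSize") Is Nothing Then
            Dim size As Integer = CInt(ViewState("FontSize"))
            ' use "size" to re-apply whatever setting it drives
        End If
    End If
End Sub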

Decision on ViewState Usage

As we've discussed, the view state represents the state of the page and its controls just before the page is rendered in HTML. When the page posts back, the view state is recovered from the hidden field, deserialized, and used to initialize the server controls in the page and the page itself. However, this is only half the story.

After loading the view state, the page reads client-side information through the Request object and uses those values to override most of the settings for the server controls. In general, the two operations are neatly separated and take place independently. In many situations, though, the second operation, reading from Request.Form, ends up simply overriding the settings read out of the view state; in those cases the view state is only extra overhead. For example, consider a page with one TextBox and a LinkButton. If you type a value into the TextBox and post the page using the LinkButton, the value in the TextBox is retained after the postback whether view state is enabled or disabled. In this case you need not enable view state for the TextBox: the view state value is overridden by the Request.Form value, because LoadPostData fires after LoadViewState in the page lifecycle.

Now consider that the ReadOnly property of a TextBox is False by default. If in Page_Load you change ReadOnly to True based on some condition, and the page is later posted back by clicking the LinkButton, you need to enable view state for the TextBox so the ReadOnly setting is retained across the postback. Otherwise the property will not be retained.
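A minimal sketch of that scenario, assuming a TextBox named TextBox1 and a hypothetical role check as the condition; view state must stay enabled (the default) on the TextBox for the change to survive the postback:

Private Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyBase.Load
    If Not IsPostBack Then
        ' Hypothetical condition: lock the field for users in a "Viewer" role.
        ' The changed ReadOnly value is only restored on postback if view state
        ' is enabled for TextBox1.
        If User.IsInRole("Viewer") Then
            TextBox1.ReadOnly = True
        End If
    End If
End Sub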

Viewstate in DataGrid

If you set EnableViewState to true for a DataGrid that contains thousands of records, you will end up with a view state larger than 10 KB. But if you disable view state, no events will fire in the DataGrid: postback, and acting on the postback, relies on view state. So if it is a read-only DataGrid and you are not going to use the paging and sorting the DataGrid provides, you can disable view state. If you do want to use those features, you cannot disable view state on the DataGrid itself. To avoid putting an excessive load on the client because of view state, you can instead disable view state for each item in the DataGrid. This can be done in two ways. One way is to set EnableViewState to False on the controls in each template column:

<asp:TemplateColumn HeaderText="ProductID">
  <ItemTemplate>
    <asp:TextBox id="ProductID" runat="server" EnableViewState="False">
    </asp:TextBox>
  </ItemTemplate>
</asp:TemplateColumn>

The other way is to disable view state for each DataGrid item in the page's PreRender event handler.

Private Sub Page_PreRender(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyBase.PreRender
    ' Turn off view state for every row so the grid's items are not serialized into __VIEWSTATE
    Dim dgi As DataGridItem
    For Each dgi In DataGrid1.Items
        dgi.EnableViewState = False
    Next
End Sub

Conclusion

The view state is a key element of an ASP.NET page because it is the primary means to persist the state of the Web server controls. Whenever the page posts back, the state is restored, updated using the current form parameters, then used to run the postback event handler. Normally, the view state is a hashed string encoded as Base64 and stored in a hidden field called __VIEWSTATE. In this way, the view state is not cached on the client, but simply transported back and forth, with potential issues for both security and performance. Because it is a performance overhead, you need to decide carefully when and where to use view state in your web form.

(The above notes are based on material from http://www.extremeexperts.com.)

Using perfmon to tune n-tier .NET applications


On Web Server / Application Server:

 

Processor\% Processor Time

Processor\% User Time

Processor\% Privileged Time

Processor\% Idle Time

 

These counters give you the % processor utilization for n concurrent users. The ISVs I have worked with usually run multiple web servers behind software network load balancing. They say the average CPU % is below 50%, but at peak times it can go higher. If you want to see how many concurrent users your application supports on a particular hardware platform, this is the counter to look at.
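If you would rather sample these counters from code (say, in a small monitoring utility) instead of the perfmon UI, a rough sketch using the System.Diagnostics.PerformanceCounter class could look like this; "_Total" aggregates across all processors:

Imports System.Diagnostics
Imports System.Threading

Module CpuSampler
    Sub Main()
        ' "_Total" aggregates all processors. The first NextValue() call returns 0,
        ' so prime the counters, wait, and then read the real values.
        Dim cpu As New PerformanceCounter("Processor", "% Processor Time", "_Total")
        Dim user As New PerformanceCounter("Processor", "% User Time", "_Total")
        Dim priv As New PerformanceCounter("Processor", "% Privileged Time", "_Total")

        cpu.NextValue() : user.NextValue() : priv.NextValue()
        Thread.Sleep(1000)

        Console.WriteLine("Processor: {0:F1}%  User: {1:F1}%  Privileged: {2:F1}%", _
                          cpu.NextValue(), user.NextValue(), priv.NextValue())
    End Sub
End Module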

 

 

PhysicalDisk\% Disk Read Time

PhysicalDisk\% Disk Write Time

PhysicalDisk\Avg. Disk Queue Length

 

Use these counters to make sure there is not too much logging happening, either from your application (by mistake; yes, I have seen ISV code logging during a performance load test when it should not) or from IIS logging, which is on by default. I was working with one ISV whose web server CPU % went beyond 95% at peak times, while the average for the whole day was only about 25%. When I looked into the web server, full IIS logging was turned on by default. I recommended turning it off, and their peak-time throughput (responses/sec) improved by roughly 18-20%. So it is important to check these counters.

 

.NET CLR Memory\% Time in GC – This is the #1 counter to look at to see whether garbage collection is a possible issue in your application. If % Time in GC is very low (below roughly 10%), GC is not an issue. But if this counter is above 25-30%, this is definitely an area you will want to look into.

 

.NET CLR Memory\# Gen 0, Gen 1 and Gen 2 Collections (all three counters) – If % Time in GC is high, these are the next counters to look into. A healthy ratio between gen 2 and gen 1 collections is about 1:10. If the ratio is closer to 1:1 or 1:2, the next step is to look at the allocation pattern and object survival (Gen 1 Promoted Bytes might be another counter you want to check). Why this ratio? Since the .NET GC is generational, when a gen n collection happens, every generation below n is collected as well: a gen 2 collection collects gen 1, gen 0 and the large object heap. So a gen2:gen1 ratio of 1:1 means all of the gen 1 collections are caused by gen 2 collections, and if gen2:gen1:gen0 is 1:1:1 then every collection is a gen 2 collection, which is very bad because a gen 2 collection has to walk the entire heap. At that point, looking at the allocation graph and object survival statistics will definitely help, and the CLR Profiler tool from Microsoft is the best tool for solving these issues.

Another question people keep asking me is, "I am doing a lot of gen 0 collections per second; I think I have a serious GC problem." My answer is: don't start from the collection counts. Doing lots of gen 0 collections is actually a good thing (as opposed to doing gen 1 or gen 2 collections); it simply means lots of temporary objects are being created and you keep running out of the gen 0 budget. Always look at % Time in GC first, and only if it is very high start looking into second-level counters such as the collection counts.
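To put numbers behind that ratio from code, here is a rough sketch that reads the GC counters for one worker process; the instance name "w3wp" is an assumption, so use whatever instance perfmon actually shows for your application:

Imports System.Diagnostics
Imports System.Threading

Module GcSampler
    Sub Main()
        ' The instance name is the process name as shown in perfmon; "w3wp" is an
        ' assumption here (it may carry a "#1" style suffix if several are running).
        Dim inst As String = "w3wp"
        Dim timeInGc As New PerformanceCounter(".NET CLR Memory", "% Time in GC", inst)
        Dim gen0 As New PerformanceCounter(".NET CLR Memory", "# Gen 0 Collections", inst)
        Dim gen1 As New PerformanceCounter(".NET CLR Memory", "# Gen 1 Collections", inst)
        Dim gen2 As New PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", inst)

        timeInGc.NextValue()       ' first read primes the counter
        Thread.Sleep(1000)

        Dim g1 As Single = gen1.NextValue()
        Dim g2 As Single = gen2.NextValue()
        Console.WriteLine("% Time in GC: {0:F1}", timeInGc.NextValue())
        Console.WriteLine("Gen0/Gen1/Gen2 collections: {0}/{1}/{2}", gen0.NextValue(), g1, g2)
        If g2 > 0 Then
            ' Roughly 10:1 (gen1:gen2) is healthy; closer to 1:1 suggests too many full collections
            Console.WriteLine("Gen1:Gen2 ratio: {0:F1}:1", g1 / g2)
        End If
    End Sub
End Module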

 

.NET CLR LocksAndThreads\Contention Rate / sec – This counter shows how much lock contention you have in your application; the lower the value, the better. It has to be looked at in conjunction with CPU utilization. For example, say you keep increasing the number of concurrent users and expect CPU utilization on the web/app server to go up along with requests/sec. If instead CPU utilization does not increase and this counter keeps climbing every time you add more concurrent users, with no increase in requests/sec, you definitely have a contention issue: one or more shared data sources are being contended for, and as the number of concurrent users (and therefore threads) grows, so does the contention. At that point you can use the SOS debugging extension from Microsoft to identify the contention (I will be blogging in detail on using SOS for this). Looking at another counter, System\Context Switches/sec, also helps: if that value is high, there is heavy contention and the CPU is switching threads very frequently.

 

.NET CLR Exceptions\# of Exceps Thrown / sec: This is another important counter to look at. If this value is high, it can have performance implications. Note that some code paths, such as Response.Redirect, always throw an exception, so interpret the number carefully. If the value is high, look at the application log (assuming the application logs exceptions in debug mode) or attach a debugger to catch the exceptions and see why and where they are thrown; minimizing or eliminating the unnecessary ones can help improve performance.
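One common example of an avoidable exception: Response.Redirect(url) ends the response by throwing a ThreadAbortException, while the overload that takes a second Boolean argument skips that. A minimal sketch, with an illustrative target URL:

' Response.Redirect(url) calls Response.End, which raises a ThreadAbortException.
' Passing False for endResponse avoids the exception; the rest of the page code
' still runs, so return or exit explicitly afterwards if needed.
Response.Redirect("Default.aspx", False)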

 

ASP.NET Applications\Pipeline Instance Count: This is another important counter to look at. It gives the number of active request pipeline instances for a given ASP.NET application. Since only one execution thread can run in a pipeline instance, this is effectively the maximum number of requests being processed concurrently by the application; the lower the value, the better. During the warm-up phase of the application you will see this value fluctuate while the thread pool works out the number of threads that gives the best performance; after that it should stay fairly constant, assuming constant load. If you see significant fluctuations in this counter, it is worth determining the best values by tuning the machine.config file.
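For reference, these are the machine.config settings usually touched during that kind of tuning; the values shown are only the commonly quoted per-CPU starting points from Microsoft's ASP.NET 1.1 tuning guidance, not a recommendation for your workload, so measure before and after changing them:

<!-- Sketch of the commonly tuned machine.config settings. processModel and
     httpRuntime live under <system.web>, connectionManagement under <system.net>. -->
<processModel maxWorkerThreads="100" maxIoThreads="100" />
<httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
<connectionManagement>
  <add address="*" maxconnection="12" />
</connectionManagement>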

 

ASP.NET Applications\Requests/Sec: Self-explanatory: the total number of requests executed per second.

 

ASP.NET Applications\Requests in Application Queue: This counter should be very low; zero is ideal. A high value indicates requests sitting in the queue waiting to be processed. You typically see this when you push the number of concurrent users up during a performance load test to determine how many concurrent users your application can support, and it can also happen during peak periods when a burst of requests arrives. Monitoring this counter is very useful for taking action to keep the performance of an ASP.NET web application steady.

 

ASP.NET\Worker Process Restarts: This shows whether w3wp.exe crashed or was shut down and then restarted. It should be 0 in the vast majority of cases; if you see a value greater than 0, it is recommended to dig into the causes, for example by checking the event log to see whether an access violation occurred.

 

Hopefully this gives you a good idea of the counters to track on the web or application server, and of why and what information they give. Now we can move on to the database layer. I am not really a database person, but I can share the SQL Server counters we usually look into.

 

Database layer:

Locks\Lock Requests/sec

 

Locks\Lock Wait Time (ms)

Locks\Number of Deadlocks/sec

 

These are some of the first counters we use to see how much locking is happening in the database; the lower these values, the better. If you occasionally see very high CPU % on the database, use the SQL Profiler tool to find which query is taking a long time. I was doing performance load testing with an ISV and we saw that with just ~150-200 concurrent users the database CPU shot up to 80%. We started a SQL Profiler trace and saw that one particular query was taking a very long time. We modified it temporarily to return dummy data instead of actually going to the table (it was fetching the top three news items to display on the page), and that alone brought database CPU down below 10%. The real cause turned out to be missing indexes and an inefficient query, so even one stored procedure can create significant problems. As far as locks are concerned, the sp_who2 and sp_lock system stored procedures give you a lot of information on the specific locks that are causing issues, so you can fix them.

 

That is it for perfmon. I hope this is detailed enough to help when looking at an n-tier .NET application. Please let me know if you need any specific information, if there is a topic you want me to write about, or if you have any comments on these posts.