Archived Posts from this Category
When you first request a page from an ASP.NET application, the .NET framework takes the ASPX file and generates code to actually execute the page. This code is then compiled by the framework, and the results of the compilation are stored in the Temporary ASP.NET Files directory within the framework directory (usually located in c:\windows\Microsoft.NET\Framework). When the ASPX file or the compiled DLL changes, this code is re-generated and recompiled.
On a server that hosts lots of ASP.NET applications this store of temporary compiled code can occupy a considerable amount of space. On machines with a limited amount of space on their OS partition this can begin to cause problems. Thankfully the ASP.NET framework does allow the location of this directory to be specified as a custom location.
As with most server-wide settings you need to make a change to machine.config (for .NET 1.1) or the machine-wide web.config (for .NET 2.0). The crucial part of the configuration is the compilation element within system.web. The compilation element has an optional attribute called tempDirectory that allows a new directory location to be specified, overriding the default setting of %FrameworkInstallLocation%\Temporary ASP.NET Files.
One thing to watch out for when making this change is the file permissions on your new Temporary ASP.NET Files directory - copying the permissions from the original location will do the trick nicely.
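As a sketch, assuming a volume with more free space mounted as D: (the path here is an example), the change looks something like this:

```xml
<!-- machine.config (.NET 1.1) or the machine-wide web.config (.NET 2.0) -->
<!-- D:\AspNetTemp is an example path; grant it the same permissions
     as the original Temporary ASP.NET Files directory -->
<system.web>
  <compilation tempDirectory="D:\AspNetTemp" />
</system.web>
```

The worker process account needs read/write access here, which is why copying the permissions from the original location is the simplest approach.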
Today I encountered a strange problem with IIS6 restarting without notice on a server that had recently had the .NET Framework 2.0 installed on it. In our particular case the problem was made worse by the fact that the IIS restart was unsuccessful, leaving the server in a somewhat crippled state. We traced the problem to clicking on the ASP.NET tab in the IIS Management MMC, not making any changes to the settings, then clicking OK on the properties dialog.
Usually making a change to the version of ASP.NET will cause a restart of IIS (and there are alternative ways that avoid the restart), however in this case it seems as though just viewing the tab and then clicking OK was enough to cause ASP.NET to restart the IIS service. I’ve still not discovered precisely why this is happening, but for the time being I wanted to implement a workaround by disabling the ASP.NET tab.
I thought doing this would be easy, after all enough people seem to have problems with the tab not being there. Common causes of the tab being missing seem to be running the IIS MMC on x64, or having installed an earlier beta of .NET 2.0. The fixes in most cases seem to be modifications to the registry or re-running aspnet_regiis -i to re-register ASP.NET.
While it is possible that by fiddling with the registry I could break the ASP.NET tab, that didn’t seem to be a good solution, so I carried on digging. It turns out that the ASP.NET tab is implemented as an MMC snap-in extension, and can be disabled with two clicks of the mouse once you’ve found the setting. To disable the ASP.NET tab (the exact labels may differ slightly between Windows versions): open a new MMC console, choose Add/Remove Snap-in from the File menu, add the Internet Information Services snap-in, then on the Extensions tab untick "Add all extensions" and clear the checkbox for the ASP.NET extension before saving the console.
Now when you start the IIS Manager the ASP.NET tab won’t be there.
Sadly the tab is still there in the Computer Management MMC (Computer Management > Services And Applications > Internet Information Services), and looking at the computer management MMC in the same way as above does not yield the same choice of Extensions, so if anyone knows how to influence Computer Management in the same way, please let me know!
Web services are very simple to consume using .NET code, and it’s all too easy to forget what is actually going on when you add a Web Reference to your project in Visual Studio. Once you’ve entered the URL for your web service and clicked the Add Reference button, Visual Studio requests the service description (WSDL) file, and generates classes to represent the data and the web service methods. I found delving into this generated code taught me quite a lot about the way XML serialization and web services actually work.
Behind the scenes the web service proxy uses classes in the System.Web.Services.Protocols namespace to actually perform the calls to the service, and these calls end up as System.Net.WebRequests containing the correctly encoded data that makes up the message.
Quite a lot of the problems with web services I encounter during deployment are actually problems with the underlying System.Net.WebRequest. The most common cause seems to be environments where web access is provided via some form of HTTP proxy server, which typically results in an exception of the form:
[WebException: The underlying connection was closed: Unable to connect to the remote server.]
If you have access to the machine running your application it’s a very good idea to check whether the machine can make a request to your web service endpoint using a web browser. This also allows you to check the browser settings to see if web traffic is passing through a proxy - failing this, check with the people responsible for the network.
If you find that there is a proxy involved, there are a couple of strategies available to resolve the problem.
The first is configuration based: the defaultProxy section within system.net (in machine.config to affect all applications, or in an individual application’s web.config) lets you specify a default proxy server. Setting usesystemdefault="false" on the proxy element tells the framework to ignore the OS-level proxy setting and use the one specified in configuration instead.
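A sketch of the configuration-based approach (the proxy address here is an example - substitute your own server and port):

```xml
<system.net>
  <defaultProxy>
    <proxy usesystemdefault="false"
           proxyaddress="http://proxyservername:8080"
           bypassonlocal="true" />
  </defaultProxy>
</system.net>
```

Setting bypassonlocal="true" keeps requests to local addresses from being routed through the proxy, which is usually what you want on an internal network.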
The second is programmatic: set the Proxy property on the generated web service proxy instance:
MyWebService ws = new MyWebService();
WebProxy proxyObject = new WebProxy("http://proxyservername:port", true);
ws.Proxy = proxyObject;
All of the above methods apply to web service proxy classes and also to the WebRequest classes.
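For a raw WebRequest the equivalent programmatic approach looks something like this (a minimal sketch - the endpoint and proxy address are examples):

```csharp
using System.Net;

class ProxyExample
{
    static void Main()
    {
        WebRequest req = WebRequest.Create("http://example.com/service.asmx");
        // Same WebProxy type as used with the web service proxy class;
        // the second argument enables bypassing the proxy for local addresses.
        req.Proxy = new WebProxy("http://proxyservername:8080", true);
    }
}
```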
I discovered this week that the proxy problem is not the only cause of a web application which calls a web service throwing the exception:
[WebException: The underlying connection was closed: Unable to connect to the remote server.]
The next step I took in investigating this problem was to create two very simple test applications - one ASP.NET based like the code I was having problems with, and another a simple console application I could run as Administrator on the machine in question.
System.Net.WebRequest r = System.Net.WebRequest.Create("http://av.com");
string resp = new System.IO.StreamReader(r.GetResponse().GetResponseStream()).ReadToEnd();
public class Test
{
    public static void Main()
    {
        System.Net.WebRequest r = System.Net.WebRequest.Create("http://av.com");
        string resp = new System.IO.StreamReader(r.GetResponse().GetResponseStream()).ReadToEnd();
    }
}
Both of these make WebRequests to the AltaVista search engine, and therefore test requests out onto the Internet, returning the HTML from the AltaVista homepage. As expected the ASP.NET based version gave the same exception as before; however the console application revealed not one, but two exceptions:
Unhandled Exception: System.TypeInitializationException: The type initializer for "System.Net.Sockets.Socket" threw an exception.
---> System.Net.Sockets.SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full
This is followed by a more standard-looking timeout exception:
[WebException: The operation has timed-out. ]
Changing the test code to request a page from the local IIS server had no effect, confirming that this was unlikely to be an HTTP proxy problem.
Quite a lot of searching the web led me towards a Microsoft bulletin - BUG: You receive a “The operation has timed-out” error message when you access a Web service or when you use the IPAddress class - which sounded somewhat familiar, and suggested that the problem might be caused by having more than 50 protocol bindings. Running the enum.exe utility linked in the MS article revealed that this machine had over 100 bindings. Performing the same check on a number of other machines revealed that a more typical value was about 20, so something was not quite right with this machine. Removing some unneeded protocols from the networking setup resolved the issue: both the console application and the ASP.NET test page returned the expected HTML, and most importantly the failing web service calls in the web application now worked.
In the presentation about good API design I talked about in yesterday’s post, one of the key points made was that once an API is defined you should never make changes to it that will break your clients’ code. One example cited was throwing exceptions for values previously considered fine.
As luck would have it I encountered an actual example of precisely this problem today while installing the .NET 2.0 Runtime on a development server. This server runs a number of .NET 1.1 applications and a number of classic ASP applications consuming COM components written in .NET 1.1.
Things didn’t start well, with the framework installer stopping the IIS instance for the better part of 10 minutes while installing; however, it did restart it again once it was done (unlike MSDTC and SQL Server when installing anything from the Windows Components section of Add/Remove Programs on Windows 2003).
Matters got worse when someone mentioned that one of the components on the server was now misbehaving - specifically one that uses the ASP.NET Cache to provide caching capabilities.
Whenever a web application tried to create this object (via Server.CreateObject) it was getting an invalid pointer error. Other COM components developed in a similar way were working fine, so I assumed there was something wrong with the registration of the component. Un-registering and re-registering the component gave no joy - neither did calling it from a simple VBScript file.
To make matters worse, a simple .NET test application was working just fine using the exact same library.
After a bit of head scratching, the Sysinternals (now a part of Microsoft) Process Explorer revealed that instead of using the .NET 1.1 version of System.Web, both CScript and the IIS DLLHost were loading the .NET 2.0 version. The code for the component hadn’t changed, so maybe the .NET framework had.
Loading the source code for the component into Visual Studio 2005 and attempting to compile and run the simple test application revealed the problem: a NullReferenceException from within the framework.
As the COM component was using the ASP.NET cache, it was creating an HttpContext instance internally. The code looked like this:
private System.Web.HttpContext context = new System.Web.HttpContext(null);
Poking round the disassembled code of System.Web in Reflector didn’t reveal what it was that was causing the exceptions, although I did only go a few functions deep, however it did reveal an alternative way of getting to the cache.
Changing our code to use System.Web.HttpRuntime.Cache to obtain the cache instance fixed the problem, and after a quick rebuild of the component against .NET 2.0 and a redeploy to the server we were back up and running.
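The change amounts to something like the following (a minimal sketch - the class and method names here are illustrative, not the actual component):

```csharp
using System.Web;
using System.Web.Caching;

public class CachedLookup
{
    // Old approach - constructing an HttpContext directly, which
    // threw a NullReferenceException from inside .NET 2.0:
    //   private HttpContext context = new HttpContext(null);

    // New approach - HttpRuntime.Cache hands back the application
    // cache without needing an HttpContext at all.
    private readonly Cache cache = HttpRuntime.Cache;

    public object GetOrAdd(string key, object value)
    {
        if (cache[key] == null)
        {
            cache[key] = value;
        }
        return cache[key];
    }
}
```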
Lessons learned from all this: installing a new framework version can silently change which runtime your existing COM components load into, and obtaining the cache via HttpRuntime.Cache is safer than constructing an HttpContext yourself.
When investigating performance problems on production servers it is always very useful to have as much information as possible about the actual work that the server is performing at any given time. Out of the box IIS 6 does not give you much to work with - at best you can identify the virtual host that is causing the issues by putting it into its own application pool, and then using Task Manager you can see the PID of the w3wp.exe instance which is occupying your server’s CPU. Once you have that, the iisapp.vbs administrative script will reveal the name of the application pool which is misbehaving.
I have often wished to be able to see in real time what is actually going on within the worker process or IIS instance, and with the discovery of the Internet Information Services Diagnostic Tools trace tools I have found something that comes pretty close to what I would like.
The IIS trace tools contain a command line tool that will return the details of the executing requests as XML (even obtaining the information from a remote IIS server), however the jewel in the crown is the Request Viewer - a windowed app that, with the click of a toolbar button, reveals the requests currently executing on the server (see screenshot below).
Unfortunately the tool does not show you the name of the host that the requests relate to, just the site ID and application pool PID, but these are easily converted into the application pool name (as mentioned above, use iisapp) or the site (look the ID up in the IIS Manager).
Another problem with the Request Viewer is that when I first ran it and clicked the Refresh Now button, all I got was an error and no details of the requests currently running. Thankfully I found the solution on the web, and it was as simple as making sure the TEMP environment variable was set to a path that didn’t use long filenames.
As the Request Viewer leaves a command window in the background when it runs, I thought that the best solution would be a simple batch file that set the environment up, ran iisapp.vbs, and then started the Request Viewer:
"C:\Program Files\IIS Resources\TraceDiag\reqviewer.exe"
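Putting it together, the whole batch file ends up looking something like this (the short TEMP path and the iisapp.vbs location are assumptions - adjust them for your server):

```bat
@echo off
rem Point TEMP/TMP at a short path to avoid the Request Viewer error
set TEMP=C:\Temp
set TMP=C:\Temp

rem List the application pools and their worker process PIDs
cscript //nologo %windir%\system32\iisapp.vbs

rem Launch the Request Viewer
"C:\Program Files\IIS Resources\TraceDiag\reqviewer.exe"
```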
So now that empty command window contains the PID values for all my application pools, and it’s not wasting space in my RDP window.