HttpClient: The Myths Behind ‘Using’ on a Reentrant Object

In developing a solution in Azure to [REDACTED], I discovered a “bug” in HttpClient that seems to be somewhat common knowledge, but I figured I would share it with you in case you run into the same problems.

This bug surfaced largely because we’re using Azure. You see, anything in Azure should be considered multi-tenant, meaning that your app is running in parallel with, potentially, hundreds of other apps within the infrastructure’s backplane.

So, I was using Parallel.ForEach and constructing a new HttpClient per iteration, making a call to obtain data from a REST endpoint at a unique sub-URL, which was entirely dependent on data that I had previously obtained via another REST call. Obscurity is strong with this one, I’m aware.
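A minimal sketch of the pattern I was using (the endpoint, sub-URLs, and data shapes here are illustrative, not the actual service):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class PortExhaustionDemo
{
    static void Main()
    {
        // Hypothetical sub-URLs obtained from an earlier REST call.
        var subUrls = new List<string> { "/items/1", "/items/2", "/items/3" };

        // The anti-pattern: a new HttpClient per iteration. Each client
        // opens its own socket, and those sockets linger in TIME_WAIT
        // even after the 'using' block disposes the client.
        Parallel.ForEach(subUrls, subUrl =>
        {
            using (var client = new HttpClient())
            {
                var body = client
                    .GetStringAsync("https://example.com/api" + subUrl)
                    .GetAwaiter()
                    .GetResult();
                Console.WriteLine(body.Length);
            }
        });
    }
}
```

Under enough concurrency, each of those short-lived clients burns a fresh ephemeral port, which is exactly where the trouble starts.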

Every once in a while, I would get exceptions (surfaced by unwrapping the AggregateException): “Only one usage of each socket address (protocol/network/port) is normally permitted: <ipAddressGoesHere>”.

Technically, I was only using one address per HttpClient, but there’s a caveat to all of this: even if you use the ‘using’ statement, disposing the HttpClient doesn’t immediately free the underlying socket.

The socket is still in use even after the client is disposed, because the socket that the HttpClient used is put into TIME_WAIT. So you have a socket held open against that host, and because the connection hasn’t fully closed, if you keep instantiating new HttpClients (each of which takes a new ephemeral port), you can eventually run out of ports to consume.

…but, wait, there’s more!™

The HttpClient is considered reentrant (and this is where our true problem comes in). This means that some (if not all) of your non-disposed HttpClients could be re-used against what the stack considers a currently in-use socket (because the port is still considered open while it’s in TIME_WAIT).

In fact, if we chase down the SocketException –> Win32Exception –> HResult, we can see that this comes from the system as error 10048 (0x2740), which is WSAEADDRINUSE.
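That chase can be done in code. A sketch of unwrapping the AggregateException from a failed call and checking for WSAEADDRINUSE (the URL here is a placeholder):

```csharp
using System;
using System.Net.Http;
using System.Net.Sockets;

class AddrInUseDemo
{
    static void Main()
    {
        try
        {
            using (var client = new HttpClient())
            {
                // .Wait() wraps any failure in an AggregateException.
                client.GetStringAsync("https://example.com").Wait();
            }
        }
        catch (AggregateException aggregate)
        {
            foreach (var inner in aggregate.Flatten().InnerExceptions)
            {
                // Walk the inner-exception chain down to the SocketException.
                for (var ex = inner; ex != null; ex = ex.InnerException)
                {
                    if (ex is SocketException socketEx &&
                        socketEx.SocketErrorCode == SocketError.AddressAlreadyInUse)
                    {
                        // 10048 == 0x2740 == WSAEADDRINUSE
                        Console.WriteLine($"WSAEADDRINUSE: {socketEx.ErrorCode}");
                    }
                }
            }
        }
    }
}
```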

The solution? Since HttpClient’s instance methods are thread-safe, a single shared static instance (the singleton model) is what we want to go with.

So, instead of instantiating an HttpClient per call, you instantiate a singleton instance that is used per-host in your class. This allows the HttpClient and its port to be re-used (thus reducing potential ephemeral port exhaustion as a byproduct). And since it appears that Azure re-instantiates your class per run (if you’re using the TimerTrigger, for example), you create a scenario where the object’s lifetime is bound to your class. (Assuming you call HttpClient.Dispose() before the run completes and the object moves out of scope.)
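The singleton shape looks roughly like this (host and method names are illustrative):

```csharp
using System;
using System.Net.Http;

// Sketch: one shared HttpClient per host for the lifetime of the class.
// HttpClient's instance methods are thread-safe, so concurrent callers
// (e.g. inside Parallel.ForEach) can share this single instance, and the
// underlying connection gets re-used instead of burning a new port per call.
public class ApiCaller
{
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("https://example.com/api/") // placeholder host
    };

    public string GetItem(string subUrl)
    {
        return Client.GetStringAsync(subUrl).GetAwaiter().GetResult();
    }
}
```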

…but MSDN says to use ‘using’ for IDisposable objects!

Yes, this is true, but we have to consider that Dispose() only releases the managed side of things. We cannot control when the TCP port is actually freed, because the connection sits in TIME_WAIT on a timer owned by the operating system, and full closure is also dependent on the host at the other end. So, even if HttpClient.Dispose() is called, you’re still at the whims of the TIME_WAIT interval and the keep-alive configured on the host for the actual port to be closed.

Diagram from the IETF? Diagram from the IETF. (The TCP state diagram, showing where TIME_WAIT sits in the connection lifecycle.)


So, even though it’s been practically beaten into you throughout your CS career to use ‘using’, there are times when the singleton model (and not invoking ‘using’) is more favourable to your software design needs, expectations, and requirements than what you’ve been taught is best practice.

Happy coding! 🙂

Wait-Chain Traversal: Or, How We Can Use PowerShell to JIT C# to Load an Unmanaged Assembly (C++), InterOping With Windows APIs, Returning that Data Back to PowerShell

So, as the long-winded title implies, today I’ll be covering something that I wrote a long time ago and have recently re-written (albeit, probably badly) for you to use and/or learn from.

In this case, we’re using a PowerShell script which JITs C# code; the C# code calls into an unmanaged DLL (C++), which in turn calls a Windows API. Once the data has been obtained from the Windows API, we pass it back from the unmanaged assembly to the managed code (via Marshal) and then return it to the PowerShell instance to be displayed to the user.

Before we dive into what we’re doing, we should cover some key concepts. The first is JIT’ing. JIT stands for “Just-In-Time” (compilation), and the name is slightly a misnomer: the runtime doesn’t compile everything ahead of time, it compiles each piece of code just before it runs. This is important because a key concept in exception handling is the runtime’s seek operation to find a handler for a thrown exception; you’ll often see this as a FirstChanceException in a dump file. In PowerShell, we have the added ability to leverage this compilation model by passing source code in as a type, which is compiled into the current AppDomain. It’s important to note that once the AppDomain has been disposed of, the type is lost and has to be added again.
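The shape of that pattern looks roughly like this (the type and method names here are illustrative, not the ones from the actual script):

```powershell
# Sketch: compile C# source into the current AppDomain via Add-Type.
$source = @"
using System;

public static class Greeter
{
    public static string Greet(string name)
    {
        return "Hello, " + name;
    }
}
"@

Add-Type -TypeDefinition $source -Language CSharp

# Once added, the type is exposed like any other .NET type:
[Greeter]::Greet('world')   # returns "Hello, world"
```

If the session (and its AppDomain) goes away, so does the compiled type; a new session has to call Add-Type again.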

So, what – exactly – is this code going to be doing? Well, since Windows Vista, the Windows Operating System exposes the Wait Chain Traversal API. You can see a demonstration of this API in Task Manager: Go to the Details tab, right click on a process and click “Analyze Wait Chain”.

Since Windows Server 2016 Core doesn’t include a desktop or any GUI interfaces, a more robust way was needed to obtain the same information in production, to determine whether the reason an application wasn’t responding or performing work was that its threads were blocked.

When you run the code, you can tell if this is the case or not by something like the following:

12972   [7464:12972:blocked]->[ThreadWait]->[7464:12968:blocked]->[End]

Where the first thread is blocked by a thread wait on the second thread, which is also (itself) blocked.

So, first things first, the PowerShell code. Take a peek here to see that. Note that the source code is contained within a here-string (@"<code>"@). After that, we add a type, pointing it to the source code we’ve defined and referencing the assemblies that we’ll need for this type to work. Worth noting is that when we add this type, it is exposed in PowerShell the same way any normal .NET type is, via the [TypeName]:: convention.

Note that in the C# source we import the unmanaged DLL and reference the exposed method. In the body of the code, we also construct an IntPtr to hold the return value. So, now we get to marshalling.
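The managed side of that hand-off looks roughly like this (the DLL name and export name are placeholders, not the actual project’s):

```csharp
using System;
using System.Runtime.InteropServices;

public static class NativeMethods
{
    // Sketch: import an exported cdecl function from the unmanaged DLL.
    // "WaitChain.dll" and "GetWaitChain" are hypothetical names.
    [DllImport("WaitChain.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr GetWaitChain(int processId);

    public static string GetWaitChainForProcess(int processId)
    {
        // The native side returns a pointer to an ANSI string; Marshal
        // copies it into a managed string that the runtime owns.
        IntPtr nativeString = GetWaitChain(processId);
        return Marshal.PtrToStringAnsi(nativeString);
    }
}
```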

An IntPtr is, quite literally, a pointer or handle to an object. A pointer is a reference to the memory where an object lives, and the object is typically delimited so the consumer knows where it ends (e.g., a C string is null-terminated). A handle is roughly the same premise, but the handle abstracts memory management away from the caller. So, at 200 ticks, the handle could resolve to address 0x000001 and, at 369 ticks, to 0x34778.

Alright, so why this matters: when we pass from unmanaged code back to managed code, there’s nothing that implicitly tells the managed code where to find the object in native memory. So we have to pass a pointer to the object back to the managed side (I believe the managed side creates its own copy at its own address) and, using that, we can convert the passed object from a native string into a managed string (via marshalling).

What about this unmanaged code I keep hearing about? Oh, well… You can find that here. Don’t forget to add the headers referenced in stdafx.h, or your compiler will cry bloody murder.
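On the unmanaged side, the exported entry point follows the usual declspec/dllexport/cdecl shape the post mentions. A sketch, with hypothetical names and a canned string standing in for the real Wait Chain Traversal calls (OpenThreadWaitChainSession/GetThreadWaitChain); the static buffer keeps the string alive past the return so the managed caller can marshal it:

```cpp
#include <cstdio>

// On Windows this would also carry __declspec(dllexport) and __cdecl;
// the macro keeps the sketch portable.
#ifdef _WIN32
#define WAITCHAIN_EXPORT extern "C" __declspec(dllexport)
#else
#define WAITCHAIN_EXPORT extern "C"
#endif

WAITCHAIN_EXPORT const char* GetWaitChain(int processId)
{
    // Static storage: the pointer must remain valid after we return,
    // because the managed caller reads it via Marshal.PtrToStringAnsi.
    static char buffer[256];
    std::snprintf(buffer, sizeof(buffer),
                  "%d\t[%d:...:blocked]->[End]", processId, processId);
    return buffer;
}
```

A real implementation would walk the wait chain and format one node per hop, exactly like the sample output shown earlier.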

So, how this works is: the script is called via PowerShell. After some magic to verify that we weren’t given junk data, PowerShell JITs the C# code and performs the runtime operations. The runtime loads the unmanaged DLL into memory, and the C# code calls into it via the exposed method (declspec/dllexport/cdecl). The unmanaged code performs its work and returns the string back to the caller, unaware that the caller is managed code. The managed code receives an IntPtr referencing the return value, and Marshal is called to convert the native string into a managed string. This is then returned to the user. In the case of multiple instances of a process, the managed strings are added to an array and that array is returned.

While it may not seem like much, and may seem like over-glorified complication just to check for threads in wait chains, it was code that was written to be used in production where we had no desktop environments and we had to determine if this was the case.

I hope that someone gets some kind of use out of it. 🙂

Until next time.