.NET Core 2.0: Expression: [Recursive resource lookup bug]

When developing Azure Functions and targeting .NET Core 2.0 (or higher), you may run into an infinite-recursion bug that prevents you from successfully building the Azure Function; worst of all, it may happen at a seemingly random time.

Assert Failure
Expression: [Recursive resource lookup bug]
Description: Infinite recursion during resource lookup within System.Private.CoreLib.  This may be a bug in System.Private.CoreLib, or potentially in certain extensibility points such as assembly resolve events or CultureInfo names.
Resource name: ArgumentNull_Generic

When you go looking for any related bugs/issues, you’ll find all of them resolved around the last quarter of 2017, with a note that the problem was fixed in .NET Core 2.0.3.

.NET Core 2.0.3 is a servicing update to .NET Core and it isn’t pushed through Windows Update – because reasons, I guess?


The easiest way to get around this issue is to move from .NET Core 2.0.0 to .NET Core 2.0.3. The SDK can be found here and the Runtime can be found here.
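If you want to be certain which SDK a given solution resolves to after installing, one optional approach (a sketch, not required for the fix itself) is to pin the SDK with a global.json file next to your solution:

```json
{
  "sdk": {
    "version": "2.0.3"
  }
}
```

With this in place, running "dotnet --version" from that folder should report 2.0.3, confirming the updated SDK is the one being picked up.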

After installing, all of your localisation will be smooth sailing, without even a hint or a trace of infinite recursion at build time from that point forward.

Happy dev’ing! =]

C#: Recursively Getting Subfolders Whilst Ignoring the Errors that Would Stop Other Traversal Means

So, I’m writing something that is – eventually – meant to scan the hard drive for indicators of compromise (IOCs). The first major problem to solve is the following:

If I have drive ‘C:\’ and I try to enumerate all of the folders under the drive with Directory.EnumerateDirectories (using the recursive all-directories search option), then the first exception thrown stops the iteration – entirely.

This is, I suppose, desired behaviour from a code-provider’s perspective but entirely undesirable from a code consumer’s perspective.

So, how can we get around this? Well, we need a way to “swallow” the exceptions and carry on with the folders that we can access, while still processing the rest of the tree. Instead of nesting foreach loops ad infinitum (because we never know how deep or shallow the traversal might be), we – instead – call the method recursively on itself.

So, it breaks down into the following code:

private static readonly List<string> folders = new List<string>();

private static void EnumerateSubfolders(string path)
{
    try
    {
        string[] directories = Directory.EnumerateDirectories(path).ToArray();
        folders.AddRange(directories);
        foreach (string directory in directories)
            EnumerateSubfolders(directory);
    }
    catch (UnauthorizedAccessException)
    {
        // Dispose of all of the exceptions about access because we can’t do anything about them.
    }
}

Note that all we’re doing is going one child folder deep, adding the results to the list of folder names and then, for each of those, traversing one child folder deep again. This way, even if we hit an exception, we’ve gone one child deep and we stop processing only for that child – without it affecting the traversal of the other children. While not ideal for obtaining all of the folders in Windows, it’s a far cry better than the entire process stopping on the first exception thrown.
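For very deep folder trees, the recursive call itself can become a problem (a StackOverflowException can’t be caught). The same swallow-and-continue idea can be written iteratively with an explicit stack; this is my own sketch, not the original code, and the names are mine:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public static class FolderWalker
{
    public static List<string> GetAccessibleSubfolders(string root)
    {
        var results = new List<string>();
        var pending = new Stack<string>();
        pending.Push(root);

        while (pending.Count > 0)
        {
            string current = pending.Pop();
            try
            {
                // Go one level deep; queue each child we can read for its own pass.
                foreach (string child in Directory.EnumerateDirectories(current))
                {
                    results.Add(child);
                    pending.Push(child);
                }
            }
            catch (UnauthorizedAccessException)
            {
                // We can't read this folder; skip it and carry on with the rest.
            }
            catch (IOException)
            {
                // The folder vanished mid-walk or is otherwise unreadable; skip it.
            }
        }

        return results;
    }
}
```

The behaviour is the same as the recursive version – one inaccessible folder costs you only that subtree – but the depth of the traversal is bounded by heap, not by the call stack.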

Happy programming! 🙂



NuGet: Targeting All of the .NET Versions Plausible (The Easy Way)

I recently published a NuGet package that targets .NET versions from 4.5 to the latest (currently, 4.8). (I could only go back to .NET 4.5 because that’s when HttpClient first dropped. Sorry, not sorry.)

In previous NuGet packages, I had to set up different build configurations for each .NET version and build them all, independently and manually. This was not a fun process, to be sure.

So, how did I do this latest NuGet package so I didn’t have to go through all of that heartache? Well, when you create a new project in Visual Studio, select Class Library (.NET Standard).

Trust me, I’m aware it seems counter-intuitive to do this but there’s a trick coming up that will save you hours of work and heartache.

Once the project is loaded, right-click the project in Solution Explorer and select Edit Project File. Once here, you should see some XML beginning with

<Project Sdk="Microsoft.NET.Sdk">

and a node named TargetFramework. We’re going to replace that line with its plural counterpart, listing every framework we want to target:

<TargetFrameworks>net45;net451;net452;net46;net461;net462;net47;net471;net472;net48</TargetFrameworks>
Once this is done, we’re going to do one more thing to make our lives 1000% easier:

Right-click the project in Solution Explorer again and click Properties. Select the Package tab. Here, you’ll see most of the fields that you would expect to see in a nuspec file. Edit these fields to contain the values that you want, then tick Generate NuGet package on build and, if you require the license to be accepted, Require license acceptance.

Now, you’ll have to close and re-open Visual Studio when you save everything but, trust me, this is a far more favourable pain than individually building for each .NET target.

When you build this project now, you’ll get a nupkg dropped into your flavour folder (Debug or Release), which you can then upload to NuGet. The nupkg will – automatically – contain the binaries for all of the .NET versions you targeted. No more action is required on your part.

That’s it! You can now target multiple .NET versions with your NuGet package, without having to do much of anything else (except to ensure that the APIs you use exist in each version of .NET that you’re targeting and, if not, to code for those conditions).
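As a sketch of coding for those conditions: the SDK-style project file lets you add package references only for the frameworks that need them (the package and version below are illustrative – ValueTuple ships in the box from .NET 4.7 onward but needs a package on older targets), and the compiler also defines symbols such as NET45 per target that you can use in #if blocks:

```xml
<!-- Illustrative: only the older targets need the System.ValueTuple package. -->
<ItemGroup Condition="'$(TargetFramework)' == 'net45' Or '$(TargetFramework)' == 'net46'">
  <PackageReference Include="System.ValueTuple" Version="4.5.0" />
</ItemGroup>
```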

Happy coding! 🙂


Wait-Chain Traversal: Or, How We Can Use PowerShell to JIT C# to Load an Unmanaged Assembly (C++), InterOping With Windows APIs, Returning that Data Back to PowerShell

So, as the long-winded title implies, today I’ll be covering something that I wrote a long time ago and have recently re-written (albeit, probably badly) for you to use and/or learn from.

In this case, we’re using a PowerShell script which JITs C# code; the C# code calls into an unmanaged DLL (C++), and that calls into a Windows API. Once the data has been obtained from the Windows API, we pass it back from the unmanaged assembly to the managed code (via Marshal) and then return that back to the PowerShell instance to be displayed to the user.

Before we dive into what we’re doing, we should cover some key concepts. The first is JIT’ing. JIT stands for “Just-In-Time” (compilation) and the name is slightly a misnomer, but we’ll cover that in a second. In JIT, the runtime compiles the code just before it’s run. This is important because a key concept in exception handling is the runtime’s seek operation to find a handler for a thrown exception; you’ll often see this as a FirstChanceException in a dump file. In PowerShell, we have the added ability to leverage JIT compilation by passing source code as a type into the AppDomain. It’s important to note that once the AppDomain has been disposed of, the type specified is lost and has to be added again.

So, what – exactly – is this code going to be doing? Well, since Windows Vista, the Windows operating system has exposed the Wait Chain Traversal API. You can see a demonstration of this API in Task Manager: go to the Details tab, right-click on a process and click “Analyze Wait Chain”.

Since Windows Server 2016 Core doesn’t include a desktop or any GUI interfaces, a more robust way was needed to obtain the same information in production, to determine if the reason an application wasn’t responding or performing work was because the threads were blocked.

When you run the code, you can tell if this is the case or not by something like the following:

12972   [7464:12972:blocked]->[ThreadWait]->[7464:12968:blocked]->[End]

Where the first thread is blocked by a thread wait on the second thread, which is also (itself) blocked.

So, first things first: the PowerShell code. Take a peek here to see that. Note that the source code is contained within a here-string, @”<code>”@. After that, we add a type, pointing it to the source code we’ve defined and referencing the assemblies that we’ll need for this type to work. Worth noting is that when we add this type, it is exposed in PowerShell the same way any normal .NET type is, via the [TypeName]:: convention.

Note that in the C# source we import the unmanaged DLL and reference the exposed method. In the body of the code, we also construct an IntPtr to reference for the return. So, now, we get to Marshalling.

An IntPtr is, quite literally, a pointer or handle to an object. A pointer is a reference to the memory where an object exists, and the object is typically delimited to signify its termination (e.g.: a string is null-terminated). A handle is roughly the same premise, but the handle abstracts memory management away from the caller. So, at 200 ticks, the handle could point to address 0x000001 and, at 369 ticks, it could point to 0x34778.

Alright, so why this matters: when we pass from unmanaged code back to managed code, there’s nothing that implicitly tells the managed code where to find the object in native memory; so, we have to pass a pointer to the object back to the managed side (I believe the managed side creates its own copy and its own address for that object) and, using that, we can then convert the passed object from a native string into a managed string (via marshalling).
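The managed half of that hand-off can be sketched without the C++ side at all: allocate a native ANSI string, treat its IntPtr as if it were the pointer returned from the DLL, and let Marshal walk it back into a managed string. The names here are mine, not the actual interop code from the script:

```csharp
using System;
using System.Runtime.InteropServices;

public static class NativeStringDemo
{
    public static string RoundTrip(string text)
    {
        // Copy the managed string into unmanaged memory; we now own this allocation.
        IntPtr nativePtr = Marshal.StringToHGlobalAnsi(text);
        try
        {
            // Same call the script's managed side makes on the pointer returned
            // from the unmanaged DLL: walk memory to the null terminator and
            // build a managed copy of the string.
            return Marshal.PtrToStringAnsi(nativePtr);
        }
        finally
        {
            // Unlike a real P/Invoke return, we allocated this block, so we free it.
            Marshal.FreeHGlobal(nativePtr);
        }
    }
}
```

In the real interop case, who frees the native buffer (caller or callee) is part of the DLL’s contract; here we allocated it ourselves, so we free it ourselves.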

What about this unmanaged code I keep hearing about? Oh, well… You can find that here. Don’t forget to add the headers referenced in stdafx.h, or your compiler will cry bloody murder.

So, how this works is: the script is called via PowerShell. After some magic to verify that we weren’t given junk data, PowerShell JITs the C# code and performs the runtime operations. The runtime loads the unmanaged DLL into memory. The C# code then calls into the unmanaged DLL via the exposed method (declspec/dllexport/cdecl). The unmanaged code performs its work and returns the string back to the caller – unaware that the caller is managed code. The managed code creates an IntPtr to reference the return and then Marshal is called to convert the native string into a managed string. This is then returned back to the user. In the case of multiple instances of a process, the managed strings are added to an array and that array is returned.

While it may not seem like much – overglorified complication just to check for threads in wait chains – it was code that was written to be used in production, where we had no desktop environments and had to determine whether blocked threads were the problem.

I hope that someone gets some kind of use out of it. 🙂

Until next time.

Using PowerShell and .NET to Construct a DirectorySearcher

NOTE: This post – drafted, composed, written, and published by me – originally appeared on https://blogs.technet.microsoft.com/johnbai and is potentially (c) Microsoft.

PowerShell and .NET are very interoperable and can help to save time when you’re performing generic, everyday tasks. For example, let’s say that I want to create one date-time value for thirty minutes ago and one for right now; we can do this in one fell swoop in Exchange Management Shell (read: PowerShell):

v2: Get-MessageTrackingLog -Start [System.DateTime]::Now.AddMinutes(-30) -End [System.DateTime]::Now

v3: Get-MessageTrackingLog -Start ([System.DateTime]::Now).AddMinutes(-30) -End ([System.DateTime]::Now)

As you can see, the syntax changes slightly but the methods are the same because they derive from the same .NET class.

Below, I demonstrate constructing a DirectorySearcher object for a specific case. The final script is published on OneScript, here, but I wanted to demonstrate that we can use PowerShell + .NET to solve some complex problems in a rather easy way.


In rare cases, removal of an Exchange Server from the forest doesn’t go according to plan and, without Exchange Management Shell (EMS), finding servers via Active Directory might be a bit of a pain point. Enter DirectorySearcher.

Here is an example, finding Exchange 2013 mailbox servers in the forest:

$colMBX = @()
$CurrentDomain = [System.DirectoryServices.ActiveDirectory.Domain]::GetComputerDomain()
$ForestName = $CurrentDomain.Forest.Name
$ForestDC = $ForestName.Replace(".",",DC=")
$ForestLDAP = [ADSI]"LDAP://CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=$ForestDC"
$orgName = $ForestLDAP.psbase.children | where {$_.objectClass -eq 'msExchOrganizationContainer'}
$Path = "LDAP://CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=" + $orgName.Name + ",CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=" + $ForestDC
$Searcher = New-Object System.DirectoryServices.DirectorySearcher
$Searcher.Filter = '(&(objectCategory=msExchExchangeServer)(msExchCurrentServerRoles:1.2.840.113556.1.4.803:=54)(serialNumber=*15*))'
$Searcher.PageSize = 10000
$Searcher.SearchScope = "OneLevel"
$Searcher.PropertiesToLoad.Add("Name") | Out-Null
$Searcher.SearchRoot = $Path
$ServerResult = $Searcher.FindAll()
foreach ($result in $ServerResult)
{
       $colMBX += $result.Properties.name[0]
}

You’ll notice that in the filter we’re using the OID ‘1.2.840.113556.1.4.803’ between the attribute and the value we’re seeking. This OID is an extensible matching rule for the bitwise operator AND, which may also be referred to as ‘LDAP_MATCHING_RULE_BIT_AND‘. Without it, the filter would perform an exact equality match; with it, any server whose role bits include the mask will match. The syntax follows the RFC 4515 extensible-match convention.

In Exchange 2013, we can use the value ‘54’ to search for Mailbox servers and the value ‘16385’ for CAS servers.

To explain the values, we can demonstrate via table:

Server role                 Role value
Mailbox role                2
Client Access role (CAS)    4
Unified Messaging role      16
Hub Transport role          32

The Mailbox role now has the previous roles in one server, so 2 + 4 + 16 + 32 = 54.
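The check the LDAP filter performs is just a bitwise AND against that sum; a small sketch (the enum is my own shorthand for the msExchCurrentServerRoles bits listed above, not an API type):

```csharp
using System;

[Flags]
public enum ExchangeRole
{
    Mailbox = 2,
    ClientAccess = 4,
    UnifiedMessaging = 16,
    HubTransport = 32
}

public static class RoleCheck
{
    // 2 + 4 + 16 + 32 = 54, the mask the LDAP filter compares against.
    public const int MailboxMask = (int)(ExchangeRole.Mailbox | ExchangeRole.ClientAccess
                                       | ExchangeRole.UnifiedMessaging | ExchangeRole.HubTransport);

    // True when every bit of the mask is set - the same test
    // LDAP_MATCHING_RULE_BIT_AND evaluates on the server side.
    public static bool IsMailboxServer(int msExchCurrentServerRoles) =>
        (msExchCurrentServerRoles & MailboxMask) == MailboxMask;
}
```

A server reporting 54 matches; a CAS-only server reporting 16385 shares no bits with the mask and does not.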

You can read more on PowerShell + DirectorySearcher here: http://technet.microsoft.com/en-us/library/ff730967.aspx