C#: Recursively Getting Subfolders Whilst Ignoring the Errors that Would Otherwise Stop Traversal

So, I’m writing something that is – eventually – meant to scan the hard drive for IoCs (indicators of compromise). The first major problem to solve is the following:

If I have drive ‘C:\’ and I try to enumerate all of the folders under it with Directory.EnumerateDirectories (using the recursive option, SearchOption.AllDirectories), then the first exception that throws stops the iteration – entirely.

This is, I suppose, desired behaviour from the code provider’s perspective but entirely undesirable from the code consumer’s perspective.

So, how can we get around this? Well, we need a way to “swallow” the exceptions but carry on processing the folders that we can access. Instead of nesting foreach loops ad infinitum (because we never know how deep or shallow the traversal might be), we – instead – call the method recursively on itself.

So, it breaks down into the following code:

// Requires: using System.Collections.Generic; using System.IO; using System.Linq;
private static readonly List<string> foldersList = new List<string>();

private static void EnumerateSubfolders(string path)
{
    try
    {
        // Enumerate only the immediate children of this path.
        string[] directories = Directory.EnumerateDirectories(path).ToArray();
        foldersList.AddRange(directories);
        foreach (string directory in directories)
        {
            // Recurse one level deeper for each child we could access.
            EnumerateSubfolders(directory);
        }
    }
    catch
    {
        // Swallow the exceptions about access because we can't do anything
        // about them; the siblings of this folder are still traversed.
    }
}

Note that all we’re doing is going one child folder deep, adding the results to the list of folder paths and then, for each of those, traversing one child folder deep again. In this way, even if we hit an exception, we’ve gone one child deep and we stop processing for that child – without it affecting the traversal of the other children. While this isn’t ideal for obtaining all of the folders in Windows, it’s a far cry better than the entire process stopping on the first exception thrown.
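For completeness, a minimal usage sketch (the root path here is illustrative):

// Kick off the traversal; inaccessible subtrees are skipped rather than fatal.
EnumerateSubfolders(@"C:\");
Console.WriteLine($"Found {foldersList.Count} folders.");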

Happy programming! 🙂

HttpClient: The Myths Behind ‘Using’ on a Reentrant Object

In developing a solution in Azure to [REDACTED], I discovered a “bug” in HttpClient that seems to be somewhat common knowledge but I figured that I would share it with you, in case you run into the same problems.

This bug surfaced more because we’re using Azure than anything else. You see, anything in Azure should be considered multitenant, meaning that your app is running parallel to – potentially – hundreds of other apps within the infrastructure’s backplane.

So, I was using Parallel.ForEach and .ctor’ing a new HttpClient per thread, making a call to obtain data from a REST endpoint at a unique sub-URL, which was entirely dependent on data that I had previously obtained via another REST call. Obscurity is strong with this one, I’m aware.

Every once in a while, I would get the following exception (by unwrapping the inner exceptions): “Only one usage of each socket address (protocol/network address/port) is normally permitted: <ipAddressGoesHere>”.

Technically, I was only using one address per HttpClient, but there’s a catch/caveat to all of this: even if you use the ‘using’ statement and IDisposable.Dispose() is called, the underlying socket isn’t immediately released.

The socket would still be in use even after disposal. This is because the socket that the HttpClient used is put into TIME_WAIT. So, you have a socket held open to that host and, because it hasn’t fully closed, if you keep instantiating new HttpClients (each binding a new ephemeral port), you could potentially run out of ports to consume.
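You can watch this happen while the code runs; a standard Windows command like the following (shown purely as an illustration) lists the sockets stuck in that state:

netstat -ano | findstr TIME_WAIT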

…but, wait, there’s more!™

The HttpClient is considered reentrant (and this is where our true problem comes in). This means that some (if not all) of your non-disposed HttpClients could be re-used to try to talk over what the system considers a currently in-use socket (because the port is still considered open while it’s in TIME_WAIT).

In fact, if we chase down the SocketException -> Win32Exception -> HResult, we can see that this comes from the system as 0x00002740, which is WSAEADDRINUSE.

The solution? Since public static (I think just static, really) instances of HttpClient are considered thread-safe, the singleton model is what we want to go with.

So, instead of instantiating an HttpClient per call, you would instantiate a singleton instance to be used per host in your class. This allows the HttpClient and its port to be re-used (thus reducing potential ephemeral-port exhaustion as a byproduct). And since it appears that Azure re-instantiates your class per run (if you’re using the TimerTrigger, for example), you create a scenario where the object’s lifetime is bound to your class. (Assuming you call HttpClient.Dispose() before the run completes and the object moves out of scope.)
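A minimal sketch of the pattern, assuming a single downstream host (the class name, field name, and URL are mine, for illustration only):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ApiClient
{
    // One shared, thread-safe HttpClient for the host, re-used across all
    // parallel calls instead of a new instance per thread.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("https://api.example.com/")
    };

    public static async Task<string> GetAsync(string relativeUrl)
    {
        // Every call shares the same connection pool (and ephemeral port).
        using (HttpResponseMessage response = await Client.GetAsync(relativeUrl))
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}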

…but MSDN says to use ‘using’ for IDisposable objects!

Yes, this is true but, again, we have to consider that even though .Dispose() might be called when we leave scope, we have no control over when GC actually comes through and disposes of the object from the Gen1/Gen2 heaps. We also cannot control when the TCP port is actually closed because that’s dependent on the host. So, even if HttpClient.Dispose() is called, you’re still at the whims of the keep-alive configured on the host for the actual port to be closed.

Diagram from the IETF? Diagram from the IETF.

[Diagram: the TCP state-transition diagram from the IETF’s RFC 793, showing the TIME_WAIT state.]

So, even though it’s been practically beaten into you throughout your CS career to use ‘using’, there are times when the singleton model (and not invoking using) is more favourable to your software design needs, expectations, and requirements than what you’ve been taught is the best practice.

Happy coding! 🙂

Announcing ‘Tomte’: The Customisable Remote Administration Tool for Windows Systems

TL;DR – I’ve released source code for remote systems administration via PowerShell + WCF + WindowsService, here, under the Mozilla Public License 2.0.

Windows has a huge, glaring gap where Linux really shines: Remote configuration via tools. Sure, there’s DSC (Desired State Configuration) but it doesn’t help much for run-time automation and administration.

Tomte (‘o’, as in phone, and ‘e’, as in meh) is a framework (though lacking in some features) that was written primarily to address that gap and to give SRE/DevOps teams a tool to which activities can be added quickly (just add a PowerShell command and then the corresponding activity).

First things first: we have to set the SQL Server up. Install your flavour of SQL Server (I’ve ported this code twice now, so I just kept the instance name, as that made life easier) and run the following commands from the command line, replacing the instance name with your SQL Server instance and the database name with the database that you created.

osql -E -S .\SQL2008Express -d WorkflowInstanceStore -i C:\Windows\Microsoft.NET\Framework64\v4.0.30319\SQL\en\SqlWorkflowInstanceStoreSchema.sql

osql -E -S .\SQL2008Express -d WorkflowInstanceStore -i C:\Windows\Microsoft.NET\Framework64\v4.0.30319\SQL\en\SqlWorkflowInstanceStoreLogic.sql

IMPORTANT: If you want to configure the service to use a user-account, please change that in the source code and add permissions for the account to the SQL database; otherwise, the SQL database will require Anonymous Logon permissions, as the Local System account used for the Windows Service is obfuscated as an anonymous logon on SQL Server.

Now that the database is set up, you’ll need to punch a hole in your firewall to listen on port 65534. You can change the listen port in the application configuration file of the Windows Service (Felsökning.Tomte.AdminService) via the baseAddress setting.
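For example, to open the port with the built-in firewall tooling (the rule name here is just illustrative):

netsh advfirewall firewall add rule name="Tomte AdminService" dir=in action=allow protocol=TCP localport=65534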

<host>
  <baseAddresses>
    <!-- NOTE: The '*' allows the service to install on any given machine -->
    <add baseAddress="http://*:65534/WorkflowService/service"/>
  </baseAddresses>
</host>

Now that we have the SQL database ready and we’ve punched a hole in the firewall, we can install the Windows Service. First, ensure that the build configuration of all of the projects is set to ‘x64’ and clean/build the solution.

To install the Windows Service, we’ll leverage the standard tools that ship with .NET, to prevent any cursory problems with “specialised” versions of these tools. In an elevated command (or PowerShell) window, run the following to install the service, replacing the path with your build’s output path.

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.exe

Microsoft (R) .NET Framework Installation utility Version 4.7.3190.0

Copyright (C) Microsoft Corporation. All rights reserved.

Running a transacted installation.

Beginning the Install phase of the installation.

See the contents of the log file for the C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.exe assembly's progress.

The file is located at C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.InstallLog.

Installing assembly 'C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.exe'.

Affected parameters are:

logtoconsole =

assemblypath = C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.exe

logfile = C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.InstallLog

Installing service Felsökning.Tomte.AdminService...

Service Felsökning.Tomte.AdminService has been successfully installed.

Creating EventLog source Felsökning.Tomte.AdminService in log Application...

The Install phase completed successfully, and the Commit phase is beginning.

See the contents of the log file for the C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.exe assembly's progress.

The file is located at C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.InstallLog.

Committing assembly 'C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.exe'.

Affected parameters are:

logtoconsole =

assemblypath = C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.exe

logfile = C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.AdminService\bin\x64\Debug\Felsökning.Tomte.AdminService.InstallLog

The Commit phase completed successfully.

The transacted install has completed.

Now that the Windows Service is installed, we need to start it. You can do this via any number of means. Start>Run>services.msc is an easy way to do that.
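Alternatively, from an elevated PowerShell window (using the service name shown in the install output above):

Start-Service -Name 'Felsökning.Tomte.AdminService'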

With the service running, we can now import the PowerShell module.

Import-Module "C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.PowerShell\bin\x64\Debug\Felsökning.Tomte.PowerShell.dll" -Verbose

VERBOSE: Loading module from path 'C:\Code\git\felsokning\Tomte\Felsökning.Tomte\Felsökning.Tomte.PowerShell\bin\x64\Debug\Felsökning.Tomte.PowerShell.dll'.

VERBOSE: Importing cmdlet ‘Update-RemoteWindowsSystem’.

VERBOSE: Importing cmdlet ‘Test-RemotePortConnectivity’.

VERBOSE: Importing cmdlet ‘Test-RemoteFileExists’.

VERBOSE: Importing cmdlet ‘Start-RemoteSecureDelete’.

VERBOSE: Importing cmdlet ‘Set-RemoteComputerName’.

VERBOSE: Importing cmdlet ‘Set-RemoteSymbolServerEnvironmentVariable’.

VERBOSE: Importing cmdlet ‘Restart-RemoteService’.

VERBOSE: Importing cmdlet ‘Restart-RemoteSystem’.

VERBOSE: Importing cmdlet ‘Request-DevOpsElevation’.

VERBOSE: Importing cmdlet ‘Install-RemoteSysInternals’.

VERBOSE: Importing cmdlet ‘Get-RemoteDateTime’.

VERBOSE: Importing cmdlet ‘Get-RemoteFileText’.

VERBOSE: Importing cmdlet ‘Get-RemoteFreeDiskSpace’.

VERBOSE: Importing cmdlet ‘Get-RemoteLoggedOnUsers’.

VERBOSE: Importing cmdlet ‘Get-RemoteOSVersion’.

VERBOSE: Importing cmdlet ‘Get-RemotePingResponse’.

VERBOSE: Importing cmdlet ‘Get-RemoteProcessIds’.

VERBOSE: Importing cmdlet ‘Get-RemoteProcessThreads’.

VERBOSE: Importing cmdlet ‘Get-RemoteServerTimeSkew’.

VERBOSE: Importing cmdlet ‘Get-RemoteSystemUptime’.

VERBOSE: Importing cmdlet ‘Get-RemoteWebResponseString’.

VERBOSE: Importing cmdlet ‘Get-RemoteWindowsEvents’.

VERBOSE: Importing cmdlet ‘Edit-RemoteConfigurationFile’.

VERBOSE: Importing cmdlet ‘Copy-RemoteFiles’.

VERBOSE: Importing cmdlet ‘Copy-RemoteImagesAndLibrariesForProcess’.

Now that the module is loaded, you can run commands against the remote (or local) endpoint running the Windows Service. (As you can see, I’ve found a localisation bug that I need to sort out with the return from the DateTimeActivity – 05-01-2019 can easily be May 1st or the 5th of January, depending on how you datetime.)

Get-RemoteDateTime -Server 192.168.0.252

den 5 januari 2019 15:00:02
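(The likely fix for that ambiguity, sketched roughly – have the activity format its return in a culture-invariant, round-trip format:)

// Requires: using System.Globalization;
// ISO 8601 ("o") is unambiguous regardless of the server's locale.
string stamp = DateTime.UtcNow.ToString("o", CultureInfo.InvariantCulture);
// e.g.: 2019-01-05T15:00:02.0000000Z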


Get-RemoteFreeDiskSpace -Server 192.168.0.252

Drive C:\ has 87.52% free

Drive D:\ has 90.22% free


Get-RemoteOSVersion -Server 192.168.0.252 -Kernel32

Major  Minor  Build  Revision

-----  -----  -----  --------

10     0      17763  1


Test-RemotePortConnectivity -Server 192.168.0.252 -TargetHost 8.8.8.8 -Port 53

True

Now that we’ve run a few workflows, let’s check the SQL database and verify that the data we expect to find is there.

[Screenshot: a SQL query showing the executions of the workflows on the machine.]

…and that’s just about it.

Hope that this helps someone at some point in the future. 🙂

Wait-Chain Traversal: Or, How We Can Use PowerShell to JIT C# to Load an Unmanaged Assembly (C++), InterOping With Windows APIs, Returning that Data Back to PowerShell

So, as the long-winded title implies, today I’ll be covering something that I wrote a long time ago and have recently re-written (albeit, probably badly) for you to use and/or learn from.

In this case, we’re using a PowerShell script, which JITs C# code; the C# code calls into an unmanaged DLL (C++), and that calls into a Windows API. Once the data has been obtained from the Windows API, we pass the data back from the unmanaged assembly to the managed code (via Marshal) and then return that back to the PowerShell instance to be displayed to the user.

Before we dive into what we’re doing, we should cover some key concepts. The first is JIT’ing. JIT stands for “Just-In-Time” (compilation) and the name is slightly a misnomer, but we’ll cover that in a second. In JIT, what happens is that the runtime compiles the code immediately before it’s run. This is important because a key concept in exception handling is the runtime’s seek operation to find a handler for an exception that is thrown. You’ll often see this as a FirstChanceException in a dump file. In PowerShell, we have the added ability to leverage JIT compilation by passing source code as a type into the app’s domain. It’s important to note that once the App Domain has been disposed of, the type specified is lost and has to be added again.

So, what – exactly – is this code going to be doing? Well, since Windows Vista, the Windows Operating System exposes the Wait Chain Traversal API. You can see a demonstration of this API in Task Manager: Go to the Details tab, right click on a process and click “Analyze Wait Chain”.

Since Windows Server 2016 Core doesn’t include a desktop or any GUI interfaces, a more robust way was needed to obtain the same information in production, to determine if the reason an application wasn’t responding or performing work was because the threads were blocked.

When you run the code, you can tell if this is the case or not by something like the following:

12972   [7464:12972:blocked]->[ThreadWait]->[7464:12968:blocked]->[End]

Where the first thread is blocked by a thread wait on the second thread, which is also (itself) blocked.

So, first things first, the PowerShell code. Take a peek here to see that. Note that the source code is contained within a here-string (@”<code>”@). After that, we add a type, pointing it to the source code we’ve defined and referencing the assemblies that we’ll need for this type to work. Worthy of noting is that when we add this type, it is exposed in PowerShell the same way any normal .NET type is, via the []:: convention.
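The shape of that call is roughly the following (the type and method names here are placeholders for the script’s real ones):

Add-Type -TypeDefinition $sourceCode -ReferencedAssemblies 'System.dll'
[Example.WaitChainTraversal]::GetWaitChains(12972)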

Note that in the C# source we import the unmanaged DLL and reference the exposed method. In the body of the code, we also construct an IntPtr to reference for the return. So, now, we get to Marshalling.

An IntPtr is, quite literally, a pointer or handle to an object. A pointer is a reference to the memory where an object exists, and the object is typically delimited to signify its termination (e.g.: a native string is null-terminated). A handle is roughly the same premise, but the handle abstracts memory management from the caller. So, at 200 ticks, the handle could point to address 0x000001 and, at 369 ticks, it could point to 0x34778.

Alright, so why this matters is because when we pass from unmanaged code back to managed code, there’s nothing that implicitly tells the managed code where to find the object in native memory; so, we have to pass a pointer to the object back to managed code (I believe managed code creates its own copy and creates an address for that object) and, using that, we can then try to convert the passed object from a native string into a managed string (via marshalling).
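A minimal sketch of that hand-off, with the DLL and export names invented for illustration (the real names live in the linked source):

// Requires: using System.Runtime.InteropServices;
// The unmanaged export returns a pointer to a null-terminated native string.
[DllImport("WaitChainNative.dll", CallingConvention = CallingConvention.Cdecl)]
private static extern IntPtr GetThreadWaitChain(int processId);

// ...

IntPtr nativeString = GetThreadWaitChain(7464);
// Marshal copies the native bytes into a managed System.String.
string waitChain = Marshal.PtrToStringAnsi(nativeString);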

What about this unmanaged code I keep hearing about? Oh, well… You can find that here. Don’t forget to add the headers referenced in stdafx.h, or your compiler will cry bloody murder.

So, how this works is: the script is called via PowerShell. After some magic to verify that we weren’t given junk data, PowerShell JITs the C# code and performs the runtime operations. The compiler loads the unmanaged DLL into memory. The C# code then calls into the unmanaged DLL via the exposed method (declspec/dllexport/cdecl). The unmanaged code performs its work and returns the string back to the caller – unaware that the caller is managed code. The managed code creates an IntPtr to reference the return and then Marshal is called to convert the native string into a managed string. This is then returned back to the user. In the case of multiple instances of a process, the managed string is added to an array and that array is returned.

While it may not seem like much and may seem like over-glorified complication just to check for threads in wait chains, it was code that was written to be used in production, where we had no desktop environments and we had to determine if this was the case.

I hope that someone gets some kind of use out of it. 🙂

Until next time.

2017 Project: Documented Code Samples

NOTE: This post – drafted, composed, written, and published by me – originally appeared on https://blogs.technet.microsoft.com/johnbai and is potentially (c) Microsoft.

In an effort to empower more people across the planet to learn, but for it not to be an arduous journey in doing so, I’ve started heavily commenting code so that people can learn from it. This will be the first of many code ventures that I attempt to do and then share with comments, so that others can learn from it. The reason for the commenting with the links is because I, myself, have often wondered what ‘x’ does or why ‘y’ may have been used; so, in typical learning fashion, I set off to my favourite search engine and undertake to read more on the class, method, property, etc. To save someone the time and heartache of doing that, I include the .NET MSDN/TechNet documentation with the code – to assist self-driven learning.

In this first of the series, DownloadBingImage, Wes (Mills) was making a PowerShell script to download the Bing Search image. Kevin (Miller) asked him if it was in a service or task and Wes replied that it was a task, so I challenged myself to write it into a Windows Service. I rose to the occasion – mostly out of self-interest in Windows Services, as it would help me to learn more about coding fundamentals in this area.

And so, after a few months of trial and error, I have a working version of the service installed on my local machine, and it downloads the image[s]. This same exact code is what I’m sharing with you.

You can find a copy of the code with in-line comments and links to the classes/methods here. If you want to start at the main service entry point, look here.

It should go without saying that I am not a professional developer (albeit, my role does require understanding all aspects of the SDLC), so if you have comments or are confused as to why I did something a particular way (from a professional developer’s perspective) and it doesn’t quite make sense, that’s why: I’m more of a hobbyist than what others would call a “professional”, at this point.

I hope that this helps at least one person on their journey towards software development. If it’s just one, then the effort was worth it. 🙂

Happy Coding!

C# + EWS: Autodiscover Test (Exchange and O365)

NOTE: This post – drafted, composed, written, and published by me – originally appeared on https://blogs.technet.microsoft.com/johnbai and is potentially (c) Microsoft.

In times of troubleshooting client-side issues, it may become necessary to query for the autodiscover response the user is receiving from either Exchange on-premises or Exchange in O365 Рor, in the case of a redirection, both on-premises and O365 Exchange. This is a sample C#.NET Console Application, which will query for the Autodiscover response and use the TraceListener class to write the response to files.

There are two things the code doesn’t take into account:

1. The condition wherein the user’s SMTP address and UPN are different.
2. ALL of the possible returns from Autodiscover for the UserSettings.


Thus, I have included the source code for two reasons:

1. Promotion of writing .NET programs for both on-premises Exchange and O365 Exchange.
2. Customization of both the UserSettings one is targeting and the target delivery folder for which the files should be saved.
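If you just want the shape of it, the core of such a tool might look something like the following sketch against the EWS Managed API (the credentials, the settings queried, and the listener here are illustrative; the full source covers more):

using Microsoft.Exchange.WebServices.Autodiscover;
using Microsoft.Exchange.WebServices.Data;

// A trace listener that writes each Autodiscover trace out to a file.
public class FileTraceListener : ITraceListener
{
    public void Trace(string traceType, string traceMessage)
    {
        System.IO.File.AppendAllText(traceType + ".xml", traceMessage);
    }
}

// ... then, inside the console application:

AutodiscoverService service = new AutodiscoverService();
service.Credentials = new WebCredentials("user@contoso.com", "password");
service.TraceEnabled = true;
service.TraceFlags = TraceFlags.AutodiscoverResponse;
service.TraceListener = new FileTraceListener();
service.RedirectionUrlValidationCallback = url => url.StartsWith("https://");

// Ask for a couple of the user's settings; add whichever UserSettingName values you're after.
GetUserSettingsResponse response = service.GetUserSettings(
    "user@contoso.com",
    UserSettingName.InternalEwsUrl,
    UserSettingName.ExternalEwsUrl);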


If you have any problems, questions, or concerns, feel free to reach out to me and I’ll try to address them as soon as possible.

The source code can be found here: http://gallery.technet.microsoft.com/C-EWS-Autodiscover-Test-870b4a8e

O365 & EWS: EmailMessage.SetExtendedProperty() Introduces Undesirable Behavior for Cloud

NOTE: This post – drafted, composed, written, and published by me – originally appeared on https://blogs.technet.microsoft.com/johnbai and is potentially (c) Microsoft.

In Office 365, there is a known issue where Item.SetExtendedProperty() will prevent ResponseMessage.SendAndSaveCopy() from working correctly. Instead of sending the message and placing the item in the ‘Sent Items’ folder, the message will be sent and remain in the ‘Drafts’ folder.

This issue can be corrected by changing the source code of the EWS application in either of the following two ways:

1. Specify the ‘Sent Items’ folder via passing ‘SENTFOLDEREWSID’ in the method (Note: WellKnownFolderName.SentItems will not work for this case):

var messageToSend = responseMessage.Save();

// This is our method that is introducing our repro scenario in O365.
messageToSend.SetExtendedProperty(new ExtendedPropertyDefinition(new Guid("{00000000-0000-0000-0000-000000000000}"), "<String>", MapiPropertyType.String), 1);

// Send and save a copy of the replied email message in the folder identified by SENTFOLDEREWSID.
messageToSend.SendAndSaveCopy(SENTFOLDEREWSID);


2. Use Item.Update() before the SendAndSaveCopy() method:

var messageToSend = responseMessage.Save();

// This is our method that is introducing our repro scenario for the cloud.
messageToSend.SetExtendedProperty(new ExtendedPropertyDefinition(new Guid("{00000000-0000-0000-0000-000000000000}"), "<String>", MapiPropertyType.String), 1);

// We update the item before sending it.
messageToSend.Update(ConflictResolutionMode.AlwaysOverwrite);

// Send and save a copy of the replied email message in the default Sent Items folder.
messageToSend.SendAndSaveCopy();


With this, you should be able to work around this EWS issue until a fix is found. Happy coding!


Attached, you will find the repro code with the fix (Program.cs).


EWS: Obtaining Mail Item from List

NOTE: This post – drafted, composed, written, and published by me – originally appeared on https://blogs.technet.microsoft.com/johnbai and is potentially (c) Microsoft.

In troubleshooting an issue for a customer, I ran into a problem: I could obtain the data from the MAPI store (via EWS) but I was unable to figure out how to cast from the list of items obtained to an actual message to action against.

For example, here’s where I was attempting to obtain the items from the MAPI container:

Console.WriteLine("Connecting to EWS endpoint...");
ExchangeService ExService = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
ExService.Credentials = new WebCredentials(targetMBX, passWord);
ExService.AutodiscoverUrl(targetMBX, RedirectionUrlValidationCallback);

// Obtain the message to reply to.
ItemView iv = new ItemView(3);
FindItemsResults<Item> zeitem = ExService.FindItems(WellKnownFolderName.Inbox, iv);

As you can see, we’re calling FindItems and returning it as a collection of Items. I thought a cast would have to occur to convert the Item back into an EmailMessage. Instead – as one would be as happy to find out as I was – the Item is an encapsulation of the EmailMessage, and we can ‘extract’ it via an array call:

var item = zeitem.Items[0];

From this, we can further action against the mail items via EWS:

if (item is EmailMessage)
{
    Console.WriteLine("Working with the first item found.");

    // Cast the Item back to an EmailMessage so that we can reply to it.
    EmailMessage message = (EmailMessage)item;

    // Reply to the message.
    ResponseMessage responseMessage = message.CreateReply(replyToAll);
    string myReply = "This is a test of the EWS responseMessage method[s].";
    responseMessage.BodyPrefix = myReply;
    var messageToSend = responseMessage.Save();

    // Send and save a copy of the replied email message in the default Sent Items folder.
    messageToSend.SendAndSaveCopy();
}

Using the EWS API, one can do many powerful and important administrative tasks in Exchange. You can read more about it here.