iTunes: Save Your Money and Don’t Buy the Singles, Buy the EP

A new album, Innan det tar slut, dropped on 29 Nov 2019 and I finally got around to purchasing it today. When I went to play the six track album, here’s what it looked like in iTunes:

It’s a six song EP. So, where did the rest of the songs go?

Well, they were singles that I had bought before the album dropped (see the “More by GAMMAL” section in the photo); so, they only show up if I tap Visa komplett album or I go into the singles’ albums.

What’s even more fun is that if I tap on a song I already own from this full album view – say, Hemma Igen – Apple wants me to sign up for Apple Music. Apparently, I can only play those songs from the limited album view?

Some might try to argue that a single from the single’s album and the same song from the EP are – in fact – logically different songs; however, unless the ones and zeroes of the two files differ and they’re of different sizes, they’re actually the same song! In fact, iTunes considers that I already own the songs, I just haven’t “downloaded” them yet. (Fun-filled fact: they live on under the single’s album forevermore.)

Anyways, the long and short of it is that you should just wait for the full album to drop and buy it all at once, rather than deal with such nonsensical maladies of music listening.

…but you don’t have to take my word for it! The internet is littered with complaints regarding this same issue and it looks like Apple is not going to fix it any time soon.

So, don’t fall into the same trap. Just wait for the album. 🙃

C#: Recursively Getting Subfolders Whilst Ignoring the Errors that Would Stop Other Traversal Means

So, I’m writing something that is – eventually – meant to scrub the hard drive for IoCs (indicators of compromise). The first major problem to solve is the following:

If I have drive ‘C:\’ and I try to enumerate all of the folders under it with Directory.EnumerateDirectories (using the recursive SearchOption.AllDirectories option), then the first exception that’s thrown stops the iteration – entirely.

This is, I suppose, desired behaviour from a code-provider’s perspective but entirely undesirable from a code consumer’s perspective.
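For illustration, here’s a minimal sketch of that naive call (the drive path is just an example):

// Naive approach: one UnauthorizedAccessException anywhere in the tree
// terminates the entire enumeration – nothing after the failing folder is returned.
foreach (string directory in Directory.EnumerateDirectories(@"C:\", "*", SearchOption.AllDirectories))
{
    Console.WriteLine(directory);
}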

So, how can we get around this? Well, we need a way to “swallow” the exceptions while carrying on with the folders that we can access. Instead of nesting foreach loops ad infinitum (because we never know how deep or shallow the traversal might be), we – instead – call the method recursively on itself.

So, it breaks down into the following code:

// Requires: using System.Collections.Generic; using System.IO; using System.Linq;
// Accumulates every folder we can actually reach.
private static readonly List<string> foldersList = new List<string>();

private static void EnumerateSubfolders(string path)
{
    try
    {
        string[] directories = Directory.EnumerateDirectories(path).ToArray();
        foldersList.AddRange(directories);

        foreach (string directory in directories)
        {
            EnumerateSubfolders(directory);
        }
    }
    catch
    {
        // Swallow the access exceptions (e.g. UnauthorizedAccessException) because we can't do
        // anything about them; the caller carries on with this folder's siblings.
    }
}

Note that all we’re doing is going one child folder deep, adding the results to the list of folder paths and then, for each of those, traversing one child folder deep again. In this way, even if we hit an exception, we’ve gone one child deep and we only stop processing for that child – without it affecting the traversal of the other children. While that’s not ideal for obtaining every folder in Windows, it’s a far cry better than the entire process stopping on the first exception thrown.
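For completeness, a minimal usage sketch (the drive letter is just an example):

// Kick off the traversal at the root; anything we can't read is skipped rather than aborting the walk.
EnumerateSubfolders(@"C:\");
Console.WriteLine($"Accessible folders found: {foldersList.Count}");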

Happy programming! 🙂


Zen Installer: Installing Arch Linux and the Subsequently Confusing FSCK Issue

So, I recently installed Arch Linux (via the Zen Installer) and all went well. Well, until I removed the USB the OS was installed from and rebooted, that is…

systemd_fsck_dependency_failure

Plug the USB stick back in and the system would boot normally. I checked fstab and it looked entirely valid – and it was, for the drive layout at the time, since the system was up and running when I checked.

Markering_001

O.k., let’s try this again but this time check the journal (journalctl -xb) to see what’s happening.

jul 14 21:37:42 [REDACTED] systemd[1]: Starting File System Check on /dev/sdb1…
jul 14 21:37:42 [REDACTED] systemd-fsck[463]: /dev/sdb1: 17 files, 12139/130807 clusters
jul 14 21:37:42 [REDACTED] systemd[1]: Started File System Check on /dev/sdb1.
jul 14 21:37:42 [REDACTED] audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-fsck@dev-sdb1.service comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
jul 14 21:37:42 [REDACTED] kernel: audit: type=1130 audit(1563133062.173:11): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-fsck@dev-sdb1.service comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
jul 14 22:19:49 [REDACTED] systemd[1]: systemd-fsck@dev-sdb1.service: Succeeded.
jul 14 22:19:49 [REDACTED] systemd[1]: Stopped File System Check on /dev/sdb1.
jul 14 22:19:49 [REDACTED] audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-fsck@dev-sdb1.service comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
jul 14 22:19:49 [REDACTED] kernel: audit: type=1131 audit(1563135589.643:55): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-fsck@dev-sdb1.service comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
jul 14 22:21:44 [REDACTED] systemd[1]: dev-sdb1.device: Job dev-sdb1.device/start timed out.
jul 14 22:21:44 [REDACTED] systemd[1]: Timed out waiting for device /dev/sdb1.
jul 14 22:21:44 [REDACTED] systemd[1]: Dependency failed for File System Check on /dev/sdb1.
jul 14 22:21:44 [REDACTED] systemd[1]: systemd-fsck@dev-sdb1.service: Job systemd-fsck@dev-sdb1.service/start failed with result 'dependency'.
jul 14 22:21:44 [REDACTED] systemd[1]: dev-sdb1.device: Job dev-sdb1.device/start failed with result 'timeout'.

Well, that didn’t help too much, save to tell me that the fsck unit’s dependency – the device itself – timed out. Wait a minute… It can’t be… Can it?

Ctrl+Alt+F1 to open a terminal. Nano’ed /etc/fstab, changed all of the sdb references to sda (since the USB was no longer plugged in) and rebooted. It worked.

So, here’s what was happening:

When the USB drive was plugged in, the HDD was /dev/sdb. When the USB was not plugged in, the HDD was no longer /dev/sdb; it was now /dev/sda.

The ultimate work-around to prevent this from happening again? Use the UUIDs instead of the non-static drive assignments, as the kernel’s name descriptors are not persistent. (See the Arch Linux Wiki for more details.)
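For example, a hedged sketch of what that looks like in /etc/fstab (the UUID below is made up; blkid or lsblk -f will show the real ones):

# /etc/fstab – refer to the filesystem by UUID instead of the kernel's /dev/sdXn name
UUID=0a52b34c-58d3-4b29-8f5e-1c2d3e4f5a6b  /  ext4  defaults  0  1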

Obligatory desktop screenshot? Obligatory desktop screenshot.

Markering_004


NuGet: Targeting All of the .NET Versions Plausible (The Easy Way)

I recently published a NuGet package that targets .NET versions from 4.5 to the latest (currently, 4.8). (I could only go back to .NET 4.5 because that’s when HttpClient first dropped. Sorry, not sorry.)

In previous NuGet packages, I had to set-up different build iterations for each .NET version and build them all, independently and manually. This was not a fun process, to be sure.

So, how did I do this latest NuGet package so I didn’t have to go through all of that heartache? Well, when you create a new project in Visual Studio, select Class Library (.NET Standard).

Trust me, I’m aware it seems counter-intuitive to do this but there’s a trick coming up that will save you hours of work and heartache.

Once the project is loaded, right-click the Project in Solution Explorer and select Edit Project File. Once here, you should see some XML beginning with

<Project Sdk="Microsoft.NET.Sdk">

and a node named TargetFramework. We’re going to replace that line with the plural form, listing every framework we want to target:

<TargetFrameworks>net45;net451;net452;net46;net461;net462;net47;net471;net472;net48;</TargetFrameworks>

Once this is done, we’re going to do one more thing to make our lives 1000% easier:

Right-click the Project in Solution Explorer again and click Properties. Select the Package tab. Here, you’ll see most of the fields that you would expect to see in a nuspec file. Edit these fields to contain the values that you want, then select Generate NuGet package on build and, if you require the license to be accepted, Require license acceptance.
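Under the hood, those Package-tab fields are just MSBuild properties in the project file; a rough sketch of what ends up there (the values below are placeholders):

<PropertyGroup>
  <PackageId>My.Example.Package</PackageId>
  <Version>1.0.0</Version>
  <Authors>Your Name</Authors>
  <Description>A short description of what the package does.</Description>
  <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  <PackageRequireLicenseAcceptance>true</PackageRequireLicenseAcceptance>
</PropertyGroup>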

Now, you’ll have to close and re-open Visual Studio when you save everything but, trust me, this is a far more favourable pain than individually building to each .NET target.

When you build this project now, you’ll get a nupkg dropped into your flavour folder (Debug or Release), which you can then upload to NuGet. The nupkg will – automatically – contain all of the .NET-versioned binaries for you. No further action is required on your part.

That’s it! You can now target multiple .NET versions for your NuGet package without having to do much of anything else (except to ensure that the APIs you use exist in every .NET version you’re targeting and, if not, code for those conditions).
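As a hedged sketch of what coding for those conditions can look like – in SDK-style multi-targeted projects, symbols such as NET45 and NET48 are defined per target framework, so you can branch on them (the class and messages below are made up):

public static class RuntimeInfo
{
    public static string Describe()
    {
#if NET45
        // Oldest target: avoid APIs that only exist in later framework versions here.
        return ".NET Framework 4.5 build";
#elif NET48
        return ".NET Framework 4.8 build";
#else
        return "One of the intermediate .NET Framework builds";
#endif
    }
}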

Happy coding! 🙂


HttpClient: The Myths Behind ‘Using’ on a Reentrant Object

In developing a solution in Azure to [REDACTED], I discovered a “bug” in HttpClient that seems to be somewhat common knowledge but I figured that I would share it with you, in case you run into the same problems.

This bug surfaced more because we’re using Azure than anything else. You see, anything in Azure should be considered multi-tenant, meaning that your app is running in parallel with – potentially – hundreds of other apps within the infrastructure’s backplane.

So, I was using Parallel.ForEach and .ctor’ing a new HttpClient per thread, making a call to obtain data from a REST endpoint at a unique sub-URL which was entirely dependent on data that I had previously obtained via another REST call. Obscurity is strong with this one, I’m aware.

Every once in a while, I would get exceptions (by unwrapping them): “Only one usage of each socket address (protocol/network/port) is normally permitted: <ipAddressGoesHere>”.

Technically, I was only using one address per HttpClient, but there’s a catch to all of this: even if you use the ‘using‘ statement, disposal doesn’t release the underlying socket right away.

The socket would still be in use even after disposal, because the socket that the HttpClient used is put into TIME_WAIT. So, you have a socket tied up for that host and, because the host hasn’t closed it yet, if you keep instantiating new HttpClients (each grabbing a new ephemeral port), you could potentially run out of ports to consume.

…but, wait, there’s more!™

The HttpClient is considered reentrant (and this is where our true problem comes in). This means that some (if not all) of your not-yet-disposed HttpClients could be re-used against what the runtime still considers an in-use socket (because the port is still considered open while it’s in TIME_WAIT).

In fact, if we chase down the SocketException –> Win32Exception –> hResult, we can see that this comes from the system as 0x00002740, which is WSAEADDRINUSE.

The solution? Since public static (I think just static, really) instances of HttpClient are considered thread-safe, the singleton model is what we want to go with.

So, instead of instantiating an HttpClient per call, you would instantiate a singleton instance that would be used per host in your class instance. This allows the HttpClient and its port to be re-used (thus reducing potential ephemeral port exhaustion as a byproduct). And since it appears that Azure re-instantiates your class per run (if you’re using the TimerTrigger, for example), you create a scenario where the object’s lifetime is bound to your class. (Assuming you call HttpClient.Dispose() before the run completes and the object moves out of scope.)
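As a minimal sketch of that singleton shape (the class name and endpoint are made up; requires System, System.Net.Http and System.Threading.Tasks):

public static class ApiClient
{
    // One shared, static HttpClient per host; static instances are thread-safe for concurrent requests.
    private static readonly HttpClient client = new HttpClient
    {
        BaseAddress = new Uri("https://api.example.com/")
    };

    public static Task<string> GetAsync(string subUrl)
    {
        // Re-uses the existing connection (and its port) instead of opening a new socket per call.
        return client.GetStringAsync(subUrl);
    }
}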

…but MSDN says to use ‘using’ for IDisposable objects!

Yes, this is true but, again, we have to consider that even though .Dispose() might be called when we leave scope, we have no control over when GC actually comes through and disposes of the object from the Gen1/Gen2 heaps. We also cannot control when the TCP port is actually closed because that’s dependent on the host. So, even if HttpClient.Dispose() is called, you’re still at the whims of the keep-alive configured on the host for the actual port to be closed.

Diagram from the IETF? Diagram from the IETF.

Time_Wait

So, even though it’s been practically beaten into you throughout your CS career to use ‘using’, there are times when the singleton model (and not invoking using) is more favourable to your software design needs, expectations, and requirements than what you’ve been taught is the best practice.

Happy coding! 🙂

Testing Private Methods in Static Classes: The Not-So-Easy Way

I had a problem with testing a method: the method is intentionally private, as exposing it publicly wouldn’t benefit anyone; it merely reduces code overhead for repeated operations, given ‘x’ condition.

In this case, the operation was: if the product of two numbers was greater than nine, take the digits of that product and add them together, and that sum became the new number. So, for example, if the product was ten, we would add one and zero and get the new number one; if it was eleven, we would add one and one, which is two; and so on…

So, because the class is static and is used without a reference to an instance, you can’t reference it as a PrivateType in the usual way, which I quickly came to realize:

PrivateTestFailure

The way to get around this was to construct the PrivateType from the type itself (since it’s static) and access the private method from that:

PrivateTestSucceeding

Once that happens, your tests should now be able to access your private static method contained within your public static class.
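A minimal sketch of the pattern with MSTest’s PrivateType helper (the class and method here are hypothetical stand-ins; requires Microsoft.VisualStudio.TestTools.UnitTesting):

// Hypothetical production code: a public static class with a private static helper.
public static class NumberCruncher
{
    private static int ReduceDigits(int product)
    {
        // If the product is greater than nine, add its digits together to form the new number.
        return product > 9 ? (product / 10) + (product % 10) : product;
    }
}

[TestClass]
public class NumberCruncherTests
{
    [TestMethod]
    public void ReduceDigits_TenBecomesOne()
    {
        // Wrap the static type itself, then invoke the private static method by name.
        var privateType = new PrivateType(typeof(NumberCruncher));
        int result = (int)privateType.InvokeStatic("ReduceDigits", 10);
        Assert.AreEqual(1, result);
    }
}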

A Dev’s First NuGet Package™: A Short Story

So, I killed literal hours today trying to push my first NuGet package. It isn’t much currently; it just has two extension methods for the System.DateTime object. You can find it here.

What this post aims to do is to fill the gaps that I lost hours of my life over, so that someone else doesn’t have to do the same thing. I suffered the heartache of it, let me prevent that same heartache from happening to you, yeah? 🙂

So, first things first, you need a DLL. It needs to do something. It doesn’t have to be fancy, like computing the potentiality of the quantum state of spin or anything like that. It just… …shouldn’t be an empty class and should contribute something that you think would save someone ‘x’ amount of time doing (repeatedly).

O.k., now that you have your project done, you’ll want to head over to NuGet and create an account. This just makes one of the later steps a lot easier to perform.

While you’re there, download the latest and greatest NuGet command-line tool. Copy the exe from your downloads folder to your project’s folder to make life easier for these next steps.

Open your choice of terminal, command or PowerShell, and navigate to where your project is. Now, run the following command, replacing the project name with the project you’ve made.

nuget.exe spec myProject.csproj

This will generate the nuspec file, which contains a list of specifications for your NuGet package.

Before you continue, open your AssemblyInfo.cs file and make the necessary changes. (Hint: When you want to bump your NuGet version, you can do that through this file).
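For example, the relevant attributes in AssemblyInfo.cs look something like this (the version numbers are placeholders); nuget pack picks the package version up from here via the $version$ token:

// AssemblyInfo.cs – bump these to bump the NuGet package version.
[assembly: AssemblyVersion("1.0.1.0")]
[assembly: AssemblyFileVersion("1.0.1.0")]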

In your nuspec file that was created, add the files section like below but targeting the .NET version of your specific assembly.

<files>
    <file src="..\..\Project\bin\Release\Assembly.dll" target="lib\net472" />
    <file src="..\..\Project\bin\Release\Assembly.pdb" target="lib\net472" />
</files>

Also, it’s important to pick the license that’s good for your needs, so head over to SPDX’s site to see which one fits your end-goals. Once you find the license you want to use, modify your nuspec file to something like the following, specifying your chosen license instead.

<license type="expression">MPL-2.0-no-copyleft-exception</license>
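Putting it together, a rough sketch of the overall nuspec shape (the $token$ placeholders are what nuget spec generates; the paths and license are just the examples from above):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <authors>$author$</authors>
    <description>$description$</description>
    <license type="expression">MPL-2.0-no-copyleft-exception</license>
  </metadata>
  <files>
    <file src="..\..\Project\bin\Release\Assembly.dll" target="lib\net472" />
    <file src="..\..\Project\bin\Release\Assembly.pdb" target="lib\net472" />
  </files>
</package>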

When everything’s as you want it, go back to your prompt and run the following command:

nuget.exe pack myProject.csproj

This will generate the nupkg file.

Now that you have the nupkg file, go back to NuGet and, using your account that you created earlier, upload the nupkg that you just created to NuGet.

…and that’s it, you have now published an assembly to NuGet for the world to consume. 🙂

Apt-Get: Viewing Release Notes on Packages Updated and/or Installed

Subjectively speaking, you might want to see release notes for packages, whether you’re installing them or updating them. (If this isn’t down your avenue of caring, or you’re looking for a more exciting post, then you can probably just skip the rest of this altogether. True story.)

apt_get

Anyways, if you use the terminal to update your machine, there’s a Debian package (sorry, other flavours – I haven’t dug into it, but maybe you have an equivalent?) that – when installed – will configure apt-get to automatically prompt you with the release notes for any package that you update or install.

Of course, the caveat is that when you’re upgrading (e.g. apt-get upgrade), the release notes are shown just before the packages are installed.

The package, in question, is called apt-listchanges. You can see screenshots of it in action, here.

Install it as you would any other package and that’s it. 🙂
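For example, on a Debian-based system:

sudo apt-get install apt-listchanges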

linux_your_grandma_could_do_it

Azure: Enabling Function Apps to Access Microsoft Graph

With Azure Functions, it’s even easier to code against Azure; however, with such portability come a few growing pains. It used to be (and still is, frankly) that you had to register a full-blown application in Azure Active Directory to grant your application read access to other resources, such as Microsoft Graph. However, with these new features, we can bypass a lot of the old-school red tape to get what we need up and running.

The first thing that you’ll want to do in your Azure Function App is to enable its Managed Identity. Keep in mind that all of the functions within your Azure Function App resource will share this managed identity, so this could present security implications as you roll out more functions in a shared space.

In the Azure Portal, go to your Azure Function App, pick the specific Function App in question, go to Platform Features, select Identity and then – under System assigned – toggle the Status to On. Save it, wait for the operation to complete and then refresh the page. Copy the Object ID, as we’ll be using it to grant the Functions within the Function App access to Microsoft Graph.

AzureFunctionAppManagedIdentity

Now that we have a service principal (via managed identity), we need to grant it permissions to the Graph Application, itself, at the tenant level.

Unfortunately, we can’t – at present – do this via the Azure Portal, so we’ll need to use the Azure Active Directory PowerShell module, and we need to decide which permissions we want to grant to our Azure Function App.

Connect as you normally would in your tenant and then run the following commands, replacing the redacted object id value with the one that you copied from the portal in the steps above and changing the target role value to be whichever roles in Graph that you want to assign (remember: this is at the Function App level and not individualised for specific functions within the Function App). In this case, I’m after reading the Security Alerts from Microsoft Graph, so this is the only role that I’ll be targeting. YMMV.

$objectId = [REDACTED]
$graph = Get-AzureADServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
$targetRole = $graph.AppRoles | Where{$_.Value -like "*SecurityEvents.Read.All*"}
$msp = Get-AzureADServicePrincipal -ObjectId $objectId
New-AzureADServiceAppRoleAssignment -Id $targetRole.Id -PrincipalId $msp.ObjectId -ObjectId $msp.ObjectId -ResourceId $graph.ObjectId

If all goes according to plan, then you should see output similar to the below.

ObjectId                                    ResourceDisplayName PrincipalDisplayName
--------                                    ------------------- --------------------
ibpZvR05qk2N7TMPftyjXxh14LgZiq1Fo41w-kdjaYA Microsoft Graph     [REDACTED]

Now, you can leverage Microsoft Graph in the Functions contained within your Azure Function App for whichever permission[s] you just granted the Function App.
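As a hedged sketch of what consuming that permission might look like from C# inside a Function (this assumes the Azure.Identity package; the class name is made up):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

public static class SecurityAlertsClient
{
    private static readonly HttpClient client = new HttpClient();

    public static async Task<string> GetAlertsAsync()
    {
        // The Function App's managed identity supplies the token – no client secret required.
        var credential = new ManagedIdentityCredential();
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://graph.microsoft.com/.default" }));

        var request = new HttpRequestMessage(HttpMethod.Get, "https://graph.microsoft.com/v1.0/security/alerts");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

        // SecurityEvents.Read.All is the role granted above, so reading the tenant's alerts is permitted.
        HttpResponseMessage response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}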

Happy dev’ing! 🙂


DLNA: Network Shares Cause Blocking Threads in Windows Explorer During Copying Events

TL;DR – DLNA shares have a bug which can cause Explorer to stop copying data across the share, nigh indefinitely. The easiest work-around is to restart Windows Explorer, delete the target file in the previous copy operation, and start anew.

I’ve discovered a bug that I can’t seem to get addressed because the assembly isn’t publicly documented, anywhere, but I figured that I would write about what happens to explain it to those of you who run into it.

First, we need to cover what DLNA (Digital Living Network Alliance) is. (Wiki article is here.) DLNA is a standard by which multiple media libraries can be accessed for sharing/streaming, without needing a proprietary library to communicate between them.

Plex Media Server is one such media streaming service built using the DLNA libraries for sharing resources. One of the DLNA features, in Windows, is that it appears as a Network Share/Location in Windows Explorer.

So, we have a Plex Media Server and it’s serving DLNA. The media on the server is browsable, as if it were a dedicated network share. If we treat it as such and copy from one location to another, this is when this particular bug surfaces.

Microsoft has implemented DLNA in x86 and x64 processes via assemblies included in Windows. In particular, for this bug, we care about the mfnetcore.dll assembly, which can be found in either the System32 or the SysWOW64 folder.

Here’s a dump of the stack during repro:

81 TID:0e60 kb kbn kbnL kn knL kpn kPn
# Child-SP RetAddr Call Site
00 00000000`2fc8eb18 00007ffb`272683d3 ntdll!NtWaitForSingleObject+0x14
01 00000000`2fc8eb20 00007ffa`e0845d59 KERNELBASE!WaitForSingleObjectEx+0x93
02 00000000`2fc8ebc0 00007ffa`f135812f mfnetcore!MFGetSupportedDLNAProfileInfo+0xaa69
03 00000000`2fc8ec60 00007ffb`27ba03ed mfplat!CStreamOnMFByteStream::Read+0xef
04 00000000`2fc8ecb0 00007ffb`27ad01c5 windows_storage!SHCopyStreamWithProgress2+0x1ad
05 00000000`2fc8ed90 00007ffb`27ad03ce windows_storage!CCopyOperation::_CopyResourceStreams+0x89
06 00000000`2fc8ee00 00007ffb`278e50df windows_storage!CCopyOperation::_CopyResources+0x17e
07 00000000`2fc8eea0 00007ffb`276f784f windows_storage!CCopyOperation::Do+0x1b5cbf
08 00000000`2fc8efa0 00007ffb`276f5d4f windows_storage!CCopyWorkItem::_DoOperation+0x9b
09 00000000`2fc8f080 00007ffb`276f657a windows_storage!CCopyWorkItem::_SetupAndPerformOp+0x2a3
0a 00000000`2fc8f370 00007ffb`276f2f1e windows_storage!CCopyWorkItem::ProcessWorkItem+0x152
0b 00000000`2fc8f620 00007ffb`276f3907 windows_storage!CRecursiveFolderOperation::Do+0x1be
0c 00000000`2fc8f6c0 00007ffb`276f33d6 windows_storage!CFileOperation::_EnumRootDo+0x277
0d 00000000`2fc8f760 00007ffb`276fd25c windows_storage!CFileOperation::PrepareAndDoOperations+0x1c6
0e 00000000`2fc8f830 00007ffb`2874c525 windows_storage!CFileOperation::PerformOperations+0x10c
0f 00000000`2fc8f890 00007ffb`2874acf0 shell32!CFSDropTargetHelper::_MoveCopyHIDA+0x269
10 00000000`2fc8f940 00007ffb`2874d517 shell32!CFSDropTargetHelper::_Drop+0x220
11 00000000`2fc8fe20 00007ffb`29b6c315 shell32!CFSDropTargetHelper::s_DoDropThreadProc+0x37
12 00000000`2fc8fe50 00007ffb`2ab17974 SHCore!_WrapperThreadProc+0xf5
13 00000000`2fc8ff30 00007ffb`2aeba271 kernel32!BaseThreadInitThunk+0x14
14 00000000`2fc8ff60 00000000`00000000 ntdll!RtlUserThreadStart+0x21

Note that the frames we generally care about are the ones at the very top of the stack (the wait in mfnetcore and the kernel wait above it); those are the last instructions executed on the thread. In this case, we’re waiting on a response from the request to get the supported DLNA profile information, which is demonstrated by the wait on an object at the top of the stack. Essentially, we have an open, blocking request that has never completed, and the thread will have to die to unblock the request.

We can see the block happen on other native threads. Specifically, in the dump that I created, there were three threads with the same stacks, shown as below.

97 TID:3cdc kb kbn kbnL kn knL kpn kPn
# Child-SP RetAddr Call Site
00 00000000`3448f9e8 00007ffb`299bf5cd win32u!NtUserMsgWaitForMultipleObjectsEx+0x14
01 00000000`3448f9f0 00007ffa`f88e2cfd user32!RealMsgWaitForMultipleObjectsEx+0x1d
02 00000000`3448fa30 00007ffa`f88e2c24 duser!CoreSC::Wait+0x75
03 00000000`3448fa80 00007ffb`299d05d1 duser!MphWaitMessageEx+0x104
04 00000000`3448fae0 00007ffb`2aef33c4 user32!_ClientWaitMessageExMPH+0x21
05 00000000`3448fb30 00007ffb`26f51224 ntdll!KiUserCallbackDispatcherContinue
06 00000000`3448fb98 00007ffa`fad54913 win32u!NtUserWaitMessage+0x14
07 00000000`3448fba0 00007ffa`fad547a9 explorerframe!CExplorerFrame::FrameMessagePump+0x153
08 00000000`3448fc20 00007ffa`fad546f6 explorerframe!BrowserThreadProc+0x85
09 00000000`3448fca0 00007ffa`fad55a12 explorerframe!BrowserNewThreadProc+0x3a
0a 00000000`3448fcd0 00007ffa`fad670c2 explorerframe!CExplorerTask::InternalResumeRT+0x12
0b 00000000`3448fd00 00007ffb`2785b58c explorerframe!CRunnableTask::Run+0xb2
0c 00000000`3448fd40 00007ffb`2785b245 windows_storage!CShellTask::TT_Run+0x3c
0d 00000000`3448fd70 00007ffb`2785b125 windows_storage!CShellTaskThread::ThreadProc+0xdd
0e 00000000`3448fe20 00007ffb`29b6c315 windows_storage!CShellTaskThread::s_ThreadProc+0x35
0f 00000000`3448fe50 00007ffb`2ab17974 SHCore!_WrapperThreadProc+0xf5
10 00000000`3448ff30 00007ffb`2aeba271 kernel32!BaseThreadInitThunk+0x14
11 00000000`3448ff60 00000000`00000000 ntdll!RtlUserThreadStart+0x21

Work-arounds: This bug is pretty ugly and there aren’t a whole lot of work-arounds for it. One could wait for the lifetime of the thread to cause an abort, which could be a considerable amount of time. The work-around that I typically opt for is to restart the Windows Explorer process via Task Manager, delete the partially copied file and try the copy again. Sure, it takes time, but it’s a far lower cost, time-wise, than waiting for the thread to become unblocked due to a timeout.