Misleading CORS exception turns out to be an IIS issue

Last week I got stuck on a deployment issue which took me about a day to resolve, so I think it is worth sharing with you. For our next software release we developed an AngularJS application with an ASP.NET Web API backend and token-based OAuth2 authentication. Local execution worked like a charm and the deployment to the first test server wasn’t an issue at all. But on the second test server we kept getting a CORS exception like this:

No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘http://localhost:7812’ is therefore not allowed access.

Endless hours of web research followed, but nothing seemed to help: neither the various strategies for enabling CORS in the backend nor fiddling around with the web.config. After comparing the IIS settings of the second test server with those of the first, working test server three times (!) I finally found the difference that caused the error above:

 

[Figure: .NET Authorization Rules on the first test server, set to "All Users"]

[Figure: .NET Authorization Rules on the second test server, set to "Anonymous Users"]

 

Well, the two settings look very similar, but they aren’t the same! Setting the authorization rule to “Allow All Users” finally solved the problem. Yet another example of how a misleading error message can cost you a valuable day of work!
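To compare this setting between two servers without clicking through the IIS Manager, here is a minimal PowerShell sketch that dumps the effective .NET Authorization Rules. It assumes the IIS WebAdministration module is available and uses “Default Web Site” only as a placeholder, so point the path at your own application:

# List the .NET Authorization Rules (system.web/authorization) of a site.
Import-Module WebAdministration

Get-WebConfiguration -Filter "system.web/authorization/*" `
                     -PSPath "IIS:\Sites\Default Web Site" |
    Select-Object ElementTagName, Users, Roles |
    Format-Table -AutoSize

Running this on both test servers would have revealed the difference a lot faster than comparing the settings in the IIS Manager three times.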

How to Solve the TFS 2015 HTTPS Push Limit Size Problem

This is a guest post from Marco Röösli.

We had the problem that our Git clients were not able to push big files to the TFS Git server over HTTPS. It seems that some Git clients have a problem with the cipher suite tls_rsa_with_aes_256_cbc_sha.

Although it is possible to configure Git clients to force a different cipher suite, this is not the preferred solution because every single client would have to be updated.

The IIS server on which TFS runs defines the order in which cipher suites are offered. To avoid touching the Git configuration on each client, you can change this order on the server.

This approach worked for us:

Change the cipher suite order: tls_rsa_with_rc4_128_sha must come before tls_rsa_with_aes_256_cbc_sha, and all cipher suites that are more secure than tls_rsa_with_aes_256_cbc_sha must come before tls_rsa_with_rc4_128_sha.

Most Git clients do not support cipher suites that are more secure than tls_rsa_with_aes_256_cbc_sha. Therefore, tls_rsa_with_rc4_128_sha will be negotiated for Git clients, which removes the push size limitation.

Most browsers, on the other hand, do support more secure cipher suites than tls_rsa_with_aes_256_cbc_sha, so browsing TFS will still use the most secure cipher suite available.

Caution! In order for these changes to take effect, a reboot of the TFS server is required!

You can change the cipher suite order using the IIS Crypto tool (https://www.nartac.com/Products/IISCrypto):

IISCrypto

Or you can set it directly in the Windows registry:


[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002]
"Functions"="TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P521,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P521,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA"

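If you prefer scripting the change, here is a minimal PowerShell sketch that writes the same value (run it from an elevated prompt; the reboot mentioned above is still required afterwards):

# Policy key that holds the cipher suite order used by SCHANNEL/IIS.
$path = 'HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002'

# Same order as in the registry snippet above.
$suites = @(
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P521'
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384'
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256'
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521'
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384'
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256'
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P521'
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P384'
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256'
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521'
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384'
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256'
    'TLS_RSA_WITH_AES_256_GCM_SHA384'
    'TLS_RSA_WITH_AES_128_GCM_SHA256'
    'TLS_RSA_WITH_AES_256_CBC_SHA256'
    'TLS_RSA_WITH_RC4_128_SHA'
    'TLS_RSA_WITH_AES_256_CBC_SHA'
    'TLS_RSA_WITH_AES_128_CBC_SHA256'
    'TLS_RSA_WITH_AES_128_CBC_SHA'
    'TLS_RSA_WITH_3DES_EDE_CBC_SHA'
) -join ','

# Create the key if it does not exist yet and write the REG_SZ value "Functions".
New-Item -Path $path -Force | Out-Null
Set-ItemProperty -Path $path -Name 'Functions' -Value $suites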
Visual Studio 2015: Restore NuGet Packages With Build Script

With Visual Studio 2012 and 2013 you were able to enable the NuGet Package Restore within the context menu of the solution node:

VS2013

With Visual Studio 2015 this menu item is missing:

VS2015

When NuGet Package Restore was enabled, the packages were restored even when the solution was built outside Visual Studio, for example with MSBuild.exe inside a build script.

Since this feature is no longer available in the newest version of Visual Studio, you have to do an extra step in your build script. My build scripts always look like this:

$baseDir = Resolve-Path ..
$scriptsDir = "$baseDir\scripts"
$sourceDir = "$baseDir\source"
$nugetCommand = "$sourceDir\.nuget\NuGet.exe"
$solutionFile = "$sourceDir\[YourSolution].sln"

# Run the clean script, then restore the NuGet packages before building.
& "$scriptsDir\clean.ps1"
& $nugetCommand restore $solutionFile

msbuild $solutionFile /target:Rebuild /p:Configuration=Release

To make this work, you have to create a new folder named .nuget in the solution directory and copy a version of NuGet.exe into it, just as it would be there if you had created the solution with Visual Studio 2013.
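If the folder does not exist yet, a small sketch like the following can create it and download NuGet.exe; the URL points at the official NuGet distribution location, adjust it if you want to pin a specific version:

# Create the .nuget folder next to the solution and fetch NuGet.exe into it.
$nugetDir = "$sourceDir\.nuget"
New-Item -ItemType Directory -Path $nugetDir -Force | Out-Null

Invoke-WebRequest -Uri 'https://dist.nuget.org/win-x86-commandline/latest/nuget.exe' `
                  -OutFile "$nugetDir\NuGet.exe"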

Surface Pro 3 Keyboard not working. What to do?

SurfaceTypeCover
This week I encountered a problem with my Surface Pro 3: the Type Cover was not working anymore. After a bit of googling I found the solution that worked for me. Here’s a step-by-step guide:

 

  • Shut down your Surface.
  • Press and hold the Volume Up and Power buttons for at least 15 seconds.
  • The screen may flash the Surface logo or other images, but keep holding the buttons for the full 15 seconds.
  • When you release them, your Surface should be powered off. Wait at least 10 seconds and then turn it on normally.

Taken from http://www.lovemysurface.net/keyboard-problems-with-surface/
(See Step A – The Basics)

 

GitFlow, GitVersion, Octopack: Increment Beta Build Number for NuGet Packages

Last week we ran into a problem with our Octopus deployment: after a bug fix on the release branch we noticed that the new build had not automatically been deployed to our staging system. The reason was quickly found: we expected the NuGet package version to be incremented by one, e.g. Package.1.2.3-beta0002, but it was still Package.1.2.3-beta0001. Therefore, Octopus took no action since this particular version had already been deployed.

It turned out that the 4-digit number suffix does not increment automatically, no matter how many commits you make on the release branch. The same behavior applies to hotfix branches.

On the develop branch, however, the 4-digit number suffix is incremented on each commit, as I would expect it to be, since every commit changes something and the build artifacts are obviously not the same as before.

So why do the branches mentioned above behave differently? The very short and personal answer is: I don’t know! But at least I can tell you how to manipulate the number suffix: the trick is to use tags. Let me explain this with an example using GitFlow [1].

Gitflow2

Here are some comments:

Step 0
Initial commit on the master branch and creation of the develop branch.
The unstable number suffix starts at 0000 since no commit has been made on the develop branch yet. So far so good…

Steps 3, 9, 17
Branching of the release and hotfix branches. The number suffix starts at 0001. Here I would expect 0000 since no commit has yet been made on these branches (see step 0).

Steps 6, 8, 14, 19
Creation of new tags. This leads to the desired suffixes.
Please note: without the tags, the suffixes would remain the same as before!

Step 7
Interestingly, the suffix is incremented automatically here, but only once!

Steps 10, 11, 12, 16, 17
After merging the release branch into develop I would expect the suffix to be reset to 0000, since no ‘real’ commit has yet been made on the develop branch; all suffixes should therefore be interpreted as the suffix value minus 1. But that’s my personal interpretation… The important thing is that the suffix is incremented on every commit, which is indeed the case.

Example setup
In order to extract the version numbers, I created a very simple Git repo with a text file and the latest stable version of GitVersion.exe [2]. In the text file I incremented an arbitrary number so that I had something to commit. The version I used is the one labeled “NuGetVersion”:

Gitflow
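For reference, here is a minimal PowerShell sketch of the tag trick on the command line. It assumes git and GitVersion.exe are on the PATH, and the branch name and version numbers are purely illustrative:

# Create a release branch from develop.
git checkout -b release/1.2.3 develop

# A commit alone does not bump the 4-digit suffix...
git commit --allow-empty -m "bug fix on the release branch"
(GitVersion.exe | Out-String | ConvertFrom-Json).NuGetVersion   # e.g. 1.2.3-beta0001

# ...but tagging the commit does.
git tag 1.2.3-beta0002
(GitVersion.exe | Out-String | ConvertFrom-Json).NuGetVersion   # e.g. 1.2.3-beta0002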

I didn’t test the whole workflow with Octopack [3], but I had noticed before that the beta build number suffix wasn’t incremented there either.

In summary, I’m still a big fan of the GitFlow model and of tools like GitVersion.exe and Octopack, even though there are some weird cases with automatic versioning. If you have run into similar problems, I hope this post is useful to you. If you even know a good reason for the versioning behavior described above, please write it in the comments below!

References
[1] Gitflow
Gitflow Workflow by Atlassian
Introducing Gitflow by Datasift
A successful Git branching model by Vincent Driessen
Gitflow Cheat Sheet by Daniel Kummer

[2] GitVersion.exe
Introduction page by ParticularLabs
GitHub repository

[3] Octopack
Documentation by Octopus Deploy
GitHub repository

Don’t forget the subscription store

If you develop or maintain a distributed, message-driven solution with NServiceBus or any other kind of Pub/Sub infrastructure, this might be interesting for you.

Yesterday we spent our sprint retro discussing an issue we encountered during the last release installation on the production environment. We had lost some business-critical events, and as a result some invoices were not delivered to the customer. We only noticed it when a customer complained about an invoice not being archived.

Here’s a picture of what happened during the installation:

MovingSubscriptions

As you can see, we moved the subscription store of the invoicing component. Since the new subscription store was empty and the invoicing component was started first, it already produced invoices and published delivery notification events although no component had subscribed to them yet.

In a Pub/Sub environment it is normally not a problem to have no subscribers at all, so NServiceBus didn’t complain about this fact.

The following scenarios can get you into this kind of trouble:

  • Move endpoint to another machine
  • Move the subscription store
  • Introduce a new event

Therefore, you have to be aware of the startup order or make sure that all subscribers have been added to the subscription store manually!

To recover the lost messages we took the following approach: since the event payload data (the invoices) was still available in the blob store and we knew that there was only one subscriber to the invoiceProcessed event, we could republish the missing events from a temporary endpoint. With a distributed queueing technology such as MSMQ, republishing the same type of event from a different endpoint is not that easy because an event always belongs to a single endpoint. So we had to cheat a little:

Instead of (re)publishing the events, we treated them as commands and sent them to the delivery endpoint. But NServiceBus enforces its messaging best practices, which do not allow you to send an event in the first place. Luckily, you can override this check with the following code:

// Removes the send validator from the NServiceBus outgoing pipeline so that
// an event can be sent like a command.
public class DisableValidatorBehaviorInPipeline : PipelineOverride
{
    public override void Override(BehaviorList<SendLogicalMessageContext> behaviorList)
    {
        // SendValidatorBehavior is the behavior that enforces the
        // "do not Send events" best practice.
        behaviorList.Remove<SendValidatorBehavior>();
    }
}

 

Lessons learned

  • Think about events that have only one consumer: they are probably meant to be commands, because events usually cannot be republished.
  • Although NServiceBus guarantees the delivery of messages, be careful about subscriptions when you deploy new versions of your endpoints or manually manipulate the subscription store.

(This post is a co-production of @FabianTrottmann and me)