Tuesday, September 18, 2018

Architecture Decision Record (ADR)


Software Engineering is a relatively new field of engineering. There are still debates about whether it is engineering or art. Regardless, it needs an Architecture similar to other engineering disciplines. Unlike the other fields, the main challenge software architects face is to make sure the delivered code is in line with the Architecture. With the advent of Agile, which is very difficult to practice in other fields, finalizing Architecture in software is really challenging. If we religiously finish the architecture before coding, some competitor might have taken over the market by then.

But still we need to document the architecture, even if it happens after the release. Don't laugh; it is needed at least for future reference. Software, unlike the other fields, is change friendly. It evolves really fast. There are different ways to document architecture: UML diagrams, the new trend of the C4 architecture model, etc. Even if we create beautiful architecture diagrams of delivered software, the problem is that it is very difficult to document why we took the decisions. If we get software written in Silverlight, we should understand in the first place why that technology was selected. Why are there WCF web service calls instead of REST services? Nothing happens without a reason in software development, so it is good if that reason can be recorded for future developers.

We can sit and write a beautiful document around diagrams and add the decisions. But it is really boring, and it becomes obsolete immediately as the software evolves. So what is another approach?

ADR, i.e. Architecture Decision Records, in its simple form can be interpreted as the adoption of agile into documentation. Below is one good article about that practice.

Recently ThoughtWorks brought it to mainstream attention. They call it Lightweight ADR. Yes, in the world of agile everything has to be lightweight, or at least in the name. As per their past history, they promote a practice only after they have tried it in the field. There are more references about ADR included in the References section of this post.

Contents of ADR

When we adopt ADR into a project, the first thing to decide is what contents are needed in the ADR. The main problem is to keep it lightweight. If we add all the diagrams, meeting minutes, etc., it will be another documentation nightmare. So we have to choose which fields to include. The below link summarizes many formats.


Format of ADR

Nowadays developers even write official letters in markdown. It got that much attention due to its support in open source communities such as GitHub. So without any confusion, the ADR can use markdown.

Where to keep the ADR

Another question is where to keep the ADR. Since it is a small textual representation, it can be inside a shared folder, SharePoint, or even email. But if we keep the ADR in a place other than the source code, it may not help us in the future. If the ADR is with the source, wherever the source goes, the ADR goes too. Today it can be TFS; tomorrow it can be Git. Sometimes companies open source via GitHub.

Since ADRs don't have any relevance without source code, the better place is with the code.

Open source

Nowadays GitHub is synonymous with open source. It supports markdown in the wiki as well as in source. Let's see the differences in keeping ADR in the wiki vs. in source.

ADR in wiki

The wiki is independent of the code. The main problem is that when we branch to develop a new feature, we cannot have the ADRs needed for that feature development inside the branch. We can work around this by many means, but it is still a little difficult. The advantage is easy editing: no need to check out, commit, and push to get some changes done.
Below is one example of keeping ADR in the wiki.

ADR in Source

The opposite way is to keep the ADR with the source. When we branch, the ADR comes with us. If we are overriding any of the architecture, we can document it there. The pull request can include the ADR changes, which tells the reviewer that something fundamental happened due to this feature.


Naming of ADR

The main purpose of naming is to distinguish the ADRs. We can either keep the ADR files inside a folder called ADR or prefix the file names. Similarly, we can number them sequentially and keep that number in the file name or inside the contents. Right now there doesn't seem to be a standard; hopefully something will evolve soon, similar to Swagger for APIs.
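For example, a sequentially numbered, prefixed layout (the folder and file names below are purely illustrative, not a standard) could look like:

```
docs/adr/
  0001-record-architecture-decisions.md
  0002-use-markdown-for-adrs.md
  0003-keep-adrs-with-source.md
```

The number prefix keeps the files sorted in decision order, and the slug makes each decision findable at a glance.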

Some real world usage

Below is one real world usage. The ADRs are kept in the below location.

Rendered as below in the documentation.

How I implemented it

My open source projects have started adopting ADR. Below is one example ADR.

Rendered as

Only 6 fields are used, to keep it (or at least call it) lightweight.
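The exact fields differ from project to project; a minimal markdown ADR along the lines of Michael Nygard's well known template, with 6 illustrative fields (not necessarily the exact ones used in my projects), could look like:

```markdown
# 0001 - Use ServiceBus Queue instead of Storage Queue

* Status: Accepted
* Date: 2018-09-18

## Context
Enterprise security standards demand encryption in transit and key rotation with expiry.

## Decision
Use Azure ServiceBus Queue for the queued back end processing.

## Consequences
SAS keys have to be rotated; WebJobs read the current SAS from KeyVault.
```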



Tuesday, September 11, 2018

Functional Programming - Randomize IEnumerable

.Net has IEnumerable to represent a sequence. Though it is not advertised as a functional helper, we can use IEnumerable to get really clean functional programming in .Net. It has so many methods to manipulate and select elements, but it lacks a mechanism to take random elements from the sequence. Below is one extension method which gives us somewhat random elements from an IEnumerable sequence.

public static IEnumerable<TResult> Randomize<TResult>(this IEnumerable<TResult> source)
{
    return source
        .Select(sourceItem => new
        {
            Item = sourceItem,
            Id = Guid.NewGuid()
        })
        .OrderBy(t1 => t1.Id)
        .Select(t1 => t1.Item);
}

How to use the above?

IEnumerable<int> input = new List<int>() { 1, 2, 3, 4 };
int randomElement = input.Randomize().FirstOrDefault();

As seen in the source, the randomization depends on GUID generation. If the GUIDs are generated in increasing order, the randomization will not work.

The advantage of this method is that it randomizes as a lazy collection; nothing is shuffled until the sequence is enumerated.

Nuget support

The above is available as a nuget package. Below is the URL.


Tuesday, September 4, 2018

PowerShell to get list of email addresses from company AD

Often we may need to send mail to everyone in the company as an announcement, or to request urgent help, etc. Normally companies have a group mail address for such things, but even if there is none, we can easily get the list of all email addresses.

First, get the OU and DC details of your AD. If there is confusion about what OU and DC are, refer to the details here. It is better to search using your own email id to get the OU and DC details.

Get-ADUser -Filter 'EmailAddress -like "<your email address>"'

This will give the details in the DistinguishedName property. Now fill that information into the below script and run it.
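The DistinguishedName comes back in a form like the below (the names here are made up for illustration):

```
DistinguishedName : CN=John Doe,OU=Employees,DC=contoso,DC=com
```

The OU and DC parts are what go into the $container value of the next script.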

$container = "OU=<your OU>,DC=<DC>,DC=<DC>"

Get-ADUser -Filter * -SearchBase $container | `
select -Property UserPrincipalName | `
Export-Csv -Path "<path>.csv"

This exports the email addresses to the mentioned csv file. It is interesting to see that the UserPrincipalName has the email. If the email is kept separately, modify the script to select the proper attribute, e.g. `select -Property EmailAddress`.

Happy Scripting...

Tuesday, August 28, 2018

Another .Net helper library via nuget package system


Over the past 13 years, to be precise from Nov 2005 till today, I have written a lot of .Net code for my day job as well as for personal projects. When I started .Net, I thought, yes, I will master it and enjoy the rest of my career. But I soon realized that was not going to work, with the collapse of Silverlight. Microsoft was telling, or people were arguing, that Silverlight would not die as Microsoft was using it for their Azure portal. All of a sudden MSFT replaced the Azure Silverlight site with HTML, and that was kind of the last nail for Silverlight. More details can be found in my last post in the SilverlightedWeb blog, which is a read-only blog now. Then I thought the Silverlight technology's end was inevitable as it was replaced by HTML5, but .Net would live long.

That thought got shaken when MSFT released their so-called code editor, now becoming a full fledged IDE, named VS Code. It didn't use WPF, which was the star of desktop programming from MSFT at that time. Instead it used Electron from GitHub, which depends on Chrome: yes, the browser from Google powering the web. Essentially we develop a browser application and ship it as a standalone executable. That was the time I said goodbye to WPF technology. More details here. Then what's left? Only ASP.Net, which was and still is struggling to compete with NodeJS. Don't bring Windows Phone here, as that is one of the very few things MSFT properly shut down. No idea how long something called UWP will live.

.Net Core

Finally something came named .Net Core. It's like Angular 1.x and 2: only the name is the same; internally it is almost all new. That is what MSFT fans are now betting on as the return of .Net. It is advertised as another true cross platform runtime which will run on Linux! Yes, it's the second cross platform .Net; the original one was also advertised as cross platform, with the intermediate language and JIT similar to the JVM ecosystem.

Another factor is performance. .Net Core is expected to beat NodeJS in serving http responses. There are case studies where people claim it is faster, such as on Bing.

Other areas where .Net was weak are AI, Machine Learning, distributed computing, etc. Now the ML.Net SDK has also been announced.

Yes, it may be faster and may become more powerful than Python for AI programming. But will this technology be enough to feed my family in the future?

So what is next?

Personally I don't see a bright future for .Net unless .Net Core becomes a big hit. So it is better to reduce focus on .Net and seriously consider other technologies as well: Electron for desktop development, Angular + NodeJS for web front ends, Scala for distributed programming, etc.

But what should I do with all the .Net knowledge acquired over the past 13 years, as I still have hope for a huge return of .Net Core?

Offload it from the brain and move on. The better place to offload code level techniques is a nuget package at this point. I could have added the helper classes to my first nuget package, but unfortunately that was aimed at a specific problem of Orchestration. Hence I had to start another nuget library for my helper classes and coding techniques. Link to the GitHub repo below.


This library uses the multi targeting feature so that one code base can be compiled to multiple targets. This is especially useful for providing libraries for .Net Core.
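For reference, multi targeting boils down to a TargetFrameworks entry in the csproj; a minimal sketch is below (the exact target monikers used in the repo may differ):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- One code base compiled for both classic .Net and .Net Core consumers -->
    <TargetFrameworks>net45;netstandard2.0</TargetFrameworks>
  </PropertyGroup>
</Project>
```

`dotnet build` then produces one assembly per target, and `dotnet pack` puts them all into a single nuget package.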

Thanks to AppVeyor for giving free CI & CD support to publish to the nuget repo.

Why I didn't join other helper nuget libraries is described in the readme of the repo.

Tuesday, August 21, 2018

PowerShell to create databases from backup

Below is a PS code snippet to restore many databases from one DB backup, useful if we need to load test. There could be version mismatch issues if we directly use the RelocateFile class, so use the small hack shown in the snippet.

1..100 | foreach {

    $sqlServerSnapinVersion = (Get-Command Restore-SqlDatabase).ImplementingType.Assembly.GetName().Version.ToString()

    $assemblySqlServerSmoExtendedFullName = "Microsoft.SqlServer.SmoExtended, Version=$sqlServerSnapinVersion, Culture=neutral, PublicKeyToken=89845dcd8080cc91"

    $RelocateData = New-Object "Microsoft.SqlServer.Management.Smo.RelocateFile, $assemblySqlServerSmoExtendedFullName"('<Data file group name in backup>', "<data folder>\$_.mdf")
    $RelocateLogs = New-Object "Microsoft.SqlServer.Management.Smo.RelocateFile, $assemblySqlServerSmoExtendedFullName"('<Log file group name in backup>', "<Log folder>\$_.ldf")
    $RelocateFG = New-Object "Microsoft.SqlServer.Management.Smo.RelocateFile, $assemblySqlServerSmoExtendedFullName"('<any more file group name eg File stream>', "<data folder>\$_")

    Restore-SqlDatabase `
        -ServerInstance "<server name>\<instance>" `
        -Database "<db name>$_" `
        -BackupFile "<path to bak file>" `
        -RelocateFile @($RelocateData,$RelocateLogs,$RelocateFG)

    "Restored $_ DB"
}

Some points

  • Change the value 100 to the number of databases needed. 
  • The new DBs will be created with the number suffixed to their names. 
  • Relocating the files is required; otherwise there could be a collision with the previous database's files.

Tuesday, August 14, 2018

Functional Programming - Finding valleys

If we were from the ASP.Net Web Forms world and tasted ASP.Net MVC at least once, it is very difficult to go back. It is similar when we go from the imperative world to the functional programming world. This post is a continuation of the Functional Programming series; the advantages of FP have already been discussed in earlier posts.
The problems are mainly taken from HackerRank, and we try to solve them using FP methods. The main intention is to understand how to solve functionally the problems which are otherwise considered 'solvable only imperatively'.

The Problem

The input is a sequence of D and U characters. D means one step down and U means one step up. A person named Gary is hiking, starting from sea level and ending at sea level. We need to find out how many valleys he covered. If he is in a deep valley, climbs a small hill, and goes down again, it is counted as one valley. A detailed description can be found in the original link.

Traditional solution

Traditionally (FP has been around for long, but not mainstream), i.e. if someone comes from the imperative programming world, they see it as a state machine problem. Yes, there are states which Gary reaches after each step, and the state has to be mutated based on the rules.

In functional programming, mutation is not an appreciated word. So let's see how we can do this without mutating the state.

Functional way

Language used is JavaScript

function countingValleys(n, s) {
  var res = s.split('').reduce((context, value) => {
    if (value === 'D') {
      // Stepping down from sea level means entering a new valley
      if (context.s === 0) { return { v: context.v + 1, s: context.s - 1 }; }
      return { v: context.v, s: context.s - 1 };
    }
    // 'U' step: climb one level up
    return { v: context.v, s: context.s + 1 };
  }, {
    v: 0,
    s: 0
  });
  return res.v;
}
Here n means the number of steps and s is the input character sequence. If we enter this into the HackerRank browser based editor, with their auto generated wiring code, it returns the number of valleys covered.

How it runs

It uses the Fold concept of FP, which is implemented in JavaScript using the reduce() function. An initial state is given to the reduce function, and as it progresses through each element, it creates a new state (the context parameter holds the state) from the existing state with the required modification. Yes, as per FP, mutation is evil, but creating one object from another is not; each step is a transformation that assigns the values while creating the new object.
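As a smaller self-contained illustration of the same fold idea, the below reduce() builds a brand new accumulator object at each step instead of mutating the existing one (this snippet is only a sketch and not part of the HackerRank solution):

```javascript
// Count ups and downs in a step sequence without mutating the accumulator.
const steps = ['U', 'D', 'D', 'U'];
const counts = steps.reduce(
  (acc, step) => step === 'U'
    ? { up: acc.up + 1, down: acc.down }   // a new object; acc stays untouched
    : { up: acc.up, down: acc.down + 1 },
  { up: 0, down: 0 });
console.log(counts.up, counts.down); // 2 2
```

The same shape scales up to the valley problem: the accumulator just carries more fields (the valley count v and the sea level s).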

Adding some console.log lines inside the reducer will help to understand the flow when you run it.

Happy functional coding...

Tuesday, July 31, 2018

Azure @ Enterprise - Time bound SAS for WebJob to dequeue ServiceBus Queue messages


Enterprise in the 'Azure @ Enterprise' series refers to companies or projects which have stringent security measures and process guidelines that a normal developer may not think of, or that are not expected in the self service cloud world. Often the security measures are taken to make sure the responsibility is moved to some other party or vendor, so that people at the Enterprise are free from the low level details of those concerns.

If we are in non enterprise projects, developing queued back end processing in Azure is super cool with Azure Queue and WebJobs. But in the enterprise that is not the case: we have to make sure whether the service supports virtual networks, whether it supports encryption at rest as well as in transit, and if it encrypts, whether the enterprise can provide the key, etc. Basically the enterprise doesn't want to fully trust the cloud vendor, though in reality the vendor owns all and has full control.


An Enterprise may evaluate that the Azure ServiceBus Queue is better than the Azure Storage Queue as per current feature sets and the standards it uses to evaluate; it may also come out vice versa. If it demands key rotation with expiry for the ServiceBus connection string, that is a little difficult if we had used only a configuration (web.config or app.config) based connection string.

Even if it is not an enterprise project, it is good practice to rotate the keys with expiry as long as ServiceBus doesn't support hosting inside a vNet. If it supported hosting inside a vNet, the attack surface would be smaller; currently any Tom, Dick, or Harry can launch a brute force attack against the ServiceBus endpoints.

Key rotation will not remove the attack surface, but it reduces the possibility of a successful attack.



It is possible to have key rotation even with the attribute based WebJob functions, and we can have expiry on the keys as well. Let's see it step by step.

Generating time based SAS (Shared Access Signature)

It is supported in ServiceBus. In order to generate one, we need a Shared Access Policy. Normally when we create the SB instance, there will be a 'RootManageSharedAccessKey' policy with primary and secondary keys.
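For context, a ServiceBus SAS token generated from such a policy has the well known form below (placeholders, not real values); the se parameter carries the expiry time in Unix epoch seconds, which is what makes the token time bound:

```
SharedAccessSignature sr=<url-encoded resource URI>&sig=<HMAC-SHA256 signature>&se=<expiry epoch seconds>&skn=<policy name>
```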

Ideally this is supposed to be done outside of the main application, such as in Azure Automation code, and the generated time bound SAS has to be kept in KeyVault. This helps ensure that only the automation account knows the high privileged Shared Access Policy key, and the WebJobs know only the KeyVault secret name where the time bound Shared Access Signature is stored by the Automation Runbook. If the application has access to the KV, it can retrieve the SAS.

Since the code to do SAS rotation using an Azure Automation Runbook is not in the scope of this post, that code is omitted. If anyone requests it through the comments section, the code will be provided.

WebJob to read SAS and dequeue

Now let's come to the WebJob code side. Before we start the JobHost listening using RunAndBlock() on the thread, there is an option to override the connection string to the ServiceBus with one that uses the time bound SAS. Below goes the code.

private static void RunAndBlock()
{
    var config = new JobHostConfiguration();

    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }

    // Override the default connection string with the time bound SAS based one.
    config.UseServiceBus(ServiceBusConfigurationFactory.Get());

    var host = new JobHost(config);
    host.RunAndBlock();
}

static class ServiceBusConfigurationFactory
{
    /// <summary>
    /// Returns the SB configuration
    /// </summary>
    /// <returns></returns>
    /// <remarks>Change this to read from Azure KV</remarks>
    internal static ServiceBusConfiguration Get()
    {
        return new ServiceBusConfiguration()
        {
            ConnectionString = BuildSBConnectionString(GetTimeSensitiveSASTokenProvider().Get())
        };
    }

    private static string BuildSBConnectionString(string sharedAccessSignatureToken)
    {
        ServiceBusConnectionStringBuilder builder = new ServiceBusConnectionStringBuilder();
        builder.SharedAccessSignature = sharedAccessSignatureToken;
        builder.Endpoints.Add(new Uri("sb://<name of SB instance>.servicebus.windows.net/"));
        return builder.ToString();
    }

    private static ITimeSensitiveSASTokenProvider GetTimeSensitiveSASTokenProvider()
    {
        return new KVBasedTimeSensitiveSASTokenProvider();
    }
}

The code is mostly self explanatory. When the WebJob's main() starts for the first time, it gets the ServiceBusConfiguration from the factory class. The factory uses a provider class which knows how to talk to Azure KV or some other store where the SAS is present. Once the SAS is obtained, the connection string can be built from it.

One thing to remember is that the SAS is not the connection string; the connection string has to be built from the SAS, which is what ServiceBusConnectionStringBuilder does.

Things to remember when working with ServiceBus

API Collision

There are 2 dependencies when we work with Azure ServiceBus, Microsoft.Azure.ServiceBus.dll and Microsoft.ServiceBus.dll, and they both have classes with the same names. Sometimes it is very difficult to get the code compiled if it is downloaded as a snippet.

What if the token expires after RunAndBlock() is invoked?

At least in testing there were no issues with the dequeued messages. Every new instance of the WebJob process takes the new connection string.