Tuesday, October 16, 2018

Progressive Web Apps - Index


Progressive Web App (PWA) technology is expected to end the nightmare of developing platform-specific apps to achieve one purpose. Currently this is tough for mobile developers, as they have to develop separately for Android and iOS. Thanks to Microsoft for ending Windows phones; but there is still the Windows app store, where applications for Windows 10 can be downloaded. Even though many technologies such as Xamarin try to address the same problem, having one app that works on all platforms is still a nightmare. Yes, it is a big prediction, but one day PWA may make platforms like Xamarin obsolete, similar to how HTML5 turned Flash and Silverlight into history.


The purpose of this post is to index all the links related to PWA in one place, at least for my personal use. It will include links to my future PWA-related posts as well. I am not the first person doing this, and I have tried to include similar indexes at the end of this post.


Feature detection

Not all browsers support PWA, and those that do may not support every feature. Since PWA comprises a bunch of features, it is better to do feature detection than to check the browser version.
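As an illustration, a feature check can probe the objects the browser actually exposes instead of sniffing its version string. A minimal sketch follows; the function name `detectPwaFeatures` and the exact feature list are my own choices, not from any spec.

```javascript
// Sketch of version-agnostic feature detection. The feature list is a
// sample, not exhaustive; in a real page you would pass in `window`.
function detectPwaFeatures(globalObj) {
  const nav = globalObj.navigator || {};
  return {
    serviceWorker: 'serviceWorker' in nav,    // offline support
    cacheStorage: 'caches' in globalObj,      // offline storage
    push: 'PushManager' in globalObj,         // re-engagement
    notifications: 'Notification' in globalObj
  };
}
```

In a browser this would be called as `detectPwaFeatures(window)`, and each flag gates the corresponding PWA code path so unsupported browsers degrade gracefully.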


Below are the features of PWA. This is not a complete list, as the technology is still evolving.

Navigation Preload

Offline Storage

This is not really a PWA-specific feature, but it can be leveraged to build offline applications.

Service Worker

The highlight of PWA, which helps us develop offline applications.

This will take web apps further towards a native feel. It also helps us re-engage users.
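Registration of a service worker pairs naturally with the feature detection above. The sketch below wraps it in a testable helper; `tryRegisterServiceWorker` is a hypothetical name of my own, and `/sw.js` is a placeholder path.

```javascript
// Sketch: attempt service worker registration only when the browser
// supports it, so the app still works (online-only) everywhere else.
function tryRegisterServiceWorker(nav, swUrl) {
  if (!nav || !('serviceWorker' in nav)) {
    return false; // unsupported browser: skip silently
  }
  nav.serviceWorker.register(swUrl); // returns a Promise in real browsers
  return true;
}

// In a page: tryRegisterServiceWorker(window.navigator, '/sw.js');
```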


Another highlight. This feature is going to help us get our web apps installed like native apps.
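Installability is driven largely by the web app manifest, a small JSON file linked from the page. A minimal sketch is below; the names and icon path are placeholders, not from any real app.

```json
{
  "name": "Sample PWA",
  "short_name": "Sample",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```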

Publishing to AppStores

The app stores are starting to accept PWA apps alongside native apps. One day there may be more PWA apps than native ones: the day when all the device capabilities/APIs are exposed via PWA specs.

Developing PWA

What developers need to know about PWA.

Debugging and experiences

Debugging seems a little difficult at this point, but it will definitely improve over time.

Testing PWA

In the world of DevOps we need coded test cases to achieve continuous delivery. This space needs more innovation.

Platform specific

Not all platforms support PWA in the same way. Platform-specific details go below. Hopefully some day I can make this section obsolete.


Sample apps

We are not alone. There are apps out there that use PWA. It's a reality.

Source code

Some repos to browse for PWA.
https://github.com/GoogleChromeLabs/airhorn - This demonstrates the installation scenario

Other indexes like this


Tuesday, October 2, 2018

Online diagramming tools

We are living in an age of easy software development and a business-friendly environment. Compared to 10 years ago, a lot of things are free if we want to develop software or start a company. Diagramming used to be difficult, but now there are plenty of tools to create stunning diagrams. Below is a list of online, browser-based diagramming tools for software developers and architects. All are either free or have a freemium pricing model.


The tools below support general diagramming as well as software engineering diagrams.


https://cloudcraft.co - AWS specific, 3D diagrams

Architecture as Code

https://structurizr.com/ - Used to draw C4 architecture diagrams using code!

This list demands periodic updates, as new tools will appear and some will fade away or move to a paid model. Hopefully it can be updated at least once a year.

Tuesday, September 25, 2018

Travis-CI v/s Windows development environment

We are in the world of DevOps, where CI & CD activities run automatically even for individual code commits. Don't think DevOps means just doing CI & CD; CI & CD is only one of the practices in DevOps. Though there are many CI & CD tools, the leading 'free for open source' hosted SaaS tools are AppVeyor and Travis-CI. Azure Pipelines is a new player in the market offering a free tier.

AppVeyor is my default choice, but in order to get a feel for another CI & CD mechanism, I used Travis-CI in one of my personal projects.


It was going well until the builds started failing recently. After inspecting the logs, I could see that new, conflicting versions of NPM modules were available. They were picked up because I used ^ (caret) version ranges in package.json. So I decided to fix it.
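For context, with a caret range npm is free to pull in newer minor and patch versions, while an exact version stays put. The package names and versions below are made up for illustration:

```json
{
  "dependencies": {
    "some-lib": "^1.2.3",
    "other-lib": "1.4.0"
  }
}
```

Here `^1.2.3` accepts any 1.x.x version at or above 1.2.3, so a new release can silently change what CI installs, while `1.4.0` locks that exact version. Committing a `package-lock.json` (npm 5+) is another way to pin the whole dependency tree.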

It started working fine on my local machine, but when I pushed to GitHub, the CI & CD pipeline failed again.

This time it was in the tests. Some files were not served during the tests; the HTTP error code was 404 Not Found. Initially I thought that, since the file names have Unicode characters in them, webpack-dev-server was not able to serve the files. But it worked on my machine: the classic case of 'it works on my machine'. Here, though, it is my personal project and I am the only one who can fix it.


After hours of debugging via log statements, I was able to figure out the silly issue. It is the difference between Linux and Windows: Windows is not case sensitive for file paths, but Linux is. The files were accessible on Windows, but due to case sensitivity they were not accessible on Linux. Travis-CI uses Linux in its build environment.

It seems others have faced the same issue and asked Travis-CI to add documentation.

But Travis-CI is smart and already had the documentation. It was I who had joined the too-lazy-to-read group.

Moral of the story

Better to be consistent across Dev, QA, Staging, and Prod environments. This is the 10th item (dev/prod parity) in the 12-Factor App methodology.



More best practices on TravisCI - https://eng.localytics.com/best-practices-and-common-mistakes-with-travis-ci/

Tuesday, September 18, 2018

Architecture Decision Record (ADR)


Software engineering is a relatively new field of engineering; there are still debates about whether it is engineering or art. Regardless, it needs an architecture, just like other engineering disciplines. Unlike in other fields, the main challenge software architects face is making sure the delivered code is in line with the architecture. With the advent of Agile, which is very difficult to practice in other fields, finalizing architecture up front in software is really challenging. If we religiously finish the architecture before coding, a competitor might take over the market in the meantime.

But we still need to document the architecture, even if it is after the release. Don't laugh: it is needed at least for future reference. Software, unlike other fields, is change-friendly; it evolves really fast. There are different ways to document architecture: UML diagrams, the newer trend of the C4 architecture model, etc. Even if we create beautiful architecture diagrams of delivered software, the problem is that it is very difficult to document why we took each decision. If we inherit a Silverlight application, we should understand in the first place why that technology was selected. Why are there WCF web service calls instead of REST services? Nothing happens without a reason in software development, so it is good if that reason can be recorded for future developers.

We can sit and write a beautiful document around the diagrams and add the decisions. But it's really boring, and it becomes obsolete immediately as the software evolves. So what is another approach?

ADR, i.e. Architecture Decision Records, in its simplest form can be interpreted as the adoption of agile into documentation. Below is one good article about the practice.

Recently ThoughtWorks brought it to mainstream attention; they call it the Lightweight ADR. Yes, in the world of agile everything has to be lightweight, or at least named so. As per their past history, they promote things after trying them in the field. There are more references about ADR in the References section of this post.

Contents of ADR

When we adopt ADRs in a project, the first thing to decide is what contents are needed in an ADR. The main problem is keeping it lightweight: if we add all the diagrams, meeting minutes, etc., it will become another documentation nightmare. So we have to choose which fields to include. The link below summarizes many formats.


Format of ADR

Nowadays developers even write official letters in Markdown; it got that much attention due to its support in open source communities such as GitHub. So without any confusion, ADRs can use Markdown.
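A minimal sketch of such a Markdown ADR is below. The fields are my own pick, roughly following Michael Nygard's widely copied template, and the decision itself is invented for illustration:

```markdown
# 3. Use PostgreSQL as the primary data store

## Status
Accepted

## Context
The application needs transactional storage with JSON support, and the
team already operates PostgreSQL in production.

## Decision
We will use PostgreSQL instead of a document database.

## Consequences
Schema migrations become part of every release; document-style data
goes into JSONB columns rather than a separate store.
```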

Where to keep the ADR

Another question is where to keep the ADRs. Since an ADR is a small textual artifact, it could live in a shared folder, in SharePoint, or in email. But if we keep the ADRs in a place other than the source code, they may not help us in the future. If the ADRs live with the source, wherever the source goes, the ADRs go too. Today it can be TFS; tomorrow it can be Git. Sometimes companies open-source projects via GitHub.

Since ADRs don't have any relevance without the source code, the better place is with the code.

Open source

Nowadays GitHub is a synonym for open source. It supports Markdown in the wiki as well as in the source tree. Let's see the differences between keeping ADRs in the wiki versus in the source.

ADR in wiki

The wiki is independent of the code. The main problem is that when we branch to develop a new feature, we cannot have the ADRs needed for that feature inside the branch. We can work around this in many ways, but it is still a little difficult. The advantage is easy editing: no need to check out, commit, and push to get a change in.
Below is one example for keeping ADR in wiki.

ADR in Source

The opposite approach keeps the ADRs with the source. When we branch, the ADRs come with us. If we are overriding any part of the architecture, we can document it there, and the pull request can include that change, signaling to the reviewer that something fundamental happened due to this feature.


Naming the ADR files

The main purpose of naming is to distinguish the ADRs. When we keep the ADR record files, we can either keep them inside a folder called ADR or prefix the file names. Similarly, we can number them sequentially and keep that number in the file name or inside the contents. Right now there doesn't seem to be a standard; hopefully something will evolve soon, similar to Swagger for APIs.
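For instance, a sequentially numbered, folder-based layout (the file names below are invented for illustration) might look like:

```text
docs/adr/0001-record-architecture-decisions.md
docs/adr/0002-write-adrs-in-markdown.md
docs/adr/0003-keep-adrs-with-source.md
```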

Some real world usage

Below is one real-world usage. The ADRs are kept in the location below.

They are rendered as below in the documentation.

How I implemented

My open source projects have started adopting ADRs. Below is one example ADR.

Rendered as

Only six fields are used, to keep it (or at least be able to call it) lightweight.



Tuesday, September 11, 2018

Functional Programming - Randomize IEnumerable

.Net has IEnumerable&lt;T&gt; to represent a sequence. Though it is not advertised as a functional helper, we can use IEnumerable&lt;T&gt; to write really clean functional code in .Net. It has many methods to manipulate and select elements, but it lacks a mechanism to take random elements from the sequence. Below is one extension method that gives us a somewhat random ordering of an IEnumerable&lt;T&gt; sequence.

public static IEnumerable&lt;TResult&gt; Randomize&lt;TResult&gt;(this IEnumerable&lt;TResult&gt; source)
{
    // Pair every element with a fresh GUID, order by the GUID, then unwrap.
    return source
        .Select(sourceItem => new
        {
            Item = sourceItem,
            Id = Guid.NewGuid()
        })
        .OrderBy(t1 => t1.Id)
        .Select(t1 => t1.Item);
}

How to use the above?

IEnumerable<int> input = new List<int>() { 1, 2, 3, 4 };
int randomElement = input.Randomize().FirstOrDefault();

As seen in the source, the randomization depends on GUID generation. If the GUIDs happened to be generated in increasing order, the shuffle would not work; so this is "somewhat" random, not a statistically fair shuffle.

The advantage of this method is that it stays a lazy collection: nothing is evaluated until the sequence is enumerated.

Nuget support

The above is available as a NuGet package. Below is the URL.


Tuesday, September 4, 2018

PowerShell to get list of email addresses from company AD

Often we may need to send mail to everyone in the company, as an announcement, a request for urgent help, etc. Normally companies have a group mail address for such things, but even if there is none, we can easily get the list of all email addresses.

First, get the OU and DC details of your AD. If there is confusion about what OU and DC mean, refer to the details here. It is easiest to search using your own email id to get the OU and DC details.

Get-ADUser -Filter 'EmailAddress -like "<your email address>"'

This will give the details in the DistinguishedName property. Now fill that information into the script below and run it.

$container = "OU=<your OU>,DC=<DC>,DC=<DC>"

Get-ADUser -Filter * -SearchBase $container | `
Select-Object -Property UserPrincipalName | `
Export-Csv -Path "<path>.csv"

This exports the email addresses to the CSV file mentioned. It is interesting to see that UserPrincipalName holds the email. If the email is kept in a separate attribute, the code has to be modified to select the proper attribute.

Happy Scripting...

Tuesday, August 28, 2018

Another .Net helper library via nuget package system


Over the past 13 years, to be precise from Nov 2005 till today, I have written a lot of .Net code, for my day job as well as for personal projects. When I started with .Net, I thought: yes, I will master it and enjoy the rest of my career. But I soon realized that was not going to work, with the collapse of Silverlight. Microsoft was saying, or people were arguing, that Silverlight would not die since Microsoft was using it for their Azure portal. All of a sudden MSFT replaced the Azure Silverlight site with HTML, and that was kind of the last nail in Silverlight's coffin. More details can be found in my last post on the SilverlightedWeb blog, which is read-only now. Then I thought Silverlight's end was inevitable, as it was replaced by HTML5, but .Net would live long.

That thought got shaken when MSFT released their so-called code editor, now becoming a full-fledged IDE, named VS Code. It didn't use WPF, which was the star of desktop programming from MSFT at that time. Instead it used Electron from GitHub, which depends on Chrome. Yes, the browser from Google powering the web. Essentially we develop a browser application and ship it as a standalone executable. That was when I said goodbye to WPF. More details here. Then what's left? Only ASP.Net, which was and still is struggling to compete with NodeJS. Don't bring up Windows Phone here, as that is one of the very few things MSFT properly shut down. No idea how long something called UWP will live.

.Net Core

Finally something came along named .Net Core. It's like Angular 1.x and Angular 2: only the name is the same; internally it is almost all new. That is what MSFT fans are now betting on as the return of .Net. It is advertised as another true cross-platform runtime that will run on Linux! Yes, it's the second cross-platform .Net; the original was also advertised as cross-platform, with its intermediate language and JIT, similar to the JVM ecosystem.

Another factor is performance. .Net Core is expected to beat NodeJS at serving HTTP responses. There are case studies where people claim it is faster, such as on Bing.

Other areas where .Net was weak are AI, machine learning, distributed computing, etc. Now the ML.Net SDK has also been announced.

Yes, it may be fast and may become as powerful as Python for AI programming. But will this technology be enough to feed my family in the future?

So what is next?

Personally I don't see a bright future for .Net unless .Net Core becomes a big hit. So it is better to reduce focus on .Net and seriously consider other technologies as well: Electron for desktop development, Angular + NodeJS for the web front end, Scala for distributed programming, etc.

But what should I do with all the .Net knowledge acquired over the past 13 years, as I still hope for a huge return of .Net Core?

Offload it from my brain and move on. The best place to offload code-level techniques at this point is a NuGet package. I could have added the helper classes to my first NuGet package, but unfortunately that one is aimed at a specific problem: orchestration. Hence I had to start another NuGet library for my helper classes and coding techniques. Link to the GitHub repo below.


This library uses the multi-targeting feature so that one code base can be compiled for multiple targets. This is especially useful for providing libraries for .Net Core.
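In SDK-style projects, multi-targeting is configured with the `TargetFrameworks` property in the .csproj. A sketch follows; the exact target monikers depend on the library, and the pair below is only an example:

```xml
<PropertyGroup>
  <!-- Build the same code base for .NET Standard and classic .NET Framework -->
  <TargetFrameworks>netstandard2.0;net45</TargetFrameworks>
</PropertyGroup>
```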

Thanks to AppVeyor for providing free CI & CD support to publish to the NuGet repo.

Why I didn't join other helper NuGet libraries is described in the readme of the repo.