Tuesday, May 26, 2015

Moq - How to know the number of times a method is called on mocked object

Moq seems a good mocking library for .Net unit tests. It has most of the features a developer needs, it is pretty much stable, and many people are using it. Once I started using it, I never felt the need for another mocking library. Below is one of the scenarios I had to face related to Moq.


There is a method which randomly executes operations (classes implementing IOperation) stored in an array. The operations are unit tested independently to make sure there are no problems with them. The method under test simply uses the Random class to get a random number, gets an operation from the array using that number, and invokes the operation.

Ideally speaking, we don't need to test the Random class, which is part of the .Net framework. But why should we avoid unit testing our own logic inside the method? Some tests like 'making sure the random value is not above the length of the array' etc... So what's the big deal?
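To make the discussion concrete, here is a minimal sketch of the kind of method being described. All the names below (IOperation, RandomExecutor, CountingOperation) are illustrative only, not from any real code base:

```csharp
using System;

public interface IOperation
{
    void Execute();
}

// A tiny implementation that just counts its calls; handy for tests.
public class CountingOperation : IOperation
{
    private readonly int[] counts;
    private readonly int index;

    public CountingOperation(int[] counts, int index)
    {
        this.counts = counts;
        this.index = index;
    }

    public void Execute()
    {
        counts[index] += 1;
    }
}

public class RandomExecutor
{
    private readonly IOperation[] operations;
    private readonly Random random = new Random();

    public RandomExecutor(IOperation[] operations)
    {
        this.operations = operations;
    }

    public void ExecuteRandom()
    {
        // Random.Next(max) returns a value in [0, max), so the index
        // can never go above the length of the array.
        int index = random.Next(operations.Length);
        operations[index].Execute();
    }
}
```

A test can call ExecuteRandom in a loop with counting operations and assert that the total count matches the number of calls, and that over a large number of iterations no operation was skipped.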


Unit tests work by the Arrange, Act, Assert methodology. First we arrange the things, then act by calling the method under test. Finally we assert.

When we act, we should know what the output will be. Only then can we write the assert. If we don't know what is going to happen, we don't know what to assert.

An Example

Recently I forked ChaosMonkey for .Net and started contributing to it. I made it configurable. Looking into that code might be useful to understand the example better. It contains a MonkeyKeeper class and an IMonkey interface. MonkeyKeeper is the external interface. When MonkeyKeeper.Unleash() is called, it should randomly pick an IMonkey from its monkeys collection and call the Unleash() method on it. IMonkey implementations can be easily unit tested to make sure they are doing their duties on Unleash. But how do we unit test MonkeyKeeper.Unleash()?

In this scenario, we can assert that each mocked IMonkey object got at least one call to Unleash. But in order to get at least one call on each, we need to call MonkeyKeeper.Unleash multiple times, because Random can return the same value multiple times.

Below is one implementation, where MonkeyKeeper.Unleash() is called 5 times on a keeper holding a collection of 3 IMonkeys.

        [TestMethod]
        public void WhenMonkeyListProviderGivesMultipleMonkeysAndUnleashMultipleTimes_UseThoseMonkeysAtleastOnce()
        {
            Settings settings = new Settings();
            ChaosLogger logger = new ChaosLogger("");
            var mockMonkey1 = new Mock<ParentMonkey>(settings, logger);
            var mockMonkey2 = new Mock<ParentMonkey>(settings, logger);
            var mockMonkey3 = new Mock<ParentMonkey>(settings, logger);
            var mockMonkeyListBuilder = new Mock<MonkeyListBuilder>();
            mockMonkeyListBuilder.Setup(builder => builder.GetMonkeys(settings, logger))
                    .Returns(new List<ParentMonkey>() {
                        mockMonkey1.Object, mockMonkey2.Object, mockMonkey3.Object });
            // Constructor arguments below are assumed; adjust to the actual signature.
            MonkeyKeeper keeper = new MonkeyKeeper(
                settings, logger, mockMonkeyListBuilder.Object);
            for (int i = 0; i < 5; i++) keeper.Unleash();
            mockMonkey1.Verify(monkey => monkey.Unleash(), Times.AtLeastOnce);
            mockMonkey2.Verify(monkey => monkey.Unleash(), Times.AtLeastOnce);
            mockMonkey3.Verify(monkey => monkey.Unleash(), Times.AtLeastOnce);
        }

Most of the time this test will succeed. But there is no guarantee that it will always succeed, because the Random.Next() used inside MonkeyKeeper.Unleash can return the same value every time; that's why it's random. In such scenarios this test will fail. So what's the solution?


There is no real solution to this. If we could predict the random number and write asserts against it, that would not be random. So what can we do?

At least make sure there is some level of randomness in the selection of IMonkeys. Also, we can make sure that if there are 3 IMonkeys in the collection, at least one IMonkey's Unleash is called. If it is not called, or is called multiple times, there is a problem in the random selection logic of MonkeyKeeper.Unleash. What's the problem in doing that?

We have 3 monkeys and we don't know which one will be selected by the random logic. But we need to make sure that Unleash is called only once, and on only one monkey. This clearly tells us that the normal mechanism of asserting via Verify() and Times will not work. If there is any confusion, just try writing it as below.

mockMonkey1.Verify(monkey => monkey.Unleash(), Times.AtLeastOnce);
mockMonkey2.Verify(monkey => monkey.Unleash(), Times.AtLeastOnce);
mockMonkey3.Verify(monkey => monkey.Unleash(), Times.AtLeastOnce);

Why won't the above work? When we unleash, it may call mockMonkey1; that line works and the verification passes. But what about the next lines? They will fail. We cannot simply return after the first verification either, because next time mockMonkey3 might be the one selected; in that case the first two statements would fail.

Now what's the approach? We need to know the number of times a method is called on a mocked object. There is no direct way to get that count, so we need to use the callback mechanism available in Moq to achieve our aim. A callback gives our unit test a hook that fires when a particular method is called on the mock object. The code goes as below.
[TestMethod]
public void WhenMonkeyListProviderGives3MonkeysAndUnleashOnce_UnleashMustBeCalledOnOneMonkey()
{
    Settings settings = new Settings();
    ChaosLogger logger = new ChaosLogger("");
    var mockMonkey1 = new Mock<ParentMonkey>(settings, logger);
    var mockMonkey2 = new Mock<ParentMonkey>(settings, logger);
    var mockMonkey3 = new Mock<ParentMonkey>(settings, logger);
    int callCount = 0;
    mockMonkey1.Setup(monkey => monkey.Unleash()).Callback(() => callCount += 1);
    mockMonkey2.Setup(monkey => monkey.Unleash()).Callback(() => callCount += 1);
    mockMonkey3.Setup(monkey => monkey.Unleash()).Callback(() => callCount += 1);
    var mockMonkeyListBuilder = new Mock<MonkeyListBuilder>();
    mockMonkeyListBuilder.Setup(builder => builder.GetMonkeys(settings, logger))
            .Returns(new List<ParentMonkey>() {
                mockMonkey1.Object, mockMonkey2.Object, mockMonkey3.Object });
    // Constructor arguments below are assumed; adjust to the actual signature.
    MonkeyKeeper keeper = new MonkeyKeeper(
        settings, logger, mockMonkeyListBuilder.Object);
    keeper.Unleash();
    Assert.AreEqual(1, callCount, "Unleash Called multiple times");
}


Tuesday, May 19, 2015

Unit testing ASP.Net Web Forms


We have a simple legacy web application created using ASP.Net Web Forms. It works well and does its duties without fail. It mainly deals with simple pages, not transactions involving many pages. But occasionally changes come in. Even when changes come frequently, they are small changes which do not affect the core functionality of the web site. Being passionate software engineers, we wanted to bring innovation into the project, such as refactoring to patterns and more object orientation.


We are confident in the refactoring we are doing on the site. But still there is a fear factor: will those changes break the other pages in the site or not? Basically we need to make sure all the links are working. Normally, after a change, we manually click on every link and make sure they all load.

That is repetitive work, and no software engineer should be doing repetitive work. We need to solve this at low cost. As usual, since it's a legacy application, there is no budget to buy tools or an automated testing service.


1.Write a tool to validate links

We can write a tool which parses the HTML of default.aspx and makes sure all the links are working properly, i.e. returning valid HTML content. We can use the XMLHttpRequest or WebClient class to accomplish it, and configure the tool to run as a post-build action.

The challenge is in validation, because there is no guarantee that HTML pages can be easily parsed with XML classes. There is a NuGet package called HTML Agility Pack which we can try here.
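As a rough sketch of the idea, such a checker could first pull the href values out of the page and then request each one. Everything below (class name, regex) is illustrative, and the regex is only a crude stand-in for a real parser like HTML Agility Pack:

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class LinkChecker
{
    // Extracts href values from raw HTML. A regex is fragile against
    // real-world markup; HTML Agility Pack would do this job properly.
    public static List<string> ExtractLinks(string html)
    {
        var links = new List<string>();
        foreach (Match match in Regex.Matches(html, "href=\"([^\"]+)\"", RegexOptions.IgnoreCase))
        {
            links.Add(match.Groups[1].Value);
        }
        return links;
    }
}
```

Each extracted link can then be downloaded with WebClient.DownloadString; if the call throws or returns empty content, the link is reported broken.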

2.Write unit tests on existing Web Forms app

Write unit tests for all the pages and at least make sure they are all passing.

After evaluating, I feel unit tests are the way to do developer testing. But is my legacy application written to be testable? Absolutely not. Now what can we do? One way is to rewrite it as ASP.Net MVC; another is to use URL-based unit tests. Refactoring the business logic and testing that is not applicable, as the site doesn't have complex logic; more than that, we need to make sure the pages are all working after our change.

3.Convert into ASP.Net MVC and unit test

Though this is the perfect solution, most of the time it will not work out because:
  1. There may not be enough budget for maintaining the legacy app or converting it into ASP.Net MVC, because from the business perspective the site is doing its duty properly.
  2. There might be links already bookmarked by users. Those will become invalid unless there is a rerouting mechanism in the new MVC application.

Solution - Unit test ASP.Net Web Forms

The concept of unit testing is applied one level above the normal unit testing of classes and methods. Here we test the ASP.Net pages themselves. We can check that particular properties of the Page are set when it is rendered for a particular request URL. Below are the steps to create unit tests for ASP.Net Web pages.

1.Create UnitTest Project

Create a normal unit test project from the Visual Studio New->Project dialog, just as we do for other scenarios such as class libraries or MVC.

2.Refer System.Web

A reference to System.Web.dll in the unit test project is required if we want to hold the Page variable for asserts.

3.Write test method with special attributes

Now comes the test method, the same as in a normal test project. The additions here are 2 attributes: HostType and UrlToTest. They specify where the web page is hosted and which page the corresponding test method is going to test. Below is one scenario where the test method tests a page hosted in the local IIS server.

[TestMethod]
[HostType("ASP.NET")]
[UrlToTest("http://localhost/joymononline/links.aspx")]
public void WhenLinksPageIsRequested_ShouldReturn200AndContent()
{
    // The host adapter runs this test inside IIS and exposes the rendered page.
    Page page = TestContext.RequestedPage;
    Assert.IsTrue(page.Title.Equals("Joymon Online | Links"));
}

If you are looking for how to test a web application running from the development server, please have a look at the below links.

I strongly recommend developing by pointing to the local IIS server, because IIS is the way your application is going to be served to end users. Otherwise there will be "works on my machine" scenarios, which are really tough to deal with.

Debugging IIS hosted Web Forms Unit test

By default, if we put a breakpoint in an ASP.Net Web Forms unit test where the site is hosted in IIS, it won't get hit. We need to attach to the IIS process, w3wp.exe.

So put a Debugger.Break or Debugger.Launch call in the code. This will open up a dialog, and from there we can attach to w3wp.exe.

We also need to make sure that the dlls are compiled in debug mode.


Sometimes there may be issues when running the unit tests. One error I faced is below.

The URL specified ('http://localhost/joymononline') does not correspond to a valid directory. Tests configured to run in ASP.NET in IIS require a valid directory to exist for the URL. The URL may be invalid or may not point to a valid Web application

There are multiple reasons for this.

First, we need to run Visual Studio with admin privileges.

Second, we need to check whether the below entry is logged in Event Viewer under the Application node.

(QTAgent32_40.exe, PID 12920, Thread 14) WebSites.GetWebServer: failed to create AspNetHelper: Microsoft.VisualStudio.Enterprise.Common.AspNetHelperException: The website metabase contains unexpected information or you do not have permission to access the metabase.  You must be a member of the Administrators group on the local computer to access the IIS metabase. Therefore, you cannot create or open a local IIS Web site.  If you have Read, Write, and Modify Permissions for the folder where the files are located, you can create a file system web site that points to the folder in order to proceed. ---> System.Runtime.InteropServices.COMException: Unknown error (0x80005000)
   at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
   at System.DirectoryServices.DirectoryEntry.Bind()
   at System.DirectoryServices.DirectoryEntry.get_IsContainer()
   at System.DirectoryServices.DirectoryEntries.ChildEnumerator..ctor(DirectoryEntry container)
   at System.DirectoryServices.DirectoryEntries.GetEnumerator()
   at Microsoft.VisualStudio.Enterprise.Common.IISHelper.GetWebServerOrdinal(Uri site)
   --- End of inner exception stack trace ---
   at Microsoft.VisualStudio.Enterprise.Common.IISHelper.GetWebServerOrdinal(Uri site)
   at Microsoft.VisualStudio.Enterprise.Common.IISHelper.get_WebServerOrdinal()
   at Microsoft.VisualStudio.Enterprise.Common.IISHelper.get_RootPath()
   at Microsoft.VisualStudio.Enterprise.Common.IISHelper.get_PhysicalPath()
   at Microsoft.VisualStudio.Enterprise.Common.AspNetHelperMan..ctor(Uri uri, BasicAuthCredential credential, Int32 frameworkMajorVersion)
   at Microsoft.VisualStudio.Enterprise.Common.AspNetHelperMan..ctor(Uri uri, BasicAuthCredential credential)
   at Microsoft.VisualStudio.Enterprise.Common.AspNetHelper.Create(Uri uri, BasicAuthCredential credential)
   at Microsoft.VisualStudio.TestTools.HostAdapters.WebSites.GetWebServer(String webServerName, WebServerType webServerType, String urlToTest, String pathToWeb, String webAppRoot, BasicAuthCredential credential, Context context, WebSiteConfigurationType webSiteConfigType, Origin origin)

If we see the above, we can try enabling IIS 6 compatibility. This is applicable if we are running on Windows 7 with IIS 7 or above.

Go to Control Panel->Programs and Features->Turn Windows Features On or Off


Internet Information Services->Web Management Tools->IIS 6 Management Compatibility

Tick the "IIS Metabase and IIS 6 configuration compatibility" option and install it.

Some users say that we need to enable everything under IIS 6 Management Compatibility.



Tuesday, May 12, 2015

Software Containerization via Docker - First look and some thoughts

I was hunting for the architecture of microservices and its advantages in distributed systems development and maintenance. The word 'Docker' struck me when I heard it multiple times in one of the talks about microservices at Netflix. In that talk, Adrian repeatedly said that Docker helped them a lot to deliver fast. They are not at all bothered by "works on my machine" scenarios. Then I started investigating: what is Docker?

What is Docker

Docker is a software containerization technique where software, along with its dependencies, can be run independently in an isolated environment. The container is run by a container engine running inside a Linux OS. If we are hosting a web application inside a web server hosted in the container, we need to explicitly open ports for that container to establish communication. There are APIs to do container operations. The main advantage is frictionless movement of executable software from one environment to another. Once we make sure our software is running inside a Docker container, that container can run anywhere the Docker engine is available.

This easily leads to a question.

What is the difference between a virtual machine and docker container?

What I have understood from my analysis is that a virtual machine is an OS running inside another OS. It is heavy, since it carries a whole guest operating system. If we want to utilize the same hardware for multiple operating systems, we can go for that. But if we only need mobility of software, the better way is the container concept.

Solomon Hykes, founder and CEO of Docker Inc., simply compares the Docker ecosystem with the logistics industry in his talk posted on www.docker.com.

To me, virtual machines look like the below.
But is this what we need when we want to transport software in its running condition? Absolutely not. That is where the container comes into play, as follows.

The vehicle carrying the package doesn't need to bother about what is inside the container as long as it adheres to standard container specifications. In software, we really don't need to carry the operating system when we want to move our software. But we do need to carry the dependencies, of course.

Is there any standard for containers in the software industry, as in logistics?

I don't think there is one right now. Docker can have its own container file format and APIs to manage the containers, and competitors can have their own. But standardization is really required, and it will likely evolve soon, as everybody needs to move their software, such as from on-premise to cloud and vice versa.

Some Docker facts

  • It uses newer APIs in the Linux kernel
  • Right now the Docker engine can only work on top of Linux
    • If we want to host the Docker engine on Windows or Mac, we need to use Linux VMs
  • The containers are Linux specific, though we can host .Net via Mono
  • It is used for easing the deployment and maintenance of distributed applications; there is no meaning in running a UI app inside a Docker container
  • There is a Docker app store where we can upload our containers or get containers from others, much like other app stores. The major difference is that we don't need to install the app; instead we just use the already installed and configured software by running the Docker container on the Docker engine

Is Docker ready for production?

I think yes, if we are dealing with services hosted inside Linux server boxes. For the Microsoft .Net community it's not. Again, this is my opinion after some research; it may vary from person to person and scenario to scenario.

Docker from .Net developer's view

As Docker is not yet ready to run directly on the Windows platform, I don't think it's the time for .Net developers to enter the Docker world. Of course they can try out Docker commands and host .Net services using Mono etc... Another thing to notice is that there are no Windows containers which can be run by the Docker engine.

There is a tool called Boot2Docker available for Windows users to host a minimal Linux OS and run the Docker engine inside it. Once the engine, i.e. the runtime, is available, any Docker container can be run inside it.

Again, the main bottleneck preventing the Docker engine from running on Windows is the usage of Linux-specific APIs in building the Docker engine.

The good news is that Microsoft announced a partnership with Docker to bring the technology into the Windows ecosystem. As usual, it seems the strategy is cloud first. But in the near future we may get Windows Servers supporting Docker containers out of the box. More than that, if a standard for containerization comes out of the partnership, that would be a milestone in the software industry.

Other software container technologies

Docker is way ahead in the container business. But there are competitors as well: Rocket (rkt) from CoreOS, ElasticBox, and Drawbridge from Microsoft, to name a few.


For those who are not able to follow what Docker and containerization are, try googling, or just wait a couple of months; you will certainly start hearing the word.

Tuesday, May 5, 2015

Extensible Chaos Monkey

Chaos Monkey

Chaos Monkey is a great concept introduced by Netflix to create random issues in cloud server environments so that those issues can be addressed early and the system tested against unexpected failures. That makes sure the system can recover from any kind of failure which can happen in production.

The good news is that they didn't keep it secret. They put the tool online, including the source, on Github. It's written in Java, so it was a little difficult for the .Net community to extend. There are many .Net ports of the same tool, or attempts to implement the same.

The problem

In our company, things are no different from any other production environment. There are issues which are first reported in, or only happen in, production. We have an online audit system which contains many backend processing components. The backend queueing system is developed in such a way that any application track developer can add a new queue type and write its associated handler. Though there are guidelines for the backend developers to roll back properly, there are times when developers do not take care of it in order to meet deadlines, and it goes to production. In production we end up with wrong data states. Some of the causes are unexpected IIS / AppPool recycling, database timeouts / outages, etc... Since the application is not expecting those states, it will be clueless about how to recover from there. Finally, support has to manually run SQL statements on production servers to correct the state.

Ideally, QA is supposed to test scenarios such as IIS resets while the backend services are running. But they have limitations. It's very difficult to make sure that they cover all the scenarios. Ensuring the randomness, and tracing the abnormal event to reproduce it in dev, will be difficult. There are also difficulties in manually creating abnormal scenarios when the tests are running overnight.


All these things lead to a testing strategy where issues / abnormal scenarios which are expected in production need to be created in dev/QA environments, so that we can identify how the system behaves during those abnormal events and sometimes change the application flow to recover from those states.

This is the point where we started looking at the already existing ChaosMonkey solution. It seemed suitable for us. But we were not able to use it straight away, because ChaosMonkey is targeted at the cloud and our production deployments are in-house.

Why extensible

So we tried to extend the tool. But since it's in Java and most of our developers are familiar with the .Net ecosystem, we started looking for a .Net port of Chaos Monkey. Unfortunately we were not able to find much. Though there are some, they are also targeted at the cloud. To restart a local IIS server, a lot of code had to be written.

So we finally decided to take one as a base and make it extensible to meet the scenario. I forked a ChaosMonkey implementation on GitHub by Simonmnro and added my own changes to make it pluggable.

My version of the extensible Chaos Monkey is available at the below location.

What's Next

  • Clean separation of plugin code and config
  • Adding more plugins such as increase memory pressure.

Feel free to comment if there are better solutions to the issue, and contact me if you are willing to contribute to the project.

Tuesday, April 28, 2015

Change Areas folder convention to Plugins in ASP.Net MVC

ASP.Net MVC is mostly about convention: the controllers are inside the Controllers folder, cshtml files need to be under the Views folder, the views are looked up from an ActionResult by the action name, etc... But whatever the conventions are, all of them can be overridden. That is the beauty of the MVC framework.

Recently we faced one scenario of overriding convention in one of our MVC projects. It uses MVC Areas to achieve a plugin model. We deploy new plugins into the Areas folder without recompiling the original framework web application. During one of the demos, there was a suggestion to rename the 'Areas' folder to 'Plugins' for better naming and readability. We tried to resist it, as it adds unwanted code, but due to pressure from leadership we decided to change it. It was good in one way: we got more chances to look inside the MVC framework.

Initially we thought of having a custom controller or ViewEngine. But after playing with MVC for some time, it became as easy as adding some lookup folder paths to the RazorViewEngine. The files and folder hierarchy under the Plugins folder need to be created manually; right-clicking and choosing 'Add Area' will not put the new files into the Plugins folder. Code below. The sub-folder structure and files can be organized inside the Plugins folder in the same way they appear in the Areas folder.

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            AddPluginsFolderToAreaLocationFormatsOfRazorViewEngine();
        }

        private static void AddPluginsFolderToAreaLocationFormatsOfRazorViewEngine()
        {
            string[] pluginsLocationFormats = new string
                [] { "~/Plugins/{2}/Views/{1}/{0}.cshtml" };
            var razorEngine = ViewEngines.Engines.OfType<RazorViewEngine>().First();
            razorEngine.AreaMasterLocationFormats =
                razorEngine.AreaMasterLocationFormats.Concat(pluginsLocationFormats).ToArray();
            razorEngine.AreaViewLocationFormats =
                razorEngine.AreaViewLocationFormats.Concat(pluginsLocationFormats).ToArray();
        }

If anybody faces issues in getting this to run, comment here; I can upload a sample MVC5 project.


Tuesday, April 21, 2015

Determining health of system in distributed queueing system to dequeue next message

Recently we have been in the business of building a distributed system where long-running operations are offloaded to processing machines. The processing machines pull messages from a queue instead of having a manager allocate tasks.

Initially we were assigning throttling numbers to each of the processing machines according to their configuration, such as number of processors, RAM, etc... But soon we could see that the machines were either under-utilized or over-utilized. So we decided to introduce a mechanism where the real-time machine load is considered before taking a new message from the queue.
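The static scheme we started with amounted to deriving a fixed throttle from the machine spec, along these lines (the class name and the multiplier are illustrative, not our actual numbers):

```csharp
using System;

public static class StaticThrottle
{
    // Illustrative only: a fixed degree of parallelism derived from the spec.
    // This is exactly what broke down: the number never reflects actual load.
    public static int MaxConcurrentMessages()
    {
        return Math.Max(1, Environment.ProcessorCount * 2);
    }
}
```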

We considered some techniques for sensing the load, but none of them felt better than analyzing the Windows performance counters. So we decided to go with looking at the appropriate performance counters before dequeueing a new message from the queue.

What are the appropriate performance counters? It's always debatable. We are in the initial stages of the implementation; hopefully I can update soon.

Another challenge we faced was how the performance counter data is translated into a boolean value saying whether the system is healthy or not. We evaluated PAL, which reads .blg files and produces a report. But finally we reached the conclusion of saving the performance counter values into a database and running our own rule engine there, which replaces PAL.
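The rule engine can start very small: map a sampled set of counter values to a boolean. The counter choices and thresholds below are purely illustrative placeholders, not our production rules:

```csharp
public class MachineSample
{
    public float CpuPercent { get; set; }      // e.g. Processor\% Processor Time
    public float AvailableMBytes { get; set; } // e.g. Memory\Available MBytes
}

public static class HealthRules
{
    // Illustrative thresholds; real values would come from configuration.
    public static bool CanDequeueNextMessage(MachineSample sample)
    {
        return sample.CpuPercent < 80 && sample.AvailableMBytes > 512;
    }
}
```

The dequeue loop would evaluate this before pulling the next message, and back off when it returns false.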

Some links on how to work with performance counters from perfmon.exe are below.


Tuesday, March 17, 2015

Starting Scala & R Language

It's always good to learn at least one new programming language per year to sustain ourselves in the software engineering industry, as an architect or in any development position. At the very least, we need to know the capabilities of, and get some hands-on experience with, the new language we are learning every year.

Last year I was with PowerShell. It is a nice language: less code, more things done. That is the beauty of it. Piping is a great feature too, along with seamless integration with .Net programs, etc...

This year I decided to learn something not much related to .Net: one is Scala and the other is the R language.

Why is a hard-core .Net guy interested in Scala & R?

Let us see some code fragments in these languages.

  def OddOrEven(a: Int) = {
    if (a % 2 == 0) "Even"
    else "Odd"
  }
There is nothing much interesting here except the absence of a return statement. Yes, the language can now understand the programmer's intention and relax strict rules.

This is just a start. There are many more things in Scala which make programming faster with less code.

After working in a delivery-oriented organization which maintains a huge amount of legacy code, I feel the way to survive in an intense delivery environment is to stick to the below principle:

"Less code less defects"

Some of these features are already available in C# in the form of lambda expressions, automatic properties, etc... But Scala seems more simplified and takes advantage of the great JVM.

# R Program to increment all members in a list
numbers <- list(1, 2, 3)
lapply(numbers, function(x) x + 1)

The same thoughts apply here: less code, and R is very good for statistical programming.

There are many more reasons why these languages are becoming popular. Just google for more features.

Setting up the environment for these 2 languages is very easy. Next time I will post about it.