Tuesday, February 23, 2016

Why I moved www.JoymonOnline.in code to Github from VS Online

The relationship between me and VS Online was very good. Everything was working fine until I wanted to set up continuous integration and delivery for my personal web site www.JoymonOnline.in.

My Requirements

They were simple.
  • Free - The continuous integration must be free.
  • CI & CD - A hosted service is preferred
    • Support for running unit tests and integration tests.
  • Integration with source control - The integration should start as soon as I check in
Everything was available in VS Online. 1000+ minutes of build time is more than enough for CI activities. But then why did I still have to move to the Github + AppVeyor combination?

Issue 1 - No IIS

I was using IIS-hosted tests for the web site. On my development machine I can host the project in IIS and run it from Visual Studio. The test methods were attributed with [HostType("ASP.NET")] so that the tests run by calling the IIS-hosted pages. When I tried to run the same tests from VS Online, I found that I would need to write extra scripts to create the IIS web app, and worse, the VS Online build machines do not have IIS at all. The suggestion is to have a machine somewhere else, on premise or in the cloud, deploy the app there, and run the tests against it.

To be frank, I don't have the budget to keep a machine on the internet just for integration testing of my web site.

Issue 2 - No admin access

When a build machine is allocated to us by VS Online, our build does not run with admin privileges. So even if we try to host the web app with other frameworks such as MVCIntegrationTestFramework, it is not allowed.

Link to list of software in VS Online build machine.

What I get in Github + AppVeyor

  • Hosting in Github is free. The only catch is that my source will be public. But what is in it? HTML & CSS files which anyone can copy from the browser anyway.
  • AppVeyor is free if the source is in Github. It gives admin access to our build activities and has IIS on its machines, along with other software.
  • Secure storage of credentials for Web Deploy or FTP upload
More details are in one of the earlier posts.

So which is better? Moving to Github and setting up CI & CD via AppVeyor is clearly the way to go.

Now everything is good. After my check-in, AppVeyor builds the site, runs the unit tests and deploys to staging. After manual testing in staging, if I put a release tag on Github, AppVeyor deploys to my production www.joymononline.in server, which is hosted in GoDaddy.

Tuesday, February 16, 2016

CI & CD for pure HTML+JS browser apps using Travis-CI

When I say pure HTML+JS browser app, I really mean it. There are no web services or server-side HTML generation involved. It is a simple HTML+JS site running inside the browser. People may wonder why such a browser app needs continuous integration when there is no compilation involved.

It is true that it doesn't need compilation. But it can always have tests, and those tests can be run after each git push, followed by a deploy to staging or production. Many enterprise people will not have such a pure client app scenario. However, let's look at how we can set up Continuous Integration & Continuous Delivery for such a pure HTML+JS SPA.

Objectives

  • Upon git push the integration process should start
  • It should run the unit tests written using the Jasmine framework.
  • If the tests pass, deploy to an FTP location on a web site.

Tools & Frameworks used

The app uses HTML5, JS and jQuery. Below is the list of tools used for CI & CD activities.
  • Github - where the code is stored
  • Travis-CI - which provides machine to run the CI & CD activities
  • Jasmine - Test Framework API
  • Karma - Test runner
  • PhantomJS - The headless browser where Karma can run the tests
  • NodeJS
  • NPM modules beyond those required by the above
    • ftp-deploy

Node.JS

Do not be confused by the usage of Node.js. We are not going to run anything on a server; it is a pure HTML+JS SPA. Node.js is here only to do npm package management and to upload to FTP for deployment. The test runner Karma, which we are going to use, comes as an npm package. Also, node_js is the language type Travis-CI supports for pure HTML+CSS+JavaScript projects.

Prerequisites

This post assumes an understanding of
  • Basic web development using HTML+JS+CSS (browser side alone)
  • Using Github.
  • Node.js and its package management ecosystem, npm.

Setting up source - production & test code

Application

The application we are going to use is an HTML5-based Karel simulator. Instead of creating a separate branch inside its git repository, we can create another repository named travis-ci-test in Github itself and upload the same source there. /Src is the folder which contains the source code.

Tests

There is no question that the leader in client-side web testing is the Jasmine framework. It provides the API to write test code. Since this is not a post about the Jasmine framework itself, let's go to the Jasmine test code directly. The test JavaScript files reside inside the /Tests folder.
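As a rough sketch, a spec in /Tests looks like the following. The add() function and the spec names are hypothetical, not from the real Karel simulator, and tiny stand-ins for Jasmine's describe/it/expect are included only so the snippet runs standalone outside the browser; in the real project Jasmine itself provides those globals.

```javascript
// Hypothetical function under test (stands in for the real app code in /Src).
function add(a, b) { return a + b; }

// --- minimal stand-ins for Jasmine's globals, for standalone running only ---
function describe(name, fn) { console.log(name); fn(); }
function it(name, fn) { fn(); console.log('  passed: ' + name); }
function expect(actual) {
  return {
    toBe: function (expected) {
      if (actual !== expected) throw new Error(actual + ' !== ' + expected);
    }
  };
}

// The spec itself - this part is plain Jasmine syntax.
describe('add', function () {
  it('sums two numbers', function () {
    expect(add(2, 3)).toBe(5);
  });
});
```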

Running tests using SpecRunner.html

Jasmine is a JavaScript test framework that runs tests inside the browser. For that we just need to create an HTML file and reference the Jasmine js files, the app/production js files and the test scripts. In this case a SpecRunner.html is available in the below location. Browsing it will run the tests and show the results.
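A minimal SpecRunner.html is just the includes in the right order; the library paths and file names below are placeholders, not the real repository's layout:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Jasmine Spec Runner</title>
  <!-- Jasmine framework files (paths depend on where Jasmine is unpacked) -->
  <link rel="stylesheet" href="lib/jasmine/jasmine.css">
  <script src="lib/jasmine/jasmine.js"></script>
  <script src="lib/jasmine/jasmine-html.js"></script>
  <script src="lib/jasmine/boot.js"></script>
  <!-- production code first, then the specs -->
  <script src="../Src/app.js"></script>
  <script src="appSpec.js"></script>
</head>
<body>
</body>
</html>
```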


Make sure the tests run from SpecRunner.html before moving on. It is always recommended to break a big job into small tasks that can each be done in less than an hour.

Setting up NodeJS & NPM

The above setup works great on our own machine. But in a CI environment, we need to run the tests programmatically. Travis-CI doesn't have client-side-only JavaScript+HTML app support, but it does have node_js support, which we can leverage here to test our app.

The next step is to add some Node.js pieces. Basically we need to convert our folder into a Node.js application. We can do that by simply issuing the 'npm init' command from a command prompt inside our folder. It will ask some questions and create a package.json file.
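npm init produces a file along these lines; the name and description here are illustrative:

```json
{
  "name": "travis-ci-test",
  "version": "1.0.0",
  "description": "Pure HTML+JS SPA with CI via Travis-CI",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}
```

The placeholder test script gets replaced in a later step.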

Introducing Karma & PhantomJS

Once our folder is npm enabled, or in other words we have a package.json, we can install packages. Karma is a test runner that can run the Jasmine tests we wrote in the previous step. JavaScript needs an engine to execute, and Karma is not a js engine itself, so it needs one. It can either use a regular browser or a headless browser called PhantomJS. PhantomJS comes as an npm package. All the packages mentioned so far can be installed via npm. See the devDependencies section in the package.json file for the packages we need.

On our local machine we can use the npm install <package name> --save-dev command to install the packages. On the CI machine, this happens automatically by reading package.json.

Setting up Karma & PhantomJS

Karma wants to know where the production & test js files are in order to run the tests. For that it needs a config file. In this case, it is karma.config.js.


This file can be created using the karma init command, which asks some questions and creates the file. The file is self-explanatory.

How npm knows Karma is the test runner to be used

Once the setup is done, we execute the tests by issuing the below command.

npm test

How does npm know to run the tests using Karma? It is again configuration inside package.json.

There is a section called 'scripts' and it contains a 'test' entry. The test entry expects an executable command. There we can give the command to run Karma along with the karma config we created in the previous step.
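In package.json that looks roughly like this; --single-run tells Karma to run once and exit, which is what CI needs:

```json
{
  "scripts": {
    "test": "karma start karma.config.js --single-run"
  }
}
```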

Once this step is done properly, issuing the 'npm test' command should run the tests and show the results. Testing every step is important.

Upload to FTP via ftp-deploy npm module

Now comes the deployment. If all the tests pass, we can deploy to our staging or production environment. In this case I want to deploy to an FTP location. The hosted CI platform Travis-CI, which we are going to use, doesn't have direct support for deploying a folder to FTP. We would have to rely on the curl command, which can transfer only one file at a time, and one file at a time doesn't work most of the time.

Now there are 2 ways
  • Use tools such as grunt / gulp which can orchestrate the CI activities. They have FTP deploy methods. (Google knowledge; I didn't try it myself.)
  • Write some code in shell script or Node.js and hook it in after the tests run successfully.
As there are already enough tools and technologies just for CI & CD compared to the production code, let's write some custom Node.js code to upload files to FTP. No to Grunt and Gulp for now.

The FTP upload code uses the ftp-deploy npm package to upload the files. Why should we rewrite code for reading folders, reading files and uploading over the FTP protocol? That is something many people have already solved and made open source.

The FTP code can be hooked into the integration process after the tests by editing the package.json file. We can use the 'posttest' script for it.
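A hypothetical ftpupload.js along these lines, wired in via "posttest": "node ftpupload.js", is a sketch only; ftp-deploy's config field names have changed between versions (older releases used username instead of user), so check the docs of the version you install:

```javascript
// ftpupload.js - sketch of deploying the site folder over FTP using the
// ftp-deploy npm package. Credentials come from environment variables so
// they never appear in the public repository or the public build log.
const config = {
  host: process.env.ftp_host,
  user: process.env.ftp_user,          // some ftp-deploy versions call this 'username'
  password: process.env.ftp_password,
  localRoot: process.env.ftp_localPath || 'src',
  remoteRoot: process.env.ftp_remotePath || '/',
  port: 21
};

if (config.host) {
  // Only load and run ftp-deploy when credentials are actually configured.
  const FtpDeploy = require('ftp-deploy');
  const ftpDeploy = new FtpDeploy();
  ftpDeploy.deploy(config, function (err) {
    if (err) {
      console.error('FTP deploy failed', err);
      process.exit(1);   // fail the build so a broken deploy is visible
    }
    console.log('FTP deploy finished');
  });
} else {
  console.log('ftp_host not set; skipping FTP deploy');
}
```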

Testing everything in local machine

If the above environment is ready, we should be able to run everything on our machine. The command 'npm test' should run the unit tests and, if everything passes, deploy to the FTP folder. For this to work locally, we need to give the FTP credentials to the ftpupload script via environment variables. We will see why environment variables are used in the next section.

Intro to Travis-CI and getting started with it

Travis-CI is a free hosted CI & CD SaaS environment for open source projects, similar to AppVeyor. We could have used AppVeyor, which was referred to in previous posts and has already been used in other .Net projects, but this time let's try something new. Below are the steps.

  • Upload the code to Github.
  • Create account in Travis-CI using Github.
  • Connect to Github project.
  • Set up the below environment variables for FTP access.
    • ftp_host
    • ftp_user
    • ftp_password - Do not turn on the flag to display it in the build log. The build log is public.
    • ftp_localPath - Path to the folder inside the CI machine. Usually src
    • ftp_remotePath - Path to where the files should be copied. Relative URL only
Refer to a build log for sample values. Once this is working, just do a push to Github to start the integration process.
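The .travis.yml itself stays small because npm does the heavy lifting; the node version pinned here is just an assumption of what was current at the time:

```yaml
language: node_js
node_js:
  - "4"
# 'npm install' runs automatically for node_js projects, and the
# default script is 'npm test', which also triggers the posttest
# FTP upload when the tests pass.
script: npm test
```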

UI testing in Travis-CI

Travis-CI doesn't support running tests in Chrome out of the box. They say we need to give some commands to get it running, but for me it didn't work out. I tried 2-3 times and it failed to start Chrome, so I decided to use PhantomJS. Anyway, in this sample application the main objective is to test the JavaScript, so it works.

If any steps are missing or difficult to follow, just fork / download the sample. There is no better documentation than the code itself.

References

http://orizens.com/wp/topics/first-steps-in-setting-up-travis-ci-to-your-javascript-project/
https://blog.logentries.com/2015/01/unit-testing-with-karma-and-jasmine-for-angularjs/
http://swizec.com/blog/how-to-run-javascript-tests-in-chrome-on-travis/swizec/6647
http://gis.utah.gov/how-to-wire-up-travis-ci-to-your-js-projects/
http://www.sitepoint.com/testing-javascript-jasmine-travis-karma/

Tuesday, February 9, 2016

Automated integration testing in ASP.Net MVC & Forms without IIS

Background

I have written about testing ASP.Net web forms in one of my previous posts. This is the era of MVC and I am still writing about web forms. Mainly, I work with web forms only for my web site http://joymononline.in. I started it before the MVC days and I still don't see any reason for converting it to MVC. It serves its purpose using web forms technology, so why should I change? Another reason I keep it in web forms is to remind myself that technology should not drive our architectural decisions. Every time before I suggest MVC for any project, I think about my own web site and think twice about whether we should go to MVC or not. Nobody in my company agrees or would do anything in old web forms; according to them we should never do anything in web forms. But it is still worth considering and making sure there is a specific reason for selecting ASP.Net MVC.

Recently I tried to implement Continuous Integration and Continuous Delivery for my web site. Details of doing it via AppVeyor can be found in another post. It was very easy to run unit tests on the integration server.

Problem

But one of the issues I found during that exercise was running the integration tests. What I wanted was to make sure all the pages in my site are browsable after each check-in and deploy to staging. I was using the [HostType("ASP.NET")] attribute to run the tests against IIS, as mentioned in my earlier post about testing web forms. This works well on the development machine because I created the IIS web application from Visual Studio and the project points to that web application (Project Properties->Web->Server->Local IIS in VS 2013). But on a build machine this hosting will not happen automatically. We would need to write a script to host the application in IIS, which to me is an extra task.

Possible approaches

This problem is not new, and many people have solved it in many ways. There are free solutions such as tools like Selenium, or we can use hosting libraries such as WatiN, MVCIntegrationTestFramework etc. Some more options can be seen in the link below.
http://stackoverflow.com/questions/118531/what-is-the-best-way-to-test-webforms-apps-asp-net

My Solution

The requirement is clear: we need to host the site in a web server to run integration tests. So why not start a web server before the tests begin and have the tests request the site pages? This led me to the MVCIntegrationTestFramework library. This .Net library helps us host any folder as an IIS web application. It also has some features for testing the results of MVC responses.

No need to get confused by the MVC prefix. As mentioned earlier, we can host any folder as a web application, so web forms work as well. The main git project has many clones, and one of them also provides a nuget package named FakeHost, so it is easy to include in a VS test project.

Continuous integration

Since it doesn't need an upfront IIS web application, it is easy to run the tests from a CI server, provided the server has IIS. AppVeyor has IIS and the tests run smoothly there.

Once again, this is my solution to a small problem: integration testing of my personal web site. It may not be applicable everywhere. If anybody wants to see how it is done in production code, have a look at my personal web site code in Github. Now I am really enjoying the free CI & CD life. I don't know how to describe the joyful feeling of getting the source compiled after each check-in, automatically tested, and published to staging and production if everything goes well. Everything free.

Happy CI & CD

Tuesday, February 2, 2016

What is BrowserLink and how to get rid of that?

We were in a JavaScript debugging session which involved Web Workers, the File API and jQuery in an https-enabled page. There were so many things in the F12 console window that it increased the difficulty of debugging. When the Web Workers seemed not to be accepting messages, we started putting in console.logs to understand where it actually breaks. Soon we noticed a .Net exception stack appearing in the browser console as follows:

[19:45:14 GMT-0400 (Eastern Daylight Time)] Browser Link: Exception thrown when trying to invoke Browser Link extension callback "madskristensen.editorextensions.browserlink.unusedcss.unusedcssextensionfactory.GetIgnoreList":
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.NullReferenceException: Object reference not set to an instance of an object.
   at MadsKristensen.EditorExtensions.BrowserLink.UnusedCss.UnusedCssExtension.GetIgnoreList()
   --- End of inner exception stack trace ---
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Microsoft.VisualStudio.Web.BrowserLink.ClientMessaging.DispatchMessage(BrowserLinkConnection connection, MessageArgs obj)

The debugging happened on a dev machine. At first we thought some VS extension was failing, but why should that show up in the browser? Sometimes when we debugged js inside the browser it entered a strange file other than jQuery, which took the frustration to its height. The debugging was more or less over after some time, but this weird call stack stuck in my mind.

Immediately after the debugging, I googled what this Browser Link is. It is a feature from Microsoft, available from VS 2013 onwards and enabled by default, for dealing with multiple browsers; more specifically, for refreshing multiple browsers associated with Visual Studio. If this is enabled, Visual Studio establishes links to the browsers and can refresh them. It uses modules to inject JavaScript etc.

Wow... I immediately disabled it by following the below link.
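Besides disabling it from the Visual Studio toolbar, Browser Link can also be switched off per application with an appSetting in web.config; this uses the documented vs:EnableBrowserLink key, but verify it against your VS version:

```xml
<configuration>
  <appSettings>
    <!-- Turns off Browser Link script injection for this app -->
    <add key="vs:EnableBrowserLink" value="false" />
  </appSettings>
</configuration>
```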

http://blogs.msdn.com/b/webdev/archive/2013/06/28/browser-link-feature-in-visual-studio-preview-2013.aspx