Tuesday, January 31, 2017

Serverless JoymonOnline.in - Intermediate update & experience

Serverless is the new buzzword and it is getting a lot of attention. It is an architectural style where developers don't need to worry about servers for deployment, scaling, etc... Just write and push the code; it scales automatically.

In the previous article about Serverless, I mentioned that I am in the process of converting my personal site to Serverless. The decision was purely financial. Why should I pay GoDaddy for hosting a few pages that GitHub Pages can serve for free?

This is just an intermediate update about that effort.

Steps

Below are the steps taken during this effort. They may vary from project to project based on complexity. JoymonOnline.in is just a profile site which doesn't use any custom modules or handlers.
  1. Avoid server side data rendering
    1. Replace .Net third parties with equivalent JavaScript API
    2. Convert each .aspx page to contain its own Angular application.
    3. Avoid ASP.Net specific skin files and use pure CSS
  2. Client side integration
    1. Replace ASP.Net master page with Angular routing.
    2. Bundling, Minification, CI & CD
  3. Backward compatibility
    1. Links
    2. SEO
Please note that these steps are relevant if the decision is 
  • Not to start a fresh app
  • To ensure that the code base is ready to deliver at any point in time.
If the business can keep quiet during the migration, or the development can be done in a separate branch and merged later, these steps are not required. Just start a new app.

Avoid server side HTML rendering

The current www.JoymonOnline.in site uses ASP.Net Web Forms technology. In other words, it generates HTML on the server side and sends it to the browser. There is no rule that Serverless should not use server side rendering. It can, provided the server side rendering scales automatically without any extra work. For example, a WebAPI service endpoint can return HTML and it can be rendered in the browser with the help of JavaScript. That introduces some kind of client side HTML manipulation. Once we take that decision, there is not much difference between rendering all content on the client side v/s rendering some content on the server and injecting it into client side views. So in short, Serverless goes hand in hand with client side web apps, which we generally call SPAs (Single Page Applications). It can't be done in one step, so the sub tasks are as follows.

Replace .Net third party with JS API

If we want to replace the third party .Net APIs with JS equivalents, there are 2 options
  1. Use an equivalent JS library
    1. Make sure the critical API keys are not exposed. Do client side authentication or be happy with the free quota.
  2. Write a FaaS which uses the same .Net API and returns the required data to the client.
    1. We have to make sure it is secured, the key is present only at the service, and the service can only be called from our app.
Coming back to JoymonOnline.in, it uses some third party APIs on the server side to get data and render it via ASP.Net. The GitHub API and the Blogger API are the main third parties. So these calls need to be made from the client machine. For GitHub it was easy, as there is a straightforward JS library. For Blogger it was a little difficult, as the Google Feed API is discontinued. Finally it worked with the blogspot feed URL.

But for GitHub, there is a limitation due to the quota for unauthenticated API calls. Since the developer key cannot be exposed to the client, GitHub allows only 60 requests / hour / IP address. Since JoymonOnline.in is not expecting more than 60 calls per hour from a single IP address, it is OK to live with the quota.
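
A minimal sketch of such an unauthenticated client side call, using the browser's fetch API and a hypothetical user name (the actual site uses a GitHub JS library):

// Unauthenticated call to the GitHub REST API from the browser.
// This counts against the 60 requests / hour / IP quota mentioned above.
fetch('https://api.github.com/users/someuser/repos')
  .then(function (response) { return response.json(); })
  .then(function (repos) {
    repos.forEach(function (repo) { console.log(repo.name, repo.html_url); });
  })
  .catch(function (err) { console.error('GitHub API call failed', err); });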

Convert each .aspx page to contain its own Angular application

The next step is to convert the pages to Angular one by one. This is to make sure that the application is always in a functional condition, rather than looking like a demolished house before renovation. Convert each page to its own Angular app and render that app inside the ASPX page. Let the page be just an Angular app host. Nothing else.
In this step we have to identify equivalent replacements for third party ASP.Net UI controls, if applicable. For example, how to replace Telerik Web Forms controls with Kendo UI.

In case the plan is to first get to ng1 and later to ng2, I would strongly recommend going with components rather than directives. TypeScript is becoming the de facto standard for Angular; use it to reduce debugging time. After this step, we will still have an ASP.Net application, but the functionality will be running in the browser via Angular.
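
A minimal sketch of an Angular 1.5+ component written in TypeScript (the component and module names here are hypothetical; the real site registers its components through a wrapper class):

import * as angular from 'angular';

class ProfileController {
    public title: string = 'Projects';
}

// A component instead of a directive; this maps almost one to one to an ng2 component.
const profileComponent: angular.IComponentOptions = {
    controller: ProfileController,
    template: '<h2>{{$ctrl.title}}</h2>'
};

angular.module('profileApp', []).component('profile', profileComponent);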

Avoid ASP.Net specific skin files and use pure CSS

The next thing to tackle is the styles. If the application is using skin files, we have to create equivalent CSS files. Sometimes we may need to do this along with the previous step; it depends on the nature of the third parties and our code.

Client side integration

The next big step is to integrate things at client side.

Replace ASP.Net master page with Angular routing.

The main pending item is to have an Angular 'master page' and routing. We already have separate components in different apps. Just create a new app in index.html and add routing to load the proper components. This step sounds simple, but it involves many things.
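
A minimal routing sketch using ngRoute (route paths and component names are assumptions, and @types/angular-route is assumed to be installed; the real application may use a different router):

import * as angular from 'angular';
import 'angular-route';

angular.module('siteApp', ['ngRoute'])
    .config(['$routeProvider', ($routeProvider: angular.route.IRouteProvider) => {
        // Each old .aspx page becomes a client side route rendering a component.
        $routeProvider
            .when('/projects', { template: '<projects></projects>' })
            .when('/about', { template: '<about></about>' })
            .otherwise({ redirectTo: '/about' });
    }]);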

Bundling, minification, CI, CD etc...

Now it's time to optimize the site and integrate it with a build pipeline. Until this point, the expectation is that we are able to use the existing build system. Webpack is recommended for client side activities such as compilation and bundling. CD is always a confusing term, Continuous Delivery v/s Continuous Deployment; at least make sure it is Continuous Delivery.

Currently, JoymonOnline.in uses AppVeyor for CI & CD. Though AppVeyor is mainly intended for Windows development, it supports NodeJS as well, so the decision was to continue with AppVeyor. At this point the build output is pushed to the \docs folder as staging. GitHub Pages recently started supporting serving web pages from the \docs folder instead of the gh-pages branch.

It is not fun to develop a JS based client side app using a Visual Studio solution file. After this step we don't need a .sln file to work with. Node rules the client side web development world, and at some point we have to introduce it. So why not from the start, if it is possible to live with only Node? When I say 'start', it means the start of the migrated application. If anyone feels they can live with a .sln file without Node, let them continue, but it is better not to mix the two. Comparing Node with .sln is not the right comparison; the intention is to use Node and its ecosystem such as NPM, package.json, etc...

Backward compatibility

These may be optional depending on the scenario.

Links

This will make sure that the bookmarks saved by users are still valid. The old ASP.Net application could have been modified to redirect the .aspx pages to the new URLs, but for JoymonOnline.in this was not done.

SEO

SEO effort is needed if the application is public facing.

Each of these steps needs an explanation big enough for a separate post. At this point, the entire application has been converted and is available at joymon.github.io. The main pending item is to change the DNS settings in GoDaddy to point JoymonOnline.in to joymon.github.io.

Tuesday, January 24, 2017

My JavaScript Module experiments - CI & CD

This is the second post in the series "My JavaScript Module experiments". This post talks about setting up a simple CI & CD pipeline in a modularized JavaScript environment using AppVeyor as the CI & CD runner.

Other posts in this series below.

  1. My JavaScript Module experiments
  2. My JavaScript Module experiments - CI & CD

Introduction

This post directly jumps to setting up CI & CD for modularized JavaScript. Here also Webpack is used as the module loader as well as for other operations such as TypeScript transpilation, minification, etc...

AppVeyor & GitHub Pages

AppVeyor & GitHub Pages are the 2 external services used here. AppVeyor provides a free CI & CD service for open source projects, and GitHub Pages provides free hosting from GitHub repositories. Basic knowledge about these services is considered a prerequisite.

AppVeyor mainly targets the Microsoft .Net stack and automatically takes care of the build if there is a Visual Studio solution file. But we can make it work for Node as well by disabling the default build process. That can be done with simple changes inside the appveyor.yml file. AppVeyor supports configuring CI & CD activities either in an appveyor.yml file kept in the GitHub repo or through its UI. Here we are using the appveyor.yml file as it is easy to understand. Though it is a publicly accessible file in the repo, we can include sensitive information inside it using encryption.
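
A minimal sketch of an appveyor.yml that turns off the default MSBuild step and runs a Node based build (the Node version and the npm script names are assumptions, not the actual file from the repo):

environment:
  nodejs_version: "6"
install:
  - ps: Install-Product node $env:nodejs_version
  - npm install
build: off
test_script:
  - npm run build
  - npm test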

Separation of Concerns - What does what

When we look at the technologies used, we can see that they can all do the same tasks. AppVeyor can run normal commands as well as PowerShell, and can even start NodeJS, which in turn starts Webpack. Using normal commands, PowerShell & Node we can do file copy operations. Webpack, which can be started as a command, can also execute code which does file copies. What is the problem if all the technologies can do the same task? The problem is a lack of clarity on which component does what.

For small projects it is fine; we can have both the AppVeyor scripts and Webpack do file copies. But when we think of larger projects, we need a clear distinction on which component should do what. Otherwise it would be difficult to maintain in the future.

The simple rule can be to assign the dev related tasks, such as compilation and bundling, to Webpack, and the deployment tasks, such as pushing to GitHub, to the AppVeyor scripts. Ultimately AppVeyor rules the environment, so there should be one AppVeyor command which starts Webpack. The duties are summarized as follows.
  • AppVeyor
    • Downloads the source code
    • Installs the proper Node version and sets up the environment by installing packages.
      • This includes the Webpack NPM package too.
    • Starts Webpack
    • Runs tests.
    • Collects the artifact and deploys it
  • Webpack
    • Compilation
    • Minification / uglification
    • Creating the bundle
    • Emitting the output to the \dist folder. (\dist is just a convention. It can be any folder)

The CI & CD workflow

The CI & CD workflow can be simple. After the transpilation & bundling, the output HTML app, which includes html, js, images, etc..., needs to be pushed to the \docs folder of the repository. The GitHub repository can be configured to serve web pages from the \docs folder.

Sample

It is easy to explain, but there is a high possibility of missing steps. So to make it more understandable, please refer to the sample repo described below.

  • It does CI & CD on the 'TypeScript-Angular1-Adv' folder. The other folders show different Webpack samples.
  • It pushes the output to the \docs folder in the same repo. The \docs folder is configured for GitHub Pages to serve from.
  • It is a slightly advanced Angular 1 sample done using TypeScript.
  • Node & NPM are used as the dev technology.
  • AppVeyor.yml has all the CI & CD steps. The GitHub token is encrypted and kept inside it.
  • Since the aim of that folder is to minimally show the webpack features, at this point there are no tests included.

Tuesday, January 17, 2017

The great developer divide

This will not be interesting to someone who is looking for code snippets. This is going to be a kind of theory or history class with some predictions. As always, predictions may or may not come true. Let's come to the point. What is meant by the developer divide? In a simple sense, developers are going to be divided into 3 species.
  • Application integration developers
  • AI / Algorithm developers
  • System / infrastructure developers
    • Cloud  developers  - includes device driver writers.
Before we think "oh, there are already 2 groups of developers, and they have been named the same for at least the last 10-15 years", what is new?

Evolution in Biology

Let's look at how new species evolve in biology. If you are an opponent of evolution, you may as well stop reading, as without understanding biological evolution it is difficult to understand any evolution, including the formation of new languages. In biological evolution, when some group does different things than the other group of the same species, or they are separated geographically, a new species is formed. If the divide lasts only a small amount of time, they can mingle together and continue as one species. Otherwise, there will be 2 species.
Another reason for the survival or destruction of a species is the ability to suit the environment. For example, consider the famous peppered moth scenario from England during the industrial revolution.
This is the easy and old explanation. The real reason is gene mutation, which goes one level deeper. Only the members who get useful gene mutations will produce more offspring, and after a long time they become a new species with a clear distinction from the old.

Characteristics of these developer species

Coming back to software, what are the characteristics of these new species?

Application integration developers

These are the majority of developers. They write applications which accept data from users and store it somewhere. When users want the data, they just show it to them or send it to them periodically by some push mechanism. They do some transformations on the data, but these are more towards formatting it. They never create new data, i.e. knowledge from the data, except for the generation of logs.

More importantly, this species of developers will vanish or become low profile when the other species of developers become stronger. It will be difficult for these application developers to convert to the other species, though some may succeed and survive. Whatever these developers were doing will become a task for business people. For example, a business analyst will be able to assemble applications and do most of the customizations.

Currently there are some environments where we can see this happening. Visual Studio LightSwitch is one example.

AI / Algorithm developers

This species will write complex algorithms and expose them as services with the help of the infrastructure developers. Application integration developers will consume the algorithms written by this group before they go extinct with the invention of AI programs that do the coding.

This type of developer will be high profile, high in number, and will have job safety for a longer time. They will know the basics of application integration but never what the infrastructure developers do or how they work.

We can already see examples such as Algorithmia, Azure Machine Learning, etc...

System / infrastructure developers

This species will know how to deal with the bare metal machine, whether it is silicon based, quantum based, or even a carbon based biological machine. They cleverly abstract the hardware from the application integration developers and the AI developers.

There will be fewer developers in this species. They control most things. They may consume the algorithms written by the other species without knowing how they are done.

Reasons for this prediction.

No one can just predict something and run. They should have some kind of reasons for the prediction.

Bare metal is going away from developers

This is something everyone agrees on nowadays. There are many, many abstractions coming over the hardware, and the main reason is to code once and run everywhere. Other reasons we hear are reusability, lower development cost, etc...

Frameworks

When we were developing for the Intel 8085, we had to know the CPU registers, their sizes, memory addressing and port addresses. Now, if we ask a new generation .Net or Java or JavaScript developer how many registers there are in the CPU, they will not have any clue. If we go further back in time, there were punch cards, and developers had to know many mechanical & electrical properties of the system. After the 8085 days we got assemblers and compilers, and we entered the managed world.
Now from the managed world we are moving to the world of integration. Connect some dots and the software is ready. Even for device programming JavaScript is used, which completes the story.

Cloud is abstracting

Another abstraction is happening in the distributed systems world. Distributed means something which is done with at least 2 Turing machines, or where data is serialized and passed through. Cloud started slowly with IaaS, where we got virtual machines. Then came PaaS, where we don't need to worry about machines anymore. It has now reached FaaS, where each function is a service and can be deployed independently. Another sweet name for this is Serverless. Now people are saying that despite the name Serverless, there are servers underneath, but the developer doesn't need to worry about them.

But think of a time when the underlying machine is a quantum or biological computer. We can still call it a server, but the underlying system will have completely changed, and the consumers will have no clue how the FaaS executes their code.

AI Algorithms are too complex to be understood by all

Earlier, colleges taught sorting and searching algorithms. But the people who studied them were not coding those algorithms every day, as they came as part of the standard libraries. That helped developers concentrate on the real business problem, which is good. After those days, business requirements got the new face of data analytics. Everyone wanted to analyze their data. Some at least know what they want from the data; some are just fiddling with the data looking for treasure. That brought good momentum to the field of algorithms. Many algorithms were developed, but the same question came again: should we just consume those algorithms, the way we consume sorting and searching, or should we learn them? Obviously the route is the same: consumption.

Another reason is their complexity. It is very difficult for a normal application developer to know how a face recognition algorithm works, as it requires prior knowledge in different fields. So consumption is the easy route. This eventually boosts the developer divide.

There may be many more reasons which are not listed here. However, developers are facing the unavoidable division which occurs in every growing field of science and technology. The best practice is to select your own field and be an expert in it.

Thanks for reading.

Tuesday, January 10, 2017

My JavaScript Module experiments

Background

After I wrote my post about selecting Angular 1 for my personal site when Angular 2 is out, I seriously started thinking about myself. Am I too joining the group who hates change? As mentioned in that post, the main bottleneck is not the concept or syntax of ng2, nor its performance and stability. The problem was around tools. What are those tools doing? They help us write modularized JavaScript. They do transpilation, bundling, minification, etc...

If we download a standard ng2 project starter template, we get a pretty good running starter. The tools will already be configured. From there we can simply add our own functionality. Unless we have crazy requirements, that configuration will work; we just need to add features. Then what is the trouble? The trouble, at least the one I had, was that I did not get how those tools were doing the magic. I checked with some other people who have problems with ng2, and I could see they also have trouble understanding the tool chain. It is very difficult to move ahead in that magic setup, especially if we are coming from the little old school that wants to know exactly how the software works. For most newbies it doesn't matter; as long as the setup/tool chain is working, they can code new features.

Is there a way to survive in web development without knowing bundling? The answer is 'no' as that is the way forward to build huge apps.

So what is the solution? Learn the JavaScript module system from the basics with simple samples, rather than from starters which are configured to use features such as transpilation, source maps, minification, unit testing, dev & prod modes, etc...

This post explains how I learned the module system in my own way, though there are tons of tutorials explaining the JavaScript module system. Most of the steps point to external sites to avoid duplicate content, and of course they explain it better than I do.

What is JavaScript module system

If we have a background in Java, it is the equivalent of the Java package system. For .Net people, it can be compared to assemblies. As simple as that. But since the JavaScript engines don't have native support for modules, we have to rely on different tools which do the module implementation. To me it is like a workaround until the browsers or the JS engines know what a module is, at least in the case of module loading. I am not expecting any JS execution environment to ever help us create a bundle.
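
A minimal sketch of what a module boils down to in the CommonJS style that webpack understands (the file names are hypothetical):

// logger.js - a module exporting one function
module.exports = {
    log: function (message) { console.log('[app] ' + message); }
};

// app.js - another module consuming it
var logger = require('./logger');
logger.log('modules at work');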

What is the relation between Angular 2 and this module system? Angular 2 uses this module concept, and it is good to follow the same for our app as well. So if we download any Angular 2 sample or starter, we can see it uses SystemJS or Webpack. As mentioned earlier, we can either just download a starter and extend it without knowing how the magic works, or learn the module system from scratch.

Selecting WebPack

If we google for JavaScript module tools we will see many: Browserify, RequireJS, SystemJS, webpack, to name a few. We can debate for days which one is better, and that itself requires a separate post. So for the time being webpack is selected, as it is what angular-cli picked and at least it is not in beta.

NodeJS & NPM are everywhere

It is very difficult nowadays to do web development without knowing Node and its package management tool NPM. Though Node & NPM don't have any role inside the browser, they are used on the tooling side. Webpack also comes as an NPM package. So this post assumes the reader has knowledge of NodeJS & NPM.

Step 1 - Understand via simple ECMA 5

The first thing for a JS developer, unless we are born into ES6, is to learn how we can modularize ECMA5 or ES5, or simply the traditional JavaScript code. There is no need to write one more tutorial to understand how ES5 can leverage webpack. Just go to the 'getting started' tutorial of webpack and try it.

Please try till the section "THE FIRST LOADER" in the 'getting started' link below.

http://webpack.github.io/docs/tutorials/getting-started/

Before using loaders, it is good to know something else first. We will obviously come back to loaders, because loaders do the magic.

After trying the above, remove the .js extension from require and run. Sample below.

var content = require("./content.js");
to 
var content = require("./content");

We can see it still works. The file extension is optional.

Step 2 - webpack.config.js

If we go through any webpack tutorial, we can understand that most of the issues in using webpack are because of a wrong webpack.config.js. So it is important.

Minimum config

Let's see what the minimum config required in the webpack.config.js file is. Have a look at the link below, starting from the "Defining a config file" section, where it is explained clearly. Read till "Webpack loaders and preloaders".
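
A minimal sketch of a webpack.config.js, assuming an entry file named entry.js and a bundle named bundle.js:

module.exports = {
    entry: './entry.js',          // where webpack starts building the dependency graph
    output: {
        path: __dirname,          // emit next to the config file
        filename: 'bundle.js'     // the single file index.html refers to
    }
};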

Watch mode

This is a useful mode in development. This is explained in the above tutorial.

Dev Server & hot reload

These are again development time features where we can start a light web server which monitors the development changes and reloads for us. This saves a good amount of time switching to the command window and typing commands. It is yet to be seen what happens to performance if there are 100s of files.
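
A sketch of enabling the dev server from webpack.config.js (the port is an assumption; the same options can also be passed as webpack-dev-server command line flags):

module.exports = {
    // ...entry, output and loaders as before...
    devServer: {
        port: 8080,    // serve the app at http://localhost:8080
        inline: true   // reload the page when the bundle is rebuilt
    }
};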

The feature is explained in the above tutorial.

Step 3 - Enter TypeScript

A big no to ES6 at this point, only because it lacks a compile time type system. This is again debatable. If we are dealing with small applications, it is better to be dynamically typed, i.e. pure JavaScript. But for bigger apps, type checking is a must. Otherwise a good amount of time goes into finding out what the members of objects are and what a function accepts and returns.
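
A tiny illustration of the compile time check TypeScript adds (the function is hypothetical):

function total(price: number, quantity: number): number {
    return price * quantity;
}
// total(10, '2');  // TypeScript rejects this at compile time; plain JavaScript would only misbehave at runtime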

In this section we are going to see how TypeScript and webpack works together.

The article below explains how to use TypeScript with Webpack. Read from the beginning till "Watch Support".
http://www.jbrantly.com/typescript-and-webpack/

Why till "Watch Support". Because it then includes jQuery. Our focus is to understand how webpack can be used to process TypeScript not with jQuery.

Now it's time to clone my webpack repo given below. This will be used for the rest of this post.
https://github.com/joymon/webpack-starters

It has a folder named ECMA5-TypeScript. Point the command line to that folder and run the webpack command. We can see it transpiles logger.ts & tscomponent.ts and outputs bundle.js along with the other ES5 modules. Running index.html from disk will show some log statements.

"TypeScript Worked !!! - logger module loaded" - means an ES5 module was able to load a TS module.
"TS component which has dependency on JS component worked" - hopefully this is self explanatory.

Let's see how it worked.

Loaders in webpack - ts-loader

Loaders help to process files if they are in different formats. For example, today browsers don't know how to read and execute TypeScript, so TypeScript needs to be transpiled to JavaScript. That can be done using the webpack loader mechanism. In this "webpack-starters" repo, ts-loader is used for that purpose. The configuration is present in webpack.config.js.

After the loader does its job, the final bundle.js is created as usual. index.html only knows about bundle.js.
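
A sketch of how such a ts-loader configuration typically looks (this is not the exact file from the repo; the entry file name is an assumption):

module.exports = {
    entry: './entry.ts',
    output: { filename: 'bundle.js' },
    resolve: {
        // Let require()/import resolve files without spelling out extensions
        extensions: ['', '.ts', '.js']
    },
    module: {
        loaders: [
            // Hand every .ts file to ts-loader, which runs the TypeScript compiler
            { test: /\.ts$/, loader: 'ts-loader' }
        ]
    }
};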

TypeScript loading ES5 modules

If we need ES5 code to load a TS module, it is easy. Since ES5 doesn't care about types, we can write any valid JS, and the objects are only checked at runtime. But if we want to use an ES5 module in TS, the first thing is to tell TypeScript what the require() function is. TypeScript as a language doesn't know what require() is. So to tell TypeScript about require(), we have to use the Node.js typings. The Node typings have the definition for require().

To add the Node typings, the sample uses a package called @types/node. Refer to package.json.

Please note that since this post is not aiming at teaching the TypeScript language or what typing files are, more details on @types/node are omitted.
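
For a flavour of it, a minimal sketch of a TS module consuming a plain ES5 module once @types/node is installed (the file and function names are hypothetical):

// tscomponent.ts - TypeScript side; require() is known thanks to @types/node
const jsComponent = require('./jscomponent');  // plain ES5 module, no type information
jsComponent.doWork();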

Step 4 - TypeScript with AngularJS 1.x

Here we move to the next step of using AngularJS 1.x via the webpack module mechanism. The sample is present in the TypeScript-Angular1 folder of the "webpack-starters" repo mentioned above. The sample just shows a simple Angular 1 application with one component.

Setting up ng1+TS+webpack from scratch involves multiple steps which are given below
  1. Initial setup of TypeScript+webpack
  2. Include the NPM package for Angular
  3. Include the @types/angular.
  4. A module where angular.module() is called to setup angular. In the sample refer to ngModule.ts
  5. An entry point module (bootstrap.ts) where we can call the above module and add other Angular constructs such as components, services, filters, etc... Directives are deliberately skipped, for easy conversion to ng2.
  6. index.html to use the required Angular directives. At a minimum, ng-app.
  7. While adding components, export them.
Now have a look at the TypeScript-Angular1 folder of "webpack-starters", if not done already. Below are some questions which might have popped up. The answers are given along with the questions.
  • Why ngmodule.ts?
    • This is just a class wrapping the original angular module object. It gives us a common place for registrations on the angular object, such as registering components, filters, services, etc... This is optional. Just make sure the code which executes angular.module() and the other registration functions gets called.
  • Why bootstrap.ts?
    • This acts as the entry point for the application. It imports all the other modules the app contains, i.e. ngmodule and all the Angular constructs. For simplicity, the sample only includes one simple component.
  • How did require("angular") work? Who exported the angular library for webpack?
    • This is done by node_modules\angular\index.js. It exports the angular object. If we change this, we can see the module is no longer found.
  • Why is webpack-dev-server needed? Can't we double click and run from disk?
    • For just ng-app to work, there is no need for a web server. But for Angular to locate and load the html template, the app must be served via a URL, not just from disk. We can even use IIS to serve the folder. The only requirement is to navigate via the http protocol.
Any new questions can be asked in the comments; they will be included in the post later along with answers.
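
For orientation, a minimal sketch of how the bootstrap entry point and the module wrapper can fit together (simplified; not the exact code from the repo, and the component name is hypothetical):

// ngmodule.ts - thin wrapper around the angular module object
import * as angular from 'angular';

export class AppModule {
    private static instance: AppModule;
    private app: angular.IModule = angular.module('app', []);

    static getInstance(): AppModule {
        return AppModule.instance || (AppModule.instance = new AppModule());
    }
    registerComponent(name: string, options: angular.IComponentOptions) {
        this.app.component(name, options);
    }
}

// bootstrap.ts - the entry point webpack starts from
import { AppModule } from './ngmodule';
AppModule.getInstance().registerComponent('hello', {
    template: '<span>Hello from a TypeScript component</span>'
});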

More

This post shows the tip of the iceberg. There are many more things required to set up a professional application for development and production. With this basic understanding of the module system, learning those will be easy.

Some links are given below which are related to JavaScript modules and bundling

Tuesday, January 3, 2017

TypeScripting AngularJS 1.x - Filters

This post is part of the TypeScripting AngularJS 1.x series. The other posts are listed below. The sample code is created using Angular 1.5.9, TypeScript 1.8 & angularjs.TypeScript.DefinitelyTyped 6.5.6.
  1. TypeScripting AngularJS 1.x - Define module
  2. TypeScripting AngularJS 1.x - Define module - improvements for big enterprise apps
  3. TypeScripting AngularJS 1.x - Using $http 
  4. TypeScripting AngularJS 1.x - Directives
  5. TypeScripting AngularJS 1.x - Filters

Angular Filters in TypeScript

These are small, handy, reusable functions which help us format data for display without modifying the underlying model. Currency and date formatting are excellent examples. We don't need to keep one more property in the model which holds the short form of another date property just for display. There are many filters built into Angular, and it allows us to create new ones as well.

Below is a code snippet to create a custom Angular filter named HTMLTagRemover using TypeScript. As the name implies, it removes any HTML tags in the string passed to it.

export class HTMLTagRemover {
    public static filter(): Function {
        return (text: string): string => {
            return text ? String(text).replace(/<[^>]+>/gm, '') : '';
        };
    }
}
AppModule.getInstance().registerFilter("removeTags", HTMLTagRemover.filter);
It is a simple factory function which returns the actual filter function. The last line registers the filter with the Angular application. It requires some prior knowledge about defining an Angular module the TypeScript way to understand where AppModule.getInstance() comes from and how it works.
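
In a template the filter can then be applied like any built-in filter, for example {{post.content | removeTags}} (assuming a hypothetical post object with HTML content on the scope).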

Below is the code of registerFilter() in the AppModule class
registerFilter(name: string,fun:Function) {
    this.app.filter(name, fun);
}
Here app is the real Angular module object.

Dependency injection 

Without dependency injection Angular is never complete. How can we get the dependencies injected into this filter?
export class HTMLTagRemover {
    static $inject: string[] = ['$sce'];
 
    public static filter($sce: ng.ISCEService): Function {
        return (text: string): string => {
            console.log($sce.trustAsHtml(text));
            return text ? String(text).replace(/<[^>]+>/gm, '') : ''
        };
    }
}
This injects $sce, which can be used to sanitize data, into our filter. Similar to $sce, we can inject our own services too.

Complete code can be found in my personal web site source in Github.