Tuesday, December 11, 2018

Encrypting the ADO.Net connection to SQL Server and verification

Why should we encrypt ADO.Net communication

We have all heard about encrypting web server traffic using the HTTPS protocol. It makes sense even to a beginner that it should be encrypted, since the communication travels over the vulnerable public internet. But ADO.Net also provides an option to encrypt connections to the SQL Server database. Let's see some reasons to encrypt.

Connecting from client machine to database server

Few people connect directly from a client machine to the database using Windows identity or custom identities these days. But it was a common option earlier, and if we end up maintaining legacy systems this is one thing to take care of. A low-hanging fruit to increase security.

Connecting from web or queue processing server to ADO.Net

This is the more common scenario. The client-to-web-server communication is already encrypted, so nobody can intercept it. Once the communication reaches the hosting environment, it is difficult for an outside attacker to intercept the traffic between the web server and the database server. This is true if we are in a corporate environment where someone else has taken care of network-level protection.

Still, there are chances that an insider attacker can intercept it.

When we are in cloud environment

Another reason to encrypt database communication is cloud hosting. Though cloud vendors say they are the best in the world at securing everything and are obliged to keep things secret, there is still a factor of trust. What if something goes wrong and someone intercepts the communication? It is better to encrypt than to take a chance.

If someone hacks into the environment and obtains the encryption key as well, there is nothing to be done. For example, suppose the cloud provider is in a country that is at war with ours and our application is significant enough to help them win the war. It's unavoidable: if there is a benefit, there will be side effects. We save cost but are exposed to a less secure environment.

How to encrypt the SQL Server Communication

Though encrypting the communication is not the final answer to security, let's see how to do it. SQL Server mainly supports certificate-based encryption. In other words, the certificate acts as the key for encryption, or is used for the key exchange that establishes the encrypted channel. Digging into the details of how it works is out of the scope of this post. Below are some links, as those details are readily available.

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine

https://support.microsoft.com/en-us/help/316898/how-to-enable-ssl-encryption-for-an-instance-of-sql-server-by-using-mi
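On the client side, ADO.Net decides whether to request encryption through connection string keywords. A minimal sketch of such a connection string is below; the server and database names are hypothetical.

```text
Server=SQLBOX01.corp.example.com;Database=AppDb;Integrated Security=True;Encrypt=True;TrustServerCertificate=False
```

Setting TrustServerCertificate=False forces the client to validate the server certificate chain. With it set to True, the client accepts self-signed certificates, which encrypts the traffic but leaves it open to man-in-the-middle attacks.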


When does the connection get encrypted? That is a great question. There are many combinations of configuration, and the link below explains when SQL Server encrypts, when it uses a self-signed certificate, when it fails, etc.
https://docs.microsoft.com/en-us/sql/relational-databases/native-client/features/using-encryption-without-validation?view=sql-server-2017

Problem 1 - When connecting from SQL Profiler

"Client unable to establish connection - SSL Provider: The target principal name is incorrect"

https://stackoverflow.com/questions/37734311/mssql-with-ssl-the-target-principal-name-is-incorrect

We have to use the full computer name (the FQDN matching the certificate's subject) when connecting.

Other problems

There are a variety of problems users have reported when they enable encryption without reading all the docs. Some are related to the SQL service account's permission on the certificate, some to expired or wrongly issued certificates, some occurred because the certificate was in the wrong store, etc.

Verify the ADO.Net connection is encrypted

As we saw earlier, even after doing the settings there are chances of the connection falling back to plain, unencrypted mode. This section is about how to ensure the connections are encrypted.

Using the sys.dm_exec_connections

There is a DMV called sys.dm_exec_connections. It can simply be queried like below to check the connections and their encryption status.

SELECT session_id, connect_time, client_net_address, net_transport, auth_scheme, encrypt_option, local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP'

Play with the above code to explore more properties of SQL Server connections. If we need to check whether the connection is encrypted from .Net code, we can use @@SPID to look up the current connection in the same DMV.
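To make the check concrete, below is a small JavaScript sketch of how rows returned by the DMV query above could be scanned for unencrypted sessions. The sample rows are hypothetical; in real code they would come from the database through whatever data access layer the application uses.

```javascript
// Given rows shaped like the sys.dm_exec_connections output above,
// return the session ids whose traffic is not encrypted.
function findUnencryptedSessions(rows) {
  return rows
    .filter((row) => row.encrypt_option === "FALSE")
    .map((row) => row.session_id);
}

// Hypothetical sample data mirroring the DMV columns.
const sampleRows = [
  { session_id: 51, net_transport: "TCP", encrypt_option: "TRUE" },
  { session_id: 52, net_transport: "TCP", encrypt_option: "FALSE" },
];

console.log(findUnencryptedSessions(sampleRows)); // [ 52 ]
```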

What is the certificate used for encryption? - Registry

After we enable encryption via the user interface, we can go to the registry and verify the certificate. Below is the location in the registry.

HKLM:\Software\Microsoft\Microsoft SQL Server\<SQL InstanceID>\MSSQLServer\SuperSocketNetLib

As noted in the guidelines, encryption is enabled per SQL instance. So one machine can have 2 SQL instances, one encrypted and the other not.

What is the certificate used for encryption? - SQL Error Logs

The SQL error logs are another place to verify which certificate is used for encryption.

References

https://blog.coeo.com/securing-connections-to-sql-server-with-tls

Tuesday, December 4, 2018

AngularJS Sunset

I was under the impression that AngularJS would live for some more time after the introduction of Angular. That impression came when the team announced that they would monitor the downloads of AngularJS and define the retirement strategy later.

But now things have changed. The Angular team announced a road map to the end of AngularJS with the 1.7 version. When they say end, it is the end of new releases and official support, meaning there won't be any security patches to the AngularJS framework after the currently set LTS period.

What does the AngularJS sunset mean to us

The period started from June 30, 2018 and lasts till June 30, 2021. It doesn't mean they will release a patch for each and every issue; the exact criteria are outlined in the announcement.

https://blog.angular.io/stable-angularjs-and-long-term-support-7e077635ee9c
https://docs.angularjs.org/misc/version-support-status

We can use AngularJS even after that period, but if new security holes are identified, hackers can exploit them to do whatever they want. So it is better to start migrating to Angular now. We have 3 years to get the migration done.

Bye bye AngularJS

Tuesday, November 27, 2018

Architecture v/s Code - Validating sys admins in SQL Server

In the misused agile world, it is always difficult to make sure the code follows the architecture, or that the architecture document follows the code. 'Misused agile' means using the word agile to make developers work day and night for frequent releases. They struggle to meet the releases, and there are high chances of taking deviations, shortcuts or hacks, which eventually ends up with architecture and code going in 2 separate directions.

One of the best ways to ensure they are in line is by using Simon Brown's C4 architecture model. C4 clearly documents the structure of the code. One area that model does not exactly cover is the security model of the execution environment. Architects have to review the deployment document before any deployment, or at least after deployment, to ensure the security model is right.

With no further introduction, let us take the problem of service accounts having higher permissions than needed. Either, as Architects, we can review each and every DB server instance and raise alarms or defects, or we can make it automated.

The below script gives the list of logins. By looking at the sysadmin flag we can determine the role and raise an alarm. This can even be integrated into CI/CD systems as a validation rule to block deployments, if database updates go via the CI/CD pipeline.

SELECT loginname, sysadmin
FROM sys.syslogins
ORDER BY sysadmin DESC
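As a sketch of how such a CI/CD validation rule could look, the snippet below (JavaScript, runnable under Node) flags sysadmin logins that are not on an allow-list. The login names and the allow-list are hypothetical.

```javascript
// Fail the deployment when unexpected logins hold the sysadmin role.
// `rows` mirrors the output of the query above.
function findUnexpectedSysadmins(rows, allowedLogins) {
  return rows
    .filter((row) => row.sysadmin === 1 && !allowedLogins.includes(row.loginname))
    .map((row) => row.loginname);
}

// Hypothetical sample data.
const rows = [
  { loginname: "sa", sysadmin: 1 },
  { loginname: "app_service", sysadmin: 1 },
  { loginname: "report_reader", sysadmin: 0 },
];

const violations = findUnexpectedSysadmins(rows, ["sa"]);
if (violations.length > 0) {
  console.log(`Blocking deployment, unexpected sysadmins: ${violations}`);
}
```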

Happy reviewing. 

Tuesday, November 20, 2018

Detecting whether the code is running under Karma test runner

Disclaimer

Putting the disclaimer at the beginning: it is a bad practice to check whether our code is running under test and act based on it. Ideally this should be done via dependency injection and mock objects.

But still, this post documents that there is an option to check whether the code is running under test, and tries to explain why it was needed in one scenario.

How to check whether code is run by Karma

The idea came from the below SO question.

https://stackoverflow.com/questions/26761928/how-to-check-if-you-are-using-karma

Before going into the code, let us take some time to understand Karma. It is a test runner. It runs a JavaScript application by starting browser instance(s). Those instances can be either with UI or headless. Headless means no UI; consider it as running in memory.

Let's understand the code

function isKarmaRunning() {
  // Karma sets a __karma__ property on the window object; its presence
  // tells us the code is being run by the Karma test runner.
  const running = typeof window["__karma__"] !== "undefined";
  console.log(`[bootstrap] isKarmaRunning - ${running}`);
  return running;
}

It is self-explanatory and shows the danger itself. It assumes that when the code is run by the Karma test runner, the window object will get a property called __karma__. We can detect the presence of Karma by checking that property.

The danger is that if, in the future, Karma decides to stop setting the __karma__ property or changes its name, our code will fail.

Thanks to laukok for his question with solution.

One scenario to detect whether Karma is running

This is not a scenario where we must check for Karma. Rather it's a hack, and you are free to suggest how to do it right.

The scenario is one where AngularJS and PWA (Progressive Web Apps) are combined. PWA's ServiceWorker feature is used to intercept web requests to populate a cache, or to serve from the cache instead of hitting the server. The cache is filled to ensure that the application can be used even without an internet connection.

The ServiceWorker has a fetch event which is used to intercept web requests. The fetch hook starts working only after the service worker is properly initialized, which may take up to 100ms according to different sources. If the AngularJS application is bootstrapped before the ServiceWorker becomes active, the worker will not catch the web requests, hence they are not cached. Those web requests include the requests from ng to get the HTML views.

If the user stays online and uses the application, there are chances that those web requests are intercepted again and stored into the cache. But what if the user launches the application for the first time and then goes offline? The cache is not populated, hence an error.

To work around the initialization delay, one solution is to explicitly bootstrap the Angular application once the ServiceWorker has started. Then the problem is that the tests will complain that the Angular application is not bootstrapped. So either we need to change all the tests to wait for the ServiceWorker to initialize, or change the application to do the below.

If the application is running under Karma, bootstrap Angular immediately. Else wait for the ServiceWorker to initialize.
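A sketch of that conditional bootstrap is below, with the dependencies injected so it stays testable. `bootstrapFn` and `serviceWorkerReady` are hypothetical stand-ins for `angular.bootstrap(...)` and `navigator.serviceWorker.ready`.

```javascript
// Bootstrap immediately under Karma; otherwise wait for the
// ServiceWorker so its fetch hook catches the view requests.
function scheduleBootstrap(isKarmaRunning, bootstrapFn, serviceWorkerReady) {
  if (isKarmaRunning) {
    // Under Karma there is no ServiceWorker to wait for.
    bootstrapFn();
    return;
  }
  // In the browser, delay bootstrap until the worker is active.
  serviceWorkerReady.then(bootstrapFn);
}

// Exercising the Karma branch with fakes.
let bootstrapped = false;
scheduleBootstrap(true, () => { bootstrapped = true; }, Promise.resolve());
console.log(bootstrapped); // true
```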

Too complicated, isn't it? As software engineers we should have gone back to the original problem and solved the issue by separating the components and directives into a separate ng module, or one of the other approaches.

Hopefully, once there is enough time to analyze, there will be a follow-up post with a better solution. In the meantime feel free to pour in your comments.



Tuesday, November 13, 2018

Freemium - What is always free in cloud platforms

Background

There are many different pricing models in the software industry which don't require any money to be spent to use software: trial, totally free, freemium, sandboxed trial account, etc. They are different in their own ways. Totally free and freemium are the models which really help us build something based on the offering. Freemium might not be a new word for a follower of this blog; there were posts with freemium references, such as serverless monitoring, online diagramming tools, and a toolbox for open source, more precisely for developers who don't want to invest in tools.

Once again, freemium is a model where we can use systems for free, but for advanced usage we need to pay. Even if we don't want the advanced features, the basic features are free forever. Refer to the wiki on freemium.

Freemium cloud features

Let's see what cloud providers offer in the freemium tier, i.e. what is always free, which can help us develop production prototypes and start our own company without investing any money other than the domain, if we want.

Please note that these are not the free accounts or free trials, which we cannot depend on to put something into production.

https://cloud.google.com/free/docs/always-free-usage-limits#container_name
https://azure.microsoft.com/en-in/free/free-account-faq/
https://firebase.google.com/pricing/
https://aws.amazon.com/free/?awsf.Free%20Tier%20Types=categories%23alwaysfree
https://www.heroku.com/pricing

Since the cloud providers may add more services as freemium, only links are provided, not another copy-paste which would need more time to keep up to date.

Freemium doesn't necessarily guarantee that it is free for a lifetime. The vendors have the right to change their pricing.

More links

https://en.wikipedia.org/wiki/Cloud-computing_comparison
https://www.infoworld.com/article/3179785/cloud-computing/aws-vs-azure-vs-google-cloud-which-free-tier-is-best.html
https://www.redpixie.com/blog/aws-vs-azure-vs-google-cloud

Tuesday, November 6, 2018

PowerShell to find the listening port and its process - eg SQL Server Analysis Service

Problem

Enterprise environments always enforce security at different levels. One such level is network ports. There will be fewer questions when we want to open port 443 for HTTP(S) communication. It is also fine if the ports are familiar, such as 1433 for SQL Server. But when it comes to some unfamiliar port, it's difficult. Also, they want to open very minimal ports. So finding and documenting exactly what ports are required is a considerable job for an Architect. This is obviously not applicable to Architects who are there when solutions are defined, or who define the solution themselves. But it's trouble for Architects who reverse engineer what is in the development / testing / staging environment and prepare the document which is essential to move the application to a higher environment.

One such case is SQL Server Analysis Services. For the SQL Server OLTP engine it's easy to find out. But for Analysis Services it's difficult if the installation is done using a custom port.

What Google says

This is not the first time people have encountered the problem, and there are lots of solutions to it. Some from Google are below.

To find ports and processes in general

http://blogs.microsoft.co.il/scriptfanatic/2011/02/10/how-to-find-running-processes-and-their-port-number/
https://stackoverflow.com/questions/42857466/find-process-that-is-blocking-a-port-on-windows

https://superuser.com/questions/1215093/powershell-one-liner-to-show-process-on-same-line-as-port-using-netstat-issue

Find SSAS Ports

https://support.microsoft.com/en-us/help/2466860/how-to-determine-and-change-the-port-of-an-ssas-instance
https://www.cisco.com/c/en/us/support/docs/conferencing/telepresence-management-suite-analytics-extension/112506-pqa-112506-00.html#q1

The general scripts are either based on long custom PS commands or on parsing text output. The SSAS approaches require 2 steps to figure the port out.

Easy solution using PowerShell

Below is another one which does the job in a single PS command. Yes, obviously there is piping, and we can argue whether it's a multi-line script or not.

Finding ports and processes


Get-NetTCPConnection |
    Where-Object { $_.State -eq 'Listen' } |
    Select-Object -Property LocalAddress,
        @{Name = "Process"; Expression = { $psid = $_.OwningProcess; (Get-Process | Where-Object { $_.Id -eq $psid }).Name + ', ' + $_.OwningProcess }},
        LocalPort |
    Sort-Object -Property Process

Pretty much straightforward. It relies on the Get-NetTCPConnection cmdlet, then does some filtering and selection to get the list back.

Finding port of SSAS Service

We can easily apply the above technique to filter the SSAS process.

Get-NetTCPConnection |
    Where-Object { $_.State -eq 'Listen' } |
    Where-Object { $_.OwningProcess -in (Get-Process | Where-Object { $_.Name -eq 'msmdsrv' }).Id } |
    Select-Object -Property LocalAddress,
        @{Name = "Process"; Expression = { $psid = $_.OwningProcess; (Get-Process | Where-Object { $_.Id -eq $psid }).Name + ', ' + $_.OwningProcess }},
        LocalPort


It makes a big assumption that the process name of SSAS is 'msmdsrv'. If in the future Microsoft changes the name, the script will not work.


Tuesday, October 30, 2018

Travis-CI - Continuous deployment to staging but production deployment only on tag commit

Background

Travis-CI is a hosted Continuous Integration & Continuous Delivery tool. Based on configuration, users can have their application deployed to production. They provide a free service to open source projects. The CI&CD environment currently uses Linux.

Aim

  • Have the CI & CD pipeline trigger for each and every commit going into GitHub, but deploy only to the staging or testing area.
  • The testing / staging area is GitHub Pages hosted in the same repo, not in a gh-pages branch but in the \docs folder. Deploying to staging should not trigger another build, which would end up as an infinite loop.
  • When there is a tagged commit, deploy to production.

How it's done

Avoid build on commit from Travis-CI bot

The word bot is used to indicate the CI&CD account used by Travis-CI. We can see that account on the commits from Travis-CI. When the bot commits, it uses a message as follows.

""Deploy <github userid>/<project> to github.com/<github userid>/<project>.git:master""

One such example is given below.

"Deploy joymon/prayerbook to github.com/joymon/prayerbook.git:master"

We need to write build conditions at the top of the .travis.yml file to bypass the build if the commit message matches.

Build and deploy on commit tag

This is another condition we need to include. If we don't, Travis-CI will not build on a tag commit. Below is one example.

if: tag IS present OR commit_message != "Deploy joymon/prayerbook to github.com/joymon/prayerbook.git:master" OR tag = true

If we use tag = true alone it may not work. This needs more analysis of the Travis-CI yml schema.
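Putting the pieces together, a .travis.yml along the following lines might work. The build script and deploy commands are illustrative, not taken from the actual repository:

```yaml
# Build only when a tag is present or the commit is not from the Travis-CI bot.
if: tag IS present OR commit_message != "Deploy joymon/prayerbook to github.com/joymon/prayerbook.git:master"
language: node_js
script:
  - npm run build
deploy:
  # Staging: push the build output to the /docs folder on every commit.
  - provider: script
    script: npm run deploy:staging
    on:
      all_branches: true
  # Production: deploy only when the commit is tagged.
  - provider: script
    script: npm run deploy:production
    on:
      tags: true
```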

A sample travis.yml file can be found below which works based on the above conditions.

https://github.com/joymon/prayerbook/blob/master/.travis.yml

References

https://stackoverflow.com/questions/18216991/create-a-tag-in-github-repository
https://docs.travis-ci.com/user/conditions-v1
https://github.com/travis-ci/travis-ci/issues/8518

Disclaimer

The author does not work for Travis-CI or any of their subsidiaries or partners. There is no guarantee that the technique mentioned in this post will work forever; Travis-CI may change their system the way they want it to be.
This is not a sponsored post.