Tuesday, January 22, 2019

Azure @ Enterprise - dealing with retired services is tough

Though the cloud brings many advantages, vendor lock-in creates real problems, especially when we invest in vendor-specific technologies. Below is one such scenario from my day job, related to Azure.

Story of Azure ChatBot

One of our clients, who is in a highly regulated industry, was very skeptical about the cloud. After a long time they were convinced to adopt Azure; one reason could be their long-term relationship with Microsoft. They started to use Azure with all the enterprise gates in place. In fact, the Azure @ Enterprise series in this blog originated from dealing with those gates.

At the beginning of last year, we started developing a chat bot demo. The idea was to integrate the chat bot into one of the big applications as a replacement for its FAQ. Users could ask the bot questions, avoiding obvious support tickets in the future.

Things went well. The demo was appreciated and we started moving to production. About halfway through, things turned south. The demo chat bot used Bot SDK V3. It had voice recognition enabled, which allowed users to talk to it and get the response back in voice. During the demo we used the Azure Bing Speech API, but later, before production, we got the notice that the service was obsolete and would be retired in mid 2019. Another surprise was the introduction of Bot SDK V4, which is entirely different from Bot SDK V3. Something like AngularJS v/s Angular.

Retirement of Bing Speech API

As per the announcement, the service will no longer work after 15 Oct 2019! We need to migrate to the Speech Service.

They have already given the details on how to migrate.

The sad part is that, soon after the announcement, we were no longer able to create a Bing Speech API resource in the Azure portal. We had just started testing the application in the development subscription.

Another problem is that the new Speech Service is not compatible with Bot SDK V3 out of the box, whereas the Bing Speech API and Bot SDK V3 are compatible and easy to integrate.

Impacts

It was easy for Microsoft, but developers got into trouble. Many internal and client meetings happened, and a lot of time was spent assessing the migration. Finally we decided to contact Microsoft for an exception.
Then the next phase of effort started: raising support tickets and meetings with Microsoft. Luckily, since our client is big for Microsoft, we got an exception to create Bing Speech API resources in new subscriptions.

If it were a startup, the decision might have been taken quickly, but it might not have gotten the Microsoft support we got. For bigger enterprises, it is a tedious job to reach a decision, wasting time that was reserved for delivering features: budget changes, delays, etc...

Bot SDK V4

The problem did not end there with the Bing Speech API. Even if we decide to go with the proposed Speech Service, which replaces the Bing Speech API, Bot SDK V3 is not compatible with it out of the box. So we need to upgrade the Bot SDK to V4 as well.

With SDK V4, Microsoft changed the programming model of bots; upgrading to V4 means almost a rewrite of the V3 code. Also, Bot SDK V4 is only available on .Net Core. What if an enterprise is not ready to adopt .Net Core? Yes, it can happen, and it is happening for us. Again, so much effort.

Another problem is that Bot SDK V4 does not seem to be compatible with the Bing Speech API. Even with our exception, we cannot migrate to SDK V4 alone. Both upgrades have to happen together to make sure the application does not break in the process.

I am always a proponent of CI/CD pipelines. But here, since the application's pipeline has 'build on check-in' and 'deploy on success' enabled, there could be disturbances.

Further thoughts

We are not the only ones on the planet hit by this issue. But we still need to think about how to handle such scenarios in the future.

What if it happens during the maintenance phase of the application?

Right now there is an active development team for the project. But there will come a time when the application is complete and expected to run for long. In that situation, if changes like this come along and cause the application to fail, what guarantee is Azure giving?

Budget for the future

Normally, once an application is built, it goes into maintenance mode with a smaller budget. What if there is not enough budget later to rewrite the application for these breaking changes? It seems we would have to leave the app to its fate, similar to how we abandon satellites in space.

Summary

We survived because the client is a big shot for Microsoft. If you are big enough to influence Microsoft, go for Azure's proprietary PaaS. Else, think twice and thrice.

A simple solution is to use vendor-independent technologies: for example, containers for web apps and APIs, Scala for analytics. That will help us host the application in our own data center if the cloud becomes expensive or vanishes in the future.

Tuesday, January 15, 2019

PowerShell to validate certificate

Scenario

There is a WPF-based enterprise application that works based on a certificate issued to the machine by the enterprise certificate authority. When the application starts, it makes sure there is a certificate in the store issued by the enterprise. For remote service calls, certificate chain based trust is used to secure the WCF communication: the servers accept only client requests that come with a certificate issued by the same certificate authority as the one on the server.

Problem

One day, someone accidentally issued one more certificate to a client machine with the same name. The poor application stopped working, as it did not know what to do when there is more than one certificate.

Panic started. Some were sure that in such a situation one of the certificates should be revoked. Others said the application should be intelligent enough to choose the cert with the longest expiry, etc...

How to understand what really happened? We need to look into one of the client machines, to which developers barely have access. So don't even think about installing or handing over a test application.

Modifying the application is not a big deal, but doing it without understanding what is happening is a waste of time. The problem here is understanding what is happening on the client machine. This might seem very silly to a fresher or to someone working on public products. Welcome to the enterprise; it doesn't work that way here.

PowerShell to rescue

PowerShell is really a revolution: it helps developers run code in restricted environments where they otherwise cannot do anything via an installer or a utility exe. .bat files were there before, but they won't let developers run C#-style code on a machine as is, without compilation.

Let's see a snippet that helps us check whether the certificates are valid or revoked.

Get-ChildItem Cert:\CurrentUser\My -Recurse | ForEach-Object { Write-Host $_.Subject; Test-Certificate -Cert $_ }

The main API is the Test-Certificate cmdlet; the initial part of the pipeline just iterates the certificate store and prints each subject.
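
In the duplicate-certificate scenario above, a similar pipeline can reveal which subjects have more than one certificate and compare their expiry dates. A minimal sketch, assuming the duplicates live in the same CurrentUser\My store:

Get-ChildItem Cert:\CurrentUser\My |
    Group-Object Subject |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group | Sort-Object NotAfter -Descending | Format-Table Subject, Thumbprint, NotAfter }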

Happy scripting...

Tuesday, January 8, 2019

Searching for users in Active Directory

As developers of Windows-based systems, there are situations where we need to deal with Active Directory. This post is about searching for users in Active Directory from C#. The code can run from desktop as well as web applications, as long as the user running the application has permission to query AD and AD is reachable over the network.

Prerequisites

Knowledge of Active Directory and terms such as Forest, Global Catalog, etc. is preferred to understand this post.

Searching in a particular AD Forest

Here the inputs are the search string and the name of the AD Forest.

public static IEnumerable<ADMember> GetUsersByADForestName(string searchString, string nameOFADForest)
{
    DirectoryContext rootDomainContext = new DirectoryContext(DirectoryContextType.Forest, nameOFADForest);
    Forest forest = Forest.GetForest(rootDomainContext);
    return GetUsersFromForest(searchString, forest);
}

As seen above, the Forest object is created from a DirectoryContext, which in turn is created using the name of the AD Forest.

The GetUsersFromForest() method, which does the actual search, is given below.

private static IEnumerable<ADMember> GetUsersFromForest(string searchString, Forest forest)
{
    GlobalCatalog catalog = forest.FindGlobalCatalog();
    using (DirectorySearcher directorySearcher = catalog.GetDirectorySearcher())
    {
        //Use different options based on need. The filter below searches the name fields.
        directorySearcher.Filter = $"(&(objectCategory=person)(objectClass=user)(|(sn={searchString}*)(givenName={searchString}*)(samAccountName={searchString}*)))";

        // Note: SearchResultCollection holds unmanaged resources; dispose it in production code.
        SearchResultCollection src = directorySearcher.FindAll();

        foreach (SearchResult sr in src)
        {
            yield return new ADMember
            {
                Title = GetPropertyValueFromSearchResult(sr, "title"),
                FirstName = GetPropertyValueFromSearchResult(sr, "givenName"),
                MiddleName = GetPropertyValueFromSearchResult(sr, "middleName"),
                LastName = GetPropertyValueFromSearchResult(sr, "sn"),
                Phone = GetPropertyValueFromSearchResult(sr, "telephoneNumber"), // "telephoneNumber" is the standard AD attribute name
                Email = GetPropertyValueFromSearchResult(sr, "mail"),
                DisplayName = GetPropertyValueFromSearchResult(sr, "name"),
                UserName = GetPropertyValueFromSearchResult(sr, "samAccountName")
            };
            
        }
    }
}

Hopefully the above code snippet is self-explanatory. ADMember is just a class with the required properties, sketched below.
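
A minimal sketch of ADMember, inferred from the properties assigned above:

public class ADMember
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string MiddleName { get; set; }
    public string LastName { get; set; }
    public string Phone { get; set; }
    public string Email { get; set; }
    public string DisplayName { get; set; }
    public string UserName { get; set; }
}

Below goes the GetPropertyValueFromSearchResult helper method.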

private static string GetPropertyValueFromSearchResult(SearchResult searchResult, string property)
{
    return searchResult.Properties[property].Count > 0 ? searchResult.Properties[property][0].ToString() : string.Empty;
}

The assembly references needed are given below; the types used live in the System.DirectoryServices and System.DirectoryServices.ActiveDirectory namespaces.
1. System.DirectoryServices.dll
2. System.DirectoryServices.AccountManagement.dll

Searching in the current AD Forest and its trusted Forests

There could be scenarios where we need to search the current AD Forest and any other forests trusted by it. Below goes the code for it (the EscapeForSearchFilter helper is sketched after the snippet).

private static IEnumerable<ADMember> GetUsersByADForestNameAndItsTrustedForests(string searchString)
{
    searchString = EscapeForSearchFilter(searchString);
    List<ADMember> userListFromActiveDirectory = new List<ADMember>();

    var currentForest = Forest.GetCurrentForest();
    userListFromActiveDirectory.AddRange(GetUsersFromForest(searchString, currentForest));

    IEnumerable<ADMember> userListFromGlobalCatalog = GetUsersFromTrustedForests(searchString);
    userListFromActiveDirectory.AddRange(userListFromGlobalCatalog);

    return userListFromActiveDirectory;
}
private static IEnumerable<ADMember> GetUsersFromTrustedForests(string searchString)
{
    var forest = Forest.GetCurrentForest();
    // Note: List<T> is not thread-safe; prefer ConcurrentBag<ADMember> when adding from Parallel.ForEach.
    List<ADMember> userInfo = new List<ADMember>();
    var relations = forest.GetAllTrustRelationships().Cast<TrustRelationshipInformation>();
    var filteredRelations = relations.Where(IsTheTrustValid);
    Parallel.ForEach(filteredRelations, (TrustRelationshipInformation trust) =>
    {
        Trace($"TrustedRelation. Source {trust.SourceName}, TargetName {trust.TargetName},{trust.TrustDirection},{trust.TrustType}");
        try
        {
            DirectoryContext rootDomainContext = new DirectoryContext(DirectoryContextType.Forest, trust.TargetName);
            Forest trustedForest = Forest.GetForest(rootDomainContext);
            var userDetails = GetUsersFromForest(searchString, trustedForest);
            if (userDetails.Any())
            {
                userInfo.AddRange(userDetails);
            }
        }
        catch (Exception ex)
        {
            Trace($" Searching exception {ex.Message} for TrustedRelation. Source {trust.SourceName}, Destination {trust.TargetName}.  Continuing...");
        }
    });
    return userInfo;
}
private static bool IsTheTrustValid(TrustRelationshipInformation trust)
{
    return (trust.TrustDirection == TrustDirection.Bidirectional || trust.TrustDirection == TrustDirection.Outbound)
        && trust.TrustType == TrustType.Forest;
}
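
The EscapeForSearchFilter helper is not shown above. A minimal sketch, assuming it escapes the characters RFC 4515 reserves in LDAP search filters (the backslash is replaced first to avoid double-escaping):

private static string EscapeForSearchFilter(string input)
{
    // Escape LDAP filter special characters as \XX hex pairs per RFC 4515.
    return input
        .Replace(@"\", @"\5c")
        .Replace("*", @"\2a")
        .Replace("(", @"\28")
        .Replace(")", @"\29")
        .Replace("\0", @"\00");
}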

All three methods are present in the above snippet; they help filter the trust relationships and create a Forest object from each TrustRelationshipInformation object.

Please note the code snippets are for demonstration purposes only. They have to be tweaked for production, especially where parallelism is involved.

Tuesday, December 11, 2018

Encrypting the ADO.Net connection to SQL Server and verification

Why should we encrypt ADO.Net communication

We have all heard about encrypting web server traffic using the http(s) protocol. It makes sense to any beginner that it should be encrypted, since the communication goes over the vulnerable public internet. But ADO.Net also gives us the option to encrypt connections to a SQL Server database. Let's see some reasons to encrypt.

Connecting from client machine to database server

Hardly anybody makes direct connections from the client to the database using Windows identity or custom identities these days. But it was an option people used earlier, and if we end up maintaining legacy systems, this is one thing to take care of: a low-hanging fruit for increasing security.

Connecting from a web or queue-processing server via ADO.Net

This is the more common scenario. The client-machine-to-web-server communication is already encrypted; nobody can intercept it. Once the communication reaches the hosting environment, it is difficult for an outside attacker to intercept the traffic between the web server and the database server. This holds if we are in a corporate environment where someone else takes care of network-level protection.

Still, there is a chance that an insider attacker can intercept it.

When we are in cloud environment

Another reason to encrypt database communication is cloud hosting. Though the cloud vendor says they are the best in the world at securing everything and are obliged to keep things secret, there is still a factor of trust. What if something goes wrong and someone intercepts the communication? It is better to encrypt than to take a chance.

Of course, if someone hacks into the environment and obtains the encryption key as well, there is nothing to be done; for example, if the cloud provider is in a country at war with ours and our application is significant enough to help them win the war, that is unavoidable. Every effect has side effects: we save cost but are exposed to a less secure environment.

How to encrypt the SQL Server Communication

Though encrypting the communication is not the final answer to security, let's see how to do it. SQL Server mainly supports certificate-based encryption: in other words, the certificate is the key for encryption, or is used for the key exchange that establishes the encryption. Digging into the details of how it works is not in the scope of this post. Below are some links, as those details are readily available.

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine

https://support.microsoft.com/en-us/help/316898/how-to-enable-ssl-encryption-for-an-instance-of-sql-server-by-using-mi

More visual help here.

When does the connection get encrypted? That is a great question. There are many combinations of configuration, and the link below explains when SQL Server encrypts, when it uses a self-signed certificate, when it fails, etc...
https://docs.microsoft.com/en-us/sql/relational-databases/native-client/features/using-encryption-without-validation?view=sql-server-2017

Problem 1 - When connecting from SQL Profiler

The error was: "Client unable to establish connection: SSL Provider: The target principal name is incorrect".

https://stackoverflow.com/questions/37734311/mssql-with-ssl-the-target-principal-name-is-incorrect

We have to use the full computer name (the one matching the certificate's subject) when connecting.

Other problems

There are a variety of problems users have reported when they enable encryption without reading all the docs. Some are related to the SQL service account's permission to the certificate, some to expired or wrongly issued certificates, and some occur because the certificate was in the wrong store, etc...

Verify the ADO.Net connection is encrypted

As we saw earlier, even after configuring the settings, there is a chance of the connection falling back to plain, unencrypted mode. This section is about how to ensure the connections are encrypted.

Using the sys.dm_exec_connections

There is a DMV called sys.dm_exec_connections. It can simply be queried as below to check the connections and their encryption status.

SELECT session_id, connect_time,client_net_address, net_transport, auth_scheme, encrypt_option, local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP'

Play with the above query to explore more properties of SQL Server connections. If we need to check from .Net code whether our own connection is encrypted, we can use @@SPID to look up the current connection in the same DMV, as shown below.
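
A minimal C# sketch of that check; the connection string is an assumption (Encrypt=True requests TLS from the client side):

using System;
using System.Data.SqlClient;

class EncryptionCheck
{
    static void Main()
    {
        // Hypothetical connection string; adjust server and authentication to your environment.
        var connectionString = "Server=myServer;Database=master;Integrated Security=True;Encrypt=True";
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // @@SPID identifies the current session, so we read our own connection's row.
            var query = "SELECT encrypt_option FROM sys.dm_exec_connections WHERE session_id = @@SPID";
            using (var command = new SqlCommand(query, connection))
            {
                Console.WriteLine($"encrypt_option: {command.ExecuteScalar()}"); // TRUE when encrypted
            }
        }
    }
}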

What is the certificate used for encryption? - Registry

After we enable encryption via the user interface, we can go to the registry and verify the certificate. Below is the location in the registry.

HKLM:\Software\Microsoft\Microsoft SQL Server\<SQL InstanceID>\MSSQLServer\SuperSocketNetLib

As instructed in the guidelines, encryption is enabled per SQL instance, so one machine can have two SQL instances: one encrypted and the other not.
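
A quick PowerShell sketch to read the values; the instance id MSSQL14.MSSQLSERVER is a hypothetical example, so replace it with your own:

# Certificate holds the configured thumbprint; ForceEncryption shows whether encryption is forced.
Get-ItemProperty "HKLM:\Software\Microsoft\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQLServer\SuperSocketNetLib" |
    Select-Object Certificate, ForceEncryption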

What is the certificate used for encryption? - SQL ErrorLogs

The SQL error log is another place to verify which certificate is used for encryption; at startup, SQL Server logs a message stating which certificate it loaded for encrypting connections.

References

https://blog.coeo.com/securing-connections-to-sql-server-with-tls

Tuesday, December 4, 2018

AngularJS Sunset

I was under the impression that AngularJS would live for some more time after the introduction of Angular. That impression came when the team announced that they would monitor AngularJS downloads and define the retirement strategy later.

But now things have changed. The Angular team announced a road map to the end of AngularJS with version 1.7. When they say end, it is the end of new releases and official support, meaning there won't be any security patches for the AngularJS framework after the currently set LTS period.

What the AngularJS sunset means to us

The LTS period started on June 30, 2018 and runs until June 30, 2021. It doesn't mean they will release a patch for each and every issue; the exact criteria are outlined in the announcement.

https://blog.angular.io/stable-angularjs-and-long-term-support-7e077635ee9c
https://docs.angularjs.org/misc/version-support-status

We can keep using AngularJS even after that period, but if new security holes are identified, hackers can exploit them to do whatever they can. So it is better to start migrating to Angular now; we have 3 years to get the migration done.

Bye bye AngularJS

Tuesday, November 27, 2018

Architecture v/s Code - Validating sys admins in SQL Server

In the misused-agile world, it is always difficult to make sure the code follows the architecture, or that the architecture document follows the code. 'Misused agile' means using the word agile to make developers work day and night for frequent releases. They struggle to meet the releases, and there is a high chance of deviations, shortcuts, or hacks, which eventually ends with architecture and code going in two separate directions.

One of the best ways to ensure they stay in line is by using Simon Brown's C4 architecture model. C4 clearly documents the structure of the code. One area that model does not exactly cover is the security model of the execution. Architects have to review the deployment document before any deployment, or at least after deployment, to ensure the security model is right.

With no further introduction, let us take the problem of service accounts having higher permissions than needed. As architects, we can either review each and every DB server instance and raise alarms or defects, or we can automate the check.

The script below gives the list of logins. By looking at the sysadmin flag, we can determine the role and raise an alarm. This can even be integrated into CI/CD systems as a validation rule to block deployments, if database updates go via the CI/CD pipeline.

SELECT loginname, sysadmin
FROM sys.syslogins
ORDER BY sysadmin DESC
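
Note that sys.syslogins is a compatibility view kept for older code. On newer SQL Server versions, a sketch of the same check against the current catalog views could look like this:

SELECT p.name, IS_SRVROLEMEMBER('sysadmin', p.name) AS is_sysadmin
FROM sys.server_principals AS p
WHERE p.type IN ('S', 'U', 'G')
ORDER BY is_sysadmin DESC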

Happy reviewing. 

Tuesday, November 20, 2018

Detecting whether the code is running under Karma test runner

Disclaimer

Putting the disclaimer at the beginning: it is a bad practice to check whether our code is running under test and take actions based on it. Ideally this should be done via dependency injection and mock objects.

But still, this post documents that there is a way to check whether the code is running under test, and tries to explain why it was needed in one scenario.

How to check whether the code is run by Karma

The idea came from the below SO question.

https://stackoverflow.com/questions/26761928/how-to-check-if-you-are-using-karma

Before going into the code, let us take some time to understand Karma. It is a test runner: it runs the JavaScript application by starting browser instance(s). Those instances can either have a UI or be headless; headless means no UI, essentially running in memory.

Let's understand the code

function isKarmaRunning() {
  // Karma injects a __karma__ object into the window of every browser it launches.
  const isKarmaRunning = typeof window["__karma__"] !== "undefined";
  console.log(`[bootstrap] isKarmaRunning - ${isKarmaRunning}`);
  return isKarmaRunning;
}

It is self-explanatory and shows the danger itself. It assumes that when the code is run by the Karma test runner, the window object will get a property called __karma__. We can detect the presence of Karma by checking that property.

The danger is that if Karma ever decides to stop setting the __karma__ property or to change its name, our code will fail.

Thanks to laukok for the question and its solution.

One scenario to detect whether Karma is running

This is not a scenario where we must check for Karma. Rather, it is a hack, and you are free to suggest how to do it right.

The scenario combines AngularJS and PWA (Progressive Web App) techniques. The PWA ServiceWorker feature is used to intercept web requests, either to populate a cache or to serve from the cache instead of hitting the server. The cache is filled to ensure the application can be used even without an internet connection.

The ServiceWorker has a fetch event which is used to intercept web requests. The fetch hook starts working only after the service worker is properly initialized, which may take up to 100ms according to different sources. If the AngularJS application is bootstrapped before the ServiceWorker becomes active, the worker will not catch the web requests, hence they are not cached. These web requests include the ones AngularJS makes to get the HTML views. A minimal fetch handler sketch is given below.
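
This cache-first handler is the standard ServiceWorker pattern, shown here for illustration rather than as our exact production code:

self.addEventListener("fetch", function (event) {
  // Serve from the cache when possible; fall back to the network otherwise.
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});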

If the user stays online and uses the application, there is a chance those web requests get intercepted later and stored in the cache. But what if the user launches the application for the first time and then goes offline? The cache is not populated, hence errors.

One solution to work around the initialization delay is to explicitly bootstrap the Angular application once the ServiceWorker has started. The problem then is that the existing tests will complain that the Angular application is not bootstrapped. So either we change all the tests to wait for the ServiceWorker to initialize, or we change the application to do the following:

If the application is running under Karma, bootstrap Angular immediately. Else, wait for the ServiceWorker to initialize. A sketch is given below.
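
Here the module name "app" and the worker path "/sw.js" are hypothetical placeholders:

if (isKarmaRunning()) {
  // Under Karma, bootstrap immediately so the tests see a bootstrapped application.
  angular.bootstrap(document, ["app"]);
} else {
  // Otherwise wait until a ServiceWorker is active, so its fetch hook catches the view requests.
  navigator.serviceWorker.register("/sw.js");
  navigator.serviceWorker.ready.then(function () {
    angular.bootstrap(document, ["app"]);
  });
}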

Too complicated, isn't it? As software engineers, we should have gone back to the original problem and solved the issue by separating the components and directives into a separate ng module, or by one of the other approaches.

Hopefully, once there is enough time to analyze, there will be a follow-up post with a better solution. In the meantime, feel free to pour in your comments.