
· One min read

This is going to be a very short blog post, but it should be helpful for a lot of Angular developers out there. It is based on a recent question I answered on Stack Overflow about how to reverse engineer an Angular application.

The question was: "I built an Angular app using ng build. I have the built version, but I accidentally deleted my code. Is there any way I can get my code back from the build version?"

Even though the strict answer is NO, you will be able to retrieve around 80% of the code with the following steps.

Step 1: Serve the built app from the dist folder.

Step 2: Use Google Chrome Developer tools (F12).

Step 3: Under the Sources tab, look under webpack:// -> src and you will see all the TypeScript files. You can copy and paste the code, which will help you at least rebuild the structure of your application.
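The reason this works is that a development `ng build` emits source maps whose `sourcesContent` field embeds the original TypeScript. If you still have the generated `.js.map` files, here is a rough sketch (the function name and output layout are my own, not part of any official tool) of pulling those embedded sources back out:

```python
import json
import pathlib

def extract_sources(map_file, out_dir="recovered"):
    """Recover the original sources embedded in a webpack source map.

    Source maps emitted by a development `ng build` usually carry the
    original TypeScript files in the `sourcesContent` array, in the
    same order as the `sources` array.
    """
    data = json.loads(pathlib.Path(map_file).read_text())
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    for src, content in zip(sources, contents):
        if content is None:
            continue
        # webpack names sources like "webpack:///src/app/app.component.ts"
        rel = src.split("webpack:///")[-1].lstrip("./")
        dest = pathlib.Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
```

A production build usually omits source maps (or strips `sourcesContent`), which is why only a development build can be recovered this way.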

· 5 min read

In any business, the value of your data is the cost to reproduce it. For instance, if the data came from a form, the cost is the amount of money in wages needed to re-enter it from where it came. In the case of an application running in production, though, some data simply can't be reproduced: if it is lost or destroyed and there is no backup, no amount of money can get it back. Having a data backup and recovery plan is therefore critical to the overall success of your business. Without one, the business can suffer permanent data loss, massive downtime, and unnecessary expenses.

What is Cosmic Clone?

If you come from a SQL background and have dealt with relational systems such as SQL Server, this can be achieved with out-of-the-box tools such as SQL backup and any number of third-party utilities. If you come from a NoSQL background and are a fan of Azure Cosmos DB (Microsoft's NoSQL document/multi-model database), until now there has not really been a good tool to perform these tasks. Personally, I have been waiting a long time for such a tool. Finally, Cosmic Clone has arrived to serve the purpose.

Features of Cosmic Clone

  • Clone collections across environments
  • Create collections with similar settings (indexes, partition key, TTL, etc.)
  • Anonymize data through scrubbing or shuffling of sensitive fields in documents
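As a rough illustration of what scrubbing and shuffling mean here (a sketch with made-up field names, not Cosmic Clone's actual implementation): scrubbing replaces a sensitive value with a mask, while shuffling permutes real values across documents so no value stays attached to its original record:

```python
import random

def scrub(docs, fields, mask="***"):
    """Replace sensitive fields with a constant mask."""
    for doc in docs:
        for f in fields:
            if f in doc:
                doc[f] = mask
    return docs

def shuffle(docs, field, seed=None):
    """Permute one field's values across documents; the data
    distribution survives but record linkage is broken."""
    rng = random.Random(seed)
    values = [d[field] for d in docs if field in d]
    rng.shuffle(values)
    it = iter(values)
    for d in docs:
        if field in d:
            d[field] = next(it)
    return docs
```

Shuffling is handy when downstream code still needs realistic-looking values (valid postal codes, plausible names) rather than a wall of asterisks.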

How does it differ from the Cosmos DB migration tool?

As a Cosmos DB user, you may already be familiar with the Cosmos DB data migration tool. That tool helps you copy documents, but it provides no option to create a similar collection (with the same partition keys or indexes), and it offers no way to copy the related code such as stored procedures, UDFs, and triggers. Further, there are no options to anonymize data in a collection.
Cosmic Clone eases this process and aids in both the copying and the anonymization of a Cosmos collection.

How to use it?

Prerequisites:

  • Microsoft .NET Framework 4.6.1 or higher
  • The source Cosmos collection and read-only keys to its account (this could be a production environment)
  • The destination Cosmos account and its read-write keys
  • Make sure the IP address of the machine running the tool is allowed in the firewall settings.

Deployment:

  1. Navigate to https://github.com/Microsoft/CosmicClone and clone the repo (switch to the demo branch if master is not stable).
  2. Compile and run the code.
  3. Alternatively, download a pre-compiled binary from the releases section and run the “CosmicCloneUI.exe” file.
  4. For best performance, run the compiled code in an Azure VM in the same region as the source and destination Cosmos collections. (Source: https://github.com/Microsoft/CosmicClone)

Create backup of a collection

Step 1: Provide source connection details.

Obtain the connection string and the keys from the source Cosmos DB account and enter them as below. If you are not sure how to obtain them, read my earlier blog here. You should see a "Validation passed" notification once the connection details are set up.

Step 2: Provide destination connection details

You can follow the same approach to obtain the destination connection details and fill them in just as you did in the previous step.

Step 3: Choose the options you want to back up

In this step you select what you want to copy over to the destination collection: documents, stored procedures, user-defined functions, triggers, indexing policies, and partition keys. All options are checked by default, but you can opt out of any of them.

The next step lets you anonymize the data of the Cosmos collection, but this is not necessary if you are only doing a backup.

Step 4: See the data in the document explorer

That's all. Once you click Next, you will see the documents being transferred to your new collection according to the settings you've provided. Within a few clicks, you can transfer data from one collection to another collection residing in a different account. You can also see the status of each copied item in the logs window. Once it is done, explore the destination Cosmos portal and you will find the new collection created with the required settings.

How does anonymization help with GDPR?

As we saw in step 3, you are given the option to anonymize the data with a few settings. With GDPR, you should now mandate data anonymization in all non-production environments. The Cosmic Clone tool saves developers the manual effort of writing, testing, updating, and maintaining their own anonymization scripts. Isn't that cool? You can read about the steps to anonymize the data here. Based on my experiments, Cosmic Clone is the best tool so far to back up/clone/restore an Azure Cosmos DB collection. Try it out and see. If you want to contribute to the repo, you can do that as well. Cheers!

· 4 min read

Recently, I worked with a customer who had their data in an on-premises SQL Server. As they were shifting their solution to the cloud (Azure) with Cosmos DB, the first requirement was to migrate the existing data. In this post, I will explain how to do the migration and also how to transform the data, since Cosmos DB stores data as key-value (JSON) documents in a collection whereas MSSQL stores it as rows in tables.

Cosmosdb Data Migration Tool:

The migration tool provided by the Cosmos DB team supports various data sources. To date, it can import data you may currently have stored in SQL Server, existing JSON files, flat files of comma-separated values, MongoDB, and Azure Table Storage.

Prerequisites
  1. MSSQL Server; if you don't have it, grab it from here.
  2. Download the Cosmos DB data migration tool.
  3. A Cosmos DB account on Azure.

In this post I will show how to import data from AdventureWorks. If you don't already know it, AdventureWorks is a popular sample database for SQL Server. There are more than 40 tables in the database, and we will migrate one of its views, vStoreWithAddress, to Cosmos DB. It has columns such as Name, AddressType, and AddressLine. As we are migrating this data to Cosmos DB, we can make use of nested values by merging the address fields together. Let's get started.

 Step 1:

Connect to the AdventureWorks2017 database and open the view Sales.vStoreWithAddress.

Step 2:

Let's assume the requirement is to migrate only the data where the AddressType is Shipping. In this step we will also do the data transformation by merging the address fields together. The query can be found here: https://gist.github.com/sajeetharan/985bd411e3e80ebb8134a16c684748ce

Step 3:

Open the Cosmos DB data migration tool by running dtui.exe inside the downloaded folder mentioned in the prerequisites section.

Step 4:

We need to fill in the source information in the tool. As our data source is SQL Server, pick SQL as the source.

Step 5:

Fill in the connection string for SQL Server, which you can obtain easily by going to Server Explorer and connecting to the SQL Server. Once you enter the connection string, verify that it works by clicking Verify.

Step 6:

The next step is to enter the query that selects the source data; you can do this either by pasting the query or by selecting a SQL query file.

Step 7:

We need to set the nesting separator to “.”, since we used it to merge the address parts into one object. Once this is done, click Next.
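To see what the nesting separator does, here is a minimal sketch (the field names are illustrative, not the exact columns from the gist query) of how flat SQL columns aliased with a "." in their names become a nested JSON document:

```python
def nest(row, sep="."):
    """Turn a flat row like {'Name': 'x', 'Address.City': 'y'}
    into a nested document {'Name': 'x', 'Address': {'City': 'y'}}."""
    doc = {}
    for key, value in row.items():
        parts = key.split(sep)
        cur = doc
        for p in parts[:-1]:
            # Descend, creating intermediate objects as needed.
            cur = cur.setdefault(p, {})
        cur[parts[-1]] = value
    return doc
```

So a SQL alias such as `AddressLine1 AS [Address.AddressLine1]` ends up as an `AddressLine1` property nested under an `Address` object in the resulting Cosmos document.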

Step 8:

The next step is to fill in the target information; as you know, our target database here is Cosmos DB. Assuming you have a Cosmos DB account, you can obtain the connection string by navigating to the Azure portal and selecting your Cosmos DB account.

Once you paste the connection string, one more thing you need to do is append the database name to the connection string: https://gist.github.com/sajeetharan/1e6e21339820b8437b1d9542e8133d51#file-sajeetharan-com_4192016_mssql_cosmosdb You can verify the connection string by clicking the Verify button. Also give the collection a name of your choice.

It is important to define a partition key in order to query the data later; the partition key is used to group multiple documents together within physical partitions. Let's partition by “/address/postalCode”, which we're storing as a postal code nested beneath the address. For throughput, we'll just go with the default of a thousand request units per second.

One more thing: we need to set the indexing policy. You'll notice a large text box where you can set it. I want to choose the range indexing policy, which you can do by right-clicking inside the text box and selecting it from the context menu. Just click Next and you will be taken to the summary page, where you can review all the migration steps at once.

Step 9:

Once you click Import in the last step, you will see a success message when all the data has been transferred, along with the number of documents created.

Step 10:

You can verify the migration by running a query in the Cosmos DB Data Explorer. That's how you migrate data from SQL Server to Cosmos DB, and it is very easy using the Cosmos DB migration tool. For the other modes of data migration, I will write separate blogs. I hope this post helps anyone who wants to transform and migrate data from MSSQL to Cosmos DB. Cheers!

· 5 min read

I just uninstalled Visual Studio 2017 and installed Visual Studio 2019, and the transition was very smooth. The installation took just 17 minutes, faster than the time it took to install earlier versions. I managed to explore a few features, and it looks quite impressive. I decided to list the top features I noticed while exploring.

(i) Live Share :

The top feature, in my opinion and for almost every developer out there, is Live Share. It allows multiple developers in different locations who are working remotely to collaborate and jointly edit the same code in Visual Studio 2019 in real time. Up to 30 developers can work together on the same code at the same time, and Live Share can be used with both VS Code and Visual Studio without installing any dependencies or libraries. Isn't that cool?

Live sharing is straightforward: simply click on the Live Share button and hit 'Start collaboration session'.


The same can be done in VS Code as well. The amazing thing here is that you get to see each and every action of the other developers as they work. There's also an audio channel so you can talk to those collaborating with you. This solves a real collaboration problem.

(ii) Smarter IntelliCode:

One of the things every developer likes about Visual Studio is its IntelliSense support. Visual Studio 2019 builds on this with IntelliCode's intelligent, AI-powered suggestions, offering a wider range of assistance for auto-completion and auto-formatting, useful both for seasoned developers and for anyone just getting started.

(iii) New search and project templates

The new Visual Studio 2019 search box, seen at the top of the screen, is designed to find anything in Visual Studio, including menu items, settings, tool windows, and more. It uses fuzzy search, returning the correct results even if you make a typo (like Google).


Additionally, the fresh look of the new project window is really cool. You can search for the project type you want to create in the typeahead, rather than browsing a tree view as in earlier versions of Visual Studio. You can also filter the window using the provided options.


Microsoft is no longer the typical Microsoft, as you can see above: you get to explore different languages and platforms with Visual Studio 2019.

(iv) Fresh Start window

Another thing I liked most in Visual Studio 2019 is the fast-loading start window. The team now loads previous/recent projects asynchronously, so the window appears in a few seconds, much faster than in previous versions of Visual Studio.


As you can see, you can now clone or check out a GitHub repository directly from the start window, with DevOps integration. This is really amazing.

(v) Developer friendliness

A number of cool things have been added to Visual Studio 2019 to make developers' lives easier, including the following features.

Decompiled resources:

Now you can debug and step into the external packages you pull in from NuGet and elsewhere. To enable this: go to the top menu bar, select Tools > Options, and type “decompile” into the search bar. The Advanced section of Text Editor for C# will appear. Click Advanced and check the box that says "Enable navigation to decompiled sources".


Code CleanUp:

Similar to Format Document, this new feature allows you to configure a predefined set of rules and clean up your code all at once. To set it up, follow the steps below.

Click the little broom icon at the bottom of the window. Select Configure Code Cleanup.


You no longer have to do this manually.

Solution Filter:

Have you been working on the same project for years? Ever had a monolithic solution with way too many projects inside it? Does it take a while to load? Now you can save the state of your solution with only the desired projects loaded. I will not explain this in detail, but it is certainly supported in VS2019. It's a nice way to keep everything organized and loading fast when first opening the solution in Visual Studio 2019. This can be really refreshing for any enterprise developer working across several projects.

But Wait, There’s More!

Interested in what else Visual Studio 2019 has to offer? Check out the Release Notes and FAQ.

I have already started loving the new features, performance, and collaboration improvements in Visual Studio 2019. What about you? Don't wait; get started by downloading it from here.

· 5 min read

For the past month, I have been experimenting with one of the most promising serverless frameworks for creating serverless functions. With modern applications moving to the cloud with microservices, this framework becomes very handy for creating and managing your microservices in the form of functions.

Why do we need microservices?

When I started my career as a developer, most of the applications I worked on had a three-tier architecture, and most companies built applications as monoliths, even before cloud platforms existed. With modern technologies, everyone is decomposing the business functionality of their applications into several microservices to avoid a single point of failure. Take Uber as an example: core functionalities such as registration, payment, email notifications, and push notifications can be broken into separate microservices to avoid any downtime. With a monolithic architecture, a single point of failure can cause the entire application to shut down. To understand this in detail, look at the following diagram.

Serverless != No server

Many of us share the common misunderstanding that serverless means there is no server. Nothing can be executed or hosted without a server; it's just that you have no way to actually see the server which executes your code. As a developer, with serverless you do not have to worry about managing servers, as that is handled automatically. Serverless is handy for the following reasons:

  • Reduce time-to-market
  • Easier deployment
  • Scale automatically
  • Focus on business logic
  • Cost reduction

Serverless is mainly used for event-driven architectures, where a function has an endpoint that triggers something; for example, triggering a notification once a file is uploaded.

Serverless and microservices are a great couple. You should choose serverless when your functions/services are:

  • Stateless
  • Short jobs
  • Event-driven, e.g. time-based or webhook triggers
  • Simple applications with few dependencies

OpenFaaS – Serverless Functions Made Simple

There are many frameworks out there for building serverless applications; among them, OpenFaaS stands out because it is not vendor-locked and can be used both on-premises and on any cloud platform. It is very simple, and you need only a few commands to get your functions deployed anywhere. It can be exposed to the outside world with Docker Swarm or Kubernetes.

The following are reasons to choose OpenFaaS:

  • Anything can be a function
  • Leverage existing skills in teams
  • Avoid vendor lock-in
  • Run anywhere - cloud or on-prem
  • 13,000 stars on GitHub and a large contributor community

OpenFaaS Architecture

There are two main components you should get to know before getting started with OpenFaaS.


Function Watchdog

As the name indicates, the watchdog is responsible for converting HTTP messages to stdin, passing them to the function, and converting the function's stdout back into the HTTP response. Any Docker image can be turned into a serverless function by adding the function watchdog.
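A minimal sketch of that watchdog contract (an illustration of the idea, not the actual OpenFaaS watchdog code): the function process reads the request body from stdin and writes its response to stdout:

```python
import sys

def handle(body: str) -> str:
    """The function's business logic: here, just echo the body upper-cased."""
    return body.upper()

def main(stdin=sys.stdin, stdout=sys.stdout):
    # The watchdog pipes the HTTP request body into the process's stdin
    # and returns whatever the process writes to stdout as the HTTP response.
    stdout.write(handle(stdin.read()))
```

Because the contract is just stdin in, stdout out, the handler could equally be a shell script, a Go binary, or anything else you can put in a container.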

API Gateway / UI Portal

As you may know from the AWS API Gateway, it does a similar job here: it provides an external route into your functions and collects cloud-native metrics through Prometheus. It also scales functions according to demand by altering the service replica count via the Docker Swarm or Kubernetes API, and it provides a UI to invoke functions in your browser and create new ones as needed.

Faas-CLI

The command-line interface helps you deploy your functions or quickly create new ones from templates in any programming language you prefer.

Set up OpenFaaS on Kubernetes

As I am a fan of Microsoft Azure, I will provide the steps to set up OpenFaaS on Azure Kubernetes Service. I have done this at a workshop: you can build a Kubernetes cluster in about 10 steps, and after that everything can be done with a few kubectl and helm commands.

Step 1: Launch Azure Cloud shell with https://shell.azure.com/bash

Step 2: Create a resource group

az group create --name aksColomboRG --location eastus

Step 3: Create AKS cluster

az aks create  --resource-group aksColomboRG  --name openFaasCluster --node-vm-size Standard_A2_v2   --node-count 1 --enable-addons monitoring --generate-ssh-keys

Step 4: Connect to the cluster

az aks get-credentials --resource-group aksColomboRG --name openFaasCluster --admin --overwrite-existing

Step 7: List the cluster's system resources

kubectl get all -n kube-system

Kubectl is a command line interface for running commands against Kubernetes clusters.

Step 8: Install and Init Helm

helm init --upgrade

Helm fills the need to quickly and reliably provision container applications through easy install, update, and removal

Step 9: Clone the OpenFaaS Kubernetes repo

git clone https://github.com/openfaas/faas-netes

Step 10: Create namespace OpenFaas

kubectl create ns openfaas

Step 11: Create a second namespace for OpenFaaS functions

kubectl create ns openfaas-fn

Step 12: Check that you have a tiller pod in a ready state

kubectl -n kube-system get po

Step 13: Check the Tiller pod logs (the pod name suffix will differ in your cluster)

kubectl logs --namespace kube-system tiller-deploy-66cdfd5bc9-46sxv

When a user executes the Helm install command, a Tiller Server receives the incoming request and installs the appropriate package

Step 14: Resolve the "cannot list configmaps in the kube-system" error

kubectl create serviceaccount --namespace kube-system tiller

Step 16: A Helm chart for OpenFaaS is included in the cloned repository. Use this chart to deploy OpenFaaS into your AKS cluster.

helm repo add openfaas https://openfaas.github.io/faas-netes/

helm upgrade --install --namespace openfaas --set functionNamespace=openfaas-fn --set async=true --set serviceType=LoadBalancer openfaas openfaas/openfaas

Step 17: See OpenFaas live

kubectl get all -n openfaas

Then copy the service/gateway-external URL with the port and paste it into your browser. You should see OpenFaaS live.
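Once the gateway is live, a deployed function can also be invoked over plain HTTP by POSTing to /function/&lt;name&gt; on the gateway. The sketch below uses only the Python standard library; the gateway address and function name in the comment are placeholders, not values from this cluster:

```python
import urllib.request

def invoke_function(gateway: str, name: str, payload: bytes) -> bytes:
    """Invoke an OpenFaaS function through the gateway's HTTP route."""
    url = f"{gateway.rstrip('/')}/function/{name}"
    req = urllib.request.Request(url, data=payload, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example with a placeholder gateway address and function name:
# invoke_function("http://<gateway-external-ip>:8080", "echo", b"hello")
```

This is the same route the UI portal and faas-cli use under the hood, which is what makes any HTTP client a valid OpenFaaS client.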


Well, that's all for this post. I will write another post about how to execute functions and how to build your own custom functions in the coming days.

You can find the workshop slides here. Keep watching :)