
· 6 min read

Microsoft's Power Platform is a low-code platform that enables organizations to analyse data from multiple sources, act on it through applications, and automate business processes. Power Platform has three main components (Power Apps, Power BI and Microsoft Flow) and integrates with two major ecosystems (Office 365 and Dynamics 365). In this blog we will look in detail at Power Apps, which helps you quickly build apps that connect to data and run on the web and on mobile devices.

Overview:

We will see how to build a simple Power App to automate the scaling of resources in Azure, in a simple yet powerful fashion, to help you save money in the cloud using multiple Microsoft tools. In this post we will use Azure Automation runbooks to hold the Azure PowerShell scripts, Microsoft Flow as the orchestration tool and Microsoft Power Apps as the simple interface. If you have an Azure environment with a lot of resources, managing the scaling becomes a hassle if you don't have autoscaling implemented. The following Power App will help you understand how easy it is to build this scenario.

Prerequisites:

We will use the following Azure resources to showcase how scaling and automation can save a significant amount of cost.

  • App Service Plan
  • Azure SQL database
  • Virtual Machines

Step 1: Create and Update the Automation Account

First we need to create the Azure Automation account. Navigate to Create a resource and search for Automation Account in the search bar. Create the new resource as follows.

Once you have created the resource, you also need to install the required modules for this particular demo, which include:
- AzureRM.Resources
- AzureRM.Profile

Installing necessary Modules

Step 2: Create the Azure PowerShell Runbooks

The next step is to create the runbooks from the Runbooks blade. For this tutorial, let's create six runbooks, one for each resource and purpose. We need to create a PowerShell script for each of these:

- Scale Up SQL
- Scale Up ASP
- Start VM
- Scale Down SQL
- Scale Down ASP
- Stop VM

Creating RunBook

The PowerShell scripts for each of these can be found in the GitHub repository. We need to create a runbook for each of the scripts.

All the runbooks above can be scheduled to run for a desired length of time, on particular days of the week and within a time frame, or continuously with no expiry. For an enterprise non-production environment, you would want resources to scale down at the end of business hours and over the weekend.

Step 3: Create the Flow

Once the PowerShell scripts and runbooks above have been tested and published, we can move on to the next step: creating the flow. Navigate to Flow and create a new flow from a template.

New Flow from the template

Select the PowerApps button template. The first step we need to add is the automation job; when you search for "automation" you will see the list of actions available. Select Create job and pick the runbook you created above. If you want all the actions in one app, you can add them one by one; if not, you need to create a separate flow for each one.

Here I have created one with the ScaleUpDB job, which executes the scale-up command for the database.

Once you are done with all the steps, save the flow with an appropriate name.

Step 4: Create the Power App

Once the flow is created, log in to Microsoft Power Apps with a work/school account. Power Apps gives you a blank canvas for mobile or tablet. You can then begin to customize the Power App with text labels, colours and buttons as below.

PowerApps Name

In this case we will have buttons to increase/decrease the instance count of the SQL database. My app looked like the below, with a few labels and buttons.

AutoMateApp

Next, link the flow to the button of the Power App by navigating to Action -> Power Automate.

Linking Button Action

Once both the scale-up and scale-down actions are linked, save the app and publish it.

Step 5: Verify the Automation

In order to verify that things are working correctly, click scale up and scale down a few times, then navigate to the Azure portal and open the Automation account we created.

Navigate to the Overview tab to see the requests for each job made via the Power App, as below.

Jobs executed.

In order to look at the history, navigate to the Jobs blade.

Jobs Execution

Further, you can build a customized app for different environments with different screens using Power Apps. With the help of Azure Alerts, whenever you get an alert about heavy resource usage or spikes, a single click of a button lets you scale the resources up or down as you need.

Improvements and things to consider:

This is just a starting point to explore this functionality further, but there are improvements you could add to make it really useful.

(i) Sometimes the Azure Automation action fails to start the runbook. The flow needs to handle this condition when you implement it.

(ii) Sometimes the runbook action will be successful, but the runbook execution itself errors. Consider using try/catch blocks in the PowerShell and outputting the final result as a JSON object that can be parsed and further handled by the flow.

(iii) You should update the code to use the Az modules rather than AzureRM.
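To illustrate (ii): the envelope shape below is purely an assumption for illustration (in a real flow you would typically use a Parse JSON action rather than code), but it shows the kind of result object a runbook could emit and how a consumer might branch on it.

```typescript
// Illustrative only: a possible JSON envelope a runbook could emit,
// and how a consumer could validate it and branch on `status`.
// The `RunbookResult` shape is an assumption, not from the original post.
interface RunbookResult {
  status: 'Succeeded' | 'Failed';
  message: string;
}

function parseRunbookOutput(raw: string): RunbookResult {
  const result = JSON.parse(raw) as RunbookResult;
  // Reject anything that is not a terminal status, so the flow can retry or alert.
  if (result.status !== 'Succeeded' && result.status !== 'Failed') {
    throw new Error('Unexpected runbook status: ' + result.status);
  }
  return result;
}
```

The flow can then route on `status`, for example retrying the job or sending an alert when it sees `Failed`.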

Note: The user who executes the PowerApp also needs permission to execute runbooks in the Automation Account.

With this app it becomes handy for the operations team to manage scaling without logging in to the portal. Hope it helps someone out there! Cheers.

· 7 min read

Many times you may have wanted one view or dashboard of all the GitHub issues created across your open-source repositories. I have almost 150 repositories and it becomes really hard to find which ones are the priority to fix. In this post we will see how you can create a single dashboard/report to view all your GitHub issues on one page using Azure Functions (3.x with TypeScript) and Azure Cosmos DB.

Prerequisites:

You will need an Azure subscription and a GitHub account. If you do not have an Azure subscription, you can simply create one with the free trial, which provides 12 months of free services. We will use Azure Functions and Cosmos DB to build this solution.

Step 1: Create a Resource Group

In order to manage and deploy the function app and Cosmos DB, we first need to create a resource group. You can create one named "gh-issue-report".

Step 2: Create the Azure Cosmos DB Account

To store the data related to the GitHub issues we need to create a Cosmos DB account. To create one, navigate to the Azure portal and click Create a resource. Search for Azure Cosmos DB in the marketplace and create the account as follows.

CosmosDB Creation

Step 3: Create the Function App

If you have read my previous blog, I mentioned there how to create an Azure Function. Here is an image of the function app I created.

Creating Function App

Create the TypeScript Function:

As you can see, I have selected Node.js as the runtime stack, which will run the function written in TypeScript. Open Visual Studio Code (make sure you have already installed VS Code along with the Azure Functions Core Tools and the Azure Functions extension). Press Ctrl + Shift + P to create a new function project and select TypeScript as the language.

Create Typescript Function

Select the Timer trigger template, as we need the function to run every 5 minutes, and configure the cron expression (0 */5 * * * *) accordingly. (You can use a custom schedule.)

Give the function the name gitIssueReport. You will see the function being created with the necessary files.
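For reference, Azure Functions timer triggers use six-field NCRONTAB expressions, which add a seconds field in front of the standard five cron fields:

```
{second} {minute} {hour} {day} {month} {day-of-week}

0 */5 * * * *   -> fire at second 0 of every fifth minute, i.e. every 5 minutes
```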

Step 4: Add Dependencies to the Function App

Let's add the necessary dependencies to the project. We will use bluebird to handle the promises, and the gh-issues-api library to interact with GitHub and fetch the issues. You need to add the dependencies under dependencies in the package.json file.

"dependencies": {
  "@types/node": "^13.7.0",
  "bluebird": "^3.4.7",
  "gh-issues-api": "0.0.2"
}

You can view the whole package.json here.

Step 5: Set Output Binding

Let's set the output binding to Cosmos DB to write the issues to the collection. You can set it by modifying function.json as follows:

{
  "type": "cosmosDB",
  "name": "issueReport",
  "databaseName": "gh-issues",
  "collectionName": "open-issues",
  "createIfNotExists": true,
  "connectionStringSetting": "gh-issue_DOCUMENTDB",
  "direction": "out"
}

Here the type cosmosDB denotes the Cosmos DB output binding, and you can see the database name and collection configured.

Step 6: Code to Retrieve the GitHub Repository Issues

The actual logic of the function is as follows:


import Promise = require('bluebird');

import {
  GHRepository,
  IssueType,
  IssueState,
  IssueActivity,
  IssueActivityFilter,
  IssueLabelFilter,
  FilterCollection
} from 'gh-issues-api';

export function index(context: any, myTimer: any) {
  var timeStamp = new Date().toISOString();

  if (myTimer.isPastDue) {
    context.log('Function trigger timer is past due!');
  }

  const repoName = process.env['repositoryName'];
  const repoOwner = process.env['repositoryOwner'];
  const labels = [
    'bug',
    'build issue',
    'investigation required',
    'help wanted',
    'enhancement',
    'question',
    'documentation',
  ];

  const repo = new GHRepository(repoOwner, repoName);
  var report: { [key: string]: any } = {
    name: repoName,
    at: new Date().toISOString()
  };

  context.log('Issues for ' + repoOwner + '/' + repoName, timeStamp);
  repo.loadAllIssues().then(() => {
    // Count the open issues for each label.
    var promises = labels.map(label => {
      var filterCollection = new FilterCollection();
      filterCollection.label = new IssueLabelFilter(label);
      return repo.list(IssueType.All, IssueState.Open, filterCollection)
        .then(issues => report[label] = issues.length);
    });
    // Issues with no activity in the last 7 days (604800000 ms) are considered stale.
    var last7days = new Date(Date.now() - 604800000);
    var staleIssuesFilter = new IssueActivityFilter(IssueActivity.Updated, last7days);
    staleIssuesFilter.negated = true;
    var staleFilters = new FilterCollection();
    staleFilters.activity = staleIssuesFilter;
    // Push the promises individually (not as a nested array) so Promise.all awaits them.
    promises.push(
      repo.list(IssueType.Issue, IssueState.Open).then(issues => report['total'] = issues.length),
      repo.list(IssueType.PullRequest, IssueState.Open).then(issues => report['pull_request'] = issues.length),
      repo.list(IssueType.All, IssueState.Open, staleFilters).then(issues => report['stale_7days'] = issues.length)
    );

    return Promise.all(promises);
  }).then(() => {
    var reportAsString = JSON.stringify(report);
    context.log(reportAsString);
    context.bindings.issueReport = reportAsString;
    context.done();
  });
}

You can see that the report document is written to Cosmos DB via the output binding named issueReport.

Step 7: Deploy the Function

Deploy the function app to Azure by pressing Ctrl+Shift+P and selecting Deploy to Function App.

Deploy Function App

Step 8: Verify/Install the Dependencies

Once the deployment is successful, navigate to the Azure portal and open the function app to make sure everything looks good. If you don't see the dependencies, install them manually by navigating to the Kudu console of the function app.

Note: Make sure to stop the function app before you head over to Kudu.

Click on the Platform Features tab. Under Development Tools, click Advanced tools (Kudu). Kudu will open on its own in a new window.

Navigate to KUDU console

In the top menu of the Kudu console, click Debug Console and select CMD.

In the command prompt, navigate to D:\home\site\wwwroot: use the command cd site\wwwroot and press Enter. Once you're in wwwroot, run npm i bluebird to install the package. Do the same for gh-issues-api.

Step 9: Set Environment Variables (Repository)

As you can see in the code above, we read two environment variables for the repository name and the repository owner, which are needed to fetch the issue information. You can set those variables in the Azure portal as follows.

Navigate to the Overview tab of your function app and click Configuration. As you can see below, I've configured those values.

Function App Settings

Step 10: Verify the Output Binding

Just to make sure the settings in function.json have been reflected, navigate to the function, open its bindings, and check that all the binding values are correct. If not, create a new output binding to the Cosmos DB account you created, as configured in Step 5.

Step 11: Run and Test the Function

Now it's time to see the function run and the issues being reported. Navigate to your function and click Run. You can see the function running as shown below.

Run Function App

Step 12: Check Live App Metrics

If you see any errors, you can always navigate to the Monitor section of the function app and select Live app metrics.

Live metrics of the function app

Step 13: Verify the Data in Cosmos DB

If everything goes well, you can navigate to the Cosmos DB account and open the collection with Data Explorer.

Data Explorer Cosmosdb

You will see that there are many documents inserted in the collection.

Cosmosdb collection with Github repository Issues

Now you can modify this function to retrieve the issues from all of your repositories and use the data stored in the Cosmos DB collection to build a dashboard that shows the issues by priority. You can also make use of this post to send a notification to someone about an issue.
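As a hypothetical starting point for such a dashboard (the document shape below mirrors the per-label counts the function above writes, but the helper itself is illustrative), you could rank repositories by their open bug count:

```typescript
// Hypothetical sketch: rank repositories by open 'bug' count, using the
// per-repo report documents this function writes to Cosmos DB.
interface RepoReport {
  name: string;
  bug?: number;    // count of open issues labelled 'bug'
  total?: number;  // total open issues
}

function rankByBugs(reports: RepoReport[]): string[] {
  // Sort a copy descending by bug count (missing counts rank as zero).
  return [...reports]
    .sort((a, b) => (b.bug || 0) - (a.bug || 0))
    .map(r => r.name);
}
```

A dashboard page could then simply render this ordered list, most bug-ridden repository first.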

Hope this simple function helps someone build a dashboard out of the data collected and become more productive. Cheers!

· 2 min read

I was checking out a cool feature on the Azure portal today. I usually spend three hours per week evaluating new features or building something new on Azure. For a lot of developers the Azure portal is the go-to place to manage all their Azure resources and services. What I most often hear from developers is that the portal takes a long time to load when you log in, and sometimes it just feels slow. Today I figured out a way to debug the portal loading time. If you are a developer who considers the Azure portal your website and wants to know the load duration for each view of the page, press the keyboard shortcut Ctrl + Alt + D while you are logged in to the Azure portal; you will see the load time and other useful information for every tile.

Azure Portal Load Time

You can simply enable or disable certain features on the portal by toggling the Experiments. You can also enable the Debug Hub to see whether there are any exceptions or issues while loading the portal elements.

Enable/Disable certain features

One other tip I would like to highlight here is the keyboard shortcuts you can use specifically in the Azure portal.

To see them all, open the Keyboard shortcuts item in the Help menu at the top right of the portal.

Shortcut keys

Hope this helps you figure out whether there is slowness while loading the Azure portal. If you are new to Azure and want to get started, explore my tool Azure360. Share this tip with your colleagues.

· 5 min read

We have a wide variety of options to store data in Microsoft Azure, and every storage option has a unique purpose for its existence. In this blog we will discuss ADLS (Azure Data Lake Storage) and the multi-protocol access that Microsoft introduced in 2019.

Introduction to ADLS (Azure Data Lake Storage)

According to the Microsoft definition, it is an enterprise-wide hyper-scale repository for big data analytics workloads and enables you to capture data of any size and ingestion speed in one single space for operational and exploratory analytics.

The main purpose of its existence is to enable analytics on the stored data (which may be structured, semi-structured or unstructured) and to provide enterprise-grade capabilities like scalability, manageability and reliability.

What is it built on?

ADLS is built on top of Azure Blob Storage. Blob Storage is one of the storage services under the Storage account suite; it lets you store any type of data, which doesn't need to be of a specific data type.

Does the functionality of ADLS sound like Blob storage?

From the above paragraphs it looks like ADLS and Blob storage have the same functionality, because both services can be used to store any type of data. But, as I said before, every service has its purpose for its existence. Let us explore the differences between ADLS and Blob storage below.

Difference between ADLS and Blob storage

Purpose

ADLS is optimized for analytics on the data stored in it, whereas Blob storage is the usual way of storing file-based information in Azure, including data that will not be accessed very often (also called cold storage).

Cost

With both storage options you pay for the data stored and for I/O operations. In the case of ADLS, the cost is slightly higher than Blob.

Support for Web HDFS interface

ADLS supports a standard WebHDFS interface, so Hadoop can access its files and directories. Blob storage does not support this feature.

I/O performance

ADLS is built for running large-scale analytical systems that require massive read throughput when querying the store. Blob storage is typically used for data that will be accessed infrequently.

Encryption at rest

Since ADLS reached general availability it has supported encryption at rest; it encrypts data flowing over public networks and at rest. Note that Azure Blob Storage now also encrypts data at rest by default (Storage Service Encryption). See more details on the comparison here.

Now, without any further delay, let us dig into multi-protocol access for ADLS.

Multi-protocol access for ADLS

This is one of the most significant announcements Microsoft made in 2019 as far as ADLS is concerned. Multi-protocol access to the same data allows you to leverage existing object storage capabilities on Data Lake Storage accounts, which are hierarchical namespace-enabled storage accounts built on top of Blob storage. This allows you to put all your different types of data in the data lake so that users can make the best use of your data as the use case evolves.

The multi-protocol concept can be achieved via Azure Blob storage API and Azure Data Lake Storage API. The convergence of both the existing services, ADLS Gen1 and blob storage, paved the path to a new term called Azure Data Lake Storage Gen 2.

Expanded feature set

With the announcement of multi-protocol access, existing blob features such as access tiers and lifecycle management policies are now unlocked for ADLS. Furthermore, many of the features and much of the ecosystem support of Blob storage are now available for your data lake storage.

This could be a great shift because your blob data can now be used for analytics. The best thing is you don’t need to update the existing applications to get access to your data stored in Data Lake Storage. Moreover, you can leverage the power of both your analytics and object storage applications to use your data most effectively.

While exploring the expanded feature set, one of the best things I found is that ADLS can now be integrated with Azure Event Grid.

Yes, we have one more publisher on the list for Azure Event Grid. Azure Event Grid can now be used to consume events generated from Azure Data Lake Storage Gen2 and routed to its subscribers with webhooks, Azure Event Hubs, Azure Functions, and Logic Apps as endpoints.

Modern Data Warehouse scenario

The image above depicts a use case of ADLS integration with Event Grid. First off, data comes in from different sources such as logs, media, files and business apps. That data lands in ADLS via Azure Data Factory, and the Event Grid subscription listening to ADLS is triggered once the data arrives. The event is routed via Event Grid and Functions to Azure Databricks; the Databricks job processes the file and writes the output back to Azure Data Lake Storage Gen2. Meanwhile, Azure Data Lake Storage Gen2 pushes a notification to Event Grid, which triggers an Azure Function to copy the data to Azure SQL Data Warehouse. Finally, the data is served via Azure Analysis Services and Power BI.
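As a rough sketch of the Event Grid leg of this pipeline (assuming the standard Event Grid schema for Microsoft.Storage.BlobCreated events; the helper and its wiring are illustrative, not from the original article), a subscribing Azure Function might pull the blob URLs out of an event batch before handing them to the next stage:

```typescript
// Illustrative sketch: extract blob URLs from a batch of Event Grid events.
// The fields below follow the documented Event Grid event schema; the
// helper itself is a hypothetical example, not part of the original post.
interface EventGridEvent {
  id: string;
  eventType: string;  // e.g. 'Microsoft.Storage.BlobCreated'
  subject: string;    // e.g. '/blobServices/default/containers/raw/blobs/file.csv'
  data: { url?: string };
}

function blobUrlsFromEvents(events: EventGridEvent[]): string[] {
  // Keep only BlobCreated events that carry a blob URL.
  return events
    .filter(e => e.eventType === 'Microsoft.Storage.BlobCreated' && e.data.url)
    .map(e => e.data.url as string);
}
```

A function subscriber would then pass each URL on, for example as the input to a Databricks job or a copy activity.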

Wrap-up

In this blog we have seen an introduction to Azure Data Lake Storage and the difference between ADLS and Blob storage. Further, we investigated multi-protocol access, one of the new entrants in ADLS. Finally, we looked into one of the expanded feature sets, the integration of ADLS with Azure Event Grid, and its use case scenario.

I hope you enjoyed reading this article. Happy Learning!

Image Credits: Microsoft

This article was contributed to my site by Nadeem Ahamed and you can read more of his articles from here.

· One min read

There are cases where you want to start or stop all the VMs in a particular resource group in parallel in Azure. You can set this up with Automation using scheduled actions, or you can do it directly with PowerShell or the Azure CLI.

If you are using PowerShell, you can simply do the following (add the -AsJob switch to the Start-AzVM call if you want the operations to run in parallel as background jobs):

Get-AzVm -ResourceGroupName 'MyResourceGroup' | Start-AzVM

If you are using the Azure CLI/Bash:

az vm start --ids $(az vm list -g MyResourceGroup --query "[].id" -o tsv)

Where MyResourceGroup is the name of your resource group. Happy Azurifying!