
58 posts tagged with "azure"


· 7 min read

Have you ever wanted a single view/dashboard of all the GitHub issues created across your open source repositories? I have almost 150 repositories, and it becomes really hard to figure out which ones have priority issues to fix. In this post we will see how you can create a single dashboard/report to view all your GitHub issues on one page using Azure Functions (3.x with TypeScript) and Azure Cosmos DB.

Prerequisites:

You will need an Azure subscription and a GitHub account. If you do not have an Azure subscription, you can simply create one with a free trial; the free trial provides you with 12 months of free services. We will use Azure Functions and Cosmos DB to build this solution.

Step 1: Create Resource Group

In order to deploy and manage the Function App and the Cosmos DB account together, we first need to create a resource group. You can create one named "gh-issue-report".
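
If you prefer automating this step from code rather than clicking through the portal, here is a minimal sketch using the @azure/arm-resources and @azure/identity npm packages (the region and the subscription-ID variable are assumptions; adjust them to your environment):

import { DefaultAzureCredential } from "@azure/identity";
import { ResourceManagementClient } from "@azure/arm-resources";

async function createResourceGroup(): Promise<void> {
  // Assumes AZURE_SUBSCRIPTION_ID is set and you are signed in (e.g. via `az login`).
  const subscriptionId = process.env["AZURE_SUBSCRIPTION_ID"] ?? "";
  const client = new ResourceManagementClient(new DefaultAzureCredential(), subscriptionId);

  // Create (or update) the resource group used throughout this post.
  const group = await client.resourceGroups.createOrUpdate("gh-issue-report", {
    location: "eastus" // assumed region; pick the one closest to you
  });
  console.log(`Resource group ready: ${group.name}`);
}

createResourceGroup().catch(console.error);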

Step 2: Create the Azure Cosmos DB Account

To store the data related to GitHub issues, we need to create a Cosmos DB account. To do so, navigate to the Azure portal and click Create a Resource. Search for Azure Cosmos DB in the marketplace and create the account as follows.

CosmosDB Creation

Step 3: Create the Function App

If you have read my previous blog, I mentioned how to create an Azure Function. Here is an image of the Function App I created.

Creating Function App

Create TypeScript Function:

As you can see, I selected the runtime stack as Node.js, which will be used to run the function written in TypeScript. Open Visual Studio Code (make sure you have installed VS Code along with the Azure Functions Core Tools and the Azure Functions extension). Press Ctrl + Shift + P to create a new function project and select TypeScript as the language.

Create Typescript Function

Select the Timer trigger template, as we need the function to run every 5 minutes, and configure the cron expression (0 */5 * * * *) as well. (You can set a custom schedule.)
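
For reference, Azure Functions timer schedules use six-field NCRONTAB expressions, with seconds as the first field, so the expression above reads as:

{second} {minute} {hour} {day} {month} {day-of-week}
0 */5 * * * *   ->  at second 0 of every 5th minute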

Give the function the name gitIssueReport, and you will see the function created with the necessary files.

Step 4: Add Dependencies to the Function App

Let's add the necessary dependencies to the project. We will use bluebird as a dependency to handle the requests, and the gh-issues-api library to interact with GitHub and fetch the necessary issues. Add them under dependencies in the package.json file.

 "dependencies": {
"@types/node": "^13.7.0",
"bluebird": "^3.4.7",
"gh-issues-api": "0.0.2"
}

You can view the whole package.json here.

Step 5: Set Output Binding

Let's set up the output binding to Cosmos DB to write the issues to the collection. You can set it by modifying function.json as follows:

{
  "type": "cosmosDB",
  "name": "issueReport",
  "databaseName": "gh-issues",
  "collectionName": "open-issues",
  "createIfNotExists": true,
  "connectionStringSetting": "gh-issue_DOCUMENTDB",
  "direction": "out"
}

Here the type cosmosDB denotes the Cosmos DB output binding, and you can see the database name and collection configured.
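
One thing worth calling out: connectionStringSetting holds the name of an app setting, not the connection string itself. For local development, a hedged example of what your local.settings.json could look like (every value below is a placeholder):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "gh-issue_DOCUMENTDB": "AccountEndpoint=https://<your-account>.documents.azure.com:443/;AccountKey=<your-key>;",
    "repositoryName": "<repository-name>",
    "repositoryOwner": "<repository-owner>"
  }
}

(repositoryName and repositoryOwner are the same settings the function code in Step 6 reads.)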

Step 6: Code to Retrieve the GitHub Repository Issues

The actual logic of the function is as follows,


import Promise = require('bluebird');

import {
  GHRepository,
  IssueType,
  IssueState,
  IssueActivity,
  IssueActivityFilter,
  IssueLabelFilter,
  FilterCollection
} from 'gh-issues-api';

export function index(context: any, myTimer: any) {
  const timeStamp = new Date().toISOString();

  if (myTimer.isPastDue) {
    context.log('Function trigger timer is past due!');
  }

  // The repository details come from the app settings configured later in this post.
  const repoName = process.env['repositoryName'];
  const repoOwner = process.env['repositoryOwner'];
  const labels = [
    'bug',
    'build issue',
    'investigation required',
    'help wanted',
    'enhancement',
    'question',
    'documentation',
  ];

  const repo = new GHRepository(repoOwner, repoName);
  const report: { [key: string]: any } = {
    name: repoName,
    at: new Date().toISOString()
  };

  context.log('Issues for ' + repoOwner + '/' + repoName, timeStamp);
  repo.loadAllIssues().then(() => {
    // One promise per label: count the open issues carrying that label.
    const promises = labels.map(label => {
      const filterCollection = new FilterCollection();
      filterCollection.label = new IssueLabelFilter(label);
      return repo.list(IssueType.All, IssueState.Open, filterCollection)
        .then(issues => report[label] = issues.length);
    });

    // Issues with no update during the last 7 days count as stale.
    const last7days = new Date(Date.now() - 604800000);
    const staleIssuesFilter = new IssueActivityFilter(IssueActivity.Updated, last7days);
    staleIssuesFilter.negated = true;
    const staleFilters = new FilterCollection();
    staleFilters.activity = staleIssuesFilter;

    // Push the remaining counters individually so Promise.all awaits every one of them.
    promises.push(
      repo.list(IssueType.Issue, IssueState.Open).then(issues => report['total'] = issues.length),
      repo.list(IssueType.PullRequest, IssueState.Open).then(issues => report['pull_request'] = issues.length),
      repo.list(IssueType.All, IssueState.Open, staleFilters).then(issues => report['stale_7days'] = issues.length)
    );

    return Promise.all(promises);
  }).then(() => {
    const reportAsString = JSON.stringify(report);
    context.log(reportAsString);
    // Assigning to the binding named in function.json writes the document to Cosmos DB.
    context.bindings.issueReport = reportAsString;
    context.done();
  });
}

You can see that the document is written to Cosmos DB through the output binding named issueReport.

Step 7: Deploy the Function

You can deploy the function app to Azure by pressing Ctrl+Shift+P and selecting Deploy to Function App.

Deploy Function App

Step 8: Verify/Install the Dependencies

Once the deployment is successful, navigate to the Azure portal and open the function app to make sure that everything looks good. If you don't see the dependencies, install them manually by navigating to the Kudu console of the function app.

Note: Make sure to stop the Function App before you head over to Kudu.

Click on the Platform Features tab. Under Development Tools, click Advanced tools (Kudu). Kudu will open in its own new window.

Navigate to KUDU console

In the top menu of the Kudu console, click Debug Console and select CMD.

In the command prompt, navigate to D:\home\site\wwwroot by running the command cd site\wwwroot and pressing Enter. Once you're in wwwroot, run the command npm i bluebird to install the package, and do the same for gh-issues-api.

Step 9: Set Environment Variables (Repository)

As you can see in the code above, we read two environment variables, the repository name and the repository owner, which are needed to fetch the issue information. You can set those variables on the Azure portal as follows.

Navigate to the Overview tab for your function and click Configuration. As you can see below, I've configured those values.

Function App Settings

Step 10: Verify the Output Binding

To make sure that our settings in function.json have been reflected, navigate to Functions, select the function, and check that all the binding values are correct. If not, create a new output binding to the Cosmos DB account you created in Step 2.

Step 11: Run and Test the Function

Now it's time to see the function running and issues being reported. Navigate to your function app and click Run. You can see the function running as shown below.

Run Function App

Step 12: Check Live App Metrics

If you see any errors, you can always navigate to the Monitor section of the function app and select Live app metrics.

Live metrics of the function app

Step 13: Verify the Data in Cosmos DB

If everything goes well, you can navigate to the Cosmos DB account and open the collection with Data Explorer.

Data Explorer Cosmosdb

You will see that there are many documents inserted in the collection.

Cosmosdb collection with Github repository Issues

Now you can modify this function to retrieve the issues from all of your repositories and use the data stored in the Cosmos DB collection to build a dashboard showing the issues by priority; a sketch of that multi-repository aggregation follows below. You can also make use of this post to send someone a notification about an issue.
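
As a starting point, a hedged sketch of that multi-repository aggregation could look like this (the repository list is illustrative; it reuses the same gh-issues-api calls as above):

import Promise = require('bluebird');
import { GHRepository, IssueType, IssueState } from 'gh-issues-api';

// Hypothetical list of repositories to aggregate; replace with your own.
const repos = ['repo-one', 'repo-two', 'repo-three'];
const owner = process.env['repositoryOwner'];

function collectReports() {
  return Promise.all(repos.map(name => {
    const repo = new GHRepository(owner, name);
    return repo.loadAllIssues()
      .then(() => repo.list(IssueType.Issue, IssueState.Open))
      .then(issues => ({ name: name, at: new Date().toISOString(), total: issues.length }));
  }));
}

// Each element of the resolved array can then be written to Cosmos DB exactly as before.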

Hope this simple function helps someone build a dashboard out of the collected data and become more productive. Cheers!

· 5 min read

We have a wide variety of options to store data in Microsoft Azure. Nevertheless, every storage option has a unique purpose for its existence. In this blog, we will discuss ADLS (Azure Data Lake Storage) and its multi-protocol access that Microsoft introduced in the year 2019.

Introduction to ADLS (Azure Data Lake Storage)

According to the Microsoft definition, it is an enterprise-wide hyper-scale repository for big data analytics workloads and enables you to capture data of any size and ingestion speed in one single space for operational and exploratory analytics.

The main purpose of its existence is to enable analytics on the stored data (which may be of any type: structured, semi-structured, or unstructured) and to provide enterprise-grade capabilities like scalability, manageability, reliability, etc.

What is it built on?

ADLS is built on top of Azure Blob Storage. Blob Storage is one of the storage services in the Storage account suite. Blob storage lets you store any type of data, and the data doesn't have to be of a specific type.

Does the functionality of ADLS sound like Blob storage?

From the above paragraphs, it looks like ADLS and Blob storage have the same functionality, because both services can be used to store any type of data. But, as I said before, every service has a purpose for its existence. Let us explore the difference between ADLS and Blob storage below.

Difference between ADLS and Blob storage

Purpose

ADLS is optimized for analytics on the data stored in it, whereas Blob storage is the usual way of storing file-based information in Azure, typically for data that will not be accessed very often (also called cold storage).

Cost

With both storage options, you pay for the data stored and for I/O operations. In the case of ADLS, the cost is slightly higher than Blob storage.

Support for Web HDFS interface

ADLS supports a standard WebHDFS interface and can access files and directories in Hadoop. Blob storage does not support this feature.

I/O performance

ADLS is built for running large-scale systems that require massive read throughput when queried at any pace. Blob storage is for data that will be accessed infrequently.

Encryption at rest

Since its general availability, ADLS supports encryption at rest: it encrypts data flowing in public networks as well as at rest. Blob Storage does not support encryption at rest. See more details on the comparison here.

Now, without any further delay let us dig on the Multi-protocol access for ADLS.

Multi-protocol access for ADLS

This is one of the most significant announcements Microsoft made in 2019 as far as ADLS is concerned. Multi-protocol access to the same data allows you to leverage existing object storage capabilities on Data Lake Storage accounts, which are hierarchical-namespace-enabled storage accounts built on top of Blob storage. This allows you to put all your different types of data in the data lake so that users can make the best use of it as the use case evolves.

The multi-protocol concept is achieved via the Azure Blob storage API and the Azure Data Lake Storage API. The convergence of the two existing services, ADLS Gen1 and Blob storage, paved the path to a new term: Azure Data Lake Storage Gen2.
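
To make this concrete, here is a hedged sketch, assuming the @azure/storage-file-datalake and @azure/storage-blob npm packages and a hierarchical-namespace-enabled account with a filesystem named "datalake", that writes a file through the Data Lake endpoint and reads the same bytes back through the Blob endpoint:

import { DataLakeServiceClient } from "@azure/storage-file-datalake";
import { BlobServiceClient } from "@azure/storage-blob";

async function main(): Promise<void> {
  const conn = process.env["STORAGE_CONNECTION_STRING"] ?? "";

  // Write through the Data Lake (DFS) endpoint...
  const dfs = DataLakeServiceClient.fromConnectionString(conn);
  const file = dfs.getFileSystemClient("datalake").getFileClient("sample.txt");
  const content = "hello from the data lake";
  await file.create();
  await file.append(content, 0, content.length);
  await file.flush(content.length);

  // ...and read the very same data back through the Blob endpoint.
  const blobs = BlobServiceClient.fromConnectionString(conn);
  const blob = blobs.getContainerClient("datalake").getBlobClient("sample.txt");
  const buffer = await blob.downloadToBuffer();
  console.log(buffer.toString()); // "hello from the data lake"
}

main().catch(console.error);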

Expanded feature set

With the announcement of multi-protocol access, existing blob features such as access tiers and lifecycle management policies are now unlocked for ADLS. Furthermore, many features and much of the ecosystem support of blob storage are now available for your data lake storage.

This could be a great shift because your blob data can now be used for analytics. The best thing is you don’t need to update the existing applications to get access to your data stored in Data Lake Storage. Moreover, you can leverage the power of both your analytics and object storage applications to use your data most effectively.

While exploring the expanded feature set, one of the best things I found is that ADLS can now be integrated with Azure Event Grid.

Yes, we have one more publisher on the list for Azure Event Grid. Azure Event Grid can now consume events generated by Azure Data Lake Storage Gen2 and route them to its subscribers, with webhooks, Azure Event Hubs, Azure Functions, and Logic Apps as endpoints.
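
As a quick illustration, an Event Grid-triggered Azure Function subscribed to those events might look like the following sketch (the handler shape follows the Node.js Event Grid trigger; the filtering logic is an assumption for illustration):

export default async function (context: any, eventGridEvent: any): Promise<void> {
  // Data Lake Storage Gen2 raises standard storage events such as BlobCreated.
  if (eventGridEvent.eventType === "Microsoft.Storage.BlobCreated") {
    context.log(`New file landed in the data lake: ${eventGridEvent.data.url}`);
    // Kick off downstream processing here, e.g. a Databricks job or a warehouse copy.
  }
}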

Modern Data Warehouse scenario

The above image depicts the use-case scenario of ADLS integration with Event Grid. First off, data comes from different sources such as logs, media, files, and business apps. That data ends up in ADLS via Azure Data Factory, and the Event Grid listening to ADLS gets triggered once data arrives. The event is then routed via Event Grid and Functions to Azure Databricks. The file is processed by the Databricks job, which writes the output back to Azure Data Lake Storage Gen2. Meanwhile, Azure Data Lake Storage Gen2 pushes a notification to Event Grid, which triggers an Azure Function to copy data to Azure SQL Data Warehouse. Finally, the data is served via Azure Analysis Services and Power BI.

Wrap-up

In this blog, we went through an introduction to Azure Data Lake Storage and the difference between ADLS and Blob storage. Further, we investigated multi-protocol access, one of the new entrants to ADLS. Finally, we looked into one of the extended feature sets: the integration of ADLS with Azure Event Grid and its use-case scenario.

I hope you enjoyed reading this article. Happy Learning!

Image Credits: Microsoft

This article was contributed to my site by Nadeem Ahamed and you can read more of his articles from here.

· 2 min read

I have come across the question "How to deploy a web app within a sub-folder on Azure" on Stack Overflow many times. Even though there is official documentation, the question has not been addressed in general. With virtual directories, you can keep your web sites in separate folders and use the 'virtual directories and applications' settings in Azure to publish two different projects under the same site.

However, say you have an ASP.NET Core/Angular app that you want to deploy into a sub-folder of an Azure Web App (App Service). Simply navigate to the Azure portal -> select the Web App -> Overview

  • Download the publish profile
  • Import it in Visual Studio
  • Edit the Web Deploy profile (normally the publish profile contains both a Web Deploy and an FTP profile)
    • Change Site Name from your-site to your-site\folder\sub-folder
    • Change the Destination URL from http://your-site.azurewebsites.net to http://your-site.azurewebsites.net/folder/sub-folder
  • Publish

You will likely get an error as follows:

System.TypeLoadException: Method ‘get_Settings’ in type ‘Microsoft.Web.LibraryManager.Build.HostInteraction’ from assembly ‘Microsoft.Web.LibraryManager.Build

You can resolve the above issue by updating the NuGet package named Microsoft.Web.LibraryManager.Build in your project.

One other thing you should be aware of: go to portal > demo-site App Service > Configuration > Path mappings > Virtual applications and directories, and add the following.

Virtual Path          Physical Path                     Type
/folder               site\wwwroot\folder               Folder
/folder/sub-folder    site\wwwroot\folder\sub-folder    Application

Configuration Virtual Directory

Now publish from Visual Studio. If you only need to publish to a first-level folder, i.e. to your-site\folder, then all you have to do is change the Type to Application in the path mapping for /folder and skip the sub-folder entry, since you don't need it. Correct the Site Name and Destination URL in the publish profile accordingly.

Hope there will be no more questions on the same. Happy Coding!

· 8 min read

A programmer can code for days continuously without a break; I did it myself when I started my career as a programmer. In the IT field, it gets worse if you continuously work without taking a 5-minute break every 30 minutes. In this blog I will explain how you can build a reminder that tells someone to get up and take that mandatory break.

Prerequisites:

Sign up for Twilio

In order to use Twilio, you need to sign up and purchase an SMS-capable phone number. If you're a new user to Twilio, you can start with a free trial.

Sign up for Azure:

In order to deploy your Azure Function, you need to have an Azure subscription. You can create a FREE Azure subscription to set up your function. The free trial will provide you with 12 months of free services.

Steps to Create the Function:

Step 1: Create the Function App

Let's start off by creating an app for our requirement. In the Azure portal, click + Create a Resource.

When the Azure Marketplace appears, click Compute in the list. In the Featured list, click Function App (note: if Function App does not appear, click See all).

Then you need to fill in the function app settings; you can follow the image to set up your function.

Step 2: Add Function

We just finished creating the function app; now we need to add a function that alerts the user, configured with a trigger. The trigger will start the function, which sends the Twilio SMS message. We'll be using a Timer trigger for this tutorial.

In the left menu, click Resource groups and select the resource group you created in the last step.

Click on the App Service which is highlighted. Once the page loads, click the + button next to Functions to create a new function.

Add a Function

On the next screen, you’ll need to choose a development environment. Since we’ll be creating the function in the Azure Portal, select In-Portal and Continue.

Select In Portal

Since we want to create a Timer trigger, you’ll need to select Timer and click Create.

You should now see TimerTrigger1 listed under Functions in the left menu.

Step 3: Integrate with Twilio SMS

As the next step, we need to integrate Twilio SMS with the function app we created. Under TimerTrigger1, click Integrate.

Integrate Function

Under Outputs, click + New Output and select Twilio SMS.

When you click on that, you will get a warning saying extensions are not installed. You'll need to install the Microsoft.Azure.WebJobs.Extensions.Twilio extension, which you can do by clicking Install. This process can take up to 20 minutes, so give it a moment to complete.

While the function extension is installing, grab the values for the relevant fields from your Twilio dashboard, as shown below.

Once the extension is installed, fill those values into the environment variables, which will be used within the function.

Step 4: Set Environment variables

It's tempting to hardcode the Twilio credentials in the output binding fields. However, if you are running this app in production you should always use environment variables so that you don't expose the credentials to others. As you obtained the values from the Twilio dashboard, copy and save them.

You can create environment variables in the Azure Portal by going to the Overview tab for your function and click Configuration.

Add the environment variables one by one,

Key                 Value
TWILIO_SID          ACXXXXXXXXXXX (Account SID from the Twilio dashboard)
TWILIO_TOKEN        Auth token obtained from the Twilio dashboard
SENDER_NUMBER       Your Twilio number, e.g. +94 77 330 XXXXX
RECIPIENT_NUMBER    Phone number that receives the message

Environment variables

You can create the first environment variable by clicking New application setting, and repeat the same for the rest of the variables shown above.

New appsettings Configuration

Add the first setting TWILIO_SID

TWILIO_SID Environment Variable

Once you are done with each setting value, click OK. When you are adding SENDER_NUMBER and RECIPIENT_NUMBER, be extra careful, as they can be tricky: make sure to use the E.164 format (for example, +15551234567). After all environment variables have been added, click Save to save the updates to the Application Settings.

Step 5: Timer Settings

Whenever you create a timer function, Azure by default sets it to trigger the text message every 5 minutes. You can change how frequently the timer fires by going to Integrate and updating the values in the Timer trigger.

Go to your function, select Integrate, and update the interval as you need. The Schedule field contains a CRON expression. For the purpose of testing the function, change the number 5 to 2 and click Save. You can change the frequency back after you confirm that the function works properly.

CRON Expression
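
If you later want a schedule other than every 2 minutes, a few NCRONTAB examples (six fields, seconds first):

0 */2 * * * *    ->  every 2 minutes
0 */30 * * * *   ->  every 30 minutes
0 0 9 * * 1-5    ->  at 09:00 on weekdays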

If you're creating the function app using Visual Studio Code, follow this sample app to create a timer function, which is much easier with Visual Studio Code.

Step 6: Modify function.json File

As we are done with all the configuration steps, we now need to update the function.json within our TimerTrigger1 function. Go back to the function app and click TimerTrigger1. On the far-right side of the screen, click View Files.

You will see two files:

  • function.json
  • index.js

Click function.json to open the file. Since the file is currently missing the "to": "RECIPIENT_NUMBER" entry, we'll need to add it.

Now we need to add the logic to create the message and send it via Twilio to the relevant receiver.

Navigate to my GitHub repo and grab the code for this file. You'll want to replace the existing code in the function.json file with the new code you just copied from GitHub.

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */2 * * * *"
    },
    {
      "type": "twilioSms",
      "name": "message",
      "accountSidSetting": "TWILIO_SID",
      "authTokenSetting": "TWILIO_TOKEN",
      "from": "SENDER_NUMBER",
      "to": "RECIPIENT_NUMBER",
      "direction": "out"
    }
  ]
}

Note that accountSidSetting and authTokenSetting hold the names of the application settings we created in Step 4 (TWILIO_SID and TWILIO_TOKEN), not the raw credential values.

Step 7: Let's Add Logic to the index.js File

When Azure creates a function, it adds default code to help set up your function. We will add the code for the Twilio SMS message to this code.

In the View Files menu, click the index.js file. You’ll want to replace the existing code in the index.js file with the code below.

// Read the Twilio credentials from the environment variables set earlier.
const twiAccountSid = process.env.TWILIO_SID;
const twiAuthToken = process.env.TWILIO_TOKEN;
const client = require('twilio')(twiAccountSid, twiAuthToken);

module.exports = async function (context, myTimer) {
  var timeStamp = new Date().toISOString();

  if (myTimer.isPastDue) {
    context.log('JavaScript is running late!');
  }

  client.messages
    .create({
      from: process.env.SENDER_NUMBER,
      body: "Time to have coffee and take a break for 5 minutes!",
      to: process.env.RECIPIENT_NUMBER
    })
    .then(message => {
      context.log("Message sent");
      context.res = {
        body: 'Text successfully sent'
      };
      context.log('JavaScript timer trigger done!', timeStamp);
      context.done();
    })
    .catch(err => {
      context.log.error("Twilio Error: " + err.message + " -- " + err.code);
      context.res = {
        status: 500,
        body: `Twilio Error Message: ${err.message}\nTwilio Error code: ${err.code}`
      };
      context.done();
    });
}

Step 8: Install the Dependencies (Twilio)

As you can see, the first line of the code requires the Twilio helper library. We'll need to install the twilio package from npm so that it's available to our function. To do so, we first need to add a new file to our function.

Add package.json file

In the View Files window, click Add. Type the file name package.json and press Enter. You will see an empty content page in the middle of the screen.

Add Package.json

Add the code below to the package.json file.

{
  "name": "doc247",
  "version": "1.0.0",
  "description": "Alert an employee with an SMS to take a break",
  "main": "index.js",
  "scripts": {
    "test": "echo \"No tests yet...\""
  },
  "author": "Sajeetharan",
  "dependencies": {
    "twilio": "^3.0.0"
  },
  "devDependencies": {}
}

Now that we have added Twilio as a dependency in the package.json file, the next step is to install it in the environment itself. You can install dependencies as part of the deployment, or install them using Kudu.

Note: Make sure to stop the Function App before you head over to Kudu.

Click on the Platform Features tab. Under Development Tools, click Advanced tools (Kudu). Kudu will open in its own new window.

Navigate to Kudu from the function app

In the top menu of the Kudu console, click Debug Console and select CMD.

Kudu Debug Console

In the command prompt, navigate to D:\home\site\wwwroot by running the command cd site\wwwroot and pressing Enter. Once you're in wwwroot, run the command npm i twilio to install the package.

Install Dependencies Twilio

You will also notice a new node_modules folder added to the file structure. Back in the Overview tab for the function app, click Start.

node_modules Folder

Step 9: Run and Test the Function

Head back to the function app and click TimerTrigger1. Make sure that you're in the index.js file. Click Test next to View Files (on the far right side of the screen), then at the top of the index.js file, click Run.

If everything was successful, the recipient should receive a text message with your message within a couple of minutes, per the 2-minute schedule!

You can change the frequency for your timer by heading back to Integrate and changing the Schedule field. Be sure to read up on CRON expressions before entering a new frequency.

If you’re curious to learn more about Azure Functions, I would suggest taking this Microsoft Learn module. You can access complete source code from here. If you want to learn more on Azure visit http://azure360.info/ .Happy Coding!