3 posts tagged with "cognitiveservice"

· 7 min read

Overview:

Due to the recent COVID-19 outbreak, and as it continues to spread throughout the world, employees are being asked to work from home. While most companies are already adapting to this new way of working, there are mixed opinions among employees in different parts of the world. In my opinion, working from home is a good option for new parents, people with disabilities and others who aren't well served by a traditional office setup. As this was appreciated by most of my colleagues and industry friends, I wanted to see how everyone across the world is reacting to this new way of working. In this post, I will explain how I built an application in 10 minutes to answer this particular question using serverless computing offered by Azure.

Prerequisites:

You will need an Azure subscription. If you do not have one, you can simply create one with a free trial.

Services used:

  • Azure Logic Apps
  • Azure Functions
  • Azure CosmosDB
  • Cognitive Service
  • PowerBI

Architecture:

Architecture

The architecture of the solution is very simple, and it uses Azure managed services that handle the infrastructure for you. Whenever a new tweet is posted, a Logic App receives and processes it. The sentiment score of the tweet is obtained from the Cognitive Service, an Azure Function is then used to classify the sentiment from that score, and finally the result is inserted as a row into PowerBI to visualize in a dashboard. You can also use SQL Server or CosmosDB to store the tweet data if you want to process it later.

How to build the application:

Step 1: Create the Resource Group

As the first step, we need to create the resource group that will contain all the required resources. Navigate to the Azure portal and create a resource group named "wfh-sentiment".

Step 2: Create the Function App

As the next step, let's create the Function App which we need to classify the sentiment of the tweet. You can create and deploy the Function App using Visual Studio Code. Open Visual Studio Code (make sure you have already installed VS Code along with the Azure Functions Core Tools and the Azure Functions extension). Press Ctrl + Shift + P to create a new Function project and select C# as the language (but you could use any language that you are familiar with).

Create new Function App

Select language as C#

Select the trigger as HttpTrigger

Give the name of the Function

Provide the name of the function

The logic of the Function App is as follows:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using System.Net.Http;

namespace WorkFromHome
{
    public static class DecideSentiment
    {
        [FunctionName("DecideSentiment")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequestMessage req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            // Default to POSITIVE; the thresholds below downgrade the label if needed.
            string sentiment = "POSITIVE";

            // Read the sentiment score (0..1) produced by the Cognitive Service.
            double score = await req.Content.ReadAsAsync<double>();
            if (score < 0.3)
            {
                sentiment = "NEGATIVE";
            }
            else if (score < 0.6)
            {
                sentiment = "NEUTRAL";
            }

            return req.CreateResponse(System.Net.HttpStatusCode.OK, sentiment);
        }
    }
}

And the source code can be found here. Then you can deploy the Function App to Azure with a simple command: press Ctrl+Shift+P and select "Deploy to Function App".
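Once deployed, you can sanity-check the function by posting a raw score to it. Here is a minimal sketch; the function URL is a placeholder for your deployed endpoint:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FunctionSmokeTest
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder URL; replace with your deployed function's endpoint.
        var url = "https://<your-function-app>.azurewebsites.net/api/DecideSentiment";

        // The function reads its body as a JSON-formatted double, so "0.2" is enough.
        var response = await client.PostAsync(url, new StringContent("0.2", Encoding.UTF8, "application/json"));

        // 0.2 falls below the 0.3 threshold, so this should print NEGATIVE.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}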

Step 3: Create the Azure Cognitive Service to determine the sentiment of the tweet text

As we discussed above, let's create the Cognitive Service to determine the sentiment score of the tweet. Go to the same resource group, search for Cognitive Services and create a new service as follows. A small sketch for calling the service directly follows the screenshot below.

Create Cognitive Service
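For reference, the Detect Sentiment action we will use in the Logic App later calls this service's sentiment endpoint. If you want to test the service directly, here is a rough sketch; the region and key are placeholders, and the v2.1 endpoint shape is an assumption based on the Text Analytics API of that generation, so adjust it to match your resource:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SentimentScoreTest
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder key and region; copy both from the Cognitive Service created above.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");
        var url = "https://<your-region>.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment";

        // Text Analytics scores a batch of documents; a single tweet is a batch of one.
        var body = "{\"documents\":[{\"id\":\"1\",\"language\":\"en\",\"text\":\"Working from home is great\"}]}";
        var response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));

        // The response carries a score between 0 (negative) and 1 (positive),
        // which is exactly what the DecideSentiment function consumes.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}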

Step 4: Create Cosmosdb to store the data

In my application, I have made this step optional, as I don't need to save the tweet data for historical analysis. But you can definitely use CosmosDB to store the tweets and process them later. In the same way you created the Cognitive Service, create a new CosmosDB account and a database to store the data as follows; a minimal sketch of writing a tweet document follows the screenshot below.

Cosmosdb to store tweets data
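If you do take that route, a minimal sketch of writing a tweet document with the .NET SDK (Microsoft.Azure.Cosmos) could look like the following; the account endpoint, key, database/container names and document shape are placeholders rather than part of the original application:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class TweetStore
{
    static async Task Main()
    {
        // Placeholder endpoint and key; copy them from the Keys blade of your CosmosDB account.
        var client = new CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>");

        // Assumed database and container names; create them beforehand in the portal.
        Container container = client.GetContainer("wfh-sentiment", "tweets");

        // A minimal tweet document; "id" is mandatory for CosmosDB items.
        var tweet = new
        {
            id = Guid.NewGuid().ToString(),
            user = "someuser",
            location = "Colombo",
            text = "Loving #WFH so far",
            sentiment = "POSITIVE",
            score = 0.92
        };

        // Assumes the container is partitioned on /id.
        await container.CreateItemAsync(tweet, new PartitionKey(tweet.id));
    }
}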

Step 5: Create PowerBI dataset to visualize the data

Navigate to PowerBI portal and create a new dataset to visualize the data we collected as follows,

Create new Streaming Data set in the work space

Select API in the new streaming data set option

Configure the fields as above.
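In the Logic App below we will use the built-in PowerBI connector to add rows, but for reference, a streaming dataset can also be fed directly through its Push URL (shown on the dataset's API info page). Here is a rough sketch, with the push URL as a placeholder and the row fields assumed to match the dataset configured above:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class PowerBiPush
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder push URL; copy it from the streaming dataset's API info page.
        var pushUrl = "https://api.powerbi.com/beta/<workspace-id>/datasets/<dataset-id>/rows?key=<key>";

        // One JSON object per row; field names must match the dataset fields exactly.
        var rows = "[{\"user\":\"someuser\",\"location\":\"Colombo\",\"tweet\":\"Loving #WFH\",\"sentiment\":\"POSITIVE\",\"score\":0.92}]";

        var response = await client.PostAsync(pushUrl, new StringContent(rows, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode); // OK means the row landed in the dataset.
    }
}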

Step 6: Create the Logic App and configure the Flow

This is the core part of the application, as we are going to link the above components together as one flow. You can build the flow using the designer or directly in code view (the workflow definition is JSON). I will be using the designer to create the flow.

As denoted above, the first step is to add the Twitter connector, which you can pick from the list of available connectors: "When a new tweet is posted".

Connector when new tweet is posted

You need to configure the search text for which you want to get the tweets; in this case I am going to use the hashtag "#WFH" and set the polling interval to 30 seconds.

Look for new tweets on every 30 seconds

The second step is to pass the tweet to the Azure Cognitive Service to analyse the sentiment of the tweet and get the score as output.

Select detect sentiment as the next step

You need to provide the key and the URL, which can be obtained from the Cognitive Service you created above.

Configure the detect sentiment of the tweet with the input as the tweet text

The third step is to pass the score obtained above to the Azure Function we already deployed, which determines the sentiment of the tweet. Select the Azure Functions connector from the list as follows,

Select Azure Function which will display the functions already deployed to azure

Configure score from the Cognitive service as an input to the Azure function

The next step is to stream the data to the PowerBI dataset so that it is readily available for visualization. Select the connector below as the next step.

Configure Add rows to a dataset to insert data to PowerBI

We are almost done with the configuration. As the last step, you need to map the data fields from the steps above to the dataset fields; the final configuration looks as below.

Mapping the dataset with the outputs from the previous steps

Step 7: Visualize it in PowerBI

Now that we have configured all the steps required in the Logic App, navigate to PowerBI and select the dataset from which you want to create the report/dashboard. In this case we will select the dataset we created above, as follows,

Select the dataset

The rest is up to you: you can create whatever charts and visualizations suit your needs. I have created four basic metrics to see how the world reacts to "work from home":

  • Total number of unique tweets
  • Distribution of sentiments as a pie chart
  • A table which displays all the data (user, location, sentiment, score and the tweet)
  • A world map which shows the geographic distribution of the sentiments

And this is how my application/dashboard looks:

Final Dashboard with RealTime Tweets

As you can see, the tweets and the sentiments are being inserted into the dataset, and most of the sentiments are positive (looks green!). You can replicate the same architecture for your own scenarios (brands, public opinion, etc.).

As you can see, some complex scenarios/problems can be easily sorted out with the help of serverless computing, and that is the power of Azure. Cheers!

For those who are interested, you can view the live dashboard.

· 2 min read

Biometric face recognition is the process by which a biometric system identifies and recognizes the face of an individual, either to grant access to a secured system or to find out the details of a person by matching the face against existing data in the system.

Facial recognition is full of potential and can easily be incorporated to increase the security of any device or object. For all the excitement, though, this technology is still developing: the more faces are fed to the algorithm, the more accurate it becomes. There is no need to be afraid of facial technology as long as it is put to ethical uses and safe practices.

Industries around the globe have already started to use face detection for several purposes. Continuing the series of my ideas that began with the architecture for the traffic problem, this post focuses on a solution built on Azure, using various services such as Cognitive Services, Blob Storage, Event Grid, Azure Functions and CosmosDB to build the right architecture for the above-mentioned use case.

Overall Architecture:

Components used:

  • Azure CosmosDB
  • Azure Functions
  • EventGrid
  • Cognitive Face API
  • Storage (Queue,Blob)

Architecture:

The above architecture is fairly self-explanatory; it comprises two main flows:

  • Training faces of individuals
  • Identifying faces of individuals

Every process in the above case is implemented as an Azure Function, with a separate function for each operation. The main operations, such as RegisterUser, TrainFace and TriggerTrain, are simple Azure Functions in the diagram above. Images are uploaded to Blob Storage using a SAS token, face detection is done using the Cognitive Service, and the face references are stored in CosmosDB. EventGrid is used to route the events to the right handler, for example, to train on an image whenever a user uploads one during registration and to tag the face of the individual in the database. A rough sketch of the detection call follows below.
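To illustrate the detection step, here is a minimal sketch of calling the Cognitive Face API's Detect operation on an image already uploaded to Blob Storage; the region, key and image URL are placeholders, and the surrounding Azure Function and EventGrid wiring is omitted:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FaceDetectSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder key and region; copy both from your Cognitive Face API resource.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");
        var url = "https://<your-region>.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true";

        // Detect faces in an image uploaded to Blob Storage (placeholder URL); the returned
        // faceId is the reference that gets stored in CosmosDB for later identification.
        var body = "{\"url\":\"https://<your-storage>.blob.core.windows.net/images/person.jpg\"}";
        var response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}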

I have used the above architecture in one case study and hope it will help someone out there who wants to build a similar solution. Cheers!

· 6 min read

The first ever GitHub Universe viewing party in Sri Lanka took place last Thursday, organized by the GitHub Campus Experts in the country. It was an event to share all the exciting news and updates on GitHub, and it was a great success. I decided to write this blog based on the session I presented on "GitHub Actions". It's amazing to see the new features announced by GitHub over the span of the last 12 months, of which GitHub Actions is the latest: it was made generally available a few days ago (November 13, 2019) to build CI/CD pipelines from GitHub itself. I was excited about this announcement, tested it with two of my projects, and I have to say I'm impressed.

As an example, in this post I will explain how to build an "Emotion detection app" with Angular and deploy it on a public cloud (Azure) with GitHub Actions. Below is a simple architecture diagram to give an understanding of how I am going to leverage GitHub Actions to deploy my Angular application to the cloud.

Prerequisites:

Step 1 : Create the Resource group On Azure :

As the first step, we need to create the App Service on Azure to deploy the Angular application. Navigate to https://portal.azure.com/ and you will be directed to the home page of the portal. Let's create a resource group to group the resources we create.

Step 2: Create the App service to deploy the Angular app.

As the second step, create an App Service to deploy the Angular application.

Step 3: Create the Cognitive Service

Create the Cognitive Service to integrate the emotion detection part. We will use the Detect API to detect the attributes in a picture.

If you want to store the data, you can create a new CosmosDB account to store the results, which I have not included here.

Step 4: Code the Angular App

You need to create a component to upload a file and pass it to the Cognitive Service to detect the attributes, then use ngFor in the template to display the results.

Get the keys and the URL of the Cognitive Service from the portal as follows:

You can access the whole code from here. Make sure to replace the Ocp-Apim-Subscription-Key and the URL according to the endpoint you created above.

makeRequest() {
  let data, contentType;

  // Send a URL reference for remote images, or the raw bytes for an uploaded file.
  if (typeof this.image === 'string' && !this.image.startsWith('data')) {
    data = { url: this.image };
    contentType = 'application/json';
  } else {
    data = this.fileToUpload;
    contentType = 'application/octet-stream';
  }

  const httpOptions = {
    headers: new HttpHeaders({
      'Content-Type': contentType,
      // Replace with the key of the Cognitive Service you created above.
      'Ocp-Apim-Subscription-Key': '<your-subscription-key>'
    })
  };

  this.http
    .post(
      'https://eastus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=emotion',
      data,
      httpOptions
    )
    .subscribe(body => {
      if (body && body[0]) {
        console.log(body);
        // Keep the raw response and pull out the emotion scores of the first face.
        this.output = body;
        this.thing = body[0].faceAttributes.emotion;
        this.result = this.getTop();
        this.noFace = false;
      } else {
        this.noFace = true;
      }
    });
}

Step 5: Push the Code to Github

You can push the code to your own repository on GitHub, and let's create the build and deploy pipeline via GitHub Actions. Navigate to your repository and click on Actions.

Step 6: Create Github Action with Workflow

Create a new workflow by clicking on New workflow. By default, you will see different templates to build the pipeline according to the application's language, as below.

In this case, I will create my own workflow by clicking on "set up a workflow yourself". Name the workflow file azure.yml; you can see a new file being generated in your repository at github_action_angular/.github/workflows/azure.yml.

name: Deploy to Azure
on:
  push:
    branches:
      - master
env:
  AZURE_WEBAPP_NAME: github-actions-spa
  AZURE_WEBAPP_PACKAGE_PATH: './dist/angulargithubaction'
  NODE_VERSION: '10.x'

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Use Node.js ${{ env.NODE_VERSION }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ env.NODE_VERSION }}
      - name: Install dependencies
        run: npm install
      - name: Build
        run: npm run build -- --prod
      - name: 'Deploy to Azure WebApp'
        uses: azure/webapps-deploy@v1
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}

The workflow is really simple. As you can see, it includes a name and a few actions, starting with when the build and deploy should happen: the on: push trigger indicates that whenever there is a new commit, the code needs to be built again. You also define the Node.js version to use, and the build runs on an Ubuntu runner. Then there are the few regular steps we usually go through when building an Angular application.

As an option, you can also run a configuration like this only for branches other than master and keep a separate configuration for the master branch (with deployment to Azure), so it is flexible to maintain different workflows for different branches/environments. Isn't that cool?

Step 7: Configure the Pipeline Secrets

As the next step, you need to create new secrets on the GitHub Secrets page. It's important to note the secret name, as the workflow references it; using secrets whenever you deploy to production/development is one of the best practices. You can get the keys from the publish profile of the App Service.

Create a new secret as above with the values obtained from the publish profile.

We have to configure the values in azure.yml as follows:

  • app-name — the application name in Azure
  • publish-profile — the name of the secret from GitHub
  • package — the path to the directory we would like to deploy (in the above example: ./dist/angulargithubaction)

And that's it, really clear and simple! You can check whether the deployment was successful by navigating to Kudu.

And you can see the application working successfully on Azure. As a next step, you can include unit tests to run during the build. Using the Angular CLI and GitHub Actions, it has become very easy to create and test frontend web apps. Check out the fully working demo repo below, as well as the current build status for the demo!

Start using GitHub Actions and deploy your app within a few seconds. You can use GitHub Actions to deploy any application to any cloud, as I've explained above.

You can access the Session Slides from here and the repository from here.