According to a survey, 57% of young individuals agreed they do not have the right connections to find a mentor, and more than 50% of them couldn't find a job they are passionate about. This got me exploring whether there is any platform that solves this major problem. Yes, there are some existing online apps, but they don't serve the complete purpose to the extent I expected. So I decided to start a pet project to build this platform in my spare time. In this post I will share the architecture of the application and how I was able to spin it up quickly.
As I explained in previous posts, Cosmos DB and Azure Functions are a great combo for building applications and deploying them quickly without worrying about the underlying infrastructure. You can read about some of the reference architectures I have posted in the past from the links below.
MentorLab has been built to scale with its students and mentors using Azure services and a serverless architecture, providing a cost-economic, one-stop solution which is dependable and truly secure. The objective is to give students a platform which is built on a serverless architecture and can be accessed remotely irrespective of geographic location.
The Flutter app is the front-end application, accessed by mentors and developers with different types of logins. All the requests from the mobile app are routed via the App Gateway. The backend APIs are built as serverless APIs with Azure Functions, with the support of the Cosmos DB trigger. Cosmos DB's serverless feature is a great offering when building these kinds of applications, as it is a cost-effective option for databases with sporadic traffic patterns and modest bursts. It eliminates the concept of provisioned throughput and instead charges you for the RUs your database operations consume. In this scenario, I have chosen the Mongo API for the CRUD operations. The APIs are registered as endpoints with Azure API Management, with the right policies in place.
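To give a flavour of what these serverless APIs look like, here is a minimal sketch of an Azure Function using the Cosmos DB trigger to react to changes via the change feed. The database, collection, and connection-setting names below are placeholders, not MentorLab's actual ones:

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MentorChangeFeed
{
    [FunctionName("MentorChangeFeed")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "mentorlab",              // hypothetical database name
            collectionName: "mentors",              // hypothetical collection name
            ConnectionStringSetting = "CosmosDbConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> changes,
        ILogger log)
    {
        // React to inserts/updates in the collection as they arrive on the change feed
        foreach (var doc in changes)
        {
            log.LogInformation("Mentor document changed: {Id}", doc.Id);
        }
    }
}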
Some of the additional components you can see in the diagram are the CI/CD pipelines with GitHub Actions, Azure AD B2C for authorization, and Key Vault for storing the connection strings and keys in a secure way. And finally, Application Insights to generate the related metrics and help with troubleshooting.
It took just about 3 days to build this application, and going forward I am planning to add more features, such as video conferencing with the help of Azure Communication and Media Services. All these components cost just $36/month to host this application on Azure.
Hope this reference architecture helps you kickstart your work on a similar application. Feel free to add your thoughts/questions as comments in the section below. Happy Hacking!
Are you a .NET developer interested in building microservice-based applications for the cloud? Recently I gave a talk about the announcements in the .NET ecosystem and building cloud-native applications with Tye, and I decided to write on the same topic. Tye is an amazing project from Microsoft to ease development and deployment with two fundamental cloud-native technologies: Docker and Kubernetes. If you have been deploying applications to production using containers, you are already aware of how many complex steps you need to take to get your application running. I have worked on multiple projects where I had to run many modules as dependencies at the same time, even though I was only working on one of them. There has always been a need for a tool that can start and deploy applications to your desired environment with a simple command, without spending much time on creating Dockerfiles, configurations, etc.
Tye is an experimental developer tool from the .NET team at Microsoft that makes developing, testing, and deploying microservices and distributed applications easier. It comprises a local orchestrator that helps developers build microservices and deploy them to Kubernetes with minimal configuration and steps.
What I really like about Project Tye is that it has a very small footprint and it is so simple to get started with after spending so many years with Docker and Kubernetes. It offers the following capabilities:

Service discovery via configuration conventions, making it easy for developers to expose their APIs
Adding dependencies (Redis, SQL Server, etc.) without writing Dockerfiles
Running and debugging locally using containers and Kubernetes
A local dashboard for metrics, logging, and debugging
Automatically dockerizing and deploying to Azure Kubernetes Service
This section will demonstrate how to use Tye to run a multi-project application.
Step 1 : Make a new folder named dotnetconfApp
mkdir dotnetconfApp
cd dotnetconfApp
Step 2 : Create a frontend project with Razor
dotnet new razor -n frontend
Step 3 : To run the front-end project, execute the following command
tye run frontend
As you can see above, tye does two things: it starts the frontend application and runs a dashboard. Navigate to http://localhost:8000 to see the dashboard running.
The dashboard should show the frontend application running.
The Logs column has a link to view the streaming logs for the service.
The Bindings column has links to the listening URLs of the service.
Step 1 : Let's go ahead and add a Web API to fetch the data for the front-end application. If you haven't already, stop the existing tye run command using Ctrl + C. Create a backend API that the frontend will call, inside the same folder.
dotnet new webapi -n backend
Step 2 : Create a solution file and add both projects
dotnet new sln
dotnet sln add frontend backend
You should have a solution called dotnetconfapp.sln that references the frontend and backend projects.
Step 3 : Run the multiple projects with tye. Execute the following command within the solution folder,
tye run
The dashboard should show both the frontend and backend services. You can navigate to each of them through either the dashboard or the URLs output by tye run.
Making the frontend and backend communicate
As you can see above, both the backend and frontend applications are running; now let's make them communicate. By default, tye enables service discovery by injecting environment variables with a specific naming convention.
Step 1: Open the solution folder with VSCode or Visual Studio
Step 2 : As we need to fetch and bind the data from the sample Web API, let's add a contract named WeatherForecast.cs to the frontend project; it should match the contract that exists in the backend project's WeatherForecast.cs.
using System;

namespace frontend
{
    public class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
        public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
        public string Summary { get; set; }
    }
}
Step 3: Add a file WeatherClient.cs to the frontend project with the following contents:
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

namespace frontend
{
    public class WeatherClient
    {
        private readonly JsonSerializerOptions options = new JsonSerializerOptions()
        {
            PropertyNameCaseInsensitive = true,
            PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
        };

        private readonly HttpClient client;

        public WeatherClient(HttpClient client)
        {
            this.client = client;
        }

        public async Task<WeatherForecast[]> GetWeatherAsync()
        {
            var responseMessage = await this.client.GetAsync("/weatherforecast");
            var stream = await responseMessage.Content.ReadAsStreamAsync();
            return await JsonSerializer.DeserializeAsync<WeatherForecast[]>(stream, options);
        }
    }
}
Step 4 : Add a reference to the Microsoft.Tye.Extensions.Configuration package to the frontend project
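If you're following along from the command line, a sketch of the command (the package ships as a prerelease, so the flag or pinned version may vary with your SDK):

dotnet add frontend/frontend.csproj package Microsoft.Tye.Extensions.Configuration --prerelease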
Step 5 : Register this client in the frontend by adding the following to the existing ConfigureServices method in the existing Startup.cs file:
...
public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();

    /** Add the following to wire the client to the backend **/
    services.AddHttpClient<WeatherClient>(client =>
    {
        client.BaseAddress = Configuration.GetServiceUri("backend");
    });
    /** End added code **/
}
...
This configures the WeatherClient to use the correct URL for the backend service.
Step 6 : Add a Forecasts property to the Index page model under Pages\Index.cshtml.cs in the frontend project.
... public WeatherForecast[] Forecasts { get; set; } ...
Step 7 : Change the OnGet method to take the WeatherClient, call the backend service, and store the result in the Forecasts property:
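A sketch of what the updated method might look like, using the WeatherClient registered earlier (note that [FromServices] comes from Microsoft.AspNetCore.Mvc):

public async Task OnGet([FromServices] WeatherClient client)
{
    // Call the backend via the typed client; Tye resolved its base address
    Forecasts = await client.GetWeatherAsync();
}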
Step 8 : Run the project with tye run and the frontend service should be able to successfully call the backend service!
When you visit the frontend service you should see a table of weather data; this data is generated randomly in the backend service. Seeing the data in the frontend's web UI means the services are able to communicate.
If you run into any issues with self-signed certificates, you need to run the commands below.
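Assuming the issue is the ASP.NET Core HTTPS development certificate, cleaning and re-trusting it is the usual fix:

dotnet dev-certs https --clean
dotnet dev-certs https --trust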
Now everything is ready to deploy the solution. Navigate to the solution folder and execute the following:
tye deploy --interactive
You will be prompted to enter the container registry (e.g. 'example.azurecr.io' for Azure or 'example' for Docker Hub) that your Kubernetes cluster can pull images from.
The deployment process has built the Docker containers and pushed the images to my registry:
If you are using VS Code with the Kubernetes extension installed, once you authenticate to Azure using the Azure extension you will be able to navigate to the cluster and inspect the pods under Workloads -> Pods -> right-click -> Get.
You can also explore the running pods from the Azure portal or using Cloud Shell. You'll have two services in addition to the built-in kubernetes service.
You can visit the frontend application by port forwarding to the frontend service.
kubectl port-forward svc/frontend 5000:80
Now navigate to http://localhost:5000 to view the frontend application working on Kubernetes. You should see the list of weather forecasts just like when you were running locally.
Some important Notes:
⚠️ Currently tye does not provide a way to expose pods/services created to the public internet.
⚠️ Currently tye does not automatically enable TLS within the cluster, and so communication takes place over HTTP instead of HTTPS.
In this post I explained how easy it is to get started with Project Tye and how to run a simple web application. You can certainly do the same with an external dependency like SQL Server or Redis cache, without creating Dockerfiles manually. In short, Tye simplifies the multiple gestures for all the services within an application into a single command. Even though Project Tye is an experimental tool, you can watch out for more features around service discovery, diagnostics, observability, configuration, and logging when it comes to building cloud-native applications with microservices. It's certainly a great addition to the tooling the .NET ecosystem provides, and I hope you will consider exploring it more. Cheers!
This year I've started focusing deeply on app modernization with various cloud-native offerings from different cloud vendors. One of my main focuses has been Kubernetes and how it can help organizations design scalable applications to cater to their needs. Scaling is one of the interesting concepts in Kubernetes and a very important subject to look at. This post will help you understand the basics of scalability in Kubernetes and how to use event-driven applications with Kubernetes in detail.
Scaling Kubernetes
In general, Kubernetes provides two ways to scale your applications:
Cluster Scaling - Enables users to add and remove nodes to provide more resources to run on. This applies at the infrastructure level.
Application Scaling - Achieved by changing the characteristics of the underlying pods: either adding more copies, or changing the resources available to them, much as you would with services like App Service on Azure.
Cluster Autoscaler : The first way of scaling can be achieved with the Cluster Autoscaler, a tool that scales the cluster automatically. Most cloud vendors have this capability: nodes are added automatically, so the user doesn't have to take care of adding them. It adds nodes when the capacity demand is there and removes them when they are no longer needed. I have widely used the autoscaler with Microsoft Azure Kubernetes Service, which allows you to start scaling your AKS cluster.
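For example, on AKS the cluster autoscaler can be enabled on an existing cluster with a single CLI call; the resource group and cluster names below are placeholders:

az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5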
Event-driven applications are a key pattern for cloud-native applications. Event-driven is at the core of many growing trends, including serverless compute like Azure Functions. There are many scenarios where Kubernetes with Azure Functions can serve your applications with minimized cost, for example:
Hybrid solutions which need some data to be processed in their own environment
Applications with specific memory and GPU requirements
Applications which are already running on Kubernetes, so you can leverage existing investments
Example: Customer Scenario
In this example, let's consider a retail customer like eBay: a retailer processing millions of orders with a hybrid environment. The customer does a lot of processing in their data center using Kubernetes and, at some point, pushes that data to the cloud using Event Hubs, or transforms the data and puts it in the right place, and so on. What if the customer gets a sudden spike on a day like Black Friday and needs to make sure the compute can scale rapidly, while also ensuring the orders are processed in the correct manner?
Since Azure Functions is open source, one of the things the Azure Functions team has been working on, together with the community and partners like Red Hat, is how to bring more of these experiences to the other side: where you don't want lock-in to a specific cloud vendor and want to run these serverless workloads anywhere, including on other clouds. With Azure Functions, you write code which is triggered when a certain trigger occurs, and the platform handles the scaling for you, but you have no control over it. With Kubernetes, you have to tell it how to scale your application, so it's fully up to you. On May 6th, 2019, Microsoft announced Kubernetes-based Event-Driven Autoscaling (KEDA), a Microsoft and Red Hat partnership that makes autoscaling Kubernetes workloads a lot easier and brings both worlds closer together.
What is Kubernetes-based event-driven autoscaling (KEDA)?
KEDA provides an autoscaling infrastructure that allows you to very easily autoscale your applications based on your criteria. Nothing to process? No problem, KEDA will scale your app back to 0 instances unless there is work to do.
How does it work?
KEDA comes with a set of core components to provide the scaling infrastructure:
A Controller to coordinate all the work and watch for new ScaledObjects
A Kubernetes Custom Metric Server
A set of scalers which allow you to scale on external services
The controller is the heart of KEDA and handles the following responsibilities:
Watching for new ScaledObjects
Ensuring that deployments with no events scale back to 0 replicas, and scaling from 0 to n once events occur.
How does it differ from default Kubernetes scaling?
"Default Kubernetes Scaling is not well suited for Event Driven Applications"
By default, Kubernetes is not very well suited for event-driven scaling, because out of the box it can really only do resource-based scaling, looking at CPU and memory.
As an application admin, you can deploy ScaledObject resources in your cluster which define the scaling rules for your application based on a given trigger.
These triggers are also referred to as “Scalers”. They provide a catalog of supported sources on which you can autoscale and provide the required custom metric feeds to scale on. This allows KEDA to very easily support new scale sources by adding an individual scaler for that service. Let’s have a look at a ScaledObject that automatically scales based on Service Bus Queue depth.
Scaled Object Deployment with Kafka using KEDA
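The manifest itself was shown as an image in the original post; below is a rough sketch of what such a ScaledObject might look like with a Kafka trigger. The names, API version, and trigger metadata are illustrative and depend on your KEDA version:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    name: order-processor        # the deployment to scale
  pollingInterval: 30            # how often to poll the event source (seconds)
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka-broker:9092
        consumerGroup: order-processor-group
        topic: orders
        lagThreshold: "50"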
As you can see in the file, whatever deployment you are going to scale goes under scaleTargetRef. You can also set metadata such as how frequently to poll for events, along with minimum and maximum replica counts, and then define the event source; in this case it is Kafka, but you can also use things like Service Bus. Once the deployment is applied on Kubernetes, KEDA identifies the ScaledObject and, based on the events on the event source, scales the deployment automatically. Under the hood, the Horizontal Pod Autoscaler (HPA) does the actual scaling.
With the production release of KEDA back in 2019, you can now safely run your Azure Function apps on Kubernetes, and it is recommended by the product group. This allows users to build serverless applications once and re-use them on other infrastructures as well. Let's see how to build an application that supports the scenario discussed above.
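The resource-creation steps aren't reproduced here; assuming the Azure CLI, they would look roughly like this (the resource group and cluster names are illustrative; the storage account and queue names match the ones used later in this post):

# Resource group
az group create --name rgKeda --location westeurope

# AKS cluster with two nodes
az aks create --resource-group rgKeda --name aksKeda --node-count 2 --generate-ssh-keys

# Storage account (lowercase names are required) and the queue for the messages
az storage account create --name sakeda --resource-group rgKeda
az storage queue create --name sqkeda --account-name sakeda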
Once these resources are created, you can double-check them by navigating to the Azure portal and opening the resource group.
Resources for the Kubernetes Event Driven Autoscaling
Now we have created all the necessary resources. In the step above you can see that we created two nodes but have not deployed anything to them; in real scenarios those nodes may already contain applications running on Kubernetes.
Step 5 : Create Azure Function to Process the Queue Message
Let's create an Azure Function to process these queue messages. Navigate to the folder and create a containerized function as follows (the equivalent commands are sketched after the screenshots below).
In this case we need a function that gets triggered when there is a new message in the queue, so select QueueTrigger from the templates,
Select Queue Trigger
and give a function name as follows,
Create Function with QueueTrigger template
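For reference, the equivalent Azure Functions Core Tools commands look roughly like this; the project and function names are illustrative:

# Create a containerized .NET function project
func init ProcessOrderFunction --worker-runtime dotnet --docker
cd ProcessOrderFunction

# Add a queue-triggered function
func new --template QueueTrigger --name ProcessOrder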
Open the function in VS Code and configure the storage queue connection string in local.settings.json. The connection string can be obtained from the Azure portal under the storage account "saKeda" -> Access keys.
Step 6 : Build the function and enable KEDA on the AKS cluster
As we have everything ready, let's build the function with the below command,
docker build -t processorder:v1 .
As the next step, we need to enable KEDA on the Azure Kubernetes cluster. This is similar on any Kubernetes environment and can be achieved as below.
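The exact command isn't shown in the original; assuming the Azure Functions Core Tools of that era, installing the KEDA components onto the cluster looked roughly like this:

func kubernetes install --namespace keda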
keda.yaml is the configuration file which has all the details. In our case we define that we want to use the queue trigger and what our criteria is: for this scenario we'd like to scale out if there are 5 or more messages in the queue, with a maximum of 10 concurrent replicas, which is defined via maxReplicaCount. Let's apply the yaml file on the cluster.
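A sketch of what such a keda.yaml might contain, using KEDA's azure-queue scaler (the names and exact field spellings depend on the KEDA version; the connection string is read from an environment variable on the deployment):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: processorder-scaler
spec:
  scaleTargetRef:
    name: processorder           # the function deployment to scale
  minReplicaCount: 0
  maxReplicaCount: 10            # at most 10 concurrent replicas
  triggers:
    - type: azure-queue
      metadata:
        queueName: sqkeda
        queueLength: "5"         # scale out at 5 or more messages
        connectionFromEnv: AzureWebJobsStorage

Applying it is a single command:

kubectl apply -f keda.yaml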
Make sure that you tag the image with the correct registry name. Here I am using my Docker registry; you can consider using Azure Container Registry instead as well.
Step 7 : View the Kubernetes cluster. If you have the Kubernetes extension installed, you can easily view the status of the cluster using VS Code.
Step 8 : Publish messages to the Queue
In order to test the scaling of the Azure Function with KEDA, I created a sample console application which pushes messages to the queue we created. The code looks as follows,
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using System;
using System.Threading.Tasks;

namespace Serverless
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Connect to the storage account and get a reference to the "sqkeda" queue
            CloudStorageAccount storageClient = CloudStorageAccount.Parse("connection string");
            CloudQueueClient queueClient = storageClient.CreateCloudQueueClient();
            CloudQueue queue = queueClient.GetQueueReference("sqkeda");

            // Flood the queue with messages so KEDA has something to scale on
            for (int i = 0; i < 100000000; i++)
            {
                await queue.AddMessageAsync(new CloudQueueMessage("Hello KEDA , See the magic!"));
            }
        }
    }
}
As the messages are getting inserted to the queue, the number of replicas gets increased which you can see from the image below.
Once all the messages have been processed KEDA will scale the deployment back to 0 pod instances. Overall process is simplified in the diagram below,
We have easily deployed a .NET Core 3.1 function on Kubernetes that processes messages from a Storage Queue. Once we deployed a ScaledObject for our Kubernetes deployment, it started scaling the pods out and in according to the queue depth. This makes it very easy to plug autoscaling into an existing application without making any changes!
This makes Kubernetes-based event-driven autoscaling (KEDA) a great addition to the autoscaling toolchain, certainly for Azure customers and ISVs who are building solutions. Hope this was useful in some way! Cheers!
For the past month, I have been experimenting with one of the most promising serverless frameworks for creating serverless functions. With modern applications moving to the cloud with microservices, this framework becomes very handy for creating and managing your microservices in the form of functions.
When I started my career as a developer, most of the applications I worked on had a three-tier architecture, and most companies built applications with a monolithic architecture, even before cloud platforms existed. With modern technologies, everyone is decomposing the business functionalities of the application into several microservices to avoid a single point of failure. Take Uber as an example: core functionalities such as registration, payment, email notifications, and push notifications could be broken down into several microservices in order to avoid downtime. With a monolithic architecture, a single point of failure could cause the entire application to shut down. To understand this in detail, look at the following diagram.
Many of us share the common understanding that serverless means "without a server". Nothing can be executed or hosted without a server; it's just that you have no way to actually see the server which executes your code. As a developer, with serverless you do not have to worry about managing servers, as that is handled automatically. Serverless becomes handy for the following reasons:
Reduce time-to-market
Easier deployment
Scale automatically
Focus on business logic
Cost reduction
Serverless is mainly used for event-driven architectures, where functions have an endpoint that triggers something. For example, trigger a notification once a file is uploaded.
Serverless and microservices make a great couple. You should choose serverless when your functions/services are short-lived, stateless, and triggered by events.
There are many frameworks out there for building serverless applications; among them, OpenFaaS stands out, as it is not vendor-locked and you can use it both on-prem and on any cloud platform. It is very simple, needing only a few commands to get your functions deployed anywhere. It can be exposed to the outside world with Docker Swarm or Kubernetes.
Following are the reasons you might choose OpenFaaS, starting with its core components.
Function Watchdog : As the name indicates, the watchdog is responsible for converting HTTP messages to stdin, which is passed to the function, and for converting its stdout back into the HTTP response, and vice versa. Any Docker image can be turned into a serverless function by adding the function watchdog.
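For illustration, here's a rough sketch of how an arbitrary binary can be wrapped with the classic watchdog; the release version and the fprocess command are placeholders:

FROM alpine:3.12
# Fetch the fwatchdog binary (pick a current release; the version here is illustrative)
ADD https://github.com/openfaas/faas/releases/download/0.18.18/fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog
# fprocess is the command the watchdog runs for each request, with the HTTP body on stdin
ENV fprocess="cat"
EXPOSE 8080
CMD ["fwatchdog"]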
API Gateway : Similar to the AWS API Gateway you may have heard of, it does a similar job here. It provides an external route into your functions and collects cloud-native metrics through Prometheus. It also scales functions according to demand by altering the service replica count via the Docker Swarm or Kubernetes API, and it provides a UI to invoke functions in your browser and create new ones as needed.
As I am a fan of Microsoft Azure, I will be providing the steps to set up OpenFaaS on Azure Kubernetes Service. I have done this at a workshop: you can find ten steps to build a Kubernetes cluster, and then everything else can be done with a few kubectl and helm commands.
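The full steps are in the workshop material; the heart of the installation with helm looks roughly like this (chart values are illustrative):

kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
helm upgrade openfaas --install openfaas/openfaas \
  --namespace openfaas \
  --set functionNamespace=openfaas-fn \
  --set serviceType=LoadBalancer

# Wait for the gateway to get an external IP
kubectl get svc -n openfaas gateway-external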
Then copy the service/gateway-external URL with the port and paste it into the browser. You should see OpenFaaS live.
Well, that's all for this post. I will write another post about how to execute functions and how to build your own custom function in the coming days.
You can find the workshop slides here. Keep watching :)