
6 posts tagged with "aks"


· 5 min read

Overview:

Are you hearing the word Dev Spaces for the first time? Let me put it this way: imagine a developer working on a large application made up of many microservices who needs to get a single feature done. There are many risks, and one of them is dealing with the right environments. As we know, the best way to counter this issue within a team is to containerize the application and host it on the cloud, which lets the developer work on a particular feature and debug its container without recreating the whole environment locally. That is exactly what Azure Dev Spaces does.

What is Dev Spaces?

Dev Spaces allows you to build, test and run code inside any Kubernetes cluster. With a dev space, you run everything remotely inside pods running on top of a Kubernetes cluster. Additionally, the DevSpace CLI takes care of things automatically, such as building and pushing images for you when you make changes to your Dockerfile. If you are making a source code change, the DevSpace CLI does not require you to rebuild and redeploy.

Instead, it syncs your locally edited source straight to the containers running inside Kubernetes. This lets you edit locally but compile and run everything remotely inside Kubernetes, while still using modern development features such as hot reloading. Azure Dev Spaces supports development with a minimal development machine setup. Developers can live-debug on Azure Kubernetes Service (AKS) with development tools like Visual Studio, Visual Studio Code or the command line.

With the recent announcement of Bridge to Kubernetes GA, Azure Dev Spaces will be retired on October 31, 2023. Developers should move to Bridge to Kubernetes, a client developer tool.

What is Bridge to Kubernetes?

Bridge to Kubernetes, formerly called Local Process with Kubernetes, is an iterative development tool offered in Visual Studio and VS Code through extensions that you can pick up in the marketplace. It allows developers to write, test and debug microservice code on their development workstations while consuming dependencies and inheriting existing configuration from a Kubernetes environment. There are a lot of different tools and methods for solving these kinds of challenges when you are working on a single microservice in the context of a larger application. Those methods and tools fall into three main types: the local, remote and hybrid approaches, as shown in the image below.

Development Approaches

As the picture above shows, developers are shifting from local development methods to hybrid methods, which offer the best way to build applications for the cloud with containers and Kubernetes. The hybrid approach allows developers to write code on their development workstation while connecting to external dependencies running in some remote environment, so it satisfies all those external dependencies by connecting to them. Say you are running your application on Kubernetes on Azure: you can connect to all the dependencies from your local environment and have the whole end-to-end workflow.

Bridge to Kubernetes Scenario

Consider the scenario in the diagram above. Assume I am working on a microservice that deals with products, while the other microservices, developed using different stacks, are deployed to a Kubernetes cluster on Azure. If I want to connect to one or more of those microservices and run some integration tests in my local environment, Bridge to Kubernetes helps me achieve exactly that. The following are some of the key features Bridge to Kubernetes offers, similar to Dev Spaces:

Accelerating and Simplifying Microservice Development

It eliminates the need to manually push code, and to configure and compile external dependencies on your development environment, so that you can focus on code without worrying about other factors.

Easy Code Debugging

It lets you run your usual debug profile with the added Kubernetes cluster configuration, so developers can debug code the way they want while taking advantage of the speed and flexibility of local debugging.

Developing and Testing End-to-End

One of the important features is integration testing during development. You select an existing service in the cluster to route to your development machine, where an instance of that service is running locally. Developers can initiate a request through the frontend of the application running in Kubernetes, and it will route between the services running in the cluster until the service you chose to redirect is called, just as you would debug by adding a breakpoint in your code.

How to get started with Bridge to Kubernetes?

You can start debugging your Kubernetes applications today using Bridge to Kubernetes. You need to download the extensions from the Visual Studio and VS Code marketplaces.

Bridge to Kubernetes VSCode extension

If you would like to explore more with a sample application, follow the example given in Use Bridge to Kubernetes. Also note that Bridge to Kubernetes collects usage data and sends it to Microsoft to help improve their products and services.

Start using Bridge to Kubernetes and deploy things to production even faster than before! Cheers!

· 11 min read

Overview:

This year I've started focusing deeply on the areas around app modernization with various cloud-native offerings from different cloud vendors. One of my main focuses has been Kubernetes and how it can help organizations design scalable applications to cater to their needs. Scaling is one of the interesting concepts in Kubernetes and a very important subject to look at. This post will help you understand the basics of the scalability aspects of Kubernetes and how you can use event-driven applications with Kubernetes in detail.

Scaling Kubernetes

In general, Kubernetes provides two ways to scale your applications:

Cluster Scaling - Enables users to add and remove nodes to provide more resources to run on. This applies at the infrastructure level.

Application Scaling - Achieved by changing the characteristics of the underlying pods: either adding more copies, or changing the resources available to them, much as you would with services like App Service on Azure (see the sketch below).
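For instance, application-level scaling from the command line might look roughly like this ("my-app" is a placeholder deployment name):

# Scale a deployment to a fixed number of replicas
kubectl scale deployment my-app --replicas=5

# Or let Kubernetes adjust the replica count automatically based on CPU usage
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10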

Cluster Autoscaler: The first way of scaling can be achieved with the Cluster Autoscaler, a tool that scales the cluster automatically. Most cloud vendors have this capability, so the user doesn't have to take care of adding more nodes: it adds nodes when the capacity demand is there and removes them when they are no longer needed. I have widely used the autoscaler with Azure Kubernetes Service (AKS), which lets you scale your AKS cluster automatically.
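As a sketch, enabling the cluster autoscaler on an existing AKS cluster looks roughly like this (the resource names and bounds are illustrative):

az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5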

Event-Driven Autoscaling with Kubernetes:

Necessity:

Event-driven applications are a key pattern for cloud-native applications. Event-driven design is at the core of many growing trends, including serverless compute like Azure Functions. There are many scenarios where Kubernetes with Azure Functions can serve your applications at minimized cost:

  • Hybrid solutions that need some data to be processed in their own environment
  • Applications with specific memory or GPU requirements
  • Applications already running on Kubernetes, so you can leverage existing investments

Example Scenario:

Customer Scenario

In this example, let's consider a retail customer like eBay: a retailer processing millions of orders in a hybrid environment. The customer does a lot of processing in their data center using things like Kubernetes, and at some point pushes that data to the cloud using Event Hubs, transforms it, puts it in the right place, and so on. What if the customer gets a sudden spike on a day like Black Friday and needs to make sure the compute can scale rapidly, while also making sure the orders are processed in the correct manner?

Arrival of KEDA:

Since Azure Functions is open source, one of the things the Azure Functions team has been working on, talking to the community and partners like Red Hat, is how to bring more of these experiences to the other side, where you are not locked in to a specific cloud vendor and can run these serverless workloads anywhere, including on other clouds. With Azure Functions, you write code which is triggered when a certain trigger occurs, and the platform handles the scaling for you, but you have no control over it. With Kubernetes, you have to tell it how to scale your application, so it's fully up to you. On May 6th, 2019, Microsoft announced that they had partnered with Red Hat to build Kubernetes-based event-driven autoscaling (KEDA), which brings both worlds closer together.

What is Kubernetes-based event-driven autoscaling (KEDA)?

KEDA provides an autoscaling infrastructure that allows you to very easily autoscale your applications based on your criteria. Nothing to process? No problem, KEDA will scale your app back to 0 instances unless there is work to do.

How does it work?

KEDA comes with a set of core components to provide the scaling infrastructure:

  • Controller to coordinate all the work and watch for new ScaledObjects
  • Kubernetes Custom Metric Server
  • A set of scalers which allow you to scale on external services

Kubernetes-based event-driven autoscaling (KEDA) Architecture

The controller is the heart of KEDA and handles the following responsibilities:

  1. Watching for new ScaledObjects
  2. Ensuring that deployments with no incoming events scale back to zero replicas, and that once events occur, the deployment scales from 0 to n.

How Does it Differ from Kubernetes?

"Default Kubernetes Scaling is not well suited for Event Driven Applications"

By default, Kubernetes is not well suited for event-driven scaling, because out of the box it can only do resource-based scaling, looking at CPU and memory.

| What K8s can do | What K8s can't do |
| --- | --- |
| Scheduling of containers | Invoke code based on external events |
| Capacity management | Scale based on external metrics |

When do we need KEDA?

Sample Deployment:

As an application admin, you can deploy ScaledObject resources in your cluster which define the scaling rules for your application based on a given trigger.

These triggers are also referred to as "scalers". They provide a catalog of supported sources on which you can autoscale, and provide the required custom metric feeds to scale on. This allows KEDA to very easily support new scale sources by adding an individual scaler for that service. Let's have a look at a ScaledObject that automatically scales based on queue depth, in this example using Kafka.

Scaled Object Deployment with Kafka using KEDA

As you see in the file, whatever deployment you are going to scale goes under scaleTargetRef. You can also set metadata such as how frequently to poll for events, as well as minimum and maximum replica counts, and then you define the event source, in this case Kafka (you could also use Service Bus and others). Once the deployment is applied on Kubernetes, KEDA identifies the ScaledObject and, based on the events in the event source, scales the deployment automatically; the HPA does the actual autoscaling.
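As a minimal sketch, assuming the KEDA v1-era API that was current at the time and illustrative names (a deployment "order-processor", a Kafka broker at kafka-broker:9092), such a ScaledObject has roughly this shape:

kubectl apply -f - <<EOF
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    deploymentName: order-processor   # the deployment KEDA should scale
  pollingInterval: 30                 # how often to poll the event source (seconds)
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
  - type: kafka
    metadata:
      brokerList: kafka-broker:9092
      topic: orders
      consumerGroup: order-processor
      lagThreshold: "10"
EOF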

Run Azure Functions Anywhere

With the production release of KEDA back in 2019, you can now safely run your Azure Function apps on Kubernetes, and it is recommended by the product group. This allows users to build serverless applications once and reuse them on other infrastructures as well. Let's see how to build an application that supports the scenario discussed above.

Prerequisites:

This article requires you to have the following tools & services:

  • Azure CLI
  • Azure Subscription
  • .NET Core 3.1
  • Kubernetes cluster with KEDA installed
  • Azure Functions Core Tools
  • Visual Studio Code
  • Docker Desktop

Step 1: Create a Resource Group

The first step is to create the resource group in which all the necessary resources will be grouped together.

Create the resource group "rgKeda" in the Southeast Asia region (gist: https://gist.github.com/sajeetharan/839847fe89b1b3ea90679ae7d2782d6e)
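Based on that caption, the command is presumably:

az group create --name rgKeda --location southeastasia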

Step 2: Create a Storage Account

Let's create a storage account to store the order messages. You can do this with the following command:

Create the storage account "sakeda" (gist: https://gist.github.com/sajeetharan/fc885f9ddb7916c015ff470823f1c8f0)
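A likely shape of that command (the SKU is an assumption):

az storage account create \
  --name sakeda \
  --resource-group rgKeda \
  --location southeastasia \
  --sku Standard_LRS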

Step 3: Create a Storage Queue

The next step is to create the queue, referred to in the text as "sbqOrders", to store the orders:

Create the storage queue to store the order messages (gist: https://gist.github.com/sajeetharan/685feca2dbc8e591a759a8da4b36c491)
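A sketch of the queue creation; note that storage queue names must be lowercase, and the sample code later in this post reads from a queue named "sqkeda", so that name is used here:

az storage queue create \
  --name sqkeda \
  --account-name sakeda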

Step 4: Create Azure Kubernetes Service

To showcase event-driven scaling with Kubernetes, let's create a Kubernetes cluster on Azure with two nodes.

Create the AKS cluster (gist: https://gist.github.com/sajeetharan/94812cad484f0de46490e5069e08a8c6)
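The cluster creation presumably looks roughly like this (the cluster name is illustrative):

az aks create \
  --resource-group rgKeda \
  --name aksKeda \
  --node-count 2 \
  --generate-ssh-keys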

Once these resources are created, you can double-check by navigating to the Azure portal and opening the resource group.

Resources for the Kubernetes Event Driven Autoscaling

Now we have created all the necessary resources. In the step above we created two nodes but have not deployed anything to them; in real scenarios the nodes may already contain applications running on Kubernetes.

Step 5: Create an Azure Function to Process the Queue Messages

Let's create an Azure Function to process these queue messages. Navigate to the folder and create a containerized function as follows:

Create a containerized function (gist: https://gist.github.com/sajeetharan/569c6ef142d5b64ce5a5c3fb134e920d)
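With Azure Functions Core Tools, the scaffolding might look like this (the folder name is illustrative); --docker generates a Dockerfile alongside the app:

func init ProcessOrderApp --worker-runtime dotnet --docker
cd ProcessOrderApp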

Make sure to select the preferred language, and the app will be created as follows.

Now let's create the new function:

Create the new function (gist: https://gist.github.com/sajeetharan/d24f16c79111688f5c2167bf8947856f)
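If you prefer to skip the interactive prompts, this step might look like the following (the function name is illustrative):

func new --name ProcessOrder --template "QueueTrigger"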

In this case we need a function that is triggered when there is a new message in the queue, so select QueueTrigger from the templates,

Select Queue Trigger

and give a function name as follows,

Create Function with QueueTrigger template

Open the function in VS Code and configure the storage queue connection string in local.settings.json. The connection string can be obtained from the Azure portal under Storage Account "sakeda" -> Access keys.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "StorageConnection": "DefaultEndpointsProtocol=https;AccountName=sakeda;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net"
  }
}

The whole function code can be viewed here.

Step 6: Build the Function and Enable KEDA on the AKS Cluster

As we have everything ready, let's build the function with the below command,

 docker build -t processorder:v1 .

As the next step, we need to enable KEDA on the Azure Kubernetes cluster. This is similar on any Kubernetes environment and can be achieved as below:

Enable KEDA on the cluster (gist: https://gist.github.com/sajeetharan/76b6529d6adc0235f50921d0ba6c0519)
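One way to enable KEDA, assuming Azure Functions Core Tools, is:

func kubernetes install --namespace keda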

keda.yaml is the configuration file that has all the details. In it we define that we want to use the queue trigger and what our criteria are: for our scenario we'd like to scale out if there are 5 or more messages in the orders queue, with a maximum of 10 concurrent replicas, which is defined via maxReplicaCount.
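The gist isn't shown here, but based on that description, keda.yaml might look roughly like this (KEDA v1-era API; the deployment name is illustrative):

cat > keda.yaml <<EOF
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: processorder-scaler
spec:
  scaleTargetRef:
    deploymentName: processorder
  maxReplicaCount: 10               # at most 10 concurrent replicas
  triggers:
  - type: azure-queue
    metadata:
      queueName: sqkeda
      queueLength: "5"              # scale out at 5 or more messages
      connection: StorageConnection # app setting holding the connection string
EOF

With that in place, let's apply the YAML file to the cluster: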

kubectl apply -f keda.yaml --namespace="kube-system"

Step 7: Deploy the Container as an Azure Function on AKS

The final step is to deploy the function to the Kubernetes environment with the following command:

Deploy the function (gist: https://gist.github.com/sajeetharan/fc52f2e6a686bca16acc190fee2137c0)
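The deployment step, assuming Azure Functions Core Tools, looks roughly like this:

# replace the image prefix with your own registry
func kubernetes deploy --name processorder --image-name yourregistry/processorder:v1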

Make sure that you tag the image with the correct registry name. Here I am using my Docker registry; you can consider using Azure Container Registry as well.

Let's view the Kubernetes cluster. If you have installed the Kubernetes extension, you can easily view the status of the cluster in VS Code.

Step 8: Publish Messages to the Queue

In order to test the scaling of the Azure Function with KEDA, I created a sample console application which pushes messages to the queue we created. The code looks as follows:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using System;
using System.Threading.Tasks;

namespace Serverless
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Parse the storage account connection string (use your own)
            CloudStorageAccount storageClient = CloudStorageAccount.Parse("connection string");
            CloudQueueClient queueClient = storageClient.CreateCloudQueueClient();

            // Reference the queue that triggers the function
            CloudQueue queue = queueClient.GetQueueReference("sqkeda");

            // Flood the queue with messages so KEDA scales the function out
            for (int i = 0; i < 100000000; i++)
            {
                await queue.AddMessageAsync(new CloudQueueMessage("Hello KEDA , See the magic!"));
            }
        }
    }
}

As the messages are inserted into the queue, the number of replicas increases, as you can see from the image below.

Once all the messages have been processed, KEDA will scale the deployment back to 0 pod instances. The overall process is summarized in the diagram below.

Serverless eventing with Kubernetes

Conclusion:

We easily deployed a .NET Core 3.1 function on Kubernetes that processes messages from a Storage Queue. Once we deployed a ScaledObject for our Kubernetes deployment, it started scaling the pods out and in according to the queue depth. This makes it very easy to plug autoscaling into an existing application without making any changes!

This makes Kubernetes-based event-driven autoscaling (KEDA) a great addition to the autoscaling toolchain, certainly for Azure customers and ISVs who are building solutions. Hope this was useful in some way! Cheers!

· 7 min read

Quarantine, self-isolation, social distancing: for the past month I've been living with these words. While most of us are investing this time to learn new technologies and tools, I challenged myself to skill up and gain deep knowledge of certain services on Azure.

Kubernetes provides a uniform way of managing containers. Its aim is to remove the complexity of deciding where applications should be scheduled to run, how to locate them, how to ensure they are running, and how to autoscale or deploy them. Azure Kubernetes Service helps customers achieve their business goals by providing a layer of automation on top of their infrastructure. Looking at the technical features, Azure Kubernetes Service has a lot to offer, but at the end of the day it is a great platform for saving money and growing faster.

Azure Kubernetes Service is a great fit for microservice architectures. If your application needs to start hundreds of containers quickly, or terminate them just as quickly, with full control of those services, AKS is a great option. There are other scenarios, such as big data and IoT, where you would consider AKS a preferred choice. In this post I will explain how to easily set up your application running on an AKS cluster, with CI/CD pipelines, in 10 minutes.

Prerequisites:

You will need an Azure subscription. If you do not have one, you can simply create a free trial.

How to build & deploy the application:

If you are a beginner with Azure Kubernetes Service, Azure DevOps is the best place to look in order to understand how an application is deployed on Azure Kubernetes. The Azure DevOps Project simplifies the setup of an entire continuous integration (CI) and continuous delivery (CD) pipeline to Azure with Azure DevOps. The cool thing is that you can start with existing code or use one of the provided sample applications, and it enables you to quickly deploy that application to various Azure services such as Virtual Machines, App Service, Azure Kubernetes Service (AKS), Azure SQL Database, and Azure Service Fabric.

Let's Deploy a Node.js App to Azure Kubernetes Service:

Navigate to the Azure portal and search for Azure DevOps Project in the marketplace/search bar.

Azure Devops Project

Let's go ahead and add a new project.

Add new Azure Devops Project

Azure DevOps Project enables developers to launch an app with any Azure App Service in just a few quick steps, providing everything needed to develop, deploy and monitor an app. Create a DevOps Project, and it provisions all the Azure resources and provides a Git code repository, Application Insights integration and a continuous delivery pipeline set up for deployment to Azure. The DevOps Project dashboard lets you monitor code commits, builds and deployments from a single view in the Azure portal. How cool is that?

With the help of Azure DevOps Projects, you can build an Azure application, on an Azure service, in no time. You also get automatic full CI/CD pipeline integration, built-in monitoring and deployment to the platform of your choice. Azure DevOps Project supports almost all the popular languages in practice, such as .NET, Java, Node.js, PHP, Python and Go.

The next step is to select the language you want the application in. I will go ahead and choose Node.js as my application language, but you could choose any language that you want to test.

Create Node.js Devops Project

Once you select the language, the next step is to select the framework you want the application to be based on. For example, if you choose Python it could be based on Flask, Django, etc. Similarly, you have the flexibility to choose the framework once you decide on the language. In this case I will go ahead and choose Express.

Select the Framework

The next step is the critical part of the process: this is the step that defines which service you will use to deploy the app. You can run your application on Windows or Linux, and simply deploy to an Azure Web App, Virtual Machine, Service Fabric, or choose Azure Kubernetes Service for your application. Each of those options provides deployment in an elegant and fast way. In this case, we will deploy the application to Azure Kubernetes Service.

Azure Kubernetes Service to Deploy

Once you are done with the above, the final step is passing the configuration details for the Kubernetes cluster on AKS as follows.

Most of the settings are self-explanatory; you can change the size of the underlying VMs based on your requirements. The default number of nodes for your cluster is 3. If you need to make changes to your cluster or the container registry settings, click on Additional Settings. Here you can configure the Kubernetes version, node count, App Insights and resource group location. The HTTP application routing solution makes it easy to access applications that are deployed to your Azure Kubernetes Service (AKS) cluster; in this case we will disable it.

Additional Settings AKS configuration

A container registry is needed, as your images need to be pushed to it. Once you're good with all the settings, click OK and done! You will see a notification box as below.

K8s cluster, Container Registry and CI/CD pipelines are created

Once everything is created you will be redirected to a Dashboard page as below.

Resources in page

The five stages involved are:

  1. Azure Kubernetes Cluster: Created and configured your Azure Kubernetes cluster and application endpoint.
  2. Azure Container Registry: Created, with the application image pushed to the container registry.
  3. Repository: Created a distributed Git repository and checked in sample code.
  4. CI/CD Pipeline: Seamlessly connected with the Azure DevOps collaboration solution, allowing you to plan, test, release and monitor your solutions.
  5. Application Insights: Created and configured your Application Insights telemetry, which enables active monitoring and learning to proactively detect issues and continuously analyze and test hypotheses without code.

You can see all the resources created on Azure under the resource group.

Resource Group with All resources

When you click on the Kubernetes cluster, you can see the Kubernetes-related resources such as the dashboard, logs, etc.

Kubernetes cluster resources

And if you navigate to the blade, you can see settings such as enabling Dev Spaces, the Kubernetes version, Application Insights, etc.

On the Azure DevOps side, you will be able to see the new Azure DevOps project created, with a dashboard, backlog items, CI/CD pipelines, etc.

Azure Devops Project with CI/CD pipelines

And when you click on the application endpoint, you will see the application running successfully on Azure Kubernetes Service.

Nodejs App on AKS

In order to verify the services and pods, you can follow the steps provided in the Azure Kubernetes dashboard configuration; when you open up the dashboard, you can see the status of each service, or check from the command line as sketched below.
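For example, with kubectl configured against the cluster (the resource group and cluster names are illustrative):

# List the running pods and services
kubectl get pods,services

# Or launch the Kubernetes dashboard for the AKS cluster
az aks browse --resource-group myDevOpsRG --name myAKSCluster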

Azure Kubernetes dashboard

I have spent more than four days in the past configuring Kubernetes to deploy my application, but Azure DevOps Projects simplify and speed up the DevOps process with Azure DevOps services. If you want to explore more scenarios with different services on Azure, it's worth exploring Azure DevOps Labs. I hope this makes it easier to get started with deployments to Azure services. Cheers!

· 3 min read

This week I participated in my first ever OpenHack on DevOps, organized by Microsoft in Singapore. It was a three-day event from 26th-29th November 2019. The hackathon was focused on Azure DevOps and Azure Kubernetes Service, and there were participants from all over the world gathered in one place.

There were over 90 participants, comprising internal employees as well as customers. Participants were divided into teams of six members, each with one coach. The content was set as 8 challenges. The coach on each team was a Cloud Solution Architect from Microsoft who helped and guided the team through the challenges, with some hints to solve them. One of the cool things about the hack was that each team could apply their own solutions in unique ways. We as a team were supposed to find our own way to solve the challenges: there was no one way, and we were free to take our decisions and paths as we saw fit. If you are wondering about the agenda and what happened at the hack, here you go.

My Team RockStars - Announced as Happiest team among all

What I really liked about the OpenHack was that each team member was really able to understand the challenge and get the team's support whenever they were stuck. Before we started each challenge, one member of the team was assigned as Scrum Master, and they had to drive the entire team to complete the challenge. In each challenge, someone had to elaborate on the features of whatever tools and technologies we would use in the challenge. There was a whiteboarding session included in each challenge before we got in to try to solve it. It was hands-on, rather than attending a tech talk about a specific topic. The tasks were set, the challenges were well organized, the environment was prepared, and the code was almost prepared (with some changes) so that we could focus on learning how to use Azure DevOps as a tool to ensure zero downtime for a production-ready application. Kubernetes was chosen as the orchestration framework, and Azure Monitor was used as the monitoring service.

Microsoft OpenHack is a developer focused event where a wide variety of participants (Open) learn through hands-on experimentation (Hack) using challenges based on real world customer engagements designed to mimic the developer journey.

For every challenge, links to documentation and resources were provided to understand the relevant topics and areas at hand. Besides the actual work, it was a great opportunity to network and discuss broader topics with fellow team members and other participants. It was not just about solving challenges: everyone appreciated each other's work whenever we accomplished something. We were given some cool swag, including stickers, a notebook, a wireless charger and an Azure DevOps badge.

There was no real winning team in this OpenHack; all the teams that participated thoroughly enjoyed it, and it was about sharing and solving real issues. Overall, I think it was a great learning experience for all the participants, with a great focus on getting things done. I will definitely keep an eye on such events in the future and try to join as a DevOps coach for upcoming editions. More than a hackathon, it was not just about technology but about teamwork. If you want to have the same experience, try to join any of the OpenHacks from here.

· 5 min read

For the past month, I have been experimenting with one of the most promising serverless frameworks for creating serverless functions. With modern applications moving to the cloud with microservices, this framework becomes very handy for creating and managing your microservices in the form of functions.

Why do we need microservices?

When I started my career as a developer, most of the applications I worked on had a three-tier architecture, and most companies built applications with a monolithic architecture, even before cloud platforms existed. With modern technologies, everyone is decomposing the business functionality of an application into several microservices to avoid a single point of failure. Take Uber as an example: its core functionalities, such as registration, payment, email notifications and push notifications, can be broken down into several microservices in order to avoid any downtime. With a monolithic architecture, a single point of failure could cause the entire application to shut down. To understand this in detail, look at the following diagram.

Serverless != No server

Many of us have the common understanding of serverless as being "without a server". Nothing can be executed or hosted without a server; it's just that you have no way to actually see the server which executes your code. As a developer, with serverless you do not have to worry about managing servers, as that is handled automatically. Serverless becomes handy for the following reasons:

  • Reduce time-to-market
  • Easier deployment
  • Scale automatically
  • Focus on business logic
  • Cost reduction

Serverless is mainly used for event-driven architectures, where a function has an endpoint that is triggered by something. For example: trigger a notification once a file is uploaded.

Serverless and microservices are a great couple. You should choose serverless when your functions/services are:

  • Stateless
  • Short Job
  • Event-driven stuff, e.g. Time-based / webhook
  • Simple application with less dependencies

OpenFaaS – Serverless Functions Made Simple

There are many frameworks out there for building serverless applications, and among them OpenFaaS stands out, as it is not vendor-locked and you can use it both on-prem and on any cloud platform. It is very simple and needs only a few commands to get your functions deployed anywhere. It can be exposed to the outside world with Docker Swarm or Kubernetes.

Following are the reasons if you ever want to choose OpenFaas,

  • Anything can be a function
  • Leverage existing skills in teams
  • Avoid vendor lock-in
  • Run anywhere - cloud or on-prem
  • 13,000+ stars on GitHub with a large contributor base

OpenFaaS Architecture

There are two main components that you should get to know before getting started with OpenFaaS.


Function Watchdog

As the name indicates, the watchdog is responsible for converting HTTP messages to stdin, which is then passed to the function, and stdout back to HTTP, and vice versa. Any Docker image can be turned serverless by adding the function watchdog.

API Gateway / UI Portal

As you may know from the AWS API Gateway, it does a similar job here as well. It provides an external route into your functions and collects cloud-native metrics through Prometheus. It also scales functions according to demand by altering the service replica count via the Docker Swarm or Kubernetes API, and provides a UI to invoke functions in your browser and create new ones as needed.

Faas-CLI

The command line interface helps you deploy your functions, or quickly create new functions from templates in any programming language you prefer, as sketched below.
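For example (the gateway URL is illustrative):

# Scaffold a new function from a template
faas-cli new hello --lang python3

# Build, push and deploy it in one shot
faas-cli up -f hello.yml --gateway http://<gateway-ip>:8080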

Set up OpenFaaS on Kubernetes

As I am a fan of Microsoft Azure, I will be providing the steps to set up OpenFaaS on Azure Kubernetes Service. I have done this at a workshop: it takes about 10 steps to build a Kubernetes cluster, and then everything can be done simply by running a few kubectl and helm commands.

Step 1: Launch Azure Cloud shell with https://shell.azure.com/bash

Step 2: Create a resource group

az group create --name aksColomboRG --location eastus

Step 3: Create AKS cluster

az aks create --resource-group aksColomboRG --name openFaasCluster --node-vm-size Standard_A2_v2 --node-count 1 --enable-addons monitoring --generate-ssh-keys

Step 4: Connect to the cluster

az aks get-credentials --resource-group aksColomboRG --name openFaasCluster --admin --overwrite-existing

Step 7: List the kube-system resources

kubectl get all -n kube-system

Kubectl is a command line interface for running commands against Kubernetes clusters.

Step 8: Install and Init Helm

helm init --upgrade

Helm fills the need to quickly and reliably provision container applications through easy install, update, and removal

Step 9: Install OpenFaas

git clone https://github.com/openfaas/faas-netes

Step 10: Create namespace OpenFaas

kubectl create ns openfaas

Step 11: Create a second namespace for OpenFaaS functions

kubectl create ns openfaas-fn

Step 12: Check you have a tiller pod in ready state

kubectl -n kube-system get po

Step 13: Check the tiller logs

kubectl logs --namespace kube-system tiller-deploy-66cdfd5bc9-46sxv

When a user executes the Helm install command, a Tiller Server receives the incoming request and installs the appropriate package

Step 14: Resolve the "cannot list configmaps in kube-system" error

kubectl create serviceaccount --namespace kube-system tiller

Step 16: A Helm chart for OpenFaaS is included in the cloned repository. Use this chart to deploy OpenFaaS into your AKS cluster.

helm repo add openfaas https://openfaas.github.io/faas-netes/

helm upgrade --install --namespace openfaas --set functionNamespace=openfaas-fn --set async=true --set serviceType=LoadBalancer openfaas openfaas/openfaas

Step 17: See OpenFaas live

kubectl get all -n openfaas

then copy the service/gateway-external URL with the port and paste it in the browser. You should see OpenFaaS live; the snippet below shows how to find that URL.
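To find it:

kubectl get service gateway-external --namespace openfaas
# use the EXTERNAL-IP with port 8080, e.g. http://<external-ip>:8080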


Well, that’s all about for this post, I will write another post about how to execute functions and how to build your custom function in the coming days.

You can find the workshop slides here. Keep watching :)