Running Kubernetes Workloads on AWS Spot Instances-Part VIII


Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies

So far in this series, we have worked only with OnDemand nodes in the K8s cluster. This post demonstrates how to use Spot Instances as K8s worker nodes, covering provisioning, automatic scaling, and handling interruptions (termination) of worker nodes across your cluster. Spot Instances can save you up to 70–90% in cost compared to OnDemand. Although Spot Instances are cheaper, you cannot run all your worker nodes as Spot: keep some OnDemand Instances as a backup, because Spot Instances can be interrupted at any time 😉

In this article, we discuss how you can use Spot Instances both on an EKS cluster and on a cluster you manage yourself on EC2 servers.

Refer to our public GitHub repo, which contains the files and templates we have used in this implementation. The sections below walk through both approaches.

Kubernetes Operations with AWS EKS

AWS EKS is a managed service that simplifies the management of Kubernetes servers. It provides a highly available and secure K8s control plane. There are two major components associated with your EKS Cluster:

  • EKS control plane which consists of control plane nodes that run the Kubernetes software, like etcd and the Kubernetes API server.
  • EKS worker nodes that are registered with the control plane.

With EKS, you no longer need to manage the installation, scaling, or administration of master nodes; AWS takes care of the control plane and lets you focus on your worker nodes and applications.

Prerequisites

  • An EC2 server to provision the EKS cluster using AWS CLI commands.
  • The latest version of the AWS CLI installed on that server.
  • IAM permissions to create the EKS cluster. Create an IAM instance profile with the required permissions attached and assign it to the EC2 server.
  • An EKS service role.
  • Kubectl installed on the server.

Provision K8s Cluster with EKS

Execute the below command to provision an EKS Cluster:

aws eks create-cluster --name puck8s --role-arn arn:aws:iam::ACCOUNT:role/puc-eks-servicerole --resources-vpc-config subnetIds=subnet-xxxxx,subnet-xxxxx,subnet-xxxxxx,securityGroupIds=sg-xxxxx --region us-east-2

We have used private subnets available in our account to provision a private cluster.

Wait for the cluster to become available.

aws eks describe-cluster --name puck8s --query cluster.status --region us-east-2
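
If you prefer not to poll manually, here is a minimal sketch of a wait loop that checks the status until it becomes ACTIVE:

until [ "$(aws eks describe-cluster --name puck8s --query cluster.status --output text --region us-east-2)" = "ACTIVE" ]; do
  echo "Cluster is still creating, waiting 30s..."
  sleep 30
done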

Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM Authenticator for Kubernetes (link in the References section below). Install it using the below commands:

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
cp ./aws-iam-authenticator /usr/bin/aws-iam-authenticator

Update the ~/.kube/config file, which kubectl will use to access the cluster:

aws eks update-kubeconfig --name puck8s --region us-east-2

Execute “kubectl get svc” to verify that kubectl can reach the cluster.

Launch Spot and OnDemand Worker Nodes

We have provisioned the EKS worker nodes using a CloudFormation template provided by AWS. The template is available in our GitHub repo as well, i.e. provision-eks-worker-nodes/amazon-eks-nodegroup-with-spot.yaml. The template will provision three Auto Scaling Groups:

  • Two ASGs with Spot Instances, each using a different instance type as given in the parameters
  • One ASG with OnDemand Instances using the instance type given in the parameter

Create a CloudFormation stack and provide the values for the parameters. For the AMI parameter, enter the ID from the below table:

| Region                  |      AMI               | 
|-------------------------| ---------------------- |
| US East(Ohio)(us-east-2)| ami-0958a76db2d150238 |

Launch the stack and wait for the stack creation to complete. Note down the NodeInstanceRole ARN from the Outputs.
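
For reference, launching the node-group stack from the CLI might look like the sketch below; the stack name and parameter keys are illustrative and should be taken from the actual template:

aws cloudformation create-stack \
  --stack-name puck8s-spot-workers \
  --template-body file://provision-eks-worker-nodes/amazon-eks-nodegroup-with-spot.yaml \
  --capabilities CAPABILITY_IAM \
  --region us-east-2 \
  --parameters ParameterKey=ClusterName,ParameterValue=puck8s ParameterKey=NodeImageId,ParameterValue=ami-0958a76db2d150238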

Now get the config map from our repo.

https://github.com/powerupcloud/kubernetes-spot-webinar/blob/master/provision-eks-worker-nodes/aws-cm-auth.yaml

Open the file “aws-cm-auth.yaml” and replace the <ARN of instance role (not instance profile)> placeholder with the NodeInstanceRole value that you recorded in the previous step, and save the file.
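
For reference, the node-authentication ConfigMap in that file follows the standard EKS aws-auth pattern and will look similar to this (keep the placeholder until you substitute your own ARN):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes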

kubectl apply -f aws-cm-auth.yaml
kubectl get nodes --watch

Wait for the nodes to be ready.

Kubernetes Operations with KOPS

Kops is an official Kubernetes project for managing production-grade Kubernetes clusters. It has commands for provisioning multi-node clusters, updating their settings (including nodes and masters), and applying infrastructure changes to an existing cluster. Kops is currently one of the best tools for managing a k8s cluster on AWS.

Note: You can use kops in AWS regions that AWS EKS does not support.

Prerequisites:

  • An EC2 server to provision the cluster using CLI commands.
  • A Route53 domain (for example, k8sdemo.powerupcloud.com) in the same account from which you are provisioning the cluster. Kops uses DNS to identify the cluster and adds the records for the API in your Route53 hosted zone.
Note: For a public hosted zone, you will have to add NS records for the above domain to your actual DNS. For example, we have added an NS record for "k8sdemo.powerupcloud.com" to "powerupcloud.com". This will be used for DNS resolution. For a private hosted zone, ensure you associate the VPCs.
  • IAM permissions to create the cluster resources and update DNS records in Route53. Create an IAM instance profile with the permissions attached and assign it to the EC2 server.
  • An S3 bucket for the kops state store.
  • Kubectl installed.

Install Kops

Log into the EC2 server and execute the below command to install Kops on the Server:

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops

Provision K8s Cluster

kops create cluster k8sdemo.powerupcloud.com --ssh-public-key ~/.ssh/id_rsa.pub --master-zones ap-south-1a --zones ap-south-1a,ap-south-1b --master-size=t2.medium --node-count=1 --master-count 1 --node-size t2.medium --topology private --dns public --networking calico --vpc vpc-xxxx --state s3://k8sdemo-kops-state-store --subnets subnet-xxxx,subnet-xxxx --utility-subnets subnet-xxxx,subnet-xxxx --kubernetes-version 1.11.4 --admin-access xx.xxx.xxxx.xx/32 --ssh-access xx.xxx.xxx.xx/32 --cloud-labels "Environment=DEMO"

Refer to our previous blog for the explanation of the arguments in the above command.

kops update cluster k8sdemo.powerupcloud.com --state s3://k8sdemo-kops-state-store --yes

Once the above command is successful, we will have a private K8s Cluster ready with Master and Nodes in the private subnets.

Use the command “kops validate cluster CLUSTER_NAME” to validate the nodes in your k8s cluster.

Create Instance Groups for Spot and OnDemand Instances

A Kops instance group is a grouping of similar instances and maps to an Auto Scaling Group in AWS. We can use the “kops edit” command to edit the configuration of an instance group in an editor, and the “kops update” command applies the changes to the existing nodes.

Once we have provisioned the cluster, we will have two instance groups, i.e. one for the master and one for the nodes. Execute the below command to list the available instance groups:

kops get ig

Edit the nodes instance group to provision Spot workers, adding the keys and values below. Set the maxPrice property to your bid; for example, “0.10” represents a Spot price bid of $0.10 (10 cents) per hour.

spec:
...
  maxPrice: "1.05"
  nodeLabels:
    lifecycle: Ec2Spot
    node-role.kubernetes.io/spot-worker: "true"

The final configuration of the nodes instance group will look similar to the snippet above.

Create one more Spot Instance Group for a different instance type.

kops create ig nodes2 --subnet ap-south-1a,ap-south-1b --role Node
kops edit ig nodes2

Add maxPrice and node labels for this group as well; the final configuration will be similar to the first Spot group, with a different instance type (a sketch is shown below).
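
For illustration, the nodes2 spec might look like the sketch below, assuming a hypothetical c4.large machine type and the same labels as the first Spot group:

spec:
...
  machineType: c4.large
  maxPrice: "1.05"
  nodeLabels:
    lifecycle: Ec2Spot
    node-role.kubernetes.io/spot-worker: "true"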

Now, we have configured two Spot worker node groups for our cluster. Create an instance group for OnDemand worker nodes by executing the below commands:

kops create ig ondemand-nodes --subnet ap-south-1a,ap-south-1b --role Node

kops edit ig ondemand-nodes

Add node labels for the OnDemand workers.

We have also added taints so that pods avoid the OnDemand worker nodes where possible; new pods will preferably be scheduled on the Spot workers. A sketch of the resulting spec is shown below.
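
For illustration, the OnDemand instance group spec might look like the sketch below; the label values are assumptions, and the PreferNoSchedule taint makes the scheduler avoid these nodes whenever Spot capacity is available:

spec:
...
  nodeLabels:
    lifecycle: OnDemand
    node-role.kubernetes.io/ondemand-worker: "true"
  taints:
    - nodeType=ondemand:PreferNoSchedule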

To apply the above configurations, execute the below command:

kops update cluster
kops update cluster --yes
kops rolling-update cluster --yes

Cluster Autoscaler

Cluster Autoscaler is an open-source tool which automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • there are pods that failed to run in the cluster due to insufficient resources
  • there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.

CA runs as a daemonset on the cluster's OnDemand nodes. The YAML file for the daemonset is provided in our GitHub repo, i.e. https://github.com/powerupcloud/kubernetes-spot-webinar/tree/master/cluster-autoscaler.

Update the following values in cluster-autoscaler/cluster-autoscaler-ds.yaml (a sketch of the relevant section follows this list):

  • The Auto Scaling Group names of the OnDemand and Spot node groups
  • The minimum count of instances in each Auto Scaling Group
  • The maximum count of instances in each Auto Scaling Group
  • The AWS region
  • The node selector, which ensures the CA pods always run on the OnDemand nodes
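
For illustration, the relevant part of the daemonset's pod template might look like the sketch below; the ASG names, counts, image tag, and node selector label are placeholders to be replaced with your own values:

spec:
  nodeSelector:
    lifecycle: OnDemand
  containers:
    - name: cluster-autoscaler
      image: k8s.gcr.io/cluster-autoscaler:v1.2.2
      command:
        - ./cluster-autoscaler
        - --v=4
        - --cloud-provider=aws
        - --nodes=1:10:spot-worker-asg-1
        - --nodes=1:10:spot-worker-asg-2
        - --nodes=1:3:ondemand-worker-asg
      env:
        - name: AWS_REGION
          value: us-east-2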

Create the Cluster Autoscaler on both of the k8s clusters, i.e. EKS as well as the cluster provisioned using kops. Ensure the below permissions are attached to the IAM role assigned to the cluster worker nodes:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
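
These permissions can be attached as an inline policy with the AWS CLI; a hedged example (the role name and file path are placeholders):

aws iam put-role-policy \
  --role-name <NodeInstanceRole-name> \
  --policy-name cluster-autoscaler-policy \
  --policy-document file://cluster-autoscaler-policy.json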

The daemonset YAML file for the EKS cluster and the one for the kops-provisioned cluster differ only in the values described above (ASG names, counts, region, and node selector labels); both variants are available in the repo directory linked above.

Create the DaemonSet.

kubectl create -f cluster-autoscaler/cluster-autoscaler-ds.yaml

Now create a pod disruption budget for CA, which ensures that at least one cluster autoscaler pod is always running.

kubectl create -f cluster-autoscaler/cluster-autoscaler-pdb.yaml
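
The file in the repo defines this budget; a minimal PDB of this kind, assuming the CA pods carry an app: cluster-autoscaler label (check the labels used in the daemonset), looks like:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: cluster-autoscaler-pdb
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: cluster-autoscaler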

Verify the Cluster autoscaler pod logs in kube-system namespace:

kubectl get pods -n kube-system
kubectl logs -f pod/cluster-autoscaler-xxx-xxxx -n kube-system

Spot Termination Handler

The major drawbacks of a Spot Instance are:

  • it may take a long time to become available (or may never become available), and
  • it may be reclaimed by AWS at any time.

Amazon EC2 can interrupt your Spot Instance when the Spot price exceeds your maximum price, when the demand for Spot Instances rises, or when the supply of Spot Instances decreases. Whenever you opt for Spot, you should always be prepared for interruptions.

So, we are creating an interrupt handler on the clusters, which runs as a daemonset on the Spot worker nodes. The workflow of the Spot Interrupt Handler can be summarized as follows (a minimal sketch of the core loop follows the list):

  • Identify that a Spot Instance is being reclaimed.
  • Use the 2-minute notification window to gracefully prepare the node for termination.
  • Taint the node and cordon it off to prevent new pods from being placed.
  • Drain connections on the running pods.
  • To maintain the desired capacity, replace the pods on the remaining nodes.
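
Under the hood, a handler of this kind typically polls the EC2 instance metadata service for the termination notice and then drains the node; a minimal shell sketch (NODE_NAME is assumed to be injected into the pod via the downward API):

# Poll the Spot termination notice endpoint every 5 seconds
while true; do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://169.254.169.254/latest/meta-data/spot/termination-time)
  if [ "$STATUS" = "200" ]; then
    # Notice received: stop new scheduling and drain within the 2-minute window
    kubectl taint nodes "$NODE_NAME" spotInterruption=true:NoSchedule --overwrite
    kubectl drain "$NODE_NAME" --ignore-daemonsets --delete-local-data --force --grace-period=120
    break
  fi
  sleep 5
done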

Create the Spot Interrupt Handler DaemonSet on both the k8s clusters using the below command:

kubectl apply -f spot-termination-handler/deploy-k8-pod/spot-interrupt-handler.yaml

Deploy Microservices with Istio

We have taken the BookInfo sample application, which uses Istio, to deploy on our cluster.

Istio is an open platform to connect, manage, and secure microservices. For more info, see the link in the References section below. To deploy Istio on the k8s cluster, follow the steps below:

wget https://github.com/istio/istio/releases/download/1.0.4/istio-1.0.4-linux.tar.gz
tar -xvzf istio-1.0.4-linux.tar.gz
cd istio-1.0.4

In our case, we have provisioned the worker nodes in private subnets. For Istio to provision a publicly accessible load balancer, tag the public subnets in your VPC with the below tag:

kubernetes.io/cluster/puck8s:shared
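
A hedged example of applying this tag with the AWS CLI (the subnet IDs are placeholders):

aws ec2 create-tags \
  --resources subnet-xxxxx subnet-yyyyy \
  --tags Key=kubernetes.io/cluster/puck8s,Value=shared \
  --region us-east-2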

Install helm from the link below:

https://github.com/helm/helm

kubectl create -f install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller --wait
helm install --wait --name istio --namespace istio-system install/kubernetes/helm/istio --set global.configValidation=false --set sidecarInjectorWebhook.enabled=false
kubectl get svc -n istio-system

You will get the LoadBalancer endpoint.

Create a gateway for the Bookinfo sample application.

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

The BookInfo sample application source code, Dockerfiles, and Kubernetes deployment YAML files are available in the sample-app directory in our GitHub repo.

Build a Docker image from the provided Dockerfile for each of the four services, push it to a registry, and update the IMAGE variable in k8s/deployment.yaml accordingly (a hedged build-and-push sketch is shown below).
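
Assuming ECR repositories named after the services and AWS CLI v1 (with CLI v2, use aws ecr get-login-password instead), the build-and-push for the productpage service might look like:

# Authenticate Docker to ECR (account ID, region, and repository name are placeholders)
$(aws ecr get-login --no-include-email --region us-east-2)
docker build -t ACCOUNT_ID.dkr.ecr.us-east-2.amazonaws.com/productpage:v1 ./productpage
docker push ACCOUNT_ID.dkr.ecr.us-east-2.amazonaws.com/productpage:v1

Repeat for the other three services, then deploy each service using: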

kubectl apply -f k8s

Hit http://LB_Endpoint/productpage and you will get the frontend of your application.

Autoscaling When the Application Load Is High

If the number of pods increases with the application load, the cluster autoscaler will provision more worker nodes in the Auto Scaling Groups. If Spot Instances are not available, it will opt for OnDemand Instances.

Initial Settings in the ASG:

Scale up the number of pods for one deployment, for example, product page. Execute:

kubectl scale --replicas=200 deployment/productpage-v1

Watch the Cluster Autoscaler manage the ASG.

Similarly, when the application load drops, CA will scale the ASG back down.

Note: We don't recommend running stateful applications on Spot nodes. Use OnDemand nodes for your stateful services.

And that's all! Hope you found it useful. Happy savings!

References:

Automated Deployment of PHP Application using Gitlab CI on Kubernetes — Part VII


Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies

Recently, we got an opportunity to develop and deploy an application on a Kubernetes cluster running on AWS. We developed a sample PHP application that parses a CSV file and uploads its contents into a MySQL RDS instance. The application UI also supports other functionality, such as updating or deleting a particular row in the database, storing and viewing processed files via an AWS S3 bucket, and viewing all the records in the MySQL database. The Kubernetes cluster is provisioned using the KOPS tool. This article walks through the prerequisites, cluster provisioning, Gitlab setup, and application deployment.

Prerequisites

  • Route53 hosted zone (required by KOPS)
  • One S3 bucket (required by KOPS to store state information)
  • One S3 bucket to store the processed CSV files (for example, pucdemo-processed-.csv)
  • One S3 bucket to store the application access logs of the load balancer.
  • MySQL RDS in a private subnet, with port 3306 opened to the Kubernetes cluster nodes.
  • A table to store the data from the CSV file. In our case, we used the following commands to create the database and table:

create database csvdb;
CREATE TABLE puc_csv(
  sku INT,
  name VARCHAR(200),
  price DOUBLE
);

Setup

  • Cloud: Amazon Web Services
  • Scripting languages used: HTML, JavaScript, and PHP
  • Kubernetes version: 1.11
  • K8s cluster instance type: t2.medium
  • Instances are launched in private subnets
  • 3 masters and 2 nodes (autoscaling configured)
  • K8s master and worker nodes are in Auto Scaling Groups for HA, scalability, and fault tolerance
  • S3 buckets to store data (details in Prerequisites)
  • Route53 is used for DNS management
  • RDS: MySQL 5.7 (Multi-AZ enabled)

Provision Kubernetes Cluster on AWS

kops create cluster pucdemo.powerupcloud.com --ssh-public-key ~/.ssh/id_rsa.pub --master-zones ap-south-1a --zones ap-south-1a,ap-south-1b --master-size=t2.medium --node-count=2 --master-count 3 --node-size t2.small --topology private --dns public --networking calico --vpc vpc-xxxx --state s3://pucdemo-kops-state-store --subnets subnet-xxxx,subnet-xxxx --utility-subnets subnet-xxx,subnet-xxx --kubernetes-version 1.11.0 --api-loadbalancer-type internal --admin-access 172.31.0.0/16 --ssh-access 172.31.xx.xxx/32 --cloud-labels "Environment=TEST" --master-volume-size 100 --node-volume-size 100 --encrypt-etcd-storage;

where,

  • We have provided our public key in the --ssh-public-key argument. The corresponding private key will be used for SSH access to your masters and nodes.
  • Private subnets are provided in --subnets: these will be used by the Kubernetes API (internal).
  • Public subnets are provided in --utility-subnets: these will be used by Kubernetes services (external).
  • --admin-access takes the IP CIDR for which the Kubernetes API port will be allowed.
  • --ssh-access takes the IP from which you will be able to SSH into the master nodes of the Kubernetes cluster.
  • pucdemo.powerupcloud.com is the hosted zone created in Route53. KOPS will create the API-related DNS records within it.

Attach the ECR full access policy to the cluster nodes' instance profile role; an example command is shown below.

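A hedged example using the AWS managed policy; the node role name shown here (nodes.pucdemo.powerupcloud.com) follows the kops default naming and should be checked in IAM before running:

aws iam attach-role-policy \
  --role-name nodes.pucdemo.powerupcloud.com \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
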
Create Required Kubernetes Resources

Clone the below Github repo:

https://github.com/powerupcloud/k8s-data-from-csvfile-to-database.git

Create Gitlab Instance:

Replace the values for the following variables in kubernetes-gitlab/gitlab-deployment.yml:

  • GITLAB_ROOT_EMAIL
  • GITLAB_ROOT_PASSWORD
  • GITLAB_HOST
  • GITLAB_SSH_HOST

kubectl create -f kubernetes-gitlab/gitlab-ns.yml
kubectl create -f kubernetes-gitlab/postgresql-deployment.yml
kubectl create -f kubernetes-gitlab/postgresql-svc.yml
kubectl create -f kubernetes-gitlab/redis-deployment.yml
kubectl create -f kubernetes-gitlab/redis-svc.yml
kubectl create -f kubernetes-gitlab/gitlab-deployment.yml
kubectl create -f kubernetes-gitlab/gitlab-svc.yml

“kubectl get svc -n gitlab” will give the provisioned load balancer endpoint. Create a DNS record for the endpoint, for example, git.demo.powerupcloud.com.

Create Gitlab Runner:

Replace the values for the following variables in gitlab-runners/configmap.yml:

  • Gitlab URL
  • Registration Token

Go to the Gitlab Runners section in the Gitlab console to get the above values.

kubectl create -f gitlab-runners/rbac.yaml
kubectl create -f gitlab-runners/configmap.yaml
kubectl create -f gitlab-runners/deployment.yaml

Create CSVParser Application:

Create a base Docker image with Nginx and PHP 7.0 installed on it and push it to ECR. Reference this base image in csvparser/k8s/deployment.yaml (a hedged build-and-push sketch is shown below).
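
A hedged sketch of building and pushing such a base image, assuming an ECR repository named php-nginx-base and AWS CLI v1:

# Authenticate Docker to ECR and push the base image (account ID, region, and repository name are placeholders)
$(aws ecr get-login --no-include-email --region ap-south-1)
docker build -t ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com/php-nginx-base:latest .
docker push ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com/php-nginx-base:latest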

kubectl create -f csvparser/k8s/deployment.yaml
kubectl create -f csvparser/k8s/service.yaml

“kubectl get svc” will give the provisioned load balancer endpoint. Create a DNS record for the endpoint, for example, app.demo.powerupcloud.com.

Application Functionality

  • Basic Authentication is enabled for the main page.
  • The browse field will accept the CSV file only.
  • After uploading, the data will be imported into the database by clicking the “Import” button.
  • The processed files can be viewed by clicking on the “View Files” button.
  • “View Data” button will list the records from the database in tabular format.
  • The data record can be edited inline and updated into the database by clicking the “Archive” button.
  • A particular row can be deleted from the database by clicking the “Delete” button.
  • The application runs on two nodes in different subnets and is exposed through a Classic Load Balancer.

CI/CD

  • The Gitlab Instance and Runner are running as pods on the Kubernetes Cluster.
  • The application code is available in the Gitlab Repository along with Dockerfile and .gitlab-ci.yml
  • The pipeline is implemented in Gitlab Console using .gitlab-ci.yml file.
  • Whenever a commit is pushed to the repository, the pipeline is triggered and executes the following stages (a hypothetical sketch of such a .gitlab-ci.yml follows this list):
  • Build: builds a Docker image from the Dockerfile and pushes it to the AWS ECR repo.
  • Deploy: updates the Docker image for the already running application pod on the Kubernetes cluster.
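
For illustration only, a .gitlab-ci.yml of this shape might look like the sketch below; ECR_REPO is assumed to be a CI/CD variable, and the deployment and container names (csvparser) are assumptions based on the repo layout:

stages:
  - build
  - deploy

build:
  stage: build
  script:
    # Build the application image and push it to ECR (ECR_REPO is a CI/CD variable)
    - $(aws ecr get-login --no-include-email --region ap-south-1)
    - docker build -t $ECR_REPO:$CI_COMMIT_SHORT_SHA .
    - docker push $ECR_REPO:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    # Point the running deployment at the freshly pushed image
    - kubectl set image deployment/csvparser csvparser=$ECR_REPO:$CI_COMMIT_SHORT_SHA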

Application in Action

Hit the Gitlab Service:

Sign in with the credentials.

Create a new project and push the code.

The pipelines for each commit can be seen in the Gitlab console.

The Application

View Data:

View Processed Files:

Editable table:

“Archive” will update the database.

“Delete” will remove the row from the database.

Note: We don't recommend using this application code in any real scenario; it was written purely for our testing and does not follow best practices. This article showcases the provisioning of a Kubernetes cluster using KOPS with best practices, and the deployment of a PHP application on the cluster using Gitlab pipelines.

Hope you found it useful. Keep following our blogs for more interesting articles on Kubernetes. Do visit the previous parts of this series.
