Customer support enablement with AWS Connect

Customer: A multinational home appliance manufacturer

 

The Problem:

The existing system, which provided information across categories such as service schedules and inquiries, spare-part status, service locations for maintenance, and product information, had several legacy issues:

  • No call recording facility from Avaya
  • No historical data or report generation; agents were manually generating reports daily and then aggregating them in Excel for the weekly report
  • Public holiday announcements and operational-hours changes (e.g., during Ramzan the center closes early) involved making a manual recording and deploying it on the server
  • Scalability issues: a limit of 12 calls in the queue based on what the existing systems supported, 8 inbound and 4 outbound, constraining concurrent inbound calls
  • The average speed of answering calls was 35 seconds

The approach:

The client wanted to run a pilot project using Amazon Connect, moving from their current voice system hosted in the Mumbai region to Amazon services, to achieve the following functionalities:

  1. Ability to take voice calls
  2. On-call connect, an option to choose a language (English/Bahasa)
  3. Call routing based on the language proficiency of the agent
  4. Ability to record calls
  5. Ability to help supervision of calls
  6. Ability to transfer/conference calls
  7. Scalable environment
  8. The ability to generate records in real-time

Solution flow & design:

 

The steps:

  1. The customer calls the service center number.
  2. The call is routed to Amazon Connect through Twilio or an equivalent provider.
  3. As per the routing profile, Amazon Connect directs the call to the agent.
  4. The agent gets a notification of the incoming call in the Instaedge CRM; if the caller's mobile number matches a record in the customer database, the customer information is displayed in Instaedge.
  5. The agent has to log into the Connect panel separately with credentials.

Making the Connected Car ‘Real-time Data Processing’ Dream a Reality

Written by Jeremiah Peter, Solution specialist-Advanced Services Group, Contributor: Ravi Bharati, Tech Lead and Ajay Muralidhar,  Sr. Manager-Project Management at Powerupcloud Technologies

Connected car landscape

Imagine driving your car on a busy dirt road in the monsoon, dodging unscrupulous bikers, jaywalking pedestrians and menacing potholes. Suddenly, a fellow driver makes wild gestures to inform you that the rear door is unlocked, averting an imminent disaster.

In a connected car system, these events are tracked in near real-time and pushed to the driver's cell phone within seconds. Although the business relevance of real-time car notifications is apparent, the underlying technology and infrastructure are far less visible. This blog attempts to demystify the inner workings of handling data at scale for an Indian automobile behemoth and equips you with a baseline understanding of storing and processing vast troves of data for IoT-enabled vehicles.

The paradigm of shared, electric and connected mobility, which seemed a distant reality a few years ago, is made possible through IoT sensors. Laced with tiny data transmitting devices, vehicles can send valuable information such as Battery Percentage, Distance to Empty (DTE), AC On/Off, Door Locked/Unlocked, etc. to the OEM. The service providers use this information to send near real-time alerts to consumers, weaving an intelligent and connected car experience. Timely analysis and availability of data, thus, becomes the most critical success component in the connected car ecosystem.

Before reaching the OEM’s notification system, data is churned through various phases such as data collection, data transformation, data labeling, and data aggregation. With the goal of making data consumable, manufacturers often struggle to set up a robust data pipeline that can process, orchestrate and analyze information at scale.

The data conundrum

According to the industry consortium 5GAA, the connected vehicle ecosystem can generate up to 100 terabytes of data each day. The interplay of certain key factors in the data transmission process will help you foster a deeper understanding of the mechanics behind IoT-enabled cars. As IoT sensors send data to a TCP/IP server, parsers embedded within the servers push all the time series data to a database. The parsing activity converts machine data (hexadecimal) into a human-readable format (JSON) and subsequently triggers a call to a notification service. The service enables OEMs to send key notifications over the app or through SMS to the end consumer.

Given the scale and frequency of data exchange, the OEM's earlier setup was constrained by the slow TCP/IP data transfer rate (sensor data size: TCP/IP, 360 bytes; MQTT, 440 bytes). The slow transfer rate had far-reaching implications for the user experience, delaying notifications by 6-7 minutes. As part of a solution-driven approach, Powerup experts replaced the existing TCP/IP servers with MQTT servers to improve the data transfer rate. The change effected a significant drop in notification send time, which is presently around 32-40 seconds.

Furthermore, the OEM's infrastructure presented another unique challenge in that only 8 out of 21 services were containerized; the rest of the services ran on plain Azure VMs. To optimize costs, automate scalability and reduce operational overhead, all services are now deployed on Docker containers. Containers provide a comprehensive runtime environment that includes the dependencies, libraries, frameworks and configuration files applications need to run. However, containers require extensive orchestration to aid scalability and optimal resource management. AWS Fargate is leveraged to rid the OEM's infrastructure management team of routine container maintenance chores such as provisioning, patching, and cluster and capacity management.

Moreover, the MQTT and TCP/IP brokers were also containerized and deployed on Fargate to ensure that all IoT sensor data is sent to the AWS environment. Once inside the AWS environment, sensor data is pushed to a Kinesis stream and Lambda to identify critical data and to call the AWS notification service, SNS. However, the AWS solution could not be readily implemented, since the first generation of electric vehicles operated on 2G SIM cards, which did not allow the IP whitelisting configuration to be changed. To overcome the IP whitelisting impediment, we set up an MQTT bridge and configured TCP port forwarding to proxy requests from Azure to AWS. Once the first-generation vehicles are called back, new firmware will be updated over the air, enabling whitelisting of the new AWS IP addresses. This interim approach will help the OEM fully cut over to the AWS environment without downtime or loss of sensor data.

On the database front, the OEM's new infrastructure hinges on the dynamic capabilities of Cassandra and PostgreSQL. Cassandra is used for storing time series data from the IoT sensors. The PostgreSQL database contains customer profile/vehicle data and is mostly used by the payment microservice. Transactional data is stored in PostgreSQL, which is frequently called upon by various services. While PostgreSQL holds a modest volume of 150 MB in total, the Cassandra database is close to 120 GB.

Reaping the benefits

While consumers will deeply benefit from the IoT led service notifications, fleet management operators can also adopt innovative measures to reduce operational inefficiencies and enhance cost savings. Most fleet management services today spend a significant proportion on administrative activities such as maintaining oversight on route optimization, tracking driver and vehicle safety, monitoring fuel utilization, etc. A modern fleet management system empowers operators to automate most of these tasks.

Additionally, preventive maintenance can help operators extend vehicle lifecycles by enabling fleet providers to proactively service vehicles based on vehicular telemetry data such as battery consumption, coolant temperature, tire pressure, engine performance and idling status (vehicle kept idle). For instance, if a truck were about to break down due to engine failure, the fleet operator could raise a ticket and notify the nearest service station before the event occurred, cutting down idle time.

Conclusion

With 7,000 cars in its current fleet, the OEM's infrastructure is well poised to meet a surge of more than 50,000 cars in the near future. Although the connected car and autonomous driving segment is still in its nascent stages of adoption, it will continue to draw heavily upon the OEM's data ingestion capabilities to deliver a seamless experience, especially when the connected car domain transcends from a single-vehicle application to a more inclusive car-to-car communication mode. Buzzwords such as two-way data/telematic exchanges, proximity-based communications and real-time feedback are likely to become part of common parlance in mobility and fleet management solutions.

As the concept of the Intelligent Transport System gathers steam, technology partners will need to look at innovative avenues to handle high volume/velocity of data and build solutions that are future-ready. To know more about how you can transform your organization’s data ingestion capability, you can consult our solution experts here.

Running Kubernetes Workloads on AWS Spot Instances-Part VIII

Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies

So far we have practised a lot on the OnDemand nodes of a K8s cluster. This post demonstrates how to use Spot Instances as K8s worker nodes, covering provisioning, automatic scaling, and handling interruptions (termination) of worker nodes across your cluster. Spot Instances can save you up to 70-90% of cost compared to OnDemand. Though Spot Instances are cheaper, you cannot run all your worker nodes as Spot. You must have some OnDemand Instances as a backup because Spot Instances can betray you anytime with interruptions 😉

In this article, we discuss how you can use Spot Instances on an EKS cluster as well as on a cluster you own on EC2 servers.

Refer to our public Github repo, which contains the files/templates we have used in this implementation. This blog covers Kubernetes operations with AWS EKS as well as with Kops.

Kubernetes Operations with AWS EKS

AWS EKS is a managed service that simplifies the management of Kubernetes servers. It provides a highly available and secure K8s control plane. There are two major components associated with your EKS Cluster:

  • EKS control plane which consists of control plane nodes that run the Kubernetes software, like etcd and the Kubernetes API server.
  • EKS worker nodes that are registered with the control plane.

With EKS, you no longer need to manage the installation, scaling, or administration of master nodes; AWS takes care of the control plane and lets you focus on your worker nodes and applications.

Prerequisites

  • EC2 Server to provision the EKS cluster using AWSCLI commands.
  • The latest version of AWSCLI Installed on your Server
  • IAM Permissions to create the EKS Cluster. Create an IAM Instance profile with the permissions attached and assign to the EC2 Server.
  • EKS Service Role
  • Kubectl installed on the server.

Provision K8s Cluster with EKS

Execute the below command to provision an EKS Cluster:

aws eks create-cluster --name puck8s --role-arn arn:aws:iam::ACCOUNT:role/puc-eks-servicerole --resources-vpc-config subnetIds=subnet-xxxxx,subnet-xxxxx,subnet-xxxxxx,securityGroupIds=sg-xxxxx --region us-east-2

We have given private subnets available in our account to provision a private cluster.

Wait for the cluster to become available.

aws eks describe-cluster --name puck8s --query cluster.status --region us-east-2

Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM Authenticator for Kubernetes (link in the References section below). Install it using the below commands:

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
cp ./aws-iam-authenticator /usr/bin/aws-iam-authenticator

Update ~/.kube/config file which will be used by kubectl to access the cluster.

aws eks update-kubeconfig --name puck8s --region us-east-2

Execute "kubectl get svc" to verify that kubectl can reach the cluster.

Launch Spot and OnDemand Worker Nodes

We have provisioned the EKS worker nodes using a CloudFormation template provided by AWS. The template is available in our Github repo as well, i.e. provision-eks-worker-nodes/amazon-eks-nodegroup-with-spot.yaml. The template will provision three Autoscaling Groups:

  • 2 ASG with Spot Instances with two different Instance types as given in the parameters
  • 1 ASG with OnDemand Instance with Instance type as given in the parameter

Create a CloudFormation stack and provide the values in the parameters. For the AMI parameter, enter the ID from the below table:

| Region                  |      AMI               | 
|-------------------------| ---------------------- |
| US East(Ohio)(us-east-2)| ami-0958a76db2d150238 |

Launch the stack and wait for the stack creation to complete. Note down the NodeInstanceRole ARN from the Outputs.

Now get the config map from our repo.

https://github.com/powerupcloud/kubernetes-spot-webinar/blob/master/provision-eks-worker-nodes/aws-cm-auth.yaml

Open the file "aws-cm-auth.yaml" and replace the <ARN of instance role (not instance profile)> placeholder with the NodeInstanceRole value that you recorded in the previous step, then save the file.
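For reference, the standard EKS aws-auth ConfigMap has the following shape; the file in the repo should look similar once the role ARN is filled in:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes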

kubectl apply -f aws-cm-auth.yaml
kubectl get nodes --watch

Wait for the nodes to be ready.

Kubernetes Operations with KOPS

Kops is an official Kubernetes project for managing production-grade Kubernetes clusters. It has commands for provisioning multi-node clusters, updating their settings including nodes and masters, and applying infrastructure changes to an existing cluster. Currently, kops is arguably the best tool for managing a k8s cluster on AWS.

Note: You can use kops in the AWS regions which AWS EKS doesn't support.

Prerequisites:

  • Ec2 Server to provision the cluster using CLI commands.
  • Route53 domain, (for example, k8sdemo.powerupcloud.com) in the same account from where you are provisioning the cluster. Kops uses DNS for identifying the cluster. It adds the records for APIs in your Route53 Hosted Zone.
Note: For public hosted zone, you will have to add the NS records for the above domain to your actual DNS. For example, we have added an NS record for "k8sdemo.powerupcloud.com" to "powerupcloud.com". This will be used for the DNS resolution. For the private hosted zone, ensure to add the VPCs.
  • IAM Permissions to create the cluster resources and update DNS records in Route53. Create an IAM Instance profile with the permissions attached and assign to the EC2 Server.
  • S3 bucket for the state store.
  • Kubectl installed.

Install Kops

Log into the EC2 server and execute the below command to install Kops on the Server:

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops

Provision K8s Cluster

kops create cluster k8sdemo.powerupcloud.com --ssh-public-key ~/.ssh/id_rsa.pub --master-zones ap-south-1a --zones ap-south-1a,ap-south-1b,ap-south-1a --master-size=t2.medium --node-count=1 --master-count 1 --node-size t2.medium --topology private --dns public --networking calico --vpc vpc-xxxx --state s3://k8sdemo-kops-state-store --subnets subnet-xxxx,subnet-xxxx --utility-subnets subnet-xxxx,subnet-xxxx --kubernetes-version 1.11.4 --admin-access xx.xxx.xxxx.xx/32 --ssh-access xx.xxx.xxx.xx/32 --cloud-labels "Environment=DEMO"

Refer to our previous blog for the explanation of the arguments in the above command.

kops update cluster --yes

Once the above command is successful, we will have a private K8s Cluster ready with Master and Nodes in the private subnets.

Use the command “kops validate cluster CLUSTER_NAME” to validate the nodes in your k8s cluster.

Create Instance Groups for Spot and OnDemand Instances

A kops Instance Group groups similar instances and maps to an Autoscaling Group in AWS. We can use the "kops edit" command to edit the configuration of the nodes in an editor. The "kops update" command applies the changes to the existing nodes.

Once we have provisioned the cluster, we will have two Instance groups i.e. One for master and One for Nodes. Execute the below command to get available Instance Groups:

kops get ig

Edit the nodes instance group to provision Spot workers, adding the key-values below. Set the maxPrice property to your bid; for example, "0.10" represents a Spot price bid of $0.10 (10 cents) per hour.

spec:
  ...
  maxPrice: "1.05"
  nodeLabels:
    lifecycle: Ec2Spot
    node-role.kubernetes.io/spot-worker: "true"

The final configuration will look similar to the sketch below:
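A representative kops InstanceGroup manifest for the Spot nodes; the cluster name, image, instance type, sizes and subnets are illustrative and should match your environment:

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: k8sdemo.powerupcloud.com
  name: nodes
spec:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: t2.medium
  maxPrice: "1.05"
  maxSize: 2
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
    lifecycle: Ec2Spot
    node-role.kubernetes.io/spot-worker: "true"
  role: Node
  subnets:
  - ap-south-1a
  - ap-south-1b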

Create one more Spot Instance Group for a different instance type.

kops create ig nodes2 --subnet ap-south-1a,ap-south-1b --role Node
kops edit ig nodes2

Add maxPrice and node labels as before; the final configuration will be similar to the nodes instance group above, with a different instance type.

Now, we have configured two spot worker node groups for our cluster. Create an instance group for OnDemand Worker Nodes by executing the below command:

kops create ig ondemand-nodes --subnet ap-south-1a,ap-south-1b --role Node

kops edit ig ondemand-nodes

Add node labels for the OnDemand workers.

We have also added taints to keep new pods away from the OnDemand worker nodes so that, preferably, they are scheduled on the Spot workers. A sketch of the relevant spec is shown below.
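A minimal excerpt of the OnDemand instance group spec; the taint key/value is illustrative and the actual key used in the repo may differ:

spec:
  ...
  nodeLabels:
    lifecycle: OnDemand
  taints:
  - ondemand=true:PreferNoSchedule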

To apply the above configurations, execute the below command:

kops update cluster
kops update cluster --yes
kops rolling-update cluster --yes

Cluster Autoscaler

Cluster Autoscaler is an open-source tool which automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • there are pods that failed to run in the cluster due to insufficient resources
  • there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.

CA will run as a daemonset on the Cluster OnDemand Nodes. The YAML file for daemonset is provided in our Github Repo i.e. https://github.com/powerupcloud/kubernetes-spot-webinar/tree/master/cluster-autoscaler.

Update the following in cluster-autoscaler/cluster-autoscaler-ds.yaml (an illustrative excerpt follows this list):

  • the Autoscaling Group names of the OnDemand and Spot groups
  • the minimum instance count of each Autoscaling group
  • the maximum instance count of each Autoscaling group
  • the AWS region
  • the node selector, which ensures the CA pods always run on the OnDemand nodes
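A hedged sketch of the parts that typically change; the image tag, ASG names, counts and region below are placeholders, and the exact layout of the file in the repo may differ:

spec:
  template:
    spec:
      nodeSelector:
        lifecycle: OnDemand
      containers:
      - name: cluster-autoscaler
        image: k8s.gcr.io/cluster-autoscaler:v1.2.2
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --nodes=1:5:spot-nodes-asg-1
        - --nodes=1:5:spot-nodes-asg-2
        - --nodes=1:3:ondemand-nodes-asg
        env:
        - name: AWS_REGION
          value: us-east-2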

Create the Cluster Autoscaler on both of the k8s clusters, EKS as well as the one provisioned using kops. Ensure the below permissions are attached to the IAM Role assigned to the cluster worker nodes:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}

Prepare the daemonset YAML accordingly for the EKS cluster; the file for the kops-provisioned cluster is prepared the same way, with its own ASG names and region.


Create the DaemonSet.

kubectl create -f cluster-autoscaler/cluster-autoscaler-ds.yaml

Now create a PodDisruptionBudget for the CA, which ensures that at least one cluster autoscaler pod is always running.

kubectl create -f cluster-autoscaler/cluster-autoscaler-pdb.yaml

Verify the Cluster autoscaler pod logs in kube-system namespace:

kubectl get pods -n kube-system
kubectl logs -f pod/cluster-autoscaler-xxx-xxxx -n kube-system

Spot Termination Handler

The major drawbacks of a Spot Instance are:

  • it may take a long time to become available (or may never become available), and
  • it may be reclaimed by AWS at any time.

Amazon EC2 can interrupt your Spot Instance when the Spot price exceeds your maximum price, when the demand for Spot Instances rises, or when the supply of Spot Instances decreases. Whenever you are opting for Spot, you should always be prepared for the interruptions.

So, we are creating an interrupt handler on the clusters, which will run as a daemonset on the Spot worker nodes. The workflow of the Spot Interrupt Handler can be summarized as:

  • Identify that a Spot Instance is being reclaimed.
  • Use the 2-minute notification window to gracefully prepare the node for termination.
  • Taint the node and cordon it off to prevent new pods from being placed.
  • Drain connections on the running pods.
  • To maintain desired capacity, replace the pods on remaining nodes

Create the Spot Interrupt Handler DaemonSet on both the k8s clusters using the below command:

kubectl apply -f spot-termination-handler/deploy-k8-pod/spot-interrupt-handler.yaml
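For reference, the handler's DaemonSet pins its pods to the Spot nodes via the lifecycle label applied to the Spot instance groups earlier; a rough sketch (the image placeholder stands for whatever the repo's manifest uses):

spec:
  template:
    spec:
      nodeSelector:
        lifecycle: Ec2Spot
      containers:
      - name: spot-interrupt-handler
        image: <spot-interrupt-handler-image>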

Deploy Microservices with Istio

We have taken the BookInfo sample application, which uses Istio, to deploy on our cluster.

Istio is an open platform to connect, manage, and secure microservices. For more info, see the link in the References section below. To deploy Istio on the k8s cluster, follow the steps below:

wget https://github.com/istio/istio/releases/download/1.0.4/istio-1.0.4-linux.tar.gz
tar -xvzf istio-1.0.4-linux.tar.gz
cd istio-1.0.4

In our case, we have provisioned the worker nodes in private subnets. For Istio to provision a publicly accessible load balancer, tag the public subnets in your VPC with the below tag:

kubernetes.io/cluster/puck8s:shared

Install helm from the link below:

https://github.com/helm/helm

kubectl create -f install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller --wait
helm install --wait --name istio --namespace istio-system install/kubernetes/helm/istio --set global.configValidation=false --set sidecarInjectorWebhook.enabled=false
kubectl get svc -n istio-system

You will get the LoadBalancer endpoint.

Create a gateway for the Bookinfo sample application.

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

The BookInfo sample application source code, Dockerfile, and Kubernetes deployment YAML files are available in the sample-app directory in our Github repo.

Build docker images from the provided Dockerfiles and update the IMAGE variable in k8s/deployment.yaml for all four services. Deploy each service using:

kubectl apply -f k8s

Hit http://LB_Endpoint/productpage and you will get the frontend of your application.

AutoScaling when the Application load is High

If the number of pods increases with the application load, the cluster autoscaler will provision more worker nodes in the Autoscaling Group. If the Spot Instance is not available, it will opt for OnDemand Instances.

Initial Settings in the ASG:

Scale up the number of pods for one deployment, for example, product page. Execute:

kubectl scale --replicas=200 deployment/productpage-v1

Watch the Cluster Autoscaler manage the ASG.

Similarly, if the application load is less, CA will manage the size of the ASG.

Note: We don't recommend running stateful applications on Spot nodes. Use OnDemand nodes for your stateful services.

and that’s all..!! Hope you found it useful. Happy Savings..!!

References:

AWS bulk Tagging tool -Part II: Graffiti Monkey

In our last blog post, we explained how to tag EC2, RDS, and S3 resources in bulk using aws-tagger.

Since that tool doesn't support bulk volume/snapshot tagging, we configured another tool to complete the tagging.

In this blog post, we are going to explain volume and snapshot tagging with an amazing tool called Graffiti Monkey.

The Graffiti Monkey goes around tagging things. By looking at the tags an EC2 instance has, it copies those tags to the EBS Volumes that are attached to it, and then copies those tags to the EBS Snapshots.

Setup:

  1. Install graffiti-monkey on the EC2 machine.
  2. Create an IAM user with an access key and secret key to provide permission to graffiti-monkey.
  3. Create the config file (YAML) with all the tag details that need to be copied.

Let’s start the hands-on:

Login to your EC2 Linux machine:

i) First, install pip on the machine:

yum install python-pip

ii) Second, install graffiti-monkey:

pip install graffiti_monkey

1. Config file (YAML):

  • Create the config file (YAML) on the AWS EC2 machine (a sample is below).
  • We are going to use the same YAML file here for all the accounts, as the tags are common for all of them.
  • If you add new tags in EC2, then you need to add the new tags to this YAML file as well, as per the requirement.

tagging.yaml:

---
region: eu-west-1
instance_tags_to_propagate:
  - 'Business Unit'
  - 'Project'
  - 'Customer'
  - 'Environment'
  - 'Product'
  - 'Version'
  - 'Requestor'
  - 'Revenue_Type'
  - 'Business_Model'
  - 'Service'
volume_tags_to_propagate:
  - 'Business Unit'
  - 'Project'
  - 'Customer'
  - 'Environment'
  - 'Product'
  - 'Version'
  - 'Requestor'
  - 'Revenue_Type'
  - 'Business_Model'
  - 'Service'
  - 'Name'
  - 'instance_id'
  - 'device'

2. AWS Credentials:

  • We will create an access key and secret key for this IAM user to give our EC2 machine permission to tag the resources in the account.
  • We need to attach the below permissions to the IAM user in the respective accounts.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:Describe*",
        "ec2:CreateTags"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
  • Alternatively, we can attach an IAM role directly to the EC2 machine for these permissions.

Graffiti_monkey command:

  • Now run the below command from the EC2 machine CLI:
graffiti-monkey --region us-east-1 --config tagging.yaml

The output of the command, and the resulting tags on volumes and snapshots, can then be verified from the AWS console.

I hope this is helpful, please comment below in case of any implementation issues.

References: https://github.com/Answers4AWS/graffiti-monkey

AWS bulk Tagging tool -Part I: aws-tagger

Written by Mudita Misra, Cloud Engineer, Powerupcloud Technologies

Why and how is aws-tagger useful for us?

Use case: what if we have a large number of untagged AWS resources and need tag-based billing within a day or two; how do we do it?

In this article, we explain how to tag AWS resources in bulk in just a few minutes.

Scenario:

  1. One of our customers had multiple accounts with a large number of resources (EC2, RDS and S3) that had to be tagged with 8-9 business tags for billing and segregation purposes. So we explored and implemented aws-tagger to make the tagging easier.
  2. Tagging AWS resources is hard because each resource type has a slightly different API. The AWS bulk tagging tool eliminates these differences so that you can simply specify the resource ID and the tags, and it takes care of the rest.

Note: Any tags that already exist on the resource will not be removed, but the values will be updated if the tag key already exists. Tags are case sensitive.

Setup:

  1. Install aws-tagger on the local/EC2 machine.
  2. Create an IAM user with an access key and secret key to give aws-tagger permission to apply the tags on the resources.
  3. Create the CSV file with all the tag details.

Let’s start the hands-on:

  1. We can do this from our local machine, or from an AWS EC2 Linux/Windows machine in the customer's private network (if preferred).

i) First, install pip on the machine:

yum install python-pip

ii) Second, install aws-tagger:

pip install aws-tagger

AWS Credentials:

  1. We will create an access key and secret key for this IAM user to give our EC2/local machine permission to tag the resources in the account.
  2. We need to attach the below permissions to the IAM user in the respective accounts.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:Describe*",
        "ec2:CreateTags",
        "rds:Describe*",
        "rds:AddTagsToResource",
        "s3:Describe*",
        "s3:PutBucketTagging",
        "s3:GetBucketTagging"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Then configure these credentials on the machine:

aws configure

How many ways are there to tag with aws-tagger?

There are a few ways, depending on the requirement:

1. Tag individual resource with a single tag

aws-tagger --resource i-07axxxxxxx --tag "Business:Production"

2. Tag multiple resources with multiple tags

aws-tagger --resource i-07axxxxxxxx --resource i-045xxxxxx --tag "Business:Production" --tag "User:Mudita"

3. Tag multiple resources from a CSV file (for bulk resources)

We need to create a CSV file containing the resource ID, the region, and the tag keys and values to be attached to the respective resources.

Note: Make sure no key or value is empty/blank; if you are not sure about a value, put 'NA' or '-'.

i) We can create the sheet in Google Sheets and later save it as a CSV file to use for tagging.

ii) Download/copy the CSV file to the local/AWS EC2 machine.

AWS TAGGER:

Now run the below command for CSV file:

aws-tagger --csv tagger-ec2-details-mudita\ -\ aws-tagger.csv

If the command simply returns to the next line, there is no error and the resources have been tagged. We can now verify the tags from the AWS console.

We have implemented aws-tagger on the following AWS resource types:

1. EC2 instances

aws-tagger --resource i-07XXXXXXX --tag "Business:Production" --tag "User:Mudita"

2. S3 buckets

aws-tagger --resource mudita-powerup-bucket --tag "Business:Production" --tag "User:Raju"

3. RDS instances

aws-tagger --resource arn:aws:rds:us-east-1:1111XXXX:db:mudita-db --tag "Business:Production" --tag "User:Mudita"

I hope this is helpful, please comment below in case of any implementation issues.

Any EC2 volumes attached to an instance will be tagged automatically, but for bulk volumes and snapshots we don't recommend aws-tagger. We will be coming up with a new method for tagging volumes and snapshots in the next part.

Keep following the blog post for the upcoming part on how to tag Volumes and Snapshots attached to EC2 instances.

For more details, you can follow the Github link below:

Reference: https://github.com/washingtonpost/aws-tagger

Automated Deployment of PHP Application using Gitlab CI on Kubernetes — Part VII

Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies

Recently, we got an opportunity to develop and deploy an application on a Kubernetes cluster running on AWS Cloud. We developed a sample PHP application which parses a CSV file and uploads the content of the file into a MySQL RDS instance. The application UI also supports other functionalities like updating/deleting a particular row from the database, storing and viewing processed files via an AWS S3 bucket, and viewing all the records of the MySQL database. The Kubernetes cluster is provisioned using the KOPS tool.

Prerequisites

  • Route53 hosted zone (required for KOPS)
  • One S3 bucket (required for KOPS to store state information)
  • One S3 bucket (required to store the processed CSV files, for example, pucdemo-processed-.csv)
  • One S3 bucket to store application access logs of the load balancer
  • MySQL RDS in a private subnet; port 3306 is opened to the Kubernetes cluster nodes
  • A table to store the data from the .CSV file in supported variables. In our case, we used the following commands to create a table in the database:
create database csvdb;
CREATE TABLE puc_csv(
sku INT,
name VARCHAR(200),
price DOUBLE
);

​​Setup

  • Cloud: Amazon Web Services
  • Scripting languages used: HTML, Javascript and PHP
  • Kubernetes version: 1.11
  • K8s cluster instance type: t2.medium
  • Instances are launched in private subnets
  • 3 masters and 2 nodes (autoscaling configured)
  • K8s master / worker nodes are in Autoscaling groups for HA / scalability / fault tolerance
  • S3 buckets to store data (details in Prerequisites)
  • Route53 is used for DNS management
  • RDS - MySQL 5.7 (Multi-AZ enabled)

​​Provision Kubernetes Cluster on AWS

kops create cluster pucdemo.powerupcloud.com --ssh-public-key ~/.ssh/id_rsa.pub --master-zones ap-south-1a --zones ap-south-1a,ap-south-1b,ap-south-1a --master-size=t2.medium --node-count=2 --master-count 3 --node-size t2.small --topology private --dns public --networking calico --vpc vpc-xxxx --state s3://pucdemo-kops-state-store --subnets subnet-xxxx,subnet-xxxx --utility-subnets subnet-xxx,subnet-xxx --kubernetes-version 1.11.0 --api-loadbalancer-type internal --admin-access 172.31.0.0/16 --ssh-access 172.31.xx.xxx/32 --cloud-labels "Environment=TEST" --master-volume-size 100 --node-volume-size 100 --encrypt-etcd-storage;

​​where,

  • We have provided our public key in the --ssh-public-key argument. The corresponding private key will be used for SSH access to your master and nodes.
  • Private subnets are provided in "--subnets": these will be used by the Kubernetes API (internal).
  • Public subnets are provided in "--utility-subnets": these will be used by Kubernetes services (external).
  • "--admin-access" takes the IP CIDR for which the Kubernetes API port will be allowed.
  • "--ssh-access" takes the IP from which you will be able to SSH into the master nodes of the Kubernetes cluster.
  • pucdemo.powerupcloud.com is the hosted zone created in Route 53. KOPS will create the API-related DNS records within it.

​​Attach ECR Full access policy to cluster nodes Instance Profile.

​​Create Required Kubernetes Resources

Clone the below Github repo:

https://github.com/powerupcloud/k8s-data-from-csvfile-to-database.git

Create Gitlab Instance:

Replace the values for the following variables in kubernetes-gitlab/gitlab-deployment.yml (a representative snippet follows this list):

  • GITLAB_ROOT_EMAIL
  • GITLAB_ROOT_PASSWORD
  • GITLAB_HOST
  • GITLAB_SSH_HOST
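In that Deployment manifest these appear as container environment variables; a hedged excerpt with placeholder values (the actual manifest in the repo may differ in structure):

        env:
        - name: GITLAB_ROOT_EMAIL
          value: "admin@example.com"
        - name: GITLAB_ROOT_PASSWORD
          value: "ChangeMe123"
        - name: GITLAB_HOST
          value: "git.demo.powerupcloud.com"
        - name: GITLAB_SSH_HOST
          value: "ssh.git.demo.powerupcloud.com"

Then create the Gitlab resources: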
kubectl create -f kubernetes-gitlab/gitlab-ns.yml
kubectl create -f kubernetes-gitlab/postgresql-deployment.yml
kubectl create -f kubernetes-gitlab/postgresql-svc.yml
kubectl create -f kubernetes-gitlab/redis-deployment.yml
kubectl create -f kubernetes-gitlab/redis-svc.yml
kubectl create -f kubernetes-gitlab/gitlab-deployment.yml
kubectl create -f kubernetes-gitlab/gitlab-svc.yml

"kubectl get svc -n gitlab" will give the provisioned load balancer endpoint. Create a DNS record for the endpoint, for example, git.demo.powerupcloud.com.

Create Gitlab Runner:

Replace the values for the following variables in gitlab-runners/configmap.yml:

  • Gitlab URL
  • Registration Token

Go to the Gitlab Runners section in the Gitlab console to get the above values.

kubectl create -f gitlab-runners/rbac.yaml
kubectl create -f gitlab-runners/configmap.yaml
kubectl create -f gitlab-runners/deployment.yaml

Create CSVParser Application:

Create a base Docker image with Nginx and php7.0 installed on it and push it to ECR. Provide the base image in csvparser/k8s/deployment.yaml.

kubectl create -f csvparser/k8s/deployment.yaml
kubectl create -f csvparser/k8s/service.yaml

"kubectl get svc" will give the provisioned load balancer endpoint. Create a DNS record for the endpoint, for example, app.demo.powerupcloud.com.

Application Functionality

  • Basic Authentication is enabled for the main page.
  • The browse field will accept the CSV file only.
  • After uploading, the data will be imported into the database by clicking the “Import” button.
  • The processed files can be viewed by clicking on the “View Files” button.
  • “View Data” button will list the records from the database in tabular format.
  • The data record can be edited inline and updated into the database by clicking the “Archive” button.
  • A particular row can be deleted from the database by clicking the “Delete” button.
  • The application is running on two different nodes in different subnets and is being deployed under a Classic LoadBalancer.

CI/CD

  • The Gitlab Instance and Runner are running as pods on the Kubernetes Cluster.
  • The application code is available in the Gitlab Repository along with Dockerfile and .gitlab-ci.yml
  • The pipeline is implemented in Gitlab Console using .gitlab-ci.yml file.
  • Whenever a commit is pushed to the repository, the pipeline is triggered, which executes the following steps (a minimal sketch of such a .gitlab-ci.yml follows this list):
  • Build: builds a docker image from the Dockerfile and pushes it to the AWS ECR repo.
  • Deploy: updates the docker image for the already running application pod on the Kubernetes cluster.
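A minimal .gitlab-ci.yml matching this two-stage pipeline might look like the following; the $ECR_REPO variable, region, and deployment/container names are placeholders, and the actual file in the repository may differ:

stages:
  - build
  - deploy

build:
  stage: build
  script:
    # Build the application image and push it to the ECR repository
    - docker build -t $ECR_REPO:$CI_COMMIT_SHORT_SHA .
    - $(aws ecr get-login --no-include-email --region ap-south-1)
    - docker push $ECR_REPO:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    # Roll the running deployment to the newly built image
    - kubectl set image deployment/csvparser csvparser=$ECR_REPO:$CI_COMMIT_SHORT_SHA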

Application in Action

Hit the Gitlab Service:

Sign in with the credentials.

Create a new Project and push the code. It will look like:

The Pipelines will look like:



The Application

View Data:

View Processed Files:

Editable table:

“Archive” will update the database.

Delete will delete the row from the database.

Note: We don't recommend using this application code in any real scenario; it is only for our testing purposes and is not written using best practices. This article showcases provisioning a Kubernetes cluster using KOPS with best practices and deploying a PHP application on the cluster using Gitlab pipelines.

Hope you found it useful. Keep following our blogs for more interesting articles on Kubernetes. Do visit the previous parts of this series.

References

Kubernetes Assigning a Specific Pod to a particular Cluster Node — Part VI

Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies

In this article, we discuss how to deploy a specific microservice on a particular node. As a solution, we use the taints and tolerations feature of Kubernetes. Tolerations are applied to pods and allow the pods to schedule onto nodes with matching taints.

Setup:

  • Kubernetes v1.8+
  • All cluster nodes reside in public subnets.
  • Cluster Autoscaler is configured for the cluster nodes.
  • Prometheus is used for monitoring.

Requirement:

  • One specific Microservice needs to run on a Private Node.

Workflow:

  • Provision a private node and add it to the running Kubernetes Cluster.
  • Taint the new Private Node with tolerations.
  • Deploy the MicroService
  • Attach the new node to the existing nodes autoscaling group
  • Fix Prometheus DaemonSetsMissScheduled Alert

Provision Private Node

Since all the available nodes are in public subnets, we first need to provision a private node and add it to the running Kubernetes cluster. Note the AMI used by the running cluster nodes and launch an EC2 server from that AMI, selecting an existing private subnet and the existing nodes' IAM role.

Copy the userdata script from the existing Nodes launch configuration. The script is required to join the Node to the Kubernetes Cluster as soon as it is provisioned.

Paste it in the user data in the Advanced Details section.

Add the same tags as on an existing node. Ensure the "KubernetesCluster" tag is added.

Launch it. Once the server is provisioned, login to the server and check the Syslog. Ensure the docker containers are running.

docker ps

Now execute the below command on the server from where you will be able to access the Kubernetes API.

kubectl get nodes

It should list the new private node. The node is now added to the existing Kubernetes Cluster.

Taint the Private Node

Taint the private node by executing the below command:

kubectl taint nodes ip-xx.xx.xx.xxx.REGION.compute.internal private=true:NoSchedule

where the key=value is private=true and the effect is NoSchedule. This means that no pod will be able to schedule onto the specified node unless it has a matching toleration. The key-value pair can be modified here.

If you want to list the available tainted nodes, you can list it via a template:

tolerations.tmpl

{{printf "%-50s %-12s\n" "Node" "Taint"}}
{{- range .items}}
{{- if $taint := (index .spec "taints") }}
{{- .metadata.name }}{{ "\t" }}
{{- range $taint }}
{{- .key }}={{ .value }}:{{ .effect }}{{ "\t" }}
{{- end }}
{{- "\n" }}
{{- end}}
{{- end}}

Execute:

kubectl get nodes -o go-template-file="tolerations.tmpl"

Label the Node

Apply a label to the private node by executing the below command:

kubectl label nodes <Node> <key>=<value>

Example, kubectl label nodes ip-xx.xx.xx.xxx.REGION.compute.internal private=true

Deploy the MicroService with Tolerations

Update the deployment.yaml to include tolerations matching the taint specified above, and the node label in the nodeSelector:

tolerations:
- key: "private"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
nodeSelector:
  private: "true"

Deploy it.

kubectl apply -f deployment.yaml

Execute “kubectl get pod/podname -o wide”. Check the Node to which it is assigned.

Attaching the Private Node to the existing AutoScaling Group

Enable termination protection on the private node and temporarily suspend the Terminate process in the nodes autoscaling group.

Attach the new node to the nodes autoscaling group. Go to the Autoscaling Group, select the private node and set instance protection ("Set Scale In Protection").

Since we have the Cluster Autoscaler configured for the cluster nodes, the new private node could get terminated by the autoscaler (due to less load compared to the other nodes). Therefore, it's safer to set instance protection on the private node.

Now remove the Terminate process from the Suspended Processes. Do have a look at the Cluster Autoscaler logs; the private node will be skipped by the autoscaler.

Fix Prometheus DaemonSetsMissScheduled Alert

After setting up the private node completely, we started getting DaemonSetsMissScheduled alerts for the calico-node DaemonSet from Prometheus. We have debugged and followed the below steps to fix it.

Problem: We had a total of 8 nodes in our cluster (including masters, nodes in public subnets and a node in private subnet) but the “desiredNumberScheduled” in DaemonSet was showing 7 (excluding the private node).

Solution: Since we have a tainted private node, the daemonset must tolerate that taint. To fix the above problem, we added a toleration matching the private node's taint to the calico-node DaemonSet.

Execute:

kubectl edit ds/calico-node -n kube-system

Check the value of “desiredNumberScheduled”. It was one less than the total number of nodes. You can get the number of nodes by the command: “kubectl get nodes”.

Next, add the same toleration you applied to the private node in the second step above (Taint the Private Node); it goes under the pod template spec of the DaemonSet, alongside any existing tolerations, as sketched below.
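A minimal sketch of where the toleration sits in the calico-node DaemonSet (existing tolerations in the manifest are kept as-is):

spec:
  template:
    spec:
      tolerations:
      - key: "private"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"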

Now execute:

kubectl describe ds/calico-node -n kube-system

Check the “Desired Number of Nodes Scheduled:”. It should be equal to your number of nodes currently available.

Look at the status of calico-node pods too:

kubectl get pods -n kube-system -o wide -l k8s-app=calico-node

and that’s how we were able to assign a specific pod to a particular node without any misconfigurations. Hope you found it useful. Keep following our blogs for the further parts on Kubernetes. Do visit the previous parts of this series.

References:

Case Study: A dual migration across AWS & Azure.

Written by Arun Kumar, Sr. Cloud Engineer and Ayush Ragesh, Cloud Solutions Architect, Powerupcloud Technologies

Problem Statement

The client is a global pioneer and leader providing end-to-end gifting solutions and manages 260Mn+ transactions a year. With over 15,000 points of sale across 16 countries, they needed to migrate their platform from an on-premise datacentre to the cloud. In addition, they needed to be ready and scalable for one of the largest e-commerce sales in the country.

Powerup’s approach:

In consultation with the client, it was decided to host their primary DC on AWS and DR on Azure.

Architecture Diagram

Architecture Description

  1. The applications were spread across multiple VPCs which are connected to each other via VPC peering; different VPCs for UAT, Management, Production, etc.
  2. VPN tunnels are deployed from the client’s Bangalore location to AWS and Azure environment.
  3. Multiple Load Balancers are used to distribute traffic between different applications.
  4. NAT Gateways are used for Internet access to private servers.
  5. Cisco Firepower/Palo Alto as the firewall.
  6. CloudFormation for automated deployments on AWS.
  7. Cloudtrail for logging and KMS for encryption of EBS volumes and S3 data. Config for change management.
  8. Route53 is used as the DNS service.
  9. Guard Duty and Inspector will be configured for additional security.
  10. DR site will be deployed on Azure.

Outcomes

* Powerupcloud was able to successfully migrate the core for their largest client on AWS.

* The client was able to achieve the required scalability, flexibility and performance.

* The e-commerce sale day was a big success with zero downtime.

Lessons Learned

The customer initially wanted to use a Cisco Firepower firewall for IDS/IPS, DDoS protection, etc. SSL offloading needed to be done on the application server, so we decided to use Network Load Balancers. Instance-based routing was used so that the source IP addresses are available at the application server. The firewall needed 3 Ethernet cards for 'trust', 'untrust' and 'management'.

In Cisco, by default, eth0 is mapped to management and this cannot be changed. With instance-based routing, the request always goes to eth0, while the request should go to 'untrust'.

So we finally had to use a Palo Alto firewall, where we can remap eth0 to 'untrust'.

API GATEWAY (Part II): AWS API GATEWAY with Private Integration

Written by Mudita Misra, Cloud Engineer, Powerupcloud Technologies

Here comes Part II in the blog series on AWS API Gateway.

Now, we can implement AWS API Gateway with privately hosted APIs, if we are planning to share the APIs with third-party tools/applications.

  1. We can create an API Gateway API with private integration to provide the customers access to HTTP/HTTPS resources within Amazon VPC.
  2. Such VPC resources are HTTP/HTTPS endpoints on an EC2 instance behind a network load balancer in the VPC.
  3. When a client calls the API, API Gateway connects to the network load balancer through the pre-configured VPC Link, forwards API method requests to the VPC resources, and returns backend responses to the caller.
  4. For an API developer, a VpcLink is functionally equivalent to an integration endpoint.
  5. To create an API with private integration, we must create a new or choose an existing VPC Link connected to a network load balancer that targets the desired VPC resources. We must have appropriate permissions to create and manage a VPC Link.
  6. Now we can set up an API method and integrate it with the VpcLink by setting either HTTP or HTTP_PROXY as the integration type, setting VPC_LINK as the integration connection type, and setting the VPC Link identifier on the integration connectionId.

Let’s start the implementation:

Network load balancer:

  1. Create/choose a VPC in the AWS account with a private subnet (per the application requirement).
  2. Create an EC2 server and deploy a sample application; we have opted for nginx as the sample.
  3. Let us create the network load balancer for the application: click on Load Balancers in the left pane.

4. Click on Create in the Network Load Balancer section.

5. Give it a name and choose "internal" for the scheme in the load balancer configuration, as the load balancer should be internal for the VPC Link. Choose the VPC and subnets respectively.

6. Next, create a target group with a name, and choose the protocol and port according to the application. Click Next.

7. Click Next, attach the server (created above) to the target group on the port on which the application is running, click Create, and wait for the instance to move from the initial state to healthy.

API GATEWAY:

Create the API Gateway and specify a name and description. You can follow the link below from our Part I blog for creating the API Gateway:

https://blog.powerupcloud.com/api-gateway-part-i-aws-api-gateway-monitoring-and-authentication-36617ea47f57

Once you have finished creating the API Gateway, resume with the steps below.

  1. Next is the VPC Link.
  2. For the VPC Link, click on the left pane and then click Create; give a name and description for your VPC Link.

3. Select the target network load balancer we created above and click Create. It will take 4-5 minutes.

4. Let’s move to the API we created.

5. Choose APIs from the primary navigation pane and then choose + Create API to create a new API of either an edge-optimized or regional endpoint type.

6. For the root resource (/), choose Create Method from the Actions drop-down menu, and then choose GET.

7. In the / GET — Setup pane, initialize the API method integration as follows:

  • Choose VPC Link for Integration type.
  • Choose Use Proxy Integration.
  • From the Method drop-down list, choose GET as the integration method.
  • From the VPC Link drop-down list, choose [Use Stage Variables] and type ${stageVariables.vpcLinkId} in the text box below.
  • We will define the vpcLinkId stage variable after deploying the API to a stage and set its value to the ID of the VpcLink created above.
  • Type a URL, for example, http://muditademo.com, for Endpoint URL.
  • Here, the hostname (for example, muditademo.com) is used to set the Host header of the integration request.
  • Leave the Use Default Timeout selection as it is unless we want to customize the integration timeouts.
  • Choose Save to finish setting up the integration.

8. With the proxy integration, the API is ready for deployment. We can further configure appropriate method responses and integration responses if required.

9. From the Actions drop-down menu, choose Deploy API and then choose a new or existing stage to deploy the API.

10. Note the resulting Invoke URL. We need it to invoke the API. Before doing that, we must set up the vpcLinkId stage variable.

  • In the Stage Editor, choose the Stage Variables tab and choose Add Stage Variable.
  • Under the Name column, type vpcLinkId.
  • Under the Value column, type the ID of VPC_LINK, for example, gi****.
  • Choose the check-mark icon to save this stage variable.
  • Using the stage variable, we can easily switch to different VPC links for the API by changing the stage variable value.

11. Get the Invoke URL and hit it; we will see the application running without hiccups.

That's it: the API call now goes over the private network via the VPC Link.

I would like to express my deep gratitude for your generous support, Raju Banerjee.

Case study: How we helped a leading BFSI corporation improve process efficiency by 60% using AI-based OCR.

Written by Vinit Balani, Associate product manager at Powerupcloud Technologies

Demonetization has changed the way the Indian banking sector functions. While the wider acceptance of Aadhaar has made documentation and authentication easier, for BFSI clients, mainly insurance companies, document verification is still required for processing loans and policies. Most of this process remains manual, adding to the time required for opening an account or processing claims.

One of India’s largest insurance companies was facing this challenge and wanted to resolve this problem. In this case study, we highlight the problem statement and take you through how Powerupcloud came up with the solution using AI automation.

Problem Statement:

With a customer base of 115 million users and expanding, one of India's largest private insurance companies wanted to resolve a problem where field agents were dealing with a large quantum of information, including images captured as part of the KYC process.

The company already had an Android application for these agents to capture photos of documents while an account was being created. However, once these photos were captured, the data had to be entered manually into the customer repository.

Another challenge was the poor quality of document photographs taken by the agents, which often resulted in them going back to the customer to re-capture the photos. This ultimately also increased the lead time for account opening.

The company was looking to automate this process using an OCR (Optical Character Recognition) solution, which could help approve or reject the photo based on quality at the point of capture itself.

Proposed Solution:

Powerupcloud proposed creating a native Android application, to be integrated within the company's primary application, leveraging AWS Rekognition's OCR technology. In addition, the application would include features to improve the image and perform quality checks using open-source technologies.

The current scope included developing an OCR mechanism for only the Aadhaar Card. With its successful implementation, it is now going to be extended to other KYC documents.

Solution Flow:

Solution Details

The app (OCR) developed by Powerupcloud is a native android app integrated within the company’s existing android app. The OCR app gets triggered when the Aadhaar document has to be captured by the field agent.

Once the image is captured, it allows the user to crop and enhance the image using features like brightness, contrast, saturation, etc. Post this, the image quality check is done for brightness, contrast, and blurriness. If the image fails the quality check, the agent is asked to re-capture it. However, if it passes the test, the Aadhaar image is scanned to check for QR code and extract information from it. If the scan succeeds, the output with parameter values (like Aadhaar number, Name, Gender, Address) is sent to the company’s application. However, if the QR scan is not successful, the text/parameters are extracted from the image using AWS Rekognition’s OCR technology.

The extracted parameters are then passed on to the company’s Android application as JSON. Below are some snapshots of the native OCR app –

Results

The application is now live and being used by 6,000+ field agents across India. It has led to a 60% reduction in the lead time for processing an application. In addition, the solution has also helped improve the productivity of the field agents, who can now cover many more customers.