10 Cloud Predictions for 2021


Compiled by Kiran Kumar, Business Analyst at Powerup Cloud Technologies

1. Rise in cloud telephony

The cloud telephony market is projected to grow 8.9% in 2020 and 17.8% in 2021.

“As a result of workers employing remote work practices in response to COVID-19 office closures, there will be some long-term shifts in conferencing solution usage patterns. Policies established to enable remote work and experience gained with conferencing service usage during the outbreak is anticipated to have a lasting impact on collaboration adoption.” – Gartner


2. Increased adoption of virtual desktops

Forrester predicts the number of remote workers at the end of 2021 will be three times the pre-pandemic level. Given this increased demand for remote work, we expect to see more organizations turning to Desktop-as-a-Service (DaaS) options in 2021 to allow secure access to data off corporate networks from any device. DaaS technology will let organizations meet the demands of remote work by quickly provisioning secure virtual desktops for employees and contractors alike, desktops that can be deleted if compromised.

Microsoft is not the only provider treating the desktop as a means of connecting to the cloud; all of the key cloud vendors are interested in the virtual desktop market. With Windows 7 having reached end of life in January 2020, the open question is whether people will simply jump to Windows 10 and thus cement Microsoft’s hold, or whether they will accept fast-rising alternatives such as Amazon WorkSpaces or Google Chromebooks.


3. Multi-cloud management

IDC predicts that 50% of Indian enterprises will operate in a hybrid multi-cloud environment by 2021, and that 30% of Indian enterprises will deploy unified VMs, Kubernetes, and multi-cloud management processes and tools to support robust multi-cloud management across on-premises and public clouds.


4. Focus on the “XOps”

AI will play a huge role in augmenting DevOps in 2021: it will help monitor various conventional activities, optimize test cases, and reduce the time an application spends in the development phase.

The MarketsandMarkets report on the DevSecOps global forecast to 2023 suggests that the DevSecOps market will grow to USD 5.9 billion by 2023.

Developers are expected to lean towards compliance-as-code services, with security as the major focus. As mentioned earlier, security is introduced early in the SDLC using the shift-left strategy. This ensures threats are identified at the beginning, ultimately reducing the cost of fixing security issues. DevSecOps also drives better collaboration between team members by uniting their attention on observability and security issues.

Meanwhile, 65% of organizations are expected to adopt DevOps as a mainstream strategy this year.


5. Pervasiveness of AI

By 2022, 65 percent of CIOs will digitally empower and enable front-line workers with data, AI, and security to extend their productivity, adaptability, and decision-making in the face of rapid changes.

By 2023, driven by the goal to embed intelligence in products and services, one quarter of G2000 companies will acquire at least one AI software start-up to ensure ownership of differentiated skills and IP. Successful organizations will eventually sell internally developed industry-specific software and data services as a subscription, leveraging deep domain knowledge to open profitable new revenue streams.

AI in data centers

AI in data centers will see a peak rise in the coming years. IDC forecasts that AI spending will grow to US$52.2 billion by 2021, a CAGR of 46.2 percent over 2016–2021.

The use of AI in data centers will serve multiple purposes, such as automating various manual tasks and easing skill-shortage issues. AI can also help enterprises learn from their past data and draw productive conclusions.


6. Serverless computing

25% of developers will leverage serverless by 2021. Gartner has also noted the rise of serverless computing, estimating adoption by approximately 20 percent of global enterprises.

A 2020 Datadog survey indicated that over 50% of AWS users now use AWS Lambda, the serverless function-as-a-service (FaaS) offering. Serverless technologies are going mainstream.
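
To ground the term: with FaaS, you deploy only a handler function and the provider runs it on demand. Here is a minimal sketch of an AWS Lambda handler in Python; the event shape and greeting logic are illustrative, not a prescribed pattern.

```python
import json

def lambda_handler(event, context):
    # AWS Lambda invokes this handler on demand; there is no server to
    # provision, patch, or scale -- that is the FaaS proposition.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```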

7. Focus on hybrid cloud

AWS and Google will increase their focus on hybrid cloud. Security will remain the key driver for hybrid cloud adoption.


8. Mainstreaming of Containers and Kubernetes

Prior to the pandemic, about 20% of developers regularly used containers and serverless functions to build new apps and modernize old ones. We predict nearly 30% will use containers regularly by the end of 2021, creating a spike in global demand for both multi-cloud container development platforms and public-cloud container/serverless services.

IDC predicts that, alongside Kubernetes, 95 percent of new microservices will be deployed in containers by 2021.

Forrester predicts that lightweight Kubernetes deployments will end up accounting for 20% of edge orchestration in 2021.

9. Moving DR to cloud

COVID-19 shined a bright light on every company unprepared to recover from a data center outage and refocused enterprise IT teams on improving resiliency. Before the pandemic, few companies protected data and workloads in the public cloud. In 2021, we predict that an additional 20% of enterprises will shift DR operations to the public cloud — and won’t look back.

10. Manage technical debt

Through 2023, coping with technical debt accumulated during the pandemic will shadow 70% of CIOs, causing financial stress, inertial drag on IT agility, and “forced march” migrations to the cloud.

Significance of BI tools in the Era of Big Data


Written by Anjali Sharma, Software Engineer at Powerupcloud Technologies

Demand for business intelligence tools has boomed in the big data world. Today, after ‘big data’ itself, one of the most used buzzwords in the business world is ‘business intelligence’. So how do the two relate to each other? The ascendance of business intelligence to the top priority of most companies has meant that BI analysts are highly sought after. Business Intelligence (BI) tools enable organizations to get revealing insights into their operations and processes and to use them to improve productivity, boost revenue, cut costs, and so on.

BI refers to the business strategy and technological tools used for analysing business information, including analysis of historical and current data as well as future predictions. BI is thus as much a business discipline as a technological one. On the technological side, companies use various databases and data analytics tools, which comprise their enterprise BI infrastructure. BI tools have been around for decades; however, in recent years the advent of big data and artificial intelligence technologies has increased their number and broadened their functionality.

Gone are the days when running a business was assumed to be like gambling, with no option but to make ‘the perfect guess.’ When it comes to a company’s future, guessing is no longer an appropriate way to arrive at a strategy. With the help of business intelligence software, one has accurate data, real-time updates, and the means to forecast and even predict conditions.

BI tools come in several varieties, according to business demand and technical requirements:

  • Data visualization tools
  • Data mining tools
  • Reporting tools
  • Querying tools
  • Analysis tools
  • Geolocation analysis tools, etc.

How Tableau became the most powerful BI tool

Now let’s understand what makes Tableau the most powerful and user-friendly of all BI tools.

Tableau offers powerful and sophisticated data collection, analysis, and visualization. One of the claims on Tableau’s website is that “Tableau helps people see and understand their data.” Tableau allows users to drill deep into data, create powerful visualizations to analyse the information, and automatically produce valuable business insights.

Several Data Source Connections

One of the main strengths of Tableau is that it can automatically connect with hundreds of data sources without any programming needed, including big data providers.

Tableau is one of the leading BI tools for big data Hadoop that you can use. It provides connectivity to various Hadoop data sources such as Hive, Cloudera, and Hortonworks. And not only Hadoop: Tableau can connect to data from over 50 different sources, including AWS and SAP.

Drag & Drop facility

Tableau’s drag-and-drop facility makes it easy and user-friendly; most interaction takes place through drag-and-drop icons. You can quickly create visuals from data by dragging the icon for the relevant data set into the visualisation area. In other words, you can access visualisations that reveal important insights within a few clicks.

Live and Extracted Data Connection

Tableau allows users to connect to both live and extracted data, and to switch instantly between live data connections and pre-extracted data. You can also schedule extract refreshes and get notifications when live data connections fail.

Security

Users can collaborate securely across networks or the cloud, using Tableau Server and Tableau Online. This allows rapid sharing of insights, meaning that people can take action more quickly to save costs or make more money for the business.

The features mentioned above set Tableau apart from other BI tools. Data is growing faster than ever; with the proliferation of the internet, we now generate even more information. According to IBM, 2.5 quintillion bytes of data are created every day! However, less than 0.5% of it is ever analysed and used, so the importance of data analysis tools keeps increasing. For the past six years, Tableau has been the leader among data analysis and visualisation tools. Specializing in beautiful visualizations, Tableau lets you perform complex tasks with simple drag-and-drop functionality and numerous chart types.

If you are a beginner, let’s do some hands-on work in Tableau with sample data for better understanding. Here I am using a skill-registry dataset: we created a Google Form for the employees of our organisation and shared it with them, asking them to fill in their name, email address, skills, total experience, and so on. After collecting the data, we created a CEO dashboard.

Download and install the Tableau Desktop 14-day trial version:

https://www.tableau.com/en-gb/products/trial

You can also try the free Tableau Public version 2020.2.

Open Tableau and connect to the data source where your data resides; Tableau provides more than 100 data source connectors.

After connecting the data source, check whether the data is in the correct format, whether any data source filters need to be applied, and whether the Data Interpreter should be used. Connections can be live or extracted, as per the requirements.

What are live and extract connections? (Refer to the link below.)

https://www.tableau.com/about/blog/2016/4/tableau-online-tips-extracts-live-connections-cloud-data-53351

If one table does not contain sufficient data, you can bring in another table using joins.

Now go to a sheet. This is the first step toward creating your very first dashboard.

Tableau divides data into two types: measures and dimensions.

Dimensions contain qualitative values such as names, dates, or countries.

Measures are fields that can be aggregated or used in mathematical operations; in short, the numeric values associated with the dimensions are measures.

As I am using employee data, I can plot employee locations in one sheet using a map chart.

For another view, I put employees’ skills in two different sheets (skill categories and skill sub-categories), using a count of names as the measure, so that we can analyze how many resources we have in each skill category.

In the last view, I added resource information such as email address and service group; I also added resumes using an action filter.

Now go to the dashboard symbol, put all the sheets together, and create a visual representation. You can apply filters according to the requirements and use the format options to make your dashboard clean and colorful.

(For data security, I have hidden the counts and resource information.)

For practice, you can download sample data from https://www.kaggle.com/datasets and create your own dashboard.

10 ways to reduce your cloud bill

Immediate goals

  1. Right-size your cloud instances and keep inventory to a bare minimum to save up to 35%.
  2. Shut down unused resources to reduce storage and network costs.
  3. Adopt reserved instances and save up to 60% on your compute spend.
  4. Schedule your non-production instances/servers to start and stop automatically and save almost 50% on compute bills (a sketch of such a scheduler follows this list).

Short-term goals

  5. Adopt spot instances and save up to 90% on your on-demand compute spend.
  6. Leverage storage services like Amazon S3 Intelligent-Tiering and Azure’s archival / cold storage tiers to reduce storage costs.

Long-term goals

  7. Containerize / PaaSify your applications and reduce your IaaS spend by 50%.
  8. Adopt open-source platforms (e.g., Amazon Linux) to save on enterprise licensing costs like RHEL.
  9. Adopt a multi-cloud approach to leverage multi-vendor benefits.

And finally:

  10. Ultimately, good old Excel sheets help you unearth insights usually missed by man and machine.
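
As an illustration of goal 4, here is a minimal sketch of a start/stop Lambda in Python. It assumes instances opt in via a hypothetical Schedule=office-hours tag, and that two scheduled EventBridge rules trigger the function with {"action": "start"} and {"action": "stop"} respectively.

```python
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # "action" is supplied by the triggering EventBridge rule:
    # e.g. {"action": "start"} at 8 AM, {"action": "stop"} at 8 PM.
    action = event.get("action", "stop")

    # Only touch instances that opted in via the (hypothetical) tag.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Schedule", "Values": ["office-hours"]}]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

    if ids:
        if action == "start":
            ec2.start_instances(InstanceIds=ids)
        else:
            ec2.stop_instances(InstanceIds=ids)
    return {"action": action, "instances": ids}
```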

Customer Success

Powerup helped a large e-commerce company save $2 million on their annual cloud spend

Get started on your optimization journey with our “Save Now, Pay Later” gain-share program

Start saving on your cloud spend now

Customer support automation using Amazon Connect


Customer: One of the largest global insurance providers

Problem Statement

  • The customer support team receives 5,000+ calls daily, 80% of which are service-related and post-sales support queries. Most of these queries are repetitive and standardized, so the client was looking to automate them on the call center, freeing the customer care team to look into more critical queries.
  • The client needed a solution that could integrate with Genesys for a seamless handoff to the automated system.

Solution

  • Automated flows for 7 use cases on Amazon Connect, including user authentication using an alphanumeric policy number, filing a claim, claim refunds, refund status, and so on, with Amazon Lex used for NLP classification and user-query understanding
  • Amazon Connect was integrated with the Genesys dial-in numbers on the existing call center support system, with a seamless handover to the voice automation
  • The complete design was a serverless architecture, with policy-manipulation logic written in Lambda functions on AWS
  • The system integrated with the live policy database via REST APIs for live policy updates and reading the latest policy information
  • Interaction is completely voice based; the system hands off to a human agent if it cannot resolve the user’s query (a sketch of such fulfillment logic follows this list)
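
For illustration, the fulfillment logic behind one such flow might look like the Lambda sketch below. It assumes Amazon Lex V2’s event and response shapes; the CheckRefundStatus intent, PolicyNumber slot, and policy API endpoint are hypothetical, not the client’s actual names.

```python
import json
import urllib.request

POLICY_API = "https://policy-api.example.com"  # hypothetical REST endpoint

def get_policy(policy_number):
    # Read the latest policy record from the live policy database API.
    with urllib.request.urlopen(f"{POLICY_API}/policies/{policy_number}") as resp:
        return json.load(resp)

def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    if intent["name"] == "CheckRefundStatus":  # hypothetical intent name
        slots = intent["slots"]
        policy_number = slots["PolicyNumber"]["value"]["interpretedValue"]
        status = get_policy(policy_number)["refund_status"]
        message = f"The refund status for policy {policy_number} is: {status}."
        intent["state"] = "Fulfilled"
    else:
        # Unhandled intent: the contact flow then routes to a human agent.
        message = "Let me transfer you to an agent."
        intent["state"] = "Failed"
    return {
        "sessionState": {"dialogAction": {"type": "Close"}, "intent": intent},
        "messages": [{"contentType": "PlainText", "content": message}],
    }
```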

Architecture diagram

Demo links

  • File a claim
  • Claim status
  • Policy cancellation
  • User authentication

Business Impact

  • 35% Reduction in call volume to the agents
  • 90% Reduction in resolution time for customers

Enabling remote work at scale


Customer: A leading biotech company

Problem Statement

A leading biotechnology company has many contractors joining for temporary work. Below are the challenges the company faced in making sure contractors are productive:

  1. Allocating a hardened workstation to a contractor took weeks.
  2. Preventing data loss from these workstations.
  3. Security issues such as viruses or malware attacks impacting the overall environment.
  4. The need for a self-service option with an integrated approval workflow.

Proposed Solution

Amazon WorkSpaces was recommended for this requirement. It is a secure, managed cloud desktop-as-a-service offering. With Amazon WorkSpaces, you can provision either a Windows or a Linux desktop for your users in minutes and allow them to access their desktops from any supported device, from any location.

A WorkSpaces self-service portal was created to cater to the self-service requirement.

Using this portal, users can provision their own WorkSpaces through an integrated approval workflow that doesn’t require IT intervention for each request.

The portal is entirely serverless, leveraging AWS Lambda, S3, API Gateway, Step Functions, Cognito, and SES, and it provides continuous deployment through AWS CodePipeline, CodeBuild, CloudFormation with SAM, and GitHub. A sketch of the provisioning call appears below.
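
For illustration, the provisioning call at the heart of such a portal might look like the boto3 sketch below; the directory ID, bundle ID, and running-mode settings are placeholders, not the client’s actual values.

```python
import boto3

workspaces = boto3.client("workspaces")

def provision_workspace(username):
    # Called by the portal after the approval workflow succeeds.
    response = workspaces.create_workspaces(
        Workspaces=[{
            "DirectoryId": "d-1234567890",   # placeholder directory ID
            "UserName": username,
            "BundleId": "wsb-abcdefghi",     # placeholder hardened bundle
            "WorkspaceProperties": {
                "RunningMode": "AUTO_STOP",  # stop when idle to save cost
                "RunningModeAutoStopTimeoutInMinutes": 60,
            },
        }]
    )
    if response["FailedRequests"]:
        raise RuntimeError(response["FailedRequests"][0]["ErrorMessage"])
    return response["PendingRequests"][0]["WorkspaceId"]
```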

Cloud platform

AWS.

Technologies used

Lambda, Amazon Workspaces, GitHub, CloudFormation, S3, API Gateway, Directory services.

Benefit

  • The time taken for the contractor to be productive has come down drastically due to the quick availability of the workspace.
  • Standardization of the WorkSpaces configuration.
  • No security incidents related to malware or virus attacks.

Managed services for a leading ecommerce company


Problem Statement

Our e-commerce client has multiple websites, one for each country: Singapore, Malaysia, Japan, and Australia. Each website has its own infrastructure. The development and application teams frequently need copies of the production DBs in the UAT, dev, and staging environments for testing and bug fixing. Since these are commerce sites, customer data must be removed from the DB before it is restored to the UAT or dev environments. Manually dumping, cleaning up customer data, and restoring to the respective environments was time-consuming, and human error crept in every now and then.

Proposed Solution

To avoid this manual effort, the task was automated with the help of shell scripting, AWS spot instances, and Jenkins.

Every day, a shell script takes a production DB dump and moves it to S3; a local copy is retained on the AWS EC2 server for 7 days.

A spot instance is then launched using the backup volume, and multiple DB jobs run in the background to restore the production data, truncate the customer-data tables, dump the cleaned DB, and move it to S3 (a condensed sketch follows below).

Whenever the dev team needs it, a Jenkins job fetches the cleaned DB file from S3 and restores it in the respective environment.
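
A condensed Python sketch of the restore-truncate-dump-upload flow; the actual automation used shell scripts, and the database name, customer tables, bucket, and file paths here are placeholders.

```python
import datetime
import os
import subprocess

import boto3

# Placeholders: adjust the database, tables, and bucket to your setup.
DB = "shopdb"
CUSTOMER_TABLES = ["customers", "customer_addresses", "payment_details"]
BUCKET = "example-cleaned-db-dumps"
MYSQL = ["mysql", "-h", os.environ["DB_HOST"],
         "-u", os.environ["DB_USER"], f"-p{os.environ['DB_PASS']}"]

# 1. Restore the production dump into a scratch MySQL on the spot instance.
with open("/backup/prod_dump.sql") as dump:
    subprocess.run(MYSQL + [DB], stdin=dump, check=True)

# 2. Truncate the customer-data tables in the scratch copy.
for table in CUSTOMER_TABLES:
    subprocess.run(MYSQL + [DB, "-e", f"TRUNCATE TABLE {table};"], check=True)

# 3. Dump the cleaned database.
stamp = datetime.date.today().isoformat()
cleaned = f"/backup/{DB}_cleaned_{stamp}.sql"
with open(cleaned, "w") as out:
    subprocess.run(["mysqldump", "-h", os.environ["DB_HOST"],
                    "-u", os.environ["DB_USER"],
                    f"-p{os.environ['DB_PASS']}", DB],
                   stdout=out, check=True)

# 4. Upload to S3 for the Jenkins jobs to fetch and restore.
boto3.client("s3").upload_file(
    cleaned, BUCKET, f"cleaned/{os.path.basename(cleaned)}")
```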

Over time, as the data in the production DB grew, the spot instance was sometimes terminated before it finished the process; we then increased the spot bid price slightly and ran multiple restore jobs in parallel, which consumed less time.

Cloud platform

AWS.

Technologies used

EC2, Jenkins, S3.

Replacing an existing IVR environment with Amazon Connect reduced call wait times by 40%


Customer: Multinational home appliances manufacturer

Problem Statement

The customer wanted to replace their existing Avaya system, which had an IVR set up to take consumer calls. The call categories included service schedules/inquiries, spare-part status, service locations for maintenance, product information, etc.

Agent pain points in the AS-IS process that also needed to be sorted out:

  • Spare part status – resolution depends on inventory management
  • Appointment scheduling – 5 executions per technician per day
  • Agent coaching – sending messages/emails to the supervisor based on events, and real-time call listening

The team also faced Avaya, network, and consumer pain points in the AS-IS process.

Proposed Solution

Powerup helped the client set up a customer support environment for agents in Indonesia through Amazon Connect, with the AWS services hosted in the Sydney, Australia region and the ability to conference and transfer calls. The environment routes voice calls from consumers to the appropriate agents based on the language support the agents provide (English/Bahasa). It also facilitates call recording using Amazon Connect capabilities and enables agents to make outbound calls using call information provided in the InstaEdge CRM. The solution included out-of-the-box real-time and historical reports, along with login/logout reports for the client.

Cloud platform

AWS.

Technologies used

Amazon Connect, S3, Lambda.

Benefits

  1. With the implementation of Amazon Connect, average call waiting times were reduced by 40%.
  2. A generated voice call is successfully routed to and addressed by an agent connected to Amazon Connect.
  3. An agent connected to Amazon Connect can make a successful outbound call to a consumer based on the details provided in the CRM.
  4. An iframe of the Amazon Connect Contact Control Panel is demonstrated within a web application.

Migration to cloud


Customer: A leading provider of cloud-based software solutions

Problem Statement

Being part of the highly regulated life sciences industry, the customer recognized the benefits of cloud a long time ago and was one of the very first life sciences solution vendors to deliver SaaS solutions to its customers. That momentum continues as the business goes “all-in on AWS” by moving its entire cloud infrastructure to the AWS platform.

As their platform and solutions are powered entirely by the AWS cloud, the business wanted to find ways to reduce costs, strengthen security and increase the availability of the existing AWS environment. Powerup’s services were enlisted with the following objectives:

  1. Cost optimization of the existing AWS environment
  2. Deployment automation of Safety infrastructure on AWS
  3. Architecture and deployment of a centralized log management solution
  4. Architecture review and migration of the client’s customer environment to AWS, including a POC for Database Migration Service (DMS)
  5. Evaluation of DevOps strategy

Proposed Solution

 

1. Cost optimization of the existing AWS environment

Here are the three steps followed by Powerup to optimize costs:

  • Addressing idle resources through proper server tagging, translating into instant savings
  • Right-sizing recommendations for instances after proper data analysis
  • Planning Amazon EC2 Reserved Instance (RI) purchases for the resized EC2 instances to capture long-term savings

Removing idle or unused resource clutter fails to achieve its objective in the absence of a proper tagging strategy. Tags created to address wasted resources also help right-size resources by improving capacity and usage analysis. After right-sizing, committing to reserved instances gets a lot easier. For example, the Powerup team drew up a price-comparison chart for the running EC2 and RDS instances based on On-Demand vs. RI costs and shared a detailed analysis explaining the RI pricing plans.
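
As an illustration of the first step, the following boto3 sketch flags possibly idle instances by their recent average CPU; the 5% threshold and 14-day window are assumptions, not Powerup’s actual criteria.

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")

# Flag running instances whose average CPU over the last 14 days stayed
# under 5% (illustrative threshold), printing their Name tag if present.
for reservation in ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]:
    for inst in reservation["Instances"]:
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=datetime.now(timezone.utc) - timedelta(days=14),
            EndTime=datetime.now(timezone.utc),
            Period=86400,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        avg = sum(p["Average"] for p in points) / len(points) if points else 0.0
        if avg < 5.0:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            print(inst["InstanceId"], tags.get("Name", "<untagged>"), f"{avg:.1f}%")
```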

By following these steps, Powerup estimated a 30% reduction in the customer’s monthly AWS spend.

2. Deployment automation of Safety infrastructure on AWS

In AWS, the client leveraged key security features like CloudWatch and CloudTrail to closely monitor traffic and the actions performed at the API level. Critical functions like identity and access management, encryption, and log management are also handled using AWS features. Amazon GuardDuty, an ML-based tool that continuously monitors for threats and adds industry intelligence to the alerts it generates, is used for 24/7 monitoring, along with Amazon Inspector, a vulnerability-detection tool. To ensure end-to-end cyber security, they deployed an Endpoint Detection and Response (EDR) solution, Trend Micro Deep Security. All their products are tested for security vulnerabilities using the IBM AppScan tool and manual code review, following OWASP Top 10 guidelines and NIST standards, to ensure the confidentiality, integrity, and availability of data.

As part of deployment automation, Powerup used CloudFormation (CF) and/or Terraform templates to automate infrastructure provisioning and maintenance. In addition, Powerup’s team simplified all modules used to perform day-to-day tasks to make them reusable for deployments across multiple AWS accounts. Logs generated by all provisioning tasks were stored in a centralized S3 bucket. The business had requested the incorporation of security parameters and tagging files, along with tracking of user actions in CloudTrail.

3. Architecture and deployment of centralized Log Management solution

Multiple approaches to log management were shared with the customer, and Powerup and the client team agreed on the “AWS CloudWatch Events scheduler / SSM Agent” approach. Initially the scope was the log management system for the Safety infrastructure account; later it was expanded to other accounts as well. The Powerup team built the log management solution architecture using the ELK stack and CloudWatch. The scripts were written so they could be reused across the client’s accounts on AWS, with separate scripts for Linux and Windows machines written in shell script and PowerShell. Nothing was hard-coded: all inputs come from a CSV file containing the instance ID, log path, retention period, backup folder path, and S3 bucket path (see the sketch below).
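
The following Python sketch shows the shape of that CSV-driven loop using boto3; the column names are assumptions, and the real implementation used shell and PowerShell scripts.

```python
import csv
from pathlib import Path

import boto3

s3 = boto3.client("s3")

# One row per instance; column names are assumptions:
# instance_id,log_path,retention_days,backup_path,s3_bucket
with open("log_config.csv", newline="") as f:
    for row in csv.DictReader(f):
        for log_file in Path(row["log_path"]).glob("*.log"):
            key = f"{row['instance_id']}/{log_file.name}"
            s3.upload_file(str(log_file), row["s3_bucket"], key)
            # Retention could be enforced here, or via an S3 lifecycle
            # rule keyed to row["retention_days"].
            print(f"archived {log_file} -> s3://{row['s3_bucket']}/{key}")
```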

Furthermore, live hands-on workshops were conducted by the Powerup team to train the client’s operations team for future implementations.

4. Architecture review and migration of the client’s environment to AWS including POC for Database Migration Service (DMS)

The client’s pharmacovigilance software and drug safety platform is now powered by the AWS Cloud; currently more than 85 of their 200+ customers have been migrated, with more to quickly follow. In addition, they wanted Powerup to support the migration of one of their customers to AWS. Powerup reviewed and validated the designed architecture, and the infrastructure was deployed as per the approved architecture. Once deployed, Powerup used the AWS Well-Architected Framework to evaluate the architecture and provide guidance on designs that scale with the customer’s application needs over time. Powerup also supported the application team for production go-live on the AWS infrastructure, along with deploying and testing the DMS POC.

5. Evaluation of DevOps strategy

Powerup was responsible for evaluating DevOps automation processes and technologies to suit the products built by the client’s product engineering team.

Cloud platform

AWS.

Technologies used

EC2, RDS, CloudFormation, S3.

Benefit

Powerup equipped the client with efficient, completely on-demand infrastructure provisioning within hours, along with built-in redundancies, all managed by AWS. Eliminating idle and over-allocated capacity, RI management, and continuous monitoring enabled them to optimize costs. They successfully realized 30% savings on overlooked AWS assets, resulting in an overall 10 percent optimization of AWS cost. In addition, the client can now schedule and automate application backups, scale up databases in minutes by changing the instance type, and have instances automatically moved to healthy infrastructure in less than 15 minutes in case of downtime, giving customers improved resiliency and availability.

The client continues to provide a globally unified, standardized solution on the AWS infrastructure-as-a-service (IaaS) platform to drive compliance and enhance the experiences of all its customers.

Sales prediction engine


Customer: One of the world’s largest corporate food catering firms

Problem Statement

One of the world’s largest corporate food catering companies wanted to understand their customers’ behaviour, including food-ordering trends. This would help them discontinue less popular foods and combos, eventually helping them increase customer satisfaction and profit margins.

Proposed Solution

The POS data from the customer’s catering sites was pushed to a central data warehouse. The data is then processed by a machine-learning-powered prediction engine to predict several important business parameters, including plate consumption, top combo foods, inventory requirements, etc. A minimal sketch of such a prediction step appears below.
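
For illustration only, here is a minimal plate-consumption forecasting sketch in Python with scikit-learn; the project itself ran on Azure Machine Learning, and the file name, column names, and model choice here are assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical POS extract: one row per site, menu item, and day.
# Assumed columns: site_id, menu_id (numeric IDs), date, plates_sold.
df = pd.read_csv("pos_daily.csv", parse_dates=["date"])
df["day_of_week"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month

X = df[["site_id", "menu_id", "day_of_week", "month"]]
y = df["plates_sold"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit a simple regressor and report accuracy on held-out days.
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```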

Cloud platform

Azure.

Technologies used

Azure Machine Learning, Python, SQL Server, PowerBI.

Migration to Amazon ECS and DevOps Setup


Customer: India’s largest trucking platform

Problem Statement

The customer’s environment on AWS faced scalability challenges, as it was maintained across a heterogeneous set of software solutions with many different programming languages and systems, and no fault-tolerance mechanism was implemented. The lead time to get a developer operational was high, as developers ended up waiting a long time for access to cloud resources like EC2, RDS, etc. Additionally, the deployment process was manual, which increased the chances of unforced human errors and configuration discrepancies. Configuration management took a long time, which further slowed down the deployment process. Furthermore, there was no centralized mechanism for user management, log management, or cron-job monitoring.

Proposed Solution

For AWS cloud development, the built-in choice for infrastructure as code (IaC) is AWS CloudFormation. However, before building the CloudFormation (CF) templates, Powerup conducted a thorough assessment of the customer’s existing infrastructure to identify gaps and plan the template-preparation phase accordingly. Below are a few key findings from that assessment (a short audit sketch follows the list):

  • Termination protection was not enabled on many EC2 instances
  • An IAM password policy was not implemented
  • Root multi-factor authentication (MFA) was not enabled
  • IAM roles were not used to access AWS services from EC2 instances
  • CloudTrail was not integrated with CloudWatch Logs
  • S3 access logging was not enabled for the CloudTrail S3 bucket
  • Log metrics and alarms were not enabled for unauthorised API calls; use of the root account to access the AWS console; IAM policy changes; changes to CloudTrail, AWS Config, or S3 bucket policies; or changes to security groups, NACLs, route tables, and VPCs
  • SSH ports of a few security groups were open to the public
  • VPC flow logs were not enabled for a few VPCs
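
Below is a short boto3 sketch of the kind of checks behind three of these findings; it is illustrative, not Powerup’s actual assessment tooling.

```python
import boto3

ec2 = boto3.client("ec2")
cloudtrail = boto3.client("cloudtrail")

# Check 1: EC2 instances without termination protection.
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        attr = ec2.describe_instance_attribute(
            InstanceId=inst["InstanceId"], Attribute="disableApiTermination"
        )
        if not attr["DisableApiTermination"]["Value"]:
            print("no termination protection:", inst["InstanceId"])

# Check 2: trails not delivering to CloudWatch Logs.
for trail in cloudtrail.describe_trails()["trailList"]:
    if "CloudWatchLogsLogGroupArn" not in trail:
        print("trail not integrated with CloudWatch Logs:", trail["Name"])

# Check 3: security groups with SSH open to the world.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg["IpPermissions"]:
        if perm.get("FromPort") == 22 and any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        ):
            print("SSH open to public:", sg["GroupId"])
```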

 

Powerup migrated their monolithic service into smaller independent services that are self-deployable, sustainable, and scalable. They also set up CI/CD using Jenkins and Ansible. Centralized user management was implemented using FreeIPA, while the ELK stack was used for centralized log management. Healthcheck.io was used for centralized cron-job monitoring.

CloudFormation (CF) templates were then used to create the complete AWS environment, and they can be reused to create multiple environments in the future. Twenty microservices in the stage environment were deployed and handed over to the customer team for validation. Powerup also shared the Ansible playbook that helps set up the following components: server hardening / Jenkins / Metabase / FreeIPA / repository.

The below illustrates the architecture:

  • Different VPCs are provisioned for stage, production, and infra management. VPC peering is established from the Infra VPC to the Production / Stage VPCs.
  • A VPN tunnel is established between the customer office and the AWS Infra VPC for SSH access and infra tool access.
  • All layers except the elastic load balancer are configured in private subnets.
  • Separate security groups are configured for each layer (DB / cache / queue / app / ELB / infra), with only the required inbound/outbound rules.
  • Amazon ECS is configured in auto-scaling mode, so the ECS workers scale horizontally based on the load on the entire ECS cluster.
  • Service-level scaling is implemented so each individual service scales automatically based on load (see the sketch after this list).
  • ElastiCache (Redis) is used to store end-user sessions.
  • A highly available RabbitMQ cluster is configured; RabbitMQ is used as the messaging broker between the microservices.
  • For MySQL and PostgreSQL, RDS Multi-AZ is configured. MongoDB is configured in master-slave mode.
  • IAM roles are configured for accessing AWS resources like S3 from EC2 instances.
  • VPC flow logs / CloudTrail / AWS Config are enabled for logging. The logs are streamed into the AWS Elasticsearch Service using AWS Lambda. Alerts are configured for critical events like instance termination, IAM user deletion, and security group updates.
  • AWS Systems Manager is used to collect the OS, application, and instance metadata of EC2 instances for inventory management.
  • AMIs and backups are configured for business continuity.
  • Jenkins is configured for the CI/CD process.
  • The CloudFormation template is used for provisioning and updating the environment.
  • Ansible is used as configuration management for all server configurations like Jenkins / bastion / FreeIPA, etc.
  • The Sensu monitoring system is configured to monitor system performance.
  • New Relic is configured for application performance monitoring and deployment tracking.
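
As referenced in the list above, service-level scaling of this kind can be expressed with a few Application Auto Scaling calls. The boto3 sketch below uses hypothetical cluster and service names and an illustrative CPU target, not the customer’s actual configuration.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/prod-cluster/orders-service"  # hypothetical names

# Allow the service's desired count to vary between 2 and 20 tasks.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Scale the task count to keep average CPU near 60%.
autoscaling.put_scaling_policy(
    PolicyName="orders-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 120,
        "ScaleOutCooldown": 60,
    },
)
```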

Cloud platform

AWS.

Technologies used

Amazon Redshift, freeIPA, Amazon RDS, Redis.

Benefit

IaC enabled the customer to spin up the entire infrastructure architecture by running a script, allowing them not only to deploy virtual servers but also to launch pre-configured databases, network infrastructure, storage systems, load balancers, and any other needed cloud service. IaC completely standardized the infrastructure setup, decreasing the chance of incompatibility issues and letting applications run more smoothly. IaC also helps with risk mitigation: because the code can be version-controlled, every change in server configuration is documented, logged, and tracked, and these configurations can be tested just like code. So if there is an issue with a new configuration, it can be pinpointed and corrected much more easily, minimizing the risk of issues or failure.

Developer productivity drastically increases with the use of IaC. Cloud architectures can be easily deployed in multiple stages to make the software development life cycle much more efficient.