
The Evolution of Serverless Computing


Compiled by Kiran Kumar, Business analyst at Powerup Cloud Technologies

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies

What is Serverless Computing?

Servers have always been an integral part of computing architecture. With the onset of cloud computing, the IT sector has progressed steadily towards web-based architectures, which in turn have paved the way for serverless computing.

Gartner estimated that by 2020, 20% of the world’s organizations would have gone serverless. 

Using virtual servers from a cloud provider not only relieves the development team of managing server infrastructure but also helps the operations team run the code smoothly.

Serverless computing, also known as serverless architecture or Function as a Service (FaaS), is a cloud deployment model in which the cloud service provider handles server and infrastructure management on behalf of its customers.

This model takes care of resource allocation, virtual machine provisioning, container management and even tasks like multithreading that would otherwise be built into the application code, thus reducing the responsibilities of software developers and application architects.

As a result, application developers can focus solely on building code more efficiently, while the cloud provider keeps the underlying infrastructure transparent to them.

Physical servers are still used by cloud service providers to run the code in production, but developers no longer need to concern themselves with provisioning, altering or scaling a server.

An organization using serverless computing is charged on a flexible pay-as-you-go basis, paying only for the resources an application actually consumes. The service auto-scales, so paying up front for a fixed amount of bandwidth or a fixed number of servers is no longer necessary.

With a serverless architecture, the focus is mainly on the individual functions in an application code, while the cloud service provider automatically provisions, scales and manages the infrastructure required to run the code.

Other Cloud Computing Models Vs. Serverless

Cloud computing is the on-demand delivery of services pertaining to server, storage, database, networking, software and more via the Internet.

The three main service models of cloud computing are Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), with serverless being the newest addition to the stack. All four services address distinct requirements, supplement each other and focus on specific functions within cloud computing, and are therefore commonly referred to as the "cloud computing stack."

In serverless computing, server provisioning and infrastructure management are handled entirely by the cloud provider, on demand and on a per-request basis, with auto-scaling built in. It shifts the onus away from developers and operations teams, mitigating costly issues such as security breaches, downtime and loss of customer data.

In the conventional cloud computing setup, resources are dedicated to the customer whether they are in use or idle, whereas serverless lets customers pay only for the resources actually being used; in other words, serverless delivers the exact units of resources required in response to demand from the application.

To elaborate further, applications are framed as independent, autonomous functions; whenever a request from a particular application comes in, the corresponding functions are instantiated and resources are applied to them as needed. The key advantage of the serverless model is that it provides whatever the application calls for, whether additional compute power or more storage capacity.

Traditionally, spinning up and maintaining a server is a tedious and risky task that can expose serious security threats, especially in the case of misconfigurations or errors. In FaaS or serverless models, virtual servers are used with minimal operational effort to keep applications running in the background.

In other cloud computing models, resources are allocated in blocks and buffers must be provisioned to avoid failures under excess load. As a result, the application rarely operates at full capacity, which leads to unnecessary expense. In serverless computing, however, functions are invoked only on demand and turned off when not in use, improving cost optimization.

Serverless Computing – Architectural impact 

Rather than running services on a continuous basis, users can deploy individual functions and pay only for the CPU time when their code is actually executing. Such Function as a Service (FaaS) capabilities are capable of significantly changing how client/server applications are designed, developed and operated. 

Gartner predicts that half of global enterprises will have deployed function Platform as a Service (fPaaS) by 2025, up from only 20% today.

The technology stack for IT service delivery can be re-conceptualized to fit the serverless stack across each layer of network, compute and database. A serverless architecture includes three main components:

  • API Gateway
  • FaaS
  • DBaaS

The API Gateway is the communication layer between the frontend and FaaS; it maps the architectural interface to the respective functions that run the business logic.

With servers abstracted away in the serverless setup, the need to distribute network or application traffic through load balancers also takes a backseat.

FaaS executes code in response to events, while the cloud provider handles the underlying infrastructure associated with building and managing microservices applications.

DBaaS is a cloud-based backend service that removes database administration overheads.
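
To make the FaaS layer concrete, here is a minimal, hypothetical Python handler in the style of an AWS Lambda function sitting behind the API Gateway; the function name, route and response fields are illustrative and not taken from any specific deployment:

```python
import json

def get_order_status(event, context):
    """Hypothetical FaaS handler invoked through the API Gateway.

    The gateway maps an HTTP route such as GET /orders/{id} to this function;
    the platform provisions and scales the runtime on demand.
    """
    order_id = (event.get("pathParameters") or {}).get("id")

    # Business logic only: look up the order (stubbed here) and return JSON.
    order = {"id": order_id, "status": "SHIPPED"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(order),
    }
```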

In a serverless architecture, the key objective is to divide the system into a group of individual functions so that costs are directly proportional to usage rather than reserved capacity. The traditional benefits of bundling individual services to share the same reserved capacity become obsolete. FaaS also allows secure remote modules to be developed, maintained and replaced more efficiently.

Another major development with the serverless model is that it allows client applications to access backend resources, such as storage, directly, with appropriately distributed authentication and authorization techniques in place.

Serverless Computing – Economic impact 

The pay-as-you-go model offers a considerable financial benefit, as users are not paying for idle capacity. For instance, a 300-millisecond service task that needs to run every five minutes would require a dedicated service instance in the traditional setup, but with a FaaS model, organizations are billed for only those 300 milliseconds out of every five minutes, a potential saving of almost 99.9%.
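
To make the arithmetic explicit, a quick back-of-the-envelope calculation (shown in Python purely for illustration) gives the billed fraction of each interval:

```python
# A task that runs for 300 ms once every 5 minutes.
task_ms = 300
interval_ms = 5 * 60 * 1000                 # 300,000 ms in five minutes

billed_fraction = task_ms / interval_ms     # 0.001, i.e. only 0.1% of the time
saving = 1 - billed_fraction                # ~99.9% versus an always-on instance

print(f"Billed fraction: {billed_fraction:.1%}, saving: {saving:.1%}")
```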

Also, because different cloud services are billed according to their utilization, allowing client applications to connect directly to backend resources can optimize costs significantly. The cost of event-driven serverless services rises with memory requirements and processing time, so any service that does not charge for execution time adds to cost effectiveness.

Who should use a serverless architecture?

Gartner recently identified serverless computing as one of the most significant emerging software infrastructure and operations architectures, stating that, going forward, serverless would eliminate the need for infrastructure provisioning and management. IT enterprises need to adopt an application-centric approach to serverless computing, managing APIs and SLAs rather than physical infrastructure.

Organizations looking for scalability, flexibility and better testability of their applications should opt for serverless computing. 

Developers wanting to achieve reduced time to market with building optimal and agile applications would also benefit from serverless architecture models.

The need to have a server running 24/7 is no longer relevant; module-based functions can be called by applications only when required, so costs are incurred only while they are in use.

This in turn paves the way for organizations to take a product-based approach, where part of the development team can focus on developing and launching new features without the hassle of having to deploy an entire server to do so.

Also, with serverless architecture, developers have the option to provide users with access to some of the applications or functions in order to reduce latency.

Running a robust, scalable service while reducing the time and complexity spent on infrastructure is a must. With serverless, the effort required to maintain IT infrastructure is nominal, as most server-related issues are resolved automatically.

One of the most preferred cloud serverless services is AWS Lambda, which tops the list when it comes to integrating with other services. It offers features like event triggering, layers, high-level security control and online code editing. 

Microsoft Azure Functions and Google Cloud Functions, which offer similar services by integrating with their own sets of services and triggers, are a close second.

Players like Auth0, Amazon Cognito User Pools and Azure AD B2C offer serverless identity management with single sign-on and custom domain support, while real-time application features are provided by platforms like PubNub, Google Firebase, Azure SignalR and AWS AppSync.

Amazon S3 is a leader in object storage services, with Azure Blob Storage as an alternative.

Azure DevOps, and the combination of AWS CodeCommit, CodeBuild, CodePipeline and CodeStar, cater to end-to-end DevOps management, with tools like CircleCI and Bamboo focusing mainly on CI/CD.

Thus, there are numerous serverless offerings in the market to evaluate and choose from, based on the platform that an organization is using with respect to their application needs.

https://azure.microsoft.com/en-in/solutions/serverless/

https://aws.amazon.com/serverless/

https://cloud.google.com/serverless

How has serverless impacted cloud computing?

In a recent worldwide IDC survey of more than 3,000 developers, 55.7% of respondents indicated they are currently using or have solid plans to implement serverless computing on public cloud infrastructure.

While physical servers are still a part of the serverless set up, serverless applications do not need to cater to or manage hardware and software constituents. Cloud service providers are equipped to offer lucrative alternatives to configuration selection, integration testing, operations and all other tasks related to infrastructure management. 

This is a notable shift in the IT infrastructure services. 

Developers are now responsible primarily for the code they develop while FaaS takes care of right sizing, scalability, operations, resource provisioning, testing and high availability of infrastructure. 

Therefore, infrastructure related costs are significantly reduced promoting a highly economical business set up.

As per Google Trends, serverless computing is gaining immense popularity due to the simplicity and economic advantages it offers. The market size for FaaS services is estimated to grow to $7.72 billion by 2021.

Serverless Computing – Benefits and drawbacks

Serverless computing has initiated a revolutionary shift in the way businesses are run, improving the accuracy and impact of technology services. Some of the benefits derived from implementing a serverless architecture are:

Reduces organizational costs

Adopting serverless computing eliminates IT infrastructure costs, as cloud providers build and maintain physical servers on behalf of organizations. Servers are prone to breakdowns, require maintenance and need an additional workforce to deploy and operate them regularly, all of which can be avoided by going serverless. It also facilitates better workflow management, as organizations are able to convert operational processes into functions, maintaining profitability and bringing down expenses to a large extent.

Serverless stacks

Serverless stacks act as an alternative to conventional technology stacks by creating a responsive environment for developing agile applications, without organizations having to build complicated application stacks themselves.

Optimizes release cycle time

Serverless computing offers microservices that are deployed and run on serverless infrastructure only when needed by the application. It enables organizations to make the smallest of application-specific changes, isolate and resolve issues, and manage independent applications. According to one survey, serverless microservices have brought the standard release cycle down from 65 days to just 16.

Improved flexibility and deployment

Serverless microservices provide the flexibility, technical support and clarity needed to process data, allowing organizations to build a more consistent and well-structured data warehouse. Similarly, since remote applications can be created, deployed and fixed in serverless, specific repetitive tasks can be scheduled and automated to speed up deployments and reduce time to market.

Event based computing

With FaaS, cloud providers offer event-driven computing, in which modular functions respond to application needs when called. Developers can therefore focus solely on writing code, allowing organizations to escape time-consuming traditional workflows. It also reduces DevOps costs and lets developers concentrate on building new features and products.

Green computing

It is important for organizations to be mindful of climate and environmental change in today's times. With serverless computing, organizations can operate servers on demand rather than running them at all times, reducing energy consumption and cutting the heat and emissions from physical servers and data centers.

Better Scalability

Serverless is highly scalable and accommodates growth and increases in load without any additional infrastructure. Research suggests that 30% of the world's servers sit idle at any point in time and that most servers use only 5%-15% of their total capacity, which makes scalable serverless solutions the better choice.

However, organizations need to be wary of the downside of serverless computing as well.

  • Not Universally suitable 

Serverless is best suited to transitory applications and is not efficient when workloads have to run long-term on a dedicated server.

  • Vendor lock-in

Applications are entirely dependent on third party vendors with organizations having minimal or no control over them. It is also difficult for customers to change the cloud platform or provider without making changes to the applications.

  • Security Issues

Major security issues may arise due to cloud providers conducting multi-tenancy operations on a shared environment in order to use their own resources more efficiently.

  • Not ideal for testing  

Some FaaS services do not facilitate testing of functions locally, assuming that developers will use the same cloud for testing.

  • Practical difficulties

A scalable serverless platform needs to initialize or stop internal resources when application requests come in or when there have been no requests for a long time. When functions handle such first-time requests, they usually take more time than normal, an issue known as a cold start. Additional overheads may also be incurred for function calls if the two communicating functions are located on different servers.

Serverless computing is an emerging technology with considerable scope for advancement. In the future, businesses can anticipate a more unified approach between FaaS, APIs and frameworks to overcome the listed drawbacks. 

As of today, serverless architecture gives organizations the freedom to focus on their core business offerings and develop a competitive edge over their counterparts. Its high deliverability and multi-cloud support, coupled with the immense opportunities it promises, make it a must-adopt for any organization.

Significance of BI tools in the Era of Big Data


Written by Anjali Sharma, Software Engineer at Powerupcloud Technologies

Demand for business intelligence tools in the big data world has boomed in recent years. After big data, one of the most used buzzwords in the business world is business intelligence. So how do the two relate to each other? The ascendance of business intelligence to the top of most companies' priorities has meant that BI analysts are highly sought after. Business Intelligence (BI) tools have enabled organizations to get revealing insights into their operations and processes and use them to improve productivity, boost revenue, cut costs, etc.

BI refers to the business strategy and technological tools used for analysing business information, including analysis of historical data, analysis of current data and future predictions. Hence, BI is a business discipline as much as it is a technological one. On the technological side, companies use various databases and data analytics tools, which make up their enterprise BI infrastructure. BI tools have been around for decades. In recent years, however, the advent of big data and artificial intelligence technologies has increased their number and broadened their functionality.

Gone are the days when running a business was assumed to be like gambling, when there was no option other than making 'the perfect guess.' As you know, when it comes to a company's future, that is no longer an appropriate way to arrive at a strategy. With the help of business intelligence software, one can have accurate data, real-time updates, and the means to forecast and even predict conditions.

Assortments – a BI tool can take several forms according to business demands or technical requirements:

  • Data visualization tools
  • Data mining tools
  • Reporting tools
  • Querying tools
  • Analysis tools
  • Geolocation analysis tools, etc.

How Tableau Becomes the Most Powerful BI Tool

Now let's understand how, among all BI tools, Tableau becomes the most powerful and user-friendly.

Tableau offers powerful and sophisticated data collection, analysis and visualizations. One of the claims on Tableau's website is that "Tableau helps people see and understand their data." Tableau allows users to drill deep into data, create powerful visualizations to analyse the information, and automatically produce valuable business insights.

Several Data Source Connections

One of the main strengths of Tableau is that it can automatically connect with hundreds of data sources without any programming needed, including big data providers.

Tableau is one of the leading BI tools for big data Hadoop that you can use. It provides connectivity to various Hadoop data sources such as Hive, Cloudera and Hortonworks. Beyond Hadoop, Tableau provides the option to connect to data from over 50 different sources, including AWS and SAP.

Drag & Drop facility

Tableau’s drag & drop facility makes it really easy and user friendly. Tableau is designed with most integration taking place through drag-and-drop icons. You can quickly create visuals from data by dragging the icon for the relevant data set into the visualisation area. In other words, you can access visualisations that reveal important insights within a few clicks.

Live and Extracted Data Connection

Tableau allows users to connect to both live and extracted data. Users can instantly switch between live data connections and pre-extracted data. You can also schedule extract refreshes and get notified when live data connections fail.

Security

Users can collaborate securely across networks or the cloud, using Tableau Server and Tableau Online. This allows rapid sharing of insights, meaning that people can take action more quickly to save costs or make more money for the business.

The features mentioned above set Tableau apart from other BI tools. Data is growing faster than ever; with the proliferation of the internet, we now generate even more information. According to IBM, 2.5 quintillion bytes of data are created every day! However, less than 0.5% of it is ever analysed and used. The importance of data analysis tools has therefore grown. For the past six years, Tableau has been the leader among data analysis and visualisation tools. Specializing in beautiful visualizations, Tableau lets you perform complex tasks with simple drag-and-drop functionality and numerous types of charts.

If you are a beginner, let's do a hands-on exercise in Tableau with some sample data for better understanding. Here I am using a skill registry dataset: we created a Google Form for the employees of our organisation and shared it with them so they could fill in their name, email address, skills, total experience and so on. After collecting the data, we created a CEO dashboard.

Download and install the Tableau Desktop 14-day trial version:

https://www.tableau.com/en-gb/products/trial

You can also try the free Tableau Public version 2020.2.

Open Tableau and connect to the source where your data resides; Tableau provides more than 100 data sources we can connect to.

After connecting the data source, check whether the data is in the correct format, whether any data source filters need to be applied, and whether the Data Interpreter should be used. Connections can be live or extracted, as per the requirements.

What is Live & Extract? (Refer to the link given below.)

https://www.tableau.com/about/blog/2016/4/tableau-online-tips-extracts-live-connections-cloud-data-53351

If one table does not hold all the data you need, you can bring in another table using joins.

Now go to a sheet. This is the first step towards creating your very first dashboard.

Tableau divides its data into two types: measures and dimensions.

Dimensions contain qualitative values such as name, date or country.

Measures are fields that can be aggregated or used in mathematical operations; in short, the numeric values associated with the dimensions are the measures.

As I am using employee data, I can plot their locations in one sheet using a map chart.

For another view, I have put employees' skills into two different sheets, skill categories and skill sub-categories, using a count of names as the measure so that we can analyse how many resources we have in each skill category.

In the last view, I have added resource information such as email address and service group; I have also added resumes using an action filter.

Now go to the dashboard icon, put all the sheets together and create a visual representation. You can apply filters according to the requirements and use the format options to make your dashboard clean and colourful.

(For data security reasons, the counts and resource information have been hidden.)

For practice, you can download sample data from https://www.kaggle.com/datasets and create your own dashboard.

10 ways to reduce your cloud bill

Immediate goals

  1. Right-size your cloud instances, keep inventory to a bare minimum and save up to 35%.
  2. Shut down unused resources and reduce storage and network costs.
  3. Adopt reserved instances and save up to 60% on your compute spend.
  4. Schedule your non-production instances / servers to start and stop automatically and save almost 50% of compute bills (see the sketch below).
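
As an illustration of tip 4, a scheduled AWS Lambda function could stop tagged non-production instances each evening, with a twin function starting them in the morning. The sketch below uses Python and boto3; the tag key and values are assumptions, not a prescribed scheme:

```python
import boto3

ec2 = boto3.client("ec2")

def stop_non_prod(event, context):
    """Triggered by an EventBridge schedule, e.g. every evening.

    Stops all running instances tagged Environment=dev or Environment=staging.
    The tag key and values are illustrative; adapt them to your own tagging scheme.
    """
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev", "staging"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        inst["InstanceId"] for res in reservations for inst in res["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```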

Short-term goals

  5. Adopt spot instances and save up to 90% on your on-demand compute spend.
  6. Leverage storage services like Amazon S3 Intelligent-Tiering and Azure's archival / cold storage to reduce storage costs (see the sketch below).
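
For tip 6, one way to move objects into S3 Intelligent-Tiering is a lifecycle rule. The sketch below (Python and boto3; the bucket name and prefix are placeholders) transitions objects after 30 days:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the "logs/" prefix to Intelligent-Tiering after 30 days.
# The bucket name and prefix are placeholders for illustration.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```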

Long-term goals

  7. Containerize / PaaSify your applications and reduce your IaaS spend by 50%.
  8. Adopt open-source platforms (such as Amazon Linux) to save on enterprise licensing costs like RHEL.
  9. Adopt a multi-cloud approach to leverage multi-vendor benefits.
  10. Ultimately, good old Excel sheets help you unearth insights usually missed by man and machine.

Customer Success

Powerup helped a large e-commerce company save $2 million on their annual cloud spend

Get started on your optimization journey with our Save Now. Pay Later gain-share program

Start saving on your cloud spend now

Customer support automation using Amazon Connect


Customer: One of the largest Global Insurance providers

 

Problem Statement

  • The customer support team receives 5,000+ calls daily, of which 80% relate to servicing and post-sales support. Since most of these queries are repetitive and standardized in nature, the client was looking to automate them on their call center so the customer care team could focus on more critical queries.
  • A solution was needed that could integrate with Genesys for a seamless handoff to the automated system.

Solution

  • Automated flows for 7 use cases on Amazon Connect, including user authentication using an alphanumeric policy #, filing a claim, claim refund, refund status and so on, with Amazon Lex used for NLP classification and understanding user queries
  • Amazon Connect integrated with the Genesys dial-in numbers on the existing call center support system, with a seamless handover onto the voice-automated system
  • The complete design was a serverless architecture, with policy manipulation logic written in Lambda functions on AWS (a minimal handler sketch follows this list)
  • The system integrated with the live policy database via REST-based APIs for live policy updates and reading the latest policy information
  • Completely voice-based interaction; the system hands off to a human agent if it is not able to resolve the user query
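
A minimal sketch of what one such fulfillment function might look like, assuming the Amazon Lex (V1) event format, an illustrative "ClaimStatus" intent and a hypothetical policy-lookup endpoint; none of these names are taken from the actual implementation:

```python
import json
import urllib.request

POLICY_API = "https://policy-api.example.com/claims/"   # illustrative endpoint

def lambda_handler(event, context):
    """Fulfills a hypothetical 'ClaimStatus' intent from Amazon Lex (V1 format)."""
    slots = event["currentIntent"]["slots"]
    policy_number = slots.get("PolicyNumber") or ""

    # Call the live policy system over REST to fetch the latest claim status.
    with urllib.request.urlopen(POLICY_API + policy_number) as resp:
        claim = json.loads(resp.read())

    message = f"Your claim is currently {claim['status']}."
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }
```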

Architecture diagram

Demo Link

File a Claim | Claim Status | Policy Cancellation | User Authentication

Business Impact

  • 35% Reduction in call volume to the agents
  • 90% Reduction in resolution time for customers

Enabling remote work at scale


Customer: A leading biotech company

 

Problem Statement

A leading biotechnology company has many contractors joining for temporary work, and below are the challenges faced in making sure those contractors are productive:

  1. Allocating a hardened workstation to a contractor took weeks.
  2. Preventing data loss from these workstations.
  3. Security issues like virus or malware attacks impacting the overall environment.
  4. The need for a self-service option with an integrated approval workflow.

Proposed Solution

Amazon WorkSpaces was recommended for this requirement. It is a secure, managed cloud desktop-as-a-service offering. With Amazon WorkSpaces, you can provide either a Windows or a Linux desktop for your users in minutes and allow them to access those desktops from any supported device, in any location.

The workspace self-service portal was created to cater to the self-service requirement.

Using this portal, the users can provision their own WorkSpaces with an integrated approval workflow that doesn’t require IT intervention for each request.

This is entirely serverless, leveraging AWS Lambda, S3, API Gateway, Step Functions, Cognito and SES, and provides continuous deployment through AWS CodePipeline, CodeBuild, CloudFormation with SAM, and GitHub.
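
Once a request is approved in the portal, provisioning can come down to a single WorkSpaces API call. The following Python/boto3 sketch is only illustrative; the directory ID, bundle ID and tag are placeholders rather than the customer's actual values:

```python
import boto3

workspaces = boto3.client("workspaces")

def provision_workspace(username: str):
    """Create a hardened WorkSpace for an approved contractor request.

    DirectoryId and BundleId are placeholders; in practice they would point to
    the organisation's directory and an approved, hardened bundle.
    """
    response = workspaces.create_workspaces(
        Workspaces=[
            {
                "DirectoryId": "d-1234567890",
                "UserName": username,
                "BundleId": "wsb-0123456789abcdef0",
                "WorkspaceProperties": {
                    "RunningMode": "AUTO_STOP",   # stop when idle to save cost
                    "RunningModeAutoStopTimeoutInMinutes": 60,
                },
                "Tags": [{"Key": "Requester", "Value": username}],
            }
        ]
    )
    return response["PendingRequests"]
```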

Cloud platform

AWS.

Technologies used

Lambda, Amazon Workspaces, GitHub, CloudFormation, S3, API Gateway, Directory services.

Benefit

  • The time taken for the contractor to be productive has come down drastically due to the quick availability of the workspace.
  • Standardization w.r.t the configuration of the workspaces.
  • No security incidents related to malware or virus attacks.

Managed services for a leading ecommerce company


Problem Statement

Our e-commerce client has multiple websites, one for each country: Singapore, Malaysia, Japan and Australia. Each website has its own infrastructure. Very frequently, the development and application teams need copies of the production DBs in the UAT, DEV and staging environments for testing and bug fixing. Since it is a commerce site, customer data has to be removed before the DB is restored to the UAT or DEV environments. Manually dumping, cleaning up customer data and restoring to the respective environments was a time-consuming process, and there was an ever-present chance of human error.

Proposed Solution

To avoid this manual effort, the task was automated with the help of shell scripting, AWS spot instances and Jenkins.

Every day, a shell script takes a production DB dump and moves it to S3; a local copy remains available on the AWS EC2 server for 7 days.

A spot instance is then launched using the backup volume, and multiple DB jobs run in the background to restore the production data, truncate the customer data tables, dump the cleaned DB and move it to S3.

Whenever the dev team needs it, they use a Jenkins job to fetch the cleaned DB file from S3 and restore it in their respective environments.

Over time, as the data in the production DB grew, the spot instance was getting terminated before it finished the process; we then increased the spot price slightly and ran multiple restore jobs in parallel, which takes less time.
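
The daily dump-and-upload step was implemented as a shell script; as a rough illustration of the same idea in Python (the database name, bucket, paths and credentials handling are placeholders, and mysqldump is assumed), it could look like this:

```python
import datetime
import subprocess

import boto3

BUCKET = "example-db-backups"   # placeholder bucket name

def dump_and_upload(db_name: str):
    """Dump a production MySQL database and push the compressed dump to S3.

    Mirrors the daily shell-script step described above; the mysqldump call,
    paths and bucket name are illustrative only.
    """
    today = datetime.date.today().isoformat()
    dump_file = f"/backups/{db_name}-{today}.sql.gz"

    # Stream the mysqldump output through gzip into a local file (kept for 7 days).
    with open(dump_file, "wb") as out:
        dump = subprocess.Popen(["mysqldump", db_name], stdout=subprocess.PIPE)
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        dump.wait()

    # Copy the dump to S3 so the spot instance can pick it up for cleaning.
    boto3.client("s3").upload_file(dump_file, BUCKET, f"{db_name}/{today}.sql.gz")
```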

Cloud platform

AWS.

Technologies used

EC2, Jenkins, S3.

Replacing an existing IVR environment with Amazon Connect reduced call wait times by 40%


Customer: Multinational home appliances manufacturer

 

Problem Statement

The customer wanted to replace their existing Avaya Systems which had an IVR set up to take consumer calls. The categories included Service Schedules/Inquiries, Spare part status, Service location for maintenance, Product Information, etc.

Agent pain points in the as-is process that also needed to be addressed:

  • Spare part status – Resolution is based on Inventory Management
  • Appointment scheduling – 5 executions for a technician per day
  • Agents coaching – Send message/email based on the event to supervisor & Real-time call listening

The team also faced Avaya, network and consumer pain points in the as-is process.

Proposed Solution

Powerup successfully helped the client set up a customer support environment for customer agents in Indonesia through Amazon Connect, with AWS services hosted in the Sydney, Australia region and the ability to conference and transfer calls. In addition, Powerup set up a customer support environment that routes voice calls from consumers to the appropriate agents based on the language support provided by the agents (English/Bahasa), using Amazon Connect. The solution also facilitated call recording using Amazon Connect capabilities and gave agents the ability to make outbound calls using call information provided in the InstaEdge CRM. It also included out-of-the-box real-time and historical reports, along with login/logout reports, for the client.

Cloud platform

AWS.

Technologies used

Amazon Connect, S3, Lambda.

Benefits

  1. With the implementation of Amazon Connect, average call waiting times were reduced by 40%.
  2. Demonstrates that a voice call generated is successfully routed to and addressed by an agent connected to AWS.
  3. Demonstrates that an agent connected to Amazon Connect can make a successful outbound call to a consumer based on details provided in the CRM.
  4. An iframe of the AWS control panel is demonstrated within a web application.

Migration to cloud


Customer: A leading provider of cloud-based software solutions

 

Problem Statement

Being part of the highly regulated life sciences industry, the customer recognized the benefits of cloud a long time ago and was one of the very first life sciences solution vendors to deliver SaaS solutions to its customers. That momentum continues today as the business goes "all-in on AWS" by moving its entire cloud infrastructure to the AWS platform.

As their platform and solutions are powered entirely by the AWS cloud, the business wanted to find ways to reduce costs, strengthen security and increase the availability of the existing AWS environment. Powerup’s services were enlisted with the following objectives:

  1. Cost optimization of the existing AWS environment
  2. Deployment automation of Safety infrastructure on AWS
  3. Architecture and deployment of a centralized Log Management solution
  4. Architecture review and migration of the client's customer environment to AWS, including a POC for Database Migration Service (DMS)
  5. Evaluation of DevOps strategy

Proposed Solution

 

1. Cost optimization of the existing AWS environment

Here are the three steps followed by Powerup to optimize costs:

  • Addressing idle resources by proper server tagging, translating into instant savings
  • Right sizing recommendation for instances after a proper data analysis
  • Planning Amazon EC2 Reserved Instances (RI) purchasing for resized EC2 instances to capture long-term savings

Removing idle/unused resource clutter would fail to achieve its desired objective in the absence of a proper tagging strategy. Tags created to address wasted resources also help to properly size resources by improving capacity and usage analysis. After right-sizing, committing to reserved instances becomes a lot easier. For example, the Powerup team was able to draw up a comparative price chart for the running EC2 and RDS instances based on On-Demand vs. RI costs and share a detailed analysis explaining the RI pricing plans.
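
A small audit script of the kind that supports such a tagging exercise might look like the following Python/boto3 sketch; the required tag keys are examples, not the client's actual tagging policy:

```python
import boto3

REQUIRED_TAGS = {"Project", "Environment", "Owner"}   # example cost-allocation tags

def find_untagged_instances():
    """List running EC2 instances missing any of the required tags.

    Instances flagged here are candidates for tagging, right-sizing review,
    or shutdown if they turn out to be idle.
    """
    ec2 = boto3.client("ec2")
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tag_keys = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAGS - tag_keys:
                    untagged.append(instance["InstanceId"])
    return untagged

if __name__ == "__main__":
    print(find_untagged_instances())
```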

By following these steps, Powerup estimated a 30% reduction in the customer's monthly AWS spend.

2. Deployment automation of Safety infrastructure on AWS

In AWS, the client has leveraged key security features like CloudWatch and CloudTrail to closely monitor traffic and the actions performed at the API level. Critical functions like identity and access management, encryption and log management are also handled using AWS features. Capabilities like Amazon GuardDuty, an ML-based tool that continuously monitors threats and adds industry intelligence to the alerts it generates, are used for 24/7 monitoring, along with Amazon Inspector, a vulnerability detection tool. To ensure end-to-end cyber security, they have deployed an Endpoint Detection and Response (EDR) solution, Trend Micro Deep Security. All their products are tested for security vulnerabilities using the IBM AppScan tool and manual code review, following OWASP Top 10 guidelines and NIST standards to ensure confidentiality, integrity and availability of data.

As part of deployment automation, Powerup used CloudFormation (CF) and/or Terraform templates to automate infrastructure provisioning and maintenance. In addition, Powerup's team simplified all modules used to perform day-to-day tasks to make them re-usable for deployments across multiple AWS accounts. Logs generated for all provisioning tasks were stored in a centralized S3 bucket. The business had requested the incorporation of security parameters and tagging files, along with tracking of user actions in CloudTrail.

3. Architecture and deployment of centralized Log Management solution

Multiple approaches to log management were shared with the customer. Powerup and the client team agreed on the "AWS CW Event Scheduler/SSM Agent" approach. Initially, the scope was a log management system for the Safety infrastructure account; later, it was expanded to other accounts as well. The Powerup team built the solution architecture for log management using the ELK stack and CloudWatch. Scripts were written so that they could be used across the client's accounts on the AWS cloud. Separate scripts were written for Linux and Windows machines using shell scripting and PowerShell. No hard coding was done in the scripts; all inputs come from a CSV file containing the instance ID, log path, retention period, backup folder path and S3 bucket path (a simplified sketch follows).
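
A simplified Python rendering of such a script is sketched below. The original scripts were written in shell and PowerShell, and the CSV column names follow the description above but are otherwise assumed:

```python
import csv
import os

import boto3

s3 = boto3.client("s3")

def ship_logs(csv_path: str):
    """Upload the log files listed in a CSV inventory to their S3 buckets.

    Assumed columns: InstanceId, LogPath, RetentionDays, BackupFolder, S3Bucket.
    Nothing is hard-coded in the script itself; all inputs come from the CSV.
    """
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            log_path = row["LogPath"]
            key = f"{row['InstanceId']}/{os.path.basename(log_path)}"
            s3.upload_file(log_path, row["S3Bucket"], key)
            print(f"Uploaded {log_path} to s3://{row['S3Bucket']}/{key}")
```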

Furthermore, live hands-on workshops were conducted by the Powerup team to train the client's operations team for future implementations.

4. Architecture review and migration of the client’s environment to AWS including POC for Database Migration Service (DMS)

The client's pharmacovigilance software and drug safety platform is now powered by the AWS Cloud, and currently more than 85 of their 200+ customers have been migrated, with more to quickly follow. In addition, they wanted Powerup to support the migration of one of their customers to AWS. Powerup reviewed and validated the designed architecture, and the infrastructure was deployed as per the approved architecture. Once deployed, Powerup used the AWS Well-Architected Framework to evaluate the architecture and provide guidance on implementing designs that scale with the customer's application needs over time. Powerup also supported the application team for the production go-live on AWS infrastructure, along with deploying and testing the DMS POC.

5. Evaluation of DevOps strategy

Powerup was responsible for evaluating DevOps automation processes and technologies to suit the products built by the client’s product engineering team.

Cloud platform

AWS.

Technologies used

EC2, RDS, CloudFormation, S3.

Benefit

Powerup equipped the client with efficient and completely on-demand infrastructure provisioning within hours, along with built-in redundancies, all managed by AWS. Eliminating idle and over-allocated capacity, RI management and continuous monitoring enabled them to optimize costs. They successfully realized 30% savings on overlooked AWS assets, resulting in an overall 10 percent optimization in AWS cost. In addition, the client can now schedule and automate application backups, scale up databases in minutes by changing the instance type, and have instances automatically moved to healthy infrastructure in less than 15 minutes in case of downtime, giving customers improved resiliency and availability.

The client continues to provide a globally unified, standardized solution on the AWS infrastructure-as-a-service (IaaS) platform to drive compliance and enhance the experiences of all its customers.

Sales prediction engine


Customer: One of the world’s largest corporate food catering firm

 

Problem Statement

One of the world's largest corporate food catering companies wanted to understand their customers' behaviour, including their food ordering trends. This would help them discontinue less popular foods and combos, eventually helping them increase customer satisfaction and profit margins.

Proposed Solution

The POS data from the customer's catering sites was pushed to a central data warehouse. The data was then processed by a machine-learning-powered prediction engine to predict several important business parameters, including plate consumption, top combo foods and inventory requirements.
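
As a rough, hypothetical illustration of the prediction step only (the actual engine was built on Azure Machine Learning; the column names and model choice below are illustrative and not the deployed solution):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative POS columns: site, weekday, menu_item, price, plates_sold.
pos = pd.read_csv("pos_history.csv")
features = pd.get_dummies(pos[["site", "weekday", "menu_item", "price"]])
target = pos["plates_sold"]

X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2)

# A simple regressor to forecast plate consumption per item and site.
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```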

Cloud platform

Azure.

Technologies used

Azure Machine Learning, Python, SQL Server, PowerBI.

Migration to Amazon ECS and DevOps Setup


Customer: India’s largest trucking platform

Problem Statement

The customer's environment on AWS was facing scalability challenges, as it was maintained across a heterogeneous set of software solutions with many different programming languages and systems, and no fault-tolerance mechanism was implemented. The lead time to get a developer operational was high, as developers ended up waiting a long time to access cloud resources like EC2, RDS, etc. Additionally, the deployment process was manual, which increased the chances of unforced human errors and configuration discrepancies. Configuration management took a long time, which further slowed down the deployment process. Furthermore, there was no centralized mechanism for user management, log management or cron job monitoring.

Proposed Solution

For AWS cloud development, the built-in choice for infrastructure as code (IaC) is AWS CloudFormation. However, before building the AWS CloudFormation (CF) templates, Powerup conducted a thorough assessment of the customer's existing infrastructure to identify the gaps and plan the template preparation phase accordingly. Below are a few key findings from the assessment:

  • Termination Protection was not enabled for many EC2 instances
  • An IAM password policy was not implemented
  • Root Multi-Factor Authentication (MFA) was not enabled
  • IAM roles were not used to access AWS services from EC2 instances
  • CloudTrail was not integrated with CloudWatch Logs
  • S3 access logs for the CloudTrail S3 bucket were not enabled
  • Log metric filters and alarms were not enabled for unauthorised API calls, use of the root account to access the AWS console, IAM policy changes, changes to CloudTrail, AWS Config or S3 bucket policies, or changes to security groups, NACLs, route tables and VPCs
  • SSH ports of a few security groups were open to the public
  • VPC Flow Logs were not enabled for a few VPCs

 

Powerup migrated the monolithic service into smaller independent services that are self-deployable, sustainable and scalable. They also set up CI/CD using Jenkins and Ansible. Centralized user management was implemented using FreeIPA, the ELK stack was used for centralized log management, and Healthchecks.io was used for centralized cron job monitoring.

CloudFormation (CF) templates were then used to create the complete AWS environment; the templates can be reused to create multiple environments in the future. 20 microservices in the stage environment were deployed and handed over to the customer team for validation. Powerup also shared the Ansible playbook, which helps in setting up the following components – server hardening / Jenkins / Metabase / FreeIPA / repository.

The architecture is illustrated below:

  • Different VPCs are provisioned for Stage, Production and Infra management. VPC peering is established from the Infra VPC to the Production / Stage VPCs.
  • A VPN tunnel is established between the customer office and the AWS Infra VPC for SSH access / infra tool access.
  • All layers except the elastic load balancer are configured in private subnets.
  • Separate security groups are configured for each layer, such as DB / Cache / Queue / App / ELB / Infra security groups, with only the required inbound / outbound rules allowed.
  • Amazon ECS is configured in auto-scaling mode, so the ECS workers scale horizontally based on the load on the entire ECS cluster.
  • Service-level scaling is implemented for each service, to scale the individual service automatically based on load (see the sketch after this list).
  • ElastiCache (Redis) is used to store end-user sessions.
  • A highly available RabbitMQ cluster is configured; RabbitMQ is used as the messaging broker between the microservices.
  • For MySQL and PostgreSQL, RDS Multi-AZ is configured. MongoDB is configured in master-slave mode.
  • IAM roles are configured for accessing AWS resources like S3 from EC2 instances.
  • VPC Flow Logs / CloudTrail / AWS Config are enabled for logging purposes. The logs are streamed into the AWS Elasticsearch service using AWS Lambda. Alerts are configured for critical events like instance termination, IAM user deletion, security group updates, etc.
  • AWS Systems Manager is used to collect the OS, application and instance metadata of EC2 instances for inventory management.
  • AMIs and backups are configured for business continuity.
  • Jenkins is configured for the CI / CD process.
  • A CloudFormation template is used for provisioning / updating the environment.
  • Ansible is used as configuration management for all server configurations like Jenkins / Bastion / FreeIPA, etc.
  • The Sensu monitoring system is configured to monitor system performance.
  • New Relic is configured for application performance monitoring and deployment tracking.
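
Service-level scaling of this kind can be wired up through Application Auto Scaling. The Python/boto3 sketch below registers a CPU-based target-tracking policy; the cluster and service names are placeholders:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/example-cluster/example-service"   # placeholder names

# Register the ECS service's desired count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Track average CPU utilisation around 60% by adding or removing tasks.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```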

Cloud platform

AWS.

Technologies used

Amazon Redshift, FreeIPA, Amazon RDS, Redis.

Benefit

IaC enabled the customer to spin up an entire infrastructure architecture by running a script. This allows the customer not only to deploy virtual servers, but also to launch pre-configured databases, network infrastructure, storage systems, load balancers and any other cloud service that is needed. IaC completely standardized the setup of infrastructure, thereby decreasing the chances of incompatibility issues between infrastructure and applications, so applications run more smoothly. IaC is also helpful for risk mitigation: because the code can be version-controlled, every change to the server configuration is documented, logged and tracked, and these configurations can be tested just like code. So if there is an issue with a new setup configuration, it can be pinpointed and corrected much more easily, minimizing the risk of issues or failure.

Developer productivity drastically increases with the use of IaC. Cloud architectures can be easily deployed in multiple stages to make the software development life cycle much more efficient.