
Migration for tech-based real estate platform


The Customer: Tech-based real estate platform

The Challenge: 

The customer, a tech-based real estate platform, was running its infrastructure on DigitalOcean. The existing setup made it difficult to manage network, container and storage connectivity, and the customer was unable to use features such as auto-scaling, managed Kubernetes, load balancing and GCS storage.

The Solution: 

Google Cloud Platform (GCP) supported the customer's digital transformation journey with a highly efficient and cost-effective solution. A combination of strategies was adopted: recreation, lift-and-shift and re-architecture of the existing DigitalOcean infrastructure, planned around technologies such as managed Kubernetes, App Engine, BigQuery and a managed load balancer. Frontend application servers and Elasticsearch clusters were deployed on GKE, and the SQL cluster was deployed on GCP in a master-master setup using Percona.
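
As an illustration of the GKE piece of this setup, the sketch below uses the Kubernetes Python client to create a Deployment for a frontend service. It is a minimal sketch only: the image path, replica count, port and namespace are hypothetical placeholders, not the customer's actual configuration.

from kubernetes import client, config

# Assumes kubeconfig already points at the GKE cluster
# (e.g. after `gcloud container clusters get-credentials`).
config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="frontend"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # hypothetical replica count
        selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="frontend",
                        image="gcr.io/<project-id>/frontend:latest",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)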

  • Number of VMs – 140+
  • Number of applications migrated – 5
  • Approximate size of DB – 5+ TB

The Results:

  • The customer team can now scale the infrastructure at any point in time; increases in web traffic on their website have had no impact on end users and caused zero performance issues.
  • Additionally, managing the new setup on GCP is more efficient, which has given the team more control from a security standpoint.

Tools and technologies used:

  • Tools used – GitHub, Jenkins, MongoDB, MySQL, Redis, ELK, Kafka, Elasticsearch.
  • Google services used – Compute Engine, Cloud Build, Google Kubernetes Engine, Container Registry, Cloud SQL, Cloud Storage, Cloud Identity & Access Management, Cloud VPN, Cloud DNS, Cloud Load Balancing.

Cloud platform:

Google Cloud Platform

Migration from VM to GKE (Serverless)


The Customer: India’s leading investment startup

The Problems:

The customer's environment had been provisioned in an unstructured way within a single project, and all users on the technology team had access to and visibility of both the staging and production environments. The complete infrastructure ran on standalone VMs, which made scaling very cumbersome, and maintaining services such as RabbitMQ, MongoDB, Redis, Kafka and Elasticsearch was complex.

The Solution:

Google Kubernetes Engine (GKE) helped the customer in their digital transformation journey with a highly efficient and cost-effective solution. A complete re-architecture of the existing setup running on standalone virtual machines was planned around managed Kubernetes. Jenkins was implemented for the CI/CD pipeline to ensure integration of individual jobs, easier code deployment to production and effortless auditing of logs.
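
One of the pain points above was cumbersome scaling on standalone VMs. As a hedged illustration of how scaling becomes declarative on GKE, the sketch below attaches a HorizontalPodAutoscaler to a deployment using the Kubernetes Python client; the deployment name, replica bounds and CPU target are hypothetical, not the customer's actual values.

from kubernetes import client, config

config.load_kube_config()  # assumes access to the GKE cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="api-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="api"  # hypothetical deployment
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)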

  • Number of VMs – 180+
  • Number of applications migrated – 25+
  • Approximate size of DB – 2 TB

The Results:

  • Migrating all containerized images to managed GKE on Google Cloud helped achieve high availability and scaling, and increased performance by 100%.
  • The customer is able to manage their complete application lifecycle and build lifecycle as code; this additionally helped meet the required security compliance.

Tools and technologies used:

  • Tools used – Bitbucket, Jenkins, MongoDB, InfluxDB, Cloud SQL for MySQL, Percona, RabbitMQ, Redis, Kafka, Elasticsearch and Kubernetes.
  • Google services used – Compute Engine, Cloud Build, Google Kubernetes Engine, Container Registry, Cloud SQL, Stackdriver, Cloud Identity & Access Management, Cloud VPN, Cloud DNS, Cloud Load Balancing.

Cloud platform:

Google Cloud Platform

Re-architecting with managed GKE


The customer: A leading therapeutics company in immuno-oncology

Problem Statement:

A leading therapeutics company in immuno-oncology was running more than 2 TB of research data in an on-premise setup and faced multiple challenges when performing tests and other application validations, due to constraints on scaling and performance validation.

The Solution:

Google Kubernetes Engine (GKE) helped the customer in their digital transformation journey with a highly efficient and cost-effective solution. A complete re-architecture of the existing setup running on on-premise standalone virtual machines was planned around managed Kubernetes. Jenkins was implemented for the CI/CD pipeline to ensure integration of individual jobs, easier code deployment to production and effortless auditing of logs. Auto-scaling worked flawlessly, and handling large data volumes became much easier.
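
As a small illustration of the Jenkins-driven CI/CD flow mentioned above, the sketch below triggers a parameterized deploy job through the python-jenkins client. The server URL, credentials, job name and parameter are hypothetical placeholders; the customer's actual pipeline definitions are not reproduced here.

import jenkins  # pip install python-jenkins

# Hypothetical Jenkins URL, credentials and job name.
server = jenkins.Jenkins(
    "https://jenkins.example.com", username="ci-bot", password="<api-token>"
)

# Queue a parameterized deploy job and show where it landed in the build queue.
queue_id = server.build_job("research-app-deploy", parameters={"IMAGE_TAG": "v1.2.3"})
print("Queued build, queue item:", server.get_queue_item(queue_id))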

  • Number of VMs – 180+
  • Number of applications migrated – 25+
  • Approximate size of DB – 2 TB

The Results:

  • Migrating all containerized images to managed GKE on Google Cloud helped achieve high availability and scaling.
  • The customer was able to manage their complete application lifecycle and build lifecycle as code; this additionally helped meet the required security compliances.

Tools and services used:

  • Tools used – Istio, Jenkins, MySQL, ClamAV, Elasticsearch and GitHub
  • Google services used – Compute Engine, Cloud Build, Google Kubernetes Engine, Container Registry, Cloud SQL, Stackdriver, Cloud Identity & Access Management, Cloud VPN, Cloud DNS, Cloud Load Balancing

Cloud platform:

Google Cloud Platform

Running Websites at Scale on Azure App Service


Customer: A leading Indian e-commerce company

Problem Statement

Our client is regarded as India's biggest e-commerce store. It also runs non-e-commerce websites, such as its Careers site and Stories (corporate blog), on AWS, and as part of a company-wide Azure adoption it wanted to move these sites to Azure PaaS.

The Solution

Powerup helped move the Careers (PHP-based) and Stories (WordPress) sites from AWS IaaS to Azure PaaS. The websites were configured to withstand the client's scale and sudden surges in traffic driven by marketing activities and a huge online presence. A CDN was introduced and caching of frequently visited content was enabled for better performance. The Stories site was recently redeployed to an App Service Linux backend.

Cloud Platform

Microsoft Azure

Technologies Used

Azure App Service, Application Insights, Azure Security Center, WordPress, MySQL

Data Lake & Data Warehouse for One of India’s largest media companies


Customer: One of India's largest media companies

Problem statement:

One of India's largest media companies uses various SaaS platforms to run its OTT streaming application, resulting in data being stored across several disparate sources. With around 20 of these data sources, the overall daily raw data aggregates to roughly 600 GB. This made extracting customer metadata complex, and made search and building recommendations difficult.

The Solution

The solution was to build a data lake that brings all customer and operations data into one place so the business could be understood better. Powerupcloud built real-time and batch ETL jobs to bring data from the varied sources into S3, where the raw data was stored. The data was then populated into Redshift for reporting, while advanced analytics ran on Hadoop-based ML engines on EMR. Reporting was done using QuickSight.
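
To make the pipeline concrete, here is a minimal, hedged sketch of one batch hop: landing raw events in the S3 raw zone with boto3 and then loading that prefix into Redshift through the Redshift Data API. The bucket, cluster, database, table and IAM role names are hypothetical placeholders, not the actual implementation.

import json
import boto3

s3 = boto3.client("s3")
rsd = boto3.client("redshift-data")

RAW_BUCKET = "media-datalake-raw"  # hypothetical bucket
events = [{"user_id": "u123", "title_id": "t456", "watch_seconds": 1800}]

# 1) Land raw events in the data lake's raw zone on S3.
s3.put_object(
    Bucket=RAW_BUCKET,
    Key="ott/watch_events/2020/01/01/batch-0001.json",
    Body="\n".join(json.dumps(e) for e in events).encode("utf-8"),
)

# 2) Load the partition into Redshift for reporting via the Data API.
rsd.execute_statement(
    ClusterIdentifier="reporting-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=(
        "COPY analytics.watch_events "
        "FROM 's3://media-datalake-raw/ott/watch_events/2020/01/01/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "FORMAT AS JSON 'auto';"
    ),
)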

The solution architecture

 

Cloud Platform

AWS

Technologies Used

S3, DynamoDB, Amazon Elasticsearch Service, Kibana, EMR clusters, Redshift, QuickSight, Lambda, Cognito, API Gateway, Athena, MongoDB, Kinesis

HRMS automation Chatbot


About customer

The customer is Asia's leading communications group, providing a diverse range of services including fixed, mobile, data, internet, TV, infocomm technology (ICT) and digital solutions. Headquartered in Singapore, it has 140 years of operating experience and has played a pivotal role in Singapore's development as a major communications hub.

Problem Statement

The customer is one of the largest telecom providers, with a subscriber base across Asia, Australia and Africa and 25,000+ employees spread across the globe. It was a nightmare for the HR department to respond to employee queries around policies and provide real-time updates from their HRMS, hosted on SAP SuccessFactors. This created the need for a comprehensive solution that caters to needs across geographies by automating the new-hire onboarding and leave application processes. A chatbot that understands the user's exact question and answers appropriately helps resolve user requests quickly, avoiding delays in response from the HR function.

The Solution

Powerup integrated with SuccessFactors on SAP and deployed a chatbot on the customer's HR Central website, which allows employees to query policies and get real-time updates on HRMS data. The bot, built on Botzer, also allows employees to apply for leave and approve pending requests in SAP.
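
The production integration runs on the Botzer platform and SAP SuccessFactors APIs, which this case study does not detail. Purely to illustrate the shape of such an integration, the sketch below is a minimal Flask webhook that accepts a leave request from a bot and forwards it to a hypothetical HRMS REST endpoint; the URL, entity name, payload fields and credentials are all placeholders.

from flask import Flask, jsonify, request
import requests

app = Flask(__name__)
HRMS_BASE = "https://hrms.example.com/odata/v2"  # hypothetical endpoint, not the real SuccessFactors URL

@app.route("/bot/leave", methods=["POST"])
def apply_leave():
    # Illustrative payload, e.g. {"employee_id": "E123", "from": "2020-01-06", "to": "2020-01-08"}
    msg = request.get_json()
    resp = requests.post(
        f"{HRMS_BASE}/LeaveRequest",  # hypothetical entity name
        json={
            "userId": msg["employee_id"],
            "startDate": msg["from"],
            "endDate": msg["to"],
        },
        auth=("api_user", "api_password"),  # placeholder credentials
        timeout=10,
    )
    return jsonify({"status": "submitted" if resp.ok else "failed"})

if __name__ == "__main__":
    app.run(port=8080)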

Technologies Used

Botzer platform

 

Demo video

Technical Architecture 

Integration Architecture

Voice-based personal assistant


About Customer

A global Information Technology and consulting company that harnesses the power of digital, cloud, analytics and other emerging technologies. It is the 3rd largest IT services player in India with an employee base of over 1.7 lakhs. It serves clients across the US, Canada, Latin America, Continental Europe, India and the Middle East, and the Asia Pacific.

Business requirement

One of our customers, which serves clients across six continents, has a complex IT landscape to manage. The underlying infrastructure supports a huge employee base and all critical applications. Its digital self-service platform gives employees a seamless experience across various processes and workflows. The application enables all company employees and contractors to manage business transactions, access productivity tools, news, videos, communications, and other content via one single application interface. Tens of thousands of employees worldwide depend on their “company application” and an associated suite of 150+ applications for their day-to-day activities. But the existing approval-based systems for requests made it difficult to handle higher numbers of transactions and larger volumes of data, resulting in delays in approvals and decreased employee satisfaction. Our customer needed a smart Artificial Intelligence (AI) solution that uses advanced decision-making and machine learning not only to resolve this but also to customize the process as per the request, while also reducing the number of inputs required from the user.

Solution

Powerup conducted an in-depth study of customer application systems and interacted with the users to understand the challenges. The major bottleneck was not the sheer number of requests being received on the portal, but the systems’ inability to understand user context and the number of steps involved in getting simple issues resolved.

Powerup designed a solution for the customer, which will integrate with their company application portal as a voice engine to automate the user journey on the system. 

This had to be a voice-first solution that executes actions based on the user's voice inputs. The engine, backed by strong neural networks, understands the user's context and personalizes itself for the user. The engine is built on an unsupervised learning model, where it personalizes the conversation based on the user's past interactions, thus providing a unique and easy-to-navigate journey for each user.

In this process, users can bypass the transactional system and get issues resolved, from approval to task submission, within 2-3 steps. Powerup also implemented the Botzer chatbot platform with Amazon Lex & Polly. Customer calls get diverted from the IVR to the chatbot, which takes the customer's request as voice input, performs entity matching, triggers workflows and answers back immediately. The voice engine supports 2 languages today – English and Hindi.

Customers can get details like 

  • Statement of Account,
  • EMI tenure, 
  • Balance Due etc. 
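
The voice round trip described above can be sketched with boto3: Amazon Lex interprets the request and Amazon Polly synthesizes the spoken reply. The bot ID, alias, locale, session ID and voice below are hypothetical placeholders, not the production configuration.

import boto3

lex = boto3.client("lexv2-runtime")
polly = boto3.client("polly")

# Hypothetical bot identifiers and session ID.
reply = lex.recognize_text(
    botId="BOTID12345",
    botAliasId="TSTALIASID",
    localeId="en_US",
    sessionId="caller-0001",
    text="What is my balance due?",
)
answer = reply["messages"][0]["content"]

# Turn the bot's text answer back into speech for the caller.
speech = polly.synthesize_speech(Text=answer, OutputFormat="mp3", VoiceId="Aditi")
with open("answer.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())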

The intelligence built into the system allows it to behave differently with different users at different times of the day; if a user accesses different applications in the morning than in the evening, the engine responds accordingly during the respective hours.

Below is a high-level solution workflow of the engine, developed on Amazon Lex & Polly and utilizing Botzer APIs at the backend.

 

Video demo

Technical Architecture

Following is the high-level technical architecture of the implementation. The engine is hosted on the customer’s AWS VPC, ensuring data integrity & security. 

The current architecture is capable of supporting 100,000+ (1 lakh+) employees and 150+ applications.

The Benefit

Faster ticket resolution and better communication with third-party application providers led to an increase in the number of tickets resolved. At the same time, the number of false positives decreased.

Migrating large workloads to AWS and implementing best practice IaaS


Customer: a leading provider of cloud-based software solutions

About the customer:

The client is a leading provider of cloud-based software solutions for 200+ customers across pharmaceutical, biotech and medical device manufacturers, Contract Research Organizations (CROs) and regulatory agencies. Its proprietary cognitive technology platform integrates Machine Learning (ML) capabilities to automate the core functions of the product life cycle, boost efficiency, ensure compliance, deliver actionable insights, and lower total cost of ownership through a multi-tenant Software-as-a-Service (SaaS) architecture, enabling organizations to get started on their digital transformation. Their services and solutions are used by 4 of the top 5 and 40 of the top 50 life science companies, and by 8 health authorities. The company is headquartered in the US and has regional offices in Europe, India and Japan.

The business requirement:

Being part of the highly regulated life sciences industry, the client recognized the benefits of the cloud a long time ago; they were one of the very first life sciences solution vendors to deliver SaaS solutions to their customers. That momentum continues as the business goes “all-in on AWS” by moving their entire cloud infrastructure to the AWS platform. As their platform and solutions are powered entirely by the AWS cloud, the business wanted to find ways to reduce costs, strengthen security and increase the availability of the existing AWS environment. Powerup's services were enlisted with the following objectives:

1. Cost optimization of the existing AWS environment
2. Deployment automation of Safety infrastructure on AWS
3. Architecture and deployment of a centralized log management solution
4. Architecture review and migration of the client's customer environment to AWS, including a POC for Database Migration Service (DMS)
5. Evaluation of DevOps strategy

The solution approach:

1. Cost optimization of the existing AWS environment

Here are the three steps followed by Powerup to optimize costs:

  • Addressing idle resources through proper server tagging, translating into instant savings
  • Right-sizing recommendations for instances after proper data analysis
  • Planning Amazon EC2 Reserved Instance (RI) purchases for the resized EC2 instances to capture long-term savings

Removing idle/unused resource clutter would fail to achieve its desired objective in the absence of a proper tagging strategy. Tags created to address wasted resources also help right-size resources by improving capacity and usage analysis. After right-sizing, committing to Reserved Instances becomes a lot easier. For example, the Powerup team was able to draw up a comparative price chart for the running EC2 and RDS instances based on On-Demand vs. RI costs and share a detailed analysis explaining the RI pricing plans. By following these steps, Powerup estimated a 30% reduction in the customer's monthly AWS spend.
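
As a hedged illustration of how such steps can be scripted, the snippet below uses boto3 to flag completely untagged EC2 instances and to pull Cost Explorer's RI purchase recommendations. It is a minimal sketch, not the analysis Powerup delivered, and the lookback period and payment option shown are assumptions.

import boto3

ec2 = boto3.client("ec2")
ce = boto3.client("ce")

# Flag instances with no tags at all (candidates for the tagging exercise).
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            if not inst.get("Tags"):
                print("Untagged instance:", inst["InstanceId"])

# Pull Cost Explorer's RI purchase recommendations for the resized EC2 fleet.
rec = ce.get_reservation_purchase_recommendation(
    Service="Amazon Elastic Compute Cloud - Compute",
    LookbackPeriodInDays="SIXTY_DAYS",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
)
for r in rec.get("Recommendations", []):
    for d in r.get("RecommendationDetails", []):
        print(
            d["InstanceDetails"]["EC2InstanceDetails"]["InstanceType"],
            "recommended count:", d["RecommendedNumberOfInstancesToPurchase"],
            "estimated monthly savings:", d["EstimatedMonthlySavingsAmount"],
        )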

2. Deployment automation of Safety infrastructure on AWS

In AWS, the client has leveraged key security features like CloudWatch and CloudTrail to closely monitor traffic and the actions performed at the API level. Critical functions like identity & access management, encryption and log management are also handled using native AWS features. Amazon GuardDuty, an ML-based tool that continuously monitors for threats and adds industry intelligence to the alerts it generates, is used for 24/7 monitoring, along with Amazon Inspector, a vulnerability detection tool. To ensure end-to-end cyber security, they have deployed an Endpoint Detection and Response (EDR) solution, Trend Micro Deep Security. All their products are tested for security vulnerabilities using the IBM AppScan tool and manual code review, following OWASP Top 10 guidelines and NIST standards to ensure confidentiality, integrity and availability of data.

As part of deployment automation, Powerup used CloudFormation (CF) and/or Terraform templates to automate infrastructure provisioning and maintenance. In addition, Powerup's team simplified all modules used to perform day-to-day tasks to make them reusable for deployments across multiple AWS accounts. Logs generated for all provisioning tasks were stored in a centralized S3 bucket. The business had also requested the incorporation of security parameters and tagging files, along with tracking of user actions in CloudTrail.
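
Purely as an illustration of the automation pattern described (templated provisioning with mandatory tags), the sketch below launches a CloudFormation stack with boto3; the stack name, template URL, parameters and tag set are hypothetical placeholders rather than the client's actual templates.

import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="safety-vpc-prod",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/<templates-bucket>/safety/vpc.yaml",  # placeholder template
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "prod"}],
    Tags=[
        {"Key": "Project", "Value": "safety"},
        {"Key": "Owner", "Value": "platform-team"},
        {"Key": "Environment", "Value": "prod"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until provisioning finishes before handing the stack to the next pipeline step.
cfn.get_waiter("stack_create_complete").wait(StackName="safety-vpc-prod")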

3. Architecture and deployment of centralized Log Management solution

Multiple approaches to log management were shared with the customer, and Powerup and the client team agreed on the “AWS CloudWatch Event Scheduler/SSM Agent” approach. Initially, the scope was a log management system for the Safety infrastructure account; later, it was expanded to other accounts as well. The Powerup team built the solution architecture for log management using the ELK stack and CloudWatch. Scripts were written so that they could be reused across the client's accounts on the AWS cloud. Separate scripts were written for Linux and Windows machines using shell scripting and PowerShell. No values were hard-coded in the scripts; all inputs come from a CSV file containing the instance ID, log path, retention period, backup folder path and S3 bucket path. Furthermore, live hands-on workshops were conducted by the Powerup team to train the client's operations team for future implementations.
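
The actual scripts were written in shell and PowerShell; as a hedged sketch of the same CSV-driven idea in Python, the snippet below reads the input file and archives log files older than the retention period to the configured S3 bucket. The column headers are assumptions based on the description above.

import csv
import os
import time
import boto3

s3 = boto3.client("s3")

# Assumed header names: InstanceId, LogPath, RetentionDays, BackupFolder, S3Bucket
with open("inputs.csv", newline="") as f:
    for row in csv.DictReader(f):
        cutoff = time.time() - int(row["RetentionDays"]) * 86400
        for name in os.listdir(row["LogPath"]):
            path = os.path.join(row["LogPath"], name)
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                key = f'{row["BackupFolder"]}/{row["InstanceId"]}/{name}'
                s3.upload_file(path, row["S3Bucket"], key)
                os.remove(path)  # free local disk once the file is archived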

4. Architecture review and migration of the client’s environment to AWS including POC for Database Migration Service (DMS)

The client's pharmacovigilance software and drug safety platform is now powered by the AWS Cloud, and currently more than 85 of their 200+ customers have been migrated, with more to follow quickly. In addition, the client wanted Powerup to support the migration of one of its customers to AWS. Powerup reviewed and validated the designed architecture, and the infrastructure was deployed as per the approved architecture. Once deployed, Powerup used the AWS Well-Architected Framework to evaluate the architecture and provide guidance on implementing designs that scale with the customer's application needs over time. Powerup also supported the application team for the production go-live on AWS infrastructure, along with deploying and testing the DMS POC.
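
For context on what a DMS POC involves, here is a minimal boto3 sketch that creates and starts a full-load-plus-CDC replication task; the endpoint and replication-instance ARNs, task name and table mappings are hypothetical placeholders, not details from the actual engagement.

import json
import boto3

dms = boto3.client("dms")

task = dms.create_replication_task(
    ReplicationTaskIdentifier="poc-source-to-aws",  # hypothetical task name
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:POC",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)

arn = task["ReplicationTask"]["ReplicationTaskArn"]
dms.get_waiter("replication_task_ready").wait(
    Filters=[{"Name": "replication-task-arn", "Values": [arn]}]
)
dms.start_replication_task(
    ReplicationTaskArn=arn, StartReplicationTaskType="start-replication"
)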

5. Evaluation of DevOps strategy

Powerup was responsible for evaluating DevOps automation processes and technologies to suit the products built by the client's product engineering team.

Benefits

Powerup equipped the client with efficient, completely on-demand infrastructure provisioning within hours, along with built-in redundancies, all managed by AWS. Eliminating idle and over-allocated capacity, RI management and continuous monitoring enabled the client to optimize costs. They successfully realized 30% savings on overlooked AWS assets, resulting in an overall 10 percent optimization in AWS cost. In addition, the client can now schedule and automate application backups, scale up databases in minutes by changing the instance type, and have instances automatically moved to healthy infrastructure in less than 15 minutes in the event of downtime, giving customers improved resiliency and availability. The client continues to provide a globally unified, standardized solution on the AWS infrastructure-as-a-service (IaaS) platform to drive compliance and enhance the experiences of all its customers.

Customer support enablement with AWS Connect


Customer: A multinational home appliance manufacturer

 

The Problem:

There were several legacy issues with the existing system, as detailed below. The contact center provides information across categories including service schedules/inquiries, spare part status, service locations for maintenance, product information, etc.

  • No call recording facility from Avaya
  • No historical data or report generation. Agents were manually generating reports daily and then aggregating them in Excel every week for the weekly report (a reporting sketch follows the solution steps below)
  • Public holiday announcements and operational-hours changes (for example, closing early during Ramzan) involved making a manual recording and deploying it on the server
  • Scalability issues: the existing system supported a limit of 12 concurrent calls in the queue – 8 inbound and 4 outbound
  • The average speed of answering calls was 35 seconds

The approach:

The client wanted to run a pilot project using Amazon Connect, moving from their current voice system hosted in the Mumbai region to Amazon services, to achieve the following functionalities:

  1. Ability to take voice calls
  2. On-call connect, an option to choose a language (English/Bahasa)
  3. Call routing based on the language proficiency of the agent
  4. Ability to record calls
  5. Ability to help supervision of calls
  6. Ability to transfer/conference calls
  7. Scalable environment
  8. The ability to generate records in real-time

Solution flow & design:

 

The steps:

  1. The customer calls the service center number
  2. The call is routed to Amazon Connect through Twilio or an equivalent ISP
  3. As per the routing profile, Amazon Connect directs the call to the agent
  4. The agent gets a notification of the incoming call in the Instaedge CRM; if the incoming mobile number matches a record in the customer database, the customer's information is displayed in Instaedge
  5. The agent has to log into the Connect panel separately with their credentials.
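
To illustrate how the reporting gap called out in the problem statement can be closed once calls flow through Amazon Connect, here is a minimal boto3 sketch that pulls historical queue metrics for the last 12 hours; the instance ID and queue ID are hypothetical placeholders.

from datetime import datetime, timedelta
import boto3

connect = boto3.client("connect")

end = datetime.utcnow().replace(minute=0, second=0, microsecond=0)
start = end - timedelta(hours=12)

resp = connect.get_metric_data(
    InstanceId="11111111-2222-3333-4444-555555555555",  # hypothetical Connect instance
    StartTime=start,
    EndTime=end,
    Filters={"Queues": ["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"], "Channels": ["VOICE"]},
    HistoricalMetrics=[
        {"Name": "CONTACTS_HANDLED", "Statistic": "SUM", "Unit": "COUNT"},
        {"Name": "QUEUE_ANSWER_TIME", "Statistic": "AVG", "Unit": "SECONDS"},
    ],
)
for result in resp["MetricResults"]:
    for col in result["Collections"]:
        print(col["Metric"]["Name"], col["Value"])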

Modernizing applications from VMs to container deployment: a real-time case study


Customer: One of India's largest online marketplaces

 

Problem Statement

The customer decided to modernize its applications, moving from virtual machines (VMs) to a container-based deployment. Shifting from VMs to containers allows developers to deliver changes in a fraction of the time, in a cost-effective way. And once an app is in a container, it is portable, so the team can move it freely from AWS to Azure to Google Cloud, or back to on-premise, optimizing the benefits of a hybrid environment. The customer anticipated that demand for women's fashion products would grow quickly and would spike during sales and other promotional events. As the team scaled, the deployment process became a bottleneck; the team was frequently troubleshooting deployment failures, which caused delays and missed target dates. The customer's platform was hosted on the Google Cloud Platform and comprised approximately 25 servers, the majority based in India.

Proposed Solution

Powerup conducted detailed cloud compatibility assessments to chart out the migration of the existing platform to a scalable and highly available Google Kubernetes Engine (GKE) cluster, predominantly hosted in the asia-south (Mumbai) region. GKE is a managed, production-ready environment for deploying containerized applications. The migration was conducted in multiple waves to ensure that the customer's production workloads were not affected. At the end of the migration, the customer's annual bill was expected to be around US$0.15 million.

The figure below illustrates the migration roadmap and the steps are explained further:

  • A separate Virtual Private Cloud (VPC) was created for the production and staging environments
  • An HA private Kubernetes cluster was provisioned through GKE
  • MySQL VMs were provisioned and installed through Terraform modules
  • Kafka VMs were provisioned and installed through Terraform modules
  • Four node pools were created in Kubernetes
  • Each pool hosts the following stateful components:
    – Elasticsearch
    – Redis
    – MongoDB
    – Neo4j

Stateless application microservices were deployed in each pool according to their priority. An Ingress load balancer was deployed in Kubernetes to route traffic to the microservices, as sketched below.
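
As a hedged illustration of that routing layer, the snippet below uses the Kubernetes Python client to create an Ingress whose paths fan out to a few backend services; the service names, paths and ports are hypothetical placeholders, not the customer's actual microservices.

from kubernetes import client, config

config.load_kube_config()  # assumes access to the GKE cluster

def svc_path(name, port):
    # Route /<name> to the Service of the same (hypothetical) name.
    return client.V1HTTPIngressPath(
        path=f"/{name}",
        path_type="Prefix",
        backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name=name, port=client.V1ServiceBackendPort(number=port)
            )
        ),
    )

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="storefront-routes"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                http=client.V1HTTPIngressRuleValue(
                    paths=[svc_path("catalog", 8080), svc_path("cart", 8080), svc_path("search", 9200)]
                )
            )
        ]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)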

Benefits

Going through the modernization process helped the team increase its velocity and optimize costs. Deployments became seamless and automated, which enables the team to deploy more frequently to lower environments and reduces the time it takes to roll out a new version to production. Costs came down by 40% as the customer moved to a container platform. Moreover, now that the application runs in containers, it is portable between on-premise and public cloud environments.

Cloud platform

GCP