Migrating large workloads to AWS and implementing best practice IaaS


Customer: A leading provider of cloud-based software solutions

About the customer:

The client is a leading provider of cloud-based software solutions for 200+ customers across pharmaceutical, biotech and medical device manufacturers, Contract Research Organizations (CROs) and regulatory agencies. Its proprietary cognitive technology platform integrates Machine Learning (ML) capabilities to automate the core functions of the product life-cycle, boost efficiency, ensure compliance, deliver actionable insights and lower total cost of ownership through a multi-tenant Software-as-a-Service (SaaS) architecture, enabling organizations to get started on their digital transformation. Their services and solutions are used by 4 of the top 5 and 40 of the top 50 life science companies, and by 8 health authorities. The company is headquartered in the US, with regional offices in Europe, India and Japan.

The business requirement:

Operating in the highly regulated life sciences industry, the client recognized the benefits of the cloud long ago and was one of the very first life sciences solution vendors to deliver SaaS solutions to its customers. That momentum continues as the business goes "all-in on AWS" by moving its entire cloud infrastructure to the AWS platform. With its platform and solutions powered entirely by the AWS cloud, the business wanted to find ways to reduce costs, strengthen security and increase the availability of the existing AWS environment. Powerup's services were enlisted with the following objectives:

1. Cost optimization of the existing AWS environment
2. Deployment automation of the Safety infrastructure on AWS
3. Architecture and deployment of a centralized log management solution
4. Architecture review and migration of the client's customer environment to AWS, including a POC for Database Migration Service (DMS)
5. Evaluation of DevOps strategy

The solution approach:

1. Cost optimization of the existing AWS environment

Powerup followed three steps to optimize costs:
● Addressing idle resources through proper server tagging, translating into instant savings
● Right-sizing recommendations for instances after proper data analysis
● Planning Amazon EC2 Reserved Instance (RI) purchases for the resized EC2 instances to capture long-term savings

Removing idle/unused resource clutter fails to achieve its objective in the absence of a proper tagging strategy. The tags created to address wasted resources also help right-size resources by improving capacity and usage analysis, and after right-sizing, committing to Reserved Instances becomes much easier. For example, the Powerup team drew up a comparison price chart for the running EC2 and RDS instances based on On-Demand vs. RI costs and shared a detailed analysis explaining the RI pricing plans.
By following these steps, Powerup estimated a 30% reduction in the customer's monthly AWS spend.
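
A minimal sketch of how such a tagging and idle-resource audit could be scripted with boto3. The required tag keys, CPU threshold and lookback window below are illustrative assumptions, not the client's actual policy:

```python
import boto3
from datetime import datetime, timedelta, timezone

REQUIRED_TAGS = {"Project", "Environment", "Owner"}   # assumed tagging policy
CPU_IDLE_THRESHOLD = 5.0                              # % average CPU, assumed
LOOKBACK = timedelta(days=14)                         # assumed analysis window

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id):
    """Average CPUUtilization over the lookback window."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - LOOKBACK,
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else None

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            tags = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            cpu = average_cpu(instance_id)
            if missing:
                print(f"{instance_id}: missing tags {sorted(missing)}")
            if cpu is not None and cpu < CPU_IDLE_THRESHOLD:
                print(f"{instance_id}: avg CPU {cpu:.1f}% over {LOOKBACK.days} days -> right-size/RI candidate")
```

Instances flagged by a report like this feed directly into the right-sizing and RI purchase analysis described above.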

2. Deployment automation of the Safety infrastructure on AWS

In AWS, the client has leveraged key security features such as CloudWatch and CloudTrail to closely monitor traffic and the actions performed at the API level. Critical functions like Identity & Access Management, encryption and log management are also handled using AWS features. For 24/7 monitoring they use Amazon GuardDuty, an ML-based service that continuously monitors for threats and adds threat intelligence to the alerts it generates, along with Amazon Inspector for vulnerability detection. To ensure end-to-end cyber security, they have deployed an Endpoint Detection and Response (EDR) solution, Trend Micro Deep Security. All their products are tested for security vulnerabilities using the IBM AppScan tool and manual code review, following OWASP Top 10 guidelines and NIST standards to ensure confidentiality, integrity and availability of data.

As part of deployment automation, Powerup used CloudFormation (CF) and/or Terraform templates to automate infrastructure provisioning and maintenance. In addition, Powerup's team simplified all modules used for day-to-day tasks so that they could be reused for deployments across multiple AWS accounts. Logs generated for all provisioning tasks were stored in a centralized S3 bucket. The business had also asked for security parameters and tagging to be incorporated into the templates, along with tracking of user actions in CloudTrail.
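
A minimal boto3 sketch of the provisioning pattern described above: launching a CloudFormation stack with mandatory tags and archiving the provisioning events to a central S3 bucket. The stack name, template URL, tag values and bucket name are placeholders, not the client's actual configuration:

```python
import json
import boto3

cf = boto3.client("cloudformation")
s3 = boto3.client("s3")

STACK_NAME = "safety-infra-dev"                                              # placeholder
TEMPLATE_URL = "https://s3.amazonaws.com/example-bucket/safety-infra.yaml"   # placeholder
LOG_BUCKET = "central-provisioning-logs"                                     # placeholder

# Launch the stack with the mandatory tags requested by the business.
cf.create_stack(
    StackName=STACK_NAME,
    TemplateURL=TEMPLATE_URL,
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Tags=[
        {"Key": "Project", "Value": "SafetyInfra"},
        {"Key": "Environment", "Value": "dev"},
        {"Key": "Owner", "Value": "platform-team"},
    ],
)
cf.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)

# Archive the stack events as a provisioning log in the centralized S3 bucket.
events = cf.describe_stack_events(StackName=STACK_NAME)["StackEvents"]
s3.put_object(
    Bucket=LOG_BUCKET,
    Key=f"provisioning-logs/{STACK_NAME}.json",
    Body=json.dumps(events, default=str).encode("utf-8"),
)
```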

3. Architecture and deployment of a centralized log management solution

Multiple approaches for log management were shared with the customer, and Powerup and the client team agreed on the "AWS CloudWatch Event Scheduler/SSM Agent" approach. Initially, the scope was to build the log management system for the Safety infrastructure account; it was later expanded to other accounts as well. The Powerup team built the solution architecture for log management using the ELK stack and CloudWatch. Scripts were written so that they could be reused across the client's AWS accounts, with separate scripts for Linux and Windows machines written in shell script and PowerShell. Nothing was hard-coded in the scripts: all inputs come from a CSV file containing the instance ID, log path, retention period, backup folder path and S3 bucket path. Furthermore, live hands-on workshops were conducted by the Powerup team to train the client's operations team for future implementations.
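
The original scripts were written in shell and PowerShell; the sketch below re-expresses the same CSV-driven idea in Python for illustration only. The column names follow the description above, while the CSV filename and the boto3 upload are assumptions:

```python
import csv
import os
import boto3

s3 = boto3.client("s3")

# inputs.csv columns (assumed names): InstanceID, LogPath, RetentionDays, BackupFolder, S3Bucket
with open("inputs.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        log_path = row["LogPath"]
        bucket = row["S3Bucket"]
        prefix = f"{row['InstanceID']}/"

        # Upload every log file under the configured path to the instance's S3 prefix.
        for root, _dirs, files in os.walk(log_path):
            for name in files:
                local_file = os.path.join(root, name)
                key = prefix + os.path.relpath(local_file, log_path).replace(os.sep, "/")
                s3.upload_file(local_file, bucket, key)
                print(f"uploaded {local_file} -> s3://{bucket}/{key}")
```

Retention could then be enforced with an S3 lifecycle rule driven by the RetentionDays column rather than in the script itself.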

4. Architecture review and migration of the client’s environment to AWS including POC for Database Migration Service (DMS)

The client's pharmacovigilance software and drug safety platform is now powered by the AWS Cloud; more than 85 of their 200+ customers have already been migrated, with more to quickly follow. In addition, the client wanted Powerup to support the migration of one of its customers to AWS. Powerup reviewed and validated the designed architecture, and the infrastructure was deployed as per the approved architecture. Once deployed, Powerup used the AWS Well-Architected Framework to evaluate the architecture and provide guidance on designs that scale with the customer's application needs over time. Powerup also supported the application team for the production go-live on the AWS infrastructure, along with deploying and testing a DMS proof of concept.
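
A hedged boto3 sketch of what such a DMS proof of concept could look like once a replication instance and the source/target endpoints exist. All ARNs, identifiers and the table-mapping rule are placeholders:

```python
import json
import boto3

dms = boto3.client("dms")

# Placeholders -- these resources would already exist for the POC.
REPLICATION_INSTANCE_ARN = "arn:aws:dms:ap-south-1:111122223333:rep:EXAMPLE"
SOURCE_ENDPOINT_ARN = "arn:aws:dms:ap-south-1:111122223333:endpoint:SOURCE"
TARGET_ENDPOINT_ARN = "arn:aws:dms:ap-south-1:111122223333:endpoint:TARGET"

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-app-schema",
        "object-locator": {"schema-name": "appdb", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Full load plus change data capture keeps source and target in sync until cutover.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="poc-appdb-migration",
    SourceEndpointArn=SOURCE_ENDPOINT_ARN,
    TargetEndpointArn=TARGET_ENDPOINT_ARN,
    ReplicationInstanceArn=REPLICATION_INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```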

5. Evaluation of DevOps strategy

Powerup was responsible for evaluating DevOps automation processes and technologies to suit the products built by the client's product engineering team.

Benefits

Powerup equipped the client with efficient, fully on-demand infrastructure provisioning within hours, along with built-in redundancies, all managed by AWS. Eliminating idle and over-allocated capacity, RI management and continuous monitoring enabled them to optimize costs: they realized 30% savings on overlooked AWS assets, resulting in an overall 10% reduction in AWS cost. In addition, the client can now schedule and automate application backups, scale up databases in minutes by changing the instance type, and have instances automatically moved to healthy infrastructure in less than 15 minutes in the event of downtime, giving customers improved resiliency and availability. The client continues to provide a globally unified, standardized solution on the AWS infrastructure-as-a-service (IaaS) platform to drive compliance and enhance the experience of all its customers.

Customer support enablement with Amazon Connect


Customer: A multinational home appliance manufacturer

 

The Problem:

The existing system had several legacy issues, detailed below. The contact centre provides information across categories including service schedules/inquiries, spare part status, service locations for maintenance, product information, etc.

  • No call recording facility from Avaya
  • No historical data or report generation: agents were manually generating reports daily and then aggregating them in Excel every week for the weekly report
  • Public holiday announcements and operational-hours changes (for example, early closing during Ramzan) required manually recording a message and deploying it on the server
  • Scalability issues: a limit of 12 calls in a queue based on what the existing systems supported – 8 inbound and 4 outbound concurrent calls
  • The average speed of answering calls was 35 seconds

The approach:

The client wanted to run a pilot project using Amazon Connect, moving from their current voice system hosted in Mumbai to AWS to achieve the following functionalities:

  1. Ability to take voice calls
  2. An option to choose a language (English/Bahasa) on call connect
  3. Call routing based on the language proficiency of the agent
  4. Ability to record calls
  5. Ability to help supervision of calls
  6. Ability to transfer/conference calls
  7. Scalable environment
  8. The ability to generate records in real-time

Solution flow & design:

 

The steps:

  1. The customer calls the service centre number
  2. The call is routed to Amazon Connect through Twilio or an equivalent ISP
  3. As per the routing profile, Amazon Connect directs the call to an agent
  4. The agent gets a notification of the incoming call in the Instaedge CRM; if the incoming mobile number matches a record in the customer database, the customer information is displayed in Instaedge (see the lookup sketch after these steps)
  5. The agent logs into the Connect panel separately with their credentials
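
A minimal sketch of the CRM lookup in step 4, written as a Lambda function that Amazon Connect can invoke from a contact flow. The event shape (`Details.ContactData.CustomerEndpoint.Address`) is the standard Connect contact-flow event; the `lookup_customer` helper and its dummy data stand in for the real Instaedge integration, which is not described in detail here:

```python
def lookup_customer(phone_number):
    """Hypothetical CRM lookup -- in practice this would query the Instaedge customer database."""
    dummy_db = {"+6281234567890": {"name": "Example Customer", "customerId": "C-1001"}}
    return dummy_db.get(phone_number)


def lambda_handler(event, context):
    # Amazon Connect passes the caller's number in the contact-flow event.
    phone_number = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]

    record = lookup_customer(phone_number)
    if record is None:
        # Returned attributes can drive the contact flow (e.g. route as "unknown caller").
        return {"customerFound": "false"}

    # Connect expects a flat map of key/value pairs in the response.
    return {
        "customerFound": "true",
        "customerName": record["name"],
        "customerId": record["customerId"],
    }
```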

Modernizing an application from VMs to container deployment – a real-world case study


Customer: One of India's largest online marketplaces

 

Problem Statement

The customer decided to modernize its applications, moving from virtual machines (VMs) to a container-based deployment. Shifting from VMs to containers allows developers to deliver changes in a fraction of the time in a cost-effective way, and once an app is in a container it is portable: the team can move it freely between AWS, Azure, Google Cloud and on-premises environments, optimizing the benefits of a hybrid setup. The customer anticipated that demand for women's fashion products would grow quickly and spike during sales and other promotional events. As the team scaled, the deployment process became a bottleneck: the team was frequently troubleshooting deployment failures, which caused delays and missed target dates. The customer's platform was hosted on the Google Cloud Platform and comprised approximately 25 servers, the majority based in the Central India region.

Proposed Solution

Powerup conducted detailed cloud compatibility assessments to chart out the migration of the existing platform to a scalable and highly available Google Kubernetes Engine (GKE) cluster, predominantly hosted in the Asia South region. GKE is a managed, production-ready environment for deploying containerized applications. The migration was conducted in multiple waves to ensure that the customer's production workloads were not affected. At the end of the migration, the customer's annual bill was expected to be around US$0.15 million.

The migration roadmap consisted of the following steps:

  • A separate Virtual Private Cloud (VPC) was created for the production/stage environments
  • A highly available private Kubernetes (K8s) cluster was provisioned through GKE
  • A MySQL VM was provisioned and installed through Terraform modules
  • A Kafka VM was provisioned and installed through Terraform modules
  • Four node pools were created in the Kubernetes cluster
  • Each pool hosts the following stateful components:
    – Elasticsearch
    – Redis
    – MongoDB
    – Neo4j

Stateless application microservices were deployed in each pool according to their priority, and an ingress load balancer was deployed in the K8s cluster to route traffic to the microservices (a deployment sketch follows).
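
A minimal sketch, using the official Kubernetes Python client, of how a stateless microservice could be pinned to one of the GKE node pools with a nodeSelector. The pool name, service name, image and namespace are placeholders, not the customer's actual workloads:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes the kubectl context already points at the GKE cluster
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="catalog-service"),            # placeholder service
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "catalog-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "catalog-service"}),
            spec=client.V1PodSpec(
                # Pin the workload to a specific GKE node pool.
                node_selector={"cloud.google.com/gke-nodepool": "pool-high-priority"},
                containers=[
                    client.V1Container(
                        name="catalog-service",
                        image="asia.gcr.io/example-project/catalog-service:1.0",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ],
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="production", body=deployment)
```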

Benefits

Going through the modernization process helped the team increase its velocity and optimize costs. Deployments are now seamless and automated, which enables the team to deploy more frequently to lower environments and reduces the time it takes to roll out a new version to production. Costs came down by 40% as the customer moved to a container platform. Moreover, now that the application runs in containers, it is portable between on-premises and public cloud environments.

Cloud platform

GCP

AI-based solution


Customer: A leading global food and agri-business company

Problem Statement

Our client, a leading global food and agri-business company, was in the process of building an e-commerce application for their products to ensure global access and availability. They needed a solution that gives them complete visibility into their microservices and PaaS architecture and tracks every application transaction rather than a sample of them. They also wanted visibility into user analytics so they could analyse conversion trends and user behaviour in the context of a user session.

Given this complexity, they needed a solution that provides automated problem detection and root-cause analytics, so they could focus on the findings and make the end-user experience smoother rather than invest time in finding the root cause of those problems.

Proposed Solution

After a thorough evaluation, Powerup recommended an AI-based solution that can automatically analyse all dependencies at the microservices level and trace root causes down to code level. For this, Powerup leveraged the capabilities and offerings of the Dynatrace APM tool.

The approach

Implementation stage: Powerup implemented Dynatrace by deploying OneAgent on the Kubernetes hosts, which initiated monitoring of all the microservices. Within a few minutes, Dynatrace automatically discovered the application topology map with its dependencies.

Powerup also integrated the Azure PaaS services with Dynatrace to gain complete visibility into the application.

Configuration stage

  1. Management zones: Powerup configured different management zones so that each team has visibility into the data relevant to it.
  2. User tagging: Powerup configured user session tagging and key user actions, and set up conversion goals to track revenue against user experience.
  3. Dashboard: Powerup created an all-in-one dashboard so that user experience, application transaction status, infrastructure health, API calls and problem detection can be tracked in a single view.

Dynatrace applies dynamic thresholds to all detected anomalies; Powerup helped the customer understand and analyse the automatically detected problems and trace their root causes (a sketch of querying detected problems through the Dynatrace API follows). Powerup also ensured high availability and quick content delivery of the application at a global level by running the PaaS services in HA mode and using a CDN for fast responses.
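
A small sketch of how the automatically detected problems could be pulled programmatically through the Dynatrace Problems API v2, for example to feed a ticketing workflow. The tenant URL, token and selector are placeholders; this is an illustration of the public API rather than part of the delivered solution:

```python
import requests

DYNATRACE_BASE_URL = "https://abc12345.live.dynatrace.com"   # placeholder tenant
API_TOKEN = "dt0c01.EXAMPLE_TOKEN"                            # placeholder API token

response = requests.get(
    f"{DYNATRACE_BASE_URL}/api/v2/problems",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    params={"problemSelector": 'status("open")'},
    timeout=30,
)
response.raise_for_status()

for problem in response.json().get("problems", []):
    # Each problem carries the impacted entities and, where available, a root-cause entity.
    root_cause = problem.get("rootCauseEntity") or {}
    print(problem["title"], "-", root_cause.get("name", "root cause pending"))
```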

Cloud platform

Azure

Technologies used

Dynatrace OneAgent, Dynatrace DEM, Kubernetes, Azure PaaS services, CDN

DR and Migration


Customer: Matrimony.com

Problem Statement

Until recently, the online matrimony service provider Matrimony.com implemented traditional disaster recovery through a secondary data centre in Mumbai. The business needed a technology infrastructure that could both keep up with demand and help drive further growth. Purchasing duplicate storage, compute and connectivity resources for the secondary location as the business scaled translated into an additional cost burden, much of which might never actually be used. Given the "always-on" nature of the business, it was of paramount importance that application availability remained high. Taking all the above factors into consideration, the business decided to leverage the benefits of the public cloud by migrating its core matrimony applications to AWS.

Proposed Solution

After thorough evaluation, Matrimony.com engaged Powerup and decided to use
AWS to build its business continuity and DR solution.

The approach – A pilot with DR

A 'pilot light' DR strategy was chosen, with a minimal replica of the entire DC setup running on AWS. All application, database and High Availability (HA) proxy instances were replicated to instances of minimal size to optimize cost, in a classic backup-and-restore scenario. AWS allows a pilot-light model to be maintained by configuring and running only the most critical core elements of a system. When recovery is required, a full-scale production environment can be rapidly provisioned around that critical core by upgrading the instances (a sketch of this upgrade step follows).
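
A hedged boto3 sketch of the "upgrade the pilot light" step: stopping a minimally sized instance, changing its instance type and starting it again at production size. The instance ID and target type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder pilot-light instance
PRODUCTION_TYPE = "m5.2xlarge"        # placeholder production size

# The instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Resize from the minimal pilot-light size to the full production size.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": PRODUCTION_TYPE},
)

ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```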

Powerup built a replica of all required servers and launched it using AWS CloudFormation (CF) templates. For Matrimony.com, the legacy applications required Powerup to reuse the same IP addresses in the new environment. Powerup used an asymmetric routing mechanism to accommodate multiple IP addresses and resolve connectivity issues on the secondary IP addresses. The load balancers needed custom static private IPs to accommodate the legacy applications, which Elastic Load Balancer did not support. To resolve this, Powerup set up highly available HAProxy instances with keep-alive as an alternative to the internal load balancer. Keep-alive, when enabled, allows the load balancer to reuse connections to the instance, which reduces CPU utilization. In this case, failover support was enabled between the two HAProxy servers by load balancing between DC and DR with periodic application checks. AWS CodeCommit was used to push code updates to both the DC and DR environments simultaneously.

Powerup complied with the customer's data centre security guidelines and the migration was successful. Multiple VPCs were created for production, recovery and management applications. All application servers were migrated using AWS Server Migration Service (SMS), which replicates server VMs as cloud-hosted Amazon Machine Images (AMIs) ready for deployment. Lambda was used to trigger the creation of new AMIs (a sketch follows). Database servers were deployed on EC2 and replicated using native replication techniques. The configuration of the environment was automated with AWS CF templates.
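
A minimal sketch of a Lambda handler that triggers new AMI creation, assuming the instance IDs are passed in the invocation event. The event shape and naming convention are illustrative assumptions:

```python
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2")


def lambda_handler(event, context):
    """Create a fresh AMI for each instance ID passed in the event."""
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    image_ids = []

    for instance_id in event.get("instance_ids", []):
        image = ec2.create_image(
            InstanceId=instance_id,
            Name=f"dr-{instance_id}-{timestamp}",
            Description="Scheduled DR image",
            NoReboot=True,  # avoid rebooting production servers during image creation
        )
        image_ids.append(image["ImageId"])

    return {"created_images": image_ids}
```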

Cloud platform

AWS.

Technologies used

EC2, CloudFormation, Lambda, S3, AWS SMS-Migration tool, DMS, ELB/ALB, VPC.

How Botzer (an AI-powered enterprise chatbot) helped automate customer support for Future Generali


Written by Gopinath P, Project Manager at Powerupcloud Technologies.

When you are one of the top Health Insurance service providers in a country such as India, with a population of over 1.3 billion people, you should know that your customer care team is going to be working around the clock to resolve customer queries & issues.

Powerup engaged with Future Generali India Insurance Company Ltd., where the customer care team is the single point of contact for handling servicing queries and complaints from customers across the end-to-end life insurance policy process. Ensuring a high level of customer satisfaction remains core to such businesses. The dedicated call centre receives approximately 30,000 queries a month. These calls were being handled entirely by human agents, which led to a high load on agents and manual inefficiencies.

A separate team looks after the selling of policies, increasing business volumes and driving revenue. Being a completely manual process, this creates a bottleneck in the number of sales and often produces incorrect recommendations on the policies to be sold, resulting in customer churn and lost business.

When Powerup was approached with the above problem statement, we first analyzed the call recordings from their call centre. Most of the queries could be classified under a limited set of service queries. In addition, while enquiring about a policy, the user is generally looking for a set of recommendations on policy, premium and payment terms that suit their requirements. Powerup's solution specialists designed a solution that not only supports customer queries but also recommends the most relevant policies to customers, helping them sign up and close deals much faster. Powerup's Botzer was a perfect fit for the insurance giant, with ready-made modules and integrations available.

Botzer is an AI-powered enterprise chatbot that allows customized business solutions to be deployed and hosted in the customer's own account, integrating with multiple enterprise systems. Powered by intelligent natural language processing and machine learning algorithms, it is capable of understanding even the most complex customer queries. With Botzer, customer support was automated for this life insurance giant across channels, including their website, customer mobile apps and social channels such as Facebook Messenger.

Customer response time for complex queries was reduced to under 3 minutes, compared to 24 hours earlier. The resolution rate increased by more than 50%, while agent load for inbound calls reduced by 60%.

The bot not only improved post-sales support but also gave customers targeted recommendations based on their preferences and lifestyle, allowing them to buy policies within minutes.

The bot also performs sentiment analysis on incoming user queries and responds based on the identified sentiment: a positive sentiment gets a cheerful reply, while a negative sentiment is answered in an empathetic tone. If the bot is not able to respond to a query, the query is passed to a live agent to engage with the customer.
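
Botzer's internal sentiment model is not described here; as an illustration only, the sketch below shows how a sentiment-driven reply could be chosen using Amazon Comprehend as a stand-in for the bot's own classifier:

```python
import boto3

comprehend = boto3.client("comprehend")


def reply_for(user_message):
    """Pick a reply tone from the detected sentiment; hand off to a live agent when unsure."""
    result = comprehend.detect_sentiment(Text=user_message, LanguageCode="en")
    sentiment = result["Sentiment"]  # POSITIVE, NEGATIVE, NEUTRAL or MIXED

    if sentiment == "POSITIVE":
        return "Glad we could help! Is there anything else you'd like to know?"
    if sentiment == "NEGATIVE":
        return "I'm really sorry about the trouble. Let me look into this for you right away."
    # Neutral/mixed sentiment or no clear answer: the caller routes the query to a live agent.
    return None
```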

The bot also automates workflows to accelerate and close sales. Once recommendations have been given to customers based on their preferences, the bot connects them to a live agent to close the deal.


The bot provides recommendations to the user based on lifestyle and preferences

With Botzer, we not only automated customer support and consolidated the support experience across channels, but also provided a comprehensive set of analytics. Custom-built dashboards allow the business to view user profiles, preferences, journeys and how users have transacted with the system. The marketing and customer servicing teams gain insights into customer buying behaviour and preferences, allowing them to design high-performing campaigns that deliver a higher ROI.

Cloud Storage with Horos App


Customer: Amaara Vectors Private Limited

 

Problem Statement

Amaara Vectors Private Limited has its own custom version of the open-source PACS application Horos. They wanted to integrate it with cloud storage and were looking at Amazon S3 for better scalability, flexibility and security. The cloud storage should behave like a drive on the local system, where documents can be downloaded and uploaded. They also wanted to be notified through WhatsApp for Business when a new image is ready to view.

 

Proposed Solution

Powerup helped Amaara Vectors design a well-architected solution and migrate to AWS, along with 2 months of initial support for testing and bug fixes.

 

[Architecture diagram]

 

Description:

1. Various scan images arrive on the Mac systems in each centre, to be viewed in the Horos application.

2. The Mountain Duck agent will be running on macOS, and S3 buckets will be mounted as local volumes on the Mac.

3. Mountain Duck will be configured to sync all the data back to S3.

4. A script will be running on the Mac systems to delete any file that has not been accessed in 15 days; the specific time period will be a configurable option.

5. Powerupcloud will develop a small, lightweight NodeJS agent that keeps a live connection with the notification server running on AWS.

6. An OpenVPN server will be running on AWS to establish a point-to-site VPN tunnel between the Macs and AWS for secure uploads.

7. All files on S3 will be encrypted using KMS.

8. Once a file is uploaded to S3, two Lambda functions will be triggered: one calls the WhatsApp for Business API for WhatsApp notifications, and the other triggers the NodeJS code to broadcast the notification to all Mac systems belonging to the same centre (see the sketch after this list).

9. Illustration: ABC Diagnostics is an organization with three diagnostic centres, at Rajajinagar, Chamarajapet and HSR Layout. A brain MRI scan is taken at Rajajinagar and uploaded to the cloud automatically. The organization's brain MRI specialist at HSR gets a notification on his workstation (Mac) and on WhatsApp (Business). He then clicks on the "Rajajinagar" tab in his Horos application and diagnoses the image. This textual diagnosis is uploaded to the cloud. The radiologist in Rajajinagar gets a notification about the uploaded diagnosis and downloads it. It is then verified and provided to the patient.
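
A minimal sketch of the S3-triggered notification logic from step 8, folded into a single Lambda handler for brevity. The WhatsApp Business API endpoint, the notification-server URL and the centre-name key prefix are placeholders, not the actual integration details:

```python
import json
import os
import urllib.request

# Placeholders -- the real WhatsApp Business API endpoint, token and
# notification-server URL would come from configuration.
WHATSAPP_API_URL = os.environ.get("WHATSAPP_API_URL", "https://example.com/whatsapp/send")
NOTIFY_SERVER_URL = os.environ.get("NOTIFY_SERVER_URL", "https://example.com/notify")


def post_json(url, payload):
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()


def lambda_handler(event, context):
    """Triggered by the S3 upload event; notifies WhatsApp and the centre's Mac systems."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        centre = key.split("/")[0]  # assumes keys are prefixed with the centre name

        post_json(WHATSAPP_API_URL, {"centre": centre, "message": f"New image ready: {key}"})
        post_json(NOTIFY_SERVER_URL, {"centre": centre, "bucket": bucket, "key": key})
```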

 

Cloud platform

AWS.

 

Technologies used

S3, Lambda, KMS, IAM.

Netmagic to AWS Migration


Customer: PayU

About Customer

PayU is a fintech company that provides payment technology to online merchants.
The company was founded in 2002 and is headquartered in Hoofddorp,
Netherlands. It allows online businesses to accept and process payments through
payment methods that can be integrated with web and mobile applications.

Problem Statement

PayU needed to migrate 2 of their core applications, PayUbiz and PayUmoney, from their existing Netmagic data center to the AWS cloud. The challenge was that 400+ VMs needed to be migrated in just 3 months to support the annual sale days of two of the largest e-commerce players in India. They were required to handle 8000+ transactions per second at the database level with improved reliability and automated scalability, which their existing on-premises setup could not deliver.

Proposed Solution

➢ Powerup architects worked closely with the PayU team to do a detailed application discovery of the current Netmagic environment.
➢ Based on the data collected, a blueprint architecture was designed mapping the current environment to AWS services following the 6 R's of migration. A detailed TCO analysis was also done so that the customer was clearly aware of the benefits of moving to the AWS cloud.
➢ Proper load testing was done to finalize the sizing of the application servers.
➢ All the application servers were migrated using AWS VM Import/Export.
➢ The on-premises MySQL databases were migrated to AWS Aurora using the Database Migration Service.
➢ User sessions and the database cache were stored in Redis.
➢ Classic Load Balancers were used to distribute traffic between the application servers.
➢ Direct Connect was set up between on-premises and the AWS Mumbai region; VPN tunnels were also set up for redundancy.
➢ Kafka is used to stream all the logs and Logstash to analyze them.
➢ All sensitive data, such as user card details, is encrypted using KMS (a sketch follows this list).
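
A hedged boto3 sketch of field-level encryption with KMS for sensitive values such as card tokens. The key alias is a placeholder, and the helper names are illustrative rather than part of PayU's actual codebase:

```python
import base64

import boto3

kms = boto3.client("kms")
KEY_ID = "alias/payments-data"   # placeholder KMS key alias


def encrypt_field(plaintext: str) -> str:
    """Encrypt a sensitive field (e.g. a card token) with the KMS key."""
    result = kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext.encode("utf-8"))
    return base64.b64encode(result["CiphertextBlob"]).decode("ascii")


def decrypt_field(ciphertext_b64: str) -> str:
    """Decrypt a previously encrypted field; KMS infers the key from the ciphertext."""
    result = kms.decrypt(CiphertextBlob=base64.b64decode(ciphertext_b64))
    return result["Plaintext"].decode("utf-8")
```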

Outcomes

➢ The customer was successfully migrated from the MySQL database to AWS Aurora RDS.
➢ Flipkart's Xiaomi sale was a huge success, with the AWS infrastructure able to handle 8000+ TPS.
➢ The customer was able to achieve the required scalability and security on the cloud.

AWS Services used

➢ EC2 – to host all application and web servers
➢ EBS – as storage attached to EC2
➢ VPC – to create the required isolated networks on AWS
➢ ElastiCache – to host the Redis caching engine
➢ RDS Aurora – to host the database
➢ KMS – to encrypt data at rest on EBS and S3
➢ S3 – to host the OVF images and to store backups, other static assets and logs
➢ CloudWatch – used as the monitoring tool
➢ Classic Load Balancer – to distribute traffic and terminate SSL
➢ Direct Connect – to establish a direct private line between AWS and the customer DC
➢ IAM – for Identity and Access Management

Microsoft Workloads


Customer: Sompo

Customer Engagement

Sompo International was established in March 2017 with the acquisition by Sompo Holdings, Inc. (Sompo) of Endurance Specialty Holdings Ltd. (Endurance) and its wholly owned operating subsidiaries. Sompo's core business encompasses one of the largest property and casualty insurance groups in the Japanese domestic market. Seeking opportunities to grow its business globally, Sompo acquired Endurance, a global provider of property and casualty insurance and reinsurance, to effectively become its international operation.

Problem Statement

Sompo International wanted to migrate 2 of their web services from on-premises to AWS Elastic Beanstalk. Both are .NET-based applications that use Microsoft SQL Server as the backend. The customer wanted to use RDS for the database and AD authentication for SQL Server access. Sompo International wanted to work with a strong cloud consulting partner like Powerupcloud to help them migrate the applications to AWS, manage those applications 24x7 and then build DevOps capabilities on the cloud so that Sompo could concentrate on application development.

Proposed Solution

➢ AWS accounts will be created and managed using AWS Organizations according to customer requirements.
➢ Appropriate users, groups and permissions will be created using the Identity and Access Management (IAM) service.
➢ IAM roles will be created to access the different AWS services.
➢ The network will be set up using the VPC service. Appropriate CIDR ranges, subnets, route tables etc. will be created.
➢ NAT gateways will be deployed in 2 public subnets in 2 different Availability Zones of AWS.
➢ A VPN tunnel will be set up from the customer location to the AWS data center.
➢ 2 Application Load Balancers will be created for the 2 applications being migrated.
➢ The Route 53 service will be used to create the necessary DNS records.
➢ An open-source DNS forwarding application called Unbound will be deployed across 2 AZs for high availability. Unbound allows resolution of requests originating from AWS by forwarding them to the on-premises environment, and vice versa.
➢ 2 Elastic Beanstalk environments will be created for the 2 applications, and the .NET code will be uploaded and then deployed on them (a deployment sketch follows this list).
➢ Windows Server 2016 R2 is used to deploy the applications and AD.
➢ Both applications will be deployed across 2 Availability Zones, and auto scaling will be enabled for high availability and scalability.
➢ The MS SQL database will be deployed on the RDS service of AWS, and the Multi-AZ feature will be enabled for high availability. The database will be replicated from on-premises to AWS by taking the latest SQL dump and restoring it, by enabling Always On replication between the databases, or by using the AWS DMS service. RDS SQL authentication will be used instead of Windows authentication.
➢ An ElastiCache Redis cluster will be deployed for storing the user sessions. The Multi-AZ feature will be turned on for high availability.
➢ All application logs will be sent to Splunk. VPC peering will be enabled between the 2 VPCs.
➢ The CloudWatch service will be used for monitoring, and SNS will be used to notify users in case of alarms, metrics crossing thresholds etc.
➢ All snapshot backups will be taken regularly and automated based on best practices.
➢ All server sizing was initially based on the current sizing and utilization shared by the customer. Based on the utilization reports in CloudWatch, servers were scaled up or down.
➢ A NAT gateway is used for instances in the private network to have access to the internet.
➢ Security groups are used to control traffic at the VM level. Only the required ports will be opened, and access allowed only from the required IP addresses.
➢ Network Access Control Lists (NACLs) are used to control traffic at the subnet level.
➢ SSL certificates will be deployed on the load balancers to protect data in transit.
➢ CloudTrail will be enabled to capture all the API activities happening in the account.
➢ VPC flow logs will be enabled to capture all network traffic.
➢ ALB access logs will be enabled.
➢ These logs will be analysed by Amazon GuardDuty for threat detection and for identifying malicious activities and account compromise.
➢ AWS Config will be enabled, and all the AWS-recommended Config rules will be created.
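
A minimal boto3 sketch of the Elastic Beanstalk deployment step for one of the .NET applications, assuming the packaged build is already in S3. The application, environment, version label and bucket names are placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

APP_NAME = "claims-portal"                 # placeholder application
ENV_NAME = "claims-portal-prod"            # placeholder environment
VERSION_LABEL = "v1.0.42"                  # placeholder build label
BUNDLE_BUCKET = "sompo-deploy-artifacts"   # placeholder S3 bucket with the .NET package
BUNDLE_KEY = "claims-portal/v1.0.42.zip"   # placeholder package key

# Register the packaged .NET build as a new application version...
eb.create_application_version(
    ApplicationName=APP_NAME,
    VersionLabel=VERSION_LABEL,
    SourceBundle={"S3Bucket": BUNDLE_BUCKET, "S3Key": BUNDLE_KEY},
    Process=True,
)

# ...and roll it out to the load-balanced, auto-scaled environment.
eb.update_environment(
    ApplicationName=APP_NAME,
    EnvironmentName=ENV_NAME,
    VersionLabel=VERSION_LABEL,
)
```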

AWS Services used:

EC2, EBS, ALB, RDS, Route 53, S3, CloudFormation, CloudWatch, CloudTrail, IAM, Config, GuardDuty, Systems Manager, Auto Scaling, Transit Gateway

3rd Party Solutions Used:

Unbound, Okta

[Architecture Diagram]

Windows Stack used:

➢ .NET Applications
➢ IIS Web Server
➢ RDP Gateway
➢ SQL Server Enterprise Database
➢ Active Directory

Outcomes of Project

➢ Powerupcloud was able to set up an automated landing zone for Sompo
➢ Sompo was able to meet its high availability and scalability requirements
➢ Sompo was able to integrate the migrated applications with its on-premises legacy systems seamlessly

Automated photo moderation


Customer: Shaadi.com

A leading matrimony site in India

Problem Statement

A leading matrimony site in India receives 20,000 new profile creations every day.
A team of 16 reviews the uploaded profile pictures and moderates them based on
9 parameters including nudity, celebrity, blur, group photos, photoshopped images, etc. The customer wanted to automate this moderation process to improve efficiency and reduce manpower costs.

Proposed Solution

Powerup used a combination of Amazon Rekognition and a custom rule engine to moderate the images in real time. The solution consistently achieved above 80% accuracy, bringing the moderation time down from 24 hours to 3 minutes and reducing the headcount from 16 to 4 (a moderation sketch follows).
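
A minimal sketch of how Rekognition calls could feed such a rule engine. The thresholds and the three rules shown here are illustrative; the production engine covered 9 parameters:

```python
import boto3

rekognition = boto3.client("rekognition")

# Illustrative rule thresholds -- not the customer's actual configuration.
MODERATION_MIN_CONFIDENCE = 80.0
MAX_FACES_ALLOWED = 1          # reject group photos
MIN_SHARPNESS = 30.0           # reject very blurry photos


def moderate(bucket, key):
    image = {"S3Object": {"Bucket": bucket, "Name": key}}

    # Nudity / explicit-content labels.
    moderation = rekognition.detect_moderation_labels(
        Image=image, MinConfidence=MODERATION_MIN_CONFIDENCE
    )
    if moderation["ModerationLabels"]:
        return "REJECT", [label["Name"] for label in moderation["ModerationLabels"]]

    # Face-based rules: group photos and blur.
    faces = rekognition.detect_faces(Image=image, Attributes=["DEFAULT"])["FaceDetails"]
    if len(faces) > MAX_FACES_ALLOWED:
        return "REJECT", ["GroupPhoto"]
    if faces and faces[0]["Quality"]["Sharpness"] < MIN_SHARPNESS:
        return "REJECT", ["Blurry"]

    return "APPROVE", []
```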

Cloud Platform

AWS.

Technologies used

Amazon Rekognition, Lambda, OpenCV, Python.