The customer is one of the largest Indian entertainment companies, engaged in acquiring, co-producing, and distributing Indian cinema across the globe. They believe that media and OTT platforms can derive maximum benefit from the multi-tenant media management solutions provided by the cloud. They are therefore looking at migrating their existing servers, databases, applications, and content management system to the cloud for better scalability, easier maintenance of large volumes of data, modernization, and cost-effectiveness. The customer also intends to evaluate alternative migration strategies such as restructuring and refactoring if need be.
The customer is a global Indian entertainment company that acquires, co-produces, and distributes Indian films across all available formats such as cinema, television, and digital new media. The customer became the first Indian media company to list on the New York Stock Exchange. It has over three decades of experience in establishing a global platform for Indian cinema. The company has an extensive and growing movie library comprising over 3,000 films, which include Hindi, Tamil, and other regional language films for home entertainment distribution.
The company also owns a rapidly growing Over-The-Top (OTT) platform. With over 100 million registered users and 7.9 million paying subscribers, the customer is one of India’s leading OTT platforms, with the biggest catalogue of movies and music across several languages.
Problem statement / Objective
The online video market has brought a paradigm shift in the way technology is used to enhance the customer journey and user experience. Media companies have huge storage and serving needs, as well as a requirement for high availability via disaster recovery plans, so that continuous, 24x7x365 content serving is available for users. Cloud can help media and OTT platforms address some pressing business challenges. Media and OTT companies are under constant pressure to continuously upload more content cost-effectively. At the same time, they have to deal with changing patterns in media consumption and the ways in which it is delivered to the audience.
The customer was keen on migrating their flagship OTT platform from a key public cloud platform to Microsoft Azure. Some of the key requirements were improved maintainability, scalability, and modernization of technology platforms. The overall migration involved re-platforming and migrating multiple key components such as the content management system (CMS), the Application Programming Interfaces (APIs), and the data layer.
Powerup worked closely with the client’s engineering teams and with the OEM partner (Microsoft) to re-architect and re-platform the CMS component by leveraging the right set of PaaS services. The deployment and management methodology changed to containers (Docker) and Kubernetes.
Key learnings from the project are listed below:
Creating a bridge between the old database (MySQL) and the new database (PostgreSQL).
Migration of a massive volume of content from the source cloud platform to Microsoft Azure.
Rewriting the complete CMS app using a modern technology stack (Python/Django) while incorporating functionality enhancements.
Setting up and maintaining the DevOps pipeline on Azure using open source components.
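The first learning above – bridging the legacy MySQL database with the new PostgreSQL one – can be sketched as a small row-translation layer. All table, column, and flag names below are hypothetical assumptions for illustration only; a real bridge would be driven by the live MySQL schema.

```python
# Sketch of a MySQL -> PostgreSQL bridge: rows read from the legacy
# MySQL schema are translated into parameters for the new PostgreSQL
# schema. Table and column names here are hypothetical.

def translate_row(mysql_row):
    """Map a legacy MySQL row (as a dict) to the new PostgreSQL schema."""
    return {
        "id": int(mysql_row["content_id"]),
        "title": mysql_row["content_title"].strip(),
        # Legacy flag stored as 'Y'/'N'; the new schema uses a boolean.
        "published": mysql_row["is_live"] == "Y",
    }

def build_insert(table, row):
    """Build a parameterized INSERT statement for the PostgreSQL side."""
    cols = ", ".join(row)
    placeholders = ", ".join(f"%({c})s" for c in row)
    return f"INSERT INTO {table} ({cols}) VALUES ({placeholders})", row
```

In practice the translated rows would be executed against PostgreSQL through a driver such as psycopg2, letting both databases stay in sync while the new CMS was built out.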
The modernized infrastructure powered by Azure provided improved scalability and stability. The customer was able to minimize infrastructure maintenance using PaaS services. The modular design enabled by the migration empowered developers to prototype new features faster.
The customer’s environment on AWS was facing scalability challenges: it was maintained across a heterogeneous set of software solutions spanning many programming languages and systems, and no fault-tolerance mechanism was implemented. The lead time to get a developer operational was high, as developers ended up waiting a long time for access to cloud resources like EC2 and RDS. Additionally, the deployment process was manual, which increased the chances of human error and configuration discrepancies. Configuration management took a long time, which further slowed down deployments. Furthermore, there was no centralized mechanism for user management, log management, or cron job monitoring.
For AWS cloud development, the built-in choice for infrastructure as code (IaC) is AWS CloudFormation. However, before building the CloudFormation (CF) templates, Powerup conducted a thorough assessment of the customer’s existing infrastructure to identify gaps and plan the template preparation phase accordingly. Below are a few key findings from the assessment:
Termination Protection was not enabled for many EC2 instances
IAM Password policy was not implemented
Root Multi-Factor Authentication (MFA) was not enabled
IAM roles were not used to access the AWS services from EC2 instances
CloudTrail was not integrated with CloudWatch Logs
S3 access logs were not enabled for the CloudTrail S3 bucket
Log metric filters and alarms were not enabled for: unauthorized API calls; use of the root account to access the AWS Console; IAM policy changes; changes to CloudTrail, AWS Config, and S3 bucket policies; and changes to security groups, NACLs, route tables, and VPCs
SSH ports of a few security groups were open to the public
VPC Flow Logs were not enabled for a few VPCs
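Findings like the missing log metric filters can be remediated programmatically. As one hedged sketch, the function below assembles the parameters for a CloudWatch Logs metric filter on unauthorized API calls; the resulting dict is what one might pass to boto3’s CloudWatch Logs client via `client.put_metric_filter(**params)`. The log group and metric namespace names are illustrative assumptions, not values from the engagement.

```python
# Build the parameters for a CloudWatch Logs metric filter that counts
# unauthorized API calls recorded in the CloudTrail log group. The dict
# would be passed to boto3: logs_client.put_metric_filter(**params).
# Log group and namespace names below are illustrative.

def unauthorized_calls_filter(log_group="CloudTrail/DefaultLogGroup"):
    return {
        "logGroupName": log_group,
        "filterName": "UnauthorizedAPICalls",
        # Matches AccessDenied / UnauthorizedOperation error codes
        # emitted by CloudTrail events.
        "filterPattern": (
            '{ ($.errorCode = "*UnauthorizedOperation") '
            '|| ($.errorCode = "AccessDenied*") }'
        ),
        "metricTransformations": [{
            "metricName": "UnauthorizedAPICalls",
            "metricNamespace": "CISBenchmark",
            "metricValue": "1",
        }],
    }
```

A CloudWatch alarm on the resulting metric then turns the finding into an actionable alert.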
Powerup migrated their monolithic service to smaller independent services that are self-deployable, sustainable, and scalable. They also set up CI/CD using Jenkins and Ansible. Centralized user management was implemented using FreeIPA, the ELK stack was used for centralized log management, and Healthcheck.io was used for centralized cron job monitoring.
CloudFormation (CF) templates were then used to create the complete AWS environment. The templates can be reused to create multiple environments in the future. 20 microservices were deployed in the stage environment and handed over to the customer team for validation. Powerup also shared the Ansible playbooks that set up the following components – server hardening / Jenkins / Metabase / FreeIPA / repository.
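Reusing one template across environments comes down to parameterizing it. The sketch below builds the keyword arguments one might pass to boto3’s CloudFormation client via `client.create_stack(**kwargs)`; the template URL, parameter names, and instance types are illustrative assumptions.

```python
# Sketch: one CloudFormation template, many environments.
# Each dict maps to cloudformation_client.create_stack(**kwargs).
# Template URL and parameter names are illustrative.

def stack_request(env, template_url, instance_type):
    return {
        "StackName": f"ott-platform-{env}",
        "TemplateURL": template_url,
        "Parameters": [
            {"ParameterKey": "Environment", "ParameterValue": env},
            {"ParameterKey": "InstanceType", "ParameterValue": instance_type},
        ],
        # Required when the template creates named IAM resources.
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
    }

TEMPLATE = "https://example-bucket.s3.amazonaws.com/platform.yaml"  # hypothetical
stage = stack_request("stage", TEMPLATE, "t3.medium")
prod = stack_request("prod", TEMPLATE, "m5.large")
```

The same template thus yields a right-sized stage stack and a larger production stack without duplicating any infrastructure code.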
The architecture is illustrated below:
Different VPCs are provisioned for Stage, Production, and Infra management. VPC peering is established from Infra VPC to Production / Stage VPC.
A VPN tunnel is established between the customer’s office and the AWS Infra VPC for SSH access / infra tool access.
All layers except the elastic load balancer are configured in a private subnet.
Separate security groups are configured for each layer (DB / Cache / Queue / App / ELB / Infra), with only the required inbound/outbound rules.
Amazon ECS is configured in auto-scaling mode, so the ECS workers scale horizontally based on the load on the entire ECS cluster.
Service level scaling is implemented for each service to scale the individual service automatically based on the load.
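Service-level scaling of this kind is typically done with AWS Application Auto Scaling: each ECS service gets a scalable target and a target-tracking policy. The sketch below builds the payloads that would feed boto3’s `register_scalable_target` and `put_scaling_policy` calls; the cluster name, service name, and CPU threshold are illustrative assumptions.

```python
# Sketch of service-level ECS auto scaling via Application Auto Scaling:
# a target-tracking policy on average CPU, defined per service.
# Cluster/service names and thresholds are illustrative.

def scalable_target(cluster, service, min_tasks=2, max_tasks=10):
    return {
        "ServiceNamespace": "ecs",
        "ResourceId": f"service/{cluster}/{service}",
        "ScalableDimension": "ecs:service:DesiredCount",
        "MinCapacity": min_tasks,
        "MaxCapacity": max_tasks,
    }

def cpu_tracking_policy(cluster, service, target_cpu=60.0):
    return {
        "PolicyName": f"{service}-cpu-tracking",
        "ServiceNamespace": "ecs",
        "ResourceId": f"service/{cluster}/{service}",
        "ScalableDimension": "ecs:service:DesiredCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_cpu,
            # Scale when average CPU across the service's tasks drifts
            # from the target value.
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    }
```

With one pair of these per service, each microservice scales its task count independently of the cluster-level EC2 scaling.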
ElastiCache (Redis) is used to store end-user sessions.
A highly available RabbitMQ cluster is configured. RabbitMQ is used as messaging broker between the microservices.
For MySQL and PostgreSQL, RDS Multi-AZ is configured. MongoDB is configured in master-slave mode.
IAM roles are configured for accessing the AWS resources like S3 from EC2 instances.
VPC Flow Logs, CloudTrail, and AWS Config are enabled for logging purposes. The logs are streamed into the AWS Elasticsearch Service using AWS Lambda. Alerts are configured for critical events like instance termination, IAM user deletion, security group updates, etc.
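Alerts on critical events such as instance termination are commonly driven by CloudWatch Events / EventBridge rules. The project’s exact rules are not documented here; as a hedged sketch, the function below produces the kind of event pattern JSON that `put_rule` accepts for EC2 state changes.

```python
import json

# Sketch of an EventBridge / CloudWatch Events pattern that matches
# EC2 instances being terminated or shut down. The JSON string would
# be supplied as the EventPattern of a rule whose target raises an alert.

def instance_termination_pattern():
    return json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["terminated", "shutting-down"]},
    })
```

Similar patterns on `aws.iam` or security-group API calls (via CloudTrail events) cover the other alert cases mentioned above.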
AWS Systems Manager is used to collect the OS, application, and instance metadata of EC2 instances for inventory management.
AMIs and backups are configured for business continuity.
Jenkins is configured for CI / CD process.
The CloudFormation template is used for provisioning and updating the environment.
Ansible is used for configuration management of all the server configurations like Jenkins / bastion / FreeIPA, etc.
The Sensu monitoring system is configured to monitor system performance.
New Relic is configured for application performance monitoring and deployment tracking.
Other services used – Amazon Redshift, FreeIPA, Amazon RDS, Redis.
IaC enabled the customer to spin up an entire infrastructure architecture by running a script. This allows the customer to not only deploy virtual servers, but also launch pre-configured databases, network infrastructure, storage systems, load balancers, and any other cloud service that is needed. IaC completely standardized the setup of infrastructure, thereby decreasing the chances of incompatibility issues between infrastructure and applications, which can therefore run more smoothly. IaC is also helpful for risk mitigation: because the code can be version-controlled, every change to the server configuration is documented, logged, and tracked, and these configurations can be tested just like code. So if there is an issue with a new setup configuration, it can be pinpointed and corrected much more easily, minimizing the risk of failure.
Developer productivity drastically increases with the use of IaC. Cloud architectures can be easily deployed in multiple stages to make the software development life cycle much more efficient.
Tools & AWS services used – Compute: EC2, EKS, ECR, Lambda; Shared storage: SFTP, EFS; Database: RDS PostgreSQL; Advanced networking: Route 53, Route 53 Resolver, custom DHCP; Security: AWS IAM, Active Directory, CloudTrail, AWS Config; IaC: CloudFormation, Terraform; Other tools: Jenkins with SonarQube, Nexus, and Clair.
The customer is regarded as a global insurance giant in the financial sector. ABS is their consolidated insurance system, which they were looking at migrating to AWS along with its supporting applications. They also wanted Powerup to create a Disaster Recovery facility on AWS and make the ABS insurance system available as a backup solution for one of their esteemed banking clients, while also catering to a business continuity strategy, automation of applications, and security & compliance.
The customer is a German multinational and one of the leading integrated financial services companies, headquartered in Munich. Their core business offers products and services in insurance and asset management.
Problem statement / Objective
ABS is a monolithic application while the supporting applications are microservices-based. Hence, a microservice architecture which can seamlessly integrate with the customer’s core insurance module was needed.
They wanted Powerup to deploy their applications on production as well as on a secondary (Disaster Recovery) DR facility on AWS using a Continuous Integration (CI)/ Continuous Deployment (CD) pipeline. This was to serve as a Business Continuity solution for one of their esteemed banking clients.
For business continuity, the customer anticipated a Recovery Time Objective (RTO) of less than 4 hours and a Recovery Point Objective (RPO) of not more than 24 hours.
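As a quick sanity check, an RPO target can be validated against a backup cadence: in the worst case, the most recent backup is one full interval old when disaster strikes. A minimal illustration:

```python
# Illustrative RPO check: worst-case data loss equals the interval
# between successive backups/snapshots, so the cadence must not
# exceed the RPO target (24 hours in this engagement).

def meets_rpo(snapshot_interval_hours, rpo_hours=24):
    """True if the backup cadence satisfies the RPO target."""
    return snapshot_interval_hours <= rpo_hours
```

A 3-hour snapshot cycle, as used later for the DR account, comfortably satisfies the 24-hour RPO.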
In addition to infrastructure deployment, the client requested that all application deployments be automated. Being a financial services company, the customer is bound by multiple regulatory and compliance-related obligations, for which cloud security best practices were also to be implemented.
AWS Landing Zone was set up with the following accounts – Organization Account, Production Account, Dev, Pre-Prod, Management, DR, Centralized Logging & Security Account.
The operational unit consisted of the customer business system – CISL (core insurance layer), RAP (rich application), MFDB (core application database), and CTRLM (batch job automation) – and the non-ABS (non-customer business) system, i.e., the dispatcher.
All Logs will be centrally stored in the logging account. All management applications like Control-M, AD, Jenkins etc. will be deployed in the management account.
The ABS application is deployed across multiple AZs and load-balanced using the AWS Application Load Balancer. Non-ABS applications are microservices-based and talk to ABS to process or fetch the required data based on the request. Over 10 microservices run on Docker within the EKS cluster.
Auto-scaling is enabled at the service level as well as EC2 level to scale out the microservices based on the load. The application uses Active Directory to authenticate.
Amazon Elastic Kubernetes Service (EKS) backed the highly available, reliable, and decoupled API services, which are accessible only inside the customer’s private global shared network. Each module is segregated by namespace.
Jenkins pipelines were used for build automation, the Nexus tool to store artifacts, and Clair for checking vulnerabilities. Build artifact vulnerability management was made easier with the aid of SonarQube.
Active-passive disaster recovery
Actively synchronised AWS Secure File Transfer Protocol (SFTP) provides the directory and private file storage space on the cloud.
Powerup methodically designed and tested a cross-account and cross-region disaster recovery strategy. At the time of live deployment, Docker images are tagged (with versioning) and shipped to the Amazon Elastic Container Registry (ECR) in the DR account. Encrypted Amazon Machine Images (AMIs) and Relational Database Service (RDS) snapshots are passively shipped to the DR account with a Recovery Point Objective (RPO) of 3 hours.
Custom coupled Lambda functions generate, ship, and expire encrypted AMIs and snapshots in the DR account with no human intervention, serving as a backup solution.
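A hedged sketch of the copy parameters such a Lambda might build: run from the DR region, the AMI copy is re-encrypted under the DR account’s KMS key. The AMI ID, key alias, and name below are illustrative assumptions; the resulting dict maps to boto3’s `ec2.copy_image(**params)`.

```python
# Sketch of the cross-region encrypted AMI copy a DR Lambda would perform.
# The dict is passed to boto3 in the DR region: ec2.copy_image(**params).
# IDs, aliases, and names below are illustrative placeholders.

def dr_ami_copy(source_ami, source_region, kms_key_id, name):
    return {
        "SourceImageId": source_ami,
        "SourceRegion": source_region,
        "Name": name,
        # Re-encrypt the copy with the DR account's own KMS key so the
        # image is usable even if the source account is unavailable.
        "Encrypted": True,
        "KmsKeyId": kms_key_id,
    }
```

An analogous builder for `ec2.copy_snapshot` covers the RDS/EBS snapshot side of the 3-hour shipping cycle.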
Advanced strategy to ensure the best security
A custom CloudFormation template helped in monitoring AWS API calls made to change or update the configurations of IAM roles, security group inbound or outbound rules, and EC2. Granular rules are defined in AWS Config for maintaining and remediating as per regulatory compliance.
The customer’s network setup was the biggest challenge. In AWS, the network was completely private – an environment without an Internet Gateway (i.e., direct internet access) or Network Address Translation (NAT) – because of which a custom Dynamic Host Configuration Protocol (DHCP) option set had to be used to work with an existing custom DNS server set up in the customer’s Shared Services account, alongside a custom proxy setup for internet access. The most challenging part was registering the worker nodes with the master in EKS, as some of the internal kubelet components were failing due to the enterprise proxy and custom DNS servers. To fix this, AWS PrivateLink, Route 53 Resolver, and the Kubernetes ConfigMap were fine-tuned.
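The custom DHCP option set mentioned above can be expressed as the payload for boto3’s `ec2.create_dhcp_options`; the DNS server addresses and domain name below are illustrative assumptions, not the customer’s actual values.

```python
# Sketch of a custom DHCP option set pointing VPC instances at
# enterprise DNS servers instead of the default AmazonProvidedDNS.
# Passed to boto3 as ec2.create_dhcp_options(**opts); addresses and
# the domain name are illustrative.

def custom_dns_dhcp_options(dns_servers, domain_name):
    return {
        "DhcpConfigurations": [
            {"Key": "domain-name-servers", "Values": dns_servers},
            {"Key": "domain-name", "Values": [domain_name]},
        ]
    }
```

After creating the option set, it is associated with the VPC (via `ec2.associate_dhcp_options`), and every instance in that VPC then resolves names through the enterprise DNS servers.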
The ABS insurance application was successfully deployed in the AWS environment while meeting all security & high availability guidelines as per the stated compliance directives.
During load tests, the application was found to be able to handle 200 concurrent users successfully.
Microservices made the application programming interface (API) services easier to build and maintain. Flexibility and scalability of the different API applications were also achieved.
The customer could maintain the lifecycle of identifying, investigating and prioritizing vulnerabilities in code as well as containers without any compromise.
The customer could now implement strong access and control measures and maintain an information security policy.
Tabletop run-through and DR scenario simulations ensured business continuity.
The customer offers a broad range of financial products and services to diversified customer segments and has a sizable presence in the large retail market segment through its life insurance, housing finance, mutual fund and retail financial businesses across domestic and global geographies.
The customer, together with a strong network of sub-brokers and authorized persons, serves an approximately 12-lakh-strong client base through 10,052 employees based out of 448 offices across all major cities in India.
Their business comprises multiple asset classes, broadly divided into Credit (retail and corporate), Franchise and Advisory (asset and wealth management, capital markets), and Insurance (life and general insurance).
Cloud computing technology has gained significant momentum in the financial sector, and the customer is looking at building a digital organization to align technology with evolving customer needs and behavior. Though they have been on the cloud from the beginning, cloud migration has accelerated at a rapid pace and they feel the need to keep pace with growing demands.
Problem statement / Objective
With such a manifold existence, the customer realized it was extremely necessary for them to set up an environment that would not just support diverse applications but also cater to teams and/or projects across multiple locations for their domestic as well as global customers. This was possible only if they migrated their applications to the cloud.
Powerup’s scope of work was to carry out a cloud readiness assessment in order to understand how well prepared the customer is for the technology-driven transitional shift. They were to define, plan, assess, and report the customer’s readiness via Migration Readiness Assessment & Planning (MRAP).
The customer’s MRAP Process:
Migration Readiness Assessment & Planning (MRAP) is the process of assessing the current on-premise environment to analyze how ready it is to migrate to the cloud; every organization intending to migrate to the cloud should undergo it. The analysis explains how the entire process works and in what order the events should occur.
The customer carried out MRAP for almost 250 applications and had intended to migrate all the applications that are a part of this assessment.
The first step in planning the MRAP exercise was to understand the number and type of applications, identify the appropriate stakeholders for interviews, tools to be installed, different types of installations, creation of project plans, to name a few.
To begin with, RISC Networks, an application discovery tool, was configured and installed in the customer environment to gather data from the existing on-premise setup. It allowed all customer-specific data to be kept onsite or in a location of the customer’s choice. The application discovery service helped collect hardware and server specification information, credentials, details of running processes, network connectivity, and port details. It also helped acquire a list of all on-premise servers in scope along with their IP addresses, the business application names hosted on them, and the stakeholders using those apps.
Deployment and assessment:
Once the application is deployed and has the necessary access, servers need to be licensed so that the RISC tool can start collecting data. It is recommended to have the tool collect data for at least 2 weeks so that a significant amount of information is captured.
At the customer’s organization, a total of 363 servers were licensed and almost 216 applications that belonged to 7 different lines of businesses (LOBs) were covered in the process.
Application stacks were built for all applications in scope by grouping applications based on connectivity information. Assessments and group interviews were conducted with application users – namely the application, network, security, and DevOps teams – to cross-verify the data provided by the IT and application teams against RISC’s grouping, and to bridge any gaps. A proposed migration plan was to be developed post-analysis, stating the identified migration patterns for the applications in scope and creating customized or modernized target architectures to plan a rapid lift & shift migration strategy.
A comprehensive and detailed MRAP report included information on the overall current on-premise architecture, infrastructure and architecture details for all identified applications, suggested migration methodology for each application, migration roadmap with migration waves, total cost of ownership (TCO) analysis and an executive presentation for business cases.
The purpose of an AWS Landing Zone is to provide a framework for creating, automating, baselining, and maintaining a multi-account environment. This is considered as a best practice usually recommended before deploying applications on AWS.
The customer, with Powerup’s guidance, decided to set up and maintain the following AWS Landing Zone accounts –
Business unit accounts – UAT Account & Production Account
Topology Diagram from RISC tool showing the interdependence of various applications and modules:
The report would also provide details on each application across LoBs that would cover the following information:
Current Application Architecture
To be Architecture on Cloud
Current Application Inventory Details with Utilization.
Recommended Sizing on Cloud
Network Topology for each application.
Migration Methodology – 7 Rs of Migration – Rehost, Refactor etc.
The MRAP report depicted in-depth details on the customized AWS Architecture for the customer:
Identifying the migration pattern for all applications was the key. Target architecture for applications was created in such a manner that it could be changed or improvised, if required, in the future. This architecture catered to not just application and network deployment but also covered non-functional requirements, data security, data sizes, operations, and monitoring and logging.
A VPN tunnel was set up between the customer’s premises and the AWS Transit Gateway, while the Transit Gateway was deployed in the Shared Services account to communicate with the Virtual Private Clouds (VPCs) of other accounts.
Sensu Monitoring Server and Palo Alto Firewall were deployed in the Shared Services Account.
A shared services account was used to host Active Directory (AD) and a bastion host.
The production environment was isolated, as the customer had been running development, test, and production applications from the same account.
Key Findings from the customer MRAP
● Provisioned infrastructure was only 30% utilized.
● Close to 20% of servers were already outdated or turning obsolete within the next one year.
● OS split – 70% Windows, 20% RHEL, 10% open-source Linux distributions.
● Database (DB) split – 70% SQL Server, 20% Oracle, 10% MySQL, PostgreSQL, MariaDB, and MongoDB. Databases were being shared across multiple applications.
● Up to 76 applications were running on the same servers.
● Multiple DB engines ran on the same DB server.
● Servers were being shared across LOBs.
● Close to 20% of applications were open-source applications running on Windows/RHEL – these can easily be moved to Amazon Linux (open source) during migration.
● Close to 20% of applications can be moved to new AMD/ARM architectures to save costs.
● Up to 50% savings on TCO can be achieved over the next 5 years by moving to AWS.
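The headline TCO figure is, at its core, simple arithmetic over the five-year comparison. A hedged illustration with hypothetical cost figures (the customer’s actual numbers are not public):

```python
# Illustrative TCO savings calculation over a fixed comparison period.
# All cost figures below are hypothetical, used only to show the math
# behind a "up to 50% savings over 5 years" claim.

def tco_savings_pct(on_prem_5yr_cost, cloud_5yr_cost):
    """Percentage saved by the cloud option over the comparison period."""
    return round(100 * (on_prem_5yr_cost - cloud_5yr_cost) / on_prem_5yr_cost, 1)

# Hypothetical: $4.0M on-premise vs $2.0M on AWS over five years.
savings = tco_savings_pct(4_000_000, 2_000_000)
```

The real analysis folds in utilization (only 30% here), hardware refresh for the ~20% of obsolete servers, and right-sizing, which is what pushes the savings toward that upper bound.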
With the MRAP assessment and findings in place, the customer now has greater visibility towards cloud migration and the benefits it would derive from implementing it. With a rapid lift & shift migration strategy, they could now look at better resource utilization, enhanced productivity, and operational efficiency going forward.
The customer offers a broad range of financial products and services to diversified customer segments that include corporations, institutions, and individuals across domestic and global geographies. Financial service providers have long been at the forefront of cloud adoption, and the customer has been no exception. Cloud migration has accelerated at a rapid pace across multiple business groups, and the customer plans to stay abreast of the growing surge. The idea was to migrate their applications one by one to AWS.
For this purpose, a migration readiness assessment for almost 250 applications was conducted, which included stakeholder interviews and tool data analysis. A rapid lift and shift migration was intended to be implemented as quickly as possible.
Powerup’s scope of services was to define and plan a business case for the Migration Readiness Assessment & Planning (MRAP) by gathering data from the existing setup and validating it, in order to understand how well equipped the customer was for cloud migration. An MRAP report would then be drafted to act as a roadmap for the actual migration.
Worxogo, a pioneer in AI-driven sales, has extended its services across the globe. They require security for their data as well as at the network level for the infrastructure running on Azure.
PUC recommended the following to enhance the client’s security posture.
1. Azure Security Center:
Azure Security Center is a unified infrastructure security management system that strengthens the security posture of data centers and provides advanced threat protection across hybrid workloads in the cloud – whether they are in Azure or not – as well as on-premises.
In the Security Center, we can set our policies to run on management groups, across subscriptions, and even for a whole tenant.
The advanced monitoring capabilities in Security Center also let us track and manage compliance and governance over time. The overall compliance score provides a measure of how compliant your subscriptions are with the policies associated with your workload.
Security Center continuously discovers new resources deployed across your workloads and assesses whether they are configured according to security best practices. If not, they are flagged and you get a prioritized list of recommendations for what to fix to protect your machines.
As our client adds new resources to the environment, this feature helps validate the resources and fix security issues based on the recommendations.
It also enables us to see the topology of the workloads, so we can check whether each node is properly configured. We can see how the nodes are connected, which helps block unwanted connections that could make it easier for an attacker to creep along your network.
Security Center makes mitigating security alerts one step easier by adding a Secure Score. A secure score is now associated with each recommendation you receive, to help you understand how important each recommendation is to your overall security posture.
Azure Security Center provides the following protections:
Protect against threats
Integration with Microsoft Defender Advanced threat protection
Brute force attacks
Protect IoT and hybrid cloud workloads
Hence, Azure Security Center addresses the growing need for an enterprise-grade security management platform that encompasses both cloud and onsite resources with a unified, analytics-rich, actionable interface that helps you take control of the security of your resources on all fronts.
Azure managed disks automatically encrypt your data by default when persisting it to the cloud. Server-side encryption (SSE) protects your data and helps you meet your organizational security and compliance commitments.
Data in Azure managed disks is encrypted transparently using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant.
Note: Encryption does not impact the performance of managed disks and there is no additional cost for the encryption.
If Azure Security Center is used, it raises an alert if you have VMs that aren’t encrypted. The alerts show High severity, and the recommendation is to encrypt these VMs.
Accessing Azure resources Using Secure VPN Connection
Organizational members access the VM resources daily for deployment and coding purposes, so there is a need for secure communication between the Azure resources and the members’ PCs.
A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. Users use the native VPN clients on Windows and Mac devices for P2S. Azure provides a VPN client configuration zip file that contains the settings required by these native clients to connect to Azure.
For Windows devices, the VPN client configuration consists of an installer package that users install on their devices.
For Mac devices, it consists of the mobileconfig file that users install on their devices.
The zip file also provides the values of some of the important settings on the Azure side that you can use to create your profile for these devices. Some of the values include the VPN gateway address, configured tunnel types, routes, and the root certificate for gateway validation.
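The values shipped in that zip file can be thought of as one profile per device. The sketch below assembles them into a simple profile structure; the field names and sample values are illustrative assumptions, not Azure’s actual file format.

```python
# Illustrative assembly of the P2S VPN settings delivered in Azure's
# client configuration zip (gateway address, tunnel types, routes,
# root certificate) into one per-device profile dict. Field names
# are assumptions for illustration, not Azure's file format.

def p2s_client_profile(gateway_address, tunnel_types, routes, root_cert_b64):
    return {
        "gateway": gateway_address,
        "tunnels": list(tunnel_types),
        "routes": list(routes),
        # Root certificate used by the client to validate the gateway.
        "root_certificate": root_cert_b64,
    }
```

A provisioning script could render such a profile into the native Windows installer settings or the Mac mobileconfig file.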
To add a layer of security, FortiGate firewalls have been configured to monitor incoming/outgoing traffic.
The FortiGate-VM on Microsoft Azure delivers next-generation firewall capabilities for organizations of all sizes, with the flexibility to be deployed as a next-generation firewall and/or VPN gateway. It protects against cyber threats with high performance, security efficacy, and deep visibility.
The customer is a tech-based real estate platform that was running its infrastructure on DigitalOcean. Their setup made it difficult for them to manage their network, containers, and storage connectivity. The customer was unable to use features like auto-scaling, managed Kubernetes, load balancers, and GCS storage.
Google Cloud Platform (GCP) helped in their digital transformation journey with a highly efficient and cost-effective solution. A combination of strategies was adopted – recreation / lift & shift and re-architecture of their current infrastructure running on DigitalOcean – with advanced technologies like managed Kubernetes, App Engine, BigQuery, and managed load balancers. Frontend application servers and Elasticsearch clusters were deployed on GKE. The SQL cluster was deployed on GCP as a master-master setup with Percona.
Number of VMs – 140+
Number of applications migrated – 5
Approximate size of DB – 5 TB+
The customer team is able to scale the infrastructure at any point in time, and increases in web traffic on their website had no impact on end-users, with zero performance issues.
Additionally, managing the current setup on GCP was more efficient. This helped them to gain more control from a security standpoint.
Their environment had been provisioned in an unstructured way in a single project: all users from the technology team had access/visibility to both the stage and prod environments, the complete infrastructure was running on standalone VMs, and scaling was very cumbersome. Maintaining different services like RabbitMQ, MongoDB, Redis, Kafka, and Elasticsearch was complex.
Google Kubernetes Engine (GKE) helped the customer in their digital transformation journey with a highly efficient and cost-effective solution. A complete re-architecture of their current setup running on standalone virtual machines was planned around advanced technologies like managed Kubernetes. Jenkins was implemented for the CI/CD pipeline, ensuring integration of individual jobs, easier code deployment to production, and effortless auditing of logs.
Number of VMs – 180+
Number of applications migrated – 25+
Approximate size of DB – 2 TB
Migrating all the containerized images to managed GKE on Google Cloud helped achieve high availability, scaling, and a 100% increase in performance.
The customer is able to manage their complete application lifecycle and build lifecycle as code; it additionally helped meet the required security compliance.
Tools and technologies used:
Tools used – Bitbucket, Jenkins, MongoDB, InfluxDB, Cloud SQL for MySQL, Percona, RabbitMQ, Redis, Kafka, Elasticsearch, and Kubernetes.
Google services used – Compute Engine, Container Build, Google Kubernetes Engine, Container Registry, Cloud SQL, Stack Driver, Cloud Identity & Access Management, Cloud VPN, Cloud DNS, Cloud Load Balancing.
The customer: A leading therapeutics company in immuno-oncology
A leading therapeutics company in immuno-oncology was running more than 2 TB of research data in an on-premise setup and was facing multiple challenges when performing tests and other application validations, due to constraints on scaling and performance validation.
Google Kubernetes Engine (GKE) helped the customer in their digital transformation journey with a highly efficient and cost-effective solution. A complete re-architecture of their setup, which ran on standalone on-premise virtual machines, was planned around managed Kubernetes. Jenkins was implemented for the CI/CD pipeline, ensuring integration of individual jobs, easier code deployment to production, and effortless auditing of logs. Auto-scaling was seamless, and handling large data became much easier.
Number of VMs – 180+
Number of applications migrated – 25+
Approximate size of DB – 2TB
Migration of all the containerized images to managed GKE on Google Cloud helped achieve high availability and scaling.
The customer was able to manage their complete application and build lifecycle as code, which additionally helped them meet the required security compliance.
Tools and services used:
Tools used – Istio, Jenkins, MySQL, ClamAV, Elasticsearch, and GitHub
Google services used – Compute Engine, Cloud Build, Google Kubernetes Engine, Container Registry, Cloud SQL, Stackdriver, Cloud Identity & Access Management, Cloud VPN, Cloud DNS, Cloud Load Balancing
One of the largest media houses in the country was looking to improve ad placements across channels for better conversion. At the same time, it wanted to take other parameters, such as social media feedback (predominantly Twitter) and EPG information, into consideration. With the push toward digital content, on-premise infrastructure was becoming a cost concern due to the volume of data being generated.
The existing software that provides TRP information delivered it only once a week. Certain reports must be generated in time (precisely within 6 to 12 minutes from source to destination) to support critical business decisions, but a critical failure in the existing flow caused delays because the processes were schedule-based. With all media companies generating these reports, the time taken to produce them and to adjust promos, ad placements, etc. is very critical.
At a high level, the solution involved a complete process transformation from a tightly coupled synchronous architecture to an event-based, loosely coupled asynchronous architecture, ensuring that the end reports are generated as the user requires.
Powerup also helped this client take a cloud-first approach, piping data from different on-premise sources (SAP, Chrome feeds, Twitter feeds, social media feedback in Excel files, etc.) to the cloud. A data warehouse was created into which data extracted from all the channels was moved. The data is then transformed using ETL jobs into a format that can easily be pushed to and visualized in Tableau. The system also has a built-in logging system that tracks parameters such as the time taken for each process, its success or failure, and the reason for any failure.
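The logging idea above can be sketched in Python; the step names and wrapper below are illustrative, not the client's actual pipeline code:

```python
import time

def run_step(name, func, *args):
    """Run one ETL step and record its duration, outcome, and failure reason."""
    record = {"step": name, "status": "success", "error": None, "result": None}
    start = time.monotonic()
    try:
        record["result"] = func(*args)
    except Exception as exc:
        record["status"] = "failure"
        record["error"] = f"{type(exc).__name__}: {exc}"
    record["seconds"] = round(time.monotonic() - start, 3)
    return record

# Illustrative flow: extract channel data, then transform it for the dashboard.
extract = run_step("extract", lambda: [{"show": "A", "trp": 1.5}])
transform = run_step(
    "transform",
    lambda rows: [{**r, "trp_pct": r["trp"] * 100} for r in rows],
    extract["result"],
)
print(transform["status"], transform["seconds"])
```

In a real pipeline each record would be pushed to a monitoring store; a failed record carries its failure reason, covering the timing and success/failure checks described above.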
The end-to-end time taken to generate the critical reports dropped to 3 minutes, which improved the decision-making capability of business leaders.
An auto-recovery feature was built for failures so that no data is lost. The solution was also made modular, with the addition of new channels and scalability in mind, so that components can be added or removed without any code changes.
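A minimal sketch of those two properties, retry-based recovery and pluggable channels, might look like this (names are illustrative, not the production code):

```python
import time

CHANNELS = {}  # channel name -> ingestion function; new channels just register here

def channel(name):
    """Register an ingestion function for a channel without changing existing code."""
    def wrap(func):
        CHANNELS[name] = func
        return func
    return wrap

def ingest_with_recovery(name, payload, retries=3, delay=0.01):
    """Retry a failing channel so transient errors do not lose data."""
    for attempt in range(1, retries + 1):
        try:
            return CHANNELS[name](payload)
        except Exception:
            if attempt == retries:
                raise  # in a real system, hand off to a dead-letter store instead
            time.sleep(delay)

@channel("twitter")
def twitter_feed(payload):
    return {"source": "twitter", "items": len(payload)}

print(ingest_with_recovery("twitter", ["tweet1", "tweet2"]))
```

Adding a channel is just another `@channel(...)` registration, which is one way to get the "no code changes" modularity described above.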
The solution helped management take business-critical decisions in time. With the reports now generated and refreshed every 3 minutes, the client can place ads strategically, which has led to better conversion. TRP is also expected to increase further following this initiative.
The customer, which serves clients across six continents, has a complex IT landscape to manage. The underlying infrastructure supports a huge employee base and all critical applications, including myApp – the digital self-service platform that gives employees a seamless experience across various processes and workflows. myApp enables all its employees and contractors to manage business transactions and access productivity tools, news, videos, communications, and other content via one single application interface. Tens of thousands of its employees worldwide depend on myApp and an associated suite of 150+ applications for their day-to-day activities. But the existing approval-based systems for requests made it difficult to handle higher numbers of transactions and larger volumes of data, resulting in delays in approvals and decreased employee satisfaction. The customer needed a smart Artificial Intelligence (AI) solution that uses advanced decision-making and machine learning not only to resolve this but also to customize the process per request while reducing the number of inputs required from the user.
Powerup conducted an in-depth study of myApp’s systems and interacted with the users to understand the challenges. The major bottleneck was not the sheer number of requests being received on the portal, but the systems’ inability to understand user context and the number of steps involved in getting simple issues resolved.
Powerup designed a solution for the customer that integrates with their myApp portal as a voice engine to automate the user journey on the system. It is a voice-first solution that executes actions on the user's voice inputs. The engine, backed by strong neural networks, understands user context and personalizes itself for each user. It is built on an unsupervised learning model, personalizing the conversation based on the user's past interactions and thus providing each user a unique, easy-to-navigate journey. In the process, users can bypass the transactional system and get issues resolved, from approval to task submission, within 2-3 steps.
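One simple reading of "personalizes the conversation based on the user's past interactions" is frequency-based ranking of suggested actions. The sketch below is an illustrative simplification, not the engine's actual neural model:

```python
from collections import Counter

class InteractionRanker:
    """Surface the actions a user performs most often ahead of everything else."""

    def __init__(self):
        self.history = {}  # user id -> Counter of past actions

    def record(self, user, action):
        self.history.setdefault(user, Counter())[action] += 1

    def suggest(self, user, top_n=3):
        counts = self.history.get(user, Counter())
        return [action for action, _ in counts.most_common(top_n)]

ranker = InteractionRanker()
for action in ["leave_request", "expense_claim", "leave_request", "payslip"]:
    ranker.record("emp42", action)
print(ranker.suggest("emp42"))  # the most frequent action comes first
```

With such a ranking in front of the conversation flow, a user's most common task can be offered immediately, which is what collapses a multi-step transactional journey into 2-3 steps.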
Powerup also implemented Botzer, a chatbot platform, with Amazon Lex and Polly. Customer calls are diverted from the IVR to the chatbot, which takes the customer's request as voice input, performs entity matching, triggers workflows, and answers back immediately. The voice engine supports two languages today: English and Hindi. Customers can get details like Statement of Account, EMI tenure, Balance Due, etc. The intelligence built into the system allows it to behave differently with different users at different times of day: if a user accesses different applications in the morning than in the evening, the engine responds accordingly during the respective hours.
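The time-of-day behaviour could be sketched as follows. This is a hypothetical simplification: the real engine learns these preferences, whereas here they are derived by simply bucketing past usage into morning and evening:

```python
def preferred_apps(usage_log, hour):
    """Return apps a user historically opens in this part of the day, most used first.

    usage_log: list of (hour_of_day, app) tuples from past sessions.
    """
    bucket = "morning" if hour < 12 else "evening"
    counts = {}
    for past_hour, app in usage_log:
        if ("morning" if past_hour < 12 else "evening") == bucket:
            counts[app] = counts.get(app, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

log = [(9, "timesheet"), (9, "news"), (18, "expense_claim"), (19, "expense_claim")]
print(preferred_apps(log, hour=10))  # apps this user favours in the morning
print(preferred_apps(log, hour=20))  # apps this user favours in the evening
```

The same query at 10 a.m. and 8 p.m. yields different suggestions, mirroring the "respond accordingly during the respective hours" behaviour described above.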
Below is a high-level solution workflow of the engine, developed on Amazon Lex and Polly, utilizing Botzer APIs at the backend.
Following is the high-level technical architecture of the implementation. The engine is hosted on the customer's AWS VPC, ensuring data integrity and security. The current architecture is capable of supporting 100,000+ (1 lakh+) customer employees, with 150+ applications on myApp.
Faster ticket resolution and better communication with third-party application providers led to an increase in the number of tickets resolved. At the same time, the number of false positives decreased.