Clouding Out Technical Debt


Compiled by Kiran Kumar, Business analyst at Powerup Cloud Technologies

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies


Large IT organizations, comprising cross-functional teams and multiple products and services, are dedicated to supporting and delivering software solutions on a swift and continuous basis. Such technologies and software solutions are omnipresent and undergo constant change, but keeping abreast of this dynamic scenario is highly demanding and can lead to ambiguity and debt if not monitored from time to time.

Technical debt is considered healthy as long as it stays at reasonable levels. However, if debts aren’t administered in time, businesses may face a larger impact in terms of poor or outdated product design, impaired software maintenance, and delivery delays leading to demotivated teams and dissatisfied stakeholders.

With cloud computing gaining momentum, organizations are reaping benefits in terms of lower infrastructure deployment and maintenance costs, agility, scalability, business continuity, and increased utilization.

However, organizations must also look at resolving issues arising from technical debts by leveraging services offered by cloud providers. 

According to IDC, by 2023, 75% of G2000 companies would commit to providing technical parity to a workforce that is hybrid by design rather than by circumstance, enabling them to work together separately and in real-time. Through 2023, coping with accumulated technical debt would be the top priority where CIOs would look for opportunities to design next-generation digital platforms that modernize and rationalize infrastructure and applications while delivering flexible capabilities. 

Cloud services have the much-needed capabilities to cater to the design, code, and infrastructure components of technical debt, which not only helps upgrade and streamline existing systems but also helps shift focus towards developing and delivering new and innovative products, services, and solutions.

Migrating to the cloud requires a well-coordinated effort between IT operations, infrastructure support, cloud providers, and the organization’s senior management. If an organization plans, coordinates, and executes effectively, technical debt can definitely be reduced or settled using cloud computing.

Technical Debt

Technical debt, in layman’s terms, is debt arising from the pursuit of short-term gains that converts into long-term pain.

“Many times, IT teams need to forgo certain development work such as writing clean code, writing concise documentation, or building clean data sources to hit a particular business deadline,” says Scott Ambler, VP and chief scientist of disciplined agile at Project Management Institute.

Enterprises that couldn’t do it right the first time, whether due to constraints in time, budgets, or demands, incur additional cost from rework and often end up experiencing increased downtime and higher operational costs in the long run.

Why Managing Technical Debt is important 

When technical debt is left unchecked, it can limit your organization’s ability to adopt new technologies, restrict it from coping with advancing market trends, reduce transparency, and delay deliverables.

Studies by CRL have identified that the technical debt of an average-sized application of 300,000 lines of code (LOC) is $1,083,000. This represents an average technical debt per LOC of $3.61. 

With technical and quality debt piling up over time, organizations face a negative impact in the form of increased cost and rework effort, indefinite delays, a compromised brand image and a diminished market share.

Here are some typical use cases: 

  • Utilizing less efficient development platforms that unnecessarily increase the length and complexity of code. Studies show that modern platforms can reduce the application development lifecycle by 40-50%.
  • Delay in upgrading your IT infrastructure can have a compounding effect as unsupported hardware and software components become more expensive to maintain and operate.
  • Also, unsupported hardware and software components increase provisioning time, resulting in increased time to market. When requirements are complex, continuing to work with limited capabilities and resources can increase team fatigue.
  • Lack of foresight while designing infrastructure can have an impact on future upgrades and change initiatives risking your entire business set-up.

How do we quantify Technical Debt?

Quantifying your problems can help you make clear decisions. Breaking it down to numbers not only makes it easier to understand, compare, analyze, and track progress but also helps create a plan of action to remediate all the detected issues.   

Technical debt can be computed as the ratio of the cost incurred to fix the system to the cost it takes to build the system:

Technical Debt Ratio (TDR) = (Remediation Cost / Development Cost) x 100%

TDR is a useful tool for tracking the state of your infrastructure and applications. A low TDR indicates that your application is performing well and doesn’t require upgrades; a high TDR indicates that the system is in a poor state of quality and reflects the time required to make upgrades. The higher the TDR, the longer it takes to upgrade or restore the application.

Remediation cost (RC) is typically derived from code-quality measures such as the cyclomatic complexity of a particular project or application. RC can also be represented in terms of time, which helps determine how long it takes to fix issues in a particular code function.

Development cost (DC) is the cost generated from writing the lines of code. For example, the number of lines of code multiplied by the cost per line of code gives the total development cost incurred to build that code.

Thus, the solution is to represent technical debt as a ratio rather than an arbitrary absolute number. Expressed as a ratio, debt can be quantified objectively and compared across multiple projects, since the calculation is normalized by the number of lines of code.

A common rule of thumb is that code with a technical debt ratio above 10% is considered poor in quality. Once this is determined, the management team works with the development team to define strategies for eliminating the debt.
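As a rough sketch, the TDR calculation above can be expressed in a few lines of Python. The per-line cost and remediation figures below are illustrative assumptions, not industry benchmarks:

```python
def technical_debt_ratio(remediation_cost: float, development_cost: float) -> float:
    """TDR = (Remediation Cost / Development Cost) x 100%, as defined above."""
    if development_cost <= 0:
        raise ValueError("development cost must be positive")
    return remediation_cost / development_cost * 100

# Development cost as lines of code x cost per line (the example from the text).
loc = 300_000           # the "average-sized application" figure quoted above
cost_per_loc = 5.00     # assumed cost per line, purely for illustration
development_cost = loc * cost_per_loc       # 1,500,000

remediation_cost = 180_000                  # assumed remediation estimate
tdr = technical_debt_ratio(remediation_cost, development_cost)
print(f"TDR = {tdr:.1f}%")                  # TDR = 12.0%
print("poor quality" if tdr > 10 else "acceptable")   # poor quality (10% rule of thumb)
```

With these assumed numbers the ratio lands above the 10% threshold, flagging the codebase for remediation.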

Types of technical debt

  • Architectural Debt – Architectural debt refers to debt arising out of substandard structural design and implementation that gradually deteriorates the quality of the software.
  • Build Debt – Large and frequent changes in specifications and codebases lead to build debts.
  • Code Debt – Code (or design) debt represents the extra development work that arises when mediocre code is shipped in the short run, despite not being of the best quality, due to deadline or budget constraints.
  • Defect Debt – Defect debts refer to the bugs or failures found but not fixed in the current release, because there are either other higher priority defects to be fixed or there are limited resources to fix bugs.
  • Documentation Debt – Documentation debt is a type of technical debt that highlights problems like missing or incomplete information in documents.
  • Infrastructure Debt – IT infrastructure debt is the total cost needed to refurbish technology such as servers, operating systems, anti-virus tools, databases, and networks in order to upgrade them.
  • People Debt – People debt arises when resources are less experienced or under-skilled, or when teams make compromising decisions due to time or budget constraints despite knowing the repercussions.
  • Process Debt – Process Debt is the circumstances in which an organization adopts processes that are easy to implement instead of applying the best overall solution that would be beneficial in the long run.
  • Requirement Debt – Requirement debt is the debt incurred during the identification, validation, and implementation of requirements.
  • Service Debt – Service debt is the additional cash that is required to repay the debt for a particular period that includes the outstanding interest as well as principal components.
  • Test Automation Debt – Debt arising from the increased effort required to maintain test code because coding standards weren’t adopted in the first place.

Is Cloud the way out?

There are multiple types of debt, each unique but paying them off may or may not be a priority at a given point in time. A decade ago, there were no alternatives to running your infrastructure on data centers, hence solving interoperability issues or upgrading slower or redundant components efficiently took a lot of time. Therefore, managing technical debts was a massive challenge for almost all IT organizations.

Organizations have constantly been on the lookout for technical debt reversal strategies and the key possibilities of the cloud’s ability to address the various technical debt issues are fast catching up.

A study reveals that organizations end up spending 90% of their time troubleshooting technical debt issues. The longer you accumulate debt, the longer and costlier it is to resolve; sometimes the system even becomes unfit for daily business operations.

Gartner reports that organizations spend more than 70% of their IT budget simply operating their technology platforms, as high as 77% in some industries, thus leaving precious little budget for enhancements and innovation.

Cloud computing has helped unlock an organization’s full potential allowing it to move towards innovative capabilities and sustainable growth while performing audits and checks to keep all kinds of debts at bay. 

Cloud services enable an organization to: 

  • Move from CapEx to OpEx model
  • Use pay-as-you-go services
  • Scale infrastructure 
  • Increase speed and agility of deployments
  • Auto-scale for better optimization and
  • Reduce maintenance cost
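To make the CapEx-to-OpEx shift above concrete, here is a minimal back-of-the-envelope comparison. All of the rates below are invented for illustration; real cloud pricing varies widely by provider, region and instance type:

```python
def on_prem_monthly_cost(capex: float, lifetime_months: int, monthly_opex: float) -> float:
    """Amortized monthly cost of owned infrastructure (straight-line CapEx)."""
    return capex / lifetime_months + monthly_opex

def cloud_monthly_cost(hourly_rate: float, hours_used: float) -> float:
    """Pay-as-you-go OpEx: billed only for the hours actually consumed."""
    return hourly_rate * hours_used

# Hypothetical figures: $120k of hardware amortized over 3 years plus fixed
# running costs, versus a cloud instance billed 24x7 for one month.
on_prem = on_prem_monthly_cost(capex=120_000, lifetime_months=36, monthly_opex=1_500)
cloud = cloud_monthly_cost(hourly_rate=2.40, hours_used=720)
print(f"on-prem: ${on_prem:,.0f}/month, cloud: ${cloud:,.0f}/month")
```

The pay-as-you-go figure also falls further when workloads do not run around the clock, which is where auto-scaling delivers its savings.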

To begin with, it is best to opt for a hybrid cloud approach. Hybrid infrastructure is similar to a three-tier architecture in which the application and its data are split between on-premises infrastructure and a preferred cloud provider, helping optimize costs and giving increased control over the overall architecture.

In many cases, it makes sense for an organization to keep customer data in its own data centers while hosting the application on the cloud, ensuring that client data remains fully secure. This arrangement is cost-effective and enables seamless business operations. In parallel, the application takes full advantage of cloud technology by scaling up or down with business needs, a win-win for enterprises. It also gives businesses the flexibility to change the application at any given point in time, which is essential for keeping applications updated.

If you would like to know more about the Hybrid cloud and its advantages please refer to our earlier blog

Why hybrid is the preferred strategy for all your cloud needs

Let us look at the three most significant IT debts in today’s time and how the cloud acts as a solution to manage and control them.

Infrastructure Debt

Infrastructure must routinely be updated with new software releases to ensure known vulnerabilities are eliminated. When a device and its software fall out of support, liabilities and disparities become exceedingly difficult and expensive to mitigate. Unless your business requires complete control over the operating system running your applications, the cloud manages such discrepancies for you: it saves you from periodically upgrading and replacing infrastructure, and from regularly managing software patches, scaling, distribution, and the resiliency of the platforms supporting your applications and data. Lift and shift is the fastest and easiest route to cloud-based technical debt relief; however, to derive maximum benefit, organizations sometimes need to opt for PaaS offerings as well.

Architectural and Design Debt

Cloud can redefine the way software and services are delivered to your customers with the help of services like:

Containers and microservices

Containers and microservices are key to driving innovation within organizations, especially if you have numerous customer-facing applications and services. The microservice architecture enables hassle-free, continuous software delivery with increased business agility. A container is a lightweight bundle that holds your application, its configurations, and its OS dependencies, making it easier to deploy and faster to provision. Containers help organizations efficiently manage their applications with automated techniques. Additionally, the core container technologies are open source, which also helps keep your budgets in check.


DNS

DNS, or the domain name system, is often not given enough weightage, but it plays a huge role in tying multiple technologies together, enabling quick response times and making sure everything runs smoothly across your infrastructure.

Cloud technologies demand high API call rates for tasks like auto-scaling, spinning up new instances, and traffic automation. Traditional DNS servers might not be capable of supporting this fast-paced infrastructure, so teams must ensure that DNS platforms meet the necessary infrastructure requirements for smooth operations.
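As a small illustration of the lookup work DNS performs behind every one of those API calls, the standard-library snippet below resolves a hostname and times the resolution; `localhost` stands in here for a real service name:

```python
import socket
import time

def resolve(hostname: str) -> list[str]:
    """Resolve a hostname to its IP addresses via the system DNS resolver."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

start = time.perf_counter()
ips = resolve("localhost")   # stand-in for a real service name
elapsed_ms = (time.perf_counter() - start) * 1000
print(ips, f"resolved in {elapsed_ms:.2f} ms")
```

At the call rates auto-scaling generates, even a few milliseconds of resolution latency per lookup adds up, which is why DNS platform performance matters.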

Process Debt

Overheads in terms of technical and architectural analysis, code reviews, testing, and release management processes are often not taken into account, and these can lead to significant problems in any business environment. Such factors can trap organizations in a legacy cycle and restrict them from implementing new processes.

With cloud solutions, management teams are able to identify what suits them best as per their needs while also comprehending how new remediation processes should be accurately introduced and followed by development teams.

Clear visibility into your IT infrastructure usage patterns means that policies are in place, which in turn ensures consistency in monitoring, logging, and tracing activities, along with streamlined performance metrics and process telemetry.

Services like IaC (Infrastructure as Code) and configuration management tools help completely automate processes and minimize bottlenecks in your code delivery, empowering engineers to focus on delivering business value.
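The core idea behind infrastructure-as-code can be sketched as a toy desired-state reconciler: infrastructure is declared as data, and the tool computes what must change. This is a conceptual illustration only, not how any specific tool such as Terraform or CloudFormation is implemented:

```python
def plan(desired: dict, actual: dict) -> dict:
    """Diff desired vs actual state, in the spirit of a `terraform plan` dry run."""
    return {
        "create": sorted(desired.keys() - actual.keys()),
        "destroy": sorted(actual.keys() - desired.keys()),
        "update": sorted(k for k in desired.keys() & actual.keys()
                         if desired[k] != actual[k]),
    }

# Hypothetical resource names and sizes, declared as data:
desired = {"web-1": {"size": "m5.large"}, "db-1": {"size": "r5.xlarge"}}
actual = {"web-1": {"size": "m5.large"}, "cache-1": {"size": "t3.small"}}
print(plan(desired, actual))
# {'create': ['db-1'], 'destroy': ['cache-1'], 'update': []}
```

Because the declaration lives in version control, the same review, testing, and release processes used for application code apply to infrastructure changes, which is precisely how IaC reduces process debt.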


As per studies, over 95% of organizations plan to increase their cloud spend by the end of 2021, as the need for mature cloud platforms and technologies has become vital. There has been a significant surge in the demand for cloud even where the initial cloud set-up costs seem high and unwarranted.

Organizations are beginning to understand that the initial investment in cloud infrastructure and services, the cost of acquiring data center management tools, and the cost of hiring cloud-specialized resources, along with support and maintenance costs, are actually worthwhile, as the benefits derived from them are lasting.

Despite the heavy investments at the beginning, setting up a cloud environment is still considered the best trade-off and the most economical option of all. This is mainly because it drastically cuts down on daily operational expenses, keeps the systems up-to-date thus improving the uptime and efficiency of business while minimizing technical debts.

Advancements in the cloud space will always be an ongoing process that would call for continuous optimization of systems to enable organizations to progress towards innovation and modernization.

Top 10 Cloud Trends Post Pandemic


By Vinay Kalyan Parakala, SVP – Global Sales (Cloud Practice), LTI

The cloud computing market has experienced exponential growth in recent years, and with the outbreak of the Covid-19 pandemic the sector is witnessing rapid and sizeable growth, estimated at almost USD 165 billion for the current financial year against the pre-Covid estimate of USD 158 billion.

With a significant hike in technology spend, the global cloud computing market is expected to grow at a compound annual growth rate (CAGR) of 17.5% through 2025.
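For context, compounding the roughly USD 165 billion estimate above at 17.5% per year gives a sense of the projected market size. This is simple arithmetic on the quoted figures, taking 2021 as the base year, not an independent forecast:

```python
def project(value: float, cagr: float, years: int) -> float:
    """Compound a value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# Assuming the ~USD 165 billion 2021 estimate as the base:
for year in range(2022, 2026):
    print(year, round(project(165, 0.175, year - 2021), 1))
# 2025 comes out at roughly USD 314.5 billion
```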

The onset of Covid-19 has compelled organizations to embrace work-from-home policies on a large scale, increasing the demand for SaaS-, IaaS- and PaaS-based cloud collaboration solutions. A surge in the usage of business communication tools and online streaming platforms, along with an increase in registered cloud users, has driven this upward trend.

Let us look at some of the cloud predictions that have enabled enterprises to provision for their employees and be better equipped in maintaining operational efficiency in the current pandemic situation.

1. Rise in Cloud Telephony

The cloud telephony market is projected to grow by 8.9% in 2020 and 17.8% in 2021. A notable development in the areas of cloud telephony, telecommunication services, network infrastructure, video conferencing and VPN has been observed since the pandemic has set in. Globally, a rise in the number of call centers, fast paced migration of companies to cloud and the benefits of cost efficient cloud services have also increased the demand for cloud telephony services. 

Gartner states, “As a result of workers employing remote work practices in response to COVID-19 office closures, there will be some long-term shifts in conferencing solution usage patterns. Policies established to enable remote work and experience gained with conferencing service usage during the outbreak is anticipated to have a lasting impact on collaboration adoption.” If you are looking at remote workforce facilitation, here’s a link to our solutions.

COVID-19 Initiatives

Remote Workforce Enablement

2. Increased adoption of virtual desktops

Forrester predicts the number of remote workers at the end of 2021 will be 3x the pre-pandemic figure. Due to the increased demand for remote working, we expect to see a rise in organizations turning to Desktop-as-a-Service (DaaS) options in 2021 to allow secure access to data off corporate networks, from any device. DaaS technology will allow organizations to better meet the demands of remote work by quickly provisioning secure virtual desktops for employees and contractors alike, desktops that can be deleted if compromised.

Research shows that Microsoft is not the only provider looking to the desktop as a means of connecting to the cloud; all of the key cloud vendors are interested in the virtual desktop market. Moreover, with the popular Windows 7 OS having reached end of life in January 2020, the years 2021 and 2022 will bring transitions of their own. My own question is: are people willing to simply jump to Windows 10 and thus cement Microsoft’s hold, or will they accept fast-rising alternatives such as AWS WorkSpaces or Google Chromebooks? If you are looking at adopting DaaS, here’s a link to our services.

Virtual Desktop Environment

3. Multi-cloud management

With the uncertainty of the pandemic and the constant pressure on organizations to continually provide business flexibility and acceleration, there is an urgent need to evaluate appropriate cloud set-up for appropriate workloads. 

According to IDC, 50% of Indian enterprises are expected to operate in a hybrid multi-cloud environment by 2021, and 30% of Indian enterprises will deploy unified VMs, Kubernetes, and multi-cloud management processes and tools to support robust multi-cloud management across on-premises and public clouds. Multi-cloud environments help facilitate better data control and availability, prevent outages, and improve agility, security and governance. For more detailed information, do visit our multi-cloud one governance platform.


4. Focus on the “XOps”

By 2021, AI will play an essential role in augmenting DevOps while monitoring and improving conventional IT operations such as optimizing test cases, application development, release management and ticket management. The Markets and Markets report on the DevSecOps Global Forecast suggests that the DevSecOps market will grow to USD 5.9 billion by 2023.

With 65% of organizations expected to adopt DevOps as a mainstream strategy by this year, 2020 is expected to see developers leaning towards compliance-as-code services, with security as the main objective of DevOps. Under the shift-left strategy, security measures are introduced early in the SDLC, which ensures that threats are identified at the outset and helps cut the costs of fixing security issues. This encourages businesses to instill security as a continuous integration and delivery practice while the development, operations and security teams collaborate more efficiently. If you wish to explore our DevOps capabilities, here’s a link to our solutions.

Cloud Native and DevOps

5. Pervasiveness of AI 

By 2022, 65 percent of CIOs will digitally empower and enable front-line workers with data, AI, and security to extend their productivity, adaptability, and decision-making in the face of rapid change.

By 2023, driven by the goal to embed human-like intelligence into products and services, one-quarter of G2000 companies will acquire at least one AI software start-up to ensure ownership and implementation of differentiated skills and IP. Successful organizations will eventually sell internally developed industry-specific software and data services as a subscription, leveraging deep domain knowledge to open profitable new revenue streams.

AI in data centers

AI in data centers will peak in the coming years. IDC forecasts that by 2021, AI spending will grow to US$52.2 billion, a CAGR of 46.2 percent over 2016-2021.

The use of AI in data centers will serve multiple purposes, such as automating tasks, enhancing security, easing skill shortages and improving workload distribution. AI can also help enterprises become more competent by drawing productive conclusions from their historical data. For a better understanding of our AI and automation solutions, please visit:

Digital Platform

Digital Transformation

6. Serverless computing

25% of developers will leverage serverless services by 2021. Gartner has also noted the rise of serverless computing, with adoption by approximately 20 percent of global enterprises.

A 2020 DataDog survey indicated that over 50% of AWS users are now using the serverless AWS Lambda Function as a Service (FaaS). Serverless technologies are going mainstream letting organizations experience better scalability, flexibility and improved latency at a reasonable price. For more insight on our serverless computing services, do visit the below link.
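To give a flavour of the FaaS model, here is a minimal Lambda-style handler in Python: the function itself is the deployed unit, with no servers to provision or patch. The event shape below is an assumption for illustration, and locally we simply call the function the way the runtime would:

```python
import json

# Minimal AWS Lambda-style handler sketch (FaaS). The event shape here is an
# assumed example for illustration, not a fixed AWS schema.
def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoking locally the way the FaaS runtime would:
print(handler({"name": "serverless"}))
```

Because billing is per invocation rather than per provisioned server, idle capacity costs nothing, which is where the scalability and price benefits described above come from.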

Digital Transformation

7. Focus on the Hybrid cloud

It is believed that adopting multi-cloud solutions, especially in the current pandemic situation, will help organizations support their customer base, boost recovery management and build precision and flexibility in the new normal. Research indicates that the hybrid cloud market will grow to $97.6 billion by 2023, at a CAGR of 17 percent.

Hybrid cloud solutions support technological advancement and smooth business operations, providing agility, security and efficiency irrespective of unforeseen circumstances.

Studies show that AWS and Google are committed to increasing their focus on hybrid cloud solutions, with security remaining the key driver for a hybrid cloud set-up. Hybrid cloud is trusted to be the future of IT in the Covid scenario. If you are looking at achieving a robust hybrid environment, here’s a link to our services.

AWS Outposts

Enterprise Migration

8. Mainstreaming Containers and Kubernetes

Prior to the pandemic, about 20% of developers regularly used container and serverless functions to build new apps and modernize old ones. We predict nearly 30% will use containers regularly by the end of 2021, creating a spike in global demand for both multi-cloud container development platforms and public-cloud container/serverless services. 

IDC predicts that, alongside Kubernetes, 95 percent of new microservices will be deployed in containers by 2021.

Forrester forecasts that lightweight Kubernetes deployments will end up accounting for 20% of edge orchestration in 2021. If you are looking for containerization of workloads, here’s a link to our solutions. 

Cloud Native

9. Moving DR from on-premise to cloud

COVID-19 has caught almost every organization off guard when it comes to securing infrastructure and data or handling storage and recovery from a data center outage. Directing enterprise IT teams’ focus towards shaping a business continuity plan, analyzing business impact, and planning and building infrastructure that supports DR for resiliency are important aspects of establishing a robust shift of DR from on-premises to cloud.

Before the pandemic, few companies protected data and workloads in the public cloud, but by 2021 an additional 20% of enterprises will have shifted their DR operations to the public cloud. If you are looking for backup and DR on cloud, here’s a link to our solutions.

Backup and DR on Cloud

10. Manage technical debt

By 2023, coping with technical debt accumulated during the pandemic will shadow 70% of CIOs, causing financial stress, inertial drag on IT agility, and “forced march” migrations to the cloud.

In the current remote-work set-up, the need for collaborative tools and speedy deliverables is on the rise. The best way to provide value to customers is by provisioning for better requirements and design management practices, upscaling cloud architecture to meet current needs, adopting DevOps and automation, and re-strategizing software development practices.

If technical debt already exists for a particular enterprise, the first step is to acknowledge and address it while taking lasting measures to remediate it. Teams can measure technical debt through metrics in order to arrive at the best possible repayment solutions, and gradually build a best-practices knowledge base from the exercise.

Thus, cloud computing markets, both domestic and global, are striving not just to overcome the challenges arising from the pandemic but to emerge as clear winners despite the turmoil it has caused. If you are looking for cost-effective technical debt remediation, here’s a link to our solutions.

Cloud IaaS

Simplify Cloud Transformation with Tools


Compiled by Kiran Kumar, Business Analyst at Powerup Cloud Technologies


Cloud can bring a lot of benefits to organizations, including operational resilience, business agility, cost-efficiency, scalability and staff productivity. However, moving to the cloud can be a daunting task, with many loose ends to worry about: downtime, security, cost and so on. Even some of the top cloud teams can be left clueless and overwhelmed by the scale of the project and the decisions that need to be taken.

But the cloud marketplace has matured greatly, and there are a lot of tools and solutions that can automate or assist you in your expedition, significantly reducing the complexity of the project. Knowing the value cloud tools bring to an organization, I have listed tools that can assist you in each phase of your cloud journey. That said, this blog serves only as a representation of the types of products and services available for easy cloud transformation.

Cloud Assessment


LTI RapidAdopt helps you fast-track the adoption of various cloud models. Based on the overall scores, an accurate Cloud strategy is formulated. The Cloud Assessment Framework enables organizations to assess their cloud readiness roadmap.

SurPaaS® Assess™

A complete Cloud migration feasibility assessment platform that generates multiple reports after analyzing your application, helping you understand the factors involved in migrating it to the Cloud. These reports help you decide on your ideal Cloud migration plan and accelerate your Cloud journey.


Make data-driven cloud decisions with confidence: high-precision analytics and powerful automation make planning simple and efficient, accelerating your migration to the cloud.

Cloud Recon

Inventory applications and workloads to develop a cloud strategy with detailed ROI and TCO benefits.

NetApp Cloud Assessment Tool

The Cloud Assessment tool will monitor your cloud storage resources, optimize cloud efficiency and data protection, identify cost-saving opportunities, and reduce overall storage spend so you can manage your cloud with confidence.

Risc Networks

RISC Networks’ breakthrough SaaS-based analytics platform helps companies chart the most efficient and effective path from on-premise to the cloud.


migVisor

With migVisor, you’ll know exactly how difficult (or easy) your database migration will be. migVisor analyzes your source database configuration, attributes, schema objects, and proprietary features.


SurPaaS® Migrate™

With its advanced Cloud migration methodologies, SurPaaS® Migrate™ enables you to migrate any application to the Cloud without difficulty. It provides various intelligent options for migrating applications and VMs to the Cloud, and its robust migration methodologies allow you to migrate multiple servers in parallel, with clear, actionable reporting in case of any migration issues.

SurPaaS® Smart Shift™

Smart Shift™ migrates an application to Cloud with a re-architected deployment topology based on different business needs such as scalability, performance, security, redundancy, high availability, backup, etc.

SurPaaS® PaaSify™

The only Cloud migration platform that lets you migrate any application and its databases to the required PaaS services on the Cloud. Choose different PaaS services for the different workloads in your application and migrate to the Cloud with a single click.


Simplifies, expedites, and automates migrations from physical, virtual, and cloud-based infrastructure to AWS.

SurPaaS® Containerize™

Allows you to identify application workloads that are compatible with containerization using its comprehensive knowledge-base system. Choose the workloads that need to be containerized and select one of the topologies from SurPaaS® multiple container deployment architecture suggestions.

Carbonite Migrate

Structured, repeatable data migration from any source to any target with near zero data loss and no disruption in service.

AWS Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.

Azure Migrate

A central hub of Azure cloud migration services and tools to discover, assess and migrate workloads to the cloud.

Cloud Sync

An easy-to-use cloud replication and synchronization service for transferring NAS data between on-premises and cloud object stores.


Migrate multiple cloud workloads with a single solution. MigrationWiz—the industry-leading SaaS solution—enables you to migrate email and data from a wide range of Sources and Destinations.

Cloud Pilot

Analyze applications at the code level to determine Cloud readiness and conduct migrations for Cloud-ready applications.


PaaSify is an advanced solution that runs through the application code and evaluates the readiness of apps for migration to cloud. The solution analyzes the code across 70+ parameters, including session objects, third-party dependencies, authentication, database connections, and hard-coded links.

Application Development

DevOps on AWS

Amazon Elastic Container Service

Production Docker Platform

AWS Lambda

Serverless Computing

AWS CloudFormation

Templated Infrastructure Provisioning

AWS OpsWorks   

Chef Configuration Management

AWS Systems Manager

Configuration Management

AWS Config

Policy as Code

Amazon CloudWatch

Cloud and Network Monitoring


Distributed Tracing

AWS CloudTrail

Activity & API Usage Tracking

AWS Elastic Beanstalk

Run and Manage Web Apps

AWS CodeCommit

Private Git Hosting

Azure DevOps service

Azure Boards

Deliver value to your users faster using proven agile tools to plan, track, and discuss work across your teams.

Azure Pipelines

Build, test, and deploy with CI/CD which works with any language, platform, and cloud. Connect to GitHub or any other Git provider and deploy continuously.

Azure Repos

Get unlimited, cloud-hosted private Git repos and collaborate to build better code with pull requests and advanced file management.

Azure Test Plans

Test and ship with confidence using manual and exploratory testing tools.

Azure Artifacts

Create, host, and share packages with your team and add artifacts to your CI/CD pipelines with a single click.


Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Infrastructure Monitoring & Optimization

SurPaaS® Optimo™

Realize continuous Cloud landscape optimization with AI-driven advisories and integrated Cloud management actions to reduce your Cloud costs.

Carbonite Availability

Continuous replication technology that maintains an up-to-date copy of your operating environment without taxing the primary system or network bandwidth.


Embrace AIOps for the technical agility, speed and capacity needed to manage today’s complex environments.

Cloud Insights

With Cloud Insights, you can monitor, troubleshoot and optimize all your resources including your public clouds and your private data centers.

TrueSight Operations Management

Machine learning and advanced analytics for holistic monitoring and event management

BMC Helix Optimize

SaaS solution that deploys analytics to continuously optimize resource capacity and cost

Azure Monitor 

Azure Monitor collects monitoring telemetry from a variety of on-premises and Azure sources. Management tools, such as those in Azure Security Center and Azure Automation, also push log data to Azure Monitor.

Amazon CloudWatch

CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.

TrueSight Orchestration

Coordinate workflows across applications, platforms, and tools to automate critical IT processes

BMC Helix Remediate

Automated security vulnerability management and simplified patching for hybrid clouds

BMC Helix Discovery

Automatic discovery of data center and multi-cloud inventory, configuration, and relationship data

Cloud Manager  

Cloud Manager provides IT experts and cloud architects with a centralized control plane to manage, monitor and automate data in hybrid-cloud environments, providing an integrated experience of NetApp’s Cloud Data Services.

Application Management

SurPaaS® Operato™

Visualize your application landscape on the Cloud and manage it effectively with multiple options to enhance your applications.

SurPaaS® Moderno™

SurPaaS® can quickly assess your applications and offers a path to move your workloads within hours to DBaaS, Serverless App Services, Containers, and Kubernetes Services.

SurPaaS® SaaSify™

Faster and Smarter Way to SaaS. SaaSify your Applications and Manage their SaaS Operations Efficiently.


Best-in-class APM from the category leader. Advanced observability across cloud and hybrid environments, from microservices to mainframe. Automatic full-stack instrumentation, dependency mapping and AI-assisted answers detailing the precise root-cause of anomalies, eliminating redundant manual work, and letting you focus on what matters. 

New Relic APM

APM agents give real-time observability matched with trending data about your application's performance and the user experience. Agents reveal what is happening deep in your code with end-to-end transaction tracing and a variety of color-coded charts and reports.

DataDog APM

Datadog APM provides end-to-end distributed tracing from frontend devices to databases, with no sampling. Distributed traces correlate seamlessly with metrics, logs, browser sessions, code profiles, synthetics, and network performance data, so you can understand service dependencies, reduce latency, and eliminate errors.

SolarWinds Server & Application Monitor 

End-To-End Monitoring

Server capacity planning 

Custom app monitoring 

Application dependency mapping 


Actively monitor, analyze and optimize complex application environments at scale.


Carbonite Recover

Carbonite® Recover reduces the risk of unplanned downtime by securely replicating critical systems to the cloud, providing an up-to-date copy for immediate failover.

Carbonite Server

All-in-one backup and recovery solution for physical, virtual and legacy systems with optional cloud failover.

Carbonite Availability

Continuous replication for physical, virtual and cloud workloads with push-button failover for keeping critical systems online all the time.

Cloud Backup Service

Cloud Backup Service delivers seamless and cost-effective backup and restore capabilities for protecting and archiving data.


Scalable, cost-effective business continuity for physical, virtual, and cloud servers


Reduce cost and complexity of application migrations and data protection with Zerto’s unique platform utilizing Continuous Data Protection. Orchestration built into the platform enables full automation of recovery and migration processes. Analytics provides 24/7 infrastructure visibility and control, even across clouds.

Azure Site Recovery

Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location.

Cloud Governance and security

CloudHealth Multicloud Platform

Transform the way you operate in the public cloud

CloudHealth Partner Platform

Deliver managed services to your customers at scale

CloudHealth Secure State

Mitigate security and compliance risk with real-time security insights

Cloud Compliance

Automated controls for data privacy regulations such as the GDPR, CCPA, and more, driven by powerful artificial intelligence algorithms.

Azure Governance Tools

Get transparency into what you are spending on cloud resources. Set policies across resources and monitor compliance, enabling quick, repeatable creation of governed environments.


Splunk Security Operations Suite combines industry-leading data, analytics and operations solutions to modernize and optimize your cyber defenses.


An autonomous cloud governance platform built to manage multi-cloud environments. It performs a real-time Well-Architected audit across all your clouds, giving you a comprehensive view of best-practice adherence in your cloud environment, with additional emphasis on security, reliability, and cost. The enterprise version of Cloud Ensure, a hosted version of the original SaaS platform, is best suited for organizations wanting in-house governance and monitoring of their cloud portfolio.

Azure Cache for Redis: Connecting to SSL enabled Redis from Redis-CLI in Windows & Linux

By | Powerlearnings | No Comments

Written by Tejaswee Das, Sr. Software Engineer, Powerupcloud Technologies

Collaborator: Layana Shrivastava, Software Engineer


This blog will guide you through the steps to connect to an SSL-enabled remote Azure Redis Cache from redis-cli. We will demonstrate how to achieve this connectivity on both Windows & Linux systems.

Use Case

Connecting to a non-SSL redis instance is straightforward and works great for Dev & Test environments, but for higher environments (Stage & Prod), security should always be the priority. For that reason, it is advisable to use SSL-enabled redis instances. The default non-SSL port is 6379; the SSL port is 6380.
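As a quick illustration of the difference, a client connection URL for the two modes differs only in scheme and port: `rediss://` (TLS) on 6380 versus `redis://` on 6379. A minimal, hypothetical helper (the cache hostname below is a placeholder):

```python
def redis_url(host: str, password: str, use_ssl: bool = True) -> str:
    """Build a redis connection URL; SSL uses port 6380, plain uses 6379."""
    scheme = "rediss" if use_ssl else "redis"
    port = 6380 if use_ssl else 6379
    return f"{scheme}://:{password}@{host}:{port}"

# SSL-enabled (recommended for Stage & Prod):
print(redis_url("mycache.redis.cache.windows.net", "xxxxxxxx"))
# Non-SSL (Dev & Test only):
print(redis_url("mycache.redis.cache.windows.net", "xxxxxxxx", use_ssl=False))
```

Clients that understand the `rediss://` scheme negotiate TLS directly; for redis-cli builds without TLS support, the stunnel approach below achieves the same thing.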


Step 1:  Connecting to non-SSL redis is easy

PS C:\Program Files\Redis> .\redis-cli.exe -h -p 6379 -a xxxxxxxx

Step 2:  To connect to SSL redis, we will need to create a secure tunnel. Microsoft recommends using Stunnel to achieve this. You can download the applicable package from the link below.

We are using stunnel-5.57-win64-installer.exe here

2.1 Agree License and start installation

2.2 Specify User

2.3 Choose components

2.4 Choose Install Location

2.5 This step is optional. You can fill in details or just press Enter to continue.

Choose FQDN as localhost

2.6 Complete setup and start stunnel

2.7 On the bottom taskbar, right corner, click on the stunnel icon (green dot) → Edit Configuration

2.8 Add this block in the config file. You can add it at the end.

client = yes
accept =
connect =
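For illustration, a filled-in stunnel service block could look like the sketch below. The cache hostname is a hypothetical placeholder; substitute your own Azure Cache for Redis endpoint (6380 is the Azure Redis SSL port).

```ini
; Sketch only - replace the hostname with your own cache endpoint.
[redis-cli]
client = yes
; Local plain-TCP port that redis-cli will connect to.
accept = 127.0.0.1:6380
; Remote SSL endpoint of the Azure Cache for Redis instance.
connect = yourcachename.redis.cache.windows.net:6380
```

redis-cli then talks plain TCP to 127.0.0.1:6380 while stunnel handles the TLS handshake to the remote endpoint.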

2.9 Open Stunnel again from the taskbar → Right click → Reload Configuration for the changes to take effect. Double-click on the icon to view the connection status.

Step 3: Go back to your redis-cli.exe location in Powershell and try connecting now

PS C:\Program Files\Redis> .\redis-cli.exe -p 6380 -a xxxxxxxx


Step 1:  Installing & configuring Stunnel on Linux is pretty easy. Follow the steps below. You are advised to run these commands with admin privileges.

1.1 Update & upgrade existing packages to the latest version.

  • apt update
  • apt upgrade -y

1.2 Install redis server. You can skip this if you already have redis-cli installed on your system/VM

  • apt install redis-server
  • To check redis status: service redis status
  • If the service is not in active (running) state: service redis restart

1.3 Install Stunnel for SSL redis

  • apt install stunnel4
  • Open the file /etc/default/stunnel4 and set ENABLED=1 (change the value from 0 to 1 to auto-start the service)
  • Create a redis conf for stunnel. Open /etc/stunnel/redis.conf with your favorite editor and add this code block
client = yes
accept =
connect =
  • Check status: systemctl status stunnel4.service
  • Restart the stunnel service: systemctl restart stunnel4.service
  • Reload configuration: systemctl reload stunnel4.service
  • Check that it is running: systemctl status stunnel4.service

1.4 Check whether Stunnel is listening for connections

  • netstat -tlpn | grep

1.5 Try connecting to redis now

>redis-cli -p 6380 -a xxxxxxxx

Success! You are now connected.


So finally, we are able to connect to SSL-enabled redis from redis-cli.

This makes our infrastructure more secure.

Hope this was informative. Do leave your comments for any questions.


AWS Lambda Java: Sending S3 event notification email using SES – Part 2

By | Powerlearnings | No Comments

Written by Tejaswee Das, Software Engineer, Powerupcloud Technologies

Collaborator: Neenu Jose, Senior Software Engineer


In the first part of this series, we discussed in depth creating a Lambda deployment package for Java 8/11 using Maven in Eclipse & S3 event triggers. Know more here.

In this post, we will showcase how we can send emails using AWS Simple Email Service (SES) with S3 Event triggers in Java.

Use Case

One of our clients ran their workloads on Azure Cloud, including a few serverless Java 8 applications in Azure Functions. They wanted to upgrade from Java 8 to Java 11. Since Java 11 was not yet supported (Java 11 for Azure Functions has recently been released in preview), they wanted to try other cloud services; that's when AWS Lambda came into the picture. We did a POC to check the feasibility of Java 11 applications running on AWS Lambda.

Step 1:

Make sure you have followed Part 1 of this series. This post is a continuation of the first part, so it will be difficult to follow Part 2 separately.

Step 2:

Add SES Email Addresses

Restrictions are applied to all SES accounts to prevent fraud and abuse; by default, an account is placed in the SES sandbox. For this reason, for all test emails that you intend to use, you will have to verify both the sender & receiver email addresses in SES.

2.1 To add email addresses, go to AWS Console → Services → Customer Engagement → Simple Email Service (SES)

2.2  SES Home → Email Addresses → Verify a New Email Address

2.3 Add Addresses to be verified

2.4 A verification email is sent to the added email address

2.5 Until the email address is verified, it cannot be used to send or receive emails. The status shown in SES is pending verification (resend)

2.6 Go to your email client inbox and click on the URL to authorize your email address

2.7 On successful verification, we can check the new status in SES Home, status verified.

Step 3:

In the pom.xml, add the below Maven dependency. To use SES, we will require aws-java-sdk-ses.

Below is our pom.xml file for reference

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-ses -->
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-ses</artifactId>
        <version>1.11.x</version><!-- pin to the latest 1.x release -->
    </dependency>

Step 4:

Edit your handler class with the latest code

4.1 Add email components as string

final String FROM = "";
final String TO = "";
final String SUBJECT = "Upload Successful";
final String HTMLBODY = key+" has been successfully uploaded to "+bucket;
final String TEXTBODY = "This email was sent through Amazon SES using the AWS SDK for Java.";

4.2 Create SES client

AmazonSimpleEmailService client = AmazonSimpleEmailServiceClientBuilder.standard()
                // Replace US_WEST_2 with the AWS Region you're using for
                // Amazon SES.
                .withRegion(Regions.US_WEST_2).build();

4.3 Send email using SendEmailRequest

SendEmailRequest request = new SendEmailRequest()
                .withDestination(
                    new Destination().withToAddresses(TO))
                .withMessage(new Message()
                    .withBody(new Body()
                        .withHtml(new Content()
                            .withCharset("UTF-8").withData(HTMLBODY))
                        .withText(new Content()
                            .withCharset("UTF-8").withData(TEXTBODY)))
                    .withSubject(new Content()
                        .withCharset("UTF-8").withData(SUBJECT)))
                .withSource(FROM);
client.sendEmail(request);

You can refer to the complete code below

package com.amazonaws.lambda.demo;

import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder;
import com.amazonaws.services.simpleemail.model.Body;
import com.amazonaws.services.simpleemail.model.Content;
import com.amazonaws.services.simpleemail.model.Destination;
import com.amazonaws.services.simpleemail.model.Message;
import com.amazonaws.services.simpleemail.model.SendEmailRequest;

public class LambdaFunctionHandler implements RequestHandler<S3Event, String> {

    private AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();

    public LambdaFunctionHandler() {}

    // Test purpose only.
    LambdaFunctionHandler(AmazonS3 s3) {
        this.s3 = s3;
    }

    @Override
    public String handleRequest(S3Event event, Context context) {
        context.getLogger().log("Received event: " + event);

        // Get the bucket and object key from the event
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();
        final String FROM = "";   // fill in your verified sender address
        final String TO = "";     // fill in your verified recipient address
        final String SUBJECT = "Upload Successful";
        final String HTMLBODY = key + " has been successfully uploaded to " + bucket;
        final String TEXTBODY = "This email was sent through Amazon SES "
                + "using the AWS SDK for Java.";
        try {
            AmazonSimpleEmailService client =
                AmazonSimpleEmailServiceClientBuilder.standard()
                    // Replace US_WEST_2 with the AWS Region you're using for
                    // Amazon SES.
                    .withRegion(Regions.US_WEST_2).build();
            SendEmailRequest request = new SendEmailRequest()
                .withDestination(
                    new Destination().withToAddresses(TO))
                .withMessage(new Message()
                    .withBody(new Body()
                        .withHtml(new Content()
                            .withCharset("UTF-8").withData(HTMLBODY))
                        .withText(new Content()
                            .withCharset("UTF-8").withData(TEXTBODY)))
                    .withSubject(new Content()
                        .withCharset("UTF-8").withData(SUBJECT)))
                .withSource(FROM);
                // Chain .withConfigurationSetName(CONFIGSET) above if you are
                // using a configuration set.
            client.sendEmail(request);
            System.out.println("Email sent!");
        } catch (Exception ex) {
            System.out.println("The email was not sent. Error message: "
                + ex.getMessage());
        }
        context.getLogger().log("Filename: " + key);
        try {
            S3Object response = s3.getObject(new GetObjectRequest(bucket, key));
            String contentType = response.getObjectMetadata().getContentType();
            context.getLogger().log("CONTENT TYPE: " + contentType);
            return contentType;
        } catch (Exception e) {
            context.getLogger().log(String.format(
                "Error getting object %s from bucket %s. Make sure they exist and"
                + " your bucket is in the same region as this function.", key, bucket));
            throw e;
        }
    }
}
Step 5:

Build the updated project and upload it to Lambda. Refer to Step 5 of blog Part 1.

Step 6:

To test this deployment, upload another new file to your bucket. Refer to Step 9 of blog Part 1.

On successful upload, SES sends an email with the details. Sample screenshot below.


S3 event notifications can be used in a variety of scenarios; we have tried to showcase just one simple case. The same approach can be used to monitor incoming files & objects in an S3 bucket and trigger appropriate actions & transformations.

Hope this was informative. Do leave your comments for any questions.


Using Reserved Instances saved 30% for a leading healthcare research, data, and technologies company

By | Cloud Case Study | No Comments

Customer: A leading healthcare research, data & technologies company


A leading healthcare research, data and technologies company was seeking recommendations on cloud optimization & best practices. Powerup conducted a detailed study & analysis to provide the customer team with suggestions on cost optimization, security audit and AWS best practices.

About Customer

The customer is a leading healthcare research and consulting company that provides high-value healthcare industry analysis and insights. They create patient-centric commercialization strategies that drive better outcomes and access, improving the lives of patients globally.

The customer helps businesses achieve commercial excellence through evidence-based decision-making processes such as expert consultation and proprietary data and analysis via machine learning and artificial intelligence.

Problem Statement

The customer utilizes nearly all the tools that AWS offers to build, upgrade & maintain their infrastructure as per ongoing requirements. They are looking at cost optimization for all of their 17 AWS accounts. They plan to initiate cost-saving strategies on their AWS master account by:

  • Identifying servers running idle and creating reserved instances.
  • Deploying upgraded servers based on recommendations.
  • Implementing resource tagging for business-unit-wise billing.
  • Installing the CloudHealth agent to maintain multiple accounts.
  • Automating lifecycle policies for backup maintenance.

Tagging is required on a total of 490+ EC2 instances, 70+ RDS instances and S3 buckets, based on P&L, project, stage, application owner, role, support contact, function and cost-saving heads, to name a few.

The team was ill-equipped with downsizing techniques and was uncertain how reports could be utilized to their maximum advantage in order to minimize costs.

Proposed Solution

  • Phase 1 – 100% CloudHealth agent installation coverage on AWS accounts

Applying AWS user data as a benchmark, Powerup created a CloudHealth agent inventory list and identified missing agents for the customer. They worked closely with the customer’s DevOps team to gain access to servers, to install CloudHealth agent on the remaining 300+ systems. Once done, agent check-in was verified to confirm 100% coverage. Installation was automated for new resources launch and a restriction was imposed on launching any instance without agent set up. Reserved Instance (RI) recommendation was obtained through the CloudHealth tools with the intent to reduce costs.

  • Phase 2 – Tagging and Governance

In the cloud environment, tags are identifiers that are affixed to instances. Powerup helped the customer incorporate 100% tagging based on appropriate business reviews. The objective was to strengthen inventory tag lists by classifying all instances under their respective heads. Instances were classified as per AWS best practices to initiate cost controls.

ParkMyCloud is a self-service SaaS platform implemented to help identify and eliminate wasted cloud spend. Parking was scheduled periodically on the customer's Dev/QC/Staging environments, and no machines were launched without proper tagging. It helped keep a check on auto-scaling groups to ensure tagging, and helped identify and implement governance rules as alert checks to prevent resources from being launched without proper tagging, sizing or approvals. Assets missing tags could easily be categorized based on their names. Automating tagging and enabling a termination policy for untagged instances enabled better cost management, along with providing the customer with accurate findings, recommendations and a strategic roadmap.
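The governance rule at the heart of this phase — no instance runs without the mandatory tags — boils down to a simple compliance check. A minimal sketch, using tag keys from the scheme described earlier and made-up inventory rows (in practice the inventory would come from the cloud provider's API):

```python
# Mandatory tag keys (per the tagging scheme described above).
REQUIRED_TAGS = {"P&L", "Project", "Stage", "ApplicationOwner"}

def untagged_instances(inventory):
    """Return the IDs of instances missing any mandatory tag key."""
    return [
        inst["id"]
        for inst in inventory
        if not REQUIRED_TAGS.issubset(inst.get("tags", {}))
    ]

# Made-up inventory rows for illustration.
inventory = [
    {"id": "i-001", "tags": {"P&L": "retail", "Project": "web",
                             "Stage": "prod", "ApplicationOwner": "ops"}},
    {"id": "i-002", "tags": {"Project": "web"}},
]
print(untagged_instances(inventory))  # ['i-002']
```

A check like this, run on a schedule, is what feeds the alerting and termination policies mentioned above.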

  • Phase 3 – Rightsizing and instance type consolidation

Powerup created a database instance inventory list to recognize and review outdated server versions. They also identified instances that required reconfiguration and upgrades. They imported instance right-sizing recommendations from CloudHealth data, with suitable suggestions for new instance types and sizes. This ensured an appropriate process flow of right-sizing checks, added business intelligence around the recommendations, and smoothly transitioned all suggestions to the customer team. These recommendations helped them cut down on costs significantly.

  • Phase 4 – Security Audit

With the help of CloudHealth security audit report, the customer could study, analyze and prioritize summary findings by order of criticality and business requirements in a consolidated format. Recommended resolutions helped them validate security loopholes and facilitated suggestions on security fixes to the customer DevOps team. It also enabled them to generate backup services and POC reports which assisted them in checking how reports performed. This enabled them to update alert thresholds to meet business expectations and requirements.

Business Benefits

  • Using RI recommendations will help the customer cut down their monthly bills on EC2 and RDS by 30%.
  • The new EC2 instance version recommendations can help them save a minimum of 8% of costs while guaranteeing high-quality performance.
  • The customer was able to regulate their billing and cost console using CloudHealth and the AWS billing dashboard.
  • Restricting resources from being provisioned without proper tags, together with the CloudHealth agent, promotes easy maintenance of multiple accounts using a single console.

Cloud platform


Technologies used

CloudHealth, ParkMyCloud, AWS Backup.

Bulk AWS EC2 pricing

By | Powerlearnings | No Comments

Written by Mudit Jain, Principal Architect at Powerupcloud Technologies.

Your Amazon EC2 usage is billed by either the hour or the second, based on the size of the instance, operating system, and the AWS Region where the instances are launched. Pricing is per instance hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. However, neither the AWS Simple Monthly Calculator nor the AWS Pricing Calculator supports bulk upload. There are a few ways to work around this:

  • There are paid discovery tools that also do bulk pricing. However, the initial budgeting/TCO calculation is often done much earlier in the pipeline.
  • Manually add entries to the AWS calculator and simple monthly calculator. However, this is not feasible with a large number of VMs at enterprise level, and manual entries are error-prone.
  • You can work with the published Excel sheets, but they have their own limitations.

To overcome the above, we have arrived at an effective four-step process:

  1. Source of truth download
  2. Source file cleanup
  3. One-time setup
  4. Bulk pricing:
    1. Source file preparation
    2. Pricing

1. Source of truth download

AWS publishes its EC2 service pricing here.

See also the documentation for using the bulk API.
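For reference, the bulk price list files follow a predictable URL scheme on the public pricing host. A small helper can build the download URL for any offer; the endpoint below is the one documented for the AWS Price List Bulk API, but verify it against the documentation before relying on it:

```python
# Public pricing endpoint documented for the AWS Price List Bulk API.
PRICING_HOST = "https://pricing.us-east-1.amazonaws.com"

def offer_url(offer_code: str, fmt: str = "csv") -> str:
    """Return the URL of the current bulk price list for a service offer."""
    return f"{PRICING_HOST}/offers/v1.0/aws/{offer_code}/current/index.{fmt}"

print(offer_url("AmazonEC2"))
# https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.csv
```

The CSV for AmazonEC2 is large (hundreds of MB), so download it once and work against the local copy in the steps below.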

2. Source file cleanup

Use any Python environment with pandas pre-installed.

#!/usr/bin/env python
# coding: utf-8

import pandas as pd

# Read the bulk price list CSV downloaded from AWS; the first rows are
# metadata. (The file name is assumed here - use your downloaded path.)
srcindex = pd.read_csv("index.csv", skiprows=5, low_memory=False)

# Remove non-EC2 pricings from the EC2 sheet and old-generation pricings.
vmsrcindex = srcindex[srcindex['Instance Type'].notna()]
curvmsrcindex = vmsrcindex[vmsrcindex['Current Generation'] == "Yes"]

# At the TCO stage, 95% of our customers need pricing against shared tenancy,
# and unused pricings are not as relevant. Zero-priced rows are dropped too.
# (Column and value names follow the price list schema; verify against your
# download.)
tencurvmsrcindex = curvmsrcindex[curvmsrcindex['Tenancy'] == "Shared"]
remove0 = tencurvmsrcindex[tencurvmsrcindex['PricePerUnit'] != 0]
remove_unsed = remove0[~remove0['PriceDescription'].astype(str).str.contains("Unused", na=False)]

# At this stage most customers are not looking for BYOL pricing.
remove_byol = remove_unsed[remove_unsed['License Model'] != "Bring your own license"]

filter1 = remove_byol[['Instance Type','Operating System','TermType','PriceDescription','Pre Installed S/W','LeaseContractLength','PurchaseOption','OfferingClass','Location','Unit','PricePerUnit','vCPU','Memory']]
filter2 = filter1.apply(lambda x: x.astype(str).str.lower())
# Most customers don't use partial upfront.
filter3 = filter2[~filter2['PurchaseOption'].isin(['partial upfront'])]
# In the AWS published excels, even r4, c4 and m4 are marked as current
# generation, so drop them here as well.
filter5 = filter3[~filter3['Instance Type'].str.contains("^['r','c','m']4.*")]
filter5.to_csv("filtered.csv", index=False)

3. One-time setup

#!/usr/bin/env python
# coding: utf-8

import pandas as pd

# Read the source file and the filtered price list created in the previous
# step. (The file names are assumed here.)
source2 = pd.read_excel("input.xlsx")
filtered = pd.read_csv("filtered.csv")

# Drop rows that are missing any mandatory field.
source2 = source2.dropna(subset=['hostname', 'Instance Type', 'Operating System','Location','TermType'])
# Lowercase everything for string match.
source2 = source2.apply(lambda x: x.astype(str).str.lower())
filtered = filtered.apply(lambda x: x.astype(str).str.lower())

# Drop columns used in the source file only for validation.
processedsrc = source2

# Find pricing for on-demand entries.
processedsrc_ondemand = processedsrc.loc[lambda df: df['TermType'] == "ondemand"]
priced_processedsrc_ondemand = pd.merge(processedsrc_ondemand, filtered,
    on=['Instance Type','Operating System','TermType','Pre Installed S/W','Location'], how='left')

# Find pricing for reserved entries.
processedsrc_reserved = processedsrc.loc[lambda df: df['TermType'] == "reserved"]
priced_processedsrc_reserved = pd.merge(processedsrc_reserved, filtered,
    on=['Instance Type','TermType','Operating System','Pre Installed S/W','Location','PurchaseOption','OfferingClass','LeaseContractLength'], how='left')

# Summarise the findings into one priced sheet.
priced = pd.concat([priced_processedsrc_ondemand, priced_processedsrc_reserved])
priced.to_csv("priced.csv", index=False)

4. Bulk Pricing

a. Source file preparation:

The input sheet must follow a fixed format:

Please find attached the sample input sheet. Column descriptions:

  1. 'Hostname' - any unique & mandatory string
  2. Columns 2-6 - for reference; non-unique, optional
  3. 'Instance Type' - mandatory; must be a valid instance type
  4. 'Operating System' - mandatory and has to be one of the following:

  5. 'Pre Installed S/W' - optional; has to be one of the following:
    1. 'sql ent'
    2. 'sql std'
    3. 'sql web'

  6. 'Location' - mandatory and has to be one of the following:
    1. 'africa (cape town)'
    2. 'asia pacific (hong kong)'
    3. 'asia pacific (mumbai)'
    4. 'asia pacific (osaka-local)'
    5. 'asia pacific (seoul)'
    6. 'asia pacific (singapore)'
    7. 'asia pacific (sydney)'
    8. 'asia pacific (tokyo)'
    9. 'aws govcloud (us-east)'
    10. 'aws govcloud (us-west)'
    11. 'canada (central)'
    12. 'eu (frankfurt)'
    13. 'eu (ireland)'
    14. 'eu (london)'
    15. 'eu (milan)'
    16. 'eu (paris)'
    17. 'eu (stockholm)'
    18. 'middle east (bahrain)'
    19. 'south america (sao paulo)'
    20. 'us east (n. virginia)'
    21. 'us east (ohio)'
    22. 'us west (los angeles)'
    23. 'us west (n. california)'
    24. 'us west (oregon)'

  7. 'TermType' - mandatory and has to be one of the following:
    1. 'ondemand'
    2. 'reserved'

  8. 'LeaseContractLength' - mandatory for TermType == 'reserved' and has to be one of the following:

  9. 'PurchaseOption' - mandatory for TermType == 'reserved' and has to be one of the following:
    1. 'all upfront'
    2. 'no upfront'

  10. 'OfferingClass' - mandatory for TermType == 'reserved' and has to be one of the following:

b. Pricing:

Now you have pricing against each EC2 instance, and your Excel skills can come in handy for the first level of optimization.
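The merge logic in the scripts above can be illustrated without pandas: it is a left join of the inventory against the price sheet on the matching columns. A toy sketch with hypothetical rows and prices:

```python
# Hypothetical price rows: (instance type, os, location, termtype) -> $/hr
price_sheet = {
    ("m5.large", "linux", "us east (n. virginia)", "ondemand"): 0.096,
    ("m5.large", "linux", "us east (n. virginia)", "reserved"): 0.060,
}

# Hypothetical inventory, in the same fixed input format as the source sheet.
inventory = [
    {"hostname": "web-01", "Instance Type": "m5.large", "Operating System": "linux",
     "Location": "us east (n. virginia)", "TermType": "ondemand"},
    {"hostname": "db-01", "Instance Type": "x9.huge", "Operating System": "linux",
     "Location": "us east (n. virginia)", "TermType": "ondemand"},
]

def price(vm):
    """Left-join semantics: the matched price, or None when nothing matches."""
    key = (vm["Instance Type"], vm["Operating System"], vm["Location"], vm["TermType"])
    return price_sheet.get(key)

for vm in inventory:
    print(vm["hostname"], price(vm))
```

Rows that come back with no price (like the invalid instance type above) are exactly the rows the validation columns in the input sheet are meant to catch.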

Download the Source Sheet here.

We will come back with more details and steps on the bulk sizing and pricing exercise in our next blog.

40% faster deployments post migrating to cloud along with DevOps transformation

By | Cloud Case Study | No Comments

Customer: The fastest-growing cinema business in the Middle East


The customer, the fastest-growing cinema business in the Middle East, intends to migrate their core movie ticket management and booking application to AWS. The application currently runs in a colocation data center and needs to be migrated for better scalability and availability, followed by implementation of DevOps optimisation, disaster recovery, log analysis, and managed services.

  • Case – Migration
  • Type –  RePlatform
  • Number of VMs – 50
  • Number of applications migrated – 5
  • Approximate size of DB – 250 GB

About Customer

The customer is the fastest-growing cinema business in the Middle East, part of the leading shopping mall, communities, retail, and leisure pioneer across the Middle East, Africa, and Asia. They operate over 300 screens across UAE, Oman, Bahrain, Qatar, Lebanon and Egypt, and will expand to own and manage 600 screens by the year 2020.

Problem Statement

The customer is planning to migrate their core movie ticket management and booking application from their colocation data center (DC) to AWS for better availability and scalability. Migration to AWS was to facilitate the customer to move their production workload as-is without many architectural changes and then focus on DevOps optimisation.

Powerup proposed a 2 phased approach in migrating the customer applications to AWS.

  • Phase 1 – As-is migration of applications from on-premise set up to AWS cloud along with the migration of databases for better scalability and availability.
  • Phase 2 – Implementation of DevOps optimisation.

Post this, Powerup’s scope of work for the customer also included Disaster Recovery (DR) implementation, setting up EKK for log analysis and AWS managed services.

Proposed Solution


Powerup will study the customer’s environment and prepare a blueprint of the overall architecture. They will identify potential server and database failure points and accordingly plan the automation of backups.

Powerup to work in coordination with the customer team to:

  • Export application and data from the on-premise architecture to AWS using either Amazon Import/Export functionality or over the internet.
  • Restore data on the cloud and enable database replication between on-premise and Amazon data stores to identify differential data.
  • Implement the monitoring agents and configure backups.
  • Conduct load testing if required, as well as system and user acceptance tests, to identify and rectify vulnerabilities.

Post-deployment and stabilization, Powerup completed the automation of the infrastructure using AWS CloudFormation and code deployment automation to save operational time and effort.


Post phase 1 as-is migration of the customer’s applications, Powerup’s DevOps team will perform weekly manual and automated audits and share reports with the customer team.
Weekly reports on consolidated uptime of applications, total tickets logged, issue resolution details and actionable plans will be shared with the customer team. Powerup will also run a Vulnerability Assessment & Penetration Testing (VAPT) on cloud coupled with quarterly Disaster recovery (DR) drills for one pre-selected application in the cloud platform to ensure governance is in place.
DevOps is also responsible for seamless continuous integration (CI), typically handled by managing a single source repository, automating the build, tracking build changes and progress, and finally automating the deployment.

Disaster Recovery (DR)

Powerup to understand the customer’s compliance requirements and Recovery Point Objective (RPO) and Recovery Time Objective (RTO) expectations before designing the DR strategy.

  • Configure staging VPC, subnets and the entire network set up as per current architecture.
  • Set up network access controls, create NAT gateway and provision firewall for the DR region.
  • Initiate the CloudEndure console, enable replication to the AWS staging server, and create failover replication to the DR site from the CloudEndure dashboard to conduct DR drills.
  • Verify and analyze RTO and RPO requirements.

EKK set up

The AWS EKK stack (Amazon Elasticsearch Service, Amazon Kinesis and Kibana) is Amazon’s alternative to the open-source ELK stack for ingesting and visualizing data logs. Powerup’s scope involved gathering information on the environment and providing access to relevant users to create the Amazon Elasticsearch Service and Amazon Kinesis services. The intent was to install and configure the Kinesis agent on the application servers in order to push the data logs and validate the log stream to the Kibana dashboard. This setup would help configure error and failure logs, alerts, anti-virus logs and transition failures. The EKK solution provides for analyzing logs and debugging applications; overall, it serves as a managed log aggregation system. Know more on EKK implementation here.

Managed services

The first step is to study the customer’s cloud environment to identify potential failure points and loopholes if any.

Powerup DevOps team will continuously monitor the customer’s cloud infrastructure health by keeping a check on CPU, memory and storage utilization as well as URL uptime and application performance.

OpenVPN will be configured for the secure exchange of data between the customer’s production setup on AWS and the individual cinema locations. The web, application and database servers are implemented in High Availability (HA) mode. Databases are implemented on Amazon EC2 with replication enabled to a standby server. Relational database service (RDS) may be considered if there are no dependencies from the application end.

Security in the cloud is a shared responsibility between the customer, cloud platform provider and Powerup. Powerup will continuously analyze and assist the customer with best practices around application-level security.

The security monitoring scope includes creating an AWS organization account and proxy accounts with multi-factor authentication (MFA) for controlled access on AWS. Powerup to also set up Identity and Access Management (IAM) policies, security groups and network components on the customer’s behalf.
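A common pattern for enforcing MFA-backed access is an IAM policy along these lines (a generic sketch of the standard "deny without MFA" pattern, not the customer's actual policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptIAMWithoutMFA",
      "Effect": "Deny",
      "NotAction": "iam:*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

Attached to a group or user, this denies every non-IAM action unless the request was authenticated with MFA.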

The Powerup team helped set up VPN tunnels from AWS to the customer’s 31 theatre locations.

Enable server-side encryption and manage Secure Sockets Layer (SSL) certificates for the website. Monitor logs for security analysis, resource change tracking, and compliance auditing. Powerup DevOps team to track and monitor firewall for the customer’s environment and additionally mitigate distributed denial-of-service (DDoS) attacks on their portals and websites.

Anti-virus tools and intrusion detection/prevention to be set up by Powerup along with data encryption at the server as well as storage level. Powerup DevOps team will continuously monitor the status of automated and manual backups and record the events in a tracker. In case of missed automatic backups, a manual backup will be taken as a corrective step. Alerts to also be configured for all metrics monitored at the cloud infrastructure level and application level.

Business Benefits

  • Migration helped the customer achieve better scalability and availability.
  • DevOps tooling helped automate manual tasks and facilitated seamless continuous delivery, while AWS cloud managed services enabled the customer to reduce operational costs and maximize workload efficiency.

Cloud platform


Technologies used

EC2, S3, ALB, Auto Scaling, CodeDeploy, CloudFormation, MS SQL, JBoss, Infinispan Cluster, Windows, AWS Export/Import.

Data Analytics helping real-time decision making

By | Data Case Study | 2 Comments

Customer: The fastest-growing cinema business in the Middle East


The customer, the fastest-growing cinema business in the Middle East, wanted to manage the logs from multiple environments by setting up centralized logging and visualization. This was done by implementing the EKK (Amazon Elasticsearch, Amazon Kinesis and Kibana) solution in their AWS environment.

About Customer

The customer is the cinema arm of a leading shopping mall, retail and leisure pioneer across the Middle East and North Africa. They are the Middle East’s most innovative and customer-focused exhibitor, and the fastest-growing cinema business in the MENA region.

Problem Statement

The customer’s applications generate huge amounts of logs across multiple servers. When an error occurred in an application, it was difficult for the development team to retrieve or view the logs in real time to troubleshoot the issue. They did not have a centralized location to visualize logs and get notified when errors occurred.

In the ticket booking scenario, analyzing the logs generated by the application enables valuable features, such as notifying developers that an error occurred on the application server while customers were booking tickets. If application logs can be analyzed and monitored in real time, developers can be notified immediately to investigate and fix issues.

Proposed Solution

Powerup built a log analytics solution on AWS using Elasticsearch as the real-time analytics engine, with Amazon Kinesis Firehose pushing the data to Elasticsearch. In some scenarios, the customer wanted to transform or enhance the streaming data before it was delivered. Since the application logs sit on the servers in an unstructured format, the customer wanted to filter the unstructured data and transform it into JSON before delivering it to Amazon Elasticsearch Service. Logs from the web, app and DB tiers were pushed to Elasticsearch for all six applications.

Amazon Kinesis Agent

  • The Amazon Kinesis Agent is a stand-alone Java software application that offers an easy way to collect and send data to Kinesis Streams and Kinesis Firehose.
  • AWS Kinesis Firehose Agent – daemon installed on each EC2 instance that pipes logs to Amazon Kinesis Firehose.
  • The agent continuously monitors a set of files and sends new data to your delivery stream. It handles file rotation, checkpointing, and retry upon failures. It delivers all of your data in a reliable, timely, and simple manner.

Amazon Kinesis Firehose

  • Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards that you’re already using today.
  • Kinesis Data Firehose Stream – endpoint that accepts the incoming log data and forwards to ElasticSearch

Data Transformation

Kinesis Data Firehose can invoke your Lambda function to transform incoming source data and deliver the transformed data to destinations. When you enable Kinesis Data Firehose data transformation, Kinesis Data Firehose buffers incoming data up to 3 MB by default. Kinesis Data Firehose then invokes the specified Lambda function with each buffered batch using the AWS Lambda synchronous invocation model. The transformed data is sent from Lambda back to Kinesis Data Firehose, which then sends it to the destination when the specified destination buffering size or buffering interval is reached, whichever happens first.
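A minimal sketch of such a transformation Lambda follows the Firehose contract of echoing each incoming `recordId` along with a `result` status and the re-encoded data. The wrapping of each log line into a `{"message": ...}` JSON object is illustrative; the actual function would apply the customer's own parsing rules:

```python
import base64
import json

def lambda_handler(event, context):
    """Firehose transformation sketch: decode each record, wrap the raw
    text in a JSON object, and re-encode it for delivery."""
    output = []
    for record in event["records"]:
        raw = base64.b64decode(record["data"]).decode("utf-8")
        transformed = json.dumps({"message": raw.strip()}) + "\n"
        output.append({
            "recordId": record["recordId"],  # must echo the incoming id
            "result": "Ok",                  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```

Records marked `"Dropped"` are filtered out of the stream, while `"ProcessingFailed"` records are sent to the configured error destination.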


  • Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
  • Store, analyze, and correlate application and infrastructure log data to find and fix issues faster and improve application performance. You can receive automated alerts if your application is underperforming, enabling you to proactively address any issues.
  • Provide a fast, personalized search experience for your applications, websites, and data lake catalogs, allowing users to quickly find relevant data.
  • Collect logs and metrics from your servers, routers, switches, and virtualized machines to get comprehensive visibility into your infrastructure, reducing mean time to detect (MTTD) and resolve (MTTR) issues and lowering system downtime.


Kibana is an open-source data visualization and exploration tool used for log and time-series analytics, application monitoring, and operational intelligence use cases. It offers powerful and easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in geospatial support. Also, it provides tight integration with Elasticsearch, a popular analytics and search engine, which makes Kibana the default choice for visualizing data stored in Elasticsearch.

  • Using Kibana’s pre-built aggregations and filters, you can run a variety of analytics like histograms, top-N queries, and trends with just a few clicks.
  • You can easily set up dashboards and reports and share them with others. All you need is a browser to view and explore the data.
  • Kibana comes with powerful geospatial capabilities so you can seamlessly layer in geographical information on top of your data and visualize results on maps.

Ingesting data to Elasticsearch using Amazon Kinesis Firehose

Kinesis Data Firehose is part of the Kinesis streaming data platform, along with Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data Analytics. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.


Record

The data of interest that your data producer sends to a Kinesis Data Firehose delivery stream. A record can be as large as 1,000 KB.

Data producer

Producers send records to Kinesis Data Firehose delivery streams. For example, a web server that sends log data to a delivery stream is a data producer. You can also configure your Kinesis Data Firehose delivery stream to automatically read data from an existing Kinesis data stream, and load it into destinations.

Writing Logs to Kinesis Data Firehose Using Kinesis Agent

  • Amazon Kinesis Agent is a standalone Java software application that offers an easy way to collect and send data to Kinesis Data Firehose. The agent continuously monitors a set of files and sends new data to your Kinesis Data Firehose delivery stream.
  • The agent handles file rotation, checkpointing, and retry upon failures. It delivers all of your data in a reliable, timely, and simple manner. It also emits Amazon CloudWatch metrics to help you better monitor and troubleshoot the streaming process.
  • The Kinesis Agent has been installed on all the production server environments such as web servers, log servers, and application servers. After installing the agent, we need to configure it by specifying the log files to monitor and the delivery stream for the data. After the agent is configured, it durably collects data from those log files and reliably sends it to the delivery stream.
  • Since the data on the servers is unstructured, and the customer wanted to send a specific format of data to Elasticsearch and visualize it on Kibana, we configured the agent to preprocess the data and deliver the preprocessed output to Amazon Kinesis Firehose. The preprocessing configuration used in the Kinesis agent is shown below.


  • Since the data in the logs is unstructured and we needed to filter specific records out of it, we used a match pattern to filter the records before sending them to Kinesis Firehose.
  • The agent is configured to capture the unstructured data using a regular expression and send it to Amazon Kinesis Firehose.

An example of how we filtered the data and sent it to Kinesis Firehose:

  • LOGTOJSON configuration with Match Pattern

Sample Kinesis agent configuration:


{
    "optionName": "LOGTOJSON",
    "logFormat": "COMMONAPACHELOG",
    "matchPattern": "^([\\d.]+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(.+?)\" (\\d{3})",
    "customFieldNames": ["host", "ident", "authuser", "datetime", "request", "response"]
}


The record on the server before conversion:

 - - [27/Oct/2000:09:27:09 -0400] "GET /java/javaResources.html HTTP/1.0" 200

After conversion:





{
    "datetime": "27/Oct/2000:09:27:09 -0400",
    "request": "GET /java/javaResources.html HTTP/1.0",
    "response": "200"
}



The record on the server has been converted to JSON format. The match pattern captures only the data that matches the regular expression and sends it to Amazon Kinesis Firehose, which delivers it to Elasticsearch, where it can be visualized in Kibana.
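The LOGTOJSON conversion can be reproduced with Python's `re` module, using the same match pattern and field names as the agent configuration above (the host IP in the sample line below is a placeholder, since the record shown earlier omits it):

```python
import json
import re

# Same match pattern and field names as the Kinesis agent configuration.
MATCH_PATTERN = r'^([\d.]+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(.+?)" (\d{3})'
FIELD_NAMES = ["host", "ident", "authuser", "datetime", "request", "response"]

def log_to_json(line):
    """Return the log line as a JSON string, or None if it doesn't match."""
    m = re.match(MATCH_PATTERN, line)
    if not m:
        return None  # the agent skips records that fail the match pattern
    return json.dumps(dict(zip(FIELD_NAMES, m.groups())))

# Illustrative record (the host IP is a placeholder):
line = '127.0.0.1 - - [27/Oct/2000:09:27:09 -0400] "GET /java/javaResources.html HTTP/1.0" 200'
```

Lines that do not match the pattern are dropped, which is how the filtering described above works in practice.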

Business Benefits

  • Powerup successfully implemented a real-time centralized log analytics solution using Amazon Kinesis Firehose and Elasticsearch.
    • The Kinesis agent was used to filter the application logs, and Kinesis Firehose streamed them to Elasticsearch.
    • Separate indexes were created in Elasticsearch for all 6 applications, based on access logs and error logs.
    • A total of 20 dashboards were created in Kibana based on error types, for example 4xx errors, 5xx errors, cron failures and auth failures.
    • Alerts were sent to the developers via AWS SNS when configured thresholds were breached, so that they could take immediate action on errors generated by the application and server.
    • Developer log analysis time decreased from a couple of hours to a few minutes.
  • The EKK setup implemented for the customer is a complete log-analysis platform for searching, analyzing and visualizing log data generated by different machines. It centralizes logging to help identify server- and application-related issues across multiple servers in the customer environment and correlate the logs within a particular time frame.
  • The data analysis and visualization capabilities of the EKK setup let management and the respective stakeholders view business reports from the various application streams, easing business decision-making.

Cloud platform


Technologies used

Lambda, Kibana, EC2, Kinesis.

Using AI to make roller coasters safer

By | AI Case Study, Artificial Intelligence | No Comments

Customer: One of the leading integrated resorts


The customer is an integrated resort on an island in Singapore. They offer world-class attractions, one of which is Battlestar Galactica, the most enticing duelling roller coaster ride at the resort. They decided to undertake preventive maintenance of the ride’s wheels to ensure top-class safety, and planned to adopt a Machine Learning (ML) based solution on Google Cloud Platform (GCP).

Problem Statement

  • The customer’s Battlestar Galactica ride is financially demanding and requires high maintenance.
  • The wheel inspection process is a time-consuming, high-maintenance manual job.
  • Deciding whether a wheel is good or bad relies on human judgement and experts’ experience.

The ultimate goal was to remove human intervention and automate the decision making on the identification of a bad wheel using machine learning. The machine learning model needed to be trained on currently available data and ingest real-time data over a period of time to help identify patterns of range and intensity values of wheels. This would in turn help in identifying the wheel as good or bad at the end of every run.

Proposed Solution

Project pre-requisites

  • Ordering of .dat files generated by the SICK cameras to be maintained in a single date-time format for appropriate Radio-frequency identification (RFID) wheel mapping.
  • Bad wheel data to be stored in the same format as good wheel data (.dat files) in order to train the classifier.
  • The dashboard to contain the trend of intensity and height values.
  • A single folder to be maintained for Cam_01 and another for Cam_02; folder names and locations should not change.


  • Data ingestion and storage

An image capturing software tool named Ranger Studio was used to capture complete information on the wheels. The Ranger Studio onsite machine generates .dat files for the wheels after every run and stores them on a local machine. An upload service picks these .dat files from the storage location at pre-defined intervals and runs C# code on them to produce CSV output with range and intensity values.

CSV files are pushed to Google Cloud Storage (GCS) using the Google Pub/Sub real-time messaging service. The publisher publishes files from the local machine using two separate Python scripts, one for Cam01 and one for Cam02, and the subscriber then subscribes to the published files for each camera.

  • Data Processing

Powerup is responsible for ingesting the data into Cloud Storage or Cloud SQL in the defined format. Processing of the data includes the timestamp and wheel run count. A pub tracker and a sub tracker are maintained to track the files for both cameras, so that the subscribed files can be stored on GCS for each camera separately. After the CSV data is processed, it is removed from the local machine via a custom script to avoid memory issues.
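The pub/sub file tracking can be sketched as a simple in-memory checkpoint per camera (the class and method names are illustrative, and the actual trackers would persist their state rather than keep it in memory):

```python
class FileTracker:
    """Tracks which CSV files have already been published per camera,
    so a re-run does not push duplicates to GCS."""

    def __init__(self):
        self._seen = {"Cam_01": set(), "Cam_02": set()}

    def pending(self, camera, files):
        """Return the files not yet published for this camera, in order."""
        return [f for f in files if f not in self._seen[camera]]

    def mark_published(self, camera, filename):
        """Checkpoint a file once it has been published."""
        self._seen[camera].add(filename)
```

On each upload interval, the service asks the tracker for pending files, publishes them, and checkpoints each one, which keeps publisher and subscriber in step.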

  • Data modelling Cloud SQL

Once the data is processed, Powerup to design the data model in Cloud SQL, where all the data points will be stored in relational format.

The CSV files of individual wheels are then used to train the classifier model, which is built with the Keras deep learning API. The trained classifier generates a prediction model (.pkl file) to identify good and bad wheels. The prediction model resides on a GCP VM; the generated CSV files are passed through it and classified as good or bad based on an accuracy value.

  • Big Query and ML Model

Once the prediction for a wheel is done, the predicted accuracy score, timestamp and wheel information are stored in BigQuery tables. The average wheel accuracy is then displayed on Google Data Studio.

Powerup to ensure data performance is optimized via tuning and to build the ML model. This would enable the customer to accumulate large volumes of height and intensity data, after which the ML model is scored with the new data.

The current accuracy threshold for the SMS trigger is set at 70; an SMS is triggered if the accuracy value falls below 70. Prediction accuracy is expected to improve over a period of 6 months, once enough bad wheel data has been reported to train the ML classifier model.

An SMS will also be triggered if a file is not received from the local machine into Google Cloud Storage via Google Pub/Sub. The reason a file was not received needs to be checked by the client’s SICK team, as it may be due to multiple causes, such as the source file not being generated because of camera malfunction, system shutdown or maintenance. The Powerup team is to be informed in such cases, as a restart of instances may be required. Twilio is the service used for SMS, whereas SendGrid is used for email notifications.
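The two alerting rules above can be summarized as a small decision function (a pure sketch; the actual system hands the returned reason to Twilio for SMS or SendGrid for email):

```python
ACCURACY_THRESHOLD = 70  # current SMS trigger threshold

def should_alert(accuracy=None, file_received=True):
    """Return the alert reason, or None if no alert is needed.

    An alert fires when the predicted accuracy falls below the
    threshold, or when an expected file never reaches Cloud Storage.
    """
    if not file_received:
        return "file-missing"
    if accuracy is not None and accuracy < ACCURACY_THRESHOLD:
        return "low-accuracy"
    return None
```

Keeping the decision logic separate from the notification channel makes it easy to adjust the threshold as the model's accuracy improves.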

  • Security and Deployment

Powerup to build a secure environment for all third-party integrations, deploy User Acceptance Test (UAT) environments, conduct regression tests and provide Go-Live support to off-site applications. The number of servers and services supported in production was 10, where support included server management in terms of security, network, DevOps, backup, DR and audit. Support also included adjusting the ML models to improve training.



Since the request payload size was too large, Google ML / Online Predictor could not be used. A custom prediction model was built with Keras to overcome this.

Artificial Intelligence

Cloud platform

Google Cloud Platform.

Technologies used

Cloud Storage, BigQuery, Data Studio, Compute Engine.

Business Benefits

Powerup has successfully trained the classifier model with a limited set of real-time good and bad wheel data. The accuracy of the model, which stands at 60% with the current data, is expected to improve over time, ensuring cost-effectiveness and world-class safety.