Simplify Cloud Transformation with Tools

By | Powerlearnings | No Comments

Compiled by Kiran Kumar, Business Analyst at Powerup Cloud Technologies


The cloud can bring a lot of benefits to organizations, including operational resilience, business agility, cost-efficiency, scalability, and staff productivity. However, moving to the cloud can be a daunting task, with many loose ends to worry about such as downtime, security, and cost. Even some of the top cloud teams can be left clueless and overwhelmed by the scale of the project and the decisions that need to be taken.

But the cloud marketplace has matured greatly, and there are many tools and solutions that can automate tasks or assist you in your expedition. These solutions can significantly reduce the complexity of the project. Knowing the value cloud tools bring to your organization, I have listed tools that can assist you in each phase of your cloud journey. That said, this blog serves only as a representation of the types of products and services available for easy cloud transformation.

Cloud Assessment


LTI RapidAdopt helps you fast-track adoption of various cloud models. Based on the overall scores, an accurate Cloud strategy is formulated. The Cloud Assessment Framework enables organizations to assess their cloud readiness roadmap.

SurPaaS® Assess™

A complete Cloud migration feasibility assessment platform that generates multiple reports after analyzing your application, helping you understand the factors involved in migrating it to the Cloud. These reports help you decide on your ideal Cloud migration plan and accelerate your Cloud journey.


Make data-driven cloud decisions with confidence: high-precision analytics and powerful automation make planning simple and efficient, accelerating your migration to the cloud.

Cloud Recon

Inventory applications and workloads to develop a cloud strategy with detailed ROI and TCO benefits.

NetApp Cloud Assessment Tool

The Cloud Assessment tool will monitor your cloud storage resources, optimize cloud efficiency and data protection, identify cost-saving opportunities, and reduce overall storage spend so you can manage your cloud with confidence.

Risc Networks

RISC Networks’ breakthrough SaaS-based analytics platform helps companies chart the most efficient and effective path from on-premise to the cloud.


With migVisor, you’ll know exactly how difficult (or easy) your database migration will be. migVisor analyzes your source database configuration, attributes, schema objects, and proprietary features.


SurPaaS® Migrate™

With its advanced Cloud migration methodologies, SurPaaS® Migrate™ enables you to migrate any application to the Cloud without difficulty. It provides various intelligent options for migrating applications/VMs to the Cloud. Its robust migration methodologies allow you to migrate multiple servers in parallel, with clear actionable reporting in case of any migration issues.

SurPaaS® Smart Shift™

Smart Shift™ migrates an application to Cloud with a re-architected deployment topology based on different business needs such as scalability, performance, security, redundancy, high availability, backup, etc.

SurPaaS® PaaSify™

The only Cloud migration platform that lets you migrate any application and its databases to the required PaaS services on the Cloud. Choose different PaaS services for the different workloads available in your application and migrate to the Cloud with a single click.


Simplifies, expedites, and automates migrations from physical, virtual, and cloud-based infrastructure to AWS.

SurPaaS® Containerize™

Allows you to identify application workloads that are compatible with containerization using its comprehensive knowledge base system. Choose workloads that need to be containerized and select any one of the topologies from SurPaaS® multiple container deployment architecture suggestions.

Carbonite Migrate

Structured, repeatable data migration from any source to any target with near zero data loss and no disruption in service.

AWS Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.

Azure Migrate

A central hub of Azure cloud migration services and tools to discover, assess, and migrate workloads to the cloud.

Cloud Sync

An easy to use cloud replication and synchronization service for transferring NAS data between on-premises and cloud object stores.


Migrate multiple cloud workloads with a single solution. MigrationWiz—the industry-leading SaaS solution—enables you to migrate email and data from a wide range of Sources and Destinations.

Cloud Pilot

Analyze applications at the code level to determine Cloud readiness and conduct migrations for Cloud-ready applications.


PaaSify is an advanced solution that runs through the application code and evaluates the migration of apps to the cloud. The solution analyzes the code across 70+ parameters, including session objects, third-party dependencies, authentication, database connections, and hard-coded links.

Application Development

DevOps on AWS

Amazon Elastic Container Service

Production Docker Platform

AWS Lambda

Serverless Computing

AWS CloudFormation

Templated Infrastructure Provisioning

AWS OpsWorks   

Chef Configuration Management

AWS Systems Manager

Configuration Management

AWS Config

Policy as Code

Amazon CloudWatch

Cloud and Network Monitoring


Distributed Tracing

AWS CloudTrail

Activity & API Usage Tracking

AWS Elastic Beanstalk

Run and Manage Web Apps

AWS CodeCommit

Private Git Hosting

Azure DevOps service

Azure Boards

Deliver value to your users faster using proven agile tools to plan, track, and discuss work across your teams.

Azure Pipelines

Build, test, and deploy with CI/CD which works with any language, platform, and cloud. Connect to GitHub or any other Git provider and deploy continuously.

Azure Repos

Get unlimited, cloud-hosted private Git repos and collaborate to build better code with pull requests and advanced file management.

Azure Test Plans

Test and ship with confidence using manual and exploratory testing tools.

Azure Artifacts

Create, host, and share packages with your team and add artifacts to your CI/CD pipelines with a single click.


Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Infrastructure Monitoring & Optimization

SurPaaS® Optimo™

Realizing Continuous Cloud Landscape Optimization, with AI-driven advisories and Integrated Cloud management actions to reduce your Cloud costs.

Carbonite Availability

Carbonite Availability’s continuous replication technology maintains an up-to-date copy of your operating environment without taxing the primary system or network bandwidth.


Embrace AIOps for the technical agility, speed and capacity needed to manage today’s complex environments.

Cloud Insights

With Cloud Insights, you can monitor, troubleshoot and optimize all your resources including your public clouds and your private data centers.

TrueSight Operations Management

Machine learning and advanced analytics for holistic monitoring and event management

BMC Helix Optimize

SaaS solution that deploys analytics to continuously optimize resource capacity and cost

Azure Monitor 

Azure Monitor collects monitoring telemetry from a variety of on-premises and Azure sources. Management tools, such as those in Azure Security Center and Azure Automation, also push log data to Azure Monitor.

Amazon CloudWatch

CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.

TrueSight Orchestration

Coordinate workflows across applications, platforms, and tools to automate critical IT processes

BMC Helix Remediate

Automated security vulnerability management and simplified patching for hybrid clouds

BMC Helix Discovery

Automatic discovery of data center and multi-cloud inventory, configuration, and relationship data

Cloud Manager  

Cloud Manager provides IT experts and cloud architects with a centralized control plane to manage, monitor and automate data in hybrid-cloud environments, providing an integrated experience of NetApp’s Cloud Data Services.

Application Management

SurPaaS® Operato™

Visualize your Application Landscape on the Cloud and manage it effectively with ease, using multiple options to enhance your applications.

SurPaaS® Moderno™

SurPaaS® can quickly assess your applications and offers a path to move your workloads within hours to DBaaS, Serverless App Services, Containers, and Kubernetes Services.

SurPaaS® SaaSify™

Faster and Smarter Way to SaaS. SaaSify your Applications and Manage their SaaS Operations Efficiently.


Best-in-class APM from the category leader. Advanced observability across cloud and hybrid environments, from microservices to mainframe. Automatic full-stack instrumentation, dependency mapping and AI-assisted answers detailing the precise root-cause of anomalies, eliminating redundant manual work, and letting you focus on what matters. 

New Relic APM

APM agents give real-time observability matched with trending data about your application’s performance and the user experience. Agents reveal what is happening deep in your code with end to end transaction tracing and a variety of color-coded charts and reports.

DataDog APM

Datadog APM provides end-to-end distributed tracing from frontend devices to databases, with no sampling. Distributed traces correlate seamlessly with metrics, logs, browser sessions, code profiles, synthetics, and network performance data, so you can understand service dependencies, reduce latency, and eliminate errors.

SolarWinds Server & Application Monitor 

End-To-End Monitoring

Server capacity planning 

Custom app monitoring 

Application dependency mapping 


Actively monitor, analyze and optimize complex application environments at scale.


Carbonite Recover

Carbonite® Recover reduces the risk of unplanned downtime by securely replicating critical systems to the cloud, providing an up-to-date copy for immediate failover.

Carbonite Server

All-in-one backup and recovery solution for physical, virtual and legacy systems with optional cloud failover.

Carbonite Availability

Continuous replication for physical, virtual and cloud workloads with push-button failover for keeping critical systems online all the time.

Cloud Backup Service

Cloud Backup Service delivers seamless and cost-effective backup and restore capabilities for protecting and archiving data.


Scalable, cost-effective business continuity for physical, virtual, and cloud servers


Reduce cost and complexity of application migrations and data protection with Zerto’s unique platform utilizing Continuous Data Protection. Orchestration built into the platform enables full automation of recovery and migration processes. Analytics provides 24/7 infrastructure visibility and control, even across clouds.

Azure Site Recovery

 Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location

Cloud Governance and Security

CloudHealth Multicloud Platform

Transform the way you operate in the public cloud

CloudHealth Partner Platform

Deliver managed services to your customers at scale

CloudHealth Secure State

Mitigate security and compliance risk with real-time security insights

Cloud Compliance

Automated controls for data privacy regulations such as the GDPR, CCPA, and more, driven by powerful artificial intelligence algorithms.

Azure Governance Tools

Get transparency into what you are spending on cloud resources. Set policies across resources and monitor compliance, enabling quick, repeatable creation of governed environments.


Splunk Security Operations Suite combines industry-leading data, analytics and operations solutions to modernize and optimize your cyber defenses.


An autonomous cloud governance platform built to manage multi-cloud environments. It performs a real-time well-architected audit on all your clouds, giving you a comprehensive view of best-practice adherence in your cloud environment, with additional emphasis on security, reliability, and cost. The enterprise version of Cloud Ensure, the hosted version of the original SaaS platform, is best suited for organizations wanting in-house governance and monitoring of their cloud portfolio.

Azure Cache for Redis: Connecting to SSL enabled Redis from Redis-CLI in Windows & Linux


Written by Tejaswee Das, Sr. Software Engineer, Powerupcloud Technologies

Collaborator: Layana Shrivastava, Software Engineer


This blog will guide you through the steps to connect to an SSL-enabled remote Azure Redis Cache from redis-cli. We will demonstrate how to achieve this connectivity on both Windows & Linux systems.

Use Case

While connecting to non-SSL redis is straightforward and works great for Dev & Test environments, for higher environments – Stage & Prod – security should always be the priority. For that reason, it is advisable to use SSL-enabled redis instances. The default non-SSL port is 6379 & the SSL port is 6380.
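For context on why the tunnel is needed at all: redis-cli speaks the plain-text Redis protocol (RESP), and stunnel simply wraps that traffic in TLS toward port 6380. Below is a minimal Python sketch of the same idea using only the standard library; the hostname and password are placeholders, not real cache details.

```python
import socket
import ssl

def encode_command(*parts):
    """Encode a Redis command in the RESP wire format that redis-cli speaks."""
    msg = [b"*%d\r\n" % len(parts)]
    for part in parts:
        data = part.encode()
        msg.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(msg)

def ping_over_tls(host, password, port=6380):
    """Open a TLS connection to the cache (what stunnel does on our behalf)
    and issue AUTH followed by PING."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(encode_command("AUTH", password))
            tls.sendall(encode_command("PING"))
            return tls.recv(256)  # expect +OK / +PONG replies

# Hypothetical usage (placeholder hostname, will not resolve):
# print(ping_over_tls("mycache.redis.cache.windows.net", "xxxxxxxx"))
```

This only illustrates why the plain client needs a TLS wrapper; the stunnel steps below achieve the same thing without touching the client.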


Step 1:  Connecting to non-SSL redis is easy

PS C:\Program Files\Redis> .\redis-cli.exe -h -p 6379 -a xxxxxxxx

Step 2:  To connect to SSL redis, we will need to create a secure tunnel. Microsoft has recommended using stunnel to achieve this. You can download the applicable package from the link below.

We are using stunnel-5.57-win64-installer.exe here

2.1 Agree License and start installation

2.2 Specify User

2.3 Choose components

2.4 Choose Install Location

2.5 This step is optional. You can fill in details or just press Enter to continue.

Choose FQDN as localhost

2.6 Complete setup and start stunnel

2.7 On the bottom taskbar, right corner, click on the stunnel icon (green dot) → Edit Configuration

2.8 Add this block in the config file. You can add it at the end.

client = yes
accept =
connect =

2.9 Open stunnel again from the taskbar → Right-click → Reload Configuration for the changes to take effect. Double-click on the icon to see the connection log.

Step 3: Go back to your redis-cli.exe location in Powershell and try connecting now

PS C:\Program Files\Redis> .\redis-cli.exe -p 6380 -a xxxxxxxx


Step 1:  Installing & configuring stunnel in Linux is pretty easy. Follow the steps below. You are advised to run these commands with admin privileges.

1.1 Update & upgrade existing packages to the latest version.

  • apt update
  • apt upgrade -y

1.2 Install redis server. You can skip this if you already have redis-cli installed in your system/VM

  • apt install redis-server
  • To check redis status : service redis status
  • If the service is not in active (running) state: service redis restart

1.3 Install Stunnel for SSL redis

●    apt install stunnel4
●    Open file /etc/default/stunnel4 and change ENABLED=0 to ENABLED=1 (to auto-start the service)
●    Create a redis conf for stunnel. Open /etc/stunnel/redis.conf with your favorite editor and add this code block:
client = yes
accept =
connect =
●    Check status: systemctl status stunnel4.service
●    Restart the stunnel service: systemctl restart stunnel4.service
●    Reload configuration: systemctl reload stunnel4.service
●    Check that it is running: systemctl status stunnel4.service

1.4 Check whether Stunnel is listening to connections

  • netstat -tlpn | grep

1.5 Try connecting to redis now

>redis-cli -p 6380 -a xxxxxxxx

Success! You are now connected.


So finally, we are able to connect to SSL-enabled redis from redis-cli.

This makes our infrastructure more secure.

Hope this was informative. Do leave your comments for any questions.


AWS Lambda Java: Sending S3 event notification email using SES – Part 2


Written by Tejaswee Das, Software Engineer, Powerupcloud Technologies

Collaborator: Neenu Jose, Senior Software Engineer


In the first part of this series, we discussed in depth how to create a Lambda deployment package for Java 8/11 using Maven in Eclipse, along with S3 event triggers. Know more here.

In this post, we will showcase how we can send emails using AWS Simple Email Service (SES) with S3 Event triggers in Java.

Use Case

One of our clients had their workloads running on Azure Cloud. They had a few serverless applications in Java 8 in Azure Functions and wanted to upgrade from Java 8 to Java 11. Since Java 11 was not supported at the time (Java 11 for Azure Functions has recently been released in Preview), they wanted to try out other cloud services – that’s when AWS Lambda came into the picture. We did a POC feasibility check for Java 11 applications running on AWS Lambda.

Step 1:

Make sure you have followed Part 1 of this series. This post is a continuation of the first part, so it will be difficult to follow Part 2 separately.

Step 2:

Add SES Email Addresses

Restrictions are added to all SES accounts to prevent fraud and abuse. For this reason, all test email addresses that you intend to use – both sender & receiver – must be added to SES, which by default places new accounts in the SES sandbox.

2.1 To add email addresses, go to AWS Console → Services → Customer Engagement → Simple Email Service (SES)

2.2  SES Home → Email Addresses → Verify a New Email Address

2.3 Add Addresses to be verified

2.4 A verification email is sent to the added email address

2.5 Until the email address is verified, it cannot be used to send or receive emails. The status shown in SES is pending verification (resend).

2.6 Go to your email client inbox and click on the URL to authorize your email address

2.7 On successful verification, we can check the new status in SES Home: verified.

Step 3:

In the pom.xml, add the below Maven dependencies. To use SES, we will require aws-java-sdk-ses.

Below is the relevant dependency section of our pom.xml for reference (the version element is a placeholder; use the latest release of the SDK):

<dependencies>
    <!-- aws-java-sdk-ses from Maven Central -->
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-ses</artifactId>
        <version>1.x.y</version>
    </dependency>
</dependencies>

Step 4:

Edit your handler class with the latest code

4.1 Add email components as string

final String FROM = "";
final String TO = "";
final String SUBJECT = "Upload Successful";
final String HTMLBODY = key+" has been successfully uploaded to "+bucket;
final String TEXTBODY = "This email was sent through Amazon SES using the AWS SDK for Java.";

4.2 Create SES client

AmazonSimpleEmailService client = AmazonSimpleEmailServiceClientBuilder.standard()
                // Replace US_WEST_2 with the AWS Region you're using for
                // Amazon SES.
                .withRegion(Regions.US_WEST_2).build();
4.3 Send email using SendEmailRequest

SendEmailRequest request = new SendEmailRequest()
                .withDestination(
                    new Destination().withToAddresses(TO))
                .withMessage(new Message()
                    .withBody(new Body()
                        .withHtml(new Content()
                            .withCharset("UTF-8").withData(HTMLBODY))
                        .withText(new Content()
                            .withCharset("UTF-8").withData(TEXTBODY)))
                    .withSubject(new Content()
                        .withCharset("UTF-8").withData(SUBJECT)))
                .withSource(FROM);
client.sendEmail(request);

You can refer to the complete code below.

package com.amazonaws.lambda.demo;

import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder;
import com.amazonaws.services.simpleemail.model.Body;
import com.amazonaws.services.simpleemail.model.Content;
import com.amazonaws.services.simpleemail.model.Destination;
import com.amazonaws.services.simpleemail.model.Message;
import com.amazonaws.services.simpleemail.model.SendEmailRequest;

public class LambdaFunctionHandler implements RequestHandler<S3Event, String> {

    private AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();

    public LambdaFunctionHandler() {}

    // Test purpose only.
    LambdaFunctionHandler(AmazonS3 s3) {
        this.s3 = s3;
    }

    public String handleRequest(S3Event event, Context context) {
        context.getLogger().log("Received event: " + event);

        // Get the object from the event and show its content type
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();
        final String FROM = "";
        final String TO = "";
        final String SUBJECT = "Upload Successful";
        final String HTMLBODY = key + " has been successfully uploaded to " + bucket;
        final String TEXTBODY = "This email was sent through Amazon SES "
                + "using the AWS SDK for Java.";
        try {
            AmazonSimpleEmailService client =
                AmazonSimpleEmailServiceClientBuilder.standard()
                    // Replace US_WEST_2 with the AWS Region you're using for
                    // Amazon SES.
                    .withRegion(Regions.US_WEST_2).build();
            SendEmailRequest request = new SendEmailRequest()
                .withDestination(
                    new Destination().withToAddresses(TO))
                .withMessage(new Message()
                    .withBody(new Body()
                        .withHtml(new Content()
                            .withCharset("UTF-8").withData(HTMLBODY))
                        .withText(new Content()
                            .withCharset("UTF-8").withData(TEXTBODY)))
                    .withSubject(new Content()
                        .withCharset("UTF-8").withData(SUBJECT)))
                .withSource(FROM);
                // Add .withConfigurationSetName(CONFIGSET) to the chain above
                // if you are using a configuration set.
            client.sendEmail(request);
            System.out.println("Email sent!");
        } catch (Exception ex) {
            System.out.println("The email was not sent. Error message: "
                + ex.getMessage());
        }
        context.getLogger().log("Filename: " + key);
        try {
            S3Object response = s3.getObject(new GetObjectRequest(bucket, key));
            String contentType = response.getObjectMetadata().getContentType();
            context.getLogger().log("CONTENT TYPE: " + contentType);
            return contentType;
        } catch (Exception e) {
            context.getLogger().log(String.format(
                "Error getting object %s from bucket %s. Make sure they exist and"
                + " your bucket is in the same region as this function.", key, bucket));
            throw e;
        }
    }
}

Step 5:

Build the updated project and upload it to Lambda. Refer to Step 5 of Part 1.

Step 6:

To test this deployment, upload yet another new file to your bucket. Refer to Step 9 of blog Part 1.

On successful upload, SES sends an email with the details. Sample screenshot below.


S3 event notifications can be used for a variety of use-case scenarios. We have tried to showcase just one simple case. This can be used to monitor incoming files & objects in an S3 bucket and to trigger appropriate actions & transformations.
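For readers more comfortable outside Java, the bucket/key extraction the handler performs can be sketched in Python against a minimal sample event. The payload below is a trimmed, hypothetical S3 put notification, reduced to just the fields we read.

```python
import json

# Trimmed, hypothetical S3 put-event notification payload.
event = json.loads("""
{
  "Records": [
    {
      "s3": {
        "bucket": {"name": "my-demo-bucket"},
        "object": {"key": "report.csv"}
      }
    }
  ]
}
""")

# Same navigation as event.getRecords().get(0).getS3()... in the Java handler.
record = event["Records"][0]["s3"]
bucket = record["bucket"]["name"]
key = record["object"]["key"]
print(f"{key} has been successfully uploaded to {bucket}")
```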

Hope this was informative. Do leave your comments for any questions.


Bulk AWS EC2 pricing


Written by Mudit Jain, Principal Architect at Powerupcloud Technologies.

Your Amazon EC2 usage is calculated by either the hour or the second, based on the size of the instance, operating system, and the AWS Region where the instances are launched. Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. However, there is no bulk-upload option in the AWS Simple Monthly Calculator or the AWS Pricing Calculator. There are a few options to execute the same:

  • There are paid discovery tools that also do bulk pricing. However, the initial budgeting/TCO calculation is often done way ahead in the pipeline.
  • Manually add entries to the AWS calculator or Simple Monthly Calculator. However, this is not feasible with a large number of VMs at enterprise level, and manual entries are error-prone.
  • You can work with Excel sheets, but they have their own limitations.

To overcome the above, we have arrived at an effective 4-step solution process:

  1. Source of truth download 
  2. Source file cleanup
  3. One-time setup
  4. Bulk Pricing:
    1. Source file preparation
    2. pricing

1. Source of truth download

AWS publishes its EC2 service pricing here.

See the documentation for using the bulk API.

2. Source file cleanup

Use any Python environment with pandas pre-installed.

#!/usr/bin/env python
# coding: utf-8

import pandas as pd

# Read the bulk price index downloaded in step 1 (the filename is illustrative).
srcindex = pd.read_csv("index.csv", low_memory=False)

# Remove non-EC2 pricings and old-generation pricings.
vmsrcindex = srcindex[srcindex['Instance Type'].notna()]
curvmsrcindex = vmsrcindex[vmsrcindex['Current Generation'] == "Yes"]

# At the TCO stage, 95% of our customers need pricing against shared tenancy,
# and unused pricings are not as relevant (this filter is a reconstruction of
# an elided step).
tencurvmsrcindex = curvmsrcindex[curvmsrcindex['Tenancy'] == "Shared"]
remove0 = tencurvmsrcindex[tencurvmsrcindex['PricePerUnit'] != 0]
remove_unsed = remove0

filter1 = remove_unsed[['Instance Type','Operating System','TermType','PriceDescription','Pre Installed S/W','LeaseContractLength','PurchaseOption','OfferingClass','Location','Unit','PricePerUnit','vCPU','Memory']]
filter2 = filter1.apply(lambda x: x.astype(str).str.lower())
# Most customers don't use partial upfront.
filter3 = filter2[~filter2['PurchaseOption'].isin(['partial upfront'])]
# At this stage most customers are not looking for BYOL pricing (this filter
# is a reconstruction of an elided step).
filter4 = filter3[~filter3['PriceDescription'].str.contains('byol', na=False)]
# In the AWS-published sheets, even r4, c4 and m4 are marked as current
# generation, so drop them explicitly.
filter5 = filter4[~filter4['Instance Type'].str.contains("^[rcm]4.*")]

# Save the filtered index for the next step (the filename is illustrative).
filter5.to_csv("filtered.csv", index=False)
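A quick sanity check of the generation filter used above: the regular expression drops the r4/c4/m4 families while keeping newer ones. The instance names here are illustrative only.

```python
import re

pattern = re.compile(r"^[rcm]4")  # old-generation families the script drops

names = ["r4.large", "c4.xlarge", "m4.large", "m5.large", "c5.2xlarge", "t3.micro"]
dropped = [n for n in names if pattern.match(n)]
kept = [n for n in names if not pattern.match(n)]

print(dropped)  # ['r4.large', 'c4.xlarge', 'm4.large']
print(kept)     # ['m5.large', 'c5.2xlarge', 't3.micro']
```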

3. One-time setup

#!/usr/bin/env python
# coding: utf-8

import pandas as pd

# Read the source file and the filtered file created in the previous step
# (the filenames are illustrative).
source2 = pd.read_csv("source.csv")
filtered = pd.read_csv("filtered.csv")

source2 = source2.dropna(subset=['hostname', 'Instance Type', 'Operating System','Location','TermType'])
# Lowercase everything for string matching.
source2 = source2.apply(lambda x: x.astype(str).str.lower())
filtered = filtered.apply(lambda x: x.astype(str).str.lower())

# Drop columns used in the source file only for validation
# (this step is a reconstruction of an elided cell).
processedsrc = source2

# Find pricing for on-demand entries.
processedsrc_ondemand = processedsrc.loc[lambda df: df['TermType'] == "ondemand"]
priced_processedsrc_ondemand = pd.merge(processedsrc_ondemand, filtered, on=['Instance Type','Operating System','TermType','Pre Installed S/W','Location'], how='left')

# Find pricing for reserved entries.
processedsrc_reserved = processedsrc.loc[lambda df: df['TermType'] == "reserved"]
priced_processedsrc_reserved = pd.merge(processedsrc_reserved, filtered, on=['Instance Type','TermType','Operating System','Pre Installed S/W','Location','PurchaseOption','OfferingClass','LeaseContractLength'], how='left')

# Summarise the findings into a single priced sheet
# (the output filename is illustrative).
priced = pd.concat([priced_processedsrc_ondemand, priced_processedsrc_reserved])
priced.to_csv("priced.csv", index=False)
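To make the lookup concrete, here is a tiny self-contained version of the on-demand merge with toy data. The instance types, columns, and hourly prices are illustrative only, and the 'Pre Installed S/W' join key is omitted for brevity.

```python
import pandas as pd

# Toy stand-in for the filtered price index.
prices = pd.DataFrame({
    "Instance Type": ["m5.large", "m5.xlarge"],
    "Operating System": ["linux", "linux"],
    "Location": ["us east (n. virginia)", "us east (n. virginia)"],
    "TermType": ["ondemand", "ondemand"],
    "PricePerUnit": [0.096, 0.192],  # illustrative hourly rates
})

# Toy stand-in for the VM inventory (the "source of truth" sheet).
inventory = pd.DataFrame({
    "hostname": ["app01", "db01"],
    "Instance Type": ["m5.large", "m5.xlarge"],
    "Operating System": ["linux", "linux"],
    "Location": ["us east (n. virginia)", "us east (n. virginia)"],
    "TermType": ["ondemand", "ondemand"],
})

# Left-join the inventory against the price index, mirroring the script above.
priced = pd.merge(inventory, prices,
                  on=["Instance Type", "Operating System", "Location", "TermType"],
                  how="left")
print(priced[["hostname", "PricePerUnit"]])
```

Rows with no matching price combination simply come back with PricePerUnit as NaN, which makes invalid inventory entries easy to spot.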

4. Bulk Pricing

a. Source file preparation:

It should be a fixed input format:

Please find attached the sample input sheet. Column descriptions:

1. 'hostname' - any unique & mandatory string
2. Col2-6 - for reference, non-unique, optional
3. 'Instance Type' - mandatory, must be a valid instance type
4. 'Operating System' - mandatory, must be a valid value from the AWS price list (e.g. 'linux', 'windows')
5. 'Pre Installed S/W' - optional, must be one of the following:
  • 'sql ent'
  • 'sql std'
  • 'sql web'
6. 'Location' - mandatory, must be one of the following:
  • 'africa (cape town)'
  • 'asia pacific (hong kong)'
  • 'asia pacific (mumbai)'
  • 'asia pacific (osaka-local)'
  • 'asia pacific (seoul)'
  • 'asia pacific (singapore)'
  • 'asia pacific (sydney)'
  • 'asia pacific (tokyo)'
  • 'aws govcloud (us-east)'
  • 'aws govcloud (us-west)'
  • 'canada (central)'
  • 'eu (frankfurt)'
  • 'eu (ireland)'
  • 'eu (london)'
  • 'eu (milan)'
  • 'eu (paris)'
  • 'eu (stockholm)'
  • 'middle east (bahrain)'
  • 'south america (sao paulo)'
  • 'us east (n. virginia)'
  • 'us east (ohio)'
  • 'us west (los angeles)'
  • 'us west (n. california)'
  • 'us west (oregon)'
7. 'TermType' - mandatory, must be one of the following:
  • 'ondemand'
  • 'reserved'
8. 'LeaseContractLength' - mandatory for TermType=='reserved' (e.g. '1yr', '3yr')
9. 'PurchaseOption' - mandatory for TermType=='reserved' and must be one of the following:
  • 'all upfront'
  • 'no upfront'
10. 'OfferingClass' - mandatory for TermType=='reserved' (e.g. 'standard', 'convertible')

b. pricing:

Now you have pricing against each EC2 instance, and Excel skills can come in handy for the first level of optimizations.

Download the Source Sheet here.

We will come back with more details and steps on the Bulk Sizing and Pricing exercise in our next blog.

Managing the big ‘R’ in cloud migrations


Compiled by Mitu Kumari, Senior Executive, Marketing at Powerupcloud Technologies.

Cloud adoption, with migRation at the core, is possibly one of the biggest technology-based organizational changes undertaken. It has a profound impact on a business’s ability to innovate and fully transform the way they operate.

Moving the business applications and data to the cloud is a great strategic move that gives a competitive edge by reducing IT costs. It helps lighten the budget and promotes greater collaboration. It also has the added benefit of disaster risk management and supports continuity of business operations.

However, in spite of the obvious benefits, migRation to the cloud can be a daunting process, and it needs to be done in the right way to ensure it supports business needs and delivers the value that it promises.

According to the seventh annual Cisco Global Cloud Index (2016-2021), which focuses on data centre virtualization and cloud computing, both consumer and business applications are contributing to the growing dominance of cloud services over the Internet.

Key highlights & projections of cloud computing:

  • The study forecasts global cloud data center traffic to reach 19.5 zettabytes (ZB) per year by 2021, up from 6.0 ZB per year in 2016.
  • Cloud data center traffic will represent 95 percent of total data center traffic by 2021, compared to 88 percent in 2016.
  • It is expected that by 2021 there will be 628 hyperscale data centers globally, compared to 338 in 2016, 1.9-fold growth or near doubling over the forecast period. 

The key to a successful cloud migRation is in the planning. Creating a migRation strategy is vital and involves identifying the potential options, understanding the interdependencies between applications, and deciding upon what layers of an application would benefit from migRation. Each of these steps provides more opportunities to take advantage of cloud-native features.

Having established a basic understanding of the importance of migRation, let’s try to understand the primary steps to be taken by  organizations and businesses to migrate applications to the cloud.

But, prior to that, an organisation/business needs to analyze its current applications and systems.

What are the parameters to analyze in order to select the right migRation approach?

There are broadly two sets of considerations for migrating an application to the cloud – business and technology – and both can affect the choice of a migRation strategy. Using this knowledge, organizations can outline a plan for how they’ll approach migrating each of the applications in their portfolio and in what order.

The 3 main parameters for the business considerations are:

  • Business Fit: When the current application is not a suitable fit for business requirements.
  • Business Value: When the current application does not provide adequate value to the users in terms of information quality and support.
  • Agility: The existing application failing to keep up with the current pace, creating financial and operational risks in the future.

The 3 main parameters for the technical considerations are:

  • Cost: The total cost of ownership for maintenance of the application is higher than its business value. 
  • Complexity: Complexity within the current application causes various problems and can be a major factor in maintainability and in the time, cost, and risk of change.
  • Risk: In older applications, various levels of risk exist within the application tech stack or functionality.

Before an organization begins moving to the cloud, it is vital that IT executives and business leaders take a step back and carefully craft a strategic plan for cloud migRation and application transformation.

Identifying issues in the application layer:

The underlying cause of the issue and its location must be identified within the application. The source of the issue can exist within 3 main aspects of a software component:

  •  Technology 
  • Functionality
  •  Architecture 

Functionality is the likely source of fit and value issues.

Technology is the common source of cost, complexity, and risk issues.

Architecture contributes to both, with a particular impact on complexity and agility.

After having identified and confirmed the application layer issues, the next key step is to choose the most suitable migration strategy.

How do you choose a migration strategy, the big ‘R’:

There are six primary approaches to migration. They are listed and defined below.

  1. Rehost
  2. Replatform
  3. Repurchase
  4. Refactor/Rearchitect
  5. Retire
  6. Retain

1. Rehosting (lift-and-shift)

The simplest path is lift and shift, which works just how it sounds: taking your existing applications and redeploying them on cloud servers. This is the most common path for companies new to cloud computing, who are not yet accustomed to a cloud environment and can benefit from the speed of deployment.

Gartner refers to this as rehosting, because it involves moving your stack to a new host without making extensive changes. This enables a rapid, cost-effective migration with quick ROI, minimizing the business disruption that could occur if the application were refactored.

However, the lack of modification to your system also prevents you from harnessing certain cloud migration benefits in the short term.

You shouldn’t treat lift and shift as the end of the migration story. Very often, applications are migrated with lift and shift and then, once in the cloud, re-architected to take advantage of the cloud computing platform.

When to choose the Rehost approach for cloud migration:

  • Avoiding expensive investments in hardware: For example, if setting up a data center costs twice as much as the cloud, it is advisable to move the application to the cloud with minor or no modification.
  • Some applications can be easily optimized once migrated to the cloud: For those applications, it is a good strategy to first move them to the cloud as-is using the rehost approach, and then optimize.
  • In the case of commercial off-the-shelf applications: Code changes are impossible on those applications, so it is a better idea to adopt rehost.
  • Migration of applications that need to keep running: For organizations moving to the cloud with applications that just need to keep running without disruption or modification, rehost is a good choice.

2. Re-platforming (lift-tinker-and-shift)

 This is a good strategy for organizations that aren’t ready for expansion or configuration, or those that want to build trust in the cloud before making a commitment.

Re-platforming is really a variation of lift and shift. Here you might make a few cloud (or other) optimizations in order to achieve some tangible benefit; you aren’t otherwise changing the core architecture of the application, but you do use cloud-based frameworks and tools that allow developers to take advantage of the cloud’s potential.

While this approach has some migration cost associated with it, it often represents significant savings compared to the cost of rebuilding the existing legacy system.

When to choose the Re-platform approach for cloud migration:

  • Organizations willing to automate tasks that are essential to operations but are not business priorities.
  • When the source environment of an application is not supported on the cloud, so a slight modification is required to move it.
  • Organizations looking to leverage more cloud benefits beyond just moving the application to the cloud.
  • Organizations that are sure that minor changes won’t affect the application’s functioning.

3. Re-purchase (Drop & Shop)

Repurchasing is another fast way to access cloud technologies. Software as a service (SaaS) can take existing data and applications and translate them into a comparable cloud-based product. This can help make significant progress with operations such as customer relationship management and content management. 

Repurchasing involves a licensing change where the on-premise version is being swapped for the same application but as a cloud service. Dropping on-premise applications and switching to the cloud can offer improved feature sets, without the complexities of rehosting the existing application. The approach is a common choice for legacy applications that are incompatible with the cloud. 

When to choose the Re-purchase approach for cloud migration:

  • Use this approach for legacy applications incompatible with the cloud: If you find existing applications that could benefit from the cloud but would be difficult to migrate using “lift and shift”, “drop and shop” could be a good option.
  • Many commercial off-the-shelf (COTS) applications are now available as Software as a Service (SaaS): Repurchasing is an excellent and fast way to access cloud-based SaaS that is tailored to your business needs by the cloud provider.

4. Re-architecting (Refactoring)

This strategy calls for a complete overhaul of an application to adapt it to the cloud. It is valuable when you have a strong business need for cloud-native features, such as improved development agility, scalability or performance.

Highly customized applications that provide a key business differentiator should be re-architected to take advantage of cloud-native capabilities. 

Re-architecting means rebuilding your applications from scratch to leverage cloud-native capabilities you couldn’t otherwise use, such as auto-scaling or serverless computing. It is the most future-proof approach for companies that want to take advantage of more advanced cloud features.

When to choose the Re-architect approach for cloud migration:

  • When restructuring of the code is required to take full advantage of cloud capabilities.
  • When there is a strong business need for adding features and performance to the application that is not possible within the existing framework.
  • When an organization is looking to boost agility or improve business continuity, re-architecting is a better fit.
  • When an organization is willing to move to a service-oriented architecture, this approach can be used.

5. Retire

In today’s data centers there are oftentimes several workloads that are no longer used but have been kept running. This can have many causes, but in some cases the best thing to do with a workload is to simply turn it off.

Care should be taken to ensure that the service is decommissioned in line with your current procedure for retiring a platform, but oftentimes migration is a great time to remove deprecated technology from your service catalog.

When to choose the Retire approach as part of cloud migration:

  • In many cases during a migration project, identifying applications that are redundant and shutting them down can represent a cost saving.
  • There may already be existing plans to decommission the application or consolidate it with other applications.

6. Retain

In some cases when a server or IT service is still required and cannot be migrated to the cloud it makes the most sense to retain that server or service in its current position. This Retain methodology is used in a hybrid cloud deployment that uses some on-premises IT servers and services combined with cloud technologies to offer a seamless user experience.  

While it might at times make sense to retain a technology, doing so is only advisable in select circumstances.  The impact of retaining some technologies on-premises is usually an increased demand for hybrid connectivity.

When to choose the Retain approach as part of cloud migration:

  • The business is heavily invested in the on-premise application and may have currently active development projects.
  • Legacy operating systems and applications are not supported by cloud environments.
  • The application is working well, so there is no business case for the cost and disruption of migration.
  • For industries that must adhere to strict compliance regulations that require that data is on-premise.
  • For applications that require very high performance, the on-premise option may prove the better choice.

Managing the big R:

So, in conclusion, which is the best approach to cloud migration?

Different use cases have different requirements, so there is no “one size fits all”. Selecting one among the six migration approaches means finding the one that best suits your specific needs. That said, there is a way to determine whether one of these approaches will suit you better than the others.

While choosing an approach for cloud migration to improve the technology, architecture, and functionality of the IT infrastructure, one must always keep in mind the cost and risk associated with the chosen approach.

Start by checking if the app can be moved to a cloud environment in its entirety while maintaining running costs and keeping operational efficiency in check. If the answer is yes, starting with a rehost is a good idea.

If rehosting is not an option or if cost-efficiency is at a level that needs to be improved, you can also consider replatforming as the approach to take. Remember that not all apps can be migrated this way, so you may end up having to find other solutions entirely.

For workloads that can easily be upgraded to newer versions, the repurchase model might allow a feature set upgrade as you move to the cloud.

The same can be said for refactoring, when there is a strong business need to take full advantage of cloud capabilities in a way that is not possible with the existing applications. The time and resources needed to complete a full refactoring should be taken into consideration.

Some applications simply won’t be required anymore, so it is important to identify these prior to migrating to the cloud and retire them, so that you do not end up paying for application infrastructure that is not delivering any business benefit.

Retain portions of your IT infrastructure if some applications are not ready to be migrated, would produce more benefit when kept on-premises, or were recently upgraded.
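The selection flow described above can be sketched as a simple decision helper. This is only an illustration: the enum, method name, and boolean checks below are assumptions for the sketch, not a formal framework.

```java
enum Strategy { REHOST, REPLATFORM, REPURCHASE, REFACTOR, RETIRE, RETAIN }

class MigrationPlanner {
    // Walks the questions in the order discussed: retire what isn't needed,
    // retain what can't move, then pick the cheapest viable path to the cloud.
    static Strategy choose(boolean stillNeeded, boolean canMoveToCloud,
                           boolean runsAsIs, boolean minorTweaksSuffice,
                           boolean saasEquivalentExists) {
        if (!stillNeeded) return Strategy.RETIRE;             // redundant workload
        if (!canMoveToCloud) return Strategy.RETAIN;          // compliance, legacy OS, etc.
        if (runsAsIs) return Strategy.REHOST;                 // lift and shift, optimize later
        if (minorTweaksSuffice) return Strategy.REPLATFORM;   // lift, tinker, and shift
        if (saasEquivalentExists) return Strategy.REPURCHASE; // drop and shop
        return Strategy.REFACTOR;                             // rebuild cloud-native
    }
}
```

A real assessment weighs the business and technical parameters listed earlier rather than simple booleans, but the ordering of the checks mirrors the narrative above.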

One can certainly take (most of) the hassle out of moving to the cloud with the right cloud migration strategy. You will be left with the exciting part: finding new resources to use, better flexibility to benefit from, and a more effective environment for your apps.

AWS Lambda Java: Creating Deployment Package for Java 8/11 using Maven in Eclipse – Part 1

By | Powerlearnings | No Comments

Written by Tejaswee Das, Software Engineer at Powerupcloud Technologies Collaborator: Neenu Jose, Senior Software Engineer


When we talk about going serverless, AWS Lambda is definitely the first service that comes to mind. Building applications was never this simple before; AWS Lambda has made life easy for developers and data engineers alike, and you will hardly find any AWS use case that doesn’t involve Lambda. To know more about AWS Lambda and some simple S3 event use cases around it, have a look at one of my earlier posts on AWS Lambda.

This post is part of a two-part blog series. This part 1 blog will guide you through the steps to create a Java (Java 8/Java 11) deployment package for AWS Lambda in Eclipse using Maven and use S3 Event triggers. 

In Part 2, we will discuss steps on using SES with S3 Event triggers to Lambda.

Use Case

One of our clients had their workloads running on Azure Cloud, with a few serverless Java 8 applications in Azure Functions. They wanted to upgrade from Java 8 to Java 11. Since Java 11 was not supported at the time (Java 11 for Azure Functions has recently been released in preview), they wanted to try out other cloud services, and that’s when AWS Lambda came into the picture. We did a POC to check the feasibility of running Java 11 applications on AWS Lambda.

Deployment Steps

Step 1: Install AWS Toolkit in Eclipse

1.1 Open Eclipse → Go to Help → Install New Software

1.2 In Work with, enter the toolkit site and select AWS Toolkit for Eclipse Core (Required)

1.3 Click Next and Install

Note: The toolkit requires Eclipse 4.4 (Luna) or higher.

Step 2: Create New AWS Lambda Function

You might need to restart Eclipse for the installation to take effect. Add your AWS access keys when asked. This step is optional; you can add or remove AWS credentials/accounts anytime from the Preferences menu.

2.1 Go to File → New → Other…

2.2 Select AWS → AWS Lambda Java Project → Next

2.3 Fill in your Project Name and other details

Class Name is your Lambda handler. In Lambda terms, it’s like the main function of your project. You can have anything here; we are using the default name.

For our demo we are using the built-in S3 Events. There are a lot of other events to choose from, such as DynamoDB Event, Stream Request Handler, SNS Event, Kinesis Event, Cognito Event, or even Custom if you want to build from scratch.

2.4 Click Finish

For our demonstration and test purposes, you can go with the default code. We are using us-east-1. Make sure you add the region; you might encounter an error if it is not added.

Sample Code

package com.amazonaws.lambda.demo;

import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class LambdaFunctionHandler implements RequestHandler<S3Event, String> {

    private AmazonS3 s3 = AmazonS3ClientBuilder.standard()
            .withRegion(Regions.US_EAST_1).build();

    public LambdaFunctionHandler() {}

    // Test purpose only.
    LambdaFunctionHandler(AmazonS3 s3) {
        this.s3 = s3;
    }

    @Override
    public String handleRequest(S3Event event, Context context) {
        context.getLogger().log("Received event: " + event);

        // Get the object from the event and show its content type
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();
        try {
            S3Object response = s3.getObject(new GetObjectRequest(bucket, key));
            String contentType = response.getObjectMetadata().getContentType();
            context.getLogger().log("CONTENT TYPE: " + contentType);
            return contentType;
        } catch (Exception e) {
            context.getLogger().log(String.format(
                "Error getting object %s from bucket %s. Make sure they exist and"
                + " your bucket is in the same region as this function.", key, bucket));
            throw e;
        }
    }
}
Step 4: Java Runtime Environment (JRE)

4.1 Go to Windows → Preferences → Java → Installed JREs

Step 5: Maven Build

Now to the final step where we will build our deployment package.

5.1 Right click on your project in the Project Explorer → Run As → Maven Build

5.2 Edit Configuration & Launch

Enter ‘package’ in Goals. Select your JRE; everything else can be left as default.

5.3 Run

Your build should happen without any errors for Java 8, but with Java 11 you might run into a few errors. Make sure you add an updated mockito-core.

In the pom.xml of the generated project, change the version of mockito-core:

      <version>2.7.22</version> <!-- change to 3.3.3 -->

This version change is necessary for the Java 11 build to work.
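For reference, the resulting dependency block in pom.xml would look roughly like this; the coordinates are the standard mockito-core ones, but verify the scope against your generated project.

```xml
<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <!-- was 2.7.22; 3.x is needed for the Java 11 build -->
    <version>3.3.3</version>
    <scope>test</scope>
</dependency>
```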

On successful build, you should see something similar

Sample build

[INFO] Scanning for projects...
[INFO] ------< com.amazonaws.lambda:demo >----------------------
[INFO] Building demo 1.0.0
[INFO] -----------------[ jar ]---------------------------------
[INFO] -- maven-resources-plugin:2.6:resources (default-resources) @ demo ---
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 0 resource
[INFO] --- maven-compiler-plugin:3.6.0:compile (default-compile) @ demo ---
[INFO] Nothing to compile - all classes are up to date
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ demo ---
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 1 resource
[INFO] --- maven-compiler-plugin:3.6.0:testCompile (default-testCompile) @ demo ---
[INFO] Nothing to compile - all classes are up to date
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ demo ---
[INFO] Surefire report directory: D:\Eclipse Workspace\Maven\demo-blog-s3\target\surefire-reports

 T E S T S
Running com.amazonaws.lambda.demo.LambdaFunctionHandlerTest
Received event:
CONTENT TYPE: image/jpeg
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.117 sec

[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ demo ---
[INFO] Downloading from :
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing D:\Eclipse Workspace\Maven\demo-blog-s3\target\demo-1.0.0.jar with D:\Eclipse Workspace\Maven\demo-blog-s3\target\demo-1.0.0-shaded.jar
[INFO] Dependency-reduced POM written at: D:\Eclipse Workspace\Maven\demo-blog-s3\dependency-reduced-pom.xml
[INFO] ----------------------------------------------------------------
[INFO] ----------------------------------------------------------------
[INFO] Total time:  03:10 min
[INFO] Finished at: 2020-08-03T14:53:09+05:30
[INFO] ---------------------------------------------------------------

Look at the path in the build log above to locate your .jar file. This will vary as per your workspace directory configuration.

My Eclipse Workspace directory here is : D:\Eclipse Workspace\Maven\

Step 6: Create Test S3 Bucket

Create an S3 bucket for uploading the files from the AWS console.

Make sure this bucket is in the same region where you are planning to create the Lambda Function.

Step 7: Creating Lambda Function in AWS Console

There are a couple of ways of creating Lambda functions. The easiest way is through the AWS Console: choose a language, create the function, and get going. You get a lot of runtimes to choose from and can start writing code on the go. This works great for interpreted languages: Python, Node.js, Ruby, and others.

But compiled languages like Java, Go, and .NET require you to upload a deployment package and do not allow in-line editing.

There are other ways to directly upload Lambda functions from Eclipse itself. We faced issues with that, so to get our task done, we created a deployment package (.jar in Java) and uploaded it to Lambda. Works great.

7.1 S3 Event Triggers & IAM

Please refer to one of my previous posts

Follow solution steps 1 & 2 there: create the required execution roles and attach policies to provide the necessary permissions.
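For reference, the bucket notification that ties S3 to the function has roughly this shape; the Id, account number, and function name below are placeholders. This is what the console (or `aws s3api put-bucket-notification-configuration`) sets up for you.

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "invoke-lambda-on-upload",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:LambdaFunctionHandler",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```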

Step 8: Deploying Lambda Function

8.1 With all the setup done, you just need to upload the Maven-built jar file from Step 5.3 here.

You can directly upload the file if it is less than 10 MB; for larger files, upload via Amazon S3.

Great! Your code is deployed successfully. Time to test it now.

Step 9: Testing

9.1 To test your deployment, Go to S3

9.2 Go to the bucket that you created in Step 6 and configured the trigger for in Step 7.1

Upload a test file

9.3 Go back to your Lambda Function. Click on Monitoring → View logs in CloudWatch

You can see the S3 trigger event logs here. When the file was uploaded to the bucket, it triggered the Lambda function.


This was a very simple proof of concept, demonstrated mainly to get AWS Lambda working with Java 11 for our client. In the next part of this series, we will demonstrate more things you can do with AWS Lambda using Java 8/11, such as using the AWS Java SDK to send emails and notifications on file upload to S3.

Hope this was informative. We had a tough time finding the correct resources to use, so we planned to write this to help folks out there looking for help with different Java versions and AWS Lambda.


Key evaluating factors in deciding the right Cloud Service Provider

By | Powerlearnings | No Comments

Compiled by Kiran Kumar 

Since August 9, 2006, when then-Google CEO Eric Schmidt introduced the term at an industry conference, cloud computing has been driving the IT industry over the past decade and a half through its outright performance, ease of use, and industry adaptability. Broadly segmented into IaaS, PaaS, SaaS, BPaaS, and Management & Security, with the lines blurring between each, it can be challenging to find the right fit for your computing needs. So we have listed down a few factors you should consider while evaluating providers, and the core considerations for each.

  1. Infrastructure setup
  2. The learning curve
  3. The relevance of the service catalog
  4. Data governance and Security
  5. Partner relationships
  6. SLAs
  7. Consistency and Reliability
  8. Back-Up and Support
  9. Cost
  10. Flexibility and Exit strategy

Infrastructure Setup

Infrastructure setup can have a huge impact on your latency, network speeds, data transfer rates, and so on. A diversified infrastructure setup would require a data center (also referred to as an Availability Zone) that is closest to your preferred location.

There are four tiers of data centers as defined by the Uptime Institute, the globally accepted standard for data center planning. The exact guidelines and protocols are not fully public, but the metrics include redundant electrical paths for power, uptime guarantee, cooling capacity, and concurrent maintainability. Tier 4 is the highest standard for data centers. Minimum requirements for Tier 4 data centers are:

  • 99.995% uptime per year.
  • 2N+1 infrastructure (two times the amount required for operation, plus a backup).
  • Maximum allowed downtime of 26.3 minutes per year.
  • 96-hour power outage protection.
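The 26.3-minute figure follows directly from the 99.995% uptime guarantee; a quick sanity check (assuming a non-leap 365-day year):

```java
public class Tier4Downtime {
    public static void main(String[] args) {
        double minutesPerYear = 365.0 * 24 * 60;          // 525,600 minutes
        double allowed = minutesPerYear * (1 - 0.99995);  // the 0.005% that may be down
        System.out.printf("Max downtime: %.1f minutes/year%n", allowed); // ~26.3
    }
}
```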

Data centers can be affected by power outages, earthquakes, tornadoes, lightning strikes, etc., and hence require careful planning. Try to get a sense of the key design parameters adopted in setting up the provider’s data centers to counter such occurrences. Also make sure to evaluate the cloud provider’s crisis management processes and guidelines, as they showcase how well equipped the provider is to quickly resolve an ongoing crisis.

If your enterprise is into IoT and edge computing, check for highly redundant network connectivity (5G) and low-latency services to improve response times and save bandwidth. These are key factors in supporting edge computing environments like real-time securities market forecasting, autonomous vehicles, and transportation traffic routing.

Key considerations

  • Location (Availability Zones)
  • Datacenter tier
  • Crisis management guidelines and protocols
  • Roadmap for upcoming technology support

The learning curve 

Despite its popularity among enterprises, with almost all Fortune 500 companies having at least one provider, there is still a shortage of cloud understanding. Cloud transformation can be challenging and demands to be treated as a separate project of its own; the lack of necessary cloud skills can cause inefficient migration, leading to unintended consequences and unwanted costs.

Every organization has its strengths and weaknesses; identify your strengths and try to build your cloud infrastructure around them. Start by assessing each cloud provider and the type of offerings available. It is imperative to be cognizant of all skills required for general operation, governance, and compliance, among others. The storage of data is often an afterthought, stemming from a general absence of protocol-related knowledge. This needs to be addressed through upskilling programs and strategic talent acquisitions. So it is important to list down, for each provider, how steep the learning curve would be before and after the migration. As a remark, most companies prefer to outsource these functions to specialist managed services providers.

Key considerations

  • Ease of learning
  • Upskilling support from the cloud provider
  • Active communities and partnership 

The relevance of the service catalog

Each cloud provider offers different products and services, but it is important to make sure that your cloud goals align with the provider’s vision for improvement. Do your preferences match the provider’s standards, SLAs, and your security needs? How much re-coding or customization is required at the architectural level to suit your workloads, and what are the associated costs?

Especially if you are looking for SaaS-based services with a high dependency on a particular application, understand the service development roadmap: how will the provider continue to innovate, grow, and support the product over time? Does their roadmap fit your needs in the long term?

Cloud providers also offer many services to assist you in your cloud transformation, assessment, and planning; a large-scale public cloud provider may even extend offers custom-made for your organization. In support, they also have well-established partner communities that can help you with all your cloud requirements.

Key considerations

  • Services in-line with your needs
  • Rich and broad marketplace and an active developer community
  • Compatibility over the long term

Data Governance and Security

Storing data can be tricky simply because of the diversities in the data law across the world. Every organization needs to be aware of the local data regulations and prevailing privacy laws.

Your choice will depend on the cloud provider offering the most flexibility and compliance; also be aware of the provider’s data center locations, and verify them.

Data encryption is another factor that needs your attention. Assess the different modes of encryption available both for data in transit and at rest. Check the provider’s history for any major incidents that have occurred and understand what processes they have in place to quickly resolve issues. High-risk, highly sensitive data can be stored using more secure, encrypted storage solutions, while cheaper storage solutions can hold less sensitive data (inventory information, daily logs, etc.).

On the information security side, check which compliance standards are followed and any recognized certifications they hold. However, even with all this in place, it is important for the cloud provider to offer the flexibility to support your own security practices and your commitments to your clients.

Key considerations

  • Flexible data access and management
  • Data Compliance & Security
  • Wide range of data services  

Partner relationships

It is common practice to have a partner ecosystem to guide and facilitate these transitions to the cloud. It is, therefore, important to assess the provider’s relationship with key vendors, their accreditation levels, technical capabilities, number of projects completed, staff certifications, and the overall expertise they bring to the table. Important to note here, expertise in multi-cloud is a bonus. Powerup, for example, is a top-tier partner with the big three cloud service providers.

If you rely largely on SaaS-based services, check the overall compatibility of the product across platforms, as some SaaS-based services are platform-specific. Look for an active marketplace to buy complementary services that are fully compatible.

In some regions, cloud services are made available through a subcontractor, mainly due to local laws, as in the case of China. Such interdependencies have to be uncovered, and the primary SLAs must be guaranteed across all parts of the service.

Key factors

  • Check for accreditations
  • Level of expertise across platforms
  • Relationship with the cloud provider
  • Service compatibility across platforms 


SLAs

Cloud agreements seem complex simply because of the lack of industry standards defining how these contracts should be constructed. However, ISO/IEC 19086-1:2016 tries to an extent to facilitate a common understanding between cloud service providers and cloud service customers. Usually, agreements are a mixture of commonly agreed general terms and conditions and some negotiated terms.

Service Levels

Make sure each service objective is defined: accessibility, service availability or uptime in percentages, service capacity and its upper limits in terms of users, connections, and resources, response times, and deliverables. Be clearly aware of your roles and responsibilities related to delivery, provisioning, service management, monitoring, support, and escalations, and how responsibilities are split between you and the provider.

Other scenarios include outages or natural disasters: the minimum and maximum accepted downtime, data loss, and recovery times have to be clearly analyzed against your requirements. Control over data access, data location, confidentiality, usage, and ownership rights is crucial; check that standards and commitments around data resilience and data backup are in line with your requirements, along with the necessary provisions for a safe exit strategy.

Some of the key business considerations: 

  • Contractual and service governance, including the extent to which the provider can unilaterally change the terms of service or contract.
  • What are the policies on contract renewals and exits, and what are the notice periods?
  • What insurance policies, guarantees, and penalties are included, and what are the exceptions?

Key considerations

  • Key SLAs
  • How compatible are the terms and conditions with your organization’s goals
  • Are they equipped to support their claims?
  • Are they negotiable?
  • Are the liabilities and responsibilities equally shared?  

Consistency and Reliability 

High availability and reliability are essential for both the CSP and the client in maintaining customer confidence and preventing revenue losses due to service level agreement (SLA) violation penalties. Cloud computing has appealed to a larger audience in recent years for supporting mission-critical systems. However, the lack of consistency in cloud services is quickly becoming a major issue. According to 2018 research reports, about $285 million is lost yearly due to cloud service failures, with availability of about 99.91%.

No cloud platform is perfect and downtime may occur more often than not, so try to measure it against the provider’s SLAs for the last 6-12 months; this data is mostly available online, if not on request. Check what learnings they take away from such occurrences and how consistent they have been in meeting the recovery times stated in their SLAs. Also, ensure the monitoring and reporting tools on offer are sufficient and can integrate neatly into your overall management and reporting systems.

Key factors

  • Check for consistency in delivery through past performance.
  • Fault management and reporting systems. 

Back-Up and Support

Check what back-up provisions and processes are in place and understand the limits of the provider’s ability to support your data preservation expectations. Roles, responsibilities, escalation processes, and who has the burden of proof must all be clearly documented in the service agreement, taking into consideration the increasing criticality of data, data sources, scheduling, backup, restore, and integrity checks. Consider purchasing additional risk insurance if the costs associated with recovery are not covered by the provider’s terms and conditions.

Cloud providers offer a wide range of support services to help their customers at each step: migration, managed services, etc. The support delivery medium offered is also important: where do you want them to be available, on a phone call, chat, or email? And if support is offered through a partner, consider the expertise they bring to you. Here, staff certification is a good barometer of the quality of the support on offer.

Key considerations

  • Well equipped multi-channel support services (24/7)
  • Clear documentation around the roles and responsibilities
  • Insurance coverage


Cost

Don’t just go by the list price: providers might offer services at low cost but may not offer you the optimum level of performance, while more expensive does not necessarily mean better. The correct way to approach it is by comparing all of them against your core requirements as illustrated above. It is not uncommon to ask for offers, so do so with the ultimate goal of incorporating all the services you need in the desired price range. Studies suggest that just basic optimization can save about 10% of your cloud cost. Some cloud providers have tried and tested ways through which you can save costs. Also, through a multi-cloud approach you can leverage far more value and flexibility; read more about it here.

Key considerations

  • Price to value comparison
  • Look for offers
  • Predefined guidelines to save cost 
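The price-to-value comparison described above can be sketched as a simple weighted scoring exercise. The criteria, weights, and scores below are purely hypothetical; the point is only that the "best" provider depends on how you weight your own requirements, not on list price alone.

```python
# Hypothetical price-to-value scoring sketch: rank providers by weighted
# fit against your core requirements instead of by list price alone.
def value_score(scores, weights):
    """Weighted fit score: higher means a better fit for *your* requirements."""
    return sum(scores[k] * w for k, w in weights.items())

weights = {"performance": 0.4, "support": 0.3, "price": 0.3}
provider_a = {"performance": 8, "support": 6, "price": 9}  # cheaper, weaker support
provider_b = {"performance": 9, "support": 8, "price": 6}  # pricier, stronger overall

best = max([("A", provider_a), ("B", provider_b)],
           key=lambda p: value_score(p[1], weights))[0]
```

With these made-up numbers the pricier provider B wins on overall fit, which is exactly the trade-off the comparison is meant to surface.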

Flexibility & Exit Strategy

Vendor lock-in is an important factor in most evaluations. Some lock-in cannot be avoided, so check that the provider makes minimal use of such services; adopting open-source services can be an effective workaround. Also stay aware of updates that drastically change a product's working model, policies, or technologies to favor a particular platform, and make sure you have policies in place to counter such situations.

Exiting a CSP can be tricky; it boils down to the exit provisions offered by the service provider and the service levels agreed upon by both parties. All your digital assets, from data to products and services, need their own exit strategy, integrated deeply into your cloud transformation plan. Most organizations don't include an exit strategy in their cloud adoption roadmap, which leads to lack of preparation, wasted effort, and penalties for exceeding the exit duration.

Key considerations

  • Ease of transition
  • Support for open source
  • Exit provisions

Thundering Clouds – Technical overview of AWS vs Azure vs Google Cloud

By | Blogs, Powerlearnings | No Comments

Compiled by Kiran Kumar, Business Analyst at Powerupcloud Technologies.

The battle of the Big 3 Cloud Service Providers

The cloud ecosystem is in a constant state of evolution; with increasing maturity and adoption, the battle for mind and wallet intensifies. While Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) lead in IaaS maturity, the likes of Salesforce, SAP, Oracle, and Workday, which recently reached $1B in quarterly revenue, are gaining ground and carving out niches in the 'X'aaS space. The recent COVID crisis has accelerated both adoption and consideration as enterprises transform to cope, differentiate, and sustain an advantage over the competition.

In this article, I will stick to referencing AWS, Azure, and GCP, terming them the Big 3. A disclaimer: Powerup is a top-tier partner with all three, and the comparisons are purely objective, based on currently publicly available information. It is very likely that by the time you read this article a lot will have already changed. Having said that, the future will belong to those who excel in providing managed solutions around artificial intelligence, analytics, IoT, and edge computing. So let's dive right in:

Amazon Web Services – The oldest among the three and the most widely known, with the biggest spread of availability zones and an extensive roster of services. It has leveraged its maturity to activate a developer ecosystem globally, which has proven to be a critical enabler of its widespread use.

Microsoft Azure – Azure is the closest that one gets to AWS in terms of products and services. While AWS has fully leveraged its head start, Azure tapped into Microsoft’s huge enterprise customers and let them take advantage of the already existing infrastructure by providing better value through Windows support and interoperability.

Google Cloud Platform – Google Cloud was announced in 2011; for being less than a decade old, it has created a significant footprint. It was initially intended to strengthen Google's own products, but later grew into an enterprise offering. A lot is expected from Google's deep expertise in AI, ML, deep learning, and data analytics to give it a significant edge over the other providers.

AWS vs. Azure vs. Google Cloud: Overall Pros and Cons

In this analysis, I dive into broad technical aspects of these 3 cloud providers based on the common parameters listed below.

  • Compute
  • Storage
  • Exclusives  


AWS Compute:

Amazon EC2: EC2, or Elastic Compute Cloud, is Amazon's compute offering. EC2 supports multiple instance types (bare metal, GPU, Windows, Linux, and more) and can be launched with different security and networking options; you can choose from a wide range of templates based on your use case. EC2 can both resize and autoscale to handle changes in requirements, which eliminates the need for complex governance.
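To make the autoscaling idea concrete, here is a simplified sketch of target-tracking logic: resize the fleet so a utilization metric returns to its target. This is an illustration only; the real EC2 Auto Scaling service also handles cooldowns, instance warm-up, and metric aggregation.

```python
import math

def desired_capacity(current_instances, metric_value, target_value,
                     min_size=1, max_size=10):
    """Simplified target tracking: scale the fleet so the metric hits target."""
    desired = math.ceil(current_instances * (metric_value / target_value))
    # Clamp to the configured group bounds.
    return max(min_size, min(max_size, desired))

# A fleet of 4 instances at 80% CPU against a 50% target scales out to 7.
scale_out = desired_capacity(4, 80, 50)
```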

Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications, manage and scale a cluster of VMs, or schedule containers on those VMs.

Amazon EKS makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS.

Other compute services include Fargate, which automates server and cluster management for containers; Lightsail, a simple virtual private server offering; AWS Batch for batch computing jobs; Elastic Beanstalk for running and scaling web applications; and Lambda for serverless applications.
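To give a feel for the serverless programming model mentioned here, a Lambda-style handler is just a function that receives an event and returns a response. The event shape below is illustrative and not tied to any particular trigger:

```python
import json

def handler(event, context):
    # 'event' carries the trigger payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```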

Container services also include Amazon Elastic Container Registry (ECR), a fully managed Docker container registry that allows you to store, manage, and deploy Docker container images.

Azure Compute:

Azure VMs: Azure VMs are a secure and highly scalable compute solution, with instance types optimized for high-performance computing, AI- and ML-based computing, and container instances, and, given Azure's emphasis on hybrid computing, support for multiple OS types, Microsoft software, and services. Virtual Machine Scale Sets are used to auto-scale your instances.

Azure container services include Azure Kubernetes Service (AKS), a fully managed Kubernetes-based container solution.

Container Registry lets you store and manage container images across all types of Azure deployments.

Service Fabric is a fully managed service that lets you develop microservices and orchestrate containers on Windows or Linux.

Other services include Web App for Containers, which lets you run, scale, and deploy containerized web apps; Azure Functions for serverless applications; and Azure Red Hat OpenShift, with support for OpenShift.

Google Compute Engine:

Google Compute Engine (GCE) is Google's compute service. Google is fairly new to cloud compared to the other two CSPs, and that is reflected in its catalog of services. GCE offers the standard array of features: Windows and Linux instances, RESTful APIs, load balancing, data storage and networking, CLI and GUI interfaces, and easy scaling. Backed by Google's infrastructure, GCE can spin up instances faster than most of its competition in most cases. It runs on carbon-neutral infrastructure and offers among the best value for your money.

Google Kubernetes Engine (GKE) is based on Kubernetes, which was originally developed in-house; Google has the deepest expertise when it comes to Kubernetes and has integrated it tightly into the Google Cloud Platform. GKE can be used to automate many of your deployment, maintenance, and management tasks, and can be used with hybrid clouds via the Anthos service.


AWS Storage:

Amazon S3 is an object storage service that offers scalability, data availability, security, and performance for most of your storage requirements. Amazon Elastic Block Store (EBS) provides persistent block storage for your Amazon EC2 instances, and Elastic File System (EFS) provides scalable file storage.

Other storage services include S3 Glacier, a secure, durable, and extremely low-cost storage service for data archiving and long-term backup; Storage Gateway for hybrid storage; and Snowball, a device used for offline small-to-medium-scale data transfer.


Database services include Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database; RDS (Relational Database Service); DynamoDB, a NoSQL database; Amazon ElastiCache, an in-memory data store; Redshift, a data warehouse; and Amazon Neptune, a graph database.

Azure Storage:

Azure Blobs is a massively scalable object storage solution and includes support for big data analytics through Data Lake Storage Gen2. Azure Files is a managed file storage solution with on-prem support, Azure Queues is a reliable messaging store, and Azure Tables is a NoSQL store for structured data.

Azure Disks provides block-level storage volumes for Azure VMs, similar to Amazon EBS.


Database services include SQL-based databases such as Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL; Cosmos DB and Table storage for NoSQL; SQL Server Stretch Database, a hybrid storage service designed specifically for organizations running Microsoft SQL Server on-prem; and Azure Cache for Redis, an in-memory data store.

Google Cloud Storage:

GCP's storage services include Google Cloud Storage, a unified, scalable, and highly durable object store; Filestore, network-attached storage (NAS) for Compute Engine and GKE instances; Persistent Disk, block storage for VM instances; and Transfer Appliance for large data transfers.


On the database side, GCP has three NoSQL databases: Cloud Bigtable for storing big data, Firestore, a document database for mobile and web application data, and Firebase Realtime Database, a cloud database for storing and syncing data in real time. It also offers Memorystore for in-memory storage, BigQuery as an analytics data warehouse, the SQL-based Cloud SQL, and a relational database called Cloud Spanner that is designed for mission-critical workloads.

Benchmark Reports

An additional drill-down is to analyze performance figures for the three across network, storage, and CPU; here I quote research data from a study conducted by Cockroach Labs.


GCP has taken significant strides in network performance and latency compared to last year, even outperforming AWS and Azure in network throughput.

  • Some of GCP's best-performing machines hover around 40-60 GB/sec.
  • AWS machines stick to their claims and offer a consistent 20-25 GB/sec.
  • Azure's machines offered significantly less, at 8 GB/sec.
  • When it comes to latency, AWS outshines the competition by offering consistent performance across all of its machines.
  • GCP does undercut AWS in some cases but still lacks the consistency of AWS.
  • Azure's weak network performance is reflected in high latency, making it the least performant of the three.

NOTE: GCP attributes the improvement in network performance to the Skylake processors used in its n1 family of machines.


AWS has superior performance in storage; neither GCP nor Azure even comes close to its read-write speeds and latency figures. This is largely due to storage-optimized instances like the i3 series. Azure and GCP do not have storage-optimized instances, and their performance is comparable to the non-storage-optimized instances from Amazon. Of the two, Azure offered slightly better read-write speeds, while GCP offered better latency.


Comparing CPU performance, Azure machines showed slightly higher figures thanks to using conventional 16-core CPUs: Azure machines use 16 cores with a single thread per core, while the other clouds use hyperthreading to reach 16 vCPUs by combining 8 cores with 2 threads each. After comparing each offering across the three platforms, here is the best each cloud platform has to offer.

  • AWS c5d.4xlarge: 25,000-50,000 Bogo ops per sec
  • Azure Standard_DS14_v2: just over 75,000 Bogo ops per sec
  • GCP c2-standard-16: 25,000-50,000 Bogo ops per sec
  • While the AWS and GCP figures look similar, AWS overall offers slightly better performance than GCP.
  • Avoiding hyperthreading has inflated Azure's figures; while it might still be superior in performance, the raw numbers may not accurately represent the difference in performance it offers.
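One way to sanity-check the hyperthreading caveat is to normalize throughput by physical cores rather than vCPUs. The numbers below are taken loosely from the ranges quoted above and are illustrative only:

```python
def per_core_throughput(bogo_ops_per_sec, vcpus, threads_per_core):
    """Normalize by physical cores, since vCPU counts hide hyperthreading."""
    physical_cores = vcpus // threads_per_core
    return bogo_ops_per_sec / physical_cores

azure_per_core = per_core_throughput(75_000, 16, 1)  # 16 physical cores
aws_per_core = per_core_throughput(50_000, 16, 2)    # 8 cores, 2 threads each
```

Normalized this way, the AWS per-physical-core figure actually exceeds Azure's, which supports the point that the raw totals overstate Azure's advantage.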

For detailed benchmarking reports visit Cockroach Labs  

Key Exclusives

Going forward, technologies like artificial intelligence, machine learning, the Internet of Things (IoT), and serverless computing will play a huge role in shaping the technology industry. Most services and products will try to take advantage of these technologies to deliver solutions more efficiently and precisely. All of the Big 3 providers have begun experimenting with offerings in these areas, and this can very well be the key differentiator between them.

AWS Key Tools:

Some of the latest additions to the AWS portfolio include AWS Graviton processors, built using 64-bit Arm Neoverse cores. EC2 M6g, C6g, and R6g instances are powered by these new-generation processors. Thanks to the power-efficient Arm architecture, they are said to provide 40% better price performance over comparable x86-based instances.

AWS Outposts: Outposts is Amazon's emphasis on hybrid architecture; it is a fully managed ITaaS solution that brings AWS products and services anywhere by physically deploying AWS infrastructure at your site. It is aimed at offering a consistent hybrid experience with the scalability and flexibility of AWS.

AWS has put a lot of time and effort into developing a relatively broad range of products and services in the AI and ML space. Some of the important ones include Amazon SageMaker, a service for training and deploying machine learning models; the Lex conversational interface and Polly text-to-speech service, which power Alexa; the Greengrass IoT edge service; and the Lambda serverless computing service.

There are also AI-powered offerings like DeepLens, a deep-learning-enabled camera that can be trained and used for OCR and image and character recognition, and Gluon, an open-source deep-learning library designed to build and quickly train neural networks without having to know AI programming.

Azure Key Tools:

When it comes to hybrid support, Azure offers a very strong proposition: services like Azure Stack and Azure Arc minimize your risks of going wrong. Knowing that a lot of enterprises already use Microsoft's services, Azure deepens this relationship by offering enhanced security and flexibility through its hybrid services. With Azure Arc, customers can manage resources deployed both within and outside of Azure through the same control plane, enabling organizations to extend Azure services to their on-prem data centers.

Azure also offers a comprehensive family of AI services and cognitive APIs to help you build intelligent apps; services like the Bing Web Search API, Text Analytics API, Face API, Computer Vision API, and Custom Vision Service fall under it. For IoT, it has several management and analytics services, and it also has a serverless computing service known as Functions.

Google Cloud Key Tools:

AI and machine learning are big areas of focus for GCP. Google is a leader in AI development thanks to TensorFlow, an open-source software library for building machine learning applications. It is the single most popular library in the market, and AWS adding support for TensorFlow is an acknowledgment of this.

Google Cloud has strong offerings in APIs for natural language, speech, translation, and more. Additionally, it offers IoT and serverless services, but both are still in beta. Google has also been working extensively on Anthos; as quoted by Sundar Pichai, Anthos follows a "write once and run anywhere" approach, allowing organizations to run Kubernetes workloads on-premises, on AWS, or on Azure (Azure support is still in beta testing).


Each of the three has its own set of features and comes with its own constraints and advantages. The selection of the appropriate cloud provider should therefore, as with most enterprise software, be based on your organizational goals over the long term.

However, we strongly believe that multi-cloud will be the way forward. For example, if an organization is an existing user of Microsoft's services, it is natural for it to prefer Azure. Most small, web-based or digitally native companies looking to scale quickly by leveraging AI/ML and data services would want to take a good look at Google Cloud. And of course, AWS, with the absolute scale and maturity of its products and services, is very hard to ignore in any mix.

I hope this sheds some light on the technical considerations; we will follow this up with some of the other key evaluation factors that we think you should consider while selecting your cloud provider.

Converting Your Enterprise Data To Intelligence

By | Powerlearnings | No Comments

Written by Vinit Balani, Program Manager – Data, AI & ML at Powerupcloud Technologies.

Data has become the new business currency, and its rapid growth is increasingly key to the transformation and growth of enterprises globally. We are living in a world where data is exploding. Experts have stressed the importance of data, and we have recently seen live examples of countries using data to fight the COVID-19 pandemic. Today, data is valuable at almost all levels of the organization. While the current pandemic tightens its grip on the world, businesses are going back to the drawing board and charting out new strategies to improve margins and ensure business continuity.

However, the enterprises which already knew the importance of data have been able to easily and proactively create these strategies owing to the insights that they have been able to derive from the data being captured for years. The rest are already feeling the heat.

Even as data is being collected, how it is collected and consumed by users within the organization (employees) and outside it (consumers) becomes increasingly important. For most organizations today, data is siloed across departments, with separate knowledge bases inside separate software systems for ERP, CRM, HRMS, and many others. While these systems are efficient in their own way, it becomes very difficult to correlate the data and dig deeper to understand the reasons behind some of the business metrics.

Say, for example, a company ABC finds an x% reduction in revenue projections. The known reasons available from the CRM system will be a decrease in the number of sales or an increased price of the product. However, there may be many other reasons, like an increase in negative product feedback on social media, increased or new competition, or operational downtime of the website. Since this information lies in different systems with different departments, top management most of the time does not get a holistic view of the actual reasons, or of all the possible reasons.

It becomes very important for an organization to understand which factors affect a particular metric, how, and in what proportion. For an organization to achieve this, it needs to transform on different levels – Infrastructure, Data, and Applications. This transformation can make enterprises ‘Future Ready’.

Our definition of a Future-Ready Enterprise is a 'hyper-connected learning entity' with

  • Ability to access and unleash organizational knowledge with ease
  • Ability to describe the reason behind events
  • Ability to predict future trends
  • Ability to prescribe corrective actions

From a siloed enterprise towards a more connected learning enterprise

To evolve, enterprises have to anticipate customer needs, using technology to harness data and take meaningful action.

The next question is 'How?'

Public cloud platforms are already providing the necessary fuel to drive this transformation. With compelling unit economics that allow data to be stored at scale, they also provide the necessary firepower to crunch data with a plethora of tools. As an enterprise customer, you can choose a tool or service for every functionality or process you want to perform on the cloud. While on one side you are spoilt for choice, it also becomes difficult to choose the right tool for the right functionality. However, that is a separate, more technical topic of discussion altogether.

Coming back to becoming a future-ready enterprise: we believe it is a journey for an enterprise from data to intelligence, i.e., gathering all the data and using it to make intelligent decisions or build future-ready applications. This, however, is not a one-step transformation. We, at Powerup, divide this into 4 stages, as below –

Powerup’s Data To Intelligence Journey For Enterprises

As an organization, there will be a lot of data being generated in ERP, CRM, documents, IoT data, research data, social media data, call center notes, and much more, lying in different applications. The data can be structured, unstructured, or semi-structured, and it may sit on-prem or in hybrid cloud, within different SaaS applications or some kind of cloud storage. It is very important to first have the raw data in a single place in order to do something with it. So the first step in the journey is the migration of all this data into a single place: a data lake, where data from multiple sources is stored in its raw format. Converting this data into 'Information' is when we create a data warehouse: one can perform transformations on the data and use it to derive insights. Many enterprises have reached, or are at least trying to reach, this step today. But this is not where it ends.
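The first stage can be pictured as landing raw records from every source system in one place, unchanged. The structure below is a toy stand-in for a real data lake; source names and fields are made up for illustration:

```python
import datetime

def ingest(data_lake, source, records):
    """Land records in the raw zone as-is, keyed by source system."""
    data_lake.setdefault(source, []).append({
        "ingested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "records": records,  # stored raw and untransformed
    })

lake = {}
ingest(lake, "crm", [{"customer": "ABC", "deal_value": 1200}])
ingest(lake, "social", [{"post_id": 7, "sentiment": "negative"}])
```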

With the increase in the amount of data, there is also a lot of irrelevant data that users have to deal with. This is where the next step in the journey becomes very important. The data within the data warehouse is further transformed to create data marts. We call this step 'Knowledge' because it creates different knowledge bases within an enterprise serving different business users. Users can now work with only the data that is relevant to them, which also improves efficiency. Data marts can be created per line of business or at the department level, and user access can be controlled at this level so that only certain profiles of users have access. So now that we have specific data to work with, we can derive insights and make business decisions. Problem solved, right? No.
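In code, a data mart is essentially a governed slice of the warehouse. This toy sketch (the table, departments, and access rules are all hypothetical) filters rows per department and enforces access per user profile:

```python
warehouse = [
    {"dept": "sales", "metric": "revenue", "value": 100_000},
    {"dept": "hr", "metric": "attrition", "value": 0.08},
    {"dept": "sales", "metric": "leads", "value": 42},
]

# Which user profiles may read which department marts.
access = {"sales_analyst": {"sales"}, "cxo": {"sales", "hr"}}

def data_mart(rows, dept, profile):
    """Return the department slice, but only for authorized profiles."""
    if dept not in access.get(profile, set()):
        raise PermissionError(f"{profile} cannot access the {dept} mart")
    return [r for r in rows if r["dept"] == dept]

sales_mart = data_mart(warehouse, "sales", "sales_analyst")
```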

The last step of the journey is centered around the most important stakeholder for your enterprise: the customer. It all depends on how your internal alignment of data benefits your customer or end user and provides them more value. 'Intelligence' signifies the intelligent use of data to build innovative products and applications that improve the customer experience. With customer data coming from multiple digital touchpoints, one can create a digital profile for each customer, personalize their experience, and create stickiness to one's brand. Based on the data, enterprises can proactively anticipate market demands and needs to deliver products and services that are relevant. With the host of cognitive technologies available on the cloud, enterprises can take actions or make business decisions based on expected outcomes.
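The digital profile idea can be sketched as folding events from every touchpoint into one customer view; the channel and attribute names below are made up for illustration:

```python
def build_profile(customer_id, events):
    """Merge multi-touchpoint events into a single digital profile."""
    profile = {"customer_id": customer_id, "touchpoints": set()}
    for event in events:
        profile["touchpoints"].add(event["channel"])
        # Later events overwrite earlier attribute values.
        profile.update(event.get("attributes", {}))
    return profile

profile = build_profile("cust-42", [
    {"channel": "web", "attributes": {"last_page": "/pricing"}},
    {"channel": "mobile", "attributes": {"push_opt_in": True}},
])
```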

No matter which stage of this journey your enterprise is at, the ultimate goal is to achieve Intelligence. We, at Powerup, have helped many enterprises at different stages of their data journey reach this stage.

Let us know where you are on this journey. Reach out to us at

Why hybrid is the preferred strategy for all your cloud needs

By | AWS, Azure, Blogs, GCP, hybrid cloud, Powerlearnings | No Comments

Written by Kiran Kumar, Business analyst at Powerupcloud Technologies.

While public cloud is a globally accepted and proven solution for CIOs and CTOs looking for a more agile, scalable, and versatile IT environment, there are still questions about security, reliability, and the cloud readiness of enterprises, and fully migrating to a cloud-native organization requires a lot of time and resources. This is exacerbated for start-ups, for whom it is too much of a risk to work with these uncertainties. This demands a solution that is innocuous and less expensive, to draw them out of the comforts of their existing on-prem infrastructure.

In such cases, a hybrid cloud is the best approach, providing you with the best of both worlds while keeping pace with all your performance and compliance needs within the comfort of your data center.

So what is a hybrid cloud?

Hybrid cloud delivers a seamless computing experience to organizations by combining the power of the public and private cloud and allowing data and applications to be shared between them. It gives enterprises the ability to easily scale their on-premises infrastructure into the public cloud to handle any fluctuations in workload, without giving third-party data centers access to the entirety of their data. Understanding the benefits, various organizations around the world have streamlined their offerings to effortlessly integrate these solutions into hybrid infrastructures. However, an enterprise has no direct control over the architecture of a public cloud, so for hybrid cloud deployment, enterprises must architect their private cloud to achieve a consistent hybrid experience with the desired public cloud or clouds.

In a 2019 survey of 2,650 IT decision-makers from around the world, respondents reported steady and substantial hybrid deployment plans over the next five years. In addition, a vast majority of respondents, more than 80%, selected hybrid cloud as their ideal IT operating model, and more than half of these respondents cited hybrid cloud as the model that meets all of their needs. More than 60% of them stated that data security is the biggest influencer.

Respondents also felt that having the flexibility to match the right cloud to each application showcases the scale of adaptability that enterprises can work with in a hybrid multi-cloud environment.

Banking is one of the industries that will embrace the full benefits of hybrid cloud: because of how the industry operates, it requires a unique mix of services and an infrastructure that is easily accessible and also affordable.

In a recent IBM survey 

  • 50 percent of banking executives say they believe the hybrid cloud can lower their cost of IT ownership 
  • 47 percent of banking executives say they believe hybrid cloud can improve operating margin 
  • 47 percent of banking executives say they believe hybrid cloud can accelerate innovation

Hybrid adoption – best practices and guidelines 

Some of the biggest challenges in cloud adoption include security, talent, and costs. According to reports, hybrid computing has shown that it can reduce security challenges and manage risk by positioning the most important digital assets and data on-prem. Private clouds are still considered an appropriate solution to host and manage sensitive data and applications, and enterprises still need the means to support their conventional enterprise computing models. A sizeable number of businesses still have substantial on-premises assets comprising archaic technology, sensitive collections of data, and tightly coupled legacy apps that can't easily be moved to, or swapped for, public cloud.

Here are some guidelines for hybrid adoption.

Have a cloud deployment model for applications and data

Deployment models describe which cloud resources and applications should be deployed where. Hence it is crucial to understand the two-paced system, i.e., steady and fast-paced systems, to determine the deployment models.

A steady-paced system must continue to support the traditional enterprise applications on-prem to keep the business running and maintain current on-premises services. Additionally, off-premises services, such as private dedicated IaaS, can be used to increase infrastructure flexibility for enterprise services.

A fast-paced system is required to satisfy more spontaneous needs, like delivering applications and services quickly, whether that's scaling existing services to meet spikes in demand or providing new applications to meet an immediate business need.

The next step is determining where applications and data must reside.

The placement of applications and datasets on private, public, or on-prem infrastructure is crucial: IT architects must assess the right application architecture to achieve maximum benefit. This includes understanding application workload characteristics and determining the right deployment model for multi-tier applications.
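A placement assessment can start from a simple decision rule per workload. The criteria and order below are assumptions for illustration, not a standard methodology:

```python
def placement(workload):
    """Toy placement rule: data sensitivity first, then demand profile."""
    if workload["data_sensitivity"] == "high":
        return "private/on-prem"
    if workload["demand"] == "bursty":
        return "public"  # elastic capacity suits spiky demand
    return "public" if workload["cloud_ready"] else "private/on-prem"

decision = placement(
    {"data_sensitivity": "high", "demand": "bursty", "cloud_ready": True}
)
```

A real assessment would weigh many more characteristics (latency, data gravity, licensing, compliance), but the shape of the exercise is the same.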

Create heterogeneous environments 

To achieve maximum benefit from a hybrid strategy, the enterprise must leverage its existing in-house investments together with cloud services by integrating them efficiently; as new cloud services are deployed, integrating the applications running on them with the various on-premises applications and systems becomes important.

Integration between applications typically includes 

  • Process (or control) integration, where an application invokes another one in order to execute a certain workflow. 
  • Data integration, where applications share common data, or one application’s output becomes another application’s input. 
  • Presentation integration, where multiple applications present their results simultaneously to a user through a dashboard or mashup.

To obtain a seamless integration between heterogeneous environments, the following actions are necessary:

  • A cloud service provider must support open-source technologies for admin and business interfaces.
  • Examine the compatibility of in-house systems with cloud service providers, and ensure that on-premises applications follow SOA design principles and can utilize and expose APIs to enable interoperability with private or public cloud services.
  • Leverage third-party identity and access management functionality to authenticate and authorize access to cloud services, and put suitable API management capabilities in place to prevent unauthorized access.

Network security requirements 

Network type – the technology used for the physical connection over the WAN depends on aspects like bandwidth, latency, service levels, and costs. Hybrid cloud solutions can rely on point-to-point links as well as the Internet to connect on-premises data centers and cloud providers. The selection of the connectivity type depends on an analysis of aspects like performance, availability, and the type of workloads.

Security – the connectivity domain needs to be evaluated and understood in order to match the cloud provider's network security standards with the overall network security policies, guidelines, and compliance requirements. Encrypting and authenticating traffic on the WAN can be handled at the application level, and for computing resources and applications, technologies such as VPNs can be employed to provide secure connections between components running in different environments.

Web-app security and management services like DNS and DDoS protection, which are available on the cloud, can free up the dedicated resources an enterprise would otherwise need to procure, set up, and maintain such services, letting it concentrate on business applications instead. This is especially applicable to hybrid cloud workloads that have components deployed into a cloud service and exposed to the Internet. Systems deployed on-premises need to adapt to work with the cloud to facilitate problem-identification activities that may span multiple systems with different governance boundaries.

Security and privacy challenges & counter-measures

Hybrid cloud computing has to coordinate applications and services spanning various environments, which involves the movement of applications and data between those environments. Security protocols need to be applied consistently across the whole system, and additional risks must be addressed with suitable controls to account for the loss of control over assets and data placed in a cloud provider's systems. Despite this inherent loss of control, enterprises still need to take responsibility for their use of cloud computing services to maintain situational awareness, weigh alternatives, set priorities, and effect changes in security and privacy that are in the best interest of the organization.

Key challenges:

  • Each cloud provider is likely to have its own set of security and privacy characteristics, so a single, uniform interface should be used to deal with the risks arising from consuming services from multiple providers. 
  • Authentication and authorization: in a hybrid environment, gaining access to the public cloud environment could lead to access to the on-premises environment.
  • Compliance must be checked between the cloud providers used and in-house systems.

Recommended counter-measures:

  • Use a single identity and access management (IdAM) system across all environments.
  • Use networking facilities such as VPNs between the on-premises environment and the cloud.  
  • Encrypt all sensitive data, wherever it is located.
  • Coordinate firewalls, DDoS attack handling, and similar protections across all environments with external interfaces.
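One practical consequence of using a single authentication scheme everywhere is that every environment must validate the same credentials the same way. The toy sketch below illustrates this with HMAC-signed tokens; the shared key and function names are illustrative assumptions, and a real deployment would use a proper IdAM system with managed key distribution:

```python
import hmac
import hashlib

# Illustrative only: in practice this key would come from a secrets manager
# shared by the on-premises and cloud environments, never a hard-coded value.
SHARED_KEY = b"demo-key"

def sign(user_id: str) -> str:
    """Issue a token any environment can verify with the shared key."""
    return hmac.new(SHARED_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def verify(user_id: str, token: str) -> bool:
    """Constant-time check, identical on-premises and in the cloud."""
    return hmac.compare_digest(sign(user_id), token)

token = sign("alice")
print(verify("alice", token))    # True
print(verify("mallory", token))  # False
```

Because both environments run the same `verify` logic against the same key material, a token minted on one side of the hybrid boundary is honored on the other, which is the property a single IdAM system provides at scale.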

Set up an appropriate backup & DR plan

As discussed earlier, a hybrid environment gives organizations the option to work across multiple clouds, supporting business continuity, one of the most important aspects of business operations. This is more than a simple data backup to the cloud or a disaster recovery plan: it means that when a disaster or failure occurs, data remains accessible with little to no downtime. Continuity is measured in terms of time to restart (RTO: recovery time objective) and maximum data loss allowed (RPO: recovery point objective).

A business continuity solution therefore has to be planned around key elements such as resilience, the RTO, and the RPO agreed upon with the cloud provider. 
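The RPO target can be sanity-checked arithmetically: the worst-case data loss equals the gap between consecutive backups, so the backup interval must not exceed the agreed RPO. A small illustrative helper (the function and parameter names are assumptions for this sketch):

```python
def meets_rpo(backup_interval_min: float, rpo_min: float) -> bool:
    """Return True if a backup schedule can satisfy the agreed RPO.

    Worst-case data loss is the time since the last backup, which is at
    most one full backup interval, so the interval must not exceed the RPO.
    """
    return backup_interval_min <= rpo_min

print(meets_rpo(15, 60))   # True: 15-minute backups satisfy a 1-hour RPO
print(meets_rpo(240, 60))  # False: 4-hour backups cannot
```

The same style of check applies to the RTO side by comparing measured restore-drill durations against the agreed recovery time.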

Here are some of the challenges encountered while making a DR plan 

  • Although RTO and RPO values give a general idea of the outcome, they cannot be fully relied upon; the actual time required to restart operations may be longer. 
  • As systems come back up and become operational, there is a sudden burst of requests for resources, which is more pronounced in large-scale disasters.
  • Selecting the right CSP is crucial: most cloud providers do not offer DR as a managed service; instead, they provide basic infrastructure on which to build your own DR solution.
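The post-disaster request burst noted above is commonly smoothed by having recovering systems retry with jittered exponential backoff rather than all at once. A minimal sketch, assuming full-jitter backoff with illustrative parameter names:

```python
import random

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 60.0,
                   seed: int = 0) -> list[float]:
    """Full-jitter exponential backoff delays, in seconds.

    The delay window doubles each attempt (up to a cap), and the actual
    wait is drawn uniformly from that window so that many recovering
    systems do not retry in lockstep and re-trigger the overload.
    """
    rng = random.Random(seed)  # seeded here only to keep the demo repeatable
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

delays = backoff_delays(5)
print(all(0 <= d <= 60 for d in delays))  # True
```

Spreading retries this way trades a slightly longer individual recovery for a much lower chance that the burst itself knocks the freshly restored services back over.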

Enterprises therefore have to select the DR strategy that best suits their IT infrastructure. A sound strategy gives the business mobility, making it accessible from anywhere in the world, and insures its data against natural disasters and technical failures by minimizing downtime and the costs associated with such events.

How leading OEMs like AWS, Azure, and Google Cloud are adapting to this changing landscape  

Google Anthos

In early 2019, Google launched Anthos, one of the first multi-cloud solutions from a mainstream cloud provider. Anthos is an open application modernization platform that enables you to modernize your existing applications, build new ones, and run them anywhere. It is built on open source: Kubernetes serves as its central command and control center, Istio enables federated network management across the platform, and Knative provides an open API and runtime environment for running serverless workloads anywhere you choose. Anthos enables consistency between on-premises and cloud environments, helps accelerate application development, and strategically enables your business with transformational technologies. 

AWS Outposts

AWS Outposts is a fully managed service that extends the same AWS hardware infrastructure, services, APIs, and tools to build and run your applications on-premises and in the cloud for a truly consistent hybrid experience. AWS compute, storage, database, and other services run locally on Outposts, and you can access the full range of AWS services available in the Region to build, manage, and scale your on-premises applications using familiar AWS services and tools across your on-premises and cloud environments. Your Outposts infrastructure and AWS services are managed, monitored, and updated by AWS just as in the cloud.

Azure Stack

Azure Stack is a hybrid solution from Azure, built and distributed by approved hardware vendors (such as Dell, Lenovo, and HPE) that brings the Azure cloud into your on-premises data center. It is a fully managed service in which the hardware is managed by the certified vendors and the software by Microsoft. Using Azure Stack, you can extend Azure technology anywhere, from the data center to edge locations and remote offices, enabling you to build, deploy, and run hybrid and edge computing apps consistently across your IT ecosystem, with flexibility for diverse workloads.

How Powerup approaches Hybrid cloud for its customers 

Powerup is one of the few companies in the world to have achieved launch-partner status with AWS Outposts, with experience across 200+ projects in various verticals and top-tier certified expertise in all three major cloud providers in the market. We can bring an agile, secure, and seamless hybrid experience to the table. Because Outposts is a fully managed service, it eliminates the hassle of managing an on-premises data center, so enterprises can concentrate on optimizing their infrastructure.

Reference Material

Practical Guide to Hybrid Cloud Computing