Why hybrid is the preferred strategy for all your cloud needs


Written by Kiran Kumar, Business analyst at Powerupcloud Technologies.

While public cloud is a globally accepted and proven solution for CIOs and CTOs looking for a more agile, scalable and versatile IT environment, there are still questions about security, reliability and the cloud readiness of enterprises, and fully migrating to a cloud-native organization requires a lot of time and resources. This is especially true for start-ups, for whom working with these uncertainties is too much of a risk. This demands a solution that is safe and less expensive to draw them out of the comfort of their existing on-prem infrastructure.

In such cases, a hybrid cloud is the best approach, providing you with the best of both worlds while keeping pace with all your performance and compliance needs within the comfort of your own datacenter.

So what is a hybrid cloud?

Hybrid cloud delivers a seamless computing experience to organizations by combining the power of the public and private cloud and allowing data and applications to be shared between them. It gives enterprises the ability to easily scale their on-premises infrastructure out to the public cloud to handle fluctuations in workload, without giving third-party datacenters access to the entirety of their data. Understanding the benefits, various organizations around the world have streamlined their offerings to integrate effortlessly into hybrid infrastructures. However, an enterprise has no direct control over the architecture of a public cloud, so for hybrid cloud deployment, enterprises must architect their private cloud to achieve a consistent hybrid experience with the desired public cloud or clouds.

In a 2019 survey of 2,650 IT decision-makers from around the world, respondents reported steady and substantial hybrid deployment plans over the next five years. More than 80% of respondents selected hybrid cloud as their ideal IT operating model, more than half of these cited hybrid cloud as the model that meets all of their needs, and more than 60% stated that data security is the biggest influencer.

Also, respondents felt that having the flexibility to match the right cloud to each application showcases the scale of adaptability that enterprises can work with in a hybrid multi-cloud environment.

Banking is one of the industries poised to embrace the full benefits of a hybrid cloud: because of how the industry operates, it requires a unique mix of services and an infrastructure that is easily accessible and affordable.

In a recent IBM survey:

  • 50 percent of banking executives say they believe hybrid cloud can lower their cost of IT ownership
  • 47 percent say they believe hybrid cloud can improve operating margin
  • 47 percent say they believe hybrid cloud can accelerate innovation

Hybrid adoption – best practices and guidelines 

Some of the biggest challenges in cloud adoption include security, talent, and costs. According to the report, hybrid computing has shown that it can address security challenges and manage risk by positioning the most important digital assets and data on-prem. Private clouds are still considered an appropriate solution to host and manage sensitive data and applications, and enterprises still need the means to support their conventional enterprise computing models. A sizeable number of businesses still have substantial on-premises assets comprising archaic technology, sensitive collections of data, and tightly coupled legacy apps that can't easily be moved to or swapped for public cloud.

Here are some of the guidelines for hybrid adoption.

Have a cloud deployment model for applications and data

Deployment models define which cloud resources and applications should be deployed and where. Hence it is crucial to understand the two-paced system, i.e., steady-paced and fast-paced systems, to determine the deployment models.

A steady-paced system must continue to support traditional enterprise applications on-prem to keep the business running and maintain current on-premises services. Additionally, off-premises services, such as private dedicated IaaS, can be used to increase infrastructure flexibility for enterprise services.

A fast-paced system, on the other hand, is required to satisfy more spontaneous needs, such as delivering applications and services quickly, whether scaling existing services to meet spikes in demand or standing up new applications to meet an immediate business need.

The next step is determining where applications and data must reside.

The placement of applications and datasets on private, public or on-prem infrastructure is crucial, since IT architects must assess the right application architecture to achieve maximum benefit. This includes understanding application workload characteristics and determining the right deployment model for multi-tier applications.

Create heterogeneous environments 

To achieve maximum benefit from a hybrid strategy, the enterprise must leverage its existing in-house investments alongside cloud services by integrating them efficiently: as new cloud services are deployed, integrating the applications running on them with the various on-premises applications and systems becomes important.

Integration between applications typically includes:

  • Process (or control) integration, where an application invokes another one in order to execute a certain workflow. 
  • Data integration, where applications share common data, or one application’s output becomes another application’s input. 
  • Presentation integration, where multiple applications present their results simultaneously to a user through a dashboard or mashup.

To obtain a seamless integration between heterogeneous environments, the following actions are necessary:

  • The cloud service provider must support open source technologies for admin and business interfaces.
  • Examine the compatibility of in-house systems to work with cloud service providers, and ensure that on-premises applications follow SOA design principles and can utilize and expose APIs to enable interoperability with private or public cloud services.
  • Leverage third-party identity and access management (IAM) functionality to authenticate and authorize access to cloud services, and put suitable API management capabilities in place to prevent unauthorized access (see the sketch below).
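To make the last two points concrete, here is a minimal, hypothetical Python sketch of an on-premises component calling a cloud-hosted API with a token issued by a shared identity provider. All endpoints, names and credentials here are illustrative placeholders, not any specific vendor's API.

import requests

## Hypothetical endpoints; replace with your IdP and cloud service URLs.
IDP_TOKEN_URL = 'https://idp.example.com/oauth2/token'
CLOUD_API_URL = 'https://api.example-cloud.com/v1/orders'

def get_access_token(client_id, client_secret):
    ## Authenticate against the shared IdP (OAuth2 client-credentials grant).
    resp = requests.post(IDP_TOKEN_URL, data={
        'grant_type': 'client_credentials',
        'client_id': client_id,
        'client_secret': client_secret,
    })
    resp.raise_for_status()
    return resp.json()['access_token']

def call_cloud_service(token):
    ## Every cross-environment call carries the IdP-issued token, so the
    ## same authorization policy applies on-prem and in the cloud.
    resp = requests.get(CLOUD_API_URL,
                        headers={'Authorization': 'Bearer %s' % token})
    resp.raise_for_status()
    return resp.json()

Centralizing token issuance in one identity provider is what lets a single API management policy be enforced across both environments.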

Network security requirements 

Network type: the technology used for the physical connection over the WAN depends on aspects like bandwidth, latency, service levels, and costs. Hybrid cloud solutions can rely on point-to-point links as well as the Internet to connect on-premises data centers and cloud providers. The choice of connectivity type depends on an analysis of aspects like performance, availability, and type of workloads.

Security: the connectivity domain needs to be evaluated and understood in order to match the cloud provider's network security standards with the organization's overall network security policies, guidelines and compliance requirements. Encrypting and authenticating traffic on the WAN can be evaluated at the application level, aspects like the security of computing resources and applications must be considered, and technologies such as VPNs can be employed to provide secure connections between components running in different environments.

Web-app security and management services like DNS and DDoS protection, which are available on the cloud, can free up the dedicated resources an enterprise would otherwise need to procure, set up and maintain such services, letting it concentrate on business applications instead. This is especially applicable to hybrid cloud workloads that have components deployed into a cloud service and exposed to the Internet. Systems deployed on-premises need to adapt to work with the cloud in order to facilitate problem identification across multiple systems with different governance boundaries.

Security and privacy challenges & counter-measures

Hybrid cloud computing has to coordinate applications and services spanning various environments, which involves the movement of applications and data between those environments. Security protocols need to be applied consistently across the whole system, and additional risks must be addressed with suitable controls to account for the loss of control over any assets and data placed into a cloud provider's systems. Despite this inherent loss of control, enterprises still need to take responsibility for their use of cloud computing services in order to maintain situational awareness, weigh alternatives, set priorities, and effect changes in security and privacy that are in the best interest of the organization.

  • A single, uniform interface must be used to deal with risks arising from using services from various cloud providers, since each is likely to have its own set of security and privacy characteristics.
  • Authentication and authorization: in a hybrid environment, gaining access to the public cloud environment could lead to access to the on-premises environment.
  • Compliance must be checked between the cloud providers used and in-house systems.

Counter-measures

  • A single identity and access management (IdAM) system should be used.
  • Networking facilities such as VPNs are recommended between the on-premises environment and the cloud.
  • Encryption needs to be in place for all sensitive data, wherever it is located (see the sketch below).
  • Firewalls, DDoS attack handling, etc., need to be coordinated across all environments with external interfaces.
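To illustrate the encryption counter-measure, here is a minimal Python sketch using the cryptography library's Fernet recipe (symmetric, authenticated encryption). In practice the key would be fetched from a KMS or HSM rather than generated inline; this is a sketch, not a production pattern.

from cryptography.fernet import Fernet

## For the sketch only: in practice, fetch the key from a KMS/HSM.
key = Fernet.generate_key()
f = Fernet(key)

## Encrypt sensitive data before it is placed into a cloud provider's systems.
ciphertext = f.encrypt(b'customer records destined for the public cloud')

## Only holders of the key can decrypt, wherever the data is located.
plaintext = f.decrypt(ciphertext)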

Set up an appropriate data backup & DR plan

As already discussed, a hybrid environment gives organizations the option to work with multiple clouds, thus offering business continuity, which has been one of the most important aspects of business operations. It is not just a simple data backup to the cloud or a disaster recovery plan; it means that when a disaster or failure occurs, data is still accessible with little to no downtime. This is measured in terms of time to restart (RTO: recovery time objective) and maximum data loss allowed (RPO: recovery point objective).

Therefore a business continuity solution has to be planned considering key elements such as resilience, RTO and RPO, as agreed upon with the cloud provider.
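As a toy illustration of how these objectives drive day-to-day operations (all numbers are invented): the agreed RPO caps how much data can be lost, which in turn caps the interval between successive backups or replication cycles.

## Toy example: derive a backup cadence from an agreed RPO.
rpo_minutes = 15        # maximum tolerable data loss, per the agreement
safety_factor = 0.8     # run more often than the RPO strictly requires
interval = int(rpo_minutes * safety_factor)
print('Back up at least every %d minutes to honor a %d-minute RPO'
      % (interval, rpo_minutes))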

Here are some of the challenges encountered while making a DR plan:

  • Although the RTO and RPO values give a general idea of the outcome, they cannot be fully relied upon; the time required to restart operations may run longer.
  • As systems come back up and become operational, there will be a sudden burst of requests for resources, which is more apparent in large-scale disasters.
  • Selecting the right cloud service provider is crucial, as most cloud providers do not offer DR as a managed service; instead, they provide the basic infrastructure on which to build your own DR solution.

Hence enterprises have to be deliberate in selecting the DR strategy that best suits their IT infrastructure. This is crucial in providing mobility to the business, making it accessible from anywhere in the world, and in insuring its data against disasters, natural or technical, by minimizing downtime and the costs associated with such an event.

How are leading providers like AWS, Azure and Google Cloud adapting to this changing landscape?

Google Anthos

In early 2019, Google came up with Anthos, one of the first multi-cloud solutions from a mainstream cloud provider. Anthos is an open application modernization platform that enables you to modernize your existing applications, build new ones, and run them anywhere. It is built on open source: Kubernetes serves as its central command and control center, Istio enables federated network management across the platform, and Knative provides an open API and runtime environment that lets you run your serverless workloads anywhere you choose. Anthos enables consistency between on-premises and cloud environments, helps accelerate application development, and strategically enables your business with transformational technologies.

AWS Outposts

AWS Outposts is a fully managed service that extends the same AWS hardware infrastructure, services, APIs, and tools to build and run your applications on-premises and in the cloud for a truly consistent hybrid experience. AWS compute, storage, database, and other services run locally on Outposts, and you can access the full range of AWS services available in the Region to build, manage, and scale your on-premises applications using familiar AWS services and tools across your on-premises and cloud environments. Your Outposts infrastructure and AWS services are managed, monitored, and updated by AWS just as in the cloud.

Azure Stack

Azure Stack is a hybrid solution from Azure, built and distributed by approved hardware vendors (like Dell, Lenovo, HPE, etc.) that brings the Azure cloud into your on-prem data center. It is a fully managed offering where the hardware is managed by the certified vendors and the software is managed by Microsoft. Using Azure Stack you can extend Azure technology anywhere, from the datacenter to edge locations and remote offices, enabling you to build, deploy, and run hybrid and edge computing apps consistently across your IT ecosystem, with flexibility for diverse workloads.

How Powerup approaches Hybrid cloud for its customers 

Powerup is one of the few companies in the world to have achieved launch-partner status with AWS Outposts, with experience across 200+ projects in various verticals and top-tier certified expertise in all three major cloud providers in the market. We can bring an agile, secure, and seamless hybrid experience to the table. Outposts is a fully managed service, so it eliminates the hassle of managing an on-prem data center, letting enterprises concentrate on optimizing their infrastructure.

Reference Material

Practical Guide to Hybrid Cloud Computing

Azure Arc: Onboarding GCP machine to Azure


Written by Madan Mohan K, Associate Cloud Architect

“#Multicloud and #Hybrid cloud are not things, they are situations you find yourself in when trying to solve business problems”

In the recent past, most organizations have moved toward hybrid and multi-cloud approaches. Many enterprises still face a sprawl of resources spread across multiple datacenters, clouds, and edge locations. Enterprise customers keep looking for a cloud-native control plane to inventory, organize, and enforce policies for their IT resources wherever they are, from a central place.

Azure Arc:

Azure Arc extends Azure Resource Manager capabilities to Linux and Windows servers, and to Kubernetes clusters, on any infrastructure across on-premises, multi-cloud, and the edge. Organizations can use Azure Arc to run Azure data services anywhere, which includes always up-to-date data capabilities, deployment in seconds, and dynamic scalability on any infrastructure. Here we will take a close look at Azure Arc for servers, which is currently in preview.

Azure Arc for Servers:

Azure Arc for servers lets you manage machines that are hosted outside of Azure (on-premises or with other cloud providers). When these machines are connected to Azure using Azure Arc for servers, they become Connected Machines and are treated as native resources in Azure. Each Connected Machine gets a Resource ID during registration in Azure and is managed as part of a resource group inside an Azure subscription. This enables them to benefit from Azure features and capabilities such as Azure Policy and tagging.

For each machine that you want to connect to Azure, an agent package needs to be installed. Based on how recently the agent has checked in, the machine will have a status of Connected or Disconnected. If a machine has not checked in within the past 5 minutes, it will show as Disconnected until connectivity is restored. This check-in is called a heartbeat. The Azure Resource Manager service limits also apply to Azure Arc for servers, which means there is a limit of 800 servers per resource group.

Supported Operating Systems:

The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent:

  • Windows Server 2012 R2 and higher
  • Ubuntu 16.04 and 18.04

Networking Requirements on Remote Firewall:

The Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. During installation and runtime, the agent requires connectivity to Azure Arc service endpoints. If outbound connectivity is blocked by the firewall, make sure that the following URLs and Service Tags are not blocked:

Service Tags:

  • AzureActiveDirectory
  • AzureTrafficManager

URLs Required:

  • management.azure.com: Azure Resource Manager
  • login.windows.net: Azure Active Directory
  • dc.services.visualstudio.com: Application Insights
  • agentserviceapi.azure-automation.net: Guest Configuration
  • *-agentservice-prod-1.azure-automation.net: Guest Configuration
  • *.his.hybridcompute.azure-automation.net: Hybrid Identity Service
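Before installing the agent, you can sanity-check outbound reachability to these endpoints over TCP 443. The following is a minimal Python sketch; the wildcard entries from the list above are omitted because their concrete hostnames depend on your environment.

import socket

## Endpoints taken from the URL list above.
ENDPOINTS = [
    'management.azure.com',
    'login.windows.net',
    'dc.services.visualstudio.com',
    'agentserviceapi.azure-automation.net',
]

for host in ENDPOINTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print('%s: reachable' % host)
    except OSError as err:
        print('%s: blocked (%s)' % (host, err))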

Register Azure resource providers:

Azure Arc for servers depends on the following Azure resource providers being registered in your subscription:

  • Microsoft.HybridCompute
  • Microsoft.GuestConfiguration

First, we need to register the required resource providers in Azure. Therefore, take the following steps:

Navigate to the Azure portal at https://portal.azure.com/

Log in with administrator credentials

Registration can be done using either the Azure portal or PowerShell.

Using Azure Portal:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types#azure-portal

Using Azure Powershell:

Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute

Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration

In this case, we will be using PowerShell to register the resource providers.

Note: The resource providers are only registered in specific locations.
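For completeness, here is a roughly equivalent sketch in Python using the azure-identity and azure-mgmt-resource packages. The subscription ID is a placeholder, and this assumes you have already signed in with the Azure CLI.

from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

## Placeholder subscription ID; credentials are reused from `az login`.
client = ResourceManagementClient(AzureCliCredential(), '<subscription-id>')

for namespace in ('Microsoft.HybridCompute', 'Microsoft.GuestConfiguration'):
    provider = client.providers.register(namespace)
    print(namespace, provider.registration_state)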

Connect the on-premise/cloud machine to Azure Arc for Servers:

There are two different ways to connect on-premises machines to Azure Arc.

  • Download a script and run it manually on the server.
  • Using PowerShell for adding multiple machines using a Service Principal.

When adding a single server, the best approach is to download the script and run it manually. To connect the machine to Azure, we need to generate the agent install script in the Azure portal. This script downloads the Azure Connected Machine Agent (AzCMAgent) installation package, installs it on the on-premises machine, and registers the machine in Azure Arc.

Generate the agent install script using the Azure portal:

To generate the agent install script, take the following steps:

Navigate to the Azure portal and search for Azure Arc (or go directly to https://aka.ms/hybridmachineportal).

Click on +Add.

Select Add machines using an interactive script:

Keep the defaults in Basic Pane and click on Review and generate the script.

Connect the GCP machine to Azure Arc:

GCP Instance:

To connect the GCP machine to Azure Arc, we first need to install the agent on the GCP machine. Therefore, take the following steps

Open Windows PowerShell ISE as an administrator. Paste the script generated in the previous step into the window and execute it.

A registration code is received during the execution of the script.

Navigate to https://microsoft.com/devicelogin, paste in the code from PowerShell, and click Next.

A confirmation message is displayed stating that the device is registered in Azure Arc.

Validation in Azure Portal:

Now, when navigating to Azure Arc in the Azure portal, we can see that the GCP VM is onboarded and its status shows as Connected.

Managing the machine in Azure Arc:

To manage the machine from Azure, click on the machine in the overview blade.

The overview blade lets you add tags to the machine, manage access, and apply policies to it.

Inference:

Azure Arc acts as a centralized, cloud-native control plane to inventory, organize, and enforce policies for IT resources wherever they are. With Azure Arc, an organization can enjoy the full benefits of managing its hybrid environment while retaining the ability to innovate using cloud technologies.

Access Management of GCP instances by configuring Google Cloud Identity LDAP for user login


Written by Madan Mohan K, Associate Cloud Architect

“One Man’s TRASH can be another Man’s IDENTITY”-Identity Access Management

Maintaining two identity management systems, one for SaaS apps and another for traditional apps and infrastructure, results in intricacy, fragmented security, and additional cost.

To overcome this, Google launched secure LDAP, which lets you manage access to SaaS apps and traditional LDAP-based apps and infrastructure using a single cloud-based identity and access management (IAM) solution.

Cloud Identity:

A unified identity, access, app, and endpoint management (IAM/EMM) platform that helps IT and security teams maximize end-user efficiency, protect company data, and transition to a digital workspace.

LDAP (Lightweight Directory Access Protocol) is an application protocol for querying and modifying items in directory service providers like Active Directory, which supports a form of LDAP.

Platform Used:

  • G-Suite Admin with Cloud Identity Premium
  • Google Cloud

G-Suite Admin:

Create LDAP client from the Apps in the G-Suite Admin console

G-Suite Admin Console

On the LDAP apps page, click the “ADD CLIENT” button and key in the required details.

LDAP client creation

Under Access Permissions, you will have three settings:

  • Verify user credentials
  • Read user information
  • Read group information

In this illustration, we chose to go with the entire-domain option. If you wish to restrict access, it can be done by limiting user access to an OU.

In the “Read group information” section, change the option to On and click the “ADD LDAP CLIENT” button to create the client.

Once the configuration is done and you are prompted with a Google SSL certificate, click on “Download certificate” and then “CONTINUE TO CLIENT DETAILS”.

The service status should be in the ON state, so on the Status page, select “ON for everyone” and click “SAVE”.

Well, that is all at the G-Suite Admin console.

Google Cloud:

  • Create an instance in GCP. In this example, we chose to use Ubuntu 16.
  • Update the instance using sudo apt update -y
  • Install the SSSD package using sudo apt install -y sssd sssd-tools
  • Once the installation is done, create a new file in /etc/sssd/ and name it sssd.conf. You can do this using vi /etc/sssd/sssd.conf or your preferred editor.

The sssd.conf file should include settings along the lines of the sketch shown after the note below.

Note: Remember to replace the domain with yours. By default, Google Linux instances disable password authentication, so change it to yes.
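Since the original screenshot is not reproduced here, the following is a minimal sketch of what sssd.conf can look like for Google secure LDAP, loosely following Google's documented template. The domain, search base, and certificate/key paths are illustrative and must match your own setup.

[sssd]
services = nss, pam
domains = powerup.university

[domain/powerup.university]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.google.com
ldap_search_base = dc=powerup,dc=university
ldap_schema = rfc2307bis
# Illustrative paths; point these at the certificate and key downloaded
# from the LDAP client page in the G-Suite Admin console.
ldap_tls_cert = /etc/sssd/google-ldap.crt
ldap_tls_key = /etc/sssd/google-ldap.key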

Configuration in Google Instance:

  • Upload the certificate which was downloaded earlier from the G-Suite Download certificate page.
  • Change the ownership and permissions of the sssd.conf file using sudo chown root:root /etc/sssd/sssd.conf and sudo chmod 600 /etc/sssd/sssd.conf.
  • Restart the SSSD service using sudo service sssd restart

To verify that SSSD is running and connecting to the LDAP server you can run the following command with any of the users in your G Suite account:

  • Type getent passwd username@powerup.university on the instance created in Google Cloud; the output should look something like this:
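The output follows the standard passwd format (name:x:uid:gid:gecos:home:shell); with illustrative values it would look roughly like:

username@powerup.university:*:1234567890:1234567890:User Name:/home/username@powerup.university:/bin/bash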

Instance Access Scenario:

Now, when you try to SSH via the “Open in browser window” option, you will receive the following error; without a G-Suite user, we are not able to log in to the instance.

Granular-level access for the G-Suite user: when you need to restrict user access to only the instance, set the custom metadata enable-oslogin=TRUE.

The following roles must be assigned to the G-Suite user to access the instance using a third-party tool (e.g., PuTTY):

  • Compute OS Admin Login
  • Compute OS Login
  • Service Account User

Now open a third-party tool and use the G-Suite username and password to log in to the machine.

Inference:

When all identities and apps are managed in a single window, complexity is reduced and security is enhanced, which also leads to increased adoption of cloud technology across your business.

Update:

In the forthcoming days, we shall have G-Suite users access Windows instances using their G-Suite credentials.

Creating a VM Snapshot in Google Cloud using Python


Written by Nirmal Prabhu, Cloud Engineer, Powerupcloud Technologies.

Information is eternal, computers are ephemeral, backup is the saviour.

Keeping it to the point: below is a script that automates VM disk snapshots in Google Cloud using Python. It works with the help of labels.

The script takes a snapshot of all the disks of every VM whose label matches the given condition.

Label the virtual machines whose disks are to be backed up. Here we used {'env': 'prod'}, where 'env' is the key and 'prod' is the value.

import googleapiclient.discovery
from datetime import datetime

## To get the current date with format.
currday = datetime.now().strftime('%d-%m-%Y')

compute = googleapiclient.discovery.build('compute', 'v1')

def list_instances(compute, project, zone):
    result = compute.instances().list(project=project, zone=zone).execute()
    desired_vms = []
    vmdisks = []
    for each_item in result.get('items', []):
        ## Mention the VM label whose disks should be snapshotted.
        if each_item.get('labels', {}).get('env') == 'prod':
            desired_vms.append(str(each_item['name']))
            for disk in each_item['disks']:
                vmdisks.append(str(disk['deviceName']))
    data = {'vmname': desired_vms, 'disk': vmdisks}
    for disk in vmdisks:
        ## Name format for the new snapshot.
        snapshot_body = {'name': 'automated-snap-' + disk + '-' + currday}
        print('Creating snap for %s' % disk)
        request = compute.disks().createSnapshot(project=project, zone=zone,
                                                 disk=disk, body=snapshot_body)
        response = request.execute()
    return data

print(list_instances(compute, 'xxx', 'asia-south1-c'))  ## Mention the project and zone

That’s it… We are done. Happy Automating…Let us know what you think!

Evaluating a hybrid cloud approach – How does one set up for success?


Written by Agnel Bankien, Head of Marketing at Powerupcloud Technologies.

The blended benefits of the private cloud, including control, visibility, greater leverage and perceived greater security, combined with the flexibility and agility of the public cloud, make the hybrid cloud an obvious consideration. However, in practice, pairing traditional systems with public cloud-based systems increases complexity. This is at cross-purposes, as one of the core management reasons for hybrid cloud adoption is to hide complexity behind an abstraction layer. In addition, some hybrid models adopted by organizations are done so without fully integrating cloud and on-prem.

The true benefits of a hybrid model call for refreshed approaches to IT management and, more often than not, necessitate cultural realignment. While there is inherent complexity, which will only increase, there are multiple benefits, and being mindful of some aspects will ensure the best leverage of a hybrid model.

Before we get to that, a look at the industry and where it is headed:

Reports indicate the global hybrid cloud market was valued at around USD 40 billion in 2017 and is expected to reach USD 138 billion by 2023, at a CAGR upwards of 22% during the forecast period (2018–2023). This market is split by:

Solution:

  • Cloud Management and Orchestration
  • Disaster Recovery
  • Security and Compliance
  • Hybrid Hosting

Service Model:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)

IaaS is the fastest-growing segment of the wider cloud market with Gartner predicting the top 10 providers including AWS, Azure and Google Cloud will account for around 70% of the IaaS market by 2021.

Forrester’s predictions in 2019 are:

Enterprise cloud spending will soar as core business app modernization takes off: some core business applications will be replaced with new cloud SaaS apps, and others will be refactored using new cloud-native technologies like containers, Kubernetes, and functions.

Private cloud adoption and spending will grow faster: cloud in 2019 is about creating a flexible, agile, automated environment. Success will be measured by developer satisfaction and time-to-market for new products and services, not by cost reduction.

True SaaS-based connected cloud ecosystems will emerge to turbocharge innovation: a resurgence of “industry clouds” with extensible SaaS applications that are becoming development platforms.

According to a CIO survey, cloud users are aware that they are wasting money in the cloud — they estimate 30% waste. To combat this issue, 58% of cloud users rate cloud optimization efforts as their top initiative for the coming year.

We at Powerup, with expertise across the leading public cloud platforms, including top-tier partnerships with AWS, Azure and Google Cloud, offer best-in-class platform-agnostic solutions. Our deep technical expertise in data center technologies including VMware, Hyper-V and OpenStack gives us a robust understanding of hybrid ecosystems. Additionally, our implementations in container orchestration like Kubernetes and Docker Swarm have resulted in up to ~80% cost efficiency for large enterprise customers. The Powerup ACE (Advanced Consulting for Enterprise) team, led by solution architects using enterprise-class frameworks, can help assess > plan > deploy > manage any IT infra solution. Amplified with platform-agnostic expertise in container orchestration, we can help an organization adopt an optimized hybrid cloud infrastructure.

While well known, a re-articulation of the benefits of a hybrid approach:

A hybrid cloud combines the best elements of private and public clouds (and co-location), as no single platform is capable of supporting the wide variety of operations, development and business unit requirements. These are some of the benefits of this blended approach:

> Network Optimization

> Flexibility & Scalability

> Reduced Latency

> Organizational Agility

> Resource Optimization

Given the relative nascency of the hybrid cloud, mastery in its management is still some time away, and with complexity increasing over time, approaches will have to evolve continuously. There are, however, some aspects that should be focused on to set up the path to success.

These are some aspects to focus on for an optimal hybrid strategy:

> Workload management

The following needs to be defined:

  • Who owns the workload and escalation?
  • Business criticality
  • When do the workloads run?
  • Where do the workloads run? On the public cloud, private cloud, or in both places?
  • Why were the decisions made about where to run each workload, and when might they need to be re-evaluated?

> Security and governance

A recent survey of CIOs reveals that 77% of respondents see security as a challenge, while 29% see it as a significant challenge. Core to the success of hybrid cloud management is how the following are defined and handled:

  • Security and performance
  • Policy management

> Management layer — A unified interface

Rather than managing diverse entities through their native interfaces, use a unified management layer that abstracts you away from the complexity.

> Defining SLAs

Go beyond the baseline of good performance: an SLA should meet specific expectations and be defined in the management layer as an extension of business requirements.

> Leveraging tools

Mapping tools to the requirement for API management, Resource management, Cloud management platforms, Performance management, DevOps management, Security management, Network management, etc.

At Powerup we partner with organizations on this journey, which is orchestrated using our PCLP (Powerup Cloud LaunchPad) framework, an enterprise-class methodology that gives customers an optimal start to their cloud journey. PCLP is designed to help customers visualize the end state of a cloud migration activity and arrive at a baseline cloud framework with key components configured before any migration begins. Successful adoptions are premised on clear baseline definitions across seven key parameters.

Happy to hear your thoughts on the subject or your experiences; if you would like to have a conversation with us, do drop us a line at marketing@powerupcloud.com.

Time triggered recommendations


Customer: A large online home festival by a high-end property portal that caters to a global market

Problem Statement

The customer was looking to build a highly accurate product recommendation engine to suggest properties to visiting users. They also wanted these recommendations to be time-triggered. The challenge: almost 95% of users were guest users, so the prediction engine had to be built using very little user-specific data.

Proposed Solution

The Powerup team built two engines:

  • A product recommendation engine that tracks user click-stream data and recommends the right properties to users.
  • A prediction engine that predicts the time a user will spend on a web page.

Cloud Platform

Google Cloud.

Technologies used

Apache Spark, Socket.io, BigQuery, Riak, Fluentd, Prediction.io.