
10 Cloud Predictions for 2021


Compiled by Kiran Kumar, Business Analyst at Powerup Cloud Technologies

1. Rise in cloud Telephony

The cloud telephony market is projected to grow 8.9% in 2020 and 17.8% in 2021.

“As a result of workers employing remote work practices in response to COVID-19 office closures, there will be some long-term shifts in conferencing solution usage patterns. Policies established to enable remote work and experience gained with conferencing service usage during the outbreak is anticipated to have a lasting impact on collaboration adoption,” Gartner

COVID19 initiatives

2. Increased adoption of Virtual desktops

Forrester predicts the number of remote workers at the end of 2021 will be three times pre-pandemic levels. Given this increased demand for remote work, we expect to see more organizations turning to Desktop-as-a-Service (DaaS) options in 2021 to allow secure access to data off corporate networks from any device. DaaS technology will allow organizations to better meet the demands of remote work by quickly provisioning secure virtual desktops for employees and contractors alike, desktops that can simply be deleted if compromised.

Research shows that Microsoft is not the only provider looking to the desktop as a means of connecting to the cloud; all of the key cloud vendors are interested in the virtual desktop market. With the popular Windows 7 having reached end of life in January 2020, the open question is whether people will simply move to Windows 10 and cement Microsoft’s hold, or whether they will adopt fast-rising alternatives such as AWS WorkSpaces and Google Chromebooks.

Remote workforce enablement

3. Multi-cloud management

By 2021, 50% of Indian enterprises will operate in a hybrid multi-cloud environment, and 30% of Indian enterprises will deploy unified VMs, Kubernetes, and multi-cloud management processes and tools to support robust multi-cloud management across on-premises and public clouds. – IDC

CloudEnsure

4. Focus on the “XOps”

AI will play a huge role in augmenting DevOps in 2021. It will also play an essential role in monitoring conventional activities, optimizing test cases, and reducing the time an application spends in the development phase.

The Markets and Markets report on the DevSecOps Global Forecast to 2023 suggests that the DevSecOps market size is expected to grow to USD 5.9 billion by 2023.

2020 is expected to see developers leaning towards compliance-as-code services, with security being the major focus. As mentioned earlier, security is introduced early in the SDLC using the shift-left strategy. This ensures that threats are identified at the beginning and ultimately reduces the cost of fixing security issues. DevSecOps ensures better collaboration between team members by jointly focusing attention on observability and security issues.

65% of organizations are expected to adopt DevOps as a mainstream strategy by the end of this year.

DevOps automation

5. Pervasiveness of AI

By 2022, 65 percent of CIOs will digitally empower and enable front-line workers with data, AI, and security to extend their productivity, adaptability, and decision-making in the face of rapid changes.

By 2023, driven by the goal to embed intelligence in products and services, one quarter of G2000 companies will acquire at least one AI software start-up to ensure ownership of differentiated skills and IP. Successful organizations will eventually sell internally developed industry-specific software and data services as a subscription, leveraging deep domain knowledge to open profitable new revenue streams.

AI in data centers

AI in data centers will see a sharp rise in the coming years. IDC forecasts that by 2021, AI spending will grow to US$52.2 billion, a CAGR of 46.2 percent over 2016–2021.

The use of AI in data centers will serve multiple purposes, such as automating manual tasks and easing skill shortages. AI can also help enterprises learn from their historical data and draw productive conclusions.

Chatbot

Image recognition

6. Serverless computing

25% of developers will leverage serverless by 2021. Gartner has also noted the rise of serverless computing, forecasting adoption by approximately 20 percent of global enterprises.

A 2020 DataDog survey indicated that over 50% of AWS users are now using the serverless AWS Lambda Function as a Service (FaaS). Serverless technologies are going mainstream.

7. Focus on hybrid cloud

AWS and Google will increase their focus on hybrid cloud. Security will remain the key driver for hybrid cloud.

AWS Outposts

Enterprise Migration

8. Mainstreaming of Containers and Kubernetes

Prior to the pandemic, about 20% of developers regularly used container and serverless functions to build new apps and modernize old ones. We predict nearly 30% will use containers regularly by the end of 2021, creating a spike in global demand for both multi-cloud container development platforms and public-cloud container/serverless services.

IDC predicts that, along with Kubernetes, 95 percent of new microservices will be deployed in containers by 2021.

Forrester predicts that lightweight Kubernetes deployments will end up accounting for 20% of edge orchestration in 2021.

9. Moving DR to cloud

COVID-19 shined a bright light on every company unprepared to recover from a data center outage and refocused enterprise IT teams on improving resiliency. Before the pandemic, few companies protected data and workloads in the public cloud. In 2021, we predict that an additional 20% of enterprises will shift DR operations to the public cloud — and won’t look back.

10. Manage technical debt

Through 2023, coping with technical debt accumulated during the pandemic will shadow 70% of CIOs, causing financial stress, inertial drag on IT agility, and “forced march” migrations to the cloud.

Simplify Cloud Transformation with Tools


Compiled by Kiran Kumar, Business Analyst at Powerup Cloud Technologies

Introduction

Cloud can bring a lot of benefits to organizations, including operational resilience, business agility, cost-efficiency, scalability, and staff productivity. However, moving to the cloud can be a daunting task, with many loose ends to worry about such as downtime, security, and cost. Even some of the best cloud teams can be left overwhelmed by the scale of the project and the decisions that need to be taken.

But the cloud marketplace has matured greatly, and there are many tools and solutions that can automate or assist you in your expedition and significantly reduce the complexity of the project. Knowing the value cloud tools bring to an organization, I have listed tools that can assist you in each phase of your cloud journey. That said, this blog only serves as a representation of the type of products and services that are available for easy cloud transformation.

Cloud Assessment

RapidAdopt

LTI RapidAdopt helps you fast-track adoption of various cloud models. Based on the overall scores, an accurate Cloud strategy is formulated. The Cloud Assessment Framework enables organizations to assess their cloud readiness roadmap.

SurPaaS® Assess™

SurPaaS® Assess™ is a complete Cloud migration feasibility assessment platform that generates multiple reports after analyzing your application, to help you understand the factors involved in migrating it to the Cloud. These reports help you decide on your ideal Cloud migration plan and accelerate your Cloud journey.

Cloudamize

Make data-driven cloud decisions with confidence: high-precision analytics and powerful automation make planning simple and efficient, accelerating your migration to the cloud.

Cloud Recon

Inventory applications and workloads to develop a cloud strategy with detailed ROI and TCO benefits.

NetApp Cloud Assessment Tool

The Cloud Assessment tool will monitor your cloud storage resources, optimize cloud efficiency and data protection, identify cost-saving opportunities, and reduce overall storage spend so you can manage your cloud with confidence.

Risc Networks

RISC Networks’ breakthrough SaaS-based analytics platform helps companies chart the most efficient and effective path from on-premise to the cloud.

Migvisor

With migVisor, you’ll know exactly how difficult (or easy) your database migration will be. migVisor analyzes your source database configuration, attributes, schema objects, and proprietary features.

Migration

SurPaaS® Migrate™

With its advanced Cloud migration methodologies, SurPaaS® Migrate™ enables you to migrate any application to the Cloud without difficulty. It provides various intelligent options for migrating applications/VMs to the Cloud, and its robust migration methodologies allow you to migrate multiple servers in parallel, with clear actionable reporting in case of any migration issues.

SurPaaS® Smart Shift™

Smart Shift™ migrates an application to Cloud with a re-architected deployment topology based on different business needs such as scalability, performance, security, redundancy, high availability, backup, etc.

SurPaaS® PaaSify™

SurPaaS® PaaSify™ is the only Cloud migration platform that lets you migrate any application and its databases to the required PaaS services on Cloud. Choose different PaaS services for the different workloads in your application and migrate to Cloud with a single click.

CloudEndure

CloudEndure simplifies, expedites, and automates migrations from physical, virtual, and cloud-based infrastructure to AWS.

SurPaaS® Containerize™

SurPaaS® Containerize™ allows you to identify application workloads that are compatible with containerization using its comprehensive knowledge-base system. Choose the workloads that need to be containerized and select one of the topologies from SurPaaS® multiple container deployment architecture suggestions.

Carbonite Migrate

Structured, repeatable data migration from any source to any target with near zero data loss and no disruption in service.

AWS Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.

Azure Migrate

A central hub of Azure cloud migration services and tools to discover, assess and migrate workloads to the cloud

Cloud Sync

An easy to use cloud replication and synchronization service for transferring NAS data between on-premises and cloud object stores.

MigrationWiz

Migrate multiple cloud workloads with a single solution. MigrationWiz—the industry-leading SaaS solution—enables you to migrate email and data from a wide range of Sources and Destinations.

Cloud Pilot

Analyze applications at the code level to determine Cloud readiness and conduct migrations for Cloud-ready applications.

Paasify

PaaSify is an advanced solution, which runs through the application code, and evaluates the migration of apps to cloud. The solution analyzes the code across 70+ parameters, including session objects, third-party dependencies, authentication, database connections, and hard-coded links.

Application Development

DevOps on AWS

  • Amazon Elastic Container Service – Production Docker Platform
  • AWS Lambda – Serverless Computing
  • AWS CloudFormation – Templated Infrastructure Provisioning
  • AWS OpsWorks – Chef Configuration Management
  • AWS Systems Manager – Configuration Management
  • AWS Config – Policy as Code
  • Amazon CloudWatch – Cloud and Network Monitoring
  • AWS X-Ray – Distributed Tracing
  • AWS CloudTrail – Activity & API Usage Tracking
  • AWS Elastic Beanstalk – Run and Manage Web Apps
  • AWS CodeCommit – Private Git Hosting

Azure DevOps Services

Azure Boards

Deliver value to your users faster using proven agile tools to plan, track, and discuss work across your teams.

Azure Pipelines

Build, test, and deploy with CI/CD which works with any language, platform, and cloud. Connect to GitHub or any other Git provider and deploy continuously.

Azure Repos

Get unlimited, cloud-hosted private Git repos and collaborate to build better code with pull requests and advanced file management.

Azure Test Plans

Test and ship with confidence using manual and exploratory testing tools.

Azure Artifacts

Create, host, and share packages with your team and add artifacts to your CI/CD pipelines with a single click.

Kubernetes

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Infrastructure Monitoring & Optimization

SurPaaS® Optimo™

Realizing Continuous Cloud Landscape Optimization, with AI-driven advisories and Integrated Cloud management actions to reduce your Cloud costs.

Carbonite Availability

Carbonite Availability’s continuous replication technology maintains an up-to-date copy of your operating environment without taxing the primary system or network bandwidth.

Splunk

Embrace AIOps for the technical agility, speed and capacity needed to manage today’s complex environments.

Cloud Insights

With Cloud Insights, you can monitor, troubleshoot and optimize all your resources including your public clouds and your private data centers.

TrueSight Operations Management

Machine learning and advanced analytics for holistic monitoring and event management

BMC Helix Optimize

SaaS solution that deploys analytics to continuously optimize resource capacity and cost

Azure Monitor 

Azure Monitor collects monitoring telemetry from a variety of on-premises and Azure sources. Management tools, such as those in Azure Security Center and Azure Automation, also push log data to Azure Monitor.

Amazon CloudWatch

CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.

TrueSight Orchestration

Coordinate workflows across applications, platforms, and tools to automate critical IT processes

BMC Helix Remediate

Automated security vulnerability management and simplified patching for hybrid clouds

BMC Helix Discovery

Automatic discovery of data center and multi-cloud inventory, configuration, and relationship data

Cloud Manager  

Cloud Manager provides IT experts and cloud architects with a centralized control plane to manage, monitor and automate data in hybrid-cloud environments, providing an integrated experience of NetApp’s Cloud Data Services.

Application Management

SurPaaS® Operato™

Visualize Application Landscape on the Cloud and effectively manage with ease using multiple options to enhance your applications.

SurPaaS® Moderno™

SurPaaS® can quickly assess your applications and offers a path to move your workloads within hours to DBaaS, Serverless App Services, Containers, and Kubernetes Services.

SurPaaS® SaaSify™

Faster and Smarter Way to SaaS. SaaSify your Applications and Manage their SaaS Operations Efficiently.

Dynatrace

Best-in-class APM from the category leader. Advanced observability across cloud and hybrid environments, from microservices to mainframe. Automatic full-stack instrumentation, dependency mapping and AI-assisted answers detailing the precise root-cause of anomalies, eliminating redundant manual work, and letting you focus on what matters. 

New Relic APM

APM agents give real-time observability matched with trending data about your application’s performance and the user experience. Agents reveal what is happening deep in your code with end to end transaction tracing and a variety of color-coded charts and reports.

DataDog APM

Datadog APM provides end-to-end distributed tracing from frontend devices to databases, with no sampling. Distributed traces correlate seamlessly with metrics, logs, browser sessions, code profiles, synthetics, and network performance data, so you can understand service dependencies, reduce latency, and eliminate errors.

SolarWinds Server & Application Monitor 

  • End-to-end monitoring
  • Server capacity planning
  • Custom app monitoring
  • Application dependency mapping

AppDynamics

Actively monitor, analyze and optimize complex application environments at scale.

DB DR

Carbonite Recover

Carbonite® Recover reduces the risk of unplanned downtime by securely replicating critical systems to the cloud, providing an up-to-date copy for immediate failover

Carbonite Server

All-in-one backup and recovery solution for physical, virtual and legacy systems with optional cloud failover.

Carbonite Availability

Continuous replication for physical, virtual and cloud workloads with push-button failover for keeping critical systems online all the time.

Cloud Backup Service

Cloud Backup Service delivers seamless and cost-effective backup and restore capabilities for protecting and archiving data.

CloudEndure

Scalable, cost-effective business continuity for physical, virtual, and cloud servers

Zerto

Reduce cost and complexity of application migrations and data protection with Zerto’s unique platform utilizing Continuous Data Protection. Orchestration built into the platform enables full automation of recovery and migration processes. Analytics provides 24/7 infrastructure visibility and control, even across clouds.

Azure Site Recovery

Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location.

Cloud Governance and security

CloudHealth Multicloud Platform

Transform the way you operate in the public cloud

CloudHealth Partner Platform

Deliver managed services to your customers at scale

CloudHealth Secure State

Mitigate security and compliance risk with real-time security insights

Cloud Compliance

Automated controls for data privacy regulations such as the GDPR, CCPA, and more, driven by powerful artificial intelligence algorithms.

Azure Governance Tools

Get transparency into what you are spending on cloud resources. Set policies across resources and monitor compliance, enabling quick, repeatable creation of governed environments.

Splunk

Splunk Security Operations Suite combines industry-leading data, analytics and operations solutions to modernize and optimize your cyber defenses.

CloudEnsure

An autonomous cloud governance platform built to manage multi-cloud environments. It performs real-time Well-Architected audits across all your clouds, giving you a comprehensive view of best-practice adherence in your cloud environment, with additional emphasis on security, reliability and cost. The enterprise version of CloudEnsure, a hosted version of the original SaaS platform, is best suited for organizations wanting in-house governance and monitoring of their cloud portfolio.

Azure Cache for Redis: Connecting to SSL enabled Redis from Redis-CLI in Windows & Linux


Written by Tejaswee Das, Sr. Software Engineer, Powerupcloud Technologies

Collaborator: Layana Shrivastava, Software Engineer

Introduction

This blog will guide you through the steps to connect to an SSL-enabled remote Azure Redis Cache from redis-cli. We will demonstrate how to achieve this connectivity on both Windows & Linux systems.

Use Case

Connecting to a non-SSL redis might be straightforward and works great for Dev & Test environments, but for higher environments – Stage & Prod – security should always be the priority. For that reason, it is advisable to use SSL-enabled redis instances. The default non-SSL port is 6379 & the SSL port is 6380.

Windows

Step 1:  Connecting to non-SSL redis is easy

PS C:\Program Files\Redis> .\redis-cli.exe -h demo-redis-ssl.redis.cache.windows.net -p 6379 -a xxxxxxxx

Step 2: To connect to SSL redis, we will need to create a secure tunnel. Microsoft has recommended using Stunnel to achieve this. You can download the applicable package from the link below:

https://www.stunnel.org/downloads.html

We are using stunnel-5.57-win64-installer.exe here

2.1 Agree License and start installation

2.2 Specify User

2.3 Choose components

2.4 Choose Install Location

2.5 This step is optional. You can fill in details or just press Enter to continue.

Choose FQDN as localhost

2.6 Complete setup and start stunnel

2.7 On the bottom taskbar, right corner, click on the Stunnel icon (green dot) → Edit Configuration

2.8 Add this block in the config file. You can add it at the end.

[redis-cli]
client = yes
accept = 127.0.0.1:6380
connect = demo-redis-ssl.redis.cache.windows.net:6380

2.9 Open Stunnel again from the taskbar → Right click → Reload Configuration to apply the changes. Double-click the icon to confirm the configuration has been reloaded.

Step 3: Go back to your redis-cli.exe location in Powershell and try connecting now

PS C:\Program Files\Redis> .\redis-cli.exe -p 6380 -a xxxxxxxx

Linux

Step 1: Installing & configuring Stunnel on Linux is pretty easy. Follow the steps below; you are advised to run these commands with admin privileges.

1.1 Update & upgrade existing packages to the latest version.

  • apt update
  • apt upgrade -y

1.2 Install redis server. You can skip this if you already have redis-cli installed in your system/VM

  • apt install redis-server
  • To check redis status: service redis status
  • If the service is not in the active (running) state: service redis restart

1.3 Install Stunnel for SSL redis

  • apt install stunnel4
  • Open the file /etc/default/stunnel4 and set ENABLED=1 (change the value from 0 to 1 to auto-start the service)
  • Create a redis conf for stunnel. Open /etc/stunnel/redis.conf with your favorite editor and add this code block:

[redis-cli]
client = yes
accept = 127.0.0.1:6380
connect = demo-redis-ssl.redis.cache.windows.net:6380

  • Restart the stunnel service: systemctl restart stunnel4.service
  • Reload the configuration after any further changes: systemctl reload stunnel4.service
  • Check that the service is running: systemctl status stunnel4.service

1.4 Check whether Stunnel is listening for connections

  • netstat -tlpn | grep 6380

1.5 Try connecting to redis now

>redis-cli -p 6380 -a xxxxxxxx
>PING
PONG

Success! You are now connected.
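If you are connecting from application code rather than redis-cli, most Redis client libraries can negotiate TLS themselves, so the stunnel hop is not required. Below is a minimal sketch using the Python redis-py package; the hostname and password are placeholders for your own cache.

# pip install redis
import redis

# Connect straight to the SSL port (6380) of Azure Cache for Redis.
# Hostname and password are placeholders – substitute your own values.
r = redis.Redis(
    host="demo-redis-ssl.redis.cache.windows.net",
    port=6380,
    password="xxxxxxxx",
    ssl=True,  # negotiate TLS directly instead of tunnelling through stunnel
)

print(r.ping())  # prints True when the connection and AUTH succeed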

Conclusion

So finally, we are able to connect to SSL-enabled redis from redis-cli.

This makes our infrastructure more secure.

Hope this was informative. Do leave your comments for any questions.

References

https://techcommunity.microsoft.com/t5/azure-paas-blog/connect-to-azure-cache-for-redis-using-ssl-port-6380-from-linux/ba-p/1186109

AWS Lambda Java: Sending S3 event notification email using SES – Part 2


Written by Tejaswee Das, Software Engineer, Powerupcloud Technologies

Collaborator: Neenu Jose, Senior Software Engineer

Introduction

In the first part of this series, we discussed in depth how to create a Lambda deployment package for Java 8/11 using Maven in Eclipse, along with S3 event triggers. Know more here.

In this post, we will showcase how we can send emails using AWS Simple Email Service (SES) with S3 Event triggers in Java.

Use Case

One of our clients had their workloads running on Azure Cloud, including a few serverless applications written in Java 8 on Azure Functions. They wanted to upgrade from Java 8 to Java 11. Since Java 11 was not supported at the time (Java 11 for Azure Functions has recently been released in preview), they wanted to try out other cloud services – that’s when AWS Lambda came into the picture. We did a POC feasibility check for Java 11 applications running on AWS Lambda.

Step 1:

Make sure you are following Part 1 of this series. This is a continuation of the first part, so it will be difficult to follow Part 2 separately.

Step 2:

Add SES Email Addresses

Restrictions are applied to all SES accounts to prevent fraud and abuse, and by default a new account is placed in the SES sandbox. For this reason, you will have to add and verify both the sender & receiver email addresses in SES for all test emails that you intend to use.

2.1 To add email addresses, go to AWS Console → Services → Customer Engagement → Simple Email Service (SES)

2.2  SES Home → Email Addresses → Verify a New Email Address

2.3 Add Addresses to be verified

2.4 A verification email is sent to the added email address

2.5 Until the email address is verified, it cannot be used to send or receive emails. Status shown in SES is pending verification (resend)

2.6 Go to your email client inbox and click on the URL to authorize your email address

2.7 On successful verification, we can check the new status in SES Home, status verified.
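If you prefer scripting this verification step instead of clicking through the console, the same request can be made with boto3. The sketch below is only illustrative; the email address is a placeholder and the region should match the one you use for SES.

# pip install boto3
import boto3

# Use the same region you will send email from (us-east-1 in this post).
ses = boto3.client("ses", region_name="us-east-1")

# Triggers the same verification email as the console flow (placeholder address).
ses.verify_email_identity(EmailAddress="someone@example.com")

# Check the verification status afterwards.
resp = ses.get_identity_verification_attributes(Identities=["someone@example.com"])
print(resp["VerificationAttributes"])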

Step 3:

In the pom.xml, add the below Maven dependencies. To use SES, we will require aws-java-sdk-ses.

Below is our pom.xml file for reference.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.amazonaws.lambda</groupId>
  <artifactId>demo</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.6.0</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
          <encoding>UTF-8</encoding>
          <forceJavacCompilerUse>true</forceJavacCompilerUse>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.0.0</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-bom</artifactId>
        <version>1.11.256</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>3.3.3</version>
      <scope>test</scope>
    </dependency>

    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-events</artifactId>
      <version>1.3.0</version>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-core</artifactId>
      <version>1.1.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-ses -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-ses</artifactId>
      <version>1.11.256</version><!--$NO-MVN-MAN-VER$-->
      <scope>compile</scope>
    </dependency>
  </dependencies>
</project>

Step 4:

Edit your LambdaFunctionHandler.java file with the latest code

4.1 Add email components as string

final String FROM = "neenu.j@powerupcloud.com";
final String TO = "neenu.j@powerupcloud.com";
final String SUBJECT = "Upload Successful";
final String HTMLBODY = key+" has been successfully uploaded to "+bucket;
final String TEXTBODY = "This email was sent through Amazon SES using the AWS SDK for Java.";

4.2 Create SES client

AmazonSimpleEmailService client = AmazonSimpleEmailServiceClientBuilder.standard()
                // Replace US_WEST_2 with the AWS Region you're using for
                // Amazon SES.
                  .withRegion(Regions.US_EAST_1).build();

4.3 Send email using SendEmailRequest

SendEmailRequest request = new SendEmailRequest()
                .withDestination(
                    new Destination().withToAddresses(TO))
                .withMessage(new Message()
                    .withBody(new Body()
                        .withHtml(new Content()
                            .withCharset("UTF-8").withData(HTMLBODY))
                        .withText(new Content()
                            .withCharset("UTF-8").withData(TEXTBODY)))
                    .withSubject(new Content()
                        .withCharset("UTF-8").withData(SUBJECT)))
                .withSource(FROM);
            client.sendEmail(request);

You can refer to the complete code below.

package com.amazonaws.lambda.demo;

import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object; 
import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder;
import com.amazonaws.services.simpleemail.model.Body;
import com.amazonaws.services.simpleemail.model.Content;
import com.amazonaws.services.simpleemail.model.Destination;
import com.amazonaws.services.simpleemail.model.Message;
import com.amazonaws.services.simpleemail.model.SendEmailRequest;


public class LambdaFunctionHandler implements RequestHandler<S3Event, String> {

    private AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    		.withRegion(Regions.US_EAST_1)
    		.build();

    public LambdaFunctionHandler() {}

    // Test purpose only.
    LambdaFunctionHandler(AmazonS3 s3) {
        this.s3 = s3;
    }

    @Override
    public String handleRequest(S3Event event, Context context) {
        context.getLogger().log("Received event: " + event);

        // Get the object from the event and show its content type
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();
        final String FROM = "neenu.j@powerupcloud.com";
        final String TO = "neenu.j@powerupcloud.com";
        final String SUBJECT = "Upload Successful";
        final String HTMLBODY = key+" has been successfully uploaded to "+bucket;
        	            	     

        final String TEXTBODY = "This email was sent through Amazon SES "
        	      + "using the AWS SDK for Java.";
        try {
            AmazonSimpleEmailService client = 
                AmazonSimpleEmailServiceClientBuilder.standard()
                // Replace US_WEST_2 with the AWS Region you're using for
                // Amazon SES.
                  .withRegion(Regions.US_EAST_1).build();
            SendEmailRequest request = new SendEmailRequest()
                .withDestination(
                    new Destination().withToAddresses(TO))
                .withMessage(new Message()
                    .withBody(new Body()
                        .withHtml(new Content()
                            .withCharset("UTF-8").withData(HTMLBODY))
                        .withText(new Content()
                            .withCharset("UTF-8").withData(TEXTBODY)))
                    .withSubject(new Content()
                        .withCharset("UTF-8").withData(SUBJECT)))
                .withSource(FROM);
                // Comment or remove the next line if you are not using a
                // configuration set
               // .withConfigurationSetName(CONFIGSET);
            client.sendEmail(request);
            System.out.println("Email sent!");
          } catch (Exception ex) {
            System.out.println("The email was not sent. Error message: " 
                + ex.getMessage());
          }
       
       context.getLogger().log("Filename: " + key);
        try {
            S3Object response = s3.getObject(new GetObjectRequest(bucket, key));
            String contentType = response.getObjectMetadata().getContentType();
            context.getLogger().log("CONTENT TYPE: " + contentType);
            return contentType;
        } catch (Exception e) {
            e.printStackTrace();
            context.getLogger().log(String.format(
                "Error getting object %s from bucket %s. Make sure they exist and"
                + " your bucket is in the same region as this function.", key, bucket));
            throw e;
        }
    }
}

Step 5:

Build the updated project and upload it to Lambda. Refer to Step 5 (https://www.powerupcloud.com/aws-lambda-java-creating-deployment-package-for-java-8-11-using-maven-in-eclipse-part-1/)

Step 6:

To test this deployment, upload yet another new file to your bucket. Refer to Step 9 of blog Part 1.

On successful upload, SES sends an email with the details. Sample screenshot below.
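If you would rather script this test than upload through the console, a quick boto3 sketch like the one below also works; the bucket name and file are placeholders for the bucket you configured with the event notification in Part 1.

# pip install boto3
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key – use the bucket wired to the Lambda trigger in Part 1.
# The upload fires the S3 event, which invokes the Lambda, which sends the SES email.
s3.upload_file("sample.txt", "my-demo-upload-bucket", "sample.txt")
print("Uploaded; check the recipient inbox for the SES notification.")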

Conclusion

S3 event notifications can be used for a variety of use-case scenarios. We have tried to showcase just one simple case. This approach can be used to monitor incoming files & objects in an S3 bucket and trigger appropriate actions & transformations.

Hope this was informative. Do leave your comments for any questions.

References

https://docs.aws.amazon.com/ses/latest/DeveloperGuide/request-production-access.html

Using Reserved Instances saved 30% for a leading healthcare research, data, and technologies company


Customer: A leading healthcare research, data & technologies company

Summary

A leading healthcare research, data and technologies company was seeking recommendations on cloud optimization & best practices. Powerup conducted a detailed study & analysis to provide the customer team with suggestions on cost optimization, security audit and AWS best practices.

About Customer

The customer is a leading healthcare research and consulting company that provides high-value healthcare industry analysis and insights. They create patient-centric commercialization strategies that drive better outcomes and access, improving the lives of patients globally.

The customer helps businesses achieve commercial excellence through evidence-based decision-making processes like expert consultation, proprietary data, and analysis via machine learning and artificial intelligence.

Problem Statement

The customer utilizes nearly all the tools that AWS offers to build, upgrade & maintain their infrastructure as per ongoing requirements. They are looking at cost optimization for all of their 17 AWS accounts. They plan to initiate cost-saving strategies for their AWS master account by:

  • Identifying the number of servers running idle and helping create reserved instances.
  • Deploying upgraded servers based on recommendations.
  • Implementing resource tagging for business unit-wise billing.
  • Installing the CloudHealth agent to maintain multiple accounts.
  • Automating lifecycle policies for backup maintenance.

Tagging was required on a total of 490+ EC2 instances, 70+ RDS instances and S3 resources, based on P&L, projects, stage, application owner, roles, support contact, function and cost-saving heads, to name a few.

The team was ill-equipped with the techniques of downsizing and was uncertain about how reports could be utilized to their maximum advantage in order to minimize costs.

Proposed Solution

  • Phase 1 – 100% CloudHealth agent installation coverage on AWS accounts

Applying AWS user data as a benchmark, Powerup created a CloudHealth agent inventory list and identified missing agents for the customer. They worked closely with the customer’s DevOps team to gain access to servers, to install CloudHealth agent on the remaining 300+ systems. Once done, agent check-in was verified to confirm 100% coverage. Installation was automated for new resources launch and a restriction was imposed on launching any instance without agent set up. Reserved Instance (RI) recommendation was obtained through the CloudHealth tools with the intent to reduce costs.

  • Phase 2 – Tagging and Governance

In the cloud environment, tags are identifiers that are affixed to instances. Powerup helped the customer incorporate 100% tagging based on appropriate business reviews. The objective was to strengthen inventory tag lists by classifying all instances under their respective heads. Instances were classified as per AWS best practices to initiate cost controls.

ParkMyCloud is a self-service SaaS platform implemented to help identify and terminate wasted cloud spend. It was scheduled periodically on the customer’s Dev/QC/Staging environments, and no machines were launched without proper tagging. It helped keep a check on auto-scaling groups to ensure tagging, as well as helped identify and implement governance rules as alert checks to stop resources from being launched without proper tagging, sizing or approvals. Categorization of assets based on name when tags were missing could be detected easily. Automating tagging and enabling a termination policy for instances helped in better cost management, along with providing the customer with accurate findings, recommendations and a strategic roadmap.

  • Phase 3 – Rightsizing and instance type consolidation

Powerup created a database instance inventory list to recognize and review outdated server versions. They also identified instances that required reconfiguration and upgrades. They imported instance right-sizing recommendations from data collected by CloudHealth tools, which provided suitable suggestions for new instance types and sizes. This ensured an appropriate process flow of right-sizing checks, added business intelligence around recommendations and smoothly transitioned all suggestions to the customer team. These recommendations helped them cut down on costs significantly.

  • Phase 4 – Security Audit

With the help of CloudHealth security audit report, the customer could study, analyze and prioritize summary findings by order of criticality and business requirements in a consolidated format. Recommended resolutions helped them validate security loopholes and facilitated suggestions on security fixes to the customer DevOps team. It also enabled them to generate backup services and POC reports which assisted them in checking how reports performed. This enabled them to update alert thresholds to meet business expectations and requirements.

Business Benefits

  • Using RI recommendations will help the customer cut down their monthly bills on EC2 and RDS by 30%.
  • The new EC2 instance-version recommendations can help them save a minimum of 8% of costs while maintaining high-quality performance.
  • The customer was able to regulate their billing and cost console using CloudHealth and the AWS billing dashboard.
  • Restricting resources provisioned without proper tags and the CloudHealth agent promotes easy maintenance of multiple accounts through a single console.

Cloud platform

AWS.

Technologies used

CloudHealth, ParkMyCloud, AWS Backup.

Bulk AWS EC2 pricing


Written by Mudit Jain, Principal Architect at Powerupcloud Technologies.

Your Amazon EC2 usage is calculated by either the hour or the second, based on the size of the instance, operating system, and the AWS Region where the instances are launched. Pricing is per instance hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. However, there is no bulk upload in the Simple Monthly Calculator or calculator.aws, which leaves you with the following options to execute the same.

  • There are paid discovery tools that also do bulk pricing. However, initial budgeting/TCO calculation is often done way ahead in the pipeline.
  • Manually add entries to the AWS calculator or the Simple Monthly Calculator. However, this is not feasible with a large number of VMs at enterprise level, and manual entries are error-prone.
  • You can work with the published Excel/CSV files, but that has its own limitations.

To overcome the above, we have arrived at an effective 4-step solution process:

  1. Source of truth download
  2. Source file cleanup
  3. One-time setup
  4. Bulk pricing:
    1. Source file preparation
    2. Pricing

1. Source of truth download

AWS publishes its EC2 service pricing here.

See the documentation for using the bulk API.
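If you want to script the download as well, the offer file can be fetched directly over HTTPS. The sketch below assumes the publicly documented bulk-API offer URL for AmazonEC2; the CSV is very large, so it is streamed to disk rather than loaded into memory.

# pip install requests
import requests

# Offer-file pattern for the EC2 price list bulk API (assumption: current documented URL).
OFFER_CSV = "https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.csv"

with requests.get(OFFER_CSV, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("index.csv", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MB chunks
            out.write(chunk)

print("Saved EC2 offer file to index.csv")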

2. Source file cleanup

Use any Python environment with pandas pre-installed.

#!/usr/bin/env python
# coding: utf-8

import pandas as pd

# Load the published EC2 offer file (the first rows are metadata, header is on row 6).
srcindex = pd.read_csv('/Users/mudit/Downloads/index2.csv', header=5, low_memory=True)

# Remove non-EC2 pricing rows and old-generation instance types.
vmsrcindex = srcindex[srcindex['Instance Type'].notna()]
curvmsrcindex = vmsrcindex[vmsrcindex['Current Generation'] == "Yes"]

# At the TCO stage, 95% of our customers need pricing against shared tenancy,
# and "Unused" reservation pricings are not as relevant.
tencurvmsrcindex = curvmsrcindex[curvmsrcindex['Tenancy'].isin(['Shared'])]
remove0 = tencurvmsrcindex[tencurvmsrcindex['PricePerUnit'] != 0]
remove_unsed = remove0[~remove0.PriceDescription.str.contains("Unused")]

# Keep only the columns needed for pricing lookups and lowercase everything
# for case-insensitive matching later.
filter1 = remove_unsed[['Instance Type', 'Operating System', 'TermType', 'PriceDescription',
                        'Pre Installed S/W', 'LeaseContractLength', 'PurchaseOption',
                        'OfferingClass', 'Location', 'Unit', 'PricePerUnit', 'vCPU', 'Memory']]
filter2 = filter1.apply(lambda x: x.astype(str).str.lower())

# Most customers don't use partial upfront.
filter3 = filter2[~filter2['PurchaseOption'].isin(['partial upfront'])]

# In the AWS published files even r4, c4 and m4 are marked as current generation,
# so drop them; at this stage most customers are also not looking for BYOL pricing.
filter5 = filter3[~filter3['Instance Type'].str.contains(r"^[rcm]4\.")]
filterered = filter5[~filter5.PriceDescription.str.contains("byol")]

filterered.to_csv('/Users/mudit/Downloads/filterered.csv')

3. One-time setup

#!/usr/bin/env python
# coding: utf-8

import pandas as pd

# Read the source (inventory) file and the filtered pricing file created in the previous step.
filtered = pd.read_csv('/Users/mudit/Downloads/filterered.csv', low_memory=True)
source2 = pd.read_csv('/Users/mudit/Desktop/source.csv', header=0, low_memory=True)

# Drop incomplete rows and lowercase everything for string matching.
source2 = source2.dropna(subset=['hostname', 'Instance Type', 'Operating System', 'Location', 'TermType'])
source2 = source2.apply(lambda x: x.astype(str).str.lower())
filtered = filtered.apply(lambda x: x.astype(str).str.lower())

# Drop columns used in the source file only for validation.
source2 = source2[source2.columns.drop(list(source2.filter(regex=('Unnamed'))))]
processedsrc = source2[source2.columns.drop(list(source2.filter(regex=('Validation'))))]

# Find pricing for on-demand entries.
processedsrc_ondemand = processedsrc.loc[lambda df: df['TermType'] == "ondemand"]
priced_processedsrc_ondemand = pd.merge(
    processedsrc_ondemand, filtered,
    on=['Instance Type', 'Operating System', 'TermType', 'Pre Installed S/W', 'Location'],
    how='left')

# Find pricing for reserved entries.
processedsrc_reserved = processedsrc.loc[lambda df: df['TermType'] == "reserved"]
priced_processedsrc_reserved = pd.merge(
    processedsrc_reserved, filtered,
    on=['Instance Type', 'TermType', 'Operating System', 'Pre Installed S/W', 'Location',
        'PurchaseOption', 'OfferingClass', 'LeaseContractLength'],
    how='left')

# Summarise the findings and export.
output = pd.merge(priced_processedsrc_reserved, priced_processedsrc_ondemand, how='outer')
output.to_csv('/Users/mudit/Desktop/output_pricing.csv')

4. Bulk Pricing

a. Source file preparation:

It should be a fixed input format:

Please find the attached sample input sheet. Column descriptions are below (a minimal sample row is also sketched in code after this list):

  1. 'Hostname' – any unique & mandatory string.
  2. Columns 2–6 – for reference only; non-unique, optional.
  3. 'Instance Type' – mandatory; must be a valid instance type.
  4. 'Operating System' – mandatory; must be one of: 'linux', 'rhel', 'suse'.
  5. 'Pre Installed S/W' – optional; must be one of: 'None', 'sql ent', 'sql std', 'sql web'.
  6. 'Location' – mandatory; must be one of: 'africa (cape town)', 'asia pacific (hong kong)', 'asia pacific (mumbai)', 'asia pacific (osaka-local)', 'asia pacific (seoul)', 'asia pacific (singapore)', 'asia pacific (sydney)', 'asia pacific (tokyo)', 'aws govcloud (us-east)', 'aws govcloud (us-west)', 'canada (central)', 'eu (frankfurt)', 'eu (ireland)', 'eu (london)', 'eu (milan)', 'eu (paris)', 'eu (stockholm)', 'middle east (bahrain)', 'south america (sao paulo)', 'us east (n. virginia)', 'us east (ohio)', 'us west (los angeles)', 'us west (n. california)', 'us west (oregon)'.
  7. 'TermType' – mandatory; must be one of: 'ondemand', 'reserved'.
  8. 'LeaseContractLength' – mandatory for TermType == 'reserved'; must be one of: '1yr', '3yr'.
  9. 'PurchaseOption' – mandatory for TermType == 'reserved'; must be one of: 'all upfront', 'no upfront'.
  10. 'OfferingClass' – mandatory for TermType == 'reserved'; must be one of: 'convertible', 'standard'.
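To make the expected layout concrete, here is a hypothetical single-row source file built with pandas. The values are purely illustrative; what matters is that the column names match the merge keys used in the one-time setup script above.

import pandas as pd

# One hypothetical row of the source sheet. Column names must match the merge keys
# ('Instance Type', 'Operating System', 'TermType', 'Pre Installed S/W', 'Location',
# plus the reserved-only columns).
row = {
    "hostname": "app-server-01",
    "Instance Type": "m5.xlarge",
    "Operating System": "linux",
    "Pre Installed S/W": "None",
    "Location": "asia pacific (mumbai)",
    "TermType": "reserved",
    "LeaseContractLength": "3yr",
    "PurchaseOption": "no upfront",
    "OfferingClass": "standard",
}

pd.DataFrame([row]).to_csv("source.csv", index=False)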

b. Pricing:

Now you have pricing against each EC2 instance, and Excel skills can come in handy for the first level of optimizations.

Download the Source Sheet here.

We will come back with more details and steps on the bulk sizing and pricing exercise in our next blog.

40% faster deployments post migrating to cloud along with DevOps transformation


Customer: The fastest-growing cinema business in the Middle East

Summary

The customer is the fastest-growing cinema business in the Middle East, intending to migrate their core movie ticket management and booking application onto AWS. The application currently runs in a colocation data center and needs to be migrated for better scalability and availability, followed by implementation of DevOps optimisation, disaster recovery, log analysis, and managed services.

  • Case – Migration
  • Type – Replatform
  • Number of VMs – 50
  • Number of applications migrated – 5
  • Approximate size of DB – 250 GB

About Customer

The customer is the fastest-growing cinema business in the Middle East and the cinema arm of a leading shopping mall, communities, retail, and leisure pioneer across the Middle East, Africa, and Asia. They operate over 300 screens across UAE, Oman, Bahrain, Qatar, Lebanon and Egypt and will expand to own and manage 600 screens by the year 2020.

Problem Statement

The customer is planning to migrate their core movie ticket management and booking application from their colocation data center (DC) to AWS for better availability and scalability. Migration to AWS was to facilitate the customer to move their production workload as-is without many architectural changes and then focus on DevOps optimisation.

Powerup proposed a 2 phased approach in migrating the customer applications to AWS.

  • Phase 1 – As-is migration of applications from on-premise set up to AWS cloud along with the migration of databases for better scalability and availability.
  • Phase 2 – Implementation of DevOps optimisation.

Post this, Powerup’s scope of work for the customer also included Disaster Recovery (DR) implementation, setting up EKK for log analysis and AWS managed services.

Proposed Solution

Migration

Powerup will study the customer’s environment and prepare a blueprint of the overall architecture. They will identify potential servers and database failure points and will accordingly fix the automation of backups.

Powerup to work in coordination with the customer team to,

  • Export applications and data from the on-premise architecture to AWS using either AWS Import/Export functionality or over the internet.
  • Restore data on the cloud and enable database replication between on-premise and Amazon data stores to identify differential data.
  • Implement the monitoring agents and configure backups.
  • Conduct load testing if required, as well as system and user acceptance accessibility tests, to identify and rectify vulnerabilities.

Post-deployment and stabilization, Powerup completed the automation of the infrastructure using AWS Cloud formation and code deployment automation to save operational time and effort.

Automation

After the phase 1 as-is migration of the customer’s applications, Powerup’s DevOps team will perform weekly manual and automated audits and share reports with the customer team.
Weekly reports on consolidated application uptime, total tickets logged, issue resolution details and actionable plans will be shared with the customer team. Powerup will also run Vulnerability Assessment & Penetration Testing (VAPT) on the cloud, coupled with quarterly Disaster Recovery (DR) drills for one pre-selected application on the cloud platform, to ensure governance is in place.
The DevOps team is also responsible for seamless continuous integration (CI), typically handled by managing a standard single-source repository, automating the build, tracking build changes/progress and finally automating the deployment.

Disaster Recovery (DR)

Powerup to understand customer’s compliance, Recovery Point Objective (RPO) and Recovery Time Objective (RTO) expectations before designing the DR strategy.

  • Configure staging VPC, subnets and the entire network set up as per current architecture.
  • Set up network access controls, create NAT gateway and provision firewall for the DR region.
  • Initiate the CloudEndure console, enable replication to the AWS staging server, and create failover replication to the DR site from the CloudEndure dashboard to conduct the DR drill.
  • Verify and analyze RTO and RPO requirements.

EKK set up

The AWS EKK stack (Amazon Elasticsearch Service, Amazon Kinesis and Kibana) acts as Amazon’s alternative to the open-source ELK stack for ingesting and visualizing data logs. Powerup’s scope involved gathering information on the environment and providing access to relevant users to create an AWS Elasticsearch service and an AWS Kinesis service. The intent was to install and configure the Kinesis agent on the application servers in order to push the data logs and validate the log stream to the Kibana dashboard. This setup helps configure error and failure logs, alerts, anti-virus logs and transition failures. The EKK solution provides for analyzing logs and debugging applications; overall, it helps in managing a log aggregation system. Know more on EKK implementation here.

Managed services

The first step is to study the customer’s cloud environment to identify potential failure points and loopholes if any.

Powerup DevOps team will continuously monitor the customer’s cloud infrastructure health by keeping a check on CPU, memory and storage utilization as well as URL uptime and application performance.

OpenVPN will be configured for the secure exchange of data between the customer’s production setup on AWS and the individual cinema locations. The web, application and database servers are implemented in High Availability (HA) mode. Databases are implemented on Amazon EC2 with replication enabled to a standby server. Relational database service (RDS) may be considered if there are no dependencies from the application end.

Security in the cloud is a shared responsibility between the customer, cloud platform provider and Powerup. Powerup will continuously analyze and assist the customer with best practices around application-level security.

The security monitoring scope includes creating an AWS organization account and proxy accounts with multi-factor authorization (MFA) for controlled access on AWS. Powerup to also set up an Identity and Access Management (IAM), security groups as well as network components on customer’s behalf.

The Powerup team helped set up the VPN tunnel from AWS to the different customer theatre locations [31 different locations].

Enable server-side encryption and manage Secure Sockets Layer (SSL) certificates for the website. Monitor logs for security analysis, resource change tracking, and compliance auditing. Powerup DevOps team to track and monitor firewall for the customer’s environment and additionally mitigate distributed denial-of-service (DDoS) attacks on their portals and websites.

Anti-virus tools and intrusion detection/prevention to be set up by Powerup along with data encryption at the server as well as storage level. Powerup DevOps team will continuously monitor the status of automated and manual backups and record the events in a tracker. In case of missed automatic backups, a manual backup will be taken as a corrective step. Alerts to also be configured for all metrics monitored at the cloud infrastructure level and application level.

Business Benefits

  • Migration helped the customer achieve better scalability and availability.
  • DevOps tool helped automate manual tasks and facilitated seamless continuous delivery while AWS cloud managed services provisioned the customer to reduce operational costs and achieve maximum optimization of workload efficiency.

Cloud platform

AWS.

Technologies used

EC2, S3, ALB, Autoscaling, CodeDeploy, CloudFormation, MS SQL, Jboss, Infinispan Cluster,  Windows, AWS Export/Import.

Data Analytics helping real-time decision making


Customer: The fastest-growing cinema business in the Middle East

Summary

The customer, the fastest-growing cinema business in the Middle East, wanted to manage the logs from multiple environments by setting up centralized logging and visualization. This was done by implementing the EKK (Amazon Elasticsearch, Amazon Kinesis and Kibana) solution in their AWS environment.

About Customer

The customer is the cinema arm of a leading shopping mall, retail and leisure pioneer across the Middle East and North Africa. They are the Middle East’s most innovative and customer-focused exhibitor, and the fastest-growing cinema business in the MENA region.

Problem Statement

The customer’s applications generate huge amounts of logs from multiple servers. If any error occurs in an application, it is difficult for the development team to get the logs or view them in real time to troubleshoot the issue. They did not have a centralized location to visualize logs and get notified when errors occur.

In the ticket-booking scenario, by analyzing the logs generated by the application, the organization can enable valuable features, such as notifying developers that an error occurred on the application server while customers were booking tickets. If the application logs can be analyzed and monitored in real time, developers can be notified immediately to investigate and fix the issues.

Proposed Solution

Powerup built a log analytics solution on AWS using Elasticsearch as the real-time analytics engine, with Amazon Kinesis Firehose pushing the data to Elasticsearch. In some scenarios, the customer wanted to transform or enhance the streaming data before it is delivered to Elasticsearch. Since all the application logs are in an unstructured format on the servers, the customer wanted to filter the unstructured data and transform it into JSON before delivering it to Amazon Elasticsearch Service. Logs from the web, app and DB tiers were pushed to Elasticsearch for all six applications.

Amazon Kinesis Agent

  • The Amazon Kinesis Agent is a stand-alone Java software application that offers an easy way to collect and send data to Kinesis Streams and Kinesis Firehose.
  • AWS Kinesis Firehose Agent – daemon installed on each EC2 instance that pipes logs to Amazon Kinesis Firehose.
  • The agent continuously monitors a set of files and sends new data to your delivery stream. It handles file rotation, checkpointing, and retry upon failures. It delivers all of your data in a reliable, timely, and simple manner.

Amazon Kinesis Firehose

  • Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards that you’re already using today.
  • Kinesis Data Firehose Stream – endpoint that accepts the incoming log data and forwards to ElasticSearch

Data Transformation

Kinesis Data Firehose can invoke your Lambda function to transform incoming source data and deliver the transformed data to destinations. When you enable Kinesis Data Firehose data transformation, Kinesis Data Firehose buffers incoming data up to 3 MB by default. Kinesis Data Firehose then invokes the specified Lambda function asynchronously with each buffered batch using the AWS Lambda synchronous invocation model. The transformed data is sent from Lambda to Kinesis Data Firehose. Kinesis Data Firehose then sends it to the destination when the specified destination buffering size or buffering interval is reached, whichever happens first.
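
The shape of such a transformation function is fixed by Firehose: it receives a batch of base64-encoded records and must return each record with a result of Ok, Dropped, or ProcessingFailed. Below is a minimal illustrative Python sketch of a Firehose transformation Lambda; wrapping each raw log line in a JSON document is an assumption for illustration, not the customer’s actual transformation logic.

import base64
import json

def lambda_handler(event, context):
    # Firehose passes a batch of records; each must be returned with the same recordId.
    output = []
    for record in event['records']:
        raw = base64.b64decode(record['data']).decode('utf-8')
        try:
            # Illustrative transformation: wrap the raw log line in a JSON document.
            transformed = json.dumps({'message': raw.strip()}) + '\n'
            output.append({
                'recordId': record['recordId'],
                'result': 'Ok',
                'data': base64.b64encode(transformed.encode('utf-8')).decode('utf-8'),
            })
        except Exception:
            # Mark the record as failed so Firehose can route it to its error output.
            output.append({
                'recordId': record['recordId'],
                'result': 'ProcessingFailed',
                'data': record['data'],
            })
    return {'records': output}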

ElasticSearch

  • Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
  • Store, analyze, and correlate application and infrastructure log data to find and fix issues faster and improve application performance. You can receive automated alerts if your application is underperforming, enabling you to proactively address any issues.
  • Provide a fast, personalized search experience for your applications, websites, and data lake catalogs, allowing users to quickly find relevant data.
  • Collect logs and metrics from your servers, routers, switches, and virtualized machines to get comprehensive visibility into your infrastructure, reducing mean time to detect (MTTD) and resolve (MTTR) issues and lowering system downtime.

Kibana

Kibana is an open-source data visualization and exploration tool used for log and time-series analytics, application monitoring, and operational intelligence use cases. It offers powerful and easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in geospatial support. Also, it provides tight integration with Elasticsearch, a popular analytics and search engine, which makes Kibana the default choice for visualizing data stored in Elasticsearch.

  • Using Kibana’s pre-built aggregations and filters, you can run a variety of analytics like histograms, top-N queries, and trends with just a few clicks.
  • You can easily set up dashboards and reports and share them with others. All you need is a browser to view and explore the data.
  • Kibana comes with powerful geospatial capabilities so you can seamlessly layer in geographical information on top of your data and visualize results on maps.

Ingesting data into Elasticsearch using Amazon Kinesis Firehose

Kinesis Data Firehose is part of the Kinesis streaming data platform, along with Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data Analytics. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.

Record

The data of interest that your data producer sends to a Kinesis Data Firehose delivery stream. A record can be as large as 1000 KB.

Data producer

Producers send records to Kinesis Data Firehose delivery streams. For example, a web server that sends log data to a delivery stream is a data producer. You can also configure your Kinesis Data Firehose delivery stream to automatically read data from an existing Kinesis data stream, and load it into destinations.
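
In this project the producers push data through the Kinesis Agent (described in the next section), but for illustration, a producer can also write a record directly to a delivery stream with the AWS SDK. A minimal Python sketch, assuming a hypothetical delivery stream named app-logs-to-es:

import boto3

firehose = boto3.client('firehose', region_name='us-east-1')  # region is an assumption

log_line = '100.189.189.89 - - [27/Oct/2000:09:27:09 -0400] "GET /java/javaResources.html HTTP/1.0" 200\n'

# Each call sends one record (up to the Firehose record size limit) to the delivery stream.
response = firehose.put_record(
    DeliveryStreamName='app-logs-to-es',
    Record={'Data': log_line.encode('utf-8')},
)
print('Record accepted with id:', response['RecordId'])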

Writing Logs to Kinesis Data Firehose Using Kinesis Agent

  • Amazon Kinesis Agent is a standalone Java software application that offers an easy way to collect and send data to Kinesis Data Firehose. The agent continuously monitors a set of files and sends new data to your Kinesis Data Firehose delivery stream.
  • The agent handles file rotation, checkpointing, and retry upon failures. It delivers all of your data in a reliable, timely, and simple manner. It also emits Amazon CloudWatch metrics to help you better monitor and troubleshoot the streaming process.
  • The Kinesis Agent has been installed on all the production server environments such as web servers, log servers, and application servers. After installing the agent, we need to configure it by specifying the log files to monitor and the delivery stream for the data. After the agent is configured, it durably collects data from those log files and reliably sends it to the delivery stream.
  • The data on the servers is unstructured, and the customer wanted to send data in a specific format to Elasticsearch and visualize it in Kibana. So we configured the agent to preprocess the data before delivering it to AWS Kinesis Firehose; the preprocessing configuration used in the Kinesis Agent is shown below.

MatchPattern

  • Since the log data is unstructured and only specific records needed to be forwarded, we used a match pattern to filter the data before sending it to Kinesis Firehose.
  • The agent is configured to capture the relevant unstructured data using a regular expression and send it to AWS Kinesis Firehose.

An example of how we filtered the data and sent it to Kinesis Firehose:

  • LOGTOJSON configuration with Match Pattern

Sample Kinesis agent configuration:

{
    "optionName": "LOGTOJSON",
    "logFormat": "COMMONAPACHELOG",
    "matchPattern": "^([\\d.]+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(.+?)\" (\\d{3})",
    "customFieldNames": ["host", "ident", "authuser", "datetime", "request", "response"]
}

The record in the server before conversion:


100.189.189.89 - - [27/Oct/2000:09:27:09 -0400] "GET /java/javaResources.html HTTP/1.0" 200

After conversion:

{
    "host": "100.189.189.89",
    "ident": null,
    "authuser": null,
    "datetime": "27/Oct/2000:09:27:09 -0400",
    "request": "GET /java/javaResources.html HTTP/1.0",
    "response": "200"
}

The record on the server has been converted to JSON format. The match pattern captures only the data that matches the regular expression and sends it to AWS Kinesis Firehose, which in turn delivers it to Elasticsearch, where it can be visualized in Kibana.

Business Benefits

  • The Powerup team successfully implemented a real-time centralized log analytics solution using AWS Kinesis Firehose and Elasticsearch.
    • The Kinesis Agent was used to filter the application logs, and Kinesis Firehose streams them to Elasticsearch.
    • Separate indexes were created in Elasticsearch for all 6 applications, based on access logs and error logs.
    • A total of 20 dashboards were created in Kibana based on error types, for example, 4xx errors, 5xx errors, cron failures and auth failures.
    • Alerts are sent to the developers via AWS SNS when the configured thresholds are breached, so that developers can take immediate action on errors generated by the application and server.
    • Developer log analysis time decreased from a couple of hours to a few minutes.
  • The EKK setup implemented for the customer is a complete log-analysis platform for searching, analyzing and visualizing log data generated by different machines. It provides centralized logging to help identify server- and application-related issues across multiple servers in the customer environment and correlate logs within a particular time frame.
  • The data analysis and visualization provided by the EKK setup have enabled management and the respective stakeholders to view business reports from various application streams, leading to easier business decision-making.

Cloud platform

AWS.

Technologies used

Lambda, Kibana, EC2, Kinesis.

Using AI to make roller coasters safer

By | AI Case Study, Artificial Intelligence | No Comments

Customer: One of the leading integrated resorts

Summary

The customer is an integrated resort on an island in Singapore. They offer several world-class attractions, one of which is Battlestar Galactica, the most enticing duelling roller coaster ride at the resort. They decided to invest in preventive maintenance of the ride’s wheels to ensure top-class safety, and planned to adopt a Machine Learning (ML) based solution on Google Cloud Platform (GCP).

Problem Statement

  • The customer’s Battlestar Galactica ride is financially demanding and requires high maintenance.
  • The wheel inspection process is a time-consuming, high-maintenance manual job.
  • Deciding whether a wheel is good or bad is based on human judgement and expert experience.

The ultimate goal was to remove human intervention and automate the decision making on the identification of a bad wheel using machine learning. The machine learning model needed to be trained on currently available data and ingest real-time data over a period of time to help identify patterns of range and intensity values of wheels. This would in turn help in identifying the wheel as good or bad at the end of every run.

Proposed Solution

Project pre-requisites

  • Ordering of .dat files generated by SICK cameras to be maintained in a single date-time format for appropriate Radio-frequency identification (RFID) wheel mapping.
  • Bad wheel data should be stored in the same format as good wheel data (.dat files) in order to train the classifier.
  • The dashboard to contain the trend of intensity and height values.
  • A single folder to be maintained for Cam_01 and another folder for Cam_02; the folder name or location should not change.

Solution

  • Data ingestion and storage

An image-capturing software tool named Ranger Studio was used to capture complete information about the wheels. The Ranger Studio onsite machine generates .dat files for the wheels after every run and stores them on a local machine. An upload service picks up these .dat files from the storage location at pre-defined intervals and runs C# code on them to produce CSV output with range and intensity values.

CSV files are pushed to Google Cloud Storage (GCS) using the Google Pub/Sub real-time messaging service. A publisher publishes files from the local machine using two separate Python scripts, one for Cam01 and one for Cam02. A subscriber is then used to subscribe to the published files for Cam01 and Cam02.
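
A minimal sketch of what one of the camera-specific publisher scripts might look like, using the google-cloud-pubsub client; the project ID, topic name and file path are hypothetical placeholders:

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# One topic per camera in this sketch; names are placeholders.
topic_path = publisher.topic_path('my-gcp-project', 'cam01-wheel-csv')

def publish_csv(csv_path):
    # Read the generated CSV and publish it as a single Pub/Sub message,
    # attaching the file name as a message attribute for downstream tracking.
    with open(csv_path, 'rb') as f:
        data = f.read()
    future = publisher.publish(topic_path, data=data, filename=csv_path)
    print('Published', csv_path, 'with message id', future.result())

# Example: publish_csv('/ranger_studio/cam01/wheel_run_0001.csv')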

  • Data Processing

Powerup is responsible for ingesting the data into Cloud Storage or Cloud SQL in the defined format. Processing of the data includes the timestamp and wheel run count. A pub tracker and a sub tracker are maintained to track the files for both cameras so that the subscribed files can be stored on GCS separately for each camera. After the CSV data is processed, it is removed from the local machine via a custom script to avoid memory issues.

  • Data modelling in Cloud SQL

Once data is processed, Powerup to design the data model in Cloud SQL, where all the data points will be stored in relational format.

The CSV files of individual wheels are then used to train the classifier model, which is built with Keras, a deep learning API. The trained classifier is exported as a prediction model (.pkl file) used to identify good and bad wheels. The prediction model resides on a GCP VM; the generated CSV files are passed through it, and each wheel is classified as good or bad based on an accuracy value.
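
A minimal sketch of the kind of binary classifier that could be built with Keras for this task; the feature size, network shape and training parameters are assumptions, since the actual model depends on how the range and intensity values are extracted from the per-wheel CSV files:

from tensorflow import keras

N_FEATURES = 512  # assumed length of the range/intensity feature vector per wheel run

def build_classifier():
    # Simple feed-forward network: outputs the probability that a wheel is bad.
    model = keras.Sequential([
        keras.layers.Input(shape=(N_FEATURES,)),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# X: rows of range/intensity values per run, y: 0 = good wheel, 1 = bad wheel
# model = build_classifier()
# model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)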

  • BigQuery and ML Model

Once the prediction for a wheel is done, the predicted accuracy score, timestamp and wheel information are stored in BigQuery tables. The average accuracy per wheel is then displayed in Google Data Studio.
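
A minimal sketch of how each prediction result could be appended to a BigQuery table with the google-cloud-bigquery client; the table name and column names are hypothetical, not the customer’s actual schema:

from google.cloud import bigquery

client = bigquery.Client()
TABLE_ID = 'my-gcp-project.ride_maintenance.wheel_predictions'  # placeholder table

def store_prediction(wheel_id, run_timestamp, accuracy):
    # Stream one prediction row into BigQuery so it can be reported on in Data Studio.
    rows = [{'wheel_id': wheel_id, 'run_timestamp': run_timestamp, 'accuracy': accuracy}]
    errors = client.insert_rows_json(TABLE_ID, rows)
    if errors:
        raise RuntimeError('BigQuery insert failed: {}'.format(errors))

# Example: store_prediction('RFID-0042', '2020-01-15T10:32:00Z', 82.5)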

Powerup to ensure data performance is optimized via tuning and to build the ML model. This enables the customer to accumulate large volumes of height and intensity data, after which the ML model is scored with new data.

The current accuracy threshold for the SMS trigger is set at 70. Prediction accuracy is expected to improve over a period of 6 months, once the model has enough bad-wheel data reported for training the ML classifier. An SMS is triggered if the accuracy value falls below 70.

An SMS is also triggered if a file is not received in Google Cloud Storage from the local machine via Google Pub/Sub. The reason for a file not being received needs to be checked by the client’s SICK team, as it may stem from multiple causes such as the source file not being generated due to camera malfunction, system shutdown or maintenance. The Powerup team is to be informed in such cases, as a restart of instances may be required. Twilio is the service used for SMS, whereas SendGrid is used for email notifications.
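
A minimal sketch of the threshold check and Twilio SMS trigger; the credentials, phone numbers and message format are placeholders:

from twilio.rest import Client

ACCOUNT_SID = 'ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'  # placeholder Twilio credentials
AUTH_TOKEN = 'your_auth_token'
ACCURACY_THRESHOLD = 70  # SMS is triggered when the predicted accuracy drops below this value

def alert_if_needed(wheel_id, accuracy):
    # Only alert when the wheel's predicted accuracy falls below the configured threshold.
    if accuracy >= ACCURACY_THRESHOLD:
        return
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    client.messages.create(
        body='Wheel {} flagged: predicted accuracy {:.1f} is below {}'.format(
            wheel_id, accuracy, ACCURACY_THRESHOLD),
        from_='+10000000000',  # placeholder Twilio number
        to='+10000000001',     # placeholder maintenance team number
    )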

  • Security and Deployment

Powerup to build a secure environment for all third-party integrations, deploy User Acceptance Test (UAT) environments, conduct regression tests and provide Go-Live support to off-site applications. The number of servers and services supported in production was 10, where support included server management in terms of security, network, DevOps, backup, DR and audit. Support also included adjusting ML models to improve training.


Limitations

Since the request payload size was too large for the Google ML online predictor, it could not be used. A custom prediction model was built with Keras to overcome this.


Cloud platform

Google Cloud Platform.

Technologies used

Cloud Storage, BigQuery, Data Studio, Compute Engine.

Business Benefits

Powerup successfully trained the classifier model with a limited set of real-time good- and bad-wheel data. The accuracy of the model is expected to improve over time; with current data it stands at 60%, supporting cost-effectiveness and world-class safety.

Managing the big ‘R’ in cloud migrations

By | Powerlearnings | One Comment

Compiled by Mitu Kumari, Senior Executive, Marketing at Powerupcloud Technologies.

Cloud adoption, with migRation at the core, is possibly one of the biggest technology-based organizational changes undertaken. It has a profound impact on a business’s ability to innovate and fully transform the way they operate.

Moving the business applications and data to the cloud is a great strategic move that gives a competitive edge by reducing IT costs. It helps lighten the budget and promotes greater collaboration. It also has the added benefit of disaster risk management and supports continuity of business operations.

However, in spite of the obvious benefits, migRation to the cloud can be a daunting process; and needs to be done in the right way to ensure it supports business needs and delivers the value that it promises.

According to the seventh annual Cisco Global Cloud Index (2016-2021), which focuses on data centre virtualization and cloud computing, both consumer and business applications are contributing to the growing dominance of cloud services over the Internet.

Key highlights & projections of cloud computing:

  • The study forecasts global cloud data center traffic to reach 19.5 zettabytes (ZB) per year by 2021, up from 6.0 ZB per year in 2016.
  • Cloud data center traffic will represent 95 percent of total data center traffic by 2021, compared to 88 percent in 2016.
  • It is expected that by 2021 there will be 628 hyperscale data centers globally, compared to 338 in 2016, 1.9-fold growth or near doubling over the forecast period. 

The key to a successful cloud migRation is in the planning. Creating a migRation strategy is vital and involves identifying the potential options, understanding the interdependencies between applications, and deciding upon what layers of an application would benefit from migRation. Each of these steps provides more opportunities to take advantage of cloud-native features.

Having established a basic understanding of the importance of migRation, let’s try to understand the primary steps organizations and businesses should take to migrate applications to the cloud.

But prior to that, an organization or business needs to analyze its current applications and systems.

What are the parameters to analyze in order to select the right migRation approach?

There are broadly two sets of considerations for migrating an application to the cloud: business and technology. Both can affect the choice of a migRation strategy. Using this knowledge, organizations can outline a plan for how they’ll approach migrating each application in their portfolio, and in what order.

The 3 main parameters for the business considerations are:

  • Business Fit: When the current application is not a suitable fit for business requirements.
  • Business Value: When the current application does not provide adequate value to the users in terms of information quality and support.
  • Agility: When the existing application fails to keep up with the current pace of business, creating financial and operational risks in the future.

The 3 main parameters for the technical considerations are:

  • Cost: The total cost of ownership for maintaining the application is higher than its business value.
  • Complexity: Complexity within the current application causes various problems and is a major factor in maintainability and in the time, cost and risk of change.
  • Risk: In older applications, various levels of risk exist within the application’s tech stack or functionality.

Before an organization begins moving to the cloud, it is vital that IT executives and business leaders take a step back and carefully craft a strategic plan for cloud migRation and application transformation.

Identifying issues in the application layer:

The underlying cause of the issue and its location must be identified within the application. The source of the issue can exist within 3 main aspects of a software component:

  • Technology
  • Functionality
  • Architecture

Functionality = likely source of fit and value issues.

Technology = common source of cost, complexity and risk issues.

Architecture = contributor to both, with an impact on complexity and agility.

After having identified and confirmed the application layer issues, the next key step would be to choose the right and most suitable migRation strategy.

How do you choose a migRation strategy, the big ‘R’:

There are 6 primary approaches from which to choose the most suitable migRation strategy. They are listed and defined below.

  1. Rehost
  2. Replatform
  3. Repurchase
  4. Refactor/Rearchitect
  5. Retire
  6. Retain

1. Rehosting (lift-and-shift)

The simplest path is lift and shift, which works just how it sounds. It’s about taking existing applications and data and redeploying them on cloud servers. This is the most common path for companies new to cloud computing, which are not yet accustomed to a cloud environment and can benefit from the speed of deployment.

Gartner refers to this as rehosting, because it involves moving your stack to a new host without making extensive changes. This enables a rapid, cost-effective migRation, minimal disruption and quick ROI, minimizing the business disruption that could occur if the application was to be refactored.

 However, the lack of modification to your system also prevents you from harnessing certain cloud migRation benefits in the short term.

You shouldn’t treat lift and shift as the end of the migRation story. Very often, applications can be migrated with lift and shift and then, once in the cloud, re-architected to take advantage of the cloud computing platform.

When to choose Rehost approach for cloud migRation:

  • Avoiding expensive investments in hardware: For example, if setting up and operating a data center costs twice as much as the cloud, it is advisable to move the application to the cloud with minor or no modification.
  • Some applications can be easily optimized once migrated to the cloud: For those applications, it is a good strategy to first move them to the cloud as-is using the rehost approach and then optimize.
  • In the case of commercial off-the-shelf applications: It is impossible to make code changes to those applications, so rehosting is the better option.
  • MigRation of applications that need to keep running: For organizations moving to the cloud with applications that just need to keep running without disruption or modification, rehost is a good option.

2. Re-platforming (lift-tinker-and-shift)

This is a good strategy for organizations that aren’t ready for expansion or reconfiguration, or those that want to build trust in the cloud before making a commitment.

Re-platforming is really a variation of lift and shift: you make a few cloud (or other) optimizations in order to achieve some tangible benefit without changing the core architecture of the application, while using cloud-based frameworks and tools that allow developers to take advantage of the cloud’s potential.

While this kind of migRation has some cost associated with it, it often represents significant savings compared to the cost of rebuilding the existing legacy system.

When to choose Re-platform approach for cloud migRation:

  • Organizations willing to automate tasks that are essential to operations but are not business priorities.
  • When an application’s source environment is not supported on the cloud and a slight modification is required to move it.
  • Organizations looking to leverage more cloud benefits than simply moving the application to the cloud.
  • Organizations that are sure minor changes won’t affect the application’s functioning.

3. Re-purchase (Drop & Shop)

Repurchasing is another fast way to access cloud technologies. Software as a service (SaaS) can take existing data and applications and translate them into a comparable cloud-based product. This can help make significant progress with operations such as customer relationship management and content management. 

Repurchasing involves a licensing change where the on-premise version is being swapped for the same application but as a cloud service. Dropping on-premise applications and switching to the cloud can offer improved feature sets, without the complexities of rehosting the existing application. The approach is a common choice for legacy applications that are incompatible with the cloud. 

When to choose Re-purchase approach for cloud migRation:

  • Use this approach for legacy applications incompatible with the cloud: If you find existing applications that could benefit from the cloud but would be difficult to migrate using “lift and shift”, “drop and shop” could be a good option.
  • Many commercial off-the-shelf (COTS) applications are now available as Software as a Service (SaaS): Repurchasing is an excellent and fast way to access cloud-based SaaS tailored to your business needs by the cloud provider.

4. Re-architecting (Re-factoring)

This strategy calls for a complete overhaul of an application to adapt it to the cloud. It is valuable when you have a strong business need for cloud-native features, such as improved development agility, scalability or performance.

Highly customized applications that provide a key business differentiator should be re-architected to take advantage of cloud-native capabilities. 

Re-architecting means rebuilding your applications from scratch to leverage cloud-native capabilities you couldn’t use otherwise, such as auto-scaling or serverless computing. It is the most future-proof approach for companies that want to take advantage of more advanced cloud features.

When to choose Re-architect approach for cloud migRation:

  • When restructuring of the code is required to take full advantage of cloud capability.
  • When there is a strong business need for adding features and performance to the application, and that is not possible within the existing framework.
  • When an organization is looking to boost agility or improve business continuity, the re-architecting strategy is a better fit.
  • When an organization is willing to move to a service-oriented architecture, this approach can be used.

5. Retire

In today’s data centers there are often several workloads that are no longer used but have been kept running. This can have many causes, but in some cases the best thing to do with a workload is to simply turn it off.

Care should be taken to ensure that the service is decommissioned in a fashion that is in line with your current procedure of retiring a platform, but oftentimes migRation is a great time to remove deprecated technology from your service catalog.

When to choose Retire approach as part of cloud migRation:

  • In many cases, a migRation project identifies applications that are redundant, and shutting them down can represent a cost saving.
  • There may already be plans to decommission the application or consolidate it with other applications.

6. Retain

In some cases, when a server or IT service is still required but cannot be migrated to the cloud, it makes the most sense to retain that server or service in its current position. The Retain methodology is used in a hybrid cloud deployment that combines some on-premises IT servers and services with cloud technologies to offer a seamless user experience.

While it might at times make sense to retain a technology, doing so is only advisable in select circumstances. The impact of retaining some technologies on-premises is usually an increased demand for hybrid connectivity.

When to choose Retain approach as part of cloud migRation:

  • The business is heavily invested in the on-premise application and may have currently active development projects.
  • Legacy operating systems and applications are not supported by cloud environments.
  • The application is working well, so there is no business case for the cost and disruption of migRation.
  • For industries that must adhere to strict compliance regulations that require that data is on-premise.
  • For applications that require very high performance, the on-premise option may prove the better choice.

Managing the big R:

So in conclusion, which is the best approach to cloud migRation?

Different use cases have different requirements, so there is no “one size fits all”. Selecting one among the six migRation approaches means finding the one that best suits your specific needs. That said, there is a way to determine whether one of these approaches will suit you better than the others.

While choosing an approach for cloud migRation to improve the technology, architecture and functionality of the IT infrastructure, one must always keep in mind the cost and risk associated with the chosen approach.

Start by checking if the app can be moved to a cloud environment in its entirety while maintaining running costs and keeping operational efficiency in check. If the answer is yes, starting with a rehost is a good idea.

If rehosting is not an option or if cost-efficiency is at a level that needs to be improved, you can also consider replatforming as the approach to take. Remember that not all apps can be migrated this way, so you may end up having to find other solutions entirely.

For workloads that can easily be upgraded to newer versions, the repurchase model might allow a feature set upgrade as you move to the cloud.

The same can be said for refactoring, when there is a strong business need to take full advantage of cloud capabilities that is not possible with the existing applications. The time and resources needed to complete a full refactoring should be taken into consideration.

Some applications simply won’t be required anymore, so it is important to identify and retire these prior to migrating to the cloud, so that you do not end up paying for application infrastructure that delivers no business benefit.

Retain portions of your IT infrastructure if some applications are not yet ready to be migrated, would deliver more benefit when kept on-premises, or were recently upgraded.

One can certainly take (most of) the hassle out of moving to the cloud with the right cloud migRation strategy. You will be left with the exciting part: finding new resources to use, better flexibility to benefit from, and a more effective environment for your apps.