Migrating large workloads to AWS and implementing best practice IaaS


Customer: a leading provider of cloud-based software solutions

About the customer:

The client is a leading provider of cloud-based software solutions for 200+ customers across pharmaceutical, biotech and medical device manufacturers, Contract Research Organizations (CROs) and regulatory agencies. Its proprietary cognitive technology platform integrates Machine Learning (ML) capabilities to automate the core functions of the product life cycle, boost efficiency, ensure compliance, deliver actionable insights, and lower total cost of ownership through a multi-tenant Software-as-a-Service (SaaS) architecture, enabling organizations to get started on their digital transformation. Their services and solutions are used by 4 of the top 5 and 40 of the top 50 life science companies, and by 8 health authorities. The company is headquartered in the US, with regional offices in Europe, India and Japan.

The business requirement:

Operating in the highly regulated life sciences industry, the client recognized the benefits of the cloud a long time ago and was one of the very first life sciences solution vendors to deliver SaaS solutions to its customers. That momentum continues as the business goes “all-in on AWS” by moving its entire cloud infrastructure to the AWS platform. With its platform and solutions powered entirely by AWS, the business wanted to find ways to reduce costs, strengthen security and increase the availability of the existing AWS environment. Powerup’s services were enlisted with the following objectives:

1. Cost optimization of the existing AWS environment
2. Deployment automation of the Safety infrastructure on AWS
3. Architecture and deployment of a centralized log management solution
4. Architecture review and migration of the client’s customer environment to AWS, including a POC for Database Migration Service (DMS)
5. Evaluation of DevOps strategy

The solution approach:

1. Cost optimization of the existing AWS environment

Here are the three steps followed by Powerup to optimize costs:
● Addressing idle resources through proper server tagging, translating into instant savings
● Right-sizing recommendations for instances after proper data analysis
● Planning Amazon EC2 Reserved Instance (RI) purchases for resized EC2 instances to capture long-term savings

Removing idle/unused resource clutter would fail to achieve its objective in the absence of a proper tagging strategy. Tags created to address wasted resources also help to right-size resources by improving capacity and usage analysis. After right-sizing, committing to Reserved Instances becomes much easier. For example, the Powerup team drew up a price comparison chart for the running EC2 and RDS instances based on On-Demand vs. RI costs and shared a detailed analysis explaining the RI pricing plans. By following these steps, Powerup estimated a 30% reduction in the customer’s monthly AWS spend.
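As a sketch of the per-instance On-Demand vs. RI comparison described above, the arithmetic looks like this; the hourly rates are hypothetical placeholders, not actual AWS prices:

```shell
# Hypothetical hourly rates in USD; real prices vary by instance type,
# region and RI term. 730 is the common hours-per-month convention.
on_demand=0.192
ri_effective=0.121
hours_per_month=730

monthly_od=$(awk -v r="$on_demand" -v h="$hours_per_month" 'BEGIN { printf "%.2f", r*h }')
monthly_ri=$(awk -v r="$ri_effective" -v h="$hours_per_month" 'BEGIN { printf "%.2f", r*h }')
savings_pct=$(awk -v od="$monthly_od" -v ri="$monthly_ri" 'BEGIN { printf "%.0f", (od-ri)/od*100 }')

echo "On-Demand: \$${monthly_od}/mo  RI: \$${monthly_ri}/mo  Savings: ${savings_pct}%"
# → On-Demand: $140.16/mo  RI: $88.33/mo  Savings: 37%
```

The same per-instance calculation, scaled across the fleet, is what drives a comparison chart like the one Powerup shared.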

2. Deployment automation of the Safety infrastructure on AWS

In AWS, the client has leveraged key security features like CloudWatch and CloudTrail to closely monitor traffic and actions performed at the API level. Critical functions like identity and access management, encryption and log management are also handled using native AWS features. Amazon GuardDuty, an ML-based service that continuously monitors for threats and adds industry intelligence to the alerts it generates, is used for 24/7 monitoring, along with Amazon Inspector, a vulnerability assessment service. To ensure end-to-end cyber security, the client has deployed Trend Micro Deep Security, an Endpoint Detection and Response (EDR) solution. All their products are tested for security vulnerabilities using the IBM AppScan tool and manual code review, following OWASP Top 10 guidelines and NIST standards to ensure the confidentiality, integrity and availability of data.

As part of deployment automation, Powerup used CloudFormation (CF) and/or Terraform templates to automate infrastructure provisioning and maintenance. In addition, Powerup’s team simplified all modules used to perform day-to-day tasks so they could be reused for deployments across multiple AWS accounts. Logs generated by all provisioning tasks were stored in a centralized S3 bucket. The business had also requested that security parameters and tagging be incorporated into the templates, along with tracking of user actions in CloudTrail.
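The provisioning flow described above can be sketched with the AWS CLI; the stack name, template file, tag values and log bucket below are placeholders, not the client's actual configuration:

```shell
#!/usr/bin/env bash
# Sketch: provision a stack from a reusable CloudFormation template with
# mandatory tags, then ship the provisioning log to a centralized S3 bucket.
set -euo pipefail

STACK_NAME="safety-infra-dev"
LOG_FILE="/tmp/${STACK_NAME}-$(date +%Y%m%d%H%M%S).log"

# Deploy (creates or updates) the stack, applying the tagging strategy.
aws cloudformation deploy \
  --stack-name "$STACK_NAME" \
  --template-file safety-infra.yaml \
  --parameter-overrides Environment=dev \
  --tags Project=safety Owner=platform-team CostCenter=cc-1234 \
  2>&1 | tee "$LOG_FILE"

# Centralize the provisioning log (bucket name is an assumption).
aws s3 cp "$LOG_FILE" "s3://central-provisioning-logs/${STACK_NAME}/"
```

Because the stack name and parameters are plain variables, the same script can be reused across multiple AWS accounts by switching the CLI profile.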

3. Architecture and deployment of centralized Log Management solution

Multiple approaches to log management were shared with the customer, and Powerup and the client team agreed on the “AWS CloudWatch Event Scheduler/SSM Agent” approach. The initial scope was a log management system for the Safety infrastructure account; later it was expanded to other accounts as well. The Powerup team built the solution architecture for log management using the ELK stack and CloudWatch. Scripts were written so they could be reused across the client’s AWS accounts. Separate scripts were written for Linux and Windows machines using shell scripting and PowerShell. Nothing was hard-coded in the scripts: all inputs come from a CSV file containing the instance ID, log path, retention period, backup folder path and S3 bucket path. Furthermore, live hands-on workshops were conducted by the Powerup team to train the client’s operations team for future implementations.
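The CSV-driven pattern can be sketched as follows; the file paths, instance ID and bucket name are hypothetical, and the aws command is only echoed so the loop can be dry-run without AWS credentials:

```shell
#!/usr/bin/env bash
# Minimal sketch of the approach above: nothing is hard-coded in the script;
# each CSV row supplies Instance ID, log path, retention period, backup
# folder path and S3 bucket path.
set -euo pipefail

cat > /tmp/log-config.csv <<'EOF'
instance_id,log_path,retention_days,backup_folder,s3_bucket_path
i-0abc123,/var/log/app,30,/backup/app,s3://central-logs/app
EOF

# Skip the header row, then build one upload command per row.
output=$(tail -n +2 /tmp/log-config.csv | while IFS=',' read -r id path days backup s3; do
  # Retention would be enforced by an S3 lifecycle rule, not a sync flag.
  echo "aws s3 sync ${backup} ${s3}/${id}"
done)
echo "$output"
```

Removing the `echo` (running the generated command directly) turns the dry run into the real upload; a PowerShell twin of the same loop covers the Windows fleet.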

4. Architecture review and migration of the client’s environment to AWS including POC for Database Migration Service (DMS)

The client’s pharmacovigilance software and drug safety platform is now powered by the AWS Cloud; currently more than 85 of their 200+ customers have been migrated, with more to follow soon. In addition, the client wanted Powerup to support the migration of one of its customers to AWS. Powerup reviewed and validated the designed architecture, and the infrastructure was deployed as per the approved architecture. Once deployed, Powerup used the AWS Well-Architected Framework to evaluate the architecture and provide guidance on implementing designs that scale with the customer’s application needs over time. Powerup also supported the application team for the production go-live on the AWS infrastructure, along with deploying and testing the DMS POC.
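A DMS POC along these lines can be sketched with the AWS CLI; all ARNs, identifiers and file names are placeholders, and a replication instance plus source/target endpoints are assumed to already exist:

```shell
# Create a replication task that does a full load and then replicates
# ongoing changes (CDC). Table selection rules live in a JSON mapping file.
aws dms create-replication-task \
  --replication-task-identifier client-db-poc \
  --source-endpoint-arn arn:aws:dms:us-east-1:111111111111:endpoint:SRC \
  --target-endpoint-arn arn:aws:dms:us-east-1:111111111111:endpoint:TGT \
  --replication-instance-arn arn:aws:dms:us-east-1:111111111111:rep:RI \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json

# Kick off the task once it reports "ready".
aws dms start-replication-task \
  --replication-task-arn arn:aws:dms:us-east-1:111111111111:task:TASK \
  --start-replication-task-type start-replication
```

The `full-load-and-cdc` mode is what allows a cutover with minimal downtime: the full load seeds the target while CDC keeps it in sync until go-live.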

5. Evaluation of DevOps strategy

Powerup was responsible for evaluating DevOps automation processes and technologies to suit the products built by the client’s product engineering team.


Powerup equipped the client with efficient, completely on-demand infrastructure provisioning within hours, along with built-in redundancies, all managed by AWS. Eliminating idle and over-allocated capacity, RI management and continuous monitoring enabled them to optimize costs: they realized 30% savings on overlooked AWS assets, resulting in an overall 10 percent optimization in AWS cost. In addition, the client can now schedule and automate application backups, scale up databases in minutes by changing the instance type, and have instances automatically moved to healthy infrastructure in less than 15 minutes in the event of downtime, giving customers improved resiliency and availability. The client continues to provide a globally unified, standardized solution on the AWS infrastructure-as-a-service (IaaS) platform to drive compliance and enhance the experience of all its customers.



Written by Madan Mohan K, Sr. Cloud Engineer; Contributor: Karthik T, Principal Cloud Architect, Powerupcloud Technologies

“Illusion appeals to creativity; authentication validates reality.” – Active Directory Federation Services

Problem Statement:

Our prime client has their AD and ADFS on-premises. The client needed to federate their web application and get the user attributes without opting for a SAML provider.


With a SAML provider, the configuration would have been much easier. How we cracked this without a SAML provider is what this blog covers.


Active Directory Federation Services (ADFS) is an enterprise-level identity and access management service provided by Microsoft. ADFS runs as a separate service, and hence any application that supports WS-Federation and Security Assertion Markup Language (SAML) can leverage this federated authentication service.

In this article, we are going to use Active Directory and ADFS configured on an Azure VM. The configuration of AD and ADFS on the Azure VM is explained in the sections below.

This article uses several Azure services to set up the client’s on-premises equivalent and demonstrate how things work live. The Azure services used are as follows:

  • Azure Virtual Network (VNET) configuration
  • Azure Virtual Machine (VM) provisioning
  • Active Directory configuration on Azure VM
  • Active Directory Federation Services (ADFS) configuration in Azure VM


This article aims to explain the configuration of AD and ADFS on an Azure VM. This typically involves the following steps, carried out from the Azure Management Portal:

  • Set up Azure Virtual Network
  • Provision Azure VM

Once the VM provisioning is done, the following services need to be configured inside the Azure VM:

  • Active Directory Domain Services
  • Active Directory Federation Services
  • IIS

Technology Stack

1. Windows Server 2012 R2 Datacenter

2. ADFS 3.0

3. IIS

4. Microsoft Azure subscription.

5. Self-Signed SSL Certificates

Configuring AD, ADFS and SSL

Since there are plenty of blogs on the Internet covering Active Directory configuration, we will focus on the ADFS and SSL configuration and only briefly summarize what we did in AD.

  • Configured Active Directory Domain Services
  • Promoted it to a domain controller
  • The domain used is

Configure SSL certificate

Active Directory Federation Services (ADFS) uses the HTTPS protocol. Certificates provisioned from a certificate authority allow this to work over HTTPS; here, we opted to use a self-signed certificate. To create the self-signed certificate, we used the New-SelfSignedCertificateEx PowerShell script:

PS C:\Scripts> . .\New-SelfSignedCertificateEx.ps1

PS C:\Scripts> New-SelfSignedCertificateEx -Subject "" -EKU "Server Authentication" -KeyUsage 0xa0 -StoreLocation "LocalMachine" -ProviderName "Microsoft Strong Cryptographic Provider" -Exportable

ADFS Configuration:

In this section, ADFS configuration is explained

To begin, open Server Manager and click “Add Roles and Features”. In the Server Roles window, select “Active Directory Federation Services”. Click Next to continue.

On the Features section, select nothing and click Next.

On the ADFS section, select nothing and click Next.

On the Confirmation page, click Next.

Once the installation succeeds, you will be prompted to configure the federation service on the server.

Click on “Configure the federation service on this server”.

Check “Create the first federation server in a federation server farm”.

Select a privileged account under which the setup will be executed.

Import the SSL certificate generated earlier and enter the Federation Service display name.

Use the privileged account as the Service Account.

Check “Create a database on this server using Windows Internal Database”.

Select nothing on the Review page and click Next.

The Pre-requisite Checks section displays the status of successful validation.

ADFS configured successfully.

Once ADFS is configured, we need to configure the Relying Party Trusts.

In the Relying Party Trusts tab, right-click and add a new Relying Party Trust.

Check “Enter data about the relying party manually”.

Enter the display name as required.

Check the ADFS profile option.

Click Next on the Configure Certificate section.

Check “Enable support for the SAML 2.0 WebSSO protocol” and enter the https:// URL.

In the relying party trust identifier, enter the URL.

Check “I do not want to configure multi-factor authentication settings for this relying party trust at this time”.

Allow all the users to access this relying party.

Click on Next in the Ready to Add Trust section.

Click on Finish.

In the Add Transform Claim Rule wizard, select “Send LDAP Attributes as Claims”.

Enter the claim rule name, select the Active Directory attribute store and define the claim attributes as in the screenshot below.

Adjusting the trust settings

You still need to adjust a few settings on your relying party trust. To access these settings, select Properties from the Actions sidebar.

  • In the Endpoints tab, click Add SAML to add a new endpoint.
  • For the Endpoint type, select SAML Logout.
  • For the Binding, choose POST.
  • For the Trusted URL, create a URL using:
  • The web address of your AD FS server
  • The AD FS SAML endpoint you noted earlier
  • The string ?wa=wsignout1.0
  • The URL should look something like this:

  • Confirm your changes by clicking OK on the endpoint.

There you go: the application is up and redirects correctly once authentication is provided.

VSTS Project on Azure Web-app and VM


Written by Karthik T, Principal Cloud Architect, Powerupcloud Technologies

This is a four-step process to create a VSTS project on Azure web-apps and VMs.

  1. Setup
  2. Code
  3. Build
  4. Release

1. Setup:

Steps for creating a VSTS project for hosting the application on Azure Web-App and VM.


Sign Up for the account and create a new project. We used an agile process and Git version control.

If you want to know about the other project process available take a look at this link


We are going to host a static webpage through Azure app services.


VSTS is browser-based, so get your favourite modern browser.


Create an Azure VM with the OS of your choice (here we used Windows).


Install Visual Studio on your local machine from here.


There are many ways to import the repo. Here we used the “Import a repository” option and the Git link to import the code into VSTS.


Now the code will be pushed to your VSTS project.

The code is pushed like this to the master branch:


We can create multiple branches for each developer and set permissions on each branch for merging code into the master branch.


Once a developer has written code in a new branch, say ‘dev’, and wants to push it into the master branch for the build, they use the pull request option.

In the pull request, the code can be sent for approval and review before being merged into the master branch. We created two branches, committed a few lines of code, and merged them into the master branch (optionally, the code can be sent for approval and review before merging into a branch). We also used Visual Studio to commit/push/sync/fetch against the VSTS Git repo.


We have created the private Git repository with an Agile process and committed the code in a git repo with the JSP sample app.

For continuous integration, we used a Maven task to build the WAR file and a Publish Artefact task to store it.

We used the Maven task with the pom.xml file to build (optionally, the build can be scheduled or triggered whenever code is committed to the repo).

In the Triggers tab, enable continuous integration to automate the build process once code is pushed to the master branch.


We use the published artefact (WAR file) from the build stage, and for continuous deployment we created two pipelines:

  • The stored artefact file is deployed to the Azure web app on Apache Tomcat, with a load test task added.
  • The stored artefact file is deployed to a Windows VM hosted on Azure.

> Release pipeline for Azure Webapp

From the build artefact, we deploy to the release environment with two tasks that host the JSP app.

The two tasks we used are Deploy Azure App Service and Quick Web Performance Test.

Once the release is successful, we can check the logs to see how the tasks ran.

> Release pipeline for Azure VM

We use the build artefact to deploy to the Windows VM hosted in Azure.

The task used here is Azure VMs File Copy, which copies the built artefact to the Azure-hosted VM.

The VM credentials should be supplied to the “Azure file copy” task.

Once the release is successful, we can check the logs to see how the tasks ran.



Pros:

  1. Works with nearly every language and technology.
  2. VSTS includes project management tools like Agile, Scrum, CMMI.
  3. Because it is cloud-based, it is very easy to access from anywhere.
  4. Users need not worry about server maintenance.

Cons:

  1. It does not support integration with SharePoint or Project Server.


The website was hosted successfully in Azure WebApp and Azure VM.



Hope the above was helpful. Happy to hear from you.

Case Study: A dual migration across AWS & Azure.


Written by Arun Kumar, Sr. Cloud Engineer and Ayush Ragesh, Cloud Solutions Architect, Powerupcloud Technologies

Problem Statement

The client is a global pioneer and leader providing end-to-end gifting solutions and manages 260Mn+ transactions a year. With over 15,000 points of sale across 16 countries, they needed to migrate their platform from an on-premises datacentre to the cloud. In addition, they needed to be ready and scalable for one of the largest e-commerce sales in the country.

Powerup’s approach:

In consultation with the client, it was decided to host the primary DC on AWS and the DR site on Azure.

Architecture Diagram

Architecture Description

  1. The applications were spread across multiple VPCs, which are connected to each other via VPC peering, with different VPCs for UAT, Management, Production, etc.
  2. VPN tunnels are deployed from the client’s Bangalore location to AWS and Azure environment.
  3. Multiple Load Balancers are used to distribute traffic between different applications.
  4. NAT Gateways are used for Internet access to private servers.
  5. Cisco Firepower/Palo Alto as the firewall.
  6. CloudFormation for automated deployments on AWS.
  7. Cloudtrail for logging and KMS for encryption of EBS volumes and S3 data. Config for change management.
  8. Route53 is used as the DNS service.
  9. Guard Duty and Inspector will be configured for additional security.
  10. DR site will be deployed on Azure.


* Powerupcloud successfully migrated the core platform for their largest client to AWS.

* The client was able to achieve the required scalability, flexibility and performance.

The e-commerce sale day was a big success with zero downtime.

Lessons Learned

The customer initially wanted to use a Cisco Firepower firewall for IDS/IPS, DDoS protection, etc. SSL offloading needed to be done at the application server, so we decided to use Network Load Balancers. Instance-based routing was used so that the source IP addresses are available at the application server. The firewall needed three Ethernet cards: ‘trust’, ‘untrust’ and ‘management’.

In Cisco, by default, eth0 is mapped to management and this cannot be changed. With instance-based routing, the request always goes to eth0, whereas it should go to ‘untrust’.

So we finally had to use a Palo Alto firewall, where eth0 can be remapped to ‘untrust’.

Cloned Linux virtual machine fails to fetch an IP from the DHCP client when using a sysprepped image


Written by Nirmal Prabhu, Cloud Engineer, Powerupcloud Technologies

Initial Problem Statement

An internal error occurred while processing the diagnostics profile of VM ‘test’ when using the encrypted Storage Account “test storage” as the diagnostics storage account.

We ran into this weird error while cloning a virtual machine after sysprepping it. At first sight, storage appeared to be the troublemaker.

Initial Findings:

Suspecting storage to be the cause, we tried to mitigate the issue by focusing on it. While analyzing the storage, we made the following observations.

When we create the VM using PowerShell without specifying which Storage Account to use as the Diagnostics Account, Azure automatically used the next available Storage Account. In our case, the “test storage” encrypted account was chosen by default.

We also found that our account did not have sufficient permissions on the key vault used by the reported Storage Account “testdiagstorage”.

The above observations confirmed that the permission issue was the cause of the reported error.

To confirm the same, we have done the below:

Using an encrypted Diagnostics Storage Account:

Created a new Storage Account in the same location and resource group.

Created a new Key Vault.

Generated a new Key.

Encrypted the Storage Account using the new Key Vault and Key.

Then we have created a new VM using the same image from the portal.

During the deployment from the portal, we chose the new encrypted storage account to be the diagnostic account.

The deployment completed successfully, and we got no errors regarding the diagnostics account.

The Real Troubler

But then came the real devil, which prevented us from connecting to the virtual machine.

We scrutinized the VM using the serial console, a new feature in Azure that allows bidirectional serial console access to your virtual machines.

We figured out that the issue was with the DHCP client, which denied us access to the cloned virtual machine over SSH. Digging further, we observed the following issues:

  • The cloned VM was not accessible over SSH.

  • The VM could not even be pinged from other Azure VMs.

  • The VM did not have a private IP obtained from the attached network adapter.

While troubleshooting this issue from the OS level, we found that the DHCP Client isn’t started.

Once we started the DHCP client manually by running dhclient, eth0 obtained an IP address and we could communicate with the VM normally over that IP.

However, this didn’t fully solve the issue: once the VM rebooted, the DHCP client would not start automatically as it should.


To mitigate this issue, we worked around it by automating the DHCP client to start at boot:

Run crontab -e.

Add the line below:

@reboot /sbin/dhclient

Once that was done, the VM ran the DHCP client on boot, ensuring that it obtains an IP address on each reboot.
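As a non-interactive alternative to editing the crontab by hand, the same entry can be appended like this (a sketch, assuming the root crontab and run as root):

```shell
# Append the @reboot entry without opening an editor; `crontab -l` may fail
# if no crontab exists yet, hence the 2>/dev/null fallback.
( crontab -l 2>/dev/null; echo "@reboot /sbin/dhclient" ) | crontab -

# Verify the entry landed:
crontab -l | grep dhclient
```

This form is also convenient for baking the fix into the image itself, so every VM cloned from it starts the DHCP client at boot.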

We found a similar symptom reported against Ubuntu 18.04, which raised the possibility that this is a known issue in our case.


Adopting Azure Update Management Solution


Written by Karthik T, Principal Cloud Architect, Powerupcloud Technologies

This blog helps you understand how to use the Azure Update Management solution to manage updates for your Windows and Linux computers.

1.0 Introduction

This document provides details on the adoption of an Update Management solution in Azure.

2.0 Audience

· IT Infrastructure and Server Management Team

3.0 Why do we need Patch Management?

· Plugging any security vulnerabilities in the OS or Installed Application Software

· Proactive protection against newer threats and malware

· Fixing existing platform/software bugs

· Performance and stability Improvements

· Addressing known Issues

· Meet Compliance requirements (like SOX)

4.0 What is the Update Management solution in Azure?

The Update Management solution in Azure Automation allows you to manage operating system updates for your Windows and Linux computers deployed in Azure, in on-premises environments, or at other cloud providers. We can quickly assess the status of available updates on all agent computers and manage the process of installing required updates for servers.

5.0 Update Management Solution in Azure

Computers managed by update management use the following configurations for performing assessment and update deployments:

o Microsoft Monitoring Agent for Windows or Linux

o PowerShell Desired State Configuration (DSC) for Linux

o Automation Hybrid Runbook Worker

o Microsoft Update or Windows Server Update Services for Windows computers

The following diagram shows a conceptual view of the behaviour and data flow with how the solution assesses and applies security updates to all connected Windows Server and Linux computers in a workspace.

After a computer performs a scan for update compliance, the agent forwards the information in bulk to Log Analytics. On a Windows computer, the compliance scan is performed every 12 hours by default. In addition to the scan schedule, the scan for update compliance is initiated within 15 minutes if the Microsoft Monitoring Agent (MMA) is restarted, prior to update installation, and after update installation. With a Linux computer, the compliance scan is performed every 3 hours by default, and a compliance scan is initiated within 15 minutes if the MMA agent is restarted.


Update Management requires certain URLs and ports to be enabled to properly report to the service.

We can deploy and install software updates on computers that require the updates by creating a scheduled deployment. Updates classified as Optional are not included in the deployment scope for Windows computers, only required updates. The scheduled deployment defines what target computers receive the applicable updates, either by explicitly specifying computers or selecting a computer group that is based on log searches of a particular set of computers. Also, specify a schedule to approve and designate a period of time when updates are allowed to be installed within. Updates are installed by runbooks in Azure Automation. You cannot view these runbooks, and they don’t require any configuration. When an Update Deployment is created, it creates a schedule that starts a master update runbook at the specified time for the included computers. This master runbook starts a child runbook on each agent that performs the installation of required updates.

6.0 Supported Client types

6.1 Unsupported Client Types

7.0 Solution Components

This solution consists of the following resources that are added to your Automation account and directly connected agents

7.1 Hybrid Worker Groups

After we enable this solution, any Windows computer directly connected to the Log Analytics workspace is automatically configured as a Hybrid Runbook Worker to support the runbooks included in the solution. Each Windows computer managed by the solution is listed under the Hybrid worker groups page as a System hybrid worker group for the Automation account, following the naming convention Hostname FQDN_GUID. You cannot target these groups with runbooks in your account; otherwise, they fail. These groups are only intended to support the management solution.

However, you can add the Windows computers to a Hybrid Runbook Worker group in the Automation account to support Automation runbooks, as long as the same account is used for both the solution and the Hybrid Runbook Worker group membership. This functionality was added in version 7.2.12024.0 of the Hybrid Runbook Worker.

7.2 Data Collection


The solution collects information about system updates from Linux agents and initiates the installation of required updates on supported distros.

7.3 Collection Frequency

For each managed Windows computer, a scan is performed twice per day. Every 15 minutes the Windows API is called to query for the last update time to determine if the status has changed and if so a compliance scan is initiated. For each managed Linux computer, a scan is performed every 3 hours.

It can take anywhere from 30 minutes up to 6 hours for the dashboard to display updated data from managed computers.

8.0 Manage Updates for multiple Machines

We can use Update Management to manage updates and patches for your Windows and Linux virtual machines. From the Azure Automation account, we can:

· Onboard virtual machines.

· Assess the status of available updates.

· Schedule the installation of required updates.

· Review deployment results to verify that updates were applied successfully to all virtual machines for which update management is enabled.

9.0 Enable update management for Azure virtual machines

In the Azure portal, open your Automation account and select Update management.

At the top of the window, select Add Azure VM.

Select a virtual machine to onboard. The Enable Update Management dialog box appears. Select Enable to onboard the virtual machine. Once onboarding is complete, Update management is enabled for your virtual machine.

10 View computers attached to your automation account

After enabling Update Management for machines, we can view their information by clicking Computers. Computer information such as Name, Compliance, Environment, OS Type, Critical and Security Updates, Other Updates, and Update Agent Readiness is available.

Computers that have recently been enabled for Update Management may not have been assessed yet; the compliance state for those computers shows a status of Not assessed. Here is the list of values for compliance state:

· Compliant — Computers that are not missing critical or security updates.

· Non-compliant — Computers that are missing at least one critical or security update.

· Not assessed — The update assessment data has not been received from the computer within the expected timeframe. For Linux computers, in the last three hours and for Windows computers, in the last 12 hours.

To view the status of the agent, click the link in the UPDATE AGENT READINESS column. This opens the Hybrid Worker page that shows the status of the Hybrid Worker. The following image shows an example of an agent that has not been connected to Update Management for an extended amount of time.

11 Schedule an update deployment

To install updates, schedule a deployment that follows a release schedule and service window. We can choose which update types to include in the deployment. For example, we can include critical or security updates and exclude update rollups.

Schedule a new update deployment for one or more virtual machines by selecting Schedule update deployment at the top of the Update management dialog box. In the New update deployment panel, specify the following:

o Name: Provide a unique name to identify the update deployment.

o OS Type: Select Windows or Linux.

11.1 Machines to update

Select the virtual machines that you want to update. The readiness of the machine is shown in the UPDATE AGENT READINESS column. This lets you see the health state of the machine before scheduling the update deployment.

11.2 Update classification

Select the types of software that the update deployment will include. For a description of the classification types, see update classifications. The classification types are:

· Critical updates

· Security updates

· Update rollups

· Feature packs

· Service packs

· Definition updates

· Tools

· Updates

11.3 Updates to exclude

Selecting this option opens the Exclude page. Enter the KB numbers or package names to exclude.

11.4 Schedule settings

We can accept the default date and time, which is 30 minutes after the current time. Or you can specify a different time. You can also specify whether the deployment occurs once or on a recurring schedule. To set up a recurring schedule, select the Recurring option under Recurrence.

11.5 Maintenance window (minutes)

Specify the period of time for when you want the update deployment to occur. This setting helps ensure that changes are performed within your defined service windows.

After you finish configuring the schedule, select Create to return to the status dashboard. The Scheduled table shows the deployment schedule that you just created.

Warning: For updates that require a restart, the virtual machine will restart automatically.

12 View results of an update deployment

After the scheduled deployment starts, we can see the status for that deployment on the Update deployments tab in the Update management dialog box. If the deployment is currently running, its status is In progress. After the deployment finishes successfully, it changes to Succeeded. If one or more updates fail in the deployment, the status is Partially failed.

To see the dashboard for an update deployment, select the completed deployment.

The Update results pane shows the total number of updates and the deployment results on the virtual machine. The table to the right gives a detailed breakdown of each update and the installation results. Installation results can be one of the following values:

· Not attempted: The update was not installed because insufficient time was available, based on the defined maintenance window.

· Succeeded: The update succeeded.

· Failed: The update failed.
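The mapping from these per-update results to the overall deployment status described earlier is mechanical. As a small illustrative sketch (the function is this post's own, not part of Update Management; the strings mirror the values listed above):

```python
def deployment_status(results):
    """Summarize an update deployment from its per-update installation results.

    `results` holds one entry per update, each being one of the values
    listed above: "Succeeded", "Failed", or "Not attempted".
    """
    if all(r == "Succeeded" for r in results):
        return "Succeeded"
    # One or more updates failed or were not attempted within the window.
    return "Partially failed"

print(deployment_status(["Succeeded", "Succeeded"]))  # Succeeded
print(deployment_status(["Succeeded", "Failed"]))     # Partially failed
```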

To see all log entries that the deployment created, select All logs.

To see the job stream of the runbook that manages the update deployment on the target virtual machine, select the Output tile.

To see detailed information about any errors from the deployment, select Errors.

Azure App Gateway with Custom Image Scale Set using ARM Template

By | Azure, Blogs | No Comments

Written by Madan Mohan K, Sr. Cloud Engineer. Contributors: Karthik T, Principal Cloud Architect

“Security, a Tom and Jerry game”: Azure Application Gateway with WAF

“Higher the Availability, double the glory in production”: Azure Virtual Machine Scale Sets

  1. An Application Gateway with a scale set remains a boon for hosting scalable, performant, secure and robust applications on Azure. This Layer 7 capable load balancer provides application-level routing and load-balancing services, and helps in achieving a scalable, highly available web front end with multi-URL routing in Azure.
  2. On the other front, virtual machine scale sets (VMSS) help in achieving improved cost management, higher availability and higher fault tolerance. VMSS supports both Azure Load Balancer (Layer 4) and Azure Application Gateway (Layer 7) traffic distribution. A scale set supports up to 300 virtual machine instances if it uses its own custom image; otherwise it supports up to 1,000 VM instances.

Why use this template?

  1. This template can be leveraged for both Windows and Linux flavours. In the case of unmanaged disks, the image URI passed is a source VHD; in the case of a managed disk, the URI passed is an image ID.
  2. This article aims at deploying a virtual machine scale set at the backend of an Application Gateway with a custom image URI, using an ARM template, for both Windows and Linux flavours.
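For reference, the storage profile is the part of the scale set resource that differs between the two cases. A sketch for the managed-disk variant (the parameter name is this sketch's own; the property names follow the Microsoft.Compute schema):

```json
"storageProfile": {
  "imageReference": {
    "id": "[parameters('customImageId')]"
  }
}
```

For unmanaged disks, the equivalent fragment references the source VHD instead, along the lines of `"osDisk": { "createOption": "FromImage", "image": { "uri": "[parameters('sourceVhdUri')]" } }`.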


The architecture comprises the following Azure components:

§ Virtual machine scale set
§ Application gateway with WAF
§ Application server (golden image)
§ Jump box

Problem Statement

A client who runs an e-commerce site had trouble with their application hosted on Azure. On reviewing their environment, we found they were using an Application Gateway with standalone virtual machines at the backend, which resulted in a lack of scalability and higher cost.


To mitigate the issue, a virtual machine scale set was brought into the picture, which helped in achieving higher availability, better performance, scalability, and improved cost management.

ARM Template

  1. To achieve this scenario, we opted to use an ARM template that incorporates provisioning of the scale set at the backend of the Application Gateway with a custom image URI.
  2. As network performance remains a key factor, accelerated networking was taken into consideration while developing this JSON template. The template aims at solving the problem for both Windows and Linux servers, and for both managed and unmanaged disks.
  3. In the network profile section, accelerated networking has been incorporated, which helps improve networking performance.

4. The scale set is the key part of this article: it references the source URI path of the golden image disks for the instances created at the backend of the Application Gateway.

5. The web application firewall is incorporated to mitigate security risks and to detect and prevent DDoS attacks.

6. SSL certificate inclusion for the Application Gateway is also taken care of in the template: passing the certificate data and password gets SSL attached to the Application Gateway.

7. HTTP and HTTPS listeners are added in the listener section.

8. HTTP-to-HTTPS redirection has been taken care of within the routing rules.

9. In this section, the diagnostic settings for a Windows server have been used; the diagnostic settings differ between Windows and Linux.
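The listeners and the HTTP-to-HTTPS redirection mentioned above can be sketched roughly as follows (the resource and listener names are this sketch's placeholders; property names follow the Microsoft.Network application gateway schema):

```json
"redirectConfigurations": [
  {
    "name": "httpToHttps",
    "properties": {
      "redirectType": "Permanent",
      "targetListener": {
        "id": "[concat(variables('appGwId'), '/httpListeners/httpsListener')]"
      },
      "includePath": true,
      "includeQueryString": true
    }
  }
],
"requestRoutingRules": [
  {
    "name": "httpToHttpsRule",
    "properties": {
      "ruleType": "Basic",
      "httpListener": {
        "id": "[concat(variables('appGwId'), '/httpListeners/httpListener')]"
      },
      "redirectConfiguration": {
        "id": "[concat(variables('appGwId'), '/redirectConfigurations/httpToHttps')]"
      }
    }
  }
]
```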

If you need any help with the above JSON scripts, please do reach out to us.

End 🙂

Running Python on Azure Functions — Time Trigger, External Libraries

By | Azure, Blogs | No Comments

Written By: Ranjeeth Kuppala, Former CTO, Powerupcloud Technologies

I wanted to deploy a time-triggered Python Azure Function, but it looks like it's not directly available. The console just shows HTTP and Queue triggers at the moment.

But you can still use a timer trigger. Just go ahead and choose the HttpTrigger — Python function template, give your function a name and create it. Once created, go to “Integrate”, delete the HTTP trigger and then add your own new trigger of type Timer.
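Under the hood, the Timer trigger is just a binding in the function's function.json. A sketch of what it could look like (the binding name and the six-field CRON schedule, here firing every five minutes, are illustrative):

```json
{
  "disabled": false,
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ]
}
```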

Now that the function is created, you will need external libraries like pandas if you are wrangling data. If you pip install from the Kudu console (it's generally available at https://{yourwebappname}), you may run into permission issues. At the time of writing this post, Azure Function apps have a Windows backend and a Linux backend is not available yet. So the error may look like this:

python -m pip install pandas

File "D:\Python27\lib\", line 303, in move
    os.unlink(src)
WindowsError: [Error 5] Access is denied: 'd:\\python27\\lib\\site-packages\\pip-1.5.6.dist-info\\description.rst'

It is basically a permissions issue. On a normal Windows machine, you could simply resolve this by relaunching the cmd or PowerShell executable as an administrator. You can't do that on Azure Function Apps (which is basically Azure App Service in the backend). After wasting an hour trying to fix it and trying to install Python to a different path via NuGet etc., thanks to this link, I managed to solve this using virtualenv.

From the Kudu console, open a PowerShell debug session and navigate to your function's folder:

cd D:\home\site\wwwroot\{yourfunction}

Then create a new virtual environment

python -m virtualenv yourenv

Activate your env. In the Kudu PowerShell session, the simplest approach is to change into the environment's Scripts folder and use its python.exe directly:

cd yourenv\Scripts

You can now upgrade pip and install the libraries you need. You may even use requirements.txt. But make sure that you are using the python.exe from your venv.

.\python.exe -m pip install --upgrade pip
.\python.exe -m pip install pandas

Finally, come back to your function and make sure the path is updated

import sys, os.path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), 'yourenv/Lib/site-packages')))

Connecting Clouds — AWS to Azure Site to Site VPN Step by Step

By | AWS, Azure, Blogs | No Comments

Written By: Karthik AU, Cloud Engineer at Powerupcloud.

One of our customers runs a massive fleet of servers spread across AWS and Azure, and it was necessary for us to establish a reliable tunnel between AWS and Azure for secure access. This post is a detailed step-by-step guide to establishing this connectivity.

We are going to configure RRAS on Windows Server 2012 R2 on AWS, and on the Azure side we will configure the VNet gateway for connectivity. So let's take a look at the steps below.


On Azure

  • Virtual network gateway
  • Local network gateway
  • Virtual network
  • Ubuntu instance (to test the connectivity)
  • Make sure you disable the firewall on the instance that we create for the test.


On AWS

  • VPC
  • Windows Server 2012 R2 Datacentre
  • Ubuntu instance (to test the connectivity)
  • Make sure to disable the firewall on the instance that we create for the test.

Steps to configure

  1. Let’s start with getting things configured on the AWS platform. First, let’s configure VPC with following details
  • Name: aws to azure
  • Address space :
  • Subnet name: awstoaz
  • CIDR:
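Before creating anything, it is worth confirming that the AWS VPC range and the Azure VNet range do not overlap, since the static routes added later would otherwise be ambiguous. As the actual CIDRs were elided above, this sketch uses placeholder ranges with Python's ipaddress module:

```python
import ipaddress

# Placeholder ranges -- substitute the real AWS VPC and Azure VNet CIDRs.
aws_vpc = ipaddress.ip_network("10.0.0.0/16")
azure_vnet = ipaddress.ip_network("172.16.0.0/16")

if aws_vpc.overlaps(azure_vnet):
    raise SystemExit("Address spaces overlap -- pick non-overlapping CIDRs")
print("OK: the two address spaces do not overlap")
```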

Login to AWS console and select services, under services, select VPC (marked in yellow)

Select VPC Dashboard and then select start VPC Wizard

Select VPC with a Single Public Subnet

Fill in the address range and then click on Create VPC

Let’s ensure our VPC is connected to internet Gateway

Once our VPC is configured, let's allocate an Elastic IP. In the VPC dashboard, click Elastic IPs, then Allocate New Address, and click Yes, Allocate.

  1. Now we have a VPC configured and an Elastic IP ready. Next, we’ll deploy an EC2 instance.
  • A Windows Server 2012 R2 instance
  • We have used t2.micro instance
  • Ensure that Auto-assign Public IP is enabled.

To create an EC2 instance, click on Services and there select EC2 under the Compute section.

Select the required image, instance size as mentioned earlier

In the configuration section, make sure you select the correct VPC and subnet. Once done, click on Review and Launch.

  1. Once the instance is running, associate the Elastic IP we created earlier with the network interface of the instance: in the VPC dashboard click Elastic IPs, select Associate Address in the Actions menu, select the network interface used by the Windows Server 2012 R2 Datacentre instance we created earlier, and click Associate.

Fill in the following details and click on Associate.

  1. Next, let’s disable source/destination checking on the server.
  1. Let’s add our Azure VNET address prefix in our RRAS server’s route table. This is to route the incoming traffic from the Azure network to AWS internal network.

Now we are good with the initial configuration on the AWS platform. Let’s start with getting things configured on the Azure platform.

  1. Let’s login to Azure subscription, let’s start with creating a resource group.
  1. Next, we’ll create a virtual network with the following details
  • Name: azure-to-AWS
  • Address space :
  • Subnet name: azure-AWS-sub
  • CIDR:
  • Resource group: the one we created in the previous step
  1. Now we have our virtual network ready. Let’s get started with configuring VPN on Azure. First, let’s start with configuring a virtual network gateway.
  • Make sure you select the VNET that we created in the previous step.
  • Make sure you select the resource group that we created at the beginning.
  • Provisioning the virtual network gateway takes approximately 45 minutes.
  1. Since it takes time to create the virtual network gateway, we can meanwhile configure the local network gateway (here we define the AWS address space and also the public IP of the VPN server on AWS).
  • Give the IP of the VPN server (the RRAS-configured server) on AWS.
  • In the address space, give the VPC range of AWS.
  • Use the same resource group.
  1. Let’s configure RRAS on the instance that we created in AWS (windows server 2012 R2 datacentre). Login to the server.
  • Click on server manager
  • Select the option Add roles and features. Add roles screen pops up click on next.
  • Under Installation Type and Server Selection, keep the default settings and click Next. Under Server Roles, select Remote Access and click Next.
  • Keep the default setting and click next until you get an option to select install. Click on install.
  • Once the installation is completed, open Server Manager and select the Remote Access option on the left-hand side. Next, right-click on the server and select Remote Access Management. This redirects us to the RRAS configuration panel.
  • Click on DirectAccess and VPN and then click on Run the Getting Started Wizard. A wizard pops up; select the last option, Deploy VPN only.
  • Routing and Remote access pane opens. Right-click on the server and then select Configure and Enable Routing and Remote access.
  • Click Next. Under Configuration, select Custom configuration and click Next.
  • Under Custom configuration, select VPN access, then click on Finish.
  • Now we have RRAS installed. Let's allow our Azure IP. For that, right-click on Network Interfaces and select the first option. A wizard pops up; click Next.
  • Under VPN type select the IKEv2 option and click next.
  • Under destination address give the Azure gateway IP address and click next. Under protocol and security keep the default setting and click next.
  • Under Static Routes for Remote Networks, add the Azure VNet address prefix, then click Next. Keep the rest of the settings at their default configuration.
  • Now right click on the created network interface and then select properties.
  • Next, select the Security tab; at the bottom of the screen select the pre-shared key option, enter a pre-shared key and click OK.
  1. Now we have our RRAS server that acts as a VPN endpoint on AWS. Let's go to the Azure subscription and establish the connection. Log in to the Azure subscription, get into your resource group, and there select the local network gateway that we configured in the previous step.
  1. Under the local network gateway, select configuration on the left-hand pane and there select add to add the configuration.
  1. Now add in the following configuration
  • Select the virtual network gateway that we created before.
  • Make sure you give the same pre-shared key that we had given on the RRAS server present on the AWS platform.
  1. Now let’s Login to the RRAS server on AWS end and let’s establish a VPN. In the RRAS server right click on a network interface that we created earlier and then select connect option.

Now let’s check the VPN status, we have our VPN connection status connected from AWS end.

Let’s check the connection from Azure end and we have status showing From Azure end as well.

Let’s deploy the VM’s on the AWS platform and also on Azure platform and then we’ll try to ping from both the end using their private IPs.

We have an Azure Instance with IP: (Ubuntu 14)

We have an AWS Instance with IP: (Ubuntu 14)

Now let’s try to ping AWS instance from Azure instance with private IP. We are able to ping.

Now let’s try to ping Azure instance from AWS instance with private IP. We are able to ping.

Hope you found it useful!

Building Bots using Microsoft Bot Framework and Nodejs

By | Azure, Blogs | No Comments

Written By: Saikrishna Dronavalli, Former Software Engineer, Powerupcloud Technologies

Microsoft provides a very simple and yet powerful framework for creating chatbots. Below are high-level steps for creating bots using Microsoft bot framework.

  • Create a repository on GitHub.
  • Create a web application in Azure.
  • Register a bot with the Microsoft Bot Framework.
  • Develop the bot code.
  • Test the bot.
  • Deploy it to Azure.
  • Deploy it to the different channels (Facebook Messenger, Slack, Kik, Telegram, Twilio SMS, etc.).


Prerequisites:

1. GitHub account.

2. Microsoft account (for creating a bot with the Microsoft Bot Framework).

3. Azure subscription (for creating a web application and deploying the bot application in the Azure web app).

Create a repository on GitHub

This is trivial. I am using GitHub because, in a later section, we need continuous integration of the application with the Azure web app, and using GitHub makes that simple: we save the effort of re-deploying every time we make a change to the code. Go to GitHub and create a repo.

Create a web application in azure

Bot Framework is powered by Azure’s App Service. Visit Azure’s portal and navigate to Web Apps. Create a new WebApp.

Once created, it is time to link it with our GitHub repo and deploy. Select the app you have created, click on App Deployment > Deployment options. Click Choose Source, then select the deployment source; choose GitHub here.

You might have to authenticate using your GitHub creds, go ahead and do that.

We have now set up continuous deployment via GitHub. You don't really have to worry about whether it's .NET, Java, Python, Node, etc. One of the benefits of using App Service is that Azure will take care of the CI/CD heavy lifting as long as you hook it up with the repo.

Register your bot with Microsoft bot framework

To create a bot that is accessible to the world, we need to register the bot with the Microsoft Bot Framework, which gives us a unique Microsoft App ID and Microsoft App password that we need to use in the bot's code.

Use https://azure-webapp-url/api/messages for the Messaging Endpoint as shown below.

Proceed to create the App ID and App password as shown in the screenshots below. You might want to store the password in a text file for later use.

Click on Finish and go back to the Bot Framework portal. Provide the App ID in the required fields, accept the terms and conditions, and click Register.

Develop the Bot

Now the interesting part: actually building our bot. Clone the GitHub repository you created to your local machine. Note that our web app is currently blank; we need to add some code and start building the bot. I chose Node.js for this tutorial.

Let's create a package.json and install the node modules.

npm init

One of the major modules necessary for using the Bot Framework is botbuilder. Let's go ahead and get that. We also need another module, restify, for creating the server.

npm install --save botbuilder
npm install --save restify

Let us create a js file, app.js, and add the following code.


  • The route has its path set to /api/messages because, while registering the bot, the endpoint URL was given as https://azure-webapp-url/api/messages. If you would like some other URL, you are free to choose one, but make sure you use the URL that was registered.
  • You have to add your own App ID and App password in the var botConnectorOptions block.

Next, create an index.html  page with the following content

<h2>Hello world</h2>

Testing the Bot

For testing the bot, we need to use the Microsoft Bot Framework Emulator. Install it and run your application using

node app.js

This will run the application on the port we specified in the app.js file. Open the bot emulator and enter the URL http://localhost:3978/api/messages. Provide the app ID and app password here as well and click on Connect.

Start typing messages, and if you get a reply from the bot, congrats! You have successfully developed a basic bot using Microsoft Bot Builder.

As I said, this is the bare bones of a bot without any actual functionality. Now that the basic bot is working in the local emulator, it's time to push the changes to GitHub and let the Azure web app take care of the deployment.

git add .
git commit -m "commit message"
git push origin master

I pushed it to the master branch because that was the branch I chose while configuring GitHub integration for CI/CD.

Verifying the Bot

Now that deployment is done, it's time to see it live from the Azure web app endpoint. Visit your bot's page in the Bot Framework portal; if everything went well so far, you should see your bot listed. Click on "Test" and you should ideally see "Accepted" as the response.

You can also test your bot locally using ngrok. Open the bot emulator and provide the endpoint URL you specified during the registration process. Provide the app ID and app password, and start chatting with your bot. If you get replies, your endpoint is working properly.

Deploy The Bot To Different Mediums

One of the advantages of using Bot Framework is how easily it allows you to quickly build a bot and make it available in almost all of the major chat platforms available today. If you have to code all of these integrations yourself and maintain, it sure will take a lot of time and effort. Let’s see how we can integrate our bot across all platforms.

Visit your bot's page under My bots on the Bot Framework website. Click the Add button on the required medium, which will show you the steps to follow for enabling the bot on the respective channel.

It's that simple. As an example, I will go through the Slack setup to give you a sense of what the process looks like.

Visit the Slack API portal and create a new Slack application for your bot.

Open OAuth & Permissions, click on Add a new Redirect URL, and paste "" as the redirect URL.

Create a slack bot. Select the ‘Bot Users’ tab and add a bot to your app

Gather the Client ID and Client Secret, which are under Basic Information → App Credentials.

Provide them in the 'Submit your credentials' part of your bot, under the Slack section of the Bot Framework portal, and click on Submit Slack Credentials. Once the credentials have been validated, choose to enable this bot on Slack, and click on I am done configuring Slack.

Install the Slack bot: visit the Install App section of your Slack bot, click on Install App, and grant all the permissions required. Once your bot is installed, open Slack, search for your bot, and start sending messages.

There are many other platforms you can integrate with, and in each case you will get clear instructions to make the integration work.

That's about it. The goal was to demonstrate how easy it is to deploy a bot and integrate it using the Microsoft Bot Framework. I hope you found it useful!

Happy chat-botting!