
Why hybrid is the preferred strategy for all your cloud needs


Written by Kiran Kumar, Business analyst at Powerupcloud Technologies.

While public cloud is a globally accepted and proven solution for CIOs and CTOs looking for a more agile, scalable and versatile IT environment, there are still questions about security, reliability and the cloud readiness of enterprises, and becoming a fully cloud-native organization requires a lot of time and resources. This is especially true for start-ups, for whom working with these uncertainties is too much of a risk. It calls for a solution that is low-risk and inexpensive enough to draw them out of the comfort of their existing on-prem infrastructure.

In such cases, a hybrid cloud is the best approach: it provides the best of both worlds while meeting your performance and compliance needs within the comfort of your datacenter.

So what is a hybrid cloud?

Hybrid cloud delivers a seamless computing experience to organizations by combining the power of the public and private cloud and allowing data and applications to be shared between them. It gives enterprises the ability to easily scale their on-premises infrastructure to the public cloud to handle any fluctuations in workload, without giving third-party datacenters access to the entirety of their data. Understanding these benefits, various organizations around the world have streamlined their offerings to integrate effortlessly into hybrid infrastructures. However, an enterprise has no direct control over the architecture of a public cloud, so for a hybrid cloud deployment, enterprises must architect their private cloud to achieve a consistent hybrid experience with the desired public cloud or clouds.

In a 2019 survey of 2,650 IT decision-makers from around the world, respondents reported steady and substantial hybrid deployment plans over the next five years. More than 80% of respondents selected hybrid cloud as their ideal IT operating model, and more than half of them cited hybrid cloud as the model that meets all of their needs. More than 60% stated that data security is the biggest influencer.

Also, respondents felt that having the flexibility to match the right cloud to each application showcases the scale of adaptability that enterprises can work with in a hybrid multi-cloud environment.

Banking is one of the industries that will embrace the full benefits of a hybrid cloud: because of how the industry operates, it requires a unique mix of services and an infrastructure that is easily accessible and also affordable.

In a recent IBM survey 

  • 50 percent of banking executives say they believe the hybrid cloud can lower their cost of IT ownership 
  • 47 percent of banking executives say they believe hybrid cloud can improve operating margin 
  • 47 percent of banking executives say they believe hybrid cloud can accelerate innovation

Hybrid adoption – best practices and guidelines 

Some of the biggest challenges in cloud adoption include security, talent, and costs. According to the report, hybrid computing has shown that it can address security challenges and manage risk by positioning all the important digital assets and data on-prem. Private clouds are still considered an appropriate solution to host and manage sensitive data and applications, and enterprises still need the means to support their conventional enterprise computing models. A sizeable number of businesses still have substantial on-premise assets comprising archaic technology, sensitive collections of data, and tightly coupled legacy apps that cannot easily be moved to or swapped for public cloud.

Here are some of the guidelines for hybrid adoption.

Have a cloud deployment model for applications and data

Deployment models define which cloud resources and applications should be deployed and where. Hence it is crucial to understand the two-paced system, i.e., steady and fast-paced systems, to determine the deployment model.

A steady paced system must continue to support the traditional enterprise applications on-prem to keep the business running and maintain the current on-premise services. Additionally, off-premises services, such as private dedicated IaaS, can be used to increase infrastructure flexibility for enterprise services.

And a fast-paced system is required to satisfy more spontaneous needs like delivering applications and services quickly, whether that is scaling existing services to meet spikes in demand or providing new applications quickly to meet an immediate business need.

The next step is determining where applications and data must reside.

Placement of applications and datasets on private, public or on-prem infrastructure is crucial: IT architects must assess the right application architecture to achieve maximum benefit. This includes understanding application workload characteristics and determining the right deployment model for multi-tier applications.

Create heterogeneous environments 

To achieve maximum benefit from a hybrid strategy, the enterprise must leverage its existing in-house investments alongside cloud services by integrating them efficiently. As new cloud services are deployed, integrating the applications running on them with the various on-premises applications and systems becomes important.

Integration between applications typically includes 

  • Process (or control) integration, where an application invokes another one in order to execute a certain workflow. 
  • Data integration, where applications share common data, or one application’s output becomes another application’s input. 
  • Presentation integration, where multiple applications present their results simultaneously to a user through a dashboard or mashup.

To obtain a seamless integration between heterogeneous environments, the following actions are necessary:

  • The cloud service provider must support open-source technologies for admin and business interfaces.
  • Examine the compatibility of in-house systems with cloud service providers, and ensure that on-premises applications follow SOA design principles and can utilize and expose APIs to enable interoperability with private or public cloud services.
  • Leverage third-party identity and access management functionality to authenticate and authorize access to cloud services, and put suitable API management capabilities in place to prevent unauthorized access (a minimal token-validation sketch follows this list).
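As a minimal illustration of that last point, the sketch below checks a bearer token before a request is forwarded to a cloud service API. It is only a simplified example using the PyJWT package with a shared secret and a hypothetical scope name; a real hybrid deployment would typically validate tokens issued by the enterprise IdAM provider inside an API management gateway.

import jwt  # PyJWT

SHARED_SECRET = "replace-with-a-real-secret"  # hypothetical; real setups verify against the IdP's signing keys

def is_request_authorized(auth_header, required_scope="orders.read"):
    """Validate a 'Bearer <token>' header before forwarding the call to a cloud service."""
    if not auth_header.startswith("Bearer "):
        return False
    token = auth_header[len("Bearer "):]
    try:
        claims = jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    # Authorize only if the token carries the scope this API requires.
    return required_scope in claims.get("scope", "").split()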

Network security requirements 

Network type – The technology used for the physical connection over the WAN depends on aspects like bandwidth, latency, service levels, and costs. Hybrid cloud solutions can rely on point-to-point links as well as the Internet to connect on-premises data centers and cloud providers. The selection of the connectivity type depends on an analysis of aspects like performance, availability, and the type of workloads.

Security – The connectivity domain needs to be evaluated and understood in order to match the cloud provider's network security standards with the enterprise's overall network security policies, guidelines and compliance requirements. Encrypting and authenticating traffic on the WAN can be handled at the application level, aspects like the systems hosting the computing resources and applications must be considered, and technologies such as VPNs can be employed to provide secure connections between components running in different environments.

Web application security and management services like DNS and DDoS protection, which are available on the cloud, can free up the dedicated resources an enterprise would otherwise need to procure, set up and maintain such services, letting it concentrate on business applications instead. This is especially applicable to hybrid cloud workloads that have components deployed into a cloud service and exposed to the Internet. The systems deployed on-premises need to adapt to work with the cloud, to facilitate problem identification activities that may span multiple systems with different governance boundaries.

Security and privacy challenges & counter-measures

Hybrid cloud computing has to coordinate between applications and services spanning across various environments, which involves the movement of applications and data between the environments. Security protocols need to be applied across the whole system consistently, and additional risks must be addressed with suitable controls to account for the loss of control over any assets and data placed into a cloud provider’s systems. Despite this inherent loss of control, enterprises still need to take responsibility for their use of cloud computing services to maintain situational awareness, weigh alternatives, set priorities, and effect changes in security and privacy that are in the best interest of the organization. 

  • A single, uniform interface must be used to curtail or deal with risks arising from using services from various cloud providers, since it is likely that each will have its own set of security and privacy characteristics. 
  • Authentication and authorization. A hybrid environment could mean that gaining access to the public cloud environment could lead to access to the on-premises environment. 
  • Compliance checks between the cloud providers used and in-house systems.

Counter-measures

  • A single identity and access management (IdAM) system should be used.
  • Networking facilities such as VPN are recommended between the on-premises environment and the cloud.  
  • Encryption needs to be in place for all sensitive data, wherever it is located (see the sketch after this list).
  • Firewalls, DDoS attack handling, etc., need to be coordinated across all environments with external interfaces.
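To make the encryption point concrete, here is a minimal sketch using the Python cryptography package (symmetric Fernet encryption); in practice the key would come from a KMS or vault rather than being generated inline, and key management is out of scope here.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetch this from a KMS or vault
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer PII or other sensitive record")
assert cipher.decrypt(ciphertext) == b"customer PII or other sensitive record"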

Set up an appropriate backup & DR plan

As already discussed, a hybrid environment gives organizations the option to work with multiple clouds, thus offering business continuity, which has always been one of the most important aspects of business operations. It is not just a simple data backup to the cloud or a disaster recovery plan; it means that when a disaster or failure occurs, data is still accessible with little to no downtime. This is measured in terms of time to restart (RTO: recovery time objective) and maximum data loss allowed (RPO: recovery point objective).

Therefore a business continuity solution has to be planned considering key elements such as resilience, the time to restart (RTO) and the maximum data loss allowed (RPO) agreed upon with the cloud provider.
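As a rough illustration of how these two numbers drive the plan, the sketch below checks a hypothetical backup schedule against an agreed RPO and an estimated restore duration against the RTO; all figures are made up for illustration.

from datetime import timedelta

rpo = timedelta(hours=1)                      # maximum tolerable data loss, as agreed with the provider
rto = timedelta(hours=4)                      # maximum tolerable time to restart
backup_interval = timedelta(minutes=30)       # how often data is replicated or backed up
estimated_restore_time = timedelta(hours=3)   # estimated time to bring the DR site up

# Worst-case data loss equals the gap between two consecutive backups.
print("RPO met:", backup_interval <= rpo)
print("RTO met:", estimated_restore_time <= rto)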

Here are some of the challenges encountered while making a DR plan 

  • Although the RTO and RPO values give us a general idea of the outcome, they cannot be fully relied upon, so the time required to restart operations may end up being longer. 
  • As the systems come back up and become operational, there will be a sudden burst of requests for resources, which is more apparent in large-scale disasters.
  • Selecting the right CSP is crucial, as most cloud providers do not provide DR as a managed service; instead, they provide the basic infrastructure to enable your own DRaaS.

Hence enterprises have to be clear about and select the DR strategy that best suits their IT infrastructure. This is crucial for business mobility, making the business accessible from anywhere in the world, and for insuring data in the event of a disaster, natural or technical, by minimizing downtime and the costs associated with such an event.

How are leading OEMs like AWS, Azure and Google Cloud adapting to this changing landscape?

Google Anthos

In early 2019, Google came up with Anthos, one of the first multi-cloud solutions from a mainstream cloud provider. Anthos is an open application modernization platform that enables you to modernize your existing applications, build new ones, and run them anywhere. It is built on open source: Kubernetes is its central command and control center, Istio enables federated network management across the platform, and Knative provides an open API and runtime environment that lets you run your serverless workloads anywhere you choose. Anthos enables consistency between on-premises and cloud environments, helps accelerate application development and strategically enables your business with transformational technologies.

AWS Outposts

AWS Outposts is a fully managed service that extends the same AWS hardware infrastructure, services, APIs, and tools to build and run your applications on-premises and in the cloud for a truly consistent hybrid experience. AWS compute, storage, database, and other services run locally on Outposts, and you can access the full range of AWS services available in the Region to build, manage, and scale your on-premises applications using familiar AWS services and tools across your on-premises and cloud environments. Your Outposts infrastructure and AWS services are managed, monitored, and updated by AWS just as in the cloud.

Azure Stack

Azure Stack is a hybrid solution provided by Azure, built and distributed by approved hardware vendors (like Dell, Lenovo, HPE, etc.) that brings the Azure cloud into your on-prem data center. It is a fully managed service where the hardware is managed by the certified vendors and the software is managed by Microsoft Azure. Using Azure Stack, you can extend Azure technology anywhere, from the datacenter to edge locations and remote offices, enabling you to build, deploy, and run hybrid and edge computing apps consistently across your IT ecosystem, with flexibility for diverse workloads.

How Powerup approaches Hybrid cloud for its customers 

Powerup is one of the few companies in the world to have achieved the status of an AWS Outposts launch partner, with experience working on over 200 projects across various verticals and top-tier certified expertise in all 3 major cloud providers in the market. We can bring an agile, secure, and seamless hybrid experience to the table. Outposts is a fully managed service, hence it eliminates the hassle of managing an on-prem data center so that enterprises can concentrate more on optimizing their infrastructure.

Reference Material

Practical Guide to Hybrid Cloud Computing

Azure Arc: Onboarding GCP machine to Azure


Written by Madan Mohan K, Associate Cloud Architect

“#Multicloud and #Hybrid cloud are not things, they are situations you find yourself in when trying to solve business problems”

In the recent past, most organizations have started moving toward hybrid and multi-cloud approaches. Many enterprises still face a sprawl of resources spread across multiple datacenters, clouds, and edge locations. Enterprise customers keep looking for a cloud-native control plane to inventory, organize, and enforce policies for their IT resources wherever they are, from a central place.

Azure Arc:

Azure Arc extends the Azure Resource Manager capabilities to Linux and Windows servers, and Kubernetes clusters, on any infrastructure across on-premises, multi-cloud, and the edge. Organizations can use Azure Arc to run Azure data services anywhere, which includes always up-to-date data capabilities, deployment in seconds, and dynamic scalability on any infrastructure. We will take a close look at Azure Arc for Servers, which is currently in preview.

Azure Arc for Servers:

Azure Arc for servers lets you manage machines that are hosted outside of Azure (on-premises or with other cloud providers). When these machines are connected to Azure using Azure Arc for servers, they become Connected Machines and are treated as native resources in Azure. Each Connected Machine gets a Resource ID during registration in Azure and is managed as part of a resource group inside an Azure subscription. This enables them to benefit from Azure features and capabilities, such as Azure Policy and tagging.

For each machine that you want to connect to Azure, an agent package needs to be installed. Based on how recently the agent has checked in, the machine will have a status of Connected or Disconnected. If a machine has not checked in within the past 5 minutes, it will show as Disconnected until connectivity is restored. This check-in is called a heartbeat. The Azure Resource Manager service limits also apply to Azure Arc for servers, which means that there is a limit of 800 servers per resource group.
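Conceptually, the Connected/Disconnected status is just a comparison of the last heartbeat against that 5-minute window, as in this rough sketch (illustrative only, not the actual service logic):

from datetime import datetime, timedelta, timezone

HEARTBEAT_WINDOW = timedelta(minutes=5)

def connection_status(last_heartbeat):
    """Return 'Connected' if the agent checked in within the last 5 minutes."""
    age = datetime.now(timezone.utc) - last_heartbeat
    return "Connected" if age <= HEARTBEAT_WINDOW else "Disconnected"

# Example: a machine that last checked in 12 minutes ago shows as Disconnected.
print(connection_status(datetime.now(timezone.utc) - timedelta(minutes=12)))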

Supported Operating Systems:

The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent:

  • Windows Server 2012 R2 and higher
  • Ubuntu 16.04 and 18.04

Networking Requirements on Remote Firewall:

The Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. During installation and runtime, the agent requires connectivity to Azure Arc service endpoints. If outbound connectivity is blocked by the firewall, make sure that the following URLs and service tags are not blocked (a quick connectivity-check sketch follows the URL list below):

Service Tags:

  • AzureActiveDirectory
  • AzureTrafficManager

URLs Required:

  • management.azure.com – Azure Resource Manager
  • login.windows.net – Azure Active Directory
  • dc.services.visualstudio.com – Application Insights
  • agentserviceapi.azure-automation.net – Guest Configuration
  • *-agentservice-prod-1.azure-automation.net – Guest Configuration
  • *.his.hybridcompute.azure-automation.net – Hybrid Identity Service
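Before onboarding, a quick outbound connectivity check against these endpoints can save troubleshooting time. The sketch below simply opens TCP 443 to each hostname; it assumes Python is available on the server, and the wildcard entries above would need to be replaced with the concrete hostnames your agent reports.

import socket

endpoints = [
    "management.azure.com",
    "login.windows.net",
    "dc.services.visualstudio.com",
    "agentserviceapi.azure-automation.net",
]

for host in endpoints:
    try:
        socket.create_connection((host, 443), timeout=5).close()
        print(host, "reachable on 443")
    except OSError as err:
        print(host, "blocked:", err)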

Register Azure resource providers:

Azure Arc for servers depends on the following Azure resource providers in your subscription in order to use this service:

  • Microsoft.HybridCompute
  • Microsoft.GuestConfiguration

First, we need to register the required resource providers in Azure. Therefore, take the following steps:

Navigate to the Azure portal at https://portal.azure.com/

Log in with administrator credentials

Registration can be done either using Azure Portal or Powershell

Using Azure Portal:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types#azure-portal

Using Azure Powershell:

Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute

Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration

In this case, we will be using PowerShell to register the resource providers.

Note: The resource providers are only registered in specific locations.
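The portal and PowerShell routes are shown above; purely for completeness, the same registration can be sketched with the Azure SDK for Python, assuming the azure-identity and azure-mgmt-resource packages are installed and using a placeholder subscription ID.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Register the resource providers required by Azure Arc for servers.
for namespace in ("Microsoft.HybridCompute", "Microsoft.GuestConfiguration"):
    provider = client.providers.register(namespace)
    print(namespace, provider.registration_state)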

Connect the on-premise/cloud machine to Azure Arc for Servers:

There are two different ways to connect on-premises machines to Azure Arc.

  • Download a script and run it manually on the server.
  • Using PowerShell for adding multiple machines using a Service Principal.

When adding a single server, the best approach is to download the script and run it manually. To connect the machine to Azure, we need to generate the agent install script in the Azure portal. This script downloads the Azure Connected Machine Agent (AzCMAgent) installation package, installs it on the on-premises machine and registers the machine in Azure Arc.

Generate the agent install script using the Azure portal:

To generate the agent install script, take the following steps:

Navigate to the Azure portal and search for Azure Arc.

Alternatively, use https://aka.ms/hybridmachineportal to go there directly.

Click on +Add.

Select Add machines using an interactive script:

Keep the defaults in Basic Pane and click on Review and generate the script.

Connect the GCP machine to Azure Arc:

GCP Instance:

To connect the GCP machine to Azure Arc, we first need to install the agent on the GCP machine. Therefore, take the following steps

Open Windows PowerShell ISE as an administrator, paste the script generated in the previous step into the window, and execute it.

A registration code is received during the execution of the script

Navigate to https://microsoft.com/devicelogin, paste in the code from PowerShell and click Next.

A confirmation message is displayed stating that the device is registered in Azure Arc

Validation in Azure Portal:

Now, when navigating to Azure Arc in the Azure portal, we can see that the GCP VM is onboarded and its status shows as Connected.

Managing the machine in Azure Arc:

To manage the machine from Azure, click on the machine in the overview blade.

The overview blade lets you add tags to the machine, as well as manage access and apply policies to it.

Inference:

Azure Arc acts as a centralized cloud-native control plane to inventory, organize, and enforce policies for IT resources wherever they are. With an introduction to Azure Arc, an organization can enjoy the full benefits of managing its hybrid environment and it also offers the ability to innovate using cloud technologies.

Migrating large workloads to AWS and implementing best practice IaaS


Customer: a leading provider of cloud-based software solutions

About the customer:

Being part of the highly regulated life sciences industry, the customer recognized the benefits of the cloud a long time ago and was one of the very first life sciences solution vendors to deliver SaaS solutions to its customers. Currently, that momentum continues as the business goes “all-in on AWS” by moving its entire cloud infrastructure to the AWS platform.

As their platform and solutions are powered entirely by the AWS cloud, the business wanted to find ways to reduce costs, strengthen security, and increase the availability of the existing AWS environment. Powerup’s services were enlisted with the following objectives:

  1. Cost optimization of the existing AWS environment
  2. Deployment automation of Safety infrastructure on AWS
  3. Architecture and deployment of a centralized log management solution
  4. Architecture review and migration of the client’s customer environment to AWS, including a POC for Database Migration Service (DMS)
  5. Evaluation of DevOps strategy

Proposed Solution

1. Cost optimization of the existing AWS environment

Here are the three steps followed by Powerup to optimize costs:
● Addressing idle resources by proper server tagging, translating into instant savings
● Right-sizing recommendation for instances after a proper data analysis
● Planning Amazon EC2 Reserved Instances (RI) purchasing for resized EC2 instances to capture long-term savings

Removing idle/unused resource clutter would fail to achieve its desired objective in the absence of a proper tagging strategy. Tags created to address wasted resources also help to properly size resources by improving capacity and usage analysis. After right-sizing, committing to Reserved Instances gets a lot easier. For example, the Powerup team was able to draw up a comparative price chart for the running EC2 and RDS instances based on On-Demand vs RI costs and share a detailed analysis explaining the RI pricing plans. By following these steps, Powerup estimated a 30% reduction in the customer’s monthly AWS spend.
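As a small illustration of the tagging step, the boto3 sketch below lists running EC2 instances that are missing a given cost-allocation tag; the tag key "Project" is only an example and would follow whatever tagging strategy is agreed with the customer.

import boto3

REQUIRED_TAG = "Project"  # example cost-allocation tag key
ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print("Untagged instance:", instance["InstanceId"], instance["InstanceType"])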

2. Deployment automation of Safety infrastructure on AWS

In AWS, the client has leveraged key security features like CloudWatch and CloudTrail to closely monitor the traffic and the actions performed at the API level. Critical functions like identity & access management, encryption, and log management are also handled using AWS features.
Amazon GuardDuty, an ML-based tool that continuously monitors for threats and adds industry intelligence to the alerts it generates, is used for 24/7 monitoring, along with Amazon Inspector, a vulnerability detection tool. To ensure end-to-end cybersecurity, they have deployed an Endpoint Detection and Response (EDR) solution, Trend Micro Deep Security. All their products are tested for security vulnerabilities using the IBM AppScan tool and manual code review, following OWASP Top 10 guidelines and NIST standards to ensure confidentiality, integrity, and availability of data.

As part of deployment automation, Powerup used CloudFormation (CF) and/or Terraform templates to automate infrastructure provisioning and maintenance. In addition, Powerup’s team simplified all the modules used to perform day-to-day tasks to make them re-usable for deployments across multiple AWS accounts. Logs generated for all provisioning tasks were stored in a centralized S3 bucket. The business had requested the incorporation of security parameters and tagging files, along with tracking of user actions in CloudTrail.

3. Architecture and deployment of centralized Log Management solution

Multiple approaches for log management were shared with the customer. Powerup and the client team agreed on the “AWS CloudWatch Event Scheduler/SSM Agent” approach. Initially, the scope was the creation of a log management system for the Safety infrastructure account; later, it was expanded to other accounts as well. The Powerup team built the solution architecture for log management using the ELK stack and CloudWatch. Scripts were written so that they could be reused across the client’s customers on the AWS cloud. Separate scripts were written for Linux and Windows machines using shell scripting and PowerShell. No values were hard-coded in the scripts: all inputs come from a CSV file containing the instance ID, log path, retention period, backup folder path and S3 bucket path.
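The actual scripts were written in shell and PowerShell; purely as an illustration of the CSV-driven approach, a simplified Python equivalent might look like the sketch below. The file name and column names are assumptions chosen to match the inputs listed above.

import csv
import os
import boto3

s3 = boto3.client("s3")

with open("log_config.csv", newline="") as f:  # hypothetical input file
    for row in csv.DictReader(f):
        # Assumed columns: InstanceID, LogPath, RetentionPeriod, BackupFolder, S3Bucket
        log_path = row["LogPath"]
        if not os.path.isfile(log_path):
            continue
        key = row["InstanceID"] + "/" + os.path.basename(log_path)
        s3.upload_file(log_path, row["S3Bucket"], key)
        print("Uploaded", log_path, "to", row["S3Bucket"] + "/" + key)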

Furthermore, Live hands-on workshops were conducted by the Powerup team to train the client’s Operations team for future implementations.

4. Architecture review and migration of the client’s environment to AWS including POC for Database Migration Service (DMS)

The client’s pharmacovigilance software and drug safety platform is now powered by the AWS Cloud, and currently, more than 85 of their 200+ customers have been migrated, with more to quickly follow. In addition, the client wanted Powerup to support the migration of one of its customers to AWS. Powerup reviewed and validated the designed architecture, and the infrastructure was deployed as per the approved architecture. Once the architecture was deployed, Powerup used the AWS Well-Architected Framework to evaluate it and provide guidance on implementing designs that scale with the customer’s application needs over time. Powerup also supported the application team for the production go-live on AWS infrastructure, along with deploying and testing the DMS POC.

5. Evaluation of DevOps strategy

Powerup was responsible for evaluating DevOps automation processes and technologies to suit the products built by the client’s product engineering team.

Cloud platform

AWS.

Technologies used

EC2, RDS, CloudFormation, S3.

Benefits

Powerup equipped the client with efficient and completely on-demand infrastructure provisioning within hours, along with built-in redundancies, all managed by AWS. Eliminating idle and over-allocated capacity, RI management, and continuous monitoring enabled them to optimize costs. They successfully realized 30% savings on overlooked AWS assets, resulting in an overall 10 percent optimization in AWS cost. In addition, the client can now schedule and automate application backups, scale up databases in minutes by changing the instance type, and have instances automatically moved to healthy infrastructure in less than 15 minutes in case of downtime, giving customers improved resiliency and availability. The client continues to provide a globally unified, standardized solution on the AWS infrastructure-as-a-service (IaaS) platform to drive compliance and enhance the experiences of all its customers.

FEDERATE WEB APPLICATION WITHOUT A SAML PROVIDER


Written by Madan Mohan K, Sr. Cloud Engineer; Contributor: Karthik T, Principal Cloud Architect, Powerupcloud Technologies

“Illusion appeals creativity Authentication validates Reality”-Active Directory Federation Service.

Problem Statement:

Our prime client has their AD and ADFS on-premises. The client needed to federate their web application and get the user attributes without opting for a SAML provider.

Solution:

With a SAML provider, the configuration would have been much easier. This blog covers how we cracked it without one.

ADFS

Active Directory Federation Services (ADFS) is an enterprise-level identity and access management service provided by Microsoft. ADFS runs as a separate service, hence any application that supports WS-Federation and Security Assertion Markup Language (SAML) can leverage this federated authentication service.

In this article, we are going to use Active Directory and ADFS configured on an Azure VM. The configuration of AD and ADFS on an Azure VM will be explained in the consecutive blogs.

This article uses several Azure services to set up the client's on-premises equivalent and demonstrate how it works live. The Azure services used are as follows:

  • Azure Virtual Network (VNET) configuration
  • Azure Virtual Machine (VM) provisioning
  • Active Directory configuration on Azure VM
  • Active Directory Federation Services (ADFS) configuration in Azure VM

Introduction

This article aims at explaining the configuration of AD and ADFS on an Azure VM. This typically involves the following steps, carried out from the Azure Management Portal:

  • Set up Azure Virtual Network
  • Provision Azure VM

Once the VM provisioning is done, the following services need to be configured inside the Azure VM:

  • Active Directory Domain Services
  • Active Directory Federation Services
  • IIS

Technology Stack

1. Windows Server 2012 R2 Datacenter

2. ADFS 3.0

3. IIS

4. Microsoft Azure subscription.

5. Self-Signed SSL Certificates

Configuring AD, ADFS and SSL

As there are multiple blogs on the Internet about configuring Active Directory, we will focus on the ADFS and SSL configuration and only briefly cover what we did in AD.

  • Configured Active Directory Domain Services
  • Promoted it to a domain controller
  • The domain used is cloud.Powerupcloud.com

Configure SSL certificate

Active Directory Federation Services (ADFS) uses the HTTPS protocol, and certificates provisioned from a certificate authority enable it to work over HTTPS. We opted to use a self-signed certificate to serve the certificates needed. To create a self-signed certificate, there are a couple of options; we used the New-SelfSignedCertificateEx script:

PS C:\Scripts> . .\New-SelfSignedCertificateEx.ps1

PS C:\Scripts> New-SelfSignedCertificateEx -Subject "CN=powerup.southindia.cloud.azure.com" -EKU "Server Authentication" -KeyUsage 0xa0 -StoreLocation "LocalMachine" -ProviderName "Microsoft Strong Cryptographic Provider" -Exportable

ADFS Configuration:

In this section, ADFS configuration is explained

To begin, open Server Manager and click on “Add Roles and Features”. In server roles window, select the option “Active Directory Federation Services”. Click Next to continue.

On the Features section, select nothing and click Next.

On the ADFS section select nothing and click Next.

In the Confirmation page click on Next.

On the success of Installation, you will be prompted to Configure the Federation service on the server.

Click on Configure the Federation service on the server.

Check to Create the first federation server in the federation server farm.

Select the privileged account for the setup to get executed.

Import the SSL certificate which is generated earlier and enter the Federation service display name.

Use the privileged account for the Service Account.

Check the create a database on this server using Windows Internal Database.

Select nothing on Review option and Click Next

The Prerequisites section displays the status of successful validation.

ADFS configured successfully.

Once ADFS is configured we need to configure the Relying Party Trusts.

In the Relying Party Trusts tab, right-click and add a new Relying Party Trust.

Check the Enter data about the relying party manually

Pass on the Display name as required

ADFS profile needs to be checked.

Click Next on the Configure Certificate section.

Check Enable support for the SAML 2.0 WebSSO protocol and pass the https://powerupcloud.southindia.cloudapp.azure.com URL.

In the trust identifier pass the https://powerupcloud.southindia.cloudapp.azure.com URL.

Check on I do not want to configure multi-factor authentication settings for this relying party trust currently.

Allow all the users to access this relying party.

Click on Next in the Ready to Add Trust section.

Click on Finish.

In the add Transform Claim select the Send LDAP Attributes as Claims.

Pass on the claim rule name, select the Active Directory attribute store and define the claim attributes as in the below screenshot.

Adjusting the trust settings

You still need to adjust a few settings on your relying party trust. To access these settings, select Properties from the Actions sidebar

  • In the Endpoints tab, click on add SAML to add a new endpoint.
  • For the Endpoint type, select SAML Logout.
  • For the Binding, choose POST.
  • For the Trusted URL, create a URL using:
  • The web address of your AD FS server
  • The AD FS SAML endpoint you noted earlier
  • The string ?wa=wsignout1.0
  • The URL should look something like this:

https://powerupcloud.southindia.cloudapp.azure.com/adfs/ls/?wa=wsignout1.0

  • Confirm your changes by clicking OK on the endpoint.

There you go: the application is up and gets redirected through ADFS for authentication.

VSTS Project on Azure Web-app and VM


Written by Karthik T, Principal Cloud Architect, Powerupcloud Technologies

This is a four-step process to create a VSTS project on Azure web-apps and VMs.

  1. Setup
  2. Code
  3. Build
  4. Release

1.Setup :

Steps for creating a VSTS project for hosting the application on Azure Web-App and VM.

> VSTS ACCOUNT:

Sign Up for the account and create a new project. We used an agile process and Git version control.

If you want to know about the other project process available take a look at this link

> WEBHOST:

We are going to host a static webpage through Azure app services.

> WEB BROWSER:

VSTS is browser-based, so get your favourite modern browser.

> VIRTUAL MACHINE:

Create an Azure VM with an OS of your choice (here we used Windows).

> VISUAL STUDIO:

Install Visual studio in your local machine from here.

> IMPORTING THE REPOSITORY:

There are a lot of ways to import the repo. Here we used the “import a repository” option and the Git link to import the code into VSTS.

2.CODE:

Now the code will be pushed in your VSTS project.

Code will be pushed like this in the master branch

> BRANCH:

We can create multiple branches for each developer and set permissions on each branch for merging code into the master branch.

> PULL REQUEST:

Once a developer writes code in a new branch, say ‘dev’, and wants to push it into the master branch for the build, they use the pull request option.

In the pull request, we can send the code for approval and review before merging it into the master branch. We created two branches, committed a few lines of code and merged them into the master branch (optionally, we can send the code for approval so reviewers can review it before it is merged into their branch). We also used Visual Studio to commit/push/sync/fetch to the VSTS Git repo.

3.BUILD :

We have created a private Git repository with the Agile process and committed the code, a sample JSP app, to the repo.

For continuous integration, we used a Maven task to build the WAR file and the Publish Artifact task to store it.

We used a Maven task with the pom.xml file to build (optionally, we can schedule the build or trigger it when code is committed to the repo).

In the Triggers tab, enable Continuous integration to automate the build process once the code is pushed to the master branch.

4.RELEASE:

We use the published artifact (WAR file) from the build stage, and for continuous deployment we created two pipelines:

  • The stored artifact file is deployed to the Azure web app on Apache Tomcat, with a load test task added.
  • The stored artifact file is deployed to a Windows VM hosted on Azure.

> Release pipeline for Azure Webapp

From the build artefact, we are deploying to the release environment with two tasks that will host the JSP app.

The two tasks we used are Deploy Azure App Service and Quick Web Performance Test.

Once the release is successful, we can check the logs to see how the tasks were executed.

> Release pipeline for Azure VM

We use the build artefact to deploy to Windows VM which is hosted in Azure.

The task we used here is Azure VMs File Copy, which copies the build artifact onto the Azure-hosted VM.

VM credentials should be provided to the “Azure file copy” task.

Once the release is successful, we can check the logs to see how the tasks were executed.

PROS AND CONS:

PROS

  1. Works with nearly every language and technology.
  2. VSTS includes project management processes like Agile, Scrum, and CMMI.
  3. Because it is cloud-based, it is very easy to access from anywhere.
  4. Users need not worry about server maintenance.

CONS

  1. It does not support integration with SharePoint or Project Server.

CONCLUSION:

The website was hosted successfully in Azure WebApp and Azure VM.

WebApp:

VM:

Hope the above was helpful. Happy to hear from you.

Case Study: A dual migration across AWS & Azure.


Written by Arun Kumar, Sr. Cloud Engineer and Ayush Ragesh, Cloud Solutions Architect, Powerupcloud Technologies

Problem Statement

The client is a global pioneer and leader providing end-to-end gifting solutions and manages 260Mn+ transactions a year. With over 15,000 points of sale across 16 countries, they needed to migrate their platform from an on-premises datacenter to the cloud. In addition, they needed to be ready and scalable for one of the largest e-commerce sales in the country.

Powerup’s approach:

In consultation with the client, it was decided to host their primary DC on AWS and DR on Azure.

Architecture Diagram

Architecture Description

  1. The applications were spread across multiple VPCs which are connected to each other via VPC peering – different VPCs for UAT, Management, Production, etc.
  2. VPN tunnels are deployed from the client’s Bangalore location to AWS and Azure environment.
  3. Multiple Load Balancers are used to distribute traffic between different applications.
  4. NAT Gateways are used for Internet access to private servers.
  5. Cisco Firepower/Palo Alto as the firewall.
  6. CloudFormation for automated deployments on AWS.
  7. CloudTrail for logging, KMS for encryption of EBS volumes and S3 data, and Config for change management.
  8. Route53 is used as the DNS service.
  9. Guard Duty and Inspector will be configured for additional security.
  10. DR site will be deployed on Azure.

Outcomes

* Powerupcloud was able to successfully migrate the core platform for their largest client onto AWS.

* The client was able to achieve the required scalability, flexibility and performance.

* The e-commerce sale day was a big success, with zero downtime.

Lessons Learned

The customer initially wanted to use a Cisco Firepower firewall for IDS/IPS, DDoS protection, etc. SSL offloading needed to be done at the application server, so we decided to use Network Load Balancers. Instance-based routing was used so that the source IP addresses are available at the application server. The firewall needed 3 Ethernet cards for ‘trust’, ‘untrust’ and ‘management’.

In Cisco, by default, eth0 is mapped to management and this cannot be changed. With instance-based routing, the request always goes to eth0, while the request should go to ‘untrust’.

So we finally had to use a Palo Alto firewall, where we can remap eth0 to ‘untrust’.

Cloned Linux virtual machine not fetching an IP from the DHCP client when using a sysprepped image


Written by Nirmal Prabhu, Cloud Engineer, Powerupcloud Technologies

Initial Problem Statement

An internal error occurred while processing the diagnostics profile of VM “test” when using the encrypted Storage Account “test storage” as the diagnostics storage account.

We came across this weird error while cloning a virtual machine by sysprepping it. At first sight, storage appeared to be the culprit.

Initial Findings:

Considering storage to be the culprit, we tried to mitigate the issue focusing on storage. While analyzing the storage, we made the following observations:

When we create the VM using PowerShell without specifying which Storage Account to use as the Diagnostics Account, Azure automatically uses the next available Storage Account. In our case, the encrypted “test storage” account was chosen by default.

We also found that our account doesn’t have sufficient permissions on the key vault used by the reported Storage Account “testdiagstorage”.

The above observations confirmed that the permission issue was the cause of the reported error.

To confirm the same, we have done the below:

Using an encrypted Diagnostics Storage Account:

Created a new Storage Account in the same location and resource group.

Created a new Key Vault.

Generated a new Key.

Encrypted the Storage Account using the new Key Vault and Key.

Then we have created a new VM using the same image from the portal.

During the deployment from the portal, we chose the new encrypted storage account to be the diagnostic account.

The deployment completed successfully, and we got no errors regarding the diagnostics account.

The Real Troubler

But then came the real devil which troubled us in connecting with the virtual machine.

We scrutinized the VM using the serial console, a new feature in Azure that allows bidirectional serial console access to your virtual machines.

We figured out that the issue was with the DHCP client, which prevented us from accessing the cloned virtual machine over SSH. Digging further, we observed the following issues:

§ Cloned VM was not accessible over SSH.

§ VM isn’t even able to ping from other Azure VMs.

§ VM doesn’t have a private IP obtained from the attached network adapter.

While troubleshooting this issue from the OS level, we found that the DHCP Client isn’t started.

Once we started the DHCP Client Manual by running the “dhclient”, the eth0 obtained an IP address and could communicate with the VM normally over the obtained IP.

However, this didn’t solve the issue, as once the VM is rebooted the DHCP client does not start automatically as it should.

Solution:

To mitigate this issue, we used a workaround: automating the DHCP client to start at boot.

Run crontab -e.

Add the below line:

@reboot /sbin/dhclient

Once we have done that, the VM was running the DHCP client on boot, which ensures that the VM obtains an IP address on each reboot.

We found a similar symptom reported against Ubuntu 18.04, which suggests this may be a known issue in our case.

End:)

Adopting Azure Update Management Solution


Written by Karthik T, Principal Cloud Architect, Powerupcloud Technologies

This blog helps you understand how to use the Azure Update Management solution to manage updates for your Windows and Linux computers.

1.0 Introduction

This document provides details on the adoption of an Update Management solution in Azure.

2.0 Audience

· IT Infrastructure and Server Management Team

3.0 Why do we need Patch Management?

· Plugging any security vulnerabilities in the OS or Installed Application Software

· Proactive protection against newer threats and malware

· Fixing existing platform/software bugs

· Performance and stability Improvements

· Addressing known Issues

· Meet Compliance requirements (like SOX)

4.0 What is Update Management Solution in Azure

The Update Management solution in Azure Automation allows you to manage operating system updates for your Windows and Linux computers deployed in Azure, in on-premises environments, or with other cloud providers. We can quickly assess the status of available updates on all agent computers and manage the process of installing required updates for servers.

5.0 Update Management Solution in Azure

Computers managed by update management use the following configurations for performing assessment and update deployments:

o Microsoft Monitoring Agent for Windows or Linux

o PowerShell Desired State Configuration (DSC) for Linux

o Automation Hybrid Runbook Worker

o Microsoft Update or Windows Server Update Services for Windows computers

The following diagram shows a conceptual view of the behaviour and data flow with how the solution assesses and applies security updates to all connected Windows Server and Linux computers in a workspace.

After a computer performs a scan for update compliance, the agent forwards the information in bulk to Log Analytics. On a Windows computer, the compliance scan is performed every 12 hours by default. In addition to the scan schedule, the scan for update compliance is initiated within 15 minutes if the Microsoft Monitoring Agent (MMA) is restarted, prior to update installation, and after update installation. With a Linux computer, the compliance scan is performed every 3 hours by default, and a compliance scan is initiated within 15 minutes if the MMA agent is restarted.

Note

Update Management requires certain URLs and ports to be enabled to properly report to the service.

We can deploy and install software updates on computers that require them by creating a scheduled deployment. Updates classified as Optional are not included in the deployment scope for Windows computers; only required updates are. The scheduled deployment defines which target computers receive the applicable updates, either by explicitly specifying computers or by selecting a computer group based on log searches of a particular set of computers. You also specify a schedule to approve the updates and designate a time period within which they are allowed to be installed. Updates are installed by runbooks in Azure Automation. You cannot view these runbooks, and they don’t require any configuration. When an update deployment is created, it creates a schedule that starts a master update runbook at the specified time for the included computers. This master runbook starts a child runbook on each agent that performs the installation of the required updates.

6.0 Supported Client types

6.1 Unsupported Client Types

7.0 Solution Components

This solution consists of the following resources that are added to your Automation account and directly connected agents

7.1 Hybrid Worker Groups

After we enable this solution, any Windows computer directly connected to the Log Analytics workspace is automatically configured as a Hybrid Runbook Worker to support the runbooks included in this solution. Each Windows computer managed by the solution is listed under the Hybrid worker groups page as a System hybrid worker group for the Automation account, following the naming convention Hostname FQDN_GUID. You cannot target these groups with runbooks in your account; otherwise, they fail. These groups are only intended to support the management solution.

However, you can add the Windows computers to a Hybrid Runbook Worker group in your Automation account to support Automation runbooks, as long as you are using the same account for both the solution and the Hybrid Runbook Worker group membership. This functionality was added in version 7.2.12024.0 of the Hybrid Runbook Worker.

7.2 Data Collection


The solution collects information about system updates from Linux agents and initiates the installation of required updates on supported distros.

7.3 Collection Frequency

For each managed Windows computer, a scan is performed twice per day. Every 15 minutes, the Windows API is called to query for the last update time to determine whether the status has changed; if so, a compliance scan is initiated. For each managed Linux computer, a scan is performed every 3 hours.

It can take anywhere from 30 minutes up to 6 hours for the dashboard to display updated data from managed computers.

8.0 Manage Updates for multiple Machines

We can use Update Management to manage updates and patches for Windows and Linux virtual machines. From the Azure Automation account, we can:

· Onboard virtual machines.

· Assess the status of available updates.

· Schedule the installation of required updates.

· Review deployment results to verify that updates were applied successfully to all virtual machines for which update management is enabled.

9.0 Enable update management for Azure virtual machines

In the Azure portal, open your Automation account and select Update management.

At the top of the window, select Add Azure VM.

Select a virtual machine to onboard. The Enable Update Management dialog box appears. Select Enable to onboard the virtual machine. Once onboarding is complete, Update management is enabled for your virtual machine.

10 View computers attached to your automation account

After enabling update management for machines, we can view their information by clicking Computers. Computer information such as Name, Compliance, Environment, OS Type, Critical and Security Updates, Other Updates, and Update Agent Readiness is available.

Computers that have recently been enabled for update management may not have been assessed yet; the compliance state for those computers shows as Not assessed. Here is the list of values for compliance state (a small classification sketch follows the list):

· Compliant — Computers that are not missing critical or security updates.

· Non-compliant — Computers that are missing at least one critical or security update.

· Not assessed — The update assessment data has not been received from the computer within the expected timeframe. For Linux computers, in the last three hours and for Windows computers, in the last 12 hours.
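These three states can be thought of as the simple classification sketched below, using the default scan timings mentioned earlier; this is purely illustrative and not the service's actual code.

from datetime import datetime, timedelta, timezone

def compliance_state(missing_critical_or_security, last_assessment, os_type="Windows"):
    """Classify a machine the way the Update Management dashboard describes it."""
    stale_after = timedelta(hours=12 if os_type == "Windows" else 3)
    if datetime.now(timezone.utc) - last_assessment > stale_after:
        return "Not assessed"
    return "Compliant" if missing_critical_or_security == 0 else "Non-compliant"

# Example: a Linux machine assessed 5 hours ago shows as Not assessed.
print(compliance_state(0, datetime.now(timezone.utc) - timedelta(hours=5), "Linux"))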

To view the status of the agent, click the link in the UPDATE AGENT READINESS column. This opens the Hybrid Worker page that shows the status of the Hybrid Worker. The following image shows an example of an agent that has not been connected to Update Management for an extended amount of time.

11 Schedule an update deployment

To install updates, schedule a deployment that follows a release schedule and service window. We can choose which update types to include in the deployment. For example, we can include critical or security updates and exclude update rollups.

Schedule a new update deployment for one or more virtual machines by selecting Schedule update deployment at the top of the Update management dialog box. In the New update deployment panel, specify the following:

o Name: Provide a unique name to identify the update deployment.

o OS Type: Select Windows or Linux.

11.1 Machines to update

Select the virtual machines that you want to update. The readiness of the machine is shown in the UPDATE AGENT READINESS column. This lets you see the health state of the machine before scheduling the update deployment.

11.2 Update classification

Select the types of software that the update deployment will include. For a description of the classification types, see update classifications. The classification types are:

· Critical updates

· Security updates

· Update rollups

· Feature packs

· Service packs

· Definition updates

· Tools

· Updates

11.3 Updates to exclude

Selecting Updates to exclude opens the Exclude page. Enter the KBs or package names to exclude.

11.4 Schedule settings

We can accept the default date and time, which is 30 minutes after the current time. Or you can specify a different time. You can also specify whether the deployment occurs once or on a recurring schedule. To set up a recurring schedule, select the Recurring option under Recurrence.

11.5 Maintenance window (minutes)

Specify the period of time for when you want the update deployment to occur. This setting helps ensure that changes are performed within your defined service windows.

After finishing configuring the schedule, return to the status dashboard by selecting the Create button. The Scheduled table shows the deployment schedule that you just created.

Warning: For updates that require a restart, the virtual machine will restart automatically.

12 View results of an update deployment

After the scheduled deployment starts, we can see the status for that deployment on the Update deployments tab in the Update management dialog box. If the deployment is currently running, its status is In progress. After the deployment finishes successfully, it changes to Succeeded. If one or more updates fail in the deployment, the status is Partially failed.

To see the dashboard for an update deployment, select the completed deployment.

The Update results pane shows the total number of updates and the deployment results on the virtual machine. The table to the right gives a detailed breakdown of each update and the installation results. Installation results can be one of the following values:

· Not attempted: The update was not installed because insufficient time was available, based on the defined maintenance window.

· Succeeded: The update succeeded.

· Failed: The update failed.

To see all log entries that the deployment created, select All logs.

To see the job stream of the runbook that manages the update deployment on the target virtual machine, select the Output tile.

To see detailed information about any errors from the deployment, select Errors.

Azure App Gateway with Custom Image Scale Set- using ARM template


Written by Madan Mohan K, Sr. Cloud Engineer; Contributor: Karthik T, Principal Cloud Architect

“Security, a Tom and Jerry game” – Azure Application Gateway with WAF

“Higher the availability, double the glory in production” – Azure Virtual Machine Scale Set

  1. An Application Gateway with a scale set remains a boon for hosting scalable, high-performance, secure and robust applications on Azure. This Layer 7-capable load balancer provides application-level routing and load balancing services, and it helps in achieving a scalable, highly available web front end with multi-URL routing in Azure.
  2. On the other front, virtual machine scale sets (VMSS) help in achieving improved cost management, higher availability and higher fault tolerance. VMSS supports Azure Load Balancer (Layer 4) and Azure Application Gateway (Layer 7) traffic distribution. A VMSS supports up to 300 virtual machine instances if it uses its own custom image; otherwise it supports up to 1,000 VMs.

Why use this template only?

  1. This template can be leveraged for both Windows and Linux flavours: in the case of unmanaged disks the image URI passed is a source VHD, and in the case of a managed disk the URI passed is an image ID.
  2. This article aims at deploying a virtual machine scale set at the backend of an Application Gateway with a custom image URI using an ARM template, for both Windows and Linux flavours.

Architecture

The architecture comprises of following Azure Components:

§ Virtual Machine scale set
§ Application Gateway with WAF
§ Application server (golden image)
§ Jump box

Problem Statement

A client who runs an e-commerce site had trouble with their application hosted on Azure. On reviewing their environment, we found they used an Application Gateway with individual virtual machines at the backend, which resulted in a lack of scalability and higher cost.

Solution

To mitigate the issue, a virtual machine scale set was brought into the picture, which helped in achieving higher availability, better performance, scalability and improved cost management.

ARM Template

  1. To achieve this scenario, we opted to use an ARM template in which we incorporated the provisioning of a scale set at the backend of an Application Gateway with a custom image URI.
  2. As network performance remains one key factor, accelerated networking was taken into consideration while developing this JSON template. The template aims at solving the problem for both Windows and Linux servers as well as for both managed and unmanaged disks.
  3. In the network profile section, accelerated networking has been incorporated, which helps in improving networking performance.

4. The scale set is the key part of this article: it references the source URI path of the golden image disks for the instances to be created at the backend of the Application Gateway.

5. The web application firewall is incorporated to mitigate security risks and to detect and prevent DDoS attacks.

6. SSL certificate inclusion for the Application Gateway is also taken care of in the template; passing the cert data and password attaches SSL to the Application Gateway.

7. HTTP and HTTPS listeners are added in the listener section.

8. HTTP-to-HTTPS routing rules have been taken care of within the routing policies.

9. In this section, the diagnostic settings for a Windows server have been used; the diagnostic settings differ between Windows and Linux.

If you need any help with the above JSON scripts, please do reach out to us.

End 🙂

Running Python on Azure Functions — Time Trigger, External Libraries


Written By: Ranjeeth Kuppala, Former CTO, Powerupcloud Technologies

I wanted to deploy a time-triggered Python Azure Function, but it looks like it’s not directly available. The console just shows HTTP and Queue triggers at the moment.

But you can still use a Timer trigger: just go ahead and choose HttpTrigger – Python function, give your function a name and create it. Once created, go to “Integrate”, delete the HTTP trigger and then add your own new trigger – Timer.

Now that the function is created, you will need external libraries like pandas if you are wrangling data. If you pip install from the Kudu console (it’s generally available at https://{yourwebappname}.scm.azurewebsites.net), you may run into permission issues. At the time of writing this post, Azure Function apps have a Windows backend and a Linux backend is not available yet. So the error may look like this:

python -m pip install pandas
File "D:\Python27\lib\shutil.py", line 303, in move
    os.unlink(src)
WindowsError: [Error 5] Access is denied: 'd:\\python27\\lib\\site-packages\\pip-1.5.6.dist-info\\description.rst'

It is basically a permissions issue. On a normal Windows machine, you could simply resolve this by relaunching the cmd or PowerShell executable as an administrator. You can’t do that on Azure Function Apps (which is basically Azure App Service in the backend). After wasting an hour trying to fix it and trying to install Python to a different path via NuGet etc., thanks to this link, I managed to solve it using venv.

From the Kudu console, open a Powershell debug session and navigate to your function’s folder

cd D:\home\site\wwwroot\{yourfunction}

Then create a new virtual environment

python -m virtualenv yourenv

Activate your env

yourenv/scripts/activate.bat

You can now upgrade pip and install the libraries you need. You may even use requirements.txt. But make sure that you are using the python.exe from your venv.

.\python.exe -m pip install --upgrade pip
.\python.exe -m pip install pandas

Finally, come back to your function and make sure the path is updated

import sys, os.path
# The appended path must match the virtual environment created above (here 'yourenv').
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), 'yourenv/Lib/site-packages')))
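With the path in place, a quick check inside the function (a minimal sketch, assuming pandas was installed into the yourenv environment as above) confirms the packages resolve from the virtual environment:

import pandas as pd
print("pandas", pd.__version__, "loaded from", pd.__file__)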