DevOps for Databases using Liquibase, Jenkins and CodeCommit


Written by Arun Kumar, Associate Cloud Architect at Powerupcloud Technologies

As part of infrastructure modernization, we move complete architectures to microservices with CI/CD deployment methods, and these techniques suit any application deployment. Most of the time, however, database deployments remain manual efforts. Applications and databases are growing day by day; in particular, database sizes and operational activities are getting complex, and maintaining all of this is a tedious task for a database administrator.

For enterprise organizations this is even more complex when it comes to managing multiple DB engines with hundreds of databases or multi-tenant databases. Below are a few of the challenges that DBAs currently face, all of which are manual activities:

  • Creating or modifying stored procedures, triggers and functions in the database.
  • Altering tables in the database.
  • Rolling back any database deployment.
  • Developers need to wait for the DBA to make any new change in the database, which increases the TAT (turnaround time) to test new features, even in non-production environments.
  • Security concerns: granting database access for every change and maintaining that access is a huge overhead.
  • With vertical scaling and different DB engines, the databases are difficult to manage.

One of our enterprise customers faced all the above challenges. To overcome them, we explored various tools and came up with a strategy to use Liquibase for deployments. Liquibase supports the standard SQL databases (SQL Server, MySQL, Oracle, PostgreSQL, Redshift, Snowflake, DB2, etc.), and the community is improving its support for NoSQL databases; it now supports MongoDB and Cassandra. Liquibase helps us with versioning, deployment and rollback.

With our DevOps experience, we have integrated two open-source tools, Liquibase and the Jenkins automation server, for continuous deployment. This solution can be implemented on any cloud platform or on-premise.

Architecture

For this demonstration we will be using the AWS platform. MS SQL is our main database; let's see how to set up a CI/CD pipeline for the database.

Prerequisites:

  • Set up a sample repo in CodeCommit.
  • Jenkins server up and running.
  • Notification service configured in Jenkins.
  • RDS MS SQL instance up and running.

Set up the AWS CodeCommit repo:

To create a code repo in AWS CodeCommit, refer to the following link.

https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-https-windows.html
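If you prefer the command line, a minimal sketch of cloning the new repo over HTTPS could look like the following (the region and the repo name db-mssql are examples; use your own values and the HTTPS Git credentials generated in IAM):

git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/db-mssql
cd db-mssql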

Integration of CodeCommit with Jenkins:

To trigger the webhook from AWS CodeCommit, we need to configure AWS SQS and SNS. Please follow the link below:

https://github.com/riboseinc/aws-codecommit-trigger-plugin
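As a rough sketch of what that setup involves (topic/queue names, region and account id below are examples; the plugin documentation above covers the full steps, including the SQS queue policy that allows the SNS topic to deliver messages):

aws sns create-topic --name codecommit-db-mssql-topic
aws sqs create-queue --queue-name codecommit-db-mssql-queue
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:111111111111:codecommit-db-mssql-topic \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:111111111111:codecommit-db-mssql-queue
# The SNS topic is then added as a trigger on the CodeCommit repository (all events),
# and the SQS queue is configured in the Jenkins plugin settings.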

For the webhook connection from CodeCommit to Jenkins, we need to install the AWS CodeCommit Trigger Plugin:

Select -> Manage Jenkins -> Manage Plugins -> Available ->  AWS CodeCommit Trigger Plugin.

  • In Jenkins, create a new freestyle project.
  • In Source Code Management, add your CodeCommit repo URL and credentials.

Jenkins -> Manage Jenkins -> Configure System -> AWS CodeCommit Trigger SQS Plugin.

Installation and configuration of Liquibase:

sudo apt update
sudo apt install openjdk-8-jdk
java -version
wget https://github.com/liquibase/liquibase/releases/download/v3.8.1/liquibase-3.8.1.tar.gz
sudo mkdir -p /opt/liquibase
sudo mv liquibase-3.8.1.tar.gz /opt/liquibase/
cd /opt/liquibase/
sudo tar -xvzf liquibase-3.8.1.tar.gz

Based on your database, you need to download the JDBC driver (jar file) into the same Liquibase directory. Go through the following link:

https://docs.microsoft.com/en-us/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-ver15
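For example, assuming you have downloaded and extracted the Microsoft JDBC driver archive from the link above (the source path below is only an example), copy the jar next to the Liquibase binary and verify the setup; the deployment scripts later reference this jar on the classpath:

sudo cp ~/sqljdbc_7.4/enu/mssql-jdbc-7.4.1.jre8.jar /opt/liquibase/
export PATH=/opt/liquibase/:$PATH
liquibase --version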

Integration of Jenkins with Liquibase:

During the deployment, Jenkins will SSH into the Liquibase instance, so we need to generate an SSH key pair for the Jenkins user and add the public key to the Linux user on the Liquibase server. Here we use the ubuntu user on the Liquibase server; a minimal sketch of the key exchange is shown below.
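A minimal sketch of that key exchange (the Liquibase server address is a placeholder):

# On the Jenkins server, generate a key pair for the jenkins user
sudo -u jenkins mkdir -p /var/lib/jenkins/.ssh
sudo -u jenkins ssh-keygen -t rsa -b 4096 -N "" -f /var/lib/jenkins/.ssh/id_rsa
# Copy the public key to the ubuntu user on the Liquibase server
sudo -u jenkins ssh-copy-id ubuntu@<liquibase-server-ip>
# Verify passwordless login
sudo -u jenkins ssh ubuntu@<liquibase-server-ip> hostname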

Prepare the deployment scripts on the Liquibase server.

For single database server deployment: singledb-deployment.sh

#!/bin/bash
# Script Name: singledb-deployment.sh
set -x

# Copy the changelog template and replace the value after 'change:' with the Git commit hash
GIT_COMMIT=$(cat /tmp/gitcommit.txt)
sudo cp /opt/db/temp/temp.sql /opt/db/db-script.sql
old=$(sudo grep 'change' /opt/db/db-script.sql | cut -d ":" -f 2)
sudo sed -i "s/$old/$GIT_COMMIT/g" /opt/db/db-script.sql

# The first line of the test file holds <RDS endpoint>:<DB name>; the rest is the SQL to deploy
dburl=$(head -1 /home/ubuntu/test | cut -d ":" -f 1)
dbname=$(head -1 /home/ubuntu/test | cut -d ":" -f 2)
sed -i -e "1d" /home/ubuntu/test
sudo sh -c 'cat /home/ubuntu/test >> /opt/db/db-script.sql'

export PATH=/opt/liquibase/:$PATH
echo "DB_URL is $dburl"
echo "DB_Names is $dbname"
for prepare in $dbname; do
  liquibase --driver=com.microsoft.sqlserver.jdbc.SQLServerDriver \
    --classpath="/opt/liquibase/mssql-jdbc-7.4.1.jre8.jar" \
    --url="jdbc:sqlserver://$dburl:1433;databaseName=$prepare;integratedSecurity=false;" \
    --changeLogFile="/opt/db/db-script.sql" \
    --username=xxxx --password=xxxxx update
done

# Clean up the working files
sudo rm -rf /opt/db/db-script.sql /home/ubuntu/test /tmp/gitcommit.txt

For multiple database server deployment: multidb-deployment.sh

#!/bin/bash
# Script Name: multidb-deployment.sh
set -x

# Copy the changelog template and replace the value after 'change:' with the Git commit hash
GIT_COMMIT=$(cat /tmp/gitcommit.txt)
sudo cp /opt/db/temp/temp.sql /opt/db/db-script.sql
old=$(sudo grep 'change' /opt/db/db-script.sql | cut -d ":" -f 2)
sudo sed -i "s/$old/$GIT_COMMIT/g" /opt/db/db-script.sql

# Split the test file at the #----# marker: test00 holds the server list, test01 the SQL
csplit -sk /home/ubuntu/test '/#----#/' --prefix=/home/ubuntu/test
sed -i -e "1d" /home/ubuntu/test01   # drop the #----# marker line
sudo sh -c 'cat /home/ubuntu/test01 >> /opt/db/db-script.sql'
export PATH=/opt/liquibase/:$PATH

# Each line of test00 holds <RDS endpoint>:<DB name>
while IFS=: read -r db_url db_name; do
  echo "########"
  echo "db_url is $db_url"
  echo "db_name is $db_name"
  for prepare in $db_name; do
    liquibase --driver=com.microsoft.sqlserver.jdbc.SQLServerDriver \
      --classpath="/opt/liquibase/mssql-jdbc-7.4.1.jre8.jar" \
      --url="jdbc:sqlserver://$db_url:1433;databaseName=$prepare;integratedSecurity=false;" \
      --changeLogFile="/opt/db/db-script.sql" \
      --username=xxxx --password=xxxx update
  done
done < /home/ubuntu/test00

# Clean up the working files
sudo rm -rf /opt/db/db-script.sql /home/ubuntu/test* /tmp/gitcommit.txt
  • In your Jenkins job, use an "Execute shell" build step to run the commands (a sample build step is sketched below).
  • The test file comes from your CodeCommit repo and contains the SQL queries and the SQL Server information.
  • Below is the example job for multiple database servers, so we trigger the multidb-deployment.sh file. If you are using a single SQL Server deployment, use singledb-deployment.sh.
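A rough sketch of such an "Execute shell" build step (the server address is a placeholder; GIT_COMMIT is provided by the Jenkins Git plugin):

# Ship the commit id and the test file from the workspace to the Liquibase server
echo "$GIT_COMMIT" > gitcommit.txt
scp gitcommit.txt ubuntu@<liquibase-server-ip>:/tmp/gitcommit.txt
scp test ubuntu@<liquibase-server-ip>:/home/ubuntu/test
# Run the deployment script remotely
ssh ubuntu@<liquibase-server-ip> 'bash /home/ubuntu/multidb-deployment.sh'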

Prepare a sample SQL database for the demo:

CREATE DATABASE employee;
use employee;
CREATE TABLE employees
( employee_id INT NOT NULL,
  last_name VARCHAR(30) NOT NULL,
  first_name VARCHAR(30),
  salary VARCHAR(30),
  phone BIGINT NOT NULL,
  department VARCHAR(30),
  emp_role VARCHAR(30)
);
INSERT into [dbo].[employees] values ('1', 'kumar', 'arun', '1000000', '9999998888', 'devops', 'architect');
INSERT into [dbo].[employees] values ('2', 'hk', 'guna', '5000000', '9398899434', 'cloud', 'engineer');
INSERT into [dbo].[employees] values ('3', 'kumar', 'manoj', '900000', '98888', 'lead', 'architect');

Deployment 1: (for a single SQL Server deployment)

We are going to insert a new row using the CI/CD pipeline.

  • db-mssql: CodeCommit repo.
  • test: SQL Server information (RDS endpoint:DB name) and the SQL that we need to deploy (a sample file is shown below).
  • Once we commit our code to the CodeCommit repository, the webhook triggers the deployment.
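A hypothetical test file for this deployment might look like the following (the RDS endpoint and row values are placeholders): singledb-deployment.sh parses the first line as <RDS endpoint>:<DB name> and appends the remaining lines to the changelog created from temp.sql.

mydb.xxxxxxxxxx.us-east-1.rds.amazonaws.com:employee
INSERT into [dbo].[employees] values ('4', 'doe', 'jane', '750000', '9876543210', 'qa', 'engineer');

Committing this file is then the usual Git workflow:

git add test
git commit -m "Insert new employee row"
git push origin master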

Check the SQL Server to verify that the row was inserted:

Deployment 2: (for multiple SQL Servers deploying the same SQL statements)

  • db-mssql: CodeCommit repo.
  • test: SQL Server information (RDS endpoint:DB name) and the SQL that we need to deploy.
  • #----#: this is the separator between the servers and the SQL queries, so don't remove it (a sample file is shown below).
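A hypothetical test file for a multi-server deployment (endpoints and values are placeholders): multidb-deployment.sh treats every line above the #----# separator as an <RDS endpoint>:<DB name> pair and deploys the SQL below the separator to each of them.

server1.xxxxxxxxxx.us-east-1.rds.amazonaws.com:employee
server2.xxxxxxxxxx.us-east-1.rds.amazonaws.com:employee
#----#
INSERT into [dbo].[employees] values ('5', 'smith', 'john', '800000', '9123456789', 'data', 'analyst');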

Deployment 3: (for multiple SQL Servers deploying the same stored procedure)

  • db-mssql: CodeCommit repo.
  • test: SQL Server information (RDS endpoint:DB name) and the SQL that we need to deploy.
  • #----#: this is the separator between the servers and the SQL queries, so don't remove it (a sample file is shown below).
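For a stored procedure, the SQL part of the test file simply carries the procedure definition; the example below is invented for illustration. Depending on how your temp.sql changeset header is written, multi-statement procedures may also need splitStatements:false or a custom endDelimiter on the changeset line.

server1.xxxxxxxxxx.us-east-1.rds.amazonaws.com:employee
server2.xxxxxxxxxx.us-east-1.rds.amazonaws.com:employee
#----#
CREATE PROCEDURE dbo.usp_GetEmployeesByDepartment @department VARCHAR(30)
AS
BEGIN
    SELECT employee_id, first_name, last_name, emp_role
    FROM dbo.employees
    WHERE department = @department;
END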

Notification:

  • Once the job is executed, you will get an email notification.

Liquibase Limitations:

  • Comments inside a function or stored procedure will not get updated in the database.

Conclusion:

Here we used Liquibase on AWS, together with RDS, CodeCommit and other AWS services. But you can use the same method to configure an automated deployment pipeline, with versioning and rollback, for databases on AWS RDS, Azure SQL Database, Google Cloud SQL or Snowflake using the open-source tools Liquibase and Jenkins.

How to configure Sensu Remediation?


Written by Nirmal Prabhu, Former cloud engineer, Powerupcloud Technologies

When we have a big or complex infrastructure to monitor using Sensu, something might go wrong, and an engineer has to fix the issue without any impact. Even though we have several checks, it is difficult to act on critical checks without losing time, and that delay can lead to serious production loss. To overcome this, we can use Sensu remediation, which takes care of the simple, routine tasks we would otherwise perform manually during critical alerts.

For instance, if a service suddenly stops on a server, we receive an alert from Sensu, log in to that server, and restart the service, which is time consuming. Rather than going all the way to the server, we can use a remediation check.

When the process stops, the remediation check automatically executes the command that we provide in the check on the client server.

Working:

  1. The Sensu server publishes a check request for check-apache-proc.
  2. The Sensu client that was listening for check-apache-proc receives the request, runs the command, and sends the output to the Sensu server.
  3. The Sensu server evaluates the handler list, discovering the default and remediator handlers.
  4. The Sensu server runs the remediator handler with the given output, reads the different values, and, if all the conditions pass, issues the remediation check via the API.
  5. The Sensu client receives the check request from the Sensu API for remediate-apache-proc and runs the remediation command.

This workflow involves three concepts: the standard check, the remediator handler, and the unpublished remediation check.

Client configuration:

The remediator uses the subscriptions list together with the hostname, so it is necessary to subscribe each host to its own hostname.

The client config file will look like the sketch below.
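A minimal client.json sketch (the name, address, and subscriptions are placeholders for your own host):

{
  "client": {
    "name": "webserver-01",
    "address": "10.0.1.10",
    "subscriptions": ["APACHE", "webserver-01"]
  }
}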

The standard check:

In Sensu, there are two ways of defining a standard check: server-side, where the client subscribes to the check and runs it when the Sensu server asks for it, or client-side, where the client defines the check as standalone and schedules it itself.
In order to apply remediation, you will need to set your remediation checks with "standalone": false. We will also define the alert check as a server check, but you can try setting that one to standalone and it will work as long as the server evaluates the handler properly.

Here we are using a check that keeps the Apache process up, in /etc/sensu/conf.d/checks/check-apache-process.json. This is the standard check file before configuring remediation:

{
  "checks": {
    "Process-apache": {
      "handlers": ["mailer"],
      "command": "/etc/sensu/plugins/check-process.rb -p apache2",
      "interval": 60,
      "occurrences": 3,
      "refresh": 3600,
      "subscribers": ["APACHE"]
    }
  }
}

The remediator handler:

When the client receives a check request from the server, it executes the given command through the handlers and sends the output back to the server. On the server, we can program special actions to evaluate the input received from the client.

Create a sensu.rb file:

vi /etc/sensu/handlers/sensu.rb

Paste the remediation handler script contents into the sensu.rb file.

Then copy the below remediator handler configuration and paste it under /etc/sensu/conf.d/handlers/remediator.json:

{
  "handlers": {
    "remediator": {
      "command": "/etc/sensu/handlers/sensu.rb",
      "type": "pipe",
      "severities": ["critical"]
    }
  }
}

Now, add the below lines to the check-apache-process.json file:
/etc/sensu/conf.d/checks/check-apache-process.json

{
  "checks": {
    "Process-apache": {
      "handlers": ["mailer", "remediator"],
      "command": "/etc/sensu/plugins/check-process.rb -p apache2",
      "interval": 60,
      "occurrences": 3,
      "refresh": 3600,
      "subscribers": ["APACHE", "Hostname"],
      "standalone": false,
      "remediation": {
        "remediate-check-apache-process": {
          "occurrences": ["1+"],
          "severities": [2]
        }
      }
    }
  }
}

The remediation check:

This remediation check can't be a standalone check and can't be scheduled by the client. The remediator handler sends a check request to the Sensu API, and the Sensu API activates it.
Copy the below into the remediate-check-apache-process.json file under:

/etc/sensu/conf.d/checks/remediate-check-apache-process.json

{
  "checks": {
    "remediate-check-apache-process": {
      "command": "sudo service apache2 restart",
      "handlers": ["remediator"],
      "subscribers": ["Hostname"],
      "standalone": false,
      "publish": false
    }
  }
}

That's it. From now on, whenever Apache stops, we will receive an alert mail from Sensu and, in the meantime, the given command "sudo service apache2 restart" will be executed on the client machine.

By default, the sensu user doesn't have permission to run the given command. To grant it, run visudo as root and paste in the below line:

sensu ALL=(ALL:ALL) NOPASSWD: ALL

Now, you have successfully configured your Sensu remediation.

FreeIPA Integration with Jenkins: Part II


Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies.

In the previous part, we showed how you can install the FreeIPA tool for user management on an Ubuntu EC2 server. This article consists of the following sections:

  • FreeIPA users integration with Jenkins
  • Providing Jenkins access to a specific FreeIPA user

FreeIPA Users Integration with Jenkins

Install the LDAP plugin through the Jenkins console: Manage Jenkins → Manage Plugins → Available → search for LDAP Plugin and install it.

Now configure global security for FreeIPA. Go to Manage Jenkins → Configure Global Security → Access Control. Select LDAP and specify the LDAP credentials for your server as shown in the screenshot below:

Test the LDAP settings before saving the configuration. Once authentication is successful, save the settings.

Log out and log in with any existing FreeIPA user. The FreeIPA users will be able to log in to Jenkins.

Provide Jenkins access to a specific FreeIPA user

In FreeIPA, we will create many users with different privileges. In this section, we will show how you can grant access to the Jenkins console to a specific user. In our case, we have three existing users in FreeIPA.

After implementing the steps provided in the above section, select Matrix-based security under Authorization.

We have given admin privileges to "jenkins_user". You can add a particular user group too. Save the settings. Now only "jenkins_user" will have access to the Jenkins console; for other users, it will show Access Denied.

Refer to the video below:

Hope you found it useful. Keep following us for further parts. Happy user management!

Microsoft Workloads


Customer: Sompo

Customer Engagement

Sompo International was established in March 2017 with the acquisition by Sompo Holdings, Inc. (Sompo) of Endurance Specialty Holdings Ltd. (Endurance) and its wholly owned operating subsidiaries. Sompo's core business encompasses one of the largest property and casualty insurance groups in the Japanese domestic market. Seeking opportunities to grow its business globally, Sompo acquired Endurance, a global provider of property and casualty insurance and reinsurance, to effectively become its international operation.

Problem Statement

Sompo International wanted to migrate two of their web services from on-premise to AWS Elastic Beanstalk. Both are .NET-based applications and use Microsoft SQL Server as the backend. The customer wanted to use RDS for the database and AD authentication for SQL Server access. Sompo International wanted to work with a strong cloud consulting partner like Powerup to help them migrate the applications onto AWS, manage those applications 24x7, and then build DevOps capabilities on the cloud so that Sompo could concentrate on application development.

Proposed Solution

➢ AWS accounts will be created and managed using AWS Organizations according to customer requirements.
➢ Appropriate users, groups and permissions will be created using the Identity and Access Management (IAM) service.
➢ IAM roles will be created to access different AWS services.
➢ The network will be set up using the VPC service. Appropriate CIDR ranges, subnets, route tables etc. will be created.
➢ NAT gateways will be deployed in 2 public subnets in 2 different Availability Zones of AWS.
➢ A VPN tunnel will be set up from the customer location to the AWS data center.
➢ 2 Application Load Balancers will be created for the 2 applications being migrated.
➢ The Route53 service will be used to create the necessary DNS records.
➢ An open-source DNS forwarding application called Unbound will be deployed across 2 AZs for high availability. Unbound allows resolution of requests originating from AWS by forwarding them to the on-premise environment, and vice versa.
➢ 2 Elastic Beanstalk environments will be created for the 2 applications, and the .NET code will be uploaded and then deployed on them.
➢ Windows Server 2016 R2 is used to deploy the application & AD.
➢ Both applications will be deployed across 2 Availability Zones and auto-scaling will be enabled for high availability and scalability.
➢ The MS SQL database will be deployed on the RDS service of AWS and the multi-AZ feature will be enabled for high availability. The database will be replicated from on-premise to AWS by taking the latest SQL dump and restoring it, enabling Always-On replication between the databases, or using the AWS DMS service. RDS SQL authentication will be used instead of Windows authentication.
➢ An ElastiCache Redis cluster will be deployed for storing the user sessions. The multi-AZ feature will be turned on for high availability.
➢ All application logs will be sent to Splunk. VPC peering will be enabled between the 2 VPCs.
➢ The CloudWatch service will be used for monitoring and SNS will be used to notify the users in case of alarms, metrics crossing thresholds etc.
➢ All snapshot backups will be taken regularly and automated based on best practices.
➢ All server sizing was initially based on the current sizing and utilization shared by the customer. Based on the utilization reports in CloudWatch, servers were scaled up or down.
➢ A NAT gateway is used for instances in the private network to have access to the internet.
➢ Security groups are used to control traffic at the VM level. Only the required ports will be opened, and access allowed from required IP addresses.
➢ Network Access Control Lists (NACLs) are used to control traffic at the subnet level.
➢ SSL certificates will be deployed on the load balancers to protect data in transit.
➢ CloudTrail will be enabled to capture all the API activities happening in the account.
➢ VPC flow logs will be enabled to capture all network traffic.
➢ ALB access logs will be enabled.
➢ All the logs will be sent to AWS GuardDuty for threat detection and for identifying malicious activities and account compromise.
➢ AWS Config will be enabled, and all the AWS-recommended Config rules will be created.

Additional Details

AWS Services used:

EC2, EBS, ALB, RDS, Route53, S3, CloudFormation, CloudWatch, CloudTrail, IAM, Config, GuardDuty, Systems Manager, Auto Scaling, Transit Gateway

3rd Party Solutions Used:

Unbound, Okta

Windows Stack used:

➢ .NET Applications
➢ IIS Web Server
➢ RDP Gateway
➢ SQL Server Enterprise Database
➢ Active Directory

Outcomes of Project

➢ Powerup was able to set up an automated landing zone for Sompo.
➢ Sompo was able to meet the required high availability & scalability.
➢ Sompo was able to integrate the migrated applications with the on-premise legacy systems seamlessly.

DevOps & Provisioning Automation


Customer: A business consulting and technology integration services firm

Problem Statement

The customer is a business consulting and technology integration services firm responding with agile solutions to the challenges of regulated industries. Partnering with Intellivision, the customer is building a multi-tenant SaaS-based application on Azure.

Proposed Solution

Powerupcloud helped deploy multiple flavors of the application on Azure App Service, Azure Container Service, and Azure IaaS for developers, testers, and end customers using end-to-end automated provisioning scripts.

Cloud Platform

Microsoft Azure.

Technologies used

Azure App Service, Azure IaaS, Automation Runbooks, DocumentDB, Azure Backup Services, Azure OMS, Azure CLI, PowerShell, OrientDB, Node.js, Nginx.

Optimization of AWS platform, DB consulting services, managed services and DevOps support


A global logistics group

Problem Statement

A global logistics group with operations in over 14 countries, including Singapore, India, Australia, the US, China, Brazil, Africa, and APAC, has its data center on AWS with more than 70 servers and databases powering close to 20 applications. With global users from more than 14 countries using these applications, the availability of the applications is critical to ensure smooth operations and freight movement. The customer was seeking an able partner to come in and effectively manage their cloud-based data center, including the databases.

Proposed Solution

Powerup helped the customer to continuously optimize their AWS environment and provided database consulting services. Powerup managed the data center running on AWS, which hosts some of their highly critical enterprise workloads like Oracle Financials, MS SQL Server Enterprise and more. Powerup also provided 24x7 cloud managed services and DevOps support for the customer and acted as the integration point for 6 different application development vendors.

Cloud platform

AWS.

Technologies used

Windows Server, RHEL, Oracle, MS SQL Server, RDP.

AWS to Azure migration & DevOps


Customer: A gaming & simulation software company

Problem Statement

The customer is a gaming and simulation software company that creates experiential solutions aiming to transform organizations into a modern workforce. Its infrastructure was a multi-tenant SaaS application running on AWS, and they wanted to move to Azure and achieve the same level of DevOps processes they were used to on AWS.

Proposed Solution

Powerup successfully moved the customer's infrastructure from AWS to Azure and, while doing so, chose relevant services to modernize the application. The entire infrastructure provisioning was automated using Azure ARM templates, including spinning up servers, installing necessary components, and configuration management via Ansible, with CI/CD pipelines built using Spinnaker and Jenkins. Blue/green deployments and Azure scale sets were also implemented.

Cloud Platform

Microsoft Azure.

Technologies used

Azure ARM, Scale Sets, App Service, Blob Storage, Spinnaker, Ansible, MySQL, PostgreSQL, Ruby on Rails, Nginx, Azure IaaS.