Cloud Report Card: 3 Months of COVID-19 Impact


Siva S, CEO of Powerupcloud, Global Cloud Practice Head at LTI

So, here we are. May 2020. It has been three months since the COVID-19 pandemic started impacting the global economy and, with it, business functions everywhere. This has not been a smooth ride for governments, businesses, entrepreneurs, and, most importantly, people. I have been actively speaking to CIOs and CEOs of global businesses with operations in the USA, UK, Germany, France, UAE, South Africa, India, Singapore, Australia, and New Zealand. Business sentiment and decisions seem to follow a similar pattern irrespective of the country, the government, or the business itself. That is the effect the COVID-19 pandemic has had so far on all of us.

In this article, I will cover the major trends we are witnessing in public cloud adoption and the change in priorities, based on our customer and OEM interactions.

  1. Cloud Cost Optimization: The highest demand we see is for optimizing cloud spend at businesses that are already on the cloud. Irrespective of their spending, be it $0.5M or $20M per year, reducing cloud spend is a key focus for CIOs. The ‘Save Now, Pay Later’ program we launched, which helps large businesses save cloud costs with the help of our cloud governance platform, has seen massive uptake with our global customers due to the nature of the program. The gain-share model, where our success fee is a percentage of the cost savings we bring to the client, creates a win-win for both the vendor and the client and seems to be exactly what businesses need at this hour.
  2. Remote Workforce Enablement: This is the second area where we see high demand from our enterprise customers. Be it migration to virtual desktops on the cloud, launching a fully scalable virtual contact center on the cloud, or adopting virtual collaboration platforms, CIOs are keen to explore technologies that will improve the productivity of their employees working from home. With most businesses deciding to have their employees work from home till the end of 2020, enabling remote workers with technology that helps them work better is at the top of the agenda. Check out our Remote Workforce Enablement program.
  3. Data Analytics on Cloud: While most of the businesses I interact with have stopped their big-bang data transformation exercises, they do not want to stop adopting the cloud for their data environments. We are witnessing a trend wherein customers identify their business-critical applications and move them to the cloud, including the data layer, for two reasons: 1. to improve the availability and reliability of the data layer powering these applications; 2. to feed the data lake in real time, which allows them to run ML models on the fly. This trend is likely to continue for the next 12 months. The best part is, by the end of those 12 months, businesses that follow this approach will have most of their critical applications running on the cloud with a centralized data approach.
  4. Large Scale Cloud Migrations: Plans to migrate entire data centers to the cloud are seeing a mixed response. I am interacting with a couple of CIOs of large manufacturing businesses in the EU and USA who are going ahead with their plans to migrate completely to public cloud platforms. These companies have workloads on the order of 15,000+ servers and 1,000+ applications. Their argument, a valid one, is to do the entire migration while manufacturing activity is at its lowest due to the COVID-19 impact. But this represents just 10% of our total migration pipeline.
  5. Continuous Cloud Adoption: Most other industries are adopting a ‘continuous cloud adoption’ model, where they subscribe to a ‘Cloud POD’ (a 6-member team comprising cloud architects, cloud engineers, and a project manager) for a 12-month period. The Cloud POD works with the customer to identify key applications and migrate them to the cloud sequentially. This allows the customer to continue their cloud adoption, gives their key applications better reliability, and helps the CFO move to an OPEX model on an incremental basis. My vote goes to this approach, as it brings more flexibility to CIOs: they can pause the migration activity when the situation demands and use the Cloud POD for security and governance implementations or cost optimization instead.
  6. AI/ML Adoption: Many artificial intelligence solutions that used to be a hard sell to businesses all these years have seen voluntarily increased adoption in the last 3 months, and we expect this trend to continue for the next 2 years. Chatbots, for example, have seen a 200% increase in demand in this period. We are seeing requirements ranging from customer support chatbots to internal employee engagement HRMS chatbots that ease the dependency on human support to fulfill end-user needs. Banks, insurance companies, eCommerce players, OTT platforms, healthcare organizations, and educational institutes are the ones that most often feature in our chatbot requirements pipeline. AI+RPA is another area of focus, where businesses are implementing AI and RPA technologies, either in combination or standalone, to automate some of their business processes.

The bottom line is, for almost all businesses, cash conservation is the primary focus. But at the same time, they cannot afford to completely stop their digital transformation journey. The key is to balance these two things so that they are better prepared when the global economy comes back on track. Businesses that take aggressive decisions at either end of the spectrum run a greater risk of failure. It is completely fine to take an ‘ambiguous’ approach and keep things in balance instead of boiling the ocean.

Cash conservation should be your primary focus. But don’t stop your digital transformation journey.

Detect highly distributed web DDoS on CloudFront from botnets


Author: Niraj Kumar Gupta, Cloud Consulting at Powerupcloud Technologies.

Contributors: Mudit Jain, Hemant Kumar R and Tiriveedi Srividhya


CloudWatch Metrics

Metrics are abstract data points indicating performance of your systems. By default, several AWS services provide free metrics for resources (such as Amazon EC2 instances, Amazon EBS volumes, and Amazon RDS DB instances).

CloudWatch Alarms

AWS CloudWatch Alarm is a powerful service provided by Amazon for monitoring and managing our AWS services. It provides us with data and actionable insights that we can use to monitor our application/websites, understand and respond to critical changes, optimize resource utilization, and get a consolidated view of the entire account. CloudWatch collects monitoring and operational information in the form of logs, metrics, and events. You can configure alarms to initiate an action when a condition is satisfied, like reaching a pre-configured threshold.
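As a minimal sketch of the kind of static, threshold-based alarm described above (the alarm name, distribution ID, and threshold here are illustrative, not from this article), the helper below only builds the request parameters for CloudWatch's `put_metric_alarm` API:

```python
def static_alarm_params(alarm_name, distribution_id, threshold):
    """Build put_metric_alarm parameters for a fixed-threshold alarm
    on a CloudFront distribution's request count."""
    return {
        "AlarmName": alarm_name,
        "Namespace": "AWS/CloudFront",
        "MetricName": "Requests",
        "Dimensions": [
            {"Name": "DistributionId", "Value": distribution_id},
            {"Name": "Region", "Value": "Global"},
        ],
        "Statistic": "Sum",
        "Period": 300,            # evaluate in 5-minute windows
        "EvaluationPeriods": 3,   # 3 consecutive breaches trigger ALARM
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "TreatMissingData": "notBreaching",
    }

params = static_alarm_params("requests-too-high", "E2EXAMPLE123", 100000.0)
# In a real account: boto3.client("cloudwatch").put_metric_alarm(**params)
```

The dict is passed unchanged to `boto3.client("cloudwatch").put_metric_alarm(**params)`; keeping the builder pure makes it easy to review the alarm definition before creating it.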

CloudWatch Dashboard

Amazon CloudWatch Dashboards is a feature of AWS CloudWatch that offers basic monitoring home pages for your AWS accounts. It provides resource status and performance views via graphs and gauges, and dashboards can monitor resources in multiple AWS regions to present a cohesive, account-wide view.

CloudWatch Composite Alarms

Composite alarms enhance existing alarm capability by giving customers a way to combine multiple alarms logically. A single infrastructure event may generate multiple alarms, and the volume of alarms can overwhelm operators or mislead the triage and diagnosis process. When this happens, operators end up dealing with alarm fatigue or waste time reviewing a large number of alarms to identify the root cause. Composite alarms give operators the ability to add logic and group alarms into a single high-level alarm, which is triggered when the underlying conditions are met. This lets operators make intelligent decisions and reduces the time to detect, diagnose, and resolve performance issues when they happen.

What are Anomaly detection-based alarms?

Amazon CloudWatch Anomaly Detection applies machine-learning algorithms to continuously analyze system and application metrics, determine a normal baseline, and surface anomalies with minimal user intervention. You can use Anomaly Detection to isolate and troubleshoot unexpected changes in your metric behavior.
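To make this concrete, here is a minimal sketch (alarm name and distribution ID are illustrative) of the parameters for an anomaly detection alarm on CloudFront request count. CloudWatch expresses the learned baseline as an `ANOMALY_DETECTION_BAND(...)` metric-math expression, and the alarm compares the raw metric against that band via `ThresholdMetricId`:

```python
def anomaly_alarm_params(alarm_name, distribution_id, band_width=2):
    """Build put_metric_alarm parameters for an anomaly detection alarm:
    m1 is the raw metric, ad1 the learned band around its expected value."""
    return {
        "AlarmName": alarm_name,
        "Metrics": [
            {
                "Id": "m1",
                "MetricStat": {
                    "Metric": {
                        "Namespace": "AWS/CloudFront",
                        "MetricName": "Requests",
                        "Dimensions": [
                            {"Name": "DistributionId", "Value": distribution_id},
                            {"Name": "Region", "Value": "Global"},
                        ],
                    },
                    "Period": 300,
                    "Stat": "Sum",
                },
            },
            {
                "Id": "ad1",
                # band_width standard deviations around the expected value
                "Expression": f"ANOMALY_DETECTION_BAND(m1, {band_width})",
            },
        ],
        "ThresholdMetricId": "ad1",
        "EvaluationPeriods": 3,
        "ComparisonOperator": "GreaterThanUpperThreshold",
    }

params = anomaly_alarm_params("request-count-anomaly", "E2EXAMPLE123")
# In a real account: boto3.client("cloudwatch").put_metric_alarm(**params)
```

The same shape with `ComparisonOperator` set to `LessThanLowerThreshold` fits the cache-hit-rate alarm used later in this post.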

Why Composite Alarms?

  1. Simple alarms monitor single metrics. Limited by that design, many of the alarms triggered turn out to be false positives on further triage. This adds maintenance overhead and noise.
  2. Advanced use cases cannot be conceptualized and achieved with simple alarms.

Why Anomaly Detection?

  1. Static alarms trigger based on fixed upper and/or lower limits. There is no direct way to change these limits based on the day of the month, the day of the week, and/or the time of day. For most businesses these values change massively at different times of the day, especially for metrics driven by user behavior, like incoming or outgoing traffic. This leaves static alarms futile most of the time.
  2. It is an inexpensive, AI-based regression on the metrics.

Solution Overview

  1. Request count – monitored by anomaly-detection-based Alarm1.
  2. Cache hit rate – monitored by anomaly-detection-based Alarm2.
  3. Alarm1 and Alarm2 – monitored by composite Alarm3.
  4. Alarm3 – sends notification(s) to SNS2, which has a Lambda endpoint as its subscription.
  5. Lambda function – sends a custom notification with the CloudWatch Dashboard link to the distribution lists subscribed in SNS1.
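Step 3 of the overview, combining the two anomaly alarms, can be sketched with CloudWatch's `put_composite_alarm` API, whose `AlarmRule` is a boolean expression over existing alarm names (the names and ARN below are illustrative):

```python
def composite_alarm_params(alarm1_name, alarm2_name, sns2_topic_arn):
    """Build put_composite_alarm parameters: fire only when BOTH the
    request-count alarm and the cache-hit-rate alarm are in ALARM."""
    return {
        "AlarmName": "possible-distributed-web-ddos",
        "AlarmRule": f"ALARM({alarm1_name}) AND ALARM({alarm2_name})",
        "ActionsEnabled": True,
        # SNS2 has the Lambda function as its subscriber
        "AlarmActions": [sns2_topic_arn],
    }

params = composite_alarm_params(
    "Alarm1", "Alarm2", "arn:aws:sns:us-east-1:111122223333:SNS2")
# In a real account: boto3.client("cloudwatch").put_composite_alarm(**params)
```

The AND rule is what filters out the false positives a lone request-count spike would produce: a surge that is also served from cache never fires the composite alarm.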



  1. Enable additional CloudFront Cache-Hit metrics. This is applicable to all of the enterprise’s CloudFront CDN distributions.
  2. Configure an Anomaly Detection alarm on request count increasing by 10% (for example) over the expected average.
  3. Add an Anomaly Detection alarm on the CacheHitRate percentage going lower than a standard deviation of 10% (for example) of the expected average.
  4. Create a composite alarm for the above-mentioned alarms using a logical AND operation.
  5. Create a CloudWatch Dashboard with all required information in one place for quick access.
  6. Create a Lambda function. It is triggered by SNS2 (an SNS topic, created in the next step) when the composite alarm state changes to “ALARM”, and it executes to send custom notifications (EMAIL alerts) to the users via SNS1. The target ARN should be SNS1, where the users’ email IDs are configured as endpoints. In the message section, type the custom message to be sent to the user; here we have mentioned the CloudWatch dashboard URL.
  7. Create two SNS topics:
     • SNS1 – with EMAIL alerts to users [preferably to email distribution list(s)].
     • SNS2 – with a Lambda function subscription whose code sends custom notifications via SNS1, with links to CloudWatch dashboard(s). The same Lambda can pick different dashboard links, based on the specific composite alarm triggered, from a DynamoDB table mapping SNS target topic ARNs to CloudWatch dashboard links.
  8. Add a notification to the composite alarm so it notifies SNS2.
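A minimal sketch of that Lambda function (the dashboard URL and the environment variable name are placeholders, not values from this article): it parses the alarm state-change message that SNS2 delivers and publishes a custom email notification to SNS1.

```python
import json
import os

# Placeholder - replace with your own dashboard link (or a DynamoDB lookup)
DASHBOARD_URL = "https://console.aws.amazon.com/cloudwatch/home#dashboards:name=ddos-monitoring"

def build_message(event):
    """Parse the alarm notification delivered by SNS2 and compose the
    custom email body with the dashboard link."""
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    return (
        f"Composite alarm {alarm['AlarmName']} changed state to {alarm['NewStateValue']}.\n"
        f"Possible distributed web DDoS - review the dashboard: {DASHBOARD_URL}"
    )

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    boto3.client("sns").publish(
        TopicArn=os.environ["SNS1_TOPIC_ARN"],  # SNS1: topic with email subscribers
        Subject="CloudFront DDoS alert",
        Message=build_message(event),
    )
```

`AlarmName` and `NewStateValue` are standard fields in the JSON body CloudWatch alarms publish to SNS, so the same handler works for any composite alarm subscribed this way.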

Possible False Positives

  1. A new promotion drives traffic to newly launched pages that are not yet in the cache.
  2. A hotfix goes wrong during a spike in traffic.


This is one example of implementing a simple setup of composite alarms and anomaly-detection-based alarms to achieve advanced security monitoring. Our case is that these are very powerful tools that can be used to design far more advanced functionality.

Keycloak with Java and ReactJS


Written by Soumya. a, Software Developer at Powerupcloud Technologies.

In a common development environment we create login algorithms and maintain all the details in the project database. Maintaining all the client data and user information this way becomes a risk in large application development, so we use third-party software for maintaining login details and make the application more secure. Keycloak even helps in maintaining multiple applications with different or the same users.

Problem statement:

Maintaining login details in a local database can become a chore, so we use third-party software like Keycloak to make the application more secure.

How it works:

We add Keycloak to the working environment, add the required Keycloak details in code, add the application details in Keycloak, and run it in the working environment. Detailed steps for local and server environments are in this document.


Keycloak is an open source Identity and Access Management solution aimed at modern applications and services. It makes it easy to secure applications and services with little to no code.

Keycloak adds security to any application: with the Keycloak details added to the application, it gives various options such as simple login, login with username and password, and use of an OTP for login.

When we use Keycloak we need not maintain login details in our database; all the details are saved securely in the Keycloak server, and only the required details are stored in our database.

Different features of keycloak:

Single-Sign On

Users authenticate with Keycloak rather than with individual applications. This means that your applications don’t have to deal with login forms, authenticating users, or storing users. Once logged in to Keycloak, users don’t have to log in again to access a different application.

This also applies to logout. Keycloak provides single-sign out, which means users only have to log out once to be logged out of all applications that use Keycloak.

Kerberos bridge

If your users authenticate to workstations with Kerberos (LDAP or active directory) they can also be automatically authenticated to Keycloak without having to provide their username and password again after they log on to the workstation.

Identity Brokering and Social Login

Enabling login with social networks is easy to add through the admin console: it is just a matter of selecting the social network you want to add. No code or changes to your application are required.

Keycloak can also authenticate users with existing OpenID Connect or SAML 2.0 Identity Providers. Again, this is just a matter of configuring the Identity Provider through the admin console.

User Federation

Keycloak has built-in support to connect to existing LDAP or Active Directory servers. You can also implement your own provider if you have users in other stores, such as a relational database.

Client Adapters

Keycloak Client Adapters make it really easy to secure applications and services. We have adapters available for a number of platforms and programming languages, but if there is not one available for your chosen platform, don’t worry: Keycloak is built on standard protocols, so you can use any OpenID Connect Resource Library or SAML 2.0 Service Provider library out there.


You can also opt to use a proxy to secure your applications which removes the need to modify your application at all.

Admin Console

Through the admin console administrators can centrally manage all aspects of the Keycloak server.

They can enable and disable various features. They can configure identity brokering and user federation.

They can create and manage applications and services, and define fine-grained authorization policies.

They can also manage users, including permissions and sessions.

Account Management Console

Through the account management console users can manage their own accounts. They can update the profile, change passwords, and setup two-factor authentication.

Users can also manage sessions as well as view history for the account.

If you’ve enabled social login or identity brokering users can also link their accounts with additional providers to allow them to authenticate to the same account with different identity providers.

Standard Protocols

Keycloak is based on standard protocols and provides support for OpenID Connect, OAuth 2.0, and SAML.

Authorization Services

If role-based authorization doesn’t cover your needs, Keycloak provides fine-grained authorization services as well. This allows you to manage permissions for all your services from the Keycloak admin console and gives you the power to define exactly the policies you need.

How to use:

In Local:


  1. Download Keycloak.
  2. Run Keycloak on port 8085 (the default Keycloak port is 8080): ./bin/standalone.sh -Djboss.socket.binding.port-offset=5
  3. Log in with the master login that was registered while setting up Keycloak.

In Server:


  1. Install Java.
  2. Download Keycloak using wget.
  3. Run Keycloak on port 8085 (the default Keycloak port is 8080): ./bin/standalone.sh -Djboss.socket.binding.port-offset=5
  4. Add an SSL certificate for Keycloak.
  5. Log in with the master login that was registered while setting up Keycloak.

Steps for server :


Step 1: Log in to the Linux server.

Step 2: Download and extract Keycloak.

cd /opt/
wget https://downloads.jboss.org/keycloak/7.0.0/keycloak-7.0.0.tar.gz
tar -xvzf keycloak-7.0.0.tar.gz
mv keycloak-7.0.0 keycloak

Step 3: Create a user to run keycloak application

adduser techrunnr
chown techrunnr.techrunnr -R /opt/keycloak

Step 4: switch the user to newly created user

sudo su - techrunnr

Step 5: Goto the keycloak home directory.

cd /opt/keycloak

Step 6: Execute the commands below (via jboss-cli) so the application runs correctly behind a reverse proxy.

./bin/jboss-cli.sh 'embed-server,/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding,value=true)'
./bin/jboss-cli.sh 'embed-server,/socket-binding-group=standard-sockets/socket-binding=proxy-https:add(port=443)'
./bin/jboss-cli.sh 'embed-server,/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=redirect-socket,value=proxy-https)'

Step 7: Create a systemd unit to start and stop Keycloak.

cat > /etc/systemd/system/keycloak.service <<EOF
[Unit]
Description=Keycloak
After=network.target

[Service]
Type=idle
User=techrunnr
ExecStart=/opt/keycloak/bin/standalone.sh -b 0.0.0.0
TimeoutStartSec=600

[Install]
WantedBy=multi-user.target
EOF


Step 8: Reload the systemd daemon and start Keycloak.

systemctl daemon-reload
systemctl enable keycloak
systemctl start keycloak

Step 9: Create an admin user using the command line below.

./bin/add-user-keycloak.sh -u admin -p YOURPASS -r master

Configure Nginx reverse proxy

Step 1: Log in to the Nginx server and update the nginx.conf file.

upstream keycloak {
    # Use IP Hash for session persistence
    ip_hash;

    # List of Keycloak servers (example backend; point at your Keycloak host/port)
    server 127.0.0.1:8080;
}

server {
    listen 80;

    # Redirect all HTTP to HTTPS
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl http2;

    ssl_certificate /etc/pki/tls/certs/my-cert.cer;
    ssl_certificate_key /etc/pki/tls/private/my-key.key;
    ssl_session_cache shared:SSL:1m;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://keycloak;
    }
}

Once this is completed, restart the Nginx server for it to take immediate effect. Now access the given URL to reach the Keycloak server, and use the credentials you created in Step 9.

Steps for both server and local:

1. Create a new realm:

To create a new realm, complete the following steps:

  1. Go to http://localhost:8085/auth/admin/ and log in to the Keycloak Admin Console using the account you created in Install and Boot.
  2. From the Master drop-down menu, click Add Realm. When you are logged in to the master realm this drop-down menu lists all existing realms.
  3. Type demo in the Name field and click Create.

When the realm is created, the main admin console page opens. Notice the current realm is now set to demo. Switch between managing the master realm and the realm you just created by clicking entries in the Select realm drop-down menu.

2. Create new client:

To define and register the client in the Keycloak admin console, complete the following steps:

  1. In the top-left drop-down menu, select and manage the demo realm. Click Clients in the left side menu to open the Clients page.
  2. On the right side, click Create.
  3. Complete the fields as shown here:
  4. Click Save to create the client application entry.
  5. Change the Access type to confidential.
  6. Click the Installation tab in the Keycloak admin console to obtain a configuration template.
  7. Select Keycloak OIDC JSON to generate a JSON template. Copy the contents for use in the next section.

3. Role: 

Roles identify a type or category of user. Keycloak often assigns access and permissions to specific roles rather than individual users for fine-grained access control.

Keycloak offers three types of roles:

  • Realm-level roles are in the global namespace shared by all clients.
  • Client roles are in a namespace dedicated to that client.
  • A composite role is a role that has one or more additional roles associated with it. 

3.1. Create new role 

Roles->Add Role->Role name and Description(admin_role)

3.2. To add manage user permission for the newly created role

enable composite roles->client roles–>realm management->manage users.

3.3. This step is used to add manage user permission for default roles

Roles->Default Roles->Client Roles–>realm management–> add manage users.

4. Adding created role permission to your client

Client->select your client->scope->realmroles->add created role(admin_role).

5. Add permission to a new realm from master realm

5.1. Master realm->client->select your client(demo-realm)–>roles->manage-users(default false make it as true)

5.2. For making it true : Enable composite roles–>client-roles–>select your client(demo-realm)–>add manage users

6. For adding manage user permission for client in master

Master->roles->default roles->client roles–>select your client(demo-realm)–>add manage users.

7. Once the permissions are given the first user in the new realm should be created using which we can create multiple users from code(outside keycloak)

7.1. Select your realm(demo)–>Users->New user->details(add email id and name).


7.3.Role mappings->Client role->realm management->check if manage users is present else add

In ReactJS, for connecting Keycloak and adding authentication:

Keycloak.js file

JSON file for adding the Keycloak server details (JSON from the Installation tab):

{
  "realm": "realm name",
  "auth-server-url": "keycloak url",
  "ssl-required": "external",
  "resource": "client name",
  "credentials": {
    "secret": "secret key"
  },
  "confidential-port": 0
}

Keycloak functions in app.js

These functions are used to connect to the Java backend and authenticate the user.

async Keycloakfunc() {
  const keycloak = Keycloak("/keycloak.json");
  return keycloak
    .init({ onLoad: "login-required", promiseType: "native" })
    .then(authenticated => {
      if (authenticated) {
        if (sessionStorage.getItem("Loggined") == null) {
          return this.getUserId(keycloak.tokenParsed.preferred_username);
        } else {
          this.setState({ Loggined: true });
        }
      }
    });
}

async getUserId(user_name) {
  const endpoint = CURRENT_SERVER + "authenticateLogin";
  const bot_obj = {
    username: user_name
  };
  // POST the username to the Java service (axios assumed as the HTTP client)
  return axios.post(endpoint, bot_obj).then(res => {
    let data = res.data;
    if (data.status == "success") {
      // setting token locally to access throughout the application
      sessionStorage.setItem("authorization", data.response.token);
      this.setState({ isAuth: "true" });
      console.log("login success");
      localStorage.setItem("userId", JSON.stringify(data.response));
      localStorage.setItem("userName", user_name);
      localStorage.setItem("rolename", data.response.roleid);
      this.setState({ Loggined: true });
      sessionStorage.setItem("Loggined", true);
    } else if (data.status == "failed") {
      console.log("login failed");
    }
  });
}

In Java code:

Authenticate login service method:

This method checks whether the username / email ID is registered in our DB and authenticates it for the next steps.

// Spring service with injected repositories (annotations assumed)
@Service
public class LoginServices {

@Autowired
private UserMainRepo userMain;

@Autowired
private RoleUserPermissionRepo roleUserPermission;

@Autowired
private JwtTokenUtil jwtTokenUtil;

public String keyCloak(String object) {
String userId = null;
String roleid = null;
JSONObject responseJSON = new JSONObject();
JSONObject resultJSON = new JSONObject();

JSONObject obj = new JSONObject(object);
String username = (String) obj.get("username");

List<UserMain> authenticate = userMain.findByEmailId(username);

if (authenticate.isEmpty()) {
responseJSON.put("error", "user not found in DB");
resultJSON.put("status", "failed");
resultJSON.put("response", responseJSON);
} else {
List<RoleUserPermission> roleUserData = roleUserPermission.findByUserMainId(authenticate.get(0));
userId = roleUserData.get(0).getId();
roleid = roleUserData.get(0).getRoleMasterId().getId();

// Creating JWT token for security
JwtUserDetails userDetails = new JwtUserDetails();
final String token = jwtTokenUtil.generateToken(userDetails);
final String secretKey = "botzer";
String encryptedToken = AES.encrypt(token, secretKey);
responseJSON.put("token", encryptedToken);
responseJSON.put("userId", userId);
responseJSON.put("isLoggedIn", "true");
responseJSON.put("roleid", roleid);
resultJSON.put("status", "success");
resultJSON.put("response", responseJSON);
}
return resultJSON.toString();
}
}

User controller method:

All the details for creating a user, such as email ID, username, etc., are passed from the front end as JSON.

This method creates the user:

@RequestMapping(value = "/createUser", method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> createUser(@RequestBody UserMain json) throws ParseException {
String response = userservices.createUser(json);
return new ResponseEntity<String>(response, HttpStatus.OK);
}

User services method:

This method is used to create an agent:

public String createUser(@RequestBody UserMain user) {
try {
if (user.getId() == null) {
int count = getUserEmailIdCount(user.getEmailId());
if (count == 0) {
// email ID not registered yet: create the user in Keycloak first
String userresult = createAgentOnKeyclock(user);
return userresult;
} else {
return "Failure";
}
}
return "Failure";
} catch (Exception e) {
return "Failure";
}
}

This method checks whether the email ID already exists, via the size of the returned list:

public int getUserEmailIdCount(String emailId) {
List<UserMain> list = usermainRepo.findAllByEmailId(emailId);
return list.size();
}


This method is used to create the user ID and password in Keycloak.

public String createAgentOnKeyclock(UserMain usermain) {

try {
String serverUrl = Credentials.KEYCLOAK_SERVER_URL;
String realm = Credentials.KEYCLOAK_RELAM;
String clientId = Credentials.KEYCLOAK_CLIENT_ID;
String clientSecret = Credentials.KEYCLOAK_SECREAT_KEY;

Keycloak keycloak = KeycloakBuilder.builder()
.serverUrl(serverUrl)
.realm(Credentials.KEYCLOAK_RELAM)
.grantType(OAuth2Constants.PASSWORD)
.clientId(clientId)
.clientSecret(clientSecret)
.username(Credentials.KEYCLOAK_USER_NAME)
.password(Credentials.KEYCLOAK_USER_PASSWORD)
.build();

// Define user
UserRepresentation user = new UserRepresentation();
user.setEnabled(true);
user.setUsername(usermain.getEmailId());
user.setEmail(usermain.getEmailId());

// Get realm
RealmResource realmResource = keycloak.realm(realm);
UsersResource userRessource = realmResource.users();

Response response = userRessource.create(user);

System.out.println("response : " + response);

String userId = response.getLocation().getPath().replaceAll(".*/([^/]+)$", "$1");

System.out.println("userId : " + userId);

// Define password credential
CredentialRepresentation passwordCred = new CredentialRepresentation();
passwordCred.setTemporary(false);
passwordCred.setType(CredentialRepresentation.PASSWORD);
passwordCred.setValue(Credentials.DEFAULT_PASSWORD_AGENT_CREATION);

// Set password credential
userRessource.get(userId).resetPassword(passwordCred);

// Persist the user in the application DB
String userObj = createUser(usermain, userId);
return userObj;
} catch (Exception e) {
return "Failure";
}
}

Credential page to store keycloak credentials:

Keycloak client details (details from installation JSON from keycloak)

  • public static final String KEYCLOAK_SERVER_URL = "Keycloak server url";
  • public static final String KEYCLOAK_RELAM = "Realm name";
  • public static final String KEYCLOAK_CLIENT_ID = "Client name";
  • public static final String KEYCLOAK_SECREAT_KEY = "Secret key of new client";

First user in the new realm, with manage-users permission:

  • public static final String KEYCLOAK_USER_NAME = "First user emailId";
  • public static final String KEYCLOAK_USER_PASSWORD = "First user password";

Default password stored in Keycloak:

  • public static final String DEFAULT_PASSWORD_AGENT_CREATION = "Default password";
  • public static final String DB_SCHEMA_NAME = null;


Using Keycloak in the application makes it more secure and makes maintaining user login data easy.

Copying objects using AWS Lambda based on S3 events – Part 2 – date partition


Written by Tejaswee Das, Software Engineer, Powerupcloud Technologies


If you are here from the first part of this series on S3 events with AWS Lambda, you will find some more complex S3 object keys being handled here.

If you are new here, you may want to visit the first part, which covers the basics and the steps for creating your Lambda function and configuring S3 event triggers.

You can find the link to part 1 here:

Use Case

This is a similar use case, where we copy new files to a different location (bucket/path) while preserving the hierarchy; in addition, we will partition the files according to their file names and store them in a date-partitioned structure.

Problem Statement

Our Tech Lead suggested a change in the application logic, so the same application now writes files to the S3 bucket in a different fashion. The activity file for Ravi Bharti is written to source-bucket-006/RaviRanjanKumarBharti/20200406-1436246999.parquet.

Haha! Say our Manager wants to check activity files for Ravi Bharti date-wise, hour-wise, minute-wise, and… no, not seconds, we can skip that!

So we need to store them in our destination bucket as:

  • destination-test-bucket-006/RaviRanjanKumarBharti/2020-04-06/20200406-1436246999.parquet — Date wise
  • destination-test-bucket-006/RaviRanjanKumarBharti/2020-04-06/14/20200406-1436246999.parquet — Hour wise
  • destination-test-bucket-006/RaviRanjanKumarBharti/2020-04-06/14/36/20200406-1436246999.parquet — Hour/Min wise


| - AjayMuralidhar
| - GopinathP
| - IshitaSaha
| - RachanaSharma
| - RaviRanjanKumarBharti
		| - 20200406-143624699.parquet
| - Sagar Gupta
| - SiddhantPathak


Our problem is not that complex; a good quick play with split & join of strings should solve it. You can choose any programming language for this, but we continue using Python and the AWS Python SDK, boto3.

Python Script

Everything remains the same; we just need to change our script as per our sub-requirements. We will make use of the event dictionary to get the file name & path of the uploaded object.

source_bucket_name = event['Records'][0]['s3']['bucket']['name']

file_key_name = event['Records'][0]['s3']['object']['key']

  • destination-test-bucket-006/RaviRanjanKumarBharti/2020-04-06/20200406-1436246999.parquet

Format: source_file_path/YYYY-MM-DD/file.parquet

For a quick test, you can be lazy and hard-code it:

file_key_name = "RaviRanjanKumarBharti/20200406-1436246999.parquet"

Splitting file_key_name with '/' to extract the Employee (folder name) & the filename:

file_root_dir_struct = file_key_name.split('/')[0]

file_path_struct = file_key_name.split('/')[1]

Splitting the filename with '-' to extract the date part:

date_file_path_struct = file_key_name.split('/')[1].split('-')[0]

Since we know the string format is always the same, we can concatenate by position:

YYYY       -     MM      -    DD
string[:4] - string[4:6] - string[6:8]

date_partition_path_struct = date_file_path_struct[:4] + "-" + date_file_path_struct[4:6] + "-" + date_file_path_struct[6:8]

Since Python is all about one-liners, we will also solve this using a list comprehension:

n_split = [4, 2, 2]

date_partition_path_struct = "-".join([date_file_path_struct[sum(n_split[:i]):sum(n_split[:i+1])] for i in range(len(n_split))])

We get date_partition_path_struct as ‘2020-04-06’
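As an aside, the standard library's datetime can do the same reformatting and will also reject malformed stamps; this is an equivalent alternative, not the approach used in the rest of this post:

```python
from datetime import datetime

date_file_path_struct = "20200406"

# Parse the fixed-width date stamp, then re-emit it with dashes
parsed = datetime.strptime(date_file_path_struct, "%Y%m%d")
date_partition_path_struct = parsed.strftime("%Y-%m-%d")

print(date_partition_path_struct)  # 2020-04-06
```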

  • destination-test-bucket-006/RaviRanjanKumarBharti/2020-04-06/14/20200406-1436246999.parquet
time_file_path_struct = file_key_name.split('/')[1]

We further split this to separate out the file extension, reusing the same variable for simplicity:

time_file_path_struct = file_key_name.split('/')[1].split('-')[1].split('.')[0]

This gives us time_file_path_struct as '1436246999'

hour_time_file_path_struct = time_file_path_struct[:2]
  • destination-test-bucket-006/RaviRanjanKumarBharti/2020-04-06/14/36/20200406-1436246999.parquet

Similarly for minute

min_time_file_path_struct = time_file_path_struct[2:4]
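Putting the split-and-join steps above together, the whole transformation fits in one pure helper that is easy to unit-test without touching S3 (a sketch; build_destination_key is our own name, not from the post):

```python
def build_destination_key(file_key_name):
    """Turn 'Employee/YYYYMMDDHHMM...ext' style keys into
    'Employee/YYYY-MM-DD/HH/MM/<original filename>'."""
    file_root_dir_struct, file_path_struct = file_key_name.split("/")
    # Date digits sit before the '-', time digits after it (extension stripped)
    date_part = file_path_struct.split("-")[0]
    time_part = file_path_struct.split("-")[1].split(".")[0]
    date_partition = date_part[:4] + "-" + date_part[4:6] + "-" + date_part[6:8]
    hour, minute = time_part[:2], time_part[2:4]
    return "/".join([file_root_dir_struct, date_partition, hour, minute, file_path_struct])

print(build_destination_key("RaviRanjanKumarBharti/20200406-1436246999.parquet"))
# RaviRanjanKumarBharti/2020-04-06/14/36/20200406-1436246999.parquet
```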

Complete Code

import json
import boto3

# boto3 S3 initialization
s3_client = boto3.client("s3")

def lambda_handler(event, context):
  destination_bucket_name = 'destination-test-bucket-006'

  source_bucket_name = event['Records'][0]['s3']['bucket']['name']

  file_key_name = event['Records'][0]['s3']['object']['key']

  # Split file_key_name with '/' to extract Employee & filename
  file_root_dir_struct = file_key_name.split('/')[0]

  file_path_struct = file_key_name.split('/')[1]

  # Split filename with '-' to extract date & time
  date_file_path_struct = file_path_struct.split('-')[0]

  # Date Partition Lazy Solution

  # date_partition_path_struct = date_file_path_struct[:4] + "-" + date_file_path_struct[4:6] + "-" + date_file_path_struct[6:8]

  # Date Partition using List Comprehension

  n_split = [4, 2, 2]

  date_partition_path_struct = "-".join([date_file_path_struct[sum(n_split[:i]):sum(n_split[:i+1])] for i in range(len(n_split))])

  # Split to get time part
  time_file_path_split = file_key_name.split('/')[1]

  # Time Partition
  time_file_path_struct = time_file_path_split.split('-')[1].split('.')[0]

  # Hour Partition
  hour_time_file_path_struct = time_file_path_struct[:2]

  # Minute Partition
  min_time_file_path_struct = time_file_path_struct[2:4]

  # Concat all required strings to form destination path || date partition
  # destination_file_path = file_root_dir_struct + "/" \
  #  + date_partition_path_struct + "/" + file_path_struct

  # Concat all required strings to form destination path || hour partition
  # destination_file_path = file_root_dir_struct + "/" + date_partition_path_struct + "/" + \
  #                         hour_time_file_path_struct + "/" + file_path_struct

  # Concat all required strings to form destination path || minute partition
  destination_file_path = file_root_dir_struct + "/" + date_partition_path_struct + "/" + \
                          hour_time_file_path_struct + "/" + min_time_file_path_struct + "/" + file_path_struct

  # Copy Source Object
  copy_source_object = {'Bucket': source_bucket_name, 'Key': file_key_name}

  # S3 copy object operation
  s3_client.copy_object(CopySource=copy_source_object, Bucket=destination_bucket_name, Key=destination_file_path)

  return {
      'statusCode': 200,
      'body': json.dumps('Hello from S3 events Lambda!')
  }

You can test your implementation by uploading a file into any folder of your source bucket, and then checking your destination bucket under the respective Employee's folder.




This solves one of the most popular use cases in data migration: storing files in a partitioned structure for better readability.

Hope this two-part blog series was useful to understand how we can use AWS Lambda to process your S3 objects based on event triggers.

Do leave your comments. Happy reading.


Tags: Amazon S3, AWS Lambda, S3 events, Python, Boto3, S3 Triggers, Lambda Trigger, S3 copy objects, date-partitioned, time-partitioned

Copying objects using AWS Lambda based on S3 events – Part 1

By | AWS, Blogs, Cloud | No Comments

Written by Tejaswee Das, Software Engineer, Powerupcloud Technologies


In this era of cloud, data is always on the move, so anyone dealing with moving data will have heard of Amazon's Simple Storage Service, popularly known as S3. As the name suggests, it is a simple file storage service where we can upload or remove files – better referred to as objects. It is very flexible storage that takes care of scalability, security, performance, and availability, which makes it very handy for a lot of applications & use cases.

The next best thing we use here – AWS Lambda! The new world of Serverless Computing. You will be able to run your workloads easily using Lambda without worrying at all about provisioning any resources. Lambda takes care of it all.


S3, as we already know, is object-based storage, highly scalable & efficient. We can use it as a data source or even as a destination for various applications. AWS Lambda, being serverless, allows us to run anything without thinking about any underlying infrastructure. So you can use Lambda for a lot of your processing jobs or even simple communication with any of your AWS resources.

Use Case

Copying new files to a different location (bucket/path) while preserving the hierarchy. We will use the AWS Python SDK to solve this.

Problem Statement

Say we have an application writing files to an S3 bucket path every time an Employee updates his/her tasks at any time of the day during working hours.

For example, the work activity of Ajay Muralidhar for 6th April 2020 at 12:00 PM will be stored in source-bucket-006/AjayMuralidhar/2020-04-06/12/my-task.txt. Refer to the tree for more clarity. We need to move these task files to a new bucket while preserving the file hierarchy.


For solving this problem, we will use Amazon S3 events. Every file pushed to the source bucket will be an event, this needs to trigger a Lambda function which can then process this file and move it to the destination bucket.

1. Creating a Lambda Function

1.1 Go to the AWS Lambda Console and click on Create Function

1.2 Select an Execution Role for your Function

This is important because it ensures that your Lambda has access to your source & destination buckets. You can either use an existing role that already has access to the S3 buckets, or choose Create an execution role. If you choose the latter, you will need to attach the S3 permission to your role.

1.2.1 Optional – S3 Permission for new execution role

Go to Basic settings in your Lambda Function. You will find this when you scroll down your Lambda Function. Click Edit. You can edit your Lambda runtime settings here, like Timeout (a max of 15 mins), which is the time for which your Lambda can run. It is advisable to set this as per your job requirement. Any time you get a 'Lambda timed out' error, you can increase this value.

Or you can also check the Permissions section for the role.

Click on View the <your-function-name>-role-<xyzabcd> role on the IAM console. This takes you to the IAM console. Click on Attach policies. You can also create an inline policy if you need more control over the access you are providing; for example, you can restrict access to particular buckets. For ease of demonstration, we are using AmazonS3FullAccess here.
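For instance, an inline policy scoped to just this use case might look like the following sketch. The bucket names are the examples from this post, and the action list is the minimum a cross-bucket copy typically needs (read on the source, write on the destination) – verify it against your own setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::source-bucket-006",
        "arn:aws:s3:::source-bucket-006/*",
        "arn:aws:s3:::destination-test-bucket-006",
        "arn:aws:s3:::destination-test-bucket-006/*"
      ]
    }
  ]
}
```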

Select AmazonS3FullAccess, click on Attach policy

Once the policy is successfully attached to your role, you can go back to your Lambda Function.

2. Setting S3 Event Trigger

2.1 Under Designer tab, Click on Add trigger

2.2 From the Trigger List dropdown, select S3 events

Select your source bucket. There are various event types you can choose from.

Find out more about S3 events here.

We are using PUT since we want this event to trigger our Lambda when any new files are uploaded to our source bucket. You can add a Prefix & Suffix if you only need particular types of files. Check Enable trigger.

Python Script

We now write a simple Python script which will pick the incoming file from our source bucket and copy it to another location. The best thing about setting the Lambda S3 trigger is, whenever a new file is uploaded, it will trigger our Lambda. We make use of the event object here to gather all the required information.

This is what a sample event object looks like. It is passed to your Lambda function.


Your Lambda function makes use of this event dictionary to identify the location where the file is uploaded.
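A trimmed sketch of the record an S3 PUT delivers is shown below (a real event carries more metadata; the bucket and key values here are just this post's examples):

```python
# Minimal sketch of the event dictionary Lambda receives from an S3 PUT
event = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "source-bucket-006"},
                "object": {"key": "AjayMuralidhar/2020-04-06/12/my-task.txt", "size": 1024},
            },
        }
    ]
}

# The two lookups the handler relies on
print(event["Records"][0]["s3"]["bucket"]["name"])  # source-bucket-006
print(event["Records"][0]["s3"]["object"]["key"])   # AjayMuralidhar/2020-04-06/12/my-task.txt
```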

import json
import boto3

# boto3 S3 initialization
s3_client = boto3.client("s3")

def lambda_handler(event, context):
   destination_bucket_name = 'destination-test-bucket-006'

   # event contains all information about uploaded object
   print("Event :", event)

   # Bucket Name where file was uploaded
   source_bucket_name = event['Records'][0]['s3']['bucket']['name']

   # Filename of object (with path)
   file_key_name = event['Records'][0]['s3']['object']['key']

   # Copy Source Object
   copy_source_object = {'Bucket': source_bucket_name, 'Key': file_key_name}

   # S3 copy object operation
   s3_client.copy_object(CopySource=copy_source_object, Bucket=destination_bucket_name, Key=file_key_name)

   return {
       'statusCode': 200,
       'body': json.dumps('Hello from S3 events Lambda!')
   }

You can test your implementation by uploading a file in any folders of your source bucket, and then check your destination bucket for the same file.



You can check your Lambda execution logs in CloudWatch. Go to Monitoring and click View Logs in CloudWatch

Congrats! We have solved our problem. Just before we conclude this blog, we would like to discuss an important feature of Lambda which will help you to upscale your jobs. What if your application is writing a huge number of files at the same time? Don’t worry, Lambda will help you with this too. By default, Lambda has a Concurrency of 1000. If you need to scale up, you can increase this as per your business requirements.


This is how easy it was to use S3 with Lambda to move files between buckets.

In Part 2 of this series, we will handle a slightly more complex problem, where we will move files into date-partitioned structures at the destination.

You can find the link to Part 2 here:

Hope this was helpful as an overview of the basics of using S3 event triggers with AWS Lambda. Do leave your comments. Happy reading.


Tags: Amazon S3, AWS Lambda, S3 events, Python, Boto3, S3 Triggers, Lambda Trigger, S3 copy objects

Handling Asynchronous Workflow-Driven pipeline with AWS CodePipeline and AWS Lambda

By | AWS, Blogs, Cloud, Cloud Assessment, Data pipeline | No Comments

Written by Praful Tamrakar, Senior Cloud Engineer, Powerupcloud Technologies

Most AWS customers use AWS Lambda widely for performing almost every kind of task; it is an especially handy tool when it comes to customizing the way your pipeline works. And if we are talking about pipelines, AWS Lambda is a service that can be directly integrated with AWS CodePipeline. The combination of these two services makes it possible for AWS customers to successfully automate various tasks, including infrastructure provisioning, blue/green deployments, serverless deployments, AMI baking, database provisioning, and dealing with asynchronous behavior.

Problem Statement :

Our customer has a requirement to trigger and monitor the status of a Step Functions state machine, which is a long-running asynchronous process. The customer is using AWS Step Functions to run ETL jobs with the help of AWS Glue jobs and AWS EMR. We proposed to achieve this with Lambda, but Lambda has a timeout limitation of 15 minutes. The real problem is that such an asynchronous process needs to continue and succeed even if it exceeds the fifteen-minute runtime limit.

In this blog, we present a solution that automates this approach with a combination of Lambda and AWS CodePipeline using continuation tokens.

Assumptions :

This blog assumes you are familiar with AWS CodePipeline and AWS Lambda and know how to create pipelines, functions, Glue jobs and the IAM policies and roles on which they depend.


  1. Glue jobs have already been configured
  2. A Step Functions state machine is configured to run the Glue jobs
  3. A CodeCommit repository exists for the Glue scripts

Solution :

In this blog post, we discuss how a CodePipeline action can trigger a Step Functions state machine and how the pipeline and the state machine are kept decoupled through a Lambda function.

The source code for the sample pipeline, pipeline actions, and state machine used in this post is available at

The below diagram highlights the CodePipeline-StepFunctions integration that will be described in this post. The pipeline contains two stages: a Source stage represented by a CodeCommit Git repository and a DEV stage with CodeCommit, CodeBuild and Invoke Lambda actions that represent the workflow-driven action.

The steps involved in the CI/CD pipeline:

  1. Developers commit the AWS Glue job's code to the VCS (AWS CodeCommit)
  2. The AWS CodePipeline in the Tools Account gets triggered due to step 1
  3. The Code build steps involve multiple things as mentioned below
    • Installations of dependencies and packages needed
    • Copying the Glue and EMR jobs to S3 location where the Glue jobs will pick the script from.
  4. CHECK_OLD_SFN: A Lambda is invoked to ensure that the previous Step Functions execution is not still in a running state before we run the actual Step Function. The process is as follows.
    • This action invokes a Lambda function (1).
    • In (2), the Lambda checks the state machine status, which returns a Step Functions state machine status.
    • In (3), the Lambda gets the execution state of the state machine ( RUNNING || COMPLETED || TIMEOUT )
    • In (4), the Lambda function sends a continuation token back to the pipeline

If the state machine is still RUNNING, then seconds later the pipeline invokes the Lambda function again (4), passing the continuation token received. The Lambda function checks the execution state of the state machine and communicates the status to the pipeline. The process is repeated until the state machine execution is complete.

Else (5), the Lambda sends a job completion token and completes the pipeline stage.

  5. TRIGGER_SFN_and_CONTINUE: Invoke a Lambda to start the new Step Functions execution and check the status of the new execution. The process is as follows.
    • This action invokes a Lambda function (1) which, in turn, triggers a Step Functions state machine to process the request (2).
    • The Lambda function sends a continuation token back to the pipeline (3) to continue its execution later and terminates.
    • Seconds later, the pipeline invokes the Lambda function again (4), passing the continuation token received. The Lambda function checks the execution state of the state machine (5,6) and communicates the status to the pipeline. The process is repeated until the state machine execution is complete.
    • Then the Lambda function notifies the pipeline that the corresponding pipeline action is complete (7). If the state machine has failed, the Lambda function will fail the pipeline action and stop its execution (7). While running, the state machine triggers various Glue jobs to perform ETL operations. The state machine and the pipeline are fully decoupled; their interaction is handled by the Lambda function.
  6. Approval to the Higher Environment. In this stage, we add a Manual Approval action to the pipeline in CodePipeline.
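The two Lambda actions described above boil down to one continuation-token loop. Below is a condensed, illustrative sketch of that logic (not the exact functions from this post; handle_pipeline_job is our own name, and the Step Functions and CodePipeline clients are passed in as parameters so the flow can be exercised without AWS):

```python
import json

def handle_pipeline_job(job, sfn_client, cp_client):
    """Sketch of the continuation-token pattern between CodePipeline and
    Step Functions. `sfn_client` / `cp_client` stand in for
    boto3.client('stepfunctions') and boto3.client('codepipeline')."""
    user_params = json.loads(
        job["data"]["actionConfiguration"]["configuration"]["UserParameters"])
    token = job["data"].get("continuationToken")

    if token is None:
        # First invocation: start a new execution and hand its ARN back
        # to the pipeline as the continuation token.
        execution = sfn_client.start_execution(
            stateMachineArn=user_params["stateMachineARN"])
        cp_client.put_job_success_result(
            jobId=job["id"], continuationToken=execution["executionArn"])
        return "STARTED"

    # Later invocations: the token carries the execution ARN to poll.
    status = sfn_client.describe_execution(executionArn=token)["status"]
    if status == "RUNNING":
        # Still running: return the token so the pipeline calls us back.
        cp_client.put_job_success_result(jobId=job["id"], continuationToken=token)
    elif status == "SUCCEEDED":
        # Done: complete the pipeline action.
        cp_client.put_job_success_result(jobId=job["id"])
    else:
        # FAILED / TIMED_OUT / ABORTED: fail the pipeline action.
        cp_client.put_job_failure_result(
            jobId=job["id"],
            failureDetails={"type": "JobFailed",
                            "message": "Execution ended as %s" % status})
    return status
```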

Deployment Steps :

Step 1: Create a Pipeline

  1. Sign in to the AWS Management Console and open the CodePipeline console at
  2. On the Welcome page, Getting started page, or the Pipelines page, choose Create pipeline.
  3. In Choose pipeline settings, in Pipeline name, enter the pipeline name.
  4. In Service role, do one of the following:
    • Choose a New service role to allow CodePipeline to create a new service role in IAM.
    • Choose the Existing service role to use a service role already created in IAM. In Role name, choose your service role from the list.
  5. Leave the settings under Advanced settings at their defaults, and then choose Next.

6. In the Add source stage, in Source provider, choose CodeCommit.

7. Provide Repository name and Branch Name

8. In Change detection options, choose AWS CodePipeline

9. In Add build stage, in Build provider, choose AWS CodeBuild and choose the Region

10. Select the existing Project name or Create project

11. You can add Environment Variables, which you may use in the buildspec.yaml file, and click Next

NOTE: The build step serves a special purpose here. We copy the Glue script from the VCS (AWS CodeCommit) to the S3 bucket, from where the Glue job picks up its script for its next execution.

12. In Add deploy stage, choose Skip deploy stage.

13. Finally, click Create pipeline.

Step 2: Create the CHECK_OLD_SFN_LAMBDA Lambda Function

  1. Create the execution role
  • Sign in to the AWS Management Console and open the IAM console

Choose Policies, and then choose Create Policy. Choose the JSON tab, and then paste the following policy into the field.

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
            "Resource": "*"
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "logs:*",
            "Resource": "arn:aws:logs:*:*:*"
  • Choose Review policy.
  • On the Review policy page, in Name, type a name for the policy (for example, CodePipelineLambdaExecPolicy). In Description, enter Enables Lambda to execute code.
  • Choose Create Policy.
  • On the policy dashboard page, choose Roles, and then choose Create role.
  • On the Create role page, choose AWS service. Choose Lambda, and then choose Next: Permissions.
  • On the Attach permissions policies page, select the checkbox next to CodePipelineLambdaExecPolicy, and then choose Next: Tags. Choose Next: Review.
  • On the Review page, in Role name, enter the name, and then choose Create role.

2. Create the CHECK_OLD_SFN_LAMBDA Lambda function to use with CodePipeline

  • Open the Lambda console and choose the Create function.
  • On the Create function page, choose Author from scratch. In the Function name, enter a name for your Lambda function (for example, CHECK_OLD_SFN_LAMBDA ) .
  • In Runtime, choose Python 2.7.
  • Under Role, select Choose an existing role. In Existing role, choose the role you created earlier, and then choose Create function.
  • The detail page for your created function opens.
  • Copy the code into the Function code box
  • In Basic settings, for Timeout, replace the default of 3 seconds with 5 Min.
  • Choose Save.

3. Create the TRIGGER_SFN_and_CONTINUE Lambda function to use with CodePipeline

  • Open the Lambda console and choose the Create function.
  • On the Create function page, choose Author from scratch. In Function name, enter a name for your Lambda function (for example, TRIGGER_SFN_and_CONTINUE ) .
  • In Runtime, choose Python 2.7.
  • Under Role, select Choose an existing role. In Existing role, choose the role you created earlier, and then choose Create function.
  • The detail page for your created function opens.
  • Copy the code into the Function code box
  • In Basic settings, for Timeout, replace the default of 3 seconds with 5 Min.
  • Choose Save.

Step 3: Add the CHECK_OLD_SFN_LAMBDA Lambda Function to a Pipeline in the CodePipeline Console

In this step, you add a new stage to your pipeline, and then add a Lambda action that calls your function to that stage.

To add stage

  • Sign in to the AWS Management Console and open the CodePipeline console at
  • On the Welcome page, choose the pipeline you created.
  • On the pipeline view page, choose Edit.
  • On the Edit page, choose + Add stage to add a stage after the Build stage. Enter a name for the stage (for example, CHECK_OLD_SFN_LAMBDA ), and choose Add stage.
  • Choose + Add action group. In Edit action, in Action name, enter a name for your Lambda action (for example, CHECK_OLD_SFN_LAMBDA ). In Provider, choose AWS Lambda. In Function name, choose or enter the name of your Lambda function (for example, CHECK_OLD_SFN_LAMBDA )
  • In UserParameters, you must provide a JSON string with a parameter: { "stateMachineARN": "<ARN_OF_STATE_MACHINE>" }
  • Choose Save.

Step 4: Add the TRIGGER_SFN_and_CONTINUE  Lambda Function to a Pipeline in the CodePipeline Console

In this step, you add a new stage to your pipeline, and then add a Lambda action that calls your function to that stage.

To add a stage

  • Sign in to the AWS Management Console and open the CodePipeline console at
  • On the Welcome page, choose the pipeline you created.
  • On the pipeline view page, choose Edit.
  • On the Edit page, choose + Add stage to add a stage after the previous stage. Enter a name for the stage (for example, TRIGGER_SFN_and_CONTINUE ), and choose Add stage.
  • Choose + Add action group. In Edit action, in Action name, enter a name for your Lambda action (for example, TRIGGER_SFN_and_CONTINUE ). In Provider, choose AWS Lambda. In Function name, choose or enter the name of your Lambda function (for example, TRIGGER_SFN_and_CONTINUE )
  • In UserParameters, you must provide a JSON string with a parameter: { "stateMachineARN": "<ARN_OF_STATE_MACHINE>" }
  • Choose Save.

Step 5: Test the Pipeline with the Lambda function

  • To test the function, release the most recent change through the pipeline.
  • To use the console to run the most recent version of an artifact through a pipeline
  • On the pipeline details page, choose Release change. This runs the most recent revision available in each source location specified in a source action through the pipeline.
  • When the Lambda action is complete, choose the Details link to view the log stream for the function in Amazon CloudWatch, including the billed duration of the event. If the function failed, the CloudWatch log provides information about the cause.

Example JSON Event

The following example shows a sample JSON event sent to Lambda by CodePipeline. The structure of this event is similar to the response to the GetJobDetails API, but without the actionTypeId and pipelineContext data types. Two action configuration details, FunctionName and UserParameters, are included in both the JSON event and the response to the GetJobDetails API. The values shown are examples or explanations, not real values.

    "CodePipeline.job": {
        "id": "11111111-abcd-1111-abcd-111111abcdef",
        "accountId": "111111111111",
        "data": {
            "actionConfiguration": {
                "configuration": {
                    "FunctionName": "MyLambdaFunctionForAWSCodePipeline",
                    "UserParameters": "some-input-such-as-a-URL"
            "inputArtifacts": [
                    "location": {
                        "s3Location": {
                            "bucketName": "s3-bucket-name",
                            "objectKey": "for example"
                        "type": "S3"
                    "revision": null,
                    "name": "ArtifactName"
            "outputArtifacts": [],
            "artifactCredentials": {
                "secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                "sessionToken": "MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w
                "accessKeyId": "AKIAIOSFODNN7EXAMPLE"
            "continuationToken": "A continuation token if continuing job",
            "encryptionKey": { 
              "id": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
              "type": "KMS"


In this blog post, we discussed how a Lambda function can be used to fully decouple the pipeline and the state machine and to manage their interaction. We also learned how asynchronous processes that need to continue and succeed, even if they exceed a fifteen-minute runtime (a limit in Lambda), are handled using continuation tokens.

Please Visit our Blogs for more interesting articles.

11-Point Cloud Plan To Tackle The Covid-19 Economy

By | Cloud, Cloud Assessment, CS, News | No Comments

Siva S, Founder & CEO – Powerupcloud Technologies

The Covid-19 virus pandemic has changed a lot of things forever or at least that should be our best assumption now. As Sequoia quotes, this is the Black Swan of 2020 which may very well extend into most of 2021. While some are still trying to understand the impact this will have on the global economy, we need to make up our minds and accept that this is here to stay for a longer period. I find a lot of merit in the endgame possibilities of the Covid-19 virus highlighted in an article by The Atlantic. Of the 3 possible endgame scenarios, the best option sounds like this – “the world plays a protracted game of whack-a-mole with the virus, stamping out outbreaks here and there until a vaccine can be produced.”

As the CEO of Powerupcloud – a global leader in cloud consulting space and the global cloud practice head of LTI – one of the largest technology solutions and services providers in the world, I interact with several CIOs, Business Leaders, and Technologists on a daily basis and I am privy to the thought process and technology decisions being made by these global enterprise businesses in this period of volatility.

Having strategic conversations with more than 50 top CEOs, CIOs, Business Leaders of Fortune500 companies and public cloud OEMs, I am convinced that at no other point in time has there ever been such a need for the instant availability of technology than now. And the public cloud has emerged as the biggest enabler during these uncertain times. If you are an organization that is still considering the move to the cloud, stop procrastinating and start your cloud adoption process immediately. With the fast-changing economic scenario, you need to act fast and act now.

The 11-Point Cloud Plan for Covid-19 Pandemic Economy


These discussions with CIOs and Cloud Leaders have helped me to build the 11-Point Cloud Plan on how enterprises should prioritize and execute their cloud adoption & optimization plan over the next 3 months (ultra short term) and the next 12 months (short term). This is not the time for drafting a 3-year or a 5-year plan as we are not sure of the outcome even after the next 12 months. And if you are in the middle of executing such plans, please stop and revisit your priorities immediately.

The 11-Point Cloud Plan has 3 purpose tracks.

A. Business Continuity Planning

B. Cost Savings on Cloud

C. Short Term Cloud Initiatives


With more and more enterprises moving to a remote working model where employees connect from home, I see that a good number of these businesses are grappling to find a viable business continuity design that will allow them to continue their operations. Remember, keeping your business moving is the best outcome the world economy needs today.

There are 5 key areas which the organizations are adopting on the cloud to support their business continuity. I have prioritized them based on the feedback we received from the market.

  1. Virtual Contact Center on Cloud – Employees doing ‘work from home’ are struggling with their traditional contact center software, which doesn’t allow them to answer customer support calls effectively. Cloud-based Amazon Connect allows you to bring up a virtual contact center in just 45 minutes, and with an additional 8 hours of effort, you can also automate customer care responses by integrating Amazon AI services like Amazon Lex & Amazon Polly.
  2. Virtual Desktops on Cloud – Another major request coming in from companies with remote employees is the virtual desktop solution. Both Microsoft Azure’s Windows Virtual Desktop and Amazon WorkSpaces are sought-after technology offerings on the cloud which solve this problem. You can launch these virtual desktop solutions using automated templates for 1000s of employees in a matter of hours.
  3. Support Chatbots on Cloud – Be it customer support or internal employee support, there is no better time to be proactive in responding to queries. Chatbots enable an organization to route at least 50% of customer queries away from customer support agents. Cloud technologies like Google Cloud’s Dialogflow, Amazon Lex, Amazon Polly, Microsoft Bot Framework, Azure QnA Maker, and Microsoft LUIS help in designing your chatbots. Powerup’s Botzer platform also helps you integrate with the above-mentioned cloud APIs and launch & manage your chatbots in a day.
  4. Risk Modeling on Cloud – Several organizations in industries like Insurance, Stock Trading, Banking, Pharma, Life Sciences, Retail, and FMCG are running their risk modeling algorithms on an almost daily basis to reassess their risk in the current market scenario. This requires additional compute power (Amazon EMR, Azure HDInsight, Google Dataproc) and machine learning platforms (Google Cloud AutoML, Amazon SageMaker, Azure ML) on the cloud.
  5. Governance & Security on Cloud – For organizations with a lot of applications on the cloud, governance and security become tricky to handle given that most of their employees are connecting from home. Cloud governance products like Powerup’s help organizations enforce a zero-defect security model across their multi-cloud environments and ensure that security vulnerabilities are addressed in real-time.


6. Cost Optimization on Cloud – We will witness a spending crunch by a lot of organizations across industries and IT departments in these organizations will come under severe pressure to reduce their cloud costs. From our experience in helping a lot of organizations reduce their cloud spend, I have identified 6 key methods using which you can save cloud costs.

  • Downsize your cloud resources and plan your cloud inventory to the bare minimum that is required to run your business.
  • Shutdown unused cloud resources and free up storage & network. This might require you to revisit your new initiatives and data retention policies.
  • Adopt reserved instances pricing model for a 1-year or 3-year period which will allow you to save up to 60% on your compute spend.
  • Explore spot instances for at least 50% of your cloud workloads which will help you reduce your cloud spend by 80% for the workloads where spot instances are enabled.
  • Explore containerization for a minimum 1/3rd of your large applications on the cloud which will help you save up to 75% on your compute spend.
  • Schedule your non-production instances/servers to start and stop on a daily basis which can help you save almost 50% of your server compute bills.
  • Intelligent tiering of objects stored on Amazon S3 or Azure Blob to lower storage tiers will help you save a lot of storage costs. Amazon S3 Intelligent Tiering and Azure’s Archival/Cold storage options can be leveraged to save costs.
  • Explore the possibilities of moving some of the workloads to other cloud platforms where variable pricing is possible. This can also help you save costs. For eg: some companies leverage flexible hardware configuration options of Google Cloud.
  • Migrate your applications and databases from enterprise licensed platforms to open-source platforms. Eg: migrating from Redhat Linux to Amazon Linux or from Oracle database to PostgreSQL.
  • Finally, it comes to the good old Excel sheets. The old school way of analyzing cloud usage and billing data by experienced cloud architects cannot be matched by cloud cost analytics tools. Roll up your sleeves and get your hands dirty if it comes to that.
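As an illustration of the start/stop scheduling point above, the nightly shutdown of non-production servers is often just a small scheduled Lambda. The sketch below uses our own naming, and the tag key/value are assumptions; the EC2 client is passed in as a parameter so the logic can be tested without AWS:

```python
def stop_tagged_instances(ec2_client, tag_key="Environment", tag_value="non-production"):
    """Stop all running EC2 instances carrying the given tag.
    Intended to be called from a scheduled (e.g. nightly) Lambda;
    `ec2_client` stands in for boto3.client('ec2')."""
    response = ec2_client.describe_instances(
        Filters=[
            {"Name": "tag:" + tag_key, "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    # Flatten reservations into a plain list of instance IDs
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2_client.stop_instances(InstanceIds=instance_ids)
    return instance_ids
```

A mirror-image function calling start_instances on a morning schedule completes the pattern.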

Powerup’s platform has tailored modules that help organizations track their cloud spend by department, application, user, etc., and help them with a detailed (resource-level) cost optimization plan for their cloud environment.


Now that the long-term initiatives by organizations will go under the scanner and potentially recalibrated to suit the larger financial goal of the organization, there are still a lot of things that the organizations can do in the short-term to help them prepare themselves for the next 12-18 months. These ‘Short Term Initiatives’ are designed to help you make measured but important progress in your cloud adoption journey and will fit very well into your long-term plan when the situation gets better.

I have listed 5 key short-term initiatives that are being adopted by several medium and large enterprises across industries globally.

7. Data Lake on Cloud – It is during times like these that several organizations realize they don't have a good enough hold on their data to make critical business decisions. Their data is spread across various applications, in various formats and database technologies, which prevents them from correlating the data and gaining valuable insights. A centralized data lake on the cloud solves this problem. You can build a full-fledged data lake in less than 60 days using cloud-native technologies (Amazon Redshift, Azure Data Lake) or leading 3rd-party data platforms like Snowflake or Databricks. It is the best time to start building your organization's central data lake on the cloud if you don't have one.

8. Fast-Track Application Development on Cloud – These uncertain times require organizations to try new business models and introduce new processes to handle their business. I see large banks building specialized apps for payment holidays on buy-to-let mortgages in record time, and you may well have similar requirements for specialized workflow-based apps. The cloud is the best place to build these apps in a very short period of time, and serverless functions such as AWS Lambda and Azure Functions reduce the overhead of managing the availability of these apps. Please remember, managerial bandwidth is key in the coming days, and you should plan to free up your employees' time for high-priority tasks.

9. Outsource Cloud Managed Services – Cloud support or managed services is a human capital intensive division of IT departments and it is the right time to outsource the managed services scope to cloud partners like Powerup who can deliver the cloud support in a shared capacity model. This would greatly reduce your human capital overhead cost in managing your cloud environment.

10. Move Critical Applications to Cloud – Large-scale enterprise data center migrations help you move to an OPEX model and reduce the stress on cash flow, which is highly recommended given the volatile economic outlook for the next 2 years. The best way to begin is by identifying your business-critical applications and migrating them to the cloud first. This allows you to migrate in a phased manner over the next 12 months. Please don't consider a big-bang cloud migration approach for the next 12 months. Conserving cash and spending it in a planned manner is going to be key for any business to survive.

11. DevOps Automation on Cloud – DevOps process automation is another key initiative that companies are executing to reduce the dependency on their technical resources. The 'work from home' model has its own challenges, such as coordination and network connectivity, which lead to highly delayed DevOps deployments. This might be the right time to automate your DevOps processes for applications running on the cloud or in your on-premise setup.
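As an illustration of the serverless approach suggested in point 8, here is a minimal sketch of a workflow endpoint in Python of the kind that could run on AWS Lambda behind an API gateway; the event fields (account_id, months) and the validation rules are invented for illustration, not taken from any real banking system.

```python
import json

def lambda_handler(event, context):
    # Hypothetical "payment holiday" request handler; field names are illustrative.
    body = json.loads(event.get("body") or "{}")
    account = body.get("account_id")
    months = int(body.get("months", 0))
    if not account or not 1 <= months <= 6:
        return {"statusCode": 400,
                "body": json.dumps({"error": "account_id and months (1-6) required"})}
    # A real app would enqueue the request or persist it to a database here.
    return {"statusCode": 200,
            "body": json.dumps({"account_id": account, "holiday_months": months,
                                "status": "accepted"})}
```

The `lambda_handler(event, context)` signature matches what the AWS Lambda Python runtime invokes, so a sketch like this can be deployed with no web server to manage.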


As I mentioned earlier, the 11-Point Cloud Plan was devised based on market observations and feedback from our global enterprise customers. Will the plan change in the near term? Absolutely yes. But for now, this seems to be the best bet for us. This 11-Point Cloud Plan should get you started thinking and acting in the right direction, and it will help you follow in the footsteps of several leading organizations in today's economic scenario caused by the Covid-19 pandemic. The plan will continue to evolve, and I will keep it updated as I learn more from our partners and customers.

Stay safe with your loved ones.

Take care of the community around you.

And keep the business rolling.


Securing Spring Boot and React JS with Spring Security using JWT authentication

By | Blogs, Cloud, Cloud Assessment | 4 Comments

Written by Kiran M D Software Engineer – Powerupcloud Technologies

This article helps you set up Spring Security with Basic and JWT authentication with a full-stack application using React Js as Frontend framework and Spring Boot as the backend REST API.

Let's understand what the JWT token is used for and how we are going to use it in our application.

JSON Web Token

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed.

How does it work? 

In authentication, when the user successfully logs in using their credentials, a JSON Web Token will be returned. Since the tokens are credentials, we must prevent security bugs/breaches. In general, you should not keep tokens longer than required.

Whenever the user wants to access a protected route or resource, the user agent should send the JWT, typically in the Authorization header using the Bearer schema. The content of the header should look like the following:

Sample JSON:


“Authorization”: “Bearer <token>”
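To see what the Bearer token actually contains, here is a minimal, library-free sketch of HS256 signing and verification in Python's standard library (the Spring Boot code below does the equivalent with the jjwt library, using HS512). This is for illustration only, not a replacement for a vetted JWT library.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url encoding without padding, as the JWT spec (RFC 7519/7515) requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    # A JWT is base64url(header) + "." + base64url(payload) + "." + base64url(signature)
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, secret: bytes) -> dict:
    # Recompute the HMAC over header.payload and compare in constant time
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("invalid signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Because the payload is only base64url-encoded, anyone can read it; the signature is what makes it tamper-evident, which is why the secret must never reach the client.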


Let's see how we can integrate it with Spring Boot and React JS.

1. Creating spring boot application and configuring JWT authentication.

1.1 Creating a sample spring boot application

A basic Spring Boot application can be generated using Spring Initializr with the following dependencies:

1. Spring Web

2. Spring Security

Open the Spring Initializr URL and add the above dependencies.

Spring Boot REST API Project Structure

The following screenshot shows the structure of the Spring Boot project with which we can create Basic Authentication.

1.2 Add the below dependency in Pom.xml for JWT token.
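The dependency itself is not shown above. Since the code below uses the io.jsonwebtoken API (Jwts, Claims, SignatureAlgorithm), the pom.xml entry is presumably the jjwt artifact; a commonly used coordinate is shown here, with the version being an assumption:

```xml
<!-- jjwt: JWT creation/parsing used by JwtTokenUtil (version illustrative) -->
<dependency>
    <groupId>io.jsonwebtoken</groupId>
    <artifactId>jjwt</artifactId>
    <version>0.9.1</version>
</dependency>
```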


1.3 Create the following files in config package.

The purpose of this file is to handle exceptions; whenever the JWT token is not validated, it throws an Unauthorized exception.

@Component
public class JwtAuthenticationEntryPoint implements AuthenticationEntryPoint, Serializable {

	@Override
	public void commence(HttpServletRequest request, HttpServletResponse response,
			AuthenticationException authException) throws IOException {
		response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");
	}
}

The purpose of this file is to filter requests coming from the client side (React JS); every request passes through this filter before reaching the REST API, and only if token validation succeeds is the request forwarded to the actual API.

@Component
public class JwtRequestFilter extends OncePerRequestFilter {

	@Autowired
	private JwtUserDetailsService jwtUserDetailsService;

	@Autowired
	private JwtTokenUtil jwtTokenUtil;

	@Override
	protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain)
			throws ServletException, IOException {
		final String requestTokenHeader = request.getHeader("authorization");
		String username = null;
		String jwtToken = null;
		// JWT token is in the form "Bearer <token>".
		// Remove the "Bearer " prefix and keep only the token.
		if (requestTokenHeader != null && requestTokenHeader.startsWith("Bearer ")) {
			jwtToken = requestTokenHeader.substring(7);
			try {
				username = jwtTokenUtil.getUsernameFromToken(jwtToken);
			} catch (IllegalArgumentException e) {
				System.out.println("Unable to get JWT Token");
			} catch (ExpiredJwtException e) {
				System.out.println("JWT Token has expired");
			}
		} else {
			logger.warn("JWT Token does not begin with Bearer String");
		}
		// Once we get the token, validate it.
		if (username != null && SecurityContextHolder.getContext().getAuthentication() == null) {
			UserDetails userDetails = this.jwtUserDetailsService.loadUserByUsername(username);
			// If the token is valid, configure Spring Security to manually set the
			// authentication.
			if (jwtTokenUtil.validateToken(jwtToken, userDetails)) {
				UsernamePasswordAuthenticationToken usernamePasswordAuthenticationToken =
						new UsernamePasswordAuthenticationToken(userDetails, null, userDetails.getAuthorities());
				usernamePasswordAuthenticationToken
						.setDetails(new WebAuthenticationDetailsSource().buildDetails(request));
				// After setting the Authentication in the context, we specify that the
				// current user is authenticated, so the request passes the Spring
				// Security configuration successfully.
				SecurityContextHolder.getContext().setAuthentication(usernamePasswordAuthenticationToken);
			}
		}
		chain.doFilter(request, response);
	}
}

This utility class creates and validates JWT tokens.

@Component
public class JwtTokenUtil implements Serializable {

	// token validity: 1 hour, expressed in milliseconds
	public static final long JWT_TOKEN_VALIDITY = 1000 * 3600;

	// signing secret, injected from application properties
	@Value("${jwt.secret}")
	private String secret;

	// retrieve username from jwt token
	public String getUsernameFromToken(String token) {
		return getClaimFromToken(token, Claims::getSubject);
	}

	// retrieve expiration date from jwt token
	public Date getExpirationDateFromToken(String token) {
		return getClaimFromToken(token, Claims::getExpiration);
	}

	public <T> T getClaimFromToken(String token, Function<Claims, T> claimsResolver) {
		final Claims claims = getAllClaimsFromToken(token);
		return claimsResolver.apply(claims);
	}

	// retrieving any information from the token requires the secret key
	private Claims getAllClaimsFromToken(String token) {
		return Jwts.parser().setSigningKey(secret).parseClaimsJws(token).getBody();
	}

	// check if the token has expired
	private Boolean isTokenExpired(String token) {
		final Date expiration = getExpirationDateFromToken(token);
		return expiration.before(new Date());
	}

	// generate token for user
	public String generateToken(UserDetails userDetails) {
		Map<String, Object> claims = new HashMap<>();
		String username = userDetails.getUsername();
		return doGenerateToken(claims, username);
	}

	// While creating the token:
	// 1. Define the claims of the token: Issuer, Expiration, Subject, and ID.
	// 2. Sign the JWT using the HS512 algorithm and secret key.
	// 3. Compact the JWT to a URL-safe string per JWS Compact Serialization.
	private String doGenerateToken(Map<String, Object> claims, String subject) {
		return Jwts.builder().setClaims(claims).setSubject(subject)
				.setIssuedAt(new Date(System.currentTimeMillis()))
				.setExpiration(new Date(System.currentTimeMillis() + JWT_TOKEN_VALIDITY))
				.signWith(SignatureAlgorithm.HS512, secret).compact();
	}

	// validate token
	public Boolean validateToken(String token, UserDetails userDetails) {
		final String username = getUsernameFromToken(token);
		return (username.equals(userDetails.getUsername()) && !isTokenExpired(token));
	}
}

Spring Security is configured in this file, which extends WebSecurityConfigurerAdapter.

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

	@Autowired
	private JwtAuthenticationEntryPoint jwtAuthenticationEntryPoint;

	@Autowired
	private UserDetailsService jwtUserDetailsService;

	@Autowired
	private JwtRequestFilter jwtRequestFilter;

	@Autowired
	public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
		// configure AuthenticationManager so that it knows from where to load
		// the user for matching credentials; use BCryptPasswordEncoder
		auth.userDetailsService(jwtUserDetailsService).passwordEncoder(passwordEncoder());
	}

	@Bean
	public PasswordEncoder passwordEncoder() {
		return new BCryptPasswordEncoder();
	}

	@Bean
	@Override
	public AuthenticationManager authenticationManagerBean() throws Exception {
		return super.authenticationManagerBean();
	}

	public void addCorsMappings(CorsRegistry registry) {
		registry.addMapping("/**")
				.allowedMethods("HEAD", "GET", "PUT", "POST", "DELETE", "PATCH")
				.allowedHeaders("*");
	}

	@Override
	protected void configure(HttpSecurity httpSecurity) throws Exception {
		// We don't need CSRF for this example
		httpSecurity.csrf().disable()
				// don't authenticate this particular request
				.authorizeRequests().antMatchers("/authenticate").permitAll()
				// all other requests need to be authenticated
				.anyRequest().authenticated().and()
				.exceptionHandling().authenticationEntryPoint(jwtAuthenticationEntryPoint).and()
				// make sure we use stateless session; session won't be used to
				// store the user's state
				.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS);
		// Add a filter to validate the tokens with every request
		httpSecurity.addFilterBefore(jwtRequestFilter, UsernamePasswordAuthenticationFilter.class);
	}
}

1.4 Create the following files in the controller package.

A simple REST API to test requests; after token authentication succeeds, the request reaches this controller.

@RestController
@CrossOrigin(origins = "*", allowedHeaders = "*")
public class HelloWorldController {

	// Mapped to /dashboard, which the React client calls after login
	@RequestMapping({ "/dashboard" })
	public String firstPage() {
		return "success";
	}
}

This file contains the authentication REST API that receives the username and password and returns the JWT token on a successful response.

@RestController
@CrossOrigin
public class JwtAuthenticationController {

	@Autowired
	private AuthenticationManager authenticationManager;

	@Autowired
	private JwtTokenUtil jwtTokenUtil;

	@Autowired
	private JwtUserDetailsService userDetailsService;

	@RequestMapping(value = "/authenticate", method = RequestMethod.POST)
	public ResponseEntity<?> createAuthenticationToken(@RequestBody JwtRequest authenticationRequest) throws Exception {
		authenticate(authenticationRequest.getUsername(), authenticationRequest.getPassword());
		final UserDetails userDetails = userDetailsService.loadUserByUsername(authenticationRequest.getUsername());
		final String token = jwtTokenUtil.generateToken(userDetails);
		return ResponseEntity.ok(new JwtResponse(token));
	}

	private void authenticate(String username, String password) throws Exception {
		try {
			authenticationManager.authenticate(new UsernamePasswordAuthenticationToken(username, password));
		} catch (DisabledException e) {
			throw new Exception("USER_DISABLED", e);
		} catch (BadCredentialsException e) {
			throw new Exception("INVALID_CREDENTIALS", e);
		}
	}
}

1.5 Create following files in model package.

A POJO class containing the username and password received as the request body in the authentication method.

public class JwtRequest implements Serializable {

	private String username;
	private String password;

	// need default constructor for JSON parsing
	public JwtRequest() {
	}

	public JwtRequest(String username, String password) {
		this.username = username;
		this.password = password;
	}

	public String getUsername() {
		return this.username;
	}

	public void setUsername(String username) {
		this.username = username;
	}

	public String getPassword() {
		return this.password;
	}

	public void setPassword(String password) {
		this.password = password;
	}
}
A POJO class that returns the JWT token string; if other fields need to be sent in the response, declare them in this file.

public class JwtResponse implements Serializable {

	private final String jwttoken;

	public JwtResponse(String jwttoken) {
		this.jwttoken = jwttoken;
	}

	public String getToken() {
		return this.jwttoken;
	}
}

This class contains the Spring Security user details fields.

public class JwtUserDetails implements UserDetails {

	private String username;

	public Collection<? extends GrantedAuthority> getAuthorities() {
		return null;
	}

	public String getPassword() {
		return null;
	}

	public String getUsername() {
		return username;
	}

	public boolean isAccountNonExpired() {
		return false;
	}

	public boolean isAccountNonLocked() {
		return false;
	}

	public boolean isCredentialsNonExpired() {
		return false;
	}

	public boolean isEnabled() {
		return false;
	}

	public void setUsername(String username) {
		this.username = username;
	}
}

1.6 Create JwtUserDetailsService in service package.

This service validates the username and returns the user details object.

@Service
public class JwtUserDetailsService implements UserDetailsService {

	public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
		// Hardcoded demo user with a BCrypt-encoded password
		if ("admin".equals(username)) {
			return new User("admin", "$2a$10$slYQmyNdGzTn7ZLBXBChFOC9f6kFjAqPhccnP6DxlWXx2lPk1C3G6",
					new ArrayList<>());
		} else {
			throw new UsernameNotFoundException("User not found with username: " + username);
		}
	}
}
The server-side setup is now done; next we will move to the second step.

2. Creating React JS application and accessing rest API using JWT token.

Run the below command in command prompt to generate react application.

Command : npx create-react-app demo-app

After creating the application, use your preferred IDE to import it.

Understanding the React js Project Structure

The following screenshot shows the structure of the React JS project. Inside the src folder we are going to create the login.js, dashboard.js, and Interceptors files as below.


Here we have a hardcoded username and password; after a successful login, we receive a JWT token as a response from the server, which is saved in local storage.

import React, { Component } from "react";
import axios from "axios";

class login extends Component {
  constructor() {
    super();
    this.state = {
      username: "admin",
      password: "admin"
    };
    this.handleFormSubmit = this.handleFormSubmit.bind(this);
  }

  handleFormSubmit = event => {
    event.preventDefault();
    const endpoint = "http://localhost:8080/authenticate";
    const user_object = {
      username: this.state.username,
      password: this.state.password
    };, user_object).then(res => {
      // store the JWT token returned by the server in local storage
      localStorage.setItem("authorization",;
      return this.handleDashboard();
    });
  };

  handleDashboard() {
    axios.get("http://localhost:8080/dashboard").then(res => {
      if ( === "success") {
        this.props.history.push("/dashboard");
      } else {
        alert("Authentication failure");
      }
    });
  }

  render() {
    return (
      <div className="wrapper">
        <form className="form-signin" onSubmit={this.handleFormSubmit}>
          <h2 className="form-signin-heading">Please login</h2>
          <div className="form-group">
            <input type="text" className="form-control" placeholder="User name"
              value={this.state.username}
              onChange={e => this.setState({ username: })} />
          </div>
          <div className="form-group">
            <input type="password" className="form-control" placeholder="Password"
              value={this.state.password}
              onChange={e => this.setState({ password: })} />
          </div>
          <button className="btn btn-lg btn-primary btn-block" type="submit">
            Login
          </button>
        </form>
      </div>
    );
  }
}

export default login;


This is the home page of the application after logging in.

import React, { Component } from "react";

class dashboard extends Component {
  handleLogout() {
    // clear the stored token and go back to the login page
    localStorage.removeItem("authorization");
    window.location.href = "/";
  }

  render() {
    return (
      <div>
        <h1>WELCOME TO DASHBOARD</h1>
        <a onClick={this.handleLogout}
          className="d-b td-n pY-5 bgcH-grey-100 c-grey-700">
          <i className="ti-power-off mR-10"></i>
          <span style={{ color: "white" }}>Logout</span>
        </a>
      </div>
    );
  }
}

export default dashboard;


This is a global configuration that intercepts each request, adding an Authorization header with the JWT token stored in local storage.

var axios = require("axios");

export const jwtToken = localStorage.getItem("authorization");

  function(config) {
    // attach the stored JWT token to every outgoing request
    if (jwtToken) {
      config.headers["authorization"] = "Bearer " + jwtToken;
    }
    return config;
  },
  function(err) {
    return Promise.reject(err);
  }

export default axios;


This file is the entry component for the React application; a new route should be configured whenever a new page component is added, as below.

import React from "react";
import "./App.css";
import { BrowserRouter, Route } from "react-router-dom";
import interceptors from "../src/Interceptors";
import login from "./login";
import dashboard from "./dashboard";

function App() {
  return (
    <div className="App">
      <header className="App-header">
          <Route exact path="/" component={login} />
          <Route exact path="/dashboard" component={dashboard} />

export default App;

Note: Once all the files are added to the React application, start the Spring Boot application and then start the npm development server using the below command.

Command : npm start

Once the application has started, you can access the application using below url.

URL : http://localhost:3000


JWT Authentication URLs

You can send a POST request to http://domain-name:port/authenticate with the request body containing the credentials, for example:


{ "username": "admin", "password": "admin" }

The response contains the JWT token:

{ "token": "eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJyYW5nYSIsImV4cCI6MTU0MjQ3MjA3NCwiaWF0IoxNTQxODY3Mjc0fQ.kD6UJQyxjSPMzAhoTJRr-Z5UL-FfgsyxbdseWQvk0fLi7eVXAKhBkWfj06SwH43sY_ZWBEeLuxaE09szTboefw" }

Since this is a demo project, the application will work only for the hardcoded demo credentials (username "admin").

After a successful login, the application lands on the home page, which looks similar to the image below.


In this article, we added authentication to our React Js app. We secured our REST APIs in the server-side and our private routes at the client-side.

Kubernetes Security Practices on AWS

By | Blogs, Cloud, Cloud Assessment, Kubernetes | One Comment

Written by Praful Tamrakar Senior Cloud Engineer, Powerupcloud Technologies

Security in Cloud and Infra level

  1. Ensure the worker node AMIs meet the CIS benchmark. For the Kubernetes benchmark, several tools and resources can be used to automate the validation of a cluster against the CIS Kubernetes Benchmark.
  2. Verify that the Security Groups and NACLs do not allow all traffic, and that the rules allow access only to the ports and protocols needed for the application and SSH.
  3. Make sure that you have encryption of data at rest. Amazon KMS can be used for encryption of data at rest. For example:
  • EBS volumes for control plane nodes and worker nodes can be encrypted via KMS.
  • You can encrypt the log data, either in CloudWatch Logs or in S3, using KMS.
  4. If instances are behind an ELB, make sure you have configured the HTTPS encryption and decryption process (generally known as SSL termination) handled by the Elastic Load Balancer.
  5. Make sure the worker nodes and RDS are provisioned in private subnets.
  6. It is always best practice to have a separate Kubernetes (EKS) cluster for each environment (Dev/UAT/Prod).
  7. Ensure to use AWS Shield/WAF to prevent DDoS attacks.

Container Level

  1. Ensure to use a minimal base image (e.g., an Alpine image to run the app).
  2. Ensure that the Docker image registry you are using is a trusted, authorized, and private registry, e.g., Amazon ECR.
  3. Make sure you remove all unnecessary files from your Docker image. E.g., in a Tomcat server, you need to remove:
  • $CATALINA_HOME/webapps/examples
  • $CATALINA_HOME/webapps/host-manager
  • $CATALINA_HOME/webapps/manager
  • $CATALINA_HOME/conf/Catalina/localhost/manager.xml
  4. Ensure to disable the display of the app-server version or server information. For example, in the Tomcat server below, the server information is displayed. This can be mitigated using the procedure that follows.

 to an empty value ( in the file $CATALINA_HOME/lib/org/apache/catalina/util/

  5. Ensure not to copy or add any sensitive files/data into the Docker image; it is always recommended to use Secrets instead (K8s Secrets can be encrypted at rest from Kubernetes v1.13 onwards). You may also use another secret management tool of choice, such as AWS Secrets Manager or HashiCorp Vault.
    • E.g., do not put database endpoints, usernames, or passwords in the Dockerfile. Use K8s Secrets, and consume them as environment variables:
apiVersion: v1
kind: Pod
  name: secret-env-pod
  - name: myapp
    image: myapp
      - name: DB_USERNAME
              name: dbsecret
              key: username
      - name: DB_PASSWORD
              name: dbsecret
              key: password
      - name: DB_ENDPOINT
              name: dbsecret
              key: endpoint
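Inside the container, the application then reads these injected values from its environment rather than from files baked into the image; a sketch in Python (variable names match the env entries above):

```python
import os

def db_config() -> dict:
    # Values are injected by the secretKeyRef entries in the pod spec above
    return {
        "username": os.environ["DB_USERNAME"],
        "password": os.environ["DB_PASSWORD"],
        "endpoint": os.environ["DB_ENDPOINT"],
    }
```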

6. Ensure to disable Bash in the container images.

7. Endorse multi-stage builds for smaller, cleaner, and more secure images.

The Docker documentation on multi-stage builds explains how to leverage them.

8. Verify that the container images are scanned for vulnerabilities before they are pushed to the registry. AWS ECR has a "Scan on Push" feature at the repository level. Assessment tools such as Clair or Aqua can be used to scan images, and these tools can be embedded in the CI/CD pipeline so that if any vulnerability is found, the Docker image push is rejected/terminated.

K8s level

  1. Make sure to use or upgrade Kubernetes to the latest stable version.
  2. It's recommended not to use the default namespace. Instead, create a namespace for each application, i.e., separate namespaces for separate sensitive workloads.
  3. Make sure to enable Role-Based Access Control (RBAC) for Clients( Service Accounts / Users) for restricted privileges.

RBAC Elements:

  • Subjects: The set of users and processes that want to access the Kubernetes API.
  • Resources: The set of Kubernetes API Objects available in the cluster. Examples include Pods, Deployments, Services, Nodes, and PersistentVolumes, among others.
  • Verbs: The set of operations that can be executed to the resources above. Different verbs are available (examples: get, watch, create, delete, etc.), but ultimately all of them are Create, Read, Update or Delete (CRUD) operations.

Let's see the RBAC capabilities that make Kubernetes a production-ready platform:

  • Have multiple users with different properties, establishing a proper authentication mechanism.
  • Have full control over which operations each user or group of users can execute.
  • Have full control over which operations each process inside a pod can execute.
  • Limit the visibility of certain resources of namespaces.
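A minimal sketch of these elements in manifest form; all names here (the myapp namespace, pod-reader role, myapp-sa service account) are illustrative:

```yaml
# Role: allows the "get/watch/list" verbs on the "pods" resource in one namespace
kind: Role
  namespace: myapp
  name: pod-reader
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# RoleBinding: grants the Role to a service account (the "subject")
kind: RoleBinding
  name: read-pods
  namespace: myapp
- kind: ServiceAccount
  name: myapp-sa
  namespace: myapp
  kind: Role
  name: pod-reader
```

Pods running under myapp-sa can then list pods in the myapp namespace but perform no other API operations.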

4. Make sure to standardize the naming and labeling conventions of the Pod, Deployment, and Service. This will ease the operational burden of security management (Pod network policy).

5. Ensure to use Kubernetes network policies, which restrict pod communication, i.e., how groups of pods are allowed to communicate with each other and with other network endpoints. Please see the AWS documentation on how to implement network policies in Amazon EKS.

6. AWS Single Sign-On (SSO), AWS Managed Microsoft Active Directory Service, and the AWS IAM authenticator can be used to control access to your Amazon EKS cluster running on the AWS cloud.

7. Corroborate to use Pod Security Context.

  • Ensure to disable root access; the Docker image should run as a non-root user.
  • Make sure to configure a read-only root file system.
  • Security-Enhanced Linux (SELinux): You can assign SELinuxOptions objects using the seLinuxOptions field. Note that the SELinux module needs to be loaded on the underlying Linux nodes for these policies to take effect.
  • Make sure default and/or non-default Linux capabilities are added only if they are required.
  • Make sure not to run pods/containers as privileged unless you require access to all devices on the host. Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).

Please find the snippet for the Pod Security Context below:

  # Pod-level security context
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  # Container-level security context
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
        level: "s0:c123,c456"
          - NET_RAW
          - CHOWN
        add: ["NET_ADMIN", "SYS_TIME"]

Note: The Pod security context can be used at the pod level as well as the container level.

apiVersion: v1
kind: Pod
  name: security-context-demo-2
  # Pod level
    runAsUser: 1000
  - name: sec-ctx-demo-2
    # Container level
      runAsUser: 2000
      allowPrivilegeEscalation: false

8. Make sure to embed these Kubernetes Admission Controllers wherever possible.

  • AlwaysPullImages – modifies every new Pod to force the image pull policy to Always. This is useful in a multitenant cluster so that users can be assured that their private images can only be used by those who have the credentials to pull them.
  • DenyEscalatingExec – will deny exec and attach commands to pods that run with escalated privileges that allow host access. This includes pods that run as privileged, have access to the host IPC namespace or have access to the host PID namespace.
  • ResourceQuota – will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ResourceQuota object in a Namespace.
  • LimitRanger- will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the LimitRange object in a Namespace. Eg: CPU and Memory

10. Ensure to scan manifest files (yaml/json) for credentials passed in objects (Deployments, charts), using tools such as Palo Alto Prisma or the Alcide Kubernetes Advisor.

11. Ensure to use TLS authentication for Tiller when Helm is being used.

12. It is always recommended not to use the default service account.

  • The default service account has a very wide range of permissions in the cluster and should, therefore be disabled.

13. Do not create a Service Account or a User which has full cluster-admin privileges unless necessary,  Always follow Least Privilege rule.

14. Make sure to disable anonymous access and send Unauthorized responses to unauthenticated requests. Verify the following Kubernetes security settings when configuring kubelet parameters:

  • anonymous-auth is set to false to disable anonymous access (it will send 401 Unauthorized responses to unauthenticated requests).
  • kubelet has a --client-ca-file flag, providing a CA bundle to verify client certificates.
  • --authorization-mode is not set to AlwaysAllow, as the more secure Webhook mode will delegate authorization decisions to the Kubernetes API server.
  • --read-only-port is set to 0 to avoid unauthorized connections to the read-only endpoint (optional).

15. Ensure to put restricted access to etcd from only the API server and nodes that need that access. This can be restricted in the Security Group attached to ControlPlane.

K8s API call level

  1. Ensure that all communication from the client (pod/end user) to the K8s API server is TLS encrypted.
    1. Note that heavy API call volumes may be throttled.
  2. Corroborate that all communication from the K8s API server to etcd, the Kube Controller Manager, kubelet, worker nodes, kube-proxy, and the Kube Scheduler is TLS encrypted.
  3. Enable Control Plane API call logging and auditing (e.g., EKS Control Plane Logging).
  4. If you are using managed Kubernetes services such as Amazon EKS, GKE, or Azure Kubernetes Service (AKS), all of these concerns are taken care of.

EKS Security Considerations

  • EKS does not support Kubernetes Network Policies or any other way to create firewall rules for Kubernetes deployment workloads apart from Security Groups on the Worker node, since it uses VPC CNI plugin by default, which does not support network policy. Fortunately, this has a simple fix. The Calico CNI can be deployed in EKS to run alongside the VPC CNI, providing Kubernetes Network Policies support.
  • Ensure to protect EC2 instance role credentials and manage AWS IAM permissions for pods.
  • By using the IAM roles for service accounts feature, we no longer need to provide extended permissions to the worker node's IAM role so that pods on that node can call AWS APIs. We can scope IAM permissions to a service account, and only pods that use that service account have access to those permissions. This feature also eliminates the need for third-party solutions such as kiam or kube2iam.

Security Monitoring of K8s

Sysdig Falco is an open-source container security monitor designed to detect anomalous activity in your containers. Sysdig Falco taps into your host's (or node's, in the case of Kubernetes) system calls to generate an event stream of all system activity. Falco's rules engine then allows you to create rules based on this event stream, allowing you to alert on system events that seem abnormal. Since containers should have a very limited scope in what they run, you can easily create rules to alert on abnormal behavior inside a container.


The Alcide Advisor is a Continuous Kubernetes and Istio hygiene checks tool that provides a single-pane view for all your K8s-related issues, including audits, compliance, topology, networks, policies, and threats. This ensures that you get a better understanding and control of distributed and complex Kubernetes projects with a continuous and dynamic analysis. A partial list of the checks we run includes:

  • Kubernetes vulnerability scanning
  • Hunting misplaced secrets, or excessive secret access
  • Workload hardening from Pod Security to network policies
  • Istio security configuration and best practices
  • Ingress controllers for security best practices.
  • Kubernetes API server access privileges.
  • Kubernetes operators security best practices.


Automate and Manage AWS KMS from Centralized AWS Account

By | AWS, Blogs, Cloud, Cloud Assessment | No Comments

Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies

As discussed in our previous blog, we use the AWS Landing Zone concept for many of our customers; it consists of separate AWS accounts to meet the different needs of an organization. One of these accounts is the Security account, where the security-related components reside. KMS keys are one of the key security components that help with the encryption of data.

A Customer Master Key (CMK) is a logical representation of a master key which includes the following details:

  • metadata, such as the key ID, creation date, and description
  • key state
  • key material used to encrypt and decrypt data

There are three types of CMKs in AWS KMS:

  • Customer Managed CMK: CMKs that you create, own, and manage. You have full control over these CMKs.
  • AWS Managed CMK: CMKs that are created, managed, and used on your behalf by an AWS service that is integrated with AWS KMS. Some AWS services support only an AWS managed CMK.
  • AWS Owned CMK: CMKs that an AWS service owns and manages for use in multiple AWS accounts. You do not need to create or manage AWS owned CMKs.

This blog covers the automation of Customer Managed CMKs, i.e., how we can use CloudFormation templates to create them. It also discusses the strategy we follow for our enterprise customers to enable encryption across accounts.

KMS Encryption Strategy

We are covering the KMS strategy that we follow for most of our customers.

In each of the accounts, create a set of KMS keys for the encryption of data. For example:

  • UAT/EC2
    • To enable default EC2 (EBS) encryption, go to the EC2 dashboard Settings on the right-hand side.
    • Select “Always encrypt the EBS volumes”, change the default key, and paste the ARN of the UAT/EC2 KMS key.
  • UAT/S3
    • Copy the ARN of the UAT/S3 KMS key.
    • Go to the bucket Properties and enable Default Encryption with a custom AWS-KMS key. Provide the KMS key ARN from the Security account.
  • UAT/RDS
    • This key can be used while provisioning the RDS DB instance.
    • Ensure you provide the key ARN if using it cross-account.
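The same default-encryption setting can also be applied through infrastructure as code rather than the console. A minimal CloudFormation sketch (the key ARN below is a placeholder, not from the original post):

```yaml
# Hypothetical: an S3 bucket whose default encryption uses a KMS key
# from the Security account; the key ARN is a placeholder.
Resources:
  UatDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
```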

Automated KMS Keys Creation

The CloudFormation template below can be used to create a set of KMS keys as follows:

Make sure to replace the SECURITY_ACCOUNT_ID variable with the 12-digit AWS account ID of the Security account where the KMS keys will be created.
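The original template is not reproduced here; as a rough sketch of what such a template could look like, the following creates a single CMK with an alias, grants the local account root full access, allows a cross-account principal to use the key, and takes the external account ID as a parameter (resource names and policy scope are illustrative assumptions, not the actual template):

```yaml
# Hypothetical sketch of one CMK plus alias; the real template would
# repeat this pattern for each key (EC2, S3, RDS, OTHERS).
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - one Customer Managed CMK with an alias
Parameters:
  ExternalAccountID:
    Type: String
    Description: 12-digit account ID allowed to use the key cross-account
Resources:
  ProdS3Key:
    Type: AWS::KMS::Key
    Properties:
      Description: CMK for S3 default encryption
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          # Full administration for the key-owning account.
          - Sid: AllowAccountAdministration
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: kms:*
            Resource: '*'
          # Usage-only permissions for the external account.
          - Sid: AllowCrossAccountUse
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${ExternalAccountID}:root
            Action:
              - kms:Encrypt
              - kms:Decrypt
              - kms:ReEncrypt*
              - kms:GenerateDataKey*
              - kms:DescribeKey
            Resource: '*'
  ProdS3KeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: alias/PROD/S3
      TargetKeyId: !Ref ProdS3Key
```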

The CF Template does the following:

  • Creates the below KMS keys in the target account:
    • PROD/EC2
      • Used to encrypt the EBS volumes.
    • PROD/S3
      • Used to encrypt the S3 buckets.
    • PROD/RDS
      • Used to encrypt the RDS data.
    • PROD/OTHERS
      • Used to encrypt AWS resources other than EC2, S3, and RDS. For example, if an EFS file system needs to be created in the production account, the PROD/OTHERS KMS key can be used for its encryption.
  • In our case, we are using the Landing Zone concept, so the “OrganizationAccountAccessRole” IAM role used for switch-role access from the Master account is one of the Key Administrators.
  • Also, since we have enabled single sign-on in our account, the IAM role created by SSO (“AWSReservedSSO_AdministratorAccess_3687e92578266b74”) also has Key Administrator access.

The Key administrators can be changed as required in the Key Policy.

The “ExternalAccountID” parameter in the CloudFormation template is used to enable cross-account access via the KMS key policy.
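Note that the key policy grant alone is not sufficient: principals in the external account also need an IAM policy that allows the KMS actions on the key. A hypothetical managed-policy fragment for the consuming account (the key ARN is a placeholder):

```yaml
# Hypothetical IAM managed policy in the external account; cross-account
# KMS use requires this in addition to the key policy statement.
ExternalKmsUsePolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - kms:Encrypt
            - kms:Decrypt
            - kms:GenerateDataKey*
            - kms:DescribeKey
          Resource: arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
```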

Hope you found it useful.