Cloud Management with ‘x’Ops – Part 2


Compiled by Kiran Kumar, Business analyst at Powerup Cloud Technologies

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies

Summary:

Having introduced the term xOps in the preceding blog, this second post in the series takes a detailed look at DevOps and CloudOps. DevOps is a framework that contributes to the modernization of cloud business operations. Its objective is to unite IT development and operations to strengthen communication and build effective automation through continuous, agile methods that help businesses scale. CloudOps is an extension of DevOps and IT that looks to enhance business processes and systematize best practices for cloud operations. This blog covers the what, who and how of cloud management practices.

Index

1. Introduction

2. What is DevOps?

3. Stakeholders & DevOps Team

4. DevOps architecture – How it works?

4.1 Continuous integration and continuous delivery (CI/CD)

4.2 Continuous Development 

4.3 Continuous Testing 

4.4 Continuous Feedback

4.5 Continuous Monitoring

4.6 Continuous Deployment

4.7 Continuous Operations

5. Benefits of DevOps

6. What is CloudOps?

7. Building a CloudOps team

8. How CloudOps fits into DevOps?

9. Benefits of CloudOps 

10. Conclusion

Synergizing DevOps and CloudOps practices

1. Introduction

With the rapid emergence of cloud-based infrastructure and tools and a significant rise in technology spend, the global cloud computing market is expected to grow at a compound annual growth rate (CAGR) of 17.5% through 2025. A surge in the number of cloud users and in cloud usage has driven an upward trend in the IT sector.

Organizations that scale up their capabilities and look at continual modernization of software delivery are better equipped to release new features at high speed, gain more flexibility and agility, comply with regulations and accelerate time to market.

In the first part of the ‘x’Ops series, we saw how business operations and customer experiences can be enhanced by bringing teams together to strengthen communication while introducing automation techniques to build effective IT operations.

The development, security, network and cloud teams need to work jointly with IT operations to ensure reliability, security and increased productivity, giving rise to the term xOps.

The xOps umbrella consists of four major ops functions broadly categorized under cloud management and cloud governance that help run efficient cloud operations. 

In this blog, the 2nd in the series, we will take a detailed look at the cloud management section, which comprises DevOps and CloudOps practices.

2. What is DevOps?

DevOps is a model that enables IT development and operations to work concurrently across the complete software development life cycle (SDLC). Prior to DevOps, discrete teams worked on requirements gathering, coding, network, database, testing and deployment of the software. Each team worked independently, unaware of the inefficiencies and roadblocks that occurred due to the siloed approach.

DevOps combines development and operations to streamline the application development process on cloud while ensuring scalability and continuous, high-quality software delivery. It aims to build an agile environment of communication, collaboration and trust among IT teams and application owners, bringing a significant cultural shift to traditional IT methods.

3. Stakeholders & DevOps Team 

Christophe Capel, Principal Product Manager for Jira Service Management, states, “DevOps isn’t any single person’s job. It’s everyone’s job”.

The DevOps team comprises all resources representing the development and operations functions. People directly involved in the development of software products and services, such as architects, product owners, project managers, quality analysts, the testing team and customers, form part of the development side, whereas the operations side includes people who deliver and manage these products and services, for example system engineers, system administrators, IT operations, database administrators, network engineers, the support team and third-party vendors.

Some of the new roles emerging for DevOps are:

  • DevOps Engineer
  • DevOps Architect
  • DevOps Developer
  • DevOps Consultant
  • DevOps Test Analyst
  • DevOps Manager

4. DevOps architecture – How it works?

DevOps helps modernize existing cloud practices by implementing specific tools throughout the application lifecycle to accelerate, automate and enhance seamless processes that in turn improve productivity.

DevOps architecture enables collaborative cross-functional teams to be structured and tightly integrated to design, integrate, test, deploy, deliver and maintain continuous software services using agile methodologies to cater to large distributed applications on cloud. 

The DevOps components comprise of –

4.1 Continuous integration and continuous delivery (CI/CD)

The most crucial DevOps practice is to ship small yet frequent release updates. With continuous integration, developers merge frequent code changes into the main code line, while continuous delivery enables automated deployment of new application versions into production. CI/CD facilitates full automation of the software lifecycle, covering code builds, testing and faster deployment to production with few or no human errors, allowing teams to become more agile and productive.
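
To make this concrete, below is a minimal, illustrative Python sketch of the stage-by-stage automation a CI/CD pipeline performs. The stage names and shell commands are hypothetical placeholders, not any specific vendor's pipeline definition; in practice this logic lives in a dedicated CI/CD tool.

```python
# Minimal, illustrative sketch of a CI/CD stage runner.
# Stage names and commands below are hypothetical examples only.
import subprocess
import sys

STAGES = [
    ("build", ["python", "-m", "compileall", "src"]),    # compile-check the code
    ("test", ["python", "-m", "unittest", "discover"]),  # run the automated test suite
    ("package", ["tar", "czf", "app.tar.gz", "src"]),    # bundle the release artifact
    # A real pipeline would add a gated "deploy" stage per environment.
]

def run_pipeline() -> bool:
    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Stage '{name}' failed; stopping the pipeline.")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```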

4.2. Continuous Development 

The process of maintaining the code is called source code management (SCM), where distributed version control tools are preferred for managing different code versions. Tools like Git establish reliable communication among teams, enable writing as well as tracking changes in the code, provide notifications about changes, help revert to an earlier version of the code if necessary and store code in files and folders for reuse, making the entire process more flexible and foolproof.

4.3. Continuous Testing 

A test environment is simulated with the use of Docker containers, and automated test scripts are run on a continuous basis throughout the DevOps lifecycle. The reports generated improve the test evaluation process, help analyze defects, save testing time and effort and make the resultant test suite and UAT process accurate and user-friendly. TestNG, Selenium and JUnit are some of the DevOps tools used for automated testing.
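
As a simple illustration of a check that can run automatically on every commit, here is a hedged sketch using Python's built-in unittest module; the apply_discount function is a made-up example, while real suites built with Selenium, TestNG or JUnit follow the same run-on-every-change pattern inside the pipeline.

```python
# Illustrative automated test using Python's standard unittest module.
# apply_discount() is a hypothetical business rule used only for the demo.
import unittest

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()  # a CI job would typically run: python -m unittest discover
```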

4.4. Continuous Feedback

Continuous testing and integration ensure consistent improvements in the application code, and continuous feedback helps analyze these improvements. Feedback acts as a stepping stone for changes in the application development process, leading to newer and improved versions of the software.

4.5. Continuous Monitoring

Monitoring the performance of an application and its functionalities is key to resolving and eliminating common system errors. Continuous monitoring helps sustain the availability of services in the application, detects threats, determines the root cause of recurring errors and enables auto-resolution of security issues.

Sensu, ELK Stack, NewRelic, Splunk and Nagios are the key DevOps tools used to increase application reliability and productivity.
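
For illustration, the sketch below shows the kind of URL-uptime probe these monitoring tools automate at scale; the endpoint and timeout are assumptions, and production tools add dashboards, alert routing and historical storage on top of such checks.

```python
# Minimal sketch of a URL health check, using only the Python standard library.
# The endpoint and timeout are illustrative assumptions.
import time
import urllib.error
import urllib.request

ENDPOINT = "https://example.com/health"  # hypothetical health-check URL
TIMEOUT_SECONDS = 5

def check_endpoint(url: str) -> dict:
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as response:
            status = response.status
    except urllib.error.URLError as exc:
        print(f"ALERT: {url} unreachable: {exc}")
    latency_ms = round((time.monotonic() - start) * 1000, 1)
    return {"url": url, "status": status, "latency_ms": latency_ms}

if __name__ == "__main__":
    print(check_endpoint(ENDPOINT))
```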

4.6. Continuous Deployment

Most systems support automated and consistent deployment of code releases, scheduled updates and configuration management on all servers. A cloud management platform enables users to capture accurate insights, view optimization opportunities and analyze trends through dashboards.

Ansible, Puppet and Chef are some of the effective DevOps tools used for configuration management.
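
The sketch below illustrates the idempotent, desired-state principle these configuration management tools are built on: apply a change only when the actual state differs from the declared state. The file path and contents are hypothetical; real tools describe state declaratively (YAML or a DSL) and manage far more resource types.

```python
# Minimal sketch of idempotent, desired-state configuration management.
# The target file and its contents are hypothetical examples.
from pathlib import Path

DESIRED_CONFIG = {
    Path("/tmp/demo-app/app.conf"): "log_level=INFO\nmax_connections=100\n",
}

def ensure_file(path: Path, contents: str) -> str:
    """Create or correct the file only if it differs from the desired state."""
    if path.exists() and path.read_text() == contents:
        return "ok (unchanged)"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(contents)
    return "changed"

if __name__ == "__main__":
    for file_path, contents in DESIRED_CONFIG.items():
        print(file_path, "->", ensure_file(file_path, contents))
```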

4.7. Continuous Operations

DevOps allows teams to collaborate and automate the application release operations and its subsequent updates on a continuous basis. Development cycles in continuous operations get shorter, enable better monitoring and accelerate the time-to-market.  

5. Benefits of DevOps

  • Speed: The DevOps model enables developers and operations to easily adapt to changes, have faster releases, be more innovative and efficient at driving results.
  • Enhanced Delivery: The quicker the new releases and fixes, the faster the delivery response time. CI/CD practices help automate and enhance the software release process effectively.
  • Reliability & Security: All application changes and infrastructure updates are tested to ensure they are functional, faster and reliable. DevOps grants monitoring and logging checks of real-time application performances as well as conducts automated security validations to maintain control, adhere to configuration and compliance policies and boost user experience.
  • Scalability: Infrastructure as code helps manage complex and changing systems in a repeatable, automated, low-risk and efficient manner, enabling scalability of infrastructure and development processes.
  • Collaboration: DevOps culture promotes values, responsibility and accountability among teams, making them more efficient, collaborative and productive. 

LTI Canvas DevOps is a self-service DevSecOps platform for automated enablement and persona-based governance. It is a comprehensive suite of accelerators that enables continuous assessment, lean CI/CD automation and value stream management, providing a bird's-eye view into the entire DevSecOps lifecycle.

6. What is CloudOps?

Cloud operations, popularly known as CloudOps, is the rationalization of best practices and processes that allow IT operations and services housed on cloud to function and stay optimized in the long run.

According to a survey conducted by SiriusDecisions, 78% of organizations have already adopted agile methods for product development. However, for organizations to accelerate agility and attain cloud-smart status, it is crucial for DevOps and traditional IT operations to string together the processes and people maintaining the services.

Maintaining on-premise data centers, monitoring network and server performances and running uninterrupted operations were always a challenge in the traditional IT set-up. DevOps, through its containers, microservices and serverless functions, helps create agile processes for quicker delivery of reliable services, provides efficient orchestration and deployment of infrastructure and applications from any location, automates operations and allows scalability of resources whenever required without compromising on stability and security. 

That is where CloudOps comes into the picture and has the capability to offer speed, security and operational efficiency to the DevOps team making the system predictive as well as proactive while enhancing visibility and governance.

7. Building a CloudOps team

The first step is to determine whether an organization needs a cloud operations team. Identify and align all the roles and responsibilities spread across organization’s cross-functional teams who are already involved in strategizing cloud operations. 

Cloud adoption and cloud governance teams or individuals will support the cloud strategy team. Further, to address critical areas of security, cost, performance, deployment, adoption and governance, the cloud operations team need to align with other teams within the organization. This would require the cloud operations team to collaborate with the cloud strategy, cloud adoption, cloud governance and the cloud center of excellence teams to execute, implement and manage cloud operations functions. 

8. How CloudOps fits into DevOps?

Since CloudOps is an extension of DevOps and IT, it aims at building a cloud operations management suite to direct applications and data on cloud post-migration. According to the RightScale State of the Cloud Report, 94% of enterprises are using some type of cloud service, and the global cloud computing market is expected to grow to $832.1 billion by 2025.

CloudOps comprises governance tools that optimize costs, enhance security and support capacity planning. It also promotes continuous monitoring and management of applications running on cloud with minimal resources.

Cloud platforms offer DevOps the flexibility, scalability, recovery and the ability to dissociate from the existing infrastructure.

Built-in automated CloudOps techniques provide for agility, speed and performance-related metrics. They additionally facilitate smooth handling of service incidents to fix cloud infrastructure and application-related issues.

Combining CloudOps with DevOps initiates a faster CI/CD pipeline, guaranteeing continuous improvement, greater ROI with minimum risk, and consistent delivery on customer needs.

Once organizations implement CloudOps strategies within DevOps, the following best practices can be observed on cloud:

  • Plan and develop a cloud migration strategy keeping risks, costs and security in mind.
  • Understand current network infrastructure to map out an adaptive cloud-based technological process.
  • Bring in a cultural shift by changing the mindset and training the resources to help understand CloudOps aligned DevOps strategies.
  • Dispense self-provisioning tools that allow end-users to initiate applications and services without IT support.
  • Implement automated processes to test security configurations and establish compliance policies to ensure uniformity and stability across dynamic multi-cloud services and teams.
  • Automation organizes development workflows, and agile change management systems facilitate the seamless functioning of teams. Streamlining the change management process is essential to continuously improve and evaluate processes, increase accessibility and optimize productivity.

9. Benefits of CloudOps

  1. Cost-effective: Utilizing cloud service platforms minimizes hardware and infrastructure costs while also saving on resources, utilities and data center maintenance costs.
  2. Scalability & accessibility: Organizations can build up their cloud capacity as per their need. This allows teams to become more productive and shift focus to innovative cloud techniques. Also, teams can access and manage cloud operations from any location using any device, regardless of platform.
  3. Automation: Technology intelligence tools on cloud automate infrastructure provisioning, building codes, running quality assurance tests and generating reports that lead to faster time-to-market.
  4. Enhances security: Cloud ensures effective security monitoring and checks on cloud data and services; a survey by RapidScale found that 94% of businesses saw an improvement in security after moving to the cloud.
  5. Simplifies backup and disaster recovery: Cloud makes storage of data in multiple locations possible and offers several cloud-based managed services that are fault-tolerant and have failover alternatives to protect the data on cloud.
  6. Seamless integration: Applications sharing common services can co-exist in cloud without being interconnected. 
  7. Continuous operations: Cloud operations are made available 24/7 where software can be rapidly updated and deployed without disrupting any services.

10. Conclusion

Thus, CloudOps offers tremendous value to businesses while DevOps ensures continuous operations, directing organizations to optimize and enhance the way they build and deploy applications on cloud. CloudOps is most advantageous to enterprises from a technical point of view and partners with DevOps to upgrade their products and services. We will focus on cloud governance practices, including FinOps and SecOps, in the next blog of the xOps series.

How We Ushered a CCoE for a Top Finserv


Written by Hari Bhaskar, Head – Business Development, LTI

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies

Summary

A leading client was seeking to modernize its entire business, for which we employed our CCoE practices as a roadmap to direct the cloud adoption journey in three stages. The CCoE team began with a migration readiness assessment and planning (MRAP) exercise, followed by consulting, migration, deployment and managed services across all business functions.

Index

1. Preface

2. Business Needs

3. Solutions devised by the CCoE team

3.1 Building a case for cloud

3.2 Migration Readiness Assessment and Planning (MRAP)                         

3.2.1 Pre-requisites

3.2.2 Assessment

3.2.3 MRAP Deliverables

3.2.4 Key Findings

3.3 Migration

3.3.1 Security Consulting

3.3.2 Database Consulting 

3.3.3 Data Migration

3.3.4 API Gateway Deployment

3.3.5 Implementing DevOps     

3.4 Deployment and Managed Services

3.4.1 Deployment

3.4.2 Managed Services

3.4.2.1 Service Monitoring 

3.4.2.2 Security Management

3.4.2.3 Backup Management

3.4.2.4 Alerts and Report Management

3.4.2.5 DevOps Support

3.4.2.6 Continuous Integration

3.4.2.7 Reviews

3.4.2.8 Value Added Services

4. Control Handover

5. Benefits and Outcome

CCoE as-a-service – A Case Study

1. Preface

In the previous three blogs of this CCoE series, we have looked extensively at what a CCoE is, why organizations need it, the factors influencing an effective CCoE set-up and where organizations can implement one. In this blog, the 4th in the series, let us look at the LTI-Powerup CCoE practices applied for one of its esteemed clients to direct their cloud transformation.

The client is a leading name in the finance industry offering a broad range of financial products and services to a diversified customer base. They have a sizable presence in the large retail market segment through their life insurance, housing finance, mutual fund and retail businesses across domestic and global geographies.

Cloud computing has gained significant momentum in the financial sector and the client is looking at modernizing their technological profile across all business functions.

  • However, the current setup was heavily fragmented, with 27 LOBs and independent IT strategies
  • The client lacked the nimbleness of newer players in the market, with slow service launches
  • IT compelled business to accept SLAs that were not ideal
  • IT leaders were not sure how to quantify cloud benefits

LTI-Powerup exercised their CCoE implementations as a roadmap and deployed a team to regulate the client’s entire cloud adoption journey. This team followed the Build-Operate-Transfer (BOT) model, moving on to steadily transition the core operations to the client. Once the client team was well versed with cloud deployment & operations, they and LTI-Powerup jointly managed Cloud solutioning and L1, L2 and L3 capabilities in due course of time.

2. Business Needs

The LTI-Powerup CCoE was engaged to –

  • Offer Consulting to build a business case for the cloud. Propose and carry out a cloud readiness assessment in order to understand how well prepared the client is for the technology-driven transitional shift.
  • Impart Cloud Migration Solutions
  • Provide a transformation roadmap for applications and services
  • Provide Deployment and Managed Services

3. Solutions Devised by the CCoE Team

3.1 Building a case for cloud

The LTI-Powerup CCoE team was to build a business case that would help in the smooth transition of the client's current set-up onto cloud. This involved carrying out an assessment of the client's existing applications and operations, detecting improvement areas to enhance the transition, and identifying all the key stages of cloud adoption and the costs associated with them. The team was also to analyze existing and anticipated future workloads to create the best migration plan and accommodate any increase in workload demands. The basic intention was to enhance customer experiences, win the support of key stakeholders and reap the maximum benefits and savings from moving to cloud.

3.2. Migration Readiness Assessment and Planning (MRAP):

3.2.1 Pre-requisites

LTI-Powerup CCoE team’s scope of services was to define and plan a business case for the Migration Readiness Assessment & Planning (MRAP) to assess and analyze the client’s current on-premise environment to state how well equipped it is to migrate to the cloud. An MRAP report would then be drafted which would act as a roadmap to the actual migration. 

The team planned the MRAP exercise to understand the number and type of applications involved, identify the right stakeholders for interviews, tools to be installed, different types of installations and creation of project plan. The application migration assessment covered 2 Data Centers, 1500 VMs and around 500 applications in all.

Application discovery services were configured to help collect hardware and server specification information, credentials, details of running processes, network connectivity and port details. They also helped acquire a list of all on-premise servers in scope along with their IP addresses, the business application names hosted on them and the stakeholders using those apps. The scope ensured that the CCoE team also looked into and advocated prediction analysis of future workloads.

3.2.2. Assessment

Once the tool was deployed with the necessary access, servers were licensed to collect data for a span of 2 weeks before building and grouping the applications in scope.

The team conducted assessment and group interviews with the application, IT, security, DevOps and network teams to bridge gaps, if any. A proposed migration plan was to be developed post-analysis that would state the identified migration patterns for the applications in scope and create a customized or modernized target architecture to plan a rapid lift & shift migration strategy.

3.2.3. MRAP Deliverables

A comprehensive MRAP report included information on the overall current on-premise infrastructure, the architecture of all identified applications, suggested migration methodology for each application which included leveraging PaaS solutions, key changes to the application, a roadmap with migration waves, total cost of ownership (TCO) analysis and an executive presentation for business cases.

The CCoE, in coordination with the client team, set up a framework to create, automate, baseline, scale and maintain a multi-account environment. This is considered a best practice usually recommended before deploying applications on the cloud. The architecture catered not just to application and network deployment but also covered non-functional requirements, data security, data sizes, operations, monitoring and logging. The production environment was isolated, as the customer had several applications running development, test and production from the same account.

3.2.4. Key Findings

  • The infrastructure provisioned was utilized to only 30% of its capacity.
  • Close to 20% of servers were already outdated or turning obsolete within the next year, and another 20% of applications could be moved to new architectures to save cost.
  • Databases were being shared across multiple applications, while several applications were found to be running on the same servers, with servers shared across various lines of business.
  • Up to 50% savings on TCO could be achieved over the next 5 years by moving to the cloud.

Above is a snapshot of how our team performed and recorded the peak utilization assessment for prediction analysis. This assisted the client in gaining clearer visibility into future demands on cloud and in planning the provisioning of a strategic build-up on the cloud.
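
For illustration only, the sketch below shows a simple linear trend projection of peak utilization, the kind of prediction analysis referred to above; the monthly figures are made-up sample data, not the client's measurements, and a real assessment would use richer models and seasonality.

```python
# Illustrative peak-utilization trend projection using ordinary least squares.
# The monthly_peak_cpu values are sample data for demonstration only.
def linear_trend(values):
    """Fit y = slope * x + intercept over evenly spaced samples."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

if __name__ == "__main__":
    monthly_peak_cpu = [38, 41, 40, 45, 47, 52]  # % utilization, sample data
    slope, intercept = linear_trend(monthly_peak_cpu)
    months_ahead = 6
    forecast = slope * (len(monthly_peak_cpu) - 1 + months_ahead) + intercept
    print(f"Projected peak CPU in {months_ahead} months: {forecast:.1f}%")
```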

3.3. Migration:

The agreement between LTI-Powerup and the client was to provide cloud CoE resources to form a consulting services pod to help the customer understand and adopt cloud services. Once the CCoE consultants suggest the design and development of a cloud team, they collaborate with cross-functional teams to check whether the proposed architecture, cloud applications and processes are the right fit. Based on the MRAP findings, they put forward any alterations necessary before moving on to migrating existing data or legacy workloads. Analysis and recommendations are also provided based on the parameter-wise assessment done for future requirements and expansion purposes, where modernized techniques like DevOps and containerization are suggested as required. The team is also responsible for training the technical as well as the non-technical workforce to ensure smooth cloud migration operations.

3.3.1. Security consulting

The scope of this engagement is for the CCoE team to help the customer design a security framework on the cloud. LTI-Powerup understood the list of applications, their current account and traffic mapping to categorize all those applications that had common security and compliance requirements.

A security framework was designed incorporating all the identified security components, after which, the existing setup was validated to recommend changes. The team also developed the scope of work for implementation of the same.

3.3.2. Database Consulting

LTI-Powerup database professionals detailed the scope of work and the responsibility matrix for database services. The team offered consultative services for database administration and suggested regular database health checks, maintaining database server uptime, configuring security, users and permissions, as well as performing backups and recovery.

3.3.3. Data Migration

The client had been running their data analytics and prediction workloads in an on-premise cluster, which took approximately 12 hours to process data. Therefore, they wanted to validate the move to the cloud with the intent of reducing costs as well as processing time.

With their cloud-first initiative in mind, LTI-Powerup deployed its CCoE team to evaluate the feasibility of having all their systems tested and working on an infrastructure-as-code basis to ensure the cloud meets all their future expansion plans.

The cloud solutions architects along with senior cloud engineers and data architects understood the existing setup and designed, installed and integrated the solutions on cloud using managed cluster platforms where highly available cluster configuration management, patching and cross-data-center replication were undertaken.

Solutions were also customized wherever required and data was then loaded onto a cloud data warehouse with provisions for backup, disaster recovery, upgrade, perform on-demand point-in-time restores, continuous support and maintenance.

3.3.4. API Gateway Deployment

The client was planning to deploy their APIs on cloud as well as create an open API to cater to external developers across their business groups with robust multi-factor authentication policies. The scope of this engagement between LTI-Powerup and the customer was to provide cloud resources to help understand and adopt cloud API services.

The LTI-Powerup CCoE team proposed API configuration and deployment solutions on the cloud along with the required Identity and access management (IAM) roles.

The proposed solutions covered all the APIs being deployed. They included API gateway usage plans to control each API's usage; caching was enabled for faster access, scripts were run to deploy the developer portal, and multiple cloud services were initiated to host web pages as well as store API keys for user mapping. User pools with roles to grant access to the portal when required were also created. Additionally, the APIs were integrated on the cloud to generate customer-wise API usage billing reports to keep a check on costs, while the client team documented each API gateway for future reference.
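
As an illustration of how a consumer interacts with APIs published this way, the sketch below calls an endpoint behind a gateway using a per-consumer API key, the credential that usage plans meter against. The URL, header name and key value are hypothetical placeholders, not the client's actual gateway configuration.

```python
# Minimal sketch of calling an API behind a gateway with a per-consumer key.
# URL, header name and key are hypothetical placeholders.
import json
import urllib.request

API_URL = "https://api.example.com/v1/orders"  # hypothetical gateway endpoint
API_KEY = "replace-with-issued-key"            # key issued to each consumer

def call_api(url: str, api_key: str) -> dict:
    request = urllib.request.Request(url, headers={"x-api-key": api_key})
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    try:
        print(call_api(API_URL, API_KEY))
    except Exception as exc:  # e.g. a 403 once the key exceeds its usage plan quota
        print("Request failed:", exc)
```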

3.3.5. Implementing DevOps

The client had a 3-tier infrastructure for its applications based on a content management system (CMS) framework with close to 10 corporate websites and open-source relational database management systems running on cloud.

However with increasing workloads, the demand for hardware and infrastructure will certainly scale up. To cater to such rising demands, the CCoE team conducts a predictive analysis that helps analyze the current data to make predictions for the future. The team then gives out recommendations to the client with respect to building Greenfield applications on the cloud to accommodate for future workloads.

The client in this case wanted to build a CI/CD pipeline with the help of cloud services for their future workloads. The CCoE team recommended DevOps solutions to optimize and modernize the client's setup, which was spread across varied cloud and on-premise environments with multiple deployments in store. In this approach, given the variety of connectors to be integrated with various stages of the pipeline, the entire orchestration of the CI/CD pipeline was to be managed on cloud, right from storing the cloned code to building the application code, integrating with other tools, code analysis, application deployment on the beta environment, testing and validation.

The code pipeline on the cloud would also facilitate customization at every stage of the pipeline as needed. Once validation was done, the applications were deployed to production instances on cloud with manual and automatic rollback support in case of a failure or application malfunction.

3.4. Deployment and Managed Services 

3.4.1. Deployment

The scope of this engagement between LTI-Powerup and the customer was to build a team to understand and deploy cloud services for the various lines of business adopting cloud. The CCoE recommended deployment team consisted of the deployment engineers and the consulting experts who looked after cloud servers, OS, API gateway deployment and development, scripting, database, infrastructure automation, monitoring tools, hardware, continuous integration, docker orchestration tools and configuration management.

3.4.2. Managed Services

The scope of the LTI-Powerup team was to:

  • Provide 24*7 cloud support for production workloads on cloud
  • Provide security operational services for applications hosted on the cloud
  • Provide cost optimization services
  • Ensure automation and continuous optimization

After the introductory onboarding process, LTI-Powerup, in discussions with the client IT team, provided a blueprint of the complete architecture and was responsible for detecting failure points, servers and databases without backup schedules, missing version control mechanisms for backups and the absence of a high-availability set-up that would otherwise lead to single points of failure (SPOF).

The CCoE team, comprising the service delivery manager, lead cloud support engineer and technical lead, prepared the escalation matrix, the SPOC analysis, a shared plan to automate backups and fix security loopholes, alert systems, a service desk, client-sanctioned security measures and metrics, cloud service monitoring agents and a helpdesk.

The day-to-day tasks performed by cloud operations team in the customer environment were:

3.4.2.1. Service Monitoring

The DevOps team supported continuous monitoring of cloud infrastructure health, including CPU, memory and storage utilization, URL uptime and application performance. The team also monitored third-party SaaS tools and applications that were integrated into the cloud. Defects, if any, were raised as tickets in the helpdesk, client communication was sent out, and logged issues were resolved with immediate effect based on severity.
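
The sketch below illustrates the kind of threshold check such monitoring automates for CPU and memory utilization. The thresholds and the raise_ticket() helper are hypothetical stand-ins for the actual monitoring agent and helpdesk integration, and psutil is a third-party library (pip install psutil).

```python
# Illustrative resource-threshold check; thresholds and ticketing are placeholders.
import psutil  # third-party: pip install psutil

CPU_THRESHOLD = 85.0     # percent
MEMORY_THRESHOLD = 90.0  # percent

def raise_ticket(message: str) -> None:
    """Placeholder for creating a helpdesk ticket or paging the on-call engineer."""
    print("TICKET:", message)

def check_host() -> None:
    cpu = psutil.cpu_percent(interval=1)
    memory = psutil.virtual_memory().percent
    if cpu > CPU_THRESHOLD:
        raise_ticket(f"CPU utilization high: {cpu:.1f}%")
    if memory > MEMORY_THRESHOLD:
        raise_ticket(f"Memory utilization high: {memory:.1f}%")
    print(f"cpu={cpu:.1f}% memory={memory:.1f}%")

if __name__ == "__main__":
    check_host()
```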

The LTI-Powerup DevOps team would thus provide L0, L1, L2 and L3 support, covering both infrastructure and application. Any L3 issues were to be escalated to the cloud vendor, and LTI-Powerup would follow up with the cloud platform support team for resolution on an ongoing basis.

3.4.2.2. Security Management

Security in cloud was a shared responsibility between the client, cloud provider and LTI-Powerup managed services team with the latter taking complete responsibility for the infrastructure and application security on behalf of the client. Security components for the cloud could be broadly classified into native cloud components and third-party security tools.

The managed services team conducted a monthly security vulnerability assessment of the cloud infrastructure with the help of audit tools, remediated the issues and maintained security best practices. The team also owned and controlled the IAM functions, multi-factor authentication of user accounts, VPN, server and data encryption, managed SSL certificates for the websites, inspected firewalls, enabled and monitored logs for security analysis, resource change tracking and compliance auditing on behalf of the customer.

The managed services team strongly recommended detection and prevention of DDoS attacks on websites and portals and took charge of implementing anti-virus and malware protection as well as monitoring and mitigating DDoS attacks, if any.

3.4.2.3. Backup Management

The LTI-Powerup DevOps team would continuously monitor the status of automated and manual backups and record the events in a tracker. If the customer used a third-party backup agent, the servers running the backup master server were monitored for uptime. In case of missed automatic backups, the team notified the client, conducted a root cause analysis of the error and proceeded to take manual backups as a corrective step. Backup policies were revisited every month with the client to avoid future pitfalls.
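
As a simplified illustration of this tracking-and-notification loop, the sketch below records backup outcomes in a CSV tracker and flags jobs that have missed an assumed daily schedule; the job data and the notify() helper are hypothetical placeholders for the actual backup tooling and client communication channel.

```python
# Illustrative backup-status tracker; schedule, jobs and notify() are placeholders.
import csv
from datetime import datetime, timedelta, timezone

BACKUP_SLA = timedelta(hours=24)  # assumed daily backup schedule

def notify(message: str) -> None:
    """Placeholder for mailing the client or opening a helpdesk ticket."""
    print("NOTIFY:", message)

def check_backups(last_success: dict, tracker_path: str = "backup_tracker.csv") -> None:
    now = datetime.now(timezone.utc)
    with open(tracker_path, "a", newline="") as tracker:
        writer = csv.writer(tracker)
        for job, finished_at in last_success.items():
            missed = now - finished_at > BACKUP_SLA
            writer.writerow([now.isoformat(), job, finished_at.isoformat(),
                             "MISSED" if missed else "OK"])
            if missed:
                notify(f"Backup '{job}' has not succeeded in the last 24 hours")

if __name__ == "__main__":
    # Sample data standing in for results pulled from the backup tool.
    sample = {"db-prod": datetime.now(timezone.utc) - timedelta(hours=30)}
    check_backups(sample)
```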

3.4.2.4. Alerts and Reports Management

Alerts were configured for all metrics monitored at cloud infrastructure and application levels. The monitoring dashboards were shared with the client IT team and alerts triggered for cloud hardware, database or security issues were logged as a ticket and informed to the customer’s designated point of contact. In case of no acknowledgment from the first point of contact, the LTI-Powerup team would escalate to the next level or business heads. The client had access to the helpdesk tool to log or edit change requests for the devOps team to take appropriate action.

3.4.2.5. DevOps Support

A typical devOps support enabled the import of source code from another repository, integrated relevant extensions to the code bundle and allowed zipping and moving it to the respective directory, also facilitating ‘one click’ automated deployments for applications.

In this case, for the production environment, manual checks were carried out to identify the correct environment and servers for deployment followed by verification of event logs and status URL. Rollback, if needed, was executed using snapshots taken during the beginning of the deployment process.

3.4.2.6. Continuous Integration

The managed services team enabled a seamless CI model by managing a standard single-source repository, automating the build and testing it, while also tracking commits for the build integration machines. Transparency in the build progress ensured successful deployments, and in case a build or test failed, the CI server alerted the team to fix the issue, enabling continuous integration and testing throughout the project.

3.4.2.7. Reviews

The LTI-Powerup service delivery manager conducted monthly review meetings with the client team to discuss total downtime for the previous month, total tickets raised and resolved, best practices implemented, incident and problem management, and lastly, lessons learned for continuous improvement.

3.4.2.8. Value-added services

The LTI-Powerup CCoE team would handle cloud administration and governance on behalf of the client to ensure all deployment activities, such as data center account management, IAM access management and controls, billing consolidation and analysis, as well as proposing new governance strategies from time to time, are accomplished to high standards.

4. Control Handover

LTI-Powerup handover process allows the clients to steadily take over the control of their cloud setup.

A dedicated training unit powered by LTI migration and modernization experts facilitates smooth conduct of cloud skills training by offering to train the employees on multiple facets of the cloud transformation lifecycle.

To begin with, the client will participate in a ratio of 3:1, with 1 client employee working alongside 3 LTI-Powerup cloud resources to manage cloud operations, eventually reversing to 1:3 (LTI:client) in the final year. This enables the client's cloud team to gain hands-on experience in migration-related activities over a period of 3 years.

Similarly, the cloud transformation PODs will have participation from client employees in a ratio of 2:1, reversing to 1:2 (LTI:client) in the last year, which enables the client's cloud team to be better equipped to handle complex cloud transformation processes over a span of 3 years.

For cloud managed services, teams handling the client's workloads will have participation from client employees in a ratio of 2:1, reversing to 1:2 (LTI:client) in the last year. This ensures the client's cloud team is fully proficient in 24*7 cloud management services, including the management of CloudOps, DevOps, FinOps and SecOps.

5. Benefits and Outcome

The CCoE implementation by LTI-Powerup enabled the client to experience agility, flexibility, and faster go-to-market strategic solutions through PaaSification allowing the client to achieve 40% reduction in time to market for their products and services. The cloud transformation enhanced business performance and triggered intangible benefits to a large extent.

Significant savings in infrastructure and operational costs led to cost optimization, reducing the projected TCO by a notable 45% over a period of 5 years.

Centralized cloud management eliminated duplication of effort and overheads, while the cloud implementation and tools accelerated migration by 60%, reduced the migration bubble and leveraged a comprehensive, state-of-the-art setup.

How to Position your CCOE


Written by Vinay Kalyan Parakala, SVP – Global Sales (Cloud Practice), LTI

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies

Summary:

A Cloud CoE is a well-known concept in organizations today, but the role it plays in accelerating cloud-enabled transformation is still evolving compared to the rate at which industries are adopting cloud. We saw in our previous blog what a CCoE is and how more and more organizations are realizing the need and importance of establishing a Cloud Centre of Excellence (CCoE), also known as a Cloud Business Office or Cloud Strategy Office, to scale their cloud journeys.

This blog, the second in the series, will emphasize the factors to be considered while setting up a CCoE, the challenges that come with it, and when and where organizations must actually consider implementing one.

Index:

1. Introduction

2. The concept of a centralized CCoE

3. Factors that establish an effective CCoE

3.1 Align the goal and purpose of CCoE to business needs

3.2 Ascertain the CCoE team structure

3.3 Create the cloud governance roadmap

3.4 Set a long-term vision

4. The key challenges in building a CCOE

5. Where can CCOEs be hosted?

6. Conclusion

1. Introduction

According to RightScale, 66% of enterprises already have a central cloud team or a cloud CoE and another 21% plan on having one in the near future.

A CCoE is a team of experts whose objective is to provide organizations with guidance and governance around cloud systems.

Cloud CoEs are focused mainly around technology, business concepts and skills in order to gain the right structure and expertise within enterprises.

It helps bridge the gap between the available knowledge and skills vis-à-vis what is required in order to establish matured cloud centric operations for businesses.

With the previous blog revealing what a CCoE is and why it is important and necessary, we now move on to this second blog in the series, which will emphasize the key components required to form a CCoE and where organizations should implement it.

2. The concept of a centralized CCoE

Let us first understand that the CCoE is an enterprise architecture function that aids in setting up organization-wide cloud computing policies and tools to protect businesses and mainstream cloud governance. A centralized CCoE, supported by an advisory committee and a community of practice, is the optimal way to sync people, process and technology, and a best-practice approach to ensure cloud adoption success.

Some organizations are reluctant to form CCoEs either due to lack of priority or the belief that their level of cloud usage doesn’t justify the effort or they have a cloud set up that moves too quickly to operationalize the process.

However, a centralized CCoE is a must as it monitors the overall governance directing cloud-computing policies, selecting and managing the right cloud providers, provisioning cloud solution architecture and workload deployment, regulating security and compliance, optimizing costs and bringing in best practices to drive organizations towards cloud maturity.

Thus, such well-structured CCoE frameworks help companies to not just minimize and manage their risks better but also help them become more result-oriented and agile in their cloud-lead IT transformations.

3. Factors that establish an effective CCoE

3.1 Align the goal and purpose of CCoE to business needs

The cloud migration strategy will mainly influence the objective of the CCoE based on which the CCoE will identify the stakeholders that need to come together to ensure cloud objectives are defined, measured and aligned to business goals. For example, the CCoE for an organization that largely replaces applications with SaaS will be different from the CCoE of an organization that is rewriting applications, where the latter will focus on application engineering and the former on application integration.

It is also vital that organizations assess themselves in terms of cloud security, compliance, finances and operations so that the CCoE knows what areas need maximum focus to begin with.

Once a set of goals is defined and aligned to business needs, it should be able to answer questions like who should be part of the CCoE, what are the outcomes being targeted, how mature is our functioning and who should lead the CCoE. The intent of a CCoE must orient with the type of business strategies and its needs.

3.2 Ascertain the CCoE team structure

The executive sponsor and a group of cross-functional leaders who form a steering committee drive the CCoE leadership activities. They aid in strategizing and decision making processes, approving roadmaps, adopting standards and defining compliance policies, thus adding visibility and structure to the cloud migration program. They also have the authority to initiate, build and communicate with an association of CCoE representatives and various stakeholders across the enterprise to define and create models of the CCoE structure, roles and responsibilities.

For complex organizations, apart from a high-level model like a head CCoE team, there will be individual roles and responsibilities defined for each functional area within the organization that will report into this high-level model. The CCoE roles will evolve over time as the organization as a whole matures in its cloud capabilities. Therefore, it is important, especially in large organizations, to set maturity goals and roadmaps, define the cloud maturity model and measure it with KPIs, to ensure cross-functional teams also progress incrementally towards the desired state of maturity after which a cloud governance model will have to be defined.

3.3 Create the cloud governance roadmap

As the team grows, it is essential for companies to draft a vision, guidelines and strategies pertaining to governance policies.

The roadmap must cover objectives that cater to the people, process and technology.

The first step would be building a community of practice to culturally change people within organizations to adapt better to cloud while taking responsibility and ownership to fulfill the purpose of a CCoE. Reforming the processes would include identifying workloads that can be migrated, establishing architectural standards and best practices, implementing monitoring as well as reporting systems, and handling disaster recovery, business continuity strategies and configuration management practices.

Lastly, and most importantly, introduce tools to define and enforce security and compliance policies, automate workflows, optimize capacity and enhance architectural implementations.

3.4 Set a long-term vision

Once the CCoE team establishes itself as an effective resource across the business, consider a longer-term roadmap that involves driving a “cloud-first” approach, migrating more complex applications, gaining credibility by enhancing the existing policies and practices and obtaining funding and sponsorship to optimize cloud governance.

Establish cloud best practices for organizations to create knowledge and code repositories and learning material for trainings that would act as a guide to speed up the cloud operations while ensuring security, scalability, integrity and performance.

4. The key challenges in building a CCOE

The most common structural CCoE issues are:

  • Lack of directives: It is essential for enterprises to define and understand CCoE goals and intentions in order to stay focused.
  • Incorrect scope: Goals must match the expertise of the CCoE team. It is best to begin with a small team with a minimal set of objectives and eventually scale up as the team becomes more efficient. Unreasonable expectations may compromise projects and the team's performance.
  • Delays in cloud adoption: As the CCoE drives cloud adoption, it is imperative that there are no delays, failing which the team's ability and efficiency might be questioned.
  • Focus on governance rather than control: A CCoE can guide and contribute constructively if it adopts a flexible and adaptive cloud approach. Rather than focusing on control, CCoEs must provide businesses with apt processes to maintain and scale up cloud practices through governance strategies.
  • Lack of flexibility: Most organizations are diverse when it comes to innovation and are willing to experiment with emerging technologies. A CCoE must not adopt a one-size-fits-all approach to cloud computing guidelines and technological preferences. Organizations should be open to embracing a dynamic and flexible outlook to keep pace with cloud advancements and ever-changing governance needs.

5. Where can CCoEs be hosted?

As per the latest reports from CloudCheckr, an advanced technology partner in the AWS Partner Network (APN) program, 47% of organizations have formed a CCoE of some kind and 63% have added new roles to customize and improve their CCoE practices.

CCoEs can be hosted depending on the type of organization, its size, and the capability of its resources. If resources are cloud proficient and are technically sound and engaging, then cloud CoE team members can be staffed internally. 

If the organization is a small or mid-sized set up with minimal or zero cloud expertise, it can host an on-premise centralized CCoE team that serves as a cloud service agent or a consultant acting in an advisory role to the organization’s central and distributed IT and cloud service users.

Furthermore, if the scope of cloud migration is part of a large enterprise conglomerate's digital transformation, then the scope of the CCoE will also broaden and must comprise cross-functional business units and stakeholders across the enterprise. Such organizations may opt to outsource the CCoE functions entirely to external vendors or adopt a hybrid approach.

In a hybrid CCoE set up, the centralized CCoE team comprises in-house technologists and resources from cross-functional teams across the organization as well as external cloud consultants to look after its entire cloud practices. 

A single consolidated team is not enough to cater to such a vast set-up; it requires business unit-wise (BU-wise) CCoE teams to manage and look after their respective BUs. These independent CCoE teams can then collectively roll up into one chief CCoE team.

A recent study reveals that the hybrid cloud is the weapon of choice for 45% of enterprises and 60% of government agencies use internal resources to lead their cloud migration projects while 40% hire an external service provider.

AWS Head of Enterprise strategy, Stephen Orban, explains that creating a CCoE in his past role as the CIO helped him dictate how he and his team could build and execute their cloud strategy across the organization. He said, “I knew from seeing change-management programs succeed and fail throughout my career that having a dedicated team with single-threaded ownership over an organization’s most important initiatives is one of the most effective ways to get results fast and influence change.”

6. Conclusion

CCoEs play an essential role in the development and measurement of the cloud business success.

83% of organizations admit that with a CCoE in place, their business productivity has significantly improved, and 96% believe they would benefit from one without doubt. The top reported benefits of a CCoE include reducing security risks (56%), reducing costs (50%) and improving the ability to be agile and innovative (44%).

Thus, if CCoEs are built in an appropriate manner, organizations can –

  • Use their resources in a more efficient way,
  • Provide quality services and products to customers,
  • Reduce costs by eliminating inefficient practices and
  • Cut the time required for the implementation of new technologies and skills, helping achieve consistency as well as reduce complexity.

Stay tuned (follow us) for the next blog in this series, where we will have a look at the detailed structure of a CCoE with the roles, responsibilities, resources, and technical requirements.

CCoE-as-a-Service


Written by Vinay Kalyan Parakala, SVP – Global Sales (Cloud Practice), LTI

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies

Summary:

With the significant growth in cloud markets, it is necessary that organizations inculcate cloud governance and operational excellence through a dedicated Cloud Centre of Excellence (CCoE). It helps streamline and strengthen businesses by executing governance strategies across the infrastructure, platform and software as a service cloud models.

Every organization essentially needs to adopt the type of CCoE that best fits it, in order to modernize its business and progress alongside continuously evolving technologies and innovation. CCoEs are internal and external teams built across finance, operations, security, compliance, architecture and human resources functions that align cloud offerings with organizational strategies.

Therefore, to facilitate effortless migrations along with agility, flexibility, cost optimization and multi-cloud management, and to understand how enterprises can structure and standardize their operations, it is important to establish a CCoE that will lead organizations through an effective cloud transformation journey.

Index:

  1. What is Cloud Centre of Excellence (CCoE)?
  2. The need for CCoE
  3. Types of CCoE 
    1. Functional CCoE
    2. Advisory CCoE
    3. Prescriptive CCoE
  4. Best practices for configuring a CCoE
  5. Conclusion

What is Cloud Centre of Excellence (CCoE)?

Fortune Business Insights predicts that the global cloud computing market size will hit USD 760.98 billion by 2027. With cloud markets accelerating, it is vital that organizations emphasize strategic planning and migration more than ever.

A successful shift to the cloud needs complete alignment of businesses and resources, which is why a comprehensive cloud governance structure may not be enough to interact with and support cloud environments. 

Forging ahead, enterprises will need to focus attention on cloud operational excellence in order to streamline and enhance their businesses, thus driving them to establish a dedicated centralized Cloud Centre of Excellence (CCoE).

A CCoE is a cross-functional internal or external team comprising mainly of DevOps, CloudOps, Infrastructure, Security and FinOps that oversees cloud adoption, migration and operational functions. Additionally, the CCoE team also governs the IT and cloud infrastructure, confirming that it meets the organization’s expectations. 

To elaborate further, this signifies that the CCoE enables cloud operational excellence across all cloud service models: infrastructure, platform and software as a service (IaaS, PaaS and SaaS). The three pillars of a CCoE are:

Governance – CCoE creates cloud policies and guidelines in collaboration with cross functional teams, helps plan technical strategies and select centralized governance tools to address financial and risk management processes. 

Brokerage – Encompasses selecting cloud providers, architecting cloud solutions, and directing contract negotiation and vendor management.

Community – The CCoE builds a community of practice, enables knowledge sharing by building knowledge and source code repositories, conducts trainings and CoP councils, and fosters collaboration across all segments of an organization.

In totality, the CCoE ensures that cloud adoption is not siloed and encourages repeatable cloud processes and standards to be established as best practices. According to a recent survey, only 16% of organizations have a fully-fledged CCoE, while 47% are still working towards it.

The need for CCoE

The objective of the CCoE is to focus on the modernization of existing ITIL-based processes and governance standards while taking people, processes and technology, collectively into account.

  • By assembling the right people from the right functions, CCoEs can accurately comprehend and address opportunities to keep pace with progressing technology and innovative transformations.

The CCoE has the ability to answer migration-related concerns such as:

  • Is re-platforming needed? 
  • Will the lift and shift strategy be a better choice? 
  • What must be done to the original data while migrating? Etc. 

This strengthens and eases the decision-making capabilities of organizational teams kindling a structured cloud ecosystem in the long run.

When CCoE is successfully implemented, there is significant reduction in time-to-market, increased reliability, security and performance efficiency.

  • Over time, the CCoE team's operations gain maturity and experience, affirming notable improvements in quality, security, reliability and performance on cloud. Organizations would eventually shift towards agility, paving the way for smoother migrations, multi-cloud management, asset management, cost governance and customer satisfaction.
  • Since a CCoE model works in coalition with cloud adoption, cloud strategy, governance, cloud platform and automation, the focus is more on delegated responsibility and centralized control, unlike the traditional IT set up, bringing about an impactful cultural shift within enterprises. 

Types of CCoE 

There are mainly three operational CCoEs that can help reinforce a cloud governance model. 

  • Functional CCoE: Helps build an exclusive team that can drive cloud initiatives, speed up analysis and decision-making processes, set cloud expertise standards, and act as a delivery catalyst in the cloud transformation.
  • Advisory CCoE: This is a team that provides consultative reviews and guidance on cloud best practices. Advisory teams establish and propel standards for new policies especially in a large and dynamic multi-project organizational set up.
  • Prescriptive CCoE: Acts as a leadership policy board highlighting how cloud projects should be constituted and executed within organizations. They help in defining policies for application deployment as well as identity and access management, set automation, security and audit standards and ensure that large enterprises become cloud governance competent.

Best practices for configuring a CCoE

Once organizations determine what type of CCoE fits them, the right team constructs and role definitions are vital in defining the cloud governance model. It is recommended that the founding team starts small with 3 to 5 members who can focus on a finite vision to begin with.

The most critical role in the CCoE team is that of an Executive Sponsor who leads the change bringing along other stakeholders from functions like finance, operations, security, compliance, architecture and human resources. 

Finance implements cost controls and optimization; operations manages the entire DevOps lifecycle round the clock; security and compliance define and establish cloud security standards and governance compliance practices. Cloud architecture expertise is included to bring in best practices and define a future roadmap led by cloud technology. The CCoE is incomplete without human resources, who execute training programs and workforce changes to make organizations cloud-savvy. 

As soon as the appropriate team is formed, a CCoE charter stating the objectives and operational goals, along with roles and responsibilities, needs to be drafted. 

For the CCoE to define how cloud solutions will be extended and kept in line with the organization’s project lifecycle, it is essential to draft a deployment plan.

It is important that the CCoE team works with authority and yet maintains harmony while integrating with the rest of the organization for a successful cloud transformation.

Lastly, organizations migrate to cloud to avail cost benefits and increase the efficiency, flexibility and scalability of operations. It is therefore the responsibility of the CCoE team to measure key performance indicators, keeping a check on cloud usage, infrastructure cost and performance at regular intervals.

Conclusion

The Cloud Center of Excellence (CCoE) helps accelerate cloud adoption by driving governance, developing reusable frameworks, overseeing cloud usage, and maintaining cloud learning. It aligns cloud offerings with the organizational strategies to lead an effective cloud transformation journey.

Deciphering Compliance on Cloud

By | Powerlearnings | No Comments

Compiled by Kiran Kumar, Business analyst at Powerup Cloud Technologies

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies

Blog Flow

  1. What is cloud compliance?
  2. Why is it important to be compliant?
  3. Types of cloud compliance 
    1. ISO
    2. HIPAA
    3. PCI DSS
    4. GLBA
    5. GDPR
    6. FedRAMP
    7. SOX
    8. FISMA
    9. FERPA
  4. Challenges in cloud compliance
  5. How can organizations ascertain security and compliance standards are met while moving to cloud?
  6. Conclusion

Summary

As time progresses, businesses are getting more data-driven and cloud-centric imposing the need for stringent security and compliance measures. With the alarming rise in the number of cyber-attacks and data breaches lately, it is crucial that organizations understand, implement and monitor data and infrastructure protection on the cloud. 

It is important yet challenging for large distributed organizations with complex virtual and physical architectures across multiple locations, to define compliance policies and establish security standards that will help them accelerate change and innovation while enhancing data storage and privacy.

There are various compliance standards like HIPAA, ISO, GDPR, SOX, FISMA, and more that ensure appropriate guidelines and compliance updates are met to strengthen cloud security and compliance. Prioritizing cloud security, determining appropriate cloud platforms, implementing change management, investing in automated compliance tools, and administering cloud governance are some of the measures that help ensure cloud compliance across all domains.

What is cloud compliance?

Most of the businesses today are largely data-driven. The 2020 Global State of Enterprise Analytics Report states that 94% of businesses feel data and analytics are drivers of growth and digital transformation today, out of which, 56% of organizations leveraging analytics are experiencing significant financial benefits along with more scope for innovation and effective decision-making capacities.

To accelerate further, organizations are steering rapidly towards the cloud for its obvious versatile offerings like guaranteed business continuity, reduced IT costs, scalability and flexibility. 

With cloud, strengthening security and compliance policies has become a necessity. Cloud compliance is about conforming to the industry rules, laws and regulatory policies that apply while delivering through the cloud. The law requires cloud users to verify that the security provisions of their vendors are in line with their compliance needs.

Consequently, the cloud-delivered systems are better placed to be compliant with the various industry standards and internal policies while also being able to efficiently track and report status. 

The shift to cloud enables businesses to not just move from capital to operational expenses but also from internal to external operational security. Issues related to security and compliance can pose barriers, especially with regard to cloud storage and backup services.

Therefore, it is imperative to understand where in the world our data will be stored and processed, the kind of authorities and laws that will be applicable to this data, and its impact on business. Every country has varied information security laws, data protection laws, access to information laws, and information retention and sovereignty laws that need to be taken into consideration in order to build appropriate security measures that adhere to these set standards. 

Why is it important to be compliant?

Gartner research Vice President Sid Nag says, “At this point, cloud adoption is mainstream.” 

Recent data from Risk Based Security revealed that the number of records exposed has increased to a staggering 36 billion in 2020 with Q3 alone depicting an additional 8.3 billion records to what was already the “worst year so far.”

“There were a number of notable data breaches but the compromise of the Twitter accounts held by several high profile celebrities probably garnered the most headlines”, says Chris Hallenbeck, Chief Information Security Officer for the Americas at Tanium.

With enterprises moving their data and applications substantially on cloud, security threats and breaches across all operations emerge as their biggest concern. 

Therefore it is crucial for organizations to attain full visibility and foresight on security, governance and compliance on cloud.

Data storage and its privacy are the topmost concerns, and not being compliant with industry rules and regulations increases the risk of data violations and confidentiality breaches. A structured compliance management system also enables organizations to steer clear of heavy non-compliance penalties.

Effective corporate compliance management guarantees a positive business image and builds customer trust and loyalty, helping establish a strong and lasting customer base. 

Administering compliance solutions reduces unforced errors and helps keep a check on genuine risks and errors arising out of internal business operations.

Compliance is considered a valuable asset for driving innovation and change.

Types of cloud compliance 

Until recently, most service providers focused on providing data and cloud storage services without much concern towards data security or industry standards. As the cloud scales up, the need for compliance with regards to data storage also increases requiring service providers to draft new guidelines and compliance updates while measuring up to the ever changing national and industry regulations. 

Some of the most established regulations governing cloud compliance today are:

1. International Organization for Standardization (ISO)

ISO is one of the most eminent administrative bodies in charge of cloud guidelines and has developed numerous laws that govern the applications of cloud computing.

ISO/IEC 27001:2013 is one of the most widely used of all ISO cloud standards. Covering everything from the formation to the maintenance of information security management systems, it specifies how organizations must address their security risks, how to establish reliable security measures for cloud vendors and users, and how to set firm IT governance standards.

2. Health Insurance Portability and Accountability Act (HIPAA)

HIPAA, applicable only within the United States, provides for the security and management of protected health information (PHI). It helps institutions like hospitals, doctors’ clinics and health insurance organizations follow strict guidelines on how confidential patient information can be used, managed and stored, along with reporting security breaches, if any. Title II, the most significant section of HIPAA, ensures that the healthcare industry adopts secure encryption processes to protect data and operate electronic transactions with significant safety measures.

3. PCI DSS (Payment Card Industry Data Security Standard) 

PCI DSS is a standard pertaining to organizations that process or handle payment card information such as credit cards, where each of the 12 stated requirements must be met to achieve compliance. Major credit card companies like American Express, MasterCard, Discover and Visa came together to establish PCI DSS to provide better security for cardholder data and payment transactions. PCI DSS has recently added new controls for multi-factor user authentication and data encryption. 

4. GLBA (Gramm-Leach-Bliley Act) 

GLBA applies to financial institutions, which need to understand and define how a customer’s confidential data should be protected. The law requires organizations to create transparency by sharing with customers how their data is being stored and secured.

5. General Data Protection Regulation (GDPR)

GDPR applies to organizations that handle the data of European Union residents, requiring them to govern and control that data in order to create a better international standard for business.

The GDPR levies heavy fines, as much as 4% of the annual global turnover or €20 million, whichever is greater, if not complied with. Identity and access management frameworks can enable organizations to comply with GDPR requirements like managing consent from individuals to have their data recorded and tracked, responding to individuals’ right to have their data erased and notifying people in the event of a personal data breach. 

6. Federal Risk and Authorization Management Program (FedRAMP)

FedRAMP provides enhanced security within the cloud as per the numerous security controls set out in the National Institute of Standards and Technology (NIST) Special Publication 800-53. It helps in the evaluation, management and analysis of different cloud solutions and products while also ensuring that cloud service vendors remain in compliance with the stated security controls.

7. Sarbanes-Oxley Act of 2002 (SOX)

SOX regulations were introduced after prominent financial scandals in the early 2000s. The act ensures all public companies in the US take steps to mitigate fraudulent accounting and financial activities. SOX safeguards the American public from corporate wrongdoing, and it is mandatory for organizations that fall under SOX to work only with cloud providers that employ SSAE 16 or SAS 70 auditing guidelines.

8. Federal Information Security Management Act (FISMA)

FISMA governs the US federal government, ensuring that federal agencies safeguard their assets and information by creating and implementing an internal security plan. FISMA sets a one-year timeline for reviewing this plan to enhance the effectiveness of the program and the ongoing mitigation of risks.

FISMA also controls the technology security of third-party cloud vendors.

9. Family Educational Rights and Privacy Act of 1974 (FERPA)

FERPA governs student records maintained by educational institutions and agencies and applies to all federally funded elementary, secondary, and post-secondary institutions. It requires these institutions to identify and authenticate the identity of parents, students, school officials and other parties before permitting access to personally identifiable information (PII). FERPA enforces relevant policies to reduce authentication misuse in order to efficiently manage the user identity life cycle with periodic account recertification.

Challenges in cloud compliance

With an on-premise data center set up, enterprises are responsible for the entire network, security controls and hardware, physically present in the premises whereas security controls in the cloud are virtual and are usually provided by third-party cloud vendors. 

Keeping track of data and assuring its security, especially if it involves large, distributed organizations with complex architectures spread across multiple locations and systems, both physical and virtual, is extremely challenging.

The pressure on enterprises builds up even more when industry regulators impose tighter data protection requirements, violation of which leads to heavy fines. Regular audits and security policy checks have to be embraced by organizations to demonstrate compliance.

The challenges with cloud compliance are:

  • Multi-location regulations: Large organizations serving clients globally need to adhere to regional, national and international regulations with regards to data possession and transfer. However while migrating to cloud, the preferred cloud vendor may not always be able to offer the exact stated requirements. Adopting technology that supports major public cloud vendors, promoting hybrid cloud strategies, determining which data can be safely moved to cloud while retaining sensitive data on-premises are some measures that will help establish security and compliance on cloud.
  • Data Visibility: Data storage is a huge challenge in terms of where and how data can be stored, resulting in poor data visibility. Moving to cloud facilitates using distributed cloud storage services for different types of data, enabling organizations to act in accordance with security directives for data storage and backups.
  • Data Breach: Security compliance regulations on cloud need to be in place to evade data security vulnerabilities and risks in real time. Adopting microservices on cloud, i.e. breaking applications down into smaller components, each allocated its own dedicated resources, is a must. This improves data security, among other benefits, as the breakdown creates additional layers of isolation, making it tougher for invaders to compromise the infrastructure. 
  • Data Protection Authority: Moving to the cloud enables enterprises to offload the responsibility of securing the physical infrastructure onto the cloud service provider. However, organizations are still obliged to ensure the privacy and security of data under their control and to verify appropriate data protection measures internally.
  • Network Visibility: Managing firewall policies where traffic flows are typically complex is a challenge, and network visibility becomes tricky. Many organizations use a multi-cloud approach to support their infrastructure in order to curb network issues.
  • Network management: Automation is the key to managing network firewalls that carry countless security policies across multiple devices, which is otherwise difficult and time-consuming to manage. Appropriate network security configurations are also a prerequisite, but with compliance management mostly left to cloud providers, the regulations and implementation process often end up inconsistent. 
  • Data Privacy and Storage: Keeping track of personal data by mapping the flow of data on cloud is a must. The right to access, modify and delete data can be strengthened via implementation of privacy laws. The cloud can further simplify matters by offering low-cost storage solutions for backup and archiving.
  • Data Inventory Management: Data is stored in unstructured formats on both on-premises and cloud, mainly to be used for business forecasting, social media analytics and fraud prevention. This would require data inventory management solutions to ensure speedy and efficient responses to requests that need to be compliant with regulatory laws.

How can organizations ascertain security and compliance standards are met while moving to cloud?

According to Sophos’ recent report, The State of Cloud Security 2020, 70% of companies that host data or workloads in the cloud have experienced a breach of their public cloud environment, and the most common attack types were malware (34%), followed by exposed data (29%), ransomware (28%), account compromises (25%), and cryptojacking (17%).

The biggest areas of concern are data loss, detection and response and multi-cloud management. Organizations that use two or more public cloud providers experienced the most security incidents. India was the worst affected country with 93% of organizations experiencing a cloud security breach. 

It is of utmost importance for cloud service providers (CSPs) to ensure that security and compliance standards are met while moving data onto the cloud, and to do so, some of the following measures can be adopted:

  • Determine appropriate cloud platforms: Organizations must evaluate initial cloud risks to determine suitable cloud platforms. It is also essential to establish which sets of data and applications can be moved to cloud. For example, sensitive data or critical applications may remain on premises or use the private cloud, whereas non-critical applications may be hosted on public or hybrid models. Relevant security control frameworks need to be established irrespective of whether data and applications are hosted on private, public, multi-cloud or hybrid platforms. Continuous compliance monitoring via these security measures, prioritization and remediation of any compliance risks, and generation of periodic compliance reports help develop a consolidated picture of all cloud accounts. 
  • Undertake a security-first approach: Leveraging real-time tracking tools and automated security policies, processes and controls holistically across internal and external environments from the very beginning helps maintain complete and continuous visibility of cloud compliance. 

Monitoring and managing security breaches and threats via compliance checklists for all services, including infrastructure, networks, applications, servers, data, storage, OS and virtualization, establishes pertinent data protection measures, reduces costs and simplifies cloud operations.

  • Implementing change management: AI and tailored workflows facilitate identifying, remediating and integrating security policy changes that can be processed in no time. 

Automation streamlines and helps tighten the entire security policy change management through auditing. 

  • Building resources: It is important for IT security and DevOps to collaborate, an approach commonly known as SecOps, to effectively mitigate risks across the software development life cycle. Through SecOps, business teams can prioritize and remediate critical vulnerabilities as well as address compliance violations via an integrated approach across all work segments. It enables faster and risk-free deployment into production. 
  • Invest in tools: Advanced automated tools comprise built-in templates that certify and maintain security management standards. Compliance tools based on AI act as a framework for protecting the privacy of all stakeholders, meeting data security needs, providing frequent reports on stored cloud data and detecting possible violations beforehand. Investing in such tools thus enhances visibility, data encryption and control over cloud deployments. 
  • Ensuring efficient incident response: Due to seamless integration with the leading cloud solutions, compliance tools are able to map security incidents to the actual business processes that can potentially be impacted. Organizations can instantly evaluate the scale of the risk and prioritize remediation efforts, leading to efficient incident response management. For instance, in the case of a cyber attack, the compliance tool enables isolation of the servers that have been compromised, ensuring business continuity.
  • Administer cloud governance: Cloud security governance is a regulatory model designed to define and address security standards, policies and processes. A governance tool provides a consolidated synopsis of all security issues, which are monitored, tracked and compiled in the form of dashboards. It also facilitates configuration of customized audits and policies, generation of periodic summaries of compliance checks, and one-click remediation with a fully traceable remediation history of all fixed issues, and it generates pre-populated, audit-ready reports that provide information before an audit is actually conducted. 

LTI Powerup’s CloudEnsure is a prominent example of an autonomous multi-cloud governance platform that has been successfully offering audit, compliance, remediation and governance services in order to build and maintain a well-architected and healthy cloud environment for its customers.

  • Conducting audits: It is recommended to have compliance checks, both manual and automated, against all the major industry regulations like PCI DSS, HIPAA and SOX, including customized corporate policies, in order to keep a constant check on all security policy changes and compliance violations; a simple automated check of this kind is sketched after this list. A cloud health score reveals how compliant all the operations are.

Audits furnish reports such as a cloud security and compliance summary, security compliance by policy that tracks real-time risks and vulnerabilities against set policies, and detailed automated metrics on the health of your multi-cloud infrastructure highlighting critical risks, to name a few.

  • Drive digital transformation: Security tools that accelerate application delivery and prioritize security policy change management, while enhancing and extending security across all data, applications, platforms and processes regardless of location, must be embraced to accelerate the digitization of business processes.
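
As an illustration of the automated checks referenced in the audits point above, here is a minimal sketch using the AWS SDK for Python (boto3) that flags S3 buckets with no default server-side encryption configured; a real compliance tool would cover far more controls and services, and this single check is only one example of an audit rule.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def unencrypted_buckets():
        # Flag S3 buckets that have no default server-side encryption,
        # a common control in PCI DSS / HIPAA style audit checklists.
        findings = []
        for bucket in s3.list_buckets()["Buckets"]:
            name = bucket["Name"]
            try:
                s3.get_bucket_encryption(Bucket=name)
            except ClientError as err:
                code = err.response["Error"]["Code"]
                if code == "ServerSideEncryptionConfigurationNotFoundError":
                    findings.append(name)
        return findings

    if __name__ == "__main__":
        for name in unencrypted_buckets():
            print(f"NON-COMPLIANT: bucket '{name}' has no default encryption")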

Conclusion

Compliance is a shared responsibility between cloud service providers and organizations availing their services. 

Today, a majority of cloud service providers have begun to recognize the importance of giving precedence to security and compliance services with the aim to continually improve their offerings. 

Organizations, in turn, are continually reassessing and redeploying their security strategies to regain control over their cloud undertakings, especially post pandemic. 

No matter what type of cloud is chosen, the migrated data must meet all of the compliance regulations and guidelines. 

How Containers Enable Cloud Modernization

By | Powerlearnings | No Comments

Compiled by Kiran Kumar, Business analyst at Powerup Cloud Technologies

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies

Summary

Container-as-a-service (CaaS) is a business model offered by cloud service providers that facilitates the software developers in organizing, running, managing, and deploying containers by using container-based virtualization.

Containers are responsible for packaging applications and their dependencies all together in a compact format that can be version controlled, are scalable and can be replicated across teams and clusters as and when required. 

By segregating the infrastructure and application components of a system, containers can accommodate themselves between multi-cloud and hybrid environments without altering the code, thus posing as a significant layer between the IaaS and PaaS platforms of cloud computing.

Implementing CaaS has advantages like rapid delivery and deployment of new application containers, operational simplicity, scalability, cost-effectiveness, increased productivity, automated testing and platform independence to list a few. CaaS markets are growing rapidly with enterprises across all domains adapting to container technology.

Index

1. What are Containers?

2. What is Container-as-a-Service (CaaS)?

3. How CaaS differs from other cloud models?

3.1 How CaaS works?

4. Who should use CaaS?

4.1 Type of companies

4.2 Type of Workloads

4.3 Type of Use Cases

5. How CaaS has impacted the cloud market?

6. Benefits and drawbacks

What are Containers?

Containers are a set of software capable of bundling application code along with its dependencies, which can be run on traditional IT setups or cloud. Dependencies include all necessary executable files or programs, code, runtime, system libraries and configuration files.

Since containers are efficient in running application files without exhausting a great deal of resources, users see it as an approach to operating system virtualization.

56% of the organizations that polled for the 2020 edition of “The State of Enterprise Open Source” report said they expected their use of containers to increase in the next 12 months.

Containers leverage operating system features to control as well as isolate the amount of CPU, memory and disk being used while running only those files that an application needs to run unlike virtual machines that end up running additional files and services.

A containerized environment can thus run several hundred containers on a system that would typically host only 5 or 6 virtual machines under a traditional virtualization approach.
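
To make the isolation and resource limits described above concrete, here is a minimal sketch using the Docker SDK for Python (the docker package); the image name, command and resource limits are illustrative, and a local Docker daemon is assumed to be available.

    import docker  # pip install docker

    # Connect to the local Docker daemon using environment defaults.
    client = docker.from_env()

    # Run a lightweight container with CPU and memory caps, showing how
    # OS-level isolation and resource control are expressed in practice.
    output = client.containers.run(
        image="alpine:3.18",              # illustrative public image
        command=["echo", "hello from a container"],
        mem_limit="64m",                  # cap memory for this container
        nano_cpus=500_000_000,            # roughly half a CPU core
        remove=True,                      # clean up after the container exits
    )
    print(output.decode().strip())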

What is Container-as-a-Service (CaaS)?

Container as a service (CaaS) is an automated cloud based service that enables users to host and deploy highly secure and scalable containers, applications and clusters via on-premise data centers or cloud.

CaaS acts as a significant bridge between the IaaS and PaaS layers of cloud computing services often regarded as a sub-division of IaaS delivery model.

By segregating the infrastructure and application components of a system, containers can accommodate themselves between multi-cloud and hybrid environments without altering the code.

Gartner says that 81% of companies that have already adopted public cloud are working with two or more cloud providers.

CaaS is considered the most flexible and best-fit framework for providing tools that cater to the entire application lifecycle, while also being capable of operating in any language and under any OS and infrastructure, benefiting the organization’s software development and operations teams.

It helps organizations attain product excellence, speedy deployment, agility and application portability while assuring improved application delivery and stellar customer service.

How it differs from other cloud models

With the evolution of cloud computing, several “as a service” offerings have emerged to enhance core business operations. The three traditional service models that have secured maximum prominence in the recent past are:

Infrastructure-as-a-Service (IaaS) that provides virtual hardware, storage and network capacity, Platform-as-a-Service (PaaS) catering to the entire software development lifecycle and Software-as-a-Service (SaaS) that deals completely with running application software on cloud.

The first six months of 2020 saw a 22% rise in organizations that have containerized more than half of their applications.

Container-as-a-Service sits between the IaaS and PaaS layers of cloud computing. Using container technology, CaaS creates a virtualized abstraction layer that decouples applications and their files from the underlying system and runs them on any container-based platform.

In other words, CaaS utilizes native OS functions to isolate and virtualize individual processes within the same operating system, unlike the IaaS model, where the user is responsible for installing and maintaining the virtual hardware and operating systems. CaaS thus manages the software application lifecycle much as IaaS and PaaS do, but with this key difference.

Additionally, in traditional cloud systems, software developers are heavily dependent on technologies provided by the cloud vendor.

For instance, a developer who uses PaaS to test applications needs to load his own code onto the cloud, while all technical requirements for the build process as well as managing and deploying applications are taken care of by the PaaS platform. Container-as-a-Service, however, provides users with a relatively independent programming platform and framework, where applications confined in containers can be scaled over diverse infrastructures regardless of their technical requirements, making it less reliant on the PaaS model.

How CaaS works

A CaaS platform is a comprehensive container management environment comprising orchestration tools, image repositories, cluster management software, service discovery, and storage and network plug-ins that enable IT and DevOps teams to effortlessly deploy, manage and scale container-based applications and services.

The interaction with the cloud-based container environment is either through the graphical user interface (GUI) or API calls and the provider controls which container technologies should be made available to users.

Docker Swarm, Kubernetes, and Mesosphere DC/OS are the three dominant orchestration tools in the market. With such built-in orchestration engines, CaaS solutions enable automated provisioning, scaling, and administration of container applications on distributed IT infrastructures. Moreover, cluster management features allow applications to run as clusters of containers that integrate and work collaboratively as a single system.

Containers are capable of overcoming problems arising from having multiple environments with disparate configurations as it enables development teams to use the same image that gets deployed to production. Also, since containers are meant to be recreated whenever needed, it is considered best to centralize logs. CaaS facilitates aggregation and standardization of logs along with monitoring capacities.

All leading IaaS providers like Google, AWS, Microsoft Azure, Red Hat, Docker Cloud and OpenShift have CaaS solutions built on top of their IaaS platforms whose underlying orchestration solutions help automate provisioning, clustering and load balancing. Some of these providers also offer PaaS solutions that allow developers to build their codes before deploying to CaaS.
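
As a sketch of how such orchestration can be driven programmatically, the snippet below uses the official Kubernetes Python client (the kubernetes package) to inspect running pods and scale a deployment; the "payments" deployment name and "default" namespace are hypothetical, and a local kubeconfig is assumed.

    from kubernetes import client, config  # pip install kubernetes

    # Load cluster credentials from the local kubeconfig (e.g. ~/.kube/config).
    config.load_kube_config()

    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # List running pods across all namespaces to inspect the cluster state.
    for pod in core.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

    # Scale a hypothetical "payments" deployment to three replicas,
    # the kind of operation a CaaS orchestration layer automates.
    apps.patch_namespaced_deployment_scale(
        name="payments",
        namespace="default",
        body={"spec": {"replicas": 3}},
    )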

Who should use CaaS?

A recent Gartner survey predicts that by 2023, 70% of organizations will be running three or more containerized applications in production, with containers, Kubernetes and microservices emerging as leading drivers of IT digitization.

With everyone favoring DevOps these days, numerous large IT organizations are attaining container capabilities by purchasing smaller startups.

According to the Datadog report, the move to Docker is actually being led by larger companies (with 500 or more hosts) rather than smaller startups, which supports the fact that Docker use among enterprise-scale organizations is greater than the average for all businesses, as it is considered relatively simple to deploy.

For example, the scale of deployment reported at a SaaS company was as high as 15,000 containers per day where the process of container deployment was easier and faster making the present-day transition achievable.

Type of workloads

Most infrastructures are complex and host a diverse set of workloads. While virtual machines virtualize physical hardware with an efficient isolation mechanism, containers virtualize the operating system and provide comparatively little workload isolation. Hence it is important for enterprises to determine the percentage and portions of infrastructure that are best suited for containerization.

Containers, regardless of their popularity, would continue to coexist with virtual machines and servers, as they cannot substitute them entirely.

If there are workloads that need to scale significantly or applications that need prompt and swift updates, deploying new container images could be a reliable and authentic solution as well.

The number of workloads, how open an organization’s container solutions are, and the availability of container expertise are a few more factors that determine how much to containerize.

Type of use cases

Organizations typically use containers either to lift and shift existing applications into modern cloud architectures, which provides only the limited benefits of operating system virtualization, or to restructure existing applications for containers, which offers the full advantage of container-based application architectures.

Similar to refactoring, developing new native applications also provide full benefits of containers.

Moreover, distributed applications and microservices can be conveniently isolated, deployed and scaled using independent container blocks. Containers can also provide DevOps support to streamline continuous integration and deployment (CI/CD) and allow smooth implementation of repetitive processes that run in the background.

One of LTI Powerup’s distinguished clients, a large ecommerce start-up, was running all their applications in an AWS multi-container environment. With this setup, the existing environment was unable to scale for individual services, the cost of running their microservices was increasing, and the deployment of one service was affecting other services as well. The DevOps team at LTI Powerup proposed and implemented a Kubernetes cluster from scratch to help the client overcome the issues stated above.

Read the full case study here 

How CaaS has impacted the cloud market?

The Containers as a Service (CaaS) market is expected to grow at a Compound Annual Growth Rate (CAGR) of 35% for the period 2020 to 2025.

The CaaS market has been segregated based on deployment models, service types, size of the enterprise, end user application and geographical regions.

The demand for CaaS is navigated by factors like rapid delivery and deployment of new application containers, operational simplicity, benefits of cost-effectiveness, increased productivity, automated testing, platform independence, reduced shipment time due to hosted applications and increasing popularity of microservices.

As per market studies, the security and network capability segments are expected to grow at the highest CAGR while it is also anticipated that CaaS will provide new business opportunities to small-medium enterprises during the forecast period. Technologies like mobile banking and digital payments are transfiguring the banking industry, especially in emerging countries like India and China where major BFSI companies have already started deploying container application platforms in their systems.

Among the deployment models, the public cloud segment is estimated to continue to hold a significant market share as it offers scalability, reliability, more agility and flexibility to organizations adopting containers.

However, markets foresee data security threats on cloud that may hamper the growth trend, which needs to be strengthened by implementing security and compliance measures with immediate effect.

North America showcased the largest market share in 2017 whereas the Asia Pacific (APAC) region is projected to grow at the highest CAGR by 2022. The increasing use of microservices and the shift of focus from DevOps to serverless architecture are driving the demand for CaaS globally.

Some major influential public cloud vendors providing CaaS are Google Kubernetes Engine (GKE), Amazon Elastic Container Service (ECS) and Microsoft Azure Kubernetes Service (AKS), followed closely by Docker, to name a few. Google Kubernetes Engine and Docker Swarm are two examples of CaaS orchestration platforms, while Docker Hub can be integrated as a registry for Docker images.

CaaS markets have evolved rapidly in the past three years, and enterprise clients from all industries are seeing the benefits of CaaS and container technology.

Benefits and drawbacks

  • Speedy Development: Containers are the answer for organizations looking to develop applications at a fast pace while maintaining scalability and security. Since containers do not need a full guest operating system, it takes only seconds to initialize, replicate or terminate a container, leading to speedier development processes, faster consolidation of new features, timely response to defects and enhanced customer experience.
  • Easy Deployment: Containers simplify the deployment and composition of distributed systems or microservice architectures. To illustrate, if a software system is organized by business domain ownership in a microservice architecture, where the service domains might be payments, authentication and shopping cart, each of these services will have its own code base and can be containerized. Using CaaS, these service containers can then be instantly deployed to a live system.
  • Efficiency: Containerized application tools such as log aggregation and monitoring enable performance efficiency.
  • Scalability and High Availability: Built in CaaS functions for auto scaling and orchestration management allows teams to swiftly build high visibility and high availability distributed systems. Besides, it not just builds consistency but also accelerates deployments.
  • Cost Effectiveness: As CaaS does not need a separate operating system, it calls for minimal resources, significantly controlling engineering operating costs as well as keeping the DevOps team size optimal.
  • Increased Portability: Containers trigger portability that enables end users to accurately launch applications in different environments, such as public or private clouds. Furthermore, it lets incorporation of multiple identical containers within the same cluster in order to scale.
  • Business continuity: As containers, to a certain degree, remain isolated from other containers on the same servers, in case of an application malfunction or crash for one container, other containers can continue to run efficiently without experiencing technical issues. Similarly, the shielding that the containers have from each other, doubles as a safety feature, minimizing the risk. If an application is at risk, the effects will not extend to other containers.

However, organizations need to contemplate whether they even need containers before implementing CaaS. To begin with, containerization increases complexity because it introduces components that are not present in the IaaS platform.

Containers depend on additional network layers, and interfacing with host systems can hinder operational performance. Persistent data storage is also a challenge, as all data inside a container disappears by default once it is shut down.

Additionally, container platforms may not be compatible with other container products in the CaaS ecosystem. Lastly, GUI-heavy apps may not work well, as CaaS services were designed mainly to cater to applications that do not need graphics.

CaaS is a powerful modern hosting model, most beneficial to applications that are designed to run as independent microservices. Migration to containers may not necessarily be the best choice for all users as in some cases, traditional virtual machines may serve better. Nevertheless, CaaS, IaaS and PaaS are distinct services with different management models and only organizations can determine how and when CaaS can benefit their operations.

Cloud Report Card: 3 Months of COVID-19 Impact

By | Cloud, CS | No Comments

Siva S, CEO of Powerupcloud, Global Cloud Practice Head at LTI

So, here we are. May 2020. It has been 3 months since the Covid-19 pandemic started impacting the global economy and with it business functions globally. This has not been a smooth ride for governments, businesses, entrepreneurs, and most importantly, the people. I have been actively speaking to several CIOs and CEOs of global businesses with operations in the USA, UK, Germany, France, UAE, South Africa, India, Singapore, Australia, and New Zealand. The business sentiments and decisions seem to follow a similar pattern irrespective of the country or government or the business itself. That’s the effect the COVID-19 pandemic has had so far on all of us.

In this article, I will be covering the major trends we are witnessing with respect to public cloud adoption and the change in priorities, based on our customer and OEM interactions.

  1. Cloud Cost Optimization: The highest demand we see is for cost optimization of cloud spend at businesses that are already on the cloud. Irrespective of their spending, be it $0.5M or $20M per year, reducing their cloud spend is a key focus for the CIOs. The ‘Save Now, Pay Later’ program that we launched, which helps large businesses save cloud costs with the help of our cloud governance platform – www.cloudensure.io, has seen a massive uptake with our global customers due to the nature of the program. The gain-share model, where our success fee is a percentage of the cost savings we bring to the client, thus creating a win-win situation for both the vendor and the client, seems to be exactly what businesses need at this point in time.
  2. Remote Workforce Enablement: This is the second area where we see high demand from our enterprise customers. Be it migration to virtual desktops on cloud or launching a fully scalable virtual contact center on the cloud or adopting virtual collaboration platforms, CIOs are keen to explore technologies that will improve the productivity of their employees working from home. With most businesses taking a call to have their employees work from home till the end of 2020, enabling their remote workers with technology that aids them to work better is at the top of the agenda for businesses. Check out our Remote Workforce Enablement program.
  3. Data Analytics on Cloud: While most of the businesses I interact with have stopped their big-bang data transformation exercises, they do not, however, want to stop the adoption of cloud for their data environments. We are witnessing a trend wherein customers are identifying their business-critical applications and moving them to cloud, including the data layer, for 2 reasons – 1. to improve the availability and reliability of the data layer powering these applications and 2. to feed the data lake with data in real time, which will allow them to run ML models on the fly. This trend is most likely to continue for the next 12 months. The best part is, by the end of 12 months, businesses who follow this approach will have most of their critical applications running on the cloud with a centralized data approach.
  4. Large Scale Cloud Migrations: Plans to migrate entire data centers to the cloud are seeing a mixed response. I am interacting with a couple of CIOs of large manufacturing businesses in the EU & USA who are going ahead with their plans to migrate completely to the public cloud platforms. These companies have workloads in the order of 15,000+ servers and 1,000+ applications. Their argument, a valid one, is to do this entire migration while manufacturing activity is at its lowest due to the COVID-19 impact. But this represents just 10% of our total migration pipeline.
  5. Continuous Cloud Adoption: Most of the other industries are adopting the concept of ‘continuous cloud adoption’ model where they subscribe for a ‘Cloud POD’ (a 6 member team comprising cloud architects, cloud engineers, and a project manager) for a 12 month period. The Cloud POD will work with the customer to identify the key applications and migrate them to the cloud in a sequential manner. This allows the customer to continue their cloud adoption, enabling businesses with better reliability for their key applications and helping the CFO with moving to an OPEX model on an incremental basis. My vote goes to this approach as this model brings in more flexibility to the CIOs. They can use the Cloud POD for security & governance implementations or cost optimization by pausing the migration activity when the situation demands.
  6. AI/ML Adoption: Many artificial intelligence solutions that used to be a hard sell to businesses all these years are seeing a voluntary increase in adoption in these last 3 months, and we expect this trend to continue for the next 2 years. Chatbots, for example, have seen a 200% increase in demand in this period. We are seeing requirements ranging from customer support chatbots to internal employee engagement HRMS chatbots that ease the dependency on human support to fulfill end-user needs. Banks, insurance companies, eCommerce players, OTT platforms, healthcare organizations and educational institutes are the ones that often feature in our chatbot requirements pipeline. AI+RPA is another area of focus, where businesses are implementing AI & RPA technologies either in combination or standalone to automate some of their business processes.

The bottom line is, for almost all businesses, cash conservation is the primary focus. But at the same time, they cannot afford to completely stop their digital transformation journey. The key here is to balance these 2 things so that they are better prepared when the global economy comes back on track. Businesses that take aggressive decisions on either end of the spectrum will see a greater risk of failure. It is completely fine to continue to take an ‘ambiguous’ approach and keep things in balance instead of boiling the ocean.

Cash conservation should be your primary focus. But don’t stop your digital transformation journey.

The Era Of Contact Center AI Has Started, Officially!!!

By | AI, CS | No Comments

Siva S, CEO of Powerupcloud, Global Cloud Practice Head at LTI

Before I start, I sincerely hope you and your loved ones are safe and well.

There is no doubt that the COVID-19 pandemic has brought great distress to our lives, on both the personal and business fronts. We see businesses, governments and large corporations reeling from the effects of lockdowns and the spread of COVID-19. Over 90% of businesses worldwide are suffering due to the lockdown. But the tech industry, especially Cloud and AI, is seeing a very different trend. Businesses are realizing that they cannot ignore Cloud and AI anymore, and with each passing day, they feel more and more pain with their existing traditional IT systems.

In today’s article, I would like to focus on the Contact Center AI solution which is currently the #1 sought after technology on the cloud for businesses across the world.

In 2016, I envisioned a chatbot platform named IRA.AI (now called Botzer.io) as a customer support chatbot that would automate the customer support process by interacting with customers and providing them with answers in real time. I must admit that it was a super-hard sell back in 2016. We were one of the first to build a robust chatbot development and management platform, well before AWS & Azure launched their versions. We used Python NLTK to power our chatbot platform. Fast forward to the second half of 2019 and the scene wasn’t very different. We still saw a majority of businesses experimenting with but not fully adopting AI chatbot solutions.

But the COVID-19 pandemic has changed this situation overnight, very much like how it changed a lot of our lives in a very short time. We have seen customer care calls go through the roof since January 2020. In one instance, US citizens applying for unemployment benefits had to wait almost 48 hours to get through to a customer support agent. In another, a UK-based telco experienced a 30% surge in incoming customer care calls as many of their users struggled with internet bandwidth issues once most of the population started working from home. While the large BPO industry in countries like India and the Philippines was struggling to get their employees working from home, some US and UK businesses canceled contracts and moved the jobs back to their respective home countries in order to comply with the regulations around data security. This has invariably increased the customer care spend for these businesses.

All of this has resulted in businesses looking towards AI chatbot powered digital agents to help them cope with the surge in demand and, at the same time, keep costs in check.

How was the AI Chatbot adoption before 2020?

As I mentioned earlier, the AI chatbot concept was seen more as an R&D investment than as a viable solution to automate customer care center operations. We saw some early success with insurance companies, banks, airlines, and real-estate companies. But it was always a hard sell to the majority of businesses, primarily for the reasons below:

  • the existing customer care support process was reliable and relatively cheaper when outsourced to countries like India, Philippines
  • there was more emphasis on customer loyalty management, providing the human touch
  • the Natural Language Processing (NLP) technology was not evolved and fool-proof enough to be considered a real alternative
  • the demand was predictable and the training materials were designed to train humans and not AI

But, several technology companies like us have been relentless in their efforts to solve the above-mentioned problems seen with AI Chatbot technology. The NLP accuracy has improved a lot (proof: Alexa, Google Home, Siri) and the leading cloud platforms have launched these NLP techs as full-fledged services for developers to integrate them and build end-to-end AI Chatbot solutions (Amazon Lex, Microsoft LUIS, Google DialogFlow).

How does the Contact Center AI actually work?

  • The customer calls the customer care support number
  • The contact center software, once it receives the call, will check with the internal customer database (or CRM) to identify the customer
  • The contact center software will then route the call to the workflow tool which will interact with the customer and identify the entity & intent from the customer’s query
  • Based on the preset business logic (or algorithm), the workflow tool will then call the right set of application APIs to resolve the customer’s queries or pick the appropriate response from the AI Chatbot

Below is an architecture diagram that depicts a Contact Center AI implementation on AWS using Amazon Connect (cloud-based contact center software), Amazon Lex (cloud-based NLP service), and Amazon Lambda (cloud-based workflow service).

[Architecture diagram: Contact Center AI on AWS using Amazon Connect, Amazon Lex and AWS Lambda]
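
To ground the flow above, here is a minimal sketch of an AWS Lambda fulfillment handler for an Amazon Lex (V1) bot, the kind of workflow function Amazon Connect routes calls through; the intent name, slot and backend lookup are illustrative placeholders rather than any actual production implementation.

    def get_order_status(order_id):
        # Hypothetical backend lookup; a real handler would call a CRM or order API.
        return f"Order {order_id} is out for delivery."

    def lambda_handler(event, context):
        # Amazon Lex (V1) passes the matched intent and slot values in the event.
        intent = event["currentIntent"]["name"]
        slots = event["currentIntent"].get("slots") or {}

        if intent == "CheckOrderStatus":        # illustrative intent name
            message = get_order_status(slots.get("OrderId", "unknown"))
        else:
            message = "Let me connect you to a human agent."

        # Close this conversation turn with a plain-text response for Lex/Connect.
        return {
            "dialogAction": {
                "type": "Close",
                "fulfillmentState": "Fulfilled",
                "message": {"contentType": "PlainText", "content": message},
            }
        }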

So how do you go about AI adoption for your Contact Center AI?

Rome was not built in a day. The same is true of your vision of bringing in AI automation for a large part of your customer care process. Projects in this space will fail and leave a bad taste if we embark on this journey expecting immediate, dramatic results. And I have witnessed the pain several times from close quarters. So how does one go about adopting AI for their contact center?

Step 1: Analyse your existing customer care process and identify the low hanging fruits which can be moved to the AI model quickly (almost all AI Chatbot consulting & product companies can help you with this).

Step 2: Segregate the queries and workflows that can be handled by a simple ‘if-else’ rule from those that need an NLP model to identify intents and entities; a simple illustration of this split is sketched below. Using NLP for queries that can be handled by a simple ‘if-else’ rule is overkill and will bring down the accuracy of your NLP engine.
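
A minimal sketch of this split, with made-up keyword rules and a placeholder NLP call, might look like the following; the keywords, responses and NLP engine are assumptions for illustration only.

    # Queries with an exact, predictable shape are answered by simple rules;
    # everything else falls through to the NLP (intent/entity) model.
    RULE_BASED_ANSWERS = {
        "opening hours": "We are open 9am to 6pm, Monday to Friday.",
        "contact email": "You can reach us at support@example.com.",
    }

    def classify_with_nlp(query):
        # Placeholder for a call to an NLP service such as Amazon Lex,
        # Google Dialogflow or Microsoft LUIS, returning (intent, entities).
        raise NotImplementedError("plug in your NLP engine here")

    def route(query):
        normalized = query.lower().strip()
        for keyword, answer in RULE_BASED_ANSWERS.items():
            if keyword in normalized:                  # simple 'if-else' style rule
                return answer
        intent, entities = classify_with_nlp(query)    # NLP only for the harder cases
        return f"Detected intent '{intent}' with entities {entities}"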

Step 3: Once you have experienced a fair amount of success with the NLP powered model to identify intents, entities in answering customer queries, bring in Machine Learning to further improve on the accuracy of intent identification, bot training, and customer experience management. Yes, there is a whole different world out there already in the field of Contact Center AI. 🙂

Why AI chatbots win over application design?

I often get this very common question: ‘why chatbots when you have beautifully designed applications which can do the job?’. I totally agree that apps with better UX make a customer’s life easier. But the problem with apps is that you have to adhere to the workflow that has been designed into the app. You cannot skip any step; you cannot change your inputs as you wish. The AI chatbot, however, allows you to interact the way you would with a human and not a machine. You need not learn to use an app (though the learning curve may be small for better-designed apps); you can simply post your query and get your answers, or post your intent and get the workflow executed (like policy claims, refund processing, airline reservations, blocking credit cards, etc.).

Let the customers interact with your business in their natural way. AI Chatbots allow you to achieve that and this goes a long way in customer experience.

What should customers look for while choosing to embark on this path?

Building and launching your Contact Center AI solution powered by chatbot agents is just the first step. I see a lot of customers struggling to manage the bots they launch and to improve them on a continuous basis. This leads to low customer satisfaction ratings and eventually results in a failed project.

Any business looking to implement Contact Center AI to automate their customer care process should consider the checkpoints below.

  1. Check if a solution like Contact Center AI will actually improve efficiency and bring down costs. If your existing support model is broken, do not embark on this before fixing the overall support process.
  2. Choose a bot management platform (like Botzer.io) which will not only help you with building and launching the AI chatbots which will power your contact center, but also help you track the performance of the bots closely.
  3. The bot management platform should allow you to pick up anomalies and help you train the new queries quickly.
  4. The bot management platform should also allow the Contact Center AI solution to handover the call to a human agent if the bot fails to resolve the customer’s queries.
  5. And the most important part, the bot management platform should have rich analytics embedded in the tool which will allow you to track your customer experience score on a real-time basis. This will help you course-correct in your bot training process and will prevent you from experiencing negative reviews in your customer care process.

How will this evolve in future? Will the Contact Center AI replace humans?

No. The Contact Center AI will not replace humans entirely. A healthy model will have a good mix of AI chatbots and human agents working hand-in-hand to support the customers’ queries. The below architecture is highly recommended when you are looking to implement a Contact Center AI for your business.

[Architecture diagram: recommended model combining AI chatbots with handover to human agents]

I am seeing an increase in Contact Center AI adoption from businesses in industries including insurance, food delivery, e-commerce, healthcare, airlines, telco, banks, etc.

If you have been mulling over the idea of introducing AI into your business, the time is here for you to initiate AI adoption. I would suggest you start with the Contact Center AI solution. It works. And it is one of the most mature AI solutions that you can adopt.

The Evolution of Serverless Computing

By | Powerlearnings | No Comments

Compiled by Kiran Kumar, Business analyst at Powerup Cloud Technologies

Contributor Agnel Bankien, Head – Marketing at Powerup Cloud Technologies

Summary

Serverless computing, also known as serverless architecture or Function-as-a-Service, is an emerging cloud deployment model offered by cloud service providers to handle server management and infrastructure services. This blog helps in understanding how serverless differs from the other cloud computing service models and who should be using it, lists its architectural and economic impact on cloud computing, and describes a few features, like serverless stacks and event-driven computing, that have brought about a revolutionary shift in the way businesses are run today. 

Index

1. What is Serverless Computing?

2. Other Cloud Computing Models Vs. Serverless

3. Serverless Computing – Architectural impact 

4. Serverless Computing – Economic impact 

5. Who Should use a Serverless Architecture?

6. How Serverless has Impacted Cloud Computing?

7. Serverless Computing Benefits and Drawbacks

7.1 Reduces Organizational Costs

7.2 Serverless Stacks

7.3 Optimizes Release Cycle Time

7.4 Improved Flexibility and Deployment

7.5 Event based Computing

7.6 Green Computing

7.7 Better Scalability

What is Serverless Computing?

Servers have always been an integral part of computer architecture, and with the onset of cloud computing, the IT sector has progressed dynamically towards web-based architecture, further leading the way to serverless computing. 

Gartner estimated that by 2020, 20% of the world’s organizations would have gone serverless. 

Using a virtual server from a cloud provider not only frees the development team from taking care of server infrastructure but also helps the operations team run the code smoothly.

Serverless computing, also known as serverless architecture or Function as a Service (FaaS), is a cloud deployment model in which cloud service providers take on server and infrastructure management on behalf of their customers.

The model handles resource allocation, virtual machine provisioning, container management and even tasks such as multithreading that would otherwise be built into the application code, reducing the responsibility and accountability placed on software developers and application architects.

As a result, application developers can focus solely on building code efficiently, while the cloud provider manages the underlying infrastructure transparently.

Physical servers are still used by cloud service providers to run the code in production, but developers no longer need to concern themselves with executing, altering or scaling a server.

An organization using serverless computing is charged on a flexible pay-as-you-go basis, paying only for the resources an application actually consumes. The service auto-scales, so paying for a fixed amount of bandwidth or a fixed number of servers, as before, has become redundant.

With a serverless architecture, the focus is mainly on the individual functions in the application code, while the cloud service provider automatically provisions, scales and manages the infrastructure required to run that code.

Other Cloud Computing Models Vs. Serverless

Cloud computing is the on-demand delivery of services such as servers, storage, databases, networking, software and more via the Internet.

The three main service models of cloud computing are Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), with serverless being the newest addition to the stack. All four address distinct requirements, complement each other and focus on specific functions within cloud computing, and are therefore commonly referred to as the “cloud-computing stack.”

In serverless computing, server provisioning and infrastructure management are handled entirely by the cloud provider, on demand and on a per-request basis, with auto-scaling built in. This shifts the onus away from development and operations teams and mitigates costly issues such as security breaches, downtime and loss of customer data.

In a conventional cloud computing setup, resources are dedicated and paid for whether they are in use or idle. Serverless lets customers pay only for the resources actually consumed, which means it can deliver exactly the units of resource needed in response to demand from the application.

To elaborate, applications are decomposed into independent, autonomous functions; whenever a request for a particular application comes in, the corresponding functions are instantiated and resources are allocated across them as needed. The key advantage of a serverless model is that it supplies whatever the application calls for, whether additional computational power or more storage capacity.

Traditionally, spinning up and maintaining a server is a tedious and risky task, and misconfigurations or errors can pose serious security threats. In FaaS or serverless models, virtual servers are managed by the provider with minimal operational effort to keep applications running in the background.

In other cloud computing models, resources are allocated in blocks and buffer capacity has to be provisioned to avoid failures under peak loads. As a result, the application rarely operates at full capacity and the idle headroom turns into unwanted expense. In serverless computing, by contrast, functions are invoked only on demand and shut down when not in use, improving cost optimization.

Serverless Computing – Architectural Impact 

Rather than running services on a continuous basis, users can deploy individual functions and pay only for the CPU time during which their code is actually executing. Such Function as a Service (FaaS) capabilities can significantly change how client/server applications are designed, developed and operated.

Gartner predicts that half of global enterprises will have deployed function Platform as a Service (fPaaS) by 2025, up from only 20% today.

The technology stack for IT service delivery can be re-conceptualized to fit the serverless stack across each layer of network, compute and database. A serverless architecture includes three main components:

  • API Gateway 
  • FaaS
  • DbaaS

The API Gateway is the communication layer between the frontend and FaaS. It maps the architectural interface to the respective functions that run the business logic.

With servers abstracted away in the serverless setup, the need to distribute network or application traffic via load balancers also takes a back seat.

FaaS executes code in response to events, while the cloud provider attends to the underlying infrastructure associated with building and managing microservices applications.

DbaaS is a cloud-based backend service that removes database administration overheads.
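
To make these three components concrete, here is a minimal sketch of a FaaS handler sitting behind an API Gateway and reading from a DbaaS layer. It assumes AWS Lambda with the Python runtime, Amazon API Gateway proxy integration and Amazon DynamoDB; the table name and path parameter are hypothetical.

# A minimal sketch of the API Gateway -> FaaS -> DbaaS flow described above,
# assuming AWS Lambda (Python), API Gateway proxy integration and DynamoDB.
# The "customers" table and the {id} path parameter are hypothetical.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("customers")              # hypothetical DbaaS table

def lambda_handler(event, context):
    """Handles GET /customers/{id} requests routed in by the API Gateway."""
    customer_id = event["pathParameters"]["id"]  # mapped by the API Gateway
    item = table.get_item(Key={"id": customer_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}

Notice that there is no web server, load balancer or connection pool to manage in the code; the gateway handles routing and the provider handles scaling.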

For serverless architectures, the key objective is to divide the system into a set of individual functions whose costs are directly proportional to usage rather than to reserved capacity. The traditional benefits of bundling services so that they share the same reserved capacity become obsolete. FaaS also provides for the development of secure, remote modules that can be maintained or replaced more efficiently.

Another major development with the serverless model is that it allows client applications to access backend resources such as storage directly, provided appropriately distributed authentication and authorization mechanisms are in place.

Serverless Computing – Economic Impact 

The pay-as-you-go model offers a considerable financial benefit, as users do not pay for idle capacity. For instance, a 300-millisecond service task that needs to run every five minutes would require a dedicated service instance in the traditional setup, but with a FaaS model organizations are billed for only those 300 milliseconds out of every five minutes, a potential saving of well over 99% on compute time.
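
A back-of-the-envelope calculation makes the saving tangible. The instance price and per-GB-second rate below are illustrative assumptions, not actual provider pricing; the point is the ratio, not the absolute numbers.

# Rough comparison of an always-on instance vs. per-invocation FaaS billing
# for a 300 ms task that runs once every five minutes. All rates are
# hypothetical, chosen only to illustrate the order of magnitude.

SECONDS_PER_MONTH = 30 * 24 * 3600

# Always-on instance (assumed hourly rate)
instance_hourly_rate = 0.05                       # USD/hour, hypothetical
instance_monthly_cost = instance_hourly_rate * 24 * 30

# FaaS: 300 ms per run, one run every 5 minutes, 128 MB memory (assumed)
runs_per_month = SECONDS_PER_MONTH / 300          # one run per 300 s
gb_seconds = runs_per_month * 0.3 * (128 / 1024)  # duration x memory
faas_rate_per_gb_second = 0.0000167               # USD, hypothetical
faas_monthly_cost = gb_seconds * faas_rate_per_gb_second

utilization = 0.3 / 300                           # busy 0.1% of the time
print(f"Busy fraction: {utilization:.1%}")
print(f"Instance: ${instance_monthly_cost:.2f}/month, FaaS: ${faas_monthly_cost:.4f}/month")
print(f"Saving: {1 - faas_monthly_cost / instance_monthly_cost:.1%}")

With these assumed rates, the function is busy only 0.1% of the time and the monthly bill drops from tens of dollars to a fraction of a cent.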

Also, as different cloud services are billed according to their utilization, allowing client applications to connect directly to backend resources can optimize costs significantly. The cost of event-driven serverless services rises with memory requirements and processing time, and any service that does not charge for execution time adds to cost effectiveness.

Who Should Use a Serverless Architecture?

Gartner has identified serverless computing as one of the most significant emerging software infrastructure and operations architectures, stating that, going forward, serverless would eliminate the need for infrastructure provisioning and management. IT enterprises need to adopt an application-centric approach to serverless computing, managing APIs and SLAs rather than physical infrastructure.

Organizations looking for scalability, flexibility and better testability of their applications should opt for serverless computing. 

Developers wanting to achieve reduced time to market with building optimal and agile applications would also benefit from serverless architecture models.

The need to have a server running 24/7 is no longer relevant; module-based functions can be called by applications only when required, incurring costs only while in use.

This in turn paves the way for organizations to take a product-based approach, where part of the development team can focus on developing and launching new features without the hassle of deploying an entire server for them.

Also, with serverless architecture, developers have the option to provide users with access to some of the applications or functions in order to reduce latency.

Running robust and scalable infrastructure while reducing its time and complexity overheads is a must. With serverless, the effort required to maintain the IT infrastructure is nominal, as most server-related issues are resolved automatically.

One of the most preferred cloud serverless services is AWS Lambda, which tops the list when it comes to integrating with other services. It offers features like event triggering, layers, high-level security control and online code editing. 

Microsoft Azure Functions and Google Cloud Functions, which offer similar services by integrating with their own sets of services and triggers, are a close second.

There are players like Auth0, AWS Cognito UserPools and Azure B2C that offer serverless identity management with single sign-on and custom domain support, while real-time application features are provided by platforms like PubNub, Google Firebase, Azure SignalR and AWS AppSync.

Amazon S3 by AWS is a leader in file storage services, and Azure Blob Storage is an alternative to it.

Azure DevOps, and the combination of AWS CodeCommit, AWS CodeBuild, AWS CodePipeline and AWS CodeStar, cater to end-to-end DevOps management, while tools like CircleCI and Bamboo focus mainly on CI/CD functions.

Thus, there are numerous serverless offerings in the market to evaluate and choose from, based on the platform that an organization is using with respect to their application needs.

https://azure.microsoft.com/en-in/solutions/serverless/

https://aws.amazon.com/serverless/

https://cloud.google.com/serverless

How Has Serverless Impacted Cloud Computing?

In a recent worldwide IDC survey of more than 3,000 developers, 55.7% of respondents indicated they are currently using or have solid plans to implement serverless computing on public cloud infrastructure.

While physical servers are still part of the serverless setup, serverless applications do not need to manage the underlying hardware and software. Cloud service providers offer attractive alternatives for configuration selection, integration testing, operations and all other tasks related to infrastructure management.

This is a notable shift in the IT infrastructure services. 

Developers are now responsible primarily for the code they develop, while FaaS takes care of right-sizing, scalability, operations, resource provisioning, testing and high availability of the infrastructure.

Infrastructure-related costs are therefore significantly reduced, promoting a highly economical business setup.

As per Google Trends, serverless computing is gaining immense popularity due to the simplicity and economic advantages it offers. The market size for FaaS services was estimated to grow to $7.72 billion by 2021.

Serverless Computing Benefits and Drawbacks

Serverless computing has initiated a revolutionary shift in the way businesses are run, improving the accuracy and impact of technology services. Some of the benefits of implementing a serverless architecture are:

Reduces Organizational Costs

Adopting serverless computing eliminates most IT infrastructure costs, as cloud providers build and maintain the physical servers on behalf of organizations. In addition, servers break down, require maintenance and need additional workforce to deploy and operate regularly, all of which can be avoided by going serverless. It also facilitates enhanced workflow management, as organizations are able to convert operational processes into functions, maintaining profitability and bringing down expenses to a large extent.

Serverless Stacks

Serverless stacks act as an alternative to conventional technology stacks by creating a responsive environment for developing agile applications, without teams having to build complicated application stacks themselves.

Optimizes Release Cycle Time

Serverless computing offers microservices that are deployed and run on serverless infrastructure only when the application needs them. It enables organizations to make the smallest of application-specific changes, isolate and resolve issues, and manage independent applications. According to one survey, serverless microservices have brought the standard release cycle down from 65 days to just 16.

Improved Flexibility and Deployment

Serverless microservices provide the flexibility, technical support and clarity needed to process data, helping organizations build a more consistent and well-structured data warehouse. Similarly, since remote applications can be created, deployed and fixed in a serverless environment, specific repetitive tasks can be automated and scheduled, speeding up deployments and reducing time to market.

Event based Computing

With FaaS, cloud providers offer event-driven computing, in which modular functions respond to application needs when called for. Developers can therefore focus purely on writing code, allowing organizations to escape time-consuming traditional workflows. It also reduces DevOps costs and lets developers focus on building new features and products.
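
As a concrete example of event-driven computing, here is a minimal sketch of a function triggered by object uploads, assuming AWS Lambda wired to Amazon S3 object-created events; the processing step is a hypothetical placeholder.

# A minimal sketch of an event-driven function, assuming AWS Lambda triggered
# by Amazon S3 object-created events. The processing step is a placeholder.
import urllib.parse

def lambda_handler(event, context):
    """Invoked automatically whenever a new object lands in the bucket."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Placeholder for real processing (resize an image, parse a CSV, etc.)
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"status": "processed", "records": len(records)}

The function exists only while these events are being handled; between uploads, no server is running and nothing is billed.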

Green Computing

It is important for organizations to be mindful of climate and environmental impact today. With serverless computing, organizations run servers on demand rather than at all times, reducing energy consumption and cutting the heat and emissions generated by physical servers and data centers.

Better Scalability

Serverless is highly scalable and accommodates growth and increases in load without any additional infrastructure. Research suggests that around 30% of the world's servers sit idle at any point in time and that most servers use only 5%-15% of their total capacity, which makes scalable serverless solutions an attractive option.

However, organizations need to be wary of the downside of serverless computing as well.

  • Not Universally Suitable 

Serverless is best for transitory workloads; it is not efficient for workloads that have to run long-term on a dedicated server.

  • Vendor Lock-in

Applications become entirely dependent on third-party vendors, and organizations have minimal or no control over them. It is also difficult for customers to change the cloud platform or provider without making changes to their applications.

  • Security Issues

Security issues may arise because cloud providers run multiple tenants in a shared environment in order to use their own resources more efficiently.

  • Not Ideal for Testing  

Some FaaS services do not facilitate testing functions locally, assuming instead that developers will use the same cloud for testing.

  • Practical Difficulties

A scalable serverless platform needs to initialize internal resources when application requests come in and stop them when there have been no requests for a long time. When a function handles such a first request, it takes longer than usual, an issue known as a cold start. Additional overheads may also be incurred for function calls when the two communicating functions are located on different servers.
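
One common way to soften the cold-start penalty is to perform expensive initialization outside the handler so that warm invocations reuse it. The sketch below assumes an AWS Lambda-style Python runtime; the DynamoDB table name is a hypothetical example.

# A minimal sketch of a common cold-start mitigation pattern: expensive setup
# (imports, SDK clients, connections) runs once at module load, during the
# cold start only, and warm invocations reuse it. Table name is hypothetical.
import boto3

# Runs once per container, at cold start.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")          # hypothetical table name

def lambda_handler(event, context):
    # Warm invocations skip the setup above and go straight to work.
    order_id = event.get("order_id", "unknown")
    response = table.get_item(Key={"order_id": order_id})
    return response.get("Item", {"order_id": order_id, "status": "not_found"})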

Serverless computing is an emerging technology with considerable scope for advancement. In the future, businesses can anticipate a more unified approach between FaaS, APIs and frameworks to overcome the listed drawbacks. 

As of today, serverless architecture gives organizations the freedom to focus on their core business offerings in order to develop a competitive edge over their counterparts. Its high deliverability and multi-cloud support, coupled with the immense opportunities it promises, make it a must-adopt for any organization.