Data lake implementation improved processing time by 4X for India’s largest media company

By | Alexa, Case Study, Data Case Study

Customer: India’s largest media company


The customer is one of India’s leading media and entertainment companies. They wanted to improve their ad placements across channels for better conversion, while also taking parameters such as social media feedback into consideration. With the push towards digital content, shifting the on-premise infrastructure to the cloud was necessary to optimize costs and manage high volumes of data. Powerup was also engaged to support and maintain the deployed AWS services.

About Customer

The customer is one of the largest news network houses in India. With rights to more than 3,818 movie titles, it entertains over 1 billion viewers across 172+ countries through 80+ channels. The customer has always provided quality entertainment across the globe and is committed to maximizing efficiency in its ad conversion rates through strategic advertisement placements across its channels.

Problem Statement

The customer’s on-premise infrastructure was proving expensive due to the volume of data being generated, so the shift to the cloud was the need of the hour. Their software provided TRP information on a weekly basis, whereas some reports needed to be generated every 6 to 12 minutes from source to destination. All players in the business generate these reports to aid critical decisions, and the customer, after some major failures in its existing schedule-based processes, was facing further delays. The time taken to generate reports while making changes to promos and ad placements was proving to be highly critical.

The customer was looking for a fully managed and scalable infrastructure setup and configuration on AWS. We proposed creating a data lake on AWS with all source data pushed to a common location, and all data warehouse objects created as in the existing system. The existing SQL Server Integration Services (SSIS) extract, transform, load (ETL) jobs would be migrated to Talend so that new data could start moving into the data warehouse, along with moving the Tableau dashboards to AWS and pointing them to Redshift and Redshift Spectrum.

Proposed Solution

A plan was drafted to make the shift from a tight-knit synchronous architecture to an event-based loosely coupled asynchronous architecture in order to ensure accurate and on-time report generation as per the user’s requirement.

Their entire process transformation involved a cloud-first approach: the client gathered on-premise data from multiple sources such as SAP, Chrome feeds, Twitter feeds, and social media feedback in Excel files, and then piped it to the cloud. From this common store, data was extracted into the data warehouse, moving it from physical to digital form.

An AWS Landing Zone was to be set up with the following: an organization account, centralized logging, shared services, security, and production accounts. The shared services account is used to deploy common applications such as Bastion and the Tableau server, whereas the security account is created purely for audit purposes.

Appropriate users, groups and permissions were created using the Identity and Access Management (IAM) service to access the different AWS services, along with Multi-Factor Authentication (MFA) activation. The network was set up using a Virtual Private Cloud (VPC) with an appropriate Classless Inter-Domain Routing (CIDR) range, subnets and route tables.

A VPN tunnel is set up between AWS and the customer location. A one-time data transfer is done directly to Amazon Simple Storage Service (S3), after which the backup file on S3 is restored into Amazon Relational Database Service (RDS) and then moved entirely to Redshift. Ultimately, S3 holds the entire dump, which is served by applications deployed on Amazon Elastic Compute Cloud (EC2).

Once collated as one single repository, the data could be easily transformed from raw to columnar format using Lambda functions that can then be smoothly pushed and visualized on Tableau.
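As a sketch of that transformation step, the Lambda below converts a newly landed CSV object into Parquet (a columnar format). The bucket layout, the raw/ and curated/ key prefixes, and the availability of pandas/pyarrow in the function's deployment package are assumptions, not the project's actual configuration:

```python
import os

def columnar_key(source_key):
    """Map a raw CSV object key to its columnar (Parquet) counterpart,
    e.g. raw/2020/01/ads.csv -> curated/2020/01/ads.parquet."""
    base, _ = os.path.splitext(source_key)
    return base.replace("raw/", "curated/", 1) + ".parquet"

def handler(event, context):
    """Triggered by an S3 put event; rewrites the CSV as Parquet
    so Redshift Spectrum / Athena can scan it column-wise."""
    import boto3
    import pandas as pd  # assumed packaged with the function
    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        df = pd.read_csv(obj["Body"])
        out = "/tmp/out.parquet"
        df.to_parquet(out)  # columnar format
        s3.upload_file(out, bucket, columnar_key(key))
```

The pure key-mapping helper keeps raw and curated data side by side in the same bucket, which is one common layout; separate buckets work equally well.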

An extract, transform, load (ETL) tool like Talend can be used to transfer incremental data to S3. An SSH File Transfer Protocol (SFTP) server can be used to upload the excel files to S3. Alternatively, Talend can be used to extract the data from excel files and load it into S3. Active Directory Federation Services (ADFS) is configured to provide federated access to Tableau server as on-premise AD has employees as well as third party vendors added. Glue Crawlers will run periodically and scan the S3 data lake to automatically populate structured as well as unstructured data in S3, which in turn can be connected with Amazon Redshift and all other data warehouses being used.
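Assuming the crawler is managed with boto3, defining and triggering it could be sketched as below; the crawler name, IAM role, S3 path, database name and schedule are all illustrative:

```python
def crawler_config(name, role_arn, s3_path, database):
    """Build the create_crawler parameters for an S3 data-lake crawler."""
    return {
        "Name": name,
        "Role": role_arn,
        "DatabaseName": database,
        "Targets": {"S3Targets": [{"Path": s3_path}]},
        "Schedule": "cron(0 */6 * * ? *)",  # every six hours (assumption)
    }

def refresh_catalog(name, role_arn, s3_path, database):
    """Create (or re-run) the Glue crawler that scans the S3 data lake
    and refreshes the Data Catalog tables."""
    import boto3
    glue = boto3.client("glue")
    glue.create_crawler(**crawler_config(name, role_arn, s3_path, database))
    glue.start_crawler(Name=name)
```

Once the crawler has populated the catalog, the same tables become queryable from Athena and as Redshift Spectrum external tables without any further schema work.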

CloudWatch service will be used for monitoring and Amazon Simple Notification Service (SNS) will be used to notify the users in case of alarms or metrics crossing thresholds. All snapshot backups will be regularly taken and automated based on best practices.
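A hedged sketch of one such alarm with boto3 follows; the metric, threshold, period and names are illustrative, not the project's actual monitoring configuration:

```python
def alarm_params(topic_arn, instance_id):
    """Parameters for put_metric_alarm: notify an SNS topic when average
    CPU on an EC2 instance stays above 80% (threshold is illustrative)."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                 # five-minute datapoints
        "EvaluationPeriods": 2,        # two consecutive breaches
        "Threshold": 80.0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],   # SNS topic that emails/SMSes users
    }

def create_alarm(topic_arn, instance_id):
    import boto3
    boto3.client("cloudwatch").put_metric_alarm(**alarm_params(topic_arn, instance_id))
```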

Security and Logging

It was ensured that the system had a built-in centralized logging system which kept a check on parameters such as the time taken by each process, the success or failure of a process, and the reason for any failure. Data is secured, and security groups are used to control traffic at the Virtual Machine (VM) level.

Network Access Control Lists (NACLs) are used to control traffic at the subnet level, and VPC Flow Logs are enabled to capture the network traffic. CloudTrail is enabled to capture all Application Program Interface (API) activity. AWS GuardDuty analyzes these logs for threat detection and to identify malicious activity in the account, and AWS Config is enabled.

To further harden the setup, the EC2 auto-recovery feature is enabled to address instance failures, if any, so that data is not lost.

The solution was designed in a modular manner keeping in mind the possibility of the addition of new channels and scalability in future, where components could be added or removed without any code changes.

The solution architecture

Application Support

Our support involved components that were developed or modified as a part of the project implementation process.

  • Broadcast Audience Research Council (BARC) sequences related to ETL (extract, transform, load) pipeline functionality support.
  • Data lake support on S3.
  • For Amazon Redshift data warehouse, support on data issues on the stored procedures migrated as a part of the project.
  • Tableau Dashboard support on data links to Redshift.

The customer was to submit requests, prioritize defects and provide inputs to Powerup before planned releases. Based on these inputs, we maintained a backlog of defects reported by client stakeholders. Powerup cloud resources planned and scheduled each release jointly with the customer’s team, conducted unit tests, supported acceptance testing by the customer while fixing business-critical issues, deployed releases to production with warranty support for any defects found there, and conducted weekly project status meetings with client stakeholders to review work progress, planned activities, risks and issues, dependencies and action items, if any.


Post the shift to the cloud, the customer was able to derive sentiment analysis alongside TRPs, based on social media data for new as well as existing shows. It also paved the way for gap analysis, comparing the current infrastructure and processes against potential or expected performance, which helped enhance efficiency.

Cloud platform


Technologies used

Tableau, Redshift, DMS, Glue, Athena.

Business Benefits

  • The customer enjoyed a fully managed and scalable Infrastructure set up and configuration on AWS.
  • 120 dashboards were created, and data processing time was reduced from 2 hours to 30 minutes.
  • The immediate business impact recorded was the modular solution resulting in the management being able to take improved and timely business-critical decisions.
  • Migration to the cloud enabled swift generation of critical reports, with end-to-end time reduced from 6–12 minutes to 3 minutes, which significantly improved the decision-making capability of business leaders. Going forward, TRPs are anticipated to increase further due to this digital shift.

Building your first Alexa Skill — Part 1

By | AI, Alexa, Blogs, Machine Learning, ML

Written by Tejaswee Das, Software Engineer, Powerupcloud Technologies

Technological advancement in the area of Artificial Intelligence & Machine Learning has not only helped systems become more intelligent but has also made them more conversational. You can simply speak to your phone to add items to your shopping list, or instruct your laptop to read your email aloud. In this fast-growing era of voice-enabled automation, Amazon’s Alexa-enabled devices are changing the way people go through their daily routines. In fact, they have introduced a new term into the dictionary: Intelligent Virtual Assistant (IVA).

Techopedia defines an Intelligent Virtual Assistant as “an engineered entity residing in software that interfaces with humans in a human way. This technology incorporates elements of interactive voice response and other modern artificial intelligence projects to deliver full-fledged virtual identities that converse with users.”

Some of the most commonly used IVAs are Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana, with Samsung Bixby joining the already brimming list lately. Although IVAs seem to be technically charged, they bring enormous automation & value. Not only do they make jobs for humans easier, but they also optimize processes and reduce inefficiencies. These systems are so seamless, that just a simple voice command is required to get tasks completed.

The future of personalized customer experience is inevitably tied to “Intelligent Assistance”. –Dan Miller, Founder, Opus Research

So let’s bring our focus to Alexa, Amazon’s IVA. Alexa is Amazon’s cloud-based voice service, which can interface with multiple Amazon devices. Alexa gives you the power to create applications that can interact in natural language, making it more intuitive for users to interact with technology. Its capabilities mimic those of other IVAs such as Google Assistant, Apple Siri, Microsoft Cortana, and Samsung Bixby.

The Alexa Voice Service (AVS) is Amazon’s intelligent voice recognition and natural language understanding service that allows you to voice-enable any connected device that has a microphone and a speaker.

Powerupcloud has worked on multiple use cases involving Alexa voice automation, one of the most successful and widely adopted being for one of the largest general insurance providers.

This blog series aims at giving a high-level overview of building your first Alexa Skills. It has been divided into two parts, first, covering the required configurations for setting up the Alexa skills, while the second focuses on the approach for training the model and programming.

Before we dive in to start building our first skill, let’s have a look at some Alexa terminologies.

  • Alexa Skill — It is a robust set of actions or tasks that are accomplished by Alexa. It provides a set of built-in skills (such as playing music), and developers can use the Alexa Skills Kit to give Alexa new skills. A skill includes both the code (in the form of a cloud-based service) and the configuration provided on the developer console.
  • Alexa Skills Kit — A collection of APIs, tools, and documentation that will help us work with Alexa.
  • Utterances — The words, phrases or sentences the user says to Alexa to convey a meaning.
  • Intents — A representation of the action that fulfils the user’s spoken request.

You can find the detailed glossary at

Following are the prerequisites to get started with your 1st Alexa skill.

  1. Amazon Developer Account (Free: It’s the same as the account you use for
  2. Amazon Web Services (AWS) Account (Recommended)
  3. Basic Programming knowledge

Let’s now spend some time going through each requirement in depth.

We need to use the Amazon Developer Portal to configure our skill and build our model.

  • Click on Create Skill, and then select Custom Model to create your Custom Skill.

Please select your locale carefully. Alexa currently caters to English (AU), English (CA), English (IN), English (UK), German (DE), Japanese (JP), Spanish (ES), Spanish (MX), French (FR), and Italian (IT). We will use English (IN) while developing the current skill.

  • Select ‘Start from Scratch’
  • Alexa Developer Console
  • Enter an Invocation Name for your skill. The invocation name should be unique because it identifies the skill; it is what you say to Alexa to invoke or activate your skill.

There are certain requirements that your Invocation name must strictly adhere to.

  • Invocation name should be two or more words and can contain only lowercase alphabetic characters, spaces between words, possessive apostrophes (for example, “sam’s science trivia”), or periods used in abbreviations (for example, “a. b. c.”). Other characters like numbers must be spelt out. For example, “twenty-one”.
  • Invocation names cannot contain any of the Alexa skill launch phrases such as “launch”, “ask”, “tell”, “load”, “begin”, and “enable”. Wake words including “Alexa”, “Amazon”, “Echo”, “Computer”, or the words “skill” or “app” are not allowed. Learn more about invocation names for custom skills.
  • Changes to your skill’s invocation name will not take effect until you have built your skill’s interaction model. In order to successfully build, your skill’s interaction model must contain an intent with at least one sample utterance. Learn more about creating interaction models for custom skills.
  • Endpoint — The endpoint receives POST requests when a user interacts with your Alexa Skill, so it is essentially the backend for your skill. You can host your skill’s service endpoint either as an AWS Lambda ARN, which is recommended, or as a simple HTTPS endpoint; with Lambda you do not have to manage your own server or SSL certificate.
  • Sign in to the AWS Management Console and look up Lambda in AWS services.
  • Create the function in one of the regions Alexa supports: US East (N. Virginia), EU (Ireland), US West (Oregon), or Asia Pacific (Tokyo).

We are using Lambda in the N.Virginia (us-east-1) region.

  • Once we are in a supported region, we can go ahead and create a new function. There are three options: author the function from scratch, use an available Blueprint, or use the Serverless Application Repository.
  • Lambda lets you write the skill backend in C# / .NET, Go, Java, Node.js, or Python.

We will discuss programming Alexa with different languages in the next part of this series.

  • Go back to the Endpoint section in Alexa Developer Console, and add the ARN we had copied from Lambda in AWS Lambda ARN Default Region.

ARN format — arn:aws:lambda:us-east-1:XXXXX:function:function_name
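As a preview of what that Lambda endpoint does, here is a minimal Python handler that answers Alexa's launch and intent requests with plain-text speech. The welcome message and the generic intent reply are placeholders; real skills dispatch per intent name:

```python
def build_response(speech, end_session=True):
    """Minimal Alexa response envelope (v1.0 of the JSON interface)."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }

def handler(event, context):
    """Entry point configured as the skill's endpoint."""
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        # User said just the invocation name; keep the session open
        return build_response("Welcome to your first skill!", end_session=False)
    if request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        return build_response(f"You invoked the {intent} intent.")
    return build_response("Goodbye.")  # SessionEndedRequest and others
```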

In part 2, we will discuss training our model — adding intents and utterances, finding workarounds for some interesting issues we faced, building workflows using dialog state, understanding the Alexa request and response JSON, and finally our programming approach in Python.

Enabling the leadership of a large OTT to access business data just by asking for it

By | Alexa, AWS, Blogs, data

Written By: Kartikeya Sinha, Lead Data Architect, Powerupcloud & Siva S, CEO, Powerupcloud Technologies

Just imagine the work-life of a Chief Executive or someone from the senior leadership team of a company. You would see them getting into meeting after meeting, always seemingly deep in thought. To make better business decisions, they need to understand their business data, yet in their busy schedules it often turns out to be cumbersome to navigate complex Business Intelligence (BI) dashboards and tens or hundreds of reports to find the metrics they need.

With the introduction of Natural Language Processing (NLP) APIs from leading public cloud providers like AWS, Azure and Google, we have started receiving many requirements around integrating these NLP APIs with BI dashboards so that senior business executives can simply ask for specific data and hear it instantly.

One such case is discussed in this blog post.


Problem Statement

One of our customers is a large video streaming company. They collect several metrics covering video streaming, customer behaviour, application usage, network usage, etc. But these metrics were distributed across the several software tools they used for video streaming, including the likes of Mixpanel, Youbora, Appsee, etc. The customer had the following requirements:


  1. Build a data lake so that all data can be accessed from one centralized location
  2. Build ML engines for prediction, correlation of the app data
  3. Build a highly responsive and graphically rich reporting dashboard
  4. Enable NLP to search metrics using voice or text query

In this blog, we will be covering the custom reporting dashboard and NLP integration modules.


Data Lake Solution

Powerupcloud’s data team built a data lake using Amazon Redshift and Amazon S3 to support the data analysis processes. The data was loaded into Amazon S3 by Talend jobs. An ETL job converts the raw data files to readable CSV files and pushes them to a target bucket. This allows the data to be queried by either Redshift Spectrum or Athena directly from Amazon S3, which brings down data storage costs quite a bit.

Below is a high-level architecture diagram without the Redshift Spectrum or Athena component.



Tech Stack

– Amazon Redshift as the data warehouse (DWH).
– Amazon Lex to do NLP on the query text and extract intent and slot values.
– A query-processing engine written in Python 3, hosted on Elastic Beanstalk.
– Webkit Speech Recognition API to convert speech to text.
– Elastic Beanstalk to host the BI dashboard.
– BI dashboard stack: Bootstrap, jQuery, Morris.js charts.


Rich Reporting Dashboard

Once the data lake was implemented, we faced the next big problem: how do you integrate NLP into a BI platform? We tried several out-of-the-box BI platforms like Redash and PowerBI, but integrating a browser-based voice-to-text converter was a challenge. So we decided to go with the browser’s Webkit Speech API and a custom reporting dashboard.

As the customer needed a rich UI, we chose morris.js charts running on a bootstrap theme. Morris.js allowed us to have rich colours and graphics in the graphs while the bootstrap theme helped in a high level of customization.



Integrating Amazon Lex

This architecture gives you a flow of data from the browser to Redshift.

The queries transcribed by the Webkit Speech API are passed to Amazon Lex, which extracts the intent and its associated slots. Once the slots are identified, the parameters are passed to the Query Processing API, which queries Redshift for the relevant data. This data is then presented through the custom reports.
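On the browser side, the capture step can be sketched roughly as below. The /api/query endpoint, the mic element id, and the renderChart function are hypothetical stand-ins for the dashboard's actual wiring:

```html
<script>
  // webkitSpeechRecognition is Chrome-only; guard accordingly in production
  var recognition = new webkitSpeechRecognition();
  recognition.lang = "en-IN";

  recognition.onresult = function (event) {
    var query = event.results[0][0].transcript;
    // Forward the transcribed query to the Query Processing engine
    fetch("/api/query", {                    // endpoint path is hypothetical
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: query })
    }).then(function (r) { return r.json(); })
      .then(renderChart);                    // renderChart: assumed dashboard plot function
  };

  document.getElementById("mic").onclick = function () {
    recognition.start();                     // the 'mic' icon kicks off capture
  };
</script>
```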


How does the solution work?


  1. Click on the ‘mic’ icon and ask your query.
  2. The BI tool does the speech to text conversion using Webkit Speech API.
  3. The text query is then sent to a Query Processing engine.
  4. Query processing engine sends a request to Amazon Lex for extracting intent and slot values from the query.
  5. Amazon Lex responds back with the intent name and slot values.
  6. Query processing engine uses the intent name and slot values to form a SQL query to the backend DWH-Amazon Redshift.
  7. Using the result of the query from Redshift, the query processing engine forms a response back to the frontend dashboard (BI).
  8. The frontend (BI) dashboard uses the response data to plot the graph/display it in the table.
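Steps 3 to 6 of this flow can be sketched in Python as below. The bot name, alias, intent name, slot names and the app_metrics table are hypothetical stand-ins for the real configuration:

```python
def build_sql(intent, slots):
    """Translate a Lex intent + slot values into SQL for the Redshift DWH.
    Table and column names are hypothetical."""
    if intent == "GetMetric":
        metric = slots["Metric"].lower().replace(" ", "_")  # e.g. "Video Views" -> video_views
        device = slots["DeviceOS"].lower()
        days = int(slots["Days"])
        return (
            f"SELECT event_date, SUM({metric}) FROM app_metrics "
            f"WHERE device_os = '{device}' "
            f"AND event_date >= DATEADD(day, -{days}, CURRENT_DATE) "
            f"GROUP BY event_date ORDER BY event_date"
        )
    raise ValueError(f"unsupported intent: {intent}")

def answer(query_text):
    """Send the transcribed query to Lex, then build the SQL to run on Redshift."""
    import boto3
    lex = boto3.client("lex-runtime")
    r = lex.post_text(botName="BIBot", botAlias="prod",
                      userId="dashboard", inputText=query_text)
    return build_sql(r["intentName"], r["slots"])
```

Separating the pure SQL-building step from the Lex call keeps the intent-to-query mapping easy to extend as new reports are added.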


Training Amazon Lex

The utterances are trained as below. Please note that the more utterances you train, the smarter the engine gets. The slots can be added as per the reports built in the dashboard. In this example, we chose ‘DeviceOS’, ‘GraphType’ and ‘# of days’ as the slots that are needed to be supplied from the customer’s query.




Challenges Faced


  1. The Webkit Speech API does a pretty good job of converting speech to text. However, it works only on the Google Chrome browser. Firefox has recently launched support for speech recognition, but it is still at a very nascent stage.
  2. Ideally, you should be able to ask the BI tool any meaningful query and have it answered. For that, the query processing engine needs to be smart enough to form dynamic SQL queries from the user’s query. We have not yet achieved that, and we are evolving the query processing engine to handle as many queries as possible without needing modification.


Voice-Based BI Engine in Action

The voice search can pull reports based on 3 inputs:


  • Metrics: Visitors, Viewers or Video Views
  • Devices: iOS, Android, TV or PWA
  • Time: Last X days

Sample query: “Can you show me the number of visitors from iOS for the last 10 days?”

Note: voice search for terms like ‘Video Views’ and ‘PWA’ might be a little difficult for Lex to comprehend; text search works better.

Hope this read was insightful. The future is voice-based platforms, be it apps, reports, customer service, etc.

If you would like to know more details on this project or if you want us to build something similar for you, please write to us at


AWS Connect and Lex — Automate Your Customer Support

By | AI, Alexa, AWS, Blogs, Botzer

Written By: Rachana Sharma, Software Engineer, Powerupcloud Technologies

Ever since AWS Connect was announced, we have been waiting for an excuse to get our hands on it and put it to use for solving a real-world use case. Soon enough, we were presented with the opportunity to automate voice-based IVR support for a large government entity in Singapore.

Regular readers of this blog might be aware that we have a thriving enterprise chatbot platform, and our products are used by multiple enterprise customers. So with the experience of deploying chatbots under our belt, when we ran into something like AWS Connect, it made perfect sense to integrate it with Lex to automate voice-based IVR customer support. Why settle for text when you can make the bots talk 😉

That’s exactly what we did and this post explains how we made Connect, AWS Lambda, and Lex work together along with our Botzer engine at a high level.

Some of the queries handled by our final deployed solution are below:

  • Verifying user’s identity using NRIC number from the database
  • Verifying user’s mobile number using OTP authentication
  • Providing user a personalized response post verification
  • Allowing user to make transactions

AWS Connect And Contact Flows

After you set up your first AWS Connect instance (tip: go through this video), you will be able to make calls to a customer-care solution. Every activity on AWS Connect, including the IVR played, is a contact flow — an editable roadmap directing the customer experience of the contact center. You can edit all contact flows in the Contact Flows module of the Routing menu.

The first contact flow played as IVR to the user is the Sample inbound flow (first call experience). You can edit this flow to put customized flows into the IVR.

  1. Edit the Get customer input module: the sample IVR gives you 7 predefined options; we reduced this to 4 DTMF inputs, each performing a specific operation. After editing this module by double-clicking, you can see an output connection for branching each of the 4 options.
  2. For each of the specified DTMF inputs, we used the Transfer to flow module in the Transfer/Terminate menu. Edit it to specify the contact flow to be called when the respective button is pressed. The flow needs to be published so that it appears in the search menu when selecting a flow.

Enable call recording: The module is available in the Set section of flow modules. We can edit call recording behaviour by double-clicking the module and choosing one of the following:

  • None
  • Agent and Customer
  • Agent only
  • Customer only

This will save your recordings in the S3 bucket specified in the Data storage section of the AWS Connect settings.

Store customer input to Contact attribute:

We can store the customer’s input in a contact attribute using the Store customer input module in the Interact section. You specify the prompt text in response to which the user types the value, and the value is saved as a system attribute. We can then use the Set contact attributes module to assign a key to this value so that we can pass it as an attribute to Lambda or Lex modules, or use it in other contact flows.

Lambda Integration with Connect

Amazon Connect can invoke a Lambda function in an AWS account once a resource policy has been set on the Lambda function. You can follow this link to set a resource policy for Lambda and see a sample request/response between Lambda and AWS Connect.

Use Case: Buy CDP data

  1. This contact flow calls a Lambda function to send an OTP. We need to give the full ARN of the Lambda function. We send the user’s mobile number as a system attribute and the NRIC number as a user-defined attribute (taken from the first flow) to Lambda.

Lambda receives an event with the mobile number in it and makes a REST call to send the OTP to the user; we use the AWS SNS service to send the message to the mobile. We could avoid the REST call and put all the code in Lambda by creating a deployment package. Details of creating a deployment package here.

Following is the code for sending the OTP:
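A sketch of such a Lambda function is below, assuming Connect passes the mobile number as a contact attribute named mobileNumber; the attribute name and the returned key are assumptions:

```python
import random

def generate_otp(length=6):
    """Numeric one-time password."""
    return "".join(random.choice("0123456789") for _ in range(length))

def handler(event, context):
    """Amazon Connect invokes this with the contact data; we read the
    customer's mobile number, SMS an OTP via SNS, and hand the OTP back
    to the contact flow for verification."""
    import boto3
    mobile = event["Details"]["ContactData"]["Attributes"]["mobileNumber"]
    otp = generate_otp()
    boto3.client("sns").publish(
        PhoneNumber=mobile,
        Message=f"Your verification code is {otp}",
    )
    # Connect stores returned key/value pairs as external attributes,
    # which the Check contact attribute block can compare against
    return {"otp": otp}
```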

2) After the Lambda function sends the OTP to the user’s mobile, the IVR asks for verification. AWS Connect takes the user’s input in the Store customer input block of the Interact menu, and we branch the activity further based on that input using the Check contact attribute module in the Branch menu. We authenticate the user if they type the correct OTP; otherwise we return them to the main flow with a failed-authentication message.

Lex integration with AWS connect

Amazon Lex is a service for building conversational interfaces using voice and text. By integrating these two services, you can take advantage of Lex’s automatic speech recognition (ASR) and natural language understanding (NLU) capabilities to create great self-service experiences for your customers.

Use Case: Opening a CDP account

  1. We first need to add our Lex bot to Connect to start using it in our contact flows. From the AWS console, go to the Connect instance settings and add the bot in the Contact flows section.
  2. We can start building a contact flow by dragging Get customer input from the Interact menu. Double-click the block, set the input to Amazon Lex, and specify the bot name and alias. You can also send a session attribute to the bot.
  3. We created a Lex bot that elicits slots when the user’s intent is “open CDP account”. On fulfilment of all the required slots, Lex calls the Lambda function configured for the Connect integration.
  4. The Lambda function fetches the slot values from the event and makes a POST call to a REST API, which inserts a record for the user in the DB and sends an email to the customer with their details using the AWS SES service.
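Steps 3 and 4 above can be sketched as a Lex fulfilment Lambda like the following. The slot names, the REST endpoint and the sender address are all hypothetical:

```python
def close(message, state="Fulfilled"):
    """Lex 'Close' dialog action: ends the conversation with a message."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": state,
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def send_confirmation(email, name):
    """E-mail the customer their details via SES (sender address is a placeholder)."""
    import boto3
    boto3.client("ses").send_email(
        Source="noreply@example.com",
        Destination={"ToAddresses": [email]},
        Message={
            "Subject": {"Data": "Your CDP account"},
            "Body": {"Text": {"Data": f"Hi {name}, your account has been registered."}},
        },
    )

def handler(event, context):
    """Fulfilment hook for the 'open CDP account' intent (Lex V1 event shape)."""
    slots = event["currentIntent"]["slots"]
    name, email = slots.get("Name"), slots.get("Email")
    # A POST to the record-keeping REST API would go here (endpoint hypothetical):
    # requests.post("https://api.example.com/cdp-accounts", json=slots)
    send_confirmation(email, name)
    return close(f"Thanks {name}, your CDP account request has been recorded.")
```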

On successful execution of the Lambda function, the flow is transferred back from Lex to AWS Connect.

We will soon publish the relevant code used for this on GitHub and update this post.

Hope you found this useful. Happy customer servicing! 🙂

Text to speech using JavaScript and Python and AWS Polly

By | AI, Alexa, AWS, Blogs, Chatbot

Written by Saikrishna Dronavalli, Former Software Engineer, Powerupcloud Technologies.

Giving voice responses is an important feature of your application these days. In this post, we will discuss how to convert text to speech using JavaScript and Python.


Among the available options, using a simple JavaScript library is the easiest way to convert text into speech. All we need to do is add the JavaScript library and call the respective function, passing the text we want to convert to speech as an argument.

I am using the ResponsiveVoice.JS plugin for this. Please follow the procedure and sample code snippets provided.

Download the plugin from the website and add it to your HTML page, or use the CDN in your HTML page as shown below.
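A minimal page wiring in the plugin might look like this; the CDN URL and key parameter are assumptions and should be checked against the ResponsiveVoice site:

```html
<!-- Load ResponsiveVoice from the CDN (URL/key placeholder are assumptions) -->
<script src="https://code.responsivevoice.org/responsivevoice.js?key=YOUR_KEY"></script>
<script>
  // Speak a sentence in one of the plugin's built-in voices
  responsiveVoice.speak("Hello, welcome to our application!", "UK English Female");
</script>
```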

For different usages please follow the code snippets.

Fetching all Available voices

Stopping, pausing and resuming the voice

Conversion when the browser supports voice
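The three operations above map onto the plugin's API roughly as follows; this is a sketch to run in a page that has already loaded responsivevoice.js:

```html
<script>
  // Fetching all available voices
  var voices = responsiveVoice.getVoices();
  console.log(voices.map(function (v) { return v.name; }));

  // Stopping, pausing and resuming the current utterance
  responsiveVoice.cancel();
  responsiveVoice.pause();
  responsiveVoice.resume();

  // Converting only when the browser supports voice
  if (responsiveVoice.voiceSupport()) {
    responsiveVoice.speak("Your browser supports speech synthesis.");
  }
</script>
```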

Using Python

I will show you two ways for Text to speech conversion using python.


Polly is a text to speech synthesis service from AWS. It is a very simple service to use. All you need is an AWS subscription.

We can use AWS services in multiple ways: the AWS CLI (command-line interface), API clients for different languages, and the AWS console.

  • To test Polly in the AWS console, log in to your AWS account and go to the Polly console. Fill in the text area, choose the language, region and voice, and click the Listen to speech button.

Using the Polly API client, we can do everything that can be performed on the AWS console, such as uploading our own lexicon and using it for text-to-speech synthesis, in addition to using the default voices provided by Polly.

Install the boto3 client using these instructions.

  • Once the installation is complete, we need to create a client object for Polly as shown below.

Once you create a client, you can do all the operations available on the AWS Polly console, such as getting the supported voices, synthesizing speech, and uploading or deleting a lexicon.

The two operations you will use most often are listing the voices available in a particular language and converting text to speech; the synthesize-speech request carries the text, the voice to use, and the desired output format.
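A minimal sketch of both operations with boto3 follows; the voice ID and language code are illustrative:

```python
def speech_request(text, voice_id="Aditi", fmt="mp3"):
    """Parameters for polly.synthesize_speech (voice choice is illustrative)."""
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": fmt}

def list_voices(language_code="en-IN"):
    """All Polly voice names available for a given language."""
    import boto3
    polly = boto3.client("polly")
    resp = polly.describe_voices(LanguageCode=language_code)
    return [v["Name"] for v in resp["Voices"]]

def text_to_speech(text, out_path="speech.mp3"):
    """Synthesize the text and save the returned audio stream to disk."""
    import boto3
    polly = boto3.client("polly")
    resp = polly.synthesize_speech(**speech_request(text))
    with open(out_path, "wb") as f:
        f.write(resp["AudioStream"].read())
```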

Using pyttsx

Install it using the below command on Linux:

sudo pip install pyttsx

Converting text to speech using pyttsx

Besides just conversion, you can also add events.
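Both the basic conversion and the event hooks can be sketched as below, based on the pyttsx engine API (on Python 3 the maintained fork is pyttsx3 with the same calls):

```python
def speak(text):
    """Offline text-to-speech with pyttsx."""
    import pyttsx
    engine = pyttsx.init()
    engine.say(text)
    engine.runAndWait()  # blocks until the utterance finishes

def speak_with_events(text):
    """Attach callbacks to utterance lifecycle events."""
    import pyttsx

    def on_start(name):
        print("started:", name)

    def on_end(name, completed):
        print("finished:", name, completed)

    engine = pyttsx.init()
    engine.connect("started-utterance", on_start)
    engine.connect("finished-utterance", on_end)
    engine.say(text, "demo")  # second argument names the utterance
    engine.runAndWait()
```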

For more information on using pyttsx please visit here.

That’s it. Happy conversion!

Creating a Simple AWS Lex Bot

By | AI, Alexa, AWS, Blogs

Written By: Saikrishna Dronavalli, Former Software Engineer, Powerupcloud Technologies

What is AWS LEX?

Amazon Lex is an AWS service for building conversational interfaces for any applications using voice and text. Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR) to enable you to build highly engaging user experiences with lifelike, conversational interactions

Advantages of LEX

Well, Lex democratizes bot creation. It is simple and easy to deploy; you don’t have to do the heavy lifting of NLP, deep learning and machine learning, and it is cost-effective. It also supports different input and output formats (text and speech).

With that out of the way, let’s get started with creating a simple bot.


We need to create 2 IAM roles for creating a Lex bot. I am using the following roles: lex-exec-role and lambda-exec-role-for-lex-get-started.

Role Name: lex-exec-role

Role Type: AWS Lambda

In "Permissions", choose "Inline permissions" and attach:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "lambda:InvokeFunction",
        "polly:SynthesizeSpeech"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Edit the Trust Relationship as:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "" },
      "Action": "sts:AssumeRole"
    }
  ]
}

For the second IAM role:

Role Name: lambda-exec-role-for-lex-get-started

Role Type: AWS Lambda

In "Permissions", choose "Inline permissions" and attach:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": ["*"]
    }
  ]
}

Create Lex Bot

Open the AWS console, select Lex from the services list, and choose Create. This opens a UI for creating the bot and shows some sample bots: Custom bot, BookTrip, OrderFlowers, ScheduleAppointment, etc. We are going with a custom bot for this article.

Choose Custom bot and enter the following details:

App name: HotelBookingBot

Output voice: Salli

Session timeout: 5 minutes.

IAM role: Choose the lex-exec-role from the list.

Choose Create.

Creating Slot Types

Slot types define the kinds of values your bot will ask the user for in its questions.

For example, if your bot is for ordering a pizza, the user needs to provide different values like the size of the pizza, the crust, and the type (veg, non-veg, etc.), so the slot types would be "Type", "Size", and "Crust". For the sake of this example, we are going to build a hotel-booking chatbot, so the kinds of questions we will ask are the user's check-in date and time, check-out date and time, etc.

On HotelBookingBot, find Slot Types on the left side. Let's create a slot type for asking the number of people; call it NoOfPeople. Click + and add the slot type.

Adding Values to Slot Types

We created a slot type called NoOfPeople in the step above. We now need to add values to this slot type: click on NoOfPeople and add all the values. I am going with the values 1, 2, and 3 for this example.
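The same slot type can also be created programmatically through the Lex Model Building Service's `put_slot_type` call; a hedged sketch (the description text and helper names are mine):

```python
def enumeration_values(values):
    # Shape that put_slot_type expects for its enumerationValues argument.
    return [{"value": v} for v in values]

def create_no_of_people_slot_type(values):
    import boto3  # requires AWS credentials with Lex model-building access
    lex = boto3.client("lex-models")
    lex.put_slot_type(
        name="NoOfPeople",
        description="Number of guests for the booking",
        enumerationValues=enumeration_values(values),
    )
```

`create_no_of_people_slot_type(["1", "2", "3"])` mirrors the values entered in the console above.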

Creating Intent

The intent of a bot is its goal. For example, if you create a pizza-ordering bot, its intent is taking pizza orders. Similarly, in our application, the intent is booking a hotel.

To create an intent, select our bot and look for Intents. Click +; this opens a dialog for creating the intent.

Choose "Create your own intent". I am going to add an intent called "BookHotel".

NOTE: Similar to slot types, there are several built-in intents provided by AWS. If one of them fits your application, choose the existing intent instead.

We now have slot types and an intent. Next we need to add the sample utterances and slots.

After adding the intent, Lex opens the intent editor.

Add Slots

To add a slot, choose a slot row and enter the name of the slot, then choose the slot type you created earlier (or a built-in one) as required. I will use the predefined slot type AMAZON.TIME, which fits the CheckIn and CheckOut dates and times. Come up with a human-sounding prompt so that the user doesn't feel they are chatting with a bot.
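For reference, in the model-building API's `put_intent` call each slot is described by a small dictionary; a sketch of how the CheckIn slot might look (the prompt wording, helper name, and retry count are illustrative):

```python
def make_slot(name, prompt, slot_type="AMAZON.TIME", priority=1):
    # One slot entry in the shape accepted by put_intent.
    return {
        "name": name,
        "slotType": slot_type,
        "slotConstraint": "Required",
        "priority": priority,
        "valueElicitationPrompt": {
            "messages": [{"contentType": "PlainText", "content": prompt}],
            "maxAttempts": 2,
        },
    }
```

`make_slot("CheckIn", "When would you like to check in?")`, plus a second call for CheckOut, would populate the intent's `slots` list.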

Adding Response Cards

To add a response card to a particular prompt, click the Settings button of that prompt. It opens a dialog for entering the response card. Enter all the values: the card image (URL of the image), title, card text, and button values. Once you deploy your bot on Facebook Messenger, or if you make an API call, you will get the response in this format.

Error Handling

Error handling deals with phrases that the bot does not understand, and handling them gracefully is very important. AWS Lex has a very simple error-handling mechanism.

For error handling, choose your bot and click "Error Handling" on the left side of the AWS Lex console. Enter the "Clarification prompts", "Maximum number of retries", and "Hang-up phrase".

This tells the Lex bot to show the given clarification prompt, up to the maximum number of retries, whenever it does not understand the user's input. If it exceeds the maximum number of retries, it shows the hang-up phrase.
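These console settings map onto the `clarificationPrompt` and `abortStatement` arguments of the model-building `put_bot` call; a sketch of the shapes involved (the helper name and default of 3 retries are mine):

```python
def error_handling_config(clarification, hang_up, max_retries=3):
    # clarificationPrompt / abortStatement shapes used by put_bot.
    def message(text):
        return {"contentType": "PlainText", "content": text}
    return {
        "clarificationPrompt": {"messages": [message(clarification)],
                                "maxAttempts": max_retries},
        "abortStatement": {"messages": [message(hang_up)]},
    }
```

The returned entries would be passed to `put_bot` alongside the bot's other settings.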

Build and Test

Building is the process in which your bot is trained on the corresponding utterances and slots. If there are no mistakes in the steps above, your bot will be ready for testing. To build your bot, click the Build button at the top-right corner of its AWS Lex console page. Once the build succeeds, Lex creates a version of the bot and you can try it out using the test bot.
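Beyond the console's test window, you can exercise a built bot with the Lex runtime's `post_text` call; a hedged sketch (the user id and helper names are arbitrary):

```python
def turn_payload(bot_name, text, user_id="test-user", alias="$LATEST"):
    # Keyword arguments for lex-runtime's post_text.
    return {"botName": bot_name, "botAlias": alias,
            "userId": user_id, "inputText": text}

def chat(bot_name, text):
    import boto3  # requires AWS credentials
    lex = boto3.client("lex-runtime")
    resp = lex.post_text(**turn_payload(bot_name, text))
    # The reply text, and where the conversation stands (e.g. ElicitSlot).
    return resp.get("message"), resp.get("dialogState")
```

`chat("HotelBookingBot", "I want to book a hotel")` should return the bot's first prompt.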

Publishing a Bot

Once your testing is done successfully (i.e., the bot meets all your requirements), you can publish it for public usage. Publishing can be done from the AWS console: click the Publish button at the top-right corner of the bot's console page. You can enter an alias name for the bot or update an existing alias. Aliases are very useful for deployment: a bot can have multiple aliases, which you can use as different versions of your bot.
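Aliases can likewise be managed through the model-building API with `put_bot_alias`; a sketch (the alias name `prod` is just an example):

```python
def alias_request(bot_name, alias, version):
    # Arguments for lex-models' put_bot_alias.
    return {"name": alias, "botName": bot_name, "botVersion": version}

def publish_alias(bot_name, alias, version):
    import boto3  # requires AWS credentials
    lex = boto3.client("lex-models")
    lex.put_bot_alias(**alias_request(bot_name, alias, version))
```

`publish_alias("HotelBookingBot", "prod", "1")` would point the `prod` alias at version 1 of the bot.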