Building your first Alexa Skill — Part 1


Written by Tejaswee Das, Software Engineer, Powerupcloud Technologies

Technological advancement in Artificial Intelligence & Machine Learning has not only made systems more intelligent but has also made them more vocal. You can speak to your phone to add items to your shopping list, or ask your laptop to read out your email. In this fast-growing era of voice-enabled automation, Amazon’s Alexa-enabled devices are changing the way people go through their daily routines. In fact, they have introduced a new term into the dictionary: Intelligent Virtual Assistant (IVA).

Techopedia defines an Intelligent Virtual Assistant as “an engineered entity residing in software that interfaces with humans in a human way. This technology incorporates elements of interactive voice response and other modern artificial intelligence projects to deliver full-fledged ‘virtual identities’ that converse with users.”

Some of the most commonly used IVAs are Google Assistant, Amazon Alexa, Apple Siri, and Microsoft Cortana, with Samsung Bixby lately joining the already brimming list. Although IVAs may seem technically complex, they bring enormous automation & value. Not only do they make jobs easier for humans, they also optimize processes and reduce inefficiencies. These systems are so seamless that a simple voice command is all it takes to get tasks completed.

The future of personalized customer experience is inevitably tied to “Intelligent Assistance”. –Dan Miller, Founder, Opus Research

So let’s bring our focus to Alexa, Amazon’s IVA. Alexa is Amazon’s cloud-based voice service, which can interface with multiple Amazon devices. Alexa gives you the power to create applications that interact in natural language, making it far more intuitive for users to interact with your systems. Its capabilities mimic those of other IVAs such as Google Assistant, Apple Siri, Microsoft Cortana, and Samsung Bixby.

The Alexa Voice Service (AVS) is Amazon’s intelligent voice recognition and natural language understanding service that allows you to voice-enable any connected device that has a microphone and a speaker.

Powerupcloud has worked on multiple use cases involving Alexa voice automation; one of the most successful and widely adopted was built for one of the largest general insurance providers.

This blog series aims to give a high-level overview of building your first Alexa skill. It is divided into two parts: the first covers the configuration required to set up the skill, while the second focuses on the approach for training the model and programming.

Before we dive in to start building our first skill, let’s have a look at some Alexa terminologies.

  • Alexa Skill — It is a robust set of actions or tasks that are accomplished by Alexa. It provides a set of built-in skills (such as playing music), and developers can use the Alexa Skills Kit to give Alexa new skills. A skill includes both the code (in the form of a cloud-based service) and the configuration provided on the developer console.
  • Alexa Skills Kit — A collection of APIs, tools, and documentation that will help us work with Alexa.
  • Utterances — The words, phrases or sentences the user says to Alexa to convey a meaning.
  • Intents — A representation of the action that fulfils the user’s spoken request.

You can find the detailed glossary at https://developer.amazon.com/docs/ask-overviews/alexa-skills-kit-glossary.html

Following are the prerequisites to get started with your first Alexa skill.

  1. Amazon Developer Account (Free: It’s the same as the account you use for Amazon.in)
  2. Amazon Web Services (AWS) Account (Recommended)
  3. Basic Programming knowledge

Let’s now spend some time going through each requirement in depth.

We must use the Amazon Developer Portal to configure our skill and build our model.

  • Click on Create Skill, and then select Custom Model to create your Custom Skill.

Please select your locale carefully. Alexa currently caters to English (AU), English (CA), English (IN), English (UK), German (DE), Japanese (JP), Spanish (ES), Spanish (MX), French (FR), and Italian (IT). We will use English (IN) while developing the current skill.

  • Select ‘Start from Scratch’.
  • Enter an Invocation Name for your skill. The invocation name must be unique, because it identifies your skill, and it is what you say to Alexa to invoke or activate the skill.

There are certain requirements that your Invocation name must strictly adhere to.

  • Invocation name should be two or more words and can contain only lowercase alphabetic characters, spaces between words, possessive apostrophes (for example, “sam’s science trivia”), or periods used in abbreviations (for example, “a. b. c.”). Other characters like numbers must be spelt out. For example, “twenty-one”.
  • Invocation names cannot contain any of the Alexa skill launch phrases such as “launch”, “ask”, “tell”, “load”, “begin”, and “enable”. Wake words including “Alexa”, “Amazon”, “Echo”, “Computer”, or the words “skill” or “app” are not allowed. Learn more about invocation names for custom skills.
  • Changes to your skill’s invocation name will not take effect until you have built your skill’s interaction model. In order to successfully build, your skill’s interaction model must contain an intent with at least one sample utterance. Learn more about creating interaction models for custom skills.
  • Endpoint — The endpoint receives POST requests when a user interacts with your Alexa skill, so it is essentially the backend of your skill. You can host the skill’s service endpoint either as an AWS Lambda ARN (recommended) or as a simple HTTPS endpoint. The main advantage of using an AWS Lambda ARN is that you do not have to manage your own server or provision an SSL certificate.
  • Sign in to the AWS Management Console at https://aws.amazon.com/console/
  • Search for Lambda in the AWS services list. Lambda functions for Alexa skills can be hosted in the following supported regions:
  • US East (N. Virginia)
  • EU (Ireland)
  • US West (Oregon)
  • Asia Pacific (Tokyo)

We are using Lambda in the N. Virginia (us-east-1) region.

  • Once we are in a supported region, we can go ahead and create a new function. There are three options for creating your function: author it from scratch, use one of the available blueprints, or pick from the Serverless Application Repository. You will also need to choose a runtime; Lambda supports the following languages:
  • C# / .NET
  • Go
  • Java
  • Node.js
  • Python

We will discuss programming Alexa with different languages in the next part of this series.
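In the meantime, just to sanity-check that the Lambda wiring works end to end, here is a bare-bones sketch of a Python handler for a custom skill. It only echoes the request type back; the response shape follows the Alexa Skills Kit request/response format, and the speech strings are placeholders:

```python
def lambda_handler(event, context):
    """Minimal Alexa custom-skill handler (sketch only; Part 2 goes deeper)."""
    request_type = event["request"]["type"]

    if request_type == "LaunchRequest":
        speech = "Welcome to your first skill."
        end_session = False
    elif request_type == "IntentRequest":
        intent_name = event["request"]["intent"]["name"]
        speech = f"You invoked the {intent_name} intent."
        end_session = True
    else:  # SessionEndedRequest and anything else
        speech = "Goodbye."
        end_session = True

    # Response shape defined by the Alexa Skills Kit request/response reference
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }
```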

While creating the function, you will need to attach an IAM execution role. You can read more about IAM roles at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

  • Go back to the Endpoint section in the Alexa Developer Console and add the ARN copied from Lambda under AWS Lambda ARN Default Region.

ARN format — arn:aws:lambda:us-east-1:XXXXX:function:function_name

In Part 2, we will discuss training our model — adding intents & utterances, finding workarounds for some interesting issues we faced, building workflows using dialog state, understanding the Alexa request & response JSON, and finally our programming approach in Python.

Voice-Enabled BI Platform


Written By: Kartikeya Sinha, Lead Data Architect, Powerupcloud & Siva S, CEO, Powerupcloud Technologies

Just imagine the work life of a chief executive or someone from the senior leadership team of a company. You would see them going from meeting to meeting, always seeming to be thinking about something. To make better business decisions, they need to understand their business data, and in their super-busy schedules it often becomes cumbersome to navigate complex Business Intelligence (BI) dashboards and hundreds of reports to find the metrics they need.

With the introduction of Natural Language Processing (NLP) APIs from leading public cloud providers like AWS, Azure & Google, we have started receiving a lot of requirements around integrating these NLP APIs with BI dashboards, so that senior business executives can simply ask for specific data and hear the answer instantly.

One such case is discussed in this blog post.

Problem Statement

One of our customers is a large video-streaming company. They collect several metrics covering video streaming, customer behaviour, application usage, network usage, etc. But these metrics were scattered across the various tools they used for video streaming, including the likes of Mixpanel, Youbora, and Appsee. The customer had the following requirements:

  1. Build a data lake so that all data can be accessed from one centralized location
  2. Build ML engines for prediction, correlation of the app data
  3. Build a highly responsive and graphically rich reporting dashboard
  4. Enable NLP to search metrics using voice or text query

In this blog, we will be covering the custom reporting dashboard and NLP integration modules.

Data Lake Solution

Powerupcloud’s data team built a data lake using Amazon Redshift and Amazon S3 to support the data-analysis processes. The data was loaded into Amazon S3 by Talend jobs. An ETL job converts the raw data files to readable CSV files and pushes them to a target bucket. This allows the data to be queried directly from Amazon S3 by either Redshift Spectrum or Athena, which brings down data storage costs quite a bit.
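In this project the conversion was handled by Talend jobs; purely as an illustration, a minimal Python sketch of the convert-and-push step (the bucket names and the JSON-lines input format are assumptions) could look like this:

```python
import csv
import json

import boto3

s3 = boto3.client("s3")

def convert_raw_to_csv(raw_bucket: str, raw_key: str, target_bucket: str) -> str:
    """Download a raw JSON-lines export, flatten it to CSV, push to the target bucket."""
    raw = s3.get_object(Bucket=raw_bucket, Key=raw_key)["Body"].read().decode("utf-8")
    records = [json.loads(line) for line in raw.splitlines() if line.strip()]

    target_key = raw_key.rsplit(".", 1)[0] + ".csv"
    local_path = "/tmp/output.csv"
    with open(local_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)

    s3.upload_file(local_path, target_bucket, target_key)
    return target_key
```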

Below is a high-level architecture diagram without the Redshift Spectrum or Athena component.

Tech Stack

– Amazon Redshift as the DWH.

– Amazon Lex to do NLP on the query text and extract intent and slot values.

– An Elastic Beanstalk-based query processing engine written in Python 3.

– Webkit Speech Recognition API to convert speech to text.

– Elastic Beanstalk to host the BI dashboard.

– Tech stack for the BI dashboard — Bootstrap, jQuery, Morris.js charts.

Rich Reporting Dashboard

Once the data lake was implemented, we faced the next big problem: how do you integrate NLP into a BI platform? We tried several out-of-the-box BI platforms like Redash, PowerBI, etc., but integrating a browser-based voice-to-text converter was a challenge. So we decided to go with the Webkit Speech API and a custom reporting dashboard.

As the customer needed a rich UI, we chose Morris.js charts running on a Bootstrap theme. Morris.js allowed us to have rich colours and graphics in the graphs, while the Bootstrap theme allowed a high level of customization.

Integrating Amazon Lex

This architecture gives you the flow of data from the browser to Redshift.

The queries generated by the Webkit Speech API are passed to Amazon Lex, which extracts the intent and associated slots. Once the slots are identified, the parameters are passed to the Query Processing API, which queries Redshift for the relevant data. This data is then presented through the custom reports built.

How does the solution work?

  1. Click on the ‘mic’ icon and ask your query.
  2. The BI tool does the speech to text conversion using Webkit Speech API.
  3. The text query is then sent to a Query Processing engine.
  4. Query processing engine sends a request to Amazon Lex for extracting intent and slot values from the query.
  5. Amazon Lex responds back with the intent name and slot values.
  6. Query processing engine uses the intent name and slot values to form a SQL query to the backend DWH, Amazon Redshift (steps 4–6 are sketched in code after this list).
  7. Using the result of the query from Redshift, the query processing engine forms a response back to the frontend dashboard (BI).
  8. The frontend (BI) dashboard uses the response data to plot the graph/display it in the table.
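A minimal sketch of steps 4 to 6 inside the query processing engine; the bot name, alias, slot names, and table are all illustrative:

```python
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

def text_query_to_sql(text_query: str, session_id: str):
    # Steps 4-5: send the text query to Lex and read back the intent and slots
    lex_response = lex.post_text(
        botName="VoiceBIBot",   # illustrative bot name
        botAlias="prod",        # illustrative alias
        userId=session_id,
        inputText=text_query,
    )
    slots = lex_response["slots"]  # e.g. {"DeviceOS": "iOS", "Days": "10"}

    # Step 6: map the slot values onto a parameterised query for Redshift
    sql = (
        "SELECT report_date, visitors "
        "FROM daily_metrics "          # illustrative table
        "WHERE device_os = %s AND report_date >= dateadd(day, -%s, current_date)"
    )
    params = (slots["DeviceOS"], int(slots["Days"]))
    return sql, params
```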

Training Amazon Lex

The utterances are trained as below. Please note that the more utterances you train, the smarter the engine gets. The slots can be added as per the reports built in the dashboard. In this example, we chose ‘DeviceOS’, ‘GraphType’ and ‘# of days’ as the slots that need to be supplied by the customer’s query.

Challenges Faced

  1. The Webkit Speech API does a pretty good job of converting speech to text. However, it works only in the Google Chrome browser. Firefox has recently launched support for speech recognition, but that is still at a very nascent stage.
  2. Ideally, you could ask the BI tool any meaningful query and it would be able to answer it. For that, however, the query processing engine needs to be really smart, forming dynamic SQL queries based on the user’s query. We have not yet achieved that, and are evolving the query processing engine to handle as many queries as possible without needing modification.

Voice-Based BI Engine in Action

Demo URL: https://voice-bi.powerupcloud.com/

The voice search can pull reports based on 3 inputs:

  • Metrics — Visitors, Viewers, or Video Views
  • Devices — iOS, Android, TV, or PWA
  • Time — Last X days

Sample Query: Can you show me the number of visitors from iOS for the last 10 days?

Note: Voice search for terms like ‘Video Views’ and ‘PWA’ might be a little difficult for Lex to comprehend. Text search works better.

Hope this read was insightful. The future belongs to voice-based platforms, be it apps, reports, or customer service.

If you would like to know more details on this project or if you want us to build something similar for you, please write to us at data@powerupcloud.com.

AWS Connect and Lex — Automate Your Customer Support


Written By: Rachana Sharma, Software Engineer, Powerupcloud Technologies

Ever since AWS Connect was announced, we have been waiting for an excuse to get our hands on it and put it to use for solving a real-world use case. Soon enough, we were presented with the opportunity to automate voice-based IVR support for a large government entity in Singapore.

Regular readers of this blog might be aware that we have a thriving enterprise chatbot platform, Botzer.io, which is used by multiple enterprise customers. So, with the experience of deploying chatbots under our belt, when we ran into AWS Connect it made perfect sense to integrate it with Lex and automate voice-based IVR customer support. Why settle for text when you can make the bots talk 😉

That’s exactly what we did, and this post explains at a high level how we made Connect, AWS Lambda, and Lex work together with our Botzer engine.

Some of the queries handled by our final deployed solution are below:

  • Verifying the user’s identity using their NRIC number from the database
  • Verifying the user’s mobile number using OTP authentication
  • Providing the user a personalized response post verification
  • Allowing the user to make transactions

AWS Connect And Contact Flows

After you set up your first AWS Connect instance (tip: go through this video), you will be able to make calls to a customer-care solution. Every activity on AWS Connect, including the IVR played, is a contact flow. A contact flow is an editable roadmap directing the customer experience of the contact center. You can edit all contact flows in the contact flow module of the Routing menu.

The first contact flow played as the IVR to the user is the Sample inbound flow (first call experience). You can edit this flow to put customized flows in the IVR.

  1. Edit the Get customer input module: the sample IVR gives you 7 predefined options; we reduced it to 4 DTMF inputs, each of which performs a specific operation. After you edit this module by double-clicking, you can see an output connection for branching each of the 4 options.
  2. For each of the specified DTMF inputs, we used the Transfer to flow module in the Transfer/Terminate menu. We can edit a Transfer to flow module to specify the contact flow to be called when the respective button is pressed. The flow needs to be published so that it appears in the search menu of the select-a-flow option.

Enable call recording: this module is available in the Set section of the flow modules. We can edit the call-recording behaviour by double-clicking the module and choosing one of the following:

  • None
  • Agent and Customer
  • Agent only
  • Customer only

This will save your recordings in the S3 bucket specified in the Data storage section of the AWS Connect settings.

Store customer input in a contact attribute:

We can store the customer’s input in a contact attribute using the Store customer input module in the Interact section. You need to specify the prompt text in response to which the user will type the value. The value gets saved as a system attribute. We can then use the Set contact attributes module to assign a key to this value, so that we can pass the key as an attribute to other Lambda or Lex modules, or use it in other contact flows.

Lambda Integration with Connect

Amazon Connect can successfully invoke a Lambda function in an AWS account once a resource policy has been set on the Lambda function. You can follow this link to set a resource policy for Lambda and to see a sample request/response between Lambda and AWS Connect.
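As a sketch, the resource policy can be attached with a single boto3 call; the function name and instance ARN below are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Allow the Amazon Connect instance (placeholder ARN) to invoke the function
lam.add_permission(
    FunctionName="sendOtpFunction",            # illustrative function name
    StatementId="AllowAmazonConnectInvoke",
    Action="lambda:InvokeFunction",
    Principal="connect.amazonaws.com",
    SourceArn="arn:aws:connect:us-east-1:XXXXX:instance/<instance-id>",
)
```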

Use Case: Buy CDP data

  1. This contact flow calls a Lambda function to send the OTP. We need to give the full ARN of the Lambda function. We send the user’s mobile number as a system attribute and the NRIC number as a user-defined attribute (taken from the first flow) to Lambda.

Lambda receives an event with the mobile number in it and makes a REST call to send the OTP to the user. We are using the AWS SNS service to send the message to the mobile. We could avoid the REST call and put all the code in Lambda if we build a deployment package. Details of creating a deployment package here.

Following is the code for sending the OTP:
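(A minimal sketch, assuming the OTP is generated inside the Lambda function and delivered directly via SNS; the contact-attribute name is illustrative.)

```python
import secrets

import boto3

sns = boto3.client("sns")

def lambda_handler(event, context):
    # Connect passes the caller's number as a system attribute,
    # and user-defined attributes under Details.ContactData.Attributes
    contact_data = event["Details"]["ContactData"]
    mobile_number = contact_data["CustomerEndpoint"]["Address"]
    nric = contact_data["Attributes"].get("NRIC")  # set in the first flow

    otp = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit one-time password

    # Send the OTP as an SMS via SNS
    sns.publish(
        PhoneNumber=mobile_number,
        Message=f"Your one-time password is {otp}",
    )

    # Connect stores the returned key/value pairs as external attributes,
    # so the flow can later compare the caller's keyed-in digits against "otp"
    return {"otp": otp, "nric": nric or ""}
```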

  2. After the Lambda function sends the OTP to the user’s mobile, the IVR asks for verification: AWS Connect can take the user’s input in the Store customer input block of the Interact menu. We can branch the activity further based on the customer’s input using the Check contact attribute module in the Branch menu. Here we authenticate the user if they type the correct OTP; otherwise we return them to the main flow with a failed-authentication message.

Lex Integration with AWS Connect

Amazon Lex is a service for building conversational interfaces using voice and text. By integrating these two services, you can take advantage of Lex’s automatic speech recognition (ASR) and natural language processing/understanding (NLU) capabilities to create great self-service experiences for your customers.

Use Case: Opening a CDP Account

  1. We first need to add our Lex bot to Connect to start using it in our contact flows. To do this, from the AWS console go to the AWS Connect instance settings and add the bot in the Contact flows section.
  2. We can start building a contact flow by dragging Get customer input from the Interact menu. After dragging, double-click the block, set the input type to Amazon Lex, and specify the bot’s name and alias. You can also send a session attribute to the bot.
  3. We created a Lex bot that elicits slots when the user’s intent is “open CDP account”. On fulfilment of all the required slots, Lex calls the Lambda function configured for the integration.
  4. The Lambda function fetches the slot values from the event and makes a POST call to a REST API, which inserts a record for the user in the DB and sends an email to the customer with their details using the AWS SES service (a sketch of this handler follows the list).
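A minimal sketch of that fulfilment handler, with illustrative slot names and the REST/DB plumbing collapsed into a direct SES send:

```python
import boto3

ses = boto3.client("ses")

def lambda_handler(event, context):
    # Lex (V1) passes the elicited slot values under currentIntent.slots
    slots = event["currentIntent"]["slots"]
    name, email = slots["Name"], slots["Email"]   # illustrative slot names

    # ... insert the user's record into the DB here ...

    ses.send_email(
        Source="support@example.com",             # must be an SES-verified sender
        Destination={"ToAddresses": [email]},
        Message={
            "Subject": {"Data": "Your CDP account"},
            "Body": {"Text": {"Data": f"Hi {name}, your CDP account request was received."}},
        },
    )

    # Tell Lex the intent is fulfilled so control returns to the Connect flow
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Your CDP account request has been submitted.",
            },
        }
    }
```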

On successful execution of the Lambda function, the flow is transferred back from Lex to AWS Connect.

We will soon publish the relevant code used for this on GitHub and update this post.

Hope you found this useful. Happy customer servicing! 🙂

Creating a Simple AWS Lex Bot


Written By: Saikrishna Dronavalli, Former Software Engineer, Powerupcloud Technologies

What is AWS LEX?

Amazon Lex is an AWS service for building conversational interfaces for any application using voice and text. Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR) to enable you to build highly engaging user experiences with lifelike, conversational interactions.

Advantages of LEX

Well, Lex democratizes bot creation: it is simple and easy to deploy, you don’t have to do the heavy lifting of NLP, deep learning, and machine learning, and it’s cost-effective. It also supports different input and output formats (text and speech).

With that out of the way, let’s get started with creating a simple bot.

Prepare

We need to create two IAM roles for the Lex bot. I am using the following roles — lex-exec-role and lambda-exec-role-for-lex-get-started.

Role Name: lex-exec-role

Role Type: AWS Lambda

Under Permissions, choose Inline permissions and add the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "lambda:InvokeFunction",
        "polly:SynthesizeSpeech"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Edit the Trust Relationship as:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lex.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

For the second IAM role:

Role Name: lambda-exec-role-for-lex-get-started

Role Type: AWS Lambda

Under Permissions, choose Inline permissions and add the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Create Lex Bot

Open the AWS Console, select Lex from Services, and choose Create. It will open a UI for creating the bot and show some examples: Custom bot, BookTrip, OrderFlowers, ScheduleAppointment, etc. We are going with a custom bot for this article.

Choose Custom bot and enter the following details:

App name: HotelBookingBot

Output voice: Salli

Session timeout: 5 minutes.

IAM role: Choose the lex-exec-role from the list.

Choose Create.

Creating Slot Types

Slot types define the kinds of values your bot will ask the user for in its questions.

For example, if your bot is for ordering a pizza, the user needs to provide different values like the size of the pizza, the crust, and the type (veg, non-veg, etc.), so the slot types would be Type, Size, and Crust. For this example we are going to build a hotel-booking chatbot, so the kinds of questions we will ask cover the user’s check-in date and time, checkout date and time, etc.

On the left side of the HotelBookingBot console, find Slot Types +. Let’s create a slot type for asking the number of people and call it NoOfPeople: click on + and add the slot type.

Adding Values to Slot Types

We created a slot type called NoOfPeople in the step above; we now need to add values to it. To do this, click on NoOfPeople. It will open a UI as shown below. Add all the values — I am going with 1, 2, and 3 for this example.

Creating Intent

The intent of a bot is its goal. For example, if you create a pizza-ordering bot, its intent is taking pizza orders. Similarly, in our application, our intent is booking a hotel.

To create an intent, select our bot and look for Intents. Click on +; this will open a modal as shown below.

Choose to create your own intent. I am going to add an intent called “BookHotel”.

NOTE: Similar to slot values, there are several intents already created by AWS. If one of the AWS-provided intents fits your application, choose the existing one.

We now have slot types and an intent. Next, we need to add the sample utterances and slots.

After adding the intent, the console will open a screen as shown below.

Add Slots

To add the slots, choose a slot and enter its name. Choose the slot type you created earlier, as per your requirement. I will use a predefined slot type, AMAZON.TIME, which fits the check-in and checkout date and time. Come up with a human-sounding prompt so that the user doesn’t feel he/she is chatting with a bot.

Adding Response Cards

To add a response card to a particular prompt, click on the Settings button of that prompt. It will open a modal for entering the response card. Enter all the values, such as the card image (URL of the image), title, card text, and button values, as shown in the figure. Once you deploy your bot in Facebook Messenger, or if you make an API call, you will get the response in this format.
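For reference, a Lex generic response card follows the shape below; the values are illustrative:

```json
{
  "version": 1,
  "contentType": "application/vnd.amazonaws.card.generic",
  "genericAttachments": [
    {
      "title": "Number of people",
      "subTitle": "How many guests?",
      "imageUrl": "https://example.com/card.png",
      "buttons": [
        { "text": "1", "value": "1" },
        { "text": "2", "value": "2" },
        { "text": "3", "value": "3" }
      ]
    }
  ]
}
```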

Error Handling

Error handling covers the phrases that are not understood by the bot, and it is very important to handle these errors. AWS Lex has a very simple error-handling mechanism.

For error handling, choose your bot and click on “Error Handling” on the left side of the AWS Lex console. Enter the clarification prompts, the maximum number of retries, and the hang-up phrase.

This tells the Lex bot to show the given clarification prompt, up to the maximum number of retries, whenever it does not understand the user’s input. If the maximum number of retries is exceeded, it will show the hang-up phrase.

Build and Test

Building is the process where your bot is trained with the corresponding utterances and slots. If there are no mistakes in the steps above, your bot will be ready for testing. To build your bot, click the Build button at the top-right corner of the AWS Lex console for your bot. Once the build succeeds, Lex creates a version of the bot, and you can test it using the test bot.

Publishing a Bot

Once your testing is done successfully (i.e., the bot meets all your requirements), you can publish the bot for public usage. Publishing is done from the AWS console: click the Publish button at the top-right corner of the particular bot’s console. You can enter an alias name for the bot or update an existing alias. Alias names are very useful when you want to deploy a bot: you can have a bot with multiple aliases and use them as different versions of your bot.
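Once an alias is published, you can also exercise the bot programmatically. A small sketch using boto3; the alias name is illustrative:

```python
import boto3

# Assumes the bot was published with an alias named "prod"
lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="HotelBookingBot",
    botAlias="prod",
    userId="demo-user-1",
    inputText="I want to book a hotel for two people",
)

print(response["intentName"])  # e.g. "BookHotel"
print(response["slots"])       # slot values extracted so far
print(response["message"])     # the bot's reply or next prompt
```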