AI-based solution


Customer: One of the world’s leading food and agri-business companies

Problem Statement

One of our clients, a leading global food and agri-business company, was in the process of building an e-commerce application for their products to ensure global access and availability. They needed a solution that would give them complete visibility into their microservices and PaaS architecture and track every application transaction, rather than a sampling of them. They also wanted visibility into user analytics, so they could analyse conversion trends and user behaviour in the context of a user session.

Given this complexity, they needed a solution that provides automated problem detection and root-cause analytics, so they could focus on the findings and make the end-user experience smoother rather than invest time in finding the root cause of those problems.

Proposed Solution

After a thorough evaluation, Powerup recommended an AI-based solution that can automatically analyse all dependencies at the microservice level and trace root causes at code-level depth. For this, Powerup leveraged the capabilities and offerings of the Dynatrace APM tool.

The approach

Implementation stage: Powerup implemented Dynatrace by deploying OneAgent on the Kubernetes hosts, which initiated monitoring of all the microservices. Within a few minutes, Dynatrace automatically discovered the application topology map with its dependencies.

Powerup also integrated the Azure PaaS services with Dynatrace to gain complete visibility into the application.

Configuration stage

  1. Management zones: Powerup configured different management zones so that each team has visibility into the data relevant to it.
  2. User tagging: Powerup configured user-session tagging and key user actions, and set up conversion goals to track revenue against user experience.
  3. Dashboards: Powerup created an all-in-one dashboard so that user experience, application transaction status, infrastructure health, API calls and detected problems can all be tracked in a single view.

Dynatrace applied dynamic thresholds to all detected anomalies, and Powerup helped the customer understand and analyse the automatically detected problems and trace their root causes. Powerup also ensured high availability and quick content delivery at a global level by running the PaaS services in HA mode and serving content through a CDN.
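The problems Dynatrace detects this way can also be consumed programmatically. The sketch below is a minimal, illustrative Python call against the Dynatrace Problems API v2; the environment URL and token are placeholders, not values from this engagement.

    import requests

    # Placeholders -- substitute your own Dynatrace environment ID and API token.
    DYNATRACE_ENV = "https://{your-environment-id}.live.dynatrace.com"
    API_TOKEN = "dt0c01.SAMPLE_TOKEN"

    def fetch_open_problems():
        """List problems that Dynatrace has auto-detected and not yet closed."""
        response = requests.get(
            f"{DYNATRACE_ENV}/api/v2/problems",
            headers={"Authorization": f"Api-Token {API_TOKEN}"},
            params={"problemSelector": 'status("open")'},
        )
        response.raise_for_status()
        return response.json()["problems"]

    for problem in fetch_open_problems():
        print(problem["title"], problem.get("rootCauseEntity"))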

Cloud platform

Azure

Technologies used

Dynatrace OneAgent, Dynatrace DEM, Kubernetes, Azure PaaS services, CDN

Transforming Invoice Processing through Automation


Written by Jeremiah Peter, Solution Specialist, Advanced Services Group; contributor: Amita PM, Associate Tech Lead at Powerupcloud Technologies.

Automation Myth

According to a recent survey by a US-based consultancy firm, organizations spend anywhere between $12 and $20 per invoice from the time they receive it until they reconcile it. The statistic is a stark reminder of how organizations, in pursuit of grand cost-cutting measures, often overlook a gaping loophole in their RPA adoption policy: all or nothing!

This blog makes a compelling case for implementing RPA incrementally in strategic processes to yield satisfactory results. Streamlining the invoice management process is, undoubtedly, a judicious leap in that direction.

Unstructured invoice dilemma

In real-world scenarios, the data in invoices is not standardized, and the quality of submissions is often diverse and unpredictable. Under these circumstances, conventional data extraction tools lack the sophistication to parse the necessary parameters and often leave organizations with the short end of the stick.

Consequently, most invoice processing solutions available today fail to cope with the format variance across invoices. The Powerup Invoice Processing Application is a simple web application (written in HTML and Python) that leverages cloud OCR (Optical Character Recognition) services to extract text from myriad invoice formats. Powered by an intelligent algorithm, the solution uses pattern matching to extract data (e.g. dates in MM-DD-YYYY format) and breaks free from the limitations of traditional data extraction solutions.

A high-level peek into the solution



Driven by a highly user-friendly interface, the Powerup Invoice Processing Application enables users to upload invoices (PNG, JPG) from their local workstations. The upload invokes a seamless API call to the Google OCR service, which returns a long string object as the API response.
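As an illustration, a minimal call to the Google Cloud Vision text-detection API might look like the sketch below; the file name and credentials handling here are assumptions, not the production code.

    from google.cloud import vision

    # Assumes GOOGLE_APPLICATION_CREDENTIALS is set in the environment.
    client = vision.ImageAnnotatorClient()

    with open("invoice.png", "rb") as f:  # hypothetical sample invoice
        image = vision.Image(content=f.read())

    # text_detection returns every text block found in the image; the first
    # annotation carries the full extracted string.
    response = client.text_detection(image=image)
    full_text = response.text_annotations[0].description
    print(full_text)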

Subsequently, the string is converted to a human-readable format by a script that uses a Python-based regex library to identify the desirable parameters in the invoice, such as the date, invoice number, order number, unit price, etc. The extracted parameters are passed back to the web application after successful validation. The entire process takes no more than 10 seconds.
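A simplified sketch of that extraction step is shown below; the patterns are illustrative stand-ins, not the production rules.

    import re

    # Illustrative patterns only -- real invoices need richer variants.
    PATTERNS = {
        "date": r"\b(\d{2}-\d{2}-\d{4})\b",                        # MM-DD-YYYY
        "invoice_number": r"invoice\s*(?:no\.?|#)?\s*[:\-]?\s*(\w+)",
        "order_number": r"order\s*(?:no\.?|#)?\s*[:\-]?\s*(\w+)",
        "unit_price": r"unit\s*price\s*[:\-]?\s*\$?([\d,]+\.\d{2})",
    }

    def extract_fields(ocr_text):
        """Pick common invoice parameters out of the raw OCR string."""
        fields = {}
        for name, pattern in PATTERNS.items():
            match = re.search(pattern, ocr_text, re.IGNORECASE)
            fields[name] = match.group(1) if match else None
        return fields

    print(extract_fields("Invoice No: INV42  Date 03-15-2020  Unit Price $12.50"))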

Another noteworthy feature of the solution is that it seamlessly integrates with popular ERP systems such as SAP, QuickBooks, Sage, Microsoft Dynamics, etc. Given that ERP systems stash critical accounts payable documents (purchase orders, invoices, shipping receipts), a versatile solution requires integration with the organization’s ERP software to complete the automation cycle. 

A brief look at the advantages offered by invoice processing automation can help you assess the value delivered by the solution. 

The Silver-lining


The adoption of Powerup Invoice Processing Application helps organizations reap the following benefits:

  • Deeply optimized invoice processing TAT resulting in quicker payment cycles
  • Up to 40% cost savings in procurement and invoice processing
  • Highly scalable solution that can process multiple invoices in a few minutes
  • Elimination of human data-entry errors
  • Free-form parameter pattern-matching 
  • Easy integration with ERP software
  • Readily implementable solution; no change required from vendor’s end 

Conclusion 

While procurement teams in various organizations struggle to strike a trade-off between low funds dispensation and high cost savings, measures that enable them to cut expenses and improve efficiencies in the invoicing process are a welcome respite.

Tools such as the Powerup Invoice Processing Application can help organizations infuse automation and agility into their processes, as well as break down process complexities into manageable parts. Moreover, the time and cost efficiencies achieved in these undertakings can be passed on to other functions, significantly bolstering the organization’s service offerings. To find out how your organization can be positively impacted, sign up for a free demo session here.

Building your first Alexa Skill — Part 1


Written by Tejaswee Das, Software Engineer, Powerupcloud Technologies

Technological advancement in the areas of Artificial Intelligence and Machine Learning has not only helped systems become more intelligent but has also made them more vocal. You can simply speak to your phone to add items to your shopping list, or instruct your laptop to read out your email. In this fast-growing era of voice-enabled automation, Amazon’s Alexa-enabled devices are changing the way people go through their daily routines. In fact, the trend has introduced a new term into the dictionary: Intelligent Virtual Assistant (IVA).

Techopedia defines an Intelligent Virtual Assistant as “an engineered entity residing in software that interfaces with humans in a human way. This technology incorporates elements of interactive voice response and other modern artificial intelligence projects to deliver full-fledged ‘virtual identities’ that converse with users.”

Some of the most commonly used IVAs are Google Assistant, Amazon Alexa, Apple Siri and Microsoft Cortana, with Samsung Bixby lately joining the already brimming list. Although IVAs may seem technically complex, they bring enormous automation and value: not only do they make jobs easier for humans, they also optimize processes and reduce inefficiencies. These systems are so seamless that a simple voice command is all it takes to get tasks completed.

The future of personalized customer experience is inevitably tied to “Intelligent Assistance”. –Dan Miller, Founder, Opus Research

So let’s bring our focus to Alexa, Amazon’s IVA. Alexa is Amazon’s cloud-based voice service, which can interface with multiple Amazon devices. Alexa gives you the power to create applications that interact in natural language, making it more intuitive for users to interact with your systems. Its capabilities mimic those of other IVAs such as Google Assistant, Apple Siri, Microsoft Cortana and Samsung Bixby.

The Alexa Voice Service (AVS) is Amazon’s intelligent voice recognition and natural language understanding service that allows you to voice-enable any connected device that has a microphone and a speaker.

Powerupcloud has worked on multiple use cases involving Alexa voice automation. One of the most successful and widely adopted implementations was for one of the largest general insurance providers.

This blog series aims to give a high-level overview of building your first Alexa skill. It is divided into two parts: the first covers the configuration required to set up the skill, while the second focuses on the approach to training the model and programming.

Before we dive in to start building our first skill, let’s have a look at some Alexa terminology.

  • Alexa Skill — It is a robust set of actions or tasks that are accomplished by Alexa. It provides a set of built-in skills (such as playing music), and developers can use the Alexa Skills Kit to give Alexa new skills. A skill includes both the code (in the form of a cloud-based service) and the configuration provided on the developer console.
  • Alexa Skills Kit — A collection of APIs, tools, and documentation that will help us work with Alexa.
  • Utterances — The words, phrases or sentences the user says to Alexa to convey a meaning.
  • Intents — A representation of the action that fulfils the user’s spoken request.

You can find the detailed glossary at

https://developer.amazon.com/docs/ask-overviews/alexa-skills-kit-glossary.html

The following are the prerequisites to get started with your first Alexa skill.

  1. Amazon Developer Account (Free: It’s the same as the account you use for Amazon.in)
  2. Amazon Web Services (AWS) Account (Recommended)
  3. Basic Programming knowledge

Let’s now spend some time going through each requirement in depth.

We need to use the Amazon Developer Portal to configure our skill and build our interaction model.

  • Click on Create Skill, and then select Custom Model to create your Custom Skill.

Please select your locale carefully. Alexa currently caters to English (AU), English (CA), English (IN), English (UK), German (DE), Japanese (JP), Spanish (ES), Spanish (MX), French (FR), and Italian (IT). We will use English (IN) while developing the current skill.

  • Select ‘Start from Scratch’.
  • Enter an invocation name for your skill. The invocation name must be unique because it identifies your skill; it is what you say to Alexa to invoke or activate the skill.

There are certain requirements that your Invocation name must strictly adhere to.

  • Invocation name should be two or more words and can contain only lowercase alphabetic characters, spaces between words, possessive apostrophes (for example, “sam’s science trivia”), or periods used in abbreviations (for example, “a. b. c.”). Other characters like numbers must be spelt out. For example, “twenty-one”.
  • Invocation names cannot contain any of the Alexa skill launch phrases such as “launch”, “ask”, “tell”, “load”, “begin”, and “enable”. Wake words including “Alexa”, “Amazon”, “Echo”, “Computer”, or the words “skill” or “app” are not allowed. Learn more about invocation names for custom skills.
  • Changes to your skill’s invocation name will not take effect until you have built your skill’s interaction model. In order to successfully build, your skill’s interaction model must contain an intent with at least one sample utterance. Learn more about creating interaction models for custom skills.
  • Endpoint — The endpoint receives POST requests when a user interacts with your Alexa skill, so it is effectively the backend of your skill. You can host your skill’s service endpoint either as an AWS Lambda function (recommended) or as a simple HTTPS endpoint. The advantages of using an AWS Lambda ARN include not having to manage servers or provision an SSL certificate, with the Lambda free tier covering typical skill workloads. To set up the function:
  • Sign in to the AWS Management Console at https://aws.amazon.com/console/
  • Look up Lambda in the AWS services list. Note that Lambda functions for Alexa skills can be hosted only in the following regions:
  • US East (N. Virginia)
  • EU (Ireland)
  • US West (Oregon)
  • Asia Pacific (Tokyo)

We are using Lambda in the N. Virginia (us-east-1) region.

  • Once we are in a supported region, we can go ahead and create a new function. There are three options for creating a function: author it from scratch, use one of the available blueprints, or deploy one from the Serverless Application Repository. Lambda supports authoring functions in the following languages, among others:
  • C# / .NET
  • Go
  • Java
  • NodeJS
  • Python

We will discuss programming Alexa with different languages in the next part of this series.
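To give a flavour of what the endpoint code looks like before we get there, here is a minimal Python handler that works directly with Alexa’s raw request JSON (no ASK SDK). The ‘HelloWorldIntent’ name is a hypothetical example, not part of the skill we build in part 2.

    def lambda_handler(event, context):
        """Minimal Alexa endpoint: route the request type and build a response."""
        request = event["request"]
        end_session = True

        if request["type"] == "LaunchRequest":
            speech = "Welcome to my first skill. What would you like to do?"
            end_session = False
        elif request["type"] == "IntentRequest":
            if request["intent"]["name"] == "HelloWorldIntent":  # hypothetical intent
                speech = "Hello from my first Alexa skill!"
            else:
                speech = "Sorry, I did not understand that."
        else:  # e.g. SessionEndedRequest
            speech = "Goodbye!"

        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": end_session,
            },
        }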

While creating the function you will also be asked to choose an execution role, which grants the function its permissions; you can read more about IAM roles at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

  • Go back to the Endpoint section in the Alexa Developer Console, and paste the ARN we copied from Lambda into the AWS Lambda ARN Default Region field.

ARN format — arn:aws:lambda:us-east-1:XXXXX:function:function_name

In part 2, we will discuss training the model: adding intents and utterances, finding workarounds for some interesting issues we faced, building workflows using dialog state, understanding the Alexa request and response JSON, and finally our programming approach in Python.

Chatbots 2.0 — The new Series of bots & their influence on Automation


Written by Rishabh Sood, Associate Director — Advanced Services Group at Powerupcloud Technologies

Chatbots as a concept are not new. In fact, under the domain of Artificial Intelligence, the origin of chatbots traces back a long way. In 1950, Alan Turing published “Computing Machinery and Intelligence”, starting the unending debate “Can machines think?”, laying the foundation of the Turing test and eventually leading, in 1966, to ELIZA, the first ever chatbot. ELIZA failed to pass the Turing test but did start a horde of chatbots to follow, each more mature than its predecessor.

The next few years saw a host of chatbots, from PARRY to ALICE, but hardly any saw the light of day. The real chatbot war started when the larger players came into the picture: Apple led with Siri in 2010, followed closely by Google Now, Amazon’s Alexa and Microsoft’s Cortana. These assistants made life a tad easier for users, who could now ask Siri to book an Uber or tell Alexa to switch off the lights (another way to make our lives more cushioned). While they did create huge value for users by automating daily chores (and offering a companion to speak to, for the lonely ones), businesses were still a long way from extracting benefits from the automated conversational channel.

Fast-forward to the world of today and chatbots are part of every business. Every company has a budget allocated for automating at least one process with chatbots. Oracle says that 80% of businesses are already using, or plan to start using, chatbots for major business functions by 2020.

Chatbots have been implemented across companies and functions, primarily with a focus on automating support systems (internal as well as external). Most of the bots available in the market today respond to user queries based on keyword or phrase matches. The more advanced bots use intent matching and entity extraction to handle more complex user queries, and a handful even interact with enterprise systems to provide real-time data to users. Most of the commercially successful bots in the market today rely on text-based interactions.

Most of the bots in action today augment tasks that are repeatable and predictable in nature; such tasks, if not automated, would require considerable human effort. These chatbots are powered by Natural Language Processing engines that identify the user’s intent (verb or action), which is then passed to the bot’s brain to execute a series of steps and generate a response for the identified intent. A handful of bots also contain Natural Language Generation engines to produce conversations with a human touch. Sadly, 99.9% of today’s implementations would still fail the more than 60-year-old Turing test.
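As a toy illustration of the simplest form of this, the sketch below picks an intent by keyword overlap. It is a deliberately naive stand-in for how a real NLP engine scores intents, not any particular product’s algorithm.

    # Toy keyword-overlap intent matcher; intent names and keywords are made up.
    INTENT_KEYWORDS = {
        "track_order": {"track", "order", "shipment", "delivery", "where"},
        "reset_password": {"reset", "password", "forgot", "login"},
    }

    def match_intent(user_query):
        """Return the intent whose keyword set best overlaps the query tokens."""
        tokens = set(user_query.lower().split())
        scores = {
            intent: len(tokens & keywords)
            for intent, keywords in INTENT_KEYWORDS.items()
        }
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

    print(match_intent("i forgot my password"))  # -> reset_password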

It’s true that conversational engines, as chatbots are often called, have been around for a couple of years, but the usefulness of their existence is now being put to the test. The last couple of months have seen a considerable improvement in how conversational engines add value to businesses, which some refer to as the chatbot 2.0 wave.

At Powerup, we continuously spend effort on research to make our products and offerings better and suit increasing market demands. So, what can one expect from this new wave of bots? For starters, the whole world is moving towards voice-based interactions; text remains only for the traditional few. Bots therefore need to be equipped with smart, intelligent voice-to-text engines that can understand different accents and word pronunciations, and extract the relevant text from the noise in the user’s query, to deliver actual value. The likes of Google and Microsoft have spent billions of dollars on voice-to-text engines, yet this remains a tough nut to crack, keeping the accuracy of voice-based systems limited in the business world.

Voice-based devices, such as Amazon Echo and Google Home, bring convenience and accessibility together. Being cheap and mass-produced (the smart-speaker market is slated to grow to $11.79 billion by 2023) makes them regular household items rather than luxuries. Bots will have to start interacting with users via such devices, not limited to the traditional channels of web and social. This requires not only the traditional voice-to-text layers but also device-specific skills (such as Alexa Voice Services for Alexa-compatible devices) to be written. A key question is how to make the user experience on a purely voice-based platform (although the Echo Spot has a small screen attached), where visual rendering is almost nil, as seamless and engaging for users as it is on traditional channels.

In 2017, 45% of people globally were reported to prefer speaking to a chatbot rather than a human agent. Two years down the line, chatbots are all set to become mainstream rather than alternative channels of communication. But this poses a greater challenge for the companies in the business: bots will now have to start delivering business value in terms of ROI, conversions, conversation drops and other metrics that matter. H&M uses a bot that quizzes users to understand their preferences and then shows clothing recommendations based on those identified preferences, which significantly increased conversion on customer queries.

The new age of chatbots has already started moving in a more conversational direction, rather than the rule-based response generation the earlier bots were capable of. This means bots now understand human speech better and can sustain conversations with humans for longer periods. This has been made possible by the move from traditional intent-and-entity NLP models to neural and convolutional networks, which build word representations and derive relations over them to understand user queries.

Traditionally, retail has remained the biggest adopter of chatbots. According to Statista.com, retail accounted for more than 50% of the chatbot market until 2016. With advancements arriving in the chatbot world at lightning speed, other sectors are picking up the pace: healthcare and telecommunications, followed by banking, are joining the race to derive business outputs via chatbots, reporting 27%, 25% and 20% acceptance respectively in 2018. The new wave of bots is slated to further narrow this adoption gap across sectors. A study released by Deloitte this year highlights that internal chatbot use cases are growing faster than customer-facing ones, with IT use cases reported as the highest.

Chatbots have always been a way of conversing with users. Businesses have focused on how the chatbot experience can be improved for the end customer, while technology has focused on how chatbots can be made more intelligent. As one of the fastest-growing channels of customer communication, bots generate a host of data in the form of conversation logs, and businesses can derive a wealth of insights from this data as bot adoption increases over the next couple of years. A challenge most businesses will face is regulation, such as the GDPR in the EU; how businesses work around these rules will be interesting to see.

Mobile apps remain the most widely adopted means of usage and communication in the 21st century, but customers are tired of installing multiple apps on their phones. An average user installs more than 50 apps on a smartphone, and the trend is only going to change: as players consolidate, users will limit the number of apps that get the coveted memory on their phones. This gives businesses an opportunity to push chatbots as a communication channel, by integrating bots not only into their (mobile-compatible, of course) websites but also into other mobile-adaptable channels, such as Google Assistant.

According to Harvard Business Review researchers, a 5-minute delay in responding to a customer query increases the chances of losing the customer by 100%, while a 10-minute delay increases this chance fourfold. This basic premise of customer service is exactly what automated conversational engines, chatbots, take care of.

Chatbots have a bright future, especially with technological advancement, availability and adaptability all on the rise. How the new age of bots adds value to businesses remains to be seen and monitored.

It would be great to hear what you think the future of automated user engagement will be, and its degree of influence.

Bringing automation to Online Dating & Matrimony, saving big bucks!!


Written by Rishabh Sood, Associate Director, Advanced Services Group at Powerupcloud Technologies.

Matchmaking is probably one of the oldest professions, as far back as documented history can be traced. We have all come across some form of matchmaking, be it the neighbourhood aunty constantly looking to pair her daughter with the NRI thirty-something, or a relative with a lifelong wish of setting you up with her niece or nephew. Those were simpler times, when the adults understood our requirements (or assumed they did) and, with deep regard for our feelings (no pun intended!!), would search for the most (un)suitable match.

With the advancement of technology, the dating and matchmaking industry started migrating online, and with changing lifestyles, open dating and matchmaking no longer remained taboo. In 2005, 29 percent of U.S. adults agreed with the statement, “People who use online dating sites are desperate”; by 2013, only 21 percent of adults agreed.

More than half of the dating industry has since moved online (according to IBISWorld, the dating industry accounts for $3 billion in revenue in the US alone) and is slated to grow at 25% CAGR through 2020. With this digital revolution, companies began accumulating a host of data in the form of images. These images, uploaded by users while creating their profiles, were a goldmine of information for deriving user insights and improving business metrics.

India, with a population of more than 1.3 billion people, is the second-largest online market in the world, with over 460 million internet users. The online dating industry in India is expected to grow at a CAGR of 10.5% from 2018 to 2023. With such a huge user base and bright prospects, companies have sprung up across the country in sizeable numbers, with several already hitting profitability, unlike many other online ventures.
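For context, a quick back-of-the-envelope check of what that growth rate implies over the five-year window:

    # What a 10.5% CAGR from 2018 to 2023 implies for the market size.
    cagr = 0.105
    years = 5
    multiplier = (1 + cagr) ** years
    print(f"{multiplier:.2f}x")  # ~1.65x, i.e. about 65% total growth over 5 years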

A very niche segment within online dating caters to the traditional audience: the social matchmaking business. These companies bring individuals together based on matching preferences and lifestyles, allow them to connect and get to know each other, and then take the plunge. Most of these companies, to run the business legally, need to regulate their user base and digitally available data according to India’s internet censorship norms, which are quite stringent.

For one of the largest matrimonial players in India (and across the globe), the number of users registered and images uploaded daily is quite sizeable: the player would typically see around 10,000 profiles created and 30,000 images uploaded every month. These were the profile images users would upload as part of their portfolios. Being one of the major criteria for getting matches, these images were mandatory for profile creation and had to go through a very stringent process of acceptance or rejection by the moderators.

The business process followed for profile activation is shown in the below image.

The business had invested heavily in the photo moderation process, as this was the core of the model. A 20-member team would manually assess each and every image uploaded, analyzing it on the following parameters:

· Age (should be between 25–60 years)

· Gender match (for male profiles, it should be a male image)

· Group Photo (for profile images, group photo would not be allowed)

· Indecently dressed/nude image

· Should not be a celebrity image

· Should not be a selfie

· Should not have a watermark across the image

· Should be above a specified quality (no blur, contrast, reflection, etc. in the image)

The manual moderation process not only made profile activation very slow (up to 72 hours, i.e. 3 days) but also left moderation to individual judgment: an image that looked indecent to one moderator might be perfectly alright for another to approve. In such a critical process, a misjudgment could also draw the ire of the legal watchdogs.

At Powerup, we specialize in image vision and building custom models that deliver business ROI. We worked with this leader in matrimonial services on a feasibility study for automating the complete photo moderation process. Our team of ML solution experts analyzed the customer’s set of images; the images were sanitized, structured and then labeled to train the vision model, and multiple candidate models were tested against the business problem. One peculiar problem was celebrity detection: although a host of open-source libraries detect well-known personalities across the world, none of them can detect lesser-known faces such as Indian television artists or Lollywood and Tollywood actors.

To resolve this, a database of celebrity images (which had been manually rejected during the manual photo moderation process) was borrowed from the business team and used to further train the model via reinforcement learning.

The team followed a five-step approach (depicted below) to automate the photo moderation process, backed by powerful image processing models.

But how would such a model scale for a company that processes more than 30,000 images a month? Would it be able to identify the new data sets added every day? What if the system failed to recognize an anomaly and an image got activated as an incorrect approval?

The system was designed with a feedback loop, where the engine constantly feeds on manual feedback about the classified and unclassified data. To address scalability, the custom image processing models were backed by a strong reinforcement learning loop, which constantly adds to the dataset for enhanced accuracy in the photo moderation process.
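The production implementation is described in the tech blog linked at the end of this post and is built on AWS Rekognition. As a rough, illustrative boto3 sketch (not the production engine), a few of the checks listed earlier could be expressed like this:

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    def moderate_profile_photo(image_bytes, expected_gender):
        """Run a subset of the moderation checks described above on one image."""
        # Indecently dressed / nude images
        labels = rekognition.detect_moderation_labels(Image={"Bytes": image_bytes})
        if labels["ModerationLabels"]:
            return "REJECT: unsafe content"

        # Group photos, age range and gender match
        faces = rekognition.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
        details = faces["FaceDetails"]
        if len(details) != 1:
            return "REJECT: no face or group photo"
        face = details[0]
        if face["AgeRange"]["High"] < 25 or face["AgeRange"]["Low"] > 60:
            return "REJECT: age outside 25-60"
        if face["Gender"]["Value"].lower() != expected_gender.lower():
            return "REJECT: gender mismatch"

        # Celebrity images
        celebs = rekognition.recognize_celebrities(Image={"Bytes": image_bytes})
        if celebs["CelebrityFaces"]:
            return "REJECT: celebrity image"

        return "APPROVE"

    with open("profile.jpg", "rb") as f:  # hypothetical uploaded photo
        print(moderate_profile_photo(f.read(), expected_gender="male"))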

With a dataset of around 1 million images, the model delivered 60% accuracy on deployment. Within six months, the accuracy rose to 78%, driven by daily dataset growth and reinforcement learning on the custom-built engine.

With the engine increasing its accuracy daily, it not only automated a critical function in the process but also achieved a positive ROI on the implementation. Within six months, the manual moderation team was reduced to five members who look only into exception scenarios, a 75% reduction in the task force. The profile activation process was cut from 72 hours to within a day, a 72% improvement in TAT. With 78% accuracy on positive image classification, the engine was not only a compelling use case but also a critical support system for the business.

To understand how the solution was implemented, please refer to our tech blog series at https://blog.powerupcloud.com/realtime-image-moderation-at-scale-using-aws-rekognition-d5e0a1969244