Amazon Web Services (AWS), the cloud computing arm of e-commerce giant Amazon, has made Amazon Lex available to all customers.
Amazon Lex is an artificial intelligence (AI) system that enables developers to build applications that incorporate both voice and text for conversation with the user.
It is the system that provides the deep learning algorithms behind Amazon Alexa, for example.
Previously, few developers had been able to build, deploy, and scale apps with automatic speech recognition (ASR) and natural language understanding (NLU), according to AWS.
This is largely due to the amount of heavy lifting required to train the deep learning algorithms that power these applications.
Now, however, AWS is offering Amazon Lex as a fully managed service for customers that wish to build their own applications.
Typically, Amazon Lex is used via the Amazon Echo, Amazon’s smart home assistant, for tasks like checking the weather or the news, or perhaps turning on lights or heating in homes with smarter capabilities.
To build a new app on the system, a developer supplies sample phrases that describe a user's intent, along with the corresponding information needed to fulfil that intent and any follow-up questions Amazon Lex should ask to get the full picture of the situation.
Taking the example of a pizza delivery, a developer would need to input the intent, such as “order a pizza”, and the corresponding information, such as toppings and location. Finally, questions like “when do you want it delivered?” need to be asked in order to complete the order.
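In code, that intent-and-slots structure might look like the following sketch, which assumes the AWS SDK for Python (boto3) and the Lex Model Building API. The intent name, slot names, and the custom "PizzaToppings" slot type are all illustrative, not taken from the article.

```python
# Illustrative intent definition for the pizza example described above.
# "OrderPizza", the slot names, and the "PizzaToppings" custom slot type
# are hypothetical; AMAZON.TIME is a built-in Lex slot type.
order_pizza_intent = {
    "name": "OrderPizza",
    # Sample phrases that express the user's intent.
    "sampleUtterances": [
        "I want to order a pizza",
        "Order a {Topping} pizza",
    ],
    # Slots: the corresponding information Lex must collect, each with a
    # follow-up question it can ask when the value is missing.
    "slots": [
        {
            "name": "Topping",
            "slotType": "PizzaToppings",  # hypothetical custom slot type
            "slotConstraint": "Required",
            "priority": 1,
            "valueElicitationPrompt": {
                "maxAttempts": 2,
                "messages": [{"contentType": "PlainText",
                              "content": "What topping would you like?"}],
            },
        },
        {
            "name": "DeliveryTime",
            "slotType": "AMAZON.TIME",
            "slotConstraint": "Required",
            "priority": 2,
            "valueElicitationPrompt": {
                "maxAttempts": 2,
                "messages": [{"contentType": "PlainText",
                              "content": "When do you want it delivered?"}],
            },
        },
    ],
}

# With AWS credentials configured, a definition like this could be
# registered via the Lex Model Building API, e.g.:
#   import boto3
#   boto3.client("lex-models").put_intent(**order_pizza_intent)
```

Amazon Lex uses the sample utterances to train its model, and the slot prompts to drive the follow-up questions in the conversation.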
AWS claims Amazon Lex then takes care of the rest by building a machine learning model that parses the speech or text from the user, understands the intent and manages the conversation.
At this stage, developers can publish the conversational app to IoT devices, web applications, or messaging platforms such as Facebook Messenger.
Importantly, Amazon Lex is integrated with AWS Lambda, the event-driven compute service that lets developers run code without managing servers.
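In practice, that integration means Lex can call a Lambda function to fulfil an intent once the required slots are filled. A minimal sketch, using the event and response shapes of the original Lex/Lambda integration; the intent and slot names here continue the hypothetical pizza example:

```python
# Sketch of a Lambda fulfillment function for the hypothetical pizza
# intent. Lex invokes the handler with the current intent and its
# filled slots, and expects a dialogAction back describing what to do next.
def lambda_handler(event, context):
    """Called by Amazon Lex once all required slots are filled."""
    intent = event["currentIntent"]
    slots = intent["slots"]

    # In a real app this is where the order would actually be placed.
    confirmation = (
        f"OK, a {slots['Topping']} pizza will be delivered "
        f"at {slots['DeliveryTime']}."
    )

    # Tell Lex the conversation is complete and what to say to the user.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": confirmation},
        }
    }
```

Because the heavy lifting (speech recognition, intent parsing, dialogue management) happens inside Lex, the developer's code is reduced to business logic like this.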
Growing demand for conversational apps
“Thousands of machine learning and deep learning experts across Amazon have been developing AI technologies for years, and Amazon Alexa includes some of the most sophisticated and powerful deep learning technologies in existence,” said Raju Gulabani, vice president of databases, analytics and AI at AWS.
“We thought customers might be excited to use the same technology that powers Alexa to build conversational apps, but we’ve been blown away by the customer response to our preview – as organizations in virtually every industry like Capital One, Freshdesk, Hubspot, Liberty Mutual, Ohio Health, and Vonage have mobilized quickly to build on top of Amazon Lex.”
One customer that is already benefitting from Amazon Lex is the American Heart Association (AHA).
“The AHA engages nearly 1 million participants, nationwide, through our premier Heart Walk events to further our mission of saving lives,” said Roger Santone, executive vice president of technology, AHA.
“The world is rapidly changing and we are constantly rethinking traditional approaches to reach people at the pace that they live their everyday lives.
“We used Amazon Lex’s AI technology to streamline the registration process so prospective Heart Walk participants can use their natural voice to easily register through heartwalk.org.
“We are committed to the role that technology plays in enabling consumers, patients, and physicians to achieve better health outcomes and to further the mission of the AHA.”
Customers can launch Amazon Lex using the AWS Management Console, AWS Command Line Interface (CLI), or AWS software development kits (SDKs). Amazon Lex is currently available in the US East (N. Virginia) Region.