From controlling a robot to ordering coffee or even creating your own chatbot, Amazon Web Services' recent media workshop gave attendees a clear picture of how the technology actually plays out in real-world, real-time scenarios.
The session, conducted by Olivier Klein, Solutions Architect, Emerging Technologies, Asia Pacific, AWS, aimed to demystify the world of artificial intelligence and its associated terminology, offer insights into building AI models and data analytics, and present real-world examples with live demonstrations of how AI is being applied.
While the session centred on understanding Big Data and Data Analytics, from coding a single command right up to developing a finished product, the message for the day was clear: the importance of decoupling storage from compute in the AWS cloud.
"With AWS, companies that need to store information don't need to pay for the analytics until they require it. This also means that they can scale out to whatever size their analytics requires, paying for the time spent processing the data rather than for the data stored. This is a powerful tool for any organisation," he explained.
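The pay-only-when-you-analyse model Klein describes is visible in services such as Amazon Athena, which runs SQL directly against data sitting in S3, so compute is billed only while a query executes. A minimal sketch of the pattern with boto3 is below; the database name, table, and result bucket are hypothetical placeholders, not anything from the workshop:

```python
# Hypothetical resource names for illustration only.
DATABASE = "clickstream_db"
RESULTS_BUCKET = "s3://example-athena-results/"

def build_athena_request(sql: str,
                         database: str = DATABASE,
                         output_location: str = RESULTS_BUCKET) -> dict:
    """Build the parameters for Athena's StartQueryExecution API call.

    Storage and compute stay decoupled: the data rests in S3 untouched,
    and analytics compute is paid for only while this query runs.
    """
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_location},
    }

request = build_athena_request(
    "SELECT page, COUNT(*) AS hits FROM clicks GROUP BY page"
)

# With AWS credentials configured, the actual call would be:
# import boto3
# athena = boto3.client("athena")
# response = athena.start_query_execution(**request)
```

The point of the sketch is the shape of the request, not the specific query: nothing about the data in S3 changes or costs extra until `start_query_execution` is invoked.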
AWS, which also counts Grab among its customers, is active in aligning AI with the physical world by building an understanding of how the Internet of Things (IoT) helps create natural interaction channels.
The examples came through real-world demonstrations, including voice interactions with Amazon Lex, Amazon Polly and the Alexa Voice Service, visual recognition with services such as Amazon Rekognition, and the bridging of real-time data sensed from the physical world via AWS IoT.
In one demonstration, Olivier issued a voice command asking Alexa to change the display colours on a prototype weather and temperature model.
"The command is sent to the Cloud environment to be deciphered and sent back to the model to be executed," he said. The round trip, from command to execution, took under two seconds.
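A round trip like the one in the demo is commonly implemented with an AWS IoT device shadow: the voice skill writes a desired state to the cloud, and the device syncs its display to match. The sketch below shows the cloud side of that pattern; the thing name and state key are invented for illustration and are not taken from the workshop:

```python
import json

THING_NAME = "weather-model"  # hypothetical device name

def build_shadow_update(colour: str) -> bytes:
    """Encode a desired-state update for an AWS IoT device shadow.

    The device subscribes to its shadow's delta topic and changes its
    display colour whenever the desired state diverges from the
    reported one.
    """
    document = {"state": {"desired": {"display_colour": colour}}}
    return json.dumps(document).encode("utf-8")

payload = build_shadow_update("blue")

# With AWS credentials configured, the publish would be:
# import boto3
# iot = boto3.client("iot-data")
# iot.update_thing_shadow(thingName=THING_NAME, payload=payload)
```

The shadow document is the key design choice here: the skill and the device never talk directly, so either side can be offline briefly and still converge on the same state.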
Alexa was also paired with facial recognition: Olivier programmed Alexa to know his schedule and remind him of his day's activities after he logged in with a photo of himself. When asked during the setup where he lives, Olivier answered Hong Kong, to which Alexa aptly replied, "That's a nice place!"
Olivier explained that Natural Language Processing takes intonation and human connection into consideration.
“We want the AI to adapt to humans and human behaviour and not the other way around. It’s much easier if we can interact with AI that sounds like how a human would sound and react in a conversation. This is done by training the AI with millions of data entries so the AI will know how to respond,” he added. This was apparent in the smooth interaction he had with Alexa during the entire session.
The media were also given hands-on experience creating their own chatbot to order a cup of coffee through the AWS platform. While some had a little trouble tinkering with the protocols, quite a number had the bot up and running within a few minutes. Though this writer had a few bugs to iron out, it was apparent that the technology has reached a point where users no longer need to worry about the backend when starting up a solution; the platforms are readily available. With a little guidance from the vendor, you are set to create your chatbot, whether to solve mundane everyday issues or to take a crack at world hunger and world peace.
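A coffee-ordering bot of the kind built in the hands-on session is the textbook use case for Amazon Lex: you define an intent with slots such as drink and size, and the Lex runtime fills them turn by turn through conversation. The client-side sketch below shows a single conversation turn; the bot name, alias, and utterance are hypothetical, not the actual workshop bot:

```python
def build_lex_request(user_id: str, text: str) -> dict:
    """Build the parameters for Lex's PostText runtime call.

    Bot name and alias are placeholders; Lex itself resolves the
    intent and prompts the user for any slots still unfilled.
    """
    return {
        "botName": "CoffeeBot",
        "botAlias": "prod",
        "userId": user_id,
        "inputText": text,
    }

request = build_lex_request("demo-user", "I'd like a large latte")

# Against a deployed bot, the conversation turn would be:
# import boto3
# lex = boto3.client("lex-runtime")
# response = lex.post_text(**request)
# response["message"] then carries Lex's reply, e.g. a prompt
# for a missing slot such as "Hot or iced?"
```

Because Lex manages the dialogue state per `userId`, the client stays this thin: each turn is just text in, text out, which is why attendees could get a bot running in minutes.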