I am constantly on the lookout for the most engaging customer interaction methods. In that pursuit, we are probably the first in the world to voice-enable the Inbound Offer APIs, thereby delivering a rich and refreshing man-machine engagement. We first focused our attention on Amazon's Alexa, one of the most popular voice assistants. The architecture involves the following three layers, which communicate with each other in that order:
- Amazon’s Echo, a device that listens to/answers the human
- A backend application hosted on Amazon’s infrastructure that manages the conversational flow
- The Sift Online Server, which serves a real-time contextual offer upon an inbound request
The Echo device has a very decent voice-to-text capability. Out of the box, it integrates well with home automation devices and lets voice commands operate them. As a smart extension, it can connect to Amazon's backend applications (skills) on specific voice commands. This is where layer 2 comes in: it provides constrained but non-fuzzy natural-language capabilities to infer the commands and assemble the responses. This layer can also invoke an external web service within the conversation, which is layer 3, a Sift Inbound service.
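To make the layering concrete, here is a minimal sketch of what the layer-2 skill backend could look like as an AWS Lambda handler in Python. The intent name `GetOfferIntent`, the helper `fetch_offer`, and the offer payload are all hypothetical placeholders, not the real Sift API; in the actual skill, `fetch_offer` would make an HTTPS call to the Sift Online Server (layer 3), but it is stubbed here so the sketch stays self-contained.

```python
import json


def fetch_offer(user_id):
    # Placeholder for the layer-3 call: the real skill would invoke the
    # Sift Inbound service over HTTPS here and parse its response.
    return {"headline": "20 percent off your next recharge"}


def build_alexa_response(speech_text, end_session=True):
    # Minimal Alexa skill response envelope: this JSON is what layer 2
    # returns, and the Echo device (layer 1) speaks the text aloud.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }


def lambda_handler(event, context):
    # Alexa delivers the inferred intent as a JSON event; route only the
    # offer intent and fall back politely on anything else.
    request = event.get("request", {})
    if (request.get("type") == "IntentRequest"
            and request.get("intent", {}).get("name") == "GetOfferIntent"):
        user_id = event.get("session", {}).get("user", {}).get("userId")
        offer = fetch_offer(user_id)
        return build_alexa_response("Here is your offer: " + offer["headline"])
    return build_alexa_response("Sorry, I did not catch that.", end_session=False)
```

The conversational flow itself lives in the skill's interaction model on Amazon's side; the handler above only maps an already-recognized intent to an offer lookup and a spoken reply.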
The demo video is posted here, and it's real fun.