Voice technology is emerging as the interface of the future, and conversational artificial intelligence is driving this transformation in how humans interact with machines.

Amazon is using machine learning and cloud computing to fuel innovation as it builds the next generation of voice-first user experiences, said Al Lindsay, Amazon's vice president of Alexa and Echo Speech.

Lindsay said many of the features now coming to market stem directly from customer feedback, sometimes from surprising use cases that hadn't been considered as recently as three years ago. New Echo products were announced about three months ago, and 5,000 employees now work on Alexa globally.

At CES 2017, headlines said that Alexa stole the show as it showed up in other companies' devices, personalising experiences through machine learning. The longer-term vision is for voice to keep growing, as Lindsay views it as a far more natural user interface.

He felt that visuals still have their place on devices, for example a seven-day weather forecast. For a non-tech-savvy OAP, however, speech can cut down the friction: “I don’t think there’s a learning curve for speech.”

“Voice as a natural interface is great for a lot of things, but there are still some things where visual is better,” he said. “We’re trying to find the sweet spot of a complementary voice-visual system.”