Amazon has announced a number of new Speech Synthesis Markup Language (SSML) features as part of its tools for developers of Alexa-enabled apps, or "Skills" as the company calls them. Developers can now program the pronunciation, intonation, timing, and emotion conveyed by Alexa's synthesized speech.
To achieve these changes in Alexa's voice, five new tags have been added to the SSML supported by Alexa Skills.
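As a sketch of what these tags look like in practice, here is a sample SSML response a Skill might return. The specific tags shown (the whispered effect, bleeped expletives, substitutions, emphasis, and prosody control) are drawn from Amazon's SSML reference for Alexa; treat the exact attribute values as illustrative rather than definitive:

```xml
<speak>
    <!-- Whispered speech via Amazon's proprietary effect tag -->
    <amazon:effect name="whispered">I can whisper this part.</amazon:effect>
    <!-- Expletives are replaced with a bleep -->
    <say-as interpret-as="expletive">this gets bleeped</say-as>.
    <!-- Speak an alias instead of the literal text -->
    The element <sub alias="aluminum">Al</sub> is number thirteen.
    <!-- Stress or de-stress a phrase -->
    <emphasis level="strong">This part really matters.</emphasis>
    <!-- Adjust rate, pitch, and volume -->
    <prosody rate="slow" pitch="low" volume="loud">Slower, deeper, and louder.</prosody>
</speak>
```

A Skill simply includes markup like this in the `outputSpeech` field of its response, with the type set to `SSML` instead of plain text.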
Amazon is heavily invested in its Echo product, not to mention the Alexa brand itself. Because of this, Amazon won't give developers complete freedom to modify the way Alexa speaks in their Skills. After all, many legitimate businesses rely on the platform, and Alexa's voice represents all of them. Developers can therefore only "nudge" Alexa's voice within preset thresholds, so that no one makes Alexa speak unnaturally slowly where it's unnecessary.
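The prosody tag illustrates how those thresholds work: it accepts both named levels and relative values, and extreme settings are clamped. For example, Amazon's SSML reference reportedly caps the minimum speaking rate at around 20% of normal speed, so a sketch of the allowed adjustments might look like this (the exact limits are Amazon's, and may change):

```xml
<speak>
    <!-- Named levels: x-slow, slow, medium, fast, x-fast -->
    <prosody rate="x-slow">About as slow as Alexa is allowed to go.</prosody>
    <!-- Percentages below the platform's floor are clamped, not honored -->
    <prosody rate="20%">Reportedly the minimum percentage rate.</prosody>
    <!-- Pitch and volume take similar named or relative values -->
    <prosody pitch="+10%" volume="soft">A slightly higher, quieter voice.</prosody>
</speak>
```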
These new SSML features are available in US and UK English, as well as German, which covers every language the Echo currently supports. However, it falls to Skill developers to put the new features to use, so it might be a while before Alexa sounds less like a robot.