Amazon announced the AZ1 Neural Edge processor at its fall event. Thanks to this silicon chip, Alexa's responses to users' questions and commands will improve by hundreds of milliseconds per response.
The company developed the chip in collaboration with MediaTek, bringing on-device neural speech recognition to new products. Amazon's newly introduced products, including the new Echo Smart Speaker, Echo Dot, Echo Dot with Clock, Echo Dot Kids Edition, and Echo Show 10 Smart Display, carry the processor. Amazon says these products also include the larger device memory required for this level of processing, and the AZ1 will appear in more Echo devices in the future.
Amazon's existing products that do not carry the AZ1 send both the voice recording and the associated interaction to the cloud, i.e. to a remote server, where the request is processed and the response returns. By comparison, new products with the AZ1 process audio on the device, reducing response time for users. How these devices handle your voice, and how the Alexa app displays your voice history, remains the same.
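To make the difference concrete, here is a minimal sketch of the two pipelines. All stage names and millisecond budgets below are illustrative assumptions, not Amazon's published figures; the point is simply that moving inference on-device removes the network round trip.

```python
# Hypothetical per-stage latency budgets (in milliseconds) for one
# voice interaction. These numbers are invented for illustration.
CLOUD_PIPELINE = {
    "wake_word": 50,        # wake-word detection happens on-device in both designs
    "uplink": 120,          # send the audio to the remote server
    "cloud_inference": 80,  # speech recognition in the cloud
    "downlink": 120,        # response travels back to the device
}

EDGE_PIPELINE = {
    "wake_word": 50,
    "edge_inference": 110,  # neural speech recognition runs locally (e.g. on the AZ1)
}

def total_latency(pipeline):
    """Sum the per-stage budgets for one interaction."""
    return sum(pipeline.values())

saving = total_latency(CLOUD_PIPELINE) - total_latency(EDGE_PIPELINE)
print(saving)  # 210 ms saved with these illustrative numbers
```

With these made-up budgets the edge pipeline saves a couple of hundred milliseconds per response, which matches the order of magnitude Amazon describes.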
Amazon notes that the latency gains will apply to US English first, with more languages supported over time.
This collaboration between Amazon and MediaTek is reminiscent of Microsoft's SQ1 processor collaboration with Qualcomm for the Surface Pro X. In practice, the AZ1 is closer to the Neural Core processor Google used in the Pixel 4. Thanks to that chip, the Pixel understood spoken English and could transcribe audio recordings to text without an internet connection.
Amazon has not said that Echo devices can be used without an internet connection. Even so, performing more operations on the device instead of in the cloud improves the user experience.