Related to #995.
Version
master
Describe the bug
Speech synthesis by Cognitive Services is done through Web Audio API.
Today, Safari blocks any audio playback on an AudioContext instance unless it is triggered by a user gesture.
To prime the specific AudioContext instance, we can play a short/empty utterance when the user clicks the microphone button.
To Reproduce
Steps to reproduce the behavior:
- Navigate to sample 06.c on Safari on iPad
- Click the microphone button and start speaking
- Wait until the bot responds
Expected behavior
The activity returned by the bot should be synthesized. Instead, on Safari on iPad, it cannot be synthesized.
Additional context
Every independent AudioContext instance needs to be primed.
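The priming described above could be sketched roughly as follows. This is an illustrative sketch, not the actual Web Chat implementation; the function name and button wiring are assumptions.

```javascript
// Sketch: prime an AudioContext from inside a user-gesture handler so that
// later programmatic playback (e.g. speech synthesis) is allowed on Safari.
function primeAudioContext(audioContext) {
  // Safari may create the context in the "suspended" state; resume it
  // while we still have the user gesture.
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  }

  // Play a short, silent one-sample buffer. Because this playback is
  // gesture-driven, Safari unlocks the context for future playback.
  const buffer = audioContext.createBuffer(1, 1, audioContext.sampleRate);
  const source = audioContext.createBufferSource();

  source.buffer = buffer;
  source.connect(audioContext.destination);
  source.start(0);
}

// Hypothetical usage: call from the microphone button's click handler.
// microphoneButton.addEventListener('click', () => primeAudioContext(sharedAudioContext));
```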