Speech: Fix Safari by priming AudioContext #2245

@compulim

Description

Related to #995.

Version

master

Describe the bug

Speech synthesis by Cognitive Services is done through the Web Audio API.

Today, Safari blocks any audio playback on an AudioContext instance unless it is triggered by a user gesture.

To prime a specific AudioContext instance, we can play a short or empty utterance when the user clicks the microphone button.
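A minimal sketch of this priming idea, assuming a shared AudioContext and a microphone button with a hypothetical `#microphone-button` id (the function name `primeAudioContext` and the selector are placeholders, not Web Chat's actual implementation):

```javascript
// Hypothetical sketch: unlock a Web Audio AudioContext from a user gesture
// so Safari permits later programmatic playback (e.g. synthesized speech).

function primeAudioContext(audioContext) {
  // Safari may create the context in the "suspended" state; resume it first.
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  }

  // Play a one-sample silent buffer. Because this runs inside a user-gesture
  // handler, Safari marks this AudioContext instance as unlocked.
  const buffer = audioContext.createBuffer(1, 1, 22050);
  const source = audioContext.createBufferSource();

  source.buffer = buffer;
  source.connect(audioContext.destination);
  source.start(0);
}

// In the browser, wire this to the microphone button's click handler
// (selector and sharedAudioContext are illustrative placeholders).
if (typeof document !== 'undefined') {
  document.querySelector('#microphone-button')?.addEventListener('click', () => {
    primeAudioContext(sharedAudioContext); // the same instance used for synthesis
  });
}
```

The key point is that `source.start(0)` happens synchronously inside the gesture handler; deferring it past an `await` can cause Safari to reject the playback.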

To Reproduce

Steps to reproduce the behavior:

  1. Navigate to sample 06.c on Safari on iPad
  2. Click the microphone button and start speaking
  3. Wait until the bot responds

Expected behavior

The activity returned by the bot should be synthesized. Instead, on Safari on iPad, it is not synthesized.

Additional context

Every independent AudioContext instance needs to be primed.

Labels

bug, front-burner
