I’ve been thinking about sound on websites for a while now.
When we talk about using sound on websites, most of us grimace and think of the old days, when blaring background music played when the website loaded.
Today this isn’t and needn’t be a thing. We can get clever with sound. We have the Web Audio API now and it gives us a great deal of control over how we design sound to be used within our web applications.
In this article, we’ll experiment with just one simple example: a form.
What if, when you were filling out a form, it gave you auditory feedback as well as visual feedback? I can see your grimacing faces! But give me a moment.
We already have a lot of auditory feedback within the digital products we use. The keyboard on a phone produces a tapping sound. Even if you have “message received” sounds turned off, you’re more than likely able to hear your phone vibrate. My MacBook makes a sound when I restart it and so do games consoles. Auditory feedback is everywhere and pretty well integrated, to the point that we don’t really think about it. When was the last time you grimaced at the microwave when it pinged? I bet you’re glad you didn’t have to watch it to know when it was done.
As I’m writing this article, my computer just pinged. One of my open tabs sent me a useful notification. My point being: sound can be helpful. We may not all need to know with our ears whether we’ve filled out a form incorrectly, but there may be plenty of people out there who do find it beneficial.
So I’m going to try it!
Why now? We have the capabilities at our fingertips now. I already mentioned the Web Audio API; we can use it to create, load, and play sounds. Add that to HTML’s form validation capabilities and we should be all set to go.
Let’s start with a small form.
Here’s a simple sign up form.
See the Pen Simple Form by Chris Coyier (@chriscoyier) on CodePen.
We can wire up a form like this with really robust validation.
With everything we learned from Chris Ferdinandi’s guide to form validation, here’s a version of that form with validation:
See the Pen Simple Form with Validation by Chris Coyier (@chriscoyier) on CodePen.
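The browser side of that validation leans on the built-in Constraint Validation API. Here’s a minimal sketch of how it might be used (the function names and return shape here are my own, not taken from the demo):

```javascript
// A sketch of built-in constraint validation (function names are
// illustrative, not from the demo above).
function validateField(input) {
  // checkValidity() runs the browser's built-in constraint checks:
  // required, type="email", pattern, minlength, and so on.
  if (input.checkValidity()) return null;
  // validationMessage is the browser's human-readable explanation.
  return input.validationMessage;
}

function validateForm(form) {
  // Collect an error message for every invalid field in the form.
  const errors = {};
  for (const input of form.querySelectorAll("input, select, textarea")) {
    const message = validateField(input);
    if (message) errors[input.name] = message;
  }
  return errors; // an empty object means the form is good to submit
}
```

With something shaped like this, the success/fail decision for each field is one function call away, which is all we need to hook sounds into later.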
Getting The Sounds Ready
We don’t want awful, obtrusive sounds, but we do want sounds that represent success and failure. One simple way to do this would be to have higher, brighter sounds that sweep upward for success and lower, more distorted sounds that sweep downward for failure. This still gives us very broad options to choose from, but it’s a general sound design pattern.
With the Web Audio API, we can create sounds right in the browser. Here are examples of little functions that play positive and negative sounds:
See the Pen Created Sounds by Chris Coyier (@chriscoyier) on CodePen.
Those are examples of creating sound with the oscillator, which is kinda cool because it doesn’t require any web requests. You’re literally coding the sounds. It’s a bit like the SVG of the sound world. It can be fun, but it can be a lot of work and a lot of code.
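Here’s roughly what that oscillator approach looks like. This is a sketch in the same spirit as the demo above; the exact frequencies, waveform types, and timings are my own guesses, not the demo’s values:

```javascript
// A sketch of oscillator-based success/fail sounds. Frequencies,
// waveforms, and durations here are assumptions, not the demo's.
let sharedCtx;

function playTone({ from, to, type, duration = 0.15 }, ctx) {
  // Reuse one AudioContext; browsers limit how many you can create.
  ctx = ctx || (sharedCtx = sharedCtx || new AudioContext());
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.type = type; // "sine" reads as bright, "sawtooth" as harsher
  osc.frequency.setValueAtTime(from, ctx.currentTime);
  // Sweep the pitch: up reads as success, down as failure.
  osc.frequency.exponentialRampToValueAtTime(to, ctx.currentTime + duration);
  gain.gain.setValueAtTime(0.2, ctx.currentTime);
  // Fade out to avoid an audible click when the oscillator stops.
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + duration);
  osc.connect(gain).connect(ctx.destination);
  osc.start(ctx.currentTime);
  osc.stop(ctx.currentTime + duration);
}

const positiveSound = (ctx) => playTone({ from: 440, to: 880, type: "sine" }, ctx);
const negativeSound = (ctx) => playTone({ from: 220, to: 110, type: "sawtooth" }, ctx);
```

The optional `ctx` parameter is just there so the functions are easy to test; in a page you’d call `positiveSound()` and let the shared context be created on first use.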
While I was playing around with this idea, Facebook released their SoundKit, which is:
To help designers explore how sound can impact their designs, Facebook Design created a collection of interaction sounds for prototypes.

Here’s an example of selecting a few sounds from that and playing them:
See the Pen Playing Sound Files by Chris Coyier (@chriscoyier) on CodePen.
Another way would be to fetch the sound file and use an AudioBufferSourceNode. As we’re using small files, there isn’t much overhead here, but the demo above does fetch the file over the network every time it is played. If we put the sound in a buffer, we wouldn’t have to do that.
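Here’s a sketch of that buffering idea. The file names are placeholders, not the actual SoundKit files; the cache stores the decoded-buffer promise, so each URL is fetched and decoded at most once:

```javascript
// A sketch of caching decoded sounds so each file is fetched once.
// URLs are hypothetical placeholders, not the SoundKit file names.
const bufferCache = new Map();

function loadSound(ctx, url) {
  if (!bufferCache.has(url)) {
    // Fetch and decode only on the first request; later calls reuse
    // the same promise (and therefore the same decoded AudioBuffer).
    const promise = fetch(url)
      .then((res) => res.arrayBuffer())
      .then((data) => ctx.decodeAudioData(data));
    bufferCache.set(url, promise);
  }
  return bufferCache.get(url);
}

async function playSound(ctx, url) {
  const buffer = await loadSound(ctx, url);
  // AudioBufferSourceNodes are one-shot: create a new one per play,
  // but keep reusing the cached buffer behind it.
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
}
```

Caching the promise (rather than the resolved buffer) also means two rapid plays of the same sound won’t trigger two parallel fetches.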
Figuring Out When to Play the Sounds
This experiment of adding sounds to a form brings up a lot of questions around the UX of using sound within an interface.
So far, we have two sounds, a positive/success sound and a negative/fail sound. It makes sense that we’d play these sounds to alert the user of these scenarios. But when exactly?
Here’s some food for thought:
- Do we play sound for everyone, or is it an opt-in scenario? opt-out? Are there APIs or media queries we can use to inform the default?
- Do we play success and fail sounds upon form submission or is it at the individual input level? Or maybe even groups/fieldsets/pages?
- If we’re playing sounds for each input, when do we do that? On blur?
- Do we play sounds on every blur? Is there different logic for success and fail sounds, like only one fail sound until it’s fixed?
There aren’t any firmly established best practices for this stuff. The best we can do is make tasteful choices and do user research. Which is to say, the examples in this post are ideas, not gospel.
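As one of those ideas, not gospel: here’s a sketch of the “only one fail sound until it’s fixed” option, assuming `positive()` and `negative()` callbacks like the sounds created earlier:

```javascript
// A sketch of blur-driven feedback: play a sound on every blur, but
// only one fail sound per mistake until the field is fixed. The
// sounds object is assumed to hold the positive/negative callbacks.
function attachAudioFeedback(input, sounds) {
  let flaggedInvalid = false;
  input.addEventListener("blur", () => {
    if (input.checkValidity()) {
      sounds.positive();
      flaggedInvalid = false; // reset so a future mistake beeps again
    } else if (!flaggedInvalid) {
      sounds.negative(); // don't nag on every blur while still broken
      flaggedInvalid = true;
    }
  });
}
```

You could just as reasonably invert this (silence on success, sound only on failure), which is one of the options debated in the comments below.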
Demo
Here’s one!
And here’s a video, with sound, of it working:
Voice
Greg Genovese has an article all about form validation and screen readers. “Readers” being relevant here, as that’s all about audio! There is a lot to be done with ARIA roles and moving focus and such, so that errors are clear and it’s clear how to fix them.
The Web Audio API could play a role here as well, or more likely, the Web Speech API. Audio feedback for form validation need not be limited to screen reader software. It certainly would be interesting to experiment with reading out actual error messages, perhaps in conjunction with other sounds like we’ve experimented with here.
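A minimal sketch of that idea with the Web Speech API (this is speculation on my part, not something the demo does):

```javascript
// A sketch of reading a validation message aloud with the Web
// Speech API. The rate value is an arbitrary choice of mine.
function speakError(message) {
  if (!("speechSynthesis" in window)) return; // feature-detect first
  const utterance = new SpeechSynthesisUtterance(message);
  utterance.rate = 1.1; // slightly brisk so it doesn't drag
  window.speechSynthesis.cancel(); // drop any stale, queued errors
  window.speechSynthesis.speak(utterance);
}
```

You could feed this the browser’s own `validationMessage` string, or pair it with one of the short earcons from earlier so the sound cues the speech.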
Thoughts
All of this is what I call Sound Design in Web Design. It’s not merely playing music and sounds; it’s giving the soundscape thought, and undertaking some planning and design like you would with any other aspect of what you design and build.
There is loads more to be said on this topic and absolutely more ways in which you can use sound in your designs. Let’s talk about it!
I could go on, but this is starting to sound more negative than I’d like. I’m not trying to be abrasive or rude. It is a useful article on using sound. I don’t agree on the use case, and I’m not sold on the idea that our world is full of beeping, nor that it should be.
Great post! I agree that we have needless hangups about audio on the web. The interesting thing is that we don’t have these same hangups about audio feedback on desktop or mobile apps. I gave a preso about this very kind of audio feedback 2 years ago at Fluent (https://www.safaribooksonline.com/library/view/fluent-conference-2015/9781491927786/video213375.html) but, despite the fact that web audio has come a long way, not much has changed – we’re still mostly afraid to use it.
Thanks, great post.
Agree, this can certainly be implemented in a nonintrusive way.
OSs do it, why not UAs? ;-)
Being visually impaired, I always found bus stop signs so much better when they combine visual and auditory cues, compared to a single-cue indicator.
Gotta admit, I was incredibly skeptical when I saw the title. I thought that with accessibility, this was just going to be a nightmare. After checking it with VO, it wasn’t as intrusive as I thought it would be. I think with wide-spread adoption and consistency, this could be an added bonus. Otherwise, I think that we would have to educate the user that sound1 is good and sound2 is bad. Interesting article.
I think it would be less annoying with negative sounds only. I don’t agree that every sound on the web is irritating, but frequent sounds certainly can be, and since there’s a perception that they are annoying, tying them to leaving an input can feel like punishment. So maybe they should only be tied to inputs that you messed up and ought to be punished for! Either way, that positive ding is incessant and I’d turn my sound off if I had to fill out a form longer than three fields with it on.
Note on microwave: I currently don’t own one because people don’t tend to note how loud they beep in reviews or comparisons, so I find shopping for them to be difficult. My previous one sounded like a truck backing up and I can’t deal with that again. I’d much rather have something that silently finished cooking—it already hums anyway.
Totally tangential to the article, several of the microwaves we’ve owned have an option (though sometimes buried) to disable all beeps. When our kids were infants/toddlers that would wake at any strange sound, we disabled the blaring beeps. So you don’t have to forego a microwave, just get one that lets you turn off the sound.
(though as an aside to my already-an-aside, on our current one, it disables all beeps, including the count-down timer that is distinct from the cooking timer, which makes it about the most useless count-down timer in the world unless you stare at it for the entire countdown. Though for that, I could switch to using the count-down timer on my phone and enjoy microwave silence.)
I too was incredibly skeptical – but having read the article and viewed the demo I can certainly see the merits of considering sound. Ultimately though – I would still never implement this on any project. There are simply too many considerations:
- Whether the user is expecting audio and, assuming not, what their response would be.
- Whether the audio creates the expected result (i.e. the user hears a noise and knows something is wrong) versus an undesired result (i.e. the user associates the noise with phone notifications and checks to see if they got an SMS, or thinks their smoke alarm battery is dying, etc.).
Arguments in favor of this based on the prevalence of audio in Operating System notifications seem to be overlooking the fact that these same systems provide a myriad of options for controlling said audio – for good reason: engaging another sense can have mixed results.
Some users will find it a welcome means by which to be alerted. Others will be distracted or even distressed; the sound being unwelcome or in certain settings even inappropriate.
The greatest strength of this approach (engaging another sense to help assist the user) is equally its greatest flaw.
I think one of the biggest distinguishing factors is how intrusive the sounds are. A subtle, volume-so-low-you-almost-miss-it “you got it” chirp or “sorry, mate” sad-trombone adds the same sort of niceness that a subtle fade-in/out or motion animation would provide. But, like with animation, it’s really easy to go overboard. Just as I don’t want my animations to interfere with my work or focus, sounds shouldn’t either.
Thanks everyone for the conversation!
The most interesting point for consideration, for myself, is in fact ‘is audio on interfaces intrusive, or enhancing and helpful?’ And yes, I think this comes down to what most people have mentioned in one way or another: the type of sound, and when we use it. Absolutely @goose, no one likes the repetitive reversing-truck beep, but with the scope of functionality we have with the Web Audio API, there’s no reason we have to make intrusive sounds :)
Also we can consider the bigger UX picture – like mute toggles. Yes @aminimalanimal only producing a small error noise might be more appropriate and less annoying :)
And @Brian Rinaldi I’m checking out that talk – as it was pretty much the same take I took for my last talk at CSS Day. We somewhat expect and accept sounds at the OS level, with software and apps, but as soon as we turn to a browser it’s culturally unacceptable.
Maybe it’s back to the mute toggle again…
Looks promising to me. When the E-commerce world tries and tests this kind of feedback and sees what it does for the conversion, it won’t be long before we get some sort of best practices.
We avoid sound because it’s super annoying most of the time. Raise your hand if you love autoplaying videos.
But, sound could be a great tool for improving accessibility. Vibration too – Apple uses haptic feedback to indicate success and error states.
To avoid the annoyance factor, I think putting sound and vibration with a user media query, like the reduced motion query, could be a big win for accessibility.
Earcons (the name I’ve known for these audio cues) generate lots of opinions. I have my own set of opinions, some informed by research and testing, and some from just being a user. I’ll spare everyone those.
I am linking a video I made of the demo as viewed with NVDA 2017.3 and Firefox 55, since I suspect few readers have that set-up handy: http://adrianroselli.com/wp-content/uploads/2017/09/CSS-tricks_form-validation-web-audio_NVDA-FF.mp4
I am not saying whether this is good or bad, as that depends on many factors (users, context, audio choice, technical implementation, etc). At the very least, I figured hearing it with a screen reader (at a very slow speaking rate) might be good insight to how this might be experienced by SR users.
I did find one issue — when I Shift + Tab from the submit button. Leaving the submit button triggers the success sound. If I am a screen reader user, I do not know I messed up the last field until I get to the submit button, so when I Shift + Tab that success sound can be confusing.
Unrelated to the audio piece, some (unrequested) suggestions from years of doing usability studies: 1. avoid marking a field as an error when a user has done nothing more than put focus on it; 2. the error messages should identify which field is affected (I know this is just a demo); 3. I love that you are using aria-describedby to associate those error messages with the fields. Thank you for adding that.