Sound choreography

Sound can be choreographed to express hierarchy and relationships, and to optimize your product experience.


Hierarchy

When choreographing the sounds that play in your product, each sound should reflect its level of importance in the UI’s hierarchy. A sound’s prominence and personality should be appropriate to its level, and sounds of the same type (such as hero sounds) share the same level of hierarchy.

High in the hierarchy

Sounds that are higher in the hierarchy are important representations of a brand or product.

Peer sounds

In a user flow, sounds that follow or precede one another should have related attributes (like timbre, melody, or envelope).

Priority | Sounds | Type of sounds
1 | Brand sounds | Mnemonic
2 | Hero sounds | Celebration moments
3 | Alerts and notifications | Ringtones and alarms, notifications
4 | Primary UX sounds | Main UX sounds
5 | Secondary UX sounds | Functional sounds

Sound relationships

Sounds that share attributes are unified as a group.

Key signature


Key signatures are a defining characteristic of tonal sounds. They help build harmonic relationships between interactions.

Sounds that are played in close proximity to one another should use the same or complementary key signatures, unless a specific use case requires otherwise.

Do: Earcons in a product should use complementary key signatures to create a relationship between them.
Don't: Create earcons with unrelated key signatures, as this doesn't express a unified product sound experience.

Expressing sound relationships


Show how states are related to one another by using motifs to express that connection. For example, the sound for an “on” state can relate to the sound for an “off” state.

Do: Each sound plays in a direction, and that direction reverses depending on whether the switch is toggled on or off. This indicates that the two states are related while performing opposite functions.
Don't: Express opposite states with notes that have an ambiguous relationship.

Repeated sounds


Interaction sounds that occur regularly – such as sounds associated with typing, swiping, scrolling, or navigation – can benefit from small changes. These interactions should include minor variations in sound timbre to mimic the variance of sounds in real-world experiences.

Do: When swiped, each item triggers a sound effect that includes minor variations in sound characteristics.
Do: Each tap on the same UI element triggers a slightly different sound that contains subtle variations.
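
One way to add these small variations in practice is to randomize playback rate and volume slightly on each trigger. The sketch below assumes a hypothetical R.raw.key_tap resource and uses Android's SoundPool, whose play() rate parameter shifts pitch and speed together; the ±5% and ±10% ranges are illustrative, not prescribed values.

```kotlin
import android.content.Context
import android.media.AudioAttributes
import android.media.SoundPool
import kotlin.random.Random

// Plays a hypothetical R.raw.key_tap sound; each playback nudges the rate
// (pitch/speed) and volume slightly so repeated taps don't sound identical.
class TapSoundPlayer(context: Context) {
    private val soundPool = SoundPool.Builder()
        .setMaxStreams(4)
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
                .build()
        )
        .build()

    // Loading is asynchronous; in production, wait for OnLoadCompleteListener.
    private val tapSoundId = soundPool.load(context, R.raw.key_tap, 1)

    fun playTap() {
        // Small random variation: rate within ±5%, volume within ±10%.
        val rate = 1.0f + Random.nextFloat() * 0.1f - 0.05f
        val volume = 0.9f + Random.nextFloat() * 0.1f
        soundPool.play(tapSoundId, volume, volume, /* priority = */ 1, /* loop = */ 0, rate)
    }

    fun release() = soundPool.release()
}
```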

Mixing sound

Mixing is the art of combining different sound sources into one audio stream. It involves adjusting each sound’s volume, frequency, spatial positioning, and more to create a rich, cohesive sound.

Sound sources

Different sound sources can be mixed to vary the emotion, intent, or character of the final sound. You can also adjust a sound’s focal point.

1. This mix feels more open, making high frequencies more prominent.
2. This mix feels more closed, putting the focus on the trill and reducing high-frequency content.
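
As a rough illustration of combining sources into one stream, the sketch below sums two 16-bit PCM buffers with per-source gains and clamps the result; it assumes both buffers share the same sample rate and channel layout. Raising one gain while lowering the other shifts the focal point of the mix toward that source, as in the open and closed mixes described above.

```kotlin
// Mix two 16-bit PCM buffers into one stream by summing samples with
// per-source gains, then clamping to the 16-bit range to avoid wraparound.
// Assumes both sources share the same sample rate and channel layout.
fun mixPcm(sourceA: ShortArray, sourceB: ShortArray, gainA: Float = 1.0f, gainB: Float = 1.0f): ShortArray {
    val length = maxOf(sourceA.size, sourceB.size)
    return ShortArray(length) { i ->
        val a = if (i < sourceA.size) sourceA[i] * gainA else 0f
        val b = if (i < sourceB.size) sourceB[i] * gainB else 0f
        (a + b).toInt().coerceIn(Short.MIN_VALUE.toInt(), Short.MAX_VALUE.toInt()).toShort()
    }
}
```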

Sound priority

UX sounds should be balanced to accommodate other sounds in the UI and the physical environment. Treatments that isolate, duck, mix, and balance some sounds at specific moments can help focus user attention properly, so that the intent behind a sound comes across.

When a notification sound occurs while music is playing, the system temporarily gives the notification prominence. The sound priority moves away from the music until the notification is swiped away.
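
On Android, this kind of temporary prominence is typically requested through audio focus. The sketch below is one possible approach, not the guidelines' prescribed implementation: it asks for transient focus that allows other audio to duck while a notification sound plays (the playSound callback is a hypothetical placeholder), and it requires API 26+ for AudioFocusRequest.

```kotlin
import android.media.AudioAttributes
import android.media.AudioFocusRequest
import android.media.AudioManager

// Request transient audio focus for a notification sound, asking the system
// to duck (lower) other audio such as music while the sound plays.
fun playNotificationWithDucking(audioManager: AudioManager, playSound: () -> Unit) {
    val attributes = AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_NOTIFICATION)
        .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
        .build()

    val focusRequest = AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK)
        .setAudioAttributes(attributes)
        .build()

    if (audioManager.requestAudioFocus(focusRequest) == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
        playSound()
        // Return focus once the sound finishes so the music's volume is restored.
        audioManager.abandonAudioFocusRequest(focusRequest)
    }
}
```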

Mixing factors

Sound mixing is nuanced and depends on the overall experience being designed. Consider these factors in determining how sounds should interact:

Other device sounds

Multiple sounds can occur at the same time, both from user-generated activities and system sounds. For example, sounds from incoming notifications may occur while a user listens to music.

Sound optimization


Sound for the user’s environment

To optimize a sound, the sound designer can audition it on devices and in real-world environments. By listening to sounds in real-world conditions (accounting for the software, hardware, environmental noise, acoustics, and other factors of an environment), sound can be adjusted to play well across a wider range of conditions.

Changes can also be made to a sound’s attributes (such as timbre) through processes like composition rewrites, re-orchestration, melodic variations, and equalization.

Equalization

Equalization (EQ) is an effect that enhances or reduces specific frequencies. EQ should be adjusted for the range of devices on which playback is designed to occur.

1. This sound is equalized for full fidelity playback.
2. This sound is equalized to reduce low-end frequencies and amplify high frequencies.
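
If equalization is applied at runtime rather than baked into the asset, Android's audiofx Equalizer can approximate the second treatment. The sketch below is an assumption-laden example, not a prescribed configuration: the 250 Hz and 4 kHz cutoffs and the level choices are illustrative, and band counts and ranges vary by device.

```kotlin
import android.media.MediaPlayer
import android.media.audiofx.Equalizer

// Attach an Equalizer to a MediaPlayer session and reduce the low bands while
// lifting the high bands, similar to tailoring a sound for a small speaker.
// Band count and frequency ranges vary by device, so query them at runtime.
fun applySmallSpeakerEq(player: MediaPlayer): Equalizer {
    val equalizer = Equalizer(0, player.audioSessionId).apply { enabled = true }
    val minLevel = equalizer.bandLevelRange[0].toInt()  // in millibels
    val maxLevel = equalizer.bandLevelRange[1].toInt()

    for (band in 0 until equalizer.numberOfBands.toInt()) {
        val centerFreqHz = equalizer.getCenterFreq(band.toShort()) / 1000  // milliHz -> Hz
        val level = when {
            centerFreqHz < 250 -> minLevel       // cut low end a small speaker can't reproduce
            centerFreqHz > 4000 -> maxLevel / 2  // gently lift the highs
            else -> 0                            // leave the mids flat
        }
        equalizer.setBandLevel(band.toShort(), level.toShort())
    }
    return equalizer
}
```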

Loudness

Sounds should play at a loudness level consistent with their position in the sound hierarchy (determined by a sound’s priority level and category). For example, sound from a ringtone alert can be louder than sound from UI feedback, as it has higher priority in the moment it occurs.

Measuring loudness

When measuring loudness through specific hardware, take into account “perceived loudness” (measured in A-weighted decibels, or dB(A)) rather than relying solely on the direct peak meter level.
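
To illustrate why a peak meter alone can be misleading, the rough sketch below compares a buffer's peak level with its RMS level, both in dBFS. It deliberately omits A-weighting, which is frequency dependent and normally applied by a loudness meter or measurement tool, so treat it only as a contrast between peak and averaged level.

```kotlin
import kotlin.math.abs
import kotlin.math.log10
import kotlin.math.sqrt

// Rough illustration only: compares a 16-bit buffer's peak level with its RMS
// level, both in dBFS. Perceived loudness additionally applies frequency
// weighting (such as A-weighting), which a plain peak meter doesn't capture.
fun peakAndRmsDbfs(pcm: ShortArray): Pair<Double, Double> {
    val full = Short.MAX_VALUE.toDouble()
    var peak = 0.0
    var sumSquares = 0.0
    for (sample in pcm) {
        val s = abs(sample.toDouble()) / full
        if (s > peak) peak = s
        sumSquares += s * s
    }
    val rms = sqrt(sumSquares / pcm.size)
    return 20 * log10(peak) to 20 * log10(rms)
}
```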

Adjusting volume

Volume controls should reflect how people hear sound, rather than what’s mechanically possible. Volume should increase logarithmically (rather than linearly) as the control is raised.
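
A common way to make a volume control feel perceptually even is to map the linear slider position onto a decibel scale and convert that to amplitude. The sketch below assumes a hypothetical 60 dB control range; the exact range is a design choice, not a prescribed value.

```kotlin
import kotlin.math.pow

// Map a linear slider position (0.0–1.0) to an amplitude gain so that equal
// slider steps produce roughly equal perceived loudness steps.
// Assumes a hypothetical 60 dB control range; 0.0 is treated as mute.
fun sliderToGain(position: Float, rangeDb: Float = 60f): Float {
    if (position <= 0f) return 0f
    val db = (position - 1f) * rangeDb  // 1.0 -> 0 dB, 0.0 -> -rangeDb dB
    return 10f.pow(db / 20f)            // convert decibels to linear amplitude
}
```

At position 0.5 this yields about −30 dB of gain, whereas a linear mapping would give only about −6 dB, which is why linear sliders seem to do most of their audible work near the bottom of their range.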

For more information on loudness, see the Actions on Google Audio Loudness guidelines.


File formats

Memory optimization


The final audio file playback may change depending on a product’s hardware and software limitations.

To reduce file size (with minimal degradations to quality):

  1. Apply lossy compression (such as MP3 or Ogg), stopping before artifacts become audible
  2. Lower the bit depth and sample rate, stopping before artifacts become audible
  3. Trim any unnecessary silence at the beginning or end of the file
Don'tDon’t degrade or compress a sound such that audible artifacts are noticeable (such as noise, distortion, or stray frequencies that can arise from file compression). It’s better to design a new sound than have audible artifacts.

1. Uncompressed audio
2. The lowered bit-depth and sample rate have introduced a noticeable degradation in quality.

File format recommendations


The final format of the audio depends on system-level implementation and restrictions. Try to choose the best (most lossless) format your system will allow, especially for key sounds in your user experience.

For more information on supported file formats, visit the Android Developers supported media documentation.
