How Sound Works: The Science of What We Hear
From the soothing sounds of ocean waves to the complex harmonies of a symphony orchestra, sound is a fundamental part of how we experience the world. But what exactly is sound? How does it travel through air? And how do our ears convert these invisible waves into the rich audio experiences we perceive? In this interactive guide, we’ll explore the fascinating physics and biology of sound.
What Is Sound?
At its most basic level, sound is a form of energy that travels through matter as a pressure wave. When an object vibrates, it creates disturbances in the surrounding medium (usually air), causing molecules to compress and expand in a wave-like pattern. These pressure waves radiate outward from the source, eventually reaching our ears where they’re transformed into the sensation we know as sound.
Unlike water waves, which move up and down, sound waves are longitudinal waves — the air molecules move back and forth in the same direction that the wave travels. This creates areas of higher pressure (compressions) and lower pressure (rarefactions) that propagate through the air.
The Properties of Sound Waves
Sound waves have several key properties that determine what we hear:
Frequency: Determining Pitch
Frequency refers to the number of complete wave cycles that occur in one second, measured in Hertz (Hz). Our ears can typically detect frequencies from about 20 Hz to 20,000 Hz, though this range diminishes with age.
- Low frequencies (20-250 Hz) are perceived as bass or low pitches
- Mid frequencies (250-4,000 Hz) cover most speech and common sounds
- High frequencies (4,000-20,000 Hz) give sounds their clarity and brilliance
Interactive
Try adjusting the frequency slider below to hear how changes in frequency affect the pitch of a sound:
When we talk about musical notes, we’re really talking about specific frequencies. For example, the A above middle C (A4) on a piano is standardized at 440 Hz. Each octave represents a doubling or halving of frequency — A5 is 880 Hz, and A3 is 220 Hz.
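If you want to see the octave relationship in numbers, here’s a minimal Python sketch. It assumes nothing beyond the facts above: A4 = 440 Hz, a doubling per octave, and a nominal 20 Hz to 20,000 Hz audible range (the function name and the range of octaves printed are just illustrative choices):

```python
# A minimal sketch of the octave-doubling relationship, using A4 = 440 Hz as the reference.

A4 = 440.0  # standard tuning reference, in Hz

def a_in_octave(octave: int) -> float:
    """Frequency of the note A in a given octave; each octave doubles or halves the frequency."""
    return A4 * 2 ** (octave - 4)

for octave in range(0, 9):
    f = a_in_octave(octave)
    audible = 20 <= f <= 20_000           # nominal range of human hearing
    print(f"A{octave}: {f:8.2f} Hz  (audible: {audible})")
```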
Amplitude: Controlling Volume
Amplitude refers to the magnitude of the pressure change in a sound wave. Larger amplitudes result in louder sounds, while smaller amplitudes create quieter sounds. We typically measure sound amplitude in decibels (dB), a logarithmic scale that better matches how our ears perceive loudness.
Caution
Prolonged exposure to sounds above 85 dB can cause permanent hearing damage. Always use hearing protection in loud environments!
Here are some common sounds and their approximate decibel levels:
- 0 dB: Threshold of hearing
- 30 dB: Whisper
- 60 dB: Normal conversation
- 90 dB: Lawn mower
- 120 dB: Rock concert
- 140 dB: Threshold of pain
Because the decibel scale is logarithmic, a 10 dB increase corresponds to ten times the sound intensity, which our ears perceive as roughly a doubling of loudness. So a 70 dB sound seems about twice as loud as a 60 dB sound.
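Here’s a small Python sketch of those relationships. It assumes the common 20 µPa reference pressure for dB SPL and the 10-dB-per-doubling rule of thumb described above; the example pressures are illustrative, not measurements:

```python
import math

P_REF = 20e-6  # assumed reference pressure for dB SPL: 20 micropascals (threshold of hearing)

def pressure_to_db(pressure_pa: float) -> float:
    """Convert a sound pressure in pascals to decibels (dB SPL)."""
    return 20 * math.log10(pressure_pa / P_REF)

def perceived_loudness_ratio(db_increase: float) -> float:
    """Rule of thumb: every 10 dB increase sounds roughly twice as loud."""
    return 2 ** (db_increase / 10)

print(f"{pressure_to_db(20e-6):.0f} dB")     # ~0 dB: threshold of hearing
print(f"{pressure_to_db(0.02):.0f} dB")      # ~60 dB: roughly normal conversation
print(perceived_loudness_ratio(70 - 60))     # ~2.0: 70 dB seems about twice as loud as 60 dB
```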
Wavelength: The Physical Size of Sound
Wavelength is the physical distance between successive peaks (or compressions) in a sound wave. It’s inversely related to frequency — higher frequencies have shorter wavelengths, while lower frequencies have longer wavelengths.
The relationship between wavelength (λ), frequency (f), and the speed of sound (v) is given by:
λ = v / f
At room temperature, sound travels through air at approximately 343 meters per second (1,125 feet per second). This means a 20 Hz bass note has a wavelength of about 17 meters (56 feet), while a 20,000 Hz high-pitched sound has a wavelength of just 1.7 centimeters (0.67 inches).
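You can reproduce those numbers directly from λ = v / f. Here’s a short Python sketch (the 343 m/s value assumes air at about 20°C, as above):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C

def wavelength_m(frequency_hz: float, speed: float = SPEED_OF_SOUND) -> float:
    """Wavelength in metres: lambda = v / f."""
    return speed / frequency_hz

for f in (20, 440, 20_000):
    print(f"{f:>6} Hz -> {wavelength_m(f):.4f} m")
# 20 Hz -> ~17 m, 440 Hz -> ~0.78 m, 20,000 Hz -> ~0.017 m (about 1.7 cm)
```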
These physical dimensions of sound waves explain why bass frequencies are more difficult to block or absorb and why they can travel around obstacles more easily than high frequencies.
Harmonics and Timbre: Why Instruments Sound Different
If you play a middle C on a piano, a guitar, and a flute, you’ll hear the same note, but each instrument sounds distinctly different. This distinctive quality of a sound is called timbre (pronounced “tam-ber”), and it’s largely determined by the harmonic content of the sound.
When an instrument produces a note, it doesn’t just create a single, pure frequency (called the fundamental). It also generates a series of higher frequencies called harmonics or overtones, which are integer multiples of the fundamental frequency.
Different instruments produce different patterns of harmonics, giving each its unique voice:
- A flute produces relatively few harmonics, creating a pure, simple tone
- A violin’s bowed strings produce a strong, full series of harmonics, giving it a rich, complex sound
- A clarinet emphasizes odd-numbered harmonics, creating its distinctive hollow sound
- A piano’s hammer action creates a broad spectrum of harmonics that evolve over time
This harmonic fingerprint is why we can tell instruments apart even when they’re playing the same note. It’s also why synthesizers need to recreate these harmonic structures to simulate real instruments convincingly.
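To hear (or at least compute) how harmonic content shapes timbre, here’s a Python sketch that builds tones by summing a fundamental and its integer-multiple harmonics. The harmonic weights are made-up illustrative recipes, not measured instrument spectra:

```python
import numpy as np

SAMPLE_RATE = 44_100                                    # samples per second
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)    # one second of time values

def tone(fundamental_hz: float, harmonic_weights: list[float]) -> np.ndarray:
    """Sum the fundamental and its integer-multiple harmonics, then normalise to [-1, 1]."""
    wave = np.zeros_like(t)
    for n, weight in enumerate(harmonic_weights, start=1):
        wave += weight * np.sin(2 * np.pi * n * fundamental_hz * t)
    return wave / np.max(np.abs(wave))

middle_c = 261.6  # Hz
pure_tone     = tone(middle_c, [1.0])                       # a single frequency
flute_like    = tone(middle_c, [1.0, 0.3, 0.1])             # few, weak harmonics: pure and simple
clarinet_like = tone(middle_c, [1.0, 0.0, 0.6, 0.0, 0.4])   # odd harmonics emphasised: hollow sound
```

Writing these arrays to a WAV file or playing them through any audio library makes the point audible: all three tones share the same fundamental, yet they sound unmistakably different.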
Resonance: When Objects Amplify Sound
Resonance occurs when an object naturally vibrates at the same frequency as an incoming sound wave, causing it to vibrate with greater amplitude. This phenomenon is central to how many musical instruments work and explains various acoustic phenomena.
The body of an acoustic guitar, for example, is designed to resonate with the frequencies produced by the strings, amplifying the sound. Similarly, the tube of a flute or the body of a violin creates resonant chambers that enhance certain frequencies.
Resonance can sometimes have dramatic effects. Opera singers can shatter glass by singing a note that matches the glass’s natural resonant frequency. More practically, resonance explains why certain rooms might amplify some frequencies while dampening others, creating acoustic “sweet spots” or problematic areas.
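A very simplified way to see resonance numerically is to model the vibrating object as a mass-spring-damper and compute its steady-state response at different driving frequencies. The parameters below are purely hypothetical, chosen only to give a natural frequency near 500 Hz, not properties of a real glass:

```python
import math

def driven_amplitude(f_drive_hz: float, m: float, k: float, c: float, force: float = 1.0) -> float:
    """Steady-state amplitude of a mass-spring-damper driven sinusoidally at f_drive_hz."""
    omega = 2 * math.pi * f_drive_hz
    return force / math.sqrt((k - m * omega ** 2) ** 2 + (c * omega) ** 2)

# Hypothetical parameters: mass, stiffness, and light damping giving a natural frequency of ~500 Hz.
m = 0.05
k = m * (2 * math.pi * 500) ** 2
c = 0.02

for f in (250, 450, 500, 550, 1000):
    print(f"{f:>5} Hz drive -> relative amplitude {driven_amplitude(f, m, k, c):.2e}")
# The response peaks sharply near 500 Hz: driving an object at its natural frequency is resonance.
```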
Standing Waves: Sound in Enclosed Spaces
When sound waves reflect back and forth in an enclosed space, they can create standing waves. This happens when the forward-traveling and reflected waves combine, creating patterns of minimum vibration (nodes) and maximum vibration (antinodes).
Practice
Try this: Clap your hands in different rooms of your house. Notice how the sound changes based on the room’s size and materials. This is due to standing waves and room acoustics!
Standing waves are fundamental to how many instruments work:
- In wind instruments, standing waves form in air columns
- In string instruments, standing waves vibrate along the strings
- In percussion instruments like drums, standing waves form on the stretched membrane
Standing waves are also important in room acoustics. They can create “room modes” — specific frequencies that get amplified or attenuated depending on the room’s dimensions, potentially causing uneven frequency response in recording studios or listening spaces.
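For a feel of where room modes fall, here’s a rough Python sketch that computes only the axial (single-dimension) modes from f_n = n·v / 2L. Real rooms also have tangential and oblique modes, and the room dimensions below are hypothetical:

```python
SPEED_OF_SOUND = 343.0  # m/s

def axial_modes(length_m: float, count: int = 4) -> list[float]:
    """Axial standing-wave frequencies for one room dimension: f_n = n * v / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

# Hypothetical room: 5 m long, 4 m wide, 2.5 m high.
for name, dim in (("length", 5.0), ("width", 4.0), ("height", 2.5)):
    modes = ", ".join(f"{f:.0f} Hz" for f in axial_modes(dim))
    print(f"{name:>6} ({dim} m): {modes}")
# Low-frequency peaks and dips in such a room tend to cluster around these frequencies.
```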
How We Hear: From Air Pressure to Neural Signals
Our auditory system is a remarkable biological mechanism that converts pressure waves in the air into the experience of sound:
- Outer ear (pinna and ear canal): Collects sound and directs it to the eardrum
- Middle ear (eardrum, ossicles): Converts air pressure waves into mechanical vibrations
- Inner ear (cochlea): Transforms mechanical vibrations into electrical signals
- Auditory nerve: Carries electrical signals to the brain
- Auditory cortex: Processes and interprets signals as sound
The cochlea is particularly fascinating. This fluid-filled, spiral-shaped structure contains thousands of tiny hair cells, each tuned to respond to specific frequencies. When sound causes the basilar membrane to vibrate, it bends these hair cells, triggering electrical impulses that our brain interprets as different pitches.
This frequency-to-position mapping (called tonotopic organization) is why we can distinguish different pitches and hear complex sounds like chords or entire orchestras simultaneously.
The Sound Spectrum: Visualizing What We Hear
Modern technology allows us to visualize sound as a spectrum, showing the intensity of different frequencies in real-time. This representation helps us understand the complex makeup of sounds that our ears process automatically.
Sound spectrum analysis is used in many applications:
- Audio engineering for identifying problematic frequencies
- Voice recognition systems
- Musical tuning and analysis
- Noise control and environmental monitoring
- Speech therapy and language learning
The spectrum view reveals details that our ears might miss, showing how even simple sounds contain complex patterns of frequencies, and how different sounds occupy different regions of the frequency spectrum.
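Here’s a short Python sketch of the idea using NumPy’s FFT. It analyses a synthetic test signal (a 440 Hz tone plus a weaker 880 Hz harmonic, chosen purely for illustration) and reports the strongest frequency components, which is essentially what a real-time spectrum display does continuously:

```python
import numpy as np

SAMPLE_RATE = 44_100
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)

# Synthetic test signal: a 440 Hz fundamental plus a weaker 880 Hz harmonic.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.abs(np.fft.rfft(signal))                  # magnitude of each frequency bin
freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)

for i in np.argsort(spectrum)[-2:][::-1]:               # the two strongest bins
    print(f"{freqs[i]:.0f} Hz  (relative magnitude {spectrum[i] / spectrum.max():.2f})")
# Prints 440 Hz (1.00) and 880 Hz (0.50): the spectrum reveals the signal's ingredients.
```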
The Speed of Sound: Not Always 343 m/s
Note
The speed of sound varies with temperature, medium, humidity, and altitude. This explains phenomena like thunder and why sound behaves differently underwater.
While we commonly use 343 meters per second (1,125 feet per second) as the speed of sound in air, this value only applies at room temperature (20°C or 68°F) at sea level. The speed of sound varies with:
- Temperature: Sound travels faster in warmer air (increases about 0.6 m/s per °C)
- Medium: Sound travels faster in liquids and solids than in gases
  - Water: ~1,480 m/s
  - Steel: ~5,960 m/s
- Humidity: Slightly increases with higher humidity
- Altitude: Decreases at higher altitudes, mainly because the air there is colder
These differences explain everyday phenomena: we see lightning before we hear thunder because light reaches us almost instantly while sound takes roughly three seconds per kilometre, and sound behaves very differently underwater or through solid structures.
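A common approximation for the temperature dependence is v ≈ 331.3 + 0.606·T (with T in °C), which gives the familiar 343 m/s at 20°C. Here’s a small Python sketch using that formula, plus the classic lightning-distance estimate (the 3-second delay is hypothetical):

```python
def speed_of_sound_air(temp_celsius: float) -> float:
    """Approximate speed of sound in dry air: about 331 m/s at 0 °C, rising ~0.6 m/s per °C."""
    return 331.3 + 0.606 * temp_celsius

for temp in (-10, 0, 20, 35):
    print(f"{temp:>4} °C -> {speed_of_sound_air(temp):.1f} m/s")

# Estimating how far away a storm is: count the seconds between the flash and the thunder.
delay_s = 3.0   # hypothetical delay
print(f"Storm is roughly {speed_of_sound_air(20) * delay_s:,.0f} m away")   # about 1 km
```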
The Doppler Effect: Why Sounds Change as They Pass By
You’ve likely noticed how the pitch of a siren changes as an ambulance passes by. This change in perceived frequency due to relative motion between the source and observer is called the Doppler effect.
When a sound source moves toward you, the sound waves are compressed, resulting in a higher perceived frequency (higher pitch). As it moves away, the waves are stretched out, creating a lower perceived frequency (lower pitch).
This effect isn’t just a curiosity — it has practical applications in fields like medicine (Doppler ultrasound), astronomy (measuring stellar motion), and radar systems (speed detection).
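For a stationary listener and a moving source, the standard formula is f_observed = f_source · v / (v ∓ v_source), with the minus sign while the source approaches and the plus sign as it recedes. Here’s a Python sketch using a hypothetical 700 Hz siren on an ambulance doing about 90 km/h (25 m/s):

```python
SPEED_OF_SOUND = 343.0  # m/s

def doppler_moving_source(f_source_hz: float, source_speed_ms: float, approaching: bool) -> float:
    """Perceived frequency for a stationary listener and a source moving directly toward or away."""
    denom = SPEED_OF_SOUND - source_speed_ms if approaching else SPEED_OF_SOUND + source_speed_ms
    return f_source_hz * SPEED_OF_SOUND / denom

siren_hz, ambulance_ms = 700.0, 25.0   # hypothetical siren pitch, roughly 90 km/h
print(f"Approaching: {doppler_moving_source(siren_hz, ambulance_ms, True):.0f} Hz")   # ~755 Hz
print(f"Receding:    {doppler_moving_source(siren_hz, ambulance_ms, False):.0f} Hz")  # ~652 Hz
# That drop of about 100 Hz as the ambulance passes is the pitch change you hear.
```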
Beyond Human Hearing: Infrasound and Ultrasound
Our auditory range of 20 Hz to 20,000 Hz represents just a fraction of the sound spectrum:
- Infrasound (below 20 Hz): Can’t be heard but can be felt as vibrations. Natural sources include earthquakes, volcanoes, and some animal communications. Some large pipe organs can produce infrasound, creating a physical sensation even when the notes aren’t audible.
- Ultrasound (above 20,000 Hz): Beyond human hearing but detectable by many animals. Bats use ultrasound for echolocation, and dolphins communicate with ultrasonic clicks. We harness ultrasound for medical imaging, distance measurement, and cleaning delicate items.
These ranges remind us that our perception of sound represents just one slice of a much broader acoustic reality.
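The distance measurement mentioned above works by timing an ultrasonic echo: distance = speed × round-trip time / 2. A minimal Python sketch, assuming sound in air at 343 m/s and a made-up sensor reading:

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def echo_distance_m(round_trip_s: float, speed: float = SPEED_OF_SOUND) -> float:
    """Distance to a reflecting object from an ultrasonic pulse's round-trip time."""
    return speed * round_trip_s / 2   # halved because the pulse travels out and back

# Hypothetical reading from an ultrasonic range sensor: 5.8 ms round trip.
print(f"{echo_distance_m(0.0058):.2f} m")   # about 1 metre
```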
The Psychology of Sound: How We Process What We Hear
Our experience of sound isn’t just about physics — it’s deeply influenced by psychology:
- Sound localization: Our brain uses tiny differences in timing and intensity between our two ears to determine where sounds are coming from.
- Auditory masking: Louder sounds can make it difficult to hear quieter sounds at similar frequencies (why it’s hard to have conversations in noisy environments).
- Missing fundamental: We can perceive a fundamental frequency even when it’s not present, as long as its harmonics are.
- Cocktail party effect: Our ability to focus on a single conversation in a noisy room while filtering out other sounds.
These psychological aspects of hearing highlight how much processing our brain does to create our seamless audio experience.
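To appreciate just how small the timing cues used for sound localization are, here’s a rough Python sketch of the interaural time difference based on a simple path-length model. The 0.21 m ear spacing and the d·sin(θ)/v formula are simplifying assumptions, not a model of real head acoustics:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
EAR_SPACING = 0.21      # metres between the ears; an approximate adult value

def interaural_time_difference_s(angle_deg: float) -> float:
    """Rough arrival-time difference between the ears for a source angle_deg off centre."""
    return EAR_SPACING * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

for angle in (0, 30, 90):
    print(f"{angle:>3}° off-centre -> {interaural_time_difference_s(angle) * 1e6:.0f} microseconds")
# Even the largest difference (~600 microseconds) is tiny, yet the brain resolves it effortlessly.
```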
Conclusion: Sound as Experience
Sound is both a physical phenomenon and a perceptual experience. The pressure waves traveling through air are transformed not just by our ears, but by our brains and our memories, creating the rich sonic landscape we navigate daily.
Understanding the science of sound enhances our appreciation for music, helps us design better acoustic spaces, and gives us insight into one of our most important senses. Whether you’re a musician, an audio engineer, or simply someone who enjoys listening, the physics and biology of sound form the foundation of our auditory world.
As you go about your day, take a moment to listen to the symphony of sounds around you, knowing that each one represents a complex interplay of physics, biology, and psychology that connects you to the vibrating world.
Further Resources
- Books
  - “This Is Your Brain on Music” by Daniel J. Levitin
  - “The Science of Sound” by Thomas D. Rossing
  - “How Music Works” by David Byrne