
Sunday, February 28, 2016

Death Valley & Singing Sand

I spent the past week in Death Valley for my Marin Academy Minicourse, and I discovered something that connects the environment of the desert to my study of sound. I've decided to do some additional research and write a blog post about this topic in order to better understand a fascinating natural phenomenon and to commemorate my wonderful Minicourse experience.

The Eureka Valley Sand Dunes in Death Valley are an impressive range of towering sand dunes. What makes them even more interesting, however, is that a low, mysterious rumble can be heard when traversing the dunes and in the nearby area. Can the sand dunes be the source of this sound? It can be heard here:


This phenomenon is called "singing sand." When a strong wind disrupts a sand dune, sand particles roll down the side of the dune and vibrate. These vibrations reverberate through the dry top layer of sand in the dune, which amplifies the sound and produces the Eureka Valley Sand Dunes' characteristic "booming."

While research has been done on this topic, such as the work by the Caltech engineers in the video above, there is still some uncertainty surrounding singing sand. In particular, there is ongoing debate about what determines the pitch of the sound a dune produces. Three hypotheses attribute the pitch to the size of the sand particles, the depth of the dry top layer of sand, or the speed of the displaced sand.
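
For a rough sense of the first hypothesis, here is a small back-of-the-envelope sketch in Python. It assumes, purely for illustration, that the boom frequency scales with the characteristic shear rate of grains avalanching over one another, roughly the square root of g divided by the grain diameter, with a coefficient of 0.4. Neither the coefficient nor the grain sizes come from my trip; they are just plausible example values.

import math

def estimated_boom_frequency(grain_diameter_m, coefficient=0.4):
    # Illustrative assumption: pitch scales with the shear rate of avalanching
    # grains, roughly coefficient * sqrt(g / grain_diameter).
    g = 9.81  # gravitational acceleration in m/s^2
    return coefficient * math.sqrt(g / grain_diameter_m)

# Typical dune sand grains are a few tenths of a millimeter across.
for d in (0.15e-3, 0.20e-3, 0.30e-3):
    print("grain diameter %.2f mm -> about %.0f Hz" % (d * 1e3, estimated_boom_frequency(d)))

Under those assumptions the estimate comes out to roughly 70 to 100 Hz, a low rumble rather than a high whine, which is at least consistent with the sound described above.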

It is exciting that there is still more research to be done, and I look forward to reading about or even participating in future developments in our understanding of singing sand.

Works Cited:

Phasing & Phase Cancellation

As governed by the laws of physics, sound waves interact in specific ways when they come into contact with each other. A useful framework for this is Fourier's theorem, which shows that simple sine waves with different characteristics can combine to create a more complicated sound wave. This graphic from my post on Fourier's theorem is very relevant to a phenomenon called phase cancellation:

The constructive waves are said to be in phase because their periods are the same and their peaks and troughs line up perfectly with each other. The destructive waves, on the other hand, are out of phase because they are shifted relative to each other. Because the waves are shifted by exactly half of a period, they cancel each other out and produce no sound. The interference caused by out-of-phase sound waves creates what is known as phase cancellation.
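
Here is a minimal sketch of the same idea in Python (assuming numpy is available): two identical sine waves added in phase double in amplitude, while the same pair added half a period out of phase cancels to silence. The frequency and duration are arbitrary example values.

import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate                    # one second of time values
freq = 440.0                                                # Hz

wave_a = np.sin(2 * np.pi * freq * t)
wave_b_in_phase = np.sin(2 * np.pi * freq * t)              # identical phase
wave_b_out_of_phase = np.sin(2 * np.pi * freq * t + np.pi)  # shifted by half a period

constructive = wave_a + wave_b_in_phase                     # peaks line up: amplitude doubles
destructive = wave_a + wave_b_out_of_phase                  # peaks meet troughs: silence

print(np.max(np.abs(constructive)))   # about 2.0
print(np.max(np.abs(destructive)))    # about 0.0 (tiny floating-point residue)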

Phase cancellation is an integral part of music technology, especially since most audio is recorded and mixed in stereo. When two speakers play the left and right channels of a song, a slight offset between the signals they reproduce can result in phase cancellation, which ruins the sound reproduction. When recording an instrument in stereo, phase is also an important consideration for microphone placement: if microphones are set up such that they capture the same sound at an offset in phase, the recording can be ruined.
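
To see why a small offset matters, this sketch delays one copy of a signal by half a millisecond, roughly the extra travel time to a microphone about 17 centimeters farther from the source, and sums it with the original. Frequencies whose half period equals the delay cancel out, carving the "comb filter" notches engineers try to avoid. The specific delay is just an assumed example.

import numpy as np

sample_rate = 44100
delay_seconds = 0.0005                        # ~0.5 ms of extra travel time to one mic
delay_samples = int(round(delay_seconds * sample_rate))

rng = np.random.default_rng(0)
signal = rng.standard_normal(sample_rate)     # broadband test signal (white noise)

delayed = np.concatenate([np.zeros(delay_samples), signal[:-delay_samples]])
mixed = signal + delayed                      # what you get when both mics are summed

# Cancellation happens wherever the delay equals half a period of the frequency.
first_notch_hz = 1.0 / (2.0 * delay_seconds)
print("first notch near %.0f Hz" % first_notch_hz)   # ~1000 Hz, then 3000 Hz, 5000 Hz, ...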

It is clear that complete phase cancellation is something to be avoided when recording and mixing audio. However, as with distortion, phasing is often used in moderation as a desirable effect. By shifting the phase of the left and right channels of a sound with a phase effect, both constructive and destructive interference occur in different parts of the sound wave. The manipulated sound wave ends up with a distinct quality due to the effect's unique, sweeping filtering. This video illustrates the difference between a clean guitar sound and a guitar sound affected by a phase effect.
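
For the curious, a phase effect (phaser) is usually built from a chain of all-pass filters whose notch positions are swept up and down by a slow oscillator. The Python sketch below is a bare-bones version with two first-order all-pass stages; the coefficient formula is a common textbook choice, and the sweep range and LFO rate are arbitrary assumptions rather than details from the video.

import numpy as np

def simple_phaser(x, sample_rate, lfo_rate_hz=0.5, depth=0.7,
                  min_freq=300.0, max_freq=3000.0, stages=2):
    # A slow LFO sweeps the all-pass "break" frequency between min_freq and max_freq.
    n = np.arange(len(x))
    lfo = 0.5 * (1.0 + np.sin(2.0 * np.pi * lfo_rate_hz * n / sample_rate))
    break_freq = min_freq + (max_freq - min_freq) * lfo

    y = np.zeros(len(x))
    x1 = np.zeros(stages)   # previous input of each all-pass stage
    y1 = np.zeros(stages)   # previous output of each all-pass stage
    for i in range(len(x)):
        # First-order all-pass coefficient for the current break frequency.
        t = np.tan(np.pi * break_freq[i] / sample_rate)
        c = (t - 1.0) / (t + 1.0)
        s = float(x[i])
        for k in range(stages):
            # Difference equation of the all-pass H(z) = (c + z^-1) / (1 + c*z^-1).
            out = c * s + x1[k] - c * y1[k]
            x1[k], y1[k] = s, out
            s = out
        # Mixing the phase-shifted copy back with the dry signal carves a moving notch.
        y[i] = x[i] + depth * s
    return y

Real phaser pedals typically cascade four or more stages and feed some of the output back into the input, but even this stripped-down version produces the characteristic sweeping "swoosh."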

Works Cited:

Wednesday, February 3, 2016

Echo & Reverb

Echo and reverberation, commonly referred to as reverb, are two naturally occurring sound effects that are also commonly utilized digitally in the context of music production. Echo and reverb are similar in nature as both are caused by the reflection of sound waves. However, sound reflection is perceived very differently by the human ear depending on certain factors. This is why echo and reverb are considered to be different effects.

Most people are familiar with echo. It occurs when a sound is reflected and then heard again after a short delay. An echo generally sounds very similar to the original sound, but at a lower volume. This effect often occurs in natural settings like canyons and other large, open spaces with walls that can reflect sound.
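
A digital delay effect imitates that single reflection directly: the output is the original signal plus a quieter copy of it shifted later in time. A minimal Python sketch, with the delay time and level chosen arbitrarily:

import numpy as np

def simple_echo(x, sample_rate, delay_seconds=0.4, gain=0.5):
    # One quieter, delayed copy of the sound added back onto itself.
    x = np.asarray(x, dtype=float)
    d = int(round(delay_seconds * sample_rate))
    y = x.copy()
    y[d:] += gain * x[:-d]   # the reflection arrives delay_seconds later, at lower volume
    return y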

Reverb may be a bit more unfamiliar to someone who does not produce or play music. This video provides examples of different kinds of reverb and how they affect the original sound.
The differences between the original sound and the sounds with reverb effects should be easy to perceive even on low quality speakers.

What distinguishes reverb from echo is the time it takes for a reflected sound to come back to your ear. Reflections that return in less than 0.1 seconds are perceived as reverb, while reflections that take longer are perceived as an echo. When the time interval is shorter than 0.1 seconds, the human brain merges the original sound and the reflected sound into a single sound.
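
That 0.1-second threshold also tells you roughly how far away a reflecting surface has to be before you hear a distinct echo, since the sound has to travel out and back. Taking roughly 343 meters per second as the speed of sound in air:

speed_of_sound = 343.0   # m/s, approximate speed of sound in air at room temperature
threshold_time = 0.1     # s; reflections faster than this blend into the original sound

# The sound travels to the surface and back, so divide the round trip by two.
min_echo_distance = speed_of_sound * threshold_time / 2
print(min_echo_distance)  # about 17 m: closer surfaces give reverb, farther ones give echo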

Because reverb occurs naturally in most acoustic environments, including it in music production, especially electronic music production, is critical to making a mix sound natural to the human ear. Electronic instruments and synthesizers lack natural reverb, so it is generally essential to add some sort of simulated reverb. This can be done with a plugin that digitally simulates reverb, or acoustically, by playing the recording back through a speaker in a reverberant space and re-recording it with a microphone. Adding appropriate reverb allows synthesizers to sound more natural and to sit better in a mix that involves acoustic instruments.
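
A real reverb plugin is far more sophisticated, but its basic ingredient is a bank of feedback delays that stack up many quick, decaying reflections until they blur into a tail. Here is a deliberately tiny Python sketch along those lines; the delay times and gains are illustrative values, not settings from any particular plugin.

import numpy as np

def tiny_reverb(x, sample_rate, delays_ms=(29.7, 37.1, 41.1, 43.7), gain=0.7, mix=0.25):
    # Parallel feedback comb filters: each one keeps re-injecting a decayed copy
    # of the signal, so reflections pile up and then die away gradually.
    x = np.asarray(x, dtype=float)
    wet = np.zeros_like(x)
    for ms in delays_ms:
        d = int(sample_rate * ms / 1000.0)
        comb = np.zeros_like(x)
        for i in range(len(x)):
            comb[i] = x[i] + (gain * comb[i - d] if i >= d else 0.0)
        wet += comb
    return x + mix * wet   # blend the simulated reflections back in with the dry signal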

Works Cited:

Monday, February 1, 2016

Sound Distortion

Distortion, in the context of music, is a word that is often tossed around by people without a clear or deep understanding of what it actually is. Fundamentally, distortion describes a change in a sound's waveform that occurs as the sound is transmitted electronically or digitally. Viewing a waveform digitally illustrates distortion quite well. The following image shows an audio signal and then the same signal after being distorted:

Given the focus on sound, we also want to be able to hear and recognize distortion. The guitar in the following video presents a clearly audible difference between a more pure sound and a distorted sound:

An interesting takeaway from this video is that distortion is not intrinsically bad; many guitar and bass amps have a distortion knob as a feature. Used in moderation and in the right musical context, distortion can enhance the sound of an instrument. But what causes this change in a sound's waveform?

There are two main categories of distortion: linear and nonlinear (commonly harmonic distortion). In linear distortion, the amplitude of different parts of the sound wave is changed; in nonlinear distortion, new frequencies or harmonics are added to the sound. This explains why distortion can be used in a beneficial way: adding appropriate harmonics can add complexity to a sound, while adding clashing harmonics can make something sound inharmonious. To add a bit of information about how electronics pertain to distortion, "Harmonic distortion in amplifiers is usually caused by the amplifier needing more voltage than its power supply can provide. It can also be caused by some part of the internal circuit (usually the output transistors) exceeding its output capacity" (Source).
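
To make the nonlinear case concrete, here is a small Python sketch of hard clipping, which is roughly what happens when an amplifier runs out of supply voltage: the tops of a pure sine wave get flattened, and that reshaping shows up in the spectrum as new harmonics that were never played. The tone and clip level are arbitrary example values.

import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
clean = np.sin(2 * np.pi * 440.0 * t)     # a pure 440 Hz tone: a single spectral peak

# Nonlinear (harmonic) distortion: anything beyond the "supply voltage" gets flattened.
clipped = np.clip(1.5 * clean, -1.0, 1.0)

# The clean tone has energy only at 440 Hz; the clipped tone grows new peaks at
# odd multiples (1320 Hz, 2200 Hz, ...) that were not in the original signal.
spectrum_clean = np.abs(np.fft.rfft(clean))
spectrum_clipped = np.abs(np.fft.rfft(clipped))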

The two main branches of distortion, linear and nonlinear, can be broken down into many different types. Harmonic distortion is usually what people mean when they speak about distortion, and I have addressed it already, but other types include bandwidth distortion, intermodulation distortion, dynamic distortion, temporal distortion, noise distortion, and acoustic distortion. If you are interested in more information about these specific types of distortion, this source provides in-depth descriptions of all of them.

Works Cited: