Sunday, January 31, 2016

Doppler Analysis & Analysis of Leslie Cabinet

My previous post about the Doppler effect explains what the Doppler effect is and the properties of sound that cause it to exist. I decided to collect some data to provide a more tangible example of the Doppler effect at work, both in a simple system and in a Leslie cabinet.

To get a simple measurement of the Doppler effect, working with Jon Bretan, we attached a battery to a simple buzzer so that it was constantly producing sound and tied some string tightly around these two objects. With the buzzer tied to one end of the string, we were able to spin it in a circle. We recorded the changes in sound pressure over the span of one second, and we also recorded the stationary buzzer to act as a control. This is our initial data set:

Both the uniformity of the sound pressure data and the thin band of frequencies on the fast Fourier transform show the purity of the tone generated by the stationary buzzer. In contrast, the complicated shape of the sound pressure data for the rotating buzzer reveals a much more complex sound wave. The breadth of frequencies shown on the fast Fourier transform best illustrates the Doppler effect. Because the buzzer was moving relative to the microphone, the microphone picked up many more frequencies than the central pitch generated by the stationary buzzer. The mix of frequencies appears fairly uniform, but not perfectly so, due to aspects of the rotation that were not controlled, such as keeping the buzzer rotating in the same plane as the microphone. Despite this, the data set gives a very clear example of the Doppler effect at work. Using the same methodology, I recorded the frequencies produced by a popular piece of music equipment.
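
As an aside, if you want to reproduce this kind of frequency analysis yourself, the sketch below shows one way to compute the spectrum of a one-second recording with NumPy and SciPy. The file name and the exact windowing are assumptions for illustration, not the script used to make the plots above.

```python
# Sketch: compute the spectrum of a one-second buzzer recording.
# Assumes a WAV file named "buzzer_spinning.wav"; illustrative only,
# not the exact script used for the plots above.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("buzzer_spinning.wav")    # sample rate (Hz), samples
if data.ndim > 1:
    data = data[:, 0]                               # keep one channel if stereo
data = data[:rate].astype(float)                    # first second of audio

spectrum = np.abs(np.fft.rfft(data))                # magnitude of the FFT
freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)    # frequency axis in Hz

peak = freqs[np.argmax(spectrum)]
print(f"Strongest frequency: {peak:.1f} Hz")
```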


The Leslie cabinet, as I mentioned in my previous post about the Doppler effect, is an amplifier with a unique way of producing sound. The cabinet is divided into two sections, the horn and the drum, which produce high and low pitches respectively. The speaker in each section has two speeds of rotation, fast and slow, and can also be stationary. I played a sine tone at 440 hertz through the Leslie cabinet and used a microphone to collect data for both the horn and the drum at both speeds. Additionally, I recorded sound produced by the stationary setting as a control.

The following data comes from the sound produced by the horn (higher pitch speaker):

The data appears similar to that of the simple Doppler generator: the faster the speaker rotates, the wider the spread of frequencies it produces. The logarithmic scaling of the Y axis makes the remarkable symmetry of the frequency spread visible. This is an important distinction between the simple spinning buzzer and the Leslie cabinet, and it suggests that the cabinet's uniform rotation is what produces the uniform frequency spread that was recorded.
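
As a rough sanity check on the width of that spread, the maximum Doppler shift depends only on the tangential speed of the rotating horn. The horn radius and rotation rates in the sketch below are assumed ballpark values for illustration, not measurements of this particular cabinet.

```python
# Rough estimate of the Doppler spread from a rotating Leslie horn.
# The radius and rotation rates are assumed ballpark figures, not measurements.
import math

SPEED_OF_SOUND = 343.0   # m/s, at room temperature
f0 = 440.0               # Hz, the sine tone played through the cabinet
radius = 0.15            # m, assumed effective horn radius

for label, rev_per_sec in [("slow", 0.8), ("fast", 6.5)]:         # assumed speeds
    v = 2 * math.pi * radius * rev_per_sec                        # tangential speed of the horn mouth
    f_high = f0 * SPEED_OF_SOUND / (SPEED_OF_SOUND - v)           # horn moving toward the mic
    f_low = f0 * SPEED_OF_SOUND / (SPEED_OF_SOUND + v)            # horn moving away from the mic
    print(f"{label}: spread of roughly {f_low:.1f} Hz to {f_high:.1f} Hz")
```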

The following data comes from the sound produced by the drum (lower pitch speaker):
Looking at the full frequency range, we can see the very low pitch produced by the drum as well as the central tone of the 440 hertz sine wave. We can also see the frequency spread caused by the Doppler effect.

When we look at the same frequency range as the horn data, we can clearly see the Doppler effect:
The similarities in frequency spread that correspond to the speed of rotation are a clear indicator of the Doppler effect in the Leslie cabinet. Again, we see nearly perfect symmetry in the frequency spread, especially for the fastest speed of rotation.

Friday, January 22, 2016

Sound Amplification & The Inverse Square Law

Amplifying sound is the act of making a sound louder to the human ear. It relies on the process of increasing the amplitude of the sound wave, which is where "amplification" comes from. As I mentioned in a much earlier post, our perception of loudness corresponds to the amplitude of sound waves. Therefore, to make a sound seem louder, you need to find a way to increase the amplitude of its waveform. This can be done in a couple of ways.

The simplest way to amplify a sound applies when it is produced by an acoustic instrument. Striking a guitar string harder, blowing more air or with more force into a wind instrument, and beating a drum with more force are all ways to amplify the sound produced by acoustic instruments. As you may recall, the defining characteristics that determine the pitch of the sound produced are the length and thickness of the guitar string, or the length and width of the air cavity in a wind instrument. Because these properties do not change when the instrument is played harder, the pitch remains constant while the sound is amplified.

The more interesting form of sound amplification, in my opinion, is the electronic/digital amplification that exists universally in our technology today. A great example of how this works relies on a slightly older piece of technology: a record player with electronic amplification. A vinyl record reproduces sound as the needle of the record player travels through its grooves and vibrates. The record player converts these small vibrations into an electrical signal. In the amplifier, this signal is strengthened and reproduced as sound at a much louder volume. Much of our other technology works in the same way - electrical impulses are increased in strength in order to produce a louder sound. Microphones and musical instrument amplifiers, such as guitar amps, work on the same principle.
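
As a purely numerical illustration of what "strengthening the signal" means, the sketch below scales a waveform by a gain factor and expresses the change in level in decibels; the gain value and the test tone are arbitrary choices for the example.

```python
# Illustration: amplification scales the waveform; the change is measured in decibels.
# The gain value and test tone are arbitrary, chosen only for the example.
import numpy as np

rate = 44100                                   # samples per second
t = np.arange(rate) / rate                     # one second of time values
signal = 0.1 * np.sin(2 * np.pi * 440 * t)     # quiet 440 Hz sine wave

gain = 4.0                                     # amplifier gain (arbitrary)
amplified = gain * signal                      # same pitch, larger amplitude

gain_db = 20 * np.log10(gain)
print(f"A gain of {gain} raises the level by {gain_db:.1f} dB")   # about 12 dB
```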

The existence of sound amplification brings up a few more questions about when it is necessary and about the ways in which sound travels. Obviously, in a small, quiet room a guitar amp is not needed to hear the sound produced by a guitarist. What happens if it is a long room, however, and the guitar player is very far away? Even if the room is quiet, it may not be easy to hear the guitar player without amplification when you are at opposite ends of the room. This pertains to the inverse square law, which relates the intensity of a sound to the distance it travels from its source to a listener. The law states that as the distance doubles, the intensity of the sound decreases by a factor of four. This corresponds to a constant drop of six decibels each time the distance is doubled. Because of this, in certain cases sound amplification may be necessary simply due to the distance that sound needs to travel. This is especially relevant in large concert halls or outdoor venues.
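
To make the inverse square law concrete, here is a short sketch that computes the drop in sound level with distance; the starting level and the distances are made-up numbers for illustration.

```python
# Inverse square law: each doubling of distance costs about 6 dB.
# The starting level and distances are made-up values for illustration.
import math

def level_at_distance(level_at_ref_db, ref_distance_m, distance_m):
    """Sound level at a new distance, assuming free-field spreading."""
    return level_at_ref_db - 20 * math.log10(distance_m / ref_distance_m)

for d in [1, 2, 4, 8, 16]:   # metres from the guitar amp
    print(f"{d:2d} m: {level_at_distance(90.0, 1.0, d):.1f} dB")
# prints roughly 90, 84, 78, 72, and 66 dB: about 6 dB lost per doubling
```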


Wednesday, January 13, 2016

Haas Effect / Precedence Effect

The Haas effect and the precedence effect are effectively two names for the same psychoacoustic phenomenon and can be used interchangeably for the most part. Haas refers to Helmut Haas, the scientist credited with first describing the effect, while precedence gives a better sense of what the effect actually is. I explored some psychoacoustic phenomena during my first semester of study, but decided to put off the precedence effect until now, when it is most relevant. The precedence effect is a fundamental reason that we are able to perceive sound dimensionally, which relates directly to my previous post about stereo imaging.

The precedence effect occurs when we hear an initial sound and then its reflections (echoes, delays, reverberations) approximately 1-40 milliseconds later. The information that arrives at our ears during this short period is critical in how our brains determine the location or width of a sound. That is the precedence effect in a nutshell; it is a simple phenomenon to understand as long as you do not dive deeply into the neuroscience, where there is still much to discover. A basic understanding of the effect can still be very useful for a musician or producer.

The precedence effect, along with stereo imaging, is often used in the mixing process in music production to make an instrument sound "wider" or fit better into the mix. By using a delay/echo audio plugin on a single channel, you can add a delay that falls into the Haas range, 1-40 ms. You can adjust this value to help your instrument or sound fall into the ideal place in the mix. It is important to consider the other parameters of the delay plugin and make sure the effect is subtle enough to take advantage of the precedence effect without adding unnecessary echo. It is also possible to use the precedence effect to make a mono recording sound like a full stereo recording. This process is described succinctly here.
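
As a minimal sketch of what a delay plugin is doing in this case, the snippet below duplicates a mono signal into two channels and delays one of them by a value inside the Haas range. The 15 ms delay and the synthetic test tone are assumptions for illustration.

```python
# Sketch: widen a mono signal with a short Haas-range delay on one channel.
# The 15 ms delay and the synthetic test signal are assumptions for illustration.
import numpy as np

rate = 44100
t = np.arange(rate) / rate
mono = 0.5 * np.sin(2 * np.pi * 220 * t)         # stand-in for a mono recording

delay_ms = 15                                     # inside the 1-40 ms Haas range
delay_samples = int(rate * delay_ms / 1000)

left = mono
right = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)]  # delayed copy

stereo = np.stack([left, right], axis=1)          # shape (samples, 2) for playback
print(stereo.shape)                               # (44100, 2)
```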


Monday, January 11, 2016

Stereo Imaging

Stereo imaging presents another interesting connection between light and sound. The way we perceive depth relies on the fact that we have two eyes that perceive slightly different images due to their spacing. When our brain puts these two flat images together, we are able to see depth. Our ears work in a very similar way in the perception of sound. By hearing slightly different sounds in each ear, our brain is able to combine them to create a sense of spatial awareness. This should be second nature to you, as you certainly use your spatial sense of hearing on a daily basis.

The production of stereo music takes advantage of the sense of depth that our ears are able to produce. A song in stereo means that the two speakers or two headphones/earbuds actually play slightly different versions of the same song. Stereo imaging is the process of using the stereo reproduction of sound to create music that has audible depth and width. A song with good stereo imaging tends to sound more natural and allows instruments to have more clarity in the mix. If you close your eyes while listening to a well-produced recording of a jazz trio, you should be able to easily envision the location of each musician in front of you due to the recording's stereo imaging.

Achieving a recording with good stereo imaging is not an easy process. When recording acoustic instruments, stereo recording techniques must be used to capture audio for both channels, along with types of microphones suited to capturing the depth and width of a particular instrument.

In post production, after a performance has already been recorded, digital audio plugins that simulate the effect of stereo imaging can be carefully used to make a recording sound more natural or full. Stereo imaging can also be used to make a better-mixed song by allowing each instrument to have its own physical space. This process can be especially important in the creation of electronic music, which is almost never recorded with a microphone; achieving a good stereo image in electronic music effectively always requires stereo imaging plugins. In the mastering stages of producing a song, stereo imaging can be applied to different frequency ranges instead of specific instruments. In this case, higher frequencies are often manipulated to sound wider, while lower frequencies are kept in the middle. This can make a mix sound clearer by granting new space to the different frequency ranges of a song.
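
One common way the mastering-stage approach described above is implemented is mid/side processing: the stereo signal is split into a mid (center) component and a side (width) component, the side component is boosted, and the two channels are rebuilt. The sketch below applies a single width factor rather than a frequency-dependent one, and the numbers are arbitrary example values.

```python
# Sketch of mid/side stereo widening; the width factor is an arbitrary example value.
import numpy as np

def widen(stereo, width=1.5):
    """Boost the side (L-R) component of a (samples, 2) stereo array."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2        # what both channels share
    side = (left - right) / 2       # what makes the image wide
    side *= width                   # width > 1 widens, width < 1 narrows
    return np.stack([mid + side, mid - side], axis=1)

# Tiny usage example, with random noise standing in for a mix.
mix = np.random.default_rng(0).uniform(-0.1, 0.1, size=(44100, 2))
wider_mix = widen(mix, width=1.5)
```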

Works Cited:
"Introduction to Stereo Imaging -- Theory." Cardiff School of Computer Science and Informatics. Web. 11 Jan. 2016.
"Stereo Imaging." CDT Audio. Web. 11 Jan. 2016.

The Doppler Effect

The Doppler effect is one of the most interesting phenomena related to sound. In order for the Doppler effect to be observed, something must be producing a constant sound while moving relative to an observer. The result is something that many people have experienced or can easily perceive. You may have noticed when an ambulance passes by you on the road that its siren seems to change pitch. Interestingly, the pitch of the siren will seem constant to the person driving the vehicle. The pitch variation that you perceive is a result of the Doppler effect.

What causes the changing pitch of this sound is the movement of its source. The siren produces a sound of a constant frequency, but its motion relative to you compresses or stretches the sound waves, effectively changing their wavelength. Because the speed of sound is constant and equal to wavelength times frequency, the changing wavelength causes the frequency you hear to change. This is the root cause of the Doppler effect. The way we perceive the pitch of a sound is related to its frequency, so this explains how we can hear the Doppler effect as it applies to sound waves. The Doppler effect is also present in light waves, a characteristic that is critical in the study of astronomy. As my blog focuses on sound, I will not go into depth about this aspect of the Doppler effect, but this source provides good introductory information if you are interested.
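
To put numbers to this: for a source moving directly toward or away from a stationary listener, the observed frequency is f_source * v / (v - v_source) while approaching and f_source * v / (v + v_source) while receding, where v is the speed of sound. The siren frequency and vehicle speed below are made-up values for illustration.

```python
# Observed frequency of a siren as the ambulance approaches and then recedes.
# The siren frequency and vehicle speed are made-up values for illustration.
SPEED_OF_SOUND = 343.0          # m/s
f_siren = 700.0                 # Hz, assumed siren tone
v_source = 25.0                 # m/s (about 55 mph), assumed vehicle speed

f_approaching = f_siren * SPEED_OF_SOUND / (SPEED_OF_SOUND - v_source)
f_receding = f_siren * SPEED_OF_SOUND / (SPEED_OF_SOUND + v_source)

print(f"Approaching: {f_approaching:.0f} Hz")   # about 755 Hz, sounds sharper
print(f"Receding:    {f_receding:.0f} Hz")      # about 652 Hz, sounds flatter
```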

As I begin my second semester, the focus of my studies is shifting toward the recording and reproduction of sound, as well as the digital processing of sound. The Doppler effect exists in the reproduction of sound in certain amps and speakers. Leslie cabinets, commonly used amplifiers for Hammond organs, rely on rotational motion in the production of sound. This gives them a unique and often desirable quality for some musicians. Soon, I will analyze the changing frequencies of sound produced by a Leslie cabinet as an example of the Doppler effect.

Works Cited:
"The Doppler Effect." The Doppler Effect. Physics Classroom. Web. 11 Jan. 2016.
"Doppler Effect." Hyper Physics. Web. 11 Jan. 2016.