
Thursday, March 24, 2016

Moog Sub Phatty Waveform Purity Analysis

Recently, I brought my Moog Sub Phatty into school to study it as part of my independent study. My primary method of analysis is measuring the voltage output from the Sub Phatty's headphone jack:


The voltage data output from the synthesizer reflects the sound wave it produces; this is how sound is transferred through electronics. I was initially curious as to how the Sub Phatty's different waveforms impact the sound waves it produces. The available waveshapes are controlled by this knob:


Moog describes them, starting from the bottom left and going clockwise, as triangle, sawtooth, square, and narrow pulse waves. I decided to arbitrarily number each of the lines around the wave knob, starting from one and going to eleven in the same clockwise order. In analyzing the sound produced by the different waveforms, I wanted to assess purity. To quantify "purity," I measured how well each wave is fit by a sine curve, calculating the correlation between the recorded wave and its sine fit.
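
For anyone curious about the computation, here is a rough sketch in Python of how a sine fit correlation can be calculated (my actual measurements came from voltage probes; the square wave here is just stand-in data):

    import numpy as np
    from scipy.optimize import curve_fit

    def sine(t, amp, freq, phase, offset):
        # The model curve: a generic sine wave with four free parameters
        return amp * np.sin(2 * np.pi * freq * t + phase) + offset

    t = np.linspace(0, 0.02, 2000)               # 20 ms of samples
    wave = np.sign(np.sin(2 * np.pi * 440 * t))  # stand-in: a 440 Hz square wave

    # Fit a sine to the data, seeding the fit with rough initial guesses
    params, _ = curve_fit(sine, t, wave, p0=[1.0, 440.0, 0.0, 0.0])
    fit = sine(t, *params)

    # "Sine fit correlation": Pearson correlation between data and fit
    correlation = np.corrcoef(wave, fit)[0, 1]
    print(round(correlation, 4))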

The following table shows each wave number and the value of its sine fit correlation (bold numbers represent the waveforms named in Moog's manual):

Wave Number    Sine Fit Correlation
1              0.9966
2              0.9937
3              0.9555
4              0.8839
5              0.8168
6              0.7608
7              0.8669
8              0.9122
9              0.8201
10             0.5641
11             0.3595

What was initially interesting is how pure the "triangle" wave is. While I was not surprised by its sine fit correlation of 0.9966 given how pure the tone sounded to me, it is interesting that it is named a triangle wave when it seems to essentially be a sine wave.
Another interesting observation is that there is no real correlation or trend between the wave number and its sine fit correlation, as I had hoped there would be. However, this table provides a sense of how "gritty" or impure each of the Sub Phatty's waveforms is, which can be useful in sound design. For a harsher tone, it is best to select one of the waves with a lower correlation. Hopefully this table can serve as a reference point.

Tuesday, March 1, 2016

Analysis of a Distortion Circuit

Building on my post about distortion, I built a distortion circuit with the help of Jon Bretan. The circuit takes in an audio signal and uses transistors to distort it. Instead of outputting the distorted signal to a speaker, I recorded it using voltage probes. The final setup looked like this:

I recorded the voltage input (red) and the distorted signal (blue) in several different scenarios. First, I recorded data for sawtooth, square, and triangle waves to look at how distortion uniquely affects each waveform. The data are pictured below:

100.87 Hertz Sawtooth:



100.87 Hertz Square:



100.87 Hertz Triangle:


Interestingly, the distorted waveform has a unique shape for each of the different input waves. The distortion is clear in the jagged and flattened parts of the blue wave, and it makes sense that the amplitude of the distorted waveform matches the general shape of the initial waveforms. The type of distortion occurring here is called clipping, and I will be dedicating a brief blog post to it in the future. Normally, clipping results in a louder-sounding wave because the top of the initial wave is "flattened" as it reaches peak intensity. However, in this circuit, the distorted sound is not amplified to a level that would make this apparent, as its magnitude is significantly lower than the initial signal's.
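
Clipping itself is easy to reproduce numerically. The following sketch flattens a sine wave at an assumed threshold, which is essentially the hard clipping visible in the blue traces above:

    import numpy as np

    t = np.linspace(0, 0.01, 1000)
    clean = np.sin(2 * np.pi * 440 * t)              # 440 Hz sine input
    threshold = 0.4                                  # assumed clipping level
    clipped = np.clip(clean, -threshold, threshold)  # flatten the peaks

    print(np.max(clean), np.max(clipped))            # ~1.0 vs 0.4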

After looking at these different waveforms, I took a basic sine wave signal and recorded it at a range of frequencies to see how the frequency impacted the distortion occurring in the circuit. The following data represent the frequency range I was able to record:

59.44 Hertz Sine:


100.87 Hertz Sine:


200 Hertz Sine:



302.27 Hertz Sine:



403.48 Hertz Sine:


493.88 Hertz Sine:


I initially thought that the distortion would not be significantly affected by the frequency of the input signal. However, looking at the data in order of increasing frequency reveals a fascinating trend. The input signal appears perfectly uniform at every frequency, but the distorted signal at higher frequencies appears to follow a sinusoidal pattern. The peaks and troughs of the distorted signal that follow the shape of the clean signal seem to be superimposed over a sine wave. It is likely that this added oscillation is caused by the electronic components of the circuit as current flows through them. It is interesting that this effect is only clearly visible for the higher frequency sine waves.
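
One way to test this idea would be to take a Fourier transform of the distorted signal and look for a component at a frequency unrelated to the input. Here is a sketch with simulated data (the 15 Hz component is a hypothetical stand-in for the stray oscillation; I have not measured its actual frequency):

    import numpy as np

    rate = 10000                                   # assumed sample rate (Hz)
    t = np.arange(0, 1.0, 1.0 / rate)
    # Stand-in for the recorded signal: a clipped 200 Hz sine plus a
    # low-level stray oscillation like the one visible in my plots
    signal = (np.clip(np.sin(2 * np.pi * 200 * t), -0.4, 0.4)
              + 0.05 * np.sin(2 * np.pi * 15 * t))

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)

    # The strongest components: 200 Hz, its odd harmonics from clipping,
    # and the stray low-frequency oscillation
    print(sorted(freqs[np.argsort(spectrum)[-5:]]))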


Sunday, February 28, 2016

Death Valley & Singing Sand

I spent the past week in Death Valley for my Marin Academy Minicourse, and I discovered something that connects the environment of the desert to my study of sound. I've decided to do some additional research and write a blog post about this topic in order to better understand a fascinating natural phenomenon and to commemorate my wonderful Minicourse experience.

The Eureka Valley Sand Dunes in Death Valley are an impressive range of towering sand dunes. What makes them even more interesting, however, is that a low, mysterious rumble can be heard when traversing the dunes and in the nearby area. Could the sand dunes be the source of this sound? You can hear it here:


This phenomenon is called "singing sand." When a strong wind disrupts a sand dune, the sand particles roll down the side of the dune and vibrate. These vibrations create reverberations throughout the dry top layer of sand in the dune, which amplifies the sound, producing the Eureka Valley Sand Dunes' characteristic "booming."

While research has been done on this topic, such as the research done by the Caltech engineers in the video above, there is still some uncertainty surrounding singing sand. Much debate exists surrounding the factors that determine the pitch of the sound produced by sand dunes. Three hypotheses are that the size of the sand particles, the depth of the top layer of sand, or the speed of displaced sand control the pitch of the sound.

It is exciting that more research is needed, and I look forward to reading about or even participating in future developments in our understanding of singing sand.


Phasing & Phase Cancellation

As governed by the laws of physics, sound waves have specific interactions when they come in contact with each other. A good model of this is Fourier's theorem, which shows that simple sine waves with different characteristics can combine to create a more complicated sound wave. This graphic from my post on Fourier's theorem is very important in relation to a phenomenon called phase cancellation:

The constructive waves are said to be in phase because their periods are the same and they line up perfectly with each other. The destructive waves, on the other hand, are said to be out of phase because they are shifted relative to each other. Because the waves are exactly half a period out of phase, they cancel each other out and produce no sound. The interference caused by out-of-phase sound waves creates what is known as phase cancellation.
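
This is simple to verify numerically: add a sine wave to a copy of itself shifted by half a period, and the result is silence. A minimal sketch:

    import numpy as np

    freq = 440.0
    t = np.linspace(0, 0.01, 1000)
    wave_a = np.sin(2 * np.pi * freq * t)
    wave_b = np.sin(2 * np.pi * freq * t + np.pi)  # half a period out of phase

    combined = wave_a + wave_b
    print(np.max(np.abs(combined)))                # ~0: complete cancellation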

Phase cancellation is an integral part of music technology, especially as most audio is recorded and mixed in stereo. When you have two speakers playing the left and right channels of a song, a slight offset in the sound produced by the speakers can result in phase cancellation, which ruins the sound reproduction. When recording an instrument in stereo, phase is an important consideration in microphone placement. If microphones are set up such that they record the same sound but at an offset in phase, the recording can be ruined.

It is clear that complete phase cancellation is something to be avoided in recording and mixing audio. However, like with distortion, phasing is often used in moderation as a desirable effect. By shifting the phase of the left and right channels of a sound with a phase effect, both constructive and destructive interference occur to different parts of the sound wave. The manipulated sound wave ends up with a distinct quality due to the unique filtering of the phase effect. This video illustrates the difference between a clean guitar sound and a guitar sound affected by a phase effect.


Wednesday, February 3, 2016

Echo & Reverb

Echo and reverberation, commonly referred to as reverb, are two naturally occurring sound effects that are also commonly utilized digitally in the context of music production. Echo and reverb are similar in nature as both are caused by the reflection of sound waves. However, sound reflection is perceived very differently by the human ear depending on certain factors. This is why echo and reverb are considered to be different effects.

Most people are familiar with echo. It occurs when a sound is reflected and then heard again after a short delay. An echo generally sounds very similar to the original sound but at a lower volume. This effect often occurs in natural settings like canyons and other large open spaces with walls that can reflect sound.

Reverb may be a bit more unfamiliar to someone who does not produce or play music. This video provides examples of different kinds of reverb and how they affect the original sound.
The differences between the original sound and the sounds with reverb effects should be easy to perceive even on low quality speakers.

What distinguishes reverb from echo is the time it takes for a reflected sound to come back to your ear. Sounds that return in less than 0.1 seconds are perceived as reverb, while sounds that take longer to return are perceived as an echo. When the time interval is shorter than 0.1 seconds, the human brain perceives the original sound and the reflected sound as a single sound wave.
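
A single reflection is easy to model as a delayed, quieter copy of a sound mixed back into the original. In the sketch below (the 0.5 level and the delay times are assumed values for illustration), a 50 ms reflection would fuse with the source and read as reverb, while a 300 ms reflection would be heard as a distinct echo:

    import numpy as np

    rate = 44100                          # CD-quality sample rate

    def reflect(sound, delay_seconds, level=0.5):
        # Mix one delayed, attenuated copy (a single "reflection") into a signal
        delay_samples = int(delay_seconds * rate)
        out = np.zeros(len(sound) + delay_samples)
        out[:len(sound)] += sound
        out[delay_samples:] += level * sound
        return out

    t = np.linspace(0, 0.5, rate // 2)
    burst = np.sin(2 * np.pi * 440 * t) * np.exp(-t * 20)  # a short decaying tone

    reverb_like = reflect(burst, 0.05)    # under 0.1 s: fuses into one sound
    echo_like = reflect(burst, 0.30)      # over 0.1 s: heard as a separate echo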

Because reverb occurs naturally in most acoustic environments, including it in music production, especially electronic music production, is critical to making a mix sound natural to the human ear. Electronic instruments and synthesizers lack natural reverb, so it is generally essential to add some sort of simulated reverb. This can be done with a plugin that digitally simulates reverb effects, or naturally, by playing a recording of the instrument in an acoustic environment and re-recording it with a microphone. Adding appropriate reverb allows synthesizers to sound more natural and to fit better into a mix that involves acoustic instruments.


Monday, February 1, 2016

Sound Distortion

Distortion in the context of music is a word that is often tossed around by people who do not have a sufficiently clear or deep understanding of what it actually is. Fundamentally, distortion describes a change in a sound's waveform that occurs as the sound is being transmitted electronically or digitally. Viewing a waveform digitally can illustrate distortion quite well. The following image shows an audio signal and then the same signal after being distorted:

Given the focus on sound, we also want to be able to hear and recognize distortion. The guitar in the following video presents a clearly audible difference between a more pure sound and a distorted sound:

An interesting takeaway from this video is that distortion is not intrinsically bad; many guitar and bass amps have a distortion knob as a feature. Moderate and proper use of distortion can enhance the sound of an instrument in the right musical context. But what causes this change in a sound's waveform?

There are two main categories of distortion: linear and nonlinear (commonly harmonic distortion). In linear distortion, the amplitude of different parts of the sound wave is changed; in nonlinear distortion, new frequencies or harmonics are added to the sound. This explains why distortion can be utilized in a beneficial way - adding appropriate harmonics can add complexity to a sound, while adding clashing harmonics can make something sound inharmonious. To add a bit of information about how electronics pertain to distortion, "Harmonic distortion in amplifiers is usually caused by the amplifier needing more voltage than its power supply can provide. It can also be caused by some part of the internal circuit (usually the output transistors) exceeding its output capacity" (Source).
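
The harmonic side of this is easy to see numerically. Passing a pure tone through a nonlinear function such as tanh (a common stand-in for transistor-style soft clipping, not necessarily what any particular amp does) adds odd harmonics that were not present in the input:

    import numpy as np

    rate = 8000
    t = np.arange(0, 1.0, 1.0 / rate)
    clean = np.sin(2 * np.pi * 100 * t)    # pure 100 Hz tone
    driven = np.tanh(5.0 * clean)          # nonlinear waveshaping

    spectrum = np.abs(np.fft.rfft(driven))
    freqs = np.fft.rfftfreq(len(driven), 1.0 / rate)

    # A symmetric nonlinearity adds odd harmonics: expect peaks near
    # 100, 300, 500, and 700 Hz
    print(sorted(freqs[np.argsort(spectrum)[-4:]]))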

The two main branches of distortion, linear and nonlinear, can be broken down into many different types. Harmonic distortion is often what people mean when speaking about distortion, and I have addressed it already, but other types include bandwidth distortion, intermodulation distortion, dynamic distortion, temporal distortion, noise distortion, and acoustic distortion. If you are interested in more information about these specific types of distortion, this source provides descriptions of all of them in good depth.


Sunday, January 31, 2016

Doppler Analysis & Analysis of Leslie Cabinet

My previous post about the Doppler effect provides a good explanation as to what the Doppler effect is and the properties of sound that cause it to exist. I decided to collect some data to try to provide a more tangible example of the Doppler effect at work in both a simple system and in a Leslie cabinet. 

To get a simple measurement of the Doppler effect, Jon Bretan and I attached a battery to a simple buzzer so that it constantly produced sound, then tied string tightly around the two objects. With the buzzer tied to one end of the string, we were able to spin it in a circle. We recorded the changes in sound pressure over the span of one second, and we also recorded the stationary buzzer to act as a control. This is our initial data set:

Both the uniformity of the sound pressure data and the thin band of frequencies on the fast Fourier transform show the purity of the tone generated by the stationary buzzer. In contrast, the complicated shape of the sound pressure data for the rotating buzzer reveals a much more complicated sound wave. The breadth of frequencies shown on the fast Fourier transform best illustrates the Doppler effect. Because the buzzer was moving relative to the microphone, the microphone picked up many more frequencies than the central pitch generated by the stationary buzzer. The mix of frequencies appears fairly uniform, but it is not perfectly so due to aspects of the rotation that were not controlled, such as keeping the buzzer rotating in the same plane as the microphone. Despite this, the data set gives a very clear example of the Doppler effect at work. Using the same methodology, I recorded the frequencies produced by a popular piece of music equipment.
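
The frequency spread can also be simulated. In the sketch below, a tone's instantaneous frequency is Doppler-shifted by the sinusoidally varying radial velocity of a source moving in a circle (the pitch, string length, and rotation rate are assumed values, not measurements from our buzzer):

    import numpy as np

    rate = 44100
    t = np.arange(0, 1.0, 1.0 / rate)

    v_sound = 343.0   # speed of sound in air (m/s)
    f0 = 2000.0       # assumed buzzer pitch (Hz)
    radius = 0.5      # assumed string length (m)
    rot = 3.0         # assumed rotations per second

    # Velocity toward the microphone varies sinusoidally over each rotation
    v_radial = 2 * np.pi * rot * radius * np.sin(2 * np.pi * rot * t)
    inst_freq = f0 * v_sound / (v_sound - v_radial)  # Doppler-shifted frequency

    # Integrate the instantaneous frequency to synthesize the swept tone;
    # an FFT of this tone shows a band of frequencies around f0
    phase = 2 * np.pi * np.cumsum(inst_freq) / rate
    doppler_tone = np.sin(phase)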


The Leslie cabinet, as I mentioned in my previous post about the Doppler effect, is an amplifier with a very unique way of producing sound. The cabinet is divided into two sections, the horn and the drum, which produce high and low pitches respectively. The speaker in each section has two speeds of rotation, fast and slow, and can also be stationary. I played a sine tone at 440 hertz through the Leslie cabinet and used a microphone to collect data for both the horn and the drum at both speeds. Additionally, I recorded sound produced by the stationary setting as a control. 

The following data comes from the sound produced by the horn (higher pitch speaker):

The data appear similar to the simple Doppler generator's: the faster-rotating speaker produces a wider spread of frequencies. The logarithmic scaling of the Y axis reveals the remarkable symmetry of the frequency spread. This reveals an important distinction between the simple spinning buzzer and the Leslie cabinet and suggests that the Leslie cabinet's uniform rotation produced the uniform frequency spread that was recorded.

The following data comes from the sound produced by the drum (lower pitch speaker):
Looking at the full frequency range, we are able to see the very low pitch produced by the drum as well as the central tone of the 440 hertz sine wave. We are also able to see the frequency spread caused by the Doppler effect.

When we look at the same frequency range as the horn data, we are able to clearly see the Doppler effect:
The similarities in frequency spread that correspond to speed of rotation are a clear indicator of the Doppler effect in the Leslie cabinet. Again, we are able to see near uniformity in the symmetry of the frequency spread, especially for the fastest speed of rotation.

Friday, January 22, 2016

Sound Amplification & The Inverse Square Law

Amplifying sound is the act of making a sound louder to the human ear. It relies on the process of increasing the amplitude of the sound wave, which is where "amplification" comes from. As I mentioned in a much earlier post, our perception of loudness corresponds to the amplitude of sound waves. Therefore, to make a sound seem louder, you need to find a way to increase the amplitude of its waveform. This can be done in a couple of ways.

The simplest way to amplify a sound is at its source, when it is produced by an acoustic instrument. Striking a guitar string harder, blowing more air or with more force into a wind instrument, and beating a drum with more force are all ways to amplify the sound produced by acoustic instruments. As you may recall, the defining characteristics that determine the pitch of the sound produced are the length and thickness of the guitar string, or the length and width of the air cavity in a wind instrument. Because these do not change, the pitch remains constant while the sound is amplified.

The more interesting form of sound amplification, in my opinion, is the electronic/digital amplification that exists universally in our technology today. A great example of how this works relies on a slightly older piece of technology: a record player with electronic amplification. A vinyl record reproduces sound as the needle of the record player travels through its grooves and vibrates. The record player converts these small vibrations into an electrical signal. In the amplifier, this current is strengthened and reproduced as sound at a much louder volume. Much of our other technology works in the same way - electrical impulses are increased in strength in order to produce a louder sound. This is how microphones work, as well as musical instrument amplifiers such as guitar amps.
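
At its core, electronic amplification is multiplication: the signal's amplitude is scaled up while its frequency content is left alone, which is why the pitch does not change. A minimal sketch (the gain value is arbitrary):

    import numpy as np

    t = np.linspace(0, 0.01, 441)
    signal = 0.1 * np.sin(2 * np.pi * 440 * t)  # weak input, e.g. from a pickup
    gain = 8.0                                  # assumed amplifier gain
    amplified = gain * signal                   # same pitch, larger amplitude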

The existence of sound amplification brings up a few more questions about when it is necessary and the ways in which sound travels. Obviously, in a small, quiet room, a guitar amp is not needed to hear the sound produced by a guitarist. What happens if it is a long room, however, and the guitar player is very far away? Even if the room is quiet, it may not be easy to hear the guitarist without amplification if you are situated at opposite ends of the room. This pertains to the inverse square law, which relates the amplitude of sound to the distance it travels from its source to a listener. The law states that as the distance doubles, the intensity of the sound decreases by a factor of four. This corresponds to a constant drop of six decibels each time the distance is doubled. Because of this, sound amplification may be necessary simply due to the distance that sound needs to travel, which is especially relevant in large concert halls and outdoor venues.
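
This relationship is simple to compute. A sketch of the intensity drop in decibels between two listening distances:

    import numpy as np

    def level_drop(d1, d2):
        # Decibel change in sound intensity moving from distance d1 to d2
        intensity_ratio = (d1 / d2) ** 2      # inverse square law
        return 10 * np.log10(intensity_ratio)

    print(level_drop(1, 2))   # ~-6.02 dB: one doubling of distance
    print(level_drop(1, 8))   # ~-18.06 dB: three doublings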


Wednesday, January 13, 2016

Haas Effect / Precedence Effect

The Haas effect and the precedence effect are effectively two names for the same psychoacoustic phenomenon and can be used interchangeably for the most part. Haas refers to Helmut Haas, the scientist credited with first describing this effect, while precedence provides a better sense of what the effect actually is. I explored some psychoacoustic phenomena during my first semester studies but decided to put off studying the precedence effect until now because of its relevance to my current focus. The precedence effect is a fundamental reason that we are able to perceive sound dimensionally, which relates directly to my previous post about stereo imaging.

The precedence effect occurs when we hear an initial sound and then its reflections (echoes/delays, reverberations) approximately 1-40 milliseconds later. The information that arrives at our ears during this short period is critical in how our brains determine the location or width of a sound. That is the precedence effect in a nutshell; it is a simple phenomenon to understand as long as you do not dive deeply into the neuroscience, where there is still much to discover. Even a basic understanding of this effect can be very useful for a musician or producer.

The precedence effect, along with stereo imaging, is often used in the mixing process in music production to make an instrument sound "wider" or fit better into the mix. By using a delay/echo audio plugin on a single channel, you can add a delay that falls into the Haas range, 1-40 ms. You can adjust this value to help your instrument or sound fall into the ideal place in the mix. It is important to consider the other parameters in the delay plugin and make sure the effect is subtle enough to take advantage of the precedence effect without adding unnecessary echo. It is also possible to use the precedence effect to make a mono recording sound like a full stereo recording. This process is described succinctly here.
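
For a concrete picture of the technique, here is a sketch that widens a mono signal by delaying one stereo channel into the Haas range (the 20 ms value is just an illustrative choice):

    import numpy as np

    def haas_widen(mono, delay_ms=20.0, rate=44100):
        # Delay one channel by a Haas-range amount to widen a mono sound
        n = int(rate * delay_ms / 1000.0)
        left = np.concatenate([mono, np.zeros(n)])
        right = np.concatenate([np.zeros(n), mono])  # the delayed copy
        return np.stack([left, right], axis=1)       # stereo: (samples, 2)

    t = np.linspace(0, 1.0, 44100)
    guitar = np.sin(2 * np.pi * 196 * t)   # stand-in for a sustained G note
    stereo = haas_widen(guitar)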


Monday, January 11, 2016

Stereo Imaging

Stereo imaging presents another interesting connection between light and sound. The way we perceive depth relies on the fact that we have two eyes that perceive slightly different images due to their spacing. When our brain puts these two flat images together, we are able to see depth. Our ears work in a very similar way in the perception of sound. By hearing slightly different sounds in each ear, our brain is able to combine them to create a sense of spatial awareness. This should be second nature to you, as you certainly use your spatial sense of hearing on a daily basis.

The production of stereo music takes advantage of the sense of depth that our ears provide. A song in stereo means that the two speakers or two headphones/earbuds actually play slightly different versions of the same song. Stereo imaging is the process of using the stereo reproduction of sound to create music that has audible depth and width. A song with good stereo imaging tends to sound more natural and allows instruments to have more clarity in the mix. If you close your eyes while listening to a well-produced recording of a jazz trio, you should be able to easily envision the location of each musician in front of you due to the recording's stereo imaging.

Achieving a recording with good stereo imaging is not an easy process. When recording acoustic instruments, stereo recording techniques need to be utilized to capture audio for both sound channels, along with specific types of microphones that properly capture the depth and width of each instrument.

In post production, after a performance has already been recorded, digital audio plugins that simulate the effect of stereo imaging can be carefully used to make a recording sound more natural or full. Stereo imaging can also be used to make a more well-mixed song by allowing each instrument to have its own physical space. This process can be especially important in the creation of electronic music, which is almost never recorded using a microphone; achieving a good stereo image in electronic music effectively always requires the use of stereo imaging plugins. In the mastering stages of producing a song, stereo imaging can be applied to different frequency ranges instead of specific instruments. In this case, higher frequencies are often manipulated to sound wider, while lower frequencies are kept in the middle. This can allow a mix to sound clearer by granting new space to the different frequency ranges of a song.
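
One common way this frequency-dependent widening is done is with mid/side processing: the stereo signal is split into a mid (center) component and a side (width) component, and only the high frequencies of the side component are boosted. A rough sketch, with an assumed cutoff and boost amount:

    import numpy as np
    from scipy.signal import butter, lfilter

    def widen_highs(stereo, rate=44100, cutoff=2000.0, amount=1.5):
        # Mid/side sketch: widen only the high frequencies of a stereo mix
        left, right = stereo[:, 0], stereo[:, 1]
        mid = (left + right) / 2
        side = (left - right) / 2

        # Boost the side signal above the cutoff frequency only
        b, a = butter(2, cutoff, btype='high', fs=rate)
        side = side + (amount - 1.0) * lfilter(b, a, side)

        # Convert back from mid/side to left/right
        return np.stack([mid + side, mid - side], axis=1)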

Works Cited:
"Introduction to Stereo Imaging -- Theory." Cardiff School of Computer Science and Informatics. Web. 11 Jan. 2016.
"Stereo Imaging." CDT Audio. Web. 11 Jan. 2016.

The Doppler Effect

The Doppler effect is one of the most interesting phenomena related to sound. In order for the Doppler effect to be observed, something must be producing a constant sound while moving relative to an observer. The result is something that many people have experienced or can easily perceive. You may have noticed that when an ambulance passes you on the road, its siren seems to change pitch. Interestingly, the pitch of the siren will seem constant to the person driving the vehicle. The pitch variation that you perceive is a result of the Doppler effect.

What causes the changing pitch of this sound is the movement of its source. The siren produces a sound of constant frequency, but its motion relative to you compresses or stretches the sound waves it emits, which effectively changes their wavelength. Because the speed of sound is constant and equal to wavelength times frequency, the changing wavelength causes the perceived frequency to change. This is the root cause of the Doppler effect. The way we perceive the pitch of a sound is related to its frequency, which explains how we hear the Doppler effect in sound waves. The Doppler effect is also present in light waves, a characteristic that is critical in the study of astronomy. As my blog focuses on sound, I will not go into depth about this aspect of the Doppler effect, but this source provides good introductory information if you are interested.
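
For a concrete example, the perceived frequency of a source moving straight toward a listener is f' = f * v / (v - v_source), where v is the speed of sound. A quick calculation for a hypothetical 700 Hz siren approaching at 30 m/s:

    def doppler_pitch(f_source, v_source, v_sound=343.0):
        # Perceived frequency of a source moving straight toward the listener;
        # a negative v_source means the source is moving away
        return f_source * v_sound / (v_sound - v_source)

    print(doppler_pitch(700, 30))    # ~767 Hz while approaching
    print(doppler_pitch(700, -30))   # ~644 Hz after it passes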

As I begin my second semester, the focus of my studies is shifting toward the recording and reproduction of sound, as well as the digital processing of sound. The Doppler effect exists in the reproduction of sound in certain amps and speakers. Leslie cabinets, commonly used amplifiers for Hammond organs, rely on rotational motion in the production of sound. This gives them a unique and often desirable quality for some musicians. Soon, I will analyze the changing frequencies of sound produced by a Leslie cabinet as an example of the Doppler effect.

Works Cited:
"The Doppler Effect." The Doppler Effect. Physics Classroom. Web. 11 Jan. 2016.
"Doppler Effect." Hyper Physics. Web. 11 Jan. 2016.