
Thursday, October 29, 2015

More Psychoacoustics

In addition to the psychoacoustic phenomena that I have already studied, there are a handful of other interesting effects that cause the human perception of sound to differ from the physical reality.

For most frequencies, perceived pitch varies with the volume of the sound. Higher pitches tend to sound even higher as they increase in volume, and lower pitches tend to sound even lower as they increase in volume. This makes it important for instrumentalists to tune to each other at the same volume, because it enables them to stay in tune as they play at different dynamics.

The human ability to perceive differences in pitch is limited. For very slight changes in pitch, humans cannot perceive the difference, even though a physical difference exists in the sound wave.
In the same vein, human perception of differences in loudness is limited.
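One common way to quantify these small pitch differences is the cent, where 100 cents make one semitone. Here is a quick Python sketch; the idea that the just-noticeable difference is only a few cents is a commonly cited ballpark, not a figure from the book:

```python
import math

def cents(f1, f2):
    """Size of the interval between two frequencies, in cents (100 cents = 1 semitone)."""
    return 1200 * math.log2(f2 / f1)

# A 1 Hz difference near A440 is only about 4 cents -- small enough that
# many listeners cannot hear it as a change in pitch at all.
print(round(cents(440.0, 441.0), 2))  # -> 3.93
```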

Masking is another effect that occurs when two sounds are played at the same time. When one sound is substantially louder than the other, the quieter one cannot be perceived at all, even though it is still physically present in the system.


Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Tuesday, October 27, 2015

Ocarina Analysis

The ocarina is a unique instrument: because of its shape, it is able to generate a very pure tone. I recorded the waveform of a note played on an ocarina and compared it to a couple of other sources in order to verify this.

The first sound recorded was a clap:
This wave shape is extremely far from pure due to its lack of uniformity.

I then recorded a tuning fork:
Compared to the clap, this waveform is much more pure and is clearly sinusoidal.

Lastly, I recorded an ocarina (a twelve-hole alto playing a low C):

As you can see, the wave produced by the ocarina is even more pure than that of the tuning fork by a noticeable margin. The claim that an ocarina can produce a nearly perfect tone is certainly valid! 



Friday, October 23, 2015

Combination Tones

One of the most interesting psychoacoustic phenomena is the perception of combination tones. Essentially, this means that humans can perceive tones that are not physically present in the sound wave. When two pure tones are played, two main combination tones are heard: the sum tone and the difference tone. As the names imply, the sum tone is perceived at the sum of the frequencies of the original tones, and the difference tone is perceived at the difference between them. Because of this, two pure tones played simultaneously will cause one to perceive four different frequencies!
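To make this concrete, here is a quick Python sketch; the two starting frequencies are illustrative values of my own, not taken from the book:

```python
# Two pure tones played together (frequencies in hertz; values are illustrative).
f1, f2 = 440.0, 660.0  # e.g., A4 and E5

sum_tone = f1 + f2         # perceived sum tone
difference_tone = f2 - f1  # perceived difference tone

# Four perceived frequencies in total: the two real tones plus two combination tones.
print(sorted([f1, f2, sum_tone, difference_tone]))  # -> [220.0, 440.0, 660.0, 1100.0]
```

Notice that the difference tone here (220 Hz) lands a full octave below the lower of the two real tones, which hints at why combination tones can reinforce the bass.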

Beyond being an interesting characteristic of human hearing, combination tones have played a practical role in music technology. When speaker technology was less advanced, many speakers struggled to reproduce low-frequency sounds properly, but combination tones allowed listeners to perceive those frequencies anyway.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Friday, October 16, 2015

Equal Loudness Curves

Having studied some of the essential parts of the ear, I am now moving on to psychoacoustics, the study of how sound is perceived by humans and how this perception differs from the physical reality of sound.

One main difference in the human perception of sound versus reality is the perceived loudness of sounds of different frequencies. Studies were conducted that varied the frequency and intensity of sounds to find the intensity of different frequencies that humans perceive to be equally loud. Data from such studies are often compiled graphically as equal loudness curves:
(The Physics of Music and Color)

In this graph, the threshold of hearing represents the lowest intensity at which each frequency can be perceived, and the threshold of pain represents the lowest intensity at which each frequency begins to cause pain. The lower parts of each curve signify the frequencies that humans perceive most easily, or perceive as louder, because such frequencies require less intensity to be perceived. This places the most sensitive range of human hearing at roughly 1,000-4,000 hertz.

An understanding of equal loudness curves is an important part of mixing music properly. The goal of mixing is to make each instrument heard clearly. Because of this, one might assume that the EQ spectrum of a well-mixed song would be perfectly flat, since the instruments occupying different frequency ranges would have the same intensity. Equal loudness curves prove this assumption wrong, however. To compensate for the human perception of sound, a well-mixed song will have lower frequencies that are more intense than those in the 1,000-4,000 hertz range, so that listeners perceive them at the same volume. Here is an EQ spectrum analysis of a professionally mixed song to illustrate this point:


Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Wednesday, October 14, 2015

Pitch Discrimination

As I have discussed in a previous post, we distinguish sounds with a subjective quality called pitch. Most people can easily identify the notes of a bass as low and the notes of a piccolo as high. The physical property of sound that differentiates these instruments is the frequency of the sounds they produce. Sound waves with higher frequencies are perceived to have a higher pitch, and the opposite is true for sound waves with lower frequencies. While the general range of human hearing is 20-20,000 hertz, how are we able to differentiate sounds within that range that differ in frequency? The answer lies in the concept of pitch discrimination.

As I mentioned in my previous post, the fundamental part of the ear that allows humans to differentiate the pitch of sounds is the cochlea, whose hair cells convert sound energy into nerve signals that the brain can perceive. The problem lies in how the brain differentiates the nerve signals sent to it by the cochlea, since "all nerve impulses are alike" (The Physics of Music and Color). The two things that differentiate nerve impulses are the specific hair cells that transmit the signal and the pattern of impulses over time. Scientific evidence suggests that the brain utilizes both of these aspects in pitch discrimination. The theories that explain these approaches are, respectively, the Place Theory of Pitch Perception and the Rhythm Theory of Pitch Perception.

The Place Theory of Pitch Perception can be explained with hair cells. Effectively, different hair cells correspond to different frequencies because of how sound passes through the ear. Certain hair cells are triggered by sounds of specific frequencies. Because the brain is able to distinguish specific hair cells, it is able to create pitch discrimination.

The Place Theory of Pitch Perception is often considered incomplete, because it is not able to account for how sensitive the human ear is to slight changes in pitch. For this reason, the Rhythm Theory of Pitch Perception is often considered to occur simultaneously. According to this theory, the brain is able to interpret different patterns of vibrating hair cells triggered by sound waves of different frequencies in order to create pitch discrimination.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

The Ossicles and the Cochlea

The ossicles and the cochlea are the two main parts of the ear that take the vibrations of the eardrum and convert them into information that the brain can understand.

The ossicles are three bones within the middle ear that aim to increase the amount of sound that passes from the eardrum to the cochlea. To achieve this, the ossicles apply a process rooted in Archimedes' Principle of Lever Action. This image of a balanced teeterboard provides a nice picture of this principle:
(The Physics of Music and Color)

Though the two figures have different weights, the teeterboard remains balanced because the heavier figure sits closer to the point where the board pivots (F). Because of this difference in distance from the pivot point, moving the closer end of the board will move the other end by a larger magnitude. The ossicles operate in a similar way, using a pivot point to increase the amplitude of the vibrations of sound waves that pass into the ear.
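As a rough Python sketch of the lever principle (all numbers here are made up for illustration, not taken from the anatomy of the ear):

```python
# Archimedes' principle of lever action: a lever balances when
# weight * distance-from-pivot is equal on both sides.
heavy_weight = 90.0    # the heavier figure...
heavy_distance = 1.0   # ...sits close to the pivot
light_distance = 3.0   # the lighter figure sits farther away

# Weight needed on the far side to balance the board:
light_weight = heavy_weight * heavy_distance / light_distance
print(light_weight)  # -> 30.0

# A small motion at the short end becomes a proportionally larger
# motion at the long end -- the same trade the ossicles exploit:
motion_at_long_end = 0.1 * (light_distance / heavy_distance)
print(motion_at_long_end)  # about three times the input motion
```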

The amplified vibrations of the ossicles pass into the inner ear, where they meet the cochlea. The cochlea is the part of the ear that distinguishes the frequency of sound and converts sound energy into information that the brain can interpret. The cochlea relies on a small amount of fluid that is similar to water but has twice the viscosity. The different parts of the cochlea take the vibration of sound and cause vibrations within this cochlear fluid. Hair cells, which are sensory cells connected to the auditory nerve, detect the vibrations in the cochlear fluid and convert them into nerve signals that the brain can understand.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Wednesday, October 7, 2015

The Auditory Canal and the Eardrum

In order to begin my study of the human perception of sound, I began by researching the auditory canal and eardrum, which are arguably the two most important components of the ear. For reference, here is the diagram from my initial post that details the parts of the ear:
(The Physics of Music and Color)

The auditory canal is the tube that connects the outer ear to the eardrum and middle ear. Its purpose is to bring the vibration of air particles, or sound waves, from the outside air to the inner parts of the ear that can interpret them. Interestingly, the auditory canal behaves like a tube that is open at one end, something I explored in a previous experiment. This causes it to resonate at certain frequencies that are dependent on its length and diameter; they are approximately 3,000, 9,000, and 16,000 hertz (The Physics of Music and Color). This emphasizes certain frequencies, which is one way our perception of sound differs from the reality of sound, but that topic will get its own blog post in the future. The shape of the auditory canal is useful for another reason: it prevents some of the reflection of sound that naturally occurs within the ear, which allows the ear to maximize the amount of sound humans can perceive.
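The resonances of a tube closed at one end fall at odd multiples of the fundamental, f_n = (2n - 1)v / (4L). A quick Python sketch, using an assumed canal length (the ~2.9 cm figure is a round textbook-style number, not a measurement):

```python
speed_of_sound = 343.0  # m/s, in air at room temperature
canal_length = 0.0286   # meters, roughly 2.9 cm (assumed value)

# First three resonances of a tube open at one end and closed at the other:
# odd multiples of the fundamental v / 4L.
resonances = [(2 * n - 1) * speed_of_sound / (4 * canal_length) for n in (1, 2, 3)]
print([round(f) for f in resonances])  # -> [2998, 8995, 14991]
```

This idealized cylinder puts the third resonance near 15,000 Hz rather than the 16,000 Hz cited above; the real canal is not a uniform tube, so its resonances shift somewhat.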

The eardrum is the first part of the ear that sound waves come into contact with after passing through the auditory canal, which makes it incredibly important. The eardrum effectively measures differences between pressure within the ear and pressure outside of the ear. Differences in pressure cause the eardrum to vibrate, and the eardrum sends this energy to the other parts of the ear to be interpreted. It is a very small and delicate membrane; these characteristics allow it to vibrate more easily and to be sensitive, which helps us to perceive quiet sounds.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Tuesday, October 6, 2015

Introduction to Hearing

My previous posts have discussed the physical nature of sound by looking at sound waves. While I have researched the physical properties of sound waves that dictate the timbre and pitch of instruments, I have not yet addressed how we as humans are able to perceive these differences. While there is a lot of knowledge about the neurological processes that allow us to hear, there is still much to learn.

Humans perceive sound through the use of two things: the ear and the brain. The ear is like a sensor. It collects the different sound waves passing through the air, and then converts them into information that the brain can understand. The actual perception of sound then occurs in the brain. The ear is a very complex piece of anatomy: in order to turn sound waves into nerve signals, it utilizes many different components. This is a nice diagram showing the parts of the inner ear:
(The Physics of Music and Color)

Future posts will go into specific details about some of these individual components, but for now it is important to grasp just how complicated the ear is. It must be able to detect different frequencies of sound waves - this is what allows us to distinguish sounds by pitch. In fact, most people can perceive sounds from 20-20,000 Hz, though the upper limit decreases with age. The ear must also be able to process different wave shapes - this is what allows us to distinguish the timbre of a sound and differentiate musical instruments. It must also be able to perceive the amplitude of sound waves - this is what allows us to distinguish sounds by volume. All of these processes are handled by the different components of the inner ear in interesting ways.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Sunday, October 4, 2015

Reflectance of Sound Waves

Up to this point, my blog posts have focused on how sound is produced and how it is transmitted through air. Sound displays other behaviors, however, as it passes through air and comes into contact with other objects. Sound's behavior is very similar to that of light. When light comes in contact with an object, it can be transmitted, absorbed, or reflected. Every substance has an index of refraction, which dictates how it reflects and refracts light. This can be modeled by the following equation where R is reflectance and the n values are the indices of refraction:
R = ((n1 - n2) / (n1 + n2))^2
(The Physics of Music and Color)

The index of refraction appears in the relationship v = c/n, where v is the wave velocity, c is the speed of light, and n is the index of refraction. With a known index of refraction, the wave velocity in different mediums can be calculated, and the above equation can be rewritten in terms of wave velocity. This allows us to connect to the equation for sound reflectance, which must account for mass density (ρ) as well. The reflectance of sound is given by the following equation:
R = ((ρ1v1 - ρ2v2) / (ρ1v1 + ρ2v2))^2
(The Physics of Music and Color)

In this equation, the ρ values represent the mass densities and the v values represent the wave velocities of sound in each medium. This allows us to calculate the percentage of sound that reflects off of a medium when we know the mass density of each medium and the wave velocity of sound within it.
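Here is a small Python sketch of that calculation; the air and water values are standard textbook approximations, not figures from the book:

```python
def reflectance(rho1, v1, rho2, v2):
    """Fraction of sound intensity reflected at the boundary between two media,
    using the acoustic impedance z = rho * v of each medium."""
    z1, z2 = rho1 * v1, rho2 * v2
    return ((z1 - z2) / (z1 + z2)) ** 2

# Sound in air (rho ~ 1.2 kg/m^3, v ~ 343 m/s) hitting water (rho ~ 1000, v ~ 1480):
print(round(reflectance(1.2, 343.0, 1000.0, 1480.0), 4))  # -> 0.9989
```

Nearly all of the sound reflects at an air-water boundary, which is why sounds from above the surface are so muffled underwater.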

Knowing how much sound certain materials reflect has a nice application in the world of music. When constructing a music studio, you want a space where you can monitor sound in an ideal setting. If sound reflects significantly off of the surfaces of your studio, the space becomes less than ideal for accurate mixing, so the walls should be treated with materials that reflect as little sound as possible. This is why you often see sponge-like material on the walls of treated rooms; it absorbs sound and reflects a very low percentage of it.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.