
Saturday, December 19, 2015

Equal Tempered Tuning & Flaws in Just Tuning

Just tuning presents an interesting method of tuning the different notes of a scale based on specific whole-number frequency ratios, ultimately producing twelve notes whose intervals are perfectly harmonic. While this sounds like the best possible tuning system in theory, it runs into substantial problems in practice. For example, when an instrument is tuned to C, the minor third D-F ends up having a different frequency ratio than the normal minor third, C-Eb. These harmonic inconsistencies occur in many other cases, such as when the key of a song changes, and they lead to dissonance and a lack of flexibility in playing music.

Equal tempered tuning, also known as equal temperament tuning, aims to resolve the problems created by just tuning. It does so by making the twelve semitones of an octave equally spaced in terms of their frequency ratios. This adds a consistency to the tuning process that just tuning lacks, but loses some of the harmonic purity of the fractional intervals. The relationship between the frequencies of notes in equal tempered tuning is given by the equation frequency ratio = 2^(n/12), where n is the number of semitones between them (The Physics of Music and Color). To hear a comparison between equal temperament and just tuning, check out this video.
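As a quick sketch of this formula (using the standard A440 reference pitch, which comes up later in this blog), the frequency of any equal-tempered note can be computed from its distance in semitones from A4:

```python
import math

def equal_tempered_frequency(semitones_from_a4, a4=440.0):
    """Frequency of a note n semitones above (or below) A4 in 12-tone equal temperament."""
    return a4 * 2 ** (semitones_from_a4 / 12)

# Each semitone multiplies the frequency by 2^(1/12) ~ 1.0595,
# so twelve semitones give exactly an octave (a 2:1 ratio).
print(round(equal_tempered_frequency(12), 2))   # A5, one octave up: 880.0
print(round(equal_tempered_frequency(-9), 2))   # middle C (C4): ~261.63
```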

The general consensus is that the flexibility given by equal temperament tuning is essential and makes up for the lack of purity that is achieved through just tuning.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Cents and Musical Intervals

While the western musical scale is broken into twelve notes, we often need more specific ways of describing the pitch of a sound, especially during the tuning process. For example, let's say that you are tuning one of your guitar's strings to the E note played by a piano. As you twist your tuning peg and strike the corresponding string, you get closer to the frequency of the E and eventually reach a point where the frequency of your string is higher than E but lower than F. How can we quantify this difference? The piano player may tell you that you are 50 cents sharp of E, but what does this mean?

The chromatic scale is broken into twelve notes, but the cents system allows us to break pitch up even further. The basic definition is that the interval between two adjacent semitones consists of 100 cents, where each cent is an equal frequency ratio (a factor of 2^(1/1200)) rather than an equal number of hertz. Since an octave consists of twelve semitones, we also know that an octave is made up of 1200 cents. Given this understanding, we now know that when the pianist tells us we are 50 cents sharp of E, our note is tuned exactly halfway between E and F. Though this frequency does not have a specific letter name, it can easily be quantified by using the cents system.

Many electronic tuning devices (or tuning apps) can help you tune your instrument to standard pitches. With remarkable precision, these devices are often able to show how many cents sharp or flat your detuned note is in order to help you reach the ideal frequency.
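The conversion between a frequency ratio and cents is a single logarithm, which is essentially what a tuner computes. A minimal sketch (the detuned string frequency here is an arbitrary number for illustration):

```python
import math

def cents_between(f1, f2):
    """Number of cents from frequency f1 up to frequency f2 (1200 cents per octave)."""
    return 1200 * math.log2(f2 / f1)

# An octave is exactly 1200 cents, and a semitone exactly 100.
print(round(cents_between(220.0, 440.0)))              # 1200
print(round(cents_between(440.0, 440.0 * 2 ** (1 / 12))))  # 100

# A string measured at 339.0 Hz against equal-tempered E4 (~329.63 Hz)
# comes out roughly 49 cents sharp:
e4 = 440.0 * 2 ** (-5 / 12)
print(round(cents_between(e4, 339.0)))
```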

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

The Just Chromatic Scale

The western musical scale consists of twelve notes, each a semitone (half step) apart, and the scale containing all twelve notes is called the chromatic scale. Tuning an instrument to play all twelve half steps of the chromatic scale is a bit more difficult than tuning a pentatonic or diatonic scale.

Starting with the eight frequencies of a just diatonic scale, major third intervals are used to calculate the needed sharp/flat notes to complete the chromatic scale. Both descending and ascending major thirds can be used to find the ratios of the new notes, and the process of reducing the octave, as mentioned in my previous post, can be used to ultimately end up with the twelve ascending ratios of a chromatic scale. The following graphic is very useful in understanding the just chromatic tuning process:
(The Physics of Music and Color)

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Tuesday, November 24, 2015

Pythagorean Tuning & Just Tuning

One of the most basic and essential methods of tuning is called Pythagorean tuning. It relies on a 3:2 frequency relationship between the fifth and the root of the scale. Starting with an initial frequency, for example middle C, the next note is tuned by multiplying the frequency of C by 3/2. This multiplication process is continued by treating each new note as the root note until every note has a specific frequency. This does create a problem, however, as the resulting notes do not fit into a scale within the same octave range. One essential relationship between notes is that an octave corresponds to a 2:1 frequency ratio; this relationship is found in essentially every method of western tuning. The octave relationship is applied to all of the frequencies that land above the octave: they are repeatedly halved until they form an ascending scale. This process is known as reducing the octave (The Physics of Music and Color).
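The stacking-fifths and octave-reduction process described above can be sketched in a few lines (a simplified illustration generating seven notes from an arbitrary root):

```python
def pythagorean_ratios(num_notes=7):
    """Build frequency ratios by stacking 3:2 fifths, then reduce each into one octave."""
    ratios = []
    r = 1.0
    for _ in range(num_notes):
        ratios.append(r)
        r *= 3 / 2          # tune the fifth above the current note
    # reducing the octave: halve any ratio until it falls within [1, 2)
    reduced = []
    for r in ratios:
        while r >= 2:
            r /= 2
        reduced.append(r)
    return sorted(reduced)

print(pythagorean_ratios())
# -> [1.0, 1.125, 1.265625, 1.423828125, 1.5, 1.6875, 1.8984375]
```

These decimals are exactly the Pythagorean fractions 1, 9/8, 81/64, 729/512, 3/2, 27/16, and 243/128.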

Another important tuning method is called just tuning. Like Pythagorean tuning, just tuning relies on the 3:2 relationship between the fifth and the root of the scale, but it also relies on a 5:4 relationship between the major third and the root. By applying these two relationships to the root note of a scale, and through the process of reducing the octave, a full diatonic scale can be defined through just tuning. After the process is completed, each note ends up with a fractional relationship to the root:
(The Physics of Music and Color)
Because of the reliance on the ratio of the third, just tuning creates a scale with relationships that differ from pythagorean tuning. 
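A sketch of the resulting just diatonic ratios (these are the standard just major-scale fractions, built from the 3:2 fifth and 5:4 major-third relationships above):

```python
from fractions import Fraction as F

# Just diatonic (major) scale ratios relative to the root,
# built from 3:2 fifths and 5:4 major thirds, reduced into one octave.
just_major = {
    "C": F(1, 1), "D": F(9, 8), "E": F(5, 4), "F": F(4, 3),
    "G": F(3, 2), "A": F(5, 3), "B": F(15, 8), "C'": F(2, 1),
}

# e.g. E is a pure 5:4 major third above C, and B is a pure 5:4 third above G:
print(just_major["G"] * F(5, 4))  # 15/8, which matches B
```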


Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Wednesday, November 18, 2015

Musical Scales and Tuning

For the rest of my research this semester, I am going to be looking mostly at musical scales and the tuning of musical instruments as I prepare to construct a musical instrument.

Musical scales are sets of notes that instruments play centered around one note that acts as a resolution. For example, a C major scale includes eight notes, the white keys on a piano, and resolves on the note C. Another important scale is the chromatic scale, which includes every note in western music, or every white and black key on a piano. The relationships between the frequencies of notes in a scale can be quantified mathematically, and different scales have different physical relationships. As I continue with my research, I will spend a fair amount of time exploring these relationships within different types of scales.

Musical tuning is a more relative concept. Two instruments playing a C major scale can be playing notes of completely different frequencies if they are tuned to different reference pitches. One instrument could be tuned to A440 and the other to A410, and together they would sound horribly out of tune. A440 is the conventional tuning standard for today's instruments. This means that instruments are initially tuned by starting with an A note (the A just above middle C) at a frequency of 440 hertz. The remaining notes are tuned relative to this initial A by using mathematical relationships. There are two main types of tuning relationships, just intonation and equal temperament, and I will be exploring both throughout the next couple of weeks.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Monday, November 16, 2015

Beats Experiment

Beats, or beat frequencies, are an acoustic phenomenon that occurs when sounds of two different frequencies overlap. As I have already learned through Fourier's theorem, overlapping waves synthesize a wave of a new shape; this idea is fundamental to beats.

For my experiment, I first recorded the waveform of two tuning forks:

(C 256)

(G 384)

Both tuning forks appear to have a nearly pure waveform. When I recorded the ringing of both tuning forks simultaneously, I found a very different looking waveform:



The data pertaining to the graphs are summarized in the following table:

The frequency found when both tuning forks were played was about 125 hertz, which is approximately the higher frequency minus the lower frequency (384 − 256 = 128 hertz). This difference is known as the beat frequency, and it appears because of the overlap of the two waves' different frequencies.

Beats play an essential role in the physics of music and harmony. Because the two tuning forks were in tune relative to each other, the beat frequency was also in tune and sounded pleasant. When instruments are properly in tune, beat frequencies are able to add to the harmonic richness of sound.
However, if two musical instruments are out of tune and are played together, their beat frequency is not in tune. When this happens, humans perceive the combination of sounds to be dissonant. In this sense, playing music is the act of creating air vibrations that act constructively with each other in order to synthesize something new. 
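The beat frequency falls out of the trigonometric identity cos(a) + cos(b) = 2·cos((a+b)/2)·cos((a−b)/2): the sum of the two tones behaves like a tone at the average frequency whose amplitude swells and fades at the difference frequency. A minimal numerical check using the two tuning-fork frequencies (pure math, no audio):

```python
import math

f1, f2 = 256.0, 384.0   # the two tuning forks (C and G)

def superposition(t):
    """Direct sum of the two pure tones."""
    return math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)

def product_form(t):
    """Average-frequency carrier modulated by a slow envelope."""
    return 2 * math.cos(2 * math.pi * (f1 + f2) / 2 * t) \
             * math.cos(2 * math.pi * (f2 - f1) / 2 * t)

# the two forms agree at every sample time
for i in range(1000):
    t = i / 44100
    assert abs(superposition(t) - product_form(t)) < 1e-9

# the envelope produces loudness swells at the difference frequency:
print(f2 - f1)  # 128.0 beats per second
```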

Works Cited:
1.  "Interference and Beats." The Physics Classroom. Web. 16 Nov. 2015.


Monday, November 9, 2015

Simple Harmonic Oscillations and Hooke's Law

As I mentioned in my first post, the sine wave is the basic building block of sound; this idea is developed through an understanding of Fourier's Theorem. A sine wave models simple harmonic motion, and because of this, understanding simple harmonic motion is critical to understanding sound. In this post, I am temporarily moving away from the direct physics of sound in order to focus on this foundational concept in another context.

The most traditional way of modeling the sine wave, through a simple harmonic oscillator, requires a simple setup involving a spring and a mass. The spring hangs from a surface, and the mass is attached to the bottom. One then applies a force to the spring by pushing it up or pulling it down. A key concept in this system is Hooke's Law, which states that the displacement of the object on the spring is directly proportional to the force applied to the spring. This is represented by the equation F = kx, where k is the spring constant, x is the displacement, and F is the force. Integrating this equation (or an equation derived with geometry) shows that the elastic energy stored in the spring is equal to kx²/2. This stored elastic energy causes the mass to bounce up and down in a sinusoidal manner. I was able to record the displacement over time of a mass on a spring:
The shape of the simple harmonic motion is clear in the spring system, and it is interesting to compare this graph to a musical wave. The following image is taken from my post about the sound of an ocarina and depicts the waveform of an ocarina through the relative air pressure over time: 
I would argue that, based on this visual comparison, the ocarina makes an even better and more interesting simple harmonic oscillator, given the purity of the graph's shape, but both systems model simple harmonic motion very well.
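The mass-on-spring system above can also be simulated directly from Hooke's Law. This sketch (with an arbitrary mass and spring constant, not values from my setup) steps the motion forward in time and confirms the expected period T = 2π·sqrt(m/k):

```python
import math

m, k = 0.5, 20.0        # mass (kg) and spring constant (N/m), arbitrary values
dt = 0.0001             # time step (s)

x, v = 0.05, 0.0        # start displaced 5 cm from equilibrium, at rest
period = 2 * math.pi * math.sqrt(m / k)   # theoretical period, ~0.993 s

# semi-implicit Euler integration: Hooke's law F = -kx drives the acceleration
t = 0.0
while t < period:
    a = -k * x / m
    v += a * dt
    x += v * dt
    t += dt

# after one full period, the mass is back near its starting displacement
print(round(x, 3))  # ~0.05
```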

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Wednesday, November 4, 2015

Wave Velocity of String Vibrations

As I move away from the human perception of sound, I'm beginning to look at the physics of sound that are directly applied to musical instruments in order to prepare for my culminating project of the semester. The first thing I'm studying in this context is the wave velocity of strings, a topic essential to understanding how string instruments (piano, violin, guitar) work.

In a previous post, I established that the frequency of a string is proportional to the square root of its tension divided by its linear mass density. This makes sense at a basic level to anyone who is familiar with tuning a string instrument, such as a guitar. Increasing the tension of a string causes it to create a sound of higher pitch, and increasing the mass of a string causes it to create a sound of lower pitch. This ties into the idea of wave velocity, the speed at which a wave is able to travel on the string. For a string of fixed length, a faster wave has a higher frequency and thus a higher pitch. The opposite is true for a slower wave.

I've established that the pitch of a string depends on its tension and its linear mass density, where linear mass density is defined as the string's mass divided by its length. From this definition, we can see that changing the mass of a string will affect its pitch.

With an understanding of wave velocity, an instrument creator has three ways to set the pitch of a string instrument. It should already be known that pitch is related to the length of a string, so one can change the length of the string. However, only being able to adjust string length does not grant much freedom for creativity in instrument design. Luckily, one can change the tension on a string and the linear mass density of the string in order to tune it. This is a fundamental idea in a guitar. While the strings of a guitar are similar in length, they clearly differ in thickness, which allows them to produce sounds of different pitches. A guitar player also knows that he or she can adjust the pitch of strings by making them more or less tense by turning the tuning keys.
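These relationships can be sketched with the standard formulas v = sqrt(T/μ) and f = v/(2L) for the fundamental of a string fixed at both ends (the numbers below are illustrative, not measurements from a real guitar):

```python
import math

def fundamental_frequency(tension, mass, length):
    """Fundamental frequency of a string fixed at both ends.
    tension in newtons, mass in kg, length in meters."""
    mu = mass / length                 # linear mass density (kg/m)
    v = math.sqrt(tension / mu)        # wave velocity on the string
    return v / (2 * length)            # fundamental: wavelength = 2L

# same length, heavier string -> lower pitch
print(round(fundamental_frequency(80.0, 0.0003, 0.65), 1))
print(round(fundamental_frequency(80.0, 0.0060, 0.65), 1))

# same string, more tension -> higher pitch
print(round(fundamental_frequency(100.0, 0.0003, 0.65), 1))
```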

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Thursday, October 29, 2015

More Psychoacoustics

In addition to the psychoacoustic phenomena that I have already studied, there are a handful of other interesting effects that cause the human perception of sound to differ from the physical reality.

For most frequencies, the perception of pitch varies with the volume of the sound. Higher pitches tend to sound even higher as they increase in volume, and lower pitches tend to sound even lower as they increase in volume. This makes it important for instrumentalists to tune to each other at the same volume, because it enables them to stay in tune as they play at different dynamics.

The ability of humans to perceive differences in pitch is limited: for very slight changes in pitch, humans cannot perceive the difference despite a physical difference existing in the sound wave. In the same vein, human perception of differences in loudness is limited.

Masking is another effect that happens when two sounds are played at the same time. When one sound is substantially louder than the other, the quieter one cannot be perceived at all, even though it still physically exists in the system.


Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Tuesday, October 27, 2015

Ocarina Analysis

The ocarina is a unique instrument; because of its shape, it is able to generate a very pure tone. I recorded the waveform of an ocarina note and compared it to a couple of other sources in order to verify this.

The first sound recorded was a clap:
This wave shape is extremely far from pure due to its lack of uniformity.

I then recorded a tuning fork:
Compared to the clap, this waveform is much more pure and is clearly sinusoidal.

Lastly, I recorded an ocarina (Twelve hole alto playing a low C):

As you can see, the wave produced by the ocarina is even more pure than that of the tuning fork by a noticeable margin. The claim that an ocarina can produce a nearly perfect tone is certainly valid! 



Friday, October 23, 2015

Combination Tones

One of the most interesting psychoacoustic phenomena is the perception of combination tones. Essentially, this means that humans can perceive harmonics of sound waves that do not physically exist. When two pure tones are played, two main combination tones are heard: the sum tone and the difference tone. As the names imply, the sum tone is perceived at the sum of the frequencies of the original tones, and the difference tone is perceived at the difference between them. Because of this, two pure tones played simultaneously will cause one to perceive four different frequencies!

Beyond being an interesting characteristic of human hearing, combination tones have played a practical role in music technology. When speaker technology was less advanced, many speakers struggled to properly produce low frequency sounds, but combination tones allowed them to be perceived by the listener.
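As a toy illustration of the four perceived frequencies (the two example tones are chosen arbitrarily):

```python
def perceived_frequencies(f1, f2):
    """The two real tones plus the perceived sum and difference combination tones."""
    return sorted({f1, f2, f1 + f2, abs(f1 - f2)})

# two pure tones at 500 Hz and 700 Hz yield four perceived frequencies
print(perceived_frequencies(500, 700))  # [200, 500, 700, 1200]
```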

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Friday, October 16, 2015

Equal Loudness Curves

Having studied some of the essential parts of the ear, I am now moving on to psychoacoustics, the study of how sound is perceived by humans and how this perception differs from the physical reality of sound.

One main difference in the human perception of sound versus reality is the perceived loudness of sounds of different frequencies. Studies were conducted that varied the frequency and intensity of sounds to find the intensity of different frequencies that humans perceive to be equally loud. Data from such studies are often compiled graphically as equal loudness curves:
(The Physics of Music and Color)

In this graph, the threshold of hearing represents the lowest intensity of each frequency that could be perceived, and the threshold of pain represents the lowest intensity at which each frequency began to cause pain. The lower parts of each curve signify the frequencies that humans are able to perceive the most easily, or perceive as louder, because such frequencies require less intensity to be perceived. This makes the most sensitive range of human hearing 1000-4000 hertz.

An understanding of equal loudness curves is an important part of mixing music properly. The goal of mixing is to make each instrument heard well. One might therefore assume that the EQ spectrum of a well mixed song would be perfectly flat, because the instruments occupying different frequency ranges would have the same intensity. Equal loudness curves prove this assumption wrong, however. To accommodate the human perception of sound, a well mixed song will have lower frequencies that are more intense than the 1,000-4,000 hertz range, so that people perceive them to be equally loud. Here is an EQ spectrum analysis of a professionally mixed song to illustrate this point:


Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Wednesday, October 14, 2015

Pitch Discrimination

As I have discussed in a previous post, we are able to distinguish sounds with a subjective quality called pitch. Most people are able to easily identify the notes of a bass as low and the notes of a piccolo as high. The physical property of sound that differentiates these instruments is the frequency of the sounds that they produce. Soundwaves with higher frequencies are perceived to have a higher pitch, and the opposite is true for soundwaves with lower frequencies. While you may know that the general range of hearing for humans is 20-20,000 hertz, how are we able to differentiate sounds that differ in frequency? The answer lies within the concept of pitch discrimination.

As I mentioned in my previous post, the fundamental part of the ear that allows humans to differentiate the pitch of sounds is the cochlea, where hair cells convert sound energy into nerve signals that the brain can perceive. The difficulty lies in how the brain differentiates the nerve signals sent to it by the cochlea, as "all nerve impulses are alike" (The Physics of Music and Color). The two things that differentiate nerve impulses are the specific hair cells that transmit the signal and the pattern of impulses over time. Scientific evidence suggests that the brain utilizes both of these aspects in pitch discrimination. The theories that explain these approaches are, respectively, the Place Theory of Pitch Perception and the Rhythm Theory of Pitch Perception.

The Place Theory of Pitch Perception can be explained with hair cells. Effectively, different hair cells correspond to different frequencies because of how sound passes through the ear. Certain hair cells are triggered by sounds of specific frequencies. Because the brain is able to distinguish specific hair cells, it is able to create pitch discrimination.

The Place Theory of Pitch Perception is often considered incomplete, because it is not able to account for how sensitive the human ear is to slight changes in pitch. For this reason, the Rhythm Theory of Pitch Perception is often considered to occur simultaneously. According to this theory, the brain is able to interpret different patterns of vibrating hair cells triggered by sound waves of different frequencies in order to create pitch discrimination.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

The Ossicles and the Cochlea

The ossicles and the cochlea are the two main parts of the ear that take the vibrations of the eardrum and convert them into information that the brain can understand.

The ossicles are three bones within the middle ear that aim to increase the amount of sound that passes from the eardrum to the cochlea. To achieve this, the ossicles apply a process rooted in Archimedes' Principle of Lever Action. This image of a balanced teeterboard provides a nice picture of this principle:
(The Physics of Music and Color)

Though one figure weighs more than the other, the teeterboard remains balanced because of the difference in distance from the pivot point (F): the heavier figure sits closer to it. Because of this difference in distance, moving the closer end of the board will move the other end by a larger amount. The ossicles operate in a similar way, using a pivot point to increase the amplitude of the vibrations of sound waves that pass into the ear.
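The lever principle can be sketched numerically: torque balance requires m1·d1 = m2·d2, and the displacement of each end scales with its distance from the pivot. (The masses and distances below are hypothetical, chosen only for illustration.)

```python
# Archimedes' principle of lever action: torques balance when m1*d1 == m2*d2.
m1, d1 = 60.0, 1.0    # heavier figure, close to the pivot (kg, m)
m2, d2 = 20.0, 3.0    # lighter figure, far from the pivot

assert m1 * d1 == m2 * d2   # the teeterboard is balanced

# If the close end moves down by 2 cm, the far end moves d2/d1 times as far,
# analogous to how the ossicles amplify the eardrum's tiny vibrations.
close_end_motion = 0.02
far_end_motion = close_end_motion * (d2 / d1)
print(round(far_end_motion, 2))  # 0.06
```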

The amplified vibrations of the ossicles pass into the inner ear, where they meet the cochlea. The cochlea is the part of the ear that distinguishes the frequency of sound and converts sound energy into information that the brain can interpret. The cochlea relies on a small amount of fluid that is similar to water but has twice the viscosity. The different parts of the cochlea take the vibration of sound and cause vibrations within this cochlear fluid. Hair cells, which are sensory nerve cells, detect the vibrations in the cochlear fluid and convert them into signals that the brain can understand.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Wednesday, October 7, 2015

The Auditory Canal and the Eardrum

In order to begin my study of the human perception of sound, I began by researching the auditory canal and eardrum, which are arguably the two most important components of the ear. For reference, here is the diagram from my initial post that details the parts of the ear:
(The Physics of Music and Color)

The auditory canal is the tube within the ear that connects the outer ear to the eardrum and middle ear. Its purpose is to bring the vibration of air particles, or sound waves, from the outside air to the inner parts of the ear that can interpret them. Interestingly, the auditory canal behaves like a tube that is open at one end, something I explored in a previous experiment. This causes it to resonate at certain frequencies that are dependent on its length and diameter; they are approximately 3,000, 9,000, and 16,000 hertz (The Physics of Music and Color). This emphasizes certain frequencies, which is one way our perception of sound differs from the reality of sound, but that topic will get its own blog post in the future. The shape of the auditory canal is useful for another reason: it is able to prevent some of the reflection of sound that naturally occurs within the ear, which allows the ear to maximize the amount of sound humans can perceive.
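Those resonances follow from the standard closed-tube formula f_n = (2n−1)·v/(4L), where only odd multiples appear. A quick sketch, assuming a canal length of about 2.9 cm and a speed of sound of 343 m/s (rough values I am supplying for illustration, not figures from the book):

```python
def closed_tube_resonances(length, speed_of_sound=343.0, n_modes=3):
    """Resonant frequencies of a tube closed at one end: only odd harmonics appear."""
    return [(2 * n - 1) * speed_of_sound / (4 * length) for n in range(1, n_modes + 1)]

# a ~2.9 cm auditory canal: the first three resonances land near
# the quoted 3,000 / 9,000 / 16,000 hertz
for f in closed_tube_resonances(0.029):
    print(round(f))
```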

The eardrum is the first part of the ear that sound waves come into contact with after passing through the auditory canal, which makes it incredibly important. The eardrum effectively measures differences between pressure within the ear and pressure outside of the ear. Differences in pressure cause the eardrum to vibrate, and the eardrum sends this energy to the other parts of the ear to be interpreted. It is a very small and delicate membrane; these characteristics allow it to vibrate more easily and to be sensitive, which helps us to perceive quiet sounds.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Tuesday, October 6, 2015

Introduction to Hearing

My previous posts have discussed the physical nature of sound by looking at sound waves. While I have researched the physical properties of sound waves that dictate the timbre and pitch of instruments, I have not yet addressed how we as humans are able to perceive these differences. While there is a lot of knowledge about the neurological processes that allow us to hear, there is still much to learn.

Humans perceive sound through the use of two things: the ear and the brain. The ear is like a sensor. It collects the different sound waves passing through the air, and then converts them into information that the brain can understand. The actual perception of sound then occurs in the brain. The ear is a very complex piece of anatomy: in order to turn sound waves into nerve signals, it utilizes many different components. This is a nice diagram showing the parts of the inner ear:
(The Physics of Music and Color)

Future posts will go into specific details about some of these individual components, but for now it is important to grasp just how complicated the ear is. It must be able to detect different frequencies of sound waves - this is what allows us to distinguish sounds by pitch. Most people can perceive sounds from 20-20,000 Hz, though the upper limit decreases with age. The ear also must be able to process different wave shapes - this is what allows us to distinguish the timbre of sound and differentiate musical instruments. And it must be able to perceive the amplitude of sound waves - this is what allows us to distinguish sounds by volume. All of these processes are handled by the different components of the inner ear in interesting ways.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Sunday, October 4, 2015

Reflectance of Sound Waves

Up to this point, my blog posts have focused on how sound is produced and how it is transmitted through air. Sound displays other behaviors, however, as it passes through air and comes into contact with other objects. Sound's behavior is very similar to that of light. When light comes in contact with an object, it can be transmitted, absorbed, or reflected. Every substance has an index of refraction, which dictates how it reflects and refracts light. This can be modeled by the following equation where R is reflectance and the n values are the indices of refraction:
(The Physics of Music and Color)

The index of refraction appears in the relationship v = c/n, where v is the wave velocity, c is the speed of light, and n is the index of refraction. With a known index of refraction, the wave velocity for different media can be calculated, and the above equation can be rewritten in terms of wave velocity. This allows us to connect it to the equation for sound reflectance, which must account for mass density (p) as well. The reflectance of sound is given by the following equation:
(The Physics of Music and Color)

In this equation, the p values represent the mass densities of the two media and the v values represent the wave velocities of sound in them. This allows us to calculate the fraction of sound that reflects off of a medium when we know its mass density and the wave velocity of sound within it.
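A sketch of that calculation, written in the standard acoustic-impedance form R = ((p2·v2 − p1·v1)/(p2·v2 + p1·v1))², with rough textbook values for air and water (illustrative numbers, not figures from the cited book):

```python
def sound_reflectance(p1, v1, p2, v2):
    """Fraction of sound intensity reflected at the boundary between two media.
    p: mass density (kg/m^3), v: wave velocity of sound in that medium (m/s)."""
    z1, z2 = p1 * v1, p2 * v2          # acoustic impedances
    return ((z2 - z1) / (z2 + z1)) ** 2

# air (~1.2 kg/m^3, 343 m/s) meeting water (~1000 kg/m^3, 1480 m/s):
r = sound_reflectance(1.2, 343.0, 1000.0, 1480.0)
print(round(r * 100, 1))   # nearly all of the sound reflects (~99.9%)
```

This huge mismatch in impedance is why sound barely crosses an air-water boundary, and why studio wall treatments aim for materials whose impedance absorbs rather than reflects.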

Knowing how much sound certain materials reflect has a nice application in the world of music. When constructing a music studio, you will want to have a space where you can monitor sound in an ideal setting. If sound reflects significantly off of the surfaces of your studio space, it becomes less than ideal for accurate mixing. The materials of the walls in a music studio should be made of a substance that reflects sound as little as possible. This is why you often see sponge-like material on the walls of treated rooms; they absorb sound and reflect a very low percentage of it. 

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Tuesday, September 29, 2015

Resonance

Resonance describes an interaction between different objects. What happens when a vibrating object comes in contact with another object? Each object has natural frequencies of vibration, or resonant frequencies. Using some data from my first lab as an example, here are the natural frequencies of the harmonics of a single length of string under a tension of 200 grams:
Frequency (Hz)    Mode #
28.5              Fundamental (1st)
57                2nd
114               3rd
228               4th
456               5th
In order to get the string to reach stable harmonic structures, it had to be vibrated at these specific frequencies by the speaker that it was attached to. Resonance occurred at these five frequencies because the speaker was vibrating at the natural frequencies of the object. This gives us the definition of resonance: resonance occurs when a vibrating object comes into contact with another object and is vibrating at a resonant frequency of that object.

Resonance plays an integral role in the production of sound in instruments. Columns of air act as objects with their own resonant frequencies, which I explored in my second experiment. This is fundamental in wind instruments. I like to use the clarinet as an example. To play a clarinet, one vibrates a reed on the mouthpiece, which creates resonance in the body of the instrument by vibrating the air particles in wave patterns that correspond to the natural frequencies of the tube. To change the note being played, a clarinet player places their fingers on different keys in order to block holes in the instrument. This changes the resonant frequency of the air column within the clarinet, which changes the frequency of sound produced by the instrument as the reed vibrates. The clarinet was designed so that the different fingering patterns create resonant frequencies that correspond to the frequencies of musical notes in the Pythagorean scale.

Works Cited:
1.  Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.
2. "Resonance." Resonance. Physics Classroom. Web. 29 Sept. 2015.

Monday, September 28, 2015

Sound Wave Spatial Structure

The spatial structure of a sound wave is directly connected to my previous post about harmonics. In that post, I looked at the relationship between string length and wave velocity, but a comparison can be drawn between the wavelength of a sound wave and the length of its medium as well. This concept was critical in explaining the foundation of my previous two experiments, but was not something that I have dedicated a blog post to. This graphic does an excellent job of reiterating the harmonics of a sound wave, while also illustrating the relationship between the wavelengths of the harmonics and their relationship to the medium:

(The Physics of Music and Color)


Looking at the relationship between the wavelength and the length of the string (L), we get:

Harmonic Mode    Wavelength in terms of String Length (L)
1st              2L
2nd              L
3rd              2/3L
4th              2/4L
5th              2/5L
6th              2/6L
This relationship can be modeled by the same equation given in the previous blog post, f_n = nV/2L, which is equivalent to saying that the nth harmonic mode has a wavelength of 2L/n. The relationship holds because the wave velocity (V) is a constant for a specific medium; in the example above, it was chosen arbitrarily based on the graphic.
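The table above can be generated directly from the relation that the nth mode's wavelength is 2L/n. A quick Python sketch, with an arbitrary example string length of 1 m, reproduces it:

```python
# Wavelength of the nth harmonic mode on a string of length L:
# lambda_n = 2 * L / n. The 1 m length is an arbitrary example.

def harmonic_wavelength(n, length_m=1.0):
    """Wavelength of mode n on a string of the given length, in meters."""
    return 2 * length_m / n

for n in range(1, 7):
    print(f"mode {n}: wavelength = {harmonic_wavelength(n):.3f} m")
```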

Works Cited:
1. Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Friday, September 25, 2015

Harmonics

Harmonic frequencies are an essential part of sound. If you have read some of my previous posts and experiment summaries, you will probably have noticed references to different harmonic modes, but I have not dedicated a post to the concept yet. Harmonics are effectively the different modes of vibration (sounds of different frequencies) that can exist in a medium, such as on a string or in a tube. By Fourier's Theorem, these harmonics combine to create the more complicated sounds of instruments. In this sense, harmonics shape the timbre of a sound and help distinguish one instrument from another. A good example of this is a comparison between the flute and the clarinet. The flute is effectively a column that is open at both ends, while the clarinet is effectively a column that is closed at one end. This means that the clarinet creates only odd harmonics, while the flute is able to create both odd and even harmonics. What does this mean?

Let's take a look at what the different harmonic modes look like on a string:
(The Physics of Music and Color)

The first harmonic is not pictured, but would have one antinode in the middle. As you can see, the harmonic mode number corresponds to the number of loops in the pattern. An even harmonic is a harmonic with an even number of loops, and an odd harmonic is the opposite. While this should provide a basic understanding of what harmonics are, we can look at them through a mathematical lens to understand the relationship between them.

In the image above, we see a full wavelength in the second harmonic. The first harmonic is half of a wavelength. We can break down the relationship between the length of the string and the harmonic mode number to look for a pattern:


Harmonic Mode | Frequency in Terms of Wave Velocity (V) & String Length (L)
1 | V/2L
2 | 2V/2L
3 | 3V/2L
4 | 4V/2L
5 | 5V/2L

While the 2L in the denominator remains constant, the coefficient of V is equal to the harmonic mode number. From this observation, we can derive the equation f_n = nV/2L (The Physics of Music and Color).
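The pattern in the table reduces to each mode's frequency being an integer multiple of the fundamental. A short Python sketch, with assumed example values for V and L, makes this explicit:

```python
# Frequency of the nth harmonic on a string: f_n = n * V / (2 * L).
# The wave velocity (100 m/s) and string length (0.5 m) are assumed examples.

def harmonic_frequency(n, v=100.0, length_m=0.5):
    """Frequency of mode n, in Hz."""
    return n * v / (2 * length_m)

print([harmonic_frequency(n) for n in range(1, 6)])
# → [100.0, 200.0, 300.0, 400.0, 500.0]
```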

Works Cited:
1. Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Thursday, September 24, 2015

Fourier's Theorem

In my initial post about the sine wave and all of its magical implications on sound, I mentioned that the sine wave is the building block of sound, meaning that every waveform can be produced by combinations of sine waves. I did not go into very much depth in this claim, however. The scientific explanation for this is given by Fourier's Theorem: "any periodic function that is reasonably continuous can be expressed as the sum of series of sine or cosine terms."
http://www.sfu.ca/sonic-studio/handbook/Fourier_Theorem.html

Initially this may be hard to grasp. How can every imaginable periodic function just be sine waves? I like to think of it by beginning with the concept of wave interference:

http://www.gwoptics.org/images/ebook/interference-explain.png

Imagine combining waves that are not offset by such neat intervals, and you start to get more complicated waveforms. This demo illustrates the idea simply with three harmonic waves that you can combine:
http://phet.colorado.edu/sims/normal-modes/normal-modes_en.html

Once you reach a basic understanding of this concept, you can use this resource:
http://phet.colorado.edu/en/simulation/fourier
It provides you with the tools to make some very complicated waves, but it also gives you presets with different basic wave types to show how they are sums of sine functions.

An interesting extension of this is to look at the app's square wave:

What is interesting to me is that the square wave is composed of only odd harmonics, which matches what happens to sound in a tube that is closed at one end. A clarinet is similar in that it is closed at one end, though it is a bit more complicated because clarinets have a bell-shaped opening. Regardless, the waveform of sound from a clarinet looks similar to this square wave:
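The app's square wave can also be reproduced numerically: by Fourier's Theorem, it is the sum of odd sine harmonics with amplitudes 1/n. This Python sketch (my own, not the app's code) sums the series:

```python
import math

# Fourier synthesis of a square wave (period 1): the sum of odd sine
# harmonics, each with amplitude 1/n. More terms make the wave squarer.

def square_wave(t, n_terms=50):
    """Partial Fourier sum of a square wave at time t."""
    return sum(math.sin(2 * math.pi * n * t) / n
               for n in range(1, 2 * n_terms, 2))

# On the wave's plateau (t = 0.25) the series converges to pi/4:
print(square_wave(0.25, n_terms=1000))
```

Deleting the even terms from a sawtooth series in the same way is exactly what the closed tube does physically: it simply cannot sustain the even modes.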


Tuesday, September 22, 2015

Standing Waves in a Column of Air Experiment

For this experiment, I looked at the sound produced by different PVC pipes when struck in different ways so that they behaved as open or closed tubes. I aimed to examine how the length of the tube affected the pitch of the sound and how sound differs in an open versus a closed tube. I used five PVC pipes of different lengths with a constant diameter of 2 cm and a constant thickness. They were numbered as follows:
Tube #9: 15.6 cm
Tube #1: 25.6 cm
Tube #7: 46.0 cm
Tube #3: 50.9 cm
Tube #5: 61.0 cm

I recorded the fundamental frequency produced by hitting the top of each pipe with my finger. For each length of pipe, I recorded the resulting frequency three separate times. I plotted the average frequency for each length and calculated a best fit curve. I then repeated this process, treating the pipe as a closed tube by hitting it with my palm. The data are plotted here:


What is interesting about this plot is that the ratio of the closed pipe's best fit curve to the open pipe's best fit curve is 0.52. To understand why this makes sense, you need to understand how a sound wave travels in a column. For a pipe that is open at both ends, the fundamental wave looks like this:

For a pipe that is closed at one end and open at the other, the fundamental wave looks like this:

As you can see, 2L is the wavelength of an open tube's fundamental frequency, while 4L is the wavelength of a closed tube's fundamental frequency. The ratio between the corresponding fundamental frequencies is 0.50, yet the value in practice was 0.52. Why is this slightly larger value reasonable? In the real world, the antinodes of a sound wave traveling through a pipe lie slightly outside of the pipe's open ends (the end correction). For the pipe open at one end, the effective tube length is the physical length plus about a third of the diameter. For the pipe that is open at both ends, about two thirds of the diameter must be added to the tube length to find the effective tube length.
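These end corrections can be checked numerically. The sketch below uses the experiment's 2 cm diameter, but the tube length and speed of sound are assumed example values; it shows that the corrected frequency ratio lands slightly above the ideal 0.50:

```python
# Fundamental frequencies with end corrections:
# open-open tube:   effective length L + 2d/3, wavelength = 2 * effective length
# open-closed tube: effective length L + d/3,  wavelength = 4 * effective length
# d = 2 cm comes from the experiment; L and v are assumed example values.

d = 0.02   # tube diameter (m)
L = 0.50   # physical tube length (m), example value
v = 343.0  # speed of sound (m/s), assumed

f_open = v / (2 * (L + 2 * d / 3))
f_closed = v / (4 * (L + d / 3))
ratio = f_closed / f_open

print(round(ratio, 3))  # slightly above 0.50, as observed
```

With these example numbers the ratio comes out near 0.51; the measured 0.52 reflects the actual tube lengths and the precision of the measurements.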

Using the same data set for the open pipe, one can create an estimate for the speed of sound. I plotted the wavelength and period for each pipe length. The slope of the line of best fit provides the approximate speed of sound:


The slope is 331.2 meters/sec, which is nearly the exact speed of sound at 0 degrees Celsius. The temperature in the room during the experiment was 24.6 degrees Celsius, however, which predicts an actual speed of sound of 345.76 meters/sec. This makes the percentage error 4.2%. The error can be attributed to the lack of precision in retrieving information about the sound waves from the Logger Pro software and to potential fluctuation in the temperature of the room during the experiment, as the temperature was not monitored after data collection began.
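The temperature correction can be reproduced with the standard ideal-gas approximation for the speed of sound in air; a quick Python check using the experiment's measured slope recovers both the predicted speed and the roughly 4% error:

```python
import math

# Speed of sound in air versus temperature (ideal-gas approximation):
# v(T) = v0 * sqrt(1 + T_celsius / 273.15), with v0 ≈ 331.2 m/s at 0 °C.

def speed_of_sound(temp_c, v0=331.2):
    """Approximate speed of sound in air at temp_c degrees Celsius, in m/s."""
    return v0 * math.sqrt(1 + temp_c / 273.15)

predicted = speed_of_sound(24.6)   # room temperature during the experiment
measured = 331.2                   # slope of the wavelength-vs-period plot
error_pct = abs(predicted - measured) / predicted * 100

print(round(predicted, 1), round(error_pct, 1))  # ~345.8 m/s, ~4.2 %
```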

Thursday, September 10, 2015

Generating a Sound Pulse in Air

Previous posts have focused on how sound travels along a string and how this relates to string instruments. In addition to traveling on strings, sound waves travel through air. This is not just critical for wind instruments, but for our entire perception of sound. To understand how sound travels through air, we must first understand what air is and why sound waves are able to pass through it.

Air is a collection of gas particles: mostly nitrogen with some oxygen and a small percentage of other gases, such as carbon dioxide. Like all materials in a gaseous state, the particles in air are constantly moving, which creates pressure. As a sound wave passes through air, it creates localized changes in pressure: condensations are areas of high pressure, and rarefactions are areas of low pressure. These pressure changes can be modeled well by a sine wave as sound passes through air. This process of creating condensations and rarefactions is how sound travels from a vibrating string to your ear. It is also the driving force in the creation of sound in wind instruments, where sound waves are generated not by vibrating strings, but by vibrating air particles in a chamber. The next post will feature the results of an experiment that explores how sound can be generated in a tube.


Works Cited:
1. Gunther, Leon. The Physics of Music and Color. New York, New York: Springer, 2012.

Tuesday, September 8, 2015

Standing Waves on a String Experiment

For the past two class blocks I have been working on an experiment involving sound and string. Understanding the interaction between sound waves and strings is essential in understanding string instruments, such as violin, guitar, or piano, but also illustrates concepts that are important in understanding the physics of music.

In this experiment, with the help of Jon Bretan, I attached a piece of string to a speaker and attached the string to a pole. Using an amplifier, I played sound of different frequencies through the speaker in order to find the natural frequencies of the string. This was visible in the string as it made wave shapes with clear nodes and antinodes. I measured the frequency needed to get the string to vibrate in its fundamental mode, as well as its harmonic modes. If this is hard to conceptualize, here is a video showing the changes in the string's modes as I changed the frequency of the sound playing through the speaker.
The frequencies corresponding to each harmonic were recorded as follows:

Frequency (Hz) | Mode #
17.3 | Fundamental (1st)
34.6 | 2nd
69.3 | 4th
139 | 8th
277 | 16th
As you can see, each frequency is twice the previous frequency. This should be a familiar interval if you have read my post about the relationship between frequency and pitch, because doubling the frequency of a sound increases it by exactly one octave in pitch. The above video is a good example of why this is the case. 

The problem with these initial measurements is that the tension of the string was unknown, as it was simply tied down using a clamp. To remedy this, I collected more data, but used weights to tighten the string. The following data sets correspond to weights of 50, 100, and 200 grams respectively:

50 gram weight:
Frequency (Hz) | Mode #
17.3 | Fundamental (1st)
34.6 | 2nd
69.2 | 4th
138 | 8th
277 | 16th

100 gram weight:
Frequency (Hz) | Mode #
19.5 | Fundamental (1st)
39 | 2nd
78 | 4th
156 | 8th
312 | 16th

200 gram weight:
Frequency (Hz) | Mode #
28.5 | Fundamental (1st)
57 | 2nd
114 | 4th
228 | 8th
456 | 16th

As you can see, when the tension of the string increases, the frequency of the harmonics increases. In the context of musical instruments this makes perfect sense; for example, tightening the string of a guitar increases its pitch.



In addition to manipulating the tension of the string, I changed its thickness. I did this by tying an identical string to it in order to double its mass density. I also tied down this modified string using weights of 50, 100, and 200 grams. The following data corresponds to those trials:


50 gram weight:
Frequency (Hz) | Mode #
15.1 | Fundamental (1st)
30.2 | 2nd
60.5 | 4th
121 | 8th
242 | 16th

100 gram weight:
Frequency (Hz) | Mode #
16 | Fundamental (1st)
32.1 | 2nd
64.1 | 4th
128 | 8th
256 | 16th

200 gram weight:
Frequency (Hz) | Mode #
18.6 | Fundamental (1st)
37.1 | 2nd
74.2 | 4th
148 | 8th
297 | 16th

As expected, increasing the tension of the modified string increased its fundamental frequency, but what difference was caused by the change in the string's mass density? Looking at the fundamental frequencies, we get the following comparisons for the strings at each tension: 17.3 vs. 15.1; 19.5 vs. 16; and 28.5 vs. 18.6, with the initial string listed first and the modified string second. The data show that the denser string resonates at lower frequencies. Again, this makes sense: the guitar strings that are lower in pitch are thicker.



Fortunately, there is a handy relationship between frequency and the tension and mass density of a medium: f = (1/2L)√(T/μ), where T is the tension and μ is the mass density (mass per unit length) of the string.

This shows that frequency is proportional to the square root of the tension and inversely proportional to the square root of the mass density. The data follow this trend, as increased mass density decreased the frequency and increased tension increased the frequency. If the data are hard to conceptualize, thinking about this relationship in the context of a guitar string can be very helpful.
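This relationship can be sanity-checked numerically. The Python sketch below uses illustrative (not measured) values of tension and mass density to confirm that doubling the density lowers the fundamental by a factor of √2:

```python
import math

# Fundamental frequency of a stretched string:
# f1 = (1 / (2 * L)) * sqrt(T / mu), where T is tension (N) and
# mu is mass per unit length (kg/m). Values below are illustrative.

def fundamental(tension_n, mu, length_m):
    """Fundamental frequency of a string, in Hz."""
    return math.sqrt(tension_n / mu) / (2 * length_m)

f_single = fundamental(1.0, 0.001, 1.0)  # original string
f_double = fundamental(1.0, 0.002, 1.0)  # doubled mass density

print(round(f_single / f_double, 3))  # → 1.414, i.e. sqrt(2)
```

Note that the measured ratios (for example, 17.3/15.1 ≈ 1.15) fall short of the predicted √2 ≈ 1.41, consistent with the data following the relationship only qualitatively.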

While the data appear to follow this trend qualitatively, they do not follow the relationship quantitatively. The frequencies of the first and second modes were graphed against tension. The data for the original string and the modified string are plotted on the same graph, but no trend can be inferred across mass densities, as there are only two different mass densities. Clicking the images will open them in full size so that you can read the text.

First Mode:

Second Mode:

As you can see, the lines of best fit are quadratic. This is unexpected, because frequency should be proportional to the square root of the tension, not to the square of the tension.