“What is Music”: Reading Notes*

Thursday, September 15th, 2011

From: This is Your Brain on Music

* by notes i mean quotes

When Bob Dylan dared to play an electric guitar at the Newport Folk Festival in 1965, people walked out, and many of those who stayed booed. The Catholic Church banned music that contained polyphony (more than one musical part
playing at a time), fearing that it would cause people to doubt the unity
of God. The church also banned the musical interval of an augmented
fourth, the distance between C and F-sharp, also known as a tritone
(the interval in Leonard Bernstein’s West Side Story when Tony sings the
name “Maria”). This interval was considered so dissonant that it must
have been the work of Lucifer, and so the church named it Diabolus in
musica. It was pitch that had the medieval church in an uproar. And it
was timbre that got Dylan booed.

The difference between music and a
random or disordered set of sounds has to do with the way these fundamental
attributes combine, and the relations that form between them.
When these basic elements combine and form relationships with one another
in a meaningful way, they give rise to higher-order concepts such
as meter, key, melody, and harmony.

What makes a set of lines and colors into art is the relationship between this line and that one; the way one color or form echoes another in a different part of
the canvas. Those dabs of paint and lines become art when form and
flow (the way in which your eye is drawn across the canvas) are created
out of lower-level perceptual elements. When they combine harmoniously
they ultimately give rise to perspective, foreground and background,
emotion, and other aesthetic attributes. Similarly, dance is not
just a raging sea of unrelated bodily movements; the relationship of
those movements to one another is what creates integrity and integrality,
a coherence and cohesion that the higher levels of our brain process.

Miles Davis famously described his improvisational
technique as parallel to the way that Picasso described his use of a canvas:
The most critical aspect of the work, both artists said, was not the
objects themselves, but the space between objects.

We
employ the term timbre, for example, to refer to the overall sound or
tonal color of an instrument—that indescribable character that distinguishes
a trumpet from a clarinet when they’re playing the same written
note, or what distinguishes your voice from Brad Pitt’s if you’re saying
the same words. But an inability to agree on a definition has caused the
scientific community to take the unusual step of throwing up its hands
and defining timbre by what it is not. (The official definition of the
Acoustical Society of America is that timbre is everything about a sound
that is not loudness or pitch. So much for scientific precision!)

By convention, when we press keys nearer to the left of the keyboard,
we say that they are “low” pitch sounds, and ones near the right side of
the keyboard are “high” pitch. That is, what we call “low” are those
sounds that vibrate slowly, and are closer (in vibration frequency) to the
sound of a large dog barking. What we call “high” are those sounds that
vibrate rapidly, and are closer to what a small yip-yip dog might make.
But even these terms high and low are culturally relative—the Greeks
talked about sounds in the opposite way because the stringed instruments
they built tended to be oriented vertically. Shorter strings or pipe
organ tubes had their tops closer to the ground, so these were called
the “low” notes (as in “low to the ground”), and the longer strings and
tubes—reaching up toward Zeus and Apollo—were called the “high”
notes. Low and high—just like left and right—are effectively arbitrary
terms that ultimately have to be memorized. Some writers have argued
that “high” and “low” are intuitive labels, noting that what we call high-pitched
sounds come from birds (who are high up in trees or in the sky)
and what we call low-pitched sounds often come from large, close-to-the-ground
mammals such as bears or the low sounds of an earthquake.
But this is not convincing, since low sounds also come from up high
(think of thunder) and high sounds can come from down low (crickets
and squirrels, leaves being crushed underfoot).

A bowl of pudding only has taste when I put it in my mouth—when it
is in contact with my tongue. It doesn’t have taste or flavor sitting in my
fridge, only the potential. Similarly, the walls in my kitchen are not
“white” when I leave the room. They still have paint on them, of course,
but color only occurs when they interact with my eyes.

Humans who are
not suffering from any kind of hearing loss can usually hear sounds from
20 Hz to 20,000 Hz. The pitches at the low end sound like an indistinct
rumble or shaking—this is the sound we hear when a truck goes by outside
the window (its engine is creating sound around 20 Hz) or when a
tricked-out car with a fancy sound system has the subwoofers cranked
up really loud. Some frequencies—those below 20 Hz—are inaudible to
humans because the physiological properties of our ears aren’t sensitive
to them.

The sound of the average male speaking voice is
around 110 Hz, and the average female speaking voice is around 220 Hz.
The hum of fluorescent lights or from faulty wiring is 60 Hz (in North
America; in Europe and other countries where mains power alternates at 50 Hz, the hum is 50 Hz). The sound that a singer hits when she causes a
glass to break might be 1000 Hz. The glass breaks because it, like all
physical objects, has a natural and inherent vibration frequency. You can
hear this by flicking your finger against its sides or, if it’s crystal, by running
your wet finger around the rim of the glass in a circular motion.
When the singer hits just the right frequency—the resonant frequency of
the glass—it causes the molecules of the glass to vibrate at their natural
rate, and they vibrate themselves apart.

If you put playing cards in the spokes of your bicycle
wheel when you were a kid, you demonstrated to yourself a related
principle: At slow speeds, you simply hear the click-click-click of the
card hitting the spokes. But above a certain speed, the clicks run together
and create a buzz, a tone you can actually hum along with; a pitch.

When the lowest note on the piano plays, vibrating at 27.5 Hz, to
most people it lacks the distinct pitch of sounds toward the middle of the
keyboard. At the lowest and the highest ends of the piano keyboard, the
notes sound fuzzy to many people with respect to their pitch. Composers
know this, and they either use these notes or avoid them depending on
what they are trying to accomplish compositionally and emotionally.
Sounds with frequencies above the highest note on the piano keyboard,
around 6000 Hz and more, sound like a high-pitched whistling to most
people. Above 20,000 Hz most humans don’t hear a thing, and by the age
of sixty, most adults can’t hear much above 15,000 Hz or so due to a stiffening
of the hair cells in the inner ear. So when we talk about the range
of musical notes, or that restricted part of the piano keyboard that conveys
the strongest sense of pitch, we are talking about roughly three quarters of the notes on the piano keyboard, between about 55 Hz and 2000 Hz.

A melody is an auditory object that maintains its identity
in spite of transformations, just as a chair maintains its identity when
you move it to the other side of the room, turn it upside down, or paint it
red. So, for example, if you hear a song played louder than you are accustomed
to, you still identify it as the same song.

When you ask someone a question, your voice naturally rises in
intonation at the end of the sentence, signaling that you are asking. This is a convention in English (though not in all languages—we
have to learn it), and is known in linguistics as a prosodic cue.

All of us have the innate capacity to learn the
linguistic and musical distinctions of whatever culture we are born into,
and experience with the music of that culture shapes our neural pathways
so that we ultimately internalize a set of rules common to that musical
tradition.

The characters’ individuality in
Peter and the Wolf is expressed in the timbres of different instruments
and each has a leitmotiv—an associated melodic phrase or figure that
accompanies the reappearance of an idea, person, or situation. (This is
especially true of Wagnerian music drama.) A composer who picks so-called
sad pitch sequences would only give these to the piccolo if he
were trying to be ironic. The lumbering, deep sounds of the tuba or
double bass are often used to evoke solemnity, gravity, or weight.

The auditory cortex also has a
tonotopic map, with low to high tones stretched out across the cortical
surface.

As frequencies get higher, so do the letter names; B has a
higher frequency than A (and hence a higher pitch) and C has a higher
frequency than either A or B. After G, the note names start all over again
at A. Notes with the same name have frequencies that are multiples of
each other.
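A quick sketch of this “multiples” relationship (my own illustration, not from the book), using the modern A440 tuning standard mentioned later in these notes:

```python
# Same-name notes sit octaves apart: each octave doubles the frequency.
# Frequencies below assume the modern A440 tuning standard.
a_notes = [110.0 * 2**i for i in range(4)]  # A2, A3, A4, A5

for freq in a_notes:
    print(f"A at {freq:g} Hz")  # 110, 220, 440, 880

# Every same-name pair is related by a power of two (hence a multiple):
assert a_notes == [110.0, 220.0, 440.0, 880.0]
```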

Although red and violet
fall at opposite ends of the continuum of visible frequencies of electromagnetic
energy, we see them as perceptually similar. The same is true
in music, and music is often described as having two dimensions, one
that accounts for tones going up in frequency (and sounding higher and
higher) and another that accounts for the perceptual sense that we’ve
come back home again each time we double a tone’s frequency.

The intervallic distance between A and B (or between “do” and “re”) is
called a whole step or a tone. (This latter term is confusing, since we call
any musical sound a tone; I’ll use the term whole step to avoid ambiguity.)
The smallest division in our Western scale system cuts a whole step perceptually
in half: This is the semitone, which is one twelfth of an octave.
Intervals are the basis of melody, much more so than the actual
pitches of notes; melody processing is relational, not absolute, meaning
that we define a melody by its intervals, not the actual notes used to create
them. Four semitones always create the interval known as a major
third regardless of whether the first note is an A or a G# or any other
note.
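The “relational, not absolute” point can be demonstrated in a few lines (a sketch of my own, not from the book; the melody below is an arbitrary run of semitone numbers chosen for illustration):

```python
def intervals(notes):
    """Successive semitone distances between notes."""
    return [b - a for a, b in zip(notes, notes[1:])]

# An arbitrary melody, written as semitone numbers:
melody = [0, 4, 7, 4, 0]              # up a major third, up to the fifth, back down
transposed = [n + 5 for n in melody]  # the same melody, five semitones higher

# The pitches differ, but the intervals (and hence the melody) are identical:
assert melody != transposed
assert intervals(melody) == intervals(transposed)

# Four semitones always form a major third, whatever the starting note:
print(intervals([0, 4]))   # [4] -- e.g. C up to E
print(intervals([8, 12]))  # [4] -- e.g. G# up to C
```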

“Superstition” by Stevie Wonder is played on only the black keys of
the keyboard.

We could fix A at any frequency, such as
439, 444, 424, or 314.159; different standards were used in the time of
Mozart than today. Some people claim that the precise frequencies affect
the overall sound of a musical piece and the sound of instruments. Led
Zeppelin often tuned their instruments away from the modern A440 standard
to give their music an uncommon sound, and perhaps to link it with
the European children’s folk songs that inspired many of their compositions.

The auditory system works the same way, and that is why our scale
is based on a proportion: Every tone is 6 percent higher than the previous
one, and when we increase each step by 6 percent twelve times, we
end up having doubled our original frequency (the actual proportion is
the twelfth root of two = 1.059463 . . . ).
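The arithmetic is easy to verify (my own sketch, not from the book):

```python
# Each semitone multiplies frequency by the twelfth root of two,
# roughly a 6 percent increase; twelve such steps double the frequency.
semitone = 2 ** (1 / 12)

print(f"one semitone: {semitone:.6f}")  # 1.059463
assert abs(semitone**12 - 2.0) < 1e-12  # twelve steps = one octave

# Walking up from A440 by semitones reaches the octave at 880 Hz:
freqs = [440 * semitone**i for i in range(13)]
assert abs(freqs[12] - 880.0) < 1e-9
```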

The twelve notes in our musical system are called the chromatic
scale.

In Western music we rarely use all the notes of the chromatic scale in
composition; instead, we use a subset of seven (or less often, five) of
those twelve tones. Each of these subsets is itself a scale, and the type of
scale we use has a large impact on the overall sound of a melody, and its
emotional qualities. The most common subset of seven tones used in
Western music is called the major scale, or Ionian mode (reflecting its
ancient Greek origins). Like all scales, it can start on any of the twelve
notes, and what defines the major scale is the specific pattern or distance
relationship between each note and its successive note. In any major
scale, the pattern of intervals—pitch distances between successive keys—
is: whole step, whole step, half step, whole step, whole step, whole step,
half step.
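That pattern can be sketched as code (mine, not the book’s); starting the pattern from any root note generates that root’s major scale:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole, whole, half, whole, whole, whole, half

def major_scale(root):
    """Build the major scale starting on any of the twelve chromatic notes."""
    idx = NOTE_NAMES.index(root)
    scale = [root]
    for step in MAJOR_STEPS:
        idx = (idx + step) % 12
        scale.append(NOTE_NAMES[idx])
    return scale

print(major_scale("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
print(major_scale("G"))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#', 'G']
```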

The particular placement of the two half steps in the sequence of the
major scale is crucial; it is not only what defines the major scale and distinguishes
it from other scales, but it is an important ingredient in musical
expectations. Experiments have shown that young children, as well as
adults, are better able to learn and memorize melodies that are drawn
from scales that contain unequal distances such as this. The presence of
the two half steps, and their particular positions, orient the experienced, acculturated listener to where we are in the scale. We are all experts in
knowing, when we hear a B in the key of C—that is, when the tones are
being drawn primarily from the C major scale—that it is the seventh note
(or “degree”) of that scale, and that it is only a half step below the root,
even though most of us can’t name the notes, and may not even know
what a root or a scale degree is. We have assimilated the structure of this
and other scales through a lifetime of listening and passive (rather than
theoretically driven) exposure to the music. This knowledge is not innate,
but is gained through experience.

Western music theory recognizes
three minor scales, and each has a slightly different flavor. Blues
music generally uses a five-note (pentatonic) scale that is a subset of the
minor scale, and Chinese music uses a different pentatonic scale. When
Tchaikovsky wants us to think of Arab or Chinese culture in the Nutcracker
ballet, he chooses scales that are typical of their music, and
within just a few notes we are transported to the Orient. When Billie Holiday
wants to make a standard tune bluesy, she invokes the blues scale
and sings notes from a scale that we are not accustomed to hearing in
standard classical music.

If instead we use the C minor scale,
the first, third, and fifth notes are C, E-flat, and G. This difference in the
third degree, between E and E-flat, turns the chord itself from a major
chord into a minor chord. All of us, even without musical training, can
tell the difference between these two even if we don’t have the terminology
to name them; we hear the major chord as sounding happy and the
minor chord as sounding sad, or reflective, or even exotic. The most basic
rock and country music songs use only major chords: “Johnny B.
Goode,” “Blowin’ in the Wind,” “Honky Tonk Women,” and “Mammas
Don’t Let Your Babies Grow Up to Be Cowboys,” for example.
Minor chords add complexity; in “Light My Fire” by the Doors, the
verses are played in minor chords (“You know that it would be untrue
. . .”) and then the chorus is played in major chords (“Come on baby,
light my fire”). In “Jolene,” Dolly Parton mixes minor and major chords
to give a melancholy sound. Pink Floyd’s “Sheep” (from the album Animals)
uses only minor chords.
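The major/minor difference described above (a third degree of E versus E-flat) comes down to a single semitone. A small sketch of my own, using sharp spellings for simplicity (D# is the same piano key as E-flat):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def triad(root, quality):
    """Root, third, and fifth: the third is 4 semitones up for major, 3 for minor."""
    idx = NOTE_NAMES.index(root)
    third = 4 if quality == "major" else 3
    return [NOTE_NAMES[(idx + offset) % 12] for offset in (0, third, 7)]

print(triad("C", "major"))  # ['C', 'E', 'G']
print(triad("C", "minor"))  # ['C', 'D#', 'G']  (D# is enharmonically E-flat)
```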

An analogy is the several types of motion of the earth that are simultaneously
occurring. We know that the earth spins on its axis once every
twenty-four hours, that it travels around the sun once every 365.25 days,
and that the entire solar system is spinning along with the Milky Way
galaxy. Several types of motion, all occurring at once. Another analogy is
the many kinds of vibration that we often feel when riding a train. Imagine
that you’re sitting on a train in an outdoor station, with the engine off.
It’s windy, and you feel the car rock back and forth just a little bit. It does
so with a regularity that you can time with your handy stopwatch, and
you feel the train moving back and forth about twice a second. Next, the
engineer starts the engine, and you feel a different kind of vibration
through your seat (due to the oscillations of the motor—pistons and
crankshafts turning around at a certain speed). When the train starts
moving, you experience a third sensation, the bump the wheels make
every time they go over a track joint.

Petr Janata http://vimeo.com/11236355

Petr placed electrodes in the inferior colliculus
of the barn owl, part of its auditory system. Then, he played the owls
a version of Strauss’s “The Blue Danube Waltz” made up of tones from
which the fundamental frequency had been removed. Petr hypothesized
that if the missing fundamental is restored at early levels of auditory processing,
neurons in the owl’s inferior colliculus should fire at the rate of
the missing fundamental. This was exactly what he found. And because
the electrodes put out a small electrical signal with each firing—and because
a rate of firing is itself a frequency—Petr sent the
output of these electrodes to a small amplifier, and played back the
sound of the owl’s neurons through a loudspeaker. What he heard was
astonishing; the melody of “The Blue Danube Waltz” sang clearly from
the loudspeakers: ba da da da da, deet deet, deet deet. We were hearing
the firing rates of the neurons and they were identical to the frequency
of the missing fundamental. The overtone series had an instantiation not
just in the early levels of auditory processing, but in a completely different
species.

This difference is timbre (pronounced TAM-ber), and it is the most
important and ecologically relevant feature of auditory events. The timbre
of a sound is the principal feature that distinguishes the growl of a
lion from the purr of a cat, the crack of thunder from the crash of ocean
waves, the voice of a friend from that of a bill collector one is trying to
dodge. Timbral discrimination is so acute in humans that most of us can
recognize hundreds of different voices. We can even tell whether someone
close to us—our mother, our spouse—is happy or sad, healthy or
coming down with a cold, based on the timbre of that voice.

Researchers still argue about what this “more” is, but it is generally
accepted that, in addition to the overtone profile, timbre is defined
by two other attributes that give rise to a perceptual difference from one
instrument to another: attack and flux.

In the early
1970s, while fiddling with the computer and with sine waves—the sorts
of artificial sounds that are made by computers and used as the building
blocks of additive synthesis—Chowning noticed that changing the frequency
of these waves as they were playing created sounds that were
musical. By controlling these parameters just so, he was able to simulate
the sounds of a number of musical instruments. This new technique became
known as frequency modulation synthesis, or FM synthesis, and
became embedded first in the Yamaha DX9 and DX7 line of synthesizers,
which revolutionized the music industry from the moment of their introduction
in 1983. FM synthesis democratized music synthesis. Before FM,
synthesizers were expensive, clunky, and hard to control. Creating new
sounds took a great deal of time, experimentation, and know-how. But
with FM, any musician could obtain a convincing instrumental sound
at the touch of a button. A lot of what we think of as “the eighties sound” in popular
music owes its distinctiveness to the particular sound of FM synthesis.

Among the first of many famous electronic
music/music-psychology celebrities to come to CCRMA were
John R. Pierce and Max Mathews. Pierce had been the vice president of
research at the Bell Telephone Laboratories in New Jersey, and supervised
the team of engineers who built and patented the transistor—and it
was Pierce who named the new device (TRANSfer resISTOR). In his distinguished
career, he also is credited with inventing the traveling wave
vacuum tube, and launching the first telecommunications satellite, Telstar.
He was also a respected science fiction writer under the pseudonym
J. J. Coupling.

Their laboratory was something of a playground
for the very best and brightest inventors, engineers, and scientists
in America. In the Bell Labs “sandbox,” Pierce allowed his people to be
creative without worrying about the bottom line or the applicability of
their ideas to commerce. Pierce understood that the only way true innovation
can occur is when people don’t have to censor themselves and can
let their ideas run free. Although only a small proportion of those ideas
may be practical, and a smaller proportion still would become products,
those that did would be innovative, unique, and potentially very profitable.
Out of this environment came a number of innovations including
lasers, digital computers, and the Unix operating system.

Here’s what I brought to dinner:
1) “Long Tall Sally,” Little Richard
2) “Roll Over Beethoven,” the Beatles
3) “All Along the Watchtower,” Jimi Hendrix
4) “Wonderful Tonight,” Eric Clapton
5) “Little Red Corvette,” Prince
6) “Anarchy in the U.K.,” the Sex Pistols

Pierce listened and kept asking who these people were, what
instruments he was hearing, and how they came to sound the way they
did. Mostly, he said that he liked the timbres of the music. The songs
themselves and the rhythms didn’t interest him that much, but he found
the timbres to be remarkable—new, unfamiliar, and exciting.

The fluid
romanticism of Clapton’s guitar solo in “Wonderful Tonight,” combined
with the soft, pillowy drums. The sheer power and density of the Sex Pistols’
brick-wall-of-guitars-and-bass-and-drums. The sound of a distorted
electric guitar wasn’t all that was new to Pierce. The ways in which instruments
were combined to create a unified whole—bass, drums, electric
and acoustic guitars, and voice—that was something he had never
heard before. Timbre was what defined rock for Pierce. And it was a revelation
to both of us.

After Schaeffer edited out the attack of orchestral instrument recordings,
he played back the tape and found that it was nearly impossible for
most people to identify the instrument that was playing. Without the attack,
pianos and bells sounded remarkably unlike pianos and bells, and
remarkably similar to one another. If you splice the attack of one instrument
onto the steady state, or body, from another, you get varied results:
In some cases, you hear an ambiguous hybrid instrument that sounds
more like the instrument that the attack came from than the one the
steady state came from.

Composers such as Scriabin and Ravel talk about their works as
sound paintings, in which the notes and melodies are the equivalent of
shape and form, and the timbre is equivalent to the use of color and
shading.
