IASPM 2017

I presented at my first International Association for the Study of Popular Music conference (the biennial international one), which was held this year in Kassel, Germany, at the Kulturbahnhof—the former Hauptbahnhof (main train station) of Kassel, since converted into an arts center—a super cool venue. (Full conference program and abstracts are available here.)

Photo from http://www.kulturbahnhof-kassel.de

 

The program for this conference was huge, with something like six parallel sessions running at once. I tended to favor panels that were music-theory-ish, dealt with music technology, or dealt with gender.


Music-theory-ish

I doubt I would have applied had I not been asked to form a lovely panel by a colleague, Nick Braae, whom I met at the Osnabrück popular music summer school that I attended in September 2015. Here is the full information for the panel:

Shaping Sounds and Sounds as Shapes in Popular Songs—Contemporary Analytical Approaches

Alex Harden (UK): “Oneiric Narrativity and Recorded Popular Song”

Harden analyzed how recording techniques may interact with the lyrics to create an oneiric sound space, focusing on Kate Bush’s “Waking the Witch.”

Megan Lavengood (USA): “Analyzing Sound, Analyzing Timbre”

I presented a trimmed-down version of Chapter 5 of my dissertation, ultimately arguing for more analysis of timbre in popular music studies.

Bláithín Duggan (Ireland): “The Shape of the Voice: Analysing Vocal Gestures in Popular Song”

Duggan took a holistic approach to motivic analysis, moving beyond pitch and rhythmic content to attend to subtler details of dynamics, timing, and pitch inflection, using early Beatles songs as a corpus.

Nick Braae (New Zealand): “Analysing Musical Time in Popular Songs”

Braae discussed cyclic versus directional time created through interactions between song forms, harmonies, and melodies.

We were teamed up with another panel as well, organized by Kai Arne Hansen:

So What? Contemporary Approaches to the Interpretation and Analysis of Disparate Popular Musics

Kai Arne Hansen (Norway): “Darkness on the Edge of Pop: Constructing Masculinity and The Weeknd’s ‘The Hills'”

Hansen gave a preliminary exposition on themes of violence, misogyny, and darkness in The Weeknd’s music and music videos.

Steven Gamble (UK): “Empowerment and Embodiment in Rap Music”

Gamble analyzed elements of rhythm in “Backseat Freestyle” by Kendrick Lamar, identifying musical elements which contribute to a sense of empowerment through embodiment.

Claire Rebecca Bannister (UK): “Psychopharmacology and the Analysis of Goth Music”

Bannister discussed goth music as a psychedelic genre, defining psychedelia through the ideas of “set and setting,” terms borrowed from psychopharmacology, to determine what constitutes a psychedelic genre.

Andrei Sora (UK): “To Prepare a Face to Meet the Faces that You Meet: The Persona in Instrumental Music”

Sora applied persona analysis to the unusual genre of instrumental popular music. While persona analysis is frequently applied to instrumental art music, it is rare to see this approach in instrumental popular music, where the notion of an analytical persona interacts with a perhaps more robust public persona.

The goal of both panels together was essentially to showcase new work by young scholars in the field of popular music analysis, and to show what we can do that is sort of “outside the box,” although maybe there is no such “box” anymore!


Recording technology and the music industry

Steffen Lepa (Germany): “The Diffusion of Music Streaming Services in Germany”

Lepa reported on data collected through his project, Survey Musik und Medien, from a 2012 survey and a follow-up 2015 survey of German music listeners. The data were used to develop hypotheses on the change in the audio media that people use to listen to their music. Lepa noted that this data is different from most other sources because it derives from a survey of listeners, whereas most data comes from sales figures. The team divided listeners into classes based on the ways their listening habits changed between the two surveys: versatile audiophiles, digital mobilists, selective traditionalists, selective adopters, versatile traditionalists, and radio traditionalists. The last category exists because a large portion of people only listened to music on the radio—the fact I found the most surprising about this presentation. I would love to compare this to a similar study in the US—is radio equally prevalent here?

Chris Anderson (USA): “Contemporary Strategies for Making, Distributing, and Gifting Music”

Anderson featured two case studies of musicians giving away their music for free, relating this to Attali’s utopian vision of creation for self-satisfaction instead of monetary gain. I am hopeful that in future studies the author might consider the implications of class, and also of devaluing art. One of his subjects was only a hobbyist musician. It would be interesting to see who releases music for free out of “self-satisfaction instead of monetary gain,” versus who releases it for free due to economic pressure to compete.

Franco Fabbri (Italy): “Binaurality, Stereophony, and Popular Music in the 60s and 70s”

Fabbri articulated an important distinction between stereophony and binaurality: if a typical stereo setup is meant to imitate having the best seat in the house, then headphones position the listener actually in the center of the stage. For symphonies, this might be like sitting next to the conductor. Fabbri also highlighted that while classical recording practices typically valorize “realism” in the mixes, in the case of concertos, the mixing typically creates an unreal sound space, as the performer is mixed in both channels.

Pat O’Grady (Australia): “The Politics of Digitizing Analog Technologies”

O’Grady reported on the variety of virtual and digital technologies that are meant to imitate analog recording hardware. Most fascinating to me and my research was the language O’Grady reported being used to describe these plugins: they are typically described as having “warmth,” and “smooth,” “glue,” and “musical” are also common. I would love to learn more about these plugins and compare the language used to describe them with the language used to describe the Fender Rhodes and other technologies that are often contrasted with the Yamaha DX7.

Steve Waksman (USA): “Remaking aliveness in American Music, 1900–1930”

Waksman gave an account of the use of the word “live” as an adjective to describe technologies. Advertisements for sheet music would use the word “live”: “live songs for live singers by live authors.” We nowadays think of sheet music as kind of a dead object, maybe, but at this time it was being sold as more “live” than the “canned music” of recordings. The American Federation of Musicians launched a campaign that praised the virtues of live music as movies with sound killed off jobs for musicians as silent movie accompanists and vaudeville musicians.


Gender in popular music

Robin James (USA): “Queered Voices in the Era of Post-Feminist Pop”

James featured two queer artists who do not conform to the post-feminist feminine ideal of resilience and overcoming. In one case, the artist Bottoms talks about emotional damage, and does not overcome this damage, but rather enjoys the damage. The artist-collective Decon/Recon writes music in a deeply collaborative way in order to resist ownership and thus the post-feminist ideal of feminine empowerment. I would love to see more of this kind of scholarship in pop music analysis, which in my view often relies on tropes of empowerment in its narratives. James also gave a keynote at the opening of the IASPM conference, which I unfortunately had to miss.

Sarah Dougher and Diane Pecknold (USA): “Girls Rock! Reverberations and Limitations”

Like Robin James, Dougher and Pecknold drew attention to the post-feminist assumption that femininity is equated with overcoming. They pointed out that this places an additional burden on girls, in a sense: all girls are feminized, but good girls overcome this. Dougher and Pecknold traced representations of the “Girls rock” theme from Jem through Black Girls Rock!.


Concluding thoughts

The experience of meeting these international voices was incredible.

There were a lot of papers, so they were sometimes hit-or-miss, but I saw many very high-quality papers. And having such a huge program meant that I was almost always seeing something that related to my research in some sense.

IASPM provided lunch and several coffee breaks every day, as well as an opening reception, which made it easy to socialize with conference-goers.

Slots were 30 minutes for the paper and questions, but the organizers did not insist on a 20-minute paper with 10 minutes of questions; rather, they left it up to the presenter how to divide the time. I love 30-minute sessions—45 is way too long (ahem, SMT)—but I think question time ought to be mandated. Questions are usually the best part!

For some reason there was a lot of drama in the three-hour-long general meeting (which then went over time!), but I’m gonna go on the record and say that IASPM 2017 was a great conference for me. I’m grateful to all involved in its organization and success.


The Tenure-Track Job Search in Music Theory

I was on the job market this past year for the first time. No one will be surprised to hear that it was quite arduous. I’m very pleased to say that I did win a job as an Assistant Professor of Music Theory at George Mason University, located in Fairfax, Virginia (in the Washington, D.C. metro area).

Now that it’s all over, but while it’s still fresh in my mind, I compiled statistics from my search and personal advice, which I hope will help other aspiring theorists in their own searches.

Some Statistics on the 2016–2017 Season

How many jobs were there?

I applied to 25 total tenure-track (TT) jobs, a number consistent with Kris Shaffer’s estimate that there are about 26 TT jobs per year in music theory.

Where were the jobs?

[Map: locations of the 25 tenure-track jobs]

All the jobs were in the United States, except two jobs in the Toronto, Ontario area. The jobs cluster where you would expect: along the East Coast, and in highly populated states like Florida and Texas. Jobs in the West, Southwest, and Midwest are noticeably scarce. One of the fixtures of the life of an academic is non-academic friends and family asking “Why don’t you get a job in [hometown]/[current city]/[where I live]?” You can show them this map of 25 jobs and demonstrate that you don’t get a ton of choice.

When were applications due?

The TT job market is quite seasonal.

[Bar chart: application deadlines by month, out of 24 total deadlines]

Search committees want to have things settled with their new faculty member before summer break, so the application deadlines cluster together from late November to mid-December. This gives plenty of time for the search to conclude even before spring break. A handful of schools get out ahead of the game by setting earlier deadlines; some, for whatever reason, take a little longer to get their search posted and thus have a later deadline.

What went into an application for a TT job?

Every application requires a curriculum vitae, a cover letter, and 3 letters of recommendation.

64% of job applications require additional materials on top of this.  These materials will help the committee evaluate you further, often aligning with the priorities of the school/job. Teaching statements, diversity statements, sample syllabi, and teaching videos are all common from teaching-focused schools; research statements and writing samples are common from research-focused schools; but, of course, many schools want you to excel in both.

Below are the percentages of applications (out of 25 total) that required each type of material. Many applications requested more than one of these.

How long did it take to get results?

Unlike jobs in the outside world, academic job searches are ssslloooowwww.

[Chart: elapsed time from application deadline to final result]

Some of the jobs, as of today (May 2), still have not sent out results, nor has anyone posted them on the jobs wiki, so you’ll notice I only have 20 results here total. On average, you can expect to wait 3–4 months after the application deadline to hear officially that you did not get the job (because if you did, you probably would know by now).

Some Recommendations

I prepared a lot for all of my interviews and materials, and I got good results. This preparation took many forms: reading books, personal mentoring, and my own foresight and planning.

Books

For books, I most highly recommend The Professor Is In by Karen Kelsky. Karen also has a website, which can be useful for the comments sections, but all the good stuff has been exported to her book. She has a very no-nonsense, bare-bones writing style that steers you through the minefield of the job search. There is also a lot of advice specifically tailored to women. The Academic Job Search Handbook is drier and more thorough, with lots of sample materials. I did not read this cover-to-cover, but I did reference it from time to time.

Mentors

The significance of personal mentoring can’t be overstated. I ran almost everything by at least two advisors at first. As I got more confident, I was more selective, but still pretty much always checking in with these advisors. This took several forms:

  • one-on-one meetings and feedback on materials with my mentors
  • workshopping my letters and statements with my school’s Professional Development office
  • test calls with friends on Skype to check lighting, dress, background, connection speed, etc., before doing Skype interviews
  • runthroughs of my job talks with my peers and mentors for critique
  • runthroughs of my lessons with pedagogy experts for critique and tips
  • mock job interviews in front of a “committee” of advisors and peers

I asked for a lot of help throughout this process, and people were very willing to give it. Don’t be shy and don’t try to do this on your own.

On letters of recommendation

Dossier services seem to be the norm for our field. Interfolio is well-known, but you have to pay to send your letters places—something I find a bit backwards. I used a free alternative, Chronicle Vitae, with no apparent ill effects. Some jobs do require that you use Interfolio, but in these cases, you do not pay to apply. Once, though, I did pay to use Interfolio to snail-mail my dossier for an application that required a hard copy. I could have gathered the letters and mailed them myself, but it didn’t seem worth the savings to me… but anyway, this is not typical.

Dossier services are nice because you get to control the distribution of letters, and you know that the letters have been sent. Unfortunately, many schools use application portals that auto-generate an email to your letter-writer, requesting that they upload their letter to the school’s system. This is harder to manage effectively, and I recommend planning to complete your applications two weeks in advance of the deadline to deal with this. I talk about this more below.

Stay organized

I am a highly organized person, which is part of what helped me survive this. I suggest two spreadsheets and a packing list:

  • A “jobs summary” spreadsheet with the following categories of data: school names, locations, application deadlines, “unusual materials” (see above, not really that “unusual” per se, but beyond the standard), a link to the job announcement, how letters of recommendation are requested (see below), and date I submitted the application. Create this at the beginning of your search. As time went on, I added new columns for each stage that I progressed through. When I had submitted an application, I made the text gray. When I got a nibble, I made the text green. When I got a rejection, I struck through the text. It was very helpful to see this all in one place.
  • A spreadsheet to help deal with recommendation letters, which lists all the schools that ask letter-writers to upload/email the letter of recommendation themselves along with deadlines. You will need the spreadsheet for two reasons: one, so that you can be timely and considerate about what you are asking of your letter-writers, and two, so that you can personally email your letter writers and check in on the status of your letters. Even before you’re ready to apply, see if you can request the letters from your letter-writers without submitting the application—often, you can. Request the letters ASAP. If you must submit the application before the portal will request letters, then you need to submit your application one to two weeks early. This gives you plenty of time to warn your letter-writer about the application and to check in on them. I also liked to do this in batches, then send an email saying something like “Today, I applied to 3 schools that require you to upload a letter: X, Y, and Z. Please look for these emails in your inbox and upload the letters ASAP. The deadlines are A, B, and C.”
  • A packing list, like the one I’ve made here. I really did print this off and check off the boxes. This is maybe the most helpful thing in this blog post, because my brain was never in the right headspace to plan to pack things like clothes and snacks. I’ve generalized the list to be applicable to everyone (I think), accessible here. (Let me know if you think something should be added!)

Further reading

Here are some helpful websites for your music theory job search:

  • Music Theory Online’s job postings. Almost all jobs will be posted here.
  • The Music Theory and Composition Jobs Wiki. The wiki is a(n infamous) crowd-sourced gathering of information on all the jobs out there, TT and non-TT. You can go here to find out about jobs that might not have made it to MTO. More realistically, you can go here to find out whether or not you should give up hope on your dream job (seriously—I find knowing better than not knowing, but your mileage may vary).
  • Kris Shaffer, “So You Want to Be a Music Theory Professor.” Kris does some wonderful statistical analysis based on data from the jobs wiki to talk about how many jobs there are, who gets the jobs (institution and year of PhD), and how many applications there tend to be.

As a disclaimer, I have never been on the other side of this, i.e., on a TT faculty search committee, but my memory is quite fresh with all these experiences from the past months. My opinion is that we would all benefit from speaking more freely about our experiences so that everyone knows what to expect on the job market.

Beat of a Different Drummer?

(Is this title too dorky? Be honest.)

(…Actually, don’t tell me.)

In my dissertation research I’m turning toward drum machines. It’s a natural extension of my ’80s sound inquiries: if the Yamaha DX7 was so important to the ’80s sound, drum machines like the LinnDrum and the Roland TR-808 were at least equally important.

Analyzing the timbre of drum machines using my existing apparatus has revealed how biased toward pitched phenomena theories of timbre really are. For example, so many theories of timbre are completely preoccupied with overtones/partials and their relative loudness. (For more info on spectrogram analysis, check out the first half of this blog post.)

This spectrogram is of a harmonica synth playing a melody. Time is on the x-axis in seconds. Pitch is on the y-axis in Hertz (higher Hz = higher pitch). The bottom line of this spectrogram, at around 500 Hz, is the fundamental pitch. Colloquially we just call this “the pitch.” The parallel lines running above the fundamental are the partials of this sound. You don’t hear them as separate notes, but instead you hear a change in timbre.

But for many percussion instruments, like drums and cymbals, you won’t see any partials like that at all. Even drums that are pitched don’t really have partials running in multiple parallel lines above the fundamental.

These are samples from a Roland TR-808: bass drum, low tom, mid tom, high tom, snare, closed hi-hat, open hi-hat, clave, and handclaps. Notice how these are all just thick bars of sound, not at all like the parallel strands in the above example.

So it does us no good at all to talk about partials, how those partials compare to the ideal natural harmonic series, whether there’s vibrato, etc. Yet, that’s the majority of the focus of spectrogram analyses.
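To see this difference in numbers rather than pictures, here is a minimal sketch (in Python, with invented placeholder file names) comparing the spectrum of a pitched tone to that of a drum hit. A harmonic tone concentrates its energy in a few narrow peaks at roughly integer multiples of the fundamental, while a drum smears its energy across a broad band:

```python
# Rough sketch: harmonic tone vs. drum hit in the frequency domain.
# "tone.wav" and "kick.wav" are hypothetical placeholder file names.
import numpy as np
from scipy.io import wavfile

def spectrum(path):
    rate, x = wavfile.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                  # mix stereo down to mono
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return mag / mag.max()                  # normalized magnitude spectrum

def peakiness(mag):
    # Crude measure: a harmonic tone has a few dominant bins, so its
    # max-to-mean ratio is large; broadband percussion is much flatter.
    return mag.max() / mag.mean()

for name in ("tone.wav", "kick.wav"):
    print(name, "peakiness:", round(peakiness(spectrum(name)), 1))
```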

Over the next few weeks I’m going to start finessing how we can talk about timbre in non-pitched percussion instruments. For now, back to the grind…

A theory of attacks?

Studies have shown that the attack (onset) of a sound plays an important role in a listener’s ability to accurately determine the sound’s source. In Saldanha and Corso 1964, listeners were able to identify the source of a tone with 50% greater accuracy if the attack of the sound was included in the sample, as opposed to a sample that cuts out the attack and plays only the sustain of the sound.

Therefore the attack of a sound must greatly influence our perception of timbre. In order to summarize the most important aspects of a timbre, my methodology must have an adequate way of accounting for the attack of the sound. How to do this? At the moment, my methodology is based on a system of oppositions. My first thought, of course, was an opposition between sounds with a fast attack and a slow attack. But isn’t this oversimplifying? There are probably degrees of variance between “fast” and “slow.” (Now you have a little insight into what I think about when I walk between my apartment and the cafe.)
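To make the fast/slow question concrete: one crude way to put a number on an attack, rather than a binary, is to measure the time from onset to the peak of the amplitude envelope. This is just a sketch of that idea, not my actual methodology; the threshold and smoothing window below are arbitrary illustration values.

```python
# Sketch: estimate attack time as the time from onset to envelope peak.
# The 5% onset threshold and 256-sample window are arbitrary choices.
import numpy as np
from scipy.io import wavfile

def attack_time(path, onset_thresh=0.05, win=256):
    rate, x = wavfile.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                          # mix down to mono
    x = np.abs(x / np.abs(x).max())                 # normalized, rectified
    env = np.convolve(x, np.ones(win) / win, mode="same")  # smoothed envelope
    onset = np.argmax(env > onset_thresh * env.max())      # first threshold crossing
    peak = np.argmax(env)                           # index of envelope maximum
    return (peak - onset) / rate                    # attack time in seconds

# A vibraphone sample should return a much smaller value than a bowed string.
print(attack_time("sample.wav"))                    # "sample.wav" is hypothetical
```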

The critique of binaries as being over-generalizing is leveled at me a lot. But McAdams 1999 shows that perhaps this isn’t actually a damaging oversimplification. McAdams also theorizes timbre but from a perceptual approach. From a study asking participants to rank the similarity between 153 pairs of timbres, McAdams devised a three-dimensional timbre space onto which the 18 timbres could all be mapped. One of these dimensions is attack time, on a scale from short (4) to long (−3). Listeners seem to have conceived of attack times as basically short or long, with little middle ground. This is visible in McAdams’s Figure 2 (below) by the grouping of the sounds into two clusters: there are basically timbres that are up high at around +2 (vibraphone, guitar, harpsichord) and timbres that are down low at around −2 (clarinet, trombone, English horn). 

from McAdams 1999, 89.
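(For the technically curious: a timbre space like this is recovered from pairwise dissimilarity ratings via multidimensional scaling. Below is a toy sketch using scikit-learn’s MDS; this is not McAdams’s actual procedure, which is more sophisticated, and the ratings matrix is invented.)

```python
# Toy sketch of multidimensional scaling (MDS) on dissimilarity ratings.
# The 4x4 matrix below is invented, not data from McAdams 1999.
import numpy as np
from sklearn.manifold import MDS

names = ["vibraphone", "guitar", "clarinet", "trombone"]  # illustrative labels
dissim = np.array([
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.3],
    [0.8, 0.9, 0.3, 0.0],
])

mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)          # one 3-D point per timbre
for name, xyz in zip(names, coords):
    print(name, np.round(xyz, 2))
```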

This encourages me, but I admit that binaries are not always going to be appropriate. I have already begun to discard one binary in favor of a number, which measures the distance in octaves between the fundamental and the highest sounding partial. In Figure 2 above, spectral centroid and spectral flux (which colloquially might be called brightness and hollowness, respectively—I’ll save a more thorough investigation of these ideas for another time) do not neatly fall into two groups: on these axes, the timbres are scattered across the values from 3 to −3. So in these cases, the usefulness of binaries may have to be reassessed.
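As a sketch of the kinds of numbers involved: the octave-distance measure is just log2 of the ratio between the highest partial and the fundamental, and spectral centroid is the amplitude-weighted mean frequency. The partial frequencies and amplitudes below are invented illustration values.

```python
# Sketch: two continuous timbre measures mentioned above.
# The partial frequencies and amplitudes are invented illustration values.
import numpy as np

freqs = np.array([500, 1000, 1500, 2000, 4000])  # partial frequencies in Hz
amps = np.array([1.0, 0.5, 0.3, 0.2, 0.05])      # their relative amplitudes

# Distance in octaves between fundamental and highest sounding partial:
octave_width = np.log2(freqs[-1] / freqs[0])     # log2(4000/500) = 3 octaves

# Spectral centroid: amplitude-weighted mean frequency ("brightness"):
centroid = (freqs * amps).sum() / amps.sum()     # = 1000 Hz here

print(f"width: {octave_width:.1f} octaves, centroid: {centroid:.0f} Hz")
```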

Even if binaries are good for assessing slow versus fast attack, I may still not be adequately capturing other ways that attacks contribute to timbre. McAdams 1999 actually used FM-synthesized sounds, not acoustic sounds, in this study. I haven’t studied this exhaustively, but my hypothesis is that FM synthesizers like the Yamaha DX7 do not have attack sounds as complex or as nuanced as acoustic instruments do—even though the DX7 has a highly sophisticated envelope generator. Perhaps McAdams’s use of FM synthesis led to this binary being a useful generalization; acoustic instruments may have opened up more subtleties in attack sounds that would not be so easily captured.

MTSNYS 2016 reflections

Another energizing and inspiring conference is past: Music Theory Society of New York State’s annual meeting. This year it was held at Mannes’s new campus–beautiful space. And a lot of really wonderful papers! The conference actually felt short to me, I think because I saw only two full paper sessions; the other session I attended was a “lightning round” of short papers.

Yesterday I presented my paper, “Following Schenker’s Lead in Analysis of Stravinsky” (handout here). It was actually one of two papers in the conference about understanding Stravinsky’s neoclassicism using tools for tonal analysis–the other presenter, Sarah Iker, approached it from schema theory! I had great discussion with her afterward.
As I expected, the discussion after my paper was quite lively–maybe that’s why my paper was at the end of the conference! This post is probably mostly for myself: I want to reflect on the feedback I received so that, after taking a break from this paper, I can revise it and address some of these concerns.

One piece of feedback I had, amazingly, never received before was that this was perhaps normalizing out some octatonicism, a well-established contributor to Stravinsky’s harmonic language. I hadn’t even noticed octatonicism in the excerpts I analyzed, truly, because one can also understand them as polytonal diatonic fragments, and that’s how I was experiencing them. I find this happens a lot: I hear things and think immediately “polytonal,” and others hear the same music and think, just as immediately, “octatonic.” At any rate, I’ll have to revise my paper to explicitly address and possibly theorize octatonicism a bit.

Another expected comment was about over-normalizing Stravinsky more generally. My approach proceeded from writing a recomposition of Stravinsky’s work that normalizes it to fit within the norms of tonality (what is “tonality” anyway? Another issue that needs more theorizing…) so that Schenkerian theory could deal with it fully. Does this erase what makes Stravinsky Stravinsky?

I tend to think that it does not. I chose a piece (Symphony in Three Movements) that really seems to invoke tonality rhetorically. It seems fair, then, to relate it to tonality in analyzing it. The final product of my methodology is not the normalized Schenkerian sketch, but the overlay of the sketch back onto the original score. This actually highlights the differences between Stravinsky and typical tonality. I was approaching it in a Hepokoski/Darcy light: here are the norms; now how have they been manipulated? It’s never supposed to be pejorative to say that Stravinsky is not doing the normal thing. I adore Stravinsky’s neoclassicism because of his peculiar way of twisting the norms.

Is it even fair to proceed from the idea of “norms”? I think so. And one thing that Sarah Iker discussed at length in her own presentation about Stravinsky was how the culture of modernism was very much interested in the traditions of the 18th century more generally, and also that Stravinsky had learned about things like tonal schemata (obviously not using that terminology).

Probably the most popular sentiment, though, was that more time should be devoted to discussing the differences between the surface and the recomposition. What are possible motivations for the shifts Stravinsky has made? What’s the effect?

I think for a 30-minute paper I discussed all I could. But I intend to expand this paper into an article! So these are all directions to expand in, to get it to article length. Any other feedback from any readers would be appreciated via Twitter or via email!

Analyzing timbre

So I’ve explained my rationale for analyzing timbre, and for specifically focusing on the Yamaha DX7, in another post; now it’s time to show this in action.

I base my analysis on the visual aid of the spectrogram. A spectrogram visually represents all the sounding frequencies on a two-dimensional graph, with pitch indicated in hertz on the y-axis and time represented on the x-axis, and loudness represented through color. Here is a spectrogram, paired with a transcription of the line you see in the spectrogram:

[Transcription and spectrogram of a flute line]

The header image for this website is a spectrogram, too (I used a prettier but maybe less useful color scheme for the header image). All those parallel lines are actually just part of one note. The loudest line—the thickest line with the white color—is the fundamental, i.e., the pitch that we perceive as “the notes” that are being played. These are the notes that get transcribed above. All those other lines above and below it are partials: other frequencies that are actually sounding at the same time. These could be heard as separate notes occurring at the same time, but instead they’re subsumed within the fundamental; we experience these other pitches not as pitched lines, but as part of the timbre of the fundamental.
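For readers who want to generate spectrograms themselves, here is a minimal sketch in Python using scipy and matplotlib. The file name and the window-size parameters are placeholder assumptions, not the settings I use.

```python
# Minimal spectrogram sketch; "flute.wav" is a hypothetical WAV file.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("flute.wav")   # sampling rate (Hz) and PCM samples
if samples.ndim > 1:
    samples = samples.mean(axis=1)          # mix stereo down to mono

# Frequency (Hz) on the y-axis, time (s) on the x-axis, loudness as color.
f, t, Sxx = spectrogram(samples, fs=rate, nperseg=4096, noverlap=3072)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")  # dB scale
plt.ylim(0, 8000)                           # most partials of interest sit low
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```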

Step 1: Find songs to analyze. After playing with my own DX7 for many hours, I’ve learned to identify the Yamaha DX7 presets by ear. I often start looking for songs by perusing http://www.officialcharts.com, the archive for the UK’s Top 40 charts (I find their website more usable than Billboard’s, plus synthpop was more popular in the UK than in the US).

Step 2: Isolate the DX7 sounds. Once I’ve found a track with a few DX7 sounds in it, I hook my DX7 up to my computer and rerecord the synthesizer lines myself, to isolate them from the rest of the track.

This step is essential to get any clarity in the spectrograms. Here is what a spectrogram looks like for the entire composite track of “What’s Love Got to Do with It”:

 

https://www.youtube.com/watch?v=Fkx9l-B6W-M

Some things come out clearly, such as Tina Turner’s voice, the sustained bass line, and the hi-hat. But generally, it’s difficult to separate out what instrument is creating what visual aspect of the spectrogram. The flute line, a DX7 preset that is very salient in the middle of this clip, is almost impossible to see. But if I rerecord the clip myself, we get a much clearer image of the flute sound:

https://www.youtube.com/watch?v=SgX_ZYyZiGQ

We can do a lot more with this! All the other clutter is out of the way, and the sound of the flute is clearly visually represented.

Step 3: Analyze the spectrogram. This is the most difficult part of my dissertation work, and it’s still very much under construction. I’m basing my approach on work by Robert Cogan. Cogan borrows an old approach from linguistics called “oppositional analysis.” Building from Cogan’s 13 oppositions, I have my own list of 20 or so oppositions, each of which isolates one facet of the timbre of the sound and gives it a negative or positive designation (these aren’t meant as aesthetic judgments; think of “negative/positive” like you do with a battery, not like with an Amazon review). I usually summarize these results in a table of plusses and minuses, and sometimes plus-minuses (±) or neutrals (∅). In the future I’ll write a post that focuses on my issues with Cogan’s oppositions, and brainstorm future possibilities and alternatives. For now, here is a toy sketch of what such a summary table can look like, and then let’s dive into an example analysis.
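This toy encoding uses two oppositions that come up in the analysis below. Which pole gets a “+” is my arbitrary choice for illustration, and the CALLIOPE and FLUTE 1 spacing values are invented rather than taken from my analysis.

```python
# Toy encoding of an oppositional-analysis summary table.
# "+" marks the second pole of each opposition (wide, spaced); the
# CALLIOPE and FLUTE 1 spacing values are invented for illustration.
analysis = {
    "CALLIOPE":   {"narrow/wide": "-", "non-spaced/spaced": "-"},
    "FLUTE 1":    {"narrow/wide": "-", "non-spaced/spaced": "-"},
    "HARMONICA":  {"narrow/wide": "+", "non-spaced/spaced": "-"},
    "E. PIANO 1": {"narrow/wide": "+", "non-spaced/spaced": "+"},
}

print("sound".ljust(12) + "narrow/wide".ljust(14) + "non-spaced/spaced")
for sound, row in analysis.items():
    print(sound.ljust(12) + row["narrow/wide"].ljust(14) + row["non-spaced/spaced"])
```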


Tina Turner’s most well-known single, “What’s Love Got to Do with It”, uses four distinctive DX7 presets: CALLIOPE, FLUTE 1, E. PIANO 1, and HARMONICA. It could practically be a demo tape for the DX7. The track peaked at #1 in the US in September 1984 and at #3 in the UK in June 1984. I like using this track as an example because it uses so much DX7 and it was so immensely popular, so it’s a good example of how the DX7 saturated popular music of this time.

The E. PIANO 1 preset is used the most in this track, constantly supporting Turner’s vocals. It’s mixed softly and doesn’t draw attention to itself. E. PIANO 1 is what I call a core timbre—a sound that’s used as the foundation of the track, like an electric guitar or drum set would be in a typical rock song. This is as opposed to a novelty timbre, which is used more sparingly for a coloristic effect. Core vs. novelty is not a distinction based on oppositional qualities or anything to do with the timbre itself; it’s more of an orchestrational concern (how timbres are used) than a timbral one (what details comprise the sound).

The CALLIOPE (heard in the intro and in verse 2), FLUTE 1 (heard in the prechoruses, and featured earlier in this post), and HARMONICA sounds (featured in the instrumental and in subsequent choruses) are all novelty timbres. CALLIOPE and FLUTE 1 both play some of the song’s hooks. They are all mixed very loudly in the track, and are only heard when they are replacing Turner’s vocals. The HARMONICA sound is used for a lengthy solo section, and afterward improvises some descant lines while Turner is singing. Here are the spectrogram images for all four of these sounds:

One interesting opposition for this set of sounds is narrow/wide, which captures a property of timbre often colloquially referred to as “brightness.” It refers to the distance between the fundamental and the highest sounding partial. CALLIOPE and FLUTE 1 are both narrow (dark) sounds, and they’re used in similar ways: both are novelty timbres that play short hooks that sound only while Turner is not singing. Dark sounds tend not to carry so well, and these two also play in approximately the same range as Turner’s singing, so hearing the FLUTE 1 or CALLIOPE simultaneously with Turner would muddy them. HARMONICA is a wide (bright) sound. This makes the HARMONICA a good candidate for the extended solo that replaces Turner’s vocals in the instrumental. It also allows the sound to better compete with Turner’s voice when it improvises in the last choruses of the song.

E. PIANO 1 is a core sound, and it is also a wide sound, like the novelty timbre HARMONICA. E. PIANO 1 does not sound as aggressively loud as the HARMONICA, however. This is due of course to the volume of the two sounds, but also to another timbral property: non-spaced/spaced. Most sounds have partials that occur regularly at certain frequencies: in hertz, the relationships between the fundamental and partial 1, partial 2, partial 3, etc., are 1:2, 1:3, 1:4, etc. A sound that follows this general rule is a non-spaced sound: all the expected partials are present. But not all sounds do this—some sounds skip some of these partials. E. PIANO 1 is one such sound: it has the first five partials above its fundamental as expected, but after this there is a big gap, and the next sounding partials are what would be partials 11 and 12 in the ratio pattern explained above. Aurally, a spaced sound is darker than a non-spaced sound, but it can still seem somewhat bright if the sound is also wide, as E. PIANO 1 is.
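Here is a small sketch of how “spacedness” might be checked programmatically: given a fundamental and a list of detected partial frequencies, see which integer multiples are missing. The example frequencies and the 3% tolerance are invented illustration values.

```python
# Sketch: detect missing partials (a "spaced" sound) given a fundamental.
# The example frequencies and the 3% tolerance are invented values.
def missing_partials(fundamental, detected, n_partials=12, tol=0.03):
    missing = []
    for k in range(1, n_partials + 1):
        expected = k * fundamental
        if not any(abs(f - expected) <= tol * expected for f in detected):
            missing.append(k)
    return missing

# A spaced sound: multiples 1-6 present, then a gap until 11 and 12.
detected = [100, 200, 300, 400, 500, 600, 1100, 1200]  # Hz, invented
print(missing_partials(100, detected))                 # -> [7, 8, 9, 10]
```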


There are some problems inherent in spectrogram analysis; basically, what is most visually apparent in the spectrogram is not necessarily what is most aurally apparent. But I think in order for a theory of timbre to catch on, for other music theorists to want to do it, a visual medium is basically required. Visual aids are really useful for the kinds of analysis that music theorists like to do: we like to ponder music deeply and slowly, at our own pace. We like to be able to point to things that we can publish in a paper. This isn’t a bad thing, but it’s a limitation to be aware of.

This analysis above is, as of now, fairly basic. It was only meant to explain how my theory is supposed to work; it doesn’t provide any great insight into “What’s Love Got to Do with It” as a track. But there are many questions that I’ll investigate in my dissertation that an analysis like this might answer.

As I stated, the novelty vs. core designation doesn’t inherently have anything to do with timbral qualities. Nevertheless, there are often commonalities. In songs I’ve analyzed, core sounds almost always 1) are steady (not wavering) in pitch; 2) have partials that conform to the harmonic series; 3) lack a synthetic undertone; and 4) are sustained (not clipped). Novelty sounds cannot be generalized, but in a way this is also distinctive: while core sounds are generalizable, novelty sounds are not. Are there songs where the core sounds have the opposite qualities from those I listed above? What happens if this relationship is somehow transgressed? It’s my suspicion that breaking this “rule” will sonically represent some kind of Other Thing. I know that the DX7 was used in sci-fi TV soundtracks of the 1980s, such as Doctor Who and The Twilight Zone. I want to look at which sounds are used as core sounds or novelty sounds in those soundtracks, compared to the way sounds are used in pop music.
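For fun, the four core-sound tendencies above can be written as a crude checklist. The attribute names and the example sound here are hypothetical, just to make the list concrete.

```python
# Sketch: the four core-sound tendencies above as a crude checklist.
# The attribute names and the example sound are hypothetical.
def looks_like_core(sound):
    return (sound["steady_pitch"]                 # 1) steady (not wavering) in pitch
            and sound["harmonic_partials"]        # 2) partials conform to the harmonic series
            and not sound["synthetic_undertone"]  # 3) no synthetic undertone
            and sound["sustained"])               # 4) sustained (not clipped)

example = {"steady_pitch": True, "harmonic_partials": True,
           "synthetic_undertone": False, "sustained": True}
print(looks_like_core(example))  # -> True
```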

Another question, one that relates more to my last post, is how the timbral profile of the DX7 compares to other (analog) synthesizers being used in the 1980s, or in other decades. How do these oppositions define that 80s “sound” that is so distinctive and polarizing? I’ve been looking at issues of Keyboard Magazine and New Musical Express from the mid-1980s. Discussions of the DX7 abound, and one theme recurs: digital FM synthesis (the technology used in the DX7) sounds “cold” compared to the analog synthesis used in other synthesizers. What features of the DX7 (and by extension, FM synthesis) are contributing to this consensus?

I can also expand the instruments I’m investigating. Another major component of the 80s “sound” was the prevalence of drum machines like the LinnDrum and the legendary Roland TR-808. Like the DX7, these drum machines had pre-programmed sounds that users relied on, and which I could easily reproduce. How do the “fake” drums and bass sounds of these machines compare to “real” sounds produced on acoustic instruments and non-synthesizer electric instruments? This would also work toward a precise definition of an 80s sound. For now, I’m leaving all these questions unanswered. In the future, as I make cool conclusions based on this method of analysis, I hope to share tidbits on this blog.

WHAT is the DEAL with TIMBRE?

After reading roughly 10,000 articles and books about the analysis of timbre, I can say with confidence this is how all of them start out. So here’s my own explanation of timbre’s DEAL. Timbre is more colloquially known as “tone color.” Imagine two different instruments, e.g., a violin and a trumpet, playing the same exact note at the same exact pitch, the same exact volume, and the same exact duration. You can still tell them apart, because the instruments have different timbres. You don’t need to have special training to tell that they are different; timbre is something that we intuitively understand.

In terms of how timbre relates to music, or specifically to popular music, it’s what gives each band their “sound.” It’s often said by music theorists that timbre is one of the most important aspects of popular music (e.g., Tagg 1982), while in classical music it’s maybe not so important. Even though this is generally agreed upon, music theorists still focus on things they focused on when dealing with classical music: pitch, rhythm, harmony, form.

In other words: even though timbre is highly intuitive, and so central to our experience of music, music theorists still don’t really talk about it! It’s my assertion that this is just because no clear methodology has been established for the analysis of timbre, at least not one that is as accessible as theories of pitch/rhythm/form. I want to try and fill this gap with my own work.

Timbre is a big topic that affects every kind of music, but I’m focusing on 80s music. This is a body of music that definitely has a “sound,” created partially through the timbres being used. It’s a very polarizing sound; people either say “80s music is so terrible” or “I love 80s music!” when I tell them about the repertoire I’m focusing on. One unique aspect of this music, which likely contributes to this love/hate reaction, is a heavy reliance on synthesizers throughout almost every track. One synthesizer in particular, the Yamaha DX7, was particularly pervasive, and so this synthesizer is the focus of much of my dissertation work. Crucially, the DX7 provides the bass line in many iconic 80s tracks, like “Danger Zone” and “Take On Me,” rather than an actual electric bass guitar. In my eyes, this sound, along with many other famous timbres that came from the DX7, is a major part of the “sound” of the 80s.

I’ve always loved 80s music—I think this is because I’m a keyboardist, and the 80s is the only decade since the 1940s in which keyboards were more pervasive than guitars in popular music. And everyone knows that you have to really, really love the repertoire you study in your dissertation. But my choice of repertoire and instrument has more to do with issues of convenience. The DX7 is special because the sounds for which it’s famous are actually presets, sounds that were pre-loaded onto the machine when it shipped out to buyers. (The DX7 was notoriously difficult to program yourself, so the presets were to help make it more accessible.) This means that I can duplicate these sounds exactly in my own home with my own DX7 by simply pressing a button. If I wanted to study, for example, the Rickenbacker 12-string guitar that the Beatles used in “A Hard Day’s Night” and other tracks, I’d not only have to acquire that same guitar, but also the same amplifier that the Beatles used, and then use the same settings on the various knobs, before I could adequately duplicate the timbre.

Now that we all know the DEAL with TIMBRE, in my next post, I’ll talk about how I actually go about analyzing timbre myself.