IASPM 2017

I presented at my first International Association for the Study of Popular Music conference, the biennial international one, which was held this year in Kassel, Germany, at the Kulturbahnhof: the former Hauptbahnhof (main train station) of Kassel, now converted into an arts center. A super cool venue. (Full conference program and abstracts available here.)

[Photo of the Kulturbahnhof's Südflügel, from http://www.kulturbahnhof-kassel.de]

The program for this conference was huge, with something like six parallel sessions running at once. I tended to favor panels that were music-theory-ish, dealt with music technology, or dealt with gender.


Music-theory-ish

I doubt I would have applied had I not been asked by a colleague, Nick Braae (whom I met at the Osnabrück popular music summer school in September 2015), to form a lovely panel. Here is the full information for the panel:

Shaping Sounds and Sounds as Shapes in Popular Songs—Contemporary Analytical Approaches

Alex Harden (UK): “Oneiric Narrativity and Recorded Popular Song”

Harden analyzed how recording techniques may interact with the lyrics to create an oneiric sound space, focusing on Kate Bush’s “Waking the Witch.”

Megan Lavengood (USA): “Analyzing Sound, Analyzing Timbre”

I presented a trimmed-down version of Chapter 5 of my dissertation, ultimately arguing for more analysis of timbre in popular music studies.

Bláithín Duggan (Ireland): “The Shape of the Voice: Analysing Vocal Gestures in Popular Song”

Duggan took a holistic approach to motivic analysis, moving beyond pitch and rhythmic content to attend to subtler details of dynamics, timing, and pitch, using early Beatles songs as a corpus.

Nick Braae (New Zealand): “Analysing Musical Time in Popular Songs”

Braae discussed cyclic versus directional time created through interactions between song forms, harmonies, and melodies.

We were teamed up with another panel, organized by Kai Arne Hansen:

So What? Contemporary Approaches to the Interpretation and Analysis of Disparate Popular Musics

Kai Arne Hansen (Norway): “Darkness on the Edge of Pop: Constructing Masculinity and The Weeknd’s ‘The Hills'”

Hansen gave a preliminary exposition on themes of violence, misogyny, and darkness in The Weeknd’s music and music videos.

Steven Gamble (UK): “Empowerment and Embodiment in Rap Music”

Gamble analyzed elements of rhythm in “Backseat Freestyle” by Kendrick Lamar, identifying musical elements which contribute to a sense of empowerment through embodiment.

Claire Rebecca Bannister (UK): “Psychopharmacology and the Analysis of Goth Music”

Bannister discussed goth music as a psychedelic genre, drawing on the psychopharmacological concepts of “set” and “setting” to define what constitutes psychedelia.

Andrei Sora (UK): “To Prepare a Face to Meet the Faces that You Meet: The Persona in Instrumental Music”

Sora applied persona analysis to the unusual genre of instrumental popular music. While persona analysis is frequently applied to instrumental art music, it is rare to see this approach taken with instrumental popular music, where the notion of an analytical persona interacts with a perhaps more robust public persona.

The goal of both panels together was essentially to showcase new work by young scholars in the field of popular music analysis, and to show what we can do that is sort of “outside the box,” although maybe there is no such “box” anymore!


Recording technology and the music industry

Steffan Lepa (Germany): “The Diffusion of Music Streaming Services in Germany”

Lepa reported on data collected by his project, Survey Musik und Medien, from a 2012 survey of German music listeners and a 2015 follow-up. The data was used to develop hypotheses about changes in the audio media people use to listen to music. Lepa noted that this data differs from many other sources because it is derived from a survey of listeners, whereas most data comes from sales figures. The team divided listeners into classes based on how their listening habits changed between the two surveys: versatile audiophiles, digital mobilists, selective traditionalists, selective adopters, versatile traditionalists, and radio traditionalists. The last category was created because a large portion of people only listened to music on the radio—the fact I found most surprising in this presentation. I would love to compare this to a similar study in the US—is radio equally prevalent here?

Chris Anderson (USA): “Contemporary Strategies for Making, Distributing, and Gifting Music”

Anderson featured two case studies of musicians giving away their music for free, relating this to Attali’s utopian vision of creation for self-satisfaction instead of monetary gain. I am hopeful that in future studies the author might consider the implications of class, and also of devaluing art. One of his subjects was only a hobbyist musician. It would be interesting to see who releases music for free out of “self-satisfaction instead of monetary gain,” versus who releases it for free under economic pressure to compete.

Franco Fabbri (Italy): “Binaurality, Stereophony, and Popular Music in the 60s and 70s”

Fabbri articulated an important distinction between stereophony and binaurality: if a typical stereo setup is meant to imitate having the best seat in the house, then headphones position the listener actually in the center of the stage. For symphonies, this might be like sitting next to the conductor. Fabbri also highlighted that while classical recording practices typically valorize “realism” in the mixes, in the case of concertos, the mixing typically creates an unreal sound space, as the performer is mixed in both channels.

Pat O’Grady (Australia): “The Politics of Digitizing Analog Technologies”

O’Grady reported on the variety of virtual and digital technologies that are meant to imitate analog recording equipment. Most fascinating to me and my research was the language O’Grady reported being used to describe these plugins: they are typically said to have “warmth,” and “smooth,” “glue,” and “musical” are other recurring descriptors. I would love to learn more about these plugins and compare this language with the language used to describe the Fender Rhodes and other technologies that are often contrasted with the Yamaha DX7.

Steve Waksman (USA): “Remaking aliveness in American Music, 1900–1930”

Waksman gave an account of the use of the word “live” as an adjective to describe technologies. Advertisements for sheet music would use the word “live”: “live songs for live singers by live authors.” We nowadays think of sheet music as kind of a dead object, maybe, but at this time it was being sold as more “live” than the “canned music” of recordings. The American Federation of Musicians launched a campaign that praised the virtues of live music as movies with sound killed off jobs for musicians as silent movie accompanists and vaudeville musicians.


Gender in popular music

Robin James (USA): “Queered Voices in the Era of Post-Feminist Pop”

James featured two queer artists who do not conform to the post-feminist feminine ideal of resilience and overcoming. In one case, the artist Bottoms talks about emotional damage and, rather than overcoming the damage, enjoys it. The artist collective Decon/Recon writes music in a deeply collaborative way in order to resist ownership, and thus the post-feminist ideal of feminine empowerment. I would love to see more of this kind of scholarship in pop music analysis, which in my view often relies on tropes of empowerment in its narratives. James also gave a keynote at the opening of the IASPM conference, which I unfortunately had to miss.

Sarah Dougher and Diane Pecknold (USA): “Girls Rock! Reverberations and Limitations”

Like Robin James, Dougher and Pecknold drew attention to the post-feminist assumption that femininity is equated with overcoming. They pointed out that this places an additional burden on girls, in a sense: all girls are feminized, but good girls overcome this. Dougher and Pecknold traced representations of the “Girls rock” theme from Jem through Black Girls Rock!.


Concluding thoughts

The experience of meeting these international voices was incredible.

There were a lot of papers, so the quality was sometimes hit-or-miss, but I saw many very high-quality presentations. And having such a huge program meant that I was almost always seeing something that related to my research in some sense.

IASPM provided lunch and several coffee breaks every day, as well as an opening reception, which made it easy to socialize with conference-goers.

Slots were 30 minutes for the paper and questions, but the organizers did not insist on a 20-minute paper with 10 minutes of questions; rather, they left it up to each presenter how to divide the time. I love 30-minute slots—45 is way too long (ahem, SMT)—but I think question time ought to be mandated. Questions are usually the best part!

For some reason there was a lot of drama in the three-hour-long general meeting (which then went over time!), but I’m gonna go on the record and say that IASPM 2017 was a great conference for me. I’m grateful to all involved in its organization and success.


What Makes It Sound Like Christmas?

Every year, music theory enthusiasts begin to ask the same question: “what makes it sound like Christmas?”

[Screenshot of recurring “what makes it sound like Christmas?” threads on /r/musictheory]
As you can see, this discussion recurs every year in /r/musictheory.

Vox.com incurred the wrath of Twitter’s musicologists after posting a video on Mariah Carey’s “All I Want for Christmas Is You” that suggested iiø7 chords are what make it sound Christmassy. The video begins by stating the research question, “What makes Mariah Carey’s song sound so incredibly Christmassy? Aside from the sleigh bells, of course.” It then proceeds to discuss the harmonic content of the song and how the harmonies signify Christmassy-ness.

Vox’s declaration that iiø7 chords sound Christmassy irritated musicologists for many reasons, but perhaps the main one is this:

In the Vox video, in all those Reddit posts, and indeed in much of beginner music theory, there is an obsession with finding explanations specifically in the harmonies of a song. This reflects the overall bias of music theory: we focus on teaching harmony most of the time. Curiosity about how harmony elicits emotions is natural in this context. It only becomes problematic when the discussion of harmony excludes other music-analytical domains that are more relevant to the track’s signification—namely, timbre!

“What makes Mariah Carey’s song sound so incredibly Christmassy? Aside from the sleigh bells, of course.” That aside is delivered as a throwaway joke—“haha, gotta have sleigh bells in Christmas songs, obviously!” Well, yes! You do! That is actually what makes it sound Christmassy. I would argue the only thing contributing more to its Christmas sound is the lyrical content, with all its allusions to Christmas imagery (stockings, Christmas trees, fireplace, snow). Why focus so much on harmony—which is no different in Christmas music than in comparable pop styles—when we could focus on what really distinguishes this music from other genres?

Do We Know It’s Christmas?

https://www.youtube.com/watch?v=WesKXdaWBq0

“Do They Know It’s Christmas?” is a charity single by the supergroup Band Aid, released in December 1984 to raise funds for famine relief in Ethiopia. It is also among the worst Christmas songs of all time, not only for its musical content but for spreading harmful, reductionist representations of Ethiopia. But it’s a Christmas song nonetheless. So what makes it sound so Christmassy?

Harmony-wise, this track is completely unremarkable. The chords of the verse are F–G–C (IV–V–I in C major); in the prechorus, you have Dm–G–C–F (ii–V–I–IV); and in the chorus we’re back to F–G–C (IV–V–I).

I’d contend that, like a lot of Christmas songs (including Mariah Carey’s “All I Want for Christmas is You”), these harmonies don’t sound particularly Christmassy. Instead, Christmas themes are communicated through the lyrics—that is, by repeating the words “Christmas” and “Christmastime” over and over—and also through the heavy use of synthesized tubular bells. 

“Do They Know It’s Christmas?” features that grand old synthesizer, the Yamaha DX7. I reached out on Twitter to Midge Ure of Ultravox fame, one of the song’s writers, and he confirmed that the DX7 preset called TUB BELLS is the source of this infamous bells sound.

TUB BELLS analysis

Here is the TUB BELLS sound isolated, playing an octave C3–C4, the same sound that you hear at the very beginning of “Do They Know It’s Christmas?”.

Today I don’t have time to get into all the details of this timbre, but if you’ve never heard what’s so special about bell timbres before, well, now you can. In general, bell timbres are special because the overtones that resonate when you strike a metal bar are totally different from the regular harmonic series you get from a vibrating string or column of air. Bell timbres do not follow the harmonic series—bells are inharmonic instruments.

Here’s another spectrogram image, this time for just a single note, C3. (For info on how to read a spectrogram, click here.)

[Spectrogram of TUB BELLS playing a single note, C3]

Since most of you probably don’t immediately know how to translate Hertz into pitch names, I’ve made a transcription in traditional notation of what these partials are.

[Transcription of the TUB BELLS partials in staff notation]

If you’re familiar with the harmonic series, you can see that this series of notes is quite different. If you’re not familiar with the harmonic series, well, here it is:

[The harmonic series in staff notation]

The harmonic series has intervals that progressively narrow in a predictable fashion: each frequency is a whole-number multiple of the lowest (fundamental) frequency. But the partial series for TUB BELLS is not so predictable. Not every partial is a multiple of the fundamental, and the intervals do not progressively narrow.
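If you want to check this arithmetic yourself, here is a minimal Python sketch. The pitch-naming function and the C3 fundamental of 130.81 Hz are my own illustrative choices, not data taken from the DX7; for an inharmonic sound like TUB BELLS, you would feed in the measured partial frequencies from the spectrogram instead of computed multiples.

```python
import math

A4 = 440.0  # reference pitch: A4 = 440 Hz = MIDI note 69
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def hz_to_pitch(freq):
    """Name the equal-tempered pitch nearest to a frequency in Hz."""
    midi = 69 + 12 * math.log2(freq / A4)
    nearest = round(midi)
    cents = round(100 * (midi - nearest))  # deviation from the named pitch
    return f"{NOTE_NAMES[nearest % 12]}{nearest // 12 - 1} ({cents:+d} cents)"

# An ideal harmonic series: each partial is a whole-number multiple
# of the fundamental.
fundamental = 130.81  # C3
for n in range(1, 9):
    print(n, hz_to_pitch(fundamental * n))
```

This prints C3, C4, G4, C5, E5, G5, a noticeably flat A♯5, and C6: the predictable narrowing that the notation above shows. Swapping in TUB BELLS’s measured peaks would instead produce the irregular series transcribed earlier.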

But what does it mean?

The Yamaha DX7 was released in 1983, so the technology was still shiny and new in December 1984. The DX7 was especially renowned for its ability to faithfully replicate percussive sounds such as tubular bells, glockenspiel, and the like, much better than other contemporary synthesizers.

So the TUB BELLS sound in “Do They Know It’s Christmas?” is actually carrying a lot of semiotic weight! The DX7’s TUB BELLS immediately informs the listener that 1) this is a Christmas song and 2) this is an ’80s Christmas song.

In so many cases, when we’re wondering “what makes it sound ____?” where ____ is Christmas, or metal, or Irish, or whatever, the answer lies not so much in the harmonies as in the timbres. Timbre is probably the most immediate aspect of our musical experience. Why shortchange it in our analyses?

“6 Inch” transcription

All aboard the Lemonade train! 🍋 🚂

I tweeted a few days ago that two tracks on Beyoncé’s Lemonade remind me of Sufjan, and that 3:22–3:44 of the track “6 Inch” is classic Sufjan. I say that because of the harmonies and the background vocals, which are actually sampled from an Isaac Hayes song, “Walk On By.” I’m still working up a full analysis, and digging up Sufjan tracks to compare it to, but as a teaser, here’s my transcription of the section in question. I left out Beyoncé’s lead vocals—she’s singing another iteration of “six inch heels, she walked in the club…”

[Transcription of “6 Inch,” 3:22–3:44]

header image credit: https://flic.kr/p/8MWW8d

Should Have Known Better

In my pop music analysis seminar last Wednesday, four of my students presented on harmony and form in four songs that I had chosen. Being a huge Sufjan Stevens fan, I couldn’t help but toss in one of his newest songs along with the more standard Beach Boys and Beatles tracks: “Should Have Known Better,” from the 2015 album Carrie & Lowell.

 https://www.youtube.com/watch?v=lJJT00wqlOo

I chose this song because we had looked over the handout for Mark Spicer’s forthcoming paper, “The Question of Tonality in Pop and Rock Songs.” In this paper he coins the term fragile tonic: a tonic that is sounded in the song in question but weakened somehow, usually by being in inversion. Crucially, a fragile tonic reflects the lyrics in some way, usually a kind of tenderness or vulnerability. One example Spicer uses is Elton John’s “Someone Saved My Life Tonight.”

“Should Have Known Better” is the second track on Carrie & Lowell, an album that Sufjan has stated was written to help him cope with the death of his mother, Carrie (interview here). He had an atypical relationship with his mother, who had several mental health problems and left the family when Sufjan was only one year old. After that, the family saw her only sporadically; for a few years, Sufjan lived with her in Oregon while Carrie was dating Lowell. The lyrics of the album are saturated with references to Carrie and to the state of Oregon, and this track is no exception.

[Lyrics of “Should Have Known Better”]

The song is split in half by an instrumental interlude at 2:38. In the lyrics above, this is after the fifth stanza. In the first half of the song, the lyrics are consistently depressive: “my black shroud” is repeated each stanza, for one example of the tone. The harmony reflects this brokenness by employing the fragile tonic technique. My transcription below gives a harmonic reduction of the chords and a guitar melody. This progression is repeated throughout the introductions, verses, and interludes of the first half of the piece.

[Harmonic reduction of the first half: chords and guitar melody]

The opening E minor tonic is considerably weakened because it’s in second inversion. There is a root-position i chord in the fourth measure of this progression, but here it’s weakened through the use of an added sixth (you could even argue that this is not a i chord at all, and say instead that it’s a vi7 in first inversion). The succession of chords also weakens the status of the i chord in measure 4: approaching the tonic by root motion down a third makes the first chord (III) sound stronger than the second (i add6). As the lyrics tell of a stalled or incomplete grieving process, the harmony sounds likewise unsettled.

The chorus (0:55–1:18, 2:15–2:37) modulates to G major for about four measures. The modulation is always set to the words “Be my rest/vest, be my fantasy,” presumably directed toward the mother. G major as tonic is confirmed with multiple IV–V–I progressions. This harmonic motion is much stronger than anything we heard in the verses. This conclusive and common progression might signify the “rest” in the lyrics, and the comfort and stability that a mother can bring.

[Harmonic reduction of the chorus]

After an instrumental that begins at 2:38, the lyrics become more positive, communicating an acceptance of what is past, and finding joy in everyday comforts (“the neighbor’s greeting,” “my brother had a daughter,” etc.). The instrumentation is more fleshed out. Before 2:38, there’s only acoustic guitar and vocals. But at 2:38, a rhythmic synth, banjo picking, and quiet synthesized percussion are all added to the texture. The song modulates permanently to G major, and the chord progression becomes quite conventional, like something you would hear in a Bach prelude.

[Harmonic reduction of the second half]

The student of mine who presented on this piece is a lyricist and songwriter, and she focused more on the melody than the harmonies. She noted that although the new instrumentation, key, and chord progression can make the second half of the song sound like an unrelated section, the lyrics clearly tie back to the first half—“I should have known better,” “I should’ve wrote a letter,” and references to “feeling” are present in both halves of the song. The same vocal motive is used every time “I should have known better” appears.

As I examined her detailed melodic transcription, I began to notice more motivic connections beyond this. The melody of the second half seems substantially different from the first on a casual listen, but a little side-by-side comparison reveals more connections. Below I’ve identified several motives in the first, E minor half of the song and designated each with a letter.

[Melodic motives in the E minor half of the song, each labeled with a letter]

The G major half reuses many of these motives. As I already mentioned, the a motive continues to head off each stanza, but further, the rest of the melody is also made up of motives from the E minor half, shuffled around:

[Melodic motives in the G major half, reusing the lettered motives]

Most of these motives are altered when they reappear in the second half. Here is what binds each motive together:

  a. Doesn’t really change between the two sections. In the G major section, a tail <G, E> is sometimes appended (a′).
  b. Circles around the pitches A and B, but the goal is B. B begins at the end of the bar and crosses over the barline.
  c. The only motive that uses exclusively long note values. The rhythmic profile is quarter–quarter–half, with the half note on the downbeat of the measure. The high contrast with the rest of the melodic rhythms is enough to bind it together even when the pitches are changed. Note, though, that the line begins on D and ends on E in both c and c′.
  d. Defined primarily by a long string of syncopated notes and an overall falling contour. The d motive ends on A.
  e. (not used in the second half)
  f. Shared pitch content (if we skip over the pickups), ordering of pitches, and contour.
  g. (not used in the second half)
  h. (not used in the second half)

“Unity despite apparent disjunction” is kind of an old-school music-analytical goal. The recycling of materials in any piece of music binds the piece together, helps the listener remember the music after only a few hearings, and helps to create catchiness and cohesiveness. But to turn to the poetic, maybe the reshuffling of motivic material in “Should Have Known Better” reflects the shift in perspective that the narrator experiences between the first and second halves. The things being perceived are the same, but the viewpoint changes. The motivic materials are the same, but there is a new ordering and understanding of them.

This is the second track on the album, and it can seem strangely placed: the rest of the songs on the album are entirely depressing, so putting this track so early sounds a bit off-balance. The very end of the track slows way down, with long synth chords and some distorted acoustic instrument (piano? guitar?).

As I said, I’ve focused on harmony and melody here because that was the focus of the lesson I taught in the seminar. But before I sign off, just a quick word about sound production in this track. It’s phenomenal! Listen to it with headphones. In the first half, the guitar and the vocals are both double-tracked, with one track directed into each side. It’s as though Sufjan is really inside your brain as you listen. It really helps me empathize with the lyrics. In the second half, as instruments are added, the “sound box” (to borrow Allan Moore’s term) widens, and the different instruments seem to form a semicircle around the front of the listener while Sufjan remains close to the ears. It’s just lovely. Interestingly, I’ve been told by representatives of Sufjan’s record label that he uses very outdated recording equipment—eight-track machines and such—and Sufjan does seem to have an affinity for DIY sound sometimes. But the production on Carrie & Lowell is nevertheless exquisite. Give the whole album a good listen.

header image credit: Amy Nolan

Analyzing timbre

So I’ve explained my rationale for analyzing timbre, and for specifically focusing on the Yamaha DX7, in another post; now it’s time to show this in action.

I base my analysis on the visual aid of the spectrogram. A spectrogram represents all the sounding frequencies of a recording on a two-dimensional graph, with frequency in hertz on the y-axis, time on the x-axis, and loudness represented through color. Here is a spectrogram, paired with a transcription of the line you see in it:

[Transcription and spectrogram of a flute line]

The header image for this website is a spectrogram, too (I used a prettier but maybe less useful color scheme for the header image). All those parallel lines are actually just part of one note. The loudest line—the thickest line, in white—is the fundamental, i.e., the frequency that we perceive as “the note” being played. These are the notes that get transcribed above. All those other lines above and below it are partials: other frequencies that are actually sounding at the same time. These could be heard as separate notes occurring simultaneously, but instead they’re subsumed within the fundamental; we experience these other pitches not as pitched lines, but as part of the timbre of the fundamental.
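If you want to generate spectrograms like these yourself, here is a minimal Python sketch using scipy and matplotlib. The filename and the FFT settings are illustrative assumptions, not the exact settings behind the images on this site.

```python
import matplotlib.pyplot as plt
from scipy.io import wavfile

# "recording.wav" is a hypothetical mono or stereo file of the line to analyze.
rate, samples = wavfile.read("recording.wav")
if samples.ndim > 1:
    samples = samples.mean(axis=1)  # mix stereo down to mono

# A longer FFT window gives finer frequency resolution for reading partials.
plt.specgram(samples, NFFT=4096, Fs=rate, noverlap=2048, cmap="inferno")
plt.xlabel("Time (s)")        # x-axis: time
plt.ylabel("Frequency (Hz)")  # y-axis: frequency; color indicates loudness
plt.ylim(0, 8000)             # the range where most partials of interest live
plt.show()
```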

Step 1: Find songs to analyze. After playing with my own DX7 for many hours, I’ve learned to identify the Yamaha DX7 presets by ear. I often start looking for songs by perusing http://www.officialcharts.com, the archive for the UK’s Top 40 charts (I find their website more usable than Billboard’s, plus synthpop was more popular in the UK than in the US).

Step 2: Isolate the DX7 sounds. Once I’ve found a track with a few DX7 sounds in it, I hook my DX7 up to my computer and rerecord the synthesizer lines myself, to isolate them from the rest of the track.

This step is essential to get any clarity in the spectrograms. Here is what a spectrogram looks like for the entire composite track of “What’s Love Got to Do with It”:

 

https://www.youtube.com/watch?v=Fkx9l-B6W-M

Some things come out clearly, such as Tina Turner’s voice, the sustained bass line, and the hi-hat. But generally, it’s difficult to separate out what instrument is creating what visual aspect of the spectrogram. The flute line, a DX7 preset that is very salient in the middle of this clip, is almost impossible to see. But if I rerecord the clip myself, we get a much clearer image of the flute sound:

https://www.youtube.com/watch?v=SgX_ZYyZiGQ

We can do a lot more with this! All the other clutter is out of the way, and the sound of the flute is clearly visually represented.

Step 3: Analyze the spectrogram. This is the most difficult part of my dissertation work, and it’s still very much under construction. I’m basing my approach on work by Robert Cogan, who borrows from linguistics an old technique called “oppositional analysis.” Building from Cogan’s 13 oppositions, I have my own list of 20 or so oppositions, each of which isolates one facet of the timbre of a sound and gives it a negative or positive designation (these aren’t meant as aesthetic judgments; think of “negative/positive” like you do with a battery, not like with an Amazon review). I usually summarize these results in a table of plusses and minuses, and sometimes plus-minuses (±) or neutrals (∅). In the future I’ll write a post that focuses on my issues with Cogan’s oppositions and brainstorms future possibilities and alternatives.
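As a concrete sketch of that bookkeeping, here is what such a summary table might look like as data: a hypothetical Python representation, where the two opposition names come from the analysis below and the values are illustrative placeholders rather than a finished analysis.

```python
# Each opposition gets "+", "-", "±", or "∅" (neutral). Here I arbitrarily
# treat the second term of each pair (wide, spaced) as the "+" pole; the
# "∅" entries stand in for qualities not discussed in this post.
oppositions = {
    "CALLIOPE":   {"narrow/wide": "-", "non-spaced/spaced": "∅"},  # narrow (dark)
    "FLUTE 1":    {"narrow/wide": "-", "non-spaced/spaced": "∅"},  # narrow (dark)
    "E. PIANO 1": {"narrow/wide": "+", "non-spaced/spaced": "+"},  # wide and spaced
    "HARMONICA":  {"narrow/wide": "+", "non-spaced/spaced": "∅"},  # wide (bright)
}

# Print the summary as a plus/minus table, one row per preset.
columns = list(next(iter(oppositions.values())))
print("preset".ljust(12) + "  ".join(columns))
for preset, values in oppositions.items():
    print(preset.ljust(12) + "  ".join(values[c].center(len(c)) for c in columns))
```

For now, let’s dive into an example analysis.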


Tina Turner’s most well-known single, “What’s Love Got to Do with It,” uses four distinctive DX7 presets: CALLIOPE, FLUTE 1, E. PIANO 1, and HARMONICA. It could practically be a demo tape for the DX7. The track peaked at #1 in the US in September 1984 and at #3 in the UK in June 1984. I like using this track as an example because it uses so much DX7 and it was so immensely popular; it’s a good example of how the DX7 saturated the popular music of this time.

The E. PIANO 1 preset is used the most in this track, constantly supporting Turner’s vocals. It’s mixed softly and doesn’t draw attention to itself. E. PIANO 1 is what I call a core timbre—a sound that’s used as the foundation of the track, like an electric guitar or drum set would be in a typical rock song. This is opposed to a novelty timbre, which is used more sparingly, for coloristic effect. Core vs. novelty is not a distinction based on oppositional qualities or anything else about the timbre itself; it’s more of an orchestrational concern (how timbres are used) than a timbral one (what details comprise the sound).

The CALLIOPE (heard in the intro and in verse 2), FLUTE 1 (heard in the prechoruses, and featured earlier in this post), and HARMONICA sounds (featured in the instrumental and in subsequent choruses) are all novelty timbres. CALLIOPE and FLUTE 1 both play some of the song’s hooks. All three are mixed very loudly in the track and are heard only when they are replacing Turner’s vocals. The HARMONICA sound is used for a lengthy solo section, and afterward improvises descant lines while Turner is singing. Here are the spectrogram images for all four of these sounds:

One interesting opposition for this set of sounds is narrow/wide, which captures the property of timbre often colloquially referred to as “brightness”: the distance between the fundamental and the highest sounding partial. CALLIOPE and FLUTE 1 are both narrow (dark) sounds, and they’re used in similar ways: both are novelty timbres that play short hooks sounded only while Turner is not singing. Dark sounds tend not to carry so well, and these two play in approximately the same range as Turner’s singing, so hearing FLUTE 1 or CALLIOPE simultaneously with Turner would muddy them. HARMONICA is a wide (bright) sound. This makes the HARMONICA a good candidate for the extended solo that replaces Turner’s vocals in the instrumental. It also allows the sound to better compete with Turner’s voice when it improvises in the last choruses of the song.

E. PIANO 1 is a core sound, and it is also a wide sound, like the novelty timbre HARMONICA. E. PIANO 1 does not sound as aggressively loud as the HARMONICA, however. This is due, of course, to the volume of the two sounds, but also to another timbral property: non-spaced/spaced. Most sounds have partials occurring regularly at certain frequencies: in hertz, the relationship between the fundamental and partial 1, then partial 2, then partial 3, etc., is 1:2, 1:3, 1:4, and so on. A sound that follows this general rule is a non-spaced sound: all the expected partials are present. But not all sounds do this—some skip some of these partials. E. PIANO 1 is one such sound. It has the first five partials above its fundamental as expected, but after this there is a big gap; the next sounding partials are what would be partials 11 and 12 in the pattern explained above. Aurally, a spaced sound is darker than a non-spaced sound, but it can still seem somewhat bright if the sound is also wide, as E. PIANO 1 is.
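Here is a rough sketch of how the narrow/wide and non-spaced/spaced oppositions could be checked computationally from a list of peak frequencies read off a spectrogram. The peak list is invented to mimic the E. PIANO 1 situation just described (fundamental plus partials 1–5, then a gap until partials 11 and 12), and the 5% tolerance is an arbitrary allowance for measurement noise:

```python
def classify(fundamental, peaks, tolerance=0.05):
    """Check which whole-number multiples of the fundamental appear in peaks."""
    top = round(max(peaks) / fundamental)
    present = {
        m: any(abs(p - m * fundamental) <= tolerance * m * fundamental
               for p in peaks)
        for m in range(1, top + 1)
    }
    width = max(peaks) / fundamental    # narrow/wide: how far up the partials reach
    spaced = not all(present.values())  # spaced: some expected partials are skipped
    return width, spaced, present

# Fundamental at 100 Hz; "partial n" means (n + 1) times the fundamental,
# matching the numbering in the paragraph above.
peaks = [100, 200, 300, 400, 500, 600, 1200, 1300]
width, spaced, present = classify(100, peaks)
print(f"width: {width:.0f}x the fundamental")  # 13x: a fairly wide sound
print(f"spaced: {spaced}")                     # True: multiples 7-11 are missing
```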


There are some problems inherent in spectrogram analysis; basically, what is most visually apparent in the spectrogram is not necessarily what is most aurally apparent. But I think that in order for a theory of timbre to catch on, for other music theorists to want to do it, a visual medium is basically required. Visual aids are really useful for the kinds of analysis that music theorists like to do: we like to ponder music deeply and slowly, at our own pace, and we like to be able to point to things that we can publish in a paper. This isn’t a bad thing, but it’s a limitation to be aware of.

The analysis above is, as of now, fairly basic. It was only meant to show how my theory is supposed to work; it doesn’t provide any great insight into “What’s Love Got to Do with It” as a track. But there are many questions that I’ll investigate in my dissertation that an analysis like this might answer.

As I stated, the novelty-vs.-core designation doesn’t inherently have anything to do with timbral qualities. Nevertheless, there are often commonalities. In songs I’ve analyzed, core sounds are almost always 1) steady in pitch (not wavering); 2) harmonic (their partials conform to the harmonic series); 3) free of any synthetic undertone; and 4) sustained (not clipped). Novelty sounds cannot be generalized, but in a way this is also distinctive: while core sounds are generalizable, novelty sounds are not. Are there songs where the core sounds have the opposite qualities from those I listed above? What happens if this relationship is somehow transgressed? It’s my suspicion that breaking this “rule” will sonically represent some kind of Other Thing. I know that the DX7 was used in sci-fi TV soundtracks of the 1980s, such as Doctor Who and The Twilight Zone. I want to look at which sounds are used as core or novelty sounds in those soundtracks, compared to the way sounds are used in pop music.

Another question, one that relates more to my last post: how does the timbral profile of the DX7 compare to those of the other (analog) synthesizers being used in the 1980s, or in other decades? How do these oppositions define that ’80s “sound” that is so distinctive and polarizing? I’ve been looking at issues of Keyboard Magazine and New Musical Express from the mid-1980s. Discussions of the DX7 abound, and one theme recurs: digital FM synthesis (the technology used in the DX7) sounds “cold” compared to the analog synthesis used in other synthesizers. What features of the DX7 (and by extension, FM synthesis) contribute to this consensus?

I can also expand the instruments I’m investigating. Another major component of the 80s “sound” was the prevalence of drum machines like the Linn Drum and the legendary Roland TR-808. Like the DX7, these drum machines had pre-programmed sounds that users relied on, and which I could easily reproduce. How do the “fake” drums and bass sounds of these machines compare to “real” sounds produced on acoustic instruments and non-synthesizer electric instruments? This would also work toward a precise definition of an 80s sound. For now, I’m leaving all these questions unanswered. In the future, as I make cool conclusions based on this method of analysis, I hope to share tidbits on this blog.

Garden pathing in Kesha’s music

This is from my discussion on “#FreeKesha,” a special episode of the Pop Unmuted podcast about Kesha’s music and the current controversy.

Paul Lester: Ke$ha, are you satirising teen America, their voraciousness and bloodlust when it comes to consumption and sex?

Ke$ha: Absolutely! And you either get it or you don’t.

via The Guardian

From the first time I heard “Tik Tok”, I’ve had a special place in my heart for Kesha’s music. I was immediately fascinated with her sung-style flow, which I jokingly refer to as Sprechstimme. Her self-awareness and satire make her trashy style highly appealing.

Two of my karaoke standbys are Kesha’s “Dinosaur”, from her album Animal, and “Sleazy”, from her EP Cannibal. Both are deep cuts—“Dinosaur” was never released as a single, and “Sleazy” was a B-side to “We R Who We R”—so most often my friends haven’t heard them before and immediately roll their eyes at my selection, assuming Kesha’s music is just trashy, boring pop. As far as I can tell, though, I win over some new Kesha fans every time with these two tracks. They’re catchy, but moreover, they’re funny!

One technique Kesha uses to create humor in her songs is garden pathing. The song “Sleazy” begins with Kesha singing this lyric unaccompanied:

[Transcription of the opening hook of “Sleazy,” unaccompanied]

Without the harmony underneath, a listener would probably assume this is in D Phrygian, or at least I did initially. Setting a lyric like this in the Phrygian mode connotes independence, attitude, meanness, and general bad-assery, a vibe that is totally common for rap music and for Kesha. But the bass line, backup vocals, and synthesizer that enter at 1:42 reveal that the tonality is something else entirely:

[Transcription of the same hook with the accompaniment that enters at 1:42]

It’s in B-flat major! (Or B-flat Mixolydian, whatever.) Kesha’s lyrics now take on an entirely different tone. The hook now sounds much more sing-song-y, like a lighthearted playground taunt. There’s a humorous aspect to transforming the tone of this hook from bad-ass to playground, and this sense of humor is totally in keeping with Kesha’s M.O.—completely satirical.

Justin London adapted the term “garden pathing” to music in his book Hearing in Time, using it to describe “metrical fake-outs” (he keeps a list of these on his personal website). But the term actually comes from linguistics. Garden path sentences begin with one meaning but end with an entirely different one. Wikipedia gives the example sentence “The old man the boat”: reading it, we first assume “the old man” is a noun phrase, but after finding no verb, we retrospectively re-analyze the sentence and realize that “man” is being used as a verb.

“Dinosaur” begins with Kesha chanting the spelling of the word like a cheerleader: “D-I, N-O, S-A, U-R a dinosaur”. This is a garden path sentence in Kesha’s lyrics: two meanings of “U-R/you are” are elided and function simultaneously. “U-R” completes the spelling of the word “dinosaur,” while “you are” functions as subject and verb (I know, explaining the joke totally kills it). Garden path sentences are an effective way to generate humorous lyrics: they rely on readers parsing the sentence into chunks as they read left to right in time, and music, which occurs strictly in time, can control the listener’s parsing of the sentence by altering the timing of the lyrics.

An even better example of a garden path sentence in Kesha’s lyrics is back in “Sleazy,” immediately after the completion of the first chorus (1:52)…but I’ll let you experience this one on your own!

https://youtu.be/n2kdCJRAiNk?t=113