Guy-Manuel de Homem-Christo and Thomas Bangalter may not be household names - but they like it that way. Together, as the electronic duo Daft Punk, the Frenchmen have led the electronic music revolution hidden behind their iconic robot helmets. In one week, Daft Punk will release their highly anticipated fourth studio album, Random Access Memories. The concept of the disco-influenced album, according to Bangalter, “was to somehow question the magical powers of recorded audio at a time when pop music is mostly recorded on laptops with a small microphone and a pair of headphones in airport lounges and hotel rooms.”1 What are the “magical powers” Bangalter is talking about? He continues:
…harmonically the samples are just an F minor or a G flat, something not so special. It occurred to us it’s probably a collection of so many different parameters; of amazing performances, the studio, the place it was recorded, the performers, the craft, the hardware, recording engineers, mixing engineers, the whole production process of these records that took a lot of effort and time to make back then. It was not an easy task, but took a certain craftsmanship somehow cultivated at the time.
Today, the term “sampling” brings to mind high-profile copyright lawsuits, bedroom music producers, and (to some)2 the death of originality – but what about the samples themselves? In an age when digital sampling is just one click away, what purpose do samples serve, and what intrinsic value do they contain? In Walden, Thoreau argues that “The rays which stream through the shutter will no longer be remembered when the shutter is wholly removed.”3 Have samples obscured our understanding of and relationship to musicians and music, or is our relationship with sound simply evolving? Pierre Schaeffer coined the term reduced listening to describe a mode of listening that focuses on the characteristics of a sound itself, without considering its source or meaning.4 Are listeners and producers of electronic dance music engaging in reduced listening? Has the commercialization of dance music resulted in a perceived reduction of musical quality? Has computer music eliminated the musician from our understanding of music?
While sampling has certainly changed the way musicians make music, so too has our understanding of sound and of how humans perceive it. Throughout the 20th century, engineers at AT&T’s Bell Telephone Laboratories studied the sound of the human voice and the characteristics of human hearing.5 At the same time, musicians approached sound in a less technical way, leaving the final technicalities of music production to specialized engineers. However, as new technologies emerged that changed the way we listened to music, the sound of the music itself changed to accommodate these new formats. We see the effects of these technological shifts both in the mp3’s emergence as the dominant sound format and in “The Loudness War,” an apparent competition to produce, master, and release recordings of ever-increasing loudness.
Traditionally, mastering was a process used to prepare a recorded song for commercial release. Mastering engineers would use tools to improve a song sonically: EQ to boost or attenuate problematic frequencies in a mix, compressors to subtly shape its dynamics, or slight distortion to add “sparkle” or “shine” to a dull recording. Before the invention of the CD, people generally listened to music on record players in their homes or on the radio in their cars. Because people were listening to records on higher-fidelity systems, enhancing clarity before pressing to vinyl was the engineer’s primary focus. The CD and mp3 changed the way many mastering engineers approached their work: how can one assure a good listening experience across a variety of listening environments? The answer is loudness.
An ear’s response to sound is non-linear and varies with frequency. The human ear is most sensitive to frequencies in the midrange (where the human voice lies) and less sensitive at either extreme. The response curve of the ear also changes with volume, becoming flatter as volume increases. To maintain a perceived balance between different frequency bands, a “flat” playback system may need to be EQ’d differently at different volume levels. The loudness control on a stereo attempts to correct for this by applying a progressive equalizer that compensates for the “equal loudness contours” of human hearing. For example, because our ears become less sensitive to bass and treble at lower levels, a loudness control may boost bass and treble at low volumes. Some research indicates that the ear is even more sensitive to these relative EQ changes than to the volume change itself. As a result, music or other familiar audio that sounds correctly equalized at one level may sound a little “off” at another. As the volume of a sound increases, we perceive it as “flatter,” that is, more balanced, which can mask mixdown errors and generally make the sound seem “better.”6 Acknowledging this phenomenon, mastering engineers began focusing on making records loud, with less emphasis on clarity. A CD mastered “hotter” would sound better to casual listeners than a “traditionally” mastered one. Using compressors and limiters to restrict the dynamic range (the span between the loudest and softest passages) of audio files, engineers were able to make their masters seem louder without anyone touching a volume knob. This process manifested itself in records like Metallica’s Death Magnetic, which has such a high average loudness that its peaks exceeded the limit of the CD format itself, resulting in clipping and digital distortion no matter the playback volume or mechanism.
Even the engineer in charge of mixing the album thought it went too far, later writing on a message board, “Believe me I’m not proud to be associated with this one, and we can only hope that some good will come from this in some form of backlash against volume above all else.”7 With CDs at the height of their popularity (and the Loudness War in full effect), the mp3 further degraded society’s understanding of sound quality.
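The mechanics of that loudness race are simple enough to sketch in a few lines. The following toy example (Python with NumPy; the threshold, signal, and make-up gain are invented for illustration, not taken from any real mastering chain) shows how limiting followed by make-up gain raises a track’s average level without raising its peaks - the core move of a “hot” master:

```python
import numpy as np

def rms(x):
    """Root-mean-square level: a rough proxy for perceived loudness."""
    return np.sqrt(np.mean(x ** 2))

def limit_and_boost(signal, threshold=0.5):
    """Clamp samples above `threshold`, then apply make-up gain to 0 dBFS."""
    limited = np.clip(signal, -threshold, threshold)
    return limited / threshold  # make-up gain: peaks return to full scale

# A toy "track": a moderate sine tone with occasional full-scale transients
# standing in for drum hits.
t = np.linspace(0, 1, 44100)
track = 0.3 * np.sin(2 * np.pi * 220 * t)
track[::4410] = 1.0  # drum-hit-like peaks at full scale

mastered = limit_and_boost(track)

# Same peak ceiling, but the limited master has a much higher average level.
print(rms(track), rms(mastered), np.max(np.abs(mastered)))
```

The peaks of both versions sit at the same digital ceiling, yet the “mastered” version’s RMS level roughly doubles - which is exactly why it sounds louder on casual playback, and exactly what was sacrificed: the dynamic distance between the drum hits and everything else.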
The mp3, designed by engineers of the Moving Picture Experts Group, gave anyone with internet access the ability to share music. This ease came at the expense of sound fidelity: the mp3 compresses sound by removing components humans cannot perceive under casual listening circumstances, relying on the listener’s brain to fill in the missing information. Because mp3 files were so easy to share, they quickly displaced both CDs and vinyl as the dominant audio format. As the American public adopted the mp3, its convenience overshadowed its poor sound quality. For most people listening in their cars or through cheap computer speakers and headphones, the idea of fidelity was mostly forgotten. People wanted their music loud, they wanted lots of it, and they wanted it for free.
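The core bet of perceptual coding - that listeners won’t miss the weakest parts of a signal - can be illustrated with a deliberately crude sketch. Real mp3 encoding uses psychoacoustic masking models, subband filterbanks, and entropy coding; this toy merely discards the quietest frequency components and shows that the reconstruction stays close to the original:

```python
import numpy as np

def toy_compress(signal, keep_fraction=0.05):
    """Keep only the strongest `keep_fraction` of frequency bins.
    A caricature of lossy coding: most bins carry little audible energy."""
    spectrum = np.fft.rfft(signal)
    cutoff = np.quantile(np.abs(spectrum), 1 - keep_fraction)
    spectrum[np.abs(spectrum) < cutoff] = 0  # drop the quietest bins
    return np.fft.irfft(spectrum, n=len(signal))

sr = 8000
t = np.arange(sr) / sr
# Two tones plus low-level noise standing in for a "recording".
signal = (np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 660 * t)
          + 0.02 * np.random.default_rng(0).standard_normal(sr))

restored = toy_compress(signal)
error = np.sqrt(np.mean((signal - restored) ** 2))
print(error)  # small residual: mostly the discarded low-level noise
```

Ninety-five percent of the frequency bins are thrown away, yet the tones survive intact - the error is essentially the near-inaudible noise floor. What the mp3 discards more cleverly, this discards bluntly, but the principle - and the fidelity cost the essay describes - is the same.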
At this point, artists, record labels, and engineers were at a crossroads: artists were often unaware of the technical aspects of mastering, while engineers kept mixing louder and louder to compete with releases from other labels. However, as the CD faded from use in the 2000s, software like iTunes became the primary means of audio playback. Sound Check, an algorithm within iTunes that automatically adjusts the average volume of each track to a common level, has allowed engineers to focus less on mix loudness and more on mix dynamics.8 At the same time, record labels were losing profits to mp3 sharing, and a new wave of independent artists was emerging. As music production and sharing technology became more and more available through the Internet, the need for middlemen (record labels and mastering engineers) waned. Electronic music was becoming popular in America, and an increasing number of people began producing computer music.
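The idea behind a player-side normalizer like Sound Check can be sketched conceptually. Apple’s actual algorithm is proprietary, and modern systems measure perceptual loudness (e.g., LUFS) rather than plain RMS; the target level and signals below are invented for illustration. The point is simply that when the player equalizes average levels, a hyper-compressed “loud” master gains no advantage:

```python
import numpy as np

TARGET_RMS = 0.2  # arbitrary shared playback target for this sketch

def playback_gain(track):
    """Gain the player should apply so this track hits the shared target."""
    measured = np.sqrt(np.mean(track ** 2))
    return TARGET_RMS / measured

# Two "masters" at very different levels, standing in for a quiet
# dynamic mix and a brickwalled loud one.
quiet_master = 0.1 * np.random.default_rng(0).standard_normal(44100)
loud_master = 0.8 * np.random.default_rng(1).standard_normal(44100)

# After normalization both tracks play back at the same average level.
for track in (quiet_master, loud_master):
    normalized = playback_gain(track) * track
    print(np.sqrt(np.mean(normalized ** 2)))  # both hit TARGET_RMS
```

Because the player stores and applies a per-track gain rather than altering the file, the engineer is free to keep the dynamics - which is precisely the incentive shift the essay attributes to Sound Check.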
But before we can understand the current state of electronic music production, we must examine its beginnings. Electronic music was born in two racially segregated black gay clubs: Chicago’s “Warehouse” and New York’s “Paradise Garage.” The two distinct forms of electronic dance music emerging from these clubs formed simultaneously out of disco’s ashes. Disco, immensely popular in the 70s, was essentially dead by the early 80s after a series of low-quality disco releases in the mid-to-late 70s, including bad disco remixes of pop songs (which set disco up for failure in a pop-versus-disco battle of public opinion). In Chicago, a DJ named Frankie Knuckles began playing primitive electronic remixes of disco songs at the Warehouse. Because electronic drums like these had never been heard before, the sound was fresh and avoided disco associations. The first “house” song (named after the Warehouse, the club where Knuckles played it) was “Fantasy” by Z Factor. Disco clearly influenced Z Factor’s production of the track, which plainly sampled disco basslines, but the synthesized percussion generated by the Roland TR-808 drum machine breathed entirely new life into it. The drums were not samples, but rather synthesized approximations of real-life drum sounds. Similar tracks began emerging, and producers would bring their tapes to Frankie Knuckles, who would play them for clubgoers. A prominent DJ in the gay scene named Ron Hardy began to produce tracks not based in disco. Instead of employing disco instrumentation, Hardy used a synthesizer to create raw and wild rhythm tracks. The sounds in his songs were computer-generated – sounds without a human source: simple sine wave bass tones with spurts of noise suggesting the sound of a drumstick hitting a surface. The sounds connected viscerally with the listener, and contained no indication of their source or meaning.
The house sounds developed in America in the 1980s never achieved mainstream popularity there, and mostly fizzled out by the late 1990s. However, across the ocean, the UK club scene was expanding rapidly. Clubs in increasingly racially integrated urban areas began to favor “black” music (including reggae, soul, jazz, hip hop, and house) over the indigenous indie rock. Legal radio stations didn’t play black music, but pirate radio stations like Kiss and LWR had over 500,000 listeners at their peak, according to local newspaper surveys. The electronic music scene in the US remained underground throughout this period of development in Europe, as hip hop and pop dominated the Billboard charts and radio waves. Of course, electronic sounds did not disappear from the American soundscape, as synth-pop dominated 80s radio. Although the mainstream music outlets (which were essentially controlled by the big record labels) didn’t promote house in the United States, by 2009 an overwhelming number of teens had access to high-energy electronic dance music, or EDM, through the Internet.
The Internet allowed for the easy discovery of new music, and thrill-seeking teens began to enjoy the visceral electronic sounds they found. Entrepreneurs capitalized on this newfound American interest by throwing massive outdoor electronic festivals like Ultra and Electric Daisy Carnival. These festivals and the DJs who played them exploded in popularity, with artists like Skrillex being paid over $100,000 per show.9 As a result of this rise in popularity, waves of bedroom producers began making electronic tracks in hopes of overnight celebrity. While the loudness war was coming to a close within commercial music, it was only beginning within the independent dance music community. With hundreds of DJs playing shows in major American cities, bedroom producers focused on making tracks that competed with professional releases not in musical merit, but in sound texture and loudness; the musical content of the tracks took a back seat to the sound itself. Says Daft Punk’s De Homem-Christo, “EDM is energy only. It lacks depth. You can have energy in music and dance to it but still have soul.”10 Why, after all the lessons learned in the 1990s about the dangers of over-compression, did dance music producers resort to the same loudness competition? After all, DJs often have much higher quality playback systems… with volume controls! One answer is surprisingly familiar - the primary mode of electronic music storage and sharing facilitates loudness competition - while the other lies embedded within the sampling culture of commercial dance music itself. We’ll come back to sampling in a bit.
Dance music was born and raised in clubs, so why produce loud music suited to low-quality sound systems? EDM is designed to be played in clubs, but it is shared primarily through internet sites like Soundcloud, and the majority of it is streamed through laptop speakers, where loudness matters in a way it does not in a club. As a result, producers are competing just as the engineers of the 90s did. Traditionally, artists left mastering and loudness to the engineers, but home producers now assume all three roles: artist, engineer, and distributor. Electronic artists have accordingly begun producing music in new ways: instead of raising volume after a track is completed, they use our modern understanding of sound and listening to build loud tracks from the ground up. These tracks are much more powerful in the club, and they also compete with other artists’ releases as they stream from websites like Soundcloud.
In the past, one needed years of training to record, mix, and master recorded sound; the nuances of recording and audio processing take hard work to perfect. How, then, are new electronic artists emerging at such a high rate without formal training? While production training is more available than ever through resources like YouTube and Internet forums, the secret is in the samples. ReFX is a company that produces sample packs called Vengeance for use in music production. These packs contain short samples of a wide variety of processed drum “supersounds.” Such sounds seemingly eliminate the need for expensive and time-consuming recording experience, putting high-fidelity audio in the hands of anyone with a laptop. In addition, popular software synthesizers like Native Instruments’ Massive come with thousands of dancefloor-ready instrument and synth presets. The ease with which these sounds can be looped into a track has given rise to an “EDM sound” - one based in loud percussion and synthesizer presets. Remarks Daft Punk’s Bangalter:
The problem with the way to make music today, these are turnkey systems; they come with preset banks and sounds. They’re not inviting you to challenge the systems themselves, or giving you the ability to showcase your personality, individuality. They’re making it as if it’s somehow easier to make the same music you hear on the radio. Then it creates a very vicious cycle: How can you challenge that when the system and the media are not challenging it in the first place?11
In a recent interview, De Homem-Christo remarked, “I don’t know EDM artists or the albums. At first I thought it was all just one guy, some DJ called EDM.” Asked if that was because it all sounded the same, he replied, “A little bit, yeah!” Bangalter then jokingly added, “Maybe it’s just one guy called Eric David Morris.”12
In recent months, a backlash against perceived unoriginality has made “EDM” into a sort of derogatory term. Having mastered the science of loudness and the technical aspects of production itself, producers are now questioning the sample-based systems ubiquitous in EDM. What are the advantages of sampling, and what are its drawbacks? Culturally, we have generally accepted sampling in Hip-Hop, so why so much hostility toward sampling in EDM? What “magical powers” do sounds contain, and can samples capture these powers the way live recording does? The magic of a sample lies in its timbre. While the dominant frequency of a snare drum might be, for instance, 200 Hz, the other frequencies contained in the individual sample give it timbre and convey an immense amount of information. From one snare drum sample, a listener could identify the size of the drum, the material of the drum, the shape of the drum, or the velocity with which it was hit. Sounds can also convey social cues. As Stoever-Ackerman argues in “Splicing the Sonic Color-Line,” “In essence, we hear race in addition to seeing it. Sonic phenomena like vocal timbre, accents, and musical tones are racially coded, like skin color, hair texture, and clothing choices.” As Katherine Bergeron notes, “timbre comes from the Greek tympanon, or drum… implying a physical form, a skin or parchment stretched across a frame. Like the sensitive skin of the middle ear (the tympanum, or eardrum), this taut surface, ready to be struck, is a site for both producing and receiving vibrations.” The wealth of information that can be gleaned from a single sample’s timbre is remarkable, and sample packs offer a wealth of evocative sounds for EDM production, so why the uproar? Listening to electronic music often depends on reduced listening, which, much like the mp3, involves discarding sonic information.
Samples are selected from a variety of sources, and depend on the listener’s imagination to form a cohesive drum kit. As a result, the timbral information available within the samples is often lost. Furthermore, synthesized drum sounds are not produced from a surface at all but from a computer algorithm. These sounds operate as tympanon only within the listener’s head, and render any understanding of source and meaning impossible. Having perfected the technical aspects of electronic music production, forward-thinking musicians like Daft Punk are looking to regain this lost information in an effort to (ironically, given their robotic brand) make music by humans for humans.
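The claim that timbre far exceeds a sound’s dominant frequency can be demonstrated numerically. In the sketch below (Python with NumPy; the overtone frequencies and noise level are invented stand-ins, since a real drum sample is far richer), a pure tone and a crude drum-like tone share the same 200 Hz fundamental, yet their spectra - where the “size, material, shape” information lives - differ almost everywhere else:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

# A pure 200 Hz tone versus a drum-like tone: same fundamental, plus
# inharmonic overtones and broadband "stick" noise that carry the timbre.
pure = np.sin(2 * np.pi * 200 * t)
drum = (np.sin(2 * np.pi * 200 * t)
        + 0.5 * np.sin(2 * np.pi * 417 * t)   # invented drum-mode overtone
        + 0.3 * np.sin(2 * np.pi * 642 * t)   # invented drum-mode overtone
        + 0.1 * np.random.default_rng(0).standard_normal(sr))  # noise burst

def dominant_hz(x):
    """Frequency of the strongest spectral component."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1 / sr)[np.argmax(spectrum)]

def energy_above(x, hz=300):
    """Fraction of spectral energy above `hz` - a crude timbre measure."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    return power[freqs > hz].sum() / power.sum()

print(dominant_hz(pure), dominant_hz(drum))      # both 200.0
print(energy_above(pure), energy_above(drum))    # ~0 vs. a sizable fraction
```

Both sounds would be labeled “a 200 Hz hit,” but a substantial share of the drum’s energy sits above the fundamental - precisely the information a listener uses to infer the drum’s body, and precisely what reduced listening sets aside.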
In order to bring an understanding of source and meaning into Random Access Memories, Daft Punk insisted on using live instrumentation. Daft Punk has become, perhaps now to their chagrin, synonymous with electronic music. Electronic music, born as a response to musically questionable disco edits, has been brought full circle by its ringleaders as a reaction to musically questionable electronic edits. Though the music is made electronically, a human touch is irreplaceable. As LaBelle notes of Alvin Lucier’s “I Am Sitting in a Room,” “the stutter is the very heart of the work.”13 The human imperfections that define timbre allow a two-way relationship between artist and listener, indicating both sounds and their source. Are Daft Punk leading us into the future, or simply reverting to an earlier sound for its retro appeal? Context is everything – while Random Access Memories could have fit in with disco releases of the 70s, it instead emerged after more than 40 years of sonic development. Disco may be dead, but Daft Punk is using it to give electronic music life.
2. “The Death of Originality: Long Live the 80s,” http://cheriqui.blogspot.com/2011/11/death-of-originality-long-live-80s.html
3. Thoreau, Henry David, “Sounds,” from Walden, Civil Disobedience, and Other Writings (Norton, 2008), p. 75.
4. Chion, Michel, “The Three Listening Modes,” from Audio-Vision (Columbia University Press, 1994), p. 29.
5. Mills, Mara, “Deaf Jam,” from Social Text 102, Vol. 29, No. 1 (Spring 2010), p. 36.
13. LaBelle, Brandon, “Finding Oneself: Alvin Lucier and the Phenomenal Voice,” from Background Noise (Continuum International, 2006), p. 126.