We Break It, You Buy It: Degrading Consumer Audio Quality for Convenience and Profit

Luke Gilfeather

March 9, 2014

Abstract

Manufacturers, engineers, and scientists have worked for decades to broaden the capabilities of audio equipment. Audio consumers have yearned for the highest quality playback. The factors that determine audio fidelity have been stretched and optimized to bring consumer audio into realms of realism never experienced before. The ability to record, manipulate, and deliver nearly perfect audio to end listeners has been achieved. Yet in these times of audio perfection, deliberately damaging processes degrade the consumer audio experience. This paper demonstrates that the pursuit of loudness and the reduction of file sizes undermine the quality of consumer audio.

Audio systems and distribution methods in use today are capable of creating and delivering the highest fidelity ever available, yet little of what is conveyed to end listeners approaches the ideals of high fidelity.

Introduction

Since the widespread adoption of digital recording processes in the 1980s, audio engineers have been able to capture and work with the highest fidelity audio ever available. The quantifiable properties of an audio recorder, which describe a natural acoustic event, improved significantly with the introduction of digital recording. Quantities such as frequency response, transient response, distortion, signal-to-noise ratio, and dynamic range suddenly became precise enough to record and reproduce sounds that differ from an original by margins bordering the limits of human auditory perception.

Storing audio signals digitally allows absolute clones of master recordings to be delivered to end listeners. With lossless digital audio formats, audiences are actually accessing the very same master recordings produced in audio production facilities. This is an advantage over analog recording and playback methods where each subsequent copy or generation of a recording suffers a measurable difference from the original. It is not uncommon for an analog recording to undergo eight or more generations before it reaches an audio consumer. Even though digital recording, production, and delivery processes of today are able to deliver a very faithful reproduction of an acoustic event, audio recordings that approach the ideals of “high fidelity” are few and far between. It is puzzling that audio professionals and consumers would be obsessed with the high-end specifications of their audio equipment and recordings when most of the audio being produced and delivered doesn’t take advantage of the extended capabilities.

There are several factors that stifle today’s audio quality. Some are conscious production decisions to process audio in ways that make it more desirable or more easily heard over the background noise of a busy life. Other factors are driven by economics and reduce the quality of audio so that it may be transmitted using less bandwidth or stored using less memory. In many cases, more than one of these factors damages the audio. Some say the low quality of today’s audio contributes to loss of sales, shorter-term stardom, and listener fatigue. Others have shown that many end listeners are not able to discern a difference between high quality and low quality audio, and some listeners actually prefer lower quality audio when tested.

Background

The chain of processes an audio signal follows from the transduction of an acoustic event to the end listener has several stages. An assessment of audio fidelity can be applied to each segment of the audio chain: recording, production, delivery, and terminal listener playback. The fidelity the end listener experiences with respect to the original is only as truthful as the weakest link in the chain. As advances in electronics, transducers, and digital systems have progressed, different stages of the audio chain have, in turn, set the limit on the fidelity that could be delivered.

High Fidelity

Audio equipment pioneer Henry Alexander Hartley coined the term high fidelity in 1927 to represent sound reproduction that was faithful to an original (Hartley 1958, 200). Hans Fantel credits William Shakespeare with a suitable definition of fidelity: “ ‘Tis, as it were, to hold the mirror up to nature” (1973, 45). The objective of high fidelity recording, transmission, and playback is to mimic a natural acoustic event as closely as a mirror might a scene in nature.

Audio is a wave. The vibrating string of an instrument creates a wave of air compressions and rarefactions (momentary reductions in pressure). A wave by definition travels through a medium, in this case air. Compressions and rarefactions can be represented very well by an electrical signal: the deviation of electrical potential from its equilibrium, known as ground, maps naturally onto the deviation of air pressure from its equilibrium, atmospheric pressure. When an acoustic event that generates these compressions and rarefactions is transduced to an electrical signal, an analog signal of the event is created.

Parameters that describe analog signals can be measured, and waveforms can be quantitatively compared. Thus we are able to know, with the authority of mathematics, how similar a delivered audio production is to a first-hand acoustic event. Though there are many subjective and qualitative descriptions of how audio sounds, a few key quantities can assay basic fidelity. These parameters are frequency response, dynamic range, noise, and distortion. This is not to say that these are the only descriptors, but fidelity will suffer if any of these qualities are deficient.

Weak Links In The Audio Chain

From the 1940s through the 1980s, analog electronics were the only means to record, manipulate, and deliver audio to audiences. Until the 1960s the only practical means for creating audio equipment was to use vacuum tubes. Early and lower-cost vacuum tube electronics degraded audio signals because of their limited bandwidth (poor frequency response), distortion, and background noise. Throughout the sixties and seventies, both transistorized and vacuum tube equipment became available that had very low distortion and noise figures. With these advances, the weakest links in the audio chain became the initial recording process and the final delivery to listeners.

Perfection of Recording and Delivery

In the twenties, Harry Nyquist, a Bell Labs engineer, concluded that any analog signal can be recreated digitally… In 1937 a British engineer named Alec Harley Reeves invented a suitably exact sampling method called pulse code modulation, but it was many years before computers became powerful enough to really explore the possibilities of PCM (Milner 2009, 193).

Using pulse code modulation sampling as a means to record and deliver audio became a consumer reality in the 1980s. PCM sampling remedied the weakest links in the audio chain: recording and delivery to end listeners. With PCM techniques,

All signals ... are captured perfectly and completely by sampling ... Sampling doesn't affect frequency response ... The analog signal can be reconstructed losslessly, smoothly, and with the exact timing of the original analog signal. (Montgomery 2012)

With perfect and complete signal capture and lossless reconstruction, a means had been perfected to preserve the fidelity of an audio signal once it had been transduced from an acoustic event. A requirement of all recording systems is that the captured signal be stored for subsequent playback. With digital recording, binary storage methods are used. These allow lossless transmission of audio from the production facility to the listener.
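To make the sampling idea concrete, the short Python sketch below illustrates the principle; it is an illustration only, not code from any of the sources cited here. It stores samples of a band-limited tone taken at the CD rate and then rebuilds the waveform between those samples using Whittaker-Shannon (sinc) interpolation.

import numpy as np

fs = 44100.0                         # CD sample rate, samples per second
f0 = 1000.0                          # a 1 kHz test tone, well below fs/2
n = np.arange(4096)                  # sample indices
x = np.sin(2 * np.pi * f0 * n / fs)  # the stored PCM sample values

# Reconstruct the waveform at instants that fall between the stored samples,
# staying away from the edges so the truncated sinc sum remains accurate.
t = (np.arange(1000, 3000) + 0.5) / fs
x_rec = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])
x_true = np.sin(2 * np.pi * f0 * t)

print("max reconstruction error:", np.max(np.abs(x_rec - x_true)))

The small residual error comes only from truncating the infinite interpolation sum; in the idealized case the sum is infinite and the reconstruction is exact, which is the sense in which PCM reconstruction is lossless and smooth.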

The methods of transduction from acoustic energy to electrical energy have been quite exact since the 1950s. Nowadays, once a signal is transduced, it undergoes a very accurate analog to digital conversion. Once the signal becomes digital, it is possible to maintain perfect fidelity through subsequent storage and transmission out to the audio consumer. The apparatus in use today can deliver a very high fidelity waveform from the acoustic event out to the terminal listener.

Stepping Away From Audio Perfection

With the onset of digital recording and manipulation came entirely new ways to compromise signal quality (Howard 2002). Two prevalent actions that destroy fidelity in the modern audio arena are loudness maximization and file size compression.

Loudness maximization uses dynamic range compression to make music sound louder. Dynamic range is one of the key fidelity parameters that humankind has spent decades expanding, yet audio engineers are continually asked to reduce dynamic range in order to make music appear louder. Recording artists and record companies insist that louder sounding music will sell better, and they consistently demand an ever louder product. Because of its fatiguing and unpleasant sound, maximizing loudness has been the worst thing to happen to audio quality in decades (Vickers 2011).
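The mechanics are simple enough to show in a few lines. The following Python sketch is a bare-bones, sample-by-sample compressor, offered only as an illustration of the idea; real compressors add attack and release smoothing, and the threshold, ratio, and makeup gain values here are arbitrary choices.

import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0, makeup_db=12.0):
    """Reduce gain above the threshold, then turn the whole signal back up."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)        # instantaneous level
    over_db = np.maximum(level_db - threshold_db, 0)   # amount above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10 ** (gain_db / 20.0)

# A quiet passage followed by a loud one.
t = np.linspace(0, 1, 44100, endpoint=False)
x = np.concatenate([0.05 * np.sin(2 * np.pi * 440 * t),
                    0.80 * np.sin(2 * np.pi * 440 * t)])
y = compress(x)
print("loud-to-quiet ratio before:", np.abs(x[44100:]).max() / np.abs(x[:44100]).max())
print("loud-to-quiet ratio after: ", np.abs(y[44100:]).max() / np.abs(y[:44100]).max())

The quiet passage comes out roughly four times higher in level while the loud passage barely changes, so the whole track sounds louder, at the cost of the contrast between its quiet and loud sections.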

Digital audio files are described using the terms “lossy” or “lossless” depending on whether file size compression has been used. Lossless means no data has been removed; lossy means data has been strategically removed in order to reduce the size of the file. The term lossy in itself admits that there is loss when file size compression is applied. The Moving Picture Experts Group, known as MPEG, designed the MP3 coder-decoder (CODEC) in 1991. MP3 is a lossy CODEC used to reduce the size of digital music files (Pras et al. 2009). Lossy audio formats like MP3 and AAC spurred a revolution in portable audio devices that could hold thousands of songs. Lossy formats also allow music to be distributed via the Internet. Both of these advantages are very attractive to penny-pinching record companies, at the expense of the consumers’ audio quality. When music is converted to the common lossy file formats MP3 or AAC (the iTunes default), between 80 and 90 percent of the music is simply discarded (Milner 2009, 357).
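The scale of the loss is easy to estimate from bitrates alone. The Python sketch below assumes a typical 128 kbit/s lossy encode; that bitrate is an assumption chosen for illustration, not a figure from the sources cited above.

cd_bitrate = 44100 * 16 * 2      # bits per second of CD-quality stereo PCM
lossy_bitrate = 128_000          # a common MP3/AAC encoding bitrate (assumed)
discarded = 1 - lossy_bitrate / cd_bitrate
print(f"CD: {cd_bitrate} bit/s, lossy: {lossy_bitrate} bit/s")
print(f"fraction of the original data not kept: {discarded:.1%}")

At that bitrate roughly nine-tenths of the original data is not kept, which is consistent with the 80 to 90 percent figure quoted above.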

Figure 1. A visual representation of the effects of file size compression. "Compression - Mathematicians and computer scientists are lazy." On Ramps. https://onramps.instructure.com/courses/1196409/wiki/compression-algorithms (accessed March 2, 2014).

 

Humankind has struggled for nearly a century to refine and extend the capabilities of audio recording, production, and delivery systems in pursuit of delivering high fidelity to the audio consumer. Nearly as soon as these achievements approached their objective, deliberate steps were taken to erode them.

Analysis

Audio quality is subjective, and opinion can be swayed by the mere appearance or perceived expense of playback equipment. Controlled subjective testing methods must therefore be used to evaluate it. In the audio fields of production, engineering, software, and hardware manufacturing, double-blind trials are used to gauge progress and compare fidelity.

A common double-blind comparator is known as ABX testing. The ABX method presents a sample of audio designated as A. Then, on demand of the test subject, the system presents a different sample designated as B. Finally, on demand, audio sample X is presented. Sample X is a random selection of either sample A or sample B. The test subject is asked to identify X as either A or B. Data is gathered from a statistically meaningful number of trials. If subjects identify X correctly only about half the time, no better than chance, it is concluded there is no discernible difference between A and B.
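As a sketch of the bookkeeping, not any particular laboratory's test software, an ABX trial reduces to a few lines of Python:

import random

def run_abx_trials(subject_answer, n_trials=100):
    """subject_answer(a, b, x) must return 'A' or 'B'; returns the number correct."""
    correct = 0
    for _ in range(n_trials):
        a, b = "path_A", "path_B"            # stand-ins for the two audio presentations
        truth = random.choice(["A", "B"])    # X is secretly either A or B
        x = a if truth == "A" else b
        if subject_answer(a, b, x) == truth:
            correct += 1
    return correct

# A listener who cannot hear any difference is equivalent to a guesser:
guesser = lambda a, b, x: random.choice(["A", "B"])
print(run_abx_trials(guesser), "correct out of 100 (expected near 50)")

A listener who genuinely hears no difference behaves like the guesser above and is expected to answer correctly about half the time; only a correct-answer rate well above chance demonstrates an audible difference.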

Audio engineers Brad Meyer and David Moran published results of ABX testing in 2007 that verify the incredible fidelity of the “CD quality” consumer digital audio chain. Audio is considered CD quality if it is sampled at 44.1 kHz and uses 16 bits of resolution per sample. There is debate about whether CD quality sampling and playback techniques are complete representations of humanly detectable audio content. Some argue digital sampling and playback have undesirable effects. Detractors claim PCM sampling and playback sound harsh, shallow, brittle, and cold.
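Whatever one's position in that debate, the two headline numbers behind the CD specification follow from a standard back-of-the-envelope calculation (not taken from the papers cited here):

import math

nyquist_hz = 44100 / 2                          # highest representable frequency
dynamic_range_db = 20 * math.log10(2 ** 16)     # theoretical range of 16-bit samples
print(f"Nyquist frequency: {nyquist_hz:.0f} Hz")           # 22,050 Hz
print(f"16-bit dynamic range: {dynamic_range_db:.1f} dB")  # about 96 dB

The Nyquist frequency sits just above the commonly cited 20,000 Hz upper limit of human hearing, which is the basis for the claim that CD quality sampling covers the audible band.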

In the test, choice A directly connected the audio source, which would remain constant, to a very high quality playback system, which would also remain constant. Choice B connected the audio source through a PCM sampling analog to digital conversion and then converted the resulting digital data back to analog audio. The analog output of the converter was connected to the playback system. The intention was to find out whether listeners could hear the effects of the digital recording and playback processes.

It is understood that directly connecting a source to a playback system, as done in choice A, provides the best possible fidelity and suffers only negligible losses well below the threshold of human perception. Choice B is a true representation of the entire digital audio process in use today. Through choice B, audio would undergo the same analog to digital conversion a signal experiences immediately after being transduced from an acoustic event. The collected digital data would then undergo a digital to analog conversion, as it would in the audio delivery systems used today.

The results of 554 trials were essentially the same as chance, with 276 correct answers, or 49.82% (Meyer and Moran 2007, 755). Statistically, listeners could not differentiate between path A, the high fidelity ideal, and path B, an accurate portrayal of today’s CD quality audio recording and delivery process. This result signifies that today’s digital recording and delivery processes are audibly indistinguishable from the highest fidelity ideal.
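The reported tally can be checked against pure guessing with a standard two-sided binomial test. The short Python sketch below assumes SciPy is available; it merely restates the statistics and is not the authors' own analysis.

from scipy.stats import binomtest

result = binomtest(k=276, n=554, p=0.5, alternative="two-sided")
print(f"proportion correct: {276 / 554:.2%}")         # 49.82%
print(f"p-value versus chance: {result.pvalue:.2f}")  # far above 0.05

A p-value far above any conventional significance threshold means the data give no reason to believe the listeners were doing anything other than guessing.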

Though its sound is transparent, the compact disc proved inconvenient for portable use and Internet distribution. The specification of CD quality audio is to sample audio for each channel at a rate of 44,100 times per second and save each measurement using sixteen bits of computer memory. Logging 44,100 measurements per second produces an enormous amount of data, and to allow stereo that amount must be doubled to represent two independent channels, a left channel and a right channel. To decrease file sizes the audio industry developed file compression schemes known as CODECs. File compression uses psychoacoustic models to determine what information casual listeners will not readily miss. In some cases, file compression reduces the bandwidth of the audio. Bandwidth is one of the key fidelity parameters that had been pursued for improvement for nearly a century. Lossless formats like compact disc (uncompressed PCM) do not reduce bandwidth.
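In concrete terms, using only the specification figures above, the uncompressed data rate works out as follows:

bits_per_second = 44100 * 16 * 2                 # sample rate x bit depth x channels
bytes_per_minute = bits_per_second * 60 / 8
print(bits_per_second, "bit/s of stereo PCM")                # 1,411,200 bit/s
print(round(bytes_per_minute / 1e6, 1), "MB per minute")     # about 10.6 MB
print(round(bytes_per_minute * 60 / 1e6), "MB per hour")     # about 635 MB

Set against the lossy bitrates discussed earlier, the economic appeal of file size compression for downloads and portable players is obvious.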

File compression is commonly used to reduce the size of digital music files, but as seen in figure 1, it introduces artifacts (Pras et al. 2009). Pras et al. conducted double-blind testing to compare lossless CD quality audio (uncompressed PCM) to files that had been reduced in size using file compression. Their results showed that listeners significantly preferred CD quality files over MP3 files (Pras et al. 2009).

Audio Engineering Society Fellow Sean Olive demonstrated that teenagers and college students could hear the difference between CD quality audio and MP3 compressed audio. His trials showed they preferred the sound of CD-quality reproduction 70% of the time (Olive 2012). File size compression is knowingly applied to make audio files smaller at the expense of audio fidelity. Comparisons are often made between audio CODECs using ABX methods. Many times the CODEC is rated as acceptable for its listening application. “Rating CODECs’ degradations as acceptable at best has little congruence with conventional hi-fi principles, where any degradation is by definition unacceptable” (Howard 2002).

While formats like MP3 created a huge boom in the consumer audio industry by allowing portable digital audio players to hold thousands of songs, they measurably destroy the fidelity of the final audio product. The audio industry has a long history of increasing the average audio signal level per unit time in order to be perceived as sounding louder than competing audio or music. It has been shown that when two identical audio recordings are played back at different volume levels, listeners prefer the one that is louder. Music that is intentionally made louder is thought to sell better. Loud music can be perceived as having more energy and being more exciting because loudness typically accompanies intense activity in nature (Blesser 2007).

Making a recording sound louder than other recordings can only be achieved by reducing its dynamic range. Reduction of dynamic range is known as compression. Dynamic range compression and file size compression are often confused because both are simply called “compression,” leaving the type to be inferred from context; the two processes are entirely distinct. Dynamic range is a quantity that describes the difference between loud and soft sounds.
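To make the quantity concrete, the rough Python sketch below puts a number on the difference between a track's loudest and quietest short-term levels. It is an illustration only, not the EBU's or any commercial dynamic range measurement, and the block length is an arbitrary choice.

import numpy as np

def dynamic_range_db(x, fs=44100, block_s=0.4):
    """Ratio, in dB, of the loudest to the quietest short-term RMS level."""
    hop = int(fs * block_s)
    rms = np.array([np.sqrt(np.mean(x[i:i + hop] ** 2))
                    for i in range(0, len(x) - hop + 1, hop)])
    rms = rms[rms > 1e-6]                 # ignore blocks of silence
    return 20 * np.log10(rms.max() / rms.min())

# One second of a quiet passage followed by one second of a loud passage:
quiet = 0.05 * np.random.randn(44100)
loud = 0.80 * np.random.randn(44100)
print(round(dynamic_range_db(np.concatenate([quiet, loud])), 1), "dB")  # about 24 dB

A heavily compressed master of the same material would score only a few decibels, even though its file size is unchanged, which is exactly why dynamic range compression and file size compression must not be conflated.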

Extreme application of dynamic range compression is called hyper compression. “One of the main complaints about hyper compression is that it flattens the dramatic and emotional impact of the music” (Vickers 2010). Dynamic range compression also causes damaging distortions that are compounded when file size compression is applied. At a 2009 Audio Engineering Society session, Marvin Caesar explained how compression CODECs respond poorly to heavy dynamic range compression. This can be especially problematic with the way CODECs are used in typical consumer signal chains. As a result, Caesar claimed, “We’re foisting pretty rough audio on some of our listeners” (2009).

Wide dynamic range provides musical emotion, perceived depth, punch, and musical contrast. Generations of scientists and engineers have struggled to create recording and playback systems with wider and wider dynamic ranges. Remarkably, now that wide dynamic range is finally available, conscious decisions are being made to reduce it back to the levels of the earliest recording devices. After all the progress we have made expanding dynamic range, mastering engineer Bob Katz best described today’s bizarre reality when he declared, “we’re making popular music recordings that have no more dynamic range than a 1909 Edison Cylinder!” (2007, 157).

Figure 2. The decline of precious dynamic range. Vickers, Earl. "The Loudness War: Background, Speculation and Recommendations." Paper presented at Audio Engineering Society 129th Convention, San Francisco, CA, USA, November 4, 2010.

 

The ability to record, process, and deliver stereo audio with CD quality uncompressed PCM is in place today. CD quality PCM cannot be discerned from the high fidelity ideal. In the face of this, steps are taken to reduce audio quality by means of file size compression for the sake of economics and convenience. Quality is further reduced by over-applying dynamic range compression in attempts to make audio sound louder. Because of deliberate actions, little of what is conveyed to end listeners approaches the ideals of high fidelity.

Conclusion

As consumer Internet bandwidth and digital storage capacity continue to increase, the need for lossy CODECs may subside. Strides are being made to improve consumer audio. Apple has chosen a new standard CODEC for iTunes known as AAC, delivered in M4A files. AAC represents a noticeable improvement over the MP3 format. While lossy formats are still the norm, there seems to be an admission from the industry that consumer audio quality is important and still isn’t what it could be.

Both Apple and the European Broadcasting Union (EBU) are driving efforts toward the restoration of healthy dynamic ranges in consumer audio. The EBU has developed guidelines known as EBU R128, a standard against which loudness is measured. Audio productions are measured against this standard and are rejected for broadcast if they are too loud. Since dynamic range compression is used to make audio louder, this standard has led to a decrease in the use of damaging dynamic range reduction.

A permanent component of iTunes Radio is software called Sound Check that automatically turns down audio that is too loud. This is done so that when consumers listen to iTunes Radio, every song plays back at roughly the same loudness, and listeners are not startled or inconvenienced by dramatic changes in loudness between songs. Sound Check removes the incentive to make audio productions sound louder, since every song will be played back at a standard loudness. It is very sobering to hear the negative effects of dynamic range compression once the excitement of the additional loudness is eliminated.
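In spirit, this kind of loudness normalization is a single measure-then-adjust operation. The Python sketch below uses plain RMS as a crude stand-in for the gated, frequency-weighted loudness measures that Sound Check and EBU R128 actually use, and the target level is an arbitrary illustrative value, so it shows only the idea.

import numpy as np

def normalize_loudness(x, target_db=-16.0):
    """Apply one fixed gain so the track's average level hits the target."""
    rms = np.sqrt(np.mean(x ** 2))
    current_db = 20 * np.log10(rms + 1e-12)
    gain_db = target_db - current_db      # negative for tracks that are too loud
    return x * 10 ** (gain_db / 20.0)

Because every track is pulled to the same playback loudness, a hyper-compressed master gains nothing over a dynamic one, which is what removes the incentive to master ever louder.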

It is clear that consumer audio quality is knowingly degraded for the sake of economic gain and convenience. It is important for consumers to understand how the products they buy are modified during production and delivery. Many industries degrade nearly perfect goods for the sake of economics and convenience. The food industry parallels the audio industry in this way. Consumers can be hard-pressed to find a can of corn or peas that does not contain added salt or sugar. Most cans of kidney beans contain preservatives, yet the canning process alone was supposed to provide preservation. Even apples and cucumbers are coated with wax to make them look more desirable. A system is already in place to deliver unprocessed fresh foods to the shelves, yet little of what reaches shoppers is unprocessed. Comparable processes of adulteration are commonplace in the audio industry, but unlike ingredient labels for food, there are no provisions for listeners to freely know or understand what is happening to their audio.

 

References

Blesser, Barry. "The Seductive (Yet Destructive) Appeal of Loud Music." Communauté électroacoustique canadienne / Canadian Electroacoustic Community, 2007.

Caesar, Marvin. "Listener Fatigue and Longevity." Paper presented at Broadcast and Media Streaming Session B9, AES 127th Convention, New York, NY, USA, October 11, 2009.

"Compression - Mathematicians and computer scientists are lazy." On Ramps. https://onramps.instructure.com/courses/1196409/wiki/compression-algorithms (accessed March 2, 2014).

Fantel, Hans. The True Sound of Music. New York, NY, USA: E.P. Dutton & Company, Inc., 1973. (accessed January 26, 2014).

Hartley, Henry. Audio Design Handbook. New York, NY, USA: Gernsback Library, 1958. (accessed January 26, 2014).

Howard, Keith. "Will It Still Be Hi-Fi as We Know It?" Paper presented at the AES 17th UK Conference: Audio Delivery, the Changing Home Experience, UK, 2002.

Katz, Bob. Mastering Audio: The Art and the Science. Oxford, UK: Focal Press, 2007. (accessed January 19, 2014).

Langford-Smith, F., ed. Radiotron Designer's Handbook, 4th Edition. Sydney, Australia: Wireless Press for Amalgamated Wireless Valve Company PTY. LTD., 1952. (accessed January 26, 2014).

Meyer, Brad, and David Moran. "Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback." Journal of the Audio Engineering Society 55 (2007): 755-779.

Milner, Greg. Perfecting Sound Forever, an Aural History of Recorded Music. New York, NY, USA: Faber and Faber Inc., 2009. (accessed January 26, 2014).

Montgomery, Chris. "24/192 Music Downloads and Why They Make no Sense." http://xiph.org/~xiphmont/demo/neil-young.html (accessed January 19, 2014).

Olive, Sean. "Some New Evidence That Teenagers and College Students May Prefer Accurate Sound Reproduction." Paper presented at 132nd Audio Engineering Society Convention, Budapest, Hungary, April 26, 2012.

Pras, Amandine, Rachel Zimmerman, Daniel Levitin, and Catherine Gustavino. "Subjective Evaluation of MP3 Compression for Different Musical Genres." Paper presented at the Audio Engineering Society 127th Convention, New York, NY, USA, October 2009.

Vickers, Earl. "The Loudness War: Background, Speculation and Recommendations." Paper presented at Audio Engineering Society 129th Convention, San Francisco, CA, USA, November 4, 2010.

Vickers, Earl. "The Loudness War, Do Louder, Hypercompressed Recordings Sell Better?" Journal of the Audio Engineering Society 59 (2011): 346-351.

Appendix A

Quantitative vs. Qualitative Essay

Article 1

Vickers, Earl. "The Loudness War: Background, Speculation and Recommendations." Paper presented at Audio Engineering Society 129th Convention, San Francisco, CA, USA, November 4, 2010.

In the paper "The Loudness War: Background, Speculation and Recommendations" by Earl Vickers, the thesis seems best represented by these sentences: “Given the incredible technological advances of the last half-century, one might expect that by now we should live in a musical paradise, with a thriving music industry and recordings of amazing depth, texture and dynamic range. Instead, the industry is in decline ... In fact, early acoustical recorders had a dynamic range of up to 20 dB (Own & Fesler 1981), which is more than the range of most recent recordings” (Vickers 2010).

Vickers exhibits three compelling graphs to represent empirical data.  These representations make it easy to interpret the rise in average signal levels (figure 2) and the decline of dynamic range (figure 3).

The illustrations represent trends with alarming rates of change. The impact is exaggerated by limiting a one-hundred-point scale to its top twenty points. In fairness, only the region of interest is being shown.

It is interesting to note that the data presented in figure 2 (Vickers 2010) is simply the inverse of the data presented in figure 3. One could derive the necessary information from either figure alone.

Both of these techniques have a thesis-supporting effect. By using only the top of the data scales, a reader cannot help but agree that dynamic range looks like it is decreasing at an alarming rate. Using two graphs that essentially represent the same data allowed the author to double the supporting effect of the first graph.

Article 2

Blesser, Barry. "The Seductive (Yet Destructive) Appeal of Loud Music." Communauté électroacoustique canadienne / Canadian Electroacoustic Community, 2007.

 

An interpretation of Blesser’s thesis in "The Seductive (Yet Destructive) Appeal of Loud Music" is that even though audiences are well informed of the dangers, they listen to music loud enough to cause hearing damage.

Blesser supports his thesis by exploring the causes behind loud listening behavior. He proposes three motivations for listening at high volume: social rewards, biological stimulation, and selective aural focus (Blesser 2007).

The author uses comparisons to well-known human experiences to help readers understand concepts. He works in steps to educate the reader so his argument may be clearly understood.

He cites research to support his arguments. Instead of collating numbers from previous studies, he presents concise verbal summaries that support his thesis.

Compare and Contrast

In the field of audio it seems that most peer-reviewed research employs both qualitative and quantitative methods. It is hard to find writing in the field that is strictly qualitative. Because audio is very subjective, it may be impossible to find meaningful research that does not have empirical support. Assessment methods for the subjective evaluation of the quality of sound programme material – Music (European Broadcasting Union 1997) is an example of many attempts to assign meaningful quantities to purely subjective human experience. It is also an example of an audio paper that uses both quantitative and qualitative methods.

Concrete numbers can support an argument quickly. However, when authors use qualitative methods they must do a good job of teaching the audience the concepts behind their arguments, which is invaluable in getting their point across. Qualitative discussions contain excellent concise summaries of other people’s research.

Research writing does not have to be purely qualitative or quantitative.  Qualitative writing can mention quantities, and qualitative methods are used in quantitative writing.

 

References

Blesser, Barry. "The Seductive (Yet Destructive) Appeal of Loud Music." Communauté électroacoustique canadienne / Canadian Electroacoustic Community, 2007.

Assessment methods for the subjective evaluation of the quality of sound programme material – Music. (Geneva) Switzerland: European Broadcasting Union, 1997. (accessed February 2, 2014).

Own, Tom, and John Fesler. "Electrical Reproduction of Acoustically Recorded Cylinders and Disks." Paper presented at the 70th AES Convention, New York, NY, USA, October 30, 1981.

Vickers, Earl. "The Loudness War: Background, Speculation and Recommendations." Paper presented at Audio Engineering Society 129th Convention, San Francisco, CA, USA, November 4, 2010.

 

Appendix B

Visual Rationale

Graphic 1

Figure 1. The decline of precious dynamic range. Vickers, Earl. "The Loudness War: Background, Speculation and Recommendations." Paper presented at Audio Engineering Society 129th Convention, San Francisco, CA, USA, November 4, 2010.

Figure 1 shows a decline of dynamic range in popular music as delivered to end listeners. The only way to make music sound louder is to reduce its dynamic range. The axis labeled Dynamic Range, dB indicates the average amount of dynamic range being delivered to listeners. The available dynamic range of a compact disc is about 96 decibels. This is a good graphic because it shows an alarming downward trend and makes clear that less and less of the available dynamic range is being used. The graphic helps the reader understand how audio quality is being eroded by intentionally reducing one of the key components of audio fidelity.

A graphic that was not chosen showed the increase of RMS, or average, signal levels in popular music. As RMS level increases, dynamic range decreases, and vice versa. That graphic would show an alarming upward trend in popular music sounding louder and louder. It was not chosen because I wrote more about the reduction of dynamic range than about the increase in apparent loudness. I think it was more powerful to show something hard earned, dynamic range, eroding than to show something many people prefer, loudness, increasing.

I feel figure 1 is appealing because of its simplicity. It is easily understood and makes the writer’s point directly and succinctly.

 

Graphic 2

Figure 2. A visual representation of the effects of file size compression. "Compression - Mathematicians and computer scientists are lazy." On Ramps. https://onramps.instructure.com/courses/1196409/wiki/compression-algorithms (accessed March 2, 2014).

Figure 2 uses imagery to demonstrate an audio phenomenon. This is a good graphic because it would be very inefficient to describe in words the way file compression affects audio quality. Readers can quickly notice the loss of resolution and the appearance of artifacts as the file size decreases. This change of quality can then be related back to audio in writing. Several other images were passed over because they were either too technical for my intended audience or too visually dense to make immediate sense of.

I feel the image is visually appealing because it presents only three main subjects. At first glance, the reader will not worry that it will be difficult to decipher or understand.

 

Reference List

Vickers, Earl. "The Loudness War: Background, Speculation and Recommendations." Paper presented at Audio Engineering Society 129th Convention, San Francisco, CA, USA, November 4, 2010.

"Compression - Mathematicians and computer scientists are lazy." On Ramps. https://onramps.instructure.com/courses/1196409/wiki/compression-algorithms (accessed March 2, 2014).