Working with his programming partner, David Poole, Chowning ported Music IV to the Digital
Equipment Corp. (DEC) PDP-1 platform and then by 1966 to the newest generation of DEC computers,
the PDP-6. In the course of converting the code from the IBM platform for which Music IV was written
to the DEC, Chowning and Poole were among the first people to make Mathews' music programming
language available outside of Bell Labs.
After having successfully ported Music IV to the Stanford computer, Chowning turned his attention
to improving the quality of sounds that could be directly synthesized from the computer. He visited
Jean-Claude Risset at Bell Labs in 1968 and learned about his attempts to synthesize the sounds of brass
instruments through the analysis of trumpet sounds.1 In using a computer and finite waveform
measurements to analyze the sound of a trumpet, Risset discovered the tell-tale fingerprint of the sound
that made it so rich and difficult to synthesize. There was a correlation between the growth of the amplitude of the sound and its frequency spectrum. The intensity of the signal during its first few milliseconds was concentrated around the fundamental frequency but then rapidly spread to higher harmonics as the tone grew louder. The waveform analysis allowed Risset to then
synthesize the sounds using the complicated process of additive synthesis and more than a dozen finely
tuned oscillators. Chowning had a realization: “I could do something similar with simple FM,” he
explained, “just by using the intensity envelope as a modulation index.”2
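In modern terms, the idea can be sketched in a few lines of code. The following is a rough Python illustration of envelope-driven FM, not Chowning's own Music IV/Music 10 code; the frequencies, envelope shape, and peak index value are invented for illustration.

```python
import numpy as np

SR = 44100                          # sample rate in Hz
t = np.arange(int(SR * 1.0)) / SR   # one second of sample times

fc = 440.0        # carrier frequency (Hz)
fm = 440.0        # modulator frequency; a 1:1 ratio yields harmonic, brass-like spectra
peak_index = 5.0  # maximum modulation index (illustrative)

# Brass-like amplitude envelope: quick attack, gradual decay
env = np.minimum(t / 0.05, 1.0) * np.exp(-2.0 * t)

# Chowning's insight: let the modulation index follow the amplitude envelope,
# so energy spreads to higher harmonics as the tone grows louder
index = peak_index * env

tone = env * np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
```

With only two oscillators, the spectrum of such a tone evolves over time much as Risset's far more elaborate additive models did.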
Chowning experimented with FM synthesis in 1971 to see if he could apply what he learned from
Risset. Using only two oscillators, and fiddling with the relationship between increased amplitude and
frequency bandwidth, Chowning suddenly found himself producing brass-like tones that were
strikingly similar to those created by Risset’s complicated computer-based simulations:
That was the moment when I realized that the technique was really of some consequence, because
with just two oscillators I was able to produce tones that had a richness and quality about them that
was attractive to the ear—sounds which by other means were quite complicated to create.3
This was a different use of FM than anyone else had made [. . .] It was essentially done with two
oscillators or very few oscillators, so it was a much more efficient technique than Risset’s technique,
and it led to similar results of musical interest.4
As a reality check, Chowning played his brass tones for his friends at Bell Labs. They immediately told
him to patent it.5 Yamaha licensed the technology in 1975, and it became the basis for the DX-7 digital synthesizer, introduced in 1983, probably the top-selling synthesizer of all time.
The success of Chowning’s FM synthesis method was due in part to its extensibility. Chowning not
only tested his synthesis method using two oscillators—one carrier and one modulator—but devised
branching schemes where one modulator could affect several carriers or several modulators could drive a
single carrier. Chowning’s composition Turenas (1972) was a vivid demonstration of these techniques
(see Figure 27.1). The three-part, 10-minute piece used FM synthesis to generate a wide spectrum of
natural-sounding percussion sounds. Using the Music 10 programming language, the composer created
spatially directed paths for the sounds to travel in relation to four channels of loudspeakers. The effect
rendered a remarkably living atmosphere in which organically resonating beats, clicks, and thumps
reminiscent of naturally occurring sounds travelled around the listener like insects flying in the night.
Turenas was decidedly unlike most computer music being composed at that time and factored
importantly into bridging the gap between the computer lab and the music hall.
Figure 27.1 Computers offer composers the ability to manage the projection of sound as well
as its generation. This diagram created by John Chowning illustrates the sound paths used in
his work Turenas (1972), written using the Music 10 programming language used at Stanford
University.
Source: After Charles Dodge and Thomas A. Jerse, Computer Music: Synthesis, Composition, and Performance.
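The branching arrangements described above can be sketched just as simply. This hypothetical Python fragment, not drawn from Turenas itself, drives several carriers from a single modulator; all frequencies and the decaying index are illustrative.

```python
import numpy as np

SR = 44100
t = np.arange(2 * SR) / SR          # two seconds of sample times

fm = 110.0                          # one shared modulator
modulator = np.sin(2 * np.pi * fm * t)

carriers = [220.0, 330.0, 550.0]    # several carriers driven by the same modulator
index = 3.0 * np.exp(-1.5 * t)      # decaying index: bright attack, mellower tail

mix = sum(np.sin(2 * np.pi * fc * t + index * modulator) for fc in carriers)
mix /= len(carriers)                # normalize the summed carriers
```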
Chowning’s algorithms were in good hands at Yamaha. With the composer’s input, Yamaha
engineers devised a method, called key scaling, of dynamically modifying the spectra of a digital oscillator in relation to pitch, avoiding the distortion that normally occurred in analog systems during frequency modulation. The recognizable bright tonalities of the DX-7 were also due in
part to an overachieving sampling rate of 57 kHz in the instrument’s digital-to-analog converter.6
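One way to picture key scaling in code is as a pitch-dependent reduction of the modulation index, so that higher notes keep a tamer spectrum. This is a schematic interpretation of the idea rather than Yamaha's actual implementation; the function name, break point, and slope are invented for illustration.

```python
def scaled_index(midi_note, base_index=6.0, break_note=60, slope=0.05):
    """Reduce the FM modulation index for notes above a break point.
    A hypothetical key-scaling curve; all parameters are illustrative."""
    excess = max(0, midi_note - break_note)
    return base_index / (1.0 + slope * excess)

# Middle C keeps the full index; a note two octaves higher gets a smaller one
print(scaled_index(60), scaled_index(84))
```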
The licensing of Chowning’s patent to Yamaha and others was a generous source of income for
Stanford, earning the university as much as $20 million between 1975 and 1995.7 Some of these earnings found their way back to the Stanford Center for Computer Research in Music and Acoustics. Chowning
and colleagues James Moorer (b. 1945), Loren Rush (b. 1935), John Grey, and instrument designer Peter
Samson (b. 1941) channeled more than $100,000 into a Stanford project to create a digital synthesizer of
their own, driven by a DEC PDP-10 minicomputer and running a ported version of Music V. The
Systems Concepts Digital Synthesizer, affectionately known as the Samson Box after its creator, was
delivered to the university in 1977. Julius O. Smith (b. 1942) was one university composer who used the
Samson Box, lovingly describing it as a “green refrigerator.”8 The synthesizer was designed with Music V
in mind and included hardware equivalents of many of the predefined unit instrument generators
associated with the program. It featured 256 waveform generators and was capable of numerous kinds of
synthesis including FM, waveshaping, additive, subtractive, and non-linear. It was used at the lab for 12
years, eventually being superseded by faster, smarter programmable tools that required much less
maintenance, a fate that also befell large-scale general-purpose mainframe computers as
processing technology became less expensive and the market for PCs created a new paradigm in the
application of computers.
Title: Stria
Background: This rigorously composed work was the result of early experiments in FM synthesis by
John Chowning. Using high-level computer programming to organize and synthesize the sounds,
Chowning sought to develop a work based on the naturally occurring inharmonic ratios between
carrier signals and modulators. He used the Golden Mean (1.618) as the basic unit of his calculations, wherein the ratio between a carrier and its modulator would equal 1 to some power of 1.618, with the resulting characteristics of the music all derived in this way. Writing computer code to compose according to this organizational principle, Chowning produced an inharmonic piece for 26 sine wave instruments, each programmed to play varying notes with a variety of possible envelopes. The overall shape of the composition is also mirrored in the microstructure of each individual tone. The work has several sections, each characterized by events determined by ratios of the Golden Mean (a short code sketch of this ratio scheme follows the listening guide).
Listen For: Frequency and timbral ranges and the variety of sound envelopes. The
attack, sustain, decay, frequency, amplitude, and reverberation of each tone were
computer controlled. Notice how the piece changes gradually over time, experimenting
with the psychoacoustic effects of sliding tones within a dense tone cloud.
0:10–1:00 The piece begins with a frequency of 1,618 Hz (echoing the Golden Mean of 1.618), soon joined
by a series of low, sustained, bell-like tones that accumulate and overlap. Notice
between 0:35 and 1:00 how multiple tones stabilize a sustained chord-like tone and then
break apart as additional frequencies begin to dominate the mix. The overall effect is
one of gradual, large-scale harmonic shifts that dissolve from one to the next, without a
true tonal center.
1:01–1:50 Between 1:05 and 1:50 the tonal center of the piece shifts to the upper registers and one
hears several high-pitched tones clustered close together. Note the vibrato effect as
waves of similar frequencies vibrate in and out of phase with one another.
1:51–4:00 The piece gradually shifts to lower and lower pitches over the course of 2 minutes,
creating vibrato and slightly disorienting effects as individual tones waver up and down
within the mass.
4:01–5:14 Reverberation, sliding notes (glissandi), and the spatial distribution of the tones within
the stereo field add to the complexity of the final portion of the work.
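The ratio scheme described in the listening guide can be sketched in a few lines. This is a toy interpretation of published descriptions of Stria, not Chowning's SAIL/Music 10 code; the base frequency and the exponents are illustrative.

```python
PHI = 1.618   # the Golden Mean used as Stria's basic ratio

def fm_pair(base_freq, carrier_power, modulator_power):
    """Return a (carrier, modulator) frequency pair whose ratio is a power of PHI."""
    return base_freq * PHI ** carrier_power, base_freq * PHI ** modulator_power

# Frequencies derived from a 1,000 Hz base; each pair's ratio is PHI raised to the
# difference of the two exponents, giving the inharmonic spectra heard in the piece
for powers in [(1, 0), (2, 1), (0, -1)]:
    print(fm_pair(1000.0, *powers))
```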
In the early 1980s, Paul Lansky (Princeton) and Barry Vercoe (MIT) engaged in a friendly competition to see who could create the faster and more elegant music programming language. Princeton acquired two DEC MicroVAX computers running UNIX, and Lansky rewrote Mix entirely in C, eventually renaming it Cmix. Vercoe created Csound based on his earlier program Music 11. Being written entirely in the C language, Cmix and Csound were easily portable to other machines and extensible by the user. Each
programmer succeeded in developing a highly adaptable music programming environment. Cmix—later
renamed RTCmix for real-time Cmix—was generally considered to be faster, but Csound became the
more popular program and currently has over 1,200 unit generators. Both are available for free and are
still used at most computer music centers in the United States.
As successive generations of computers gradually improved upon the cost and efficiency of
processing speed and data storage, there was a marked increase in application development across the
globe. Table 27.1 summarizes this trend up until the advent of personal computers.
Table 27.1 Evolution of computer technology
Computer generation | Era | Processing speed (instructions per second) | Data storage capacity (cost per MB) | Progression of computer music applications
1st generation (vacuum tube) | 1939–54 | 5,000 | $40,000 | Composition and scoring
3rd generation (integrated circuit) | 1959–71 | 10,000 to 1 million | $10,000 to $5,000 | Additive and subtractive synthesis; audio synthesis; granular synthesis
Consider what it would be like for an auto mechanic if the technology of automotive engines
changed drastically—fundamentally—every 5 years. The mechanic either learns the new technology and
adapts, or falls behind and becomes unemployed. The challenge facing this imaginary auto mechanic is
not unlike the actual dilemma faced by electronic musicians over the past 50 or 60 years. These were
times of unprecedented paradigm shifts in the field of electronics. Electronic musicians were obligated to
muddle through several stages of re-education just to keep pace with the changing working environment
of their livelihood.
IRCAM
Although the early history of computer music was dominated by developments in the United States,
research began to shift to Europe and other countries as computer centers became more prevalent across
the globe. Of most significance was the founding in Paris in 1969 of IRCAM (Institut de Recherche et
Coordination Acoustique/Musique), a government-supported laboratory for the exploration of
computer applications in the arts.
Established by President Georges Pompidou, the institute appointed composer Pierre Boulez as
director and to lead its efforts in musical research. Boulez hired Jean-Claude Risset to direct its computer
operations. Construction of IRCAM was completed in 1974 and it remains to this day a vital center of
computer music development connected to the Centre Pompidou in Paris (see Figure 27.2). This
international center for the exploration of computer music and media has since hosted many projects
and developed software tools for the use of composers. Chowning perfected some of his FM synthesis
techniques while visiting the Center as a guest. A student of Barry Vercoe’s at MIT by the name of
Miller Puckette (b. 1959) spearheaded software development for the 4X and eventually created Max, the foundation of one of the most widely used computer music environments for personal computers.
In the early 1990s, IRCAM developed a new software package called PatchWork, more specifically
designed for the intended purpose of music composition. In France, a new school of thought regarding
music composition evolved with the developers of PatchWork. Working with a technique they called
“spectral music,” composers including Gérard Grisey (1946–98) and Tristan Murail (b. 1947) from France and the Montreal-born Claude Vivier (1948–83) began to apply techniques of spectral analysis to the
composition of electronic and non-electronic instrumental music. They were interested in creating new
harmonies based on the mathematical principles of frequency modulation, amplitude modulation, and
ring modulation.
The technique of ring modulation can be used to illustrate how a spectral composer might work.
When two sounds are subjected to ring modulation, neither of the source frequencies is heard, just two
new frequencies based on the sum and difference of the original signals. By using a computer to analyze
the spectra or overtone series of a sound that changes over time, a composer could use that data to
mathematically calculate which frequencies would be produced if that sound were subjected to ring
modulation.
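That calculation is simple to express in code. The sketch below takes two sets of analyzed partial frequencies and lists the sum and difference frequencies that ring modulation would generate; the input values are illustrative rather than drawn from any particular spectral work.

```python
def ring_mod_products(partials_a, partials_b):
    """Sum and difference frequencies (Hz) produced by ring-modulating two spectra."""
    products = set()
    for fa in partials_a:
        for fb in partials_b:
            products.add(round(fa + fb, 2))
            products.add(round(abs(fa - fb), 2))
    return sorted(products)

# Example: the first partials of two analyzed tones
print(ring_mod_products([220.0, 440.0, 660.0], [330.0, 990.0]))
```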
Laurie Spiegel (b. 1945) is a composer and musician—a skilled player of the lute and banjo—who also
nurtures a fascination with computer music dating back to the early 1970s. Spiegel took a job as a
software engineer at Bell Labs and in 6 productive years, from 1973 to 1979, worked alongside pioneers
Max Mathews, Emmanuel Ghent, and other talented engineers to explore the outer reaches of computer
music. It was a heady time for computer music and one that was often viewed with skepticism by those
outside the Lab. Spiegel explained:
Whereas back then we were most commonly accused of attempting to completely dehumanize the
arts [. . .] [A]t this point there has become such widespread acceptance of these machines in the arts
that there is now a good bit of interest in how this came to be.9
While at Bell Labs, Spiegel wrote programs to operate Groove, Mathews’ minicomputer-based real-time
synthesis project. Among Spiegel’s compositions with Groove were Appalachian Grove (1974) and The
Expanding Universe (1975). Groove was rooted in the technology of the late 1960s, however, and by 1979 its
performance and capabilities were being rapidly eclipsed by new technology. About this time, Spiegel
made a decision to leave Bell Labs to work as a consultant on new microcomputer-based products as a
computer engineer and composer. Spiegel dove headfirst into exploring the music applications of
microcomputers:
There were wonderful electronics parts shops all over this neighborhood [downtown, New York
City] until gentrification replaced them with expensive restaurants. Especially important was one
place on West Broadway just below Chambers that sold little kits they made up with things like
buzzers and frequency divider circuits, with a breadboard and all the parts and instructions. I
suspect a few of us composers used some of the same kits. I didn’t do nearly as much of this as
several of my friends, but I kludged up a little synth back in the late 1970s inside a seven-inch tape
box that I played live into a microphone like any other acoustic instrument, including through a
digital signal processor.10
Among Spiegel’s consulting projects from 1981 to 1985 were the alphaSyntauri music system for the low-
cost Apple II computer and the design of a high-end analog musical instrument, the Canadian-made
computer-controlled McLeyvier, that never came to market. After the McLeyvier project fell apart in
1985, Spiegel oscillated back in the direction of small, inexpensive desktop computers and created her
best-known music program, the astonishingly modest but capable Music Mouse for the then new Apple
Macintosh 512k computer and later Amiga and Atari computers. Music Mouse was an enabler of music
making rather than a programming environment. It provided a choice of several possible music scales
(e.g., “chromatic,” “octatonic,” “middle eastern”), tempos, transposition, and other controls that were all
played using a “polyphonic” cursor that was moved with the mouse on a visual grid representing a two-
dimensional pitch range. The simple Music Mouse was an elegant example of what Spiegel called an
“intelligent instrument” that could manage some of the basic structural rules of harmonic music making
for the user.
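The kind of help an “intelligent instrument” offers can be suggested in a few lines: the cursor's grid position is snapped to degrees of a chosen scale before any note sounds. This is a schematic of the general idea, not Music Mouse's actual logic; the scale, grid mapping, and function names are invented for illustration.

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]      # scale degrees in semitones

def cursor_to_pitches(x, y, root=48):
    """Map two grid coordinates to scale-constrained MIDI pitches (hypothetical mapping)."""
    def snap(step):
        octave, degree = divmod(step, len(MAJOR))
        return root + 12 * octave + MAJOR[degree]
    return snap(x), snap(y)

# Moving the cursor steps through the scale rather than through raw semitones
print(cursor_to_pitches(3, 9))      # -> (53, 64)
```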
Figure 27.3 Laurie Spiegel, 1981.
Source: Stan Bratman.
Working with mainframe and microcomputers to compose music, Spiegel’s approach often integrates
a predefined logical process running in real time on a computer with actions that she can take during the
generation of the sound:
What computers excel at is the manipulation of patterns of information. Music consists of patterns
of sound. One of the computer’s greatest strengths is the opportunity it presents to integrate direct
interaction with an instrument and its sound with the ability to compose musical experiences much
more complex and well designed than can be done live in one take.11
Old Wave (1980) was composed using a Bell Labs computer that controlled analog synthesis equipment
through the Groove program. With the computer, Spiegel applied weighted mathematical probabilities
to develop the pitches and rhythms of melodic lines. The weightings could be made to change
“continuously or at given time, so that certain notes would dominate under certain conditions.”12
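A toy version of that approach, assuming nothing about Spiegel's actual data or code, draws notes from a weighted distribution whose weights change over time so that different pitches dominate in different passages.

```python
import random

pitches = [60, 62, 64, 67, 69]      # a pentatonic set as MIDI note numbers (illustrative)
early_weights = [5, 1, 1, 2, 1]     # weighting that favors the first pitch
late_weights = [1, 2, 5, 2, 2]      # weighting that favors the third

def phrase(weights, length=8):
    """Draw a melodic phrase from the weighted pitch distribution."""
    return random.choices(pitches, weights=weights, k=length)

melody = phrase(early_weights) + phrase(late_weights)
print(melody)
```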
In another Spiegel work, Pentachrome (1980), an algorithm is used to continuously accelerate the
music, but Spiegel performed the rate and scale of the acceleration by adjusting knobs and dials in real
time. This combination of real-time, almost improvisatory action on the part of a performer who is
otherwise following a process is not an uncommon approach to process music when it is performed live.
Spiegel always kept something of the human touch in her music:
What I could control with the knobs was the apparent rate of acceleration (the amount of time it
took to double the tempo), and the overall tempo at which this happened (the extremes of slow and
fast that were cycled between). This was only one of the processes going on in the piece. Stereo
placement (voicing) was automated, too, except for the percussion voice, which just doubled the
melodic line. I did the timbral changes completely by hand.13
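The accelerating process Spiegel describes might be approximated as a tempo that doubles over a performer-adjustable span of time, bounded by slow and fast extremes. The formula and parameters below are a hedged reconstruction of her description, not her program.

```python
def tempo_at(t, base_bpm=60.0, doubling_time=20.0, slow=40.0, fast=240.0):
    """Tempo (BPM) at time t seconds: doubles every `doubling_time` seconds,
    clamped between performer-set slow and fast extremes (values illustrative)."""
    bpm = base_bpm * 2 ** (t / doubling_time)
    return max(slow, min(fast, bpm))

print([round(tempo_at(t)) for t in (0, 10, 20, 40, 80)])   # 60, 85, 120, 240, 240
```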
One of Spiegel’s early microcomputer works was A Harmonic Algorithm (1980), composed with an Apple
II computer. The piece consists of a program that “goes on composing music as long as the program
is allowed to run,”14 making it the ultimate self-fulfilling prophecy of process composition.
The transition of computer music from large, mainframe systems to microprocessors and personal
computers resulted in a paradigm shift that made electronic music systems affordable and widely
accessible. But living through the transition was not easy for the musician. Nicolas Collins (b. 1954)
explained:
I was resistant initially [. . .] I had taken a summer course in computers when they were like
mainframes and PDP-11 computers and I found them very counter-intuitive and, of course, not
portable [. . .] Then, when I was dragging my heels, Paul DeMarinis [b. 1948] said, “Don’t think of it as
a computer. Think of it as a big, expensive logic chip.” It was like a mantra. That got me going.15
As the 1970s began, the technology paradigm of the computer was making a dramatic changeover to
increasingly miniaturized components. Transistors, originally used individually in analog devices,
became part of the integrated circuit by the early 1960s. The integrated circuit (IC) is a miniaturized
electronic circuit manufactured on a thin substrate of semiconductor material. In addition to transistors,
an IC may contain blocks associated with RAM, logic functions, and the input and output of signals.
The IC, also known as the silicon chip or microchip, can be adapted to many functions and provides the
brains and circuitry for any digital electronic device, from computers to cell phones, MP3 players, and
televisions. The first ICs were manufactured by Texas Instruments during the early 1960s. Following
advances in miniaturization, such chips became widely used as logic function devices in portable
calculators.
A microprocessor is a programmable integrated circuit. It contains all of the basic functions of a
central processing unit (CPU) on a single chip. Prior to the development of the microprocessor,
computers operated using transistorized components and switching systems, making them relatively
large and expensive. The introduction of the microprocessor greatly reduced the size and manufacturing
cost of computers. There are usually one or more microprocessors in a computer, each with potentially
thousands or hundreds of thousands of embedded transistors. The dramatic reduction in the cost of
processing power brought on by the microprocessor led to the introduction of the microcomputer by the
end of the 1970s. At the same time, there was a shift in the development of computer music from large-
scale computer environments to the desktop of the composer.
Before there were microprocessors dedicated to audio signal processing, there were ICs with sound-
specific applications in toys, appliances, and telephones. The first “oscillator on a chip” that was both
inexpensive and widely available was the Signetics NE/SE566, designed for use in touch-tone
telephones. It was the first audio chip that composer Nicolas Collins acquired. The year was 1972 and he
was in his last year of high school and about to embark on undergraduate study with Alvin Lucier at
Wesleyan University, Connecticut. Collins taught himself to assemble a little gadget that could make
satisfying boops and beeps with the SE566: “It cost $5, which seemed like a lot of money at the time. But,
you know, the synthesizer was $5,000.”16 It turned out that Collins’ discovery had also been made by
several other soldering composers. A few years later he was able to look “under the hood” of one of
David Behrman’s (b. 1937) early homemade synthesizers. This was not a computer, nor even a synthesizer
in the traditional sense, because it had none of the usual paraphernalia found on commercial
instruments, such as voltage-controlled filters, envelope generators, and modulation wheels. All
Behrman wanted was a lot of oscillators. He soldered them together along with logic circuits and pitch
sensors to create an early logic-based interactive sound synthesizer. It was used in his work for
synthesized music with sliding pitches. Tones were triggered by several musicians and sustained by the
synthesizer, dying out after a few seconds. As a tone died out, it modulated or deflected the pitches of
other tones that were being played and this caused sliding pitches to occur during the attack and decay
parts of a tone. The soldering composer had crossed the first line into the digital age. The chips provided
him with a sonic wall of wavering, digital bliss. Behrman had become the “Phil Spector of Downtown,”17
the father figure of a new wave of electronic music tinkering.
Collins called the Signetics chip the “cultural linchpin for an entire generation” of composer-hackers.
A lot of tinkerers learned basic IC breadboard design with the SE566. Even more significant was that,
before too long, the Signetics chip was already obsolete, only to be replaced by the next generation. Each
successive IC was more versatile yet less expensive. The economics of technology were for once working
in favor of the electronic musician. Composers Collins and Ron Kuivila (b. 1955) had just started taking
classes at Wesleyan:
We were like the idiot twin children of Alvin Lucier. We were desperately trying to learn
electronics. I don’t think either of us had any real intuition for it. We just forced ourselves to do it.
What else could you do? You were a student, you had time and no money, so you were trying stuff.
But here’s what happened. Technology got cheaper and more sophisticated and there was a
generation of composers who taught themselves this stuff. There was Ron, myself, John Bischoff,
Tim Perkis, Paul DeMarinis. Those are the names that come to mind offhand. And we’re all about
the same age.18
Behrman recalled:
I remember riding on the Cunningham bus in the early 1970s with manuals about logic gates [. . .]
There was a period several years before the computer entered the picture where I remember we
could do switching networks.19
As a new generation of composers was discovering the work of Mumma, Tudor, and Behrman, they
began to ask for help in learning how to build their own instruments. A watershed event for a select
group of these young composers was the “New Music in New Hampshire” workshop in Chocorua, New
Hampshire, in the summer of 1973 (see Figure 27.4). For a little over 2 weeks, more than a dozen students
learned how to build electronic instruments and compose for them. Behrman and Mumma both taught
courses in making homemade instruments using solid-state electronics, circuits, oscillators, envelope
generators, and other components of synthesis. Tudor’s contribution was a workshop in a somewhat
opposite direction, in learning techniques for creating “sound transformation without modulation.” The
result of the Tudor workshop was the first performance of Rainforest, a large-scale electroacoustic
environment performed by attending musicians.
Figure 27.4 Promotional flyer for the Chocorua summer workshop.
Source: Gordon Mumma.
Behrman next went to California. Mumma had been invited to the University of California at Santa
Cruz (in 1973) to establish an electronic music studio there. Behrman joined Robert Ashley at Mills
College in northern California in 1975. The Bay Area became the West Coast’s experimental station for
soldering composers. Rooted in Silicon Valley and drawing nourishment from the proximity of the first
microcomputer manufacturers, the Mills program attracted many young soldering composers, including
Paul DeMarinis, Ron Kuivila, Laetitia deCompiegne, and John Bischoff. As Behrman remembered:
I remember saying to myself, “No, I’m not going to go down this path into computer software” [. . .]
There were lots of people there who were interested in this new microcomputer thing that was just
coming out. Students started coming in with the very first kits.20
Up until then, the synthesizers Behrman had been building were hardwired to do only one thing, such as
play a defined set of oscillators:
It seemed that this new device called the microcomputer could simulate one of these switching
networks for a while and then change, whenever you wanted, to some other one.21
The breakthrough in microcomputers came with the arrival of the KIM-1 (1975), a predecessor of the Apple computer built around the same MOS 6502 microprocessor. One individual from the Bay Area scene was largely
responsible for moving the gadget composers from soldering chips to programming the KIM-1. Jim
Horton (1944–98), by all accounts the leading underground computer evangelist in Berkeley, preached
the miracles of the KIM-1 at regular meetings at the Mediterranean Café near UC Berkeley. Collins
explained:
He was the first person to get a single-board computer—a KIM-1—for use for music. This caught on.
These computers were made for controlling machines and for learning how a microprocessor
worked. They looked like autoharps. They had a little keypad in the corner, a little seven-segment
display.22
The KIM-1 was a primitive, industrial-strength microcomputer for process control applications. It could
be programmed with coded instructions—machine-language software—but these were entered by
pressing keys on a hexadecimal pad. It had no keyboard or computer monitor like the microcomputers of
today. One entered a sequence of codes and hit the run button. The composer was operating very close
to the level of the machinery itself. Behrman, Paul DeMarinis and other composers found that the KIM-1
was ideal for controlling their primitive, chip-based synthesizers. They built in cable ports, not unlike
printer connections, to connect homemade synthesizers to the KIM-1 (see Figure 27.5).
Figure 27.5 A homemade interface between a KIM-1-era microcomputer and the homemade
synthesizers of David Behrman.
Source: Thom Holmes.
Horton’s work, dedication, and know-how led to the development of live performances of
microcomputer music in the Bay Area during the late 1970s. One group founded by Horton was the
League of Automatic Music Composers, which also included John Bischoff (b. 1949), Tim Perkis (b. 1951),
and Rich Gold. Members of the group have continued to work over the years on the creation of
computer music using networked machines, inexpensive technology, and low-level programming
languages. One extension of the League of Automatic Music Composers was The Hub, a group of six
individual computer composer–performers connected into an interactive network. The Hub took shape
around 1989 and included members Mark Trayle (b. 1955), Phil Stone, Scot Gresham-Lancaster (b. 1954),
John Bischoff, Chris Brown, and Tim Perkis. Their music was:
[A] kind of enhanced improvisation, wherein players and computers share the responsibility for the
music’s evolution, with no one able to determine the exact outcome, but everyone having influence
in setting the direction.23
The phenomenon of handmade electronic music instruments for performance followed in the footsteps
of Tudor, Mumma, Behrman, and Oliveros. During the early days of this movement, most computer
sound applications were related to video game consoles such as those by Atari and Sega. This activity
represented an extremely low end of the market when compared to the robust emergence of its high-end
cousins in the digital synthesizer world. Inexpensive microcomputer components allowed those
interested in tinkering with handmade electronic music instruments to continue on a path begun earlier
with analog technology. These were the persistent soldering composers, the circuit builders who
imagined sounds and processes and then found ways to create them. Not satisfied with, and often unable to afford, the kinds of synthesizing equipment that only rock stars could buy, they worked
with the trickle-down technology of the computer industry, the cheapest chips, and mass-produced kits
and circuits. These instrument builders came from the Heathkit school of electronic music begun by
David Tudor and continued in successive generations primarily by Paul DeMarinis, Laurie Spiegel, John
Bischoff, Tim Perkis, Nicolas Collins, Ron Kuivila, and Matt Rogalsky, among others.
Another common sound chip in the early days was the Texas Instruments SN76489 Four-Channel
Programmable Sound Generator. It included three programmable square wave generators, a white noise
oscillator, and amplitude modulation of the signal. Chips like these were used to create the tunes that
were played while a video game was operating. Each of the major game manufacturers, including Atari,
Nintendo, and Commodore, released chips specialized for use with their game consoles. After acquiring
the license for John Chowning’s FM synthesis patent in 1975, Yamaha also released a series of chips of
varied sophistication that could be used in home computers and game consoles. The limiting factor
for all sound chips was the computer hardware itself; the only way to output the sound signal was
through the tinny speaker built into the personal computer. Much has changed since then in terms of
the quality of the sound output of these devices, but the state of the art in gaming music in the 1980s was
low-fidelity sound.
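The character of a chip like the SN76489 can be suggested in software with naive square waves and a crude noise source. The sketch below is a rough approximation of the sound rather than an emulation of the chip's registers; the frequencies and mix levels are illustrative.

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR                          # one second of sample times

def square(freq):
    """Naive square wave, like one PSG tone channel (no band-limiting)."""
    return np.sign(np.sin(2 * np.pi * freq * t))

def noise(n, seed=1):
    """Rough stand-in for the chip's noise channel."""
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=n)

mix = 0.3 * square(440) + 0.3 * square(659) + 0.15 * noise(len(t))
```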
The availability of MIDI in 1984 incentivized microcomputer makers to develop more robust methods
of producing computer music. The most adaptable solution was to provide an expansion card dedicated
to sound synthesis and other audio processing tasks that could be plugged directly into a peripheral slot
in a computer’s motherboard. Some of the best-known sound cards were the Sound
Blaster family produced in 1988 for the DOS operating system and the Applied Engineering Phasor
sound card for the Apple IIe personal computer.
By the second half of the 1980s, the world of synthesis was undergoing another sea change. Personal
computers such as the Apple II (1977) and IBM PC (1981) were the reason for this, with the costs of
processing power, RAM, and hard disc storage all falling precipitously. As in the general field of
computers, the rise of the microcomputer meant the fall of expensive turnkey systems. In electronic
music, this meant that expensive turnkey instruments, such as the Synclavier and Fairlight, were on the
wane, being replaced by the ever-increasing power of the laptop. This also meant that a huge aftermarket
grew to encompass everything needed in electronic music, from keyboard controllers to software
synthesizers designed to operate on microcomputers. This field continues to grow enormously. From a historical point of view, we can look back on the evolution of Max as an example of a versatile tool that
has evolved to give more and more power to the laptop composer.
IRCAM released its first version of Max, a graphical programming language for music applications,
created by Miller Puckette, in 1988. It was developed to support real-time interaction between the
performer and computer and provided a rich array of virtual patches and controllers for the management
of audio processing. The first client for a Max patch was Philippe Manoury, for his composition Pluton.
IRCAM then began developing a hardware-based version of Max (1989) for the NeXT computer and Ariel signal-processing boards, used to drive event signals, with the signal-processing part of the patcher (FTS, for “faster than sound”) running on the hardware of the IRCAM Signal Processing Workstation.
A more musician-friendly version of Max was introduced in 1990 by Opcode, with its design
improved by David Zicarelli. This microcomputer program for the Macintosh became an instant success
and continues to be one of the most widely used software controllers for real-time music synthesis
today. Other popular programming languages—many available for free and using open-source code—
include Csound (by Barry Vercoe, 1985), RTCmix (by Brad Garton, John Gibson, Dave Topper, Doug
Scott, Mara Helmuth, 1995), SuperCollider (by James McCartney, 2002), ChucK (by Perry Cook and Ge
Wang, 2003), and MetaSynth (by Eric Wenger, 2000), which features granular synthesis. Miller Puckette
released Pure Data (Pd) (1996), a rewrite of Max as an open-source, software-only platform, with both
logic and audio (DSP) capabilities.
David Zicarelli founded Cycling ’74 (USA) and released MSP (1997), named after Miller Puckette, a
set of audio extensions for Opcode’s Max that used the PowerPC chip for real-time signal processing on
the Macintosh platform with no additional hardware. The MSP engine and object API were initially
based on the Pd system released the previous year by Puckette. IRCAM, with development led by
François Déchelle, completed jMax (1999), a new real-time version of its performance software for
personal computers.
NOTES
1 Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music (Upper Saddle River, NJ: Prentice Hall, 1997),
p.116.
2 Tom Darter, “John Chowning, an Interview.” Available online:
[Link]/~bensondj/html/[Link] (accessed July 25, 2007).
3 Ibid.
4 Tae Hong Park, “An Interview with Max Mathews,” Computer Music Journal, 33:3 (Fall 2009), p.12.
5 Joel Chadabe, Electric Sound: The Past and Promise of Electronic Music (Upper Saddle River, NJ: Prentice Hall, 1997),
p.117.
6 Tom Darter, “John Chowning, an Interview.” Available online:
[Link]/~bensondj/html/[Link] (accessed July 25, 2007).
7 David F. Salisbury, “Yamaha, Stanford Join Forces on Sound Technology” (July 16, 1997). Available online:
[Link] (accessed July 25, 2007).
8 Julius O. Smith, “Experiences with the Samson Box.” Available online: www-
[Link]/~jos/kna/Experiences_Samson_Box.html (accessed July 25, 2007).
9 Laurie Spiegel, “The Early Computer Arts at Bell Labs.” Available online: [Link]/ls/btl/[Link] (accessed
December 6, 2019).
10 Laurie Spiegel, personal communication with Thom Holmes, June 27, 2001.
11 Laurie Spiegel, liner notes, The Expanding Universe (Philo Records, 9003, 1980).
12 Ibid.
13 Ibid.
14 Ibid.
15 Nicolas Collins, interview with Thom Holmes, April 2, 2001.
16 Ibid.
17 Nicolas Collins is credited for coining this nickname for Behrman.
18 Nicolas Collins, interview with Thom Holmes, April 2, 2001.
19 David Behrman, interview with Thom Holmes, March 13, 2001.
20 Ibid.
21 Ibid.
22 Nicolas Collins, interview with Thom Holmes, April 2, 2001.
23 Tim Perkis, liner notes, Wreckin’ Ball: The Hub (Artifact Recordings, ART 1008, 1994), p.1.