Copyright ©2002 by Dennis Báthory-Kitsz
This is the story of electronic music on the cheap, and how inexpensive electronics abets, inhabits, and inhibits the musical canvas.
The year was 1964. I'd just heard Karlheinz Stockhausen's Gesang der Jünglinge for the first time, even though the piece was seven years old. Yup, I thought, that's what I want to do--that and everything else, of course, but that, for sure.
I was 15. It was the composers' in-between generation, too young for the postwar avant-garde and its nascent minimalist rebels, and too old to be in on the development of postmodern pop-influenced idioms. But at 15, I didn't know that yet. All I knew was that Gesang was an auditory beacon for me, and I left Duke of Earl and its kin way behind.
What Gesang contributed to my thought was that, although I was already a 'pencil and paper' composer, there existed the idea of sound per se. This may not be a revolutionary idea from today's vantage (it's 38 years since I first heard Gesang), but for me, a kid who lived over a dry cleaning store and played bass clarinet in a high school band, it was a world apart.
In 1969, I became courageous enough to put what I'd heard to work. I was handy with a tape recorder and soldering iron, but there was no electronic music lab at Rutgers--there was hardly a music department, for that matter. I lived in a Trenton, New Jersey, tenement. The synthesizers from Moog and Arp were well beyond my means. So I took out paper and pencil and drew my first electronic composition.
I had no history or education to rely on. I'd seen some scores in John Cage's Notations, and even Gardner Read had an example or two in his then-new Music Notation: A Manual of Modern Practice. But it was still three years before the standard would appear--Erhard Karkoschka's Notation in New Music--so I was left to invent what I imagined.
By 1970, though, I had some tape recorders of my own, plus both brand-new raw tape and boxes full of electronic parts salvaged from the nearby Princeton landfill, dump of the wealthy. I shoplifted solder and splicing tape from Radio Shack.
All of the early equipment is gone, but here is what my electronic studio consisted of in 1970, the first items having been purchased with a bank loan:
How could one actually create 'art' in such a studio?
Filtering was done with R/C networks or by lifting tape away from the record head. Head screws were twisted back and forth for on-the-fly phase shifting. Delay loops were created by running tape from one machine to another across the room over sewing-machine spools. The circuit sides of oscillators were 'played' with fingertips to create filtering, feedback, and changing patterns of pitches and slides. Tapes and records were played backwards, or rocked back and forth (shades of contemporary turntable art). Tapes and records were played at different speeds, both standard (15/16 through 15 ips) and others made up with arbitrary pucks and variable voltage controls (most of them handmade, and burning out every few uses). Concrete sounds were recorded and saved on strips taped to the ceiling. Sequences of sound were cut and spliced together from tiny snippets. (The Dynaco amps, unused to intense low frequencies, blew up with enough regularity that a stock of output transistors sat on the shelf.)
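In today's software terms, that cross-room delay loop behaved roughly like a feedback delay line. Here is a minimal Python sketch of the analogy--nothing like the original hardware, with delay time, feedback, and mix values chosen arbitrarily:

```python
import numpy as np

def tape_style_delay(signal, sample_rate=44100, delay_seconds=0.75,
                     feedback=0.5, mix=0.5):
    """Feedback delay line, loosely analogous to a two-machine tape loop."""
    delay_samples = int(delay_seconds * sample_rate)
    tape = np.zeros(delay_samples)          # the 'tape' travelling between machines
    out = np.zeros(len(signal))
    pos = 0
    for i, x in enumerate(signal):
        delayed = tape[pos]                 # playback head of the far machine
        out[i] = (1 - mix) * x + mix * delayed
        tape[pos] = x + feedback * delayed  # record head of the near machine
        pos = (pos + 1) % delay_samples
    return out
```

The delay time in the real setup was fixed by the physical tape path and the machine speed; here it is just a parameter.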
Much was play. Stomach Music and Telephone Trip both arose from chance events of stomach growling and a strange telephone connection. syne-4 was a four-channel electronic improvisation on intervallic slides. D'a'lpp used metal cans tapped randomly by friends as the tape speed was changed and delay was improvised. Every available 'thingie' with wires was recovered from the dump and used as a sound source or process.
Nothing was predictable. I found out years later that artists like Stockhausen were struggling with this as well; every take of Gesang (yes, made from pages of written-down notes and drawings) was different, imprecise, not replicable.
This lack of replicability was wonderful and invigorating for performance, so long as the sound creation was intended to be flexible, interactive, and improvisational.
It also presented me with two serious compositional dichotomies:
As an independent composer, I straddled the worlds of avant-garde and modernist, of abstract and mapped. And, as an independent composer, I was required to invent my own tools for doing all of these.
It was exhausting.
1969 / Score: Composition for Tape & Soloists
1970 / Pieces with thingies: Electronic Constructions No. 1, No. 2, in Glass (2), in Sliding Tones
1970 / Pieces with turntables: Electronic Construction in Chorus & Sounds
1970 / Concrete pieces: Three Concrete Constructions
1971 / Pieces with thingies: syne-4*
1971 / Live with thingies: Violin Construction*
1972 / Pieces with thingies: Fugue*; Wedding Music*
1972 / Pieces with turntables: Praeludium (All White)
1972 / Concrete pieces: D'a'lpp; Stomach Music; Telephone Trip
In the four years from 1969 to 1973, only a dozen and a half pieces were completed, most of them brief experiments. I had developed and learned a body of electronic tools and techniques that most electronic and concrète composers were using: found materials, turntables and tape recorders, oscillators, filters, modulators and processors.
Though such cheap electronics had produced wonderful work for the avant-garde, that era was clearly over. Besides, nothing of the caliber of Composition for Tape and Soloists was finished during that time, and that piece was still waiting in paper form.
In late 1972, a strange (and serendipitously cheap-looking) announcement came my way. A company impressively called Ionic Industries promised a complete, non-modular keyboard synthesizer, and an accompanying black box that would memorize and sequence the trigger signals and voltage levels.
When I arrived at Ionic Industries in Morristown, New Jersey, I found no sleek, modern factory--only a modest, suburban, split-level home, with a single prototype "Performer" synthesizer in the basement. A genial Alfred Mayer demonstrated it with the help of his son, who ran the "Digionic" sequencer that was still a homemade rat's nest of wires and circuits (and would ultimately never be delivered). The bank provided the money at exorbitant interest, and on July 4, 1973, I drove to Ionic to pick up the prototype.
It was so heavy, I named it "Killer".
Killer's self-contained, cable-free, cross-switched patching and built-in stereo amplifier and speakers represented an enormous leap forward in synthesizer thinking, one that would never be successfully emulated by other manufacturers.
Killer included the basic elements of the era's analog synthesis: oscillators, filter, white noise generator, ring modulator, reverb, and envelope; a block of special effects that included repeat, tremolo, fuzz, wah and portamento; and a 49-key keyboard with touch sensitivity.
There were two main oscillators (sine/triangle and square/triangle) and one square with an extended low range (to .01 Hz--a direct-coupled output that also caused Killer to blow up my transistorized Dynaco power amps). The shape (duty cycle) and relative level of the waveform pairs were adjustable; I later added octave dividers and tuning spread trimmers. Also present were external inputs (line or microphone, used as a source or modified), a separate keyboard oscillator with octave doubling and tuning spread control (for micro- through macro-tunings), and a keyboard dynamics control that could be routed to any parameter.
Signals were sent through pannable stereo outputs, and controlled through a trigger (external or via keyboard) and an attack-decay-release (ADR) envelope, which I modified with a sustain control. I added envelope quieting, because the envelope off state was very sensitive to temperature changes (and got worse over time); this control trimmed it to silence.
A pair of axes (pseudo-joysticks) controlled depth and level of whatever patches were selected, and a filter with variable level, intensity, and Q was available to be switched in place. The classic spring reverb had a rich dry/wet mix.
Each source could be modified by the envelope, ring modulator (fed from any other source), filter, and reverb, and routed directly to either or both output amps. The filter (either with its modified output or ringing at the Q point) could also serve as a source, as could the keyboard oscillator, external inputs, and white noise. A trapezoid (second envelope) level was also adjustable. The sticks adjusted two axes, assignable to any device and adjustable (using the two limit sliders) from no effect to full range; they made Killer an astounding microtonal machine. To make sure the player knew which sources were in use during performance, there were four indicator lights (one for each oscillator and one for any other source).
At the output end, the portamento worked quite well, with an adjustable pitch-slide speed that makes MIDI look positively amateurish. I replaced the fuzz with a fuzzier circuit of my own design; tremolo was a triangle controller, and repeat a square controller. The latter two could be given a different variable rate for each channel and, combined with the Z-axis auto-pan (auto-pan/out-of-phase to rear speakers), created a swirling surround sound, to which I later added 'soundstage' controls.
The double-rail keyboard determined dynamics from the time difference between striking the top and bottom rails, and divided the voltage with a precision resistor ladder. An LM550-controlled bipolar 18-volt power supply was a sturdy, high-current unit that ran the output amplifiers into high-efficiency speakers. With six motorcycle batteries, Killer could run as a self-contained outdoor unit, and was in fact used for Invocation, Dance and Lament for Twandano with dancer Reuben James Christian Edinger, the first performance at PepsiCo's sculpture garden before it became a popular performance venue.
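The dynamics mechanism is easy to restate in modern terms: the faster a key travels from the top rail to the bottom rail, the louder the note. A hypothetical sketch of that idea follows--the timing constants and the linear mapping are my assumptions, not the Performer's actual circuit values:

```python
def rail_velocity(top_rail_time_ms, bottom_rail_time_ms,
                  fastest_ms=2.0, slowest_ms=50.0):
    """Map top-to-bottom key travel time to a 0.0-1.0 dynamics value."""
    travel = bottom_rail_time_ms - top_rail_time_ms
    travel = min(max(travel, fastest_ms), slowest_ms)   # clamp to a usable range
    # Linear map: fastest travel -> 1.0 (loudest), slowest -> 0.0 (softest)
    return (slowest_ms - travel) / (slowest_ms - fastest_ms)

# Example: a 5 ms travel time yields a fairly loud note (about 0.94)
print(rail_velocity(0.0, 5.0))
```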
All the circuit boards were phenolic and noisy. Transistors often needed replacement, too ... the circuits were all designed around discrete parts, as monolithic op amps were new and expensive. The boards were fairly easy to access and fix or modify, and fortunately, the designers marked the purpose of each section of circuit on the boards.
Killer was a surprisingly performable instrument. The designers had ergonomics working before IBM invented the term. Color-coding and a logical layout made the instrument playable in near-darkness; in fact, I used it in the New Jersey State Museum Planetarium for the live version of Somnambula for recorder and synthesizer. I printed and bound books of blank layout sheets so that I could switch settings on the fly during a show. Both Killer and David Gunn's Astrosynarce (an unmodified Performer synth made as part of Ionic's small production run) were used in David's Boondock and my Rando's Poetic License, as well as many other performances from 1974 to 1982. The machines were then retired.
A complete, durable synthesizer like Killer opened the door to performance, improvisation and composition without the technical struggle of cables, but it had discomfiting limits: it wasn't expandable; its tuning was unstable; and, most of all, it never had its planned sequencer.
In the interim, I experimented (like Composition for Tape and Soloists, again on paper) with electronic interactivity. Network C/R, for tethered dancer and interactive electronics, used the dancer's body to control the music via a web of sensors. Three versions were written, each with a different take on how the sensor clothing could interact with the electronic sound; none was ever realized.
1973 / Killer pieces: Transresistor; Four Bach Transcriptions
1973 / Live with thingies: i cried in the sun aïda; Autoharp
1974 / Killer pieces: The Development of the Consciousness of Space in a Child; Five Daydreams; Bomber; Three Renaissance Transcriptions
1975 / Live with Killer: Somnambula; Invocation, Dance and Lament for Twandano; ...and gently lead
1975 / Live with thingies: Outtake Rodemas
1975 / Live with human interfaces: Network C/R (3 versions)
1973-75 / Live with Killer: Multiple improvisation performances with dancers and poets
The beauty of analog synthesis is its simplicity. Every knob has a task, there are no menus or selection trees, and 'playing around' is invited.
But its disadvantage is that it forgets. It forgets tuning and settings, and has no library to call up to prepare the machine for performance. It doesn't synchronize to tape. It has a limited set of resources, and a unique color that depends on circuit design. It's an orchestra with knobs--quirky, hard to maintain, expensive, personal, and constantly in need of praise.
Because Killer's sequencer never appeared, and I was anxious to get on with more complex projects (yes, by then I had heard the Synclavier, developed here at Dartmouth), I considered building a programmable memory box with voltage output and trigger. I was saved from a serious fate by the appearance of the Radio Shack TRS-80 microcomputer in 1977.
I had two goals with this small computer: to learn to program sound output, and--more immediately--to put its control functions to work as Killer's sequencer.
Not unexpectedly, I was distracted from both goals by the idea of integrating the computer into the construction of musical ideas. Rando's Poetic License premiered in Washington, D.C., with Killer played both as usual and through a simple resistor-ladder output, alongside random quasi-poetry generated from a vocabulary created by the audience during the performance.
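The poetry generator itself was simple in principle: random choices from templates, filled with words the audience had just supplied. A toy sketch of the idea--not the original TRS-80 program, and with invented templates:

```python
import random

def rando_line(nouns, verbs, adjectives):
    """Assemble one quasi-poetic line from audience-contributed words."""
    n = lambda: random.choice(nouns)
    v = lambda: random.choice(verbs)
    a = lambda: random.choice(adjectives)
    templates = [
        lambda: f"the {a()} {n()} {v()}s",
        lambda: f"{n()} of {a()} {n()}",
        lambda: f"{v()} the {n()}, {v()} the {a()} {n()}",
    ]
    return random.choice(templates)()

# Example with a tiny audience-supplied vocabulary
print(rando_line(["river", "wire"], ["hum", "dissolve"], ["blue", "hollow"]))
```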
Computer-analog hybrids became very interesting because the improvisatory, exploratory nature of the analog devices combined well with the digital realm's replicability and detailed control. I spent the better part of four years developing interface boxes and standalone digital-analog instruments, such as the Rhythmatron. [Alas, no pictures of the now-destroyed Rhythmatron exist.]
As an aside: Building instruments had been part of my music since the late 1960s, when I started with harpsichord and clavichord kits, began stripping existing instruments (the Defrocked Autoharp for Autoharp), combined bells and hand instruments in non-standard performance (such as The Owl Departing, Cy-Gît, Gendarme, and the Permutrance series), and eventually graduated to doing unique instruments for specific pieces--the Uncello, Hharp, Glass Chimes, Brass Chimes, and Gong were built for Plasm over ocean; Ovarian Xylophone and Buzzophone for Christian Wolff in Hanover; the Triangulum and Exlophon for Wedding Music; and various others.
The Rhythmatron was a unique drum machine. Unlike commercial models that were to appear in a few years, the 1982 Rhythmatron included a stable, integrated-circuit white-noise source feeding an 8×8 matrix of filters adjustable around their Q point, with variable pitch and feedback. The results were mixed to a stereo output. An instrument was created by summing these resonant filters in any combination, emulating both real and imaginary percussion. Events were recorded and re-triggered by a static, battery-backed memory matrix (from one to 256 beat patterns, controlled by a thumbwheel), with tempo varied by control knob or a foot pedal (from one to 1,000 beats per minute); recording could be done live (with the variable clock running) or a step at a time (advanced with a one-shot trigger).
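In software terms, each Rhythmatron 'instrument' was noise shot through a handful of ringing filters. A rough sketch of that principle--filter tunings, decay values, and burst length here are illustrative assumptions, not the original circuit's:

```python
import numpy as np

def resonator(excitation, freq_hz, decay, sample_rate=44100):
    """Two-pole resonant filter: rings at freq_hz; decay (0..1) acts like Q."""
    theta = 2 * np.pi * freq_hz / sample_rate
    a1, a2 = 2 * decay * np.cos(theta), -decay ** 2
    y = np.zeros(len(excitation) + 2)
    for n, x in enumerate(excitation):
        y[n + 2] = x + a1 * y[n + 1] + a2 * y[n]
    return y[2:]

def drum_hit(partials, length=0.3, sample_rate=44100):
    """Sum several resonators excited by a short white-noise burst."""
    noise = np.random.uniform(-1, 1, int(length * sample_rate))
    noise[int(0.01 * sample_rate):] = 0.0            # keep only a 10 ms burst
    voices = [resonator(noise, f, d, sample_rate) for f, d in partials]
    mix = np.sum(voices, axis=0)
    return mix / np.max(np.abs(mix))                 # normalize to +/-1

# An imaginary drum: a low thump plus two metallic partials
hit = drum_hit([(90, 0.999), (410, 0.996), (1330, 0.992)])
```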
Simultaneously, I was exploring the world of musical output directly from microprocessors--although doing much of sonic interest with a 0.9 MHz device was impossible. I wrote software ('Quaver') and articles ('MicroSonics'). But my concentration remained on the interface between analog and digital, itself a sometimes disturbing struggle of speed and stability.
The creation of interesting sonic landscapes had been inhibited by the melding of computer technology and analog systems. But worse was to come, for a while.
1978 / Live with computer and interfaces: Rando's Poetic License
1979 / Live with computer: Not Vermont Hardware
1981 / Live with computer, Killer, thingies: Wedding Music
1982 / Live with Rhythmatron: Armies of Mice (composition by David Gunn)
1982 / Live with computer, Killer, thingies, Rhythmatron, and interfaces: Cruise; Bugs
I had begun writing for computer magazines in 1979, and using computers for all sorts of tasks, not only musical ones, was part of daily life. CompuServe discussion groups and dialup bulletin boards were active in 1981, along with frequent postal exchanges. An ad hoc group was arguing about a digital transmission method for musical information, and as a result of those arguments, the first specification for the commercial Musical Instrument Digital Interface (MIDI) was announced in 1982.
To me, MIDI was almost a disaster. It was everything my digital/analog interfaces were already growing out of: slowness, crude resolution, and (in practice) unfriendliness to non-standard systems of tuning and pitch change. Sliding pitches and morphing were nowhere to be found. Up front were only the basics: pitch, velocity, duration, and instrumentation. The rest was as much of a trial as programming my own black boxes.
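The pitch problem is easy to illustrate: in basic MIDI, a smooth glide has to be chopped into discrete pitch-bend messages, one 14-bit value at a time per channel, riding on top of a fixed note. A small sketch, assuming the common default bend range of plus or minus two semitones:

```python
def glide_to_bend_messages(start_semitone_offset, end_semitone_offset,
                           steps, bend_range=2.0):
    """Quantize a continuous glide into a list of 14-bit pitch-bend values."""
    messages = []
    denom = max(steps - 1, 1)
    for i in range(steps):
        offset = start_semitone_offset + (end_semitone_offset - start_semitone_offset) * i / denom
        # Map -bend_range..+bend_range semitones onto 0..16383 (8192 = no bend)
        value = int(round(8192 + (offset / bend_range) * 8191))
        messages.append(max(0, min(16383, value)))
    return messages

# A two-semitone upward slide rendered as 8 discrete bend messages
print(glide_to_bend_messages(0.0, 2.0, 8))
```

Every one of those values must be transmitted as its own message over the serial link, which is where the slowness and the stepping become audible.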
Its simplicity, however, swept it into the commercial world immediately and with enormous enthusiasm, and a wall began to grow (once again!) between the experimental composer and the pop/commercial artists as MIDI devices limited to the basics appeared. Formidable pre-MIDI electronic machines quickly faded.
Instead of adopting MIDI, for the next ten years I further developed my own custom interfaces in presentations, such as Nighthawk in 1985, for four independent computers programmed to respond to stimuli with sounds, and the 1986 In Bocca al Lupo, an installation experiment in interactive sonic learning.
In Bocca al Lupo represented another 'inhabiting' of my work by electronics. The formal qualities of Bocca included sound in long, solemn waves, consisting of deep pulses punctuated by low percussive elements. Instrumentation (samples, in the form of short tape loops) included primitive string instruments as well as natural sounds and human voices. Quiet interludes occurred in response to the visitors' entrance, presence, and the speed and quality of their movement. Pools of silence followed the busy viewer. Environmental sound gradually returned to the visitor who stood quietly listening and watching.
Environmental sound corresponded to the "presence" created by the interaction of the felt (cloth) figures and the architectural environment. As they evoked a mood of primitivism and timelessness, the music complemented them by hushing intruders. Strangers in the environment would have to be quiet in order to enter the complete sound environment. Motifs in the music corresponded loosely to the ideograms in the sculptural environment, created by Fernanda D'Agostino.
Guardian and aggressive sounds were key elements. The environmental sounds faded in response to the active visitor and returned in response to a quiet, meditative stance. On the other hand, more aggressive, percussive voices protected the environment by responding in an agitated way to the approach of a viewer. These were not pretty sounds, but neither were they frightening: they were evocative of natural or naturalistic sounds, but without moral quality or implication. They were formed on the patterns of the buzzing of cicadas, the clicking of sticks, the non-pretty vocalizations of certain birds, or the guttural rumbling of buffalo. While all the sounds retained a natural quality, they were altered electronically to enhance their strangeness to the listener; these sounds warned as animals warn against invasion by outsiders.
The floor and wall felt elements acted as the soft yielding ground on which the bony, linear, brittle, ceramic antennas/ideograms resided, while the environmental waves of sound were the ground for the percussive, aggressive guardian sounds. The environmental sounds seemed to come from everywhere and nowhere, avoiding and enveloping the listener according to the listener's attitude, while the aggressive sounds were individual voices which spoke from the sculptor's 'soul catchers,' underscoring the identity of the soul catchers both as individual entities and as the elements of the installation which interacted most intimately with the visitor.
There could easily have been people behind the walls, spying through holes and responding to provocations--but their emulations of primitivism would have carried modern cultural baggage. Instead, I tried to teach my small computers to respond as humans, and the result was almost a pre-primitivism (which was all these devices were capable of).
The intent was to make an environment inside which the visitor's spirituality was called into play in a non-dogmatic way, through the environment's sensitivity to the viewer's attitude and presence. It included an iconography which was complex and comprehensive enough to be convincing, although not identified with any specific historical culture.
Sixteen infrared sensors picked up body movement in the gallery. That information was computerized, evaluated, stored, and turned into the immediate and historical sound responses to events. Computer evaluation was done by five Tandy 6809-based computers, a Green Mountain Micro data acquisition system and interface boards, and a dozen custom digital and analog circuit cards. Four endless-loop cassette players were used as well. Eight channels of sound were reproduced from three-minute tape loops, and four channels were generated by computer algorithm. All were volume-adjusted, mixed, and fed to twenty-four sound sources throughout the hall.
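The kind of rule those machines evaluated can be suggested with a small sketch--hypothetical thresholds and smoothing, not the code that ran on the 6809 computers--showing how recent sensor activity might dim the environmental bed and wake the guardian voices:

```python
def update_mix(sensor_hits, previous_activity, smoothing=0.9,
               guardian_threshold=0.6):
    """Return (environmental_level, guardian_level, smoothed_activity)
    from a list of on/off infrared sensor readings."""
    instant = sum(sensor_hits) / len(sensor_hits)        # fraction of sensors tripped
    activity = smoothing * previous_activity + (1 - smoothing) * instant
    environmental_level = 1.0 - activity                 # quiet visitors hear more
    guardian_level = activity if activity > guardian_threshold else 0.0
    return environmental_level, guardian_level, activity

# A busy visitor trips half the sensors: the bed dims slightly, guardians stay silent
print(update_mix([1] * 8 + [0] * 8, previous_activity=0.1))
```

The real system also kept a history of responses; the smoothing term above stands in, very crudely, for that memory.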
Computer work that had begun with sound sequencing for Killer had largely bypassed the sequencing (and its MIDI descendant) to work with low-level artificial intelligence and interactivity. For the most part, I found myself retired from electronics during this decline into commercialism.
1985 / Public interactive with computer: Nighthawk; Echo
1985 / Tape: Ah, Minimalism!
1985 / Live with tape: Composition for Tape and Soloists (1969, finally premiered)
1986 / Public interactive with computer: In Bocca al Lupo
Earlier this year, Apple's new iMac was introduced--what Jon Stewart on The Daily Show called 'desklamp technology'.
That introduction reminded me that my exploration of computer as interactive contributor rather than sound producer continued into 1990 with A Time Machine, an instrumental/dance composition with eleven of its 33 sections made up of counterpoints selected from a randomly generated computer menu of musical modules.
A Time Machine is a performance work and song cycle which projects its musical sense back through itself and into its own future. Under the control of singer, dancer, director and chance, A Time Machine reconsiders both the notions of poetry (all computer-generated) and counterpoint (some chance-selected). Snatches of tunes recently heard and yet to be heard are jumbled together in places, whereas strict through-composition characterizes other times. The singer interprets the words directly; the instrumentalists interpret the new counterpoint; and the dancer (in a full performance) mediates it all.
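The chance mechanism itself is simple to sketch: for each of the eleven aleatoric sections, a counterpoint module is drawn from the computer-generated menu. Only the menu idea and the section count come from the piece; the selection logic below is an illustrative assumption:

```python
import random

def choose_counterpoints(menu, aleatoric_sections=11, seed=None):
    """Draw one counterpoint module from the menu for each chance-governed section."""
    rng = random.Random(seed)
    return [rng.choice(menu) for _ in range(aleatoric_sections)]

# Hypothetical menu entries standing in for the musical modules
print(choose_counterpoints(["module A", "module B", "module C"]))
```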
A Time Machine traces its origins to the 1973 Avant-Garde Festival of New York, where I was taken by four events: random "poetry" generated by a clattering Teletype terminal hooked to a mainframe computer in upstate New York; a dance company all in white, improvising as it weaved through the festival; a clarinettist inviting the audience to join in improvisatory duets and trios; and the festival itself, which took place on classic passenger railway cars at Grand Central Station. The industrial railroad scene, the mélange of sounds and ideas, the whirling motions and casual randomness, and the incompleteness of it ... all suggested complex visual, aural, technological and compositional possibilities.
A work inspired by this combination (and an ancestor to A Time Machine) was Rando's Poetic License, the interactive piece I've discussed. Electronic and acoustic sounds were played from the stage and from speakers placed in the seats; scavenged televisions displayed poetic texts generated by my computer program; the audience whispered the texts into microphones scattered about, and these were amplified and restated; and the piece shifted and mutated according to the computer text (whose entire vocabulary was contributed by the audience).
Rando, overall, was a failure; so during a long creative hiatus that followed, I revised its computer program to make the poetry more subtle and striking, with a vocabulary increased by contributions from artist and writer friends. Then, while composing an art-song setting of five Rando poems, I came to believe that (despite their source in inanimate silicon microchips) such poems could be musically and humanly treated as living works of art.
Over a thousand poems had been created from Rando; I chose several for a commission from the Vermont Contemporary Music Ensemble for a performance work on a science-fiction theme. Here at last was the opportunity to create an extended score with computer amanuensis.
The complete score of A Time Machine incorporated several aspects of technology from old and new times: the human voice (ancient); violin, cello and clarinet (old); electronic keyboards (new via an old model); percussion (ancient); dance (ancient); costuming (new via a medieval route); video (new use via the industrial age); stage props (1940's diner design); lighting (old and new); and computer technology (new). Nothing obviously departed from the musicotheatrical tradition, except for the computer.
The "time machine" reference of the title and the sections is further ambiguous. It suggests H. G. Wells's famous novel, and the Hollywood film which brought it to American youngsters (inspiring in me dread and fear with its use of air-raid sirens). More generally, time is past, present and future; also it is tempo, a metronome. Finally, the themes in the performers' respective menus are extracted from the rest of A Time Machine--meaning either they will have been heard already (a reflection of the past) or are yet to be heard (a foreshadowing of the future). This is a different kind of time machine, bringing us back to the past and ahead to the future of the work's musical time-line. Because each performance's counterpoint is unique, it compresses or stretches the whole of the apparent (emotional or perceptual) time in A Time Machine, and further, it brings unexpected meanings to the themes as they appear in new guises. There are anticipatory expositions and surprising recapitulations. Each performance is fresh, unique: it has no past and no future; it has but one, present existence: the ideal of any time machine, presentness within all times.
I arranged sixteen of the Rando poems (otherwise untouched and precisely as written and punctuated by the computer program) in a cycle: conception, birth, infancy, childhood, playfulness, adolescence, sensuality, maturity, security, hardening, pain, dementia, sickness, dying, death, and memorial. There is within them an analogy to creation and Armageddon, to creativity and destructiveness, to peace and war.
A Time Machine is the fruition of ideas sown 17 years earlier in the chaos of a festival at New York's Grand Central Station, and its message is simple and direct: Life invites, grows and departs in a splendor of wildness and beauty.
It's particularly interesting to bring up A Time Machine today because, curiously, the 'desklamp technology' of the iMac was presaged 12 years earlier by my composition, which used a computer as 'floorlamp technology'--an illuminated hemispherical base containing the computer (but glowing green, not white), a stalk with adjustable LCD display (monochrome in 1990), and in fact, an entire assemblage of working elements, including music stand, lamp, and mirrors.
I didn't call it a major step forward, though. I called it "1940's futuristic style--American classic diner trim."
1990 / Computer as director: A Time Machine
Beginning in the early 1990s, it was finally possible for relatively unencumbered electronic composition to take place on the cheap. The cost of multichannel sound had dropped, and computer power had risen by a factor of two hundred (now it's 4,000). Analysis and transformation that once took hours could be performed in real time.
Hypertunes, Baby used MIDI control of real samples, and the related group of Xirx; zéyu, quânh & sweeh; and exirxion drew from samples while morphing their textures into extended soundscapes. Vocal samples were drawn across a (possible) 27-year sound environment in Detritus of Mating, and Zonule Glaes II brought live performers back in. Manipulated sample material was integrated with live morphed samples in RatGeyser. And with HighBirds (Prime), my composition returned to its roots, with live performers and electronics in improvisatory interaction.
In effect, my personal processes begun with Composition for Tape and Soloists could finally be realized. Interactivity, with inexpensive real-time programs such as AudioMulch and more extensive software such as Reaktor, was restored to my composition.
Now my personal question is this: Have I reached the end of my imagination, or the satisfaction of my dream?
1993 / Playback: Meditations on the Llama Sutra; Body Language; Party Musik
1993 / Live with playback: Llama Butter
1994 / Live with computer-generated to playback medium: Hypertunes, Baby
1994 / Computer-generated to playback medium: For the Invisible
1996 / Computer-generated to playback medium: Xirx; zéyu, quânh & sweeh; exirxion
1997 / Sound environment computer-generated to playback medium: Detritus of Mating
1999 / Sound environment computer-generated to playback medium with string quartet: Zonule Glaes II
1999 / Computer-generated with samples to playback medium: bellyloops; No Money (Lullaby for Bill)
2000 / Live with computer-generated from samples to playback medium: RatGeyser
2000 / Computer-generated from samples to playback medium: FreeSimple; Snare:Wilding; Glossalalia 15; Williams Mix 14, 24, & 26
2001 / Live with computer-generated from samples with optional interactivity: HighBirds (Prime)