
Audio Processes



Audio Processes
Musical Analysis, Modification,
Synthesis, and Control

David Creasey

NEW YORK AND LONDON


First published 2017
by Routledge
711 Third Avenue, New York, NY 10017

and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2017 Taylor & Francis

The right of David Creasey to be identified as author of this work has been asserted by him in
accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any
form or by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying and recording, or in any information storage or retrieval system,
without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging in Publication Data


Names: Creasey, D. P. (David P.), author.
Title: Audio processes : musical analysis, modification, synthesis, and control /
David Creasey.
Description: New York ; London : Routledge, 2017. | © 2017
Identifiers: LCCN 2016012376 | ISBN 9781138100138 (hardback) |
ISBN 9781138100114 (paperback) | ISBN 9781315657813 (ebook)
Subjects: LCSH: Computer sound processing. | Music–Computer programs.
Classification: LCC MT723 .C72 2017 | DDC 786.7–dc23
LC record available at http://lccn.loc.gov/2016012376

ISBN: 978-1-138-10013-8 (hbk)
ISBN: 978-1-138-10011-4 (pbk)
ISBN: 978-1-315-65781-3 (ebk)

Typeset in URW Palladio L by the author


Contents

Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi

Chapter 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The Nature of Audio Processes 2
1.1.1 Introducing Audio Processes 2
1.1.2 Constructing an Audio Process 2
1.1.3 Real-Time and Non-Real-Time Systems 4
1.1.4 Audio Process Themes 5
1.2 Example Audio Process Systems 8
1.2.1 Playing an Acoustic Instrument 8
1.2.2 Combining Two Paths 10
1.2.3 Automated Analysis 11
1.2.4 Two Humans Working Together 13

PART I — ANALYSIS
Chapter 2 Audio Data Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1 The Nature of Sound 18
2.1.1 Sound in the Time Domain 18
2.1.2 Cycle Length, Frequency, and Amplitude 19
2.1.3 Construction and Deconstruction with Sinusoids 26
2.2 Sound as Numbers 27
2.2.1 Overview 27
2.2.2 Sampling Continuous Data 28
2.2.3 A Complete Digital Audio System 34
2.2.4 Choosing a Sample Rate 35
2.2.5 Amplitude 37

v
vi Contents

Chapter 3 Time Domain Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39


3.1 Basic Concepts 40
3.1.1 Continuum of Sound Character 40
3.1.2 Harmonic Sounds 41
3.1.3 Inharmonic Sounds 45
3.1.4 Phase Effects 47
3.2 Dynamic Sound Character 49
3.2.1 Chime Bar 49
3.2.2 Amplitude Envelopes 51
3.2.3 Dynamic Waveform Changes 60
3.2.4 Oboe 61
3.2.5 Vibraphone 63
3.2.6 Piano 66
3.2.7 Tubular Bell 67
3.2.8 Vocal “sh” and “f” Sounds 68
3.3 Using Time Domain Information 70
3.4 Learning More 72

Chapter 4 Frequency Domain Analysis . . . . . . . . . . . . . . . . . . . . . . . . 73


4.1 Introduction 74
4.2 Static and Average Sound Character 74
4.2.1 Spectral Form 74
4.2.2 Frequency Domain Analysis of Simple Waveform Shapes 79
4.2.3 Average Spectrum Analysis Examples 84
4.3 Dynamic Sound Character 89
4.3.1 Representing Three Dimensions 89
4.3.2 Simple Spectrogram Examples 90
4.3.3 More Complex Spectrogram Examples 94
4.4 Using Frequency Domain Information 96
4.5 Learning More 98

PART II — MODIFICATION

Chapter 5 Basic Modifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101


5.1 Introduction 102
5.2 Signal Flow Control 102
5.3 Amplitude Control 105
5.3.1 Simple Amplitude Control 105
5.3.2 Two Channel Amplitude Control 108
5.3.3 Naturalistic Amplitude Control 109
5.3.4 Working with Decibels 113
5.4 Mixing 114

5.5 Pan Control and Stereo Balance 117


5.5.1 Monophonic and Stereophonic Signals 117
5.5.2 Panning 121
5.5.3 Stereo Balance 125
5.6 Combination of Elements 127
5.6.1 Ordering 127
5.6.2 Series and Parallel Forms 129
5.6.3 Practical Combination Examples 133
5.7 Developing Processes and Learning More 137

Chapter 6 Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139


6.1 Introduction 140
6.1.1 Filters and Audio Processes 140
6.1.2 Filtering and Acoustic Sound Sources 140
6.1.3 Musical Frequency Ranges 141
6.1.4 Frequency (Magnitude) Responses 143
6.2 Standard Filters 148
6.2.1 Lowpass and Highpass Filters 148
6.2.2 Bandpass and Bandreject Filters 149
6.2.3 Comb and Allpass Filters 152
6.2.4 Variations in Filter Responses 153
6.3 Filter Combinations 155
6.3.1 Common Series and Parallel Forms 155
6.3.2 Subtractive Techniques 159
6.4 Filter Designs 164
6.4.1 Introduction 164
6.4.2 Lowpass and Highpass Designs 171
6.4.3 Bandpass and Bandreject Designs 180
6.4.4 Comb and Allpass Designs 183
6.5 Developing Processes and Learning More 193

Chapter 7 Distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195


7.1 Introduction 196
7.1.1 Avoiding and Creating Distortion 196
7.1.2 Hard Clipping 196
7.2 Distortion Functions 200
7.2.1 Soft Clipping Distortion 200
7.2.2 Other Distortion Transfer Functions 208
7.2.3 Controlling Distortion Character 208
7.3 Distortion of Complex Signals 213
7.4 Developing Processes and Learning More 216

Chapter 8 Audio Data Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219


8.1 Storing and Accessing Audio Data 220
8.1.1 Storage and Processing Requirements 220

8.1.2 Simple Buffering 221


8.1.3 Shift Registers and Circular Buffers 225
8.1.4 Delayed Sound 232
8.1.5 Pointer Position Considerations 235
8.2 Selecting and Interpolating Values 237
8.2.1 Introduction 237
8.2.2 Truncation and Rounding 239
8.2.3 Linear Interpolation 241
8.2.4 Non-linear Interpolation 242
8.3 Level Measurement 245
8.3.1 Introduction 245
8.3.2 Accurate Peak Measurement 247
8.3.3 Accurate RMS Measurement 249
8.3.4 Filter-Based Techniques 252
8.3.5 Ramping Techniques 255
8.3.6 Selecting and Configuring Envelope Followers 260
8.4 Developing Processes and Learning More 261

Chapter 9 Modulated Modifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265


9.1 Introduction 266
9.1.1 Variation over Time 266
9.1.2 Oscillators 267
9.2 Examples of Periodic Modulation 270
9.2.1 Tremolo and Autopan 270
9.2.2 Filter Modulation 277
9.2.3 Vibrato 278
9.2.4 Flanger and Phaser 279
9.2.5 Chorus 283
9.2.6 Modulated Modulators 284
9.3 Developing Processes and Learning More 286

Chapter 10 Acoustic Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289


10.1 Basic Concepts 290
10.1.1 Types of Environmental Effects 290
10.1.2 Fundamentals of Sound in Enclosed Spaces 291
10.1.3 Practical Enclosed Spaces 294
10.2 Non-Recirculating Echo and Reverberation Forms 297
10.2.1 A Simple Model 297
10.2.2 Echo Effects 298
10.2.3 Multitap Implementation 303
10.2.4 Improvements to Echo Effects 307
10.2.5 Reverberation Effects 309
10.2.6 Convolution with Impulse Responses 310
10.3 Recirculating Echo Forms 313
10.3.1 Basic Concepts 313
10.3.2 Echo Effects 315

10.3.3 Improvements to Echo Effects 322


10.4 Recirculating Reverberation Forms 323
10.4.1 Basic Concepts 323
10.4.2 Reverberator with Comb and Allpass Filters 324
10.4.3 Multi-Stage Reverberator 326
10.4.4 Improvements to the Multi-Stage Reverberator 331
10.4.5 Stereo Reverberation 333
10.5 Developing Processes and Learning More 335

Chapter 11 Dynamics Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337


11.1 Introduction 338
11.1.1 Manual and Automated Control Loops 338
11.1.2 Amplitude Dynamics 339
11.1.3 Basic Forms 340
11.2 Noise Gates 341
11.2.1 Main Principles 341
11.2.2 Simple Form 345
11.2.3 Improvements to the Simple Form 347
11.2.4 Additional Techniques 351
11.3 Compressors 353
11.3.1 Main Principles 353
11.3.2 Compressor Form 359
11.4 Further Techniques 362
11.4.1 Expanders 362
11.4.2 Sidechain Filtering 364
11.4.3 Combinations 365
11.4.4 Other Modifications 366
11.5 Developing Processes and Learning More 367

Chapter 12 Frequency Domain Methods . . . . . . . . . . . . . . . . . . . . . . . 369


12.1 Time Domain and Frequency Domain Processes 370
12.1.1 Introduction 370
12.1.2 Fourier Transform Basics 371
12.2 General Techniques 377
12.2.1 Filtering 377
12.2.2 Vocoding and Application of Spectral Envelopes 379
12.2.3 Delay 381
12.3 More Sophisticated Techniques 382
12.3.1 Time-Stretching 383
12.3.2 Changing Pitch 383
12.3.3 Modifying Amplitude and Frequency Relationships 387
12.3.4 Blending and Morphing 389
12.4 Developing Processes and Learning More 390

PART III — SYNTHESIS

Chapter 13 Basic Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395


13.1 Basic Concepts 396
13.1.1 From Modifiers to Synthesizers 396
13.1.2 Oscillators 398
13.1.3 Note Numbers and Frequency Values 399
13.1.4 Parameter Variation 402
13.2 Gate and Envelope Methods 406
13.2.1 Basic Methods 406
13.2.2 Envelope Shapes and Applications 408
13.2.3 Audio Input Methods 414
13.3 Tremolo and Vibrato in Synthesis 418
13.4 Developing Processes and Learning More 421

Chapter 14 Signal Generators and Shaping . . . . . . . . . . . . . . . . . . . . . 423


14.1 Introduction 424
14.2 Equation-Based Methods 425
14.2.1 Basic Concepts 425
14.2.2 Example Equations 427
14.3 Breakpoint Methods 429
14.3.1 Basic Concepts 429
14.3.2 Generating a Single Linear Segment 431
14.3.3 Generating Multiple Segments 434
14.3.4 Mapping with Multiple Breakpoints 442
14.4 Wavetable Methods 444
14.4.1 Creating a Wavetable 444
14.4.2 Using a Wavetable 445
14.4.3 Efficiency and Control 447
14.5 Modifying and Shaping Oscillation 449
14.5.1 Bandlimiting and Aliasing 449
14.5.2 Developing Oscillator Character 451
14.5.3 Waveshaping 452
14.6 Developing Processes and Learning More 459

Chapter 15 Sample-Based Synthesis Methods . . . . . . . . . . . . . . . . . . 461


15.1 Basic Concepts 462
15.1.1 Using Sampled Sound 462
15.1.2 Recording, Editing, and Organising Samples 463
15.1.3 Sample Playback 465
15.2 Position Control and Concatenation 466
15.2.1 Position Control and Looping 466
15.2.2 Concatenation 469
15.2.3 Process Considerations 470

15.3 Manipulating Sample Data Values 473


15.3.1 Sample Increment Effects 473
15.3.2 Further Methods 478
15.4 Developing Processes and Learning More 479

Chapter 16 Additive Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481


16.1 Basic Concepts 482
16.1.1 Additive Synthesis Characteristics 482
16.1.2 Frequency Separation 483
16.2 Synthesis with Control over Individual Partials 485
16.2.1 Synthesizer Form 485
16.2.2 Configuration from Theory 487
16.2.3 Configuration from Sound Analysis 491
16.2.4 Abstract Configuration 494
16.3 Synthesis with Mapped Control over Multiple Partials 496
16.4 Developing Processes and Learning More 499

Chapter 17 Subtractive Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501


17.1 Basic Concepts 502
17.2 Common Subtractive Methods 503
17.2.1 Source Selection 503
17.2.2 Filtering Methods 504
17.3 Vocal Synthesis 509
17.3.1 Introduction 509
17.3.2 Voiced Source Sound Generation 510
17.3.3 Formant Filtering 514
17.4 High Resonance Methods 517
17.5 Developing Processes and Learning More 521

Chapter 18 Noise in Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523


18.1 Consistent and Inconsistent Signals 524
18.2 Noise in the Time Domain 525
18.2.1 Generation and Shaping 525
18.2.2 Rate-Controlled Random Value Generation 529
18.2.3 Inconsistent Oscillation 529
18.3 Noise in the Frequency Domain 532
18.3.1 Spectral Envelopes 532
18.3.2 Rate Control and Modulation 533
18.3.3 Filtering 535
18.3.4 Applications 537
18.4 Developing Processes and Learning More 542

Chapter 19 Blending Synthesized Sounds . . . . . . . . . . . . . . . . . . . . . . 543


19.1 Basic Concepts 544
19.2 Fixed Parameter Blends 546
19.2.1 Static Blend 546
19.2.2 Enhancements to Static Blend 549
19.3 Envelope-Controlled Blends 550
19.4 Cross-Fading 553
19.4.1 Two Signal Cross-Fade 553
19.4.2 Four Signal Cross-Fade 555
19.4.3 Eight Signal Cross-Fade 556
19.5 Developing Processes and Learning More 557

Chapter 20 Modulation for Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . 559


20.1 Introduction 560
20.2 Amplitude Modulation 561
20.2.1 Basic Principles 561
20.2.2 Practical Examples 565
20.3 Frequency Modulation 567
20.3.1 Basic Principles 567
20.3.2 Parallel FM 571
20.3.3 Multiple Modulator FM Arrangements 573
20.4 Further Modulation Methods 575
20.5 Developing Processes and Learning More 577

Chapter 21 Waveguide Physical Models . . . . . . . . . . . . . . . . . . . . . . . 579


21.1 Introduction 580
21.1.1 Basic Concepts 580
21.1.2 Structure of a Physical Model 581
21.1.3 Waveguides 582
21.2 Waveguide Plucked String 584
21.2.1 Simplest Plucked String 584
21.2.2 Body Resonances 588
21.2.3 Modifying the Excitation 588
21.2.4 Modifying the Waveguide Filtering 592
21.3 Waveguides for Percussion 593
21.3.1 Banded Waveguides 593
21.3.2 Waveguide Mesh Drum 595
21.4 Waveguide Wind Instrument 599
21.4.1 Basic Wind Instrument 599
21.4.2 Breath Input Improvements 602
21.4.3 Filtering and Tuning Improvements 603
21.5 Bowed Waveguides 604
21.6 Developing Processes and Learning More 606

Chapter 22 Granular Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609


22.1 Sound from Grains 610
22.1.1 Introduction 610
22.1.2 Granular Stream Generation 610
22.1.3 Lookup Table Sources 615
22.1.4 Parameter Relationships 615
22.2 Granular Synthesis Techniques 617
22.2.1 Fixed Parameters 617
22.2.2 Envelope Control 620
22.2.3 Random Variations 623
22.2.4 Parallel Streams 624
22.3 Developing Processes and Learning More 626

PART IV — CONTROL

Chapter 23 Process Organisation and Control . . . . . . . . . . . . . . . . . . 631


23.1 Components of Organisation and Control 632
23.1.1 Introduction 632
23.1.2 Process Segmentation 633
23.1.3 Processing at Different Rates 637
23.1.4 Control Inputs and Mapping 641
23.2 Controlling Synthesis 644
23.2.1 Introduction 644
23.2.2 Polyphonic Methods 645
23.2.3 Monophonic Methods 650
23.3 Developing Processes and Learning More 653

Chapter 24 Control Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655


24.1 Frequency and Amplitude 656
24.1.1 Frequency Control 656
24.1.2 Amplitude Gain Control 658
24.1.3 Key Velocity to Amplitude Gain 661
24.1.4 Further Mapping Considerations 662
24.2 More Sophisticated Techniques 665
24.2.1 Mapping to Varying Parameters 665
24.2.2 Mapping to Multiple Parameters 667
24.3 Developing Processes and Learning More 669

Next Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671



APPENDICES
Appendix A Mathematics for Audio Processes . . . . . . . . . . . . . . . . . 677
A.1 The Need for Mathematics 678
A.2 Variables, Simple Equations, Subscripts, Superscripts 679
A.3 Repeated Summation 684
A.4 Linear Mapping 685
A.5 Straight Lines 689
A.6 Logarithmic and Exponential Functions 691
A.7 Mapping and Shaping Functions 697
A.8 Units, Prefixes, and Symbols 698
A.9 Accuracy 699
A.10 Answers to Problems 703

Appendix B Windowing and Window Functions . . . . . . . . . . . . . . . 705


B.1 Introduction 706
B.2 Some Common Window Functions 710
B.3 Learning More 713

Appendix C Block Diagram Techniques . . . . . . . . . . . . . . . . . . . . . . . 715


C.1 Detail and Clarity 716
C.2 Relating Diagrams and Programming Forms 718

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721

KEY DATA RESOURCES
Cycle lengths and frequencies 20
Piano note names and fundamental frequencies (in hertz) 22
Common pitch change frequency multipliers 23
Example frequency ranges for filter controls 141
Important gain values 145
Note numbers, note names, and fundamental frequencies 400
Common intervals, chords, and scales 401
First ten harmonic amplitudes for common waveforms 488
Abbreviations

AD Attack-Decay
ADC Analogue-to-Digital Converter
ADR Attack-Decay-Release
ADSR Attack-Decay-Sustain-Release
AHDSR Attack-Hold-Decay-Sustain-Release
AM Amplitude Modulation
AR Attack-Release
ASR Attack-Sustain-Release
BPF Bandpass Filter
BPM Beats Per Minute
BRF Bandreject Filter
DAC Digital-to-Analogue Converter
DAW Digital Audio Workstation
DC Direct Current (0Hz)
DFT Discrete Fourier Transform
EG Envelope Generator
EQ Equaliser/Equalisation
FDN Feedback Delay Network
FFT Fast Fourier Transform
FIR Finite Impulse Response
FM Frequency Modulation
HPF Highpass Filter


HF High Frequency
HMF High-Mid Frequency
IFFT Inverse Fast Fourier Transform
IIR Infinite Impulse Response
LF Low Frequency
LMF Low-Mid Frequency
LPF Lowpass Filter
MF Mid-range Frequency
MIDI Musical Instrument Digital Interface
PWM Pulse-Width Modulation
RMS Root Mean Square
SPL Sound Pressure Level
STFT Short Time Fourier Transform
Preface

The Power of Audio Processes


Digital audio processes are algorithmic forms that generate, modify, and analyse audio
data. They dominate the landscape of audio recording and production, deeply affecting
the way in which music is originated, performed, edited, and consumed. Audio processes
enable the exploration of hidden features within sounds, the radical transformation of
tonality, and the generation of instrument sounds with wild new characteristics, all with
the potential to be controlled with natural and expressive human gestures. They also allow
great subtlety and precision: slightly changing individual frequencies and amplitudes,
accurately synthesising the character of acoustic instruments, and gently massaging a
track to fit better in a mix.

The potential is vast, but the basic principles of audio processes are within the grasp of
novices and enthusiasts. This is aided by the fact that the number of key element types is
relatively modest. Figure 1 illustrates the most common process elements that appear in
this book. Audio processes are often based on simple forms that are gradually expanded
into larger structures. Where the expansion ends depends on the complexity that is
desired and how much computational power is available. But even simple forms can
have sufficient capacity to provide hours of pleasure. A simple monophonic synthesizer,
a distortion effect, and some imagination are sufficient for creating superb musical results.

Exploring the potential of audio processes can take a lifetime. Not only is there the
opportunity to build bigger and bigger structures, but also the depth to dig down to the
underlying construction of elements and their relationship to the way in which humans
perceive sound. This book covers a wide range of topics and provides many routes to
explore the world of audio analysis, modification, synthesis, and control. All of them
build from decades of work by dedicated individuals, and there are a huge number of
books, journal papers, and other sources of information for those who want to learn more.
The journey starts here.


[Figure 1 (Common process elements) appears here. Its panels depict: the human; acoustic input and output; switches; variable input values; signal generation; filters; mathematics; lookup/function tables; envelope generation; and delayed samples.]

How to Use this Book


This book is about the practical design of audio processes for musical applications. Computer
software for making and recording music is the main application being considered,
but the principles are also applicable to other situations. This is not a book that teaches
programming, nor one that explains how to use commercial effects and synthesizers; there
are many other books in those areas. Rather, the information here is about the insides of
audio processes: understanding how they work, and creating new designs.

This book can be read in different ways depending on prior understanding and the topics
of interest. For someone who is new to audio processes and needs a comprehensive
grounding in the subject, such as an undergraduate university student, the expectation
is that it will be read from the start to the end. The chapters of this book are ordered to aid
in the progressive accumulation of understanding. The early chapters introduce funda-
mental ideas and processes, which later chapters build into more complex arrangements.
Care has been taken in sequencing the topics to reduce the need to understand everything
at once.

Those with prior experience might choose to read individual chapters, as efforts have
been made to group related methods together, rather than spreading them across the
book. Inevitably there are many overlaps between techniques, however, and it is often
possible to combine approaches to achieve more sophisticated results. Cross-references
are provided where possible to direct the reader to related chapters and methods, rather
than repeating basic ideas in every chapter. Notations specific to particular programming
languages are avoided where possible.

The aim of the book is to explain the key techniques for each topic. From the fundamen-
tals it is usually possible to achieve sophistication by expanding and combining ideas.
Although it is not always the case, a simple process with a small number of elements will
tend to produce an unsophisticated result. Professional systems build on basic ideas to
create more naturalistic, tonally-interesting or target-specific results.

Digital audio processes are implemented as computer software within different types of
environment:
• Conventional text-based general purpose programming languages. For conventional
languages, libraries of functions exist to provide audio and control input and
output, and often complete objects such as filters, oscillators, and envelope generators.
The power of conventional languages lies in their access to low-level data and
associated structures, which allows more compact and effective coding by moving
beyond simple linking of standard process elements. The issue for many people is
the time taken to learn the programming language, in order to then apply it to audio
process development.
• Graphical/visual dataflow programming environments. In these environments
audio process objects are provided to the user in their finished form, ready to be
configured and linked together as required. This can make it easy to rapidly develop
a prototype, as well as construct larger processes. The visual environment can often
make the nature of signal flows clearer than in text-based programming languages.
However, organising the process for greatest clarity and efficiency can be a challenge.
There are usually ways of extending the functionality with custom objects, which
might be programmed by third parties or users themselves, in order to overcome
limitations with the standard elements.
• Text-based audio-specific programming languages. These have rapid prototyping
characteristics that are similar to graphical dataflow environments, but a visual form
more similar to text-based general purpose languages.
Many people find it helpful to learn through practice, and methods are presented in this
book to encourage this. However, this book does not assume that a particular program-
ming language or environment is being used. There is no perfect choice that can be
recommended, so personal preference will affect the selection. It is often the case that
there is a balance to be chosen between initial learning curve and ultimate flexibility.
Different environments, toolkits, and libraries have different strengths, and so sometimes
the choice can affect the ease with which certain processes can be created. However,
common elements sufficient for the majority of process types tend to be available in all
standard audio programming environments.
The audio processes in this book are applicable to a wide range of software implementa-
tions. There are three major representations that are used:
• Block diagrams are the main tool for explaining how process elements are linked
together to achieve particular results. It is normally easy to translate from a block
diagram to the on-screen representations in a graphical programming environment.
In text-based programming languages, the data flow through a complex process
element is typically implemented as a function call, and the links between elements
are variables.
• Text-based algorithmic forms are used where a block diagram is unable to express
a process elegantly or clearly. They are designed for readability and to allow easy
translation into conventional programming languages. Conventions are as follows:
◦ Assignment is with “=” but equality is tested with “==”. For example:
if x == 1 then
out = in × 2

◦ Inequality is tested with “!=” (meaning “not equal to”). For example:
if x != 2 then
y = x + 6

◦ Comments are from a double-slash to the end of the line. For example:
position = position + 1 // move to next position

◦ Arrays are accessed with an index in square brackets and the first element is
index 0. For example, to store the value 7 in the first element of an array called
buffer:
buffer[0] = 7

• Mathematical equations are used for compactness and clarity when the alternative
would be a long algorithm or a messy block diagram. Angles are expressed in
radians (and therefore trigonometric function arguments as well). See Appendix
A for further help in understanding the mathematical forms in this book.
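As a rough illustration (not part of the book's notation), the pseudocode conventions above map directly onto a conventional language such as Python. The variable names below (`in_val`, `out_val`, `x`, `y`) are hypothetical:

```python
# A sketch of how the book's pseudocode conventions translate into Python.
# Note that "in" is a Python keyword, so "in_val" is used instead.

in_val = 3.0
x = 1

# Assignment uses "=", equality is tested with "==".
if x == 1:
    out_val = in_val * 2  # doubling the input value

# Inequality is tested with "!=".
if x != 2:
    y = x + 6

# Comments run to the end of the line (Python uses "#" rather than "//").
position = 0
position = position + 1  # move to next position

# Arrays (Python lists) use square-bracket indexing; the first element is index 0.
buffer = [0.0] * 8
buffer[0] = 7
```

Translating into a graphical dataflow environment instead would replace these statements with linked objects, but the underlying operations are the same.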

Additional supporting materials for this book can be found on the companion website
(www.routledge.com/cw/creasey).
Acknowledgements

This book is dedicated to my Dad, John Creasey, and the memory of my Mum, Gwenn.

Many thanks to all those who have supported the production of this book, directly or
indirectly, including:

⋆ The Music Technology staff at the University of the West of England, Bristol (Stephen
Allan, Zak Baracskai, Rich Brown, Lukas Greiwe, Martyn Harries, Adrian Hull,
Gethin John, Liz Lane, Marcus Lynch, Tom Mitchell, Chris Nash, Phill Phelps, Alan
Price, Martin Robinson, Matt Welch), and all my other colleagues past and present.

⋆ The students I have taught at UWE, whose demands for greater clarity have helped
me to develop new explanations and examples whenever I thought that things were
obvious.

⋆ Martyn Harries for playing the trumpet, and Linda Allan for helping with vocal
recordings.

⋆ Ian Holmes and the UWE Centre for Performing Arts for providing a number of
instruments for recording.

⋆ Purton Methodist Church for allowing their vintage organ to be recorded.


⋆ Chris Nash for being technical editor, counsellor, and for providing suggestions for
improvements.

⋆ Everyone at Routledge/Focal Press for dealing with my many questions and complex
requirements.

⋆ Anne Collie, John Collie, and Jude Sullivan for their encouragement and interest in
the project.

⋆ Saskia, Lily, and Poppy for help and company in the office.
⋆ My brother, Steve.
Finally I want to thank my wife, Emma, for her musical talents, emotional support, English
language skills, and endless patience in coping with what seemed like a never-ending
project.

1
Introduction

1.1 The Nature of Audio Processes 2


1.1.1 Introducing Audio Processes 2
1.1.2 Constructing an Audio Process 2
1.1.3 Real-Time and Non-Real-Time Systems 4
1.1.4 Audio Process Themes 5
1.2 Example Audio Process Systems 8
1.2.1 Playing an Acoustic Instrument 8
1.2.2 Combining Two Paths 10
1.2.3 Automated Analysis 11
1.2.4 Two Humans Working Together 13


1.1 The Nature of Audio Processes


1.1.1 Introducing Audio Processes

Audio processes are at the heart of common musical software such as effects and syn-
thesizers. Fundamentally they are algorithmic forms that generate, modify, and analyse
audio data. Their design is not driven by simple rules, but rather through the requirements
of musical practice and their relationship to the human auditory system. In one musical
context a particular effect might be regarded as enhancing the result, in another it might
be regarded as completely inappropriate. The configuration of that effect can depend on
the combination of notes being played, and the role of a particular instrument within the
overall mix.

There is endless scope for developing novel versions of audio processes to match the
variety and context-specific requirements of the target applications. The subject would
not be as interesting if there were a single method that was always applied in the same
way for a particular problem. There are, however, common principles and techniques
that enable the novice to start from the fundamentals and work gradually towards more
complex forms. This is one of the key roles of this book.

1.1.2 Constructing an Audio Process

These are some of the significant features that influence audio process design, implemen-
tation, and configuration:

• There is substantial scope for varying the designs of audio processes to fit the context
and the artistic desires of the user. The wide range of commercial implementations
of audio processes demonstrates the range of possibilities for tonal variation and
control style, and the many possible situations to which the processes could be
applied.
• Available computation and storage have always had a strong influence on the development
of digital audio process implementations. The extensibility of audio processes
means that gains in computational capability always appear to be matched
by increases in algorithmic complexity that take advantage of the additional processing
power. If there are restrictions within which an audio process must work, they will
influence the tonal character produced.
• Fashion and novelty can cause certain ideas to receive more attention than others,
such as a fashion for vocoding and pitch correction, or a desire for warm or dirty-
sounding compression, or sample-based synthesis. As research progresses, new
methods and new associated sound characters add to the available palette.

• Control characteristics, accessibility, and learning time are all important. A process
is more than just the underlying computation and audio output, as its usefulness
depends on the nature of the mapping from the user to the system. The aim is
to optimise that relationship, such that the number of control parameters is not
excessive, a synthesizer can be played in an expressive manner, the reaction
of a modifier to changes in control values is progressive and natural, and so on.
A wide variety of audio processes exist:
• Some processes are so common that they are regarded as standard technology. These
can be found in such places as on mixing consoles and effects racks in the studio.
Many are based on requirements that have not changed fundamentally in many
years.
• Some processes are well known, but often seen as less mainstream, or suitable for
more experimental composers or producers, or only the domain of specialists such as
synthesizer programmers. Some have been around for many years, but the average
home studio musician is less likely to have any direct experience of them.
• Some processes are created as custom one-off solutions to specific problems. This
might be a tool needed in one studio for a single session, or an art installation
with very particular requirements.
Although the capability exists to produce any possible sound, there is the issue of how
to get from an idea to an implementation. One way of starting is to recognise that many
people want to produce systems that relate in some way to the world around us, such
as a realistic cathedral-like reverberation, or synthesis of a percussive instrument, or a
performance interface that utilises arm gestures.
It is often harder to achieve audio processes that fit naturally into the sound world than
it is to create “unnatural” sounds. For example, creating a sound that has never been
heard before might be achieved by sketching and synthesising an arbitrary waveform.
Synthesising a sound that convincingly could have come from an acoustic source (an
animal, an instrument), yet actually does not exist outside the computer, is a rather more
complex task. It is necessary to understand the nature of sounds from acoustic sources;
how they start, how they develop over time, how the sound character can be altered by
physical interaction, and so on. In that way, an audio process can convince the listener
that the result should fit in the soundscape being created.
Although it can appear daunting to have many possibilities, there is a fairly logical pro-
gression in how a fundamental method can be enhanced or extended. For example, a
parallel set of bandpass filters can be added to many modification and synthesis processes
to add tonal shaping. A distortion method can be applied to synthesis outputs in the same
way that it is applied to a guitar signal. A varying blend of multiple oscillators can be
used in place of a single oscillator in a synthesis scheme. This book explains not only the
fundamental toolkit of audio process structures, but also suggests ways in which the parts
can be used to build more sophisticated forms.
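As one concrete illustration, the varying blend of multiple oscillators mentioned above can be sketched in a few lines. The following Python fragment is an illustrative sketch, not an implementation from this book; the function name, the choice of a sine and a square oscillator, and all parameter values are invented for the example. It crossfades between the two sources according to a time-varying blend value:

```python
import math

def osc_blend(freq, blend, duration, sample_rate=48000):
    """Crossfade between a sine and a square oscillator at the same frequency.

    blend is a function of time t returning 0.0 (all sine) to 1.0 (all
    square). All names and defaults here are illustrative inventions.
    """
    n = int(duration * sample_rate)
    out = []
    for i in range(n):
        t = i / sample_rate
        sine = math.sin(2 * math.pi * freq * t)
        square = 1.0 if sine >= 0 else -1.0
        b = blend(t)
        out.append((1.0 - b) * sine + b * square)  # weighted sum of the two sources
    return out

# Fade from pure sine to pure square over one second
signal = osc_blend(220.0, lambda t: t, duration=1.0)
```

Because the output is a convex combination of two signals that each stay within the range -1 to +1, the blended result also stays within that range, so no extra gain management is needed.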

1.1.3 Real-Time and Non-Real-Time Systems


A typical audio process has some means by which current or past outputs are used to
inform future action. For example, the value on a mixing console level meter reflects the
recent audio signal level, such that the engineer can adjust the amplitude gain. Likewise
the sound of an instrument will be used by the performer to adjust their control of the
instrument, and so the future audible results. If the output of a system is occurring at the
same time as the inputs are affecting those results, then the system is working in a real-time
(or performance) mode. Playing an instrument is usually a real-time process.
However, some tasks are performed where there is no real-time performance interaction.
This can be called a non-real-time (or editing) mode. In these cases the output is not being
produced continuously in response to the control inputs, such as when a system is being
configured in preparation for a performance. Computer-based tasks have historically been
orientated around an editing-type mode, with often only one parameter being affected at
once, such as when adjusting a setting in a popup window accessed from a menu. In those
cases there is a series of separate steps performed in sequence towards an end, rather than
a performance interaction.
Some typical non-real-time tasks are:
• Non-linear audio editing where a portion of a sound file is cut from one position in
time and pasted at another. It would be possible to do this by selecting the portion
while the sound is playing, and pasting it as the relevant position is reached during
playback. For purposes of precise control, however, it is far more desirable to make
this edit graphically, and then play back the result as a separate step.
• Arranging music for an orchestra. It would be enormously time-consuming to try
arranging music while the whole orchestra were present, attempting to direct each
player individually as to the composer’s intentions. The normal method is to write
the score in an editing stage beforehand, using knowledge of instrument ranges
and sound characters, and an understanding of harmony and so forth. Individual
parts might be tested on another instrument such as a piano or synthesizer, but the
full effect is only achieved in the performance stage when the score is used by the
orchestra.
• Complex computer-based sound processing. There are tasks for which real-time
operation is beyond the state of the art. If the system is not fast enough, it can be
necessary to let the computer generate a result (such as a sound file), and then to
play that back when the task has been completed. Historically this has been a major
problem, but has become less so as technology has progressed.
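The first of these non-real-time tasks, cutting a portion of audio from one position and pasting it at another, reduces to simple buffer operations. A minimal Python sketch follows; the function name and index convention are illustrative assumptions, not taken from any particular editor:

```python
def cut_and_paste(samples, cut_start, cut_end, paste_at):
    """Remove samples[cut_start:cut_end] and re-insert them at paste_at.

    Indices are sample offsets; paste_at is a position within the material
    that remains after the cut. A sketch of non-real-time edit logic only.
    """
    portion = samples[cut_start:cut_end]                  # the selected region
    remainder = samples[:cut_start] + samples[cut_end:]   # the file after the cut
    return remainder[:paste_at] + portion + remainder[paste_at:]

# Move the middle of a short "file" to the front
edited = cut_and_paste([0, 1, 2, 3, 4, 5], cut_start=2, cut_end=4, paste_at=0)
# edited is [2, 3, 0, 1, 4, 5]
```

The edit is computed as a separate step and only then played back, which is exactly the editing-mode interaction described above.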

The conditions for real-time operation in digital audio systems can be viewed from two
key perspectives:
• A digital audio system must operate at the sample rate. For example, at a sample rate
of 96kHz there are 96000 audio output samples per second, so it must produce a new
output every 1/96,000th of a second. If it cannot achieve that because the processing
task is too complex for the available computing power, then gaps will occur in the
output.
• Another constraint is the acceptable delay (or latency) between an event occurring
and the system output demonstrating a response to that event. For example, if a
key is pressed on a musical keyboard, what is the longest acceptable delay before
a synthesizer produces an output? Similarly if a guitar signal is passing through a
computer to add a flanging effect, how quickly must the output reflect a change in
input to be acceptable? In general, such latency need only reach a few milliseconds
before the delay becomes perceptible. Delays are often due to the operation of hardware interfaces,
operating systems, communication mechanisms, and block-based processing.
It is possible for an audio system to be able to operate consistently at the sample rate,
yet have a latency in response to events that makes it impractical to use the system in
performance.
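The arithmetic behind both constraints is easy to check directly. This Python sketch uses an illustrative 256-sample block size, which is not a value prescribed by the text:

```python
def sample_period_us(sample_rate):
    """Time budget per output sample, in microseconds."""
    return 1_000_000 / sample_rate

def block_latency_ms(block_size, sample_rate):
    """Minimum latency contributed by buffering one block of samples, in ms."""
    return 1000.0 * block_size / sample_rate

budget = sample_period_us(96000)        # about 10.4 us per sample at 96 kHz
latency = block_latency_ms(256, 48000)  # a 256-sample block at 48 kHz adds about 5.3 ms
```

This shows how the two perspectives diverge: a system might comfortably meet the per-sample budget yet still accumulate many milliseconds of latency through block-based processing and buffering.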

While real-time performance operation is often associated with placing high load on a
computing system, that is not always the case. A digital room thermostat is a real-time
electronic device, but uses very limited processing power. Similarly, many processing
tasks can be completed faster than required for real-time performance. For example,
creating a data compressed audio file, or bouncing audio tracks from a Digital Audio
Workstation (DAW) can typically be completed faster than the time taken to play the
whole audio file through from beginning to end.

1.1.4 Audio Process Themes


This book is organised around four themes: analysis, modification, synthesis, and control.

Analysis

Humans analyse sound constantly, looking for patterns in the stream of information reach-
ing the ears, in order to gain understanding of what is going on in the environment. In
musical terms, certain patterns are important, such as those that identify a particular type
of instrument, or characteristics like pitch. When playing an instrument, analysis helps
the performer to adjust their physical inputs to achieve the desired pitch, loudness, and
tonal character. When mixing a recorded track, analysis helps the engineer to vary process
controls to achieve different sound modifications.

Turning the concept around, the information that is important to the human auditory
system is also important to the audio process designer. For example, to synthesize a
percussive sound means generating the sonic cues that a human recognises as reflecting
acoustic percussive sound sources. Similarly, a noise gate must analyse its input in order
to recognise the difference between noise and non-noise in a similar way to a human.
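As a concrete illustration of analysis embedded in a process, here is a minimal noise gate in Python. The one-pole envelope follower is a standard analysis technique, but the threshold and smoothing coefficients below are arbitrary illustrative values rather than figures from the text:

```python
def noise_gate(samples, threshold=0.05, attack=0.9, release=0.999):
    """Mute samples whose smoothed level falls below a threshold.

    A one-pole envelope follower supplies the analysis stage; the
    threshold and coefficients are illustrative values only.
    """
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # Track rises in level more quickly (attack) than falls (release)
        coeff = attack if level > env else release
        env = coeff * env + (1.0 - coeff) * level
        out.append(x if env >= threshold else 0.0)  # gate open or closed
    return out
```

The envelope follower plays the analysis role from the block diagrams in this section: it takes an audio signal in and, in effect, produces a control decision (gate open or closed) out.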

As part of the audio process control loop, the human brain is often used to analyse
information (such as audio signals, visual displays, and tactile and kinaesthetic feedback)
and then produce a control output (such as moving the fingers) to alter the behaviour of
the system. Therefore, the brain can be represented in block diagram terms as follows:

[Block diagram: information input → human brain (ANALYSIS) → control output]

Software or electronic process elements that perform an analysis role have a similar form
to the brain in a block diagram, where they take an input (such as an audio or control
signal) and produce a control output.

Modification

Many of the most common audio processes are concerned with modification of existing
sounds. This can be as simple as changing the amplitude of a signal or attenuating high
frequencies, all the way through to complex effects processes such as chorus and rever-
beration. Most sounds encounter some modification from source to ear, either in software,
electronically, or acoustically. The recording studio contains many sound modifiers that
can take a source sound and change it such that it has the desired character. Even listening
to a recording in a living room causes modification of sound, as the characteristics of the
room are superimposed on the aural result.

Modifiers are not necessarily specific to a particular instrument, and usually work in-
dependently. For example, a tremolo effect might be used with an electric guitar, a
synthesizer, or the sound of a clarinet recorded using a microphone. In a block diagram
a typical modifier will take an audio input and produce an audio output. Consider a
distortion effect, for example:

[Block diagram: audio input → distortion (MODIFICATION) → audio output]
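A tremolo, mentioned above, is one of the simplest modifiers to express in code: an audio input, an audio output, and a low-frequency gain modulation in between. The Python sketch below uses invented parameter names and defaults; it is an illustration of the idea, not a definitive implementation:

```python
import math

def tremolo(samples, sample_rate=48000, rate_hz=5.0, depth=0.5):
    """Amplitude-modulate the input with a low-frequency sine (a tremolo).

    depth=0 leaves the signal unchanged; depth=1 swings the gain all the
    way down to zero. Parameter ranges are illustrative assumptions.
    """
    out = []
    for i, x in enumerate(samples):
        lfo = math.sin(2 * math.pi * rate_hz * i / sample_rate)
        gain = 1.0 - depth * 0.5 * (1.0 + lfo)  # gain stays in [1 - depth, 1]
        out.append(x * gain)
    return out
```

Note that, like the tremolo described in the text, this function knows nothing about its source: the same code works whether the input came from a guitar, a synthesizer, or a microphone.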

Synthesis

Audio synthesis can be defined as the generation of sound from scratch. Synthesis is often
derived, at least in part, from understanding the characteristics of existing sounds and
replicating them in some way. The important feature is that synthesis is the source of
sound, without needing a continual audio input from elsewhere (which modifiers require).
A simple example of a synthesizer is an instrument such as an electronic piano. It does
not have an audio input like the modifier, but instead has a control input (the hands and
feet determining the sound that is produced):

[Block diagram: control input → electronic piano (SYNTHESIS) → audio output]

In audio process terms, a synthesizer often incorporates fundamental modification
elements within its design, such as amplitude gain control, mixing, and filtering.

Control

The nature of the control system has a significant effect on the results produced. It might
be assumed that certain tasks are always controlled in the same way, such as a musical
keyboard being the input to a synthesizer. However, synthesizers can be controlled by a
wide variety of sources, from a simple computer mouse to a guitar. A different technique
will produce a different sonic result, as the ergonomics of interfaces provide different
styles of control. For example, a guitar provides the opportunity for rapid and precise
positioning of many fingers in different positions, whereas a computer mouse provides
two linked dimensions of control and a small number of switch inputs.

Parameter control is an important feature of both modifiers and synthesizers. This is found
both in setup stages, as well as in performance. Therefore, it is typically the case that there
will be more than one input to a process. For example:

[Block diagram: audio input → amplitude control (MODIFICATION) → audio output, with an additional control input]

The later chapters in this book will examine the techniques used to achieve effective
control.

All four themes are discussed in depth through the chapters of this book in terms of:

• The underlying technologies, and the forms used in real systems. The basic struc-
tures and techniques are expanded to help the reader understand how to construct
more complex real-world systems.
• The response that a human has when interacting with those systems; both aurally
and using other senses. An audio process is most useful when it provides a result
that is controllable in a way that the human finds natural and suitable.

There are many ways in which this information can be applied, but one of the points
of learning about audio processes in all its aspects is to understand the benefits and
limitations of existing technologies. From this, future systems will be developed that will
produce new understanding of sound, creative techniques, and listening experiences. It
is not the case that audio processes are always tied to particular musical styles. In fact,
the result might not be classed as musical at all. A science fiction sound effect, a slowly
changing electroacoustic sonic landscape, and a rock track are equally valid contexts for
understanding the elements of interest.

1.2 Example Audio Process Systems


This section introduces some typical arrangements of audio process elements, to illustrate
how they are combined to form working systems.

1.2.1 Playing an Acoustic Instrument


Imagine a musician playing an acoustic instrument. Although this scenario includes no
electronic components, it is worth considering in terms of audio processes. Figure 1.1
shows how playing an instrument requires some kind of control input, to which the
instrument reacts and synthesizes an output sound. Any instrument can be considered to
have those two elements, whether electronic or acoustic. The challenge when creating an
instrument is designing it such that it reacts in an expressive and natural way to the human
input, and such that it produces a sound that has the desired timbral (tonal) characteristics.

When an instrument is played, there is aural feedback to the performer. Therefore, fig-
ure 1.1 can be extended to include that path as shown in figure 1.2. The performer’s
brain analyses the sound in order to appropriately adjust control of the instrument and
so achieve variations of pitch, amplitude, and tonality. A feedback path is important
in the operation of most audio systems. For example, a recording engineer uses aural
information to adjust the controls of a mixing console and balance the relative amplitudes
of different instruments.

[Block diagram: control input (CONTROL) → instrument (SYNTHESIS) → output sound]

Figure 1.1 Playing an acoustic instrument (simplest form)

[Block diagram: as figure 1.1, with an acoustic feedback path from the instrument's output back to the performer's brain (ANALYSIS), which in turn drives the control input (CONTROL)]

Figure 1.2 Playing an acoustic instrument (including aural feedback)

[Block diagram: as figure 1.2, with the instrument's output passing through the room (MODIFICATION) before reaching the performer's brain (ANALYSIS)]

Figure 1.3 Playing an acoustic instrument (including room acoustics)

[Block diagram: a closed loop of analysis → control → synthesis → modification → back to analysis]

Figure 1.4 An audio process feedback loop



Another element that can be added to the model is the effect of the room in which the
instrument is being played, as shown in figure 1.3. The environment in which a sound
exists can radically change the aural result. For example, playing an instrument in a
cathedral produces a very strong reverberant effect.

The structure of the feedback loop seen in figure 1.3 is summarised in figure 1.4. This
feedback loop has an arrangement of parts that is found in a number of audio process
systems. It is possible to extend the loop with further elements:

• Other feedback paths used by the musician. For example, the sense of touch that
helps a guitarist to achieve the desired tone, by feeling the response of the strings.
Or visual information that a musician might use when positioning their fingers.
• Other sources of sound. For example, other instruments playing at the same time
that produce sound that the musician must analyse, in order to play in synchrony
and to interact with them in performance. There are also background noise sources
that must be filtered out by the human auditory system.
• Other modifiers. There are often many elements in the audio chain between the
instrument and the ear, some deliberate and some incidental. For example, if a
microphone and amplifier are being used to increase the sound level for an audience
then they will also have an effect on the tonality of the sound that is heard. Similarly
an acoustic guitar recorded with a pickup will sound different to when a microphone
is used.

Changing any one element of the structure will produce a different audible result. For
example:

• If the performer is a less skilled musician, they will be less able to produce control
inputs that realise the full expressive potential of the instrument, compared to a
virtuoso.
• Even if the type of instrument remains the same, different examples from different
manufacturers will have different tonal qualities.
• The environment in which the instrument is being played can have a large effect
not only on the sound character, but also on the style of music that can be clearly
conveyed. A complex high-tempo tune will be unclear in a cathedral, for example.

1.2.2 Combining Two Paths


The scenario described in §1.2.1 was presented as a simple circular path from analysis, to
control, to synthesis, to modification, and back to analysis. In most cases there will be
multiple signal paths within an audio process.

Imagine that a musician carefully tips a large bucket of marbles onto an electronic drum
pad to cause a rapid series of triggered sound events. The drum pad is connected to a
synthesizer, and then to an amplifier and loudspeaker. However, as the marbles fall off
the drum pad and onto the floor, that will also create sound. Figure 1.5 shows how this
situation might be represented as a block diagram.

[Block diagram: the performer's brain (ANALYSIS/CONTROL) controls the tipping of the bucket; the drum pad drives a synthesizer (SYNTHESIS) whose electrical output passes through an amp, while the falling marbles generate an acoustic output (SYNTHESIS); the two signals are summed in an adder and then modified by the room (MODIFICATION) on the way back to the listener's ears]

Figure 1.5 Playing a drum pad with marbles

In the diagram, there are two types of synthesis; acoustic generation of sound as the
marbles hit the floor, and electronic synthesis. These two separate signals are mixed
together (in the adder block) to create the signal that is modified by the room acoustics
and heard by the person creating the effect. This feedback allows the person to adjust the
tilt of the bucket to control the audible result.
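The adder block in this scenario is literally a sample-by-sample sum. A minimal Python sketch follows; the gains and example values are invented, and scaling each input preserves headroom, since summing two full-scale signals can exceed the valid range:

```python
def mix(a, b, gain_a=0.5, gain_b=0.5):
    """Sum two signals sample by sample, scaling each to preserve headroom.

    zip truncates to the shorter input, so the signals are assumed to be
    the same length. Gains here are illustrative values.
    """
    return [gain_a * x + gain_b * y for x, y in zip(a, b)]

acoustic = [0.5, -0.5, 0.5]     # e.g. marbles hitting the floor
synth = [0.25, 0.25, -0.25]     # e.g. the triggered synthesizer output
combined = mix(acoustic, synth)
# combined is [0.375, -0.125, 0.125]
```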
Transducers (such as microphones, headphones, and loudspeakers) and amplifiers can be
regarded as signal modifiers as well. For example, a microphone’s electrical output is not
a completely accurate representation of the corresponding variation in sound pressure. In
many cases transducers and amplifiers are quite subtle modifiers, however.

1.2.3 Automated Analysis


It is not only humans that can perform audio analysis. With some audio processes there is
a computer algorithm or electronic circuit that is responsible for analysing the content of
a signal, and producing a suitable effect based upon the results. Automation is commonly
used where a task is boring or repetitive, or where a human is unable to react quickly or
accurately enough.

Figure 1.6 shows an example use of an automated analysis process. This represents a
system that might be used in an audio artwork at a public exhibition, where the computer
is responsible for controlling the audible results. The computer analyses the audio signal
from the microphone, and produces a control signal that depends upon the nature of that
signal. For example, it might detect the footsteps of the humans in the room, or whether
they are talking, or try to gauge the number of people present. This information is then
used to create a particular sound character that reacts to the presence of humans in the
room.
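A minimal version of such automated analysis might measure the level of the microphone signal and map it onto a single control value for the synthesizer. In the Python sketch below, the RMS calculation is a standard level measurement, while the quiet/busy thresholds are hypothetical calibration points for the installation, not values from the text:

```python
def rms_level(samples):
    """Root-mean-square level of a block of samples: a simple analysis stage."""
    return (sum(x * x for x in samples) / len(samples)) ** 0.5

def activity_to_control(level, quiet=0.01, busy=0.3):
    """Map a measured level onto a 0..1 control value.

    quiet and busy are hypothetical calibration thresholds; between them
    the control value rises linearly.
    """
    if level <= quiet:
        return 0.0
    if level >= busy:
        return 1.0
    return (level - quiet) / (busy - quiet)
```

Such a mapping is only the simplest possible rule; distinguishing footsteps from voices, as discussed below, would need far more sophisticated analysis.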

[Block diagram: a microphone signal feeds a computer (ANALYSIS), which sends a control signal to a synthesizer (SYNTHESIS); the synthesizer output passes through an effects unit (MODIFICATION), an amp, and the room (MODIFICATION); the humans in the room (SYNTHESIS) also produce sound, which combines with the synthesized sound at the microphone]

Figure 1.6 Processing loop for an audio artwork

In the figure there are two principal sound creation (synthesis) blocks (the synthesizer and
the humans), and two principal modifier blocks (the effects unit and the room). There are
some fairly complex elements to this arrangement:

• A computer does not have the experience or imagination of a human, which means
that when the system is programmed it must be given a set of rules with which to
work. Such rules can be very difficult to specify in order to achieve an appropriate
result. For example, it might be desirable to achieve a musical character that is
increasingly harsh and tense as the number of human visitors to the room increases.
Such instructions are fairly easy for a human musician to interpret, but it is less clear
how to translate these into a computer algorithm.
• Humans can easily recognise the difference between footsteps and voices, even when
they are occurring simultaneously. Achieving this with a computer algorithm is a
challenge.

• There are two principal audio sources that are received as a combined signal at the
microphone. The computer must not only distinguish between different human
sounds, but also between the human sounds and those produced by the synthesizer.
Due to the modifications created by the effects unit and the room, this is not a trivial
task.
• Finally, it is important to remember that part of the complexity of these systems
is that previous outputs affect the current input. With positive feedback, this can
lead to an unstable system (or at least one that has quite dramatic behaviour). For
example, imagine that the computer algorithm has a rule that the louder the humans
talk, the louder the synthesizer should become. Of course, if the synthesizer becomes
louder, the humans are likely to talk more loudly in order to be heard, causing the
synthesizer to become louder, and so on in an escalating fashion.
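The escalating loop in that last point can be simulated directly. In this Python sketch, human_gain models how much louder the visitors talk in response to the synthesizer matching their level, and the ceiling is an arbitrary physical limit; all names and numbers are invented for illustration:

```python
def escalation(steps, human_gain=1.2, start_level=0.1, ceiling=10.0):
    """Iterate the 'louder synthesizer, louder talking' loop.

    Each step the synthesizer matches the talking level, and the humans
    respond by talking human_gain times louder. With human_gain > 1 the
    level grows until the (arbitrary) ceiling is reached.
    """
    level = start_level
    history = [level]
    for _ in range(steps):
        level = min(level * human_gain, ceiling)
        history.append(level)
    return history

levels = escalation(30)
```

Running this with human_gain above 1 shows the level climbing geometrically until it saturates at the ceiling, while a gain below 1 makes the loop settle quietly instead: the stability of the whole system hinges on that single feedback parameter.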

1.2.4 Two Humans Working Together


Figure 1.7 shows the signal flows in a studio session where two instrumentalists are
separated by an acoustic partition, but are recording their parts simultaneously. It is
assumed that the studio acoustic is dead and so there is minimal modification due to the
acoustic environment. One performer is playing an electric guitar plugged into a guitar
amplifier simulator (which models the behaviour of different amplifier and loudspeaker
combinations). The other performer is playing a pair of conga drums, using one hand per
drum.

Each performer is listening on headphones to a mix of their own instrument and the other
performer, such that they can play in time with each other. There is a split in the electrical
signal from both the amplifier simulator output and the microphone output that feed the
adders either side of the acoustic partition. Each performer has their own amplitude
gain control such that the level of the signal from the other performer can be tailored
for personal preference. For example, they might want to hear their own output signal
slightly higher in level than the other performer’s output.

There are a number of ways in which the diagram could be extended to represent other
features of the studio recording scenario. For example, the output from the amplifier sim-
ulator and the microphone will be connected to a mixing console and an audio recorder.
Similarly, there are likely to be paths back from the recorder to the performers such that
they can hear previously recorded parts, or a click track. The guitarist might well have
a control path to the amplifier simulator (as well as to the guitar) such as a foot pedal
or expression pedal in order to change parameter settings while playing. The recording
engineer in the control room will also be analysing the signals, and applying suitable
modification processes.

[Block diagram: on one side of an acoustic partition, the guitarist's brain (ANALYSIS/CONTROL) plays the electric guitar (SYNTHESIS), which feeds an amp simulator (MODIFICATION); on the other side, the percussionist's brain (ANALYSIS/CONTROL) plays the two drums (SYNTHESIS), captured by a microphone; each performer's headphone feed sums their own signal with the other performer's signal, the latter scaled by a personal gain control (MODIFICATION)]

Figure 1.7 Two humans performing together in a studio

The examples above illustrate the typical roles of analysis, modification, synthesis, and
control. Practical audio systems often combine processes from different areas to achieve
the required results. Subsequent chapters will explore each area in detail.
