2002, IEEE International Conference on Acoustics Speech and Signal Processing
Distributed physical models of musical instruments have been used to acoustically "ping" Internet connections between two network hosts. Sound waves propagated through Internet acoustics behave just as in air, water or along a stretched string. In this case, a musical synthesis technique creates waves on the Internet path between two hosts. When waves recirculate between two endpoints, a musical tone is created if the round trip travel time lies within the range of our pitch sense (roughly
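The round-trip pitch relation described in this abstract can be sketched with a Karplus-Strong-style recirculating delay line, where the delay length stands in for the network round-trip time. This is an illustrative assumption, not the authors' implementation; a 5 ms round trip is used here, giving a tone near 200 Hz:

```python
import numpy as np

def recirculating_tone(rtt_seconds=0.005, sample_rate=44100,
                       duration=1.0, damping=0.996):
    """Simulate a wave recirculating on a path whose round trip takes
    `rtt_seconds`; the perceived pitch is roughly 1 / rtt_seconds."""
    delay = max(2, int(round(rtt_seconds * sample_rate)))  # samples per round trip
    line = np.random.uniform(-1, 1, delay)   # initial burst "pinged" into the loop
    out = np.empty(int(duration * sample_rate))
    idx = 0
    for n in range(len(out)):
        out[n] = line[idx]
        # simple loss filter: average of adjacent samples, scaled by damping,
        # so high frequencies decay faster on each circulation
        line[idx] = damping * 0.5 * (line[idx] + line[(idx + 1) % delay])
        idx = (idx + 1) % delay
    return out

tone = recirculating_tone()
```

Shorter round trips (smaller `rtt_seconds`) raise the pitch, which is the sense in which the tone "measures" the path's travel time.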
Organised Sound, 1996
…a new paradigm in digital sound synthesis. The basic idea is … thus the method that is used for producing the digital sound signal. …
ISPA 2001. Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis. In conjunction with 23rd International Conference on Information Technology Interfaces (IEEE Cat. No.01EX480)
After recent advances in the coding of natural speech and audio signals, the synthetic creation of musical sounds is also gaining importance. Various methods for waveform synthesis are currently used in digital instruments and software synthesizers. A family of new synthesis methods is based on physical models of vibrating structures (string, drum, etc.) rather than on descriptions of the resulting waveforms. This article describes various approaches to digital sound synthesis in general and discusses physical modelling methods in particular: physical models in the form of partial differential equations are presented, and it is then shown how to derive discrete-time models suitable for real-time DSP implementation. Applications to computer music are given as examples.
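As a rough illustration of the PDE-to-discrete-time step this abstract mentions (not the authors' specific derivation), a minimal explicit finite-difference scheme for the 1D wave equation u_tt = c²u_xx with fixed ends might look like this; the grid size, step count, and Courant number are illustrative assumptions:

```python
import numpy as np

def string_fdtd(n_points=100, n_steps=500, courant=1.0):
    """Explicit finite-difference update for the 1D wave equation with
    clamped ends; lambda = c*dt/dx is the Courant number (stable for <= 1)."""
    lam2 = courant ** 2
    u_prev = np.zeros(n_points)
    u = np.zeros(n_points)
    u[n_points // 2] = 1.0        # "pluck": initial displacement at the midpoint
    u_prev[:] = u                 # crude zero-initial-velocity condition
    for _ in range(n_steps):
        u_next = np.zeros(n_points)
        # standard three-point stencil on interior points; ends stay at zero
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u

state = string_fdtd()
```

The same recipe (discretize the PDE, step the state forward sample by sample) underlies the real-time discrete-time models the article discusses.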
2013
Sound synthesis based on physical models of musical instruments is, ultimately, an exercise in numerical simulation. As such, for complex systems of the type seen in musical acoustics, simulation can be a computationally costly undertaking, particularly if simplifying hypotheses, such as those of traveling-wave or mode decompositions, are not employed. In this paper, large-scale time-stepping methods, such as the finite difference time domain and finite volume time domain methods, are explored for a variety of systems of interest in musical acoustics, including brass instruments, percussion instruments based on thin plate and shell vibration, and their embeddings in 3D acoustic spaces. Attention is paid here to implementation issues, particularly on parallel hardware, which is well suited to time-stepping methods operating over regular grids. Sound examples are presented. Copyright © 2013 Stefan Bilbao, Brian Hamilton, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
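A sketch of the kind of time stepping the paper describes, here a 2D wave-equation (membrane) update over a regular grid; the uniform stencil applied to every interior point is what makes such schemes map well to parallel hardware. The grid size and Courant number are illustrative assumptions, not values from the paper:

```python
import numpy as np

def membrane_fdtd(n=64, n_steps=200, courant=0.5):
    """2D wave-equation update on a regular n x n grid with clamped edges.
    Stability in 2D requires courant <= 1/sqrt(2)."""
    lam2 = courant ** 2
    u_prev = np.zeros((n, n))
    u = np.zeros((n, n))
    u[n // 2, n // 2] = 1.0               # strike at the centre
    for _ in range(n_steps):
        # five-point Laplacian over all interior points at once;
        # on a GPU each grid point would be one thread's work
        lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
               - 4 * u[1:-1, 1:-1])
        u_next = np.zeros((n, n))
        u_next[1:-1, 1:-1] = (2 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                              + lam2 * lap)
        u_prev, u = u, u_next
    return u

final = membrane_fdtd()
```

Plates, shells, and 3D acoustic spaces replace this stencil with stiffer and higher-dimensional operators, but the regular-grid time-stepping structure is the same.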
2012
This short article defines a type of sound synthesis that takes inspiration from physical modelling but differs from it in important ways. The produced sounds exhibit physical characteristics that make them suitable for the sonification of real-world objects.
IEEE Signal Processing Magazine, 2000
The physical modeling of complex sound generators can only be approached by individually synthesizing and discretizing the objects that contribute to the generation of sounds. This raises the problem of how to correctly implement the interaction between these objects. In this article we show how to construct an object-based environment for sound generation, whose objects can be individually synthesized and which can interact with each other through the modeling of a potential interaction topology. We will also show how this interaction topology can be made dynamic and time varying. We will further discuss how we envision an object-based environment that integrates geometric, radiometric, and intrinsic/extrinsic acoustic properties. We will finally illustrate our first results toward the modeling of complex sound generation systems.
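The object-based idea (separately synthesized objects coupled through an editable interaction topology) can be sketched roughly as follows; the class names and the mass-spring resonator are hypothetical stand-ins for illustration, not the authors' actual environment:

```python
import math

class SoundObject:
    """A damped mass-spring resonator; a placeholder for a full physical model."""
    def __init__(self, freq, damping=0.999):
        self.freq, self.damping = freq, damping
        self.pos, self.vel = 0.0, 0.0

    def step(self, force, dt=1 / 44100):
        # semi-implicit Euler update driven by an external force
        k = (2 * math.pi * self.freq) ** 2
        self.vel = self.damping * (self.vel + dt * (force - k * self.pos))
        self.pos += dt * self.vel
        return self.pos

class Scene:
    """A set of objects plus a dynamic list of (source, target, gain) couplings."""
    def __init__(self, objects):
        self.objects = objects
        self.links = []                    # interaction topology, editable at run time

    def connect(self, src, dst, gain):
        self.links.append((src, dst, gain))

    def step(self, excitation):
        forces = [excitation.get(i, 0.0) for i in range(len(self.objects))]
        for src, dst, gain in self.links:  # couple objects through the topology
            forces[dst] += gain * self.objects[src].pos
        return [obj.step(f) for f, obj in zip(forces, self.objects)]

scene = Scene([SoundObject(220.0), SoundObject(330.0)])
scene.connect(0, 1, gain=0.5)              # topology can be edited while running
output = [scene.step({0: 1.0} if n == 0 else {}) for n in range(1000)]
```

Because `links` is just data, connections can be added or removed between samples, which is one simple way to realize the dynamic, time-varying topology the article argues for.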
Organised Sound, 1998
In multimedia art and communication, sound models are needed which are versatile, responsive to users' expectations, and have high audio quality. Moreover, model flexibility for human-machine interaction is a major issue. Models based on the physics of actual or virtual objects can meet all of these requirements, thus allowing the user to rely on high-level descriptions of the sounding entities. As long as the sound description is based on the physics of the sounding objects and not only on the characteristics of human hearing, an integration with physics-based graphic models becomes possible.
… Schafer (1977), who also introduced a catalogue of sounds organised according to referential attributes. Nowadays, a common terminology is available for describing sound objects both from a phenomenological or a referential viewpoint, and for describing collections of such objects (i.e. soundscapes) (Risset 1969, Truax 1978, McAdams 1987). For effective generation and manipulation of sound objects it is necessary to define models for sound synthesis, processing and composition. Identi…
2003
In this paper signal-based and physics-based sound synthesis methods are described, with a particular emphasis on our own results achieved in the recent years. The applications of these methods are given for the case of organ, piano, and violin synthesis. The two techniques are compared based on these case studies, showing that in some cases the physics-based, in other cases the signal-based realization is more advantageous. As a theoretical result, we show that the two methods can be equivalent under special circumstances.
… of the 2005 International Computer Music …, 2005
A new approach to computer music instruments is described. Rather than sense control parameters from acoustic instruments (or non-acoustic instrument controllers), the sound of an acoustic instrument is used directly by a synthesis algorithm, usually replacing an oscillator. Parameters such as amplitude and pitch can control other aspects of the synthesis. This approach gives the player more control over details of the sound due to the use of the rich acoustic signal. Latency in sensing parameters, particularly pitch, is less of a problem because pitch information is carried directly by the acoustically generated signal. Several examples are described and the results of a subjective evaluation by musicians are presented.
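One way to illustrate the idea in this abstract, the acoustic signal used directly inside the algorithm while a parameter extracted from it (here, its amplitude envelope) controls another aspect of the synthesis, is a ring-modulation sketch. The function, modulator frequency, and envelope coefficient are illustrative assumptions, not the authors' instruments:

```python
import numpy as np

def adaptive_ring_mod(input_signal, sample_rate=44100, mod_freq=150.0):
    """The acoustic input replaces an oscillator; its amplitude envelope
    controls the modulation depth (quiet playing -> drier sound)."""
    # envelope follower: rectify, then smooth with a one-pole lowpass
    env = np.zeros(len(input_signal))
    acc, a = 0.0, 0.999
    for i, x in enumerate(input_signal):
        acc = a * acc + (1 - a) * abs(x)
        env[i] = acc
    n = np.arange(len(input_signal))
    mod = np.sin(2 * np.pi * mod_freq * n / sample_rate)
    # crossfade between the dry input and its ring-modulated version
    return input_signal * (1.0 - env + env * mod)

sr = 44100
mic = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # stand-in for a mic signal
out = adaptive_ring_mod(mic)
```

Because pitch and timing are carried by the acoustic signal itself rather than by sensed parameters, this kind of processing sidesteps the pitch-tracking latency the abstract mentions.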
2004
The sound produced by acoustic musical instruments is caused by the physical vibration of a certain resonating structure. This vibration can be described by signals that correspond to the time evolution of the acoustic pressure associated with it. The fact that the sound can be characterized by a set of signals suggests quite naturally that computing equipment could be successfully employed for generating sounds, either for the imitation of acoustic instruments or for the creation of new sounds with novel timbral properties.
Journal of New Music Research, 2004
Proceedings of the 2006 symposium on Interactive 3D graphics and games - SI3D '06, 2006
Journal of New Music …, 2001
Computer Music Journal, 2020
The ITB Journal, 2002
Acta Physica Polonica A, 2015
Proceedings of the 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. WASPAA'99 (Cat. No.99TH8452), 1999
Organised Sound, 2005
Journal of the Audio Engineering Society, 2015