Papers by Giovanni De Poli
Leonardo, 2023
A real breakthrough in the development of technological art and computer music was the era of collaboration between the Centro di Sonologia Computazionale (CSC) and the Venice Biennale in the early 1980s. Thanks to this collaboration, computer music, which in those years was confined to research laboratories with auditions for insiders, entered the global scene of contemporary music. This interview by Sergio Canazza, current director of CSC, with two leading figures of that endeavor aims to bear witness to that creative turning point.

Proceedings of the SMC Conferences, Jul 26, 2015
This paper presents the early developments of a recently started research project, aimed at studying from a multidisciplinary perspective an exceptionally well preserved ancient pan flute. A brief discussion of the history and iconography of pan flutes is provided, with a focus on Classical Greece. Then a set of non-invasive analyses is presented, based on 3D scanning and materials chemistry, as the starting point to inspect the geometry, construction, age and geographical origin of the instrument. Based on the available measurements, a preliminary analysis of the instrument tuning is provided, informed by elements of the theory of ancient Greek music. Finally, the paper presents current work aimed at realizing an interactive museum installation that recreates a virtual flute and allows intuitive access to all these research facets.
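As a rough illustration of the kind of tuning estimate such an analysis involves (not taken from the paper), the sketch below applies the textbook quarter-wavelength approximation for a tube closed at one end; the lengths and bore are placeholders, not measurements of the instrument.

```python
import numpy as np

def stopped_pipe_fundamental(length_m, bore_m, c=343.0):
    """Quarter-wavelength approximation for a tube closed at one end:
    f0 ~ c / (4 * (L + 0.3 * d)), where 0.3*d is a rough end correction."""
    return c / (4.0 * (length_m + 0.3 * bore_m))

# Placeholder tube lengths (metres) for a hypothetical small pan flute.
lengths = np.array([0.20, 0.18, 0.16, 0.15, 0.13])
f0s = stopped_pipe_fundamental(lengths, bore_m=0.012)
cents_vs_first = 1200 * np.log2(f0s / f0s[0])  # intervals relative to the longest tube
```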

Models for the representation of the information carried by sound are necessary for describing that information from a perceptive and operative point of view. Beyond the models, analysis methods are needed to discover the parameters which allow sound description, ideally without loss with respect to the physical and perceptual properties being described. When aiming at the extraction of information from sound, we need to discard every feature which is not relevant. This process of feature extraction consists of various steps, starting from pre-processing the sound, then windowing, extraction, and post-processing procedures. An audio signal classification system can be generally represented as in Figure 4.1. Pre-processing: the pre-processing stage consists of noise reduction, equalization, and low-pass filtering. In speech processing (the voice has a low-pass behavior) a pre-emphasis is applied by high-pass filtering the signal to smooth the spectrum, so as to achieve a uniform energy distribution spect...
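To make the steps above concrete, here is a minimal NumPy sketch (not taken from the text) of pre-emphasis, Hann windowing, and one simple descriptor per frame; the filter coefficient, frame length and hop size are illustrative choices rather than values from the chapter.

```python
import numpy as np

def pre_emphasis(x, coeff=0.97):
    """High-pass pre-emphasis filter: y[n] = x[n] - coeff * x[n-1]."""
    return np.append(x[0], x[1:] - coeff * x[:-1])

def frame_signal(x, frame_len=1024, hop=512):
    """Split the signal into overlapping Hann-windowed frames."""
    window = np.hanning(frame_len)
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] * window
                     for i in range(n_frames)])

def spectral_centroid(frames, sr):
    """One simple descriptor per frame: the spectral centroid in Hz."""
    mag = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (mag @ freqs) / (mag.sum(axis=1) + 1e-12)

# Example: white noise standing in for an audio signal.
sr = 16000
x = np.random.randn(sr)                       # 1 s of noise
frames = frame_signal(pre_emphasis(x))
centroids = spectral_centroid(frames, sr)
```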
Atti del Congresso Annuale A.I.C.A. (1982), 1982
In this paper we present some results on audio restoration obtained with an algorithm that, in principle, solves the problems of broadband noise filtering, signal parameter tracking and impulsive noise removal by using the Extended Kalman Filter (EKF) theory. We show that, to achieve maximum performance, it is essential to optimize the EKF implementation. For this purpose, to cope with the non-stationarity of the audio signal, we used two properly combined EKF filters (forward and backward), and introduced a bootstrapping procedure for model tracking. The careful combination of the proposed techniques and an accurate choice of some critical parameters allows one to improve the performance of the EKF algorithm. Different audio examples validate the effectiveness of the presented procedure. (Work supported by CNR Progetto Finalizzato Beni Culturali, under contract CT9700629PF36.)
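For intuition only, the toy below shows the generic predict/update recursion behind Kalman filtering on a scalar random-walk model; it is a deliberately simplified stand-in, not the paper's extended, forward/backward EKF with bootstrapping.

```python
import numpy as np

def kalman_denoise(y, q=1e-4, r=1e-1):
    """Toy scalar Kalman filter: random-walk signal model x[n] = x[n-1] + w,
    observation y[n] = x[n] + v, with process/measurement variances q and r."""
    x_est = np.zeros_like(y)
    x, p = y[0], 1.0                 # initial state and error variance
    for n, yn in enumerate(y):
        p = p + q                    # predict: error variance grows
        k = p / (p + r)              # Kalman gain
        x = x + k * (yn - x)         # update state with the innovation
        p = (1 - k) * p              # update error variance
        x_est[n] = x
    return x_est

t = np.linspace(0, 1, 8000)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(len(t))
denoised = kalman_denoise(noisy)
```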

Computer Music Journal, 1992
Sound synthesis by means of simulated physical models has gained popularity in the last few years. One of the principal reasons for this interest is that this technique, based on modeling the mechanism of production of sound, seems to offer the musician simpler tools for controlling and producing both new and traditional sonorities. In general the aim of any model is to describe the fundamental aspects of the phenomenon in question by means of mathematical relationships. Most often models are used for purposes of analysis. In science and engineering, models are commonly used for the purpose of understanding physical phenomena. This is especially true in musical acoustics, where it is common practice to study a traditional instrument through its physical model in order to understand how it works (Keefe 1992; Woodhouse 1992). In the pioneering work of Hiller and Ruiz (1971), physical models were used with the goal of producing musical sounds. Since that time, physical models have been used for synthesis purposes. In this article we examine how models can be constructed for musical applications and discuss the principles that inspire the most widely used synthesis algorithms. We will also try to compare physical-model-based and traditional synthesis methods by discussing their structural properties. For all structures and models discussed below there are some important general truths. First, a common way of approaching the problem of modeling physical systems is to describe their observed behavior in the frequency domain. Frequency-domain models are particularly effective for the description of linear systems, but such systems rarely apply to musical instruments. When nonlinearities must be taken into account, modeling in the frequency domain often becomes unfeasible, especially when strong nonlinearities are involved. In this case, models in the time domain are preferable. Moreover, we know that any simulation requires the continuous-time model to be made discrete. This, of course, must be done in such a way as to reproduce with good approximation the behavior of the continuous-time model to which it refers.
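As a side note on the last point, the sketch below discretizes the simplest lumped physical model, a damped mass-spring oscillator, with a finite-difference scheme in the time domain; the parameters and the scheme are illustrative and are not taken from the article.

```python
import numpy as np

def damped_oscillator(f0=440.0, decay=3.0, sr=44100, dur=1.0):
    """Finite-difference simulation of m*x'' + r*x' + k*x = 0:
    a damped mass-spring, the simplest lumped physical model."""
    m = 1.0
    k = (2 * np.pi * f0) ** 2 * m          # stiffness giving resonance at f0
    r = 2 * m * decay                      # damping giving ~exp(-decay*t) envelope
    dt = 1.0 / sr
    n = int(dur * sr)
    x = np.zeros(n)
    x[0], x[1] = 0.0, 1e-3                 # "strike": small initial displacement
    for i in range(1, n - 1):
        # centered difference for x'', backward difference for x'
        x[i + 1] = (2 * x[i] - x[i - 1]
                    - dt * dt * (k / m) * x[i]
                    - (r / m) * dt * (x[i] - x[i - 1]))
    return x / np.max(np.abs(x))
```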

Journal of New Music Research, 2017
Can a computer play a music score, e.g. via a Disklavier, in a way that cannot be distinguished from a human performance of the same music? One hundred and seventy-two participants with a wide range of music playing backgrounds rated sound recordings of 7 performances of piano music by Kuhlau, one played by a human and six generated by algorithms, including a 'mechanical' and an 'unmusical' rendering. Participants rated the extent to which each performance was by a human and explained their answers. The mechanical performance had the lowest mean rating, but the human performance was rated as statistically identical to the other stimuli. There were no differences between ratings made by classical piano experts and lay listeners, but despite this, the musicians were more confident in their ratings. Qualitative analysis revealed five broad themes that contribute to judging whether a piece appears to be human. The themes were labelled (in descending order of frequency) intuitive, expressive, imperfections, halo (global preference) and empathy. This paper presents new evidence systematically demonstrating that algorithm-generated performances of piano music can be indistinguishable from human performances, suggesting some parallels with the 1990s victory of the Deep Blue computer over the (human) world champion chess player.

Journal of New Music Research, 2015
Studies on the perception of music qualities (such as induced or perceived emotions, performance styles, or timbre nuances) make large use of verbal descriptors. Although many authors noted that particular music qualities can hardly be described by means of verbal labels, few studies have tried alternatives. This paper aims at exploring the use of non-verbal sensory scales, in order to represent different perceived qualities in Western classical music. Musically trained and untrained listeners were required to listen to six musical excerpts in major key and to evaluate them from a sensorial and semantic point of view (Experiment 1). The same design (Experiment 2) was conducted using musically trained and untrained listeners who were required to listen to six musical excerpts in minor key. The overall findings indicate that subjects' ratings on non-verbal sensory scales are consistent throughout, and the results support the hypothesis that sensory scales can convey some specific sensations that cannot be described verbally, offering interesting insights to deepen our knowledge of the relationship between music and other sensorial experiences. Such research can foster interesting applications in the field of music information retrieval and timbre space exploration, together with experiments applied to different musical cultures and contexts.

ACM Transactions on Applied Perception, 2015
Computational systems for generating expressive musical performances have been studied for several decades now. These models are generally evaluated by comparing their predictions with actual performances, both from a performance parameter and a subjective point of view, often focusing on very specific aspects of the model. However, little is known about how listeners evaluate the generated performances and what factors influence their judgement and appreciation. In this article, we present two studies, conducted during two dedicated workshops, to start understanding how the audience judges entire performances employing different approaches to generating musical expression. In the preliminary study, 40 participants completed a questionnaire in response to five different computer-generated and computer-assisted performances, rating preference and describing the expressiveness of the performances. In the second, "GATM" (Gruppo di Analisi e Teoria Musicale) study, 23 participants als...

IEEE Transactions on Affective Computing, 2014
The important role of the valence and arousal dimensions in representing and recognizing affective qualities in music is well established. There is less evidence for the contribution of secondary dimensions such as potency, tension and energy. In particular, previous studies failed to find significant relations between computable musical features and affective dimensions other than valence and arousal. Here we present two experiments aiming at assessing how musical features, directly computable from complex audio excerpts, are related to secondary emotion dimensions. To this aim, we imposed some constraints on the musical features, namely modality and tempo, of the stimuli. The results show that although arousal and valence dominate for many musical features, it is possible to identify features, in particular Roughness, Loudness, and Spectral Flux, that are significantly related to the potency dimension. As far as we know, this is the first study to gain insight into affective potency in the music domain by using real music recordings and a computational approach.
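For reference, a minimal NumPy sketch of one of the features named above, spectral flux, understood here as the positive frame-to-frame change of the magnitude spectrum; the exact definition and analysis parameters used in the paper may differ.

```python
import numpy as np

def spectral_flux(x, frame_len=2048, hop=1024):
    """Spectral flux: positive frame-to-frame change of the magnitude spectrum,
    a rough correlate of how quickly the timbre evolves."""
    window = np.hanning(frame_len)
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    spectra = np.stack([np.abs(np.fft.rfft(x[i * hop : i * hop + frame_len] * window))
                        for i in range(n_frames)])
    diff = np.diff(spectra, axis=0)
    return np.sum(np.maximum(diff, 0.0), axis=1)   # one value per frame pair

# Example: flux of one second of noise at 44.1 kHz.
flux = spectral_flux(np.random.randn(44100))
```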
M. Emmer, editor, Matematica e cultura 2003, pages 31–40. Springer Verlag, 2003
Mathematics and Culture III, 2012
Musical timbre and physical modeling, by Giovanni De Poli and Davide Rocchesso
Proceedings ICMC …, 1999
A framework for real-time expressive modification of audio and MIDI musical performances is presented. An expressiveness model computes the deviations of the musical parameters which are relevant in terms of control of the expressive intention. The modifications are ...
Signals play an important role in our daily life. Examples of signals that we encounter frequently are speech, music, picture and video signals. A signal is a function of independent variables such as time, distance, position, temperature and pressure. For example, speech and music signals represent air pressure as a function of time at a point in space.
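A small NumPy illustration of this idea, a 440 Hz sinusoid sampled at 44.1 kHz standing in for air pressure varying over time (the values are arbitrary):

```python
import numpy as np

sr = 44100                                    # samples per second
t = np.arange(0, 1.0, 1.0 / sr)               # one second of time instants
a440 = 0.5 * np.sin(2 * np.pi * 440.0 * t)    # "pressure" as a function of time
```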
EURASIP Journal on Applied Signal Processing, 2003
This paper reviews recent developments in physics-based synthesis of the piano. The paper considers the main components of the instrument, that is, the hammer, the string, and the soundboard. Modeling techniques are discussed for each of these elements, together with implementation strategies. Attention is focused on numerical issues, and each implementation technique is described in light of its efficiency and accuracy properties. As the structured audio coding approach is gaining popularity, the authors argue that the physical modeling approach will have relevant applications in the field of multimedia communication.
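For flavor only, a minimal string model in the digital-waveguide family (a Karplus-Strong loop); this is far simpler than the hammer, string and soundboard models reviewed in the paper, but it shows the time-domain, physics-inspired structure such methods share.

```python
import numpy as np

def plucked_string(f0=220.0, sr=44100, dur=2.0, damping=0.996):
    """Karplus-Strong-style string: a delay line of length sr/f0 with a
    two-point-average loop filter; the loop models the round trip of a wave
    along the string, the filter its frequency-dependent losses."""
    n = int(sr * dur)
    delay = int(round(sr / f0))
    buf = np.random.uniform(-1, 1, delay)   # broadband excitation ("strike")
    out = np.zeros(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # replace the sample just read with a damped average of itself
        # and its neighbour in the circular delay line
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

tone = plucked_string()   # 2 s of a decaying string-like tone at 220 Hz
```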
But Alvise is not only a musician; he is also an engineer, or as we would say today, a computer scientist. In what follows I will touch on other aspects of Alvise's activity in scientific research and university teaching, following a few major research themes that can be outlined as follows:
- musical information: abstraction and support for composition
- sounds from the computer: synthetic voice, a system for music synthesis
- the computer as a musical instrument: real-time synthesis, performance environments
- preservation and restoration of electronic music
Finally, I will also say something about his teaching activity at the Università di Padova, as a lecturer in regular and summer courses, as well as a supervisor of degree theses and laboratory projects.