The paper discusses the interplay between language, music, and cognition, emphasizing the limitations of current models of translation and the importance of context in understanding musical semantics. The author critiques the existing literature, calling for more comprehensive methodologies that account for the influences of contextual and extra-textual factors in both translation studies and music perception. Philosophical reflections on the ineffability of certain experiences, such as musical understanding and sensory perception, also feature prominently, suggesting that significant aspects of cognition and art remain outside verbal expression.
Rivista Italiana di Filosofia del Linguaggio, 2017
In this paper I counter the formalistic rejection of musical meaning and the consequent dismissal of the analogy between music and language. Although musical formalists may concede that music can express emotions and offer sonic analogues of dynamic relations, they claim that, unlike linguistic meaning, which implies an inter-subjectively sharable reference to contents, purported musical meaning is vague, private, and arbitrary. Hence, they argue, music has no semantics and, consequently, is not like language. However, the formalist account of linguistic meaning overlooks the pragmatic view of linguistic meaning. According to such an approach, language is a kind of action and linguistic meaning is determined by the use of language in specific contexts. Drawing on pragmatics, I will suggest that musical meaning is structurally and interestingly analogous to linguistic meaning, understood in this way. A pragmatic understanding of music as communication, which is also supported by philosophical and empirical research on music's power to embody personal traits, is all we need to answer the question of musical meaning positively. Musical works and musical performances are, like speech acts, communicational actions that activate and determine the vague and undetermined meaning of music, which originates in music's power to represent dynamic and emotional relations. Music's determined meaning is actualized in virtue of its "context of use" (according to the cultural-social conventions of practices), while, conversely, musical actions contribute meaning to their context(s). Generally speaking, musical meaning emerges through contextual relations and interpretational acts. In this regard, focusing on the reciprocal connection between group improvisation and conversation, I will finally provide an emergentist and non-intentionalist account of the pragmatic generation of musical meaning, which may also be adopted heuristically as a paradigm for a conversational view of the interpretation of artworks.
2018. In "Advances in Experimental Aesthetics", ed. by F. Cova & S. Réhault, Bloomsbury. Lost in Musical Translation: A Cross-Cultural Study of Musical Grammar and Its Relation to Affective Expression in Two Musical Idioms between Chennai and Geneva. In this paper, I give a working hypothesis of 'musical grammar' and 'musical affective meaning' (a subtype of musical semantics) to then present the background theory, methodology, stimuli, and expectations of a cross-cultural study on musical grammar that I am presently conducting between Chennai (India) and Geneva (Switzerland). The two main questions that are being tackled in this study can be formulated as follows. (1) Perception of musical structure. Given that different musical idioms have different tonal organizations based on different grammatical structures, would a listener who is familiar with musical idiom MI1, but not with musical idiom MI2, be better than average at computing musical structures which respect the grammatical organizations of MI1, but not of MI2? (2) Perception of musical expression. Would a listener who is better at computing musical structures from MI1 than from MI2 be also better at telling what is expressed in musical pieces of MI1? The main predictive hypothesis behind this study is that, in order to better understand what is being expressed in a musical idiom, one needs to have developed its musical grammar to a sufficient level.
Proc. of First Symposium on Music and Computers, 1998
I discuss the issue of meaning, and the definition of "meaning" in music. I propose that it is a mistake to import the linguistic notion of semantics into a musical context on the grounds that musical communication serves a different function and is of a different nature from linguistic communication, and that there is no evidence to support the suggestion that the two should function in a strongly similar way.
The similarities between music and language are often remarked upon, and for this reason many thinkers fall into a linguistic bias in music analysis. Both are unique products of the human mind serving the purpose of expression, although they do not coincide entirely. The meanings conveyed musically and linguistically differ due to physical, conceptual, and formal limitations. A closer study of their creation points to the common root of linguistic and musical production: the human affinity for symbolic transformation of experience, stemming from evolutionary adaptation to a changing environment. Symbolisation allows human beings not only to internally conceive the reality that surrounds them, but also to represent objects and concepts in absentia. This faculty prompted the elaboration of systems of communication and of artistic creation, a virtually ludic activity reflecting the inherent cerebral processes.
Music Theory Online, 2019
In this article, I theorize a new conception of musical meaning, based on J. L. Austin’s theory of performative utterances in his treatise How to Do Things with Words. Austin theorizes language meaning pragmatically: he highlights the manifold ways language performs actions and is used to “do things” in praxis. Austin thereby suggests a new theoretic center for language meaning, an implication largely developed by others after his death. This article theorizes an analogous position that locates musical meaning in the use of music “to do things,” which may include performing actions such as reference and disclosure, but also includes, in a theoretically rigorous fashion, a manifold of other semiotic actions performed by music to apply pressure to its contexts of audition. I argue that while many questions have been asked about meanings of particular examples of music, a more fundamental question has not been addressed adequately: what does meaning mean? Studies of musical meaning, I argue, have systematically undertheorized the ways in which music, as interpretable utterance, can create, transform, maintain, and destroy aspects of the world in which it participates. They have largely presumed that the basic units of sense when it comes to questions of musical meaning consist of various messages, indexes, and references encoded into musical sound and signifiers. Instead, I argue that a considerably more robust analytic takes the basic units of sense to be the various acts that music (in being something interpretable) performs or enacts within its social/situational contexts of occurrence. Ultimately, this article exposes and challenges a deep-seated Western bias towards equating meaning with forms of reference, representation, and disclosure. Through the “performative” theory of musical utterance as efficacious action, it proposes a unified theory of musical meaning that eliminates the gap between musical reference, on the one hand, and musical effects, on the other. It offers a way to understand musical meaning in ways that are deeply contextual (both socially and structurally): imbricated with the human practices that not only produce music but are produced by it in the face of its communicative capacities. I build theoretically with the help of various examples drawn largely from tonal repertoires, and I follow with lengthier analytical vignettes focused on experimental twentieth and twenty-first century works.
Phenomenology of musical meaning, 2024
The article offers a phenomenological analysis of music perception, taking rock songs as its material. The structure of a song is simpler than that of a classical piece, but the results remain valid for any music of appropriate complexity. A "listening device", a computer program that transcribes sounds into musical notation, perceives individual sounds; musical consciousness, by contrast, understands a specific musical thought: it groups sounds into motifs and phrases. The concept of musical thought has different interpretations and needs a thorough phenomenological analysis based on Husserl's doctrine of the constitution of sense. P. Tagg introduces the concept of the "museme", a unit of musical meaning. Musemes have meaning only for an understanding consciousness. Musemes of different levels include motifs and phrases, the metric signature, rhythm, and the structure of the whole work. A melody may contain "key sites" in which the concentration of musical thought is greatest; they are constituted by musical consciousness and are largely subjective. The phenomenon of the key site is not specific to music: key sites are also found in literature and philosophy, and their appearance sheds light on the peculiarities of acts of understanding. The understanding of a musical work takes place in time, and this allows consciousness to mark out a segment of time and to arrange the intelligible structure within it. A melody is a meaningful whole with its own logic of unfolding; it encompasses a segment of time and transforms it into something stationary. A melody can be represented in the form of a scheme, and the subject matter of music is the compression of time into such structures. An interesting example of musical perception and understanding is psychedelic rock. When listening to psychedelic music, the constitutive activity of consciousness is reduced and an altered state of consciousness occurs: the horizon of meaning narrows and protention disappears. Here we should recall the notion of the "saturated phenomenon" introduced by J.-L. Marion; the saturated phenomenon does not allow for constitution and assimilation. Psychedelic music modifies the horizons of future meanings. The article concludes with a question about computer music: will it make sense to us? So far there is no unequivocal answer.
away from mimesis, Rousseau may not be that far from Rameau's physicalist approach to musical phenomena. Similarly, in the field of music theory, David Cohen revisits how the binary of melody versus harmony, and Rousseau's conception of melody as the ruling principle of music, do not so much reject Rameau as accommodate his own views to Rameau's musical Cartesianism. Finally, Matthew Gelbart and I reconsider the debates on the existence (or lack thereof) of musical universals that have emerged since the Enlightenment. Whereas Gelbart ponders the depiction of Rousseau as a cultural relativist that resulted from an oversimplification of his understanding of the concept of "nature," I argue that Rousseau's relativist views on music have been distorted by nineteenth-century positivist discourses, the legacies of which are still widely perceptible in the disciplines of neuroscience and cognitive psychology. Rousseau's intellectual output was certainly protean, which explains why so many disparate forms of reception have coexisted (not always in conversation with one another) over the past three centuries. Our colloquy highlights that his writings still encourage a rich diversity of approaches.
The question of musical meaning is one of the great practical and philosophical cruxes of the Western tradition, especially since the rise of autonomous instrumental music in the eighteenth century broke the hitherto unquestioned links between musical performance and its verbal texts, and the propagation of the notion of absolute music in the nineteenth century detached music-making from its immediate social contexts. At the same time, however, whether from the viewpoint of what the medievals dubbed musica theoretica or of its less respectable cousin musica practica, the question of what music means, or how it means, has paradoxically been not so much raised as begged. Indeed, such are the problems evoked by the notion of musical meaning, and by how it relates to musical form, that a recent study explicitly drawing on a social-semiotic model deliberately declines to use the key social-semiotic concept of metafunction in order to analyse various semiotic uses of the modality of sound, including music. van Leeuwen chooses not to adopt the so-called metafunctional hypothesis, whereby the expression plane of language is related to its interpretation plane(s), and through them to the social context, in terms of the three abstract generalized functions of ideational, interpersonal and textual meaning, concentrating instead on the materiality of sound, on the one hand, and its ideological implications, on the other. It is interesting to note how he justifies his decision, contrasting his earlier analysis of language and vision with that of sound and music:
This chapter describes the structural similarities between music and language, in pursuit of a strong argument for the hypothesis that music and language are not categorically different from one another but placed on the same continuum. Hence, we propose an integrative model. An analysis of their denotative and connotative levels reveals a crucial systemic difference: while in language both levels rely on semantics, in music they depend on syntax and semantics, respectively. Thus, musical syntax and semantics are merged into a unique system that cannot be split. Indeed, an analysis of musical intra- and extra-systemic meanings suggests that music seems to be, to a certain degree, auto-referential, while language's main function is extra-referential. This, ultimately, leads to the difficulty of translating different semiotic systems into one another. We argue that a translation is nonetheless possible in principle, placing both music and language on the same semiotic continuum based on their structural similarities.
In discussing the relationship between music and other forms of articulation, this paper deals first with the question of what the notion of music as a language might imply for contemporary composers. Secondly, it considers Albrecht Wellmer's thoughts on a constitutive "Sprach-Bezug" (speech-reference) of music. This leads to some final remarks on the aesthetic experience of music in its relation to other forms of human expression.