
Stephen Turner
Philosophy professor with varied interests in multiple fields, rooted in concerns in the philosophy of social science and cognitive science.
Phone: 8139745549
Related Authors
Thomas Eder
University of Vienna
Shaun Gallagher
University of Memphis
Gerhard Lauer
Johannes Gutenberg-Universität Mainz
Paul A. Roth
University of California, Santa Cruz
Guido Baggio
Roma Tre University, Rome, Italy
Susanne Schmetkamp
University of Fribourg
Tom Froese
Okinawa Institute of Science and Technology
Keith Oatley
University of Toronto
Books by Stephen Turner
From Turner’s childhood on the racially violent South Side of Chicago and the development of his interests in social theory, through his education in the shadow of the war in Vietnam and a period of social and personal turmoil, this biographical work shows us not only the development of academic thinking but also the evolution of an academic career. The rebellion within sociology against the hegemonic Merton-Parsons conception of the discipline and the methodological orthodoxies of the time leads through to a discussion of the philosophy of science and social science, and from there to a reassessment of the inherited view of the classics, to science studies, and to political and international relations theory. The comprehensive nature of Mad Hazard means the reader can truly understand how Turner’s academic journey evolved.
Revealing an academic career not dependent on prestige and academic power, but also not untouched by hierarchy and academic politics, Mad Hazard will appeal to readers interested in the field of social theory and, beyond that, to those interested in the evolution of intellectual life in the present-day university.
The problems of cognition, the brain, the actual experiences of people, and the question of how people acquire their cognitive capacities are all very difficult and subject to conflicting considerations. In this chapter a number of these are described. Yet the various approaches need to be integrated, and there are various proposals for doing so. Many of them are fundamentalist, in the sense that they take one set of considerations, such as evolution or computation, as the controlling facts. Typically these approaches have little to say about the social, except in a very abstract way. To take social variation and the complexities of social life into account appears to require a substantial revision of these approaches. At the same time, however, cognitive science appears to be providing reasons for rejecting or modifying some of the central ideas of social science, such as rational choice, by showing experimentally that the standard models in social science are false. These conflicts point to the need for integration, and to the limitations of proposals for integration.

If one begins with the navigational powers of an ant, and infers from them the features that an ant's brain must have in order to do the navigating that can be observed in ant experiments, one arrives, as Gallistel (1989) does, at a picture like this: the brain is made up of "functional" components that perform specific tasks in sequences or in relation to one another. It is a reasonable assumption that these components are not the product of learning, but are innate, inherited, fixed, and the product of extremely long evolution. This gets us a striking picture: the brain is like a transistor radio with separate functional units that match up to physical processes and spaces, and which can be treated as something like transistors, the same from person to person and "produced by evolution" to be standardized units. Brain regions do have different significance for the capacity to act, speak, perceive, and so forth. But because there are few cases in which these hypothetical structures match physically localized patterns in the brain, one must allow for a good deal of looseness in the assumed relation between these "functional" components and their physical realization.

If one begins instead with the problems of computation, recognizing that all models of the cognitive processes in the brain are computational, the issues look like this: what is the structure, or architecture, of the computational processes that allow the brain to do the big things that it unquestionably does—remember, often from the distant past and often with difficulty and error; consciously perform logical reasoning and computation; make intuitive distinctions, decisions, and the like that differ from conscious reasoning; receive perceptual inputs and process them; learn and change responses because of learning; and so forth. The constraints on computational models turn out to be substantial: models which suppose the brain to be a more or less fixed computer turn out to be bad at ...
So what is normativity? The one thing it is not is the sociological fact of people behaving in a certain way, using certain terms, and believing certain things. Behavior can be contrary to the relevant norms, and beliefs can be false. Genuine normativity, as distinct from sociological normativity, is the thing that makes behavior wrong, usage genuinely meaningful, and beliefs false. Without genuine normativity, there is no meaning, no truth or falsity, no act correct or incorrect. And this has implications for "sociological" accounts of the social world: without accepting the reality of genuine normativity we can't even describe the social world as we know it, because the world as we know it is constituted by these normative distinctions.
How do we know all this? Regress arguments. When we use normative language, or even reason about something, and are asked for a justification of our reasoning, we get back to a justification in the same normative language. The mere fact that people expect a promise to be fulfilled doesn't make it a promise or generate an obligation. Only a norm, one that says something like "one should fulfill one's promises," justifies our saying that someone who fails to fulfill a promise has done something wrong. Merely violating our expectations is not wrong.
This, at least, is the conventional wisdom in philosophy. But philosophers ordinarily keep their eyes on the shiny regress arguments and avert them from the trainwreck of the metaphysics behind them. What is this normativity that lies at the end of justification? Is there really a realm of the normative? Is it really the case that every time one uses a "normative" term, such as "correct," one invokes this netherworld of normativities? Does appealing to genuine normativity actually explain anything in this world? And if so, how does this kind of explanation relate to the kinds of explanations social scientists have given of normative facts? What is the source of normativity? There seems to be little agreement, and wide variation, in the answers to these questions, where there are answers at all. And when we look at the answers, they turn out to be all over the place: from a system of proprieties co-extensive with language, to presuppositions that flash into existence whenever they are needed, to the Kantian norms of reason, to the tacit rules behind the meanings of sentences, to the normativity embodied in and created by collective intentions, and on and on. This should be an embarrassment, but no one seems to be embarrassed about it.
There is also a puzzle about what exactly "normativity" explains. Does it explain something real that the social sciences don't explain? In the case of science, this problem has been discussed in two ways. One argues that science is a normative activity and therefore any "sociology" which purports to account for science must be insufficient. The other says that philosophy of science is an attempt to make normative sense of science, and that this activity does not compete with explanations of science or of the course of scientific development: the project of fashioning a normative lens for science and the project of explaining what scientists believe to be true are different enterprises that do not compete. One could extend this to other forms of "normativity": there is what people say and understand, and what they believe to be correct speech; then there is what is really correct speech. Social science is concerned with the former, normativity with the latter.
Normativists say that social science explanations don't explain normativity: they are only about regularities or probabilities, expectations perhaps, but never the normative fact, and therefore the meaning, of promising itself. Is this really true? A simple example is the explanation of Maori gift customs in Marcel Mauss's classic The Gift. The Maori think there is a spiritual substance, hau, that attaches to a gift and must be returned by the giving of another gift. They acknowledge that hau is a mysterious thing. Nevertheless, they believe in it, and organize their social and economic life around this substance. Hau is a Good Bad Theory: good, because it co-ordinates behavior and motivates compliance; bad, because hau is non-explanatory and false, and fails to fit into our ordinary stream of explanation, which is why the Maori treat it as a mystery.
This standard social science explanation works just fine for the Maori. It is difficult to imagine even a normativist philosopher quibbling with it. But it also raises a tough question: why don't explanations like this apply just as well to our own moral beliefs? If we think we are obligated to return a phone call, is the obligation any more real, or is our belief that this obligation is real any different from the beliefs of the Maori about hau? Isn't the whole concept of normativity suspiciously like the concept of hau, namely a false belief wrongly used to explain something that isn't there in the first place?
In a way, this one has an easy answer: there is no difference. And there is nothing mysterious that is left over after the social science explanation is given. For beliefs like this, it makes sense to be a relativist. But things appear to be different for reason or rationality itself. How can we treat that as a superstition? We rely on it. It is not a good bad theory, but a good good theory, if it is a theory at all. Here, it seems, the regress arguments work: there is no denying rationality, because justifying our denial would assume rationality, the rationality of the speaker and the person persuaded. And rationality is normative.
Or is it? Do we really "appeal" to rationality when we try to persuade someone or communicate with them? When we try to communicate anything, we have to say something that is intelligible, and we hope that the listener will understand it and see that it is true. But that is not the same as invoking a norm. Yet something is right in the normativist argument. What we do need in order to communicate or to reason with another person, as the normativist argument suggests, is a stopping point: an end of the regress, something held in common with which the justifying can close. The normativist thinks these stopping points must be norms because they are not causes or data.
But there is another possibility, found both in the philosophical tradition and in the social science tradition: Brentano's notion of Evidenz, which appears also in Weber in relation to empathy. Evidenz is defined by Brentano as that which is evident to all. The things that are evident to all, though the "all" needs to be qualified, might include steps in reasoning, or ostensive definitions that are understood by others, which are the kinds of regress-stoppers that are needed. Brentano thought of Evidenz as an alternative answer to the problem of grounding mathematics, which bedeviled Frege and Husserl. The alternative was derided as "psychologism" and the concept of Evidenz as subjective. But the critics were wrong, and they misrepresented it.
Today we can construe these points of mutual obviousness in terms of cognitive science concepts. The mirroring system in humans, the basis for empathy, is a good candidate for naturalizing Evidenz, for accounting for such things as our capacity to understand others without appealing to normativities, hidden structures of norms, and the like. These systems are "objective." They do much of the explanatory work that the mysterious notion of normativity is claimed to do. We can do without these mysteries.
What are the political implications of 'expert' knowledge, and especially scientific knowledge, for liberal democracy? If knowledge is not evenly distributed, upon what basis can the philosophy of equal rights be sustained?
This important book points to the crisis in knowledge in liberal democracies. This crisis, simply put, is that most citizens cannot understand, much less judge, the claims scientists make.
One response is the appointment of public commissions to provide conclusions for policy-makers to act upon. There are also 'commissions from below', such as grass-roots associations that question the limits of expert knowledge and power and make rival knowledge claims. Do these commissions represent a new stage in the development of liberal democracy? Or are they merely pragmatic devices of no political consequence?
The central argument of the book is that in a 'knowledge society' in which specialized knowledge is increasingly important to politics, more has to be delegated because democratic discussion can't handle it. This limitation in the scope of liberal democracy threatens its fundamental character.
The book will be required reading in the fields of social theory, political theory and science studies.
The story of the dispute is itself fascinating, for it cuts across the major political and intellectual currents of the twentieth century, from positivism, pragmatism and value-free social science, through the philosophy of Jaspers and Heidegger, to Critical Theory and the revival of Natural Right and Natural Law. As Weber's ideas were imported to Britain and America, they found new formulations and new adherents and critics and became absorbed into different traditions and new issues.
This book was first published in 1984.
In a series of tightly argued essays, Turner traces out the implications that discarding the notion of shared frameworks has for relativism, social constructionism, normativity, and a number of other concepts. He suggests ways in which these ideas might be reformulated more productively, in part through extended critiques of the work of scholars such as Ian Hacking, Andrew Pickering, Pierre Bourdieu, Quentin Skinner, Robert Brandom, Clifford Geertz, and Edward Shils.
This book focuses on the consequences of the 'near-death' experience of sociology in the 1980s, and its slow revival and transformation, as well as the challenges it faces in the new university environments. Certain to be controversial, the book looks forward to a new kind of discipline.
First published in 1980, this book examines the nature of sociological explanation. The tactics of interpretive sociology have often remained obscure because of confusion over the nature of the evidence for interpretation and the nature of decisions among alternative interpretations. In providing an account of the problem of interpretive sociological claims, the author argues that there is rationality to interpretation. He also presents a fresh view of the relationship between qualitative and statistical claims and shows their complementary character. Dr. Turner's lucid and comprehensive analysis breaks new ground in its fundamental re-examination of the conceptual basis for “explaining” social behaviour. By its call for more rigorous conceptual sophistication in attempted explanations of social behaviour, this book will stimulate controversy and lively discussion among sociologists.
sociology problematizes ideology itself, without affirming any ideology. This approach provides its own discipline: we come to understand what something like “oppression” means, and means for the people involved, in terms of the difference between our assumptions and practices, from our living of life, and theirs.
science, as they are in science itself, having replaced laws and theories as the primary strategy. Logical Positivism tried to erase the older neo-Kantian distinction between ideal constructions and reality. It returns in the case of models. Nowak's concept of idealization provided an alternative account of this issue. It construed model application as concretizations of hypotheses which improve by accounting for exceptions. This appears to account for physical law. But it raises the problem of uniqueness: is the result unique, as physical law should be? Neo-Kantianism failed this test. Its solutions were circular justifications for claims of uniqueness. Nowak inherited the problem without resolving it.
Web: http://www.stonybrook.edu/commcms/cognitivefutures/CFP.html
Panel Participants:
Stephen Turner (University of South Florida)
Jacob Mackey (Queens College, CUNY)
Georg Theiner & Nikolaus Fogle (Villanova University)
Evelyn B. Tribble (Otago University) & John Sutton (Macquarie University)
“On the whole, what is familiar is precisely not understood because it is familiar” (Hegel, Preface to the Phenomenology of Spirit)
With the increasing subjugation of sociology to political ideologies and a growing emphasis on "policy", which casts sociology in the role of a provider of intellectual content for political programs, this volume asks whether the situation is the result of an exhaustion of ideas or might perhaps be rooted in the failure of the very program of establishing sociology as a science. Taking seriously the challenges to the classical aspiration of constructing theories that both explain and are grounded in empirical reality, The Future of Sociology asks whether the core idea of transcending ideology is still worth pursuing, and whether there remains scope for making sociology scientific.
As such, it will appeal to scholars and students of sociology, social theory, and social scientific methodology.
Contributors are: Krzysztof Brzechczyn (Poland), Nancy D. Campbell (USA), Serge Grigoriev (USA), Géza Kállay (Hungary), Piotr Kowalewski (Poland), Jouni-Matti Kuukkanen (Finland), Chris Lorenz (The Netherlands), Herman Paul (The Netherlands), Dawid Rogacz (Poland), Paul A. Roth (USA), Laura Stark (USA), Stephen Turner (USA), Rafał Paweł Wierzchosławski (Poland), and Eugen Zeleňák (Slovakia). Available at: http://www.brill.com/products/book/towards-revival-analytical-philosophy-history.
understanding in such texts as: The Power of Dialogue (1999), Hermeneutic Cosmopolitanism, or: Toward a Cosmopolitan Public Sphere (2011), "Hermeneutics, Phenomenology, and Philosophical Anthropology" (2006), "Recognition and Difference" (2005), and "Agency and the Other" (2012). In what follows I would like to conduct a kind of philosophical experiment: to translate the basic ideas of this approach into the language and concerns, and also the findings, of cognitive science. My strategy will be to replace some key ideas from the basics of his account with cognitive science concepts dealing with the same or similar phenomena, and to ask whether the same kinds of conclusions Kögler draws about social theory and normative matters could be drawn. This leads to somewhat surprising results, especially in relation to such concepts as recognition and to the role of play in preschool children's development.
Online discussion will be held on Monday, December 13, 2021, at 5 P.M. (Central European Time).
In the current climate of crisis, the relevance and fruitfulness of Kögler's work have never been greater, as he fuses the philosophies of Paul Ricoeur, Hans-Georg Gadamer, and his mentor, Jürgen Habermas, to respond to critical international issues surrounding politics, society, and the environment. Working towards a truly non-ethnocentric and global conception of intercultural dialogue, an essential aspect of Kögler's critical hermeneutics is his account of selfhood as reflexive: socially situated, embodied, and linguistically articulated, permeated by power, yet critical and creative.
Leading international scholars, representing a variety of disciplinary backgrounds, build upon Kögler's approach in this volume and explore the methodological, theoretical, and applicative scope of critical hermeneutics beyond the Frankfurt School. In doing so, they address some of the most pressing issues facing global society today, from multilingual education to the urgent need for interreligious and intercultural understanding.
Closing with a response from Kögler himself, Hans-Herbert Kögler's Critical Hermeneutics also offers an exclusive account of the philosopher's contemporary re-appraisal of the core tenets of critical hermeneutics.