2004, Environmental Impact Assessment Review
Risk assessments play a crucial role in informing decisions regarding public safety and environmental protection. However, they often fail to account for the complexities of real-world situations and the diverse perspectives of affected communities. This paper critiques the current practices in professional risk assessments, highlighting their limitations, the resulting harms, and the neglect of alternatives. It advocates for a more participatory approach that includes ethical considerations, community input, and transparent communication of uncertainties to reduce the inherent risks associated with these assessments.
Unfortunately, the practice of risk assessment by the federal government routinely departs from the academic ideal. Federal risk assessments continue to rely on conservative models and assumptions that effectively intermingle important policy judgments with science. This often makes it difficult to discern serious hazards from trivial ones, and it distorts the ordering of the government's regulatory priorities. These distortions typically lead to disproportionate investments in reducing very small threats to health and life. In some cases these distortions may actually increase net health and safety risks. Widely acknowledged problems that continue to plague the practice of risk assessment in the federal government were described in the 1990 edition of the Regulatory Program of the United States, an annual publication of the Office of Management and Budget. The issues were not new, nor was the forum original inasmuch as previous editions of the Regulatory Program had raised similar concerns. But the unusual candor of the 1990 edition provoked a storm of controversy within federal regulatory agencies. The policy issues kindled by risk assessment, which for years had been relegated to obscure scientific journals, had finally become visible to
Environmental health perspectives, 2014
1992
DISCLAIMER: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
Journal of Risk Research, 2005
Research on risks has mainly been devoted to detailed analyses of risks that are subject to public debate and policy decision making. However, many if not most of the risks that are now subject to regulation were once neglected. Experts, in conjunction with regulators, have a crucial role in putting risks on the policy agenda. But what views do experts have on the matter of attention to risks? To answer this question, risk assessment experts were asked to list the risks they considered to be over-emphasized and neglected, respectively. Radiation risks constituted the largest category of risks reported to be over-emphasized. Other risks often reported as over-emphasized included BSE, GMOs, amalgam, and air traffic. Lifestyle risks were the largest category of risks reported to be neglected. Other risks often listed as neglected included radon (an exception within the radiation category), road traffic, socio-economic risks, energy production excluding nuclear power, and local accidents (including fires and workplace accidents). Risks mentioned about equally often as neglected and over-emphasized included chemicals and crime. There was a correlation between perceived risk and neglect: risks considered neglected were also judged to be larger. For comparison, the topics of articles in the journal Risk Analysis from 1991 to 2000 were categorized into the same risk categories used for the questionnaire. The risks most commonly treated in the journal (chemicals and cancer) coincided with those which experts in our survey considered over-emphasized rather than neglected.
Risk Analysis, 2002
Risk Analysis, 2004
Risk analysis, 2002
2009
There is in fact much evidence to suggest that non-experts sometimes have puzzling views about risks. 4 For example, Starr first concluded that people will accept risks incurred during voluntary activities (e.g. skiing) that are about one thousand times greater than risks that are involuntarily imposed upon them (e.g. food preservatives). 5 This conclusion inspired research leading to the classic report Acceptable Risk, which described how people are more afraid of the harm from something new or unfamiliar but with a low probability of occurring (e.g. nuclear meltdowns) than they are of the harm from something familiar (e.g. driving a car) with a high probability. 6 Although it is acknowledged that even experts have to make judgments about what constitutes a harmful event and what kinds of considerations need to be taken into account in identifying risks, there remains a division between what is objective or "real" for experts (e.g. scientists and risk analysts) versus what is subjective or "perceived" for non-experts. Slovic et al., for instance, compared the responses from two groups on the relative risks of 30 activities and technologies. 7 One group was composed of 15 national experts on risk assessment while the other group was composed of 40 members of the U.S. League of Women Voters. The disparities found between the two groups were significant. The study's authors explain that "[t]he experts' mean judgments were so closely related to the statistical or calculated frequencies that it seems reasonable to conclude that they viewed the risk of an activity or technology as synonymous with its annual fatalities." 8 The experts ranked nuclear power 20th while the League of Women Voters ranked it as the number one risk. While the experts put X-rays at number 7 in
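Disparities between expert and lay rankings of the kind Slovic et al. report are commonly summarized with a rank correlation. As a hedged illustration only (the rankings below are invented for the example, not the study's actual data), a Spearman rank correlation over two tie-free rankings of the same activities might be computed like this:

```python
# Hypothetical risk rankings of five activities (1 = riskiest) by two groups.
# These numbers are invented for illustration; they are NOT Slovic et al.'s data.
experts = {"nuclear power": 5, "driving": 1, "x-rays": 4, "smoking": 2, "pesticides": 3}
laypeople = {"nuclear power": 1, "driving": 4, "x-rays": 5, "smoking": 2, "pesticides": 3}

def spearman_rho(a, b):
    """Spearman rank correlation for tie-free rankings over the same items:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the rank difference."""
    n = len(a)
    d_sq = sum((a[k] - b[k]) ** 2 for k in a)
    return 1 - 6 * d_sq / (n * (n * n - 1))

print(round(spearman_rho(experts, laypeople), 2))  # prints -0.3
```

A rho near +1 would indicate the two groups order the risks almost identically; a negative rho, as in this contrived example, indicates they order them in roughly opposite ways, which is the pattern the nuclear-power disparity above suggests.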
2019
In 2018 a new collaborative initiative was launched by Atomium-EISD to encourage more intelligent understanding and management of risk. The collaboration seeks to foster greater public risk literacy, from its stronger form of better statistical understanding to the more basic ability to recognize the characteristics of both good and bad risk communication and research. The aim is to make an actionable impact on conversations about risk in society, among thought leaders, and between decision-makers, and to improve the quality of debate and decision making around risk issues. The ultimate aim is to free up societal resources that can then be used for the greater public good.
The Culture and Power of Knowledge, 1992
With 4 figures and 2 tables © Printed on acid-free paper which falls within the guidelines of the ANSI to ensure permanence and durability. Library of Congress Cataloging in Publication Data: The Culture and Power of Knowledge: inquiries into contemporary societies / edited by Nico Stehr and Richard V. Ericson. Includes bibliographical references and indexes.
2000
Behavioral Sciences & the Law, 2000
Journal of the Royal Statistical Society. Series A (Statistics in Society), 1988
Risk communication takes place in a variety of forms, ranging from product warning labels on cigarette packages and saccharin bottles to interactions between officials and members of the public on such highly charged issues as Love Canal, AIDS, and the accident at Three Mile Island. Recent experience has shown that communicating scientific information about health and environmental risks can be exceedingly difficult and is often frustrating to those involved. Government officials, industry executives, and scientific experts often complain that laypeople do not understand technical risk information and that individual and media biases and limitations lead to distorted and inaccurate perceptions of many risk problems. Individual citizens and representatives of public groups are often equally frustrated, perceiving government and industry officials to be uninterested in their concerns, unwilling to take immediate and direct actions to solve seemingly simple and obvious health and environmental problems, and reluctant or unwilling to allow them to participate in decisions that intimately affect their lives. In this context, the media often play the role of transmitter and translator of information about health and environmental risks, but have been criticized for exaggerating risks and emphasizing drama over scientific facts. These difficulties and frustrations point to the problems and complexities of communicating health and environmental information in a pluralistic, democratic society.
A review of the literature (Covello, Slovic, and von Winterfeldt, 1986) on recent efforts to communicate information about health or environmental risks, such as the controversies over the risks of saccharin, the pesticide EDB, dioxin, AIDS, toxic wastes, smoking, driving without seat belts, and nuclear power plant accidents, suggests that communication problems arise from (1) message characteristics and problems (e.g., limitations of scientific risk assessments); (2) source characteristics and problems (e.g., limitations
*The views and conclusions expressed in this paper are solely those of the authors and do not necessarily reflect the views and conclusions of the National Science Foundation.
Risk Analysis, 1994
This editorial is an essentially unedited version of testimony I presented to the House Committee on Science, Space, and Technology (Subcommittee on Technology, Environment, and Aviation) on November 16, 1993. The hearing marked the release of an Office of Technology Assessment (OTA) report entitled "Researching Health Risks," for which I was part of an advisory panel (along with Curtis Travis and a dozen other experts in risk assessment). The title of my testimony foreshadowed the dual message I hoped to convey: that more research to improve risk assessment is sorely needed, but that we ought not let that need blind us to other deficiencies in risk-based decision-making that may deserve equal or greater attention.] 'Fellow, Cecil and Ida Green Center for the Study of Science and Society, University of Texas at Dallas. Fellow, Center for Risk Management, Resources for the Future, Washington, D.C. (1987-1994), during which time this testimony was prepared. *There is no single quantitative rule of thumb to guide the balance between the stakes of an enterprise and the resources that should be devoted to gathering information on it; the proper balance would depend, among other things, on the likelihood that the information would reduce the probability and magnitude of making wrong decisions. But a ratio of 1000:1 would, I think, strike most experts as out of proportion. It is something like investing $10,000 in a new car without investing $10 in a couple of magazines that give the repair records of various models.
Knowledge Management Strategies and Applications, 2017
The aim of this chapter is to critically reflect on the definitions of hazard, risk, and risk perception, and their assessments, as used in different scientific disciplines, and to give examples of the potential implications for scientific discussions, knowledge management, and risk communication. Scientists with backgrounds in public health, psychology, environmental health, occupational health, engineering, sociology, and medicine were asked for a definition of hazard, risk, risk assessment, and risk perception as seen from their specific scientific disciplines. Hazard is generally seen as an adverse event or condition. For most risk definitions, probability and severity are important aspects. Often a quantification of risk is desired, whereas risk perception is seen as a subjective appraisal and a cognitive construct. As risk perceptions are based on a combination of knowledge, individual values, and affect, they may not provide reliable guidance for risk management decisions at a societal level. Discipline differences are mainly connected to terminology and interpretation of key concepts, but the differences are based on different tasks and perspectives. For dealing with controversies in science across disciplines, an acceptance and appreciation of the terminology and perspectives of different scientific disciplines are needed to ensure a transparent risk assessment process.