The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Journalism Practice, 2021
The rapid spread of misinformation online has been deemed a growing problem in the current digital media environment, with significant impact both on journalism and on society at large. As news practitioners are increasingly challenged by information overload and the need to process huge volumes of unstructured and unfiltered data within a very short time span, the veracity of online and User-Generated Content (UGC) and its spread through social media and video platforms has been identified as a key concern. This study aims to extend the body of knowledge on the verification of news video content derived from social media platforms. Using a web-based tool that analyses the online context around a video post and presents a number of verification signals, a set of fake and real news videos from YouTube and Facebook was examined by a sample of journalism students (N = 90) regarding their veracity. A sample of professional journalists (N = 17) was also used for qualitative comparison and additional results. The results of the study highlight the significance of online context, as well as the efficacy of a semi-automated process in the verification of video content and in journalism practice, in conjunction with useful verification features and practices.
Citizen journalism videos increasingly complement or even replace professional news coverage through direct reporting by event witnesses. This raises questions about the integrity and credibility of such videos. We introduce Vamos, the first user-transparent video "liveness" verification solution based on video motion, which can be integrated into any mobile video capture application without requiring special user training. Vamos' algorithm not only accommodates the full range of camera movements, but also supports videos of arbitrary length. We develop strong attacks both by utilizing fully automated attackers and by employing trained human experts to create fraudulent videos that thwart mobile video verification systems. We introduce the concept of video motion categories to annotate the camera and user motion characteristics of arbitrary videos. We share motion annotations of YouTube citizen journalism videos and of free-form video samples that we collected through a user study. We observe that the performance of Vamos differs across video motion categories. We report the expected performance of Vamos on real citizen journalism video chunks by projecting onto the distribution of categories. Even though Vamos is based on motion, we observe a surprising and seemingly counterintuitive resilience against attacks performed on relatively "static" video chunks, which turn out to contain hard-to-imitate involuntary movements. We show that the accuracy of Vamos on the task of verifying whole-length videos exceeds 93% against the new attacks.
WITNESS, 2019
Lead author: Gabi Ivens. An increasing number of verified-at-capture tools and other tools for tracing authenticity, provenance, and edits/manipulations over time are being developed to counter misinformation, disinformation, deepfakes, shallowfakes and other 'fake news', and to provide content validation at a time when challenges to trust are increasing. However, if these solutions are to be widely implemented, in the operating systems and hardware of devices, in social media platforms and within news organizations, then they have the potential to change the fabric of how people communicate, inform what media is trusted, and determine who gets to decide. This report looks at the challenges, consequences, and dilemmas that might arise if these technologies were to become the norm. What seems to be a quick, technical fix to a complex problem could inadvertently increase digital divides and create a host of other difficult, complex problems. Within the community of companies developing verified-at-capture tools and technologies, there is a new and growing commitment to the development of shared technical standards. As the concept of more thoroughly tracking provenance gains momentum, it is critical to understand what happens when providing clear provenance becomes an obligation, not a choice; when it becomes more than a signal of potential trust and instead confirms actual trust in an information ecosystem. Any discussion of standards, technical or otherwise, must factor in these technical and socio-political contexts. This report was written by interviewing many of the key companies working in this space, along with media forensics experts, lawyers, human rights practitioners and information scholars. After providing a brief explanation of how the technology works, the report focuses on 14 dilemmas that touch upon individual, technical and societal concerns around assessing and tracking the authenticity of multimedia.
It focuses on the impact, opportunities, and challenges this technology holds for activists, human rights defenders and journalists, as well as the implications for society-at-large if content authenticity technology was to be introduced at a larger scale.
Heliyon
Photos have been used as evidentiary material in news reporting almost since the beginning of journalism. In this context, manipulated or tampered pictures are very common in news articles amid today's misinformation crisis. The current paper investigates the ability of people to distinguish real from fake images. The presented data derive from two studies. First, an online cross-sectional survey (N = 120) was conducted to analyze ordinary human skills in recognizing forgery attacks. The goal was to evaluate individuals' ability to identify manipulated visual content and thereby to investigate the feasibility of "crowdsourced validation". This term refers to the process of gathering fact-checking feedback from multiple users, thus collaborating towards assembling pieces of evidence on an event. Second, given that contemporary veracity solutions couple journalistic principles with technological developments, an experiment in two phases was employed: a) a repeated measures experiment was conducted to quantify the associated abilities of Media and Image Experts (N = 5 + 5) in detecting tampering artifacts; in this case, image verification algorithms were put at the core of the analysis procedure to examine their impact on the authenticity assessment task; b) apart from interview sessions with the selected experts and proper guidance in using the tools, a second experiment was deployed on a larger scale through an online survey (N = 301), aiming to validate some of the initial findings. The primary intent of the deployed analyses and their combined interpretation was to evaluate image forensic services, offered as real-world tools, regarding their comprehension and utilization by ordinary people involved in the everyday battle against misinformation. The outcomes confirmed the suspicion that only a few subjects had prior knowledge of the implicated algorithmic solutions.
Although these assistive tools often lead to controversial or even contradictory conclusions, systematic training in their proper use boosted the participants' performance. Overall, the research findings indicate that the scores of successful detections relying exclusively on human observation cannot be disregarded. Hence, the ultimate challenge for the "verification industry" is to balance forensic automation with human experience, aiming to defend the audience against the propagation of inaccurate information.
The authenticity of information has become a longstanding issue affecting businesses and society, for both printed and digital media. On social networks, information spreads at such a fast pace and is so amplified that distorted, inaccurate or false information acquires a tremendous potential to cause real-world impact, within minutes, for millions of users. Recently, several public concerns about this problem were expressed and some approaches to mitigate it were proposed. In this paper, we discuss the problem by organizing the proposals into categories: content based, source based and diffusion based. We describe two opposite approaches and propose an algorithmic solution that synthesizes the main concerns. We conclude the paper by raising awareness of the concerns and opportunities for businesses that are currently on a quest to help automatically detect fake news by providing web services, but who will, in the long term, most certainly profit from their massive usage.
Society and Economy
Fake news, deceptive information, and conspiracy theories are part of our everyday life. It is really hard to distinguish between false and valid information. As contemporary people receive the majority of information from electronic publications, in many cases fake information can seriously harm people’s health or economic status. This article will analyze the question of how up-to-date information technology can help detect false information. Our proposition is that today we do not have a perfect solution to identify fake news. There are quite a few methods employed for the discrimination of fake and valid information, but none of them is perfect. In our opinion, the reason is not in the weaknesses of the algorithms, but in the underlying human and social aspects.
Communication & society, 2024
The emergence of technologies based on artificial intelligence is accelerating the digital transformation of media organizations, directly impacting work processes, the relationship with audiences, the generation of content, and the emergence of new professional profiles. It is also, and notably, transforming the processes of detecting and verifying false content. This descriptive-exploratory research analyzes the impact that the use of AI is having on the transformation of the public entity Radiotelevisión Española (RTVE). Through a literature review and interviews with RTVE executives and experts, it reveals the transformative impact of AI on the corporation, highlighting its use to generate new content and to verify the authenticity of fake and deepfake videos. In this area, RTVE combines traditional methodologies with AI-based ones and leads the development of several tools in collaboration with various universities. These tools have already yielded satisfactory results in detecting such misleading materials, reinforcing RTVE's role as a guarantor of the veracity of information and increasing citizens' trust in its content. Similarly, AI is reinforcing RTVE's identity as a public service by facilitating automated content that guarantees access to information in depopulated territories, and other content that connects new generations with culture. Artificial intelligence will also, in a short time, transform professional profiles and roles as they adapt to this new reality.
IEEE Technology Policy and Ethics
Journalism Practice, 2020
Social media platforms and news organisations alike are struggling with identifying and combating visual mis/disinformation presented to their audiences. Such processes are complicated due to the enormous number of media items being produced, how quickly media items spread, and the often-subtle or sometimes invisible-to-the-naked-eye nature of deceptive edits. Despite knowing little about the provenance and veracity of the visual content they encounter, journalists have to quickly determine whether to re-publish or amplify this content, with few tools and little time available to assist them in such an evaluation. With the goal of equipping journalists with the mechanisms, skills, and knowledge to be effective gatekeepers and stewards of the public trust, this study reviews current journalistic image verification practices, examines a number of existing and emerging image verification technologies that could be deployed or adapted to aid in this endeavour, and identifies the strengths and limitations of the most promising extant technical approaches. While oriented towards practical and achievable steps in combating visual mis/disinformation, the study also contributes to discussions on fact-checking, source-checking, verification, debunking and journalism training and education.
Digital Media and Documentary, 2018
Springer, 2024
Anàlisi: Quaderns de Comunicació i Cultura, 2021
Proceedings of the 8th International Conference on Information Systems Security and Privacy, 2022
Arab Media & Society, 2020
Journal of Intellectual Property and Practice , 2020
Proceedings of the 31st ACM International Conference on Information & Knowledge Management
Promet-traffic & Transportation, 2008
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Journal of Information Security, 2021
Journalism and Media, 2021
Continuum: Journal of Media & Cultural Studies, 2022
IAMCR Nairobi, 2021
Social Media for Journalists, 2013
International Journal of Information Technology and Management, 2012