2006
Trust and shared interest are the building blocks for most relationships in human society. A deceptive action and its associated risks can affect many people. Although trust relationships in virtual communities can be built more quickly and easily, they are also more fragile. This research concentrates on analyzing information quality in open rating systems, especially the way deceptive data spread in virtual communities. In this paper, we propose several novel ideas for assessing deceptive actions and for understanding how the structure of the virtual community affects information flow among subjects in the web of trust. Furthermore, our experiments illustrate how deceptive data spread and to what extent they affect subjects in virtual communities.
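The spreading process sketched in this abstract can be illustrated with a minimal simulation. This is our own simplification, not the paper's model: the trust graph, edge weights, and the fixed acceptance threshold are all assumptions for illustration.

```python
from collections import deque

def spread_deception(trust, source, threshold=0.5):
    """BFS over a web of trust: an edge (u, v, w) means u is trusted by v
    with weight w; a deceptive claim passes along an edge only when the
    trust weight meets the threshold. Returns the set of affected subjects."""
    affected = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, w in trust.get(u, []):
            if w >= threshold and v not in affected:
                affected.add(v)
                queue.append(v)
    return affected

# Hypothetical community: "a" seeds deceptive data; "c" is only reachable
# through the strongly trusted chain a -> b -> d -> c.
web = {"a": [("b", 0.9), ("c", 0.3)], "b": [("d", 0.8)], "d": [("c", 0.6)]}
print(sorted(spread_deception(web, "a")))  # ['a', 'b', 'c', 'd']
```

Note that "c" is unaffected by the direct low-trust edge from "a" but is still reached indirectly, which is the kind of structural effect the abstract refers to.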
System Sciences, 2000. Proceedings of the 33rd Annual Hawaii …, 2000
At any given time, the stability of a community depends on the right balance of trust and distrust. Furthermore, we face information overload, with increased uncertainty and risk-taking a prominent feature of modern living. As members of society, we cope with these complexities and uncertainties by relying on trust, which is the basis of all social interactions. Although a small number of trust models have been proposed for the virtual medium, we find that they are largely impractical and artificial. In this paper we present and discuss a trust model that is grounded in real-world social trust characteristics and based on a reputation mechanism, or word of mouth. Our proposed model allows agents to decide which other agents' opinions they trust more, and to progressively tune their understanding of another agent's subjective recommendations.
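The "progressive tuning" idea can be sketched as follows. This is a hedged illustration under our own assumptions (the neutral initial weight, linear update rule, and learning rate are invented), not the paper's actual mechanism:

```python
class Agent:
    """An agent that weights peers' recommendations and tunes those
    weights as its own experience confirms or contradicts them."""

    def __init__(self, peers, lr=0.1):
        self.weights = {p: 0.5 for p in peers}  # start from neutral trust
        self.lr = lr

    def aggregate(self, opinions):
        """Weighted average of peers' opinions about a third party."""
        total = sum(self.weights[p] for p in opinions)
        return sum(self.weights[p] * o for p, o in opinions.items()) / total

    def update(self, opinions, outcome):
        """Peers whose advice was close to the observed outcome gain
        weight; peers who were far off lose weight (clamped to [0, 1])."""
        for p, o in opinions.items():
            error = abs(o - outcome)
            self.weights[p] += self.lr * (1 - 2 * error)
            self.weights[p] = min(1.0, max(0.0, self.weights[p]))

a = Agent(["b", "c"])
a.update({"b": 0.9, "c": 0.2}, outcome=1.0)
# "b" predicted well and is now trusted more than "c".
```

After one update the agent already discounts the peer whose recommendation diverged from its own experience, which is the word-of-mouth calibration the abstract describes.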
2007 40th Annual Hawaii International Conference on System Sciences (HICSS'07), 2007
This research studies the problem of evaluating the effect of deceptive data based on the Information Flow Network and the Web of Trust. We present an Information Dissemination Model that captures the prerequisites for disseminating information based on subject and object trust. To evaluate the effects of deceptive data accurately, we offer a quantitative model for calculating to what extent the subjects in the information flow network are affected. We provide an algorithm for evaluating the spread of deceptive data and analyze its time complexity.
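One way to make "to what extent subjects are affected" quantitative is sketched below. The measure (maximum product of trust weights along any path from the source) is our own simplification for illustration, not the paper's formula:

```python
import heapq

def effect(trust, source):
    """Dijkstra-style search that maximizes the product of edge trust
    weights from `source`, giving each subject an effect score in (0, 1]."""
    best = {source: 1.0}
    heap = [(-1.0, source)]          # max-heap via negated scores
    while heap:
        neg, u = heapq.heappop(heap)
        if -neg < best.get(u, 0.0):
            continue                 # stale heap entry
        for v, w in trust.get(u, []):
            cand = best[u] * w
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return best

# Hypothetical network: "c" is affected more via the two-hop path
# a -> b -> c (0.9 * 0.8 = 0.72) than via the direct edge (0.3).
web = {"a": [("b", 0.9), ("c", 0.3)], "b": [("c", 0.8)]}
scores = effect(web, "a")
```

The algorithm runs in O((V + E) log V) time with the heap, in the spirit of the complexity analysis the abstract mentions.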
arXiv preprint arXiv:1211.0963, 2012
Abstract: Online rating systems are subject to malicious behavior, mainly the posting of unfair rating scores. Users may try, individually or collaboratively, to promote or demote a product. Collaborative unfair rating, i.e. collusion, is more damaging than individual unfair rating. Although collusion detection in general has been widely studied, identifying collusion groups in online rating systems is less studied and needs more investigation. In this paper, we study the impact of collusion in online rating systems and assess their susceptibility to collusion attacks. The ...
Proceedings of the 16th International Conference on Extending Database Technology - EDBT '13, 2013
Trust and reputation systems for virtual communities are gaining increasing research attention. These systems track members' activities and compute their reputation to improve the quality of member interactions and reduce the effect of fraudulent members. As virtual communities become a central playground for internet users, the reputation a member gains within a community may be viewed as a social credential. These credentials can serve the user as a means of promoting her status in new communities on the one hand, and on the other hand assist virtual communities in broadening their knowledge about users with relatively low activity volume. The Cross-Community Reputation (CCR) model was designed for sharing reputation knowledge across communities. The model identifies the fundamental terms required for meaningful sharing of reputation information between communities and proposes methods to make that sharing feasible within the boundaries of users' and communities' policies. This paper presents the CCR model and draws the architecture guidelines for designing an infrastructure to support it. The proposed model is evaluated using a sample of real-world users' ratings as well as a dedicated experiment with real users. The results of the experimental evaluation demonstrate the effectiveness of the CCR model in various aspects.
Lecture Notes in Computer Science, 2013
Online rating systems are subject to unfair evaluations. Users may try to individually or collaboratively promote or demote a product. Collaborative unfair rating, i.e., collusion, is more damaging than individual unfair rating. Detecting massive collusive attacks as well as honest-looking intelligent attacks is still a real challenge for collusion detection systems. In this paper, we study the impact of collusion in online rating systems and assess their susceptibility to collusion attacks. The proposed model uses a frequent itemset mining technique to detect candidate collusion groups and sub-groups. Then, several indicators are used to identify collusion groups and to estimate how damaging such colluding groups might be. The model has been implemented, and we present the results of an experimental evaluation of our methodology.
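The frequent-itemset step can be illustrated with a toy example. The data shape, support threshold, and group size below are assumptions for illustration (a brute-force count rather than a full Apriori implementation), not the paper's algorithm:

```python
from itertools import combinations

def candidate_groups(ratings_by_product, min_support=2, size=2):
    """Treat each product as a 'transaction' containing the users who
    promoted it; user sets that co-occur across at least `min_support`
    products are candidate collusion groups."""
    counts = {}
    for raters in ratings_by_product.values():
        for group in combinations(sorted(raters), size):
            counts[group] = counts.get(group, 0) + 1
    return {g for g, c in counts.items() if c >= min_support}

# Hypothetical data: u1 and u2 promote three products together, which no
# honest pair of users happens to do here.
data = {
    "p1": {"u1", "u2", "u3"},
    "p2": {"u1", "u2"},
    "p3": {"u1", "u2", "u4"},
}
print(candidate_groups(data))  # {('u1', 'u2')}
```

As the abstract notes, such candidates would then be scored with further indicators (e.g. rating deviation, group size) before being labeled as collusion.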
2003
We previously developed a social mechanism for distributed reputation management, in which an agent combines testimonies from several witnesses to determine its ratings of another agent. However, that approach does not fully protect against spurious ratings generated by malicious agents. This paper focuses on the problem of deception in testimony propagation and aggregation. We introduce some models of deception and study how to efficiently detect deceptive agents following those models. Our approach involves a novel application of the well-known weighted majority technique to belief functions and their aggregation. We describe simulation experiments that study the number of apparently accurate witnesses found in different settings, the effect of the number of witnesses on prediction accuracy, and the evolution of trust networks.
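The weighted majority technique at the heart of this approach can be sketched as follows. This is a simplified scalar version (boolean testimonies instead of belief functions, and an invented discount factor), not the paper's construction:

```python
def weighted_majority_update(weights, testimonies, outcome, beta=0.5):
    """Multiplicatively discount (by beta) the weight of every witness
    whose testimony disagreed with the eventually observed outcome."""
    return {w: (weights[w] * beta if testimonies[w] != outcome else weights[w])
            for w in weights}

weights = {"w1": 1.0, "w2": 1.0, "w3": 1.0}
testimonies = {"w1": True, "w2": False, "w3": True}
weights = weighted_majority_update(weights, testimonies, outcome=True)
print(weights)  # {'w1': 1.0, 'w2': 0.5, 'w3': 1.0}
```

The classic guarantee of weighted majority is that a consistently deceptive witness loses influence exponentially fast, which is why it is a natural fit for filtering spurious testimonies.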
Trust management systems have been considered a novel approach to establishing security in distributed systems and open communities. However, a trust management system is itself vulnerable to attacks and malicious behavior. In this article, after reviewing recent work in this field and examining well-known trust models under attacks and threats, we present a trust management model incorporating several security strategies that markedly increase the system's resistance to the newcomer attack, the on-off attack, and unfair ratings by malicious agents. The results demonstrate the efficiency of the system.
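A common way to resist the on-off attack (an agent behaving well to build trust, then misbehaving) is to weight recent behavior heavily. The exponential-forgetting rule below is a standard illustration under our own assumptions, not necessarily the strategy used in this article:

```python
def update_trust(trust, outcome, forget=0.7):
    """Exponential moving average: `outcome` is 1 for good behavior and 0
    for bad; a lower `forget` makes trust react faster to betrayal."""
    return forget * trust + (1 - forget) * outcome

t = 0.9                      # trust built up by behaving well ("on" phase)
for _ in range(3):           # the agent turns malicious ("off" phase)
    t = update_trust(t, 0)
print(round(t, 3))  # 0.309
```

Three bad interactions cut the accumulated trust from 0.9 to about 0.31, so past good conduct cannot indefinitely mask current misbehavior.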
2012
In virtual communities (e.g., forums, blogs), modeling the trust of community members is an effective way to help members make decisions about whose information is more valuable. Towards this goal, we first formulate hypotheses on how various interaction attributes influence trust in virtual communities, and validate these hypotheses through experiments on real data. The influential attributes are then used to develop a trust ranking-based recommendation model called TruRank for recommending the most trustworthy community members. Contrary to traditional recommender systems that rely heavily on subjective manual feedback, our model is built on the foundation of carefully verified objective interaction attributes in virtual communities.
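A trust ranking over interaction attributes, in the spirit of TruRank, can be sketched as a weighted score. The attribute names and weights below are invented for illustration; the paper derives its influential attributes empirically:

```python
def trust_score(member, weights):
    """Linear combination of objective interaction attributes (all assumed
    to be normalized to [0, 1])."""
    return sum(weights[k] * member.get(k, 0.0) for k in weights)

# Hypothetical attributes and weights, not those validated in the paper.
weights = {"reply_rate": 0.5, "post_quality": 0.3, "longevity": 0.2}
members = {
    "alice": {"reply_rate": 0.9, "post_quality": 0.8, "longevity": 0.4},
    "bob":   {"reply_rate": 0.4, "post_quality": 0.9, "longevity": 0.9},
}
ranked = sorted(members, key=lambda m: trust_score(members[m], weights),
                reverse=True)
print(ranked)  # ['alice', 'bob']
```

Because the inputs are logged interaction attributes rather than manual feedback, the ranking avoids the subjectivity the abstract contrasts with traditional recommender systems.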