2020, Proceedings of the 2020 Conference on Human Information Interaction and Retrieval
In evaluating search systems there is a growing trend to complement the established effectiveness indicator, topical relevance, with the usefulness of search results. Usefulness refers to the contribution of search results to the larger task that generated the information search. This study analyses articles on interactive information retrieval which either predict the usefulness of retrieved documents or evaluate search systems by the usefulness of search results. The aim is to systematize the findings of these studies by categorizing the types of usefulness and their predictor types. Significant empirical associations between predictor types and usefulness are systematized as well. The data consist of journal and conference articles focusing on the usefulness of search results in either web or database environments. The results indicate a growing trend to complement topical relevance with the usefulness of search results in evaluating search systems. Search tasks typically instruct participants to search for information for a writing task. Perceived usefulness of search results is the established measure in these studies, although alternative measures are in use. Significant associations between predictors and usefulness do not accumulate much but vary notably across studies. Growth of knowledge on the usefulness of search results therefore rests on the increasing number and variety of empirically supported propositions.
Journal of Documentation, 2000
This paper presents a set of basic components which constitutes the experimental setting intended for the evaluation of interactive information retrieval (IIR) systems, the aim of which is to facilitate evaluation of IIR systems in a way which is as close as possible to realistic IR processes. The experimental setting consists of three components: (1) the involvement of potential users as test persons;
Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '00, 2000
This article compares search effectiveness when using query-based Internet search (via the Google search engine), directory-based search (via Yahoo) and phrase-based query reformulation assisted search (via the Hyperindex browser) by means of a controlled, user-based experimental study. The focus was to evaluate aspects of the search process. Cognitive load was measured using a secondary digit-monitoring task to quantify the effort of the user in various search states; independent relevance judgements were employed to gauge the quality of the documents accessed during the search process. Time was monitored in various search states. Results indicated that directory-based search does not offer increased relevance over query-based search (with or without query formulation assistance), and also takes longer. Query reformulation does significantly improve the relevance of the documents through which the user must trawl versus standard query-based Internet search. However, the improvement in document relevance comes at the cost of increased search time and increased cognitive load. Keywords: navigation versus ad hoc search, monitoring user behaviour to improve search, field/empirical studies of the information seeking process, testing methodology.
2009
The purpose of an information retrieval (IR) system is to help users accomplish a task. IR system evaluation should consider both task success and the value of support given over the entire information seeking episode. Relevance-based measurements fail to address these requirements. In this paper, usefulness is proposed as a basis for IR evaluation.
Purpose: To compare five major Web search engines (Google, Yahoo, MSN, Ask.com, and Seekport) for their retrieval effectiveness, taking into account not only the results but also the results descriptions. Design/Methodology/Approach: The study uses real-life queries. Results are made anonymous and are randomised. Results are judged by the persons posing the original queries. Findings: The two major search engines, Google and Yahoo, perform best, and there are no significant differences between them. Google delivers significantly more relevant result descriptions than any other search engine. This could be one reason for users perceiving this engine as superior. Research Limitations: The study is based on a user model where the user takes into account a certain amount of results rather systematically. This may not be the case in real life. Practical Implications: Implies that search engines should focus on relevant descriptions. Searchers are advised to use other search engines in addition to Google. Originality/Value: This is the first major study comparing results and descriptions systematically and proposes new retrieval measures to take into account results descriptions.
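The new description-aware measures the study proposes are not spelled out in this abstract, but as a rough illustration the sketch below (with invented judgement data, not the study's measures) computes precision at k separately for the retrieved documents and for their result descriptions, plus a strict combined score that only credits results whose description and document are both judged relevant.

```python
# Illustrative only: compare document relevance with description relevance
# for the top-k results of one query. All judgement data below is made up.

def precision_at_k(judgements, k):
    """Fraction of the top-k results judged relevant (True)."""
    top = judgements[:k]
    return sum(top) / len(top) if top else 0.0

# Hypothetical per-result judgements, in rank order, for one query:
doc_relevant  = [True, True, False, True, False, False, True, False, False, True]
desc_relevant = [True, False, False, True, True, False, True, False, True, False]

k = 10
p_docs  = precision_at_k(doc_relevant, k)
p_descs = precision_at_k(desc_relevant, k)

# Strict combined view: a result only counts if the description is relevant
# (it would lead the user to click) and the document itself is relevant.
both = [d and s for d, s in zip(doc_relevant, desc_relevant)]
p_both = precision_at_k(both, k)

print(f"P@{k} documents:    {p_docs:.2f}")
print(f"P@{k} descriptions: {p_descs:.2f}")
print(f"P@{k} both:         {p_both:.2f}")
```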
The crucial role of evaluation in the development of information retrieval tools provides useful evidence for improving the performance of these tools and the quality of the results they return. However, the classic evaluation approaches have limitations and shortcomings, especially regarding the consideration of the user, the measurement of the adequacy between the query and the returned documents, and the consideration of the characteristics, specifications and behaviours of the search tool. Therefore, we believe that the exploitation of contextual elements could be a very good way to evaluate search tools. This paper thus presents a new approach that takes context into account during the evaluation process at three complementary levels. The experiments given at the end of this article show the applicability of the proposed approach to real search tools.
Journal of The American Society for Information Science and Technology, 2002
This article compares search effectiveness when using query-based Internet search (via the Google search engine), directory-based search (via Yahoo), and phrasebased query reformulation-assisted search (via the Hyperindex browser) by means of a controlled, user-based experimental study. The focus was to evaluate aspects of the search process. Cognitive load was measured using a secondary digit-monitoring task to quantify the effort of the user in various search states; independent relevance judgements were employed to gauge the quality of the documents accessed during the search process and time was monitored as a function of search state. Results indicated directory-based search does not offer increased relevance over the query-based search (with or without query formulation assistance), and also takes longer. Query reformulation does significantly improve the relevance of the documents through which the user must trawl, particularly when the formulation of query terms is more difficult. However, the improvement in document relevance comes at the cost of increased search time, although this difference is quite small when the search is self-terminated. In addition, the advantage of the query reformulation seems to occur as a consequence of providing more discriminating terms rather than by increasing the length of queries.
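Neither version of this abstract gives the underlying data, but a minimal sketch of the kind of per-condition comparison such a study involves might look like the following; the session records and field names are invented for illustration, aggregating mean relevance of accessed documents and mean search time per search condition.

```python
# Minimal sketch (invented session data): aggregate mean relevance of the
# documents accessed and mean search time for each search condition, the
# kind of comparison reported in the study above.
from statistics import mean

# Each record: (condition, mean relevance of accessed documents 0..1, search time in seconds)
sessions = [
    ("query",       0.42, 310), ("query",       0.38, 290),
    ("directory",   0.35, 405), ("directory",   0.40, 380),
    ("reformulate", 0.55, 360), ("reformulate", 0.51, 340),
]

by_condition = {}
for condition, relevance, seconds in sessions:
    by_condition.setdefault(condition, []).append((relevance, seconds))

for condition, rows in by_condition.items():
    rels = [r for r, _ in rows]
    times = [t for _, t in rows]
    print(f"{condition:12s} mean relevance {mean(rels):.2f}  mean time {mean(times):.0f}s")
```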
2012
This article proposes an extensive methodology for evaluating interactive information retrieval. The proposal draws on fundamental principles presented in the evaluation literature to define the objectives of the system or tool being evaluated, and to derive the measures and the criteria of success in achieving those objectives. It is proposed that, when evaluating a search tool, one should analyse the extent to which it benefits users by increasing their search capability and, consequently, contributing to the quality of the result list. Beyond the quality of the result list, it is important to evaluate the extent to which the search process and the tools that support it achieve their objectives.
Journal of Documentation, 1997
The paper describes the ideas and assumptions underlying the development of a new method for the evaluation and testing of interactive information retrieval (IR) systems, and reports on the initial tests of the proposed method. The method is designed to collect different types of empirical data, i.e. cognitive data as well as traditional systems performance data. The method is based on the novel concept of a 'simulated work task situation' or scenario and the involvement of real end users. The method is also based on a mixture of simulated and real information needs, and involves a group of test persons as well as assessments made by individual panel members. The relevance assessments are made with reference to the concepts of topical as well as situational relevance. The method takes into account the dynamic nature of information needs which are assumed to develop over time for the same user, a variability which is presumed to be strongly connected to the processes of relevance assessment.
2013
The key technology for knowledge management that guarantees access to large corpora of both structured and unstructured data is the information retrieval (IR) system; the ones most commonly used on an everyday basis are search engines. This study developed and validated an evaluative model, from the user's perspective, for assessing these systems using a user-centered approach. Items used and validated in other related studies were used to elicit responses from over 250 users. The reliability and validity of the measurement instrument (MI) were demonstrated using statistics such as internal consistency, composite reliability and convergent validity. After assessing the reliability and validity of the MI, the resulting evaluative model was estimated for goodness-of-fit using the structural equation modeling (SEM) technique. Results confirmed that the suggested model is valid and will be useful to researchers who wish to use it. Thus, this study suggests both the parameters and methods...
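As a rough illustration of two of the reliability statistics named above, the sketch below (with made-up item responses and factor loadings, not the study's data) computes Cronbach's alpha for a set of questionnaire items and composite reliability from standardized factor loadings.

```python
# Illustrative only: Cronbach's alpha and composite reliability, computed
# from made-up questionnaire responses and standardized factor loadings.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists (same respondents, same order)."""
    k = len(items)
    item_vars = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

def composite_reliability(loadings):
    """loadings: standardized factor loadings of one construct's items."""
    sum_l = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)  # assumes standardized items
    return sum_l ** 2 / (sum_l ** 2 + error_var)

# Hypothetical 5-point responses to three items from five respondents:
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 4, 4, 4],
]
print(f"Cronbach's alpha:      {cronbach_alpha(items):.2f}")
print(f"Composite reliability: {composite_reliability([0.78, 0.81, 0.69]):.2f}")
```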