2020, https://www.ijrrjournal.com/IJRR_Vol.7_Issue.5_May2020/Abstract_IJRR0058.html
https://doi.org/10.4444/ijrr.1002/2000…
Item analysis is a process that examines examinees' responses to individual test items (questions) in order to assess the characteristics of those items and of the test as a whole. It is especially valuable for improving items that will be used again in later tests, but it can also be used to eliminate ambiguous or misleading items in a single test administration. There are different approaches to item analysis; in all of them, the general goal is to arrive at tests with the minimum number of items that will yield the necessary degree of reliability. Classical Test Theory (CTT) and Item Response Theory (IRT) are the two broad methodologies of test theory. In the CTT framework, indices such as item difficulty and item discrimination are calculated for each item from a selected sample, and the quality of an item is judged on the basis of these values. In IRT, also known as modern test theory, item characteristics are judged from the values taken by the parameters of the model chosen for the item response; the parameters are estimated from the samples chosen for the item analysis, and the quality of each item is decided from its parameter values. This paper explains the item analysis procedure in both the classical and Item Response Theory frameworks.
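To make the IRT side of this comparison concrete, here is a minimal Python sketch (not taken from the paper) of the three-parameter logistic (3PL) item response function, one common model for dichotomous items; the parameter values a, b, and c below are invented for illustration.

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Probability that an examinee with ability theta answers correctly
    under the 3PL model: P(theta) = c + (1 - c) / (1 + exp(-a*(theta - b)));
    a = discrimination, b = difficulty, c = pseudo-guessing parameter."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative (made-up) item parameters.
a, b, c = 1.2, 0.5, 0.2
for theta in (-2, -1, 0, 1, 2):
    print(f"theta={theta:+d}  P(correct)={p_correct_3pl(theta, a, b, c):.3f}")
```

Tracing the probability over a range of theta values like this traces out the item characteristic curve from which the item's quality is judged.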
1997
When norm-referenced tests are developed for instructional purposes, to assess the effects of educational programs, or for educational research purposes, it can be very important to conduct item and test analyses. These analyses can evaluate the quality of items and of the test as a whole, and can also be employed to revise and improve both. However, some best practices in item and test analysis are too infrequently used in actual practice. This paper summarizes recommendations for item and test analysis practices as reported in commonly used textbooks: determination of item difficulty, item discrimination, and the performance of item distractors. Item difficulty is simply the percentage of students taking the test who answered the item correctly; the larger the percentage getting the item right, the easier the item. A good test item discriminates between those who do well on the test and those who do poorly; the item discrimination index and discrimination coefficients can be computed to determine the discriminating power of an item. In addition, analyzing the distractors (incorrect alternatives) is useful in determining their relative usefulness; distractors should be modified if students consistently fail to select certain multiple-choice alternatives. These techniques can help provide empirical information about how tests perform in real test situations.
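As a rough illustration of the two indices this abstract describes, the following Python sketch computes item difficulty (proportion correct) and an upper-lower discrimination index on a made-up 0/1 response matrix; the 27% grouping rule is a common textbook convention, not something specified in the paper.

```python
# Item difficulty (proportion correct) and the upper-lower discrimination
# index for a small, invented 0/1 response matrix (rows = students,
# columns = items). Illustrative only.
import numpy as np

responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
])

totals = responses.sum(axis=1)                 # total score per student
order = np.argsort(totals)                     # sort students by total score
k = max(1, int(round(0.27 * len(totals))))     # conventional 27% groups
lower, upper = responses[order[:k]], responses[order[-k:]]

difficulty = responses.mean(axis=0)            # p-value per item
discrimination = upper.mean(axis=0) - lower.mean(axis=0)

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {i}: difficulty p={p:.2f}, discrimination D={d:+.2f}")
```

An item with a discrimination index near zero (or negative) fails to separate high and low scorers and would be flagged for revision.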
A test score obtained as the number-correct score is often used as an estimate of an examinee's proficiency. A problem with this approach is that it does not take into account the characteristics of the items, such as item difficulty, when estimating proficiency. Furthermore, when test scores are used to evaluate the performance of a school, the presence of examinees who do not respond according to their true ability (e.g., through guessing or copying) can yield an estimate of school performance that is lower than the actual performance. In this study, it is shown that this problem can be circumvented using an approach based on item response theory (IRT). Simulation studies were conducted to illustrate these points.
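To illustrate how an IRT-based proficiency estimate can differ from the number-correct score, here is a hypothetical sketch of maximum-likelihood ability estimation under a 2PL model; the item parameters, the response pattern, and the crude grid search are invented stand-ins for the simulation machinery the study actually used.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, pattern):
    """Log-likelihood of a 0/1 response pattern at ability theta."""
    ll = 0.0
    for (a, b), u in zip(items, pattern):
        p = p_2pl(theta, a, b)
        ll += math.log(p) if u == 1 else math.log(1.0 - p)
    return ll

# Made-up item parameters (a = discrimination, b = difficulty).
items = [(1.0, -1.0), (1.5, 0.0), (0.8, 0.5), (2.0, 1.0)]
pattern = [1, 1, 0, 0]   # note: [0, 0, 1, 1] has the same number-correct score

# Crude grid search for the MLE of theta; real software uses Newton-type steps.
grid = [i / 100.0 for i in range(-400, 401)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, items, pattern))
print(f"estimated ability: {theta_hat:.2f}")
```

Because the likelihood weights each response by the item's parameters, two examinees with the same number-correct score but different response patterns generally receive different ability estimates, which is exactly the property the abstract exploits.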
2011
In the standardized and objective evaluation of student performance, item analysis is a process in which both students' answers and the test questions are examined in order to assess the quality of the items and of the test as a whole. All students from selected primary and middle school classrooms were tested to evaluate their performance, and the tests were then redesigned on the basis of the analysis results. The results emphasize that item analysis provides teachers with valuable information for item modification and future test development, and offers an educational tool to assist them.
There are two currently popular statistical frameworks for addressing measurement problems such as test development, test-score equating, and the identification of biased test items: classical test theory and item response theory (IRT). In this module, both theories and the models associated with each are described and compared, and the ways in which test development generally proceeds within each framework are discussed. The intent of this module is to provide a nontechnical comparison of classical test theory and item response theory.
The practice of testing has become increasingly common, and the reliance on information gained from test scores to make decisions has made an indelible mark on our culture. The entire educational system today is highly concerned with the design and development of tests, the procedures of testing, instruments for measuring data, and the methodology for understanding and evaluating the results. Classical Test Theory (CTT) is a popular framework in the theory of measurement in education and psychology, and its techniques are applied in assessment situations to improve test analysis and test refinement procedures. The main purpose of this paper is to provide a comprehensive overview of CTT and its procedures as applied to test item development and analysis. CTT is used in measurement to extract maximum information about an individual; it is a scientific framework that has played a pioneering role in educational measurement and the psychometric process. CTT has served the measurement community for decades; alongside illustrating the simplicity of the CTT model from multiple points of view, the paper highlights various limitations of the model at the item, person, and ability levels. Despite these shortcomings, it is recommended that the classical test theory approach to item analysis be maintained in test development and evaluation, because of its superiority and simplicity in the investigation of reliability and in minimizing measurement errors.
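Since this abstract stresses CTT's role in investigating reliability, a brief sketch of one standard CTT reliability index, Cronbach's alpha, may help; the response matrix below is invented, and the computation follows the usual formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
# Cronbach's alpha, a standard CTT internal-consistency reliability index.
# The response matrix is invented for illustration (rows = examinees,
# columns = items scored 0/1).
import numpy as np

scores = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
])

k = scores.shape[1]                            # number of items
item_vars = scores.var(axis=0, ddof=1)         # sample variance of each item
total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```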
Proceedings of the 41st ACM technical symposium on Computer science education, 2010
Background: The practice of testing has become increasingly common, and the reliance on information gained from test scores to make decisions has made an indelible mark on our culture. The entire educational system today is highly concerned with the design and development of tests, the procedures of testing, instruments for measuring data, and the methodology for understanding and evaluating the results. In the theory of measurement in education and psychology there are two competing measurement frameworks, namely Classical Test Theory (CTT) and Item Response Theory (IRT), whose techniques are applied in assessment situations to improve test analysis and test refinement procedures. Objective: The main purpose of this paper is to provide a critical review of relevant empirical studies conducted to compare the two theories in test development. Results: Findings reveal that CTT and IRT are highly comparable; however, no study provides enough empirical evidence on the extent of the disparity between the two frameworks or on the superiority of IRT over CTT, despite their theoretical differences. Conclusion: The inability of these empirical studies to provide enough evidence of the superiority of IRT over CTT may result from the instruments used in conducting them. It is recommended that further studies be conducted with different tools to explore the true picture of the two frameworks and provide enough evidence to justify or refute their theoretical stances in the field of educational and psychological measurement.
Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi, 2019
The aim of this study is to introduce jMetrik, one of the open-source programs that can be used within both Item Response Theory and Classical Test Theory frameworks. In this context, the program's interface, importing data into the program, a sample analysis, installing jMetrik, and support for the program are discussed. In the sample analysis, the answers given by a total of 500 students from state and private schools to a 10-item math test were analyzed to see whether the items show differential item functioning (DIF) according to the type of school attended. The analysis found two items showing medium-level DIF. The study concludes that jMetrik, which can perform Item Response Theory (IRT) analyses for dichotomous and polytomous items, is open to innovation, especially because it is open source: researchers can easily contribute suggested code, and the program can thus be improved. An additional advantage of the program is that it produces visual results for the analyses, such as item characteristic curves.
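jMetrik's own DIF output is not reproduced here, but as a sketch of the kind of statistic such DIF analyses commonly report, below is a minimal Mantel-Haenszel common odds ratio for a single item, computed on simulated data; the group labels, sample size, effect size, and ETS delta rescaling are all invented or conventional rather than taken from the study.

```python
# Mantel-Haenszel DIF statistic for one item: examinees are stratified by
# total test score, and a common odds ratio is pooled across strata.
# All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)          # 0 = reference (state), 1 = focal (private)
ability = rng.normal(0, 1, n)
# Inject an artificial disadvantage for the focal group on this item (DIF).
p_item = 1 / (1 + np.exp(-(ability - 0.6 * group)))
item = (rng.random(n) < p_item).astype(int)
# Total score = this item plus nine simulated non-DIF items.
total = (rng.random((n, 9)) < 1 / (1 + np.exp(-ability[:, None]))).sum(axis=1) + item

num = den = 0.0
for s in np.unique(total):             # stratify by total score
    m = total == s
    a = np.sum(m & (group == 0) & (item == 1))  # reference correct
    b = np.sum(m & (group == 0) & (item == 0))  # reference incorrect
    c = np.sum(m & (group == 1) & (item == 1))  # focal correct
    d = np.sum(m & (group == 1) & (item == 0))  # focal incorrect
    t = a + b + c + d
    if t > 0:
        num += a * d / t
        den += b * c / t

alpha_mh = num / den                   # > 1 means the item favors the reference group
print(f"MH odds ratio = {alpha_mh:.2f}, ETS delta = {-2.35 * np.log(alpha_mh):.2f}")
```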