Navigating the Digital Frontier: A Literature Review on the Impacts of Digital State
Testing on Student Achievement.
Mariana D. Brasfield
College of Education, the University of Houston
CUIN:7347 Seminar in Learning, Design, and Technology
Dr. Michael Ahlf
November 7, 2023
Navigating the Digital Frontier: A Literature Review on the Impacts of Digital State
Testing on Student Achievement.
Standardized testing has been a fundamental tool for assessing student content knowledge
in the American public school system. However, recent years have witnessed a shift in how
public schools assess student knowledge. Through the use of technology devices, ‘digital testing’
has transformed how student content knowledge is assessed. This newest approach to
high-stakes testing has had significant implications for student performance. Backes and Cowan
(2019) reported that more than two dozen states transitioned from paper-based to
computer-based testing in 2016. This recent transition has raised concerns regarding its potential
impact on student performance. During their investigation, Backes and Cowan discovered a
decline in student performance following the transition to digital testing compared to those who
used the traditional paper assessment. This literature review will address the disparities between
computer-based and paper-based assessments, explore the issues around technological
infrastructure and its role in digital testing, analyze the impact of digital testing on student
achievement, and examine the adaptations in pedagogy affected by digital testing. Through this
review, we aim to synthesize the existing literature and gain insights into the factors contributing to
the rise of digital testing in public education and its potential impacts on student performance. By
understanding the dynamics of digital testing and its potential negative impact on student performance, policymakers can make evidence-based decisions that provide equal opportunities for students.
Background
High-stakes testing, commonly called standardized testing, has been the subject of much
debate in recent years. While policymakers and stakeholders hold different opinions concerning
the benefits of standardized testing, recent discussions have centered around the transition to
high-stakes computer-based assessments and its potential impact on student performance. This
recent shift to online testing represents a significant change in the policies that affect public
education. To understand how computer-based assessments may be affecting student performance, the history of high-stakes testing must first be examined. According to
Olson (2020), standardized testing is used by policymakers and federal education agencies to
track student performance to distribute resources to disadvantaged students. The Elementary and Secondary Education Act (ESEA) of the 1960s first mandated a snapshot of student performance (Olson, 2020). However, concerns over low student performance prompted
policymakers to mandate the minimum competency movement, which required students to learn
a set of prerequisite skills. This movement, directed by the federal education agency, laid
the foundation for what is known today as the accountability ratings system, which holds schools
and districts across the nation accountable for student performance. According to Olson (2020),
it wasn’t until the Clinton administration that states were first required to adopt state standards to
test mastery for all students across all subject areas, and it was the No Child Left Behind Act of 2001 that emphasized testing student mastery in mathematics and reading and language arts. The
results were catastrophic to public education, as teachers found themselves prioritizing
test-taking strategies rather than content. Shaffer et al. (2015) argue that the role of teachers
shifted from serving as the central provider of information to fulfilling the standards set by the
state, leading teachers to focus on test-taking strategies and teaching to the test. Although the
response to student achievement started with the intention of using state testing to allocate additional resources to underserved students, the inclusion of the NCLB Act of 2001 led districts to emphasize learning in the subject areas heavily tested at the end of the year, leaving subjects like science and social studies largely disregarded. Under
the pressure of standardized assessments and the accountability ratings system, teachers prioritized test-taking skills to ensure student mastery of state-mandated standards, leaving students to fall further behind than before.
Technological Infrastructures
In education today, technology is essential for providing students with access to
resources, engaging learning experiences, and academic tools necessary for success. However,
insufficient technological infrastructures and limited access to technology have negatively
impacted student performance in high-stakes online testing. Technology accessibility has faced
many challenges in the past few years, raising concerns about student equity among various
communities. Gordanier et al. (2023) observed that students who reported receiving SNAP or
TANF benefits performed significantly lower in all subject areas compared to their peers who did
not receive such benefits. The findings highlight the socio-economic disparities in student
performance, emphasizing the need to address technology accessibility as an equity issue. A
similar study found that students with internet and device access at home are able to log more
learning hours outside of the classroom (Ogundari, 2023), highlighting the socioeconomic
disparities in student accessibility to technology. Gullen (2014) emphasizes the need for students
to learn specific technological skills for success in high-stakes online testing and argues that
skills learned on one device are not easily transferable to another. For example, students in low
socioeconomic areas often have limited access to traditional keyboards and other computer
accessories, which leads them to acquire some skills through using touchscreen tablets or
phones. These skills are often not transferable when using a laptop or computer and can affect
how students answer online tests. Furthermore, Gordanier et al. (2023) found that schools with more technological resources, including but not limited to greater bandwidth capacity and a higher ratio of individual student devices, were associated with reduced negative effects of computer-based testing. While these differences were not statistically significant across communities, they point to the potential benefits of ensuring every student has
equitable access to technology and the positive impact it can have on student achievement. The
lack of access to technology and the skills it requires widens the gap in technological proficiency
among students. In addition to issues with equitable access to technology, it is essential to
explore the challenges associated with technological infrastructures and their impact on student
performance. Online testing software demands substantial bandwidth to deliver tests and
load questions and tools to students. However, many schools nationwide lack the necessary
infrastructure to support such demanding software effectively. For instance, in April 2013,
several states reported multiple issues with their online testing systems. Reported problems included “slow loading times of test questions, students were closed out of testing mid-answer, and some were unable to log into the test” (Davis, 2020, sec. 1), which can result in irregular test results and missed deadlines, leading states to grant extensions and further complicating the testing process.
Other findings have suggested that the cost of switching to computer-based assessment software
can be significant; this is especially true when considering the cost of providing each student
with a device. One study suggested the financial burden of providing each student with a device
would lead to “short-cuts…taken in terms of visual display unit size and resolution, or
processing power of the hardware” (Brown, 2019, p. 14), which would make such devices difficult for students to test on. This consideration is crucial when addressing issues of inequity, as discussed previously. Lower-resolution devices can strain students' vision and prevent them
from seeing the entirety of the test questions and stimuli without scrolling down or switching
between pages (Brown, 2019), negatively impacting student performance, which can be
detrimental to accountability ratings. These recent studies have emphasized the importance of
addressing technological equity among students in state high-stakes testing.
Digital Testing in Education
The consequences of computer-based assessments have long been discussed among
educational researchers. One of the primary concerns regarding digital testing is the transfer of
skills from paper-based assessments to their digital counterparts. However, researchers argue that
the advantages of digital assessments, particularly in standardized testing, outweigh the potential
drawbacks. Noyes and Garland (2008) found that the elimination of physical test booklets has led to substantial cost reductions in high-stakes testing as well as a reduction in human errors during test administration. This cost-effectiveness not only benefits educational institutions but also helps keep errors in standardized testing to a minimum. Additionally, testing experiences for
students have been generally positive. According to Dasher and Pilgrim (2022), students benefited
from testing accommodations that closely resembled classroom experiences. For example,
students who receive oral administration can now have the text read aloud to them using
text-to-speech tools. Online digital dictionaries have also been found to be more user-friendly (Keng et al., 2008). These findings emphasize how technology has enhanced the
accessibility of tests for student needs and highlight the advantages of digital testing, making
assessments more inclusive and efficient. However, in contrast to the positive use of digital tools,
the negative implications of online testing were also observed. One significant issue students
encountered was related to the formatting of test items. According to Dasher and Pilgrim (2022),
reading passages were formatted to fit smaller computer screens, resulting in students needing to
scroll up and down to read the content. The added demand has caused students to face challenges
when searching for answers, adding unnecessary stress. Backes and Cowan (2019) also discuss
the likelihood that paper tests are generally more accessible than online tests because students
can review and revise their previous responses. Furthermore, the issue of technology skills was
also discussed. In her study, Gullen (2014) explores how a lack of basic computer skills, such as scrolling, dragging and dropping, and using a cursor, may be a potential indicator of lower test scores and reduced engagement in online tests. Students who lacked these essential technology skills often experienced elevated levels of stress related to digital testing, suggesting that a lack of computer experience could lead to lower test results. Nevertheless, a study
by Molnar (2015) found no significant differences in student performance between those with and without computer access at home. This suggests that familiarity with essential computer skills had
minimal impact on student performance in computer-based assessments.
Impact of Digital Testing on Student Achievement
As previously discussed, computer-based testing has been associated with mixed effects on student performance. Some argue that computer-based assessments have negative implications for student performance, while others suggest that the impact on
performance is minimal or non-existent, revealing a range of outcomes. For instance, a study by
Backes and Cowan (2019) found that online testing led to lower scores in both Mathematics and
English Language Arts (ELA) compared to traditional paper tests, with a significant decline in
ELA scores. Similarly, a study conducted by Keng et al. (2008) found that students had a higher chance of answering a question correctly on the paper format of the test. This implies
that students were more likely to perform better on paper-based assessments than on online assessments. These findings highlight the potential negative impact of
computer-based assessments, especially in high-stakes testing. On the other hand, some studies
have found no clear correlation between declining student performance and online testing.
Research conducted by Wang et al. (2007) showed few statistically significant differences in
reading scores between computer-based and paper-based assessments. The study, however,
revealed several inconsistencies in the comparability between online assessment questions
and their paper-based counterparts, which affected testing performance. Similarly, Molnar (2015)
found that even when questions had the same difficulty level, students perceived the paper
version of the test to be more accessible, but there was little evidence that the mode of delivery was the sole factor contributing to differences in performance. Lower student performance might
not only be attributed to the mode of testing delivery but could also be influenced by factors such
as question difficulty and the challenges students may face with answering those questions.
Analysis
The impact of computer-based assessments on student performance continues to be a
major topic for research. The conflicting evidence suggests that student computer abilities might
not be the single factor influencing student performance in computer-based assessments. Much
of the existing research centers on comparing paper-based and computer-based assessments, focusing on performance in each format. However, to gain a deeper understanding of the
factors affecting student performance, the research must consider the cognitive abilities assessed
by each question and the challenges students face when answering them. This raises
questions about the alignment of the curriculum to the online version of the assessment and how
teaching methods have adapted to meet the increased demands of computer-based testing.
Adaptations to Pedagogy
The challenges that students face when answering questions can be attributed to two key
factors: first, the alignment between the curriculum and standardized tests, and second, the
difficulty of interactive and active question types used in online assessments. The combination of
these two factors can negatively affect student performance in computer-based assessments. Au
(2007) argues that one of the ways in which high-stakes testing affects curriculum is through the fragmentation of knowledge, in which content is taught in isolated pieces that focus on the content of the test. According to Polesel et al. (2013), the effect is teacher-centered instruction, which focuses on lectures and the lower-order thinking skills assessed by standardized testing, removing
valuable learning experiences from students and constraining a deeper conceptual understanding
of the content. This set of challenges provides insight into the significance of new interactive
question formats that assess student understanding and comprehension and their repercussions on
student achievement. Interactive questions were added to the State of Texas Assessments of Academic Readiness (STAAR) as of the 2022-2023 school year (Texas Education Agency, 2022). According to the Texas Education Agency, the new question types were
designed to be aligned with classroom experiences and the types of questions that teachers ask in
the classroom. According to a study, interactive questions were found to be more challenging
than static question formats; the research concluded that interactive questions tap into
higher-level thinking abilities and require a greater degree of technical experience (DeBoer et al., 2014). This demonstrates a misalignment between the increased rigor of standardized
assessments and the role of teaching practices centered around test-specific content, which
ultimately neglects to provide students with valuable learning experiences in which they apply
higher-order thinking skills related to real-world scenarios.
Analysis
Although the literature provides insight into the challenges faced by students with
computer-based high-stakes testing, there are limitations to the studies and research. As the
literature suggests, prior computer experience does not account for significant differences in
student performance, even in schools with years of experience using computer-based
assessments (Gordanier et al., 2023), implying that other factors influence lower student
performance. One limitation of past studies is their focus on comparing computer-based assessments with the paper format, examining how comparable the paper test is to its computer version without acknowledging students' knowledge of the content. While these factors can influence student performance in any test
format, the integration of interactive and active question types adds a layer of challenges for
students. Therefore, additional research is needed that focuses on the effects that new active and interactive item types have on student performance. The potential findings can
influence educational policies that can help address the gap between curriculum, testing, and
effective teaching practices that focus on the conceptual understanding of all content rather than
test-specific content.
The transition from traditional paper-based testing to computer-based assessments has
brought opportunities and challenges to the field of education. Digital testing offers potential cost
reductions, improved student accessibility, and a more in-depth understanding of student content
knowledge. However, the transition to digital testing has also highlighted the technology divide, both in terms
of accessibility and mastery of digital skills, which has been a significant concern. This divide
can increase existing inequalities associated with student performance, emphasizing the need for
equitable access to technology. The factors impacting student performance in high-stakes testing
are complex. While some studies have shown a decrease in student performance on online assessments, others have found the decline to be statistically insignificant. These findings
suggest that computer-based testing may not be the only factor impacting student performance.
Moreover, teacher-centered practices focused on standardized testing are a cause for concern.
The addition of interactive item types highlights a misalignment between effective
teaching practices focused on fostering student concept knowledge and the increased rigor of
standardized testing. This provides additional challenges in aligning teaching practices with a
curriculum that fosters students' higher-order thinking skills. As the literature demonstrates, there
are limitations to current research studies, particularly in their emphasis on comparing
computer-based and paper-based assessments. A comprehensive investigation should not only
consider the format of the assessments but also address the cognitive skills assessed by each
question type and the challenges that students face when answering them. This
comprehensive approach can highlight all of the underlying factors affecting student
performance and the role that digital testing plays in them. Considering all of these observations, it is
evident that further research must be conducted. The results can lead to a deeper understanding
of the implications that interactive and active question formats have for student performance in
high-stakes testing. By addressing these issues, policymakers can make evidence-based decisions
that bridge the gap between curriculum, testing, and effective teaching practices focused on the
conceptual understanding of content over test-specific content. In conclusion, the transition to
digital testing in standardized assessments can potentially reform educational practices that
provide students with equal opportunity to succeed in the digital era of high-stakes testing.
References
Au, W. (2007). High-stakes testing and curricular control: A qualitative metasynthesis. Educational Researcher, 36(5), 258–267. https://www.jstor.org/stable/30137912
Backes, B., & Cowan, J. (2019). Is the pen mightier than the keyboard? The effect of online testing on measured student achievement. Economics of Education Review, 68, 89–103. https://doi.org/10.1016/j.econedurev.2018.12.007
Brown, G. T. L. (2019). Technologies and infrastructure: Costs and obstacles in developing large-scale computer-based testing. Education Inquiry, 10(1), 4–20. https://doi-org.ezproxy.lib.uh.edu/10.1080/20004508.2018.1529528
Dasher, H., & Pilgrim, J. (2022). Paper vs. online assessments: A study of test-taking strategies for STAAR reading tests. Texas Journal of Literacy Education, 9(3), 7–20. https://eric.ed.gov/?id=EJ1371475
Davis, M. R. (2020, December 10). Online testing suffers setbacks in multiple states. Education Week. https://www.edweek.org/teaching-learning/online-testing-suffers-setbacks-in-multiple-states/2013/05
DeBoer, G. E., Quellmalz, E. S., Davenport, J. L., Timms, M. J., Herrmann‐Abell, C. F., Buckley, B. C., Jordan, K. A., Huang, C., & Flanagan, J. C. (2014). Comparing three online testing modalities: Using static, active, and interactive online testing modalities to assess middle school students’ understanding of fundamental ideas and use of inquiry skills related to ecosystems. Journal of Research in Science Teaching, 51(4), 523–554. https://doi.org/10.1002/tea.21145
Gordanier, J., Orgul, O., & Zhan, C. (2023). Pencils down? Computerized testing and student achievement. Education Finance and Policy, 18(2), 232–252. https://doi-org.ezproxy.lib.uh.edu/10.1162/edfp_a_00373
Keng, L., McClarty, K. L., & Davis, L. L. (2008). Item-level comparative analysis of online and paper administrations of the Texas Assessment of Knowledge and Skills. Applied Measurement in Education, 21(3), 207–226. https://doi.org/10.1080/08957340802161774
Molnar, G. (2015). The comparative analysis of paper-and-pencil and computer-based inductive reasoning, problem solving and reading comprehension test results of upper elementary school students (251249866) [Doctoral dissertation, University of Szeged]. Semantic Scholar.
Noyes, J. M., & Garland, K. J. (2008). Computer- vs. paper-based tasks: Are they equivalent? Ergonomics, 51(9), 1352–1375. https://doi.org/10.1080/00140130802170387
Ogundari, K. (2023). Student access to technology at home and learning hours during COVID-19 in the U.S. Educational Research for Policy and Practice, 22(3), 443–460. https://doi.org/10.1007/s10671-023-09342-7
Olson, L. (2020). A shifting landscape for state testing. State Education Standard, 20(3), 7–11. https://eric.ed.gov/?id=EJ1277452
Polesel, J., Rice, S., & Dulfer, N. (2013). The impact of high-stakes testing on curriculum and pedagogy: A teacher perspective from Australia. Journal of Education Policy, 29(5), 640–657. https://doi.org/10.1080/02680939.2013.865082
Shaffer, D. W., Nash, P., & Ruis, A. R. (2015). Technology and the new professionalization of teaching. Teachers College Record, 117(12). https://www-tcrecord-org.ezproxy.lib.uh.edu/
Texas Education Agency. (2022, August 30). State summative assessment redesign FAQ. https://tea.texas.gov/student-assessment/testing/staar-redesign-faq.pdf
Wang, S., Jiao, H., Young, M. J., Brooks, T., & Olson, J. (2007). Comparability of computer-based and paper-and-pencil testing in K–12 reading assessments. Educational and Psychological Measurement, 68(1), 5–24. https://doi.org/10.1177/0013164407305592