Papers by Tanya Nazaretsky
Exploring the Potential of Automated and Personalized Feedback to Support Science Teacher Learning
Lecture Notes in Computer Science, 2024

International Journal of Artificial Intelligence in Education, Apr 22, 2024
Writing high-quality procedural texts is a challenging task for many learners. While example-based learning has shown promise as a feedback approach, a limitation arises when all learners receive the same content without considering their individual input or prior knowledge. Consequently, some learners struggle to grasp or relate to the feedback, finding it redundant and unhelpful. To address this issue, we present RELEX, an adaptive learning system designed to enhance procedural writing through personalized example-based learning. The core of our system is a multi-step example retrieval pipeline that selects a higher-quality and contextually relevant example for each learner based on their unique input. We instantiate our system in the domain of cooking recipes. Specifically, we leverage a fine-tuned Large Language Model to predict the quality score of the learner's cooking recipe. Using this score, we retrieve recipes with higher quality from a vast database of over 180,000 recipes. Next, we apply BM25 to select the semantically most similar recipe in real time. Finally, we use domain knowledge and regular expressions to enrich the selected example recipe with personalized instructional explanations. We evaluate RELEX in a 2x2 controlled study (personalized vs. non-personalized examples, reflective prompts vs. none) with 200 participants. Our results show that providing tailored examples contributes to better writing performance and user experience.
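The two retrieval steps described above (quality filtering, then BM25 similarity) can be sketched in miniature. The mini-corpus, quality scores, and the `bm25_rank` helper below are illustrative stand-ins, assuming the learner's quality score has already been predicted; the actual system uses a fine-tuned LLM and a 180,000-recipe database.

```python
import math
from collections import Counter

def bm25_rank(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    n_docs = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n_docs
    df = Counter()                      # document frequency per term
    for d in docs_tokens:
        df.update(set(d))
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        score = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores

# Hypothetical mini-corpus of (recipe text, predicted quality score).
corpus = [
    ("whisk eggs with sugar then fold in flour and bake", 4.5),
    ("boil pasta drain and toss with garlic olive oil", 4.2),
    ("crack eggs into pan and stir until scrambled", 2.0),
]
learner_recipe = "beat eggs and sugar and bake the batter"
learner_quality = 3.0   # in RELEX, predicted by the fine-tuned LLM

# Step 1: keep only candidates of higher quality than the learner's draft.
candidates = [(text, q) for text, q in corpus if q > learner_quality]
# Step 2: pick the candidate most lexically similar to the learner's own text.
scores = bm25_rank(learner_recipe.split(), [t.split() for t, _ in candidates])
best_example = max(zip(scores, candidates))[1][0]
```

Filtering by quality before ranking by similarity ensures the retrieved example is both aspirational and relatable to the learner's own draft.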

Explainable Artificial Intelligence (XAI) seeks to render Artificial Intelligence (AI) models transparent and comprehensible, potentially increasing trust and confidence in AI recommendations. This research explores the realm of XAI within unsupervised educational machine learning, a relatively under-explored topic within Learning Analytics (LA). It introduces an XAI framework designed to elucidate clustering-based personalized recommendations for educators. Our approach involves a two-step validation: computational verification followed by domain-specific evaluation of its impact on teachers' AI acceptance. Through interviews with K-12 educators, we identified key themes in teachers' attitudes toward the explanations. The main contribution of this paper is a new XAI scheme for unsupervised educational machine-learning decision-support systems. The second is shedding light on the subjective nature of educators' interpretation of XAI schemes and visualizations.

arXiv, Dec 10, 2023
The increasing availability of Massive Open Online Courses (MOOCs) has created a necessity for personalized course recommendation systems. These systems often combine neural networks with Knowledge Graphs (KGs) to achieve richer representations of learners and courses. While these enriched representations allow more accurate and personalized recommendations, explainability remains a significant challenge, which is especially problematic in high-impact domains such as education and online learning. Recently, a novel class of recommender systems that uses reinforcement learning and graph reasoning over KGs has been proposed to generate explainable recommendations in the form of paths over a KG. Despite their accuracy and interpretability on e-commerce datasets, these approaches have scarcely been applied to the educational domain, and their use in practice has not been studied. In this work, we propose an explainable recommendation system for MOOCs that uses graph reasoning. To validate the practical implications of our approach, we conducted a user study examining user perceptions of our new explainable recommendations. We demonstrate the generalizability of our approach by conducting experiments on two educational datasets: COCO and Xuetang.
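The "recommendation as a path over the KG" idea can be illustrated with a toy graph. The sketch below uses plain breadth-first search rather than the learned reinforcement-learning policy, and every entity and relation name is invented for illustration:

```python
from collections import deque

# Toy knowledge graph as (head, relation, tail) triples; names are illustrative.
triples = [
    ("learner_1", "enrolled_in", "intro_python"),
    ("intro_python", "taught_by", "prof_kim"),
    ("prof_kim", "teaches", "data_analysis"),
    ("intro_python", "covers", "programming"),
    ("programming", "covered_by", "algorithms_101"),
]

graph = {}
for h, r, t in triples:
    graph.setdefault(h, []).append((r, t))

def explain_recommendation(start, target, max_hops=3):
    """Breadth-first search for a relation path from a learner to a course.
    The returned list of hops doubles as a human-readable explanation."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        if len(path) >= max_hops:
            continue
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

path = explain_recommendation("learner_1", "data_analysis")
```

Each hop in the returned path ("learner_1 enrolled_in intro_python", "intro_python taught_by prof_kim", ...) grounds the recommendation in relations the learner can inspect, which is what makes path-based recommendations explainable.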
Navigating Self-regulated Learning Dimensions: Exploring Interactions Across Modalities
Lecture Notes in Computer Science, 2024

Automated Identification and Validation of the Optimal Number of Knowledge Profiles in Student Response Data
It is well-known that the provision of personalized instruction can enhance student learning. AI-based education tools can be used to incorporate blended learning in the science classroom and have been shown to enhance teachers' ability to prescribe this personalization. In order to reveal student knowledge profiles from their response data, we must utilize classical educational data mining techniques, with cluster analysis being one of the key methods for doing so. However, while clustering algorithms typically require the number of clusters as a hyperparameter, there is no clear method for choosing the optimal number. Motivated by a practical instance of this foundational problem (deciding on the number of student clusters for a group-based personalization tool), this paper discusses several variations of the gap statistic to identify the optimal number of clusters in student response data. We start with a simulation study where the ground truth is known to evaluate the q...
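As a sketch of the underlying technique, the plain gap statistic (Tibshirani's formulation with a uniform reference distribution) can be implemented for one-dimensional data as follows. The student scores, the simple k-means routine, and all parameter values are illustrative, not the paper's setup.

```python
import math
import random

def kmeans_1d(xs, k, iters=20):
    """Plain 1-D k-means with quantile initialization.
    Returns the within-cluster sum of squared distances W_k."""
    xs = sorted(xs)
    centers = [xs[int((i + 0.5) * len(xs) / k)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda c: (x - centers[c]) ** 2)
            clusters[j].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sum((x - centers[j]) ** 2
               for j, c in enumerate(clusters) for x in c)

def gap_statistic(xs, k_max=4, n_refs=10, seed=0):
    """Choose k by comparing log(W_k) on the data against log(W_k)
    on uniform reference samples drawn over the data's range."""
    rng = random.Random(seed)
    lo, hi = min(xs), max(xs)
    gaps, sks = [], []
    for k in range(1, k_max + 1):
        log_w = math.log(kmeans_1d(xs, k))
        ref_logs = []
        for _ in range(n_refs):
            ref = [rng.uniform(lo, hi) for _ in xs]
            ref_logs.append(math.log(kmeans_1d(ref, k)))
        mean_ref = sum(ref_logs) / n_refs
        sd = (sum((v - mean_ref) ** 2 for v in ref_logs) / n_refs) ** 0.5
        gaps.append(mean_ref - log_w)
        sks.append(sd * math.sqrt(1 + 1 / n_refs))
    # Tibshirani's rule: smallest k with Gap(k) >= Gap(k+1) - s_{k+1}
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sks[k]:
            return k
    return k_max

# Synthetic response scores from two clearly separated knowledge profiles.
scores = [0.10, 0.12, 0.15, 0.18, 0.80, 0.82, 0.85, 0.88]
best_k = gap_statistic(scores)
```

The dispersion W_k drops sharply once k matches the true number of profiles and only marginally afterwards; the gap statistic formalizes the resulting "elbow" against a null reference.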
Engaging students in argument from evidence is an essential goal of science education. This is a complex skill to develop; recent research in science education proposed the use of simulated classrooms to facilitate the practice of the skill. We use data from one such simulated environment to explore whether automated analysis of the transcripts of the teacher's interaction with the simulated students using Natural Language Processing techniques could yield an accurate evaluation of the teacher's performance. We are especially interested in explainable models that could also support formative feedback. The results are encouraging: Not only can the models score the transcript as well as humans can, but they can also provide justifications for the scores comparable to those provided by human raters.

Towards Automated Assessment of Scientific Explanations in Turkish using Language Transfer
The paper presents a preliminary study on employing Natural Language Processing (NLP) techniques for automated formative assessment of scientific explanations in Turkish, a morphologically rich language with limited educational resources. The proposed method employs zero- and few-shot language transfer techniques for creating Turkish NLP models, obviating the need for extensive collection and annotation of Turkish datasets. The study utilizes multilingual BERT-based pre-trained transformer models and evaluates the effectiveness of different fine-tuning approaches using an existing annotated dataset in Hebrew. The results indicate that, despite being trained on imperfect automated translations of Hebrew responses, the best-performing models demonstrated adequate performance when evaluated on authentic Turkish responses. Thus, this research may provide a useful method for building automated assessment models for scientific explanations that transfer between languages.
Teachers' trust in AI-powered educational technology and a professional development program to improve it
British Journal of Educational Technology

International Journal of Artificial Intelligence in Education, 2022
Machine learning algorithms that automatically score open-ended questions can be used to measure students' conceptual understanding, identify gaps in their reasoning, and provide them with timely and individualized feedback. This talk will present the results of a study that uses Hebrew NLP to automatically score students' open-ended questions in Biology. The experimental results show that our algorithms achieve a high level of agreement with human experts, on par with related work in English, in which this area is well-established. The contribution is twofold. First, we present a conceptual framework for constructing grading rubrics that are designed to support automated guidance and are geared towards machine learning-powered automated assessment. Second, we use this approach to build an NLP pipeline for a new context, Hebrew, which belongs to a group of languages known as Morphologically-Rich. In languages of this group, among them also Arabic and Turkish, each input token may consist of multiple lexical and functional units, making them particularly challenging for NLP. This is the first study on automatic assessment of open-ended questions in Hebrew, and among the first to do so in Morphologically-Rich Languages.

LAK22: 12th International Learning Analytics and Knowledge Conference, 2022
Evidence from various domains underlines the key role that human factors, and especially trust, play in the adoption of technology by practitioners. In the case of Artificial Intelligence (AI) driven learning analytics tools, the issue is even more complex due to practitioners' AI-specific misconceptions, myths, and fears (e.g., mass unemployment and ethical concerns). In recent years, artificial intelligence has been introduced increasingly into K-12 education. However, little research has been conducted on the trust and attitudes of K-12 teachers regarding the use and adoption of AI-based Educational Technology (EdTech). The present study introduces a new instrument to measure teachers' trust in AI-based EdTech, provides evidence of its internal structure validity, and uses it to portray secondary-level school teachers' attitudes toward AI. First, we explain the instrument item creation process based on our preliminary research and a review of existing tools in other domains. Second, using Exploratory Factor Analysis, we analyze the input of 132 teachers. The results reveal eight factors influencing teachers' trust in adopting AI-based EdTech: Perceived Benefits of AI-based EdTech, AI-based EdTech's Lack of Human Characteristics, AI-based EdTech's Perceived Lack of Transparency, Anxieties Related to Using AI-based EdTech, Self-efficacy in Using AI-based EdTech, Required Shift in Pedagogy to Adopt AI-based EdTech, Preferred Means to Increase Trust in AI-based EdTech, and AI-based EdTech vs Human Advice/Recommendation. Finally, we use the instrument to discuss 132 high-school Biology teachers' responses to the survey items and to what extent they align with findings from the literature in relevant domains. The contribution of this research is twofold.
First, it introduces a reliable instrument to investigate the role of teachers' trust in AI-based EdTech and the factors influencing it. Second, the findings from the teachers' survey can guide creators of teacher professional development courses and policymakers on improving teachers' trust in, and in turn their willingness to adopt, AI-based EdTech in K-12 education.

Empowering Teachers with AI: Co-Designing a Learning Analytics Tool for Personalized Instruction in the Science Classroom
AI-based educational technology that is designed to support teachers in providing personalized instruction can enhance their ability to address the needs of individual students, hopefully leading to better learning gains. This paper presents results from participatory research aimed at co-designing with science teachers a learning analytics tool that will assist them in implementing a personalized pedagogy in blended learning contexts. The development process included three stages. In the first, we interviewed a group of teachers to identify where and how personalized instruction may be integrated into their teaching practices. This yielded a clustering-based personalization strategy. Next, we designed a mock-up of an AI-based tool that supports this strategy and worked with another group of teachers to define an 'explainable learning analytics' scheme that explains each cluster in a way that is both pedagogically meaningful and can be generated automatically. Third, we develope...

arXiv, 2018
Sequencing items in adaptive learning systems typically relies on a large pool of interactive assessment items (questions) that are analyzed into a hierarchy of skills or Knowledge Components (KCs). Educational data mining techniques can be used to analyze students' performance data in order to optimize the mapping of items to KCs. Standard methods that map items into KCs using item-similarity measures make the implicit assumption that students' performance on items that depend on the same skill should be similar. This assumption holds if the latent trait (mastery of the underlying skill) is relatively fixed during students' activity, as in the context of testing, which is the primary context in which these measures were developed and applied. However, in adaptive learning systems that aim for learning, and address subject matters such as K-6 Math that consist of multiple sub-skills, this assumption does not hold. In this paper we propose a new item-similarity measure, termed Kappa Lear...

Sequencing items in adaptive learning systems typically relies on a large pool of interactive question items that are analyzed into a hierarchy of skills, also known as Knowledge Components (KCs). Educational data mining techniques can be used to analyze students' response data in order to optimize the mapping of items to KCs, with similarity-based clustering as one of the two main approaches for this type of analysis. However, current similarity-based methods make the implicit assumption that students' performance on items that belong to the same KC should be similar. This assumption holds if the latent trait (mastery of the underlying skill) is relatively fixed during students' activity, as in the context of testing, which is the primary context in which these methods were developed and applied. However, in adaptive learning systems that aim for learning, and address subject matters such as K-6 Math that consist of multiple sub-skills, this assumption does not hold. In this paper w...
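As an illustration of the standard testing-context approach that this work revisits, Cohen's kappa is a common chance-corrected agreement measure between two items' correctness vectors. The response data below is fabricated, and this is the plain baseline, not the measure the paper proposes, which adapts this family of measures for learning settings where mastery changes during practice.

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary response vectors
    (1 = the student answered the item correctly, 0 = incorrectly)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa1 = sum(a) / n                             # P(correct) on item a
    pb1 = sum(b) / n                             # P(correct) on item b
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)       # agreement expected by chance
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

# Illustrative correctness vectors for three items over eight students.
item_frac = [1, 1, 0, 1, 0, 1, 0, 0]    # "add fractions"
item_frac2 = [1, 1, 0, 1, 0, 1, 1, 0]   # "subtract fractions" (similar skill)
item_geom = [0, 1, 1, 0, 1, 0, 1, 0]    # "area of triangle" (different skill)

same_skill = cohens_kappa(item_frac, item_frac2)
diff_skill = cohens_kappa(item_frac, item_geom)
```

Items tapping the same skill yield a high kappa, unrelated items a low one, which is exactly the similarity signal that clustering-based KC mapping relies on, and exactly the signal that degrades when mastery drifts during learning.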

Confirmation bias and trust: Human factors that influence teachers' attitudes towards AI-based educational technology
Evidence from various domains underlines the key role that human factors, and especially trust, play in the adoption of AI-based technology by professionals. As AI-based educational technology is increasingly entering K-12 education, it is expected that issues of trust will influence the acceptance of such technology by educators as well, but little is known about this matter. In this work, we present the opinions and attitudes of science teachers who interacted with several types of AI-based technology for K-12. Among other things, our findings indicate that teachers are reluctant to accept AI-based recommendations when they contradict their prior knowledge about their students, and that teachers expect AI to be absolutely correct even in situations where absolute truth may not exist (e.g., grading open-ended questions). The purpose of this paper is to provide initial findings and start mapping the terrain of this aspect of teacher-AI interaction, which is critical for the wide and ef...

As scientific writing is an important 21st century skill, its development is a major goal in high school science education. Research shows that developing scientific writing skills requires frequent and tailored feedback, which teachers, who face large classes and limited time for personalized instruction, struggle to give. Natural Language Processing (NLP) technologies offer great promise to assist teachers in this process by automating some of the analysis. However, in Hebrew, the use of NLP in computer-supported writing instruction was until recently hindered by the lack of publicly available resources. In this paper, we present initial results from a study that aims to develop NLP-based techniques to assist teachers in providing personalized feedback in scientific writing in Hebrew, which might be applicable to other languages as well. We focus on writing inquiry reports in Biology, and specifically, on the task of automatically identifying whether the report contains a properly...
