Posts tagged ‘computing education research’
Come join us at the ITiCSE 2026 Doctoral Consortium!
I’ve reached the stage in my career where I’m attending conferences not because I’m presenting a paper but because I’ve agreed to take on a service role. That’s not a bad progression. I won’t be at ACM SIGCSE 2026 because I have neither a paper nor a service role — I’ll miss seeing everyone, and I hope you enjoy St. Louis.
I will be at ACM ITiCSE 2026 and 2027, serving as co-chair for the Doctoral Consortium. Please, PhD students, come join Monica Divitini from NTNU and me at this year’s DC in Madrid. More information on the DC and how to apply is here.
I will also be at ACM ICER 2026 and 2027, serving as co-chair for lightning talks and posters. I’ll pester you with that Call for Participation when the page gets posted.
Defining Learner-Centered Design of Computing Education: What I did on my sabbatical
My planned activity during my sabbatical was to revise my 2015 book “Learner-Centered Design of Computing Education.” One of the fixes I wanted to make was a better definition of what “learner-centered design” was. In the new edition, I wrote some formal defining stuff, and then I wrote the below — an extended metaphor to make distinctions between different kinds of “centering” in education. I’m sharing that section here (in its pre-reviewed and pre-edited state). It comes right after defining what the Zone of Proximal Development is and what student performance means.
There are many different kinds of teaching activity that can help a student reach a more sophisticated level of performance. A teacher can model successful performance. The teacher can give feedback on the student’s performance. The teacher can coach or guide a student while attempting a task. They can set expectations in the class which create a social context for success. They can use teaching methods that have a proven research record in promoting engagement and student performance.

Figure 1: A metaphor for teaching contrasting learner-centered and standards-centered
Consider teaching from the top or bottom of the ZPD. Here is a metaphor that distinguishes between two kinds of support, creating a geography of teaching. Imagine the ZPD as a climbing wall (Figure 1). The student is at the bottom and wants to reach the top. As depicted in the grayscale images in this figure, here are two ways a teacher might support the student in scaling this wall:
- The supporter at the bottom can help the student get started, giving them a “boost” or “leg up.”
- The supporter at the top can reach down, and get them the rest of the way to the top of the wall.
The supporter at the bottom is more flexible than the one at the top. She can move to where the student is actually standing. She can help the student scale different parts of the wall or even reach different goals along the wall. She can bend even further if the student is shorter.
But a disadvantage for the supporter at the bottom is that she cannot be absolutely sure that the learner reaches the top. She can meet the student where they are when they first face the wall. She can help them get started on whatever path they choose on the wall.
The supporter at the top can help students who are almost at the top of the wall. He can be sure that students actually reach the learning objective. When he is reaching down, he is in a fixed position. He can help the student reach the objective where he is at, the level that he has already achieved. He can also be sure when a student does not reach this standard – he can see the students who fall, or who do not make it to his level. He is in a better position to decide whether the student is going to achieve the desired objectives.
The supporter at the bottom is more learner-centered. The supporter at the top is more standards-centered. Neither supporter is particularly strong at helping the student in the middle, when the student is challenged to persist, to stay engaged, and to maintain motivation. If the student is not particularly interested in reaching the top of the wall — if they are satisfied making it part-way to the objective — then the learner-centered teacher has the most to offer.
Learner-centered teaching is concerned with helping students where they are, helping them to get started, and getting them engaged and motivated to tackle the mid-part. Low enrollment and high withdrawal or failure rates (sometimes called WDF rates) are issues that learner-centered teaching addresses. Learner-centered teaching also addresses issues of diversity, with the goal that all kinds of students can succeed in the class — even those who think that they cannot succeed or do not have the prior background to succeed.
Standards-centered teaching is concerned about making sure that students have what they need to go on, in their studies or in their career. Students who fail the second class because they did not learn enough in the first class is an issue for standards-centered teaching. Talking to industry partners about the desired outcomes is standards-centered. Concern about what graduates can do and achieve is a standards-centered teaching issue.
(I’m skipping some text here about teacher-centered, classroom-centered, and other forms of structuring education.)
I am splitting hairs a bit between child-centered and learner-centered. Learner-centered also starts from the students’ interests and considers the learner’s needs, and is very much about student construction of knowledge in their own minds, since that is how learning takes place. As described in Chapter 2, the knowledge to be learned in learner-centered education is defined by the community of practice. That is external to the learner.
Within the metaphor, I am describing three kinds of teaching: Learner-centered (supporter at the bottom), standards-centered (supporter at the top), and maintaining motivation and engagement (in the middle). Of course, teachers and students have to address all these issues, but it is sometimes useful to focus on one part. Consider this metaphor: If you have heart problems, it is important to go to a cardiovascular specialist. That does not mean that you do not need to care about your skeleton, digestion, and skin; you need all of those, but sometimes you can address critical issues or fix problems by specializing. I focus on the first one because it is the most important. I like the way my colleagues Amy Bruckman and Betsy DiSalvo put it:
Computer science is not that difficult, but wanting to learn it is.
A New Zealand Perspective on the Challenges of Computing Education: What I did on my sabbatical
During our sabbatical, Barb and I spent a week in Auckland. We gave talks at the Auckland University of Technology and University of Auckland. Alison Clear (past Chair of the SIGCSE Board) and Tony Clear hosted us in their home, which was made even more delightful by Cary Laxer and his wife visiting. Alison organized a picnic with Paul Denny and Andrew Luxton-Reilly and their families. Tony hosted us at AUT, and Paul hosted us at the University of Auckland. It was a wonderful experience. If you ever get the chance to try Alison’s cooking, you have to take it. She’s amazing.
We got the chance to talk to Tony about some of his recent columns in Inroads magazine. I admit that when I get a new issue of Inroads or Communications of the ACM, I skim the table of contents for new feature articles. I usually skip the columns. After talking with Tony about his recent columns, I realized that I was missing out.
Tony writes from his perspective as a New Zealand scholar. It’s different from the average American perspective. Those experiences and values lead to different questions and concerns.
We talked with him a good bit about his piece: “Large Language Models, the ‘Doctrine of Discovery’ and ‘Terra Nullius’ Declared Again?”. I didn’t know that the European colonists who came to Australia and New Zealand had papal permission. Australia was declared terra nullius — nobody owned the place, so go ahead and take it over. New Zealand was recognized as being run by the Māori, so colonizing there had to be negotiated. Tony asks, “So which of these models gives LLMs the right to consume the Internet?” Do we assume that nobody owns all the content on the Internet (like Australia)? Or should we be negotiating rights? The idea of LLM providers as a colonizing force was a fascinating perspective.
We also talked about his column that was published after we left: Project Carbon Budgeting. People in New Zealand were much more critical than in the US about data centers relying on nuclear power. New Zealand is a nuclear-free zone. They decided that the benefits are not worth the risks. Tony’s position is less about criticizing the LLMs themselves and more about our job as educators. LLMs are having a huge impact on CS education, but we are not talking enough about the ethics of their use — from energy demands to ecological impacts. It’s our job to raise these issues in our classes. In New Zealand, the fact that GenAI providers are nuclear-powered is a critical issue. CS educators should be talking about that.
I’ve been fortunate to know Tony for a long time. We have had lots of research discussions. We have served as mentors at the same Doctoral Consortia. It wasn’t until I was talking with Tony about his columns, after I’d already been living in New Zealand for several weeks, that I became attuned to the New Zealand perspectives that he was bringing to his columns. That’s entirely on me — I wasn’t paying enough attention.
But that’s made me think about where else we assume a shared perspective when there’s actually an important difference that helps us see situations in a new light. Alan Kay famously said, “A change in perspective is worth 80 IQ points.” Another quote (often attributed to McLuhan and his students, but actually older) makes the same point: “I don’t know who discovered water, but it wasn’t a fish.” If you don’t see a problem from other perspectives, you may not be seeing the problem at all.
Last month, my daughter was married in Indore, India. It was beautiful — an amazing set of ceremonies over several days. This trip was the most time I have spent in India, and the most I have traveled there. It was such a radically different context than my life as an American professor in a college town. When I came back to the new term, I was immersed in the on-going discussions about GenAI in our classes, about how AI is going to take everyone’s jobs, and on how we should start planning for a “post-labor” society. I understand why the people in my daily context are worried about AI. I’m not sure that it’s the same for people I met and interacted with in India. Will GenAI be changing the real estate business all that much? Construction? Being a travel guide for foreigners? Tailoring clothes? Driving an auto-rickshaw? Or even driving at all? I’d never trust an autonomous vehicle trained in the US on the streets in the Indian cities I visited. I know that I saw only a small slice of India, but even that small slice gave me a different perspective than my daily life. GenAI is going to change a lot, but maybe we overestimate the impact because of the bubbles we live in.
ACM SIGCSE now has the ACM Global Computing Education Conference (CompEd), held this last October in Botswana. I hope that this conference will help all of us see our CS education problems and issues in new perspectives. Tony helped me see the New Zealand perspective in his columns. My time in India gave me new insight into the US-centrism of the AI discussions I’m part of. We could use those additional 80 IQ points.
Dr. Tamara Nelson-Fromm defends her dissertation: What Debugging Looks like in Alternative Endpoints
In May, Tamara Nelson-Fromm defended her dissertation “A Qualitative Exploration of Programming Instruction for Alternative Endpoints in Post-Secondary Computing Education.”
I’ve talked about Tamara’s work a few times in this blog.
- One of her early projects was a teaspoon language to help history teachers to build history timelines (blog post).
- At PLATEAU 2024, she presented our paper suggesting that there was transfer from the Pixel Equations teaspoon language into building Image filters in Snap! (blog post).
- She presented our paper at SIGCSE 2025 on how we designed the PCAS courses oriented towards creative expression and social justice (blog post). Tamara worked with me on that design process, particularly on how to meet justice scholars’ desire for their students to learn about databases, HTML, and SQL (blog post) and on helping students to understand how a computer might generate language (blog post).
Tamara has published a lot more than that during her PhD work in part because she became an expert on reflexive thematic analysis. She worked with several other students on using RTA. At SIGCSE 2026, she and Aadarsh Padiyath will present their paper on how to use RTA for computing education research. I’ve read the paper and loved it — I have been recommending it widely.

Tamara with her committee: Valerie Barr (on Zoom), (from right) Nikola Banovic, Barry Fishman, Tamara, and me
I want to tell you about her dissertation, but I don’t want to divulge too much — only the first study has been published so far. The big idea that drives her work is alternative endpoints. She and I have talked a lot about the paper by Mike Tissenbaum and his colleagues. The big question that she’s helping to answer is “What will CS education look like as we move beyond producing more software developers?”
Study #1: New CS Teachers learning Debugging: Her first study investigated how we develop new CS teachers. From the start of her PhD, she has been interested in how students learn to debug. Her method was novel (and hard to get past reviewers). Instead of studying new CS teachers and how they learned debugging, she interviewed expert teachers of new CS teachers. She interviewed the people who run professional training, summer workshops, and the many other ways that teachers learn CS. Rather than track individuals (who might not struggle with debugging, or who might not be representative of new teachers), she talked to people who have been doing this for years. What do they do to teach debugging?
Here was the amazing answer: Avoid it. In hindsight, it makes all the sense in the world. Imagine: You’ve got a teacher new to CS in your workshop. In the first workshop (which is often all you get with teachers), you want them to succeed. You want them to come back for more workshops. So, you do all that you can to avoid bugs. Since bugs will still happen, you provide checklists and “Here’s what to look for if it doesn’t work” guidance.
Of course, really learning to debug comes later…or does it? Tamara raises the intriguing possibility that maybe that’s enough. For what these teachers are doing (especially in primary school), maybe it’s enough to just have checklists. Again, it’s about alternative endpoints — what does a K-12 teacher need to know about debugging? The paper on her first study will appear at SIGCSE 2026 in February.
Study #2 and #3: PCAS Students: Her second and third studies involved PCAS students. In her second study, she looked at why arts, sciences, and humanities students would want to take courses involving programming. In her third study, she returned to the theme of the first study — how do PCAS students debug?
I don’t want to say too much about these studies, but I do want to tell one story from Study #3 that connects strongly to the story about teachers in Study #1. One of the ways that Tamara saw PCAS students debugging was the way that your modern mechanic fixes your car.
Mechanics today do not need to know how your car actually works. Instead, they plug it into the diagnostic machine, and they get a code. The code tells the mechanic where the problem is. The mechanic then follows a procedure or (more likely) replaces a part — whatever the manufacturer guidance is for that code. They then try it again.
That’s how some of the PCAS students debugged. Each assignment for the arts and humanities classes was open-ended, and I gave them completely working examples. The students would write their programs and try them. If they didn’t work, they’d check that they didn’t make a simple mistake. If they couldn’t figure it out, they would go back to one of the worked examples and copy-paste the part that worked and did about the same thing. Then they’d test again. If they still couldn’t get it to work, they’d explore changing what they were trying to do, so that they still met the requirements — but they could get it working.
Is this a problem? Do the students need to learn better debugging skills? Let’s go back to alternative endpoints again. Not everyone needs to have a strong mental model of the working program.
Tamara wasn’t prescriptive in her dissertation. She didn’t make judgements of good or bad. Rather, she described the world as she found it, and raised the reasonable possibility that what she saw is working just fine.
Tamara’s dissertation is important. The alternative endpoints paper suggested that we should think about different audiences learning to program for different purposes than software development. Tamara showed us what that is looking like.
Creating a measure of Critical Reflection and Agency in Computing
I stopped blogging while I was on sabbatical because I had to focus on finishing the second edition of Learner-Centered Design of Computing Education. And then we came back from sabbatical. I’d heard that it was tough getting back to normal work after sabbatical, and it was. I had it easier than most (e.g., I came back to summer time, and I had a light teaching schedule this Fall). But it was still a transition, so it’s taken me a while to get back to blogging.
In the meantime, Aadarsh Padiyath published two papers (and a poster) about the development and validation of an instrument to measure Critical Reflection and Agency in Computing. Aadarsh Padiyath is a PhD student (soon to graduate! Hire him!) advised by Barb Ericson and me. He last appeared here with a guest post a year ago with a pushback against technological determinism — computing education researchers assuming that the future of CS education can be predicted by the development of ChatGPT.
These new papers are about the second study from his dissertation. Aadarsh is interested in how we can better prepare computer science students for recognizing and dealing with ethical issues. Typically, we do that with computing ethics classes. But do they work? Aadarsh recognizes that being able to measure progress is an important way to encourage progress.
In May, he published a paper at CHI 2025, “Development of the Critical Reflection and Agency in Computing Index.” The title captures the two aspects of computing and ethics that Aadarsh is most interested in — that students reflect on the ethical implications of their work and that they have a sense of agency, i.e., that they can do something that can address problems. This first paper was about defining the constructs (see Table 1 below). He created 45 items for his measure. He had a panel of experts review the items, and he interviewed five undergraduate students as they responded to the items. His paper was recognized with a Best Paper Honorable Mention.

Aadarsh presented a poster at SIGCSE 2025 in February, “The Development and Validation of the Critical Reflection and Agency in Computing Scale.”
The big finale was his ICER 2025 paper in August, “Validation of the Critical Reflection and Agency in Computing Index: Do Computing Ethics Courses Make a Difference?”. This paper summarized the CHI 2025 story of how the index came to be, then presented the results of a two-round validation study (474 participants in one, 464 in the other). Overall, he has strong support for the validity of his measure.
But in addition to taking the measure, Aadarsh asked the participants if they had taken a computing ethics course. He found “Participants who completed computing ethics courses showed higher scores in some dimensions of ethical reflection and agency, but they also exhibited stronger techno-solutionist beliefs, highlighting a challenge in current pedagogy.” Here’s my interpretation of his results: after taking a course in computing ethics, students were more reflective (yay!) and believed that they could make a change if they saw an ethical problem in their work (double yay!), but they tended to believe that more technology is the answer to addressing ethical problems with technology (uh-oh).
This is an impressive set of papers. It gives us a way of measuring the impact of our interventions in teaching computing students about ethics. It also highlights some real issues that we should be addressing in our computing ethics classes.
Three stories about how CS is overwhelming, and ideas for how we can do better
When we looked at how PCAS students thought about our classes (for our SIGCSE 2025 experience report), I was surprised at students’ use of the word “overwhelming” when talking about CS classes. I was pleased that they positively contrasted our courses with CS classes that they had taken previously, but I didn’t realize how much baggage the students brought with them — how negatively they perceived computer science. Some students told us how emphatically they did not want a job in the technology industry and didn’t want to take CS classes. When Tamara Nelson-Fromm interviewed the PCAS students from our first semester, she told me that every student she interviewed had tried to learn programming (via formal or informal means) and failed. That’s why they were trying the PCAS courses. Most weren’t looking for a sense of belonging in CS — they had their identities as artists or scientists or managers.
My guess is that students made this choice away from CS pretty early on. Some studies support the proposition that students make career decisions by late elementary school. We know that less than 10% of US high school students take a CS class each year (State of CS Ed Report). And while undergraduate CS classes and majors have grown, the majority of students at any University are not choosing CS. And as I described in an earlier post in this series, the kind of computing that students use outside of CS classes is different from what’s inside of CS classes.
I shouldn’t have been surprised about what students were telling Tamara. I had read about the #techLash. There is a lot of literature about how much CS overwhelms students. There’s also literature on how we can do better. Here are three of my favorite papers in this space.
“‘I like computers. I hate coding’: a portrait of two teens’ experiences” by Paulina Haduong communicates the punchline in the title. This is a rich, qualitative study of students who love to use computers, but who hated their experiences with Hour of Code, with Scratch, and with formal and informal education around programming. In the end, though, this is a positive paper. As Paulina writes, “These learners’ experiences illuminate the ways in which identity, community and competence can play a role in supporting learner motivation in CS education experiences.”
“‘I Always Feel Dumb in Those Classes’: A Narrative Analysis of Women’s Computing Confidence” by Amanda Ross and Sara Hooshangi is another paper that tells the punchline in the title. Amanda is completing an Engineering Education PhD at Virginia Tech, and has accepted a job at Rose-Hulman (congratulations both to her and Rose-Hulman!). For this paper, she interviewed women who succeeded in introductory CS and had high self-efficacy, but still dropped out of computer science.
Results show that while participants were highly successful in their course (reporting a high mark in the class) and had relatively high self-efficacy when discussing specific programming problems, they lacked computing self-concept in whether or not they were good at programming in general.
These first two papers show women who are interested in programming and who are good at it, but struggle to succeed in CS education. Why? Maybe it’s because of how we frame the field of computer science.
Being a software developer is a hard job — you translate requirements to code, and you aim for the code to be robust and secure. Most people who program (scientists, artists, end-user programmers, critical computing scholars, etc.) are programming for themselves, to achieve a goal of their own or to express themselves. It’s only the minority of programmers, the professional software developers, who code primarily for others. So, I understand why classes to prepare future software developers are about the hard task of precisely converting specifications to well-tested program code. But that’s maybe 10% of people who program.
The field of computer science has developed a narrow frame. We could have a broader one, one that includes the way that other disciplines use programming. I argued in an earlier post that we can broaden participation in computing by making computing education broader than just what CS and the Tech industry wants.
I have been telling people about this talk by Felienne Hermans from SPLASH 2024, “A Case for Feminism in Programming Language Design.” I highly recommend her paper with co-author Ari Schlesinger (which you can find here), but if you are interested in how computing became so male and so uncomfortable for female students, you must watch this talk. It’s a compelling and thought-provoking story, which I found both emotional and insightful. Watch through the Q&A if you want to get some additional evidence that Felienne is right about how she describes the field and how computer scientists push against a broader framing.
I appreciated Felienne’s point that computer science has confused “hard” with “interesting” or “valuable.” We overly value things that are hard to do, which leads us to undervalue things that are interesting, valuable, or useful but are not necessarily hard to do (e.g., studying how people build in Excel is interesting and valuable, even if it’s not as “hard” as studying programmers building million LOC systems). I have heard this sentiment voiced lots of times. “The study was really not that much. I don’t see why it’s interesting.” “The system wasn’t hard to do. Anyone could have built it. It’s not really a contribution.” “Anyone could have thought of that.” An academic contribution should be judged by what we learn, not by how hard it was to do or invent. That focus on being hard is part of what drives students away from computer science.
Felienne and Ari’s paper helps to explain the tension between Computer Science departments and Computing Education Researchers. CER work doesn’t look like CS “hard” work. My students don’t typically build a big piece of code that gets used by thousands. Some of my students tested educational psychology theories in a computer science context (see blog post about Brianna Morrison’s work as an example). Some of these experiments are replication studies. There’s an obvious hypothesis — that what was seen by educational psychologists in other fields would likely be true in computer science, too. Whether the replication worked or not, the findings are novel contributions to CER because they tell us something that we didn’t know before.
Making computer science classes more welcoming and inviting isn’t about changing the nature of computer science. Paulina, Amanda, and Felienne are talking about and with people who love working with computing. The goal is not to reduce rigor. The goal is to remove unnecessary constraints. We can allow students to express themselves in computer science classes. We don’t have to make students feel dumb in computer science classes. We need to be open to broader definitions of what counts and is important in CS. We need a larger frame for the field of computer science and the goals of computing education.
I started this blog post series in February, describing how we designed the PCAS courses for arts and humanities students. The next post described how computing education was different than CS education. I offered two posts on computing education in the arts and in the sciences. My previous post was a recommendation that CSTA (and primary and secondary school overall) focus more on computing education for everyone and less on CS education. I’m ending the series here with a post on how to make computing education work for all students, whether aimed at technology for their career or not. I’m making an argument for computing education for all and even more specifically programming for all. The way we get there is by looking at how the whole world uses computing and programming, not just what the computer scientists want.
How scientists learn computing and use LLMs to program: Computing education for scientists and for democracy
Gabi Marcu is a professor in the University of Michigan School of Information who studies technologies to promote health (see her website). She’s also an improv performer, and a friend. She asked me to participate in her project to combine research and improv called “Extra Credit.” She asks researchers to explain their research in 10 minutes, then a group of improv artists riff on the research for another 10 minutes.
The first speaker in the session I participated in was Joyojeet Pal, who studies social media and politics. He was hilarious — I heard one of the improv performers quip, “Wait — it’s our job to be funny!”
I talked about what everyone needs to know about computing to support democracy, with a focus on our recent course on AI. Barb recorded it and allowed me to share it.
I learned the most from hearing and meeting Elle O’Brien (see her website). Elle is a computational neuroscientist who decided to go meta. She now studies how scientists learn and use computational methods.
She had a paper in Harvard Data Science Review last year on how scientists learn computational methods, “In the Academy, Data Science Is Lonely: Barriers to Adopting Data Science Methods for Scientific Research.” In her study, it didn’t go well:
These scientists quickly identified that they lacked the expertise to confidently implement and interpret new methods. For most, independent study was unsuccessful, owing to limited time, missing foundational skills, and difficulty navigating the marketplace of educational data science resources.
I was surprised how much the scientists in her study needed more curation. There’s no lack of ways of learning data science — videos, tutorials, MOOCs, books, bootcamps, and on and on. But Elle was talking to working scientists. They were busy professionals. They struggled to find the right learning materials for their level of knowledge that matched what their field used.
Elle and I have both noticed how many different computational cultures there are across the sciences and liberal arts. These scientists use R, and these others use Python — even in the same department. They talk together about their science, but not really about code. Computational artists I’ve met at Michigan only use Processing or Unity. I’ve learned that Economists at Michigan mostly use Stata, a tool that I’d never heard of before my informants (two Economics faculty and a PhD student) told me about it. While programming is common across the sciences, actually taking CS classes is rare among scientists that we’ve worked with. Most of the programming science faculty we met are self-taught, or learned through apprenticeship from the labs and groups they came up through.
Elle observed that computational scientists she works with are increasingly multi-lingual. They might use Python for some of their tasks (data processing, data cleaning, modeling, and/or simulation), then use R for statistics and visualizations. They are making choices for programming languages based on the libraries and communities that use those tools, not on the characteristics of the languages themselves. I’ve worked with some scientists who also work in multiple language ecosystems, but within the constraint that they’re trying to optimize their time. They’re not trying to transfer their knowledge of programming from Python to R — they’re just trying to get their work done. “Recipes” of how to do things in R are just fine for them.
Elle tries to convince some of her scientists to consider using version control systems, but they don’t see much benefit. Few scientists that either of us work with are inventing new abstractions. They write code (often, no more than a screenful) to get a job done, then throw the code away. They care about the data and the results, not the code. If you don’t invent new abstractions and you don’t reuse code, what does GitHub buy you?
Elle has a new paper appearing in CHI 2025 that is fascinating and relevant: “How Scientists Use Large Language Models to Program.” She finds that “scientists often use code generating models as an information retrieval tool for navigating unfamiliar programming languages and libraries.” Again, they are busy professionals who are trying to get their job done, not trying to learn a programming language.
I was impressed with how much effort the scientists that she studied put into checking what the LLMs produced. One scientist ran code in a familiar system to compare against the results of the LLM-generated code. They all wisely distrusted the LLM code, more than I typically see from computer scientists (and especially computer science students), who often don’t check LLM-generated code at all.
And yet, the LLMs still inserted bugs that the scientists missed. LLMs are absolutely nefarious in how and where they generate bugs. Elle raises the possibility that LLMs are having a negative influence on the scientific enterprise.
Elle is engaging in computing education research, though I don’t think that she thinks of herself that way. She’s not likely going to submit anything to ICER or the SIGCSE Symposium anytime soon, but computing education researchers need to know about work like hers. She’s studying scientists through the lens of being a scientist who uses computing, not a computer scientist. She knows more about what scientists need from programming and how they learn programming than most computer scientists or computing education researchers I know.
My Favorite ICER 2024 Paper: How Media Artists Teach Computing
I’m hesitant to state a preference for my favorite paper at the International Computing Education Research (ICER) Conference in 2024. There were so many cool papers (including some by my students!). But it’s an easy choice if I use the heuristic, “Which paper have I still been thinking and talking about the most after the conference?”
My favorite paper of ICER 2024 was Alice Chung and Philip Guo’s paper “Perpetual Teaching Across Temporary Places: Conditions, Motivations, and Practices of Media Artists Teaching Computing Workshops.” It’s a study of real media artists who teach computing in workshops. The first sentence of the paper is “Why and how do new media artists teach computing?” I love this question, and the answers are fascinating.
One of their observations is that media artists teach as part of their practice. They’re always learning new tools and practices, and also always sharing them. Let’s contrast this with software engineering. How many professional software developers also teach software development? How many consider it integral to their practice? Or swap the question — how many CS1 instructors are also professional software engineers?
Our study finds that artists strategically understand and respond to these conditions, developing what we call perpetual teaching – reframing the internalized duty or responsibility of perpetual training into pedagogical frameworks
So why? Why would media artists spend their time teaching? It’s about trying to be critical about what they’re doing.
We found that artist-educators are motivated by creating spaces to unlearn ineffective conventions and incubate new cultures rather than by technical knowledge transfer alone. Furthermore, they intended to design their workshop materials (e.g., prompts, activities, reading lists) to prepare participants to create critical interpretations of computing outside of mainstream tech career pipelines.
This is such an interesting goal and a contrast with computer science education. Artist-educators want to make new things and explicitly contrast with traditional technology paths. They want their students to be media artists who are critical of what’s happening in the rest of computing. Explicitly, media artist-educators are focused on alternative endpoints in computing education.
The paper goes into much more depth with examples and quotes from the artist-educators about their goals and motivations. I highly recommend reading the whole paper. It’s well-written and grounded in education literature.
I have had more conversations about this paper than any previous ICER paper that I am not a co-author of. In most of the conversations, a computing education researcher was critiquing the paper, and I was defending it. The biggest critique I heard is that the paper does not speak to CS educators’ issues and offers them no solutions to their problems.
I mostly agree, but that’s exactly why I’m so excited about this paper.
The International Computing Education Research (ICER) conference should be about more than computer science education. Of course, it’s important to study CS1 classes, CS majors, and how to produce great software developers. We need good CS education, and we need research on what’s going on in CS education and how to make it more successful — which includes studies of teachers. But there will be far more people programming than will ever take a CS1. Studying how people learn computing beyond CS and how to make their learning successful is important for our modern society. That’s computing education, and ICER needs more papers like this one that explore the much larger world beyond traditional CS education.
But in the best possible world, this paper does speak to CS educators, too. Alice and Philip write:
New media artists view teaching as a means to promote greater diversity in computing cultures, emphasizing education’s role in broadening participation and challenging traditional narratives.
Wouldn’t we wish that to be true of all CS educators, too?
CS doesn’t have a monopoly on computing education: Programming is for everyone
I participated in the first SIGCSE Virtual Conference last December. I was on a panel, “Assessments for Non-CS Major Computing Classes” (see the ACM DL paper here). The panelists were excellent. I was excited to meet Parmit Chilana, who came up with the idea of the conversational programmer in her 2015 VL/HCC paper. Her talk was particularly relevant to me because she emphasized how she is studying business students, not computer science students — her research is about how non-CS students interact with computing and programming. Jinyoung (Jina) Hur was our organizer, and she ran the panel, which left it to her advisor, Katie Cunningham, to present their fascinating work contrasting conversational programmers and end-user programmers in the CS classroom, which appeared at ICER 2024 (see paper here). Katie also shared some of her studies of conversational programmers starting from her dissertation work. Dan Garcia presented his work (with Armando Fox) on mastery learning, which gives even non-CS majors the chance to get top grades in introductory CS classes (see the nice piece that Berkeley Engineering wrote about their effort).
My talk was about what we’re assessing non-CS majors on. My claim is this: Computing education for non-CS majors is different from what we teach CS majors. It is important to figure out why non-CS majors are taking courses designed for CS majors (maybe they want to be conversational programmers or end-user programmers?) and to make sure that they can succeed (including getting good grades) when they are in those classes. However, it’s even more important to figure out the learning needs of the non-CS majors around computing and how to meet those — and then, how to assess the learning in meeting those needs. Education for CS majors is different from what non-CS majors need.
Here are a few examples of what I mean:
- When I asked Social Justice scholars what they wanted their students to know about computing, the top learning objective was for their students to understand that websites can be built from databases (see blog post about that story and our recent SIGCSE 2025 paper). Most CS majors probably don’t learn this.
- My colleague Gus Evrard led the effort to build our Python Programming for Sciences classes. He got four different departments (who were already teaching Python) to collaborate to define this new course. The course is about SciPy, cleaning data, NumPy, and building data visualizations with libraries like Matplotlib. Most data science programs cover these topics, but most computer science programs don’t.
- I’m pretty sure that the most popular programming language (in terms of number of people using it) on most campuses is R. All of Statistics is taught in R. It’s very common in Psychology, Anthropology, and Sociology. The Natural Sciences (chemistry, biology, physics) are increasingly using R for statistics and visualizations, even if they use Python for data management, modeling, and simulation. I haven’t found a computer science program yet that teaches R or computer science through R, e.g., explaining to students the computer science that is most relevant to them.
- End-user programmers most often use systems where they do not write loops. Instead, they use vector-based operations — doing something to a whole dataset at once. (Kind of like the higher-order functions that Kathi Fisler used to beat the Rainfall Problem.) Many scientists use R and NumPy on Python. Many engineers use MATLAB. Yet, we teach FOR and WHILE loops in every CS1, and rarely (ever?) teach vector operations first. The K-12 US state CS standards I’ve read explicitly include loops — teachers have to teach loops to meet the standard. End-user programmers likely outnumber traditional software developers (see some estimates). So why are we first teaching the stuff that fewer people use (hand-coded loops), requiring students to learn the harder forms?
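To make the contrast in that last bullet concrete, here is a minimal sketch (the temperature readings and the 30°C threshold are made up for illustration) comparing the loop style taught in most CS1 courses with the whole-dataset vector style that scientists using NumPy actually write:

```python
import numpy as np

temps_f = [68.0, 72.5, 95.1, 101.3, 88.4]  # made-up Fahrenheit readings

# CS1 style: an explicit loop, converting one element at a time
temps_c_loop = []
for t in temps_f:
    temps_c_loop.append((t - 32) * 5 / 9)

# Vector style: one operation applied to the whole dataset at once
temps_c = (np.array(temps_f) - 32) * 5 / 9

# Filtering is also a whole-dataset operation -- no loop, no IF
hot_days = temps_c[temps_c > 30]

# Both styles compute the same values
print(np.allclose(temps_c, temps_c_loop))
```

The vector version has no loop index, no accumulator variable, and no off-by-one errors to make, which is part of why end-user programmers find this style easier to get right.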
Here are the points that I want to make over the next few blog posts. Many, many people are programming today. A minority of them are professional software developers. Learning to program is a form of computing education, but computer science is not typically teaching the things that non-CS majors need to program, so computing education is moving away from Computer Science (field, departments, teachers). Computer science no longer has a monopoly on computing education.
Here’s how I’m using these terms. Computer science education is teaching students about computer science. For the most part, CS education has become focused on developing professional software developers and other workers for the technology industry. Computer science (as a field or a department) has a lot of definitions, some of which I present when I give talks (below, and in this blog post). (Notice that the K-12 definition still includes “impact on society,” but ACM/IEEE dropped that in the 2021 Computing Curriculum volume.)

My favorite is the original one from Perlis, Newell, & Simon (1967): “The study of computers and the phenomena surrounding them.” But most computer scientists balk at how broad that one is. So, let’s call that “computing,” and call preparing students to work with computing (explicitly including programming) in whatever field they’re in “computing education.”
Computer science departments should offer computer science education
We obviously need lots of people who know computer science, including many professional software developers. But most people who program will not be computer science majors (e.g., see this 2017 Oracle study). The needs for computing education must also be met.
(Side note: It is an interesting question: If students’ computing education needs are not being met, whose job is it to figure out a solution? Here at Michigan, individual departments were making classes to teach students the programming needed in their discipline, but now we’re combining them in PCAS. I wish that computer scientists would work to meet those needs, but computer science today is mostly about developing future technology workers. I am grateful that U-M’s College of Literature, Science, and the Arts started PCAS.)
The breakup of the CS monopoly is a particularly good thing for computing education researchers. There is SO much to do! So much of our research in the SIGCSE community is about CS1 for future CS majors. But computing education research doesn’t have to be about CS majors, and doesn’t have to be about CS1. There is so much more to study and explore when we think about how artists, scientists, business people, designers, architects, humanities scholars, and everyone else learn about and use programming.
Designing Courses for Liberal Arts and Sciences Students Contextualized Around Creative Expression and Social Justice: SIGCSE 2025 Experience Report
I am sorry to miss the SIGCSE 2025 Technical Symposium this week. I so want to hear Mitchel Resnick’s keynote for his Outstanding Contribution award, and to cheer on Manuel Pérez-Quiñones for his Distinguished Service award. There’s going to be a great session on “Ethics, Power, and Persistence” with papers from ACM TOCE, including a paper by Aadarsh Padiyath (see his recent blog post here) whom I co-advise with Barbara Ericson, and a paper by Noelle Brown, on whose dissertation committee I served.
Barbara and I are in New Zealand on sabbatical. We are Visiting Erskine Fellows at the University of Canterbury in Christchurch.
Tamara Nelson-Fromm and I have an Experience Report at SIGCSE 2025, which she will be presenting, “Designing Courses for Liberal Arts and Sciences Students Contextualized Around Creative Expression and Social Justice.” Preprint available here.
Readers of this blog have read some of the story in this paper. We point out that early computer science scholars wanted everyone to learn programming, for reasons other than getting a job (a story I tell in this blog post). Most of what we have done in computer science education research has been focused on preparing students for jobs as software developers. We are still learning to develop design processes for alternative endpoints (see this blog post). In a paper that Gus Evrard and I presented at the Koli Calling conference in 2023 (see paper here), we tell the story of learning what liberal arts and sciences faculty want from computing education and then developing the Program in Computing for the Arts and Sciences. We describe the participatory design process for the first two courses in PCAS: COMPFOR 111: Computing’s Impact on Justice: From Text to the Web and COMPFOR 121: Computing for Creative Expression. I told some of that story, and showed the results of the participatory design process, in this blog post.
This new paper adds an evaluation of the design. Tamara in her dissertation is looking at the motivations and process of students in these classes, so she had terrific interviews with the students that addressed the design of these classes. We had put questions into the students’ course evaluations to ask about our design. We used all of these to investigate what worked and what didn’t in the class.
I’ll summarize the results here:
- Students felt that the classes were useful, and not “overwhelming.” I was surprised that students used that word when talking about the class. We aimed to create undergraduate courses in programming that students would perceive as easy, but students still commented on the workload and how they expected it to be “overwhelming.” It’s hard to overestimate just how scary students find coding classes.
- Students feel that they are now able to talk about programming and learn more programming. Helping students to become conversational programmers was an explicit goal for the class. The Runestone ebooks that show students code in Python, SQL, HTML, and Processing help.
- Snap! works well, but students need some convincing. Students are amazed at the “cool things” they can make in Snap!. But some students said that it wasn’t “real” programming. One student said that it was “demeaning” at first to be asked to use blocks-based programming, but then they realized that they were actually learning a lot about “general computer science.”
Beyond the paper
The Computer Science Division here at the University of Michigan wrote a very nice article about PCAS here. Perhaps the most convincing evidence that the PCAS courses are working is the enrollment numbers. We just surpassed 500 students enrolled in our courses this summer. Students want these classes, even when they’re not required of anyone.

We’re able to scale because there are a bunch of us teaching PCAS courses. My Physicist colleague on the task force and in directing PCAS, Gus Evrard, created and is teaching our 101 course, The Transistor Disruption: How a tiny tool transforms science and society. We hired a lecturer, Brian Miller (PhD in Music), who has been teaching the Creative Expression and Social Justice courses — and doing a great job developing them. There’s now an exciting data visualization unit in the Expression class, and due to his efforts, the Justice class has been approved for the Race and Equity requirement in LSA — the first computing course at Michigan to receive that approval. We just hired a lecturer this semester, Donovan Colquitt (PhD in Engineering Education), to teach the Expression class and the MediaComp Python class. A bunch of PCAS courses are taught by faculty across LSA: Sara Veatch from Biophysics is teaching the Python for scientists course, Andy Marshall from Anthropology developed an R for scientists course, and Andrew McInnerney of Linguistics now teaches the “Alien Anatomy” course on Generative AI. We also have terrific staff — Tyrone Stewart as our Academic Program Manager, and Kelly Campbell, who is our chief administrator and who helps us figure out all the bureaucracy of creating a new program.
Today, I’m the only person teaching COMPFOR courses who has a computer science graduate degree. That may be the secret sauce to making PCAS scale — it’s not about computer science (field, major, or department). It’s about liberal arts and sciences faculty teaching the computing that their students need to succeed.
How K-12 CS teachers support students for whom English is not their first language
Emma Dodoo is an Engineering Education Research PhD student working with me. She was born in the United States but grew up in Ghana. Her world growing up was multi-lingual and multi-cultural. When Emma took her first computer science classes, she was surprised to find that the curriculum was not only dominated by English but that it was virtually impossible to learn programming without also learning English terminology. She won an NSF GRFP to explore how to support students for whom English isn’t their first language (emerging bilinguals (EB)) taking CS courses.
Over the next two weeks, she’s presenting two papers from her first study. She interviewed US-based K-12 computer science teachers who have EB students in their classes, to ask them about the challenges that they saw EB students facing, the teachers’ strategies for dealing with those challenges, and their programming tool choices based on the EB students’ needs. She showed them a variety of programs (in blocks, in teaspoon languages, in different text programming languages) as design probes, for them to say what they liked and disliked.
At the SIGCSE 2025 Technical Symposium (program here), she is presenting the challenges and strategies that the teachers identified. (Pre-print of her paper available here.) Here’s my over-simplified take on what she learned. (Be sure to read the whole paper to get the important nuances that the student reports and the advisor glosses over.) The cognitive load for EB students in CS classes is enormous — learning CS and learning English at the same time. They’re going to get lost. There’s no way to avoid it. So teachers build in touchpoints for the EB students to synch back up with the class. The teachers emphasize color (e.g., for keywords in the IDE) and pictures (e.g., in diagrams) to provide non-linguistic ways for EB students to figure out what’s going on.
At the PLATEAU 2025 workshop, she is presenting a paper on how K-12 CS teachers make programming tool choices with the needs of EB students in mind. (Pre-print available here.) The trade-offs here are much more complicated. The teachers told her that block-based programming languages are a huge win — colors, non-linguistic information in the form of block shapes, and the ability to localize the terms in the blocks. BUT, the CS teachers are concerned for their EB students as immigrants to the United States. The teachers want students to be able to have job skills as soon as possible, because it’s important for their and their family’s success. So many of the high school teachers emphasize Python and software engineering skills.
This is such a hard trade-off. Nobody gets a job programming in a block-based programming language. Text programming languages can scare students off. Teaspoon languages or blocks-based programming languages could create a welcoming on-ramp to programming that could lead to more CS classes later. Balancing these trade-offs in an instructional design is what Emma is working on next in her dissertation.
Dr. Bahare Naimipour defended her dissertation
I’m on sabbatical this semester, so I finally have time to catch up on some long overdue blog posts.

Bahare at her defense with her committee: From left, Barbara Ericson, Shanna Daly, James Holly Jr., Bahare, me, and Tammy Shreiner.
Dr. Bahare Naimipour successfully defended her Engineering Education Research dissertation this last August 2024, Supporting Social Studies Data Literacy Education: Design of Technology Tools and Insights from Expert Teachers and Teacher Change Journeys.
I’ve posted about Bahare’s work over the years. She had a poster in 2019 about our first participatory design sessions aimed at understanding what social studies teachers wanted in data visualization tools (see post here). She has been working on the NSF grant that Tammy Shreiner and I received in 2020 to study how social studies teachers adopted data literacy (announcement of that grant here). Bahare had a paper at FIE 2020 (presented virtually, as that was during the pandemic) on how social studies teachers interacted with programming-based data visualization tools (post here). She compared programming and non-programming tools at SITE 2021 (post here). The tool that we created, DV4L, was the first of what we later called teaspoon languages — here is the post where we talked about a couple of teaspoon languages for social studies education.
Bahare’s dissertation is made of three related studies. The abstract from her dissertation is below. Here’s my quickie summary of the three studies, framed for a computing education audience.
First, Bahare describes the long process of developing DV4L — across multiple participatory design sessions, both in-person in Tammy’s pre-service classes before the pandemic, and on-line with in-service teachers during the pandemic. She articulates the features of DV4L which are specific to social studies teachers and describes how they were developed in response to teacher needs. This chapter has been accepted as a paper in J-PEER.
Second, Bahare followed three teachers for two years as they (slowly) developed data literacy plans for their classrooms that used technology. This is such a rich story. Bahare frames it in terms of Guskey’s Model of Teacher Change. Guskey said that teachers don’t change because of professional development. They have to have some interest in change, or they wouldn’t be taking the professional learning opportunities seriously. They actually change when they try something in the classroom and the students’ response convinces the teacher that a new approach might work. Bahare watched that happen, but found that it was even more iterative than Guskey describes. Her teachers took multiple professional development sessions before they might even try something. She saw teachers try something…and get it wrong, and with some encouragement from Bahare, try again. This study really gives you a sense for what it’s going to take to achieve CS for All across the curriculum.
Finally, Bahare interviewed exemplary social studies teachers (selected by some pretty tough criteria) and asked them how they implemented data literacy in their classrooms. Bahare saw patterns across what the teachers were doing, and those data literacy design patterns are going to feed into future professional learning opportunities. The amazing thing for this audience is that almost none of them used any computational tools. They liked our tools when Bahare demonstrated them, and maybe some might adopt them — but I doubt it. They are excellent teachers recognized for their skill, and they got there without computation. Why would they change now? Maybe if we showed them how much more they could do with computational tools. Maybe if we showed them how easy it could be. Those are possibilities for future studies.
All told, Bahare has written a remarkable dissertation. It’s about data literacy in social studies education, but more, it’s about the challenges that face us as we bring the power of computing beyond the STEM classroom.
Abstract
This dissertation aims to contribute to the K-12 engineering education literature in a social studies context. Data literacy (DL) is the ability to understand and interpret what data means by drawing conclusions from patterns, trends, and correlations in data visualizations (DVs). DL is part of K-12 U.S. social studies standards making it relevant for engineering education researchers since it intersects both engineering and social studies. All K-12 students take social studies classes, yet most people are not data literate. Research suggests that social studies teachers have insufficient resources for teaching DL, so not all social studies teachers teach it. The goal of this dissertation is to shed light on the topic of K-12 DL in social studies by exploring three research questions:
- When designing engineering tools for non-STEM social studies teachers, what design considerations should be met?
- How do K-12 social studies teachers choose to explore data literacy in their pedagogy after participating in a data literacy professional learning opportunity (PLO)?
- How do expert social studies teachers use and explore DVs in their pedagogy, describe their data literacy pedagogical strategies, and explore/use technology tools to support their data literacy pedagogy?
To answer my first research question (Study 1), a participatory design (PD) approach was used to learn what social studies teachers (both pre-service and in-service) want in their classrooms by testing the usability of real tools with participants. Through three design phases, pre and in-service teacher groups informed the design and development of learning tools for social studies DL. Using a Social Construction of Technology lens, I describe the scaffolding embedded in the resulting tool DV4L by considering: 1) teachers’ perceptions of usefulness and usability in the DL tools they explored, and 2) how PD sessions with pre- and in-service teacher groups evolved over time beginning with their interactions with existing tools and leading to our current DV4L prototype tools.
I addressed my second research question through a longitudinal study (Study 2) that delved into how three K-12 social studies teachers explored DL during and after a PLO. Narrative methods were used to describe how three social studies teachers changed their DL practices. The journeys began with teachers as they explored a DL focused PLO, incorporated DL in their lesson plan(s), and include their reflections after implementing the lesson(s) in their classrooms. I used Guskey’s Model for Teacher Change as my analytical lens to understand each teacher’s DL journey.
My experiences in Study 1 and Study 2 made me wonder how expert teachers were meeting their DL learning goals. I used Shulman’s Pedagogical Content Knowledge framework to design Study 3 and address my third research question. I looked at how expert teachers explored DVs and described their DL pedagogical strategies and technology uses through a think aloud and semi-structured interview. Findings describe how five expert teachers made meaning of data and DVs through the practices and strategies they used or described using in their pedagogy.
This dissertation informs the design of curriculum, PLOs, and technology tools to support social studies teachers reach their DL learning goals. It has already informed the design of two socially constructed DL tools for K-12 social studies. Such tools provide teachers pedagogical power in their graphing activities in ways that support their DL learning goals while also promoting engineering skills and thinking.
Do I Have a Say in This, or Has ChatGPT Already Decided for Me?: Guest Blog Post from Aadarsh Padiyath
Hi all! This is Aadarsh Padiyath, a PhD Candidate at the University of Michigan, advised by Barbara Ericson and Mark Guzdial. At ICER and in ACM XRDS this year, I presented research that challenges a pervasive narrative in our field.
In discussions about ChatGPT in computing education, I frequently see claims that “LLMs are here to stay,” from both dejected colleagues and ecstatic researchers… proclamations that we need to “embrace LLMs or face being left behind…” that “since students used ChatGPT they must’ve found it beneficial…” also “Since ChatGPT is free, students will inevitably use it…” and “Prompts First Finally: natural language programming was where we were always headed.”
I see these pieces as exemplifying technological determinism – the belief that technology inevitably shapes society in predictable ways. It’s the reductive “if X, then Y” thinking that assumes new technologies will have predetermined outcomes on our lives and institutions, with humans playing a more passive role – regardless of the countless human choices and social influences that exist.
Our research, presented at ICER 2024, tells a different story. We found ChatGPT adoption in computing education was not a one-time choice nor a foregone conclusion, but an ongoing active negotiation shaped by social factors and individual experiences.
Through surveys, interviews, and midterm performance data from a Python programming course, we found students’ actual use of ChatGPT to challenge determinist narratives:
- Social factors, not just ChatGPT’s capabilities, shape adoption: Students’ perceptions of ChatGPT’s role in their future careers and their beliefs about their peers’ usage (often overestimated) significantly influenced their decisions.
- ChatGPT adoption varies widely – there’s no one-size-fits-all approach: We found diverse approaches, from full embrace to strategic application to complete rejection on moral grounds, often tied to students’ personal learning goals and values.
- Approaches change over time – there’s a feedback loop between ChatGPT use and learning outcomes: Students often initially internalized prevailing determinist narratives, but after experiencing ChatGPT’s impact on their learning (sometimes through receiving helpful guidance and sometimes through a lower midterm performance than they expected) many changed their approach, demonstrating agency and intentionality in their learning process.
These findings challenge the idea that ChatGPT’s impact on CS education is predetermined; LLMs are not necessarily “here to stay.” Instead, they highlight the complex, dynamic relationship between students and this tool.
In light of this, I wrote a XRDS article speaking directly to students about intentionality in their use of ChatGPT. The article encourages students to make active decisions rather than passive ones when incorporating ChatGPT into their learning process. I argue that students should explicitly define their relationship with ChatGPT in their learning process, align their usage with their goals and values, and continually reflect on this relationship. By doing so, they’re less likely to find themselves in unexpected or undesired situations regarding their learning outcomes or skill development.
As educators and researchers, I see our role as helping students make these informed decisions about their relationships with tools like ChatGPT. We need to help students understand the social influences at play, correct misconceptions about ChatGPT’s capabilities and usage, and guide them in aligning their use of tools like ChatGPT with their personal learning goals and values.
By moving beyond deterministic narratives, we can develop more nuanced, student-centered approaches to addressing AI in computing education.

For more details on this research and its implications, check out our ICER paper, “Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course” and my XRDS article “Do I Have a Say in This, or Has ChatGPT Already Decided for Me?”
Assessing Student Understanding of Computing: Self-Efficacy, Non-CS Majors, and ChatGPT
Assessment is a hot topic in computing education research right now.
I’m sharing below a workshop announcement from Nell O’Rourke and Melissa Chen. They want to help students make accurate self-assessments, because (as Nell’s group has found in the past, with one paper described here) students tend to have inflated views of what they should be able to do, and when they can’t achieve those lofty goals, there is a negative impact on their self-efficacy.
We just received notice that our panel for SIGCSE Virtual 2024 has been accepted, on the topic of “Assessments for Non-CS Major Computing Classes” from Jinyoung Hur, Parmit Chilana, Katie Cunningham, Dan Garcia and me. I’ll give away here my position on this panel: We get assessments for non-CS majors wrong because we think about them as CS majors. Calling a non-major’s introductory computing course “CS0” is making the assumption that it’s the starting point for a sequence that goes on to CS1 and so on. Mastery learning is a good idea, but only when the skills to be mastered are appropriate for the student. Asking non-CS majors to master the skills of a CS1 is holding them to the standards of the CS major. There is more than one kind of “knowing how to code.” There are conversational programmers, computational artists and scientists, and others in our CS1 classes who need to code or to understand the process of coding, but don’t need or want the skills of a professional software developer. Assessment for non-CS majors has to recognize alternative endpoints for computing education.
Side note: Everything that we say about non-CS major computing education applies to K-12 computing education. We should not assume that K-12 students are being prepared for software development jobs. Not all K-12 students will be CS majors, and there are other uses for programming in other careers besides software development.
Finally, ChatGPT is showing up everywhere in computing education research these days. We computing teachers have typically assessed understanding of computing by evaluating proficiency with textual programming. Now ChatGPT can be as proficient at the textual languages as the average CS1 student. Assessing understanding becomes harder when we can’t use proficiency as a proxy — the LLMs can make students appear proficient without any actual understanding.
We have a lot to do in assessment as computing education expands and LLMs can perform more of the programming tasks.
——————————
Do you teach an undergraduate introductory computing or programming course and want to help your students make accurate judgements about their programming ability?
We are researchers from Northwestern University interested in co-designing curricular and policy interventions with instructors to help students more accurately assess their programming abilities and develop higher self-efficacy.
Sign up here to learn more about our research on student self-assessments and to collaboratively design interventions at our two-day workshop on August 7 and August 8 at 12-3 PM central time (in your time zone)! Registration will close 3 days prior to the first session. More information about this workshop is available on the workshop website.
To be eligible for this workshop, you must teach an undergraduate-level introductory course and be 18 years of age or older. This study has been approved by the Northwestern University IRB (STU00222017: “Designing interventions to support student motivation and self-efficacy”). The PI for this study is Professor Eleanor O’Rourke.
If you have any questions, please email [email protected].
Best,
Dr. Eleanor O’Rourke
Melissa Chen
Northwestern University
A Purpose-First Theory of Transfer of Knowledge from Programming
One of the persistent research questions in computing education research (CER) is, “If you learn something in programming, can you use that something somewhere else?” Learning scientists call this “knowledge transfer.” In the first few decades of CER, the question was whether problem-solving skills that you developed in programming could be used elsewhere. (Spoiler: only if they were taught for transfer — see that story in this blog post.) Then the question was “What about programming itself? If you learn one programming language, is it easier to learn the next one?” Spoiler: Yes, to some extent. Ethel Tshukudu did her dissertation on transfer between Java and Python, where it works and where it doesn’t (see her ICER 2020 paper). Felienne Hermans is designing Hedy explicitly to support transfer (see her ICER 2020 paper).
Tamara Nelson-Fromm and I had a paper in this year’s PLATEAU Workshop (link to workshop) that has now been posted in the repository (link to paper), “A purpose-first theory of transfer through programming in a general education computing course.” We propose a different perspective on how to think about knowledge transfer from programming. If you use programming to learn X, does X transfer to a new programming context? Here’s our theory: For most people, the most common kind of knowledge transfer from programming is transfer from their purpose for programming, far more than the programming language itself. We call this “purpose-first” in reference to Katie Cunningham’s “purpose-first programming” dissertation work (see blog posts here and also here).
Our paper frankly describes an accident. Calling it an experiment is absolutely wrong, and it’s only a study because of post-hoc analysis. It comes out of our Computing for Creative Expression class in PCAS. Here’s the set-up.

I gave a lecture on digital representations of color, pixels, and image manipulations, like posterizing, negating an image, and generating a grayscale version. I always use peer instruction questions in my classes, and in this class, I built on the forward testing effect. Before I taught any programming, I gave the students example programs in Snap! and in the Pixel Equations teaspoon language (link to a blog post on Pixel Equations), and just asked them “What does this do?” The idea is that they wouldn’t do well, but they’d be thinking about these things going into Pixel Equations. Then, before going on to talk about building image filters in Snap!, we did another forward-testing quiz with Snap! examples (and some Pixel Equations, which was no longer forward testing).
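For readers who haven’t seen these manipulations before, here is a minimal sketch in Python of the per-pixel operations mentioned above (negating and grayscale). This is only an illustration of the underlying idea; the course materials themselves were in Snap! and Pixel Equations, not this code.

```python
def negate(pixel):
    """Invert each RGB channel (values 0-255)."""
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)

def grayscale(pixel):
    """Replace each channel with the pixel's luminance,
    using the standard ITU-R BT.601 channel weights."""
    r, g, b = pixel
    lum = int(0.299 * r + 0.587 * g + 0.114 * b)
    return (lum, lum, lum)

def apply_filter(image, f):
    """Apply a per-pixel function to an image represented
    as a list of rows of (r, g, b) tuples."""
    return [[f(p) for p in row] for row in image]
```

An image filter, in this view, is just a function applied to every pixel — which is exactly the kind of program students were asked to read and predict in the “What does this do?” quizzes.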
Here’s the accident part. I always collect their names during peer instruction so that I can give them participation credit, but on these quizzes, I forgot. So, I had a big mess of anonymized data from the “pre” programming stage and the “mid” stage between Pixel Equations and Snap! (but without explicit pre-mid links). Our human subjects review board gave Tamara and me permission to analyze those two piles of data.
We looked for a variety of kinds of transfer. Did they focus more on procedure, e.g., describing the moment-by-moment process? Did they describe the structure of the code? What really popped for us is that the mid-quiz had so much more image processing language. Suddenly, they’re talking about luminance and posterizing, when they weren’t on the pre-quiz. We suggest that they’re transferring their purpose for programming (image processing), without any sign of transferring structural or behavioral knowledge.
Here’s a metaphor to explain what’s happening. Imagine that you’re taking a music class here in the United States, where they’re teaching the class in English. Then, you study abroad, where you’re taking another music course, but now it’s in Spanish or German. The first language that you’re going to want to pick up in the study abroad course is for the music. You’ll want to figure out how to talk about “rhythm,” “notes,” and “time signatures.” You’re working at transferring your knowledge of music. Now, if you’re a student of language, maybe you’re also interested in how Spanish and English are similar and different, or maybe you’re noticing the syntax and semantics of English and German. Maybe you’re a student of education, and you’re interested in how English, Spanish, or German support (or detract) from the discussion of music (and there may very well be differences in how the languages interact with the learning of music). But those are the unusual cases. Mostly, you’re a music student and you want to talk and learn about music.
Most people don’t want to learn programming for its own sake. Even if programming is a great way to explore math, science, expression, and music, the focus is on the purpose for the programming. This is likely the most common case. I work with a biologist who does data manipulation and modeling in Python, and is now doing her statistics in R. She sometimes moves code between Python and R (with a lot of ChatGPT help), but she’s not really interested in learning about either Python or R. She transfers her scientific purpose. She knows what she’s learning in each. She’s doing science, and programming is just a tool for her purpose. Python is a good tool for her modeling, and R is a good tool for her statistics. Maybe she thinks about why each is good for each purpose — but from our discussions, she mostly doesn’t.
Most transfer of knowledge between programming experiences is not about syntax or semantics. It’s about the purpose for programming. That comes first.
Recent Comments