Received: 24 May 2023 | Revised: 16 February 2024 | Accepted: 25 February 2024

DOI: 10.1111/jcal.12967

ARTICLE

Students' reflections on their experience with ChatGPT

Josef Šedlbauer 1 | Jan Činčera 2 | Martin Slavík 1 | Adéla Hartlová 3

1 Department of Chemistry, Faculty of Science, Humanities and Education, Technical University of Liberec, Liberec, Czech Republic
2 Department of Environmental Studies, Faculty of Social Studies, Masaryk University, Brno, Czech Republic
3 Department of Biology, Faculty of Science, Humanities and Education, Technical University of Liberec, Liberec, Czech Republic

Correspondence
Josef Šedlbauer, Department of Chemistry, Faculty of Science, Humanities and Education, Technical University of Liberec, Liberec, Czech Republic.
Email: [email protected]

Abstract
Background: The emergence of Generative Artificial Intelligence has brought a number of ethical and practical issues to higher education. Solid experimental evidence is not yet adequate for setting functional rules for the new technology.
Objectives: The objective of this study is to analyse undergraduate students' experience of interacting with ChatGPT and to contribute to identifying the problems arising from the widespread use of artificial intelligence.
Methods: Junior university students (N = 25) were assigned the task of working on their seminar essays with the aid of ChatGPT. Most students were novices with this tool (the study was conducted in the spring of 2023). Their essays were analysed qualitatively, according to the principles of the general inductive approach.
Results and Conclusions: The initial attitudes towards artificial intelligence were almost equally distributed from enthusiastic to indifferent and cautious, with one student refusing to interact with the chatbot on ideological grounds. After the first experience, most of the students declared themselves adopters of the new technology. We found some evidence of enhanced critical thinking competence when using ChatGPT, as well as examples of unquestioning reliance on its outputs. A tendency to personify the chatbot was apparent in the students' essays.
Implications: The findings show how easily students embrace artificial intelligence and suggest that attempts at its strict regulation are likely to fail. This, on the other hand, underlines the need for an emphasis on personal and research-oriented approaches in teaching and learning.

KEYWORDS
artificial intelligence, ChatGPT, critical thinking, education, sustainability

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
© 2024 The Authors. Journal of Computer Assisted Learning published by John Wiley & Sons Ltd.

1 | INTRODUCTION

The advent of Generative Artificial Intelligence (GenAI) tools such as ChatGPT has triggered a heated public debate on their promises, limits, threats, and ethical considerations. Scientists and educators are, of course, an integral part of this debate. The following paragraphs provide a brief insight into the research and opinions published so far.
According to OpenAI (2023), the model can provide 'various types of learning benefits to the students, including quick information retrieval on a wide range of topics, better conceptual understanding of complex topics, developing students' problem-solving skills, writing and communication skills, storytelling skills, or critical thinking. In addition, ChatGPT can provide students non-judgemental emotional support and opportunity for personalized learning'. This self-assessment of ChatGPT is a comprehensive summary of the hopes for the new technology in education. On the other hand, OpenAI (2023) admits several risks connected with its use in an academic environment, including 'providing inaccurate or misleading information, hindering students' critical thinking skills, or reducing students' autonomy. In addition, it may fail in providing emotional support to students, raise potential privacy issues, and societal biases, potentially reinforcing and perpetuating existing inequalities'. The evidence acquired so far addresses all these claims but is not yet conclusive.

Empirical experience with ChatGPT is primarily based on authors' attempts to retrieve answers from the system or on quantitative surveys of some target groups. As noted by many, the answer quality depends on how the prompts are formulated (e.g. Chang & Kidman, 2023), because GenAIs do not work with context but only with probability relations among words and word clusters. Most often, the authors find that the main strength of ChatGPT is sorting and summarizing existing knowledge, which is ideal for assistance in writing and education if used with proper caution (e.g. Cooper, 2023; Montenegro-Rueda et al., 2023). Tlili et al. (2023) performed an analysis of tweets of the early ChatGPT adopters on the subject, conducted content analysis of interviews with selected subjects on how they perceive the use of ChatGPT in education, and investigated their user experiences. In a study from Hong Kong universities, Chan and Hu (2023) reported on students' perceived benefits and concerns regarding GenAI. Most participants of their study considered this tool valuable, with numerous benefits, and were willing to work with it, primarily for learning, writing, and research purposes. However, various concerns and challenges appeared in the students' answers, such as challenges concerning the accuracy and transparency of GenAI. The respondents also pointed to inequalities in access to technology and disadvantages for students who do not use such tools for any reason. Further research from this group (Chan & Lee, 2023) focused on generational aspects of GenAI acceptance. The younger students (Generation Z) were rather optimistic about the potential benefits of GenAI, including enhanced productivity, efficiency, and personalized learning, and expressed intentions to use GenAI for educational purposes. Their teachers acknowledged the potential benefits of GenAI but expressed heightened concerns about overreliance and about ethical and pedagogical implications, emphasizing the need for proper guidelines and policies to ensure responsible use of the technology. The study of Romero-Rodríguez et al. (2023) reported on the acceptance of ChatGPT by university students in Spain. The main finding is that prior experience of use is the fundamental determinant of this acceptance. To prevent a rising gap between adopters and those more reluctant towards the technology, the authors advise training students in the ethical and responsible use of ChatGPT and in their ability to formulate clear and specific questions and verify the responses. Similarly, Boubker (2024) found that ChatGPT's perceived usefulness positively influenced ChatGPT use and students' satisfaction, underlining the need for responsible implementation of this tool in educational practice.

Testing AI's impact on education outputs emerged earlier (Jang et al., 2022; Mota-Valtierra et al., 2019) and gained momentum with the release of ChatGPT. Farrokhnia et al. (2023), in a SWOT analysis of ChatGPT, support the emphasis on recognizing coursework and authentic formative assessment, self-assessment, reflection reports, portfolios, and peer feedback. Based on their analysis, AI can support students' complex learning, while it may lead to a weaker understanding of the text and a decline in higher-order cognitive skills. Some authors are also concerned that AI can amplify existing social bias and unfairness in learning (Farrokhnia et al., 2023; Kasneci et al., 2023). According to Susnjak (2022), AI can exhibit critical thinking skills and generate highly realistic texts, which presents a potential threat to students' academic integrity. In one of the few qualitative studies, Michalon and Camacho-Zuñiga (2023) worked with 19 students in a Future Studies course at a private Mexican university. The students were instructed to use ChatGPT in various tasks regarding the methods studied in the class. The authors noticed, among others, that flaws in the chatbot answers represent an illustrative case study of the importance of not blindly trusting the information that has been generated. According to the authors, this is an incentive for developing students' critical thinking skills. Several other studies (Kim & Adlof, 2023; Lin, 2023) explore the potential disruption of the existing education systems, point to ChatGPT's potential to facilitate critical thinking and complex learning, and suggest strategies for integrating ChatGPT into educational settings. Some authors (e.g. Zirar, 2023) are less enthusiastic, emphasize that reliance on GenAI outputs without critical evaluation of such information adversely impacts student learning, and argue for limited use of GenAI in education.

The performance and reliability of ChatGPT and other GenAI tools are at the core of this debate. While the AI outputs improve quickly, some pertinent problems remain. These include more complex questions requiring multiple rounds of inference and an understanding of subject domain expertise. Yang et al. (2023) evaluated ChatGPT in assistance to educators, noticing its tendency to overlook subtle logical connections between declarative sentences or to apply inappropriate reasoning methods, resulting in poor judgement and incorrect responses. On the other hand, ChatGPT achieved a 100% accuracy rate when answering fact-based questions. Zhang and Shao (2024) reported on the difficulties of ChatGPT in adapting to individual students' needs and learning preferences. In a study on software testing, Jalil et al. (2023) found that ChatGPT was able to provide correct or partially correct answers in just 55.6% of cases and correct or partially correct explanations of answers in 53.0% of cases. It should be emphasized once more that the abilities of GenAI are improving 'in front of our eyes' and such results must be considered time-dependent. Nevertheless, these tools are far from flawless.

In another aspect of academic life, there is broad agreement that clear editorial and higher-education institution policies must address the ethical issues arising from the emergence of GenAIs for scholarly publishing (Lund et al., 2023; Salvagno et al., 2023). However, this is just one facet of academic integrity challenged by GenAI, which elevates cheating and plagiarism, as we understand them today, to new levels. Most authors (e.g. Cotton et al., 2023; García-Peñalvo, 2023; Perkins, 2023) argue for the legitimate use of AI, considering its potential risks and rewards, and urge institutions to take steps for responsible management of these tools. Cotton et al. (2023) and others also admit the necessity of developing and applying methods and tools to detect and prevent academic dishonesty. For example, the European University Association (EUA, 2023) formulated a position calling for an update of institutional-level policies, guidance on approaches to day-to-day practice, obligations to reference the use of AI in academic and student work, and its restricted use for certain types of learning and assessment.
To complement this introductory overview, research on GenAI impacts in other fields should be mentioned, as it often reaches into the education domain as well. This is particularly the case of AI in healthcare. Public health, telemedicine, radiology, and medical communication are viewed as the obvious opportunities for generative AI in healthcare (Biswas, 2023; Hopkins et al., 2023; Sallam, 2023). The role of ChatGPT in educating healthcare professionals and the associated risks is commented on by many authors (e.g. Arif et al., 2023; Lee, 2023). Cascella et al. (2023) investigated the feasibility of ChatGPT in several clinical and research scenarios, namely for the support of clinical practice, scientific production, and misuse in medicine and research. Out of the other areas reflecting on ChatGPT in the scientific literature, we mention just the concern about the environmental impacts of AI (Rillig et al., 2023), which is a hidden but perhaps substantial problem, similar to other technologies based on massive computing power such as blockchain (Mulrow et al., 2022).

While this literature overview makes no claim to completeness, it provides a fair state of the art, characterized mostly by anticipation, analogy, and expert opinion. Real data are adding up, but there is still a lack of papers analysing how students experience their interaction with AI when using it for a learning purpose. This paper aims to contribute to the experimental evidence on the interaction of ChatGPT with a typical student body. We focus on the following research question: How do students reflect on their experience with ChatGPT?

2 | METHOD

Our research group comprised college students attending an interdisciplinary course in Environmental Science at a public university in the Czech Republic. Out of the total N = 25 subjects, 17 are majors in one of the Science subjects (Biology, Chemistry, Geography, and Physics), 4 are enrolled in the Nanotechnology study program and 3 are majors in Humanities or Languages. The group diversity contributed to the representativeness of the results.

As a part of the course evaluation, the students were required to write an essay based on a selected source from the list of recommended literature. The list comprises recent scientific papers dealing with various aspects of environmental studies and global problems, complemented by recent comprehensive reports from this field. Literature sources selected by the students for the 2023 class are provided in the Supplementary Material.

In the class of 2023, the students were instructed to work on their essays with the aid of ChatGPT. At the time of the assignment (February 2023), only the free version of ChatGPT 3.5 was available. The essay assignment was as follows:

• Choose a scientific paper or a summary report from the list of recommended sources.
• Search this source for a research question(s) or main topic and formulate a task for the AI application (https://chat.openai.com) that corresponds to this topic. If necessary, adjust the wording so that the answer is not trivial (generally known) and contains information and connections that you consider personally enriching and interesting for others.
• Validate the output of the application using the original source and other sources that you find by a standard literature search.
• Indicate which part of the AI answer was new and rewarding for you. If you disagree with any part of the answer, state it and give reasons. Reflect on the whole experience of using AI for your own knowledge and professional growth.

Students worked on their essays at home; the allocated time for completing the work was 6 weeks, corresponding to the end of March 2023. Out of the 25 essays delivered by the students, 16 were written by females and 9 by males. The students' essays were analysed qualitatively, according to the principles of the general inductive approach (Corbin & Strauss, 2008; Patton, 2002; Thomas, 2006). The analysis was performed in April 2023.

First, we identified and coded the data segments reflecting students' perspectives on AI. The codes were further grouped into categories and broader themes (Saldaña, 2015; Thomas, 2006). The ongoing analysis was discussed within the team of all authors. Specifically, the second author initiated the coding process and defined the first set of emerging codes and categories. These were subsequently discussed within the entire team, with ongoing comparisons between codes and data. Once the team reached a consensus on the coding procedure, each member coded assigned data sections. In the subsequent step, the codes underwent mutual review and modification. Finally, the team members coded the entirety of the data and completed the list of codes, categories, and broader themes. As the process relied on full mutual agreement among all coders, there was no need to calculate a coding agreement.

The final themes, 'interaction with AI', 'learning benefits', and 'perceived meaning of AI', refer to the primary directions of how the students reflect on their experience with AI. The categories provide further insight into the themes. For the list of categories and their definitions, see Table 1.

3 | RESULTS

3.1 | Interaction with AI

Four categories were identified within this theme (see Table 1). First, the students reflect on expectations that subsequently framed their experience. The second category describes the process of providing students' inputs, that is, the way they formulated their queries. The third category focuses on how students evaluate the 'outputs', that is, the quality and nature of AI's response. The last category refers to the overall evaluation of the students' interaction with ChatGPT.
TABLE 1 List of categories and their definitions.

Theme: Interaction with AI
  Communication process: Overall evaluation of the nature of the process
  Input: Description of the process of formulating the query
  Expectation: Reflection on respondents' expectations before commencing the process
  Output: Reflection on the provided answer
Theme: Learning benefits
  Critical thinking: Reflection on the effect on students' competence for rethinking, analysing, seeking possible mistakes, or engaging in other critical thinking processes
  Strategic competence: Reflection on the effect on students' ability to identify possible solutions, assess their risks and benefits
  System thinking: Reflection on the effect on students' ability to discern new relationships, a higher level of complexity, or additional perspectives on the learned topic
Theme: Perceived meaning
  Overall evaluation: Overall evaluation of the benefits and risks of AI
  Personification: Interpretation of AI as a partner in dialogue, attributing AI with its own intentions or rationality
  Utilization: Elaboration on the potentiality of AI for respondents' use or its use in contemporary society
  Source of problems: Highlighting potential social risks associated with using AI, perceiving it as a social threat, and considering negative social impacts of AI

Regarding initial expectations, most students said that they had never worked with GenAI before. A few admitted their prior interest and intention to work with the chatbot, citing that the essay was an impetus for them. One student was already proficient with ChatGPT and previous experience influenced her expectations. Many students expressed their curiosity about interacting with AI and did not know what to expect. One student refused to use AI completely (see Section 3.3 for a detailed description of this case).

I wasn't interested in AI and didn't feel the need to get to know it. (R5, F).

I had been interested in working with AI for some time, so I was curious how the tool would interact and work within this assignment. (R2, F).

Students used two basic strategies to input their questions. Some students started by exploring the possibilities of AI, verifying the plausibility of the answers, such as how we might 'test' our counterpart in a discussion:

First, I wanted to see how AI would deal with one of the many myths I often read in discussions about climate change on various Internet servers. (R15, F).

In the questions for Chat GPT, I focused on questions of a moral nature. However, I also asked more trivial so-called verification questions. (R6, M).

Another strategy was to formulate the question carefully and/or to refine it gradually, similar to searching with the aid of search engines such as Google. Some students found out that looking for a useful prompt was not as easy as they had expected, and the received outputs were trivial or not entirely accurate. Students quickly realized that the quality of the answer depended on the 'explanation' to AI of what it should focus on. Accordingly, some of them framed the query with an introduction or with targeted use of technical terms to guide AI to a more precise 'understanding' of the purpose of the question.

I asked the first question literally but received only a general answer. (R14, F).

It often seemed to me that the answer is 'empty'; it tries to write something, but in the end, you don't find out much. (R24, F).

At first, I found it difficult to formulate the question in such a way as to elicit a desirable, non-trivial answer. Gradually I learned this skill. I formulated the questions so that each question first had an introduction, in which I tried to use as many technical terms as possible so that AI could work with as much relevant data as possible, and only then did I ask the question. (R4, F).

Either way, the questioning tended to a dialogical format, like with a real partner, characterized by continuous questioning, gradual modification and repetition of the questions to make them more specific.

During this process, several students encountered problems with verifying sources of unclear origin. They wanted to supplement the sources with differently worded and more specific questions:
I tried to ask AI directly where it got all its information from. The output was that many encyclopedias, newspaper articles, and academic texts had been entered into its database and supposedly verified by humans. (R2, F).

At first, I conducted the whole interview 'loosely' – I did not ask AI for the source of the answers. Subsequently, I repeated the questions but included a request for the sources of scientific publications at the beginning of the conversation. Most of the answers were pretty much the same regarding moral issues. The robot just 'added' sources or provided less information when I asked for citations. (R6, M).

Considering the quality of the output, some students rated the answers as superficial and unspecific, while others were satisfied. Regarding the reliability of the answers, some students judged ChatGPT as trustworthy, while others found the answers irrelevant and/or difficult to verify. While some students used terms such as 'understandable' and 'relevant' when evaluating GenAI outputs, a similar proportion of students used more negative terms such as 'trivial' and 'untrustworthy'.

From a professional perspective, AI provided me with the same helpful information I found in the literature. (R11, M).

Working with AI application was rewarding, brought additional insights into this problem and confirmed the existing ones, which were easily verified by the available data. (R18, M).

Good at finding general information, but not scientific or technical one. (R2, F).

Unfortunately, in my use, I also encountered problems such as failure to generate an appropriate answer to the question asked. The answer to the question often did not match the information from the original source. (R8, M).

In the end, most students have successfully mastered the art of creating a prompt (or rather a series of prompts) that ChatGPT could answer in a relevant way and evaluated the whole communication process as fun, rewarding and fast. As one put it:

When using AI, a human will get an answer to almost anything within a few seconds, and according to tested questions, the right answer. Of course, you have to formulate the questions correctly to get relevant information, but it is much more efficient and fun than the classic web browser search and clicking through many websites and not finding the right answer anyway. (R1, F).

3.2 | Learning benefits

Just a few students reflected on the benefits of their interaction with AI on their learning. Most of them provided examples of situations when AI stimulated their system thinking. Based on their observations, GenAI showed them the discussed issue from a different perspective, and so it helped them grasp it in higher complexity:

AI helped me to reflect on the relationship between extreme events and global change, and – after comparing it with the other sources – I must agree with the statements of AI. My work with it was an enriching experience. (R11, M).

Based on the responses provided by AI and the comparison with scientific papers, AI provides interesting perspectives and even new information on the issue of climate inequity. (R16, F).

In some cases, new information caused cognitive dissonance in students. As a result, it allowed a deeper understanding of an investigated environmental problem:

I was surprised by my work with AI. I did not expect our planet to be in such bad shape… (R20, F).

Moreover, some students acknowledged that AI developed their critical thinking. It was stimulated by the perceived imperfections in the outputs generated by ChatGPT, for example their shallowness and limited reliability. For example:

I think ChatGPT is an incredibly handy and robust tool. However, because one cannot entirely rely on it (…) it makes one think and seek other sources to verify the outputs… (R19, F).

At the same time, one student pointed out that AI can help to develop a strategic competence, for example, by fast evaluation of the risks and benefits of the considered strategies:

OpenAI could be used to analyse the potential risks and benefits of preemptive action, including the risk of unintended consequences or escalation, and to generate new strategies for balancing proactive security measures with traditional models of deterrence. (R21, F).

3.3 | The perceived meaning of AI

In their overall evaluation, students often referred to AI with significantly positive emotions, surprise, and even enthusiasm:
As for the use of AI, I used AI for the very first time while writing this essay, and I must say that I was really pleasantly surprised and will definitely use it again in the future. (R1, F).

Most students highlighted the high utilization value of AI on both personal and societal levels. However, their positive feeling about the utilization of AI for the benefit of contemporary society was sometimes mixed with worries and concern. Based on this, AI can be both useful and a source of problems. One subject perceived AI as 'frighteningly perfect'. Another expressed concerns about its influence on the degradation of human culture and the deepening of social inequality in the world. However, the number of students who reflected their concerns was considerably lower than those who highlighted the positive aspects of AI.

I am excited about working with AI and will definitely be using it not only to search for information for my personal interests but also for future assignments that I will be involved in. (R25, M).

It is likely that we will see rapid and steep progress in many fields, but it will be lined with soullessness and be even more distanced from our essence – from nature, and this is completely contrary to my beliefs and my approach to life. (R17, M).

This student also referred to a possible political bias in the AI, citing a recent paper that reported a preference for left-leaning viewpoints in 14 out of 15 political orientation tests submitted to ChatGPT (Rozado, 2023), and refused to use ChatGPT at all.

The last and very notable perspective is the tendency towards personification of AI. Although most subjects write about AI as a 'thing', there is a tendency for some to treat it more as a partner in a dialogue. This is manifested mainly in the attribution of intent to the chatbot answers, its efforts to push a certain agenda and/or defend a certain opinion:

It appeals that, in addition to discussing the problem as such, the causes, which are closely related to other global environmental topics, should also be addressed. (R23, M).

The perceived intention of AI to promote a certain agenda may also be related to the perception of AI as a pragmatically oriented entity that can consider the effectiveness of individual solution variants:

It pragmatically evaluates how each of these criteria adds to the attractiveness of the topic for its media presence. (R23, M).

Some students explicitly refer to AI as a collaborator, a partner with whom we must learn to live:

We shouldn't take it as an enemy, which is unfortunately what many people still do, but as a collaborator who can help us and make our work easier. (R15, F).

In other answers, other verbs appeared, associated with living beings rather than things. According to them, GenAI can 'admit', 'warn' or 'predict' something. Only one student demonstrated awareness of such personification:

One answer that surprised me is the recommendation of alternative forms of treatment other than the use of antidepressants, such as exercise, therapy, meditation, etc. You could almost believe that a human is writing this, but that is the hidden danger of using AI. The user must never forget that this is only an application. (R1, F).

3.4 | Teacher's perspective

By intent, no prior guidance on using ChatGPT was issued to the students. We found that in most essays the ChatGPT-generated answers were separated from the author's text; some students even quoted ChatGPT on each occasion. However, not all of these students also included the inputs (their formulation of prompts), thus hampering the traceability of the outputs. In a few cases, the outputs of GenAI and the author's text were mixed and hard to distinguish. In these essays, work with the originally assigned literature source was typically minimized and the topic was elaborated mainly by the AI. However, even these essays were of acceptable quality within university-level standards. Overall, compared with the previous years when the students' essays were written without the aid of AI, the quality of the essays improved in 2023. They were more informative and context-providing, with fewer stylistic and grammar flaws. The quality of the discussion was similar to previous years. The amount of personal comments and observations decreased. However, this is probably attributable to the change in the assignment that diverted part of the attention of the students from the topic of the essay to dealing with the new tool.

The quality of students' learning outputs when using GenAI definitely calls for further research. Our tentative conclusion is that students' coursework from now on must always include some practical part, for example peer presentation and defence of the essays.

4 | DISCUSSION

The adoption of ChatGPT by the students was swift and relatively easy, although some of them encountered a problem in articulating the question(s) in a way that allows for the extraction of meaningful and informative output. This is no surprise, as the students are already used to other IT tools, often based on some form of AI, such as search engines or translators. This observation is in agreement with other authors who stress positive attitudes towards the emergence of AI, its benefits and strengths (Gherheş & Obrad, 2018; Sandu & Gide, 2019; Chen et al., 2023; Deng & Lin, 2023; Kung et al., 2023; etc.).
At the same time, these authors, and many others, warn about the concerns and challenges of the new technology. In this study, we encountered several of them. First, there is the question of accountability (Cotton et al., 2023; Susnjak, 2022). While some students cited ChatGPT regularly in their essays, others indistinguishably mixed its answers with other sources. Most students used a declaration of the type 'This section was prepared with the aid of ChatGPT'. This confusion probably underlines the need for explicit guidelines on using AI properly rather than a lack of academic integrity. At least one thing seems clear: the author, a human person, is responsible. However, this might also suggest that it is acceptable to use ChatGPT even without a declaration if the author verifies and accommodates the outputs. It can be argued that, similarly, we do not cite the usage of tools such as DeepL or Google Translate that are also based on AI language models. It appears to us that confronting this situation is inevitable if we do not opt for a widespread ban on generative AIs or rely on anti-AI-plagiarism software, both of which are rather unrealistic.

Another issue relates to the decline in students' higher-order thinking skills, about which some of the authors worried (e.g. Farrokhnia et al., 2023; Zirar, 2023). Based on our findings, it can be argued that, at least in some cases, the interaction with AI facilitated students' learning: their system thinking and critical thinking competence. In light of the particular focus of the students' essays (environmental and global problems), AI also promoted their competence for sustainability, supporting the results of Bianchi et al. (2022). Based on this finding, AI may be considered an effective tool supporting education for sustainability in higher educational institutions. It should be noted, on the other hand, that the students associated AI's effect on critical thinking with its imperfection, the perceived need to re-formulate their question, or to verify the obtained response, which lacked the necessary depth. From this perspective, it may be reasonable to see potential risks in the effect of AI's development on students' willingness to question its outputs. As a result, it is not easy to draw a clear conclusion on the effect of GenAI on students' learning in the long-term perspective: the contemporary benefits may be reduced in the future. What seems safe to say is that from now on any meaningful student assignment will require a practical part, experiment and/or (physical) product of some sort, specific to each student. While even laboratory protocols may be vulnerable to cheating because of AI (Humphry & Fuller, 2023), the actual work in the supervised laboratory or the field has to be performed by the students themselves.

Some students' tendency to personify AI is uncharted territory yet to be investigated by further studies. It is possible that the dialogical form of interaction with AI deviates from the query-response format applied in 'traditional' retriever systems and, as a result, changes the perception of the instrument. While we do not imply that the students truly consider AI a living being, they may see it subconsciously as more than just a tool, a partner of its kind. The ambivalence of this tendency is actually reflected in both responses of the ChatGPT creator, mentioning that the instrument 'can offer guidance, encouragement, and resources to help students cope with stress, anxiety, or other challenges they may face' (OpenAI, 2023), but also that 'it lacks genuine emotions and empathy. Students seeking emotional support may not receive the same level of understanding and compassion as they would from human interactions, potentially impacting their well-being' (OpenAI, 2023). We note at this point that the fact that this study's authors used AI responses as something to be discussed reflects its new, emerging, and different quality.

Furthermore, some students mentioned potentially negative social impacts of spreading AI, such as the risk of biased responses. It is unclear if such concerns were based on their experience when interacting with ChatGPT or if this opinion was influenced by the contemporary public debate (Farrokhnia et al., 2023; Kasneci et al., 2023; Rozado, 2023). Regardless of the source, the concerns reflect the fear of unexpected social impacts of the new technology widespread within the population. In this light, we argue for including AI in academic curricula not only as an information tool, but also as a subject of debate, investigation, and thorough analysis.

Finally, we need to address the problem of personal integrity. AI is a tool that reaches deeper into human nature than, say, a word processor. AI may provide emotional support to the students, but it may also become a source of perceived risk and challenge their emotional stability (OpenAI, 2023). This cannot be ignored, and GenAI cannot be considered a tool only (Perkins, 2023). Therefore, if the students' beliefs against using AI are strong, as we encountered with respondent (R17, M), we respect that. In our opinion, people should be free to opt out of the use of AIs and be given a chance to accomplish the tasks by other means. This imperative is relatively easy to follow in education and perhaps more difficult in many (near-future) jobs. Either way, this issue should not be overlooked.

4.1 | Limitations

The findings of this study are based on a relatively small (N = 25) and specific (university students) sample. Other groups, particularly younger students, may reflect on their interaction with AI differently. As we have already discussed, students reported their experience with GenAI in a specific stage of its development (spring 2023), with all the initial imperfections. More advanced technology may amplify some of the findings of this study while suppressing others.

Last but not least, the qualitative design of this study does not allow for generalization of its findings. Follow-up research might focus on testing some of the hypotheses drawn from this or other studies. For example, is there a relationship between the dialogical nature of students' interaction with AI and the tendency towards its personification? How widespread is this tendency among its users? Could the suggested positive impact of the AI interaction on students' system and critical thinking competence be verified among other groups? These questions call for further investigation.

5 | CONCLUSIONS
Most of the university students assigned the task of using an early version of ChatGPT in their homework quickly developed the necessary skills and positive attitudes towards the tool. There were exceptions, including one ideologically based refusal to interact with AI. We cannot confirm the concern about losing critical thinking abilities when using GenAI. On the contrary, at least some students acknowledged the encouragement for deeper insight and critical evaluation of AI outputs. However, this effect may be connected with ChatGPT's imperfections and therefore be temporary. An interesting tendency worth further research is the possible bias towards personification when using this tool, compared with other tools that 'do not chat' with the user, such as Wikipedia or search engines.

ChatGPT and other GenAIs have already become part of the education environment and will most definitely stay with us. Their message to educators and education, at least as we hear it, is strong and clear: pursue more personal, experiment-based education with formative assessment and peer review, and technology will stand by your side.

AUTHOR CONTRIBUTIONS
Josef Šedlbauer: Conceptualization; writing – original draft; writing – review and editing; data curation; supervision; investigation. Jan Činčera: Methodology; writing – original draft; writing – review and editing; data curation; investigation. Martin Slavík: Data curation; formal analysis; validation. Adéla Hartlová: Data curation.

ACKNOWLEDGMENT
Open access publishing facilitated by Technicka univerzita v Liberci, as part of the Wiley - CzechELib agreement.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.

ORCID
Josef Šedlbauer https://orcid.org/0000-0003-0610-2211
Jan Činčera https://orcid.org/0000-0003-0704-7402
Martin Slavík https://orcid.org/0000-0002-7404-8387
Adéla Hartlová https://orcid.org/0000-0003-4244-0218

REFERENCES
Arif, T. B., Munaf, U., & Ul-Haque, I. (2023). The future of medical education and research: Is ChatGPT a blessing or blight in disguise? Medical Education Online, 28(1), 2181052. https://doi.org/10.1080/10872981.2023.2181052
Bianchi, G., Pisiotis, U., & Cabrera Giraldez, M. (2022). GreenComp: The European sustainability competence framework. In Y. Punie & M. Bacigalupo (Eds.). Publications Office of the European Union. https://doi.org/10.2760/821058
Biswas, S. S. (2023). Role of Chat GPT in public health. Annals of Biomedical Engineering, 51, 868–869. https://doi.org/10.1007/s10439-023-03172-7
Boubker, O. (2024). From chatting to self-educating: Can AI tools boost student learning outcomes? Expert Systems with Applications, 238, 121820. https://doi.org/10.1016/j.eswa.2023.121820
Cascella, M., Montomoli, J., Bellini, V., & Bignami, E. (2023). Evaluating the feasibility of ChatGPT in healthcare: An analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1), 33. https://doi.org/10.1007/s10916-023-01925-4
Chan, C. K. Y., & Hu, W. (2023). Students' voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. https://doi.org/10.1186/s41239-023-00411-8
Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10(1), 60. https://doi.org/10.1186/s40561-023-00269-3
Chang, C.-H., & Kidman, G. (2023). The rise of generative artificial intelligence (AI) language models – Challenges and opportunities for geographical and environmental education. International Research in Geographical and Environmental Education, 85–89. https://doi.org/10.1080/10382046.2023.2194036
Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25(1), 161–182. https://doi.org/10.1007/s10796-022-10291-4
Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. Journal of Science Education and Technology, 32, 444–452. https://doi.org/10.1007/s10956-023-10039-y
Corbin, J., & Strauss, A. (2008). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). Sage.
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1–12. https://doi.org/10.1080/14703297.2023.2190148
Deng, J., & Lin, Y. (2023). The benefits and challenges of ChatGPT: An overview. Frontiers in Computing and Intelligent Systems, 2, 81–83. https://doi.org/10.54097/fcis.v2i2.4465
EUA. (2023). European University Association: Artificial intelligence tools and their responsible use in higher education learning and teaching. https://eua.eu/resources/publications/1059:artificial-intelligence-tools-and-their-responsible-use-in-higher-education-learning-and-teaching.html
Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 1–15. https://doi.org/10.1080/14703297.2023.2195846
García-Peñalvo, F. J. (2023). The perception of artificial intelligence in educational contexts after the launch of ChatGPT. Education in the Knowledge Society, 24, 1–9. https://repositorio.grial.eu/bitstream/grial/2838/1/01.pdf
Gherheş, V., & Obrad, C. (2018). Technical and humanities students' perspectives on the development and sustainability of artificial intelligence (AI). Sustainability, 10(9), 3066. https://doi.org/10.3390/su10093066
Hopkins, A. M., Logan, J. M., Kichenadasse, G., & Sorich, M. J. (2023). Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectrum, 7(2), pkad010. https://doi.org/10.1093/jncics/pkad010
Humphry, T., & Fuller, A. L. (2023). Potential ChatGPT use in undergraduate chemistry laboratories. Journal of Chemical Education, 100(4), 1434–1436. https://doi.org/10.1021/acs.jchemed.3c00006
Jalil, S., Rafi, S., LaToza, T. D., Moran, K., & Lam, W. (2023). ChatGPT and software testing education: Promises & perils. In 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW) (pp. 4130–4137). IEEE. https://doi.org/10.1109/ICSTW58534.2023.00078
Jang, J., Jeon, J., & Jung, S. K. (2022). Development of STEM-based AI education program for sustainable improvement of elementary learners. Sustainability, 14(22), 15178. https://doi.org/10.3390/su142215178
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Kasneci, E., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
Kim, M., & Adlof, L. (2023). Adapting to the future: ChatGPT as a means for supporting constructivist learning environments. TechTrends, 68, 37–46. https://doi.org/10.1007/s11528-023-00899-x
Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., et al. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), e0000198. https://doi.org/10.1371/journal.pdig.0000198
Lee, H. (2023). The rise of ChatGPT: Exploring its potential in medical education. Anatomical Sciences Education. https://doi.org/10.1002/ase.2270
Lin, Z. (2023). Why and how to embrace AI such as ChatGPT in your academic life. Royal Society Open Science, 10(8), 230658. https://doi.org/10.1098/rsos.230658
Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581. https://doi.org/10.1002/asi.24750
Michalon, B., & Camacho-Zuñiga, C. (2023). ChatGPT, a brand-new tool to strengthen timeless competencies. Frontiers in Education, 8, 1251163. https://doi.org/10.3389/feduc.2023.1251163
Montenegro-Rueda, M., Fernández-Cerero, J., Fernández-Batanero, J. M., & López-Meneses, E. (2023). Impact of the implementation of ChatGPT in education: A systematic review. Computers, 12(8), 153. https://doi.org/10.3390/computers12080153
Mota-Valtierra, G., Rodríguez-Reséndiz, J., & Herrera-Ruiz, G. (2019). Constructivism-based methodology for teaching artificial intelligence topics focused on sustainable development. Sustainability, 11(17), 4642. https://doi.org/10.3390/su11174642
Mulrow, J., Gali, M., & Grubert, E. (2022). The cyber-consciousness of environmental assessment: How environmental assessments evaluate the impacts of smart, connected, and digital technology. Environmental Research Letters, 17(1), 013001. https://doi.org/10.1088/1748-9326/ac413b
OpenAI. (2023). What learning benefits can ChatGPT provide to students? What are the risks of using ChatGPT in academic environment? https://chat.openai.com/
Patton, M. Q. (2002). Qualitative research and evaluation methods. Sage.
Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching and Learning Practice, 20(2), 7. https://doi.org/10.53761/1.20.02.07
Rillig, M. C., Ågerstrand, M., Bi, M., Gould, K. A., & Sauerland, U. (2023). Risks and benefits of large language models for the environment. Environmental Science & Technology, 57(9), 3464–3466. https://doi.org/10.1021/acs.est.3c01106
Romero-Rodríguez, J.-M., Ramírez-Montoya, M.-S., Buenestado-Fernández, M., & Lara-Lara, F. (2023). Use of ChatGPT at university as a tool for complex thinking: Students' perceived usefulness. Journal of New Approaches in Educational Research, 12(2), 323. https://doi.org/10.7821/naer.2023.7.1458
Rozado, D. (2023). The political biases of ChatGPT. Social Sciences, 12(3), 148. https://doi.org/10.3390/socsci12030148
Saldaña, J. (2015). The coding manual for qualitative researchers (3rd ed.). Sage.
Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), 887. https://doi.org/10.3390/healthcare11060887
Salvagno, M., Taccone, F. S., & Gerli, A. G. (2023). Can artificial intelligence help for scientific writing? Critical Care, 27(1), 75. https://doi.org/10.1186/s13054-023-04380-2
Sandu, N., & Gide, E. (2019). Adoption of AI-chatbots to enhance student learning experience in higher education in India. In 2019 18th International Conference on Information Technology Based Higher Education and Training (ITHET) (pp. 1–5). IEEE. https://doi.org/10.1109/ITHET46829.2019.8937382
Susnjak, T. (2022). ChatGPT: The end of online exam integrity? arXiv. https://doi.org/10.48550/arXiv.2212.09292
Thomas, D. R. (2006). A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation, 27(2), 237–246. https://doi.org/10.1177/1098214005283748
Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15. https://doi.org/10.1186/s40561-023-00237-x
Yang, X., Wang, Q., & Lyu, J. (2023). Assessing ChatGPT's educational capabilities and application potential. ECNU Review of Education, 20965311231210006. https://doi.org/10.1177/20965311231210006
Zhang, H., & Shao, H. (2024). Exploring the latest applications of OpenAI and ChatGPT: An in-depth survey. Computer Modeling in Engineering and Sciences, 138(3), 2061–2102. https://doi.org/10.32604/cmes.2023.030649
Zirar, A. (2023). Exploring the impact of language models, such as ChatGPT, on student learning and assessment. Review of Education, 11(3), e3433. https://doi.org/10.1002/rev3.3433

SUPPORTING INFORMATION
Additional supporting information can be found online in the Supporting Information section at the end of this article.

How to cite this article: Šedlbauer, J., Činčera, J., Slavík, M., & Hartlová, A. (2024). Students' reflections on their experience with ChatGPT. Journal of Computer Assisted Learning, 40(4), 1526–1534. https://doi.org/10.1111/jcal.12967
