Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2024.Doi Number
ABSTRACT In the context of rapid advancements in artificial intelligence (AI), enhancing AI literacy among college students is crucial for promoting the sustainable development of higher education, and it is imperative to strengthen the assessment and training of students' AI literacy. However, there is currently a lack of effective tools for evaluating AI literacy among students in developing countries. Therefore, based on previous literature and the Chinese context, this study developed and validated an Artificial Intelligence Literacy Scale for Chinese College Students (AILS-CCS). After developing the initial framework, this study collected survey data (N = 546) through random sampling. Following exploratory factor analysis (EFA) and confirmatory factor analysis (CFA), the final validated AILS-CCS emerged as a theoretically and empirically consistent instrument consisting of 15 items across four dimensions: Awareness, Usage, Evaluation, and Ethics. This instrument holds significant value for the assessment and training of AI literacy among college students in China and other developing countries.
students based on the affective, behavioral, cognitive, and ethical (ABCE) model, which includes six dimensions: intrinsic motivation, self-efficacy, behavioral intention, behavioral engagement, know and understand, and use and apply AI[15]. However, as mentioned earlier, current research lacks studies assessing AI literacy among college students in developing countries, and effective AI literacy assessment is fundamental for conducting digital literacy initiatives and evaluating the effectiveness of AI education. The purpose of this study is to develop a scale that can effectively measure AI literacy among Chinese college students, thereby helping China and other developing countries enhance AI literacy among college students and better utilize AI technology.

III. Method

Based on the classic scale development procedures initially proposed by Churchill (1979)[32] and the subsequent practical applications in AI literacy scale development, this study followed several fundamental steps for the development and validation of the scale: (1) initial scale construction; (2) formal data collection; (3) exploratory factor analysis (EFA); and (4) confirmatory factor analysis (CFA).

The initial scale construction phase primarily included the construction of the AI literacy framework, collection and organization of the item pool, initial item purification, and pilot testing. The framework construction and item pool organization were based on extensive literature reviews. Subsequently, we purified the initial items mainly through expert evaluations. Following this, we invited 25 Chinese college students to participate in a pilot test and conducted in-depth interviews to assess the acceptability and comprehensibility of the initial questionnaire. Based on the feedback, we further refined the scale, resulting in the initial version of the scale.

After constructing the initial scale, formal data collection commenced. We conducted the survey through random sampling, ensuring anonymity and promising respondents that their information would be kept strictly confidential and not used for any illegal purposes. Completing the questionnaire took approximately 3-5 minutes. Ultimately, we collected 493 questionnaires from five universities in Zhejiang Province, China, of which 448 were valid. Following previous research practices, we divided the sample into two parts: sample A and sample B, used for exploratory factor analysis and confirmatory factor analysis, respectively.

During the exploratory factor analysis phase, we used sample A and applied principal component analysis in SPSS to determine the factor structure, deleting non-compliant items based on certain criteria. In the confirmatory factor analysis phase, we used sample B to validate the factor structure identified in the exploratory factor analysis phase. We utilized AMOS software for the analysis and tested the model fit. Finally, we examined the model's convergent validity and discriminant validity, further demonstrating the rationality of the factor structure.

IV. Scale Development Process

A. Initial Scale Construction

Based on the AI literacy framework established by Wang et al.[11], we conceptualized an original AI literacy framework comprising four dimensions: Awareness, Usage, Evaluation, and Ethics. Awareness refers to the understanding of AI concepts and basic principles. Usage denotes the ability to use AI in daily life and learning. Evaluation signifies the ability to critically assess AI tools and AI-generated content. Ethics involves the ethical and safe use of AI.

We incorporated relevant items from previous studies[11],[13]-[15],[23], and adapted some items to fit the local internet environment in China, thereby determining the initial items for the four dimensions of the AI literacy framework. Subsequently, we employed an expert validation approach to purify the initial items, making the following key modifications: first, we removed items that were not suitable for the living and learning characteristics of Chinese college students. Second, we revised items that were inaccurately phrased or prone to causing misunderstandings.

Additionally, to further refine the questionnaire design, the authors conducted in-depth interviews with a sample of college students. Initially, respondents were asked to complete the original version of the questionnaire and were then asked if they could understand all the questions and complete the questionnaire smoothly. The respondents indicated that they could understand all the questions well, but some items needed adjustments to facilitate better responses. For example, the respondents mentioned that they did not adapt well to reverse-worded items. Therefore, we changed all reverse-worded items to positive ones to maintain consistency with the style of other items. For instance, the original item "I am never vigilant about privacy and information security issues when using artificial intelligence applications or products." was revised to "I am always vigilant about privacy and information security issues when using artificial intelligence applications or products." Next, we randomly selected some tasks from these items and required respondents to perform them on-site during the interviews (for example, logging into AI tools and interacting with them) to test the consistency between their self-assessed AI skills and their actual AI skills. We found that the participants' self-reported answers were consistent with their actual behaviors and abilities. After the aforementioned steps, the finalized initial version of the AILS-CCS is shown in Table 1.

B. Survey and Sample Statistics

The survey was conducted nationwide through an online questionnaire platform. The questionnaire consisted of two parts: the first part covered the demographic characteristics of the respondents, and the second part was a scale measuring the respondents' AI literacy, using a five-point Likert scale. A total of 593 questionnaires were collected, and after excluding those with too many missing answers or obvious anomalies,
546 valid questionnaires were obtained, resulting in an effective response rate of approximately 92.1%. Table 2 summarizes the demographic characteristics of the respondents.

To examine the potential factor structure of the scale and the validity and reliability of the corresponding dimensions, we divided the sample into two groups: sample A and sample B. Sample A was used for exploratory factor analysis (EFA) to explore the factor structure of the scale, and sample B was used for confirmatory factor analysis (CFA) to test the factor structure and measurement invariance, thereby improving the validity and model fit.
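The split-sample procedure described above can also be reproduced outside of SPSS. The sketch below is a minimal illustration in Python using pandas and the factor_analyzer package, assuming hypothetical column names item_1 ... item_23 for the 23 initial Likert items and an even random split; it is not the study's actual analysis pipeline, which relied on SPSS and AMOS.

# Illustrative EFA workflow (hypothetical file and column names; the study itself used SPSS).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

responses = pd.read_csv("ails_ccs_responses.csv")          # hypothetical data file
items = responses[[f"item_{i}" for i in range(1, 24)]]     # 23 five-point Likert items

# Randomly split the valid sample into sample A (for EFA) and sample B (for CFA).
sample_a = items.sample(frac=0.5, random_state=2024)
sample_b = items.drop(sample_a.index)

# Factorability checks on sample A before extracting factors.
chi_square, p_value = calculate_bartlett_sphericity(sample_a)
kmo_per_item, kmo_total = calculate_kmo(sample_a)
print(f"Bartlett chi2 = {chi_square:.2f} (p = {p_value:.4f}), overall KMO = {kmo_total:.3f}")

# Principal-component extraction with varimax rotation and four factors,
# mirroring the hypothesized Awareness / Usage / Evaluation / Ethics structure.
efa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
efa.fit(sample_a)
print(pd.DataFrame(efa.loadings_, index=sample_a.columns).round(2))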
TABLE 1. Initial Version of AILS-CCS

Awareness (literature sources: [11], [14], [23])
1. I understand the definition of artificial intelligence.
2. I am familiar with some underlying principles of artificial intelligence (e.g., linear models, decision trees, machine learning).
3. I understand how artificial intelligence perceives the world (e.g., seeing, hearing) to perform various tasks.
4. I can distinguish between intelligent devices and non-intelligent devices.
5. I can compare different concepts related to artificial intelligence (e.g., the difference between deep learning and machine learning).
6. I understand how artificial intelligence technology can aid my learning and daily life.

Usage (literature sources: [11], [13], [15], [23])
7. I am proficient in using artificial intelligence applications or products.
8. I can use artificial intelligence applications or products to help me solve problems in daily life.
9. I can leverage artificial intelligence for innovation (e.g., proposing innovative solutions or ideas).
10. I can use artificial intelligence applications or products to assist my learning.
11. I can select the most appropriate artificial intelligence application or product for a specific task.
12. I can name some applications or products that use artificial intelligence technology.

Evaluation (literature sources: [11], [14])
13. I can select the appropriate solution from various options provided by artificial intelligence.
14. I verify the accuracy of content generated by artificial intelligence when I have doubts about it.
15. I know how to check the reliability of content generated by artificial intelligence.
16. I can evaluate the limitations of different artificial intelligence applications or products.
17. I can identify biases in content generated by artificial intelligence.
18. I remain skeptical or cautious about content generated by artificial intelligence.

Ethics (literature sources: [11], [14], [15])
19. I always adhere to ethical principles when using artificial intelligence applications or products.
20. I can describe potential legal issues and try to avoid them when using artificial intelligence applications or products.
21. I am always vigilant about privacy and information security issues when using artificial intelligence applications or products.
22. I can critically reflect on the impact of artificial intelligence on individuals and society.
23. I am always alert to the misuse of artificial intelligence technology.
understand how artificial intelligence perceives the world (e.g., seeing, hearing) to perform various tasks.", and item 5 "I can compare different concepts related to artificial intelligence (e.g., the difference between deep learning and machine learning)." Factor 3 is identified as "Awareness," measuring the understanding of AI concepts and basic principles.

Factor 4 includes item 7 "I am proficient in using artificial intelligence applications or products.", item 8 "I can use artificial intelligence applications or products to help me solve problems in daily life.", and item 10 "I can use artificial intelligence applications or products to assist my learning." Factor 4 is identified as "Usage," measuring the ability to proficiently use AI in daily life.

D. Confirmatory Factor Analysis

To validate the factor structure identified in the exploratory factor analysis, confirmatory factor analysis (CFA) was conducted using AMOS 24.0 software with the maximum likelihood estimation method. The model fit was evaluated using model fit indices. The model and fit results are shown in Figure 1.

FIGURE 1. Confirmatory Factor Analysis Fit Results of AILS-CCS

Based on the suggestions by Hair et al.[35], this study selected relevant fit indices to assess the model's fit, focusing on absolute fit indices (CMIN/df, RMSEA) and incremental fit indices (NFI, CFI)[36]. The chi-square value (CMIN) is an indicator for judging the overall model fit, used to assess the difference between the sample covariance matrix and the theoretical model covariance matrix. A smaller CMIN indicates a better fit between the theoretical model and the actual data. The ratio of chi-square to degrees of freedom (CMIN/df) should be less than 3, indicating a good fit. RMSEA (Root Mean Square Error of Approximation) values between 0.05 and 0.08 represent a good fit. NFI (Normed Fit Index) and CFI (Comparative Fit Index) values greater than 0.9 indicate a good fit. The test results are shown in Table 4, demonstrating that all model fit indices meet the fit criteria, indicating a good model fit.

TABLE 4. Model Fit Results
Measure     Threshold          Estimate
CMIN        —                  198.063
CMIN/df     Between 1 and 3    2.358
RMSEA       <0.08              0.076
CFI         >0.9               0.941
NFI         >0.9               0.903
Note: CMIN = chi-square value; df = degrees of freedom; RMSEA = Root Mean Square Error of Approximation; CFI = Comparative Fit Index; NFI = Normed Fit Index. CMIN/df values between 1 and 3, RMSEA values less than .08, and CFI and NFI values greater than .900 suggest adequate model fit.

TABLE 5. Confirmatory Factor Analysis Results
Dimension     Item   Std. Coefficient   CR      AVE
Awareness     AW1    0.65               0.849   0.587
              AW2    0.83
              AW3    0.84
              AW4    0.73
Usage         US1    0.75               0.862   0.676
              US2    0.87
              US3    0.84
Evaluation    EV1    0.73               0.868   0.622
              EV2    0.86
              EV3    0.81
              EV4    0.75
Ethics        ET1    0.74               0.890   0.669
              ET2    0.84
              ET3    0.83
              ET4    0.86
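The fit indices in Table 4 follow their standard definitions. For reference, the expressions below are the usual textbook formulas rather than formulas reproduced from this article, with M denoting the hypothesized model, B the baseline (independence) model, df the degrees of freedom, and N the sample size:

\[
\mathrm{CMIN}/df = \frac{\chi^2_M}{df_M}, \qquad
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\, 0)}{df_M\,(N-1)}},
\]
\[
\mathrm{NFI} = \frac{\chi^2_B - \chi^2_M}{\chi^2_B}, \qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_B - df_B,\, 0)}.
\]

As a consistency check, CMIN = 198.063 divided by CMIN/df = 2.358 implies roughly 84 degrees of freedom, which matches what a four-factor, 15-item CFA with freely correlated factors would yield (120 observed variances and covariances minus 36 free parameters).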
Next, the composite reliability, convergent validity, and discriminant validity of the scale were further tested. Generally, a composite reliability greater than 0.700 indicates good composite reliability of the sample data[37]. As shown in Table 5, the composite reliability of all four factors is greater than 0.700, indicating that the scale has strong reliability. Convergent validity is typically judged by the following criteria: standardized factor loadings are all greater than 0.500[38]; average variance extracted (AVE) is greater than 0.500[39]-[40]; and composite reliability (CR) is greater than 0.700[37]. If all these conditions are met, the sample data are considered to have good convergent validity. The analysis results show that the factor loadings of all items are higher than 0.500, the composite reliability of each factor is greater than 0.700, and the AVE values are all greater than 0.500. Therefore, the scale is considered to have good convergent validity.

Discriminant validity is generally determined by whether the correlation coefficients between a factor and the other factors are less than the square root of its AVE value[40]. As shown in Table 6, the correlation coefficients between all factors and the other factors are less than the square roots of their respective AVE values, indicating that the scale has good discriminant validity.

TABLE 6. Discriminant Validity Analysis Results
             Awareness   Usage   Evaluation   Ethics
Awareness    0.766
Usage        0.405       0.822
Evaluation   0.383       0.526   0.789
Ethics       0.152       0.489   0.416        0.818
Note: The values on the diagonal are the square roots of the AVE values for the corresponding dimensions.

E. Analysis of Demographic Influences on AILS-CCS

To further analyze the differences in AI literacy among Chinese college students based on demographic factors, we conducted an ordinary least squares (OLS) regression analysis on sample B to examine the relationship between the scores of each AI literacy sub-dimension and demographic variables. The regression results are shown in Table 7.

In the dimensions of AI Awareness, Usage, and Evaluation, we found that the coefficients for gender were significantly positive, indicating that male college students tend to score higher in AI Awareness, Usage, and Evaluation. This also suggests that male students have better abilities in understanding, using, and evaluating AI than female students. There were no significant differences between male and female students in AI Ethics.

Notably, the frequency of AI usage was significantly positively correlated with all four dimensions, indicating that higher frequency of AI usage is associated with higher overall AI literacy.
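As a supplement, the CR and AVE values in Table 5 can be reproduced directly from the standardized loadings using the usual formulas CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / k. The short Python sketch below is illustrative only (the function names are ours, not from the study); running it on the Awareness loadings from Table 5 gives CR ≈ 0.849, AVE ≈ 0.587, and √AVE ≈ 0.766, matching Tables 5 and 6.

# Illustrative check of composite reliability (CR), average variance extracted (AVE),
# and the Fornell-Larcker diagonal from standardized CFA loadings.
# Function names are ours, not part of the study.
from math import sqrt

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    sum_l = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + error_var)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

awareness = [0.65, 0.83, 0.84, 0.73]  # standardized loadings from Table 5
cr = composite_reliability(awareness)
ave = average_variance_extracted(awareness)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}, sqrt(AVE) = {sqrt(ave):.3f}")
# Expected output (cf. Tables 5 and 6): CR = 0.849, AVE = 0.587, sqrt(AVE) = 0.766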
TABLE 7. Regression Analysis: AILS-CCS and Demographics
              Awareness            Usage                Evaluation           Ethics
              β          t         β          t         β          t         β          t
Gender        0.135*     2.345     0.117*     2.135     0.202***   3.820     0.111      1.763
Grade        -0.059     -1.130    -0.024     -0.481     0.036      0.754     0.048      0.843
AI use        0.532***   9.221     0.593***  10.786     0.570***  10.772     0.425***   6.726
F            44.030***            56.513***            66.897***            23.837***
Adjusted R²   0.356                0.416                 0.458                0.226
Note: *p < 0.05, **p < 0.01, ***p < 0.001.
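The regressions in Table 7 can be reproduced with any standard OLS routine. The sketch below uses Python's statsmodels formula API; the file name and the column names (gender, grade, ai_use, and the four sub-dimension scores) are assumptions for illustration rather than the study's actual variable coding. Table 7 appears to report standardized coefficients, so the variables would need to be z-scored first to match it exactly.

# Illustrative OLS regressions of each AI literacy sub-dimension on
# demographic variables (hypothetical file and column names).
import pandas as pd
import statsmodels.formula.api as smf

sample_b = pd.read_csv("ails_ccs_sample_b.csv")  # hypothetical data file

# Z-score the numeric variables so the coefficients are standardized,
# since Table 7 appears to report standardized betas.
numeric = sample_b.select_dtypes("number")
sample_b[numeric.columns] = (numeric - numeric.mean()) / numeric.std()

for outcome in ["awareness", "usage", "evaluation", "ethics"]:
    fit = smf.ols(f"{outcome} ~ gender + grade + ai_use", data=sample_b).fit()
    print(f"--- {outcome} ---")
    print(fit.params.round(3))    # coefficients (cf. the beta columns in Table 7)
    print(fit.tvalues.round(3))   # t statistics
    print(f"Adjusted R2 = {fit.rsquared_adj:.3f}, F = {fit.fvalue:.3f}")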
factor structure and test model fit, while also validating convergent and discriminant validity. The final version of the AILS-CCS, as shown in Table 8, includes four dimensions with 15 items. Specifically, the "Awareness" dimension (comprising items 1, 2, 3, and 5) assesses students' understanding of the definition and basic principles of AI; the "Usage" dimension (comprising items 7, 8, and 10) evaluates the ability to use AI tools (such as ChatGPT and Baidu Wenxin Yiyan) to assist learning or solve daily problems; the "Evaluation" dimension (comprising items 13, 16, 17, and 18) measures the ability to critically evaluate AI-generated content and assess the limitations of the tools themselves; and the "Ethics" dimension (comprising items 19, 21, 22, and 23) assesses adherence to ethical standards and the ability to safeguard against security risks (such as privacy breaches) when using AI products.
Table 8. Final Version of AILS-CCS (Including 15 Items)
In a future AI society, it is crucial to improve the AI literacy of every college student. Therefore, we believe that the scale developed in this study is of great significance for promoting the digital transformation of higher education. It also holds substantial value for bridging the digital divide, ensuring that every student can benefit from AI technologies. This scale can provide universities with an effective tool to understand students' AI literacy levels and aid in evaluating the effectiveness of AI education. Universities can use the specific scores from the four dimensions of this scale to implement differentiated training, thereby improving training efficiency.

B. Practical Implications

To comprehensively enhance the AI literacy of university students, institutions of higher education can implement a series of systematic and comprehensive measures across the four dimensions of AI literacy. First, enhancing AI awareness is crucial. Universities should offer courses such as "Introduction to Artificial Intelligence" that cover basic AI concepts, principles, and application scenarios to effectively strengthen students' foundational knowledge of AI. Additionally, regularly organizing lectures and seminars that introduce cutting-edge AI technologies and applications can help students stay abreast of technological advancements and broaden their horizons. In terms of improving AI usage skills, universities should encourage students to use AI tools reasonably and provide corresponding training and guidance. However, it is also essential to establish regulatory systems to ensure that students use AI technology within reasonable limits, maintain academic integrity, and avoid over-reliance on AI, which could affect their autonomous learning capabilities. To strengthen AI evaluation skills, universities can use case studies and specialized assessment projects to teach students how to evaluate the effectiveness and potential risks of AI applications, thereby enhancing their evaluation capabilities. In the realm of AI ethics education, universities should emphasize ethical and safety education related to artificial intelligence, ensuring that students adhere to relevant ethical standards and legal regulations while using AI technologies, and increasing their awareness of privacy protection and ethical standards.

While enhancing the overall AI literacy of university students, special attention should be given to particular groups such as female students, implementing targeted measures to help them improve their AI literacy. As the research results indicate, female students exhibit disadvantages in AI awareness, usage, and evaluation. This may be due to their insufficient grasp of AI principles or a lack of confidence in information technologies such as AI. Therefore, universities should provide personalized support, establish special tutoring classes for female students, and offer more learning resources and support to help them improve their understanding and evaluation capabilities of AI. A mentorship system could be established, where mentors guide female students through their challenges and questions in AI learning. Furthermore, universities should create more opportunities for female students to interact with and use AI technologies, encouraging them to actively participate in AI-related clubs and community activities to increase their interest and engagement in AI and boost their self-efficacy. By implementing these practical and feasible measures, universities can ensure that every student benefits from the advancements brought by AI technologies. This approach not only facilitates the digital transformation of higher education but also helps bridge the digital divide, promoting equitable access to artificial intelligence technologies[45].

Meanwhile, it is essential to regularly assess AI literacy among university students. Only by understanding which aspects of AI literacy students score lower on can targeted training be effectively conducted. When conducting assessments, it is recommended to use this scale flexibly and make adjustments based on the discipline or major of the individuals being assessed, as the AILS-CCS scale developed in this study is intended for general college students rather than students in specific fields.

C. Limitations and Further Research

This study has certain limitations that need to be addressed in future research. First, although the development process of the AILS-CCS was informed by extensive literature and research frameworks, it cannot be guaranteed that these sources encompass all aspects of AI literacy. There may still be unidentified areas of knowledge, indicating that much work remains to be done on this topic. Although this questionnaire is relatively comprehensive, it currently includes only four dimensions. As AI technology continues to evolve, additional issues may arise that require attention. Therefore, future research should strive to expand the dimensions and items of the AI literacy scale and ensure that the scale is continuously updated.

Second, the AILS-CCS questionnaire is based on self-assessment by respondents, and a limitation of self-assessment questionnaires is that the results may be influenced by the subjective biases of the respondents. Thus, it is important not only to develop subjective scales but also to create objective scales, such as using true/false or multiple-choice questions to measure respondents' AI literacy levels. Additionally, this study primarily employed questionnaire surveys; future research may need to explore other methodologies. For example, experimental testing methods under controlled conditions could be used to collect data for the scale, and the structure and items of the scale could be optimized based on experimental data.

Finally, this scale was developed based on a sample of Chinese university students, and future research needs to translate the scale into other languages to test its applicability in different linguistic and cultural contexts. This would help enhance the international applicability of the scale. Ensuring measurement invariance is an important condition for using the AILS-CCS cross-linguistically and cross-culturally. It is also important to note that this scale was not developed for a specific population, and we recommend that it be used only
for measuring AI literacy in general university student populations.

VI. Conclusion

In the AI era, AI literacy has become a fundamental skill and essential literacy for citizens. Because today's college students are digital natives, enhancing their AI literacy plays a vital role in promoting the sustainable development of higher education. Conducting AI literacy training is an important way to enhance college students' AI literacy, and scientifically and effectively assessing college students' AI literacy levels is the foundation for digital literacy initiatives. Therefore, developing an AI literacy scale for college students is crucial.

The main contribution of this study is the development and validation of an AI literacy scale for Chinese college students (AILS-CCS), enriching the research on AI literacy in the context of developing countries. We primarily employed four steps to achieve the research objectives. The first step was the initial scale construction, mainly through a literature review to build a preliminary framework and initial item pool. Then, expert evaluations, pilot testing, and in-depth interviews were used to modify and refine the initial items, forming the first version of the AILS-CCS. We ensured that all suggested items genuinely reflected the AI literacy of college students. The second step involved formal survey sampling, with valid questionnaires being divided into samples A and B for subsequent analysis and validation. The third step was conducting exploratory factor analysis to refine the potential scale structure and related items. The fourth step was confirmatory factor analysis to test the consistency of the factor structure and to examine the model's convergent and discriminant validity. Through these steps, this study proposed a reliable AILS-CCS, which consists of four dimensions: Awareness, Usage, Evaluation, and Ethics, encompassing 15 items. This represents an original contribution to the field of AI research in developing countries, and the scale holds significant theoretical and practical implications for the assessment, training, and enhancement of AI literacy among college students in developing countries in the future.

REFERENCES
[1] X. Zhai, X. Chu, C. S. Chai, M. S. Y. Jong, A. Istenic, M. Spector, J.-B. Liu, J. Yuan, and Y. Li, "A review of artificial intelligence (AI) in education from 2010 to 2020," Complexity, vol. 2021, pp. 1–18, 2021. doi: 10.1155/2021/8812542.
[2] S. Reddy, J. Fox, and M. P. Purohit, "Artificial intelligence-enabled healthcare delivery," J. R. Soc. Med., vol. 112, no. 1, pp. 22–28, Jan. 2019. doi: 10.1177/014107681881551.
[3] P. Gupta and M. K. Pandey, "Role of AI for smart health diagnosis and treatment," in Smart Medical Imaging for Diagnosis and Treatment Planning, Chapman and Hall/CRC, 2024, pp. 23-45.
[4] China Youth Daily, "Over 80% of surveyed college students have used AI tools," [Online]. Available: https://baijiahao.baidu.com/s?id=1782763724373222422&wfr=spider&for=pc. Accessed on: May 5, 2024.
[5] A. Hernández-Martín, M. Martín-del-Pozo, and A. Iglesias-Rodríguez, "Pre-adolescents' digital competences in the area of safety. Does frequency of social media use mean safer and more knowledgeable digital usage?" Educ. Inf. Technol., vol. 26, no. 1, pp. 1043-1067, 2021.
[6] Central Committee of the Communist Party of China and the State Council, "China Education Modernization 2035," [Online]. Available: https://hxzyrz.hnnu.edu.cn/_upload/article/files/4e/63/7371e200476784ecb791bb19dd54/098b7518-30b8-4983-83b4-8b4ffc1c933e.pdf. Accessed on: May 5, 2024.
[7] A. J. A. M. Van Deursen, E. J. Helsper, and R. Eynon, "Development and validation of the Internet Skills Scale (ISS)," Inf. Commun. Soc., vol. 19, pp. 804-823, 2016.
[8] F. Siddiq, P. Gochyyev, and M. Wilson, "Learning in Digital Networks–ICT literacy: A novel assessment of students' 21st century skills," Comput. Educ., vol. 109, pp. 11-37, 2017.
[9] M. Ghomi and C. Redecker, "Digital Competence of Educators (DigCompEdu): Development and evaluation of a self-assessment instrument for teachers' digital competence," in Proc. CSEDU, 2019, pp. 541-548.
[10] X. Li and R. Hu, "Developing and validating the digital skills scale for school children (DSS-SC)," Inf. Commun. Soc., vol. 25, pp. 1365-1382, 2020.
[11] B. Wang, P. L. P. Rau, and T. Yuan, "Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale," Behav. Inf. Technol., 2022. doi: 10.1080/0144929X.2022.2072768.
[12] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ, USA: Prentice-Hall, 2010.
[13] A. Carolus, et al., "MAILS–Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies," Comput. Human Behav.: Artif. Humans, vol. 1, no. 2, 2023, Art. no. 100014.
[14] M. C. Laupichler, A. Aster, N. Haverkamp, and T. Raupach, "Development of the 'Scale for the assessment of non-experts' AI literacy'–An exploratory factor analysis," Comput. Human Behav. Rep., vol. 12, 2023, Art. no. 100338.
[15] D. T. K. Ng, W. Wu, J. K. L. Leung, and S. K. W. Chu, "Artificial intelligence (AI) literacy questionnaire with confirmatory factor analysis," in Proc. IEEE Int. Conf. Adv. Learn. Technol., 2023, pp. 233-235.
[16] H. Ç. Bal, C. Kalayci, and S. Artan, "Farklı Gelir Grubuna Sahip Ülkelerde Dijital Bölünmenin Boyutu ve Belirleyicileri [The size and determinants of digital divide in countries of different income groups]," Uluslararası Ekon. Yenilik Dergisi, vol. 1, pp. 107-123, 2015.
[17] S. McMillan, "Literacy and computer literacy: Definitions and comparisons," Comput. Educ., vol. 27, no. 3-4, pp. 161-170, 1996.
[18] D. Buckingham and A. Burn, "Game literacy in theory and practice," J. Educ. Multimedia Hypermedia, vol. 16, no. 3, pp. 323-349, 2007.
[19] D. Buckingham, "Television literacy: A critique," Radical Philos., vol. 51, pp. 12-25, 1989.
[20] M. B. Eisenberg, "Information literacy: Essential skills for the information age," DESIDOC J. Lib. Inf. Technol., vol. 28, no. 2, 2008.
[21] P. Gilster and P. Glister, Digital Literacy, New York, NY, USA: Wiley Computer Pub., 1997.
[22] S. Livingstone, "Media literacy and the challenge of new information and communication technologies," Commun. Rev., vol. 7, no. 1, pp. 3-14, 2004.
[23] D. Long and B. Magerko, "What is AI literacy? Competencies and design considerations," in Proc. CHI Conf. Human Factors Comput. Syst., Apr. 2020, pp. 1-16.
[24] D. T. K. Ng, J. K. L. Leung, K. W. S. Chu, and M. S. Qiao, "AI literacy: Definition, teaching, evaluation and ethical issues," Proc. Assoc. Inf. Sci. Technol., vol. 58, no. 1, pp. 504–509, 2021. doi: 10.1002/pra2.487.
[25] A. Carolus, Y. Augustin, A. Markus, and C. Wienrich, "Digital interaction literacy model–Conceptualizing competencies for literate interactions with voice-based AI systems," Comput. Educ.: Artif. Intell., vol. 4, p. 100114, 2023.
[26] M. Pinski, M. Adam, and A. Benlian, "AI knowledge: Improving AI delegation through human enablement," presented at the 2023 CHI Conf. Human Factors Comput. Syst., Apr. 2023.
[27] F. Faruqe, R. Watkins, and L. Medsker, "Competency model approach to AI literacy: Research-based path from initial framework to model," arXiv preprint arXiv:2108.05809, 2021.
[28] M. Kandlhofer, G. Steinbauer, S. Hirschmugl-Gaisch, and P. Huber, "Artificial intelligence and computer science in education: From kindergarten to university," in Proc. IEEE Front. Educ. Conf., Oct. 2016, pp. 1-9.
[29] D. Cetindamar, et al., "Explicating AI literacy of employees at digital workplaces," IEEE Trans. Eng. Manage., 2022.
[30] I. Lee, S. Ali, H. Zhang, D. DiPaola, and C. Breazeal, "Developing middle school students' AI literacy," in Proc. ACM Tech. Symp. Comput. Sci. Educ., Mar. 2021, pp. 191-197.
[31] S. W. Kim and Y. Lee, "The artificial intelligence literacy scale for middle school students," J. Korea Soc. Comput. Inf., vol. 27, no. 3, pp. 225-238, 2022.
[32] G. A. Churchill Jr., "A paradigm for developing better measures of marketing constructs," J. Mark. Res., vol. 16, pp. 64-73, 1979.
[33] A. Field, Discovering Statistics Using IBM SPSS Statistics, Sage, 2013.
[34] J. C. Nunnally, Psychometric Theory, 2nd ed., New York, NY, USA: McGraw-Hill, 1978.
[35] J. F. Hair, W. C. Black, B. J. Babin, et al., Multivariate Data Analysis: A Global Perspective, New York, NY, USA: Pearson Educ. Int., 2010.
[36] D. Hooper, J. Coughlan, and M. R. Mullen, "Structural equation modelling: Guidelines for determining model fit," Electron. J. Bus. Res. Methods, vol. 6, pp. 53-60, 2008.
[37] R. P. Bagozzi and S. K. Kimmel, "A comparison of leading theories for the prediction of goal-directed behaviours," Br. J. Soc. Psychol., vol. 34, pp. 437-461, 1995.
[38] R. Bailey and S. Ball, "An exploration of the meanings of hotel brand equity," Serv. Ind. J., vol. 26, pp. 15-38, 2006.
[39] R. P. Bagozzi and Y. Yi, "On the evaluation of structural equation models," J. Acad. Mark. Sci., vol. 16, pp. 74-94, 1988.
[40] C. Fornell and D. F. Larcker, "Evaluating structural equation models with unobservable variables and measurement error," J. Mark. Res., vol. 18, pp. 39-50, 1981.
[41] E. Hargittai and S. Shafer, "Differences in actual and perceived online skills: The role of gender," Soc. Sci. Q., vol. 87, no. 2, pp. 432-448, 2006.
[42] Y. J. Park, "My whole world's in my palm! The second-level divide of teenagers' mobile use and skill," New Media Soc., vol. 17, no. 6, pp. 977-995, 2015.
[43] E. C. Tandoc Jr., et al., "Developing a perceived social media literacy scale: Evidence from Singapore," Int. J. Commun., vol. 15, Art. no. 22, 2021.
[44] S. Liu, et al., "Current status and influencing factors of digital health literacy among community-dwelling older adults in Southwest China: A cross-sectional study," BMC Public Health, vol. 22, Art. no. 996, 2022.
[45] H. Abuhassna, et al., "The information age for education via artificial intelligence and machine learning: A bibliometric and systematic literature analysis," Int. J. Inf. Educ. Technol., vol. 14, no. 5, 2024.

Zhenzhen Chen received her master's degree in statistics from Iowa State University in 2019. She is currently an engineer at the Institute of Big Data, Fudan University. She has participated in numerous AI-related research projects, gaining extensive experience. Her research interests include data analysis, quantitative analysis, sample surveys, and small area estimation.