AI Landscape
Topics covered
European Commission
Directorate-General for Digital Services
Directorate B — Digital Enablers & Innovation
Unit B2 — Interoperability and Digital Government (DIGIT.B.2)
Contact: KOTOGLOU Stefanos - Project Officer for the Public Sector Tech Watch
E-mail: [email protected]
European Commission
B-1049 Brussels
Legal Notice
This document has been prepared for the European Commission; however, it reflects the views only of the authors,
and the European Commission is not liable for any consequence stemming from the reuse of this publication.
More information on the European Union is available on the Internet (http://www.europa.eu).
Luxembourg: Publications Office of the European Union, 2025
© European Union, 2025
The reuse policy of European Commission documents is implemented by Commission Decision 2011/833/EU
of 12 December 2011 on the reuse of Commission documents (OJ L 330, 14.12.2011, p. 39). Unless otherwise
noted, the reuse of this document is authorised under a Creative Commons Attribution 4.0 International
(CC BY 4.0) licence (https://creativecommons.org/licenses/by/4.0/). This means that reuse is allowed provided
appropriate credit is given and any changes are indicated.
For any use or reproduction of elements that are not owned by the European Union, permission may need to be
sought directly from the respective rightholders.
Contents

Overview  5
1.1 Context and objective of the report  6
1.2 Structure of the report  7
3.2 Landscape of generative AI guidelines and policies across the EU  22
3.3 Results analysis  25
Landscape of GenAI use cases in Europe  32
4.1 Methodology for collecting and analysing GenAI use cases  34
4.2 Use case analysis  35
5.2 Future steps  50
References  64
Abstract
Context and objective of the report
The rapid emergence of generative artificial intelligence (generative AI or GenAI) has created transformative potential across a range of different sectors, including the public sector. Unlike traditional AI systems that are designed for specific tasks, GenAI systems can create new content (in the form of text, images or audio) based on their training data. This capability has captured the attention of both the public and the private sectors, and adoption has accelerated as organisations experiment with its applications. EU public administrations are beginning to test and use GenAI tools for service delivery and administrative functions. This adoption presents a wide range of opportunities, but it also raises technical, organisational, legal and societal challenges.

This report examines how European public administrations are using emerging GenAI applications by leveraging data from the Public Sector Tech Watch (PSTW) observatory. It evaluates existing guidelines and procedures established by EU Member States to regulate how civil servants and employers use this technology; and analyses the GenAI use case repository provided by the PSTW. It also includes qualitative insights from five targeted interviews that the authors of this report conducted with public administration managers who are actively piloting and implementing GenAI solutions to improve administrative workflows and public service delivery.

This report builds on previous reports that analyse the public sector's adoption of artificial intelligence (AI), blockchain and other emerging technologies. It is a first effort to build and share knowledge with specific regard to the adoption of GenAI technologies within European public administrations.

The report contributes to the body of knowledge in several ways. It shares up-to-date information on the adoption of GenAI within public administrations, providing stakeholders with a first overview of how European public administrations are starting to experiment with and use GenAI tools. Leveraging the PSTW online cases repository 2 allows this report to provide innovative data that provide insights into GenAI adoption. The number of use cases collected is lower than for other AI-based technologies (representing only 4.24% of total AI use cases 3), but it nonetheless signals a rapid uptake of the technology (with more than half of the cases being piloted or started in 2024 alone).

The report is also the first EU effort to map national, regional and local guidelines related to the use of GenAI within European public administrations. Moreover, the methodology used to gather, classify and analyse the identified documents can provide a methodological and taxonomic basis for future research on this matter, for EU bodies and academic researchers alike.

The report aims to improve knowledge-sharing and collaboration between European public administrations, encouraging continuous learning and improvement. It also aims to promote the ethical and effective use of GenAI in the public sector by giving stakeholders practical and strategic insights into its adoption and governance.
2 https://interoperable-europe.ec.europa.eu/collection/public-sector-tech-watch
3 This report analyses 61 GenAI cases, which are part of a collection of 1 343 AI use cases in the PSTW observatory collection.
Structure of the report
More specifically, this report contains the following sections.

Literature and policy landscape
This section summarises the latest developments in GenAI literature, technology, systems and models; and provides a useful table explaining the key terms and concepts. It also includes a brief description of the EU's key GenAI policy instruments and support.

EU guidelines and policies on GenAI deployment and use
This section analyses the mapped national, regional and local guidelines and policies that have been developed by EU Member States to shape and regulate the use of GenAI within their organisations.

Landscape of GenAI use cases in Europe
This section gives a preliminary overview of all the GenAI use cases that the PSTW has collected. Building on quantitative and primary qualitative data analysis, it analyses the data of the use cases and focuses specifically on the relevant themes and patterns that were identified. A qualitative analysis based on five interviews with public administration managers actively piloting and implementing GenAI solutions identified some remarkable highlights that could be replicated in other public sector organisations.

Conclusion
The conclusion summarises the key findings of the report and their implications for the future.
Literature and policy landscape
Generative AI: terminology and background
Terminology adopted
Table 1. Generative AI Keywords
Keywords Descriptions
Foundation model The foundation model concept is a new machine-learning paradigm in which one large model is pretrained on
a huge amount of data (broad data at scale) and can be used for many downstream tasks and applications
(Bommasani et al., 2022). The learning objectives of foundation models tend to be general and largely
focused on the structure of the data itself (i.e. learning representations directly from the data attributes without
the need for a specific underlying truth). Examples of learning objectives are: (i) predicting the next word
when given a sentence; (ii) capturing a distribution of images when given a text prompt; and (iii) capturing
and encoding representative features of data (images, audio or text). Foundation models can therefore be the
basis for GenAI. However, it should be noted that foundation models can also be used for ‘non-generative’
purposes. These would typically imply a more limited output (e.g. a numeric or discrete value) rather than
generating a longer free-form output. Examples include text or image classification.
AI system vs AI model An AI system is a machine-based system that (i) is designed to operate with varying levels of autonomy; (ii) may exhibit adaptiveness after deployment; and (iii) for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments (Regulation (EU) 2024/1689). An AI system comprises various components, including (in addition to the model or models) elements such as interfaces, sensors, conventional software, etc.
Conversely, an AI model is the core computational engine of an AI system. From a scientific and technical standpoint, and in accordance with ISO (2022) terminology, an AI model is a 'physical, mathematical, or otherwise logical representation of a system, entity, phenomenon, process or data' (Fernández-Llorca et al., 2024, p. 6), as described in Figure 1.
[Figure 1. Representation of an AI system and an AI model: the AI model (built with techniques such as GANs, VAEs and RNNs) sits at the core of the AI system, which adds APIs, interfaces, sensors, software, etc. Source: authors' own elaboration.]
General-purpose A general-purpose AI system is ‘an AI system which is based on a general-purpose AI model, and which has
AI system the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems’
(Regulation (EU) 2024/1689 Article 3).
General-purpose A general-purpose AI model is an AI model ‘that displays significant generality and is capable of competently
AI model performing a wide range of distinct tasks regardless of the way the model is placed on the market and that
can be integrated into a variety of downstream systems or applications, except AI models that are used
for research, development or prototyping activities before they are placed on the market’ (Regulation (EU)
2024/1689).
Large language Large language models (LLMs) are a type of AI model primarily used in natural language processing (NLP)
model (LLM) and capable of specifically processing and generating human language, focusing on tasks like text completion,
summarisation, translation and dialogue (Fernández-Llorca et al., 2024; OECD, 2023a). They are a type of
generative AI model that can perform a wide range of different computational tasks (Fernández-Llorca et
al., 2024). LLMs are built using machine-learning techniques (especially deep learning) and are trained on
massive datasets that contain diverse texts. LLMs typically use architectures based on deep neural networks,
specifically transformer models.
Pre-trained model A pre-trained model is a machine-learning model that has already been trained on a large dataset and can
often be adjusted for specific types of tasks. These models often serve as a useful starting-point for developing
new machine-learning applications, because they come with pre-set weights that can be fine-tuned to meet
the requirements of the target task (Encord, 2024).
Generative adversarial A generative adversarial network (GAN) is a type of machine-learning model where two neural networks
network (GAN) (known as the generator and the discriminator) compete against each other using deep learning techniques
to improve their performance. GANs are unsupervised models and operate in a zero-sum game framework
(Fernández-Llorca et al., 2024; OECD, 2023; TechTarget, 2024). During training, the generator’s goal is
to create data that the discriminator cannot easily identify as fake. Both networks are trained together, so
the generator gets better at producing realistic outputs, while the discriminator becomes better at detecting
artificially created data. This adversarial process continues in a feedback loop, driving the generator to
produce higher-quality outputs. For example, GANs can generate highly realistic images of human faces that
do not belong to any real person (TechTarget, 2024).
Variational A variational autoencoder (VAE) is a type of generative model used in machine learning to create new data
autoencoder (VAE) that resemble the input data they were trained on. VAEs consist of two main components: the encoder and the
decoder. This allows VAEs to learn how to extract key features from the input data (encoder) and use those
features to recreate the original input (decoder) (IBM, 2024).
Recurrent neural A recurrent neural network (RNN) is a type of deep-learning model able to process sequential or time-series
network (RNN) data. RNNs excel in tasks where the input data varies in length and is ordered. RNNs work through a structure
composed of a hidden state that captures past information and feedback loops that allow the network to feed
the hidden state back into the model, thus enabling it to process sequences of data (MathWorks, 2024).
Transformers Transformers (or transformer architectures) were introduced in 2017 and revolutionised natural language
processing (NLP) by enabling models to handle long-range dependencies in text more efficiently than previous
methods, such as recurrent neural networks (RNNs) and long short-term memory (LSTM) networks (OECD,
2023a). Key components of transformer architectures are:
positional encoding, which provides information on the position of each part of an input sequence
(e.g. a sequence of text). Transformer models process data in parallel rather than sequentially, so positional
encoding helps the model maintain the correct order of words when generating the output;
the attention mechanism, which draws connections between the different parts of the input sequence,
thus allowing a language model to focus on previously hidden vectors in an input sequence to predict an
output sequence;
the self-attention mechanism, which assigns varying levels of importance to different words within the
same sentence. This helps the model capture dependencies between words.
AI hallucinations Hallucinations are a phenomenon related to LLMs and other generative AI tools wherein the model
occasionally generates non-existent or inaccurate content, which may not even be based on training data.
These misinterpretations (also known as confabulations) occur due to various factors, such as overfitting,
training data bias/inaccuracy and high model complexity (IBM, 2023). Purely connectionist systems, such as
LLMs, might lack mechanisms for maintaining consistency and logical coherence, especially in tasks requiring
abstract reasoning or symbolic representations. This can lead to the generation of hallucinations (Marcus,
2001).
Prompt engineering Prompt engineering is the process of crafting and structuring instructions or ‘prompts’ for LLMs to optimise
their performance for specific tasks. It is a form of programming where prompts define context, rules and
desired outputs. It involves crafting clear, specific and detailed prompts that can guide the model’s output
with the main purpose of preventing hallucinations. Effective prompt engineering employs techniques like
persona setting (where the LLM assumes a role), output customisation (which formats the responses) and
question refinement (which suggests improved queries for better interaction). These methods, documented
in reusable prompt patterns, help address common challenges in LLM interactions (e.g. accuracy, clarity and
task automation) (White et al., 2023).
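The self-attention mechanism described in Table 1 can be sketched in a few lines of code. The sketch below is illustrative only: it computes scaled dot-product self-attention over a toy sequence of two-dimensional vectors, using the input vectors themselves as queries, keys and values (a real transformer first applies learned projection matrices and adds positional encodings, which this sketch omits).

```python
import math

def softmax(scores):
    """Turn raw attention scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(seq, d):
    """Scaled dot-product self-attention over a list of d-dimensional vectors.

    For each position, score it against every position (itself included)
    with a dot product scaled by sqrt(d), softmax the scores into weights,
    and return the weighted average of all vectors as that position's output.
    """
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in seq]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, seq)) for j in range(d)])
    return out

# A toy three-token "sequence" of two-dimensional vectors.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = self_attention(seq, d=2)
```

Each output vector is a weighted average of the whole input sequence, with the weights produced by the softmax; this is how a transformer lets every position draw information from every other position in parallel.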
GenAI development: a background overview

In recent months, GenAI has rapidly become a focal point in discussions about technological innovation, capturing the attention of both the public and the private sectors. Unlike traditional AI systems designed for specific tasks, GenAI systems can create and generate new content in response to prompts, based on their training data. These models can generate not only text but also images, video and audio (or combinations of all three). This capability is powered by deep-learning models that have been trained on vast amounts of data to learn underlying patterns and distributions (Lorenz et al., 2023).

GenAI does not always involve the use of foundation models. However, the most powerful generative models, which are based on architectures like GPT and diffusion models, are now considered foundational and have most recently been associated with the GenAI concept. The underlying concept of GenAI has existed for a while, but the recent widespread use of the term is linked to newer, more powerful AI models rooted in the development of deep neural networks, which started in the 1950s and has progressed through vast research. The earliest generative models developed belong to the subset of AI technologies known as natural language processing (NLP), which understands and uses human language as input.
component to understand input prompts, but they
are distinct from LLMs (Fernández-Llorca et al.,
2024).
The EU policy landscape and future trends

The AI Act, the AI innovation package and the GENAI4EU initiative

GenAI models started proliferating after the EU institutions began their work on regulating AI. This prompted the EU's legislators to include and conceptualise GenAI systems in the final version of the EU AI Act. In the legislation, GenAI is covered under the regulation of general-purpose AI (GPAI) models, whose definition recognises the broad and varied applications of this type of technology:

'General-purpose AI model' means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market (Regulation (EU) 2024/1689, Article 3).
The AI Act applies a risk-based approach in which GPAI models are subject to certain regulatory standards, depending on their level of risk. The EU's AI Office launched a stakeholder consultation process on the future guidelines for GPAI systems and models that all private providers must respect in the future for a safe and secure deployment of AI models. At the time of writing this report in November 2024, the first draft of the first Code of Practice had been released to the public, and the final general-purpose AI Code of Practice was expected to be released and signed by GPAI providers in May 2025.

Following the political agreement reached on the EU AI Act, the Commission also released the new AI Innovation Package in January 2024. This was aimed at boosting the development of trustworthy AI by EU start-ups and small and medium-sized enterprises (SMEs). The Package establishes a new AI Office and releases funds for several measures. The Package also contains the EU AI Start-Up and Innovation Communication (whose aims include boosting innovative applications for GenAI in Europe's industrial ecosystems) while also upholding EU values, tackling risks and promoting the responsible use of AI.

The Package is intended to upgrade the EU's supercomputing resources and make them accessible to AI start-ups for developing and training GenAI systems, through the creation of AI factories. The Package also provides financial support for GenAI, investing around EUR 4 billion until 2027 through Horizon Europe and the Digital Europe Programme. The Commission will not only upgrade infrastructure but also support the development and implementation of Common European Data Spaces to ensure the availability of high-quality data repositories. Initiatives to strengthen the GenAI skills of the EU workforce are planned for educating, training, skilling and reskilling workers.

The Package includes the GenAI4EU initiative, which aims to support emerging GenAI applications through the development of innovative use cases in 14 industrial ecosystems and across the public sector (application areas include robotics, health, biotech, manufacturing, mobility, climate and virtual worlds).

The Commission is also encouraging organisations to prepare for the implementation of the upcoming AI Act through a voluntary initiative known as the AI Pact, which was launched in response to the AI Act coming into force. The Pact aims to promote the early implementation of the AI Act's provisions and is structured around two main pillars:

community building, where the Pact aims to create a network of organisations committed to responsible AI. This network would both promote the exchange of best practices and provide practical guidance on implementing the AI Act through workshops and information sharing;

early implementation, where the Pact encourages organisations to make voluntary pledges to implement certain aspects of the AI Act ahead of schedule (through actions like adopting AI governance strategies, identifying high-risk AI systems, and promoting AI literacy among staff).

Challenges in the European public sector and related regulatory actions

The rapid spread, evolution and development of GenAI technologies present public administrations with both challenges and opportunities when it comes to promoting the strengthening of EU GenAI industries, and trustworthy adoption and use across the private and public sectors.

Public administrations across the EU are facing increasing challenges related to technological infrastructure (particularly the availability of computational resources) in order to facilitate the widespread adoption of GenAI across various sectors. Training large GenAI models requires substantial processing power and resources. The considerable costs associated with this pose entry barriers to start-ups and SMEs. To address this, the EU has amended the Regulation establishing the European High-Performance Computing Joint Undertaking in order to facilitate funding for the development of AI factories capable of training large general-purpose AI (GPAI) models; and to widen access to AI for a broader range of public and private users (including start-ups and SMEs).

Another key challenge is the development of language models for some European local languages other than English (especially regarding conversational models trained on and able to interact based on European data). This is crucial for improving the performance of GenAI models in those languages; for improving the models' knowledge of European datasets; and for ensuring EU data sovereignty. 56% of the open-source datasets available on the Hugging Face platform are in English (other European languages such as French, Spanish, German and Portuguese account for only 3.4%, 2.8%, 1.7% and 1.7% respectively). In 2024, the Commission therefore set up a new European Digital Infrastructure Consortium (the Alliance for Language Technologies EDIC (ALT-EDIC)) to allow the Member States to pool funding and other resources in order to promote the development of innovative LLMs with multilingual and multimodal capabilities 4.

Other challenges arise in relation to further resources needed, such as the availability of high-quality, structured datasets on the EU's and Member States' information in European languages to train EU models properly on history, economy, legislation, health, education, etc. This is also addressed by the ALT-EDIC, which leverages the European Language Data Space to create a common EU data infrastructure to train LLMs.

Another challenge facing public administrations is ensuring regulatory compliance with the AI Act and trustworthy use of AI – not only across industry and the private sector, but also in public sector organisations. The AI Act therefore requires each Member State to establish regulatory sandboxes (at national level or jointly with the competent authorities of other Member States) and to allocate sufficient resources to them. These sandboxes provide a controlled environment for developing and testing innovations in the pre-marketing phase and for proving compliance with the AI Act. The AI Act gathers regulations and principles such as the General Data Protection Regulation (GDPR) and the principles for trustworthy AI, based on the European Declaration on Digital Rights and Principles for the Digital Decade and the Ethics guidelines for trustworthy AI of the High-Level Expert Group on Artificial Intelligence.

EU Member States and public administrations are also currently addressing the practical implications of GenAI adoption. Measures include promoting the responsible and spontaneous use of AI tools by public employees; safeguarding citizen data; ensuring explainability and trustworthiness; and maintaining data sovereignty when procuring these kinds of systems. The increasing adoption of GenAI systems in the public sector means that it is crucial to equip public employees with the necessary skills to develop, test and responsibly use these technologies; and to properly communicate and explain them to the public. Public administrations are therefore developing guidelines, protocols and policies to promote good practices in the public sector.

4 The Consortium is coordinated by France and includes 17 Member States (Bulgaria, Croatia, Czechia, Denmark, Finland, France, Greece, Hungary, Ireland, Italy, the Netherlands, Latvia, Lithuania, Luxembourg, Poland, Slovenia and Spain) and 8 observing Member States (Austria, Belgium, Cyprus, Estonia, Malta, Portugal, Romania and Slovakia). For more information, see: https://language-data-space.ec.europa.eu/related-initiatives/alt-edic_en.
EU guidelines and policies on GenAI deployment and use
Methodology for the guidelines and policy-mapping
Data collection

Our research strategy was to collect publicly available information from official government websites, official repositories of international organisations, and research institutions. We conducted a wide-ranging desk review of online publicly available information to collect – for the first time – guidelines, rules and policies on the use of GenAI within EU administrations. Our structured methodology involved:

1. developing a search strategy to identify the relevant keywords, languages and administrations the research should cover;

Taxonomy

The PSTW team developed a preliminary taxonomy to facilitate the qualitative analysis of the collected documents. This taxonomy was designed to provide an initial analytical framework. It drew on the classification system previously used to ensure that data fields are consistent and applicable to generative AI use cases, guidelines and policies. New data fields have been added in order to enable detailed analysis and categorisation of the mapped documents. Table 2 outlines the taxonomy fields.
Table 2. Guidelines and policy-mapping taxonomy.

Name of the document: Indicates the official name of the document. (Source: document metadata)
Link: Indicates the official website source (when possible) or another relevant source. (Source: web)
Description: Briefly describes the document (possibly including context and main insights). (Source: produced by the author)
Type of document: Indicates the typology of the document or initiative. (Source: produced by the author, based on EU vocabulary terms)
  Policy: high-level document (e.g. a strategic outlook, policy strategy, agenda or strategic plan) that outlines overall goals, values and intentions, thus providing a guide for an action plan or more specific rules and procedures.
  Rule or regulation: specific, enforceable statements (often adopted by legislative or regulatory bodies) that define what is allowed or prohibited. They provide clear boundaries for public servants' actions and are mandatory.
  Guidelines: non-binding recommendations that offer advice and guidance on preferred methods, good practices or recommended approaches.
  Rules of procedure or protocol: step-by-step instructions on how to carry out specific tasks or processes within the organisation. They are usually mandatory and applicable in a specific context, thus ensuring consistency and compliance.
Responsible organisation name and category: Indicates the responsible organisation or owner of the guidelines/proposal and the type of administration (central government, regional government, local government, EU institution or agency, university or research institution). (Source: produced by the author)
Status: Indicates the stage of implementation of the policy, guideline, procedure or rule. (Source: produced by the author)
Year of publication/approval: Indicates the year of publication of the guidelines or approval of the policy. (Source: produced by the author)
Intended user: Indicates the target of the document. (Source: produced by the author)
  Internal users within single public organisations: the document applies only internally to employees within the administration or the body issuing the policy, guidelines or rules.
  Public sector only: the document targets all employees of public organisations within a state, region or municipality.
  Public and private sectors: the document targets all persons in public organisations and private entities willing to use GenAI tools.
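As an illustration only, the taxonomy of Table 2 could be represented as a simple data structure by anyone reproducing the mapping exercise. The class and field names below are the editor's hypothetical rendering, not part of the PSTW tooling, and the example entry is invented.

```python
from dataclasses import dataclass
from enum import Enum

class DocumentType(Enum):
    POLICY = "policy"
    RULE_OR_REGULATION = "rule or regulation"
    GUIDELINES = "guidelines"
    PROCEDURE_OR_PROTOCOL = "rules of procedure or protocol"

class IntendedUser(Enum):
    INTERNAL = "internal users within single public organisations"
    PUBLIC_SECTOR = "public sector only"
    PUBLIC_AND_PRIVATE = "public and private sectors"

@dataclass
class MappedDocument:
    """One row of the Table 2 taxonomy (hypothetical field names)."""
    name: str
    link: str
    description: str
    doc_type: DocumentType
    responsible_organisation: str
    organisation_category: str  # e.g. "central government"
    status: str                 # e.g. "approved" or "in development"
    year: int
    intended_user: IntendedUser

# A made-up example entry, for illustration only.
example = MappedDocument(
    name="Guidelines on GenAI use by staff",
    link="https://example.org/genai-guidelines",
    description="Non-binding advice on everyday use of GenAI tools.",
    doc_type=DocumentType.GUIDELINES,
    responsible_organisation="Example Ministry",
    organisation_category="central government",
    status="approved",
    year=2024,
    intended_user=IntendedUser.PUBLIC_SECTOR,
)
```

Encoding the controlled vocabularies (document type, intended user) as enumerations keeps entries consistent across contributors, which is the point of a shared taxonomy.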
Data analysis

The data analysis comprised both qualitative and basic quantitative analyses. Qualitative thematic analysis was conducted in order to identify and analyse the key themes and patterns emerging from the documents collected. The analysis was guided by the predefined taxonomy, which categorised documents based on their type (e.g. policy, guideline or procedure), geographical scope (e.g. national or regional), responsible organisation (e.g. central government or local government), status (e.g. in development or approved) and target technology (e.g. GenAI).

To identify thematic trends, the qualitative analysis used the seven key requirements (human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability) presented in the Ethics Guidelines for Trustworthy AI of the AI HLEG5.

Limitations

The methodology has some inherent limitations: this mapping exercise was only a first approach to identifying documents relevant to GenAI use policies across the continent, and it relied exclusively on secondary data sources. The findings cannot therefore be considered statistically representative for the purpose of understanding guideline development in the EU. Further research that broadens the variety of data sources employed is to be encouraged. The present work aims to lay the groundwork for future studies and the further collection of guidelines in a structured manner, expanding the basis of the proposed taxonomy. As the research efforts continue, we expect more guidelines, policies and documents to be identified, enlarging this first repository.
4 documents from EU institutions and agencies;

more than 1 document (including guidelines and policies released by local or regional administrations) for 9 Member States (5 for Sweden; 3 for Germany; and 2 each for Estonia, Ireland, Greece, Spain, France, Italy and Austria).

Most (19) of these documents were produced by national administrations, which issue recommendations for internal purposes, for the public sector, or for the public and private sphere in general. Local governments produced 7, EU authorities produced 4 and regional institutions produced 3.
Broken down by document type, 23 of the documents are guidelines, 5 are rules or regulations, 3 are policy documents, and 2 are rules of procedure or protocols. Of the guidelines: 19 were developed by national administrations, 8 by local and regional authorities, 4 by EU institutions or agencies, and 2 by universities.

29 of the collected documents had already been approved or published at the time of writing this report. The remaining 4 were still at the drafting stage. Of the 29 documents already published, 26 were published in 2024 and 3 in 2023, which illustrates how recent such documents are. The complete list of documents is in Annex I.

The documents were also classified according to intended users and target technology (see Section 3.1, Table 2 on the taxonomy) to give a better understanding of the status of guidelines and policies on GenAI. The intended user is defined as the stakeholder targeted by the document, either to ensure compliance or to promote compliance. Some of the documents address both public sector organisations and private companies (including, to some extent, the general public), while others target only public sector organisations or are internal documents for a single public administration. Target technology, by contrast, refers to whether the document addresses AI in general (including but not limited to GenAI) or focuses exclusively on GenAI.

As Figure 2 shows, most of the documents are addressed either to both public and private sector organisations or only to public sector organisations (21 out of 33, i.e. the first two columns in Figure 2). 14 of these 21 focus on generative AI only. The fact that most of the documents specifically address GenAI is not surprising, given the uncertainty regarding the challenges and risks inherent in these new AI models and systems. The guidelines and policies are usually written or published by government agencies for digital affairs or information security (e.g. in Denmark, Germany, France, Poland or Sweden) or EU-level agencies. Sweden has issued guidelines not only for national administrators but also for all local governments that wish to experiment, pilot and use GenAI for clerical or administrative tasks. Other interesting examples include the Estonian Ministry of Justice’s guidelines for AI use in public bodies, the Austrian Ministry of Education’s guidelines for the use of generative AI in schools, and the Dutch Ministry of the Interior’s strategy on GenAI for the whole government. The Italian Chamber of Deputies (one of the two national legislative bodies) has issued a report on how GenAI-based tools can support legislative documentation and parliamentary activities.

12 of the documents are guidelines or policies issued by individual organisations to regulate employees’ use of AI, and 9 focus specifically on GenAI (Figure 2, third column from the left). These documents address either the way in which employees use AI models to carry out daily administrative tasks or the safest way to integrate these tools into internal processes and IT procedures. GenAI models are used not only through online and publicly available tools (e.g. OpenAI’s ChatGPT) but also through internal systems designed and tailored by external providers. Examples of guidelines for employees of an individual organisation include the Commission’s protocol for GenAI use and the Italian National Social Security Agency’s guidelines for organisation-wide deployment of ad hoc solutions based on generative AI.

Figure 2 below gives an overview of the 33 documents mapped and explained above. The y-axis categorises documents based on their target technology (AI in general or generative AI specifically). The x-axis indicates the intended users (both public and private sector; public sector only; or internal users within an individual public organisation, as described above). The colour-coded boxes represent the type of document (as per the taxonomy presented in Section 3.1).
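As a purely illustrative sketch, the two-dimensional classification described here (intended user by target technology) can be expressed as a simple cross-tabulation. The document records below are invented placeholders, not the report’s dataset:

```python
# Hypothetical sketch of the two-dimensional classification behind Figure 2.
# The records below are invented placeholders, not the report's actual dataset.
from collections import Counter

documents = [
    {"intended_user": "public and private sectors", "target_technology": "generative AI"},
    {"intended_user": "public sector only", "target_technology": "AI in general"},
    {"intended_user": "internal users", "target_technology": "generative AI"},
    {"intended_user": "public and private sectors", "target_technology": "generative AI"},
]

# Tally documents per (intended user, target technology) cell of the grid.
cells = Counter((d["intended_user"], d["target_technology"]) for d in documents)
for (user, tech), count in sorted(cells.items()):
    print(f"{user} / {tech}: {count}")
```

Each cell count corresponds to one box position in the Figure 2 grid.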
Figure 2. Guidelines, policies and rules on the use of GenAI within EU Member States.
3.3 Results analysis
A review of all the 33 documents revealed many common themes covering a wide range of organisational, legal and technical matters. To provide structured and meaningful insights, a thematic analysis has been conducted of the documents based on the seven key requirements detailed in the AI HLEG’s Ethics Guidelines for Trustworthy AI, as described in the methodology (Section 2.1). The AI HLEG identified these seven key requirements in order to guide the implementation of trustworthy AI based on the four ethical principles of respect for human autonomy, prevention of harm, fairness and explicability. The seven key requirements are summarised below:

1. Human agency and oversight: ‘AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.’ (AI HLEG, 2019).
2. Technical robustness and safety: ‘AI systems need to be resilient and secure. They need to be safe, ensuring a fall-back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that also unintentional harm can be minimised and prevented.’ (AI HLEG, 2019).

3. Privacy and data governance: ‘Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.’ (AI HLEG, 2019).

4. Transparency: ‘The data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.’ (AI HLEG, 2019).

5. Diversity, non-discrimination and fairness: ‘Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.’ (AI HLEG, 2019).

6. Societal and environmental well-being: AI systems should benefit all human beings, including future generations, and should therefore be sustainable and environmentally friendly, with their social and societal impact carefully considered (AI HLEG, 2019).

7. Accountability: ‘Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.’ (AI HLEG, 2019).

The analysis also identified three other major themes: intellectual property rights protection; education and training in using new tools; and public-private collaboration (particularly as regards privacy and data governance). All these 10 themes are presented and aggregated in four groups (Figure 3) based on their thematic similarity, namely:

human agency/oversight and accountability with respect to the use of GenAI tools;

transparency in the use of these tools;

diversity, non-discrimination and fairness, technical robustness and safety, and societal and environmental well-being safeguards linked to GenAI systems and models;

privacy and data governance aspects when implementing GenAI solutions.

The next section describes the insights from the thematic analysis in more detail.
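As a purely illustrative aside (the report’s thematic analysis was a manual qualitative review, not automated keyword matching), checking a document against the seven requirements could be sketched as:

```python
# Purely illustrative: the report's analysis was a manual qualitative review,
# not automated matching. Requirement names follow the AI HLEG Ethics
# Guidelines for Trustworthy AI.
HLEG_REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def tag_themes(document_text):
    """Return the HLEG requirements a document mentions (naive substring match)."""
    lower = document_text.lower()
    return [req for req in HLEG_REQUIREMENTS if req in lower]

print(tag_themes("This guideline covers transparency and accountability obligations."))
```

Tallying such tags across all 33 documents yields per-theme frequencies of the kind reported in the next section.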
Figure 3. Main themes identified in the guidelines and policies collected with respect to GenAI deployment.
Main themes addressed

Human agency and oversight and accountability are among the most common themes, appearing in 29 documents out of 33. Public administrations explicitly state that human users should remain responsible, and solely accountable, for the content generated with AI tools. The use of AI tools to generate and modify content might seem to shift accountability from the human to the machine, but users remain accountable and responsible for the results produced under their oversight. All the documents that were mapped agree on this principle. Some examples are cited below.

The City of Vienna (Austria) emphasises the importance of human oversight in using text-based AI tools, acknowledging that they can generate inaccurate or misleading content (known as ‘AI hallucinations’ or ‘confabulations’). It places the responsibility for the content created on the ‘(human) users’.

The Kungsbacka and Nacka municipalities (Sweden) stress the importance both of inputting reliable data into AI systems and of employing their output ethically. They urge employers to mitigate risks while maximising the potential benefits of AI by checking the content generated and ensuring it is correct and ethical. They invite users to follow the applicable legislation and guidelines when handling information, so as to utilise the tools in a trustworthy manner.

The Italian Chamber of Deputies notes that ownership, human accountability and control are fundamental in AI usage in order to ensure that legal and democratic procedures are upheld. Those involved in determining which AI systems are used and those who use them must be accountable for their decisions, thus helping to ensure compliance with parliamentary prerogatives and individual rights and freedoms.

The Irish Department of Public Expenditure acknowledges that AI can be used to generate evidence for improved decision-making but also emphasises that it cannot be used as a substitute for human judgement. Human decision-makers must ultimately make the final decisions in high-risk circumstances.

The Administrative Modernisation Agency (Portugal) highlights the multifactorial and distributed nature of accountability in AI systems, involving interactions with various individuals such as designers, developers and end users. Human operators play a key role because they are responsible for their actions within the system’s workflows. Actions should therefore be clearly traceable across the whole chain of responsibility, so that operators and users can be held accountable for how they interact with the AI system.

The Commission prohibits the direct reproduction in official Commission documents (particularly those that are legally binding) of generative AI model outputs created with online and publicly available tools.

Our research also identified privacy and data governance themes that concern both the public and the private sectors. These themes include GDPR standards for personal data protection and sensitive information security. All the guidelines and policies we collected addressed the topic of data protection (in line with GDPR requirements), but some particularly interesting insights are cited below.

The Commission states that employees must refrain from sharing any non-public or personal data with generative AI models. A key risk is the unauthorised disclosure of information shared during work, because any input given to a generative AI service can be used to generate future outputs that may become publicly accessible.
The French Cybersecurity Agency (ANSSI) and the French Data Protection Authority (CNIL) state that models should address data privacy issues by design – not only in the public sector but also in the private one – because GDPR requirements apply everywhere.

The Polish government advises against sharing sensitive data with generative AI and highlights the importance of using systems that prioritise confidentiality and data protection. It also urges caution when entering information (particularly classified, official or personal data) into AI tools.

Under the themes of technical robustness and safety and of diversity, non-discrimination and fairness, GenAI tools must be accurate and reliable. 20 of the documents we found address this need, which is closely linked to societal and environmental well-being. Unfortunately, these models may suffer from AI hallucinations (see glossary above), which can produce unreliable, inaccurate and biased outputs and ultimately affect users. AI-generated content should therefore be subject to critical review for accuracy and ethical acceptability. Examples include:

The City of Vienna (Austria) and the Flemish Digital Agency (Belgium) address the issue of AI hallucinations. They recommend critical reviews, accuracy checks and assessments of ethical acceptability. Significant concerns with generative AI are the potential unreliability of the text produced, and the accuracy and integrity of the data on which the generated content is based. The model is trained on a wide range of texts from various sources, so it may include biases or generate hallucinations. Users are also advised to contribute their own insights and use their own judgement to validate the output generated.

The Polish government also notes the risk that AI systems may generate misleading or inaccurate content (hallucinations) and requires officials to explicitly disclose the content generated by GenAI tools (including a footnote with the name of the tool and the date of output generation). It requires its employees to verify any AI-produced information, because it may sometimes include false information and sources.

The Austrian Federal Ministry of Education, Science and Research and the Government of North Rhine-Westphalia (Germany) recommend clearly identifying AI-generated or AI-edited content. Ensuring accuracy and dependability is critical, especially in fields like research and education. Moreover, many universities in Member States such as Estonia and Cyprus are also working on creating regulations on the use of AI in academic work and evaluations.

Transparency is another key aspect identified across the guidelines and policies (it is addressed in 29 documents). It applies both to employees (who should be transparent when producing content using GenAI) and to public administrations (which should clearly indicate when they are using GenAI tools). Users should be aware when they are engaging with AI-generated content. Legally binding decisions should not be outsourced to generative AI models, because their internal workings are not transparent or known; they might therefore act as ‘black boxes’, leading to low levels of explicability and therefore of transparency. External providers must also be transparent with administrations on data management and output creation, in compliance with the EU AI Act, and this is reflected in policies and guidelines as well.

The German Federal Office for Information Security states that developers and operators need to provide adequate information on any AI tool they develop, so that users can make informed decisions about whether the AI model is suitable and appropriate for them. They should clearly explain the associated risks, the safeguards implemented, and any residual risks or limitations. Better explanation of how LLMs generate content can also improve transparency at a technical level.
The Italian Chamber of Deputies requires decisions and processes related to the use of AI systems to be explained, specifying that explanations must be public and understandable, and thus allow democratic control. It has mandated the Italian Parliament to obtain the information and rights needed in order to correctly explain how AI systems are being used and how they function. In addition, all AI-generated outputs need to be easily recognisable and distinguishable.

The City of Vienna (Austria) requires any content produced with the support of AI tools to be clearly indicated.

The Danish Agency for Digital Government requires careful assessment of whether or not AI models’ results should be used to make decisions affecting citizens or businesses. Management are responsible for ensuring that AI tools are used appropriately and transparently, especially in public services.

Most of the documents (19) raised the new and overarching issue of compliance with intellectual property rights. AI models such as online chatbots might have been trained on material protected by copyright legislation – thus breaching copyright protection – but their users might not be aware of that.

The Flemish Digital Agency (Belgium) tells public employees to check AI-generated output, because publicly available materials might be used without permission – which could be considered an infringement of copyright and intellectual property.

The Danish Agency for Digital Government warns that AI-generated outputs should be flagged as such and that intellectual property rights need to be considered, because generative AI may combine elements from copyrighted or trademarked content. It is advisable not to encourage AI tools to use or replicate content that is protected by copyright, trademark and patent laws, because this might infringe legal rights.

Some guidelines and procedures address other issues related to the deployment and use of generative AI, such as how public employers could provide education and training to make both the adoption of generative AI tools and the digital transition process as secure, smooth and accessible as possible for all employees. Some guidelines and procedures also encourage public-private collaboration for the effective use of GenAI, recognising that public administrations often rely on external providers to adopt these tools. Transparency at every stage from design to implementation, respect for intellectual property rights, and clear accountability are essential to successful partnerships, and are the foundation for solid and effective public-private collaborations.

Some guidelines not only address ethical principles and the trustworthy use of the technologies described above, but also provide specific step-by-step practical advice on how to comply with those principles, as in the 7 collected documents that have technical annexes:

The Flemish Digital Agency’s guideline not only details prompting techniques for obtaining more effective GenAI results with GPT models, but also recommends the use of internal software that is already integrated with its work environment (which is more advisable for data privacy);

The French Cybersecurity Agency has released guidelines on developing, training and deploying generative AI systems in an ethical and legally compliant way;

The Dutch government has established the Government AI Validation Team to develop guard rails for generative AI models and guidelines to validate GenAI tools both inside and outside the public sector;
The Polish government has issued a technical guideline for employers who are using GenAI online tools for the first time, noting that prompt engineering is considered essential in order to fully exploit the capability of generative AI models to provide real support for daily tasks. This guideline includes advice on ‘How to have a constructive conversation with GenAI’, for instance by providing precise context, examples and additional information. It also covers how to determine the style and tone, and how to request multiple variants.
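As a purely illustrative sketch (not taken from the Polish guideline itself), a prompt that follows this advice (precise context, an example, explicit style and tone, and a request for several variants) might be assembled as follows; the `build_prompt` helper is hypothetical:

```python
# Hypothetical helper (build_prompt is not from the Polish guideline): it merely
# assembles the elements the guideline recommends into one structured prompt.
def build_prompt(task, context, example, tone, n_variants):
    """Compose a GenAI prompt with precise context, an example, tone and variants."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Example of the expected output: {example}\n"
        f"Style and tone: {tone}\n"
        f"Please provide {n_variants} alternative variants."
    )

prompt = build_prompt(
    task="Draft a short notice informing residents of a planned road closure.",
    context="Municipal communications office; the audience is the general public.",
    example="'Main Street will be closed on 5 May for maintenance works.'",
    tone="Formal but plain language.",
    n_variants=3,
)
print(prompt)
```

Pasting such a structured prompt into a GenAI tool supplies the context, format cues and tone that the guideline identifies as prerequisites for useful output.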
4 Landscape of GenAI use cases in Europe
Following an overview of the main guidelines and policies relating to GenAI deployment and use in administrations, the research focused on and analysed the generative AI use cases collected by the PSTW. The preliminary quantitative analysis of GenAI use cases is based on the PSTW’s collection of cases. The methodology used in this form of case collection includes protocols established for data gathering and dataset maintenance, along with the taxonomy for categorising collected cases of emerging technologies. This methodology has been used in various reports, thus ensuring a robust and methodologically sound framework for the ongoing collection, analysis and dissemination of cases. The methodology is under constant review and is strengthened where necessary. The data collection and the taxonomy used to categorise the GenAI cases analysed here therefore follow the ‘Methodology for the public sector Tech Watch use case collection – Taxonomy, data collection, and use case analysis procedures’ (Tangi et al., 2024). Likewise, the limitations of the methodology referred to in this report are valid and applicable to this analysis.

The scope of the qualitative analysis included in this report (Section 4.3) is based on five interviews, which the research team conducted with public administration managers who have started piloting and implementing such solutions in order to improve their administrative processes or public services. The organisations in question were:

the City of Helsinki;

the Government of Catalonia (specifically, the Centre for Telecommunications and Information Technologies, CTTI);

the University of Bologna;

the Bulgarian Institute for Computer Science, Artificial Intelligence and Technology;

the Dublin City Council (the Tourism Unit).
4.1 Methodology for collecting and analysing GenAI use cases
The interviews were semi-structured and followed a pre-defined script aimed at gaining a more comprehensive understanding of various aspects of each use case (e.g. the initial problem to address, the main opportunities identified, the decision-making involved, and the implementation and adoption processes), including from an organisational point of view. Regarding the implementation and adoption processes, topics such as procurement, interoperability, testing, training and skills, communication and feedback were analysed. Our interviews also addressed past and future risks and challenges relating to data protection, data ownership, data storage and trustworthiness.
4.2 Use case analysis
At the time of writing, the PSTW database hosts 61 cases of generative AI solutions that have been identified in 20 different European countries and in the EU institutions. Of these 61 cases, 8 were identified in Italy, 6 each in Germany and the United Kingdom, and 5 each in Finland, Spain and the EU institutions. These use cases occur mainly in national administrations or across various countries, but a smaller number of them occur in local and regional governments (as shown in Figure 4). Further use cases continue to be identified. The total of 31 solutions reported for 2024 and collected across European public administrations indicates that an increasing number of administrations are starting to adopt this technology across all levels of administration. Overall, however, the small number of cases means that these findings should not be considered statistically significant.

Regarding the function of government (COFOG), Figure 5 shows that most cases (35 out of 61) are in the area of general public services, followed by public order and safety (8), economic affairs (5) and housing and community amenities (3).
Regarding the types of application, generative
AI is being used for service personalisation
(e.g. chatbots for citizens) and to support internal
management processes (e.g. tools that summarise
legislation for lawyers in the courts). GenAI can
also support innovation in public policymaking:
solutions such as UrbanistAI are helping to
involve residents in urban planning, while other
applications can support and streamline how
legislation is being discussed in parliaments by
supporting legislative drafting or legal consistency
checks.
Figure 5. Distribution of GenAI cases according to
government function and application type.
Figure 6. Distribution of GenAI cases according
to state of development.
Box 1. The emergence of national large language models
An emerging trend can be observed: the development of national large language models (LLMs). These
are a form of ‘public good’ that governments and research centres are working to provide to
citizens, businesses, researchers and public authorities. They are trained using national data
to ensure cultural relevance and accuracy, and are being developed in national languages. This
trend is exemplified by some use cases identified by the PSTW observatory (e.g. BgGPT in
Bulgaria, GPT-SW3 in Sweden, FinGPT and PORO in Finland, Modello Italia in Italy, NorGPT
in Norway, OpenLLM-RO in Romania, GPT-NL in the Netherlands, Qra in Poland and Open
Spanish LLM in Spain). Figure 6 presents these initiatives, indicating the Member State,
implementing organisation, year of implementation and state of play.
Key expected benefits from these initiatives include fostering digital sovereignty; reducing
reliance on global technology providers; and preserving linguistic and cultural heritage. These
models are also viewed as foundational tools that can help with the creation of services and
applications and in turn promote more equitable innovation processes in their respective
ecosystems.
As Figure 7 shows, generative AI solutions
are similarly used to improve government-to-citizen and government-to-government
interactions. This indicates that these models
and systems can be tailored to respond to a
wide range of needs, in terms of improving
administrative efficiency (e.g. streamlining of
administrative work) and enhancing public
services for citizens (e.g. chatbots).
4.3 Analysis of the results of interviews
In addition to the quantitative analyses based on
the PSTW’s case collection, we also conducted
five targeted interviews with public administrations
across the EU (see Box 2), as described in the
methodology presented above.
Box 2. Use cases and public organisations interviewed
In Finland, the City of Helsinki piloted Microsoft 365 Copilot in 2024 to enhance
employees’ well-being and work by providing AI-powered assistance within the Microsoft 365
suite. This initiative was led by Tomas Lehtinen (the city’s Head of Data and Analytics), who
was interviewed for this analysis.
In Ireland, Dublin City Council partnered with OpenAI and Data and Design in 2024
to develop ‘A Day in Dublin’, a prototype AI-powered itinerary planner for personalised tourist
experiences. The project team interviewed Barry Rogers (the city’s Head of Tourism), and Rudy
O’Reilly Meehan (the CEO of Data and Design).
In Bulgaria, the Institute for Computer Science, Artificial Intelligence and Technology
(INSAIT), which is funded by the Bulgarian government, implemented BgGPT, an open-source
large language model for the Bulgarian language to foster public and private sector innovation.
BgGPT supports public and private organisations in the development of specific applications
using the model. An interview with Borislav Petrov (INSAIT’s Executive Director) and Emiliyan
Pavlov (INSAIT’s Data Scientist) provided further details on this initiative.
During the five interviews, we collected interesting insights into strategic partnerships, benefits and
ongoing challenges posed by this technology in terms of implementation and adoption.
Strategic partnerships

When it comes to implementing GenAI in the public sector, a key common practice that emerges is the establishment and development of partnerships. In this regard, the interviews highlighted the critical role of collaboration and partnerships in supporting the deployment of GenAI tools and solutions in the public sector, in the form of collaboration between and within public organisations and partnerships with the private sector.

Firstly, collaborative approaches have been identified between public sector organisations and research organisations with a view to harnessing capabilities and expertise. Collaboration with other public sector organisations and research institutions can lead to the pooling of expertise and resources, increased technological capacity and reduced innovation costs. One example of a use case developed through collaboration is the GENAI4LEX project, which provides a GenAI tool to assist the legislative drafting process. Following a call for projects from the Italian Chamber of Deputies, the University of Bologna entered into partnerships with a broad range of collaborators. This collaboration enabled the consortium leader to use the expertise of each partner to create a proof of concept. The project required a combination of technical AI expertise and legal knowledge to design a model capable of interpreting legislative texts. The consortium included other academic institutions, such as LUISS CESP (the Center for Parliamentary Studies), the University of Verona and the University of Turin, in collaboration with the National Research Council (CNR)’s Institute of Legal Informatics and Judicial Systems, and three spin-off companies (BitNomos, Aptus and ASIMOV AI).

In Spain, a GenAI tool has been developed to help summarise publicly available legislative information for the Government of Catalonia. The Publications Office of the Government of Catalonia sought assistance from the Centre for Telecommunications and Information Technologies (CTTI), the central IT provider for the Government of Catalonia, to investigate the potential use of GenAI to summarise legal texts to help the public understand legislation. By leveraging its technical expertise, the CTTI developed a solution tailored to the government’s needs, thus providing a useful example of an effective and successful collaboration between public-sector organisations.

In Bulgaria, the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT) has developed an open-source LLM in Bulgarian (BgGPT) and provides technical support and training to public organisations, such as the National Revenue Agency. This support is aimed at helping public organisations to use the BgGPT tool to develop specific applications for their use, build capacity within these organisations and promote the adoption of the model.

Secondly, public administrations are also building public-private partnerships with private technological providers and innovation firms (both international companies and start-ups). Our research indicated that public administrations sometimes give preferential treatment to technological companies that are already providing them with services and platforms. This approach offers several advantages, such as being able to use existing procurement processes and contracts, and having an existing basis of trust and mutual understanding with those companies. Prior experience with these providers might also reduce the uncertainty associated with experimenting with GenAI for public administrations. Using existing contracts can also help administrations to test solutions in faster, iterative ways, thus generating significant lessons learned before launching large procurement processes.

For example, Dublin City Council entered into a partnership with a global company (OpenAI) and a local start-up (Data and Design) to develop their GPT-powered tourist itinerary planner,
‘A Day in Dublin’. The partnership established with Data and Design, which specialises in data visualisation and storytelling, provided the data analysis to offer recommendations according to users’ preferences and expectations, matching them with activities, services, events and businesses in the city.

Similarly, the City of Helsinki chose Microsoft Copilot to pilot GenAI tools to support employees with administrative tasks. The city teams were already using Microsoft 365 services, so integrating Copilot into their existing workflow was a natural progression. Technical integration was seamless because Copilot was already integrated within their Microsoft ecosystem (Microsoft SharePoint and Teams, datasets, etc.) and because employees were already using these tools in their daily activities, which eased the adoption process.

Benefits of the implemented solutions

The implementation and piloting of these five GenAI solutions have brought benefits to the organisations, in terms of organisation and daily work and enhancement of government transparency and openness.

One finding that emerged from an internal survey is that Helsinki’s pilot project has empowered city staff by providing them with a new technical tool and new skills, not only improving their productivity at work but also enhancing their well-being and job satisfaction. From an organisational point of view, Copilot increased the efficiency of work across the city workstreams by enhancing the quality of work, thereby saving time in the long term. Another key benefit of the solution is that Copilot guaranteed the security of the data used, so none of the information entered into the Copilot tool by the employees would be used to train the underlying language models. Likewise, the solution proposed by the GENAI4LEX-B project is also expected to improve daily work by automating time-consuming tasks such as legal research and summarising amendments. The automated summary feature will also help employees to prepare for City Chamber discussions and committees’ presentations, which is another time-consuming task.

The solution implemented by the Government of Catalonia to summarise legal texts proved to be a transparent and trustworthy innovation in public services that enhances government openness. The tool is an example of how AI can be successfully used to bridge the communication gap between the government and the general public, employing AI tools in a controlled environment. The interviewees mentioned that the scheme gave them the opportunity to test GenAI in a non-sensitive and risk-free environment that brings benefits for government and society. Whereas Catalonia’s tool benefits local people, the City of Dublin’s pilot is meant to improve the experience of visitors to the city by providing personalised itineraries through the GPT-based chat, while at the same time benefiting local people by mitigating the negative impacts of overtourism. Indeed, the model can offer personalisation services that are based on the popularity of attractions while also encouraging tourists to visit less well-known attractions, thereby ensuring that visitors are dispersed more evenly across the city.

The BgGPT model is deemed beneficial for society as a whole and for the business environment in particular because of its cost efficiency. As the provider explained, the model reduces operational costs for users compared with commercially available alternatives. For instance, using proprietary models for specific tasks could cost tens of thousands of euros, whereas adopting and customising the Bulgarian LLM might cost many times less. The open-source nature of the model makes it readily affordable, which is a significant benefit, especially for smaller organisations or public institutions with limited budgets.
Implementation and adoption challenges

Through the interviews conducted with the five public organisations (see Section 4.3), we have identified and documented four key implementation and adoption challenges, affecting the following aspects:

- the availability and quality of large datasets;
- the promotion and adoption of the tool;
- capacity-building within the organisation;
- legal, societal and ethical risks associated with the use of GenAI tools.

First, technical challenges might arise during the development and testing of the GenAI solutions. Ensuring the accuracy of the models in terms of the quality of outputs was a significant challenge in all the cases we examined. Reducing hallucinations and ensuring high levels of precision were particular challenges in this regard. In Spain, the CTTI Centre focused on prompt engineering to minimise hallucinations. During the proof-of-concept phase, different prompts were tested and fine-tuned to ensure accurate and reliable summaries of legal texts. This iterative process involved collaborating with legal experts to validate the AI-generated summaries and refine the prompt in order to improve accuracy. Another challenge to be addressed when adopting a GenAI solution is ensuring the availability and quality of data used to train and operate GenAI systems. In Ireland, Dublin City Council partnered with a local start-up to analyse large high-quality datasets using local information (such as event programming, local businesses, tourist services, and cultural and recreational activities) and their geo-localisation. Domain-specific datasets (including adequate formats, standards and ontologies) are crucial in training certain models for sector applications. The GENAI4LEX project stressed the importance of using annotated legal texts and ontologies to train the model in the legal domain, specifically on legislation applicable to Italy. Legal language is highly specialised and nuanced, and requires a comprehensive understanding of legal concepts and relationships. Annotated legal texts – where key terms and concepts are tagged and linked to ontologies (formal representations of knowledge) – provide the AI model with the necessary context and structure to interpret and analyse legal information.

Second, the implementation of GenAI-powered solutions in the public sector goes beyond technical implementation and requires organisational challenges relating to skills, training and solution roll-out to be addressed. For example, in Helsinki’s pilot project to implement Copilot across various teams, training and feedback sessions were conducted with the public officials who were taking part. This approach was intended to familiarise employees with the tool’s capabilities, to address concerns and to gather valuable qualitative feedback for further improvement. Similarly, the University of Bologna, which was leading the development of the GENAI4LEX project, highlighted the need to educate members of parliament and office staff on the capabilities and limitations of GenAI tools to ensure the appropriate and responsible use of the system and correct interpretations of the AI-generated results.

Third, another crucial challenge is the mitigation of the legal, societal and ethical risks associated with the use of GenAI tools, specifically ensuring the trustworthy use of these solutions. Specific challenges arise on issues such as bias or hallucinations, requiring more transparency and explainability. GenAI models can inherit biases that are already present in (i) the data on which they are trained; (ii) the construction of the models; or (iii) the prompts used – potentially leading to discriminatory outcomes. The GENAI4LEX project emphasised the importance of mitigating bias in training data and algorithms to prevent inaccurate outcomes impacting the legislation that is being drafted. To achieve this,
it is necessary to ensure that the AI system does not favour certain legal interpretations or prioritise information from specific sources; impartiality and fairness must be ensured in legislative processes. Similarly, Dublin City Council highlighted the point that its tool could potentially favour certain tourism recommendations, because the output recommended certain options for each proposed activity in preference to other available options. The project aims to address this by incorporating randomness into the itinerary-generation process and by permitting user choice, providing a range of alternative recommendations that match the visitors’ preferences and from which users can choose (rather than just one). As for the mitigation of risks, communicating how GenAI systems produce their outputs and what their limitations are is crucial for building trust and ensuring that public organisations are accountable to citizens. The Catalan CTTI and the Publications Office of the Government of Catalonia recognised the risks associated with simply providing AI-generated summaries of legal texts. To foster trust and accountability among citizens, they included clear explanations that not only described how AI was used to summarise the legal texts on the publication’s website, but also pointed out that the summaries might have limitations and that the summarised information did not have legal value.

Fourth, protecting sensitive data, ensuring citizens’ and employees’ privacy and guaranteeing data sovereignty is a common concern expressed by the public administrations that we interviewed. This finding corresponds to the guidelines and policies analysed in Section 3, as most of the identified documents focused on addressing these aspects explicitly. For example, the City of Helsinki focused on data privacy and security. It therefore signed a data protection agreement with the provider to ensure its alignment with the City’s ethical AI principles. Selective participation in the pilot excluded departments that regularly handle sensitive data (e.g. the departments that work directly with citizens’ data, and HR teams that manage employee information) in order to prevent personal or confidential information being exposed to the tool. The City of Helsinki further mitigated potential risks by disabling the web search feature integrated into Copilot for all participants.
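The bias mitigation described above for Dublin’s itinerary planner, injecting randomness so that popular attractions do not crowd out lesser-known ones, can be sketched in a few lines of code. The snippet below is purely illustrative: the function, attraction names and popularity scores are our own assumptions, not part of the Council’s actual system. It samples a set of distinct alternatives using popularity-dampened weights, so less well-known attractions regularly surface among the options offered to the user.

```python
import random

def suggest_alternatives(attractions, k=3, dampening=0.5, seed=None):
    """Return k distinct itinerary options, sampled so that highly
    popular attractions do not always dominate the recommendations.

    attractions: list of (name, popularity) pairs, popularity > 0.
    dampening:   exponent < 1 flattens the popularity distribution,
                 giving lesser-known attractions a better chance.
    """
    rng = random.Random(seed)
    # Dampened weights: popularity still matters, but less steeply.
    pool = [(name, pop ** dampening) for name, pop in attractions]
    chosen = []
    for _ in range(min(k, len(pool))):
        # Weighted draw without replacement.
        total = sum(w for _, w in pool)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, (name, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(name)
                del pool[i]
                break
        else:  # guard against floating-point rounding at the boundary
            chosen.append(pool.pop()[0])
    return chosen

# Hypothetical data: (attraction, relative popularity)
options = [("Guinness Storehouse", 90), ("Trinity College", 80),
           ("Phoenix Park", 40), ("Marsh's Library", 10)]
print(suggest_alternatives(options, k=3, seed=42))
```

Lowering the `dampening` exponent flattens the weights further, trading strict popularity ranking for a more even dispersal of visitors across the city, which is precisely the overtourism-mitigation goal the report describes.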
5 Conclusions
This report is the first PSTW research intended to systematically analyse the emerging adoption of Generative AI within European public services and administrations. The analysis builds on previous reports on the public sector’s adoption of AI, blockchain and other emerging technologies. It thus contributes to the PSTW’s wider mission, which is to build and disseminate knowledge on emerging technologies and to promote the exchange of practices and learning by presenting use cases from public administrations across Europe. This report provides key terminology related to Generative AI technologies and a brief overview of the current policy landscape regarding this technology at EU level: the AI Act, the AI Innovation Package and the GenAI4EU initiative.

A set of guidelines and procedures approved by EU Member States to steer the use of this technology has been collected and evaluated. Further analysis offers a quantitative overview, based on the PSTW’s available data, of how public administrations across Europe are experimenting with GenAI applications within their organisations. Furthermore, the report incorporates qualitative insights derived from five targeted interviews conducted by the research team with public-administration managers who are actively piloting and implementing GenAI solutions to enhance administrative workflows and the delivery of public services.
5.1 Main findings
First, the mapping of existing guidelines and policies contains a total of 33 official documents from 17 EU Member States and 4 EU institutions. Most of the 33 guidelines and policies in the sample are non-binding guidelines, followed by a smaller number of rules, policies and protocols. National administrations contributed the most documents, followed by local governments, EU institutions and regional administrations. 23 of the documents focused on generative AI and on addressing its unique challenges, including human oversight, data protection, technical robustness and transparency. Common themes among these documents included human responsibility and accountability; data protection and privacy concerns; the accuracy and reliability of GenAI tools; transparency; and the need for education and training.

Second, the preliminary overview of the 61 generative AI use cases across 20 EU Member States shows that these solutions are predominantly implemented in general public services, followed by other government functions such as public order and safety, economic affairs, and housing. Generative AI has been used to improve both government-to-citizen interactions and government-to-government interactions, enabling more personalised public services and streamlined administrative processes. Most use cases are still at the planning, development or piloting stages, but 17 have been fully implemented and proved to be early successes. Emerging trends include the development of national language models aimed at enhancing digital sovereignty and addressing localised needs.

Third, targeted interviews with representatives from five public administrations across the EU provided further insight into the practical challenges and opportunities associated with GenAI implementation. Key takeaways include the importance of establishing inter-organisational and intra-organisational collaborations and public-private partnerships – both for ensuring trustworthy and safe implementation, and for addressing challenges relating to data availability, skills training and risk mitigation. We identified a number of challenges, particularly those of ensuring transparency, addressing biases, and protecting data sovereignty and citizens’ privacy. Many organisations also highlighted the need to encourage the adoption of Generative AI tools by training public employees in their responsible and effective use, thereby ensuring – most importantly – their ethical deployment. The implementation of these solutions has also demonstrated significant benefits for public administrations, such as improved productivity; enhanced job satisfaction and well-being; increased government transparency; cost efficiency; and more effective engagement with the general public.
5.2 Future steps
This report is an initial basis for the PSTW to pursue
further research in the nascent field of GenAI
adoption in European public administrations. Our
analysis indicates a growing trend of GenAI
experimentation and implementation across
the public sector, together with an increasing
awareness of the challenges associated with
responsible and transparent use. This highlights
the need for continuing research in this area to
support public administrations in understanding
GenAI technologies, mitigating risks and ensuring
their responsible and effective application for
improved delivery of public services and public-
administration processes.
6 Annex I
Table 2. Guidelines, policies and rules on generative AI use within administration – a complete list.

No. | Title | Country | Issuing body | Level | Type | Year | Link
7 | Guide for Danish public authorities on the responsible use of generative AI | Denmark | Agency for Digital Government – Danish government | National | Guideline | 2024 | https://en.digst.dk/news/news-archive/2024/maj/the-agency-for-digital-government-publishes-ai-guides/
14 | National strategic framework in the field of artificial intelligence 2024-2027 | Romania | Authority for the Digitalisation of Romania | National | Policy | 2024 | https://www.adr.gov.ro/wp-content/uploads/2024/06/Strategia-Nationala-pentru-Inteligenta-Artificiala.pdf
19 | Rules for Gen AI use in the Nacka Communal | Sweden | Nacka kommun | Local | Guideline | 2024 | https://www.nacka.se/medarbetare/digitalisering/stod-for-medarbetare/jobba-sakert/informationssakerhet-i-kommunen/ai-nacka-kommun/
21 | New guidelines for more AI in administration | Germany | Federal Minister for Digital Affairs | National | Guideline | 2024 | https://bmdv.bund.de/SharedDocs/DE/Pressemitteilungen/2024/047-wissing-ki-richtlinie.html?nn=13326 and https://bmdv.bund.de/SharedDocs/DE/Anlage/K/presse/pm-047-ki-richtlinie.pdf?__blob=publicationFile
33 | Use of Generative AI – jointly produced guidelines for Swedish municipalities | Sweden | AI Sweden | Local | Guideline | 2024 | https://aihubtest-bucket.s3.eu-north-1.amazonaws.com/public/storage/resources/HRYluagR0GTnTOiBv2oa3oRcrhrbdvCpPLX1lBzE.pdf
7 Annex II
The Table below includes an overview of the 61 cases used to carry out the high-level analysis of
Section 4. These cases were taken from the PSTW Dataset available at: https://interoperable-europe.
ec.europa.eu/collection/public-sector-tech-watch/data-download.
PSTW ID | Case | Country | Responsible Organisation | Link
PSTW-1993 | Dublin City Council and OpenAI partnership to show the potential of AI to support Europe’s tourism industry | Ireland | Dublin City Council | https://irishtechnews.ie/dublin-city-council-and-openai-ai-europes-tourism/
PSTW-2120 | Leveraging LLMs for topic classification in public affairs | Spain | BiDA – Lab, Universidad Autónoma de Madrid | https://arxiv.org/abs/2306.02864
PSTW-2195 | F13, the LLM used by the State of Baden-Württemberg in collaboration with Aleph Alpha | Germany | State of Baden-Württemberg | https://www.cnr.it/it/news/12868/il-progetto-genai4lex-b-premiato-alla-camera-dei-deputati-intelligenza-artificiale-generativa-per-i-lavori-parlamentari
PSTW-2210 | NorGPT – the open ‘Made in Norway’ LLM | Norway | NorwAI and NTNU | https://www.ntnu.edu/norwai/norgpt-language-models
PSTW-2216 | Leveraging LLMs for topic classification in public affairs | Spain | BiDA – Lab, Universidad Autónoma de Madrid | https://arxiv.org/abs/2306.02864
8 References
AI HLEG, Ethics guidelines for trustworthy AI, 2019. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

Bommasani, R., Hudson, D., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J., Demszky, D., & Liang, P., On the Opportunities and Risks of Foundation Models, arXiv:2108.07258, 2022. https://doi.org/10.48550/arXiv.2108.07258.

European Commission, Commission launches consultation on the Code of Practice for general-purpose Artificial Intelligence, Shaping Europe’s digital future, 2024. https://digital-strategy.ec.europa.eu/en/news/commission-launches-consultation-code-practice-general-purpose-artificial-intelligence.

European Commission, Commission publishes first draft of General-Purpose Artificial Intelligence Code of Practice, Shaping Europe’s digital future, 2024. https://digital-strategy.ec.europa.eu/en/news/commission-publishes-first-draft-general-purpose-artificial-intelligence-code-practice.

European Commission, European declaration on digital rights and principles, Shaping Europe’s digital future, 2022. https://digital-strategy.ec.europa.eu/en/library/european-declaration-digital-rights-and-principles.

European Commission, Commission launches AI innovation package, 2024. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383.

Fernández-Llorca, D., Gómez, E., Sánchez, I., & Mazzini, G., ‘An interdisciplinary account of the terminological choices by EU policymakers ahead of the final agreement on the AI Act: AI system, general purpose AI system, foundation model, and generative AI’, Artificial Intelligence and Law, 2024, pp. 1-14. https://doi.org/10.1007/s10506-024-09412-y.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y., Generative Adversarial Networks, arXiv:1406.2661, 2014. https://doi.org/10.48550/arXiv.1406.2661.

IBM, What Are AI Hallucinations?, 2023. https://www.ibm.com/topics/ai-hallucinations.

Martin, B., Tangi, L., & Burian, P., European Landscape on the Use of Blockchain Technology by the Public Sector, JRC Publications Repository, 2022. https://publications.jrc.ec.europa.eu/repository/handle/JRC131202.

MathWorks, What Is a Recurrent Neural Network (RNN)?, 2024. https://it.mathworks.com/discovery/rnn.html.

OECD, AI language models: Technological, socio-economic and policy considerations, OECD, 2023. https://doi.org/10.1787/13d38f92-en.

Tangi, L., Combetto, M., Gattwinkel, D., & Pignatelli, F., AI Watch: European landscape on the use of artificial intelligence by the public sector, Publications Office of the European Union, 2022. https://data.europa.eu/doi/10.2760/39336.

Tangi, L., Combetto, M., & Martin, B., Methodology for the Public Sector Tech Watch Use Case Collection, JRC Publications Repository, retrieved on 15 November 2024 from https://publications.jrc.ec.europa.eu/repository/handle/JRC137409.

TechTarget, What is a Generative Adversarial Network (GAN)?, SearchEnterpriseAI, 2024. https://www.techtarget.com/searchenterpriseai/definition/generative-adversarial-network-GAN.