Foundational artificial intelligence risk assessment framework
Final
V1.0.0
September 2024
OFFICIAL - Public
Instructions
Please be familiar with, and refer to, the AI governance policy and guideline when completing this framework.
For further background and context, it may also be helpful to refer to the National Framework for the
Assurance of AI in Government (NFAAIG) 2024.
Each question in the framework is populated with guidance, which should be deleted when the assessment is finalised.
The FAIRA should be completed by a team of experts, including technical, policy and subject domain specialists,
operational experts, and accountable officers.
A team should plan a series of collaborative workshops to fill out a FAIRA, and should expect to engage
additional experts depending on the risks identified and the subject matter expertise required.
Because the FAIRA requires expertise from multiple AI risk domains (see Table 1), the team may find it easier to work
through sections of Part A and Part B in parallel, as answers to Part A will inform Part B and vice versa.
Part C is not a template requiring completion, but provides information that can assist with Parts A and B.
Some of the questions may be challenging to answer due to:
· the complexity and opacity of components of many AI solutions available for government use,
· the diversity of skills required of human operators, and
· the variety of emergent risks from using AI in different contexts.
Knowledge gaps should be highlighted and communicated to decision makers so that appropriate risk
controls can be implemented.
Please keep in mind this framework is not intended to replace existing ICT governance and risk management
processes, but instead should support efforts to govern AI across all risk domains.
Background
Government agencies managing the lifecycle of an AI system should use the Foundational AI risk assessment
(FAIRA) as a communication tool for governance, risk, and assurance activities with stakeholders. Agencies should
use FAIRA to identify risks specific to AI solutions as supplementary inputs to any existing assessment frameworks
and activities, such as privacy impact assessments, information security assessments, human rights impact
assessments and so on, that inform existing risk management activities. Aligned with the NFAAIG, the FAIRA
framework promotes a common approach to identifying, evaluating, communicating, and managing risks
associated with AI in the Queensland Government. The framework can assist agencies in meeting their
mandated requirement under the AI governance policy to have a consistent and evidence-based process for AI
evaluation. For further information on what agencies must do regarding the governance of AI, please see the AI
governance policy.
The FAIRA framework is a transparency, accountability, and risk identification tool for Queensland Government
agencies evaluating artificial intelligence (AI) solutions. The FAIRA aims to help stakeholders identify risks and
potential mitigation actions specific to the AI lifecycle. FAIRA is ‘foundational’ because stakeholders can use it to
describe an AI solution in terms of technical components, system design, human interaction, implementation, and
their associated impacts, providing foundations for management action in existing risk processes. A list of common
controls is also provided to assist teams in identifying actions that could be taken to reduce the risks identified
through the FAIRA. Teams can use FAIRA as the basis for communicating additional AI risks and mitigations with
stakeholders and as a springboard into other evaluation frameworks such as privacy or human rights. FAIRA can
improve the requirements, implementation, and operation of an AI solution and in doing so strengthen public trust
in how Government manages the AI lifecycle.
Figure 1: How FAIRA Works
A FAIRA should be initiated at the earliest opportunity when an AI solution is under consideration, and revisited
throughout its lifecycle. Stakeholder consultation should be conducted, and answers to any remaining gaps should
be sought from relevant subject matter experts (see Table 1: Domains of AI risk).
It is necessary to identify the boundaries and scope of the AI solution (e.g. what it contains and what it entails,
and its integration points with associated upstream and downstream systems). It is also important to ensure that the
context identifies what is NOT part of the scope of the evaluation.
Domains of AI risk in the Queensland Government
AI intersects with many domains of risk. Table 1 provides twelve examples of domains of AI risk that an agency could
consider. In preparing a FAIRA, teams can seek assistance on risks in domains unfamiliar to them by:
· referring to their own internal agency subject matter experts for the domain (such as privacy or human rights officers)
· referring to guidelines that support best-practice implementation of legislation relevant to the domain (such as privacy impact or human rights impact assessments)
· engaging directly with statutory office holders or departments for further guidance and advice.
Table 1: Domains of AI risk

Domain: 5. Public records and right to information
Key positions: State Archivist; Chief Customer & Digital Officer; Special Commissioner, Equity and Diversity
Agency, department or statutory body: State Archives; Queensland Government Customer & Digital Group; Office of the Special Commissioner, Equity and Diversity
Relevant legislation and policy: Public Records Act 2002; Records Governance Policy (2019); Guideline on creating and keeping records for the proactive protection of vulnerable persons (2020)

Crown IP

Domain: 7. Information privacy
Key position: Information Commissioner
Agency, department or statutory body: Office of the Information Commissioner
Relevant legislation and policy: Information Privacy Act 2009 (Qld); Privacy Act 1988 (Cwlth)

Domain: 8. Information security
Key position: Chief Customer & Digital Officer
Agency, department or statutory body: Queensland Government Customer & Digital Group
Relevant legislation and policy: Information security policy (IS18) and associated policy documents
The components of an AI solution should inform a values assessment to identify all relevant value
misalignments that could constitute risks.
1. AI solution
The AI solution is the installed software, comprising digital assets, use of cloud servers, APIs, databases, data
used within the software, algorithms, models and integration within the digital ecosystem of an organisation.
…AI functionality?
Description: Administrative decisions, i.e., actions or decisions made by a government employee while carrying out their legislated duties. Consider the following examples:
· Content development and approval
· Data interpretation and business strategy
· Prioritisation of communications and tasks
· Workflow and process optimisation
· Security and compliance oversight
· Customisation and user experience
· Resource allocation and investment
· Crisis management and response
· Employee training and development
· Customer relationship management
Further information: Commonwealth Ombudsman's Automated Decision Making Better Practice Guide, p. 5: 'The term automated system is used in this guide to describe a computer system that automates part or all of an administrative decision-making process. The key feature of such systems is the use of pre-set logical parameters to perform actions, or make decisions, without the direct involvement of a human being at the time of decision. Automated systems can be used in different ways in administrative decision-making. For example: they can make a decision; they can recommend a decision to the decision-maker.'
1.5. Can the AI solution convert decisions into action? If so, how is this done? And is this subject to direct human intervention?
Description: An 'action' generally refers to the steps or processes undertaken by an automated system. This includes the execution of programmed tasks, the application of rules or algorithms, and other operational functions within the system. Whereas a 'decision' is the outcome or result produced by the automated system after processing inputs and applying its rules or algorithms. It represents the conclusion reached by the system, which can affect individuals or entities subject to the decision.
Further information: See Commonwealth Ombudsman (2023), Automated Decision Making: Better Practice Guide.
…other systems?
2.2. What is the effect of the AI solution on human operators?
Description: Consider impacts (benefits and harms) to wellbeing, autonomy, justice, work satisfaction, workflow and accountability. Is the effect known? If not, how could it be prepared for or monitored?
Further information: See NSW AI Risk Scorecard; and Cebulla, A., Szpak, Z., Howell, C. et al., Applying ethics to AI in the workplace: the design of a scorecard for Australian workplace health and safety.
2.3. What expertise is required to use the AI solution?
Description: Consider:
· Technical skills: the solution requires high levels of technical, security, data and privacy expertise to implement and maintain.
· Personal skills: staff training regarding responsible use, including bias management, safety, diversity and inclusion, is imperative. The solution requires discerning judgement by its users to ensure safe and appropriate use.
2.4. Who is accountable for decisions made using the AI system?
How are planned and actual AI use inputs tracked and recorded?
Further information: Refer to Queensland Government's advice on Artificial Intelligence and public records.
3.4. Does the AI require data from the digital or physical environment of its designed or intended use? If so, what data is accessed?
Description: Are users able to limit the data used, for example exclude data for universal use of the solution, or just for a specific prompt? Are data inputs traceable? Consider the following example: while the AI solution links to key documents used to create a response, the origin of all outputs may not be traceable by the end user.
3.5. How is the system protected against corrupted or missing data?
3.6. What are the Business Impact Levels (BIL) of the inputs (information assets)?
4.1. What are the AI solution data outputs?
Description: Consider the following example: generative AI outputs.
4.2. How are planned and actual AI use outputs tracked and recorded?
Further information: Refer to Queensland Government's advice on Artificial Intelligence and public records.
4.3. Could the outputs allow unauthorised access to information?
Description: Consider the following example: to an external user.
4.4. Is output sent to external sources without being checked by a human first?
Description: If yes, provide explanation and justification. If no, describe the review processes.
4.5. Does the AI produce an output involving data that is regulated by the law?
4.6. Does the data contain personally identifiable data? Is it accessible internally and/or externally?
Description: Consider privacy implications. Consider the following example: the AI solution draws on government data available to the user, and outputs will draw on that data. The data may contain identifiable data.
4.7. Is the AI designed to (or consequentially) provide output that directly contributes to independent action or effect that is regulated by the law?
Description: If the solution is dependent on human oversight and control, it is not to be used to provide output that directly contributes to independent action or effect that is regulated by the law.
4.8. What are the Business Impact Levels (BIL) of the outputs?
5.2. How will they be impacted, and to what degree?
Description: Consider how the AI system is involved in interactions, how their data is used, and what (if any) automated decisions will affect them.
5.3. How will those impacted be informed?
Description: Consider the following examples: a disclaimer before use; outputs clearly labelled as AI generated; contestability and rights provided; training and skills uplift; an exploratory use case with monitoring and evaluation.
Further information: Section 7 (Contestability) of the NFAAIG implementation may be helpful.
Design inputs are the constraints on the AI solution, including values, requirements and controls. Design inputs will
be continuously updated to manage changing risks captured through ongoing evaluation.
6.1. What data sets were used to build the AI solution?
Description: Consider the following examples: the open internet; specific data sets.
6.2. What values and principles drive the AI solution design?
Description: Draw on product design documentation. Consider what principles might be missing that are integral to the Queensland Government, for example transparency.
6.3. How are ethical, legal, safety and technical frameworks or policies considered?
Further information: See NFAAIG.
…or intended use?
Further information: See FAIRA Part C—Controls for AI Risks.
…interoperability, benchmarks, research frameworks, mandates and strategies, etc.)
9.2. How and when is feedback received for the AI solution?
Description: Consider the kinds of feedback collected, how it is collected and analysed, and how it is actioned. Consider the following: Who receives the feedback? Is feedback received continuously or periodically?
Further information: See Implementing Australia's AI Ethics Principles in government. See FAIRA Part C—Controls for AI Risks: Monitoring, test and evaluation.
9.4. What evaluation processes are used for the AI solution?
Description: Consider the following: the cornerstones of AI assurance in the NFAAIG.
Further information: See FAIRA Part C—Controls for AI Risks: Monitoring, test and evaluation.
9.5. How are undesirable results detected?
Description: Consider what the contingency plans are for adverse outcomes.
9.6. Has the AI solution been subject to independent review?
Description: Describe the independent review, or the reason why an independent review is not currently warranted.
Human, societal and environmental wellbeing: AI systems should benefit individuals,
society and the environment.
Human-centred values: AI systems should respect human rights, diversity, and the autonomy
of individuals.
Fairness: AI systems should be inclusive and accessible and should not involve or result in
unfair discrimination against individuals, communities or groups.
Privacy protection and security: AI systems should respect and uphold privacy rights and
data protection and ensure the security of data.
Reliability and safety: AI systems should reliably operate in accordance with their intended
purpose.
Transparency and explainability: There should be transparency and responsible disclosure
so people can understand when they are being significantly impacted by AI and can find out
when an AI system is engaging with them.
Contestability: When an AI system significantly impacts a person, community, group or
environment, there should be a timely process to allow people to challenge the use or outcomes
of the AI system.
Accountability: People responsible for the different phases of the AI system lifecycle should be
identifiable and accountable for the outcomes of the AI systems, and human oversight of AI
systems should be enabled.
Table 2: NFAAIG (2024)
Human, societal and environmental wellbeing
Throughout their lifecycle, AI systems should benefit individuals, society and the environment. Consider legislative, policy and agreement obligations in the Public Sector Act 2022, Public Sector Ethics Act 1994, Work Health and Safety Act 2011, Queensland Climate Action Plan, Inclusion and diversity strategy 2021-2025, Integrity Act 2009, Environmental Protection Act 1994, Use of generative AI in Queensland Government, and The State Government Entities Certified Agreement 2023: 'Part 21: Introduction of technology/future of work (3) Each entity, through the relevant [consultative committee] CC, will consult on proposed technological change or advancements, including the use of artificial intelligence technologies, which may affect or impact on employee's employment.'

What benefits align to this value? How does your AI solution prioritise human, societal and environmental wellbeing? The AI solution helps stakeholders find government processes easier, more efficient, more productive, more complete, more accurate and more sustainable. The AI solution makes stakeholders better off, safer, more satisfied, more understood, more trusting and more sustainable. The AI solution improves communication through less friction, better relationships, being more understood, and easier access to help.

What risks relate to this value? What are the risks to human, societal and environmental wellbeing from implementing all components of your AI solution? Will your AI action reduce the wellbeing of vulnerable groups or individuals? Will your AI affect or impact employee employment? The AI solution may make government processes worse, harder, less efficient, less productive, more disconnected and less understood. The AI solution may make stakeholders worse off, unsafe, or harmed. The AI solution may degrade communication through more friction, diminished relationships, humans being less understood, and help being harder to get.

What can be done to reduce the risks? See recommendations 1. Human, societal and environmental wellbeing, Implementing Australia's AI Ethics Principles in government, in the National framework for the assurance of artificial intelligence in government. See FAIRA Part C—Controls for AI Risks.
Human-centred values
AI systems should respect human rights, diversity, and the autonomy of individuals. Consider legislative and policy obligations in the Human Rights Act 2019, Human Rights Impact Assessment Resources, Public Sector Act 2022, Work Health and Safety Act 2011, Public Sector Ethics Act 1994, and Use of generative AI in Queensland Government.

What benefits align to this value? How does your AI solution prioritise human-centred values? Consider benefits to stakeholder autonomy and justice. E.g. the AI solution will improve government decision making by: increasing fairness and reducing bias; providing greater sensitivity to rights; improving diversity and inclusion; providing better information; making it easier to get help; helping stakeholders to be more informed; or giving actors more agency over their actions and having their rights respected.

What risks relate to this value? What are the risks to human-centred values from implementing all components of your AI solution? Consider risks to autonomy and justice, particularly whether AI action will reduce autonomy or justice for vulnerable groups or individuals. E.g. the AI solution is potentially: unfair; confusing; incorrect; of unclear provenance; reduces options; makes it harder to get help; produces unfair processes and outcomes; reduces rights; lacks accountability; lacks contestability; amplifies bias; or creates a coercive experience. The AI solution makes stakeholders less informed, gives them less agency and affects their rights.

What can be done to reduce the risks? See recommendations 2. Human-centred values, Implementing Australia's AI Ethics Principles in government, in the National framework for the assurance of artificial intelligence in government. See FAIRA Part C—Controls for AI Risks.
Fairness
AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups. Consider legislative and policy obligations in the Human Rights Act 2019, Human Rights Impact Assessment Resources, Public Sector Act 2022, Public Sector Ethics Act 1994, Public Records Act 2002, Guideline on creating and keeping records for the proactive protection of vulnerable persons (March 2020), and Artificial Intelligence and Public Records 2024.

What benefits align to this value? How does your AI solution ensure fairness and reduce unintended bias? How does your AI solution support or improve human rights? E.g. consider how the solution was tested for fairness prior to release. Consider product information about limitations. Consider information available about the data used to create the solution, how and where this was tested, and whether the testing may have gaps and bias.

What risks relate to this value? What are the risks to fairness and bias across all components of your AI solution? Will your AI solution substantially impact human rights?

What can be done to reduce the risks? See recommendations 3. Fairness, Implementing Australia's AI Ethics Principles in government, in the National framework for the assurance of artificial intelligence in government. See FAIRA Part C—Controls for AI Risks.
Privacy protection and security
AI systems should respect and uphold the privacy rights of individuals and ensure the protection of data. Consider legislative and policy obligations in the Information Privacy Act 2009; Privacy Act 1988 (Cwlth); Information security classification framework (QGISCF); Information security policy (IS18:2018); inter-jurisdictional legislation and policies such as the GDPR; and Privacy Impact Assessment Resources, including the threshold privacy assessment template.

What benefits align to this value? How does your AI solution maintain privacy and security?

What risks relate to this value? What are the privacy and security risks from implementing all components of your AI solution? Does your AI solution potentially expose commercial, sensitive, or protected data (including personal information) to unauthorised stakeholders? Consider deliberate and accidental data sharing, integration and data matching.

What can be done to reduce the risks? See recommendations 4. Privacy protection and security, Implementing Australia's AI Ethics Principles in government, in the National framework for the assurance of artificial intelligence in government. See FAIRA Part C—Controls for AI Risks.
Reliability and safety
Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose. Consider legislative and policy obligations in the Information management policy framework (2017); Information security policy (IS18:2018); ICT asset disaster recovery plan (aligned with ISO 27001); procurement policies; Guideline on creating and keeping records for the proactive protection of vulnerable persons (March 2020); Work Health and Safety Act 2011; Use of Generative AI in the Queensland Government (2023); and Public Sector Act 2022.

What benefits align to this value? How does your AI solution ensure reliability and safety?

What risks relate to this value? What are the reliability and safety risks from implementing all components of your AI solution? Is your AI solution for use in high-risk-of-harm environments, e.g. essential services, critical infrastructure, safety components of products, health, education, law enforcement, administration of justice and democratic processes? Consider the evaluations and tests provided by the solution company, including against its own monitoring, feedback and evaluation processes. Consider whether low quality, degraded, out-of-date or inappropriate documents would be processed along with other documents. Consider if transitory, short-term value, trivial or obsolete information is retained and could be used by the AI to produce incorrect responses. Consider the technical implementation of the product, including: the quality of data it uses; the skills and values of humans responsible for its use within their specific job tasks; the context within which it is deployed; and the governance that controls how it is implemented throughout the AI lifecycle.

What can be done to reduce the risks? See recommendations 5. Reliability and safety, Implementing Australia's AI Ethics Principles in government, in the National framework for the assurance of artificial intelligence in government. Voluntary AI Safety Standard. See FAIRA Part C—Controls for AI Risks.
Transparency and explainability
There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them. Consider legislative and policy obligations in the Right to Information Act 2009; Right to Information (RTI); Public Sector Act 2022; procurement policies; Use of Generative AI in the Queensland Government (2023); Generative Artificial Intelligence Records; Public Records Act 2002; Metadata management principles; Metadata schema for Queensland Government assets guideline; and Guideline on creating and keeping records for the proactive protection of vulnerable persons (March 2020).

What benefits align to this value? How is the operation of your AI solution explained transparently? Consider how much certainty there is about how and why the solution functions the way it does, and how the government may fill in any gaps in knowledge.

What risks relate to this value? What are the transparency and explainability risks when implementing all components of your AI solution?

What can be done to reduce the risks? See recommendations 6. Transparency and explainability, Implementing Australia's AI Ethics Principles in government, in the National framework for the assurance of artificial intelligence in government. See FAIRA Part C—Controls for AI Risks.
Contestability
When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system. Consider legislative and policy obligations in Generative Artificial Intelligence Records; Public Records Act 2002; Right to Information Act 2009; Public Interest Disclosure Act 2010; Human Rights Act 2019; Public Sector Act 2022; Public Sector Ethics Act 1994; Work Health and Safety Act 2011; and Guideline on creating and keeping records for the proactive protection of vulnerable persons (March 2020).

What benefits align to this value? How are the operations of your AI solution contestable?

What risks relate to this value? What are the risks to the contestability of all components of your AI solution?

What can be done to reduce the risks? See recommendations 7. Contestability, Implementing Australia's AI Ethics Principles in government, in the National framework for the assurance of artificial intelligence in government. See FAIRA Part C—Controls for AI Risks.
Accountability
Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. Consider legislative and policy obligations in the Public Sector Act 2022; Public Sector Ethics Act 1994; Generative Artificial Intelligence Records; Public Records Act 2002; Financial Accountability Act 2009; Financial and Performance Management Standard 2019; Building Policy Framework; Queensland Treasury Strategic Plan 2023-2027; Queensland Treasury's 'A Guide to Risk Management'; Crime and Corruption Act 2001; procurement policies; and Integrity Act 2009.

What benefits align to this value? How does your AI solution ensure appropriate human accountability?

What risks relate to this value? What are the accountability risks of all components of your AI solution? Consider if and where decisions are made, or outputs created, by the government through an AI solution without appropriate human oversight or accountability.

What can be done to reduce the risks? See recommendations 8. Accountability, Implementing Australia's AI Ethics Principles in government, in the National framework for the assurance of artificial intelligence in government. See FAIRA Part C—Controls for AI Risks.
Part C—Controls for AI Risks

Business function: Executive
Understanding and gaps analysis: Lead AI strategy and implementation plan. Understand your organisational AI governance maturity, including AI risk committees, management, standards, legal obligations and policies.
Learning and skills uplift: Enrol in executive responsible AI skills uplift, including AI governance and AI risk domains in the Queensland Government. Ensure administrative staff increase their skills using AI tools for administrative processes, documentation and record keeping.
Guardrails and intervention: Build a positive AI risk culture. Strengthen AI risk committees, accountability processes, documentation, and working groups. Promote and model ethical behaviours, value alignment, and responsible and safe AI leadership.
Monitoring, test and evaluation: Create an AI oversight committee. Approve an AI risk processes framework. Encourage evidence-based use of AI in the organisation. Require evidence-based decision making for AI investment and sustainment.
Transparency and accountability: Keep decision audit trails: keep appropriate records of how AI is used in executive administrative decision making. Authorise public dissemination of AI use and assurance case studies for government AI projects and procurements.

Business function: Management
Understanding and gaps analysis: Document the AI solution using FAIRA components analysis and values assessment. Answer the question: when do I need to do a FAIRA?
Learning and skills uplift: Enrol in AI product/project manager training and AI risk management skills uplift. Organise team AI skills uplift across all team skill sets to learn all relevant AI policies, frameworks and obligations. Ensure the team increase their skills using AI tools for business processes, including documentation and record keeping.
Guardrails and intervention: Set up contracts with third party vendors that support AI risk controls, including monitoring and feedback. Develop and use AI responsibly within government using inclusive design, FAIRA tools, and AI risk identification and management throughout the AI product lifecycle. Adopt 'privacy-by-design' and 'ethics-by-design' processes.
Monitoring, test and evaluation: Establish an information steering committee or project board to review risks and provide advice across the AI lifecycle. Conduct regular AI risk assessments. Create ongoing monitoring procedures to detect and address poor performance (e.g. bias, unreliability, inappropriate or unsafe use) that may arise as the system evolves or as new data is integrated. Develop metrics to measure ROI and compliance (ethical, legal, regulatory, policy).
Transparency and accountability: Use a risk management process and assign risk responsibilities to roles and/or individuals. Communicate AI risks transparently with stakeholders. Establish ways of working with teams to keep appropriate records.

Business function: Technical
Understanding and gaps analysis: Audit data and IT systems for technical readiness to use AI, including security, privacy, reliability and safety.
Learning and skills uplift: Technology: learn about the AI product. Understand the data assets involved in deploying AI and …
Guardrails and intervention: Build safety features within the AI solution, such as RAG, automated safety responses, bias detection and …
Monitoring, test and evaluation: Design or adopt metrics suitable to evaluate technical risks. Collect quantitative …
Transparency and accountability: Keep appropriate records of use of AI. Document code with indicators when using AI tools for usage and coding.
Licence
This work is licensed under a Creative Commons Attribution 4.0 International licence. To view the terms of this
licence, visit http://creativecommons.org/licenses/by/4.0/. For permissions beyond the scope of this licence, contact
[email protected].
To attribute this material, cite the Queensland Government Customer and Digital Group, Department of Transport
and Main Roads.
The licence does not apply to any branding or images.
Copyright
Foundational artificial intelligence risk assessment framework
Document history

Version  Date            Author            Key changes made
0.1.1    August 2024     QGEA Policy team  All content has been consulted on under guideline, and then split into two documents: guideline and framework.
0.1.2    August 2024     QGEA Policy team  Final feedback from AI unit and key stakeholders incorporated.
1.0.0    September 2024  CCDO, QGCDG       Approved
Accountability mechanisms for AI systems used by government agencies should include clear identification of responsible personnel throughout the AI lifecycle, ensuring that the individuals overseeing AI operations are well defined. Legislative frameworks like the Public Sector Act and Public Sector Ethics Act should guide these efforts by stipulating accountability standards. Additionally, implementing independent audits, publishing transparency reports, and establishing protocols for stakeholder feedback can also strengthen accountability.
The nature of AI inputs significantly affects system outputs, as inputs determine the accuracy and reliability of AI decisions. Poor quality or biased inputs can lead to unreliable or unfair outputs, necessitating accountability measures for AI designers and operators. These implications emphasise the need for thorough data vetting, regular system audits, and transparency about input sources to mitigate unintended consequences and maintain accountability throughout the AI lifecycle.
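As an illustration only, the following minimal input-vetting sketch in Python shows the kind of automated gate that could support the data vetting and input-source transparency described above. The field names, freshness window and rules are hypothetical assumptions, not a mandated standard.

from datetime import date, timedelta

# Hypothetical vetting rules for records entering an AI pipeline.
REQUIRED_FIELDS = {"record_id", "source", "created", "body"}
MAX_AGE = timedelta(days=365)

def vet_input(record: dict, today: date) -> list[str]:
    """Return a list of vetting issues; an empty list means the record may proceed."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if not record.get("source"):
        issues.append("input source undocumented (transparency risk)")
    created = record.get("created")
    if created and today - created > MAX_AGE:
        issues.append("record out of date (reliability risk)")
    return issues

# Example: an undocumented source and a stale record are both flagged.
rec = {"record_id": "r-001", "source": "", "created": date(2023, 1, 10), "body": "..."}
print(vet_input(rec, today=date(2024, 9, 1)))

In practice such checks would sit alongside, not replace, human review and an agency's existing data governance processes.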
AI solutions can enhance stakeholder wellbeing, autonomy, and justice by reducing repetitive task fatigue, thereby increasing productivity and leading to faster, more informed customer interactions. The reduction in routine tasks allows human operators to focus on more complex decision-making processes, enhancing autonomy. Moreover, by ensuring interactions are quicker and more informed, AI solutions can promote a fairer allocation of resources and more equitable service delivery to stakeholders.
Effective implementation and maintenance of AI solutions require strong technical expertise, including data security and privacy management, alongside personal skills like bias management, safety, diversity, and inclusion awareness. Technical skills ensure that AI systems are implemented securely, reducing the risk of data breaches or misuse, while personal skills help ensure that these systems are used responsibly, mitigating biases and promoting ethical use of AI.
AI systems can affect privacy and security by potentially exposing sensitive data to unauthorised access or misuse. Managing these risks involves implementing strong security protocols aligned with legislative obligations like the Information Privacy Act, conducting privacy impact assessments, and ensuring secure data handling practices. Using encrypted communication, access controls, regular monitoring, and privacy-preserving technologies can help safeguard against potential data breaches.
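As a sketch only, the following Python screen redacts two obvious kinds of personal identifier before text is passed to an AI service. The patterns are illustrative assumptions and far from exhaustive; real personal-information detection needs much broader coverage (names, addresses, identification numbers) and human review.

import re

# Illustrative patterns only: email addresses and Australian-style phone numbers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Jo on 0412 345 678 or [email protected]."))
# Contact Jo on [PHONE REDACTED] or [EMAIL REDACTED].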
AI solution design can align with principles of fairness and human rights by incorporating inclusivity and non-discrimination from the outset. Ensuring that the AI system is tested for fairness prior to release involves documenting any known limitations and biases. The fairness of the AI can be monitored through transparent practices and policies aligned with legislative obligations, such as the Human Rights Act and Public Sector Ethics Act, which emphasise the importance of protecting vulnerable persons and ensuring equitable AI interactions.
Biased AI outputs can result in unfair treatment of, or discrimination against, individuals or groups, which undermines stakeholder trust and can exacerbate existing inequalities. Strategies to mitigate these risks include implementing robust training processes on diverse datasets to minimise bias, conducting regular audits, and developing bias detection algorithms. Additionally, fostering a culture of inclusivity and ethics in AI design, aligned with legislative frameworks like the Human Rights Act, can help address and minimise these biases effectively.
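As one hedged illustration of a bias detection check, the sketch below computes a simple demographic parity gap in Python. The metric choice, group labels and decisions are hypothetical; a real assessment would use several fairness metrics selected with domain and legal experts.

from collections import defaultdict

def demographic_parity_gap(outcomes: list[tuple[str, bool]]) -> float:
    """Largest difference in favourable-outcome rates between any two groups.

    A large gap is a prompt to investigate the data and design, not by
    itself proof of unfair discrimination.
    """
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, favoured in outcomes:
        totals[group] += 1
        favourable[group] += favoured
    rates = [favourable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Example with made-up decisions for two groups.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(f"parity gap: {demographic_parity_gap(decisions):.2f}")  # parity gap: 0.33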
Vital data quality considerations include accuracy, completeness, reliability, relevance, and timeliness. These aspects help ensure that the AI system functions as intended and produces accurate and reliable outputs. Poor data quality can lead to faulty decision making due to incorrect or outdated information, potentially resulting in negative impacts on stakeholders or operational inefficiencies. Additionally, if the AI solution uses data external to the government, transparency about data sources becomes crucial to maintaining public trust.
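To make two of these dimensions concrete, here is a minimal Python sketch that reports dataset-level completeness and timeliness rates; the row structure and the one-year freshness threshold are assumptions for illustration only.

from datetime import date, timedelta

def quality_report(rows: list[dict], today: date,
                   max_age: timedelta = timedelta(days=365)) -> dict:
    """Dataset-level completeness and timeliness rates (0.0 to 1.0)."""
    n = len(rows)
    complete = sum(r["value"] is not None for r in rows)
    timely = sum(today - r["updated"] <= max_age for r in rows)
    return {"completeness": complete / n, "timeliness": timely / n}

# Hypothetical dataset: each row carries a value and a last-updated date.
rows = [
    {"value": "open", "updated": date(2024, 8, 1)},
    {"value": None, "updated": date(2024, 7, 15)},
    {"value": "closed", "updated": date(2022, 1, 3)},
]
print(quality_report(rows, today=date(2024, 9, 1)))
# Two of the three rows pass each check, so both rates are about 0.67.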
Managing transparency and explainability in AI solutions requires implementing practices that promote clarity about how the AI functions and the rationale behind its operations. Clear documentation, explainable AI techniques, and regular audits can help ensure that stakeholders understand AI decisions. These measures also involve filling knowledge gaps and maintaining open communication channels with users, and may be guided by national frameworks and ethics principles to support external evaluations and improvements.
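As a sketch of the documentation side, each AI-assisted decision could be captured in an auditable record like the one below; the structure, system name and field names are illustrative assumptions, not a mandated record format.

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable entry per AI-assisted decision (illustrative fields only)."""
    system: str                  # which AI solution produced the output
    model_version: str           # version in use at decision time
    inputs_summary: str          # what data informed the decision
    output: str                  # the decision or recommendation made
    human_reviewer: str | None   # who reviewed it, if anyone
    rationale: str               # explanation offered to the affected person
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AIDecisionRecord(
    system="grant-triage-assistant",  # hypothetical system name
    model_version="2024.09",
    inputs_summary="application form fields; eligibility ruleset v3",
    output="recommend: manual review",
    human_reviewer="case officer",
    rationale="income field inconsistent with supporting documents",
)
print(json.dumps(asdict(record), indent=2))  # in practice, append to a retained audit log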
AI systems interact with humans through various interfaces, such as virtual assistants with chat interfaces, which can improve accessibility for end users and product owners. The perceived impacts on human operators include potential benefits to workflow and accountability, by facilitating efficiency and decision-making accuracy, but also risks such as reduced work satisfaction and autonomy if AI systems overshadow human input. Understanding these impacts involves monitoring AI effects on wellbeing and preparing for negative outcomes by implementing measures like surveys or feedback mechanisms.