
Ref. Ares(2023)3314133 - 11/05/2023

OpenAI White Paper on the European Union’s Artificial Intelligence Act

Introduction

OpenAI is an AI research and deployment company with the mission of ensuring that artificial general intelligence (AGI) is developed and used in a way that benefits all of humanity.¹ Since our founding in 2015, we have deployed numerous AI systems on the path towards that goal, including GPT-3, a large language model that performs a variety of natural language tasks; DALL·E, an image generation system that draws detailed pictures from text input; and Codex, a code generation system which writes code based on text input.

OpenAI’s foundational charter revolves around the development of “safe and beneficial AGI.” In addition to core AI research and development, we invest heavily in policy research and formulation, risk analysis and mitigation, and technical and process infrastructure to maximize safe use of our technologies. Our company is governed by a non-profit with independent directors making up a majority of the board, and the board is required to put social benefit ahead of all other considerations. OpenAI also has a unique “capped-profit” legal structure that allows us to effectively increase our investments in computing power and talent while maintaining the checks and balances needed to actualize our mission.

We believe that AGI has the potential to profoundly benefit society, and understand that realizing these benefits requires oversight and governance of AI beyond industry alone. We support thoughtful regulatory and policy approaches designed to ensure that powerful AI tools benefit the largest number of people, and we applaud the EU for tackling the immense challenge of comprehensive AI legislation via the Artificial Intelligence Act (AIA).

OpenAI shares the EU’s goal of increasing public trust in AI tools by ensuring that they are built, deployed, and used safely, and we believe the AIA will be a key mechanism in securing that outcome. Many themes and requirements of the AIA are reflected in the tools and mechanisms that OpenAI already employs to balance technological progress with safe and beneficial use.
For example, we currently require applications building with our tools to adhere to use-case policies that exclude harmful or especially-risky uses; monitor and audit applications to help prevent misuse; and employ an iterative deployment process, through which we release products with baseline capabilities and stringent restrictions, and slowly expand features and/or loosen requirements as we receive feedback on how they are being used. We seek to share our experiences in building and deploying AI systems while continuing to learn from others.

¹ We define “artificial general intelligence” as highly autonomous systems that outperform humans at most economically valuable work. More information is available at https://openai.com/charter/.

We recognise that the EU has received feedback on all aspects of the AIA - accordingly, this White Paper focuses on issues that we are particularly familiar with given our experience and mission: requirements around general purpose AI systems, modifications to deployed systems, and the scope of certain high risk use cases. We appreciate the opportunity to contribute to the discussion and are excited for ongoing engagement on the AIA.

DISCUSSION

I. General Purpose AI Systems as High-Risk Systems
Recent amendments to the AIA have sought to ensure that general purpose AI systems, which have the potential to be deployed in high risk use cases, are adequately covered by the AIA. In their consolidated proposal (15/06/2022), the French Presidency of the Council has put forward Articles that would cover or exclude general purpose AI systems under certain conditions. We understand the concerns arising from the unregulated release of general purpose AI systems and offer suggestions on potential impact and issues to consider.
For background, OpenAI primarily deploys general purpose AI systems - for example, our GPT-3 language model can be used for a wide variety of use cases involving language, such as summarization, classification, questions and answers, and translation. By itself, GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high risk use cases. Accordingly, we have dedicated significant resources to determining guidelines, best practices, and limitations for uses of our services. We currently outline a set of “high stakes applications” in fields such as law, medicine, politics, finance, and civil services, where applications proposed to be built using our services are subject to additional scrutiny that requires clear identification and management of risks. For example, in an employment context, we would not support a use case involving the use of GPT-3 to determine eligibility for employment, but may support a use case where GPT-3 assists a user by suggesting potential text for job postings (which is reviewed by the user before publication), given the simpler bounds and comparatively lower risk of the latter. This level of oversight over the use of our services is enabled by deploying GPT-3 through an application programming interface (API), which allows us to review signups, implement technical oversight, and identify and prevent repeated acts of abuse.

We believe our approach to mitigating risks arising from the general purpose nature of our systems is industry-leading, and we have outlined some of these practices in a collaborative publication with other labs titled “Best Practices for Deploying Large Language Models.” Despite measures such as those previously outlined, we are concerned that proposed language around general purpose systems may inadvertently result in all our general purpose AI systems being captured by default.
The currently proposed Article 4.c.1 contemplates that providers of general purpose AI systems will be exempted “when the provider has explicitly excluded any high-risk uses in the instructions of use or information accompanying the general purpose AI system.” While we believe that we would currently fall under this exemption given the protective measures we employ, Article 4.c.2 potentially undermines the intent of Article 4.c.1 by stating that “Such exclusion shall…not be deemed justified if the provider has sufficient reasons to consider that the system may be misused.”

As outlined above, we consider and continue to review on an ongoing basis the different ways that our systems may be misused, and we employ many protective measures designed to avoid and counter such misuse. The current framing may inadvertently incentivise an avoidance of active consideration of ways that a general purpose AI system may be misused, so that providers do not have “sufficient reasons to consider [misuse]” and can avoid additional requirements. The fundamental nature and value of general purpose AI systems are that they can be used for many application areas; we do not think it would meet the goals of safe and beneficial AI to inadvertently encourage providers to turn a blind eye to potential risks.
We suggest reframing the language to incentivize rather than penalize providers that consider and address system misuse, especially if they take actions that indicate they are actively identifying and mitigating risks.
An example of possible language could be that providers of general purpose systems will be exempted as per Article 4.c.1 “when the provider (i) has explicitly excluded any high-risk uses in the instructions of use or information accompanying the general purpose AI system, (ii) performs periodic assessments to understand the possibility of misuse, and (iii) implements reasonable mitigation measures to address those risks.” We propose removing the language currently in Article 4.c.2 and replacing it with this suggested text.

II. Generative AI systems considered high-risk under the IMCO-LIBE report

The European Parliament’s original IMCO-LIBE report (20/04/2022) proposes language amending Annex III, adding 1.8.a, which would classify a large swath of content-generation systems as high-risk systems if they generate “text content that would falsely appear to a person to be human generated and authentic” or “audio or video content that appreciably resembles existing natural persons, in a manner that significantly distorts or fabricates the original situation, meaning, content, or context and would falsely appear to a person to be authentic.” Portions of this language overlap with Article 52’s transparency obligations around disclosing that content has been artificially generated or manipulated, and we suggest aligning these requirements within Article 52 rather than adding a separate set of requirements under Annex III.

For more context, GPT-3 and our other general purpose AI systems such as DALL·E may generate outputs that could be mistaken for human text and image content. However, in line with the requirements outlined in Article 52, we require deployers building on our API to not mislead users that they are interacting with an AI system or AI generated content. We have developed mechanisms to allow us to verify the synthetic origin of images generated by DALL·E, and are constantly testing and iterating on restrictions within our Content Policy to address concerns around deepfakes and artificially generated content. For example, we currently prohibit the generation of images of specific individuals, but are exploring mitigations that we think would support benign use cases for such generations. We continue to evaluate methods to combat deepfakes and similar problems, and with current safeguards in place, we believe users will be aware that they are interacting with an AI system and that GPT-3 or DALL·E output does not mislead people.

Despite these efforts, the new language in Annex III 1.8.a could inadvertently require us to consider both GPT-3 and DALL·E to be inherently high-risk systems, since they are theoretically capable of generating content within the scope of the clause. We suggest that instead of adding these additional clauses to Annex III, Article 52 can be relied on (or amended if deemed appropriate). This Article can sufficiently require and ensure that providers put into place reasonably appropriate mitigations around disinformation and deepfakes, such as watermarking content or maintaining the capability to confirm if a given piece of content was generated by their system.

III. Requiring New Conformity Assessments for Substantial Modifications
The AIA currently requires a new conformity assessment each time an AI system undergoes a “substantial modification”, defined as a change that “affects the compliance of the AI system with the requirements set out in Title III Chapter 2 of this Regulation or results in a modification to the intended purpose for which the AI system has been assessed.” We are concerned that this requirement may impact innovations that increase the safety of the AI system on the market, such as those achieved through our iterative deployment model. This model allows us to constantly reassess features and risk levels and make safety and security changes to our systems on a frequent, ongoing basis.

We propose that modifications made to increase the safety of an AI system on the market or to mitigate risk should not be captured by “substantial modification”. For example, addressing concerns around hate speech may require monitoring the changing landscape of what constitutes hate speech (such as in relation to new social movements) and quickly updating systems accordingly. OpenAI’s iterative deployment allows our researchers and engineers to make improvements to our AI systems and tools to help ensure that they are continuously becoming safer, less biased, and more useful. This reduces the time between the discovery of important safety updates and the implementation and availability of such updates.

However, the current definition of “substantial modification” could be interpreted to require a new conformity assessment whenever changes such as these are made, as there is the possibility that it could affect the compliance of the AI system with broader Title III Chapter 2 requirements.² To avoid an undesirable outcome where improvements to the safety and well-functioning of AI systems are unnecessarily delayed, we suggest excluding modifications made for safety or risk mitigation reasons that are not reasonably expected to have a negative impact on health, safety, or fundamental rights of any person; however, if the provider subsequently has reason to believe that such impacts have happened or may be likely, the modification should be rolled back and a new conformity assessment required before the modification is redeployed.

IV. Concerns With Scope of Certain High Risk Use Cases
Our final suggestions focus on specific categories of high risk use cases listed in Annex III. As mentioned earlier, OpenAI generally disallows most use cases deemed high risk by the AIA. However, there is some ambiguity where Annex III may capture certain low risk use cases. We believe it is critical that sectors fundamental to human growth and improvement, such as education and employment, are able to benefit broadly from AI advancements, particularly when the advancements do not pose a risk to a person’s fundamental rights.

As one example, Section 4.a in Annex III outlines “AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates.” While we agree that an AI system used to make or relied on as a primary input for direct employment decisions should be considered high risk, there are a number of use cases that could inadvertently be captured by the current language which would help benefit and modernize the sector without presenting risk to people. For example, one OpenAI customer uses GPT-3’s text generation capabilities to help people create and edit job descriptions for more effective recruiting. This could be considered as falling under the category of “advertising vacancies” in Section 4.a, but does not seem to be the primary thrust of the categorization, because the AI system supports the human decision maker as an assistant and is not the primary author of the job description. The description is reviewed and ultimately finalized by a person, and the posting and availability of the description is not determined by the AI system.

² We understand that since we operate primarily as a general purpose system provider, the conformity assessment may not be implicated at all. However, we or users may build applications that do fall under high-risk categories, in which case AI safety and risk mitigation efforts would be slowed by conformity assessment requirements for substantial modifications.

Similarly, Section 3.b covers “AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.” As with the previous job description example, we believe that the use of our AI systems to, for example, help test writers develop and edit new test questions would present large benefits in helping to modernize the educational sector without entailing risks such as those intended to be addressed by Annex III. However, the specific language could be read to include the development of test questions as part of “assessing students” or “assessing participants in tests.” We suggest that the use of AI systems to generate test questions for human curation, editing, and selection should not be considered high risk, as this would result in reduced ability for educators to benefit from AI advancements. To address these concerns, we propose that Sections 3 and 4 of Annex III be amended to clarify a focus on use cases that will have a material impact on a person’s employment or educational opportunities, and not potentially capture broad sector-wide uses like test question generation or job description generation that present relatively low risk. An example of alternative language for potential consideration:

“3. Education and vocational training:

(a) AI systems intended to be used to make decisions regarding access to or assignment of natural persons to educational and vocational training institutions;

(b) AI systems intended to be used for evaluating performance in educational and vocational training institutions, including performance in tests commonly required for admission to educational institutions.

4. Employment, workers management and access to self-employment:

(a) AI systems intended to make decisions regarding the suitability of natural persons for employment, including determining access to job vacancies, screening or filtering applications, and evaluating candidate performance;

(b) AI systems intended to make decisions on promotion and termination of work-related contractual relationships, and for monitoring and evaluating performance and behavior of persons in such relationships.”

Additionally, given the continued advancement of AI systems’ capabilities, we expect that currently unknown high risk use cases will continue to emerge, making it important to ensure that the AIA remains agile in capturing ongoing developments. Quickly capturing new high risk AI systems and removing those which have proven themselves sufficiently low risk must be low friction. We agree with the submissions that advocate for a process that can ensure a speedy turnaround when it comes to adding new high risk AI systems to Annex III. Equally, we welcome the Czech Presidency of the Council’s proposal (15/07/2022), which includes an amendment to Article 7.3 empowering the European Commission to delete AI systems from Annex III via delegated acts under specific circumstances.

Conclusion

We hope that these comments provide a helpful perspective on the capabilities and safety mechanisms of general purpose AI systems, and we appreciate the opportunity to share some of our core expertise and viewpoints on the AIA. We recognize and appreciate the enormity of the EU’s work in understanding and encouraging development of critical AI technology while ensuring that the development and use of these systems respects fundamental human rights and values. We remain ready to assist and advise however needed.
