Ensuring Open Source AI thrives under the EU’s new AI rules

In 2024, the European Union approved the Artificial Intelligence Act, the world’s first comprehensive legal framework for AI. Part of the law mandated the creation of a Code of Practice for General-Purpose AI to guide AI developers. The OSI applied to take part in drafting these rules, but when the first draft arrived, we discovered issues that would make it impossible for Open Source AI projects to comply. Here’s how we fixed them.

What is the Code of Practice for, and why is it important?

The European Union (EU) Code of Practice for General-Purpose AI is an upcoming set of voluntary rules designed to help developers demonstrate compliance with the AI Act. It is a temporary measure while the EU develops formal standards on artificial intelligence (AI), but it will shape the standards the EU eventually adopts.

In August 2024, the Open Source Initiative (OSI) applied to take part in the drafting process. We were accepted and have been working since then to make sure these new rules are written with Open Source in mind.

What was wrong with the Code of Practice?

Overall, the Code of Practice proposes sensible practices to reduce the risks AI could pose. AI systems that comply with the OSI’s Open Source AI Definition already follow most of these rules, but there were a couple of elements in the Code of Practice that would have been impossible for Open Source developers to implement.

In particular, previous drafts of the Code of Practice mandated acceptable use policies and a prohibition of certain uses of the AI system. Such restrictions conflict with the freedom of use that Open Source guarantees, in particular criterion 6 of the Open Source Definition (No Discrimination Against Fields of Endeavor). Developers would have had to choose between complying with the Code of Practice and being Open Source.

This is bad both for developers and for the EU: companies might not consider Open Source solutions if they aren’t sure they can comply with the law, which could delay or even halt the development and deployment of Open Source AI systems in Europe. This would be particularly unfortunate, as Open Source AI systems are by nature the most transparent and accessible. Additionally, the proposed restrictions can’t actually be enforced: restrictions could be removed by a downstream user, and acceptable use policies could simply be ignored.

How the OSI worked to fix it

In the first round of feedback, we raised our concerns about the incompatibility of the Code of Practice with Open Source principles. After we saw no improvement in the second draft, we teamed up with like-minded organizations and wrote a letter to the chairs highlighting the issue. On the 11th of March the third draft of the Code of Practice was released; it made acceptable use policies optional and exempted Open Source AI from the requirement to prohibit certain downstream uses.

We welcome these changes as they will allow Open Source AI developers to adhere to the Code of Practice, removing a serious barrier to Open Source AI development in Europe!

What’s next?

The Open Source Initiative remains committed to ensuring Open Source is taken into account when new rules like these are being made. We’ll continue to follow the drafting process and ensure these changes aren’t reversed. We’ll also continue to fight the openwashing of some AI models such as Meta’s Llama, while working to educate and inform lawmakers about Open Source and its benefits.

If you like what we are doing, consider donating! Your support makes our work possible!