GitHub’s Chief Lawyer on the EU AI Act’s Impact on Open Source

GitHub's Chief Legal Officer Shelley McKinley spoke to AI Business to discuss what the Act means for open source development.

Ben Wodecki, Jr. Editor

January 2, 2024


Near the close of 2023, European lawmakers finally reached an initial agreement on how to regulate AI after years of negotiation. The EU AI Act introduces a risk-based system that would categorize AI systems according to their potential to affect citizens’ rights.

While free and open source AI models are said to be largely exempt from the Act, the text of the regulation is not yet final or publicly available.

AI Business spoke with GitHub Chief Legal Officer Shelley McKinley about how the Act will affect open source development. Microsoft-owned GitHub is the world’s largest open source code host and developer community. The following is an edited transcript of that conversation.

How does the EU AI Act treat open source models?

Shelley McKinley: Assuming the provisional agreement is reflected in the final regulation, the AI Act will give developers of most open source AI certainty that they can continue to innovate. This is particularly clear for developers building and sharing AI components, including datasets, training code, models, and application software, rather than deploying entire AI systems.

The Act provides an open source exemption but limits it in cases of clear risk: banned systems, high-risk systems, systems with transparency obligations, and the largest foundation models. Whether models are open source or closed, the AI Act focuses on regulating general-purpose models that could pose systemic risks.


That means developers building these models will need to comply with the law with regard to documentation, evaluation, notification, energy-use reporting and cybersecurity requirements. This is in line with how we approach AI regulation at GitHub globally: keeping a focus on risks while enabling developers to continue to innovate responsibly.

Do you think it goes far enough? What would you like to have seen?

The big caveat here is that the text is not yet final or available for public review. But from what we have seen during negotiations and in the provisional agreement, I do think policymakers have struck the right balance for open source and for developers: regulating high-risk scenarios while enabling open innovation, in line with our advocacy since the (European) Commission’s proposal in 2021.

Much work remains to fill in the details on compliance and to ensure the inclusive development of harmonized standards. From the developer perspective, however, the political agreement is an encouraging outcome.

What would have been the impact on European open source innovation without the exemption?


If there were no exemption for open source, we would almost certainly see developers pull back on their upstream contributions to open source AI in the European market. This would have a direct impact on overall innovation, not only in the EU but across the globe.

Open source software components are ubiquitous: the latest Synopsys report found open source components in 96% of codebases, where they make up 76% of the code. If developers in the EU stopped building open source components that can be used in AI systems, we would see a ripple effect that could chill AI innovation worldwide.

The Act has not passed yet – does the lack of a consolidated text until at least January pose a risk at all?

While the provisional agreement seems to reflect a commitment to regulate high-risk AI systems without stifling broader innovation, there is always a risk that the final text may not offer the clarity on open source that we have been calling for. That said, I’m optimistic that policymakers heard the calls of the community and that the final text will ensure developers can continue collaborating and innovating, responsibly and openly, in the EU.

Open source may be safe, but organizations building open source AI projects still face challenges, including hiring technical talent. What would GitHub like to see emerge to help those organizations?

As the home of open source, GitHub has always focused on finding ways to support the open source community. Again, because open source is ubiquitous, part of our strategy for supporting the ecosystem has been educating organizations on how much they depend on open source and providing ways for them to invest in the projects they rely on.

That is why we created GitHub Sponsors, a program that contributes to the health of both organizations and the open source maintainers and projects that make up their software supply chain. The financial support maintainers receive can, in turn, provide organizations with the technical talent they are seeking. When executed at scale, it can be a win-win.

It has been heartening to see this approach inform governments as well. The German Sovereign Tech Fund provides similar financial support to key open source projects, and GitHub was proud to partner in supporting the Open Technology Fund’s Free and Open Source Software Sustainability Fund, which sustains internet freedom infrastructure. Whether through the National AI Research Resource under consideration in the U.S. or other mechanisms, applying these models pioneered for open source software to open source AI research groups would be a clear win.

While funding programs provide one potential answer to the various hiring challenges organizations face, the demand for developer talent worldwide is so high that all organizations need to focus on creating an environment in which developers want to work. That can include investing in AI tools that help developers stay in the flow and be more creative, keeping lines of communication to leadership open, and making a concerted effort to increase representation in the organization.


About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

