EU AI Act: Shaping Or Destroying The Future Of US Open Source Software?

The EU’s revised AI Act has made a gutsy move: it would prohibit American companies like OpenAI, Amazon, Google, and IBM from offering API access to their generative AI models in Europe.

The amended act, recently approved in committee, introduces penalties for American open-source developers and software distributors, including GitHub, if unlicensed generative models become accessible in Europe.

While the act allows for open-source exceptions concerning traditional machine learning models, it explicitly disallows safe-harbor provisions for open-source generative systems.

Companies that make models accessible in the EU without obtaining comprehensive and costly licenses would face substantial fines of €20,000,000 or 4% of global revenue, whichever is greater. For a company with €2 billion in worldwide revenue, for example, the 4% figure comes to €80,000,000.

This responsibility also extends to open-source developers and hosting services like GitHub, as they would be held accountable for offering unlicensed models.

Effectively, the EU’s actions pressure prominent American tech companies into undermining the viability of small American businesses, while threatening to penalize significant parts of the American tech ecosystem.

If implemented, enforcement of the AI Act would be taken out of the exclusive control of individual EU member states: under the act, third parties would be able to file lawsuits against national governments to compel the enforcement of fines.

Moreover, the act extends its jurisdiction beyond territorial boundaries, meaning that third parties could compel European governments into legal conflict with American developers and businesses.

Main Provisions Of The Amended AI Act

The linked PDF (144 pages) contains the actual law text. Note that the provisions are formatted quite differently from typical American statutes, which makes the document challenging to read. To assist with navigation, we have included page numbers for the relevant sections of the PDF.

Extensive Scope of Jurisdiction: The act’s jurisdiction extends to providers and deployers of AI systems established in or located in a third country, either where member state law applies by virtue of public international law or where the output produced by the AI system is intended for use within the Union. (Pg 68-69)

Mandatory Registration of “High-Risk” Projects: Any AI project or foundational model that falls under the “high-risk” category must be registered with the government, including details of the system’s anticipated functionality. If the system surpasses its registered functionality, it may be recalled, which could pose a challenge for open-source projects that operate with less centralized control. Registration also mandates disclosure of the data sources used, computing resources (including training time), performance benchmarks, and red-teaming results. (Pg 23-29)

Costly Risk Assessment Needed: EU member states are responsible for conducting “third-party” assessments in each country, with costs scaled to the size of the applying company. The benchmarks for these tests have yet to be established. Post-release monitoring will also be mandatory, likely performed by the government, and recertification will be necessary if a model exhibits unforeseen capabilities or undergoes significant further training. (Pg 14-15, and see provision 4a for clarification that this involves government testing.)

Very Vaguely Defined: The act provides a list of risks spanning the environment, democracy, and the rule of law, but it never defines what constitutes a risk to democracy, raising the question of whether the act itself could pose one. (Pg 26)

No Exemption for Open Source LLMs: The act does not exempt open-source foundational models from its provisions. Both the programmers and the distributors of the software bear legal liability. For other types of open-source AI software, liability shifts to the group that deploys the software or brings it to market. (Pg 70)

Restrictions on APIs: The act essentially bans APIs (application programming interfaces) that allow third parties to build on AI models without running them on their own hardware; AutoGPT and LangChain are examples of such implementations. Under these rules, if a third party utilizing an API discovers a new model functionality, they are required to obtain certification for that specific functionality.
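To make the restriction concrete, here is a minimal sketch of the kind of third-party API use at issue, written against the OpenAI Python client. The model name, prompt, and summarize wrapper are illustrative assumptions, not anything named in the act.

```python
# A third party layering a new capability on a hosted generative model,
# purely over the network: no model weights or GPUs on the caller's side.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(text: str) -> str:
    """A new, higher-level capability built on top of the hosted model."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


print(summarize("The EU AI Act introduces licensing requirements for generative models."))
```

On the act's reading, the summarization capability layered on top of the hosted model is itself a new functionality that would need certification before being offered in the EU.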

Mandatory Disclosure of Confidential Technical Information: According to the law, the original provider must disclose what would typically be considered confidential technical information to the third party so that the third party can complete the licensing process. This effectively bars startups and tinkerers from building on an API, even if they are located in the US: if a tinkerer makes their software accessible in Europe, licensing and compulsory disclosure follow. (Pg 37)

Liability of Open Source Developers: The act’s wording is unclear in places. Liability does not extend to free and open-source AI components, but foundational models (LLMs) are treated separately from components. The upshot is that open-source traditional machine learning models are permissible, while open-source generative AI models are not.

If an American open-source developer uploads a model, or code that uses an API, to GitHub, and that code becomes available in the EU, the developer would be held liable for releasing an unlicensed model. GitHub, as the hosting platform, would likewise bear liability for hosting it. (Pg 37, 39-40)

Restrictions on LoRA Technique: The act essentially prohibits the use of LoRA (Low-Rank Adaptation), a technique that adds new information and capabilities to a model at a fraction of the cost of full retraining by training small adapter weights while the base model stays frozen. Open-source projects often rely on LoRA because they lack the resources for expensive computing infrastructure. Major AI models are also rumored to be trained with LoRA, as it is more cost-effective and allows for easier safety checks than introducing numerous new features simultaneously. (Pg 14)
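For readers unfamiliar with the technique, here is a minimal sketch of LoRA fine-tuning using Hugging Face's peft library. The base model and hyperparameters are illustrative choices, not anything specified by the act.

```python
# Minimal LoRA setup: wrap a frozen base model with small low-rank adapters.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# Prints something like: trainable params: 294,912 || all params: ~124M || 0.24%
# Only the adapters train; the frozen base weights are untouched, which is
# why LoRA lets under-resourced projects extend a model cheaply.
```

Each such adapter pass would, under the act, trigger the recertification requirement described next.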

Recertification for LoRA Usage: If an open-source project obtains the required certificates, it would still need to undergo recertification each time LoRA is used to expand the model.

Stringent Permitting Review for Deployment: Deployers, whether individuals or entities using AI systems, must pass a rigorous permitting review before launching their systems. Small businesses in the EU, however, are exempt from this requirement. (Pg 26)

Litigation Rights of Third Parties: Concerned third parties have the right to sue through a country’s AI regulator established by the act, meaning a single AI deployment can be challenged separately in multiple member states. Third parties can also initiate legal action to compel a national AI regulator to impose fines. (Pg 71)

Significant Fines: Non-compliance with the regulations can result in fines ranging from 2% to 4% of a company’s gross worldwide revenue, with individuals facing fines of up to €20,000,000. However, European-based SMEs and startups receive some leniency regarding fines. (Pg 75)

Exemptions for R&D and Clean Energy Systems in the EU: The use of AI for research and development tasks or for clean energy production is exempt from this regulatory system. (Pg 64-65)

AI Act And US Law

Challenges with Broad Extraterritorial Jurisdiction: The extensive grant of extraterritorial jurisdiction in the AI Act raises concerns.

It allows any EU citizen, regardless of motive, to potentially compel EU governments to take legal action whenever unlicensed AI models are somehow accessible within the EU. This goes beyond the typical requirement that EU-based companies comply with EU law, creating a much broader scope of enforcement.

Significant Issues with API Restrictions: The API restrictions pose a major problem. Currently, many American cloud providers offer API access to their models with few limitations beyond waiting lists, which providers are actively working through.

This allows programmers and inventors working from home or a garage to access cutting-edge technology at reasonable cost. Under the AI Act's restrictions, however, API access would become complex and largely limited to enterprise-level customers.

Conflicting Approaches: The EU’s objectives contrast with the demands of the FTC (Federal Trade Commission). Imposing similar restrictions within the United States would raise significant antitrust issues.

The FTC has expressed concern about avoiding an Amazon-like scenario, in which a dominant company leverages its position to capture the majority of profits at the expense of smaller partners. Acting in line with the AI Act's intentions would likely trigger major antitrust problems for American companies.

Conflicting Impact on Innovation: Apart from the antitrust provisions, the AI Act’s innovation treatment presents a point of conflict. In the American context, discovering new ways to utilize software for profit is generally encouraged.

However, under the EU Act, finding innovative applications for software can invalidate safety certifications, necessitating a new licensing process. These disincentives to innovation are likely to cause friction, especially considering the statute’s extraterritorial reach.

Issues with Open Source Provisions: The treatment of open source developers in relation to foundational models poses a significant problem within the AI Act.

It labels developers and distributors working on or with foundational models as potential wrongdoers, holding them liable for releasing unlicensed foundational models or code that enhances such models. For other open-source machine learning forms, the licensing responsibility falls on those deploying the system.

Challenges of Sanctioning the Tech Ecosystem: Attempting to sanction specific parts of the tech ecosystem, such as open-source developers, can have adverse consequences. Open-source developers are unlikely to respond positively to being told by a government, particularly a foreign one, that they cannot create certain programs.

Additionally, what would happen if platforms like GitHub and various co-pilots decide that dealing with Europe’s regulations is too challenging and restrict access? Such actions may have repercussions that have not been thoroughly considered.

What Are The Defects Of The Act?

Encouragement of Unsafe AI: Adding to the concerns, the AI Act seems to promote the development of narrowly tailored AI systems. However, history has shown us that such systems can be inherently risky, as evidenced by the complex algorithms used in social media platforms.

Many social media algorithms rank content solely by engagement value, with no ability to assess that content's potential impact. Large language models, by contrast, can be trained to recognize and avoid promoting violent content. In practice, the foundational models the EU is apprehensive about are safer than the models it advocates for.

Corruption in the Legislation: This legislation comes across as deeply flawed. If large language models are genuinely a concern, that concern should apply universally, in all circumstances.

Granting exemptions to R&D models demonstrates a lack of seriousness regarding the legislation’s objectives. The likely outcome of such a policy would be the creation of a society where only the elite have access to R&D models, while small entrepreneurs and others are left without such opportunities.

Anticipated Problems and Unforeseen Consequences: This law is expected to pass, but the EU may soon find that it has inadvertently created numerous unforeseen problems. That is unfortunate, because some areas, particularly the algorithms used by large social networks, genuinely do require regulatory attention.
