🤖 OpenAI lobbied EU to water down regulation
NP #011
Good morning and welcome to the latest edition of neonpulse!
Today, we’re talking regulation. Because while OpenAI’s CEO has been pro-regulation to the public, he seemingly has been lobbying the EU to water down regulation…
OpenAI lobbied EU to water down regulation
Ever since we first heard about AI, regulation has been a topic of conversation. Some say it is even more dangerous than nuclear weapons. Others advocate a six-month halt in its development to reassess the risks and create a suitable regulatory framework.
Sam Altman, the CEO of OpenAI and certainly one of the leaders of this revolution, has always been publicly pro-regulation. He even said: “You’d be crazy if you weren’t at least a little bit scared of AI.”
But now, there’s an interesting twist in Altman’s stance on regulation. He has supposedly been lobbying the EU for looser regulations…
OpenAI seemingly wants elements of the EU’s AI Act to be watered down in ways that would reduce the regulatory burden on the company. This is according to leaked documents about OpenAI’s engagement with EU officials, obtained by TIME.
OpenAI proposed several changes that were later made to the final text of the EU law—which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January.
OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general-purpose AI systems—including GPT-3, the precursor to ChatGPT, and the image generator Dall-E 2—to be “high risk”.
OpenAI is not the only one lobbying. Microsoft and Google both have been too. Both tech giants are actively involved in AI, with Microsoft having invested $13 billion and Google having its own “Bard” LLM.
Both companies say that the burden for complying with the Act’s most stringent requirements should be on companies that explicitly set out to apply an AI to a high-risk use case—not on the (often larger) companies that build general-purpose AI systems.
“By itself, GPT-3 is not a high-risk system”, according to OpenAI. This was stated in a previously unpublished seven-page document that it sent to EU Commission and Council officials in September 2022, titled “OpenAI White Paper on the European Union’s Artificial Intelligence Act”.
The document also states that while the system in itself is not high-risk, it does “possess capabilities that can potentially be employed in high-risk use cases”.
OpenAI’s lobbying effort appears to have been a success: the final draft of the Act approved by EU lawmakers did not contain wording, present in earlier drafts, suggesting that general-purpose AI systems should be considered inherently high risk.
AI regulation continues to be an ongoing topic of discussion, demanding serious attention from everyone involved. As we move forward, only time will be able to tell how it all plays out…
Do we need strict or loose regulation?
Cool AI Tools
🔗 Codesphere: Deploy anything in seconds, no experience needed.
🔗 Typefully 2.0: AI-powered Twitter & LinkedIn manager.
🔗 Danelfin AI: AI-powered stock-picking with up to 94% historic win-rate.
And now your moment of zen
“The Rock” if he were a rock
Credits: u/Tartofreze_
That’s all for today, folks!
If you’re enjoying neonpulse, we would really appreciate it if you would consider sharing our newsletter with a friend by sending them this link: