Microsoft, OpenAI, and Meta Want to Own AI
#NP 063
Good morning and welcome to the latest edition of neonpulse!
Today we’ll be talking about how Microsoft, OpenAI, and Meta want to control AI.
Airplane is the developer platform for building custom internal tools. You can transform scripts, queries, APIs, and more into powerful UIs and workflows.
Airplane's new AI built-ins let you integrate large language models into your workflows with minimal configuration. They come in three flavors: a chat function for sending a single message to an LLM, a chatbot function for holding a multi-turn conversation with an LLM, and a reusable AI function that converts inputs to outputs by following a set of instructions.
Use Airplane AI built-ins to automate data entry, automatically synthesize information, and help your teams move more quickly.
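If you're curious what that looks like in practice, here's a minimal sketch of an Airplane task using the chat and reusable-function built-ins. The exact SDK surface shown here (`airplane.ai.chat`, `airplane.ai.func`) and the `triage_feedback` task are assumptions based on the description above, not verified signatures, so check Airplane's docs before copying it.

```ts
// Illustrative sketch only: ai.chat and ai.func are assumed from the
// feature description above, not verified against Airplane's SDK docs.
import airplane from "airplane";

export default airplane.task(
  { slug: "triage_feedback", name: "Triage customer feedback" },
  async () => {
    // Chat built-in: send a single message to an LLM, get one reply back.
    const summary = await airplane.ai.chat(
      "Summarize in one sentence: 'App crashes on login since v2.3.'"
    );

    // Reusable AI function: instructions in, a callable out that maps
    // arbitrary inputs to outputs by following those instructions.
    const classify = airplane.ai.func(
      "Classify the feedback as 'bug', 'feature request', or 'praise'."
    );
    const label = await classify("Please add dark mode!");

    return { summary, label };
  }
);
```

From there, the task can be wired into a UI or workflow like any other Airplane task.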
Sign up for a free account to try it out!
Microsoft, OpenAI, and Meta Want to Control AI
In the fight over AI regulation, major players such as Microsoft, OpenAI, and Meta are significantly shaping the discussion. These industry giants have gained prominence, securing high-profile engagements like White House meetings and congressional hearings. Their influence extends to setting the agenda on AI's potential impacts on humanity and driving AI policy conversations. However, this focus on a handful of companies has left smaller AI players, both commercial and noncommercial, feeling excluded and uncertain about their future prospects.
Collectively referred to as "Big AI," these influential entities are actively guiding potential AI policies. Recently, OpenAI, Meta, Microsoft, Google, Anthropic, and Amazon collaborated with the White House to commit to responsible AI investment and the development of watermarking tools to identify AI-generated content. Furthermore, OpenAI, Microsoft, Anthropic, and Google established the Frontier Model Forum, aiming to promote safe and responsible usage of advanced AI systems. This coalition seeks to advance AI research, share insights with policymakers, and engage with the broader AI community.
However, these major players represent only a segment of the generative AI market. While OpenAI, Google, Anthropic, and Meta operate the language- and image-focused foundation models, a burgeoning sector of smaller businesses builds applications and tools on top of those models. Despite facing similar regulatory scrutiny, these smaller players lack Big AI's financial resources, leaving them far more exposed to business disruption if they fall out of compliance.
The concern for these smaller entities is their lack of influence over AI regulations: they fear they will have little input in shaping the rules that will govern their operations. Accountability is a case in point for companies like Dataiku, which builds data analytics applications on top of external AI models. If rules hold AI companies responsible for how their models handle data and respond to queries, companies like Dataiku could be penalized for model behavior they don't actually control.
While the Frontier Model Forum aims to collaborate with civil society groups and governments, there is uncertainty about whether membership will be extended to more AI companies. Smaller players emphasize the need for their inclusion in the regulatory conversation to ensure their interests are considered.
To address these concerns, industry experts propose tailoring regulatory requirements and penalties to the size and scope of AI players. Additionally, creating space for smaller stakeholders, such as those focused on foundation models, within industry coalitions or standards organizations would better represent their unique needs.
However, the reliance on large corporations like Microsoft, OpenAI, and Meta to shape regulations poses risks. It could lead to regulatory capture, where policies are influenced by established industry players to favor their interests. This might inadvertently stifle competition and hinder innovation, leaving users without a say in governance.
AI Now, an organization focused on AI ethics, has raised these concerns in a report, arguing that regulators and the public should lead AI discussions rather than letting companies drive the narrative.
Experts like Beena Ammanath of the Global Deloitte AI Institute stress the importance of involving a diverse range of stakeholders, including non-corporate entities, to foster trust in AI technology. The engagement of NGOs, academics, international bodies, and policy experts is crucial. As lawmakers continue deliberating AI regulation, there remains an opportunity to broaden the conversation and prioritize the public interest in shaping responsible AI adoption and ethical standards.
Do we need strict or loose regulation?
You wake up to a perfect morning, free of constant notifications. This is the freedom you’ve always craved: you work out, go for a walk, or simply enjoy spending time with your loved ones without interruption. How did you do it? AI. Crack the AI code today with AI.Simple’s free newsletter.
Cool AI Tools
🔗 AI.Simple: Free newsletter digest to help grow your biz with AI.
🔗 MindPal: Dump all your files and chat with them; a real second brain.
🔗 PodcastGPT: Instant insights from podcasts.
🔗 GPT Web App Generator: Combines GPT with the Wasp full-stack framework to swiftly create versatile web applications.
And now your moment of zen
More: "I am ChatGPT programmer"
That’s all for today, folks!
If you’re enjoying neonpulse, we would really appreciate it if you would consider sharing our newsletter with a friend by sending them this link:
https://neonpulse.beehiiv.com/subscribe?ref=PLACEHOLDER