🤖 Identity fraud, SlackGPT, and LLaMA's on the loose!

2/8/23

Good morning and welcome to the latest edition of neonpulse!

Here's what we have for you today:

  • ChatGPT is coming to Slack

  • The future of identity fraud

  • Meta’s new LLaMA escapes!

SlackGPT

Slack has announced a new ChatGPT integration this week, taking us one step closer to the glorious day when you never have to talk to your coworkers again!

The new integration will let you create summaries of threads to stay up to date with company news, and will let you quickly draft responses to your colleagues inside the app.

The news follows Slack's parent company, Salesforce, introducing its “Einstein GPT” functionality into its main CRM product, allowing users to generate customer emails and craft responses to customer questions.

A.I. identity fraud

As A.I. continues to evolve, identity fraud is going to reach levels that we can’t even imagine.

Free services currently available give even non-technical people the ability to create photorealistic digital reproductions of any person. Combine those with services that can clone a person's voice from as little as a 3-second recording, and it's only a matter of time before digital clones become indistinguishable from the real thing.

But a new startup co-founded by OpenAI's Sam Altman is looking to solve this problem by developing the next generation of identity verification, something CEO Alex Blania sees as essential given the rapid progression of A.I. tech.

Speaking about current identity verification technologies, Blania said that he “fundamentally believes they’re just going to get ripped apart by the next generation of large language models like ChatGPT that are going to come out over the next 12 to 24 months.”

“Neither digital content nor intelligence will be good enough to discriminate who is or isn’t human anymore. You will need something that bridges to the physical world, as everything else will break.”

Worldcoin Iris Scanner

To create this bridge to the physical world, Worldcoin uses iris scans to verify a user's identity, generating a unique cryptographic code that is then used to confirm the user's identity during future transactions.
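As a loose illustration only (not Worldcoin's actual protocol, and glossing over the fact that real biometric matching is fuzzy rather than exact), the core idea of hash-based identity verification can be sketched in a few lines of Python: at enrollment you store a one-way hash of the biometric template, never the raw scan, and later scans are checked against that stored hash.

```python
import hashlib

def enroll(iris_template: bytes) -> str:
    # Store only a one-way hash of the biometric template,
    # never the raw scan itself.
    return hashlib.sha256(iris_template).hexdigest()

def verify(iris_template: bytes, stored_hash: str) -> bool:
    # A later scan matches if its hash equals the enrolled one.
    return hashlib.sha256(iris_template).hexdigest() == stored_hash

# Hypothetical templates for illustration:
record = enroll(b"example-iris-template")
print(verify(b"example-iris-template", record))  # True
print(verify(b"someone-else", record))           # False
```

Because only the hash is retained, the record is useful for verification but can't be reversed to recover the original scan, which is the privacy property this kind of scheme is after.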

The company currently has its orb devices installed in locations around the world (“everywhere from universities in Kenya to shopping malls in Lisbon,” said Blania), and while the company currently boasts 1.3 million users, Worldcoin has a long way to go to achieve Blania’s goal of becoming the world’s largest identity verification system.

Yet Blania’s ambitions extend far beyond identity verification: he plans to make Worldcoin the “largest financial system globally, and to make it fully privacy preserving and inclusive.”

The basics of achieving that goal would include creating its own global currency (presumably called the Worldcoin) along with an app that allows users to make purchases and send transfers to other users.

One potential use case for the technology would be for the distribution of universal basic income, an economic idea that has been tested in small experiments yet may become necessary for larger portions of the population as A.I. begins to replace human workers.

Worldcoin is backed by $125 million in venture funding, attracting capital from top investment firms including Andreessen Horowitz, Variant, Khosla Ventures, Coinbase and Tiger Global.

You can learn more about Worldcoin’s mission on their site here.

LLaMA on the loose

Meta’s brand new large language model, previously distributed only to approved government organizations and researchers, has leaked online and is now available for everyone to download.

And while some people are concerned:

“Get ready for loads of personalized spam and phishing attempts… open sourcing these models was a terrible idea” tweeted cybersecurity researcher Jeffrey Ladish.

Others are significantly less worried:

“Most people don’t own the hardware required to run the largest version of LLaMA at all, let alone efficiently,” said Stella Biderman, a machine learning researcher at Booz Allen Hamilton.

So where exactly does the truth lie?

Well, according to A.I. researcher Matthew Di Ferrante, “anyone familiar with setting up servers and dev environments for complex projects” should be able to get LLaMA operational “given enough time and proper instructions.” That makes the leak sound more like an opportunity for A.I. researchers to play with the latest version of Meta’s technology than a Terminator-style hacker tool coming to destroy the world.

One benefit of the leak is that open access to the large language model will help democratize A.I. systems, with Di Ferrante saying that more people having access to this technology “is a net good since it prevents us getting into some monopoly situation where companies like OpenAI are the only entities capable of serving complex AI models.”

Meta is actively filing takedown requests against copies of the model online, but the effort may prove fruitless, as it appears the genie is out of the bottle.

And now your moment of zen

That’s all for today folks!

If you’re enjoying Neon Pulse, we would really appreciate it if you would consider sharing our newsletter with a friend by sending them this link:

And if you’re looking for past newsletters, you can find them all here.

Want to advertise your products in front of thousands of AI investors, developers, and enthusiasts?