OpenAGI, or why we shouldn't trust OpenAI to protect us from the Singularity

OpenAI just dropped a pretty remarkable blog post on their roadmap for not destroying civilization with their imminent artificial general intelligence (AGI):

"As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like."

Now, I'm around 98% sure that OpenAI mostly answers the question: what if we allocated unlimited resources to building a better auto-complete? ChatGPT is an amazing tool, but what it's amazing at is guessing which word (token) is likely to appear next. Quite possibly the blog post is just an exercise in anchoring: if they're 95% of the way to AGI, then GPT-4 must be pretty amazing and therefore worth a lot of money. If everyone realized that they're more like 2% of the way there, and that the next 1% is going to be exponentially harder, then some of the froth would blow off.
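To make the "better auto-complete" point concrete, here's a deliberately tiny sketch of next-token prediction: a bigram model that counts which word follows which in a toy corpus and always picks the most frequent successor. GPT models do something vastly more sophisticated (learned probabilities over tokens, conditioned on long contexts), but the underlying task is the same shape. The corpus and function names here are illustrative, not from any real system.

```python
from collections import Counter, defaultdict

# Toy training corpus for a bigram "language model".
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed successor of `word`, or None."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once -> "cat"
```

A real LLM replaces the frequency table with a neural network and samples from a probability distribution rather than always taking the top candidate, but "guess the next token" is still the objective.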

But what if they really are close to the singularity? After all, we have no idea what causes non-artificial intelligence.

Their ideas for keeping us safe are a little disturbing:

"We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important."

Given the lack of transparency around the inner workings of ML models, and the lack of knowledge around what intelligence even looks like, this is a pretty risible idea. And:

"Finally, we think it’s important that major world governments have insight about training runs above a certain scale."

We are facing down the prospect of a second Trump term while the UK has a Prime Minister who thinks that a homeless person might be 'in business'.

The most concerning part for me is:

"...we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access."

Creating AGI would be an amazing and terrifying accomplishment. Treating it as a slave feels like the most surefire way to usher in the most terrifying possible consequences, for us and for the AGIs.

Full disclosure: I use OpenAI embeddings for related posts and site search. The words on this blog are my own though. I do occasionally generate a post image using Stable Diffusion, like the rather strange one above.
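For the curious, the related-posts trick is simple: each post gets an embedding vector, and "related" just means "nearest by cosine similarity". The sketch below uses made-up 3-dimensional vectors and hypothetical post slugs; real embeddings from an API have hundreds or thousands of dimensions, but the ranking logic is the same.

```python
import math

# Hypothetical pre-computed embeddings for three posts (illustrative only;
# real embedding vectors would come from an API and be much longer).
post_embeddings = {
    "post-a": [0.9, 0.1, 0.0],
    "post-b": [0.8, 0.2, 0.1],
    "post-c": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def related_posts(slug, n=2):
    """Rank every other post by similarity to the given post's embedding."""
    target = post_embeddings[slug]
    others = [s for s in post_embeddings if s != slug]
    others.sort(key=lambda s: cosine_similarity(target, post_embeddings[s]),
                reverse=True)
    return others[:n]

print(related_posts("post-a"))  # post-b points the same way as post-a; post-c doesn't
```

Site search works the same way, except the query string is embedded on the fly and compared against the stored post vectors.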


(Published to the Fediverse as: OpenAGI, or why we shouldn't trust Open AI to protect us from the Singularity #etc #openai #ml What OpenAI got wrong in their blog post on AGI and how we should treat AGIs if they ever arrive. )
