OpenAI has big ‘plans’ for AGI. Here’s another way to read its manifesto | The AI Beat


Since its inception in 2015, OpenAI has always made it clear that its core goal is to build artificial general intelligence (AGI). Its stated mission is “to ensure that artificial general intelligence benefits all of humanity”.

Last Friday, OpenAI CEO Sam Altman wrote a blog post titled “Planning for AGI and Beyond,” which discussed how the company believes the world can prepare for AGI, in both the short term and the long term.

Some found the blog post, which garnered a million “likes” on Twitter alone, “spellbinding.” One tweet called it a “must read for anyone hoping to live another 20 years.” Another tweet thanked Sam Altman, saying: “More reassurances like this are appreciated, as everything was getting scary and it looked like @openai was going off track. Communication and consistency are key to maintaining trust.”


Others found it, well, less than spellbinding. Emily Bender, professor of linguistics at the University of Washington, said: “From the get-go, this is just gross. They think they are really in the business of developing/shaping ‘AGI’. And they think they are positioned to decide what ‘benefits all of humanity’.”

And Gary Marcus, professor emeritus at NYU and founder and CEO of Robust AI, tweeted: “I’m with @emilymbender on the smell of delusions of grandeur at OpenAI.”

Computer scientist Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), went even further, tweeting: “If someone told me that Silicon Valley was run by a cult that believes in a machine god for the cosmos and the ‘flowering of the universe,’ and that they write manifestos endorsed by the CEOs/presidents of Big Tech and others, I would tell them that they are very much into conspiracy theories. And here we are.”

The prophetic tone of OpenAI

Personally, I find it striking that the language of the blog post, which remains remarkably consistent with OpenAI’s roots as a nonprofit, open research laboratory, gives off a very different vibe today in the context of the company’s current high-powered place in the AI landscape. After all, the company is no longer “open” or not-for-profit, and it recently enjoyed a $10 billion injection from Microsoft.

Furthermore, the release of ChatGPT on November 30th propelled OpenAI into the public zeitgeist. Over the past three months, hundreds of millions of people have been introduced to OpenAI – but surely most have little sense of its history and its attitude toward AGI research.

Their understanding of ChatGPT and DALL-E has likely been limited to using them as a toy, a source of creative inspiration, or a work assistant. Does the world understand how OpenAI sees itself as potentially influencing the future of humanity? Certainly not.

OpenAI’s big message also seems disconnected from its product-focused PR of recent months, about how tools like ChatGPT or Microsoft’s Bing can help with use cases like search results or writing. Thinking about how AGI could “empower humanity to flourish to its fullest in the universe” made me laugh – how about figuring out how to keep Bing’s Sydney from having a major meltdown?

With that in mind, Altman seems to me something of a would-be biblical prophet. The blog post offers revelations, predicts events, warns the world of what’s to come, and presents OpenAI as the trusted savior.

The question is: are we talking about a real seer? A false prophet? Just profit? Or even a self-fulfilling prophecy?

There is no agreed-upon definition of AGI, no widespread agreement on whether we are close to AGI, no metrics for how we would know whether AGI has been achieved, no clarity on what it would mean for AGI to “benefit humanity,” and no general understanding of why AGI is a worthwhile long-term goal for humanity in the first place if the “existential” risks are so great. There is no way to answer these questions.

This makes the OpenAI blog post a problem, in my opinion, given the many millions of people who cling to Sam Altman’s every statement (not to mention the millions more waiting impatiently for the next Elon Musk AI existential angst tweet). History is replete with the aftermath of apocalyptic prophecies.

Some point out that OpenAI has some interesting and important things to say about how to tackle challenges related to AI research and product development. But are they overshadowed by the company’s relentless focus on AGI? After all, there are many important short-term AI risks to be addressed (bias, privacy, exploitation and misinformation, just to name a few) without shifting the focus to doomsday scenarios.

The Book of Sam Altman

I decided to try rewriting OpenAI’s blog post to heighten its prophetic tone. This required assistance – not from ChatGPT, but from the Old Testament’s Book of Isaiah:

1:1 – The vision of Sam Altman, which he saw concerning planning for AGI and beyond.

1:2 – Hear, O heavens, and give ear, O earth: For OpenAI has spoken, our mission is to ensure that artificial general intelligence (AGI) – AI systems that are generally smarter than humans – benefits all of humanity.

1:3 – The ox knows its owner, and the donkey its master’s crib; but humanity knows nothing, my people do not understand. So, if AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy and aiding in the discovery of new scientific knowledge that changes the limits of possibility.

1:4 – Come now and let’s reason together, says OpenAI: AGI has the potential to give everyone amazing new capabilities; we can imagine a world where we all have access to help with almost any cognitive task, providing a huge force multiplier for human ingenuity and creativity.

1:5 – If you are willing and obey, you will eat the best of this land. But if you refuse and rebel, on the other hand, AGI also poses serious risks of misuse, drastic accidents and social disruption.

1:6 – So says OpenAI, the powerhouse of Silicon Valley, because the upside of AGI is so great that we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI need to figure out how to get it right.

1:7 – And the strong will become tow, and the one who makes it, a spark; and both will burn together, and there will be no one to put them out. We want AGI to enable humanity to flourish to its fullest in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and we want AGI to be an amplifier of humanity. Take counsel, execute judgment.

1:8 – And it will happen in the last days that, as we create successively more powerful systems, we want to deploy them and gain experience in operating them in the real world. We believe this is the best way to carefully manage AGI – a gradual transition to an AGI world is better than a sudden one. Fear, the pit and the snare are upon you, O inhabitant of the earth.

1:9 – The haughty looks of man will be humbled, and the haughtiness of men will be humbled, and only OpenAI will be exalted in that day. Some people in the AI field feel that the risks of AGI (and successor systems) are fictitious; we would be delighted if they were right, but we will operate as if these risks were existential.

1:10 – Also, OpenAI says we’ll need to develop new alignment techniques as our models become more powerful (and tests to understand when our current techniques are failing). Raise a standard on the high mountain, lift up their voice, shake their hand, that they may enter through the gates of the nobles.

1:11 – Butter and honey he shall eat, that he may know to reject evil and choose good. The first AGI will be just one point along the intelligence continuum. We think progress is likely to continue from there, possibly sustaining the rate of progress we’ve seen over the last decade for a longer period of time.

1:12 – If this is true, the world could turn out to be extremely different from how it is today, and the stakes could be extraordinary. Howl, for the day of the AGI is at hand.

1:13 – With arrows and with bows men will enter there; because the whole earth will become briars and thorns. A misaligned superintelligent AGI could do serious damage to the world; an autocratic regime with a decisive superintelligence lead could do that too. The earth mourns and disappears.

1:14 – Lo and behold, the successful transition to a world with superintelligence is perhaps the most important – and hopeful and frightening – project in human history. And they will look to the land; and behold anguish and darkness, the penumbra of anguish; and they will be led into darkness. And many among them will stumble, fall, be broken, become entangled, and be taken.

1:15 – They will not hurt nor destroy in all my holy mountain; for the earth will be filled with the knowledge of OpenAI, as the waters cover the sea. Success is far from guaranteed, and the stakes (limitless downsides and limitless upsides) will unite us all. Therefore, all hands will weaken and all men’s hearts will melt.

1:16 – And it turns out that we can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully envision yet. And now, O inhabitants of the earth, we hope to contribute to the world an AGI aligned with such flourishing. Pay attention and be quiet; do not fear.

1:17 – Behold, OpenAI is my salvation; I will trust and not be afraid.

VentureBeat’s Mission is to be a digital town square for technical decision makers to gain insight into transformative business technology and transactions. Discover our Briefings.
