Major shifts at OpenAI spark skepticism about impending AGI timelines


50me12

Ars Tribunus Angusticlavius
7,064
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Same. I don't even understand what the underpinnings of AGI are supposed to look like.

I work with and get LLMs to some extent, but more LLM is still just more math, vectors ... output. More LLM is just more LLM, not necessarily magically different, and with no fewer fundamental shortcomings.
 
Upvote
173 (181 / -8)

Gibborim

Ars Tribunus Militum
1,675
Because money?

OpenAI and others have had a long pattern of raising funds, and then the folks not in the immediate founders' circle of big money ... move on to other companies raising funds, where they can get in on that money and work on their own projects.

With all the money going into AI companies, it makes sense that employment is very fluid, breakthroughs or not.
The point in the article is that these people have a large stake in OpenAI and would benefit massively if OpenAI cracks AGI.
 
Upvote
153 (155 / -2)

jasonmicron

Ars Scholae Palatinae
1,450
Subscriptor++
They're leaving because (and this is a vague recollection of some articles I read online, so I might be incorrect here) Altman seems to have unrealistic goals. It could also be that those heading out don't want to be tied to any copyright suits. Or, as the 2nd commenter said, money.

Or, they only went there to learn enough about how OpenAI is doing their magic and want to go chase those sweet VC dollars with their own startup.
 
Upvote
-19 (20 / -39)

volcano.authors

Smack-Fu Master, in training
78
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
ChatGPT-3.5 = 175 billion parameters, according to public information

Different studies have slightly varying numbers for a human brain, but it's 1000x more: from 0.1 to 0.15 quadrillion synaptic connections. Source: https://www.scientificamerican.com/article/100-trillion-connections/ (among others)

While it's likely to require something more than just scaling up the model size, I thought this gives some clue about scale. I agree with you, perhaps the answer is "it's not" scaling.
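
For a rough sense of the gap, the arithmetic works out as below (a back-of-envelope sketch using the figures quoted above; equating one NN parameter with one synapse is itself a big assumption):

```python
# Back-of-envelope comparison using the figures quoted above.
# Big assumption: one NN parameter ~ one synaptic connection.
gpt35_parameters = 175e9       # ~175 billion, per public reporting
brain_synapses_low = 100e12    # ~0.1 quadrillion synapses (low end of estimate)
brain_synapses_high = 150e12   # ~0.15 quadrillion synapses (high end)

print(brain_synapses_low / gpt35_parameters)   # ~571x
print(brain_synapses_high / gpt35_parameters)  # ~857x
```

So a bit under the 1000x figure, but the same ballpark: hundreds of times more connections than GPT-3.5 has parameters.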
 
Upvote
64 (75 / -11)

bsplosion

Ars Praetorian
475
Subscriptor++
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Here's a really interesting discussion of exactly that subject:

The short answer appears to be "maybe, but probably not, and possibly never due to data, compute, and other constraints".
 
Upvote
117 (117 / 0)

kinpin

Ars Tribunus Militum
1,519
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI. Purely anecdotal, but from daily use of Claude and ChatGPT, I don't find Claude to be any more safe and secure in its output than ChatGPT.
 
Upvote
50 (59 / -9)

kinpin

Ars Tribunus Militum
1,519
They're leaving because (and this is a vague recollection of some articles I read online, so I might be incorrect here) Altman seems to have unrealistic goals. It could also be that those heading out don't want to be tied to any copyright suits. Or, as the 2nd commenter said, money.

Or, they only went there to learn enough about how OpenAI is doing their magic and want to go chase those sweet VC dollars with their own startup.
I doubt they're leaving because of copyright infringement; if that were the case, they wouldn't be moving to Anthropic.

Anthropic fires back at music publishers' AI copyright lawsuit | Reuters

RIAA Backs AI Copyright Lawsuit Against Anthropic, Sees Similarities with Napster * TorrentFreak
 
Upvote
47 (47 / 0)

TylerH

Ars Praefectus
3,191
Subscriptor
The moves have led some to wonder just how close OpenAI is to a long-rumored breakthrough of some kind of reasoning artificial intelligence

Meanwhile the rest of us over here in reality have long since known the truth: OpenAI and everyone else are blowing steam when they say they're "Close to AGI".
 
Upvote
191 (193 / -2)

Longmile149

Ars Scholae Palatinae
2,347
Subscriptor
I kinda wonder if Altman is just running with the guardrails off after coming up roses in the wake of the board debacle earlier this year.

A big part of what they claimed to be unhappy about was that he’s conniving, dishonest, and sows division. What are the odds that without some moderating influences he’s just making OpenAI a shittier place to work and the people most able to walk are moving on before it gets properly bad?
 
Upvote
135 (136 / -1)

tuple

Wise, Aged Ars Veteran
178
Same. I don't even understand what the underpinnings of AGI are supposed to look like.

I work with and get LLMs to some extent, but more LLM is still just more math, vectors ... output. More LLM is just more LLM, not necessarily magically different, and with no fewer fundamental shortcomings.
I think that what you are missing is that no matter how true that is, it won't open the pockets of investors ;-)
 
Upvote
19 (20 / -1)

Ceedave

Ars Praetorian
560
Subscriptor
Schulman’s parting remarks quoted in the last paragraph of the article said:
Despite the departures, Schulman expressed optimism about OpenAI's future in his farewell note on X. "I am confident that OpenAI and the teams I was part of will continue to thrive without me," he wrote. "I'm incredibly grateful for the opportunity to participate in such an important part of history and I'm proud of what we've achieved together. I'll still be rooting for you all, even while working elsewhere."
(Emphasis mine)
This seems to assume an enduring contribution and relevance that I’m not confident OpenAI and its competitors will achieve…you bros may just be churning VC bucks, at the end of the day.
 
Upvote
44 (44 / 0)

L0neW0lf

Ars Tribunus Militum
1,947
I've already been using Anthropic's free version of Claude.ai for some time, and plan on continuing to do so if and when I actually need an AI. I found that it works as well as ChatGPT for what I've used it for (shaping MySQL queries primarily, sometimes formatting information into a summary), and Anthropic supposedly had the goal of being more ethical.

Both AIs, I find, are like I, Robot: "My responses are limited; you must ask the right questions." At least for best results. If you're good at writing technical documentation, it's a plus for working with AI.
 
Upvote
14 (16 / -2)
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
A lot of computer science and cognitive science academics also say "it's not".

The core algorithm in current ML is some kind of 'stochastic gradient descent'. That basically means repeatedly nudging the network's connection weights in whatever direction makes its answers a bit less wrong, until it gives pretty good answers for questions in the training set.
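
To make that concrete, here is a toy sketch of what stochastic gradient descent does; the single-weight model, toy data, and learning rate are made up purely for illustration and have nothing like the scale of real LLM training:

```python
import random

# Toy "training set": we want the model to learn y = 3 * x.
data = [(x, 3.0 * x) for x in range(1, 6)]

w = 0.0     # single weight, starting from a bad value
lr = 0.01   # learning rate (step size)

for step in range(1000):
    x, y = random.choice(data)   # "stochastic": one random example per step
    error = w * x - y            # how wrong the current weight is
    grad = 2 * error * x         # gradient of squared error w.r.t. w
    w -= lr * grad               # nudge the weight downhill

print(w)  # converges toward ~3.0
```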

Unlike a biological mind, it never takes the training data as an input, and, at best, can do a good job producing plausible results for prompts that are within, or lie close to (in a high-dimensional vector space), the training data. The network does not have an inherent ability to generalize beyond that, or to model the world and reason about that model internally.

Current approaches simply cannot ever be 'generally intelligent', because there isn't enough training data, or enough atoms in the universe, to make a computer that could work like that.

Edit: and most of the people who say otherwise are drawing a salary, or collecting VC investment, that requires keeping this hype bubble growing.
 
Upvote
90 (102 / -12)

Hispalensis

Ars Tribunus Militum
1,649
Subscriptor
Microsoft CTO Kevin Scott has countered these claims, saying that LLM "scaling laws" (that suggest LLMs increase in capability proportionate to more compute power thrown at them) will continue to deliver improvements over time

Scaling laws don't matter if you are running out of training data, as some researchers suggest (like this recent paper on arxiv https://arxiv.org/html/2211.04325v2).
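
For context, the scaling laws being cited are usually written as a power law in model size and training tokens; a rough sketch of that functional form is below (the constants are illustrative placeholders, not fitted values from any particular paper):

```python
# Illustrative power law of the "scaling laws" kind: loss falls as a power
# of parameter count N and training tokens D. Constants are placeholders.
def predicted_loss(N, D, E=1.7, A=400.0, B=400.0, alpha=0.34, beta=0.28):
    return E + A / N**alpha + B / D**beta

# If the data term D is capped (you've run out of text), adding parameters
# keeps shrinking only the A / N**alpha term, with diminishing returns.
print(predicted_loss(N=70e9, D=1.4e12))    # ~1.94
print(predicted_loss(N=700e9, D=1.4e12))   # ~1.90 (10x the model, same data)
```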
 
Upvote
52 (53 / -1)

jesse1

Ars Scholae Palatinae
624
ChatGPT-3.5 = 175 billion parameters, according to public information

Different studies have slightly varying numbers for a human brain, but it's 1000x more: from 0.1 to 0.15 quadrillion synaptic connections. Source: https://www.scientificamerican.com/article/100-trillion-connections/ (among others)

While it's likely to require something more than just scaling up the model size, I thought this gives some clue about scale. I agree with you, perhaps the answer is "it's not" scaling.

This makes a ton of assumptions that are just that: assumptions.

A) That a parameter in a NN is equivalent to a synaptic connection.

B) That synaptic connections are the most efficient way to do "intelligence," so that they could be used as a benchmark to go by.

C) That learning as the brain does it is the most efficient way for computers to learn.
 
Upvote
65 (65 / 0)

Jt21

Smack-Fu Master, in training
42
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".

Prominent researchers like Yann LeCun agree with you. It'll take more than just a scaled up LLM.
 
Upvote
35 (35 / 0)

jesse1

Ars Scholae Palatinae
624
Basically, it is induction based on the emergent behaviour and test performance already seen from simply scaling (more data and more parameters). Many AI researchers are skeptical, but on the other hand the progress already seen has been pretty shocking. Most AI researchers think at a minimum it will have to be a combination of LLM+search, LLM+symbolic reasoning, LLM+planner, or more likely more complex designs, and plenty believe that additional breakthroughs are needed.

Most AI researchers thought AGI was just some tweaks away from Random Forests when those were the cutting edge, and similarly for Bayesian Networks when they were at their peak and Clippy was popping up on screens.
 
Upvote
47 (48 / -1)

ninjonxb

Smack-Fu Master, in training
65
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
That is the thing that bothers me the most. Sure, these are "multimodal", but we are still fundamentally iterating on LLMs here. At best we can do some tricks here or there where it can do other things, but that doesn't make it an AGI.

Throw in that we know very well we are already hitting the limitations of LLMs, with very false (and sometimes dangerous) misinformation. Sometimes very publicly: see Google Search.

But there is a certain subset of people who seem convinced that an LLM is already a general-purpose AI and can do nearly anything you can describe to it in text. Which is insane and dangerous.

I am convinced that an AGI breakthrough will be its own thing (if it happens); it won't be an iteration on an LLM. At best, with an LLM we are going to get something that appears to know what it is doing (which it already does) but isn't doing any real reasoning. There are multiple papers on this. This one is one of my favorites: https://arxiv.org/abs/2406.02061
 
Upvote
28 (30 / -2)
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. [...]
The real "safety" concerns are mostly about totally fucking up our society with fake generated content on social media and elsewhere. Which is quite bad enough for people to have some real concern over the fucking sociopathic techbros making it. Remember "Radio Rwanda"? A fucking genocide was basically incited by a fucking radio station.

Now substitute that for some lone sociopath using LLMs to spam the already unhinged "social" media with enough fake news, and you have what?

...checks news...

Some of the worst violent far-right attacks in recent UK history, for example (which didn't even take an LLM to create, but you get the gist, hopefully).
 
Upvote
86 (90 / -4)
Last time I used GPT-4, I asked it to list some highly rated Japanese novels that hadn't been translated into English yet. It proceeded to list four novels that had already been translated, apologised and stated that it would correct itself, then listed another four novels that had already been translated (without acknowledgment or correction).

And people are skeptical about AGI because some employees left?
 
Upvote
62 (63 / -1)
The reality behind closed doors, out of earshot of major investors, is that while the strides these AI systems have made are impressive, the truth is that it's still a long way from AGI. Not just in terms of the software, but the hardware, too. There are multiple layers of critical breakthroughs still necessary. They've got lots of different pieces of an AGI that seem to be working reasonably well in their own way, but they still don't have the 'foundation' and 'superstructure' of AGI to which all those pieces connect. It's a bit like having solar-roofing panels, ornate molding, and elaborate tiles all laid out and ready to build something... but no blueprint, no 2x4s, no cement, etc. They've got all the 'flashy' bits but none of the real structure, none of the stuff that holds it all together.
 
Upvote
21 (22 / -1)

ibad

Ars Praefectus
3,629
Subscriptor
Because Sama and some others have lied to investors that they could achieve human-like intelligence within 5 years, mostly by scaling up feed-forward neural networks and the data used to train them. At the very least, he promised them that hallucinations and basic reliable reasoning would be "done" pretty soon.

In reality, these things will take 10+ years and will require new architectures like IJEPA or others. Probably many more fundamental advances are required.

Any investors that can wait for 10+ years will be OK, as long as the AI company they invested in survives and moves on to the next thing successfully. Many will take a soaking when the LLM/VLM bubble pops (gonna happen within 5 years, maybe a lot sooner).

EDIT: And some people don't want to be there when it starts raining.
 
Last edited:
Upvote
30 (30 / 0)