De Kraker: "If OpenAI is right on the verge of AGI, why do prominent people keep leaving?"
The moves have led some to wonder just how close OpenAI is to a long-rumored breakthrough in some form of reasoning artificial intelligence.
Same. I don't even understand what the underpinnings of AGI are supposed to look like.

(In reply to: "Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled-up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is 'it's not.'")
The point in the article is that these people have a large stake in OpenAI and would benefit massively if OpenAI cracks AGI.

(In reply to: "Because money?")
OpenAI and others have had a long pattern of raising funds, after which the folks not in the immediate founders' circle of big money move on to other companies raising funds, where they can get in on that money and work on their own projects.
With all the money going into AI companies, it makes sense that employment is very fluid, breakthroughs or not.
Here's a really interesting discussion of exactly that subject:

(In reply to: "Having followed AI off-and-on since the Prolog/Society of Mind days [...] perhaps the answer is 'it's not.'")
A lot of computer science and cognitive science academics also say "it's not."

(In reply to: "Having followed AI off-and-on since the Prolog/Society of Mind days [...] perhaps the answer is 'it's not.'")
The real "safety" concerns are mostly about totally fucking up our society with fake generated content on social media and elsewhere. Which is quite bad enough for people to have some real concern over the fucking sociopathic techbros making it. Remember "Radio Rwanda"? A fucking genocide was basically incited by a fucking radio station.

(In reply to: "If OpenAI is nowhere near AGI, then it seems all the 'safety' concerns are a bit unfounded. [...]")
ChatGPT-3.5 = 175 billion parameters, according to public information
Different studies give slightly varying numbers for the human brain, but it's on the order of 1,000x more: from 0.1 to 0.15 quadrillion synaptic connections. Source: https://www.scientificamerican.com/article/100-trillion-connections/ (among others)
While it's likely to require something more than just scaling up the model size, I thought this gives some sense of the scale involved. I agree with you; perhaps the answer is "it's not" scaling.
(In reply to: "Having followed AI off-and-on since the Prolog/Society of Mind days [...] perhaps the answer is 'it's not.'")
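Taking those figures at face value, here is a quick back-of-the-envelope check of the ratio (treating a parameter and a synapse as loosely comparable, which replies further down dispute; both numbers are rough public estimates):

```python
# Rough scale comparison using the publicly reported figures quoted above.
gpt35_parameters = 175e9        # ~175 billion parameters (reported for GPT-3.5)
human_synapses_low = 0.10e15    # ~0.10 quadrillion synaptic connections
human_synapses_high = 0.15e15   # ~0.15 quadrillion synaptic connections

print(f"low estimate:  ~{human_synapses_low / gpt35_parameters:.0f}x")   # ~571x
print(f"high estimate: ~{human_synapses_high / gpt35_parameters:.0f}x")  # ~857x
```

So "on the order of 1,000x" is in the right neighborhood, if you accept the parameter-to-synapse framing at all.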
Microsoft CTO Kevin Scott has countered these claims, saying that LLM "scaling laws" (which suggest LLMs increase in capability in proportion to the compute thrown at them) will continue to deliver improvements over time.
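For context, the "scaling laws" being referenced are usually stated as a power law: loss falls smoothly as compute (or parameters, or data) grows. A minimal sketch of that shape, with a made-up exponent and constant rather than the coefficients from any published paper:

```python
# Illustrative power-law scaling curve: loss decreases as compute grows,
# with diminishing returns. The constants below are placeholders, not
# fitted values from any published scaling-law paper.
def illustrative_loss(compute_flops: float, alpha: float = 0.05, c: float = 10.0) -> float:
    """Toy L(C) = c * C^(-alpha)."""
    return c * compute_flops ** -alpha

for flops in (1e21, 1e22, 1e23, 1e24):
    print(f"{flops:.0e} FLOPs -> loss ~ {illustrative_loss(flops):.2f}")
```

Whether capability keeps tracking a curve like that is exactly the point under dispute in this thread.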
I doubt they're leaving because of copyright infringement; if that were the case, they wouldn't be moving to Anthropic. They're leaving because (and this is a vague recollection of some articles I read online, so I might be incorrect here) Altman seems to have unrealistic goals. Could also be that those heading out don't want to be tied to any copyright suits. Or, as the second commenter said, money.
Or, they only went there to learn enough about how OpenAI is doing their magic and want to go chase those sweet VC dollars with their own startup.
Basically it is induction from the emergent behaviour and test performance already seen from simply scaling (more data and more parameters). Many AI researchers are skeptical, but on the other hand the progress seen so far has been pretty shocking. Most AI researchers think it will, at a minimum, have to be some combination of LLM+search, LLM+symbolic reasoning, or LLM+planner, and more likely still more complex designs; plenty believe that additional breakthroughs are needed.
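As a rough illustration of what "LLM+search" can mean in its simplest form, here is a toy best-of-N loop around a stand-in model; `propose` and `score` are hypothetical placeholders (for an LLM sampler and a verifier or planner cost), not any real API:

```python
import random

def propose(prompt: str, n: int) -> list[str]:
    """Stand-in for sampling n candidate answers from an LLM."""
    return [f"candidate answer {i} to: {prompt}" for i in range(n)]

def score(answer: str) -> float:
    """Stand-in for a verifier: a symbolic checker, unit tests, a planner cost, etc."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Search in its simplest form: generate many candidates, keep the best-scoring one."""
    candidates = propose(prompt, n)
    return max(candidates, key=score)

print(best_of_n("prove the claim above"))
```

The more elaborate hybrids the comment mentions (tree search, symbolic reasoning, planners) replace the random scorer with something that actually checks the candidate.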
Yeah, but we'll definitely have Full Self Driving next year.

(In reply to: "Meanwhile the rest of us over here in reality have long since known the truth: OpenAI and everyone else are blowing steam when they say they're 'close to AGI.'")
Schulman's parting remarks, quoted in the last paragraph of the article, said (emphasis mine):

"Despite the departures, Schulman expressed optimism about OpenAI's future in his farewell note on X. 'I am confident that OpenAI and the teams I was part of will continue to thrive without me,' he wrote. 'I'm incredibly grateful for the opportunity to participate in such an important part of history and I'm proud of what we've achieved together. I'll still be rooting for you all, even while working elsewhere.'"
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled-up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not."
But blockchain, cryptocoins, and NFTs have totally revolutionized databases, financial transactions, and ownership of digital apes. Totally.
This seems to assume an enduring contribution and relevance that I’m not confident OpenAI and its competitors will achieve…you bros may just be churning VC bucks, at the end of the day.
This makes a ton of assumptions that are just that, assumptions:
A) That a parameter in a neural network is equivalent to a synaptic connection.
B) That synaptic connections are the most efficient way to do "intelligence," so that they can be used as a benchmark to aim for.
C) That learning as the brain does it is the most efficient way for it to be done with computers.
That is the thing that bothers me the most. Sure, these are "multimodal," but we are still fundamentally iterating on LLMs here. At best we can do some tricks here or there where it can do other things, but that doesn't make it an AGI.

(In reply to: "Having followed AI off-and-on since the Prolog/Society of Mind days [...] perhaps the answer is 'it's not.'")
That is true. But that is a slightly more nuanced question. Does infallibility define AGI? It will certainly constrain applications in the coming years, but does that invalidate the “sci-fi fantasies?” It’s not like humans are infallible, and yet we are dangerously intelligent.
Congrats! You are now more qualified to be an "AI researcher" than most of the VC tech bros!

(In reply to: "Having followed AI off-and-on since the Prolog/Society of Mind days [...] perhaps the answer is 'it's not.'")
Natural selection and genetics are completely different processes from learning. Natural selection took billions of years to produce intelligence; can OpenAI wait that long?

(In reply to: "First of all, 'guess and check' is how natural selection got us here. The simplicity of the algorithm does not make it ineffective. Second of all, that is not at all what GD is. GD is the opposite of guessing. You literally calculate the favorable mutations.")
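For what it's worth, the "calculate the favorable mutations" point in that quoted comment is about the gradient: rather than trying random perturbations and keeping whatever happens to work, gradient descent computes the direction that reduces the loss. A minimal one-dimensional sketch (a toy example, nothing to do with how any actual model is trained):

```python
# Toy gradient descent on f(w) = (w - 3)^2.
# Instead of guess-and-check over random values of w, we compute the
# derivative and step in the direction that lowers the loss.
def loss(w: float) -> float:
    return (w - 3.0) ** 2

def grad(w: float) -> float:
    return 2.0 * (w - 3.0)  # analytic derivative of the loss

w = 0.0
learning_rate = 0.1
for step in range(50):
    w -= learning_rate * grad(w)

print(f"w after 50 steps: {w:.4f}")  # converges toward the minimum at 3.0
```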
That's kind of the thing, right? If the people investing in AGI were interested in simple 'intelligence' systems, we've already got robotic systems that are fully capable of replacing people for nearly all mundane and mechanical tasks. But they're 'single-use' systems. They're of no use outside of the specific task for which they're created, and each one constitutes a considerable investment to design, build, test, and deploy. If all they wanted were 'worker ant' AI systems, they've already got that... along with all the limitations and inflexibility that comes with it. AGI is the promise of a single 'tool' for all tasks. A 'multi-tool,' if you will. Which necessitates a 'tool' that's at least as complex as what it'd be replacing -- people. And these AI prototype 'tools' are nowhere near that level of complexity or capability.

(In reply to: "There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while we can't do an apples to apples comparison between parameters and synapses - how does your math work there?")
I think what you are missing is that, no matter how true that is, it won't open the pockets of investors ;-)

(In reply to: "Same. I don't even understand what the underpinnings of AGI are supposed to look like.")
I work with and get LLMs to some extent, but more LLM is still just more math, vectors ... output. More LLM is just more LLM: not necessarily magically different, and not necessarily with any fewer fundamental shortcomings.
How convenient for you that you don't have to present a counterargument to the points you vaguely decry as "ignorance." Must be nice to be so obviously smart and correct that you are relieved of the trouble involved in having to demonstrate any of the things you assert.

(In reply to: "Yes please. Let's endlessly speculate on what little information we have (and apparently even less knowledge of AI).")
The ignorance in this forum about how AI and biological intelligence works is off the charts.
Most of the time the highest voted comments are so wrong/ignorant that it has become an exercise in futility to debunk them.
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI. Purely anecdotal, but from daily use of Claude and ChatGPT, I don't find Claude to be any more safe and secure in its output than ChatGPT.
The nematode C. elegans has 302 neurons, lives in the wild feeding on certain bacteria, and displays waking and sleep-like states. I wonder how an LLM with 302 parameters would function.

(In reply to: "There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while I don't think we can do an apples to apples comparison between parameters and synapses - you did that comparison. So how does your math work there?")
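For the numbers being traded in this exchange, the raw ratios look like this (taking the quoted figures at face value; the GPT-4 parameter count is a rumor, not a confirmed number, and whether parameters, synapses, and neurons are comparable at all is the actual argument):

```python
# Ratios implied by the figures quoted above; all are rough estimates.
human_synapses = 100e12      # ~100 trillion synapses
gpt4_parameters = 1.76e12    # ~1.76 trillion parameters (rumored, unconfirmed)
c_elegans_neurons = 302      # fully mapped nematode nervous system

print(f"human synapses / GPT-4 parameters: ~{human_synapses / gpt4_parameters:.0f}x")        # ~57x
print(f"GPT-4 parameters / C. elegans neurons: ~{gpt4_parameters / c_elegans_neurons:.1e}x")  # ~5.8e9x
```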