Major shifts at OpenAI spark skepticism about impending AGI timelines


idspispopd

Ars Scholae Palatinae
832
This makes a ton of assumptions that are just that: assumptions.

A) That a parameter in a NN is equivalent to a synaptic connection.

B) That synaptic connections are the most efficient way to do "intelligence," so that they can be used as a benchmark to go by.

C) That learning as done by the brain is the most efficient way to do it with computers.

It is also comparing to a human brain, the most complex (or one of the most complex) brains on the planet. Biology can do AGI with far fewer synapses, as evidenced by all the less complex brains that exist.

So we are already at the point where LLMs are more complex than working biological brains when compared this way. By this alone we can see that LLMs are missing something major besides scale on the way to AGI.
 
Upvote
31 (31 / 0)
First of all, “guess and check” is how natural selection got us here. The simplicity of the algorithm does not make it ineffective.
Natural selection and genetics are completely different processes from learning. Natural selection took billions of years to produce intelligence; can OpenAI wait that long?
Second of all, that is not at all what GD is. GD is the opposite of guessing. You literally calculate the favorable mutations.

You dropped 'stochastic'. The network is initialized with random values. At no point is the training data introduced into the model; the model is simply curve-fit to the training data. The model doesn't learn, in any way that word has ever been defined. Learning is an active process that involves taking in information through the senses.

The models don't have senses and are incapable of learning. Training is something done to, or with, the model, which is an entirely latent set of data structures with no intent or agency.
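For concreteness, "curve fitting with stochastic gradient descent" looks roughly like this toy sketch (the data and learning rate are made up; training an LLM differs in scale and architecture, not in the basic loop):

```python
import numpy as np

# Toy example: fit y = w*x + b to noisy data with stochastic gradient descent.
# The "model" is just two numbers (w, b) initialized at random; the data is never
# stored in the model, it only nudges the parameters.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=200)  # ground truth: w=3.0, b=0.5

w, b = rng.normal(), rng.normal()  # random initialization
lr = 0.1                           # learning rate (made-up value)

for step in range(2000):
    i = rng.integers(len(x))       # "stochastic": one randomly chosen sample per step
    err = (w * x[i] + b) - y[i]
    w -= lr * err * x[i]           # gradient of squared error w.r.t. w
    b -= lr * err                  # gradient of squared error w.r.t. b

print(f"fitted w={w:.2f}, b={b:.2f}")  # should land near 3.0 and 0.5
```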
 
Upvote
24 (31 / -7)

hambone

Ars Praefectus
4,178
Subscriptor++
I think AGI-like technology will be stuck in the "uncanny valley" for quite a while. This is where it will be close to human-like intelligence but never quite close enough.

Which is not to say it won't be useful, just not human-like.

You see a similar sort of asymptotic relationship with fully autonomous road vehicles. It looked like it might be a solved problem back in 2018, and in a lot of ways the best systems seem to be 98% there. But that last 2% makes all the difference in the real world.
 
Upvote
22 (22 / 0)

Atterus

Ars Tribunus Militum
1,778
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Congrats! You are now more qualified to be an "AI researcher" than most of the VC tech bros!

LLMs were originally designed specifically to mimic intelligence, but ultimately based purely on the statistical likelihood of the next word, using the query as a "scaffold". Add in huge datasets and "wow," it seems smart (except for having zero uniqueness and failing at basic math, for a computer). The truth is that Altman found LLMs, was fooled, and tried to be the guy that "made AI". Flash a bleached smile at investors and voila! The media defends him and nods in agreement as he says he "invented" stuff done in '69. He gets to directly influence AI policy despite zero qualifications, and actively led a coup against actual data scientists that didn't like his unethical profiteering.

The reality is that we are nowhere near a "general" intelligence. Such a model is going to be a massive committee setup, with dedicated models designed for specific questions and governing models designed to guide dataspace expansion and align those models to an LLM that is no more than a GUI. Overall, it isn't that hard to develop. The problem is money, infrastructure, time, and overcoming the bleach-toothed children pretending their two hours with TensorFlow means anything (and getting those with the money to ignore their flashy sales pitch; the same crowd that fell for crypto, too).

I'm sorta shocked these so-called "AI" companies are chasing after people that literally have zero formal education in the matter beyond "hey! Look at what this toolbox can do!" while utterly ignoring the people who invented the underlying methods, who are screaming at everyone that they are misusing them. There are still a lot of the OG data scientists and chemometricians around who built this stuff in FORTRAN on IBM machines. Although, I did see FORTRAN got some serious updates recently... Seems like the jump from 77 to 95 all over again.
 
Upvote
25 (31 / -6)

moriad

Wise, Aged Ars Veteran
190
There are two incorrect ideas floating around here:

1) That the only thing OpenAI (or others) work on is LLMs. It isn't. They're studying various architectures and approaches, and constantly researching new ideas. Nobody is insisting that LLMs will be AGI. Also, "LLMs" take different forms. Most models today (such as for speech recognition or image synthesis) use some form of transformers and tokens and other approaches pioneered by LLMs, but aren't LLMs.

2) That predicting the next word is trivial. It's not. Predicting the next word is how many questions on standardized tests work. The problem is limitlessly hard. To predict the next word perfectly would require godlike intelligence and would compress Wikipedia to essentially zero bytes (see the sketch below). Prediction, whether of words, actions, the weather, etc., is the very essence of science and intelligence.
But as Niels Bohr said, "Prediction is very difficult, especially about the future."
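To make the prediction-is-compression point concrete: an ideal entropy coder spends about -log2(p) bits on a token the predictor assigned probability p, so better prediction literally is better compression. A minimal sketch, with purely illustrative probabilities:

```python
import math

def bits_to_encode(p: float) -> float:
    """Bits an ideal entropy coder spends on a token the model assigned probability p."""
    return -math.log2(p)

# Illustrative probabilities only: the better the next-word predictor,
# the fewer bits per word, i.e. the better the compression.
for p in (0.001, 0.1, 0.5, 0.99, 0.999999):
    print(f"p = {p:<9} -> {bits_to_encode(p):.4f} bits")
# As p approaches 1 the cost approaches 0, which is the sense in which a
# hypothetical perfect predictor would compress its source toward zero bytes.
```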
 
Upvote
11 (11 / 0)

alex_d

Ars Scholae Palatinae
1,317
You dropped 'stochastic'. The network is initialized with random values. At no point is the training data introduced into the model; the model is simply curve-fit to the training data. The model doesn't learn, in any way that word has ever been defined. Learning is an active process that involves taking in information through the senses.

The models don't have senses and are incapable of learning. Training is something done to, or with, the model, which is an entirely latent set of data structures with no intent or agency.
That’s not what stochastic refers to. And actually, the first modern deep learning models were initialized with Boltzmann machines. It just turned out not to matter.

You are picking a definition of learning to fit your argument. There actually is such a thing as online learning. It also doesn’t matter, and opens up difficulties.
 
Upvote
-13 (6 / -19)

alex_d

Ars Scholae Palatinae
1,317
I think AGI-like technology will be stuck in the "uncanny valley" for quite a while. This is where it will be close to human like intelligence but never quite close enough.

Which is not to say it won't be useful, just not human-like.

You see a similar sort of asymptotic relationship with fully autonomous road vehicles. It looked like it might be a solved problem back in 2018, and in a lot of ways the best systems seem to be 98% there. But that last 2% makes all the difference in the real world.
That is true. But that is a slightly more nuanced question. Does infallibility define AGI? It will certainly constrain applications in the coming years, but does that invalidate the “sci-fi fantasies”? It’s not like humans are infallible, and yet we are dangerously intelligent.
 
Upvote
-7 (2 / -9)

hambone

Ars Praefectus
4,178
Subscriptor++
That is true. But that is a slightly more nuanced question. Does infallibility define AGI? It will certainly constrain applications in the coming years, but does that invalidate the “sci-fi fantasies”? It’s not like humans are infallible, and yet we are dangerously intelligent.

Infallibility is an absolute, so it isn't a very useful way to measure anything.

That said, I have 52,000 km on the odometer of my car and not once have I blocked an ambulance because I'm too stupid to pull over.

:finedog:
 
Upvote
27 (28 / -1)

DaveSimmons

Ars Tribunus Angusticlavius
9,678
(Emphasis mine)
This seems to assume an enduring contribution and relevance that I’m not confident OpenAI and its competitors will achieve…you bros may just be churning VC bucks, at the end of the day.
But blockchain, cryptocoins, NFT have totally revolutionized databases, financial transactions, ownership of digital apes. Totally.

(This time will be different, trust us!)

/s
 
Upvote
35 (37 / -2)
There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while we can't do an apples-to-apples comparison between parameters and synapses - how does your math work there?
That's kind of the thing, right? If the people investing in AGI were interested in simple 'intelligence' systems, we've already got robotic systems that are fully capable of replacing people for nearly all mundane and mechanical tasks. But they're 'single-use' systems. They're of no use outside of the specific task for which they're created, and each one constitutes a considerable investment to design, build, test, and deploy. If all they wanted were 'worker ant' AI systems, they've already got that...along with all the limitations and inflexibility that comes with it. AGI is the promise of a single 'tool' for all tasks. A 'multi-tool' if you will. Which necessitates a 'tool' that's at least as complex as what it'd be replacing -- people. And these AI prototype 'tools' are nowhere near that level of complexity or capability.

And I daresay, they won't like an AGI system when/if they do get built. Because it'll be just as complex as people...with all the problems that entails. The same problems that are pushing these wealthy investors to fund the development of AGI in the first place.
 
Upvote
23 (23 / 0)
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI.
Achieving AGI, and perhaps finding out that its name is Skynet, is not the only safety concern. The more immediate concerns are that the current LLMs produce lots of false information, and that there may be no dependable way to fix that. Misinformation can sometimes be amusing, but it can also be dangerous. Imagine if a government's military made a decision based on it. Obviously they can and do, based on misinformation derived from humans, but the fear is that these LLMs could make the problem even worse than it already is.

Or suppose voters in a country like ours made decisions based on the output of an LLM, particularly if that LLM could be "poisoned" so as to produce biased information. Or a hundred other scenarios.
 
Upvote
15 (15 / 0)
The two biggest financial problems that companies working on LLMs have at the moment are lawsuits draining them and the fact that use cases are getting hard to find. Investors want the next big thing no matter what it is, but they'll notice the weak return on investment. People have always had "gold rush" behavior firmly in place. I'm not sure what the use case is for an AI that, functionally speaking, is as human as I am. If they ever get there, are we going to use it cruelly for slave work? I won't, but who will?

Humans know so little about how their brains work that even highly educated scientists and doctors have trouble understanding it. We still struggle to define sentience, life, and death. How can beings with such a severe lack of understanding of these things remake them in another form? What exactly is the point, other than scientific discovery?
 
Upvote
17 (17 / 0)
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
You mean the "genius" of a couple of crypto grifters pumping up the wow factor of image generators (which have been around for a number of years now; I remember playing with tools that created photo-realistic headshots almost a decade ago), with precisely zero explanation other than "magic" if enough compute (and money) is gathered, not only for how they might achieve what they want but even for how to define it, is turning out to be a bunch of BS?

Who'd-a-thunk
 
Upvote
17 (17 / 0)

wildsman

Ars Scholae Palatinae
664
That's kind of the thing, right? If the people investing in AGI were interested in simple 'intelligence' systems, we've already got robotic systems that are fully capable of replacing people for most mundane mechanical tasks. But they're 'single-use' systems. They're of no use outside of the specific task for which they're created, and each one constitutes a considerable investment to design, build, test, and deploy. If all they wanted were 'worker ant' AI systems, they've already got that...along with all the limitations and inflexibility that comes with it. AGI is the promise of a single 'tool' for all tasks. A 'multi-tool' if you will. Which necessitates a 'tool' that's at least as complex as what it'd be replacing -- people. And these AI prototype 'tools' are nowhere near that level of complexity or capability.

And I daresay, they won't like an AGI system when/if they do get built. Because it'll be just as complex as people...with all the problems that entails.
Well, it really depends. My personal view is that it won't really be AGI in the sense that it will be exactly human-like (because that would cause the issues you're talking about), but it will be close enough that it can actually do most of our jobs quite easily.

Again, the way I see it, a level of AGI that replaces most human cognitive tasks is probably at least a couple of decades away. Once we get humanoid robots and train them in the real world with our chores/jobs, I really don't see any reason why an AI robot can't do 90% of current human jobs in the next 50 years.
 
Upvote
-10 (3 / -13)
That said, I have 52,000 km on the odometer of my car and not once have I blocked an ambulance because I'm too stupid to pull over.
I understand this; these AI systems don't deal well with things they didn't see in training. But how many humans have had a serious accident while they were driving a car? What we really need to compare is a figure like miles per serious accident, averaged over all American (for example) drivers vs. some autonomous system. I just haven't seen that comparison.

If I'm wrong, and the comparison is out there, please point me to it.

Another interesting comparison would be drivers with a given blood alcohol level vs. these autonomous systems. There are lock-out systems that won't allow a human to drive if they breathe into a device and the device detects alcohol above a certain level; this is sometimes used for drivers who have had DUIs. Perhaps instead of preventing them from driving, the car could take over.
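The metric being asked for is just an exposure-normalized rate; the hard part is getting comparable inputs. A minimal sketch with placeholder numbers, which are not real statistics for either population:

```python
def miles_per_serious_accident(total_miles: float, serious_accidents: float) -> float:
    """Exposure-normalized rate; getting trustworthy inputs is the hard part."""
    return total_miles / serious_accidents

# Placeholder values only -- NOT real statistics for either population.
human_rate = miles_per_serious_accident(total_miles=3.0e12, serious_accidents=2.0e6)
av_rate = miles_per_serious_accident(total_miles=5.0e7, serious_accidents=100)

print(f"human drivers:     ~{human_rate:,.0f} miles per serious accident")
print(f"autonomous system: ~{av_rate:,.0f} miles per serious accident")
# Even this ignores confounders: where and when the autonomous miles were driven,
# weather, and interventions by safety drivers.
```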
 
Upvote
-13 (2 / -15)
Well, it really depends. My personal view is that it won't really be AGI in the sense that it will be exactly human-like
There's really no reason to believe we'd create a human-like intelligence.

If we make "AGI" it will be completely alien in ways that are hard to comprehend.
 
Upvote
13 (14 / -1)

TVPaulD

Ars Scholae Palatinae
1,315
Yes please. Let's endlessly speculate on what little information we have (and apparently even less knowledge of AI).

The ignorance in this forum about how AI and biological intelligence works is off the charts.

Most of the time the highest voted comments are so wrong/ignorant that it has become an exercise in futility to debunk them.
How convenient for you that you don't have to present a counterargument to the points you vaguely decry as "ignorance." Must be nice to be so obviously smart and correct that you are relieved of the trouble involved in having to demonstrate any of the things you assert.
 
Upvote
18 (19 / -1)
Well, it really depends. My personal view is that it won't really be AGI in the sense that it will be exactly human-like (because that would cause the issues you're talking about), but it will be close enough that it can actually do most of our jobs quite easily.

Again, the way I see it, a level of AGI that replaces most human cognitive tasks is probably at least a couple of decades away. Once we get humanoid robots and train them in the real world with our chores/jobs, I really don't see any reason why an AI robot can't do 90% of current human jobs in the next 50 years.
Any system that can accurately and safely interact with humans has to be able to accurately predict human actions and behaviors and must be, by necessity, at least as complex and cognizant as humans. It's the same problem facing autonomous vehicles -- the vehicles have to safely interact with irrational and unpredictable humans. The only way they can do that is if they are able to understand and predict human behavior, which, by necessity, means that they are just as complex, just as aware and cognizant as humans. An ant will never be able to understand humans or predict human action and behavior. As such an 'ant' operating heavy machinery, driving vehicles, or running a kitchen will never be safe around people. It will always be a hazard to anyone around them. The ant has to be smart enough to think like a human to be safe around humans. And if the ant can think like a human....
 
Upvote
13 (15 / -2)
There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while I don't think we can do an apples-to-apples comparison between parameters and synapses - you did that comparison. So how does your math work there?
A parameter is as little as a single bit. The behavior of a real biological neuron, even one in an organism as simple and well documented as C. elegans, is obviously vastly more complicated than a Boolean value. How many times more? Put another way, how large an artificial neural network would be required to completely model just a single C. elegans neuron?

You can't make any comparison at all, because we don't even understand all the complexity of single neurons, let alone a network of 1000 of them.
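To give a feel for the gap: a parameter is one stored number, while even the crudest textbook neuron abstraction, a leaky integrate-and-fire unit, is already a little dynamical system with state, a time constant, a threshold, and a reset. A minimal sketch (the constants are illustrative, not fit to any real neuron, and real neurons are far more complicated still):

```python
# Minimal leaky integrate-and-fire neuron: a small dynamical system with state,
# a time constant, a threshold, and a reset -- versus a parameter, which is one
# stored number. All constants are illustrative, not fit to any real neuron.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, resistance=1e7):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and is driven by the input current.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_thresh:               # threshold crossing: emit a spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 3 nA input for 100 ms yields a regular spike train.
print(simulate_lif([3e-9] * 100))
```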
 
Upvote
11 (14 / -3)

idspispopd

Ars Scholae Palatinae
832
There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while we can't do an apples-to-apples comparison between parameters and synapses - how does your math work there?

My math works like:
  1. Read my post. Notice how my post specifically says not to use a human brain as the comparison.
  2. Pick any of the countless biological brains that exist with fewer synapses than GPT-4 has parameters (rough orders of magnitude are sketched below). Note that humans are not the only animals with a brain, consciousness, and intelligence.
  3. Notice how the biological brain is achieving impressive things like consciousness and understanding, while the more complex GPT-4 is not even close.
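For reference, a rough order-of-magnitude comparison. The human and GPT-4 figures are the ones quoted in this thread; the animal figures are commonly cited ballpark estimates, not precise counts:

```python
# Rough order-of-magnitude comparison. The human and GPT-4 figures are the ones
# quoted in this thread; the animal figures are ballpark estimates, not precise counts.
counts = {
    "human brain synapses":           1e14,     # ~100 trillion (figure from the thread)
    "GPT-4 parameters (rumored)":     1.76e12,  # figure from the thread
    "mouse brain synapses (est.)":    1e11,     # ballpark estimate
    "honeybee brain synapses (est.)": 1e9,      # ballpark estimate
}

for name, n in counts.items():
    print(f"{name:32s} ~{n:.2e}")

print(f"\nhuman synapses / GPT-4 parameters ~ {1e14 / 1.76e12:.0f}x")
# The point above: animals with orders of magnitude fewer synapses than GPT-4 has
# parameters still manage perception, navigation, and flexible behavior.
```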
 
Upvote
11 (13 / -2)
LLMs are nowhere near AGI. They are nowhere near even approaching bacteria in terms of being able to interact with their environment and adapt beyond their training.

They poop out words in an order that is not always nonsensical.
There are papers on self-improving models, learning either from a feedback model or from user feedback: https://arxiv.org/pdf/2310.00898v3
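The generic shape of these setups is a generate/critique/revise loop. A hand-wavy sketch, where `generate` and `critique` are hypothetical stand-ins for model (or human-feedback) calls rather than any real API:

```python
# Hand-wavy sketch of a generate / critique / revise loop. `generate` and
# `critique` are hypothetical stand-ins for model or human-feedback calls,
# not a real library API.
def generate(prompt: str) -> str:
    return "draft answer to: " + prompt.splitlines()[0]  # stand-in for a model call

def critique(prompt: str, draft: str) -> str:
    return "no issues"  # stand-in for a feedback model or user rating

def self_refine(prompt: str, rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(rounds):
        feedback = critique(prompt, draft)
        if "no issues" in feedback.lower():   # crude stopping criterion
            break
        draft = generate(
            f"{prompt}\n\nPrevious draft:\n{draft}\n\nFeedback:\n{feedback}\n\nRevise:"
        )
    return draft

print(self_refine("Explain why the sky is blue."))
```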
 
Upvote
-11 (1 / -12)
Outsider (to AI) looking in:

What are the chances that there are no big breakthroughs, LLMs are all we get for a long time, and the wait for AGI has just been a pump-and-dump?
Imo, AGI is a meaningless buzzword, and we will never arrive at it because it has no definition.

Otoh, steady improvement for 5 years would be amazing.

Otooh, maybe it dies here and now.
 
Upvote
3 (4 / -1)

The Real Blastdoor

Ars Tribunus Militum
2,065
Subscriptor++
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
I've assumed that the logic there is that the LLM will compute that, to maximize the probability of getting the next word right, it has to figure out how to turn itself into an AGI. So, it will just <<insert magic step>> in order to do that.
 
Upvote
13 (13 / 0)

NZArtist

Wise, Aged Ars Veteran
100
When I were a lad (says the old man, waving his walking-stick in the air) I worked on computer simulations of various physics. One I was quite proud of was a fluid simulation (2D at the time). In those days it was just a skinned surface made up of a bazillion meta-balls, with parameters for 'size' to give volume, parameters for stickiness, and particle movement properties. It made for quite a convincing fluid simulation. (I discovered that merely changing the size radius of one set of particles makes two dissimilar fluids separate, like oil and water. Well, I thought that was cool, anyway.)
Forty years later and water simulations are actually pretty good. In movies it's almost impossible to tell where the real water ends and the CG water starts. But every now and then when something big is rising out of the ocean you get a jarring visual break of the water looking like a cascade of small ping-pong balls as the simulation isn't perfect.

With LLM AI it's very possible we're looking at a simulation of intelligence that will only ever look like intelligence without actually being intelligence. And it's possible that the LLM is the wrong vehicle for general intelligence. It can only ever be a flawed mimic because the underlying principle is wrong. It will still be useful for certain things, but it's also possible that LLMs are an algorithmic cul-de-sac that can never be as good as Actual Intelligence.
 
Upvote
24 (24 / 0)
Does anyone who writes these things actually work at a tech company? Momentum will carry you a long time, and leaders are very rarely magical. You could take the top 3 leaders in my org away and nothing of consequence would happen for at least 3-4 years; we have very good roadmaps that far out. Plus there are a couple dozen others who could be promoted into those positions in a heartbeat (and would be if someone was hit by a bus, or maybe in these cases the scenario should be 'helicopter crash'). They might (or might not) be the best for that job, but there are a bunch of other people within 2-3% of the capability ready to step in and grow from the experience.
 
Upvote
-13 (2 / -15)

DaveSimmons

Ars Tribunus Angusticlavius
9,678
I've assumed that the logic there is that the LLM will compute that, to maximize the probability of getting the next word right, it has to figure out how to turn itself into an AGI. So, it will just <<insert magic step>> in order to do that.
But if all humans are removed, that also lets the LLM answer every question asked of it by a human correctly. No humans, no questions, zero incorrect answers. Problem solved.

Kill All Humans.
 
Upvote
7 (7 / 0)
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI. Purely anecdotal, but from daily use of Claude and ChatGPT, I don't find Claude to be any more safe and secure in its output than ChatGPT.

It depends on what people are talking about.

I think when people heavily financially invested in generative AI talk about safety, they talk about Terminator scenarios largely as a disingenuous distraction from the real harms that are here today. Basically, the ability of generative AI to automate and amplify the standard low-level shitty behavior people do already: deepfakes, automated stalking and abuse, fraud, extortion, spear phishing, and so on. None of that is new or specific to AI, but AI can amplify and scale those attacks tremendously. That's what AI safety should be about, not grey goo.
 
Upvote
18 (18 / 0)
(Just today, Altman dropped a hint on X about strawberries that some people interpret as a hint that a major model is undergoing testing or nearing release.)
Where do strawberries grow? In fields. What does everyone know about strawberry fields? That nothing is real. Where else is nothing real? In a dream. Put it all together and what do you get? Field of Dreams. The message is clear: OpenAI is going to release an AI-powered Apple Vision Pro app that allows people to play catch with simulations of their dead fathers. It's obvious, really.
 
Upvote
17 (17 / 0)

Civitello

Ars Scholae Palatinae
6,645
LLMs are nowhere near AGI. They are nowhere near even approaching bacteria in terms of being able to interact with their environment and adapt beyond their training.

They poop out words in an order that is not always nonsensical.
My current favorite benchmark is trying to get them to play Wordle; LLMs seem completely incapable of doing that, even with the most patient and cooperative human assistant. Even a six-year-old can play Wordle better than most LLMs. Worse, LLMs, unlike six-year-olds, can't understand that they aren't accomplishing the task. Until they can perform all cognitive tasks as well as a six-year-old and many as well as an adult expert, I'll be unconvinced we are approaching AGI.
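For reference, the entire game logic the models struggle with fits in a few lines. A minimal sketch of the feedback rule (G = right letter, right spot; Y = right letter, wrong spot; . = absent, with the usual duplicate-letter handling):

```python
from collections import Counter

def wordle_feedback(guess: str, answer: str) -> str:
    """G = right letter, right spot; Y = right letter, wrong spot; . = absent."""
    guess, answer = guess.lower(), answer.lower()
    result = ["."] * len(guess)
    unmatched = Counter()

    # First pass: exact matches; tally the answer letters that weren't matched.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "G"
        else:
            unmatched[a] += 1

    # Second pass: misplaced letters, limited by what is still unmatched.
    for i, g in enumerate(guess):
        if result[i] == "." and unmatched[g] > 0:
            result[i] = "Y"
            unmatched[g] -= 1

    return "".join(result)

print(wordle_feedback("crane", "cacao"))  # -> G.Y..
```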
 
Upvote
8 (8 / 0)

zoward

Ars Centurion
310
Subscriptor
I find irony in all these announcements that “OpenAI will definitely have magic AI results any minute now, y’all” being posted on Elon Musk’s website.
There seem to be many similarities between Musk's "Full self-driving" and OpenAI's "AGI". I'm not expecting either any time soon.
 
Upvote
14 (14 / 0)

NZArtist

Wise, Aged Ars Veteran
100
There seem to be many similarities between Musk's "Full self-driving" and OpenAI's "AGI". I'm not expecting either any time soon.
I think it's because the young'uns don't yet realize that "80% of the way there!" is the easy part, and it's the last 20% that takes all the time (or proves to be impossible).
"Geeze, five years later and now we're only 81% of the way there..."
 
Upvote
16 (16 / 0)