Major shifts at OpenAI spark skepticism about impending AGI timelines

johnsonwax

Ars Legatus Legionis
14,585
I've argued this elsewhere so I'll offer it up here as well.

OpenAI's near-term problem is that they are clearly chasing AGI and believe that ChatGPT serves as an MVP on that path (I don't think it is, but they do). The problem is that they need to be a functional business to pursue that, unless AGI is right around the corner (which they keep insinuating it is).

ChatGPT isn't a very good functional business, because it over- or underserves most of the markets where people are willing to pay for this. At one end you have Apple's pitch, which is on-device AI that can interact with your personal data; even if it can't do as much, it's more utilitarian. Helping me find an old email is probably more useful to more people than an AI that can write both a sonnet and a real-estate listing but won't fit locally on a phone. At the other end you have expert systems that help synthesize drugs or diagnose cancer or analyze your quarterly sales, which also aren't well served by writing a sonnet or a real-estate listing, and which you want tightly integrated with your company's data, which is why the open source tools are getting so much attention.

What's left in the middle doesn't seem like a particularly large business - certainly not large enough to sustain their ambitions toward building an AGI. Put aside whether you think ChatGPT is good enough or not, or whether it will or won't lead to AGI; ChatGPT is not a good enough fit for what the market wants to buy to sustain the effort - you can only get so far on VC dollars and Microsoft panic investing. You have to align the product to what consumers or enterprises want. But the principals want AGI, and are refusing to do that, hence the conflict. Meanwhile, Apple can build their own local stuff and offer it up free to users, and open source is out there hitting the wider set of needs - both undermining the opportunity to charge the kind of money they need.
 
Upvote
18 (18 / 0)

lolware

Ars Centurion
309
Subscriptor
We already have General Intelligence devices; they're called human beings.

The main outcome of pouring billions (trillions) of dollars and terawatt-hours of energy into the development of AGI instead of the development of actual human beings is a deeply misanthropic concentration of wealth.

I fear for the future that these greedy mad scientists and their enablers are preparing for us.
 
Upvote
20 (23 / -3)

DaveSimmons

Ars Tribunus Angusticlavius
9,678
I think it's because the young'uns don't yet realize that "80% of the way there!" is the easy part, and it's the last 20% that takes all the time (or proves to be impossible).
"Geeze, five years later and now we're only 81% of the way there..."
Like with a practical fusion reactor. The latest result after 92+ years of research is a 5-second-long reaction.

Practical fusion has been 20 years away for the last 70 years.
 
Upvote
11 (11 / 0)
You see a similar sort of asymptotic relationship with fully autonomous road vehicles. It looked like it might be a solved problem back in 2018, and in a lot of ways the best systems seem to be 98% there. But that last 2% makes all the difference in the real world.
Otoh, we do have AVs now. If they keep rolling out, it'll still be one of the most revolutionary techs of our generation, just 10 years behind schedule.
 
Upvote
-9 (1 / -10)

johnsonwax

Ars Legatus Legionis
14,585
There seem to be many similarities between Musk's "Full self-driving" and OpenAI's "AGI". I'm not expecting either any time soon.
The main similarity is that investors in the modern tech era, having watched the iPhone, Amazon e-commerce, Google search, and a number of others, have gotten addicted to the idea that if they invest a bit of money in just the right spot, it can blow up and pretty quickly make them fantastically wealthy. And these are ideas that have such broad appeal and such incredible utility that they consume entire industries worth of value. Self-driving cars are one of those things where, if you can crack that nut, you win a MASSIVE market, even if all you accomplish is to make the occupation of truck driver nonexistent. Same goes for AGI - it consumes so much that its inventors will hoover up a not-small share of global GDP. And these are things where, if you do nail it (like Google Search), there's not really a place for competitors to operate. You just win capitalism.

Horace Dediu has a great quote:

Those who predict the future we call futurists. Those who know when the future will happen we call billionaires.

And that's the thing here. Even if we concede that both of these things will someday happen (and I don't see why not, though I think self-driving will be largely immaterial because the nature of transportation will need to change anyway) it doesn't mean that plowing money into them now will pay off. And if AGI does arrive, the idea that it'll be able to operate in the current economic system seems unrealistic. You can't cede the entire economy to that entity - you have no choice but to rewrite the rules of society to block that from happening, preventing the expected payoff from ever actually arriving.
 
Upvote
5 (5 / 0)

johnsonwax

Ars Legatus Legionis
14,585
Otoh, we do have AVs now. If they keep rolling out, it'll still be one of the most revolutionary techs of our generation, just 10 years behind schedule.
I think we're going to find that the problem right now isn't that gas cars are bad or that human driven cars are inefficient, but that cars carry so many secondary costs that they are unsustainable as the thing to organize society around - that applies to EVs and autonomous cars and everything else that substitutes into that role. So, you may get to autonomy just in time to see this $40,000+ forced consumption tax on society collapse under its own weight.
 
Upvote
7 (8 / -1)

Tall Dwarf

Ars Scholae Palatinae
843
ChatGPT-3.5 = 175 billion parameters, according to public information

Different studies have slightly varying numbers for a human brain, but it's 1000x more: from 0.1 to 0.15 quadrillion synaptic connections. Source: https://www.scientificamerican.com/article/100-trillion-connections/ (among others)

While it's likely to require something more than just scaling up the model size, I thought this gives some clue about the scale. I agree with you; perhaps the answer is "it's not" scaling.
No, it doesn't. You are comparing parameters with connections; they are different things.

The LLM architecture determines the connections; the number of parameters is a completely different number.

The human brain has ~86 billion neurons, and they perform a huge number of tasks that an LLM does not.

"I thought this gives some clue about scale" - no, it does not. You are comparing apples with electrons.
 
Upvote
8 (8 / 0)

LetterRip

Ars Tribunus Militum
2,967
My current favorite benchmark is trying to get them to play Wordle; LLMs seem completely incapable of doing that, even with the most patient and cooperative human assistant. Even a 6-year-old can play Wordle better than most LLMs. Worse, LLMs, unlike 5-year-olds, can't understand that they aren't accomplishing the task. Until they can perform all cognitive tasks as well as a 6-year-old and many as well as an adult expert, I'll be unconvinced we are approaching AGI.

Try playing Wordle without knowing what letters words are composed of. LLMs generally don't have access to character-level information (neither what characters a word is composed of, nor the order or quantity of those characters), so tasks that require character-level information are highly unlikely to be successful.
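To make the tokenization point concrete, here's a rough sketch using the tiktoken library (assuming it's installed; the exact token splits vary by tokenizer, so treat the output as purely illustrative):

Code:
# Illustrative only: a GPT-style tokenizer hands the model integer token IDs
# for whole chunks of text, not individual letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by GPT-4-era models

for word in ["crane", "slate", "wordle"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    # From the model's point of view there is no "second letter" field here,
    # which is why letter-position games like Wordle are so awkward for it.
    print(word, "->", ids, pieces)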
 
Upvote
3 (3 / 0)

MoMonies

Smack-Fu Master, in training
3
I've argued this elsewhere so I'll offer it up here as well.

OpenAI's near-term problem is that they are clearly chasing AGI and believe that ChatGPT serves as an MVP on that path (I don't think it is, but they do). The problem is that they need to be a functional business to pursue that, unless AGI is right around the corner (which they keep insinuating it is).

ChatGPT isn't a very good functional business, because it over- or underserves most of the markets where people are willing to pay for this. At one end you have Apple's pitch, which is on-device AI that can interact with your personal data; even if it can't do as much, it's more utilitarian. Helping me find an old email is probably more useful to more people than an AI that can write both a sonnet and a real-estate listing but won't fit locally on a phone. At the other end you have expert systems that help synthesize drugs or diagnose cancer or analyze your quarterly sales, which also aren't well served by writing a sonnet or a real-estate listing, and which you want tightly integrated with your company's data, which is why the open source tools are getting so much attention.

What's left in the middle doesn't seem like a particularly large business - certainly not large enough to sustain their ambitions toward building an AGI. Put aside whether you think ChatGPT is good enough or not, or whether it will or won't lead to AGI; ChatGPT is not a good enough fit for what the market wants to buy to sustain the effort - you can only get so far on VC dollars and Microsoft panic investing. You have to align the product to what consumers or enterprises want. But the principals want AGI, and are refusing to do that, hence the conflict. Meanwhile, Apple can build their own local stuff and offer it up free to users, and open source is out there hitting the wider set of needs - both undermining the opportunity to charge the kind of money they need.
Great comment. I think Meta's philosophy of open-weighting models that more or less perform at GPT-4's level puts OpenAI and other general AI companies in a pretty tough spot. Where do you go from here if you're them? Spend billions on a GPT-5 that is better at generalist "knowledge" and asymptotically approaches only outputting the "smartest" stuff in its training set? Pretty cool to play around with, but still not necessarily a product. I would say that Apple Intelligence is the most compelling use case of AI yet, and it's not really a cutting-edge model by many standards. It doesn't need to be. I'm not really sure what breakthrough in AI would get me excited again, but I'm very curious to play around with Llama 3.1.
 
Upvote
3 (4 / -1)

archtop

Ars Tribunus Militum
1,611
Subscriptor
There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while I don't think we can do an apples-to-apples comparison between parameters and synapses - you did make that comparison. So how does your math work there?
The nematode C. elegans has 302 neurons and lives in the wild, feeding on certain bacteria, and displays waking and sleeplike states. I wonder how an LLM model with 302 parameters would function.
 
Upvote
18 (19 / -1)

LaunchTomorrow

Wise, Aged Ars Veteran
114
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
I mean I'm sorta kinda closer to technically knowledgeable (still not an AI researcher), and nothing that anyone has shown so far will make that leap. The current models are still hilariously under-capable compared to what AGI should be.

Think about it: you can read one Wikipedia article and learn most of what there is to know about some topic (assuming Wikipedia is complete and accurate, which it isn't). These LLMs train on basically the contents of the entire Internet and still can't form proper English sentences that are tonally appropriate. They are extremely training-inefficient, if that makes sense.

They still lack the ability to apply formal reasoning to general inputs. Like if you say "Dogs are mammals, and cats are also mammals, are dogs and mammals the same?" while dressing it up a bit to defeat the trivial "are dogs and cats the same?" question, then they totally break down.

The hallucinations are basically a core feature of these models, not a bug. Hallucinations are just their true nature showing through: they work by statistically predicting the next token in a sequence in response to an input sequence. Sometimes the most "common" answer makes sense, sometimes it doesn't.
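For anyone who hasn't seen the mechanism spelled out, it really is just this (a toy sketch with a made-up four-word vocabulary and made-up scores, nothing like a real model's scale):

Code:
import math, random

vocab = ["dog", "cat", "mammal", "bridge"]
logits = [2.0, 1.9, 0.3, -1.0]  # hypothetical scores the model assigns to each candidate next token

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]  # softmax: scores become probabilities

# The model emits whatever is statistically plausible next; nothing in this step
# checks whether the continuation is true, which is where hallucinations come from.
print(random.choices(vocab, weights=probs, k=1)[0])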

The hope is that if you can just throw enough computing power at the problem that you can get so specific about your predictions that it basically amounts to regurgitating the "fact" itself. Like "if I can phrase the question as the chain of every event from 6000 BC leading up to this moment, what would be the next part of my response to this question?" kind of reasoning. But that's obviously stupid. You don't need the context of Pharaoh Rameses II dying in 1213 BC to know that both cats and dogs are mammals, but cats are not dogs nor vice versa.

Now maybe someone will manage to pivot our current models into something that actually can learn facts and relationships between things as a certainty, and I'll be proven wrong. There's even a tiny bit of evidence to make the case for that (Anthropic's research on Golden Gate Claude, for instance, showing a specific feature the model linked to the Golden Gate Bridge, and that feature was also strongly linked to features for things like bridges and San Francisco).

However, let me end with a fun fact. The human brain has something like 100 trillion synapses, which in our current understanding are most analogous to parameters in a model. But recent research has suggested that the microtubules in neurons could also do computations inside each neuron. Each neuron has billions of microtubules. That could mean we need to model more like 100 million trillion or more parameters.
 
Upvote
17 (17 / 0)

LaunchTomorrow

Wise, Aged Ars Veteran
114
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI. Purely anecdotal, but from daily use of Claude and ChatGPT, I don't find Claude to be any more safe and secure in its output than ChatGPT.
The problem is that AI safety is already a present concern. These models may generate bullshit, but sling enough bullshit around and some people will start to believe it. Unfortunately, most people involved in the business seem to be in either the "AI is our savior" camp or the "AGI is coming soon with doom close behind" camp. Neither seems interested in the real harms being created today.
 
Upvote
6 (6 / 0)

Fatesrider

Ars Legatus Legionis
21,424
Subscriptor
They're leaving because (and this is a vague recollection of some articles I read online, so I might be incorrect here) Altman seems to have unrealistic goals. Could also be that those heading out don't want to be tied to any copyright suits. Or, as the 2nd commenter said, money.

Or, they only went there to learn enough about how OpenAI is doing their magic and want to go chase those sweet VC dollars with their own startup.
I'd go with the notion of Altman being a used car salesman, investors being used car buyers, and the folks at OpenAI all know it.

Altman is there to drive investment. He's writing checks with his mouth that the others almost certainly can't cover (at least within any investor's patience limit) with the technology they have at hand.

It's not rocket science. Altman is much like Musk - there to promote the company, make bullshit promises, attract VC funds, and get paid handsomely for inflating its actual market value. From what I've read, he doesn't seem to do anything else.
 
Upvote
13 (13 / 0)

Snark218

Ars Legatus Legionis
29,677
Subscriptor
Because OpenAI is not on the verge of AI. LLMs are not AI and will never lead to AI. The difference between LLMs and AI is like the difference between Candyland and chess. Their entire concept and mode of operation is probabilistic, not intelligent. They're janky Chinese Rooms, not intelligences. And there is not enough training data, power, and investment dollars on this Earth to even make them good Chinese Rooms.

People are leaving Sam Altman's dumb scam company because they know the bottom is about to fall out, and they want someone else holding the bag when it does. They cannot raise enough investment to survive at their current or reasonably foreseeable cash burn rate. OpenAI is a dead man walking. It's just a matter of how quickly it's allowed to fall.
 
Upvote
9 (10 / -1)

Snark218

Ars Legatus Legionis
29,677
Subscriptor
If we extrapolate from the progress in deep learning over the last 12 years, low. Very low.

In that timespan we went from, “Holy shit, it can recognize cats slightly better than bad” to “Holy shit, ChatGPT.” How do people not understand this?
Maybe we’re less easily impressed than you.
 
Upvote
13 (13 / 0)

LaunchTomorrow

Wise, Aged Ars Veteran
114
I think AGI-like technology will be stuck in the "uncanny valley" for quite a while. This is where it will be close to human like intelligence but never quite close enough.

Which is not to say it won't be useful, just not human-like.

You see a similar sort of asymptotic relationship with fully autonomous road vehicles. It looked like it might be a solved problem back in 2018, and in a lot of ways the best systems seem to be 98% there. But that last 2% makes all the difference in the real world.
Except that, at least in the case of autonomous driving, you can kinda "gate" the end state in this limited form. An autonomous car doesn't necessarily need to be able to handle every conceivable driving scenario, for instance snow. Just don't operate the cars when/where it snows. Also, worst case scenario, you have the excuse that humans are terrible at driving in general, so at least it's better than human drivers! AGI, by definition, is all-encompassing, which makes it much, much harder.
 
Upvote
1 (1 / 0)

LaunchTomorrow

Wise, Aged Ars Veteran
114
Any system that can accurately and safely interact with humans has to be able to accurately predict human actions and behaviors and must be, by necessity, at least as complex and cognizant as humans. It's the same problem facing autonomous vehicles -- the vehicles have to safely interact with irrational and unpredictable humans. The only way they can do that is if they are able to understand and predict human behavior, which, by necessity, means that they are just as complex, just as aware and cognizant as humans. An ant will never be able to understand humans or predict human action and behavior. As such an 'ant' operating heavy machinery, driving vehicles, or running a kitchen will never be safe around people. It will always be a hazard to anyone around them. The ant has to be smart enough to think like a human to be safe around humans. And if the ant can think like a human....
You don't need to fully predict human behavior for fully automated driving. You just have it react/predict how humans drive cars, and walk on sidewalks, and so on, while being predictable itself. Subway trains don't predict human behavior, and 99% of the time we interact fine with them, even when they are the surface level type. As long as the cars are predictable, and can handle the limited subset of "hey that car started moving towards my lane, maybe they're going to merge", "hey that pedestrian just stepped into a crosswalk, I should stop" then things are generally fine.

You don't need an autonomous car to discuss the works of Plato with you. Therefore it doesn't have to be as generally intelligent as you.
 
Upvote
-1 (6 / -7)

alex_d

Ars Scholae Palatinae
1,317
I think it's because the young'uns don't yet realize that "80% of the way there!" is the easy part, and it's the last 20% that takes all the time (or proves to be impossible).
"Geeze, five years later and now we're only 81% of the way there..."
It depends what “there” is. Some things like speech recognition and machine translation languished at the 80% mark for a couple of decades, but are now “there” for most definitions of there, thanks to modern deep neural networks.

Chatbots didn’t have a 20 year ramp of being almost-there. They basically exploded out of nowhere. Same for image generation. So this 80/20 rule doesn’t always hold. (Or maybe the 80% can be more than sufficient?)

But look, the question of when isn’t too relevant. Predicting with certainty ASI next year is silly. But why should we feel much better if it is 20 or 50 years away? That’s sooner than the nightmare scenarios of global warming. We still need to confront the same existential “scifi fantasies,” and even do something about them, not make fun of them.
 
Upvote
-9 (3 / -12)

Skelator123

Ars Scholae Palatinae
1,145
Because Sama and some others have lied to investors that they could achieve human-like intelligence within 5 years by mostly scaling up feed-forward neural networks and the data used to train them. At the very least he promised them that hallucinations and basic reliable reasoning would be "done" pretty soon.

In reality these things will take 10+ years and will require new architectures like I-JEPA or others. Probably many more fundamental advances are required.

Any investors that can wait for 10+ years will be OK, as long as the AI company they invested in survives and moves on to the next thing successfully. Many will take a soaking when the LLM/VLM bubble pops (gonna happen within 5 years, maybe a lot sooner).

EDIT: And some people don't want to be there when it starts raining.
I'll start believing claims that AGI is "getting close" when someone has a system that doesn't completely eat itself from the inside when fed its own output.
Because in my mind, "introspection" is kind of a fundamental part of intelligence, and if merely thinking about your last idea makes you spray vomit all over the carpet, then something is going in exactly the wrong direction.
 
Upvote
11 (12 / -1)

gwg

Seniorius Lurkius
46
Same. I don't even understand what the underpinnings of AGI are supposed to look like.
I can imagine an attempt at AGI using an LLM as the "processing" block. At least two more blocks would be needed, though: a "state" block and an executive/goal/motivation/emotion block. Connect them all together and then get them to advance the state in the direction dictated by the goals. I'm sure the rest is just details :)
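To make that concrete, here's a toy version of the loop (all of the names are hypothetical and the LLM call is stubbed out; it's only meant to show the shape of the three blocks wired together):

Code:
def call_llm(prompt: str) -> str:
    # Stand-in for the "processing" block (a real LLM call would go here).
    return f"(model output for: {prompt[:40]}...)"

state = ["agent has just started"]        # the "state" block: a running memory
goal = "keep the conversation helpful"    # the executive/goal/motivation block

for step in range(3):
    prompt = f"Goal: {goal}\nState so far: {state}\nWhat should happen next?"
    thought = call_llm(prompt)
    state.append(thought)                 # advance the state in the direction of the goal
    print(f"step {step}: {thought}")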
 
Upvote
3 (5 / -2)

LaunchTomorrow

Wise, Aged Ars Veteran
114
I think we're going to find that the problem right now isn't that gas cars are bad or that human driven cars are inefficient, but that cars carry so many secondary costs that they are unsustainable as the thing to organize society around - that applies to EVs and autonomous cars and everything else that substitutes into that role. So, you may get to autonomy just in time to see this $40,000+ forced consumption tax on society collapse under its own weight.
I think this is a more valid point than the other person saying "AVs must necessarily be AGI". I'm from car country, and even I wish we could all pile into microcosms of 20-story mixed-use buildings arranged directly next to each other.

Commute to work? Oh just ride the elevator down to the lobby, walk 3 buildings over, and ride 5 floors back up. Friend hosting a shindig? Oh that's cool, they live like 5 floors above you. Need groceries? Oh that's in the next building over. Want to go spend some time in nature? Oh, 70% of the country is now national parks because we've fully compacted the residential areas, leaving only farmland, resource extraction, and national parks as major land users.

Cars are great, the rev of a high-performance gas engine is intoxicating, but we need to move on from cars as the central feature of US cities.
 
Upvote
9 (9 / 0)

tayhimself

Ars Tribunus Angusticlavius
6,211
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Halvar Flake, the security researcher and entrepreneur, said that the combustion engine didn't get us to the moon, and similarly LLMs will not get us to AGI.
 
Upvote
2 (4 / -2)

Odomitus

Wise, Aged Ars Veteran
134
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Yeah, it's not an issue of throwing more computational power at the problem.

I think most laymen associate AGI with watershed consciousness, which is a very difficult thing to measure as it pertains to human levels of intelligence when the "mind" in question has no concrete physical being. I'm not even OK with using the term AI in association with these learning models. They've demonstrated time and time again that they're not intelligent. They're just very strong procedural generation engines.
 
Upvote
1 (3 / -2)
I think we're going to find that the problem right now isn't that gas cars are bad or that human driven cars are inefficient, but that cars carry so many secondary costs that they are unsustainable as the thing to organize society around - that applies to EVs and autonomous cars and everything else that substitutes into that role. So, you may get to autonomy just in time to see this $40,000+ forced consumption tax on society collapse under its own weight.
What do you think has changed in the last 50 years that would make electric cars unsustainable?

I do not like cars, but most folks seem to think they are great.
 
Last edited:
Upvote
0 (0 / 0)
You don't need to fully predict human behavior for fully automated driving. You just have it react/predict how humans drive cars, and walk on sidewalks, and so on, while being predictable itself. Subway trains don't predict human behavior, and 99% of the time we interact fine with them, even when they are the surface level type. As long as the cars are predictable, and can handle the limited subset of "hey that car started moving towards my lane, maybe they're going to merge", "hey that pedestrian just stepped into a crosswalk, I should stop" then things are generally fine.

You don't need an autonomous car to discuss the works of Plato with you. Therefore it doesn't have to be as generally intelligent as you.
You cannot predict the behavior of anything you don't understand. To understand the behavior of an organism, you need to be at least as cognizant as that organism. It's part of why people who are not smart have such a difficult time connecting with a world that is built by and for people who are much smarter than them -- they do not have the capacity to fully grasp the concepts governing the systems around them (regardless of whether those systems are good or bad). It's also why people who are not smart gravitate towards simple answers to the complexity of the real world -- simple answers that they can understand 'make more sense' to them, even if those answers are completely wrong.

The ant cannot and will never be able to predict the behavior and actions of humans. It will never be a 'safe' driver. It will never be a safe operator of heavy machinery. It will never be safe to run a kitchen around other people. It will always be a hazard to anyone around it. Because it cannot predict what the people around it will do next, no matter how well it 'understands' the rules that are supposed to be followed. Because people don't follow 'the rules' all the time, or even most of the time. They're more like guidelines.
 
Last edited:
Upvote
-9 (0 / -9)

LaunchTomorrow

Wise, Aged Ars Veteran
114
You cannot predict the behavior of anything you don't understand. To understand the behavior of an organism, you need to be at least as cognizant as that organism. It's part of why people who are not smart have such a difficult time connecting with a world that is built by and for people who are much smarter than them -- they do not have the capacity to fully grasp the concepts governing the systems around them (regardless of whether those systems are good or bad). It's also why people who are not smart gravitate towards simple answers to the complexity of the real world -- simple answers that they can understand 'make more sense' to them, even if those answers are completely wrong.

The ant cannot and will never be able to predict the behavior and actions of humans. It will never be a 'safe' driver. It will never be a safe operator of heavy machinery. It will never be safe to run a kitchen around other people. It will always be a hazard to anyone around it. Because it cannot predict what the people around it will do next, no matter how well it 'understands' the rules that are supposed to be followed. Because people don't follow 'the rules' all the time. Not even most of the time. They're more like guidelines.
You don't need to understand someone's views on moral relativism or Jungian psychology to predict whether they will step out onto the street after continuously walking in a straight line toward it without slowing down.

You are making nonsense arguments. Physical interactions, and especially physical interactions in the realm of streets and cars are a small subset of human behavior and conscious breadth. And you hardly even need to "predict" anything, autonomous vehicles can measure your exact distance and velocity a hundred times a second and just react as needed. Their reaction times far outstrip the blob of goo between your ears.

In any case, this argument is already dead: we already have autonomous cars. They operate every day in SF. And at least in the case of Waymo, they cause relatively little harm. Sure there have been some cringe-worthy screwups, like the ambulance thing, but that was easily solved by updating the software and also collaborating with local emergency responders to design a system to allow them to redirect autonomous cars around ongoing incident areas.
 
Upvote
1 (6 / -5)
You cannot predict the behavior of anything you don't understand.
Um, we have celestial calendars that very accurately predicted eclipses and the like, made by people who couldn't remotely understand why those things were happening.

There is a difference between being able to predict something, and being able to defend why the prediction is valid, which is what I think you're describing.

To use a current event, pollsters are pretty good at predicting the outcome of an election, but they are all over the goddamn map in their explanations for why voters are voting that way to the degree that it's almost impossible to work out which, if any of them, are correct. But the polling stands.
 
Upvote
13 (14 / -1)

LaunchTomorrow

Wise, Aged Ars Veteran
114
You cannot predict the behavior of anything you don't understand. To understand the behavior of an organism, you need to be at least as cognizant as that organism. It's part of why people who are not smart have such a difficult time connecting with a world that is built by and for people who are much smarter than them -- they do not have the capacity to fully grasp the concepts governing the systems around them (regardless of whether those systems are good or bad). It's also why people who are not smart gravitate towards simple answers to the complexity of the real world -- simple answers that they can understand 'make more sense' to them, even if those answers are completely wrong.

The ant cannot and will never be able to predict the behavior and actions of humans. It will never be a 'safe' driver. It will never be a safe operator of heavy machinery. It will never be safe to run a kitchen around other people. It will always be a hazard to anyone around it. Because it cannot predict what the people around it will do next, no matter how well it 'understands' the rules that are supposed to be followed. Because people don't follow 'the rules' all the time, or even most of the time. They're more like guidelines.
Another example, go try to catch a rabbit with your hands. Just do it.

The rabbit is not as smart as you (or are you going to seriously make the argument that a rabbit is as smart as a human?). And yet, it knows plenty well enough how to run away from you. How can this be that the rabbit predicted your movements?!? Perhaps it's because you don't have to fully understand something to avoid killing or being killed by it.

That or you are seriously arguing that a rabbit must understand literature and science in order to run away when you approach it.

The same applies to autonomous cars; they don't have to be as smart as us, they just have to be smart enough to not run us over.
 
Upvote
5 (6 / -1)
There won't be any AGI anytime soon. I know a lot about the subject, and generative AI can't think or reason; it cannot learn from new information it encounters right now and then use that to extrapolate to information it does not know. It is made to seem intelligent like a human with all kinds of tricks.
So there won't be an AGI developing out of this, as the hallmark of an AGI is not even present, not even a little.

We also don't know at all how we humans think and reason, what makes us special compared to animals and what true intelligence even is.

One of the most glaring signs of how weak generative AI still is: errors and randomness. They are always there and you can't protect against them, because ChatGPT, for example, can't tell if it made a mistake; you always have to point it out.
This means a human always has to babysit an AI for errors, because otherwise a self-acting AI breaks down and fails after the first steps it takes. Now they are starting to talk about using AI agents to monitor the AI for errors and correct them, BUT THAT CAN'T EVER WORK, for this reason:
In order to do that, you would have to use a better AI that can actually spot errors (or learn from them, which is what every other life form does - for example, a baby learning how to walk falls over 20,000 times and learns each time, until it can manage it, even with wildly shifting variables every day as its legs and arms grow).
Well, again, we don't have that BETTER AI that can detect errors and correct them. If we had it, we would not need another AI monitoring the work of the first; we would just use THAT AI.

I have had situations where I asked the AI for a correction and it gladly made one, only to spew out another error and wrong solution (in code, for example), and it did that 4 times - but it never caught the error by itself, only when I pointed it out.

This is also vividly seen in picture generation: you can't reproduce the very same picture twice, even if you use the exact same seed. It seems that generative AI always has a factor of randomness that cannot be removed; it is always there.
 
Upvote
0 (0 / 0)
Another example, go try to catch a rabbit with your hands. Just do it.

The rabbit is not as smart as you (or are you going to seriously make the argument that a rabbit is as smart as a human?). And yet, it knows plenty well enough how to run away from you. How can this be that the rabbit predicted your movements?!? Perhaps it's because you don't have to fully understand something to avoid killing or being killed by it.

That or you are seriously arguing that a rabbit must understand literature and science in order to run away when you approach it.

The same applies to autonomous cars; they don't have to be as smart as us, they just have to be smart enough to not run us over.
The problem with self-driving cars: many human beings are not too smart. They are very emotional and emotionally driven to do totally illogical things (drive too fast, risk their lives and the lives of others, be impatient, get angry at other drivers "because they suck at driving" - interesting fact: it is always THE OTHERS who are bad, never the person themselves).
So the human element makes it really impossible for an AI, because what we do is predict human misbehavior. When a person is ranting and shouting, carrying a baseball bat, crosses the street, and then starts swinging the club, we know that person will likely want to harm other people and smash cars, and will likely do so in a second; he might also be armed with a gun (as these types of people always are - not with one gun, not with 2 or 5 or 8, but probably with 20). We all know that, but an AI can't process it; it doesn't know what it means. It has no idea why that bicyclist is now leaving his lane and riding away from that - hmm, that must be a baseball player (thinks the AI); hmm, he must be going to a field to practice.

The big problem for AI is typically chaotic human beings; if all cars were controlled by AI, the problem would go away in a second and we would have self-driving cars tomorrow.
 
Upvote
-1 (1 / -2)
You are making nonsense arguments. Physical interactions, and especially physical interactions in the realm of streets and cars are a small subset of human behavior and conscious breadth. And you hardly even need to "predict" anything, autonomous vehicles can measure your exact distance and velocity a hundred times a second and just react as needed. Their reaction times far outstrip the blob of goo between your ears.

In any case, this argument is already dead: we already have autonomous cars. They operate every day in SF. And at least in the case of Waymo, they cause relatively little harm. Sure there have been some cringe-worthy screwups, like the ambulance thing, but that was easily solved by updating the software and also collaborating with local emergency responders to design a system to allow them to redirect autonomous cars around ongoing incident areas.
Hold on here. Autonomous cars are not safe. They might be safer, but they are not safe. You're describing a physical space (physicist here) where each object in the space is measured and assigned a potential velocity vector that it is either demonstrating or could be demonstrating before the next sample, which produces a potential impact cone for that object. This is done for everything in the scene (pedestrians, cars, cyclists, dogs, kids, etc.) and for the vehicle being driven, with an additional calculation for how quickly the vehicle could slow down/turn/etc. if needed. And the vehicle would proceed ensuring that none of those potential impact cones intersect. And you're arguing that self-driving vehicles do this. And they don't. They cheat. That's part of why this is hard to solve.

If you do the 'proceed in a manner that guarantees no collision' thing, a self-driving vehicle in an urban environment would struggle to exceed 10 mph, and often would not be able to move at all. So in order to make an autonomous vehicle actually useful to anyone, it has to cheat and assume that the other things in the scene are behaving in a similar manner, and that they're trying to avoid the collision just like the self-driving car is. And it needs to do that contextually, based on who has right of way, and so on. You can probably trust that the adult pedestrian is better at not walking into traffic than the child, or the dog, or the person using a cane, and if the model can't strictly avoid all possible interactions and still be able to be useful, it has to make assumptions, use some kind of judgement. What assumptions does it make about a potential vehicle behind a building or truck where it doesn't have vision?
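To see why the strict version grinds to a halt, here's a toy worst-case check (hypothetical numbers, and real planners use far more sophisticated reachability models): treat every object as potentially being anywhere within its max speed times elapsed time, and only call the situation safe if those regions can never touch the car's own worst-case region.

Code:
import math

def guaranteed_no_collision(p_car, v_car_max, p_obj, v_obj_max, horizon, step=0.1):
    # Worst case: each party could be anywhere within (max speed * elapsed time)
    # of where it is now. Only report "safe" if those discs never overlap.
    t = 0.0
    while t <= horizon:
        if math.dist(p_car, p_obj) <= v_car_max * t + v_obj_max * t:
            return False
        t += step
    return True

# A pedestrian 15 m away who could move at 3 m/s, a car held to 4 m/s (roughly 9 mph),
# and a 3-second planning horizon: even that crawl can't be *guaranteed* collision-free.
print(guaranteed_no_collision((0.0, 0.0), 4.0, (15.0, 0.0), 3.0, horizon=3.0))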

This is what motorists instinctively do - you don't account too much for the motorist 3 lanes over when changing lanes, because you assume they are also paying attention and won't aim for the same spot that you are. And usually that's right, but not always. Sometimes you crash. That's what autonomous vehicles are also doing. And some of these driving conventions are local. Japanese parents need to teach their kids to be afraid of American motorists, because Japanese drivers don't drive close to pedestrians, or at least children, while American motorists will brush past them expecting the child to focus on avoiding the car. So an autonomy model would need to work differently in Japan than in the US until we get kids in both places to behave the same way, or we make the most conservative model, which might grind to a halt in a place like India or Vietnam.

So, these things are not as safe as you think they are. Again, they may be safer than human drivers, but they aren't safe, and a system that is safe starts to look an awful lot like public transit.
 
Upvote
3 (4 / -1)
Um, we have celestial calendars that very accurately predicted eclipses and the like made by people that couldn't remotely understand why those things were happening.

There is a difference between being able to predict something, and being able to defend why the prediction is valid, which is what I think you're describing.

To use a current event, pollsters are pretty good at predicting the outcome of an election, but they are all over the goddamn map in their explanations for why voters are voting that way to the degree that it's almost impossible to work out which, if any of them, are correct. But the polling stands.
You are making an assertion without any supporting facts. They DID understand the movements of celestial bodies. They understood it quite well.

It was only stupid Europeans, with their arrogant Christian-centric world view, who were left incapable of grasping concepts not reliant on bible-based pseudo-scientific drivel; in their believed superiority, what they couldn't understand didn't exist. That was very much NOT the case for every culture - specifically not for the cultures that developed those sophisticated celestial calendars you think prove your point but that do nothing of the sort.
 
Last edited:
Upvote
-10 (1 / -11)
Another example, go try to catch a rabbit with your hands. Just do it.

The rabbit is not as smart as you (or are you going to seriously make the argument that a rabbit is as smart as a human?). And yet, it knows plenty well enough how to run away from you. How can this be that the rabbit predicted your movements?!? Perhaps it's because you don't have to fully understand something to avoid killing or being killed by it.

That or you are seriously arguing that a rabbit must understand literature and science in order to run away when you approach it.

The same applies to autonomous cars; they don't have to be as smart as us, they just have to be smart enough to not run us over.
I've WATCHED rabbits flee predators. On many occasions. They're not predicting anything. They're just running in a direction where they don't see a threat. It's effective because they're also faster and more agile than most predators. But that is also precisely how they get caught by predators who can and do very much understand how rabbits flee. More to the point, Native American hunters who have grown up learning how to hunt small game absolutely can and do catch rabbits by understanding and predicting their flight behavior. Heck, I've come to understand a little bit about it and can reasonably predict how the rabbits who live around me will react and where they'll go, even if I'm not fast enough to put it to any sort of practical use (were I so inclined, which I'm not).

An autonomous vehicle has to interact with people. To interact safely with people, it needs to be able to predict human actions and behaviors. The only way to do that is to understand human actions and behaviors. It will never be able to do that without also being able to THINK like us. And that's not possible for any intelligence not at least as capable as our own.
 
Last edited:
Upvote
-4 (0 / -4)
You are making an assertion without any supporting facts. They DID understand the movements of celestial bodies. They understood it quite well.
No they didn't. You're just saying the same thing I said with different words.

They didn't understand gravity or why that object could be relied on to return to the same place every year. They could predict that it would return there, but not explain why it would return there. That's my point. Most people today cannot really explain that either. Most people understand that celestial mechanics is predictable and don't question it, but they don't actually understand celestial mechanics - at least not on a level that would allow them to predict eclipses.

We could measure and predict the precession of the perihelion of Mercury for pretty much a century before we could explain why it was doing that. We could presume that the prediction wouldn't change without an outside influence, but we didn't have an explanation for why the prediction was correct. It took understanding the general theory of relativity before we could derive that prediction and explain it.

You can make a prediction without even having a hypothesis (a lot of hypotheses were offered for Mercury that were all wrong), let alone a proof.
 
Upvote
8 (9 / -1)

LetterRip

Ars Tribunus Militum
2,967
The nematode C. elegans has 302 neurons and lives in the wild, feeding on certain bacteria, and displays waking and sleeplike states. I wonder how an LLM model with 302 parameters would function.

See 'digital C. elegans twin' research,

This study presents a novel model of the digital twin C. elegans, designed to investigate its neural mechanisms of behavioral modulation. The model seamlessly integrates a connectome-based C. elegans neural network model and a realistic virtual worm body with realistic physics dynamics, allowing for closed-loop behavioral simulation. This integration enables a more comprehensive study of the organism’s behavior and neural mechanisms. The digital twin C. elegans is trained using backpropagation through time on extensive offline data of chemotaxis behavior generated with a PID controller. The simulation results demonstrate the efficacy of this approach, as the model successfully achieves realistic closed-loop control of sinusoidal crawling and chemotaxis behavior.
By conducting node ablation experiments on the digital twin C. elegans, this study identifies 119 pivotal neurons for sinusoidal crawling, including B-type, A-type, D-type, and PDB motor neurons and AVB and AVA interneurons, which have been experimentally proven to be involved. The results correctly predicted the involvement of DD04 and DD05, as well as the irrelevance of DD02 and DD03, in line with experiment findings. Additionally, head motor neurons (HMNs), sublateral motor neurons (SMNs), layer 1 interneurons (LN1s), layer 1 sensory neurons (SN1s), and layer 5 sensory neurons (SN5s) also exhibit significant impact on sinusoidal crawling. Furthermore, 40 nodes are found to be essential for chemotaxis navigation, including 10 sensory neurons, 15 interneurons, and 11 motor neurons, which may be associated with ASE sensory processing and turning behavior. Our findings shed light on the role of neurons in the behavioral modulation of sinusoidal crawling and chemotaxis navigation, which are consistent with experimental results.


and similar work

 
Upvote
3 (3 / 0)

graylshaped

Ars Legatus Legionis
57,836
Subscriptor++
I find irony in all these announcements that “OpenAI will definitely have magic AI results any minute now, y’all” being posted on Elon Musk’s website.
With his announcement that Tesla is an AI company, his entire fortune is now based on pumping this meme quite literally for all he is worth.
 
Upvote
4 (4 / 0)

One off

Ars Scholae Palatinae
854
Basically it is induction based on emergent behaviour and test performance seen already from simply scaling (more data and more parameters). Many AI researchers are skeptical, but on the other hand the progress already seen has been pretty shocking. Most AI researchers think at a minimum it will have to be a combination of LLM+search; LLM+symbolic reasoning; LLM+planner; or more likely more complex designs etc. and plenty believe that additional breakthroughs are needed.
I'd certainly agree that the + seems to be the missing element. What we have at the moment is a shockingly good natural language interface. The method of creating that interface has encoded significant amounts of human knowledge, including best practice, within it. Large parts of that knowledge are retrievable, with perhaps improving levels of accuracy. Agency, understanding, and application of knowledge seem to be capabilities that require more than an LLM, however complex.
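The "+" pattern mentioned in the quote can be sketched very crudely (both pieces are stubbed with hypothetical names; the point is only that the LLM supplies the language interface while some other component supplies retrieval or reasoning):

Code:
def llm(prompt: str) -> str:
    # Stand-in for the natural-language interface.
    if "Evidence:" in prompt:
        return "ANSWER: about 140,000 people (per the evidence provided)."
    return "SEARCH: population of Reykjavik"

def search(query: str) -> str:
    # Stand-in for the retrieval / symbolic / planning component.
    return "Reykjavik's population is roughly 140,000 (placeholder figure)."

question = "What is the population of Reykjavik?"
reply = llm(question)
if reply.startswith("SEARCH: "):
    evidence = search(reply.removeprefix("SEARCH: "))
    reply = llm(f"{question}\nEvidence: {evidence}\nAnswer:")
print(reply)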

My uneducated interest is piqued by progress in audio/visual/other input processing: improving real-time recognition of physical entities, events, items, and words in the real world, and tying those labels to the 'concepts' encoded in an LLM. I wonder if that is a path to learning from observation, creating something similar to understanding. But I have no idea of the resources that would be required to develop, train, and utilise a system that has more than a narrow 'understanding'.
 
Last edited:
Upvote
-3 (1 / -4)