Major shifts at OpenAI spark skepticism about impending AGI timelines

No they didn't. You're just saying the same thing I said with different words.

They didn't understand gravity or why that object could be relied on to return to the same place every year. They could predict that it would return, but not explain why. That's my point. Most people today cannot really explain that. Most people understand that celestial mechanics is predictable and don't question it, but they don't actually understand celestial mechanics - at least not on a level that would allow them to predict eclipses.

We could measure and predict the precession of the perihelion of Mercury for pretty much a century before we could explain why it was doing that. We could presume that the prediction wouldn't change without an outside influence, but we didn't have an explanation for why the prediction was correct. It took understanding the general theory of relativity before we could derive that prediction and explain it.

You can make a prediction without even having a hypothesis (a lot of hypotheses were offered for Mercury that were all wrong), let alone a proof.
And you're making exaggerated claims. Those celestial calendars, while impressive, are not precise. They're not precise because the people making them did not fully understand what they were observing. They could not predict what they did not understand. So what is the point you think you're making? Nothing you've said remotely refutes my original claim: you cannot predict the behavior of anything you don't understand. They understood that the moon and stars move. They observed and understood the ways they moved. They used that understanding to predict their movements to the extent that they could understand those movements. There's nothing more to it than that.

Just to refute your claim -- they could NOT predict the behavior of Mercury. They made rough estimates based on what they observed and understood, but those estimates were never accurate. It still took considerable time to precisely locate Mercury even with those estimates. Because they couldn't predict its movements. Because they didn't understand its movements.
 
Last edited:
Upvote
-5 (0 / -5)

Psyborgue

Ars Praefectus
4,416
Subscriptor++
Basically it is induction based on emergent behaviour and test performance seen already from simply scaling (more data and more parameters). Many AI researchers are skeptical, but on the other hand the progress already seen has been pretty shocking. Most AI researchers think at a minimum it will have to be a combination of LLM+search; LLM+symbolic reasoning; LLM+planner; or more likely more complex designs etc. and plenty believe that additional breakthroughs are needed.
It saddens me you are downvoted. I think we can get most of the way with language models and glue but additional breakthroughs might simplify the design. For example, there's no need for search if there's no context limit.

The way Westworld did it isn't a terrible idea. (Minor spoiler follows)
The hats.
Brain activity paired with high fidelity recording of human behavior. Given images and sound already work, why not tokenize brainwaves and see what happens? My understanding is there's already some research into that, but it's not used the way it perhaps could be.
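For what it's worth, here's a toy sketch of what "tokenize brainwaves" could even mean (nothing here refers to a real dataset or model; it's just uniform amplitude binning of a fake signal, a stand-in for real learned codecs):

```python
import numpy as np

def tokenize_signal(signal, n_tokens=256):
    """Map a continuous 1-D signal (e.g. one EEG channel) onto a small
    discrete vocabulary by uniform amplitude binning -- a toy stand-in
    for real neural codecs (VQ-VAEs and the like)."""
    lo, hi = signal.min(), signal.max()
    ids = np.floor((signal - lo) / (hi - lo + 1e-9) * (n_tokens - 1))
    return ids.astype(int)

# Fake "brainwave": a noisy 10 Hz oscillation sampled at 256 Hz for one second.
t = np.linspace(0, 1, 256)
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(256)
tokens = tokenize_signal(fake_eeg)
print(tokens[:16])  # integer token ids a transformer could ingest like text
```

Once the signal is a sequence of discrete ids, it can be fed to the same architectures that already handle text, audio, and image tokens.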
Without human emotions.
If we create AI without emotion -- that will be the miracle. We're training on people. Human behavior. It's Westworld, not Terminator. But to be fair, I do agree removing emotions is not necessarily a good idea.
 
Upvote
-5 (0 / -5)

Psyborgue

Ars Praefectus
4,416
Subscriptor++
you cannot predict the behavior of anything you don't understand
There are exceptions. Dogs can predict where the ball is going to go and they don't need to understand gravity or physics. Intuition and experience can suffice.

Edit: Reading the last few pages, I am surprised nobody has made this argument. Does nobody play fetch with their puppers? Bad humans! Bad!
 
Last edited:
Upvote
7 (7 / 0)
I think AGI-like technology will be stuck in the "uncanny valley" for quite a while. This is where it will be close to human-like intelligence but never quite close enough.

Which is not to say it won't be useful, just not human-like.

You see a similar sort of asymptotic relationship with fully autonomous road vehicles. It looked like it might be a solved problem back in 2018, and in a lot of ways the best systems seem to be 98% there. But that last 2% makes all the difference in the real world.
Agree, and once we solve one the other will follow.
 
Upvote
0 (0 / 0)

jesse1

Ars Scholae Palatinae
624
I fear for the future that these greedy mad scientists and their enablers are preparing for us.
The issue is people. This technology, AGI or not, will slowly replace jobs, but the issue is that people will scapegoat each other (immigrants, etc.) while dehumanizing the losers in the displacement (see how we treat the homeless, especially if those people succumb to their vices after losing to the market). The fear for the future is not due to technology but rather how we will clearly react to it.
 
Upvote
4 (4 / 0)

Psyborgue

Ars Praefectus
4,416
Subscriptor++
you have no choice but to rewrite the rules of society
That isn't the only choice. The machine can be used to upend society, even accidentally. Given that the rate of human "progress" will cause extinction for everything and everyone, this might be the more desirable outcome.
 
Upvote
-4 (0 / -4)
There are exceptions. Dogs can predict where the ball is going to go and they don't need to understand gravity or physics. Intuition and experience can suffice.

Edit: Reading the last few pages, I am surprised nobody has made this argument. Does nobody play fetch with their puppers? Bad humans! Bad!
May I remind you humans did OK even before we discovered the current theory of gravity. When you play ball it's not your theoretical knowledge of physics that makes you a good player. But it will be when doing theoretical physics. AGI will need to cover both, to be human-like, IMHO.
Theoretical physics is not sufficient in a real-world setting in nature. Too much chaos/data to process.
 
Upvote
2 (2 / 0)

LaunchTomorrow

Wise, Aged Ars Veteran
114
Hold on here. Autonomous cars are not safe. They might be safer, but they are not safe. You're describing a physical space (physicist here) where each object in the space is measured, assigned a potential velocity vector that they are either demonstrating or could be demonstrating before the next sample, which produces a potential impact cone with that object. This is done for everything (pedestrians, cars, cyclists, dogs, kids, etc.) in the scene and for the vehicle being driven with an additional calculation for how quickly the vehicle could slow down/turn/etc if needed. And the vehicle would proceed ensuring that none of those potential impact cones intersect. And you're arguing that self-driving vehicles do this. And they don't. They cheat. That's part of why this is hard to solve.

If you do the 'proceed in a manner that guarantees no collision' approach, the self-driving vehicle in an urban environment would struggle to exceed 10 mph, and often would not be able to move at all. So in order to make an autonomous vehicle actually useful to anyone, it has to cheat and make assumptions that the other things in the scene are behaving in a similar manner, and that they're trying to avoid the collision just like the self-driving car is. And it needs to do that contextually based on who has right of way, and so on. You can probably trust that the adult pedestrian is better at not walking into traffic than the child, or the dog, or the person using a cane, and if the model can't strictly avoid all possible interactions and still be useful, it has to make assumptions, use some kind of judgement. What assumptions does it make about a potential vehicle behind a building or truck where it doesn't have vision?

This is what motorists instinctively do - you don't account too much for the motorist 3 lanes over when changing lanes because you assume they are also paying attention and won't aim for the same spot that you are. And usually that's right, but not always. Sometimes you crash. That's what autonomous vehicles are also doing. And some of these driving conventions are local. Japanese parents need to teach their kids to be afraid of American motorists because Japanese drivers don't drive near pedestrians, or at least children, but American motorists will brush past them expecting the child is focused on avoiding the car. So an autonomy model would need to work differently in Japan than in the US until we get kids in both places to behave the same way, or we make the most conservative model, which might grind to a halt in a place like India or Vietnam.

So, these things are not as safe as you think they are. Again, they may be safer than human drivers, but they aren't safe, and a system that is safe starts to look an awful lot like public transit.
They are safe. Waymo has yet to have a major accident and it's driven millions of miles at normal speeds. There have been plenty of times where yes, the car stops because it's confused or it ends up in some other degenerate state, but that is definitely the exception rather than the norm.

I understand you want to use a stricter definition of safe, but that is not what I meant. What I was trying to get across was that their improved reaction times are in addition to their predictive capabilities. The faster you can react, the less you have to predict, and the worse your predictions can be without consequences. Furthermore, the main point I was arguing to the other person was that AVs do not need to be AGI in order to be safe/generally useful. If you only really need to understand how people move about a city street and predict their movements in a halfway decent manner, then you clearly do not need to be able to discuss abstract stuff like your favorite piece of art, how to make tools, etc.

Thus, an autonomous driving AI must be a subset/subcapability of an AGI, but almost by definition it does not need to be generally intelligent, simply a specialist in driving and all things directly related to driving.
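For what it's worth, the quoted "potential impact cone" check can be sketched in a few lines (all objects, speeds, and the time horizon below are invented; real planners are far more elaborate):

```python
import math
from dataclasses import dataclass

@dataclass
class Obj:
    x: float          # position (m)
    y: float
    vx: float         # current velocity (m/s)
    vy: float
    max_speed: float  # worst-case speed it could reach within the horizon

def predicted_pos(o, t):
    return (o.x + o.vx * t, o.y + o.vy * t)

def cones_intersect(ego, other, horizon=3.0, dt=0.1):
    """True if the ego's and the other object's worst-case reachable sets
    (discs growing at max_speed around the predicted position) ever overlap
    within the time horizon -- the strict 'no possible collision' test."""
    steps = int(horizon / dt)
    for i in range(1, steps + 1):
        t = i * dt
        ex, ey = predicted_pos(ego, t)
        ox, oy = predicted_pos(other, t)
        if math.hypot(ex - ox, ey - oy) <= (ego.max_speed + other.max_speed) * t:
            return True
    return False

# A pedestrian on the sidewalk 15 m ahead who *could* step out at 2 m/s:
ego = Obj(0.0, 0.0, 10.0, 0.0, max_speed=12.0)
ped = Obj(15.0, 3.0, 0.0, 0.0, max_speed=2.0)
print(cones_intersect(ego, ped))  # True -- a strict planner would have to brake
```

Run over every pedestrian, cyclist, and occluded gap in an urban scene, the strict test above fails almost continuously, which is the quoted post's point about why a "guarantee no collision" rule would pin the car near walking speed.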
 
Last edited:
Upvote
-1 (2 / -3)
There are exceptions. Dogs can predict where the ball is going to go and they don't need to understand gravity or physics. Intuition and experience can suffice.

Edit: Reading the last few pages, I am surprised nobody has made this argument. Does nobody play fetch with their puppers? Bad humans! Bad!
What do you think that's born from? What do you think intuition and experience are? They're from observed behavior. Behavior that is learned, that is understood. You spent your entire childhood learning and understanding the laws of the physical world, whether you could consciously explain them or not. Play fetch with a puppy and their actions and attempts are almost comically inept. Because they haven't learned. They don't understand. Yet. An older dog has learned and understands. To the extent that they have observed, that they have learned and understood. Throw an object that doesn't conform to their understandings of how objects move (like a boomerang or a yo-yo), and you'll see that they don't understand. They haven't observed. They haven't learned. Yet. And these are simple cause-and-effect relationships.

It's the same with human behavior, except human behavior is vastly more complex and more difficult to analyze because it has so many constituent parts (that aren't well understood). Entire disciplines are dedicated to studying it, people spend their entire lives researching it, and human behavior is still only vaguely understood. Only enough to make generalized statements that are more often true than false. Rough guidelines that mostly encompass the 'average.'

People don't conform to any absolutes. There are no hard rules. Their behavior does not follow simple cause-and-effect relationships. They're not always rational (or even mostly rational). They get lazy. They get tired. They get distracted. They make mistakes. They misjudge or misunderstand. Fortunately, humans are also soft and squishy, so those mistakes between humans are generally harmless, resulting in some bruises and scrapes at worst. But a 10-ton autonomous semi-truck is not soft or squishy. An automated kitchen is not soft or squishy. An autonomous factory is not soft or squishy. When those systems make 'mistakes,' when they 'misjudge' the behavior of humans in proximity, people tend to die. To safely operate around people, an AI needs to be able to predict human behavior, to recognize unsafe actions in advance, and it can't do that without first understanding human behavior. And to understand human behavior, you need to be able to think like humans...otherwise it's just guess work. Best approximations that are true more often than false.
 
Last edited:
Upvote
-8 (0 / -8)
Just to refute your claim -- they could NOT predict the behavior of Mercury. They made rough estimates based on what they observed and understood, but those estimates were never accurate. It still took considerable time to precisely locate Mercury even with those estimates. Because they couldn't predict its movements. Because they didn't understand its movements.
Of course they were accurate. You can measure things very accurately even when you haven't worked out the underlying theory. The precession of the perihelion of Mercury is very reliable, very accurate, and allows for precise predictions. It's just not explainable with Newtonian mechanics. We didn't know why it precessed that way until GR was worked out, and that precession was one of the ways GR was confirmed by observation.
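For concreteness, the relativistic part of that precession comes from a single closed-form expression; a quick back-of-the-envelope run with standard published constants (rounded here) reproduces the famous ~43 arcseconds per century:

```python
import math

GM_sun = 1.32712e20      # gravitational parameter of the Sun, m^3/s^2
c      = 2.99792458e8    # speed of light, m/s
a      = 5.7909e10       # semi-major axis of Mercury's orbit, m
e      = 0.2056          # orbital eccentricity
period_days = 87.969     # Mercury's orbital period

# GR perihelion advance per orbit, in radians: 6*pi*GM / (c^2 * a * (1 - e^2))
dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

orbits_per_century = 36525 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.1f} arcsec/century")  # ~43.0
```

The observed precession is far larger overall (around 5,600 arcseconds per century, mostly from the precession of the equinoxes and perturbations by the other planets); the leftover ~43 arcseconds is the piece Newtonian mechanics couldn't account for and GR did.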

The same thing happened in chemistry. Based on observation, chemists worked out a classification of elements. From observation they worked out molecular weights. The mole was established in the 19th century. This classification allowed chemists to make predictions about how elements would react. From observation they could work out to great accuracy how many grams of each reactant was needed to completely consume the reactants in a reaction. Atomic weights were worked out in the very early 19th century. They were able to do this through the entire industrial revolution, through the development of new explosives, metallurgy, polymers, and organic chemistry - and they did all of this, with various laws established, stoichiometry, and so on, into the early 20th century before the theory of the atom was even established, let alone confirmed. Their observations of chemical behavior gave clues to how the atom was structured, and when the structure of the atom was worked out it improved those chemical predictions, explained why certain predictions failed, and so on.

We had over a century of really solid chemistry, based on an increasingly accurate set of predictive models, none of which relied on actually understanding if atoms existed, how they interacted, how bonds actually formed, electron energy levels and so on.

We have very modern examples. Ritonavir is an HIV drug introduced in the late 90s. When combined with another drug you get Paxlovid - for treating Covid. After it was introduced on the market they discovered that it sometimes formed in a different structure (form II vs the original form I), and the other structure had a different solubility so it wasn't taken up by the body in the same way and was ineffective. Chemists discovered that once they had produced any form II in a lab, they could never produce form I ever again. And chemists who had produced any form II who entered a lab that never produced form II would contaminate that lab in a way that it could never produce form I again. They had to pull the drug from the market because they had mysteriously lost the ability to manufacture it. After some work, biochemists figured out a process to reliably produce only form I and the drug was reintroduced. Chemists have no idea why the presence of form II completely prevents the future production of form I. They have some theories, but nothing proven out. But they do have a few very reliable predictions they can make:
1) production by this new method will only produce form I
2) production using other methods that did produce form I could never produce it again once form II was observed

The drug, which was one of the most effective AIDS drugs at the time, had to be pulled from the market for years while they figured that out.
 
Upvote
6 (7 / -1)
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Problem is, a lot of capitalists have a financial interest in convincing everyone that it is.
 
Upvote
2 (2 / 0)

JoHBE

Ars Tribunus Militum
1,579
It is also comparing to a human brain, the most (one of?) complex brains on the planet. Biology can do AGI with far fewer synapses, as evidenced by all the less complex brains that exist.

So we are already at the point where LLMs are more complex than working biological brains when compared this way. By this alone we can see that there is something major LLMs are missing besides scale for AGI.
It still baffles me every day how anyone can think that we could take it for granted that we will "obviously" create human-level AGI at some point. IMO, some level of appreciation of how biological evolution has worked over the last 2 billion years or so should make one very humble. There ISN'T some conceptual fundamental blueprint available, nothing to really "make sense of". At best, The Mother Of All Spaghetti Codes, to hopefully partly untangle. Combine that with the virtual impossibility of "scanning" a living brain (for some level of reverse-engineering) and you start to realize how formidable (or even impossible) the task might be.

It's hard to express concisely, but it might be that there is no "shortcut" around suffering the absolute necessity of actually surviving in The Real World in order to eventually evolve some GI (which at that point doesn't deserve the "Artificial" anymore). I.e.: shortcutting/accelerating this might make as much sense as trying to speed up the development of a human baby by 100x.
 
Upvote
1 (1 / 0)

Bongle

Ars Praefectus
4,149
Subscriptor++
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
I can imagine an attempt at AGI by using an LLM as the "processing" block. At least two more blocks would be needed though, a "state" block and an executive/goal/motivation/emotion block. Connect them all together and then get them to advance the state in the direction dictated by the goals. I'm sure the rest is just details
LLMs could probably play a part in a form of intelligence, in the sense of "Reddit Plays Pokemon". Each prompt, give your current situation, a goal, and a constrained list of options, then "ask" the LLM what it should do next.

Given the low performance of Reddit Plays Pokemon, the very high latency of each iteration, and the likelihood of hallucination, I think that'd produce something about as smart as an ant, but it'd be something.

The hard part now is all the other bits: perceiving the world, keeping the right stuff in your "current state" prompt, having the right options.
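For what that loop might look like, here's a minimal sketch (the `ask_llm` call is a stand-in for whichever model API you'd actually use; the state, goal, and option names are all invented):

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, a local model...)."""
    raise NotImplementedError

def agent_step(state: dict, goal: str, options: list[str]) -> str:
    # Constrained prompt: current situation, the goal, and the legal moves only.
    prompt = (
        f"Situation: {state}\n"
        f"Goal: {goal}\n"
        f"Pick exactly one of: {', '.join(options)}\n"
        "Answer with the option text only."
    )
    choice = ask_llm(prompt).strip()
    # Guard against hallucinated actions by falling back to a safe default.
    return choice if choice in options else options[0]

def run(state: dict, goal: str, get_options, apply_action, done, max_steps=100):
    for _ in range(max_steps):
        if done(state):
            break
        action = agent_step(state, goal, get_options(state))
        state = apply_action(state, action)   # the "state" block advances here
    return state
```

The latency and hallucination problems show up right in `agent_step`: every tick is a full model round-trip, and the `choice in options` check is quietly doing a lot of work.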

 
Upvote
0 (0 / 0)
We also don't know at all how we humans think and reason, what makes us special compared to animals and what true intelligence even is.
This is true, we don't. But that actually makes it harder to tell whether machines have intelligence.

One day we will invent AGI. It might be soon or it might be 30 years from now, using entirely different machines and different algorithms. This is because, fundamentally, a brain is just a computer. Of that we're pretty certain.

When we do invent AGI, I bet you that we still won't know how animal intelligence works. Because that's actually a harder problem to solve. And because we don't understand animal intelligence, we won't know when we've invented AGI.

People will just argue about it for some time. Using arbitrary thresholds. Until, eventually, it is obvious to most people.
 
Last edited:
Upvote
-5 (2 / -7)
So we are already at the point where LLMs are more complex than working biological brains when compared this way. By this alone we can see that there is something major LLMs are missing besides scale for AGI.
I don't think we would have any idea whether an LLM is more "intelligent" than a goldfish.

In fact that is the POV of the AI doomers like Hinton. That we invented something with an IQ of 50 or whatever.
 
Last edited:
Upvote
1 (1 / 0)
Sorry, what exactly did OpenAI invent?
Didn't they just put together a few demos using Google's transformer tech?
Why are they supposed to release AGI if they actually haven't invented anything new so far?
Ilya Sutskever is the name you're looking for. He invented stuff at Google. Then cofounded OpenAI, where he led the way. Then, of course, he was fired. It's a shame nobody knows who the fuck he is, but he's not a billionaire, so it doesn't matter.

These days, there's enough money around to attract other brainiacs, but yes, OpenAI is no longer special, in terms of scientists or funding.
 
Last edited:
Upvote
0 (1 / -1)

atmartens

Ars Centurion
379
Subscriptor
I'm not sure why we care about whether or not Chat GPT can become AGI. I use it and Claude to write code for me and it works. I can ask it questions about APIs and it tells me what I want to know, rather than me spending hours going through documentation which is often mediocre. It saves me time; it's just another tool in the toolbox.
 
Upvote
-5 (2 / -7)
There are a few things to unpack here. As for the question of "why are senior leaders, and less senior people as well, leaving?" I think the answer lies in the valuations and salaries provided by well-backed new AI companies. When you can ensure your financial stability for the rest of your life with a mid-7-figure guarantee in one fell swoop? Intellectual stimulation has to be put in its rightful place, ESPECIALLY when the place you are going is ALSO doing very interesting things.

Secondly, in terms of AGI? This is intriguing. I'm working on my master's in AI (my first was in tech management), with an undergraduate degree in psychology, and I work in an AI (ish) field, so I'm pretty familiar with the published state of the art (which may be quite far behind the closed-door state of the art).

Nobody in AI really knows when we will achieve AGI. The near-universal opinion seems to be that we will, but the path forward is unclear. Part of it is that the "trick" of AGI is "consciousness", and it is really hard to model something that you don't understand in the real world. In humans, we have several major theories of consciousness, like Global Neuronal Workspace Theory (GNWT), Integrated Information Theory (IIT), and Attention Schema Theory, that all have "some" evidence of their validity, but nothing close to clarity. And each of these theories is best modeled using a different computational approach.

Michael Graziano's Attention Schema Theory and GNWT both overlay well on the amazing strides made by LLMs, but the leap from "architecturally similar" to "Hi, I'm a conscious AI!" is huge.

Also, it is important to remember that we are still VERY early in the evolution of LLMs. We are making strides literally every day. Take a look at marktechpost.com; almost every day they are publishing articles about a new or creative use or architecture of LLMs in particular. We have just begun to grasp the different ways we can leverage and improve LLMs, and if indeed AGI emerges from LLMs (which is by no means certain), given our rudimentary understanding of LLMs, it is highly unlikely that anybody knows what the underlying computational architecture will be that someday wins the AGI race.
 
Upvote
2 (2 / 0)
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI. Purely anecdotal, but from daily use of Claude and ChatGPT, I don't find Claude to be any more safe and secure in its output than ChatGPT.
I do have safety concerns about ”AI” now, but they have nothing to do with SkyNet/Colossus/WOPR type existential threats. My concerns are related to current uses of ML applications by police, military, governments, and corporations - not just them putting way more faith in those applications than they deserve, but the level of bias in the applications’ training. These things are causing harm now, and aside from sanctimonious words, there doesn’t appear to be much being done about that. If the AI companies do achieve some semblance of AGI, I’d be more afraid that the current problems would be exacerbated than that there would be a qualitative change in the concerns, at least for quite a while.
 
Upvote
6 (6 / 0)

stormbeta

Ars Scholae Palatinae
933
I understand this; these AI systems don't deal well with things they didn't see in training. But how many humans have had a serious accident while they were driving a car? What we really need to compare is a figure like miles per serious accident, averaged over all American (for example) drivers vs. some autonomous system. I just haven't seen that comparison.

If I'm wrong, and the comparison is out there, please point me to it.

Another interesting comparison would be drivers with a given blood alcohol level vs. these autonomous systems. There are lock-out systems that won't allow a human to drive if they breathe into a device and the device detects alcohol above a certain level; this is sometimes used for drivers who have had DUIs. Perhaps instead of preventing them from driving the car could take over.
The problem is that it's tricky to make such comparisons accurately - e.g., people even now claim they're safer based on things like miles driven (in this specific part of Phoenix, AZ, that doesn't have weather). And that's not accounting for other issues, like the data not including whether a remote operator was standing by to take over, or how the car behaving differently than a human driver would creates its own set of predictability issues for other drivers/pedestrians/cyclists/etc.
 
Upvote
3 (3 / 0)
So I think AGI is coming; who knows what the timeline will be. What I am very confident in right now is that we will not understand the space well enough to be equipped to deal with it safely when it does emerge. And to that end, I believe this is the core cause of all of the departures at OpenAI. They are running fast without taking the time to understand how to have AGI happen in a SAFE way. That seems to be represented at least lightly in every departure note I have seen.
I think it is far more likely that the rats are fleeing the sinking ship. Doomsday warnings about safety have always been an integral part of OpenAI's hype machine, they have cried wolf too often to be taken seriously. Remember how they "did not dare" to release GPT-2 because it was "too dangerous"? Absurd in hindsight, but it sure got them into the media. And they keep doing this. The safety warnings are useful for OpenAI in many ways:
  • They make OpenAI look objective and concerned. ("Wow, they are criticizing their own product!")
  • They channel perception of OpenAI's work into two alternatives: "It is great and will be awesome!" vs "It is dangerous, because it is too good!" People who do not buy into the positive hype may still fall for the false dichotomy and buy into the negative hype, either way OpenAI comes off as highly competent - much better than a perception that OpenAI's work is stagnating, still not anywhere good enough and that OpenAI has no clue how to make it better.
  • They provide an excuse against criticism of stagnation: "OpenAI probably has super-advanced stuff in their secret labs, they just can't show it because it is too dangerous (==too good)!"
  • People who buy into the negative hype may find OpenAI particularly valuable and investment-worthy: If the rise of AI is inevitable, then they prefer to have the "level-headed" and "safety-minded" OpenAI shepherd us into this future, rather than some profit-driven company or China.
But now skepticism is growing. Model sizes and costs are increasing exponentially, while performance improvements are... difficult to quantify: certainly not zero, but not at all commensurate either, and the same fundamental flaws known since 2018 remain. And there is no killer-app in sight yet. The cash influx may dwindle, and it may be time to find greener pastures.
 
Upvote
6 (6 / 0)

aapis

Ars Scholae Palatinae
1,096
Subscriptor++
If fancy autocorrect were anything its (paid or paying) supporters said it was, someone would have found a use case for it beyond juicing Twitter's usage stats.

I’m sorry that you either get paid to do nothing of value, or that you pay lots of money for something which neither you nor Sam Altman can find value in. I, too, wish you’d spend your time and money better.
 
Upvote
-1 (0 / -1)

Snark218

Ars Legatus Legionis
29,677
Subscriptor
Sure, it’s called being ignorant of the long road to getting here.

People who never used computers thought the same of the Internet in 1995.
And you're ignorant of the long road yet to go. Google the Pareto principle and contemplate it before you try to condescend to me again.
 
Upvote
0 (2 / -2)
I understand this; these AI systems don't deal well with things they didn't see in training. But how many humans have had a serious accident while they were driving a car? What we really need to compare is a figure like miles per serious accident, averaged over all American (for example) drivers vs. some autonomous system. I just haven't seen that comparison.

If I'm wrong, and the comparison is out there, please point me to it.
Tesla does this regularly. It's a bad comparison because (A) the system(s) turn off when the driving gets hard and (B) humans sometimes have to grab the wheel anyway. I think they are in fact reasonably safe, but not at all self-driving.

Waymo does this also. They make a good case that their cars are safer, with around 20 million miles of safe driving so far. They have remote operators, but (unless they are massively lying) those operators aren't grabbing the wheel or slamming the brakes.
 
Last edited:
Upvote
2 (2 / 0)
Of course they were accurate. You can measure things very accurately even when you haven't worked out the underlying theory. The precession of the perihelion of Mercury is very reliable, very accurate, and allows for precise predictions. It's just not explainable with Newtonian mechanics. We didn't know why it precessed that way until GR was worked out, and that precession was one of the ways GR was confirmed by observation.

The same thing happened in chemistry. Based on observation, chemists worked out a classification of elements. From observation they worked out molecular weights. The mole was established in the 19th century. This classification allowed chemists to make predictions about how elements would react. From observation they could work out to great accuracy how many grams of each reactant was needed to completely consume the reactants in a reaction. Atomic weights were worked out in the very early 19th century. They were able to do this through the entire industrial revolution, through the development of new explosives, metallurgy, polymers, and organic chemistry - and they did all of this, with various laws established, stoichiometry, and so on, into the early 20th century before the theory of the atom was even established, let alone confirmed. Their observations of chemical behavior gave clues to how the atom was structured, and when the structure of the atom was worked out it improved those chemical predictions, explained why certain predictions failed, and so on.

We had over a century of really solid chemistry, based on an increasingly accurate set of predictive models, none of which relied on actually understanding if atoms existed, how they interacted, how bonds actually formed, electron energy levels and so on.

We have very modern examples. Ritonavir is an HIV drug introduced in the late 90s. When combined with another drug you get Paxlovid - for treating Covid. After it was introduced on the market they discovered that it sometimes formed in a different structure (form II vs the original form I), and the other structure had a different solubility so it wasn't taken up by the body in the same way and was ineffective. Chemists discovered that once they had produced any form II in a lab, they could never produce form I ever again. And chemists who had produced any form II who entered a lab that never produced form II would contaminate that lab in a way that it could never produce form I again. They had to pull the drug from the market because they had mysteriously lost the ability to manufacture it. After some work, biochemists figured out a process to reliably produce only form I and the drug was reintroduced. Chemists have no idea why the presence of form II completely prevents the future production of form I. They have some theories, but nothing proven out. But they do have a few very reliable predictions they can make:
1) production by this new method will only produce form I
2) production using other methods that did produce form I could never produce it again once form II was observed

The drug, which was one of the most effective AIDS drugs at the time, had to be pulled from the market for years while they figured that out.
So your evidence that you don't need to understand something to predict it is a list of historical examples of people gaining an understanding of something through methodical observation and then using that understanding to predict the behavior? Are you serious with this shit??
 
Last edited:
Upvote
-5 (0 / -5)

Snark218

Ars Legatus Legionis
29,677
Subscriptor
I'm not sure why we care about whether or not Chat GPT can become AGI. I use it and Claude to write code for me and it works. I can ask it questions about APIs and it tells me what I want to know, rather than me spending hours going through documentation which is often mediocre. It saves me time; it's just another tool in the toolbox.
I don't care, personally. But OpenAI is demanding enormous resources - financial, computational, natural - and justifying that with a very hard sell that they're on the road to AGI. If they're just developing a tool to speed up coding workflows and improve natural language interfaces, that's cool I guess, but does it actually justify eleven and a half billion dollars in investment? Does it justify using half a million kilowatt-hours of electricity a day, enough to power a decent-size town, with commensurate carbon emissions? And if there's no realistic pathway from LLMs to AGI, why are we getting the hard sell?
 
Upvote
6 (6 / 0)
Yeah, I'm sure some scientist somewhere disagrees.
It's not that scientists 'disagree'; it's that, at the moment, the computational theory of mind is non-falsifiable. It's not even science.

We're only at the very beginning of understanding how cognition works. Works like this are just starting to frame the problem. If your definition of a computational mind includes the ability to solve computationally intractable problems, then there must be some aspect of biological brains that is not present in Turing-machine-like computers.
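To make "computationally intractable" concrete, brute-force TSP is the textbook example, since the number of possible tours grows factorially (toy sketch below; the distance matrix is made up):

```python
import math
from itertools import permutations

def brute_force_tsp(dist):
    """Exact shortest tour by trying every ordering -- O(n!) work."""
    n = len(dist)
    best = math.inf
    for perm in permutations(range(1, n)):          # fix city 0 as the start
        tour = (0,) + perm + (0,)
        best = min(best, sum(dist[a][b] for a, b in zip(tour, tour[1:])))
    return best

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(brute_force_tsp(dist))  # 18 -- trivial at 4 cities

for n in (5, 10, 20, 60):
    print(n, "cities ->", math.factorial(n - 1), "tours to check")
# 60 cities is already ~1.4e80 tours, more than the number of atoms
# in the observable universe; exact search stops being an option long before that.
```

Whether brains actually solve such instances optimally, rather than heuristically, is the kind of falsifiable question that framing makes possible.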
 
Upvote
4 (4 / 0)

Snark218

Ars Legatus Legionis
29,677
Subscriptor
Yeah, I'm sure some scientist somewhere disagrees.
Sorry, but subject matter expertise counts for more than overly reductive tech-accelerationist optimism. We do not in fact understand how cognition and consciousness work in the first place, so cognitive science cannot conclude that cognition is a purely computational process - and will not for a very long time. It is wildly presumptuous and completely premature to assume cognition is a computational process.
 
Last edited:
Upvote
5 (6 / -1)
It's not that scientists 'disagree'; it's that, at the moment, the computational theory of mind is non-falsifiable. It's not even science.
Presumably they have opinions anyway. And presumably use those opinions to inform their hypotheses.

You seem to have found a random prof that disagrees, and wants to prove it.
We're only at the very beginning of understanding how cognition works. Works like this are just starting to frame the problem. If your definition of a computational mind includes the ability to solve computationally intractable problems, then there must be some aspect of biological brains that is not present in Turing-machine-like computers.
Is she trying to say that humans are good at solving traveling salesman problems with lots of stops?
 
Upvote
-3 (0 / -3)
Presumably they have opinions anyway. And presumably use those opinions to inform their hypotheses.
facts don't care about your feelings, and opinions don't pass peer review. It wouldn't be the first time that when vibes meet reality, reality wins
Is she trying to say that humans are good at solving traveling salesman problems with lots of stops?
We have a pretty well developed understanding of computational complexity at this point, and the text is building a factual, falsifiable basis for tying our understanding of computation to our (much more limited) understanding of cognition. If you want to prove that they are the same thing, this is the kind of place you would start.
 
Upvote
7 (7 / 0)
facts don't care about your feelings, and opinions don't pass peer review. It wouldn't be the first time that when vibes meet reality, reality wins
If your point is just that scientists are wrong sometimes, I agree. But I still think you're better off polling scientists than not polling scientists.

Otherwise you are just left with your own vibes. Which will tend towards biases like rejecting global warming and evolution.
 
Last edited:
Upvote
-6 (0 / -6)
If your point is just that scientists are wrong sometimes, I agree. But I still think you're better off polling scientists than not polling scientists.
popularity polling an opinion among scientists is beyond useless. We don't base scientific progress on vibes, we base it on peer reviewed, reproducible results. I think you are, in fact, much worse off if you base legislation or investment on the opinion of scientists, when we're in the middle of a hype bubble, and there is no falsifiable science to be found.

You've tried to transform argument from authority from a fallacy to sound advice.
 
Upvote
6 (6 / 0)
We have a pretty well developed understanding of computational complexity at this point, and the text is building a factual, falsifiable basis for tying our understanding of computation to our (much more limited) understanding of cognition. If you want to prove that they are the same thing, this is the kind of place you would start.
Also, I don't see why this would be.

If I were able to prove to you the brain isn't doing some computationally impossible thing, would that prove to you the brain is a computer?
 
Upvote
-4 (1 / -5)
Also, I don't see why this would be.

If I were able to prove to you the brain isn't doing some computationally impossible thing, would that prove to you the brain is a computer?
No, the opposite. If you found the brain efficiently doing something computationally intractable, then you might either discover that P really equals NP, or that Turing-machine computation doesn't describe how brains work, or who knows what?

The thing is, though, that this is falsifiable science, not opinion. The result, either way, would be amenable to review and reproduction.
 
Upvote
4 (4 / 0)