Major shifts at OpenAI spark skepticism about impending AGI timelines

No, the opposite. If you found the brain efficiently doing something computationally intractable, then you might discover that P really equals NP, or that Turing-machine computation doesn't describe how brains work, or who knows what?

The thing is, though, that this is falsifiable science, not opinion. The result, either way, would be amenable to review and reproduction.
Okay, yes.

If you want to prove a brain is not a computer, you come up with something a brain can do, but a computer can't.

If you want to prove that a brain is a computer, you get a computer to do that thing.

That's basically how the argument has played out for 70 or so years, with a lot of different definitions of what that thing is.
 
Upvote
-1 (0 / -1)

wildsman

Ars Scholae Palatinae
664
The nematode C. elegans has 302 neurons and lives in the wild, feeding on certain bacteria, and displays waking and sleeplike states. I wonder how an LLM model with 302 parameters would function.
This is a red herring. Any basic AI can easily imitate the functions of C. elegans almost exactly - except the part of it that evolves. But that's more a function of natural selection than of C. elegans itself.
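For a sense of scale, here's a throwaway sketch of what roughly 302 parameters buys you in a network (the layer sizes are arbitrary, picked only to land near that count; this has nothing to do with actual worm neurons):

```python
# Hypothetical sketch: a feed-forward net with roughly 302 parameters.
# Layer sizes are arbitrary; the point is how tiny such a model is.
import numpy as np

rng = np.random.default_rng(0)

# 10 inputs -> 14 hidden units -> 10 outputs
W1, b1 = rng.normal(size=(10, 14)), np.zeros(14)  # 140 + 14 = 154 params
W2, b2 = rng.normal(size=(14, 10)), np.zeros(10)  # 140 + 10 = 150 params

def forward(x):
    h = np.tanh(x @ W1 + b1)  # single nonlinearity, nothing recurrent
    return h @ W2 + b2

print(sum(p.size for p in (W1, b1, W2, b2)))  # 304, ~one per worm neuron
```

A model that small can only represent the crudest input-output mappings, which is sort of the point of the comparison: 302 neurons, each with its own internal dynamics and synapses, are not remotely equivalent to 302 scalar weights.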
 
Upvote
-3 (1 / -4)
So your evidence that you don't need to understand something to predict it is a list of historical examples of people gaining an understanding of something through methodical observation and then using that understanding to predict the behavior? Are you serious with this shit??
Yes, because correlation is not causation. They didn't know why those elements reacted with each other, just that they reliably did, and when they observed that other elements had similar properties they predicted that they would react in a similar way, and they did.

Zoologists examined animal species based on similar characteristics and developed a hierarchical classification (1758) long before they had any understanding of genetics or genetic drift, even before Darwin's observations on how environment might cause specialization. The end result was a classification that was pretty good at predicting which species were in fact related (all of Darwin's finches, for example) once we understood genetics and DNA and could verify how close their DNA was to each other's. But it didn't get everything right - hedgehogs and echidnas may have a lot of similar characteristics but they're wildly different genetically, both having evolved toward a similar role in the ecosystem from different starting points. Overall, however, yeah, it turns out the prediction that humans and chimpanzees are pretty closely related was correct. Most of the predictions turned out to be correct. They had no fucking idea why they were correct, just that the correlation of features was sensible, and when a new species was discovered they dutifully slotted it in, argued over disagreements (the platypus was a tricky one) and most of the time predicted correctly.

I don't know why you are arguing this point so badly. Social scientists pretty much fully live in a space where they do prediction with at best a tenuous grasp of causation. Correlation predictions aren't only a thing, they are by far the most common kind of prediction. "Raising the prime lending rate lowers inflation" is more correlation than causation. There are causal elements in there (making access to capital more difficult lowers consumption, which lowers prices) but there are also other elements, as in the most recent spate of inflation where corporations saw an opportunity to raise prices to boost profits and have consumers blame the government rather than them - it had less to do with access to capital. And eventually people figured out that In-N-Out still charging $8 for a combo meal while McDonald's was charging $11 probably meant that McDonald's was just being greedy (their CEO said as much in an earnings report). That part wasn't causal. Marketing is entirely correlation predictions. Political science is entirely correlation predictions.

And to your original point - animals routinely predict human behavior. Your dog will know, based on time of day and your behavior, that they are about to be fed and will get very excited. Simple pattern recognition is not a higher-order cognitive function - it's a pretty low-level survival one that you find in most species to some degree. I have a possum who lives in my yard and has determined that I'm not a threat to it and will walk straight under my chair because I spend so much time on my patio. But if my wife comes out the possum will retreat. That's a prediction - I am safe to walk near, but nobody else is. The possum does not need to be as smart as me to do that. What's more, that's an interaction that is impossible to predict with complete accuracy, because I have agency and therefore my behavior cannot be completely predicted. I may get mad one night for no reason and try to hurt the possum. Even I can't predict that perfectly. And the possum's behavior is the same - it may get angry one night and try to hurt me when walking under my chair. Right now each of us is predicting that the other won't do these things, to the same degree of accuracy, and neither of us can predict it with complete accuracy.

Read what I wrote above - I'm not arguing that autonomous cars are safe. You're at least arguing that, given enough computing capability, they can be safe. I'm arguing they can NEVER be safe - not because of the degree of computing, but because in order for the infrastructure as we have built it to be in service to people, it requires that it be unsafe. It would be safer if we slowed everything down, but then it would take too long to get places and it wouldn't serve our needs. So we trade danger for efficiency to a degree that is unavoidable even for a perfect machine. And if you want safety, you have to reconsider the infrastructure itself. Cars as a primary means of transportation require you to build the environment in an unsafe way in order to get the necessary efficiency out of them. It has nothing to do with how good the compute is. Trains hit cars at level crossings not because the train engineer is irresponsible but because we didn't want to pay the money necessary to grade-separate the road from the tracks. That's the only answer. It is a failure of infrastructure, not a failure of individual agency, whether you substitute a computer in for the human or not.
 
Upvote
4 (4 / 0)
Not really. Even then scientists were predicting warming. Though there was no wide consensus.
There was consensus among scientists who were studying the topic. The issue was that most were not, and still weighed in on the subject with an "I don't know" - which meant "that's not my field", but which we retcon to mean "I've studied the evidence and it's inconclusive".
 
Upvote
5 (6 / -1)
Okay, yes.

If you want to prove a brain is not a computer, you come up with something a brain can do, but a computer can't.
A brain can grow, and it's unclear whether the growth of the brain is related to learning/activity. It is unlikely that computation will be able to replicate the conditions in a meaningful enough way to ever do the experiment.
 
Upvote
1 (1 / 0)
What do you think has changed in the last 50 years that would make electric cars unsustainable?

I do not like cars, but most folks seem to think they are great.
In the last 50 years we stopped ignoring at least some of the environmental impacts of the car. There are plenty we continue to ignore. Additionally, the recurring costs of building around cars are increasingly being felt.

Most people think they're great because they have a job they need to get to, and we've culturally accepted that spending $47,000 (median) is a reasonable thing to do for that job. Marketing helps us rationalize that decision by making the car an extension of your personality. So long as American society prevents people from seeing a viable alternative, what choice do they have but to love their car? It's so flexible that even if they refuse to use it to get to a job, they can still live in it when they lose their home.

But it's pretty clear that the secondary costs of car dependency are rising unaffordability and environmental unsustainability. Getting rid of the tailpipe emissions helps, but you're still left with low density due to single-family homes that are only feasible due to cars, with high heating/cooling costs, high construction costs, and which only encourage increased consumerism. You need to pave more land area for parking than you reserve for housing, which itself has huge environmental costs and incurs a large tax on goods and services (how do you think that land is being paid for, etc.). In much of the US people are now spending more money on their car than their home. In other parts of the country housing prices are so high in part because all available land has been developed, and these areas are having to reconfigure at high cost to secure new land for housing, usually by either degrading the utility of cars by removing parking or by further increasing the cost of goods and services by replacing parking lots with structures. And that asphalt and concrete has its own environmental footprint, which is expensive to maintain. Road maintenance costs are typically higher than road construction costs, and the move to larger and heavier vehicles means that maintenance (and the environmental cost associated with it) needs to be more frequent.

Cities are increasingly struggling with their budgets because they never anticipated that ongoing road costs would be so high. Putting the road in was relatively cheap, and they presumed that the ongoing maintenance of this network would be easily covered by tax revenue. But cars force businesses apart to make space for parking, which increases how much infrastructure (water and sewer as well) needs to be covered by the same tax revenue from a business. And the cost of maintaining roads is increasing well above inflation, both because we price in some of the environmental impact and because of increased wear from these larger vehicles, trucks, etc. So either roads degrade, or tax rates have to go up. Parts of the country tore out their paved roads and replaced them with gravel because it was impossible to balance the equation otherwise.
 
Upvote
3 (3 / 0)

wildsman

Ars Scholae Palatinae
664
this is news to me, got a paper on that?
Sure, this is now almost a year and a half old -


"These results nonetheless show that it is feasible to develop recurrent neural network models able to infer input-output behaviours of realistic models of biological systems, enabling researchers to advance their understanding of these systems even in the absence of detailed level of connectivity."
 
Upvote
-2 (1 / -3)
A brain can grow, and it's unclear whether the growth of the brain is related to learning/activity. It is unlikely that computation will be able to replicate the conditions in a meaningful enough way to ever do the experiment.
Replicate what-- growing? learning? I don't think there's any reason to try to perfectly replicate a human brain.

And we might never know if it's possible to replicate something like "the desire to eat a big-mac."

But sure, if literally growing is necessary to learn (something or other), then that would be a problem.
 
Upvote
0 (0 / 0)
LLMs are nowhere near AGI. They are nowhere near even approaching bacteria in terms of being able to interact with their environment and adapt beyond their training.

They poop out words in an order that is not always nonsensical.
On the other hand, LLMs "poop out words in an order that" makes more sense than what comes out of influencers, celebrities, and conspiracy theorists. I know, I know, not a high bar to clear, but they have at least that going for them.
 
Upvote
0 (1 / -1)

LaunchTomorrow

Wise, Aged Ars Veteran
114
I understand this; these AI systems don't deal well with things they didn't see in training. But how many humans have had a serious accident while they were driving a car? What we really need to compare is a figure like miles per serious accident, averaged over all American (for example) drivers vs. some autonomous system. I just haven't seen that comparison.

If I'm wrong, and the comparison is out there, please point me to it.

Another interesting comparison would be drivers with a given blood alcohol level vs. these autonomous systems. There are lock-out systems that won't allow a human to drive if they breathe into a device and the device detects alcohol above a certain level; this is sometimes used for drivers who have had DUIs. Perhaps instead of preventing them from driving the car could take over.
Literally the CA DMV requires all licensed AV operators to release accident/incident statistics and descriptions, as well as "miles per disengagement" statistics. Tesla is not a licensed AV operator. Waymo, however, is, and their statistics are much better than humans'. I don't think they have had an accident with human injury yet after millions of miles of testing. They also averaged like low thousands of miles per disengagement (like 1500-2500 iirc).
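The metric itself is trivial arithmetic; a toy example with made-up numbers (not Waymo's actual filings):

```python
# Toy "miles per disengagement" calculation, as reported to the CA DMV.
# Figures below are invented for illustration only.
autonomous_miles = 1_500_000
disengagements = 750

print(autonomous_miles / disengagements)  # 2000.0 miles per disengagement
```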
 
Upvote
2 (2 / 0)

"These results nonetheless show that it is feasible to develop recurrent neural network models able to infer input-output behaviours of realistic models of biological systems, enabling researchers to advance their understanding of these systems even in the absence of detailed level of connectivity."
In this work we propose a methodology for generating a reduced order model of the neuronal behaviour of organisms using only peripheral information

This wasn't a full model, and didn't even attempt to fully model the nervous system of C. elegans. The paper stated in the beginning that full simulation is too slow/resource-intensive. The paper took a traditional, non-ML biological simulation and used it to generate training data for an artificial neural network. It aimed only to reproduce neural output from inputs, and made no attempt to use ML to model the internal state of the neurons themselves.

This work is a very long way from fully replicating the behavior of actual C. elegans.
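For the curious, the pipeline described reads roughly like this sketch (the stand-in "simulator", names, and shapes are all mine, not the paper's): generate input/output pairs from a conventional simulation, then fit a small recurrent net to that mapping.

```python
# Hedged sketch of a surrogate-model pipeline: an RNN learns to mimic a
# (non-ML) simulator's input->output behaviour. The toy function below
# stands in for the slow biophysical simulation used in the paper.
import torch
import torch.nn as nn

def toy_simulator(stimulus):                 # placeholder, not the real model
    return torch.cumsum(stimulus, dim=1) * 0.1

stimuli = torch.randn(256, 50, 4)            # (batch, time, input channels)
responses = toy_simulator(stimuli)           # "ground truth" from simulation

class Surrogate(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(4, 32, batch_first=True)
        self.head = nn.Linear(32, 4)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)

model = Surrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                         # fit the end-to-end mapping only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(stimuli), responses)
    loss.backward()
    opt.step()
```

Note that nothing in the sketch represents internal neuron state - the surrogate only learns the end-to-end mapping, which is exactly the limitation being pointed out.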
 
Upvote
0 (1 / -1)

wildsman

Ars Scholae Palatinae
664
This wasn't a full model, and didn't even attempt to fully model the nervous system of C. elegans. The paper stated in the beginning that full simulation is too slow/resource-intensive. The paper took a traditional, non-ML biological simulation and used it to generate training data for an artificial neural network. It aimed only to reproduce neural output from inputs, and made no attempt to use ML to model the internal state of the neurons themselves.

This work is a very long way from fully replicating the behavior of actual C. elegans.
I'm not saying that it is there yet - just that it has shown that it is technically feasible. There is an open-source project that I contributed to in the past (which models C. elegans) but one reason I stopped contributing is that it wants to model every part of the worm (down to the molecular level) and not just the brain itself - I wasn't as interested in that, even though it is a noble cause:

 
Upvote
0 (1 / -1)
Because Sama and some others have lied to investors that they could achieve human-like intelligence within 5 years by mostly scaling up feed-forward neural networks and the data used to train them. At the very least he promised them that hallucinations and basic reliable reasoning would be "done" pretty soon.

In reality these things will take 10+ years and will require new architectures like I-JEPA or others. Probably many more fundamental advances are required.

Any investors that can wait for 10+ years will be OK, as long as the AI company they invested in survives and moves on to the next thing successfully. Many will take a soaking when the LLM/VLM bubble pops (gonna happen within 5 years, maybe a lot sooner).

EDIT: And some people don't want to be there when it starts raining.
Where do you get 10+ years from? I see absolutely no indication whatsoever that anyone has any idea how to get from what exists today, which is a fancy auto-complete, to actual reasoning. The failure of ChatGPT to deal with even the simplest logic puzzles is not some bug or minor shortcoming to be fixed; it's indicative of us not even moving in the direction of AGI. All we have is a word salad generator which often generates something that isn't obvious nonsense.
 
Upvote
2 (3 / -1)
Most people think they're great because they have a job they need to get to, and we've culturally accepted that spending $47,000 (median) is a reasonable thing to do for that job. Marketing helps us rationalize that decision by making the car an extension of your personality. So long as American society prevents people from seeing a viable alternative, what choice do they have but to love their car? It's so flexible that even if they refuse to use it to get to a job, they can still live in it when they lose their home.
Forced is a bit much. It is our tendency because the country is large and was built up after people had cars. But individuals also choose long commutes, road trips, distant visits. And they choose large, shiny, and fast vehicles.

They love them, and they don't have to. In terms of large expenses, they could think of cars the way they think of health care, or taxes, or college debt: with deep resentment and avoidance.

Maybe what you said about infrastructure outpacing inflation will cause people to rethink priorities, but at the moment, it feels like the opposite. There are more cars and they are much larger than ever.

I also would think it would take quite a while to rebuild America.
 
Upvote
0 (0 / 0)
If you want to prove a brain is not a computer, you come up with something a brain can do, but a computer can't.

If you want to prove that a brain is a computer, you get a computer to do that thing.

The first one, yes. The second one, no. Showing that a computer can do [some given thing] that a brain can do does not prove that the brain is a computer. It just shows that the computer can perform these tasks that a brain does.

Unfortunately, to prove that a brain is a computer, you have to deeply understand how the brain works. And... we're just not there yet.

I fully expect that the brain and mind are computational (read: can be modeled by a Turing machine), because every other time that humans have proposed "magic" stuff as an explanation, it's turned out to just have natural causes instead. Think of how, before scientists isolated organic chemicals, they thought there was an "élan vital" that made living organisms animated and alive. But, in the end, it just turned out to be biochemistry. Fantastically complex, yes, but ultimately a materialist, physics-based explanation.

That's likely how it'll be for the mind, as well, as it's the way the accumulated evidence leans. But it's yet to be solidly, provably shown: there's still too much we don't understand for us to say for sure.
 
Upvote
0 (0 / 0)
So your evidence that you don't need to understand something to predict it is a list of historical examples of people gaining an understanding of something through methodical observation and then using that understanding to predict the behavior? Are you serious with this shit??

Does the bold type help? Wouldn't it, I dunno, make more sense to explain your objections?

PS - when you say that people have to "understand" something to predict it, most of us take that word "understand" to signify that you have to understand how it works. Methodical observations of how something acts are not the same as understanding the underlying mechanics. That's the key distinction.

I could give cyanide to mammals and build up a very, very thorough set of data about what the LD50 is for different species (by age/sex/weight/etc). Yet none of that would tell me how, functionally, cyanide disrupts life. Right? For that, I have to dive into the biochemistry.
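To put that in code: a hedged sketch with synthetic numbers (and the usual log-logistic dose-response assumption), showing you can extract a perfectly serviceable LD50 prediction from observations alone, with zero mechanistic knowledge:

```python
# Sketch: estimate an LD50 purely from observed dose/mortality data.
# Data are synthetic; the fit says nothing about HOW the toxin kills.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(dose, ld50, slope):
    # standard log-logistic curve: mortality fraction vs. dose
    return 1.0 / (1.0 + np.exp(-slope * (np.log(dose) - np.log(ld50))))

doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # mg/kg, hypothetical
mortality = np.array([0.05, 0.15, 0.50, 0.85, 0.97])  # observed fractions

(ld50, slope), _ = curve_fit(dose_response, doses, mortality, p0=(2.0, 1.0))
print(f"estimated LD50: {ld50:.2f} mg/kg")  # predictive, not explanatory
```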
 
Upvote
1 (1 / 0)
It's the same problem facing autonomous vehicles -- the vehicles have to safely interact with irrational and unpredictable humans. The only way they can do that is if they are able to understand and predict human behavior, which, by necessity, means that they are just as complex, just as aware and cognizant as humans
I think you could produce safety comparable to human drivers by strict adherence to the road rules including driving prepared to give way to anything with, or which could acquire, right of way over you. It would still make mistakes, but while the situations would be different the overall severity would probably be better. Unfortunately the driverless car industry appears to be trying to implement the monkey’s paw version of that by lobbying to get the road rules changed so that everyone else has to stay out of their way.
 
Upvote
0 (0 / 0)
The second one, no.
Yes, I agree. I was being a bit flippant, which maybe I shouldn't here.

What I meant is: that's how the conversation has gone historically and how it will continue to go. Some not-great philosopher comes up with an impossible task. Then some engineer builds a computer that can do the impossible task.

There's no actual proving that occurs. The philosopher is wrong. And people will blithely agree or disagree until the machine is invented.

The same will be the case with "AGI". Whatever the hell that is.
 
Upvote
0 (1 / -1)

Snark218

Ars Legatus Legionis
29,677
Subscriptor
Also, I don't see why this would be.

If I were able to prove to you the brain isn't doing some computationally impossible thing, would that prove to you the brain is a computer?
So if you were, hypothetically, able to prove a negative, using some kind of unspecified evidence, would that prove anything?

That is quite literally not how this works.
opinion polling scientists on AGW in 1970 might not have had the result you were looking for
It sure would have.
 
Upvote
1 (2 / -1)

Snark218

Ars Legatus Legionis
29,677
Subscriptor
Yes, I agree. I was being a bit flippant, which maybe I shouldn't here.

What I meant is: that's how the conversation has gone historically and how it will continue to go. Some not-great philosopher comes up with an impossible task. Then some engineer builds a computer that can do the impossible task.

There's no actual proving that occurs. The philosopher is wrong. And people will blithely agree or disagree until the machine is invented.

The same will be the case with "AGI". Whatever the hell that is.
This is a statement of faith. Like, well, nobody flipped as high as Simone Biles and everyone said it couldn't be done, so if she can flip 12 feet, there's really no reason the next GOATed gymnast can't fucking levitate with the power of her mind and soar a hundred feet in the air.
 
Upvote
1 (2 / -1)

matthieub

Wise, Aged Ars Veteran
183
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Perfect answer. It's not, and will never be able to achieve AGI, even with orders of magnitude more of everything. It's just not built for that, and the journalist keeps on forgetting that. It's very poor journalism when you don't understand what's going on and can't properly identify the fallacies in PR.
 
Upvote
2 (2 / 0)

matthieub

Wise, Aged Ars Veteran
183
… in all of a year …

A bit impatient, no? This shit talking is a little dumb. The answer to why veterans are leaving OpenAI probably has more to do with Sam Altman being a reckless dick and the fact that these same people can work on AGI at other companies in a more ethical way. Or have more money, power, and/or interesting work at a younger venture. OpenAI is far from the only game in town.

Many of the veterans are far more concerned about “sci-fi fantasies” than most people here, and are more than a little unsettled to be working on what they perceive to be more dangerous than the thermonuclear bomb, as fascinating as it might be. They might also be burnt out and wishing to live more of their ordinary life while they still have the chance. Of course most people here think they’re wacko for thinking these things.
In the past 50 years. Get in line: neural networks are still using the same backpropagation as in the 70s. They have not found anything.
 
Upvote
1 (1 / 0)
So if you were, hypothetically, able to prove a negative, using some kind of unspecified evidence, would that prove anything?

That is quite literally not how this works.

It sure would have.

Hmmm. I'm not sure what you would have expected from a poll of climate scientists in 1970, but at that point they weren't really solidly decided. They were still hashing out the relative importance of aerosols (cooling) to CO2 (warming). The earliest half-decent models had just been published (Manabe, 1967; he later got a Nobel for his decades of work in climate modeling). We didn't have ice core data demonstrating the glacial-interglacial cycles, their relation to the Milankovitch cycles, or the CO2 levels during those glacial swings; that ice core data came in the late 1970s and was a major piece of what pulled the scientific community together. Important because it showed that modern CO2 levels are high compared to the past few million years. Also important because it confirmed that the climate can swing pretty wildly, given the right forcings.

We did have solid reasons to be worried about CO2, but in 1970 there were waaaaay too many loose ends still remaining. Cloud feedbacks, paleoclimate, water vapor feedback. The scientific consensus really firmed up during the 1970s and 1980s, which is why the IPCC was founded in 1988.

But don't take my word for it: you can go back and read the literature from that era, and see for yourself the caution that scientists within that field had. The modern confidence in anthropogenic climate change is hard-won, after decades of research, skeptical argument and a slow, slow accumulation of evidence.
 
Upvote
1 (1 / 0)
This is a statement of faith. Like, well, nobody flipped as high as Simone Biles and everyone said it couldn't be done, so if she can flip 12 feet, there's really no reason the next GOATed gymnast can't fucking levitate with the power of her mind and soar a hundred feet in the air.

Is "AGI can emerge from the right set of algorithms" really equivalent to "the next GOAT gymnast can levitate with the power of their mind"?

... the first one seems to be a really fair assessment, given what we know about information processing.
The latter is, as far as we know, physically impossible.

By default, we should assume materialist / natural explanations for how the universe works. Every time someone's said otherwise so far, they've been wrong. This includes how our brains generate our minds and our mental experiences - it is very very likely that this is natural, just like everything else we've found. This is not a matter of faith, but induction. And if our organic matter can generate an AGI, then it seems fair to conclude that we'll eventually figure out how to do this with other substrates, too.
 
Upvote
0 (1 / -1)
And if our organic matter can generate an AGI, then it seems fair to conclude that we'll eventually figure out how to do this with other substrates, too.
Why is that a reasonable conclusion? Nature results in lots of things we are unable to replicate.

There are phenomena in nature that are non-linear or chaotic that are simply not amenable to simulation. We don't have any well-grounded scientific theories that would tell us whether cognition is one of them, or not.

I suspect we're a lot closer to growing a "working" brain in a vat than we are to replicating one in silico, but such an experiment would be deeply immoral and very troubling.
 
Upvote
1 (1 / 0)
We did have solid reasons to be worried about CO2, but in 1970 there were waaaaay too many loose ends still remaining. Cloud feedbacks, paleoclimate, water vapor feedback. The scientific consensus really firmed up during the 1970s and 1980s, which is why the IPCC was founded in 1988.

But don't take my word for it: you can go back and read the literature from that era, and see for yourself the caution that scientists within that field had. The modern confidence in anthropogenic climate change is hard-won, after decades of research, skeptical argument and a slow, slow accumulation of evidence.
and, in a notable parallel, many scientists at the time were, in one way or another, dependent for their livelihood on some aspect of the energy or fossil fuel sector. Or, said more succinctly:
It is difficult to get a man to understand something when his salary depends on his not understanding it.
 
Upvote
1 (1 / 0)
Why is that a reasonable conclusion? Nature results in lots of things we are unable to replicate.

There are phenomena in nature that are non-linear or chaotic that are simply not amenable to simulation. We don't have any well-grounded scientific theories that would tell us whether cognition is one of them, or not.

I suspect we're a lot closer to growing a "working" brain in a vat than we are to replicating one in silico, but such an experiment would be deeply immoral and very troubling.

Non-linear isn't, by itself, a problem. Hell, nearly all the interesting systems we model are non-linear.

Chaos, though, is a problem if you need precise point predictions. But that's not always the case. For instance, weather is chaotic, but we have no problem modeling climate, because for climate we're interested in the broader statistics: weather-over-time. The chaos of weather is not a problem for climate modeling.

So:
(a) is cognition chaotic?
(b) is chaotic cognition an issue, if we're trying to make an AGI? Or is non-deterministic cognition still useful? (E.g., the exact cognitive process might be sensitive to small perturbations, but the end conclusions are nearly identical. This is analogous to weather/climate.)

This is not, so far, a solid argument that we won't be able to model cognition well enough to eventually produce an AGI.
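To make the weather/climate analogy concrete, here's a toy sketch using the logistic map (a textbook chaotic system, with the standard demo parameter): two nearly identical starting points diverge almost immediately, yet their long-run statistics agree.

```python
# Chaos ruins point predictions but not necessarily statistics.
# The logistic map at r=4 is chaotic: trajectories from nearly equal
# starting points diverge, but their long-run averages stay ~0.5.
import numpy as np

def trajectory(x0, n=100_000, r=4.0):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-12)

print(abs(a[50] - b[50]))   # already order ~1: the "weather" is unpredictable
print(a.mean(), b.mean())   # both ~0.5: the "climate" is stable
```

If cognition is chaotic in that sense, exact replay of a particular train of thought might be hopeless while the distribution of behavior is still perfectly modelable.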
 
Upvote
0 (0 / 0)
Why is that a reasonable conclusion? Nature results in lots of things we are unable to replicate.

There are phenomena in nature that are non-linear or chaotic that are simply not amenable to simulation. We don't have any well-grounded scientific theories that would tell us whether cognition is one of them, or not.

I suspect we're a lot closer to growing a "working" brain in a vat than we are to replicating one in silico, but such an experiment would be deeply immoral and very troubling.
With a sufficiently powerful computer we would in principle be able to run a statistical QM + CFD model of all the ion pumps, chemical flows, and so on within the brain, but I don’t know if statistical approaches would be meaningful at the neuron level or if we need a complete QM simulation (requiring an insanely powerful computer). I suspect the answer is that it would cost an insane amount even to model a nematode accurately, let alone something that has demonstrated reasoning beyond the training set without explicit guidance, such as a crow. The two questions then are how we initialise the simulation and how much we can simplify it and still produce a working brain.

I do think you’re right that we’d have an artificial brain in a jar long before we have a silicon simulation.
 
Upvote
1 (1 / 0)
eventually
That's the rub, isn't it? The question at hand is: is "eventually" within a timeframe where OpenAI et al. are viable commercial concerns, or NFT-like Ponzi schemes?

Personally, I think we'll have fusion power on the grid long before we have computers we can consider capable of autonomous cognition.
 
Upvote
0 (0 / 0)
Why is that a reasonable conclusion? Nature results in lots of things we are unable to replicate.

There are phenomena in nature that are non-linear or chaotic that are simply not amenable to simulation. We don't have any well-grounded scientific theories that would tell us whether cognition is one of them, or not.
I don't know what you mean by this. Could you give an example? I understand well enough why many things like weather patterns can't be predicted or perfectly replicated. But I don't see why they couldn't be simulated or physically built.

That is, if cognition is a chaotic process, we'll probably make a chaotic AI.
 
Upvote
0 (0 / 0)
I think it's because the young'uns don't yet realize that the "80% of the way there!" is the easy part, and it's the last 20% that takes all the time (or proves to be impossible).
"Geeze, five years later and now we're only 81% of the way there..."
Sort of like charging electric cars. That first 80% gives you a lot of utility.
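The charging analogy is easy to make quantitative with an idealized exponential-approach model (a toy, not any real battery's charge curve): time to reach a fraction f of full is -tau*ln(1 - f), so getting from 80% to 99% takes nearly twice as long as getting from 0 to 80%.

```python
# Toy diminishing-returns model: progress approaches 100% exponentially.
# Idealized; real charge curves (and research progress) differ in detail.
import math

tau = 1.0  # one time constant, arbitrary units

def time_to(fraction):
    return -tau * math.log(1 - fraction)

for f in (0.80, 0.90, 0.99):
    print(f"{f:.0%}: {time_to(f):.2f}")
# 80%: 1.61, 90%: 2.30, 99%: 4.61 -- the last stretch dominates
```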
 
Upvote
0 (0 / 0)
However let me end with a fun fact. The human brain has like 100 trillion synapses which in our current understanding are most analogous to parameters in a model. However, recent research has suggested that the microtubules in neurons could also do computations inside each neuron. Each neuron has billions of microtubules. That could mean we need to model more like 100 million trillion or more parameters.
I would like to see a reference to this research if you have it.
 
Upvote
1 (1 / 0)
I would like to see a reference to this research if you have it.

He is probably talking about the Hameroff-Penrose Orch OR (Orchestrated Objective Reduction) theory of quantum consciousness,


to say it is 'highly controversial' is a vast understatement.
 
Upvote
1 (1 / 0)