De Kraker: "If OpenAI is right on the verge of AGI, why do prominent people keep leaving?"
And you're making exaggerated claims. Those celestial calendars, while impressive, are not precise. They're not precise because the people making them did not fully understand what they were observing. They could not predict what they did not understand. So what is the point you think you're making? Nothing you've said remotely refutes my original claim: you cannot predict the behavior of anything you don't understand. They understood that the moon and stars move. They observed and understood the ways they moved. They used that understanding to predict their movements to the extent that they could understand those movements. There's nothing more to it than that.

> No they didn't. You're just saying the same thing I said with different words.
They didn't understand gravity or why that object could be relied on to return to the same place every year. They could predict that it would return there, but not explain why. That's my point. Most people today cannot really explain that. Most people understand that celestial mechanics is predictable and don't question it, but they don't actually understand celestial mechanics - at least not on a level that would allow them to predict eclipses.
We could measure and predict the precession of the perihelion of Mercury for pretty much a century before we could explain why it was doing that. We could presume that the prediction wouldn't change without an outside influence, but we didn't have an explanation for why the prediction was correct. It took understanding the general theory of relativity before we could derive that prediction and explain it.
You can make a prediction without even having a hypothesis (a lot of hypotheses were offered for Mercury that were all wrong), let alone a proof.
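As a side note for anyone curious: the relativistic correction that finally explained the anomaly is a one-line formula, and a rough back-of-the-envelope check (my own illustration, with approximate constants) reproduces the famous ~43 arcseconds per century:

```python
# Rough check of the relativistic perihelion precession of Mercury.
# Standard GR result: delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)) per orbit.
import math

GM_SUN = 1.327e20      # gravitational parameter of the Sun, m^3/s^2 (approx.)
C = 2.998e8            # speed of light, m/s
A = 5.79e10            # Mercury's semi-major axis, m (approx.)
E = 0.2056             # Mercury's orbital eccentricity
PERIOD_DAYS = 87.97    # Mercury's orbital period, days

dphi_per_orbit = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))   # radians per orbit

orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_radian = 180 * 3600 / math.pi
print(dphi_per_orbit * orbits_per_century * arcsec_per_radian)    # ~43 arcsec/century
```

The anomaly itself had been measured and tabulated decades before that formula existed, which is exactly the point being made above.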
It saddens me that you are downvoted. I think we can get most of the way with language models and glue, but additional breakthroughs might simplify the design. For example, there's no need for search if there's no context limit.

> Basically it is induction based on emergent behaviour and test performance seen already from simply scaling (more data and more parameters). Many AI researchers are skeptical, but on the other hand the progress already seen has been pretty shocking. Most AI researchers think at a minimum it will have to be a combination of LLM+search, LLM+symbolic reasoning, or LLM+planner, or more likely more complex designs, etc., and plenty believe that additional breakthroughs are needed.
If we create AI without emotion -- that will be the miracle. We're training on people. Human behavior. It's Westworld, not Terminator. But to be fair, I do agree removing emotions is not necessarily a good idea.

> Without human emotions.
There are exceptions. Dogs can predict where the ball is going to go and they don't need to understand gravity or physics. Intuition and experience can suffice.

> you cannot predict the behavior of anything you don't understand
Agree, and once we solve one the other will follow.

> I think AGI-like technology will be stuck in the "uncanny valley" for quite a while. This is where it will be close to human-like intelligence but never quite close enough.
> Which is not to say it won't be useful, just not human-like.
> You see a similar sort of asymptotic relationship with fully autonomous road vehicles. It looked like it might be a solved problem back in 2018, and in a lot of ways the best systems seem to be 98% there. But that last 2% makes all the difference in the real world.
The issue is people. This technology, AGI or not, will slowly replace jobs, but the issue is that people will scapegoat each other (immigrants etc.) while dehumanizing the losers in the displacement (see how we treat the homeless, especially if those people succumb to their vices after losing to the market). The fear for the future is not due to technology but rather how we will clearly react to it.

> I fear for the future that these greedy mad scientists and their enablers are preparing for us.
That isn't the only choice. The machine can be used to upend society, even accidentally. Given that the rate of human "progress" will cause extinction for everything and everyone, this might be the more desirable outcome.

> you have no choice but to rewrite the rules of society
May I remind you, humans did OK even before we discovered the current theory of gravity. When you play ball, it's not your theoretical knowledge of physics that makes you a good player. But it will be when doing theoretical physics. AGI will need to cover both, to be human-like, IMHO.

> There are exceptions. Dogs can predict where the ball is going to go and they don't need to understand gravity or physics. Intuition and experience can suffice.
> Edit: Reading the last few pages, I am surprised nobody has made this argument. Does nobody play fetch with their puppers? Bad humans! Bad!
They are safe. Waymo has yet to have a major accident and it's driven millions of miles at normal speeds. There have been plenty of times where, yes, the car stops because it's confused or it ends up in some other degenerate state, but that is definitely the exception rather than the norm.

> Hold on here. Autonomous cars are not safe. They might be safer, but they are not safe. You're describing a physical space (physicist here) where each object in the space is measured, assigned a potential velocity vector that they are either demonstrating or could be demonstrating before the next sample, which produces a potential impact cone with that object. This is done for everything (pedestrians, cars, cyclists, dogs, kids, etc.) in the scene and for the vehicle being driven, with an additional calculation for how quickly the vehicle could slow down/turn/etc. if needed. And the vehicle would proceed ensuring that none of those potential impact cones intersect. And you're arguing that self-driving vehicles do this. And they don't. They cheat. That's part of why this is hard to solve.
> If you do the 'proceed in a manner that guarantees no collision', the self-driving vehicle in an urban environment would struggle to exceed 10 mph, and often would not be able to move at all. So in order to make an autonomous vehicle actually useful to anyone, it has to cheat and make assumptions that the other things in the scene are behaving in a similar manner, and that they're trying to avoid the collision just like the self-driving car is. And it needs to do that contextually based on who has right of way, and so on. You can probably trust that the adult pedestrian is better at not walking into traffic than the child, or the dog, or the person using a cane, and if the model can't strictly avoid all possible interactions and still be useful, it has to make assumptions, use some kind of judgement. What assumptions does it make about a potential vehicle behind a building or truck where it doesn't have vision?
> This is what motorists instinctively do - you don't account too much for the motorist 3 lanes over when changing lanes because you assume they are also paying attention and won't aim for the same spot that you are. And usually that's right, but not always. Sometimes you crash. That's what autonomous vehicles are also doing. And some of these driving conventions are local. Japanese parents need to teach their kids to be afraid of American motorists, because Japanese drivers don't drive near pedestrians, or at least children, but American motorists will brush past them expecting the child is focused on avoiding the car. So an autonomy model would need to work differently in Japan than in the US until we get kids in both places to behave the same way, or we make the most conservative model, which might grind to a halt in a place like India or Vietnam.
> So, these things are not as safe as you think they are. Again, they may be safer than human drivers, but they aren't safe, and a system that is safe starts to look an awful lot like public transit.
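To make the "potential impact cone" idea above concrete, here's a toy reachability sketch (my own illustration; the names, margins, and numbers are invented, and real AV stacks are far more sophisticated): grow a worst-case disc around every tracked object and only proceed if the planned ego positions stay clear for the whole horizon.

```python
# Toy version of the "potential impact cone" check described above: grow a
# worst-case reachable disc around every tracked object and require the ego
# vehicle's planned positions to stay clear over the whole horizon.
# All names and numbers are illustrative, not from any real AV stack.
import math
from dataclasses import dataclass

@dataclass
class Track:
    x: float            # current position, m
    y: float
    max_speed: float    # worst-case speed the object could reach, m/s

def ego_is_clear(ego_path, tracks, horizon=3.0, step=0.1, margin=1.0) -> bool:
    """ego_path(t) -> (x, y). Returns True only if no reachable disc is entered."""
    t = 0.0
    while t <= horizon:
        ex, ey = ego_path(t)
        for tr in tracks:
            # Everything the object could cover in t seconds, plus a safety margin.
            reach = tr.max_speed * t + margin
            if math.hypot(ex - tr.x, ey - tr.y) <= reach:
                return False        # possible impact: don't proceed
        t += step
    return True

# Example: a pedestrian 10 m ahead and 2 m to the side, ego driving straight at 5 m/s.
ok = ego_is_clear(lambda t: (5.0 * t, 0.0), [Track(x=10.0, y=2.0, max_speed=2.0)])
print(ok)
```

Run literally, a check like this refuses to move in almost any busy scene, which is the "cheating" point made just above: practical systems relax the worst-case assumption with models of how other road users are likely to behave.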
What do you think that's born from? What do you think intuition and experience are? They're from observed behavior. Behavior that is learned, that is understood. You spent your entire childhood learning and understanding the laws of the physical world, whether you could consciously explain them or not. Play fetch with a puppy and their actions and attempts are almost comically inept. Because they haven't learned. They don't understand. Yet. An older dog has learned and understands. To the extent that they have observed, that they have learned and understood. Throw an object that doesn't conform to their understanding of how objects move (like a boomerang or a yo-yo), and you'll see that they don't understand. They haven't observed. They haven't learned. Yet. And these are simple cause-and-effect relationships.

> There are exceptions. Dogs can predict where the ball is going to go and they don't need to understand gravity or physics. Intuition and experience can suffice.
> Edit: Reading the last few pages, I am surprised nobody has made this argument. Does nobody play fetch with their puppers? Bad humans! Bad!
Of course they were accurate. You can measure things very accurately even when you haven't worked out the underlying theory. The precession of the perihelion of Mercury is very reliable, very accurate and allows for precise predictions. It's just not explainable with Newtonian mechanics. We didn't know why it precessed that way until GR was worked out, and that precession was one of the ways GR was proven by observation.

> Just to refute your claim -- they could NOT predict the behavior of Mercury. They made rough estimates based on what they observed and understood, but those estimates were never accurate. It still took considerable time to precisely locate Mercury even with those estimates. Because they couldn't predict its movements. Because they didn't understand its movements.
Problem is, a lot of capitalists have a financial interest in convincing everyone that it is.

> Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
It still baffles me every day how anyone can think that we could take it for granted that we will "obviously" create human-level AGI at some point. IMO some level of appreciation of how biological evolution has worked over the last 2 billion years or so should make one very humble. There ISN'T some conceptual fundamental blueprint available, nothing to really "make sense of". At best The Mother Of All Spaghetti Codes, to hopefully partly untangle. Combine that with the virtual impossibility of "scanning" a living brain (for some level of reverse-engineering) and you start to realize how formidable (or even impossible) the task might be.

> It is also comparing to a human brain, the most complex (or one of the most complex?) brains on the planet. Biology can do AGI with far fewer synapses, as evidenced by all the less complex brains that exist.
> So we are already at the point where LLMs are more complex than working biological brains when compared this way. By this alone we can see that there is something major LLMs are missing besides scale for AGI.
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
LLMs could probably play a part in a form of intelligence, in the sense of "Reddit Plays Pokemon". Each prompt, give your current situation, a goal, and a constrained list of options, then "ask" the LLM what it should do next.

> I can imagine an attempt at AGI by using an LLM as the "processing" block. At least two more blocks would be needed though, a "state" block and an executive/goal/motivation/emotion block. Connect them all together and then get them to advance the state in the direction dictated by the goals. I'm sure the rest is just details.
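For what it's worth, a minimal sketch of the loop both of these comments are gesturing at might look like the following (purely illustrative; `llm_choose` is a placeholder for whatever model call you'd actually make, and none of this is a real AGI design):

```python
# Minimal sketch of the loop described above: an LLM as the "processing"
# block, a separate state block, and a goal/executive block that drives it.
# llm_choose() is a placeholder for a real model call; everything here is
# illustrative only.
from dataclasses import dataclass, field

@dataclass
class State:
    facts: dict = field(default_factory=dict)    # persistent memory / world state
    history: list = field(default_factory=list)  # what has been done so far

def llm_choose(situation: str, goal: str, options: list[str]) -> str:
    """Placeholder 'processing' block: prompt an LLM with the situation,
    the goal, and a constrained list of options; return one of the options."""
    # e.g. build a prompt and call your preferred chat API here
    return options[0]

def executive_step(state: State, goal: str, options: list[str]) -> State:
    situation = f"facts={state.facts}, recent={state.history[-3:]}"
    action = llm_choose(situation, goal, options)
    # The "state" block advances based on the chosen action.
    state.history.append(action)
    state.facts["last_action"] = action
    return state

# Usage: run the loop until the goal/motivation block decides to stop.
state = State()
for _ in range(10):
    state = executive_step(state, goal="get out of the starting room",
                           options=["look around", "open door", "wait"])
```

The interesting part, of course, is everything this hides: where the goals come from, how the state gets summarized back into a prompt, and what the constrained option list looks like outside a toy setting.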
This is true, we don't. But that actually makes it harder to tell whether machines have intelligence.

> We also don't know at all how we humans think and reason, what makes us special compared to animals, and what true intelligence even is.
I don't think we would have any idea whether an LLM is more "intelligent" than a goldfish.

> So we are already at the point where LLMs are more complex than working biological brains when compared this way. By this alone we can see that there is something major LLMs are missing besides scale for AGI.
Ilya Sutskever is the name you're looking for. He invented stuff at Google. Then co-founded OpenAI, where he led the way. Then, of course, he was fired. It's a shame nobody knows who the fuck he is, but he's not a billionaire, so it doesn't matter.

> Sorry, what exactly did OpenAI invent?
> Didn't they just put together a few demos using Google's transformer tech?
> Why are they supposed to release AGI if they actually haven't invented anything new so far?
Cognitive scientists, the people who study this, are very much not certain of that.

> This is because fundamentally a brain is just a computer. Of that we're pretty certain.
I do have safety concerns about "AI" now, but they have nothing to do with SkyNet/Colossus/WOPR type existential threats. My concerns are related to current uses of ML applications by police, military, governments, and corporations - not just them putting way more faith in those applications than they deserve, but the level of bias in the applications' training. These things are causing harm now, and aside from sanctimonious words, there doesn't appear to be much being done about that. If the AI companies do achieve some semblance of AGI, I'd be more afraid that the current problems would be exacerbated than that there would be a qualitative change in the concerns, at least for quite a while.

> If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI. Purely anecdotal, but from daily use of Claude and ChatGPT, I don't find Claude to be any more safe and secure in its output than ChatGPT.
The problem is that it's tricky to make such comparisons accurately - e.g. people even now claim they're safer based on things like miles driven (in this specific part of Phoenix, AZ, that doesn't have weather). And that's not accounting for other issues, like the data not including whether a remote operator was standing by to take over, or how its behaving differently than a human driver would creates its own set of predictability issues for other drivers/pedestrians/cyclists/etc.

> I understand this; these AI systems don't deal well with things they didn't see in training. But how many humans have had a serious accident while they were driving a car? What we really need to compare is a figure like miles per serious accident, averaged over all American (for example) drivers vs. some autonomous system. I just haven't seen that comparison.
> If I'm wrong, and the comparison is out there, please point me to it.
Another interesting comparison would be drivers with a given blood alcohol level vs. these autonomous systems. There are lock-out systems that won't allow a human to drive if they breathe into a device and the device detects alcohol above a certain level; this is sometimes used for drivers who have had DUIs. Perhaps instead of preventing them from driving the car could take over.
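If someone does have the raw numbers, the comparison being asked for is simple to compute; the figures below are placeholders only, not real statistics:

```python
# Serious accidents per million miles, human fleet vs. autonomous fleet.
# The inputs are PLACEHOLDERS for illustration; substitute published figures
# (and make sure the exposure matches: same kinds of roads, weather, hours).
def serious_accidents_per_million_miles(accidents: int, miles: float) -> float:
    return accidents / (miles / 1_000_000)

human_rate = serious_accidents_per_million_miles(accidents=100, miles=500_000_000)
av_rate = serious_accidents_per_million_miles(accidents=2, miles=20_000_000)

print(f"human fleet: {human_rate:.3f} per million miles")
print(f"AV fleet:    {av_rate:.3f} per million miles")
```

The catch, as noted a couple of comments up, is the exposure matching: miles logged in a sunny, well-mapped part of Phoenix are not directly comparable to the national average.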
Yeah, I'm sure some scientist somewhere disagrees.

> Cognitive scientists, the people who study this, are very much not certain of that.
I think it is far more likely that the rats are fleeing the sinking ship. Doomsday warnings about safety have always been an integral part of OpenAI's hype machine; they have cried wolf too often to be taken seriously. Remember how they "did not dare" to release GPT-2 because it was "too dangerous"? Absurd in hindsight, but it sure got them into the media. And they keep doing this. The safety warnings are useful for OpenAI in many ways:

> So I think AGI is coming, who knows at what timeline it will be. What I am very confident in right now is that we will not understand the space well enough to be equipped to deal with it safely when it does emerge. And to that end, I believe this is the core cause of all of the departures at OpenAI. They are running fast without taking the time to understand how to have AGI happen in a SAFE way. That seems to be represented at least lightly in every departure note I have seen.
And you're ignorant of the long road yet to go. Google the Pareto principle and contemplate it before you try to condescend to me again.

> Sure, it's called being ignorant of the long road to getting here.
> People who never used computers thought the same of the Internet in 1995.
Tesla does this regularly. It's a bad comparison because (A) the system(s) turn off when the driving gets hard and (B) humans sometimes have to grab the wheel anyway. I think they are in fact reasonably safe, but not at all self-driving.

> I understand this; these AI systems don't deal well with things they didn't see in training. But how many humans have had a serious accident while they were driving a car? What we really need to compare is a figure like miles per serious accident, averaged over all American (for example) drivers vs. some autonomous system. I just haven't seen that comparison.
> If I'm wrong, and the comparison is out there, please point me to it.
So your evidence that you don't need to understand something to predict it is a list of historical examples of people gaining an understanding of something through methodical observation and then using that understanding to predict the behavior? Are you serious with this shit??

> Of course they were accurate. You can measure things very accurately even when you haven't worked out the underlying theory. The precession of the perihelion of Mercury is very reliable, very accurate and allows for precise predictions. It's just not explainable with Newtonian mechanics. We didn't know why it precessed that way until GR was worked out, and that precession was one of the ways GR was proven by observation.
> The same thing happened in chemistry. Based on observation chemists worked out a classification of elements. From observation they worked out molecular weights. A mole was established in the early 19th century. This classification allowed chemists to make predictions about how elements would react. From observation they could work out to great accuracy how many grams of each reactant were needed to completely consume the reactants in the reaction. Atomic weights were worked out in the very early 19th century. They were able to do this through the entire industrial revolution, through the development of new explosives, metallurgy, polymers and organic chemistry - and they did all of this, with various laws established, stoichiometry, and so on, into the early 20th century before the theory of the atom was even established, let alone confirmed. Their observations of chemical behavior gave clues to how the atom was structured, and when the structure of the atom was worked out it improved those chemical predictions, it explained why certain predictions failed, and so on.
> We had over a century of really solid chemistry, based on an increasingly accurate set of predictive models, none of which relied on actually understanding if atoms existed, how they interacted, how bonds actually formed, electron energy levels and so on.
> We have very modern examples. Ritonavir is an HIV drug introduced in the late 90s. When combined with another drug you get Paxlovid - for treating Covid. After it was introduced on the market they discovered that it sometimes formed in a different structure (form II vs the original form I), and the other structure had a different solubility, so it wasn't taken up by the body in the same way and was ineffective. Chemists discovered that once they had produced any form II in a lab, they could never produce form I ever again. And chemists who had produced any form II who entered a lab that never produced form II would contaminate that lab in a way that it could never produce form I again. They had to pull the drug from the market because they had mysteriously lost the ability to manufacture it. After some work, biochemists figured out a process to reliably produce only form I and the drug was reintroduced. Chemists have no idea why the presence of form II completely prevents the future production of form I. They have some theories, but nothing proven out. But they do have a few very reliable predictions they can make:
> 1) production by this new method will only produce form I
> 2) production using other methods that did produce form I could never produce it again once form II was observed
> The drug, which was one of the most effective AIDS drugs at the time, had to be pulled from the market for years while they figured that out.
I don't care, personally. But OpenAI is demanding enormous resources - financial, computational, natural - and justifying that with a very hard sell that they're on the road to AGI. If they're just developing a tool to speed up coding workflows and improve natural language interfaces, that's cool I guess, but does it actually justify eleven and a half billion dollars in investment? Does it justify using half a million kilowatt-hours of power a day, enough to power a decent-size town, with commensurate carbon emissions? And if there's no realistic pathway from LLMs to AGI, why are we getting the hard sell?

> I'm not sure why we care about whether or not ChatGPT can become AGI. I use it and Claude to write code for me and it works. I can ask it questions about APIs and it tells me what I want to know, rather than me spending hours going through documentation which is often mediocre. It saves me time; it's just another tool in the toolbox.
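For scale, the "decent-size town" comparison above roughly checks out if you assume about 10,500 kWh per year for an average US household (that household figure is an approximation, not from the thread):

```python
# Rough sanity check on the "enough to power a decent-size town" claim.
# Household consumption is an approximation (~10,500 kWh/year per US household).
DAILY_USAGE_KWH = 500_000                    # figure quoted in the comment above
KWH_PER_HOUSEHOLD_PER_DAY = 10_500 / 365     # ~29 kWh/day

households = DAILY_USAGE_KWH / KWH_PER_HOUSEHOLD_PER_DAY
print(f"roughly {households:,.0f} average US households")   # on the order of 17,000
```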
It's not that scientists 'disagree'; it's that, at the moment, computational theory of mind is non-falsifiable. It's not even science.

> Yeah, I'm sure some scientist somewhere disagrees.
Sorry, but subject matter expertise counts for more than overly reductive tech-accelerationist optimism. We do not in fact understand how cognition and consciousness work in the first place, so cognitive science cannot conclude cognition is a purely computational process - and will not for a very long time. It is wildly presumptuous and completely premature to assume cognition is a computational process.

> Yeah, I'm sure some scientist somewhere disagrees.
Presumably they have opinions anyway. And presumably use those opinions to inform their hypotheses.

> It's not that scientists 'disagree'; it's that, at the moment, computational theory of mind is non-falsifiable. It's not even science.
Is she trying to say that humans are good at solving traveling salesman problems with lots of stops?

> We're only at the very beginning of understanding how cognition works. Works like this are just starting to frame the problem. If your definition of a computational mind includes the ability to solve computationally intractable problems, then there must be some aspect of biological brains that is not present in Turing-machine-like computers.
Facts don't care about your feelings, and opinions don't pass peer review. It wouldn't be the first time that when vibes meet reality, reality wins.

> Presumably they have opinions anyway. And presumably use those opinions to inform their hypotheses.
We have a pretty well-developed understanding of computational complexity at this point, and the text is building a factual, falsifiable basis for tying our understanding of computation to our (much more limited) understanding of cognition. If you want to prove that they are the same thing, this is the kind of place you would start.

> Is she trying to say that humans are good at solving traveling salesman problems with lots of stops?
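To illustrate why "lots of stops" is the crux: the number of distinct tours blows up factorially, which is what makes exact TSP intractable for brute force (a quick illustration, nothing more):

```python
# Number of distinct closed tours through n stops grows as (n-1)!/2,
# which is why brute-force TSP becomes hopeless almost immediately.
from math import factorial

for stops in (5, 10, 15, 20, 25):
    tours = factorial(stops - 1) // 2
    print(f"{stops:2d} stops -> {tours:.3e} tours to check")
```

Whether humans actually solve such instances optimally, rather than just finding decent approximations, is of course the contested part.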
If your point is just that scientists are wrong sometimes, I agree. But I still think you're better off polling scientists than not polling scientists.

> Facts don't care about your feelings, and opinions don't pass peer review. It wouldn't be the first time that when vibes meet reality, reality wins.
Popularity polling an opinion among scientists is beyond useless. We don't base scientific progress on vibes; we base it on peer-reviewed, reproducible results. I think you are, in fact, much worse off if you base legislation or investment on the opinion of scientists when we're in the middle of a hype bubble and there is no falsifiable science to be found.

> If your point is just that scientists are wrong sometimes, I agree. But I still think you're better off polling scientists than not polling scientists.
Also, I don't see why this would be.

> We have a pretty well-developed understanding of computational complexity at this point, and the text is building a factual, falsifiable basis for tying our understanding of computation to our (much more limited) understanding of cognition. If you want to prove that they are the same thing, this is the kind of place you would start.
Really couldn't disagree harder on this. Having watched Global Warming occur in real time for my entire fucking life...

> I think you are, in fact, much worse off if you base legislation or investment on the opinion of scientists...
No, the opposite. If you found the brain efficiently doing something computationally intractable, then you might either discover that P really equals NP, or that Turing-machine computation doesn't describe how brains work, or who knows what?

> Also, I don't see why this would be.
If I were able to prove to you the brain isn't doing some computationally impossible thing, would that prove to you the brain is a computer?
Opinion polling scientists on AGW in 1970 might not have had the result you were looking for.

> Really couldn't disagree harder on this. Having watched Global Warming occur in real time for my entire fucking life.