De Kraker: "If OpenAI is right on the verge of AGI, why do prominent people keep leaving?"
This makes a ton of assumptions that are just that: assumptions.
A) That a parameter in a NN is equivalent to a synaptic connection.
B) That synaptic connections are the most efficient way to do "intelligence," so that they can be used as a benchmark to go by.
C) That the way the brain learns is also the most efficient way for computers to learn.
Natural selection and genetics are completely different processes than learning. Natural selection took billions of years to make intelligence; can OpenAI wait that long?
First of all, "guess and check" is how natural selection got us here. The simplicity of the algorithm does not make it ineffective.
Second of all, that is not at all what GD (gradient descent) is. GD is the opposite of guessing: you literally calculate the favorable "mutations."
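To make that concrete, here is a toy sketch (my own illustration, nothing to do with OpenAI's training code) of "guess and check" versus gradient descent on the same one-parameter problem: the random-search step proposes blind mutations and keeps the lucky ones, while the gradient step computes the favorable direction outright.

```python
import random

# Toy objective: find w that minimizes (w - 3)^2.
def loss(w):
    return (w - 3.0) ** 2

# "Guess and check" (evolution-style): propose a random mutation, keep it only if it helps.
def random_search_step(w, step=0.5):
    candidate = w + random.uniform(-step, step)
    return candidate if loss(candidate) < loss(w) else w

# Gradient descent: compute d(loss)/dw = 2*(w - 3) and move against it. No guessing.
def gradient_step(w, lr=0.1):
    grad = 2.0 * (w - 3.0)
    return w - lr * grad

w_rand, w_gd = 0.0, 0.0
for _ in range(50):
    w_rand = random_search_step(w_rand)
    w_gd = gradient_step(w_gd)

print(f"random search: w={w_rand:.3f}  gradient descent: w={w_gd:.3f}")
```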
Congrats! You are now more qualified to be an "AI researcher" than most of the VC tech bros!
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
But as Niels Bohr said, "Predictions are hard, especially about the future."
There are two incorrect ideas floating around here:
1) That LLMs are the only thing OpenAI (or others) work on. They're not. They're studying various architectures and approaches, and constantly researching new ideas. Nobody is insisting that LLMs will be AGI. Also, "LLMs" take different forms. Most models today (such as for speech recognition or image synthesis) use some form of transformers, tokens, and other approaches pioneered by LLMs, but aren't LLMs.
2) That predicting the next word is trivial. It's not. Predicting the next word is how many questions on standardized tests work. The problem is limitlessly hard. To predict the next word perfectly would require godlike intelligence and would compress Wikipedia down to practically nothing. Prediction, whether of words, actions, the weather, etc., is the very essence of science and intelligence.
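That prediction-equals-compression point is literal: under arithmetic coding, encoding a token costs about -log2 of the probability the predictor assigned to it, so a better next-word predictor means a smaller archive. A toy sketch with made-up probabilities (not output from any real model):

```python
import math

# Hypothetical per-token probabilities a language model assigned to the
# tokens that actually occurred in some text (toy numbers for illustration).
weak_model = [0.05, 0.10, 0.02, 0.08]    # poor predictions
strong_model = [0.60, 0.90, 0.75, 0.85]  # confident, correct predictions

def bits_to_encode(probs):
    # Arithmetic-coding cost: sum of -log2(p) over the observed tokens.
    return sum(-math.log2(p) for p in probs)

print(f"weak model:   {bits_to_encode(weak_model):.1f} bits")
print(f"strong model: {bits_to_encode(strong_model):.1f} bits")
# A model that predicted every token with probability 1.0 would need ~0 bits.
```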
That's not what "stochastic" refers to. And actually, the first modern deep learning models were initialized with Boltzmann machines. It just turned out not to matter.
You dropped "stochastic." The network is initialized with random values. At no point is the training data introduced into the model; the model is simply curve-fit to the training data. The model doesn't learn, in any way that word has ever been defined. Learning is an active process that involves taking in information through the senses.
The models don't have senses and are incapable of learning. Training is something done to, or with, the model, which is an entirely latent set of data structures with no intent or agency.
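A toy illustration of that "curve fit" framing (a deliberately simple stand-in, not how LLM training actually works): the finished "model" is just a couple of fitted numbers, and none of the training points are stored in it.

```python
import numpy as np

# Toy "training data": noisy samples of an underlying line.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.shape)

# "Training" here is just curve fitting: find coefficients that minimize error.
coeffs = np.polyfit(x, y, deg=1)

# The resulting "model" is two numbers (slope, intercept). None of the 200
# training points are stored in it; they only shaped the fitted values.
print(coeffs)
```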
Doesn't seem to stop anyone here…
But as Niels Bohr said, "Predictions are hard, especially about the future."
That is true. But that is a slightly more nuanced question. Does infallibility define AGI? It will certainly constrain applications in the coming years, but does that invalidate the "sci-fi fantasies?" It's not like humans are infallible, and yet we are dangerously intelligent.
I think AGI-like technology will be stuck in the "uncanny valley" for quite a while. This is where it will be close to human-like intelligence but never quite close enough.
Which is not to say it won't be useful, just not human-like.
You see a similar sort of asymptotic relationship with fully autonomous road vehicles. It looked like it might be a solved problem back in 2018, and in a lot of ways the best systems seem to be 98% there. But that last 2% makes all the difference in the real world.
That is true. But that is a slightly more nuanced question. Does infallibility define AGI? It will certainly constrain applications in the coming years, but does that invalidate the “sci-fi fantasies?” It’s not like humans are infallible, and yet we are dangerously intelligent.
But blockchain, cryptocoins, NFT have totally revolutionized databases, financial transactions, ownership of digital apes. Totally.
This seems to assume an enduring contribution and relevance that I'm not confident OpenAI and its competitors will achieve… you bros may just be churning VC bucks, at the end of the day. (Emphasis mine)
That's kind of the thing, right? If the people investing in AGI were interested in simple 'intelligence' systems, we've already got robotic systems that are fully capable of replacing people for nearly all mundane and mechanical tasks. But they're 'single-use' systems. They're of no use outside of the specific task for which they're created, and each one constitutes a considerable investment to design, build, test, and deploy. If all they wanted were 'worker ant' AI systems, they've already got that...along with all the limitations and inflexibility that comes with it. AGI is the promise of a single 'tool' for all tasks. A 'multi-tool' if you will. Which necessitates a 'tool' that's at least as complex as what it'd be replacing -- people. And these AI prototype 'tools' are nowhere near that level of complexity or capability.
There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while we can't do an apples-to-apples comparison between parameters and synapses - how does your math work there?
Achieving AGI, and perhaps finding out that its name is SkyNet, is not the only safety concern. The more immediate concerns are that the current LLMs produce lots of false information, and that there may be no dependable way to fix that. Misinformation can sometimes be amusing, but it can also be dangerous. Imagine if a government's military made a decision based on it. Obviously they can and do, based on misinformation derived from humans, but the fear is that these LLMs could make the problem even worse than it already is.
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI.
You mean the "genius" of a couple of crypto grifters - pumping up the wow factor of image generators (which have been around for a number of years now; I remember playing with tools that created photo-realistic headshots almost a decade ago), and having precisely zero explanation other than "magic, if enough compute - and money - is gathered," not only for how they might achieve what they want but even for how to define it - is turning out to be a bunch of BS?
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Well, it really depends; my personal view is that it won't really be AGI in the sense that it will be exactly human-like (because that would cause the issues you're talking about), but it will be close enough that it can actually do most of our jobs quite easily.
That's kind of the thing, right? If the people investing in AGI were interested in simple 'intelligence' systems, we've already got robotic systems that are fully capable of replacing people for most mundane mechanical tasks. But they're 'single-use' systems. They're of no use outside of the specific task for which they're created, and each one constitutes a considerable investment to design, build, test, and deploy. If all they wanted were 'worker ant' AI systems, they've already got that...along with all the limitations and inflexibility that comes with it. AGI is the promise of a single 'tool' for all tasks. A 'multi-tool' if you will. Which necessitates a 'tool' that's at least as complex as what it'd be replacing -- people. And these AI prototype 'tools' are nowhere near that level of complexity or capability.
And I daresay, they won't like an AGI system when/if they do get built. Because it'll be just as complex as people...with all the problems that entails.
I understand this; these AI systems don't deal well with things they didn't see in training. But how many humans have had a serious accident while they were driving a car? What we really need to compare is a figure like miles per serious accident, averaged over all American (for example) drivers vs. some autonomous system. I just haven't seen that comparison.
That said, I have 52,000 km on the odometer of my car and not once have I blocked an ambulance because I'm too stupid to pull over.
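For what it's worth, the comparison asked for above is simple arithmetic once both sides publish comparable figures; here is a sketch using purely hypothetical placeholder numbers (not real statistics) just to show the shape of it.

```python
# Placeholder figures only -- NOT real statistics. Substitute published data.
human_miles_driven = 3.2e12        # hypothetical annual miles, all drivers
human_serious_accidents = 2.0e6    # hypothetical count

av_miles_driven = 7.0e7            # hypothetical autonomous-fleet miles
av_serious_accidents = 40          # hypothetical count

def miles_per_serious_accident(miles, accidents):
    return miles / accidents

print(f"human: {miles_per_serious_accident(human_miles_driven, human_serious_accidents):,.0f} miles per serious accident")
print(f"autonomous: {miles_per_serious_accident(av_miles_driven, av_serious_accidents):,.0f} miles per serious accident")
```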
There's really no reason to believe we'd create a human-like intelligence.
Well, it really depends; my personal view is that it won't really be AGI in the sense that it will be exactly human-like
About 96%.
Outsider (to AI) looking in:
What are the chances that there are no big breakthroughs and LLMs are all we get for a long time and the wait for AGI has just been a pump n dump?
How convenient for you that you don't have to present a counterargument to the points you vaguely decry as "ignorance." Must be nice to be so obviously smart and correct that you are relieved of the trouble involved in having to demonstrate any of the things you assert.
Yes please. Let's endlessly speculate on what little information we have (and apparently even less knowledge of AI).
The ignorance in this forum about how AI and biological intelligence works is off the charts.
Most of the time the highest voted comments are so wrong/ignorant that it has become an exercise in futility to debunk them.
Any system that can accurately and safely interact with humans has to be able to accurately predict human actions and behaviors and must be, by necessity, at least as complex and cognizant as humans. It's the same problem facing autonomous vehicles -- the vehicles have to safely interact with irrational and unpredictable humans. The only way they can do that is if they are able to understand and predict human behavior, which, by necessity, means that they are just as complex, just as aware and cognizant as humans. An ant will never be able to understand humans or predict human action and behavior. As such, an 'ant' operating heavy machinery, driving vehicles, or running a kitchen will never be safe around people. It will always be a hazard to anyone around them. The ant has to be smart enough to think like a human to be safe around humans. And if the ant can think like a human....
Well, it really depends; my personal view is that it won't really be AGI in the sense that it will be exactly human-like (because that would cause the issues you're talking about), but it will be close enough that it can actually do most of our jobs quite easily.
Again, the way I see it, a level of AGI that replaces most human cognitive tasks is probably at least a couple of decades away. Once we get humanoid robots and train them in the real world with our chores/jobs, I really don't see any reason why an AI robot can't do 90% of current human jobs in the next 50 years.
A parameter is as little as a single bit. The behavior of a real biological neuron, even one in an organism as simple and well-documented as C. elegans, is obviously vastly more complicated than a Boolean value. How many times more? Put another way, how large an artificial neural network would be required to completely model just a single C. elegans neuron?
There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while I don't think we can do an apples-to-apples comparison between parameters and synapses - you did make that comparison. So how does your math work there?
There are 100 trillion synapses in the human brain. GPT-4 has around 1.76 trillion parameters. So while we can't do an apples-to-apples comparison between parameters and synapses - how does your math work there?
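For what the raw numbers quoted above are worth (and only if you treat one parameter as one synapse, which several comments here rightly question), the gap works out to roughly a factor of 57:

```python
synapses_human_brain = 100e12  # ~100 trillion synapses (figure quoted above)
parameters_gpt4 = 1.76e12      # ~1.76 trillion parameters (figure quoted above)

# If (and only if) you treat one parameter as one synapse, the brain is larger by:
print(synapses_human_brain / parameters_gpt4)  # ~56.8x
```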
There are papers on self-improving models, either from a feedback model or from user feedback: https://arxiv.org/pdf/2310.00898v3
LLMs are nowhere near AGI. They are nowhere near even approaching bacteria in terms of being able to interact with their environment and adapt beyond their training.
They poop out words in an order that is not always nonsensical.
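For readers unfamiliar with those self-improvement setups, the outline is a generate/critique/revise loop. The sketch below uses hypothetical placeholder functions, not the linked paper's actual method or any real model API:

```python
# Hypothetical placeholders: in a real system each of these would call a language model.
def generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"

def critique(prompt: str, answer: str) -> str:
    return f"feedback on '{answer}'"

def revise(prompt: str, answer: str, feedback: str) -> str:
    return f"{answer} (revised using {feedback})"

def self_improve(prompt: str, rounds: int = 3) -> str:
    """Generate an answer, then repeatedly critique and revise it."""
    answer = generate(prompt)
    for _ in range(rounds):
        feedback = critique(prompt, answer)
        answer = revise(prompt, answer, feedback)
    return answer

print(self_improve("Why do models need feedback to improve?"))
```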
Imo, AGI is a meaningless buzzword, and we will never arrive at it because it has no definition.
Outsider (to AI) looking in:
What are the chances that there are no big breakthroughs and LLMs are all we get for a long time and the wait for AGI has just been a pump n dump?
I've assumed that the logic there is that the LLM will compute that, to maximize the probability of getting the next word right, it has to figure out how to turn itself into an AGI. So, it will just <<insert magic step>> in order to do that.
Having followed AI off-and-on since the Prolog/Society of Mind days, I've never understood how a scaled up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
But if all humans are removed, that also lets the LLM answer every question asked of it by a human correctly. No humans, no questions, zero incorrect answers. Problem solved.
I've assumed that the logic there is that the LLM will compute that, to maximize the probability of getting the next word right, it has to figure out how to turn itself into an AGI. So, it will just <<insert magic step>> in order to do that.
If OpenAI is nowhere near AGI, then it seems all the "safety" concerns are a bit unfounded. In 5 years we are going to learn that LLMs are probably not the way to achieve AGI. Purely anecdotal, but from daily use of Claude and ChatGPT, I don't find Claude to be any more safe and secure in its output than ChatGPT.
Where do strawberries grow? In fields. What does everyone know about strawberry fields? That nothing is real. Where else is nothing real? In a dream. Put it all together and what do you get? Field of Dreams. The message is clear: OpenAI is going to release an AI-powered Apple Vision Pro app that allows people to play catch with simulations of their dead fathers. It's obvious, really.
(Just today, Altman dropped a hint on X about strawberries that some people interpret as being a hint of a potential major model undergoing testing or nearing release.)
My current favorite benchmark is trying to get them to play wordle. LLMs seem completely incapable of doing that, even with the most patient and cooperative human assistant. Even a 6-year-old can play wordle better than most LLMs. Worse, LLMs, unlike 5-year-olds, can't understand that they aren't accomplishing the task. Until they can perform all cognitive tasks as well as a 6-year-old, and many as well as an adult expert, I'll be unconvinced we are approaching AGI.
LLMs are nowhere near AGI. They are nowhere near even approaching bacteria in terms of being able to interact with their environment and adapt beyond their training.
They poop out words in an order that is not always nonsensical.
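For context on the wordle benchmark described a couple of comments up: the game's entire feedback rule fits in a few lines, and the test is whether a model can actually use that feedback turn after turn. A minimal scorer, as I understand the standard rules:

```python
from collections import Counter

def score_guess(secret: str, guess: str) -> str:
    """Return wordle-style feedback: G = right letter/right spot,
    Y = right letter/wrong spot, . = letter not in the word."""
    feedback = ["."] * len(secret)
    remaining = Counter()

    # First pass: mark exact matches (green), count the unmatched secret letters.
    for i, (s, g) in enumerate(zip(secret, guess)):
        if s == g:
            feedback[i] = "G"
        else:
            remaining[s] += 1

    # Second pass: mark misplaced letters (yellow), respecting letter counts.
    for i, g in enumerate(guess):
        if feedback[i] == "." and remaining[g] > 0:
            feedback[i] = "Y"
            remaining[g] -= 1

    return "".join(feedback)

print(score_guess("crane", "reach"))  # -> "YYGY."
```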
There seem to be many similarities between Musk's "Full self-driving" and OpenAI's "AGI". I'm not expecting either any time soon.
I find irony in all these announcements that "OpenAI will definitely have magic AI results any minute now, y'all" being posted on Elon Musk's website.
I think it's because the young'uns don't yet realize that "80% of the way there!" is the easy part, and it's the last 20% that takes all the time (or proves to be impossible).
There seem to be many similarities between Musk's "Full self-driving" and OpenAI's "AGI". I'm not expecting either any time soon.