Having followed AI off and on since the Prolog/Society of Mind days, I've never understood how a scaled-up LLM is supposed to make the leap to AGI. Now, I'm not an AI researcher or scientist, but perhaps the answer is "it's not".
Same. I don't even understand what the underpinnings of AGI are supposed to look like.
I work with LLMs and get them to some extent, but more LLM is still just more math, vectors ... output. More LLM is just more LLM: not necessarily magically different, and not necessarily with any fewer fundamental shortcomings.