AI Breakthrough Leaves ChatGPT In The Dust With Human-Like Cognitive Capabilities

Image credit: Pixabay
If you follow technology news much at all, you're probably familiar with the latest iteration of the GPT line of AI models, known simply as GPT-4. At its heart, GPT-4 is a large language model with billions of parameters, able to converse in many languages on almost any topic. GPT-4 is smart enough to ace the SAT, pass the bar exam, and even score well on the MCAT.

GPT-4 might come off as incredibly intelligent in some ways, but in others it's still just as limited as most contemporary neural networks. A big one is what researchers call "systematic generalization": the ability to take a small piece of new knowledge, slot it into an existing mental framework, and generalize from it. A key example is the way humans can hear a new word, derive what it means from context, and then begin using it correctly.

Humans are extremely good at systematic generalization. Neural networks, however, are almost entirely incapable of it. As smart as AIs like GPT might seem, they have no real understanding of the things they've been trained on; they typically need hundreds or thousands of examples before they begin to grasp a new word or concept. Because of this limitation, some researchers have argued that neural networks can't really be considered a model of human thought, since systematic generalization is so core to the human experience.

few shot instruction learning task
A summary of the instruction-learning task given to the human participants.

Well, it turns out that you actually can get a neural network to perform systematic generalization; you just have to specifically teach it to do so. Brenden Lake, a cognitive computational scientist at New York University, has just co-authored a paper detailing a new guided training approach called "meta-learning for compositionality" (MLC) that appears to give networks trained this way "human-like generalization."

It goes like this: if you want to teach systematic generalization to an AI, first you need to figure out how to model it in humans. The researchers created an experimental paradigm that asks human participants to "process instructions in a pseudolanguage in order to generate abstract outputs." Specifically, participants were given 14 study instructions (questions paired with answers) and then asked to extrapolate from those to ten more questions. The task required them to learn the meaning of gibberish words quickly from just a few examples, and then reuse those words correctly in new combinations.
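To make the setup concrete, here is a toy sketch of what such a task might look like. The pseudowords, output symbols, and pairings below are invented for illustration and are not the actual items from the study; the point is only the structure: a handful of solved study instructions followed by queries that recombine the same words.

```python
# Illustrative shape of the instruction-learning task. The pseudowords, output
# symbols, and rules below are made up for this sketch; they are not the
# actual items shown to participants. The structure is the point: 14 solved
# study instructions, then 10 query instructions that recombine the same words.
study_instructions = [
    ("dax",             ["RED"]),
    ("wif",             ["GREEN"]),
    ("dax fep",         ["RED", "RED", "RED"]),      # "fep" behaves like "repeat three times"
    ("wif blicket dax", ["GREEN", "RED", "GREEN"]),  # "blicket" interleaves its neighbors
    # ...ten more study pairs, 14 in total
]

query_instructions = [
    "wif fep",            # never shown during study, but solvable by composing learned words
    "dax blicket wif",
    # ...eight more queries, 10 in total
]
```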

Humans performed this task correctly 80.7% of the time, but GPT-4, arguably the current gold standard among large language models, got it wrong "between 42% and 86% of the time," with the variance depending on exactly how the researchers presented the task. Lake notes that "it's not magic, it's practice; much like a child also gets practice when learning their native language, the models improve their compositional skills through a series of compositional learning tasks."

meta learning episode
An example meta-learning episode and how the MLC AI processes it.


By using this same training method with a neural network, the researchers were able to get an AI to produce approximately the same level of results as a human. That's a remarkable result, because nothing like it had been demonstrated before. Paul Smolensky, a cognitive scientist at Johns Hopkins University, has stated that the network's performance on this generalization task indicates "a breakthrough in the ability to train networks to be systematic."
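Below is a minimal, hypothetical sketch of what episode-based training in this spirit could look like in PyTorch. The grammar, vocabulary, and tiny GRU reader are assumptions for illustration, not the architecture or data from the paper; the essential idea is that every episode re-samples which pseudoword means what, so the only way for the network to reduce its loss is to infer the current mapping from the in-episode study examples and compose it on the query.

```python
# Hypothetical sketch of episode-based "meta-learning for compositionality"-style
# training. Vocabulary, grammar, and model are invented for illustration; they
# are not the paper's actual setup.
import random
import torch
import torch.nn as nn

PRIMS = ["dax", "wif", "lug", "zup"]        # hypothetical primitive pseudowords
SYMS  = ["RED", "GREEN", "BLUE", "YELLOW"]  # hypothetical output symbols
FUNCS = {"twice": 2, "thrice": 3}           # hypothetical modifier words
MAX_OUT = 3                                 # longest output sequence allowed

IN_VOCAB  = {t: i for i, t in enumerate(PRIMS + list(FUNCS) + SYMS + ["->", "|", "?"])}
OUT_VOCAB = {s: i for i, s in enumerate(SYMS + ["<eos>"])}

def sample_episode(n_study=6):
    """One episode: a fresh word->symbol assignment, solved study pairs,
    and one query whose answer the model must produce."""
    meaning = dict(zip(PRIMS, random.sample(SYMS, len(SYMS))))

    def make_pair():
        w = random.choice(PRIMS)
        if random.random() < 0.5:
            f = random.choice(list(FUNCS))
            return [w, f], [meaning[w]] * FUNCS[f]
        return [w], [meaning[w]]

    study = [make_pair() for _ in range(n_study)]
    q_in, q_out = make_pair()

    # Serialize the study pairs plus the query into one token stream.
    toks = []
    for i_toks, o_toks in study:
        toks += i_toks + ["->"] + o_toks + ["|"]
    toks += q_in + ["?"]

    x = torch.tensor([IN_VOCAB[t] for t in toks])
    y = torch.tensor([OUT_VOCAB[s] for s in q_out]
                     + [OUT_VOCAB["<eos>"]] * (MAX_OUT - len(q_out)))
    return x, y

class EpisodeReader(nn.Module):
    """Tiny GRU reader: encodes the whole episode, emits MAX_OUT output tokens."""
    def __init__(self, d=64):
        super().__init__()
        self.emb  = nn.Embedding(len(IN_VOCAB), d)
        self.rnn  = nn.GRU(d, d, batch_first=True)
        self.head = nn.Linear(d, MAX_OUT * len(OUT_VOCAB))

    def forward(self, x):
        _, h = self.rnn(self.emb(x.unsqueeze(0)))   # final hidden state, shape (1, 1, d)
        return self.head(h[-1]).view(MAX_OUT, len(OUT_VOCAB))

model   = EpisodeReader()
opt     = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(5000):            # meta-training: a new word mapping every episode
    x, y = sample_episode()
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the word-to-meaning assignment changes on every episode, memorizing fixed associations is useless; across thousands of episodes the network is pushed toward the general skill of reading a few examples and composing them, which is the kind of "practice" Lake describes.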

So what does this mean? Are we on the verge of another AI revolution? Well, maybe, but it's not clear. Lake's AI became very good at generalizing in the specific tasks it had been trained on, but it's not a given that this same technique can be applied to large language models or computer vision models. These models have gigantic input data sets, and training them takes a very long time even when using hyperscale supercomputers, so it may be a bit before we see the fruits of these findings—assuming they even apply at all to larger models.

However, if it is possible, this breakthrough could have two massive benefits for systems like ChatGPT. First, it could slash the incidence of hallucinations, where the AI makes up things that don't exist or don't make sense; a model that functions systematically has an easier time discarding patterns that aren't actually there. Second, it would allow these models to be trained far more efficiently, since a model that can generalize systematically needs drastically less training data.