
Scientists Harness AI to Decode Human Cognition Through Neural Networks

Scientists are leveraging AI neural networks to predict human behavior and explore the workings of the human mind, but challenges remain in interpreting these complex models.

Neural Networks and the Human Brain

Today's AI systems, especially neural networks, differ greatly from human brains in both energy consumption and how they learn. A toddler picks up language with minimal resources, while AI models require enormous amounts of data and computational power. Nevertheless, the two share a basic ingredient: vast numbers of interconnected neurons, billions of biological ones in the human brain and simulated ones in AI. Both can produce fluent, flexible language, something no other known system does, yet how either accomplishes this remains largely mysterious.

Building Brainlike Neural Networks to Understand Cognition

Neuroscientists believe that building neural networks that mimic brain processes is a promising route to understanding cognition. Recently, the journal Nature published two studies showing how neural networks can predict the behavior of humans and other animals in psychological experiments. The results suggest that such models could advance our understanding of human and animal minds, though predicting behavior is not the same as explaining the mechanisms behind it.

The Centaur Model: A Foundation Model of Human Cognition

One study transformed Meta's Llama 3.1 large language model into "Centaur" by fine-tuning it on data from 160 psychology experiments involving tasks like slot machine choices and memory sequences. Centaur outperformed traditional psychological models in predicting human behavior. This capability could allow scientists to simulate experiments virtually before involving human participants, saving resources. Researchers propose that analyzing how Centaur replicates human behavior might lead to new cognitive theories.
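
To make the fine-tuning step concrete, the sketch below shows one common way a language model can be adapted to transcribed experiment data, using parameter-efficient fine-tuning with Hugging Face's transformers and peft libraries. The base checkpoint, dataset path, prompt format, and hyperparameters here are all illustrative assumptions, not the study's actual configuration.

```python
# Hedged sketch: adapting a causal language model to behavioral
# transcripts with LoRA (parameter-efficient fine-tuning). All names,
# paths, and hyperparameters are illustrative assumptions, not the
# values used in the Centaur study.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

BASE = "meta-llama/Llama-3.1-8B"  # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# Attach small trainable LoRA adapters to the attention projections;
# the frozen base model keeps its general language ability while the
# adapters absorb the behavioral patterns in the transcripts.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Hypothetical dataset: each record is an experiment transcribed into
# text, e.g. "You chose slot machine B and won 4 points. You chose..."
data = load_dataset("json", data_files="experiments.jsonl")["train"]
data = data.map(lambda r: tok(r["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

# Standard next-token objective: the model learns to continue each
# transcript the way the human participant actually behaved.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="centaur-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```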

Skepticism About AI Models Explaining Human Mind

Some psychologists remain skeptical about Centaur's explanatory power. However accurate its predictions, Centaur has billions of parameters, while traditional psychological models typically have only a handful, and a model that mimics behavior externally need not mirror human cognitive processes internally. Olivia Guest, a computational cognitive scientist, likens Centaur to a calculator: it can predict the answer a person will give to an arithmetic problem without revealing anything about how humans actually perform addition.

The Challenge of Understanding Complex AI Models

Extracting meaningful insights from AI models with millions of artificial neurons is difficult; researchers are still struggling to interpret today's large language models. Understanding a complex neural network that models the human mind may prove as challenging as understanding the brain itself.

Alternative: Tiny Neural Networks for Behavioral Prediction

The second Nature study explores very small neural networks, some with a single neuron, that can predict behaviors in mice, rats, monkeys, and humans. These tiny networks allow detailed tracking of individual neuron activity, facilitating the study of how behavior predictions arise. Although these models might not function exactly like biological brains, they can generate valuable hypotheses for cognitive science.
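
To illustrate just how small such a network can be, here is a hedged sketch of a recurrent model with a single hidden unit trained to predict the next choice in a two-armed bandit task, written in PyTorch. The architecture, toy data, and training setup are assumptions made for illustration, not details taken from the published study.

```python
# Hedged sketch: a tiny recurrent network (one GRU unit) that predicts
# an animal's next choice from its previous choice and reward.
# Architecture, data shapes, and training details are illustrative
# assumptions, not the study's.
import torch
import torch.nn as nn

class TinyChoiceNet(nn.Module):
    def __init__(self, hidden_size=1):            # a single recurrent unit
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden_size,
                          batch_first=True)        # input: (choice, reward)
        self.readout = nn.Linear(hidden_size, 2)   # logits over two arms

    def forward(self, x):
        h, _ = self.rnn(x)          # h: (batch, time, hidden_size)
        return self.readout(h)      # logits for the next choice

# Toy stand-in data: 64 sessions of 100 trials; each trial encodes the
# previous choice (0/1) and whether it was rewarded (0/1).
torch.manual_seed(0)
inputs = torch.randint(0, 2, (64, 100, 2)).float()
targets = torch.randint(0, 2, (64, 100))           # stand-in choices

model = TinyChoiceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(inputs)                         # (64, 100, 2)
    loss = loss_fn(logits.reshape(-1, 2), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# With one hidden unit, the network's entire internal state is a single
# number per trial, so its learned "strategy" can be read off directly.
print(model.rnn.weight_hh_l0)  # the full recurrent weight: a 3x1 matrix
```

Because the hidden state is a single number per trial, everything such a model computes can be plotted and inspected directly, which is the interpretability advantage these tiny networks offer.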

Trade-offs Between Prediction Accuracy and Interpretability

Unlike Centaur, which handles many tasks, tiny networks specialize in single tasks, limiting complexity but improving interpretability. Marcelo Mattar, who led the tiny-network study, notes that complex behaviors require large networks, which are much harder to understand. This reflects a fundamental trade-off in AI-driven science between prediction power and comprehensibility.

Progress and Ongoing Challenges

Efforts like Mattar's tiny networks and interpretability research at organizations like Anthropic are gradually bridging the gap between prediction and understanding. However, our grasp of complex systems—from human cognition to climate and proteins—still lags behind our predictive capabilities.

This article originally appeared in The Algorithm newsletter.
