Cross-Pollination Between Neuroscience, Psychology, and AI Research Provides Fundamental Understanding of Thought

Progress in artificial intelligence has enabled the creation of AIs that perform tasks once thought possible only for humans, such as translating languages, driving cars, playing board games at world-champion level and predicting protein structures. However, each of these AIs has been designed and trained exhaustively for a single task and can learn only what is necessary for that specific task.

Recent AIs that produce fluent text, including in conversations with humans, and generate impressive and original art can give the false impression of a mind at work. But even these are specialized systems that perform narrowly defined tasks and require massive amounts of training.

It remains a daunting challenge to combine multiple AIs into a single system that can learn and perform many different tasks, let alone cover the full range of tasks performed by humans or take advantage of the range of experiences available to humans that reduce the amount of data otherwise needed to learn how to perform those tasks. The best current AIs in this respect, such as AlphaZero and Gato, can handle a variety of tasks that fit a single mold, such as game playing. Artificial general intelligence (AGI) that is capable of a wide range of tasks remains elusive.

Ultimately, AGIs need to be able to interact effectively with each other and with people in a variety of physical environments and social contexts, to integrate the wide varieties of skills and knowledge needed to do so, and to learn flexibly and efficiently from those interactions.

Building AGIs amounts to building artificial minds, albeit greatly simplified compared with human minds. And to build an artificial mind, you have to start with a model of cognition.

This robot, powered by an AI called Rosie, learned how to solve this puzzle thanks to a human who communicated with the robot using natural language.
James Kirk, CC BY-ND

From human to Artificial General Intelligence

Humans have an almost limitless set of skills and knowledge, and they learn new information quickly without needing to be redesigned to do so. It is conceivable that an AGI could be built using an approach fundamentally different from human intelligence. However, as three researchers in AI and cognitive science, our approach is to draw inspiration from the structure of the human mind. We work toward AGI by trying to better understand the human mind, and toward better understanding the human mind by working on AGI.

From research in neuroscience, cognitive science, and psychology, we know that the human brain is neither a huge, homogeneous collection of neurons nor a massive set of task-specific programs that each solve a single problem. Instead, it’s a set of regions with different properties that support the basic cognitive abilities that together form the human mind.

These abilities include perception and action; short-term memory for what is relevant in the current situation; long-term memories for skills, experience and knowledge; reasoning and decision making; emotion and motivation; and acquiring new skills and knowledge from the full range of what a person perceives and experiences.

Instead of focusing on specific abilities in isolation, the AI pioneer Allen Newell suggested in 1990 developing unified theories of cognition that integrate all aspects of human thought. Researchers have been able to build software programs called cognitive architectures that embody such theories, allowing the theories to be tested and refined.

Cognitive architectures are rooted in multiple scientific fields with distinct perspectives. Neuroscience focuses on the organization of the human brain, cognitive psychology on human behavior in controlled experiments, and artificial intelligence on useful abilities.

The common model of cognition

We participated in the development of three cognitive architectures: ACT-R, Soar and Sigma. Other researchers have pursued alternative approaches; one survey identified nearly 50 active cognitive architectures. This proliferation of architectures is partly a direct reflection of the multiple perspectives involved, and partly an exploration of a wide range of potential solutions. Whatever the cause, it raises awkward questions, both scientifically and in terms of finding a consistent path to AGI.

Fortunately, this proliferation has brought the field to a major inflection point. All three of us have identified a striking convergence among the architectures, reflecting a combination of neural, behavioral and computational studies. In response, we initiated a community-wide effort to capture this convergence in a manner similar to the Standard Model of particle physics that emerged in the second half of the 20th century.

The Common Model of Cognition both explains human thought and provides a blueprint for true artificial intelligence.
Andrea Stocco, CC BY-ND

This Common Model of Cognition divides humanlike thought into multiple modules, with a short-term memory module at the center of the model. The other modules – perception, action, skills and knowledge – interact through it.
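This hub-and-spoke arrangement can be sketched in code. The following is a deliberately minimal, hypothetical illustration – the class and method names are our own, not from ACT-R, Soar, Sigma or any real cognitive-architecture codebase – showing modules that never talk to each other directly, only through a shared short-term (working) memory:

```python
# Hypothetical sketch of the Common Model's structure: independent
# modules that interact only through a central short-term memory.
# All names are illustrative, not taken from any real architecture.

class WorkingMemory:
    """Central short-term memory: the only channel between modules."""
    def __init__(self):
        self.buffer = {}

    def write(self, key, value):
        self.buffer[key] = value

    def read(self, key):
        return self.buffer.get(key)

class PerceptionModule:
    def step(self, wm, stimulus):
        # Perception posts what it senses into working memory.
        wm.write("percept", stimulus)

class LongTermMemoryModule:
    def __init__(self, facts):
        self.facts = facts  # declarative knowledge store

    def step(self, wm):
        # Retrieval is cued by the current contents of working memory.
        cue = wm.read("percept")
        if cue in self.facts:
            wm.write("retrieved", self.facts[cue])

class ActionModule:
    def step(self, wm):
        # Action reads from working memory, never from other modules.
        return wm.read("retrieved")

wm = WorkingMemory()
perception = PerceptionModule()
ltm = LongTermMemoryModule({"red light": "stop"})
action = ActionModule()

perception.step(wm, "red light")
ltm.step(wm)
print(action.step(wm))  # -> stop
```

The key design choice mirrored here is that working memory is the shared bottleneck: each module reads and writes the buffer but has no reference to any other module.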

Learning, rather than happening intentionally, occurs automatically as a side effect of processing. In other words, you don't decide what gets stored in long-term memory. Instead, the architecture determines what is learned based on whatever you think about. This can yield learning of new facts you are exposed to or new skills you attempt. It can also yield improvements to existing facts and skills.

The modules themselves operate in parallel – for example, allowing you to remember something while listening and looking around. The computations within each module are massively parallel, meaning many small computational steps happen at the same time. For example, in retrieving a relevant fact from a vast store of past experiences, the long-term memory module can determine the relevance of all known facts simultaneously, in a single step.
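That single-step retrieval idea can be illustrated with a toy example. This sketch is hypothetical (the scoring rule and names are ours, not from any real architecture): every stored fact is scored against a cue in one pass, and the best match is returned, rather than searching facts one at a time:

```python
# Illustrative sketch: long-term retrieval as one parallel-style
# operation that scores every stored fact against a cue at once,
# then returns the most relevant one. The word-overlap score is a
# stand-in for whatever relevance computation a real module uses.

def retrieve(facts, cue):
    cue_words = set(cue.split())
    # Score all facts "simultaneously" (here, one vectorized pass)...
    scores = [len(cue_words & set(fact.split())) for fact in facts]
    # ...and pick the most relevant one in a single step.
    best = max(range(len(facts)), key=lambda i: scores[i])
    return facts[best]

facts = [
    "Paris is the capital of France",
    "water boils at 100 degrees Celsius",
    "the Eiffel Tower is in Paris",
]
print(retrieve(facts, "capital of France"))
# -> Paris is the capital of France
```

On parallel hardware (or a neural network) the scoring of all facts really would happen at once; the list comprehension here just stands in for that one-step, whole-store comparison.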

Paving the way to Artificial General Intelligence

The common model is based on the current consensus in research on cognitive architectures and has the potential to guide research on natural and artificial general intelligence. When used to model communication patterns in the brain, the common model yields more accurate results than leading neuroscience models. This expands its ability to model humans – the only system proven capable of general intelligence – beyond cognitive considerations to include the organization of the brain itself.

We are starting to see efforts to link existing cognitive architectures to the common model and to use it as a reference for new work – for example, an interactive AI designed to coach people toward better health behavior. One of us participated in the development of an AI based on Soar, dubbed Rosie, which learns new tasks from English instructions given by human teachers. It has learned 60 different puzzles and games and can transfer what it learns from one game to another. It has also learned to control a mobile robot for tasks such as picking up and delivering packages and patrolling buildings.

Rosie is just one example of how to build an AI that approaches AGI via a cognitive architecture well characterized by the common model. In this case, the AI automatically acquires new skills and knowledge during general reasoning that combines natural-language instruction from humans with minimal experience – in other words, an AI that works more like a human mind than today's AIs, which learn via brute-force computing power and massive amounts of data.

From a broader AGI perspective, we see the common model both as a guide in the development of such architectures and AIs, and as a way to integrate the ideas derived from these attempts into a consensus that ultimately leads to AGI.
