Brain-inspired AI learns like humans.


Abstract: Today's AI can read, talk, and analyze data, but it still has significant limitations. NeuroAI researchers have designed a new AI model inspired by the efficiency of the human brain.

This model allows AI neurons to receive feedback and adjust in real time, enhancing learning and memory processes. This innovation could lead to a new generation of more efficient and accessible AI, bringing AI and neuroscience closer together.

Important facts:

  1. Inspired by the brain: The new AI model is based on how the human brain efficiently processes and adapts to data.
  2. Real-time adjustment: AI neurons can receive feedback and adjust on the fly, improving efficiency.
  3. Potential impact: This breakthrough could lead to a new generation of AI that learns like humans do, advancing both the AI and neuroscience fields.

Source: CSHL

It reads. It speaks. It combines mountains of data and recommends business decisions. Today's artificial intelligence can seem more human than ever before. However, AI still has several significant shortcomings.

“As impressive as ChatGPT and all these existing AI technologies are, in terms of interacting with the physical world, they're still very limited. Even in things like solving math problems or writing an essay, they take billions and billions of training examples before they can do them well,” explains Cold Spring Harbor Laboratory (CSHL) NeuroAI scholar Kyle Daruwalla.

Daruwalla is exploring new, unconventional ways to design AI that can overcome such computational constraints. And he might just have found one.


The key lay in how data moves. Most of the energy consumed by modern computing comes from shuttling data around. In artificial neural networks, which contain billions of connections, data can have to travel very long distances.

So, to find a solution, Daruwalla looked for inspiration in one of the most computationally powerful and energy-efficient machines in existence: the human brain.

Daruwalla designed a new way for AI algorithms to move and process data more efficiently, based on how our brains process new information. The design allows individual AI “neurons” to receive feedback and adjust on the fly rather than waiting for the entire circuit to update at once. Thus, data does not have to travel far and is processed in real time.

“In our brains, our connections are changing and adjusting all the time,” says Daruwalla. “It's not like you stop everything, adjust, and then go back to being you.”
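The on-the-fly, per-neuron adjustment described above can be sketched in a few lines. This is a minimal illustration and not the study's actual algorithm: the layer sizes, the `local_hebbian_step` helper, and the scalar `feedback` signal are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network; shapes are illustrative only.
W1 = rng.normal(scale=0.1, size=(16, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 4))    # hidden -> output weights
lr = 0.01

def local_hebbian_step(W, pre, post, feedback):
    """Adjust one layer immediately from its own activity plus a
    feedback scalar -- no waiting for a full backward pass."""
    return W + lr * feedback * np.outer(pre, post)

x = rng.normal(size=16)
h = np.tanh(x @ W1)          # hidden activity
y = np.tanh(h @ W2)          # output activity

feedback = 1.0               # stand-in for a task-derived feedback signal
# Each layer updates on the fly, using only locally available signals.
W1 = local_hebbian_step(W1, x, h, feedback)
W2 = local_hebbian_step(W2, h, y, feedback)
```

Because each call to `local_hebbian_step` uses only the activity entering and leaving that one layer, no error signal has to travel back across the entire network.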

A new machine learning model provides evidence for a yet unproven theory that links working memory with learning and academic performance. Working memory is the cognitive system that enables us to stay on task by recalling stored knowledge and experiences.

“There are theories in neuroscience about how working memory circuits could help facilitate learning. But there's nothing as concrete as our rule that actually ties the two together.

“And so that was one of the cool things we stumbled into here. The theory led to a rule where adjusting each synapse individually required working memory to sit alongside it,” says Daruwalla.

Daruwalla's design could help usher in a new generation of AI that learns like we do. Not only would this make AI more efficient and accessible, but it would also be somewhat of a full-circle moment for neuroAI. Neuroscience has been providing AI with valuable data since long before ChatGPT uttered its first digital syllable. Soon, it looks like AI may return the favor.

About this artificial intelligence research news

Author: Sara Giarneri
Source: CSHL
Contact: Sara Giarneri – CSHL
Image: This image is credited to Neuroscience News.

Original research: Open access.
“Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates” by Kyle Daruwalla et al. Frontiers in Computational Neuroscience


Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates

Deep neural feedforward networks are effective models for a wide array of problems, but training and deploying such networks presents a significant energy cost. Spiking neural networks (SNNs), modeled after biologically realistic neurons, offer a potential solution when deployed correctly on neuromorphic computing hardware.

Still, many applications train SNNs offline, and running network training directly on neuromorphic hardware remains an open research problem. The main obstacle is that backpropagation, which makes training such artificial deep networks possible, is biologically implausible.

Neuroscientists are uncertain about how the brain would propagate a correct error signal backwards through a network of neurons. Recent developments address part of this question, such as the problem of weight transport, but a complete solution remains elusive.

In contrast, novel information bottleneck (IB)-based learning rules train each layer of the network independently, avoiding the need to propagate errors across layers. Instead, propagation is implicit due to the layers' feedforward connectivity.

These rules take the form of a three-factor Hebbian update, with a global error signal modulating local synaptic updates within each layer. Unfortunately, computing the global signal for a given layer requires processing multiple samples simultaneously, whereas the brain sees only one sample at a time.
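A three-factor update of the kind described here can be sketched schematically. The layer sizes are made up, and the toy batch statistic below stands in for the paper's actual IB-derived global signal; the point is only the structure of the rule and its reliance on a whole batch of samples at once:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.05

# Illustrative layer: 6 presynaptic, 3 postsynaptic units (made-up sizes).
W = rng.normal(scale=0.1, size=(6, 3))
batch = rng.normal(size=(32, 6))     # the rule needs many samples at once

pre = batch
post = np.tanh(pre @ W)

# Third factor: one scalar computed across the WHOLE batch
# (a toy stand-in for the layer's IB-derived error term).
global_signal = float(np.mean(post ** 2))

# Three-factor Hebbian update: pre activity x post activity x global signal,
# averaged over the batch -- every sample must be available simultaneously.
delta_W = eta * global_signal * (pre.T @ post) / len(batch)
W = W + delta_W
```

The biological difficulty is visible in the code: `global_signal` cannot be computed until all 32 samples have been seen, while a brain receives samples one at a time.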

We propose a new three-factor update rule in which the global signal accurately captures the information in the samples via an auxiliary memory network. The auxiliary network can be trained a priori, independently of the dataset used with the primary network.
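One way to picture the auxiliary memory's role is to replace the batch-wide statistic with a running trace that accumulates across samples. The exponential trace below is a deliberate simplification of the paper's trained memory network, used only to show how a stored summary lets each synapse update after every single sample:

```python
import numpy as np

rng = np.random.default_rng(2)
eta, decay = 0.05, 0.9

# Illustrative layer (made-up sizes: 6 presynaptic, 3 postsynaptic units).
W = rng.normal(scale=0.1, size=(6, 3))
memory_trace = 0.0   # scalar trace standing in for the auxiliary memory network

for _ in range(32):                  # samples arrive one at a time
    pre = rng.normal(size=6)
    post = np.tanh(pre @ W)
    # The memory accumulates what the batch statistic used to provide.
    memory_trace = decay * memory_trace + (1 - decay) * float(np.mean(post ** 2))
    # Same three-factor form, but the global signal is read from memory,
    # so each synapse can update immediately after every sample.
    W = W + eta * memory_trace * np.outer(pre, post)
```

This is the sense in which the rule ties working memory to synaptic updates: the update for each sample depends on a stored summary of past activity, not on having the whole batch in hand.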

We compare against baselines on image classification tasks. Interestingly, unlike schemes such as backpropagation, where there is no link between learning and memory, our rule posits a direct link between working memory and synaptic updates. To the best of our knowledge, this is the first rule to make this link explicit.

We explore these implications in preliminary experiments examining the effect of memory capacity on learning performance. Moving forward, this work presents an alternative view of learning, where each layer balances memory-informed compression against task performance.

This approach naturally incorporates several important aspects of neural computation, including memory, performance, and locality.

