Why are some photos more memorable than others?

Abstract: A new study shows that the brain prefers to remember images that are difficult to explain. The researchers used a computational model and behavioral experiments to show that scenes that were harder for the model to reconstruct were more memorable for participants.

This finding helps explain why some visual experiences stick in our memory. The study could also inform the development of AI memory systems.

Important facts:

  • Memory formation: The brain preferentially remembers images that are difficult to interpret or explain.
  • Computational model: A model of visual signal compression and reconstruction was used.
  • Implications for AI: The insights could help create more efficient memory systems for artificial intelligence.

Source: Yale

The human brain filters through a flood of experiences to create specific memories. Why do some experiences become “memorable” in this flood of sensory information, while most are discarded by the brain?

A computational model and behavioral study developed by Yale scientists offer a new clue to this age-old question, the researchers report in the journal Nature Human Behaviour.

“The brain prefers to remember things it can't explain well,” said Ilker Yildirim, assistant professor of psychology in Yale's Faculty of Arts and Sciences and senior author of the paper. “If a scene is predictable and not surprising, it can be ignored.”

For example, a person may be briefly confused by the presence of a fire hydrant in a remote natural setting, making the image more difficult to interpret, and therefore more memorable. “Our study explored the question of what visual information is memorable by combining a computational model of scene complexity with behavioral studies,” Yildirim said.

For the study, which was led by Yildirim and John Lafferty, the John C. Malone Professor of Statistics and Data Science, the researchers developed a computational model that focused on two stages of memory formation: the compression of visual signals and their subsequent reconstruction.

Based on this model, they designed a series of experiments in which people were asked whether they could remember specific images from a sequence of natural images that were shown in rapid succession. The Yale team found that the more difficult it was for the computational model to reconstruct an image, the more likely participants were to remember that image.
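To make the two-stage idea concrete, here is a minimal, illustrative sketch in Python. It is not the authors' code: the embeddings are random stand-ins for image features, and the dictionary size and sparsity level are arbitrary assumptions. It compresses each image's feature embedding with sparse coding and treats the per-image reconstruction error as the memorability signal the study describes.

```python
# Minimal sketch (not the authors' implementation) of the core idea:
# compress image feature embeddings with sparse coding, then use the
# per-image reconstruction error as a predictor of memorability.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Hypothetical stand-in: 200 images, each with a 64-dim feature
# embedding (in practice these might come from a vision network).
embeddings = rng.standard_normal((200, 64))

# Stage 1 (compression): learn a sparse dictionary and encode each
# embedding with at most 5 active components.
coder = DictionaryLearning(n_components=32,
                           transform_algorithm="omp",
                           transform_n_nonzero_coefs=5,
                           random_state=0)
codes = coder.fit_transform(embeddings)

# Stage 2 (reconstruction): decode the sparse codes and measure the
# residual for each image.
reconstructions = codes @ coder.components_
reconstruction_error = np.linalg.norm(embeddings - reconstructions, axis=1)

# The study's prediction: images with larger residuals (harder for the
# model to reconstruct) should be remembered more often, so this error
# should correlate positively with participants' hit rates.
print(reconstruction_error[:5])
```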

“We tried to shed light on how people perceive scenes using an AI model, an understanding that could help AI develop more efficient memory systems in the future,” said Lafferty, who is also director of the Center for Neurocomputation and Machine Intelligence at Yale's Wu Tsai Institute.

Former Yale graduate students Qiu Lin (psychology) and Zifen Lin (statistics and data science) are co-first authors of the paper.

About this visual memory research news

Author: Bill Hathaway
Source: Yale
Contact: Bill Hathaway – Yale
Image: The image is credited to Neuroscience News.

Original Research: Closed access.
“Images with difficult-to-reconstruct visual representations leave strong memory traces” by Ilker Yildirim et al. Nature Human Behaviour


Abstract

Images with difficult-to-reconstruct visual representations leave strong memory traces.

Much of what we remember is not due to conscious choice, but simply a byproduct of perception.

This raises a fundamental question about the architecture of the mind: How does perception connect with and influence memory?

Here, inspired by a classic proposal from level-of-processing theory that relates perceptual processing to memory durability, we present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals of this model predict how well images are encoded into memory.

In an open memorability dataset of scene images, we show that reconstruction error explains not only memory accuracy but also response latencies during retrieval, in the latter case accounting for all of the variance explained by powerful vision-only models. We also confirm the predictions of this account with 'model-driven psychophysics'.

This work establishes reconstruction error as an important signal interfacing perception and memory, possibly through adaptive modulation of perceptual processing.
