Why can't I remember? Model shows how recall can fail

[Image credit: Serdar Acar / EyeEm]

Physicists like to build serious mathematical models of things far removed from physics, such as biology or the human brain. These models are often laughably oversimplified, but I'm still a sucker for them because of the hope they offer: perhaps a simple mathematical model can explain the disinterested panda's sexual choices? (And yes, I know there is an XKCD on this topic.) So a bunch of physicists claiming to have found a fundamental law of memory recall was catnip to me.

To get a sense of how interesting their work is, it helps to understand the unwritten rules of "simple models for biology." First, the model must be so general that its predictions are vague and unsatisfying. Second, if you have to compare with experimental data, do it on a logarithmic scale so that the huge differences between theory and experiment at least look small. Third, if at all possible, make the mathematical model so abstract that it loses all connection to the actual biology.

By breaking all these rules, a group of physicists has come up with a model of recall that actually seems to work. The model is built on a concrete idea of how recall works, and with virtually no fine-tuning whatsoever, it provides a pretty good prediction of how well people will remember items from a list.

Put your model on the catwalk

It is generally accepted that memories are encoded in networks of neurons. We know that people have a remarkable ability to remember events, words, people, and many other things. Yet some aspects of remembering are terrible. I'm notorious for forgetting the names of people I've known for a decade or more.

We fail at even simpler challenges. Given a list of words, for example, most people will not remember the whole list. And something remarkable happens in the process. Most people start by recalling words from the list. At some point, they circle back and repeat a word they've already said. Each time this happens, there is a chance it will trigger another new word; alternatively, the recall can start looping through words that have already been said. The more often a person loops back, the greater the chance that no new words will be recalled.

Based on these observations, the researchers created a model based on similarity. Each memory is stored in a different but overlapping network of neurons. Recall jumps from a starting point to the item whose network overlaps most with the current one. The one exception: the recall process suppresses a jump straight back to the item that was just recalled, which would otherwise always have the largest overlap.

Under these simple rules, recall follows a trajectory that loops back on itself at random intervals. However, if recall were completely deterministic, the first loop back to a word that had already been recalled would result in endless repetition of the same few items. To avoid this, the model is probabilistic, not deterministic: there is always some chance of jumping to a new word and escaping a loop.

Putting all this together, the researchers show that, given a list of known length, the model predicts the average number of items that will be recalled. There is no tuning here at all: take the model above, work out its consequences, and you get a fixed relationship between the length of the list and the number of recalled items. That's pretty amazing. But is it true?

Experiments are messy

At first glance, some experiments immediately contradict the researchers' model. For example, if subjects get more time to look at each word on the list, they remember more words. Likewise, age and many other details affect recall.

But the researchers point out that their model assumes every word on the list is actually stored in memory. In reality, people get distracted. They may miss words entirely or simply fail to store the words they see. This means the model will always overestimate the number of words that can be recalled.

To account for this, the researchers ran a second set of experiments: recognition tests. Some subjects took a standard recall test: they were shown a list of words one at a time and had to recall as many as possible. Other subjects were shown a list of words one at a time, then shown words in a random order and asked to pick out which ones had been on the list.

The researchers then used their measured recognition data to set the total number of words stored. With this limit, the agreement between their theoretical calculations and experiments is remarkable. The data appears to be independent of all parameters except the list length, just as the model predicts.
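The correction is a simple one and can be sketched in a few lines. Two caveats: the square-root form sqrt(3·pi·M/2) for the predicted recall from M stored items is my recollection of the paper's asymptotic result, and the 75% recognition figure is a made-up illustration — treat both as assumptions, not reported values.

```python
import math

def predicted_recall(stored_items):
    # Asymptotic prediction I attribute to the paper: average recall
    # grows like sqrt(3 * pi * M / 2) in the number M of *stored* items.
    # The exact constant is an assumption here.
    return math.sqrt(3 * math.pi * stored_items / 2)

list_length = 64
recognized_fraction = 0.75          # hypothetical recognition-test result
stored = recognized_fraction * list_length

print(round(predicted_recall(list_length), 1))  # naive: assumes all 64 stored -> 17.4
print(round(predicted_recall(stored), 1))       # corrected by recognition data -> 15.0
```

The point of the correction: the model's input is the number of words actually stored, and the recognition test is what pins that number down, leaving no free parameters to tune.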

The result also seems to tell us that the variation seen in earlier experiments lies not in recall itself, but in memorization.

A delicate point

So what does the model tell us? It may provide some insight into the actual recall mechanisms. It may also point to how we can construct and predict the behavior of neural network-based memories. But (and maybe this is my lack of imagination) I can’t see how you would actually use the model beyond what it already tells us.

Physical Review Letters, 2020. DOI: 10.1103/PhysRevLett.124.018101

By akfire1
