New Map of Meaning in the Brain Changes Ideas About Memory
Introduction
We often think of memory as a rerun of the past — a mental duplication of events and sensations that we’ve experienced. In the brain, that would be akin to the same patterns of neural activity getting expressed again: Remembering a person’s face, for instance, might activate the same neural patterns as the ones for seeing their face. And indeed, in some memory processes, something like this does occur.
But in recent years, researchers have repeatedly found subtle yet significant differences between visual and memory representations, with the latter showing up consistently in slightly different locations in the brain. Scientists weren’t sure what to make of this transformation: What function did it serve, and what did it mean for the nature of memory itself?
Now, they may have found an answer — in research focused on language rather than memory.
A team of neuroscientists created a semantic map of the brain that showed in remarkable detail which areas of the cortex respond to linguistic information about a wide range of concepts, from faces and places to social relationships and weather phenomena. When they compared that map to one they made showing where the brain represents categories of visual information, they observed meaningful differences between the patterns.
And those differences looked exactly like the ones reported in the studies on vision and memory.
The finding, published last October in Nature Neuroscience, suggests that in many cases, a memory isn’t a facsimile of past perceptions that gets replayed. Instead, it is more like a reconstruction of the original experience, based on its semantic content.
That insight might help to explain why memory is so often such an imperfect record of the past — and could provide a better understanding of what it really means to remember something.
A Mosaic of Meaning
The new work on semantics was completely independent of the work on memory — the two unfolding at around the same time but on opposite sides of the United States.
In 2012, Jack Gallant, a computational and cognitive neuroscientist at the University of California, Berkeley, had spent the better part of a decade developing functional MRI (magnetic resonance imaging) tools and models for studying the human visual system. Because fMRI machines can measure changes in blood flow in the brain, which track neural activity, neuroscientists often use them to study which parts of the cortex respond to different stimuli.
One of Gallant’s graduate students at the time, Alex Huth, used the Gallant lab’s cutting-edge techniques to analyze where the brain might encode different kinds of visual information. Huth, Gallant and their colleagues had participants watch hours of silent videos while inside fMRI scanners. Then, segmenting the data into records for roughly pea-size volumes of brain tissue called voxels, they analyzed the scans to determine where hundreds of objects and actions were represented across the cortex.
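Analyses like this are often framed as voxel-wise encoding models: for each voxel, a regularized regression estimates how strongly each labeled category predicts that voxel’s response. The sketch below is a minimal illustration of that general idea, not the lab’s actual pipeline; the array sizes, the ridge penalty and the random placeholder data are all assumptions standing in for real stimulus labels and fMRI recordings.

# A minimal sketch of a voxel-wise encoding analysis of the general kind described
# above (not the lab's actual pipeline). X, Y, the dimensions and the ridge penalty
# are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints, n_categories, n_voxels = 1200, 300, 5000   # made-up dimensions
X = rng.random((n_timepoints, n_categories))       # which categories appear in each scan volume
Y = rng.standard_normal((n_timepoints, n_voxels))  # measured response of each voxel

# Fit one regularized linear model per voxel: which categories predict its activity?
model = Ridge(alpha=10.0)
model.fit(X, Y)
weights = model.coef_            # shape (n_voxels, n_categories)

# A crude way to label a voxel: the category with the largest fitted weight.
preferred_category = weights.argmax(axis=1)
print(preferred_category[:10])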
They found remarkably consistent patterns in all the participants — patterns that formed a generalized map of visual meaning. It confirmed the identity of some regions of the visual cortex that were already known from earlier research, such as areas selectively responsive to faces or places. But it also turned up hundreds of other selective patches for the first time: regions that responded to images of animals, family members, indoor scenes, outdoor scenes, people in motion, and more.
Huth didn’t stop there. He and his team decided to try something similar, only this time using language instead of visual stimuli. They had people listen to hours of podcast recordings, and then assessed how their brains responded to the hundreds of concepts they’d heard in those stories. The semantic network that the researchers compiled and reported in Nature in 2016 — another patchwork map, a mosaic of meaning that tiled large swaths of the cortex — was “a really new thing” at this scale and dimensionality, Gallant said. “Nobody was looking for it.”
With these two cortical maps in hand, they realized that the studies had used some of the same participants. “It was just a happy accident,” said Huth, who is now an assistant professor of neuroscience and computer science at the University of Texas at Austin. But it cleared the way for them to ask: How were the visual and linguistic representations related?
Previous imaging studies had identified rough regions of overlap, which made sense: We humans assign labels to what we perceive in the world, so it’s fitting that our brains would combine those representations. But Huth and his colleagues took a more precise approach. They modeled what each individual voxel responded to among nearly 1,000 semantic categories found in both the video and the linguistic stimuli.
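One way to make that kind of per-voxel comparison concrete, as a rough sketch rather than the study’s actual method: if each voxel has two fitted weight vectors over the same categories, one estimated from the video data and one from the podcast data, correlating the two vectors voxel by voxel yields a simple overlap score. Everything below, from the matrices W_visual and W_language to their sizes and random contents, is a hypothetical stand-in.

# Hypothetical per-voxel comparison of category tuning across modalities.
# W_visual and W_language stand in for weights fit to video and podcast stimuli.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_categories = 5000, 1000          # "nearly 1,000" categories, illustrative
W_visual = rng.standard_normal((n_voxels, n_categories))
W_language = rng.standard_normal((n_voxels, n_categories))

def rowwise_correlation(a, b):
    """Pearson correlation between corresponding rows of two matrices."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    return (a * b).sum(axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

# High values flag voxels whose visual and linguistic tuning overlap.
tuning_similarity = rowwise_correlation(W_visual, W_language)
print(tuning_similarity.mean())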
As in the earlier research, they found evidence of overlap. But then Huth noticed something strange.
He had made a visualization of the 2016 data that allowed him to tap into each voxel to see which language-based categories it responded to. When he zoomed in on a region selective for places, he realized that only voxels at the anterior edge of the region, closest to the front of the brain, represented place words: apartment, house, car, floor, farm, California. The back part of the region didn’t represent this linguistic information at all.
“This led us to think that maybe there’s something more interesting going on here,” Huth said.
An Orderly Transition Zone
So Huth called up the data from his 2012 vision experiments and saw that in this place-selective area of the cortex, the back part responded exclusively to place-related images. When he looked in areas closer to the front, both place images and place words were represented — until, at the boundary of the region, only words evoked brain activity, just as he’d seen when he was toying around with his 2016 visualization. There seemed to be a gradual, continuous shift from visual representations of places to linguistic representations over just a couple of centimeters of cortex.
“It was surprisingly neat,” Huth said. “This was the exciting ‘aha’ moment, seeing this pattern pop out.”
To test how systematic the pattern might be, Sara Popham, then a graduate student in Gallant’s lab, developed a statistical analysis that looked for these gradients along the border of the visual cortex. They found the pattern everywhere. For every one of the hundreds of categories studied in the experiments, the representations aligned in transition zones that formed a nearly perfect ribbon around the entire visual cortex. “There’s a match between what happens behind the border and what happens in front of the border,” Gallant said.
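As a loose illustration of what such a gradient test could look like (not Popham’s actual statistic): take voxels in one category-selective patch, measure each voxel’s position along the back-to-front axis, and ask whether the balance of linguistic versus visual selectivity shifts monotonically with that position. The positions, selectivity scores and noise below are all invented for the example.

# Toy check for a posterior-to-anterior gradient in one category-selective patch.
# distance_from_back, the selectivity scores and the noise are fabricated inputs.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_voxels = 400
distance_from_back = rng.uniform(0, 20, n_voxels)   # mm along the back-to-front axis

# Fake data built so that visual selectivity fades and linguistic selectivity grows.
visual_selectivity = 1.0 - distance_from_back / 20 + rng.normal(0, 0.1, n_voxels)
language_selectivity = distance_from_back / 20 + rng.normal(0, 0.1, n_voxels)

# A strong positive rank correlation would indicate a graded visual-to-linguistic shift.
rho, p_value = spearmanr(distance_from_back, language_selectivity - visual_selectivity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")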
That alignment alone was remarkable. “It’s actually rare that we see borders and delineated regions in the brain,” said Wilma Bainbridge, a psychologist at the University of Chicago who was not involved in the study. “I haven’t really seen anything quite like this.”
The pattern was also systematic across individuals, appearing over and over in each participant. “This real boundary in the brain seems to be a general organizing principle,” said Adam Steel, a postdoctoral fellow studying perception and memory at Dartmouth College.
The finding shows how the visual cortex interfaces with the rest of the cortex through these gradients: Many parallel channels each seem to preserve meaning across different types of representations. In hierarchical models of visual processing, the brain first extracts simple features such as edges and contours, then combines those to build more complex representations. But it’s been unclear how those complex representations then become increasingly abstract. Sure, visual details might get pieced together to create an image of, say, a cat. But how does that final image get assigned to the conceptual category of “cats”?
Now, this work hints at how that progression from visual specifics to greater abstractions might start to happen at a more granular level. “We’re gluing together a part of the brain that we really understand well to another part of the brain that we barely understand at all,” Gallant said. “And what we see is that the principles of design are not really changing all that much.”
In fact, one traditional theory of brain organization posits that representations of semantic knowledge occur in a dedicated region — a hub-like command center that receives information from various systems, including perceptual ones. But the results from Gallant’s team suggest that these different networks might be too intimately intertwined to be separable. “Our understanding, our knowledge about things, is actually somewhat embedded in the perceptual systems,” said Chris Baker of the National Institute of Mental Health.
That discovery might have implications for how humans’ abstract knowledge of the world develops. Perhaps, Huth said, language-based representations are partly patterned on perceptual ones — and this alignment serves as part of a mechanism for how that might happen. The perceptual capacities of various brain regions might in effect “dictate the emergent structure of a broader conceptual space,” said Ev Fedorenko, a cognitive neuroscientist at the Massachusetts Institute of Technology. Perhaps that could even say something about the nature of meaning itself. “What is meaning?” she said. “Maybe more of it is embodied than some have argued.”
But the most intriguing thing is that this graded transition among types of representations in the cortex echoes recent findings on the relationship between perception and memory.
Records of Meanings, Not Perceptions
In 2013, Christopher Baldassano, a cognitive neuroscientist at Columbia University, found an intriguing pattern when he observed neural activity in an area known to respond selectively to places. Patterns of activity toward the back of the region were correlated with patterns that characterized a known visual network, while activity in the forward part of the region seemed to be more related to activity in a memory network instead.
This suggested that memory representations might involve not an exact reactivation but rather a subtle shift across the real estate of the cortex, to a location immediately adjacent to where the corresponding visual representation can be found.
Over the past year, several new studies — including research by Bainbridge, Baker, Steel and Caroline Robertson of Dartmouth College — have reinforced that finding by directly comparing people’s brain activity as they looked at and later recalled or imagined various images. In each case, a systematic spatial transformation marked the difference between the brain’s sensory and memory representations. And the visual representations appeared just behind the associated memory ones — just as they had in Huth’s language-based study.
Like that study, these experiments seemed to indicate that perception and memory are also deeply entangled. “It doesn’t make sense to think of our memory system as a totally separate workspace,” Baldassano said.
“A lot of people have this intuitive idea that the perceptual experience is like a roaring flame, and the memory experience is like a flickering candle,” said Brice Kuhl, a neuroscientist at the University of Oregon. But memories clearly aren’t just a weaker echo of the original experience. The physical shifts seen in these recent experiments instead suggest that systematic changes in the representations themselves encode an experience that is entirely distinct but still tethered to the original.
Huth’s work provides new insights into the nature of that transformation. Perhaps memory isn’t as visually driven as we thought. Maybe it’s more abstract, more semantic, more linguistic. “We often have this impression that we have these fantastic visual representations of things,” Baker said. “You feel like you can see it. But maybe you can’t.”
To Kuhl, that makes sense. After all, “we know that when we’re imagining something or remembering something, it’s distinct from actually seeing it,” he said. What we see in our mind’s eye might be a reinterpretation of a remembered scene or object based on its semantic content rather than a literal replay of it. “We’re so fixated on using perceptual experience as a template. But I think that has blinded us a little bit.”
To test these hypotheses, researchers are now studying people who seem unable to conjure mental images, a condition called aphantasia. Perhaps, Bainbridge said, people with aphantasia will exhibit a larger forward shift in their neural representations — one that might not dwell so much on the combined visual and semantic responses, suggesting a quicker transition to abstract thought.
“Conceptually, it really seems like the field is on to something that I think is a really new idea,” said Kuhl. “It’s shaking up our thinking.”