Turning Face Perception on Its Head
David Leopold is devising new ways to study how our brains process social cues.
Humans have evolved with other people in mind. Our brains are built to perceive and interpret social signals, like body language and facial expressions. The ability to recognize the latter is an adaptation millions of years in the making and one that makes us so inclined to spot faces that we do it even when there isn’t one — just try looking at an electrical outlet without seeing a surprised emoji staring back.
“We're really good at trying to read people. It's so intuitive to us, we don't even spend much time thinking about it,” says neuroscientist David Leopold, whose lab works to understand how the brain sorts through a constant flurry of visual input from our surroundings to instantly perceive faces and register the non-verbal behaviors and emotional cues they communicate.
By warning our primitive ancestors of enemies and strengthening their social bonds with allies, facial recognition became a cornerstone of our survival and consequently ingrained in our biology. Yet it remains unclear how the neural networks necessary for face recognition form in the brain: do they need life experiences to take shape, or are they pre-programmed, ready to seek out faces from birth?
“I’ll give the usual unsatisfying answer that it’s both,” says Dr. Leopold. “The human brain is receptive and malleable, but it’s also programmed to pursue certain instinctive behaviors that gather important information and help shape our sensory pathways. The predilection to look at faces from birth is an example. You’re driven through hardwired neural circuitry to direct your gaze towards mom’s face and to be interested in other faces increasingly over time.”
In other words, our brains are born with a rough sketch of what constitutes a face, but we are also driven by an innate desire to fill in the finer details through our interactions with the world around us. In this way, the curiosity we observe in infants and young children might be the brain striving to fine-tune itself. Thus, the more information we gather through our experiences with faces, the more we prune and strengthen the dedicated neural circuitry that makes recognizing faces so automatic and precise.
“We add more and more sensitivity to the specialized circuits for understanding nuance, and that leads to the perception of more and more subtle things,” Dr. Leopold says.
He is devising new ways to study how the brain establishes those neural connections. Traditional experimental paradigms that present subjects with static images of faces and objects and measure the resulting patterns of brain activity reveal discrete patches of the brain dedicated to face recognition. However, when static images become moving pictures, the brain activation “looks totally different,” Dr. Leopold says. By showing experimental subjects videos of social interactions rather than still images, an approach more similar to natural vision, Dr. Leopold’s team has revealed that the brain’s organization is not as clear-cut as expected.
Using a multi-level approach that pairs functional magnetic resonance imaging (fMRI) to broadly measure activity across the brain with localized electrophysiology to measure the firing of individual neurons, Dr. Leopold’s lab has observed that neighboring neurons in the same region take part in vastly different brain networks during viewing of the same video.
“When we play the videos and look at the face-recognition areas of the brain, we find that neural responses don’t just depend on seeing faces; they are informed by other visual elements in a scene, like spatial layout and movement,” explains Dr. Leopold.
This diversity of responses challenges the prevailing views about the functional segregation in the brain and underscores the importance of studying it in conditions that resemble the way we naturally experience the world. Doing so could provide a better understanding of how our brains naturally process visual cues to extract social information and possibly shed light on how deficits in social perception develop in psychiatric disorders, like autism and schizophrenia. It also bears on higher-order cognitive functions, like using vision to predict others’ actions and plan accordingly.
“Primates, including humans, spend a lot of time sitting at a distance observing each other and trying to understand relationships and what will happen next,” says Dr. Leopold. “There's a whole level of visually guided interpretation and planning that is built into the primate brain that fundamentally shapes human cognition in a way that I think is not to be underestimated.”
As neuroscientists continue to investigate the intricacies of the mind, Dr. Leopold’s discoveries may lead them to reimagine their approach, not just with respect to methodology, but in the overarching ways they think about the brain. The analogy that likens the brain to a computer, for example, may need some updating.
“Computers have circuits, processing power, memory storage, and things like that, but biology thinks in a different way,” he says. “Brain biology is about evolution, development, genetics, ecology, and relationships with other species. The computer metaphor omits so many critical perspectives on the brain.”
Dr. Leopold’s openness to course-correct and consider new avenues for studying the brain is well-matched to the unique environment of the IRP, which allows him to embrace the uncertainty in science as a chance to explore and innovate.
“The NIH intramural program allows researchers to find and develop their most creative instincts and pursue projects that would likely seem too unmoored to receive funding in a grant-based system,” he says. “Having the freedom to step out of the fray and pursue or invent new directions in the laboratory, complete with false starts and failures, is the most a scientist can hope for and is critical for advancing biomedical research over the longer term.”
This page was last updated on Wednesday, May 24, 2023