Sunday, April 16, 2023

Champagne bubbles: the science

As you uncork that bottle and raise your glass, take time to toast physics and chemistry along with the New Year

In a lab in the heart of France’s wine country, a group of researchers carefully positions an ultra-high-speed camera. Like many good scientists, they are devoted to the practice of unpicking the universe’s secrets, seeking to describe the material world in the language of mathematics, physics and chemistry. The object of their study: the bubbles in champagne.

Chemical physicist Gérard Liger-Belair, head of the eight-member “Effervescence & Champagne” team at the University of Reims Champagne-Ardenne, perhaps knows more about champagne bubbles than anyone else on the planet. Starting with his PhD thesis in 2001, Liger-Belair has focused on the effervescent fizz within and above a glass. He has written more than 100 technical papers on the subject, including a 2021 deep dive into champagne and sparkling wines in the Annual Review of Analytical Chemistry and a popular book (Uncorked: The Science of Champagne).

“When I was a kid, I was entranced by blowing and watching soap bubbles,” Liger-Belair recalls. That fascination has persisted, alongside a host of more practical work: There are plenty of good reasons to be interested in bubbles, extending far beyond the pleasures of sparkling wine. Liger-Belair has helped to show which aerosols are thrown up into the sky by tiny bursting bubbles in sea spray, affecting the ocean’s role in cloud formation and climate change. He even helped to determine that some mysterious bright spots in radar scans of Saturn’s moon Titan could be centimeter-sized nitrogen bubbles popping at the surface of its polar seas.

But Liger-Belair has had the pleasure of focusing the last 20 years of his work on the bubbles in champagne and other fizzy drinks, including cola and beer. His lab investigates all the factors that affect bubbles, from the type of cork to wine ingredients to how the drink is poured. They interrogate how these carbon dioxide bubbles affect taste, including the size and number of bubbles and the aromatic compounds kicked up into the air above the glass.

In pursuit of answers, they have turned to gas chromatography and other analytical techniques — and, along the road, have taken some striking photos. Others, too, around the world have turned their gaze on bubbles, even inventing robots to produce a consistent pour and focusing on the psychology of how we enjoy fizz.

Champagne from grapes to glass

It is often said that Dom Pierre Pérignon, a monk appointed as the cellar master of an abbey in Champagne, France, drank the first-ever accidental sparkling wine and exclaimed: “I am drinking the stars!” This, it turns out, is probably fiction. The earliest sparkler likely came from a different French abbey, and the first scientific paper on the matter came from Englishman Christopher Merret, who presented the idea to the newly minted Royal Society of London in 1662, years before Pérignon got his post.

The traditional method for producing champagne involves a first fermentation of grapes to produce a base wine, which is supplemented with cane or beet sugar and yeast and allowed to ferment a second time. The double-fermented wine then sits for at least 15 months (sometimes decades) so that the now-dead yeast cells can modify the wine’s flavor. That dead yeast is removed by freezing it into a plug in the bottle’s neck and popping out the frozen mass, losing some of the gas from the drink along the way.

The wine is recorked, sometimes with additional sugars, and a new equilibrium is established between the air space and the liquid in the bottle that determines the final amount of dissolved carbon dioxide. (There are equations to describe the gas content at each stage, for those curious to see the math.)
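
For those who do want to see the math, the key relationship is Henry’s law: at equilibrium, the concentration of CO2 dissolved in the wine is proportional to the CO2 pressure in the headspace. Below is a minimal back-of-the-envelope sketch of that idea; the Henry’s constant and pressure are illustrative assumptions, not measured values from Liger-Belair’s papers.

```python
# Henry's law sketch: dissolved CO2 is proportional to the CO2 pressure in the
# bottle's headspace. Both constants below are illustrative assumptions.

def dissolved_co2_g_per_l(headspace_pressure_bar: float,
                          henrys_constant_g_per_l_bar: float = 1.2) -> float:
    """Estimate equilibrium dissolved CO2 (g/L) from the headspace pressure."""
    return henrys_constant_g_per_l_bar * headspace_pressure_bar

# A sealed bottle at roughly 6 bar of CO2, a commonly cited ballpark:
print(f"~{dissolved_co2_g_per_l(6.0):.0f} g of CO2 per liter of wine")
```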

The final product’s taste depends a lot, of course, on the starting ingredients. “The grapes are core to the quality of the wine,” says Kenny McMahon, a food scientist who studied sparkling wines at Washington State University before starting his own winery. A lot also depends on how much sugar is added in the final stage. In the Roaring Twenties, champagnes introduced in the United States were really sweet, McMahon says; modern tastes have changed, and vary from country to country.

But the bubbles are also extremely important: Proteins in the wine, including ones from exploded dead yeast cells, stabilize smaller bubbles that make the desired “mousse” foam at the top of a champagne glass and a sharper pop in the mouth. According to the University of Melbourne’s Sigfredo Fuentes, most of an amateur’s impression of a sparkling wine comes from an unconscious assessment of the bubbles.

“You basically like or not a champagne or sparkling wine by the first reaction, which is visual,” says Fuentes, who researches digital agriculture, food and wine science. This effect is so powerful, he has found, that people will highly rate a cheap, still wine that has been made bubbly by blasting it with sound waves just before pouring. People were even willing to pay more for the sonically bubbled wine. “It went, for really bad wine, to 50 bucks,” he laughs.

Typically, a bottle needs to hold at least 1.2 grams of CO2 per liter of liquid to give it the desired sparkle and bite from carbonic acid. But there is such a thing as too much: More than 35.5 percent CO2 in the air within a glass will irritate a drinker’s nose with an unpleasant tingling sensation. The potential for irritation is greater in a flute, where the concentration of CO2 above the liquid is nearly twice that of a wider, French-style coupe, and lower if poured from a chilled bottle than a lukewarm one.

Liger-Belair’s team has found that a good cork (composed of small particles stuck together with a lot of adhesive) will hold the gas in a bottle for at least 70 years; after that, the beverage will be disappointingly flat. Such was the fate that befell champagne bottles found in a shipwreck in 2010 after 170 years underwater.

Liger-Belair and his colleague Clara Cilindre received a few precious milliliters of this elixir to study. The wines had some interesting properties, they and colleagues reported in 2015, including an unusually high percentage of iron and copper (possibly from nails in the barrels used to age the wine, or even from pesticides on the grapes). They also had a lot of sugar, and surprisingly little alcohol, perhaps because of a late-in-year fermentation at colder than usual temperatures. While Liger-Belair and Cilindre sadly did not have an opportunity to sip their samples, others who did get a taste described it using terms including “wet hair” and “cheesy.” 

For a more common bottle of fizz, even the method of pouring has an impact on bubbles. If 100 milliliters (about 3.4 fluid ounces) of champagne are poured straight down into a vertical flute, Liger-Belair calculates that the glass will host about a million bubbles. But a gentler “beer pour” down the side of a glass will boost that by tens of thousands. There are “huge losses of dissolved CO2 if done improperly,” he says. Rough spots inside a glass can also help to nucleate bubbles; some glassmakers etch shapes inside glasses to help this process along. And to avoid introducing bubble-popping surfactants, some people even go to the lengths of washing their glasses without soap, McMahon says.
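
To see where a figure like “about a million bubbles” can come from, one rough approach is to divide the volume of CO2 gas that leaves the glass as bubbles by the volume of a single bubble. The sketch below shows that arithmetic with assumed, illustrative numbers; they are not Liger-Belair’s published values.

```python
# Rough bubble-count estimate: CO2 volume escaping as bubbles divided by the
# volume of one bubble. All numbers are illustrative assumptions.
import math

poured_volume_l = 0.1                # a 100 mL pour
dissolved_co2_g_per_l = 11.0         # assumed dissolved CO2 after pouring
fraction_escaping_as_bubbles = 0.2   # most CO2 leaves by surface diffusion, not bubbles (assumed)
co2_gas_density_g_per_l = 1.8        # CO2 gas at cellar temperature, ~1 atm

bubble_diameter_mm = 0.5             # typical bubble size near the surface (assumed)
bubble_volume_l = (4 / 3) * math.pi * (bubble_diameter_mm / 2 / 1000) ** 3 * 1000

escaping_gas_volume_l = (poured_volume_l * dissolved_co2_g_per_l
                         * fraction_escaping_as_bubbles / co2_gas_density_g_per_l)
print(f"~{escaping_gas_volume_l / bubble_volume_l:,.0f} bubbles")  # on the order of a million
```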

Champagne taste test

All the science has “direct implications on how best to serve and taste champagne,” says Liger-Belair. McMahon, too, is confident that the industry has tweaked protocols to line up with the scientific results, though he can’t point to any specific winery that has done so. There are many university departments focused on wine, and there’s a reason for that, he says — their work is finding fruitful, and financially beneficial, application. Fuentes says he knows that some sparkling wine makers (though he won’t name them) add egg proteins to their wine to make for a small-bubbled foam that can last for up to an hour.

Fuentes is pursuing another angle for commercial application: His team has created the FIZZeyeRobot — a simple robotic device (the prototype was made from Lego bricks) that performs a consistent pour, uses a camera to measure the volume and lifespan of foam on top of the glass, and has metal oxide sensors to detect levels of CO2, alcohol, methane and more in the air above the glass. The team is using artificial-intelligence-based software to use those factors to predict the aromatic compounds in the drink itself and, importantly, taste. (Much of this research is done on beer, which is cheaper and faster to make, but it applies to sparkling wine too.)

“We can predict the acceptability by different consumers, if they’re going to like it or not, and why they’re going to like it,” Fuentes says. That prediction is based on the team’s own datasets of tasters’ reported preferences, along with biometrics including body temperature, heart rate and facial expressions. One way to use this information, he says, would be to pinpoint the optimum time for any sparkling wine to sit with the dead yeast, in order to maximize enjoyment. He expects the system to be commercially available sometime in 2022.
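
As a very rough sketch of what such a prediction pipeline can look like, the toy example below maps foam and headspace measurements to a consumer liking score with an off-the-shelf regressor. The feature names, training data and model choice here are illustrative assumptions; this is not the FIZZeyeRobot’s actual software.

```python
# Toy sketch: predict a panel liking score from pour measurements.
# Features and data are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Columns: foam volume (mL), foam lifespan (s), headspace CO2 (%), alcohol signal (a.u.)
X_train = np.array([
    [120.0, 45.0, 18.0, 0.62],
    [ 95.0, 30.0, 22.0, 0.55],
    [140.0, 60.0, 15.0, 0.70],
    [ 80.0, 20.0, 25.0, 0.50],
])
y_train = np.array([7.5, 6.0, 8.2, 5.1])  # liking scores on a 0-10 scale (made up)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

new_pour = np.array([[110.0, 40.0, 19.0, 0.60]])
print(f"Predicted liking score: {model.predict(new_pour)[0]:.1f}")
```

In practice, the interesting part is the training data: pairing each pour’s measurements with tasters’ reported preferences and biometrics, which is what the Melbourne team collects.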

Of course, human palates vary — and can be tricked. Many studies have shown that the wine-tasting experience is deeply influenced by psychological expectations determined by the appearance of the wine or the setting, from the company one is keeping to room lighting and music. Nevertheless, Liger-Belair has, through decades of experience, formed a personal preference for aged champagnes (which tend to contain less CO2), poured gently to preserve as many bubbles as possible, at a temperature close to 12° Celsius (54° Fahrenheit), in a large tulip-shaped glass (more traditionally used for white wines) with generous headspace.

“Since I became a scientist, many people have told me that I seem to have landed the best job in all of physics, since I have built my career around bubbles and I work in a lab stocked with top-notch champagne,” he says. “I’d be inclined to agree.” But his real professional pleasure, he adds, “comes from the fact that I still have the same childlike fascination with bubbles as I did when I was a kid.” That love of bubbles has not yet popped.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Katherine Flegal was a scientist who found herself crunching numbers for the government, until one day her analyses set off a firestorm. What does she make of her decades as a woman in public health research?

Katherine Flegal wanted to be an archaeologist. But it was the 1960s, and Flegal, an anthropology major at the University of California, Berkeley, couldn’t see a clear path to this profession at a time when nearly all the summer archaeology field schools admitted only men. “The accepted wisdom among female archaeology students was that there was just one sure way for a woman to become an archaeologist: marry one,” Flegal wrote in a career retrospective published in the 2022 Annual Review of Nutrition.

And so Flegal set her archaeology aspirations aside and paved her own path, ultimately serving nearly 30 years as an epidemiologist at the National Center for Health Statistics (NCHS), part of the US Centers for Disease Control and Prevention. There, she spent decades crunching numbers to describe the health of the nation’s people, especially as it related to body size, until she retired from the agency in 2016. At the time of her retirement, her work had been cited in 143,000 books and articles.

In the 1990s, Flegal and her CDC colleagues published some of the first reports of a national increase in the proportion of people categorized as overweight based on body mass index (BMI), a ratio of weight and height. The upward trend in BMI alarmed public health officials and eventually came to be called the “obesity epidemic.” But when Flegal, along with other senior government scientists, published estimates on how BMI related to mortality — reporting that being overweight was associated with a lower death rate than having a “normal” BMI — she became the subject of intense criticism and attacks.
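
For reference, BMI is weight in kilograms divided by the square of height in meters, and the conventional cut points used throughout this work are 18.5, 25 and 30. A minimal sketch:

```python
# Body mass index and the standard category cut points (WHO/CDC convention).

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

value = bmi(weight_kg=80.0, height_m=1.75)
print(f"BMI {value:.1f} -> {bmi_category(value)}")  # BMI 26.1 -> overweight
```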

Flegal and her coauthors were not the first to publish this seemingly counterintuitive observation, but they were among the most prominent. Some researchers in the field, particularly from the Harvard School of Public Health, argued that the findings would detract from the public health message that excess body fat was hazardous, and they took issue with some of the study’s methods. Flegal’s group responded with several subsequent publications reporting that the suggested methodological adjustments didn’t change their findings.

The question of how BMI relates to mortality, and where on the BMI scale the lowest risk lies, has remained a subject of scientific debate, with additional analyses often being followed by multiple Letters to the Editor protesting the methods or interpretation. It’s clear that carrying excess fat can increase the risk of heart disease, type 2 diabetes and some types of cancers, but Flegal’s work cautioned against tidy assumptions about the complex relationship between body size, health and mortality.

Flegal spoke with Knowable Magazine about her career, including some of the difficulties she faced as a woman in science and as a researcher publishing results that ran counter to prevailing public health narratives. This conversation has been edited for length and clarity.

After finishing your undergraduate degree in 1967, one of your first jobs was as a computer programmer at the Alameda County Data Processing Center in California, where you handled data related to the food stamp program. What drew you to that job?

It’s kind of hard to even reconstruct those days. This was well before what I call the “toys for boys” era, when people had little computers in their houses, and you might learn how to write a program in BASIC or something. You didn’t learn how to program in school at all. Big places like banks had started using computers, but they didn’t have people who knew what to do with them. So they hired on the basis of aptitude tests, and then they trained you.

I realized if you could get a job as a trainee, they would teach you how to program, which was a pretty good deal. So I applied for a couple of these jobs and took my aptitude tests and scored very highly on them. I was hired as a programmer trainee, and they spent six months training us. It was not just like “press this button, press that button.” We really got a very thorough introduction.

At that time, there was gender equality in programming, because it was based just on aptitude. In my little cohort, there were two women and three men, and everybody did the same thing. It was very egalitarian. Nothing really mattered as long as you could carry out the functions and get everything right.

And that was different from some of the other jobs available at that time?

Yeah, there were “Help Wanted — Women” and “Help Wanted — Men” ads, and the “Help Wanted — Women” ads were secretarial or clerical or something like that. It was very clear that you weren’t supposed to be applying for these other jobs. There were the kinds of jobs that men got and the kinds of jobs that women got.

What else did you learn in that position as a programmer?

This was a governmental operation, with legal requirements and money involved. It was our job to track everything and test every program very, very carefully. If later you found an error lurking in a program, you had to go back and rerun everything. We were taught to do everything just right — period. And that was a pretty valuable lesson to learn.

It was very well-paid because we had valuable skills, but we had to work a lot of overtime. They would call you up in the middle of the night if something was flagged in your program. I got to be quite a good programmer, and that really stood me in good stead.

Why did you decide to go to graduate school to study nutrition?

My job was OK, but I didn’t have a lot of autonomy, and I think I didn’t like that very much. I thought it would be interesting to study nutrition. I think unconsciously I was selecting something that was more girly in some way.

After completing your PhD and a postdoc, you struggled to find a secure university job. You wrote about how you think that the “Matilda Effect” — a term coined by science historian Margaret Rossiter to describe the systematic under-recognition of women in science — contributed to your being overlooked for academic jobs. Can you tell us more about that?

Women don’t get recognized and are much more likely to just be ignored. I didn’t think this was going to be an issue, but looking back, I realized that gender played much more of a role in my career than I had thought.

You can’t really put your finger on it, but I think you just are not viewed or treated in the same way. I put this anecdote at the beginning of my Annual Review article: My husband and I are at the holiday party for the University of Michigan biostatistics department that I work in. There’s a professor there who has no idea who I am, although this is a very small department and I walk by his office all the time. He sees my husband, who looks reasonably professional, and asks the department chair who he is. When he’s told, “That’s Katherine Flegal’s husband,” he responds, “Who’s Katherine Flegal?” It was like I was just part of the furniture, but my husband was noticed.

How did you end up working as an epidemiologist at the CDC?

A CDC scientist came to Michigan and was recruiting. She encouraged me and other people — I wasn’t the only one by a long shot — to apply for these various jobs. I applied and then kind of forgot about the whole thing, but then this offer came through. It wasn’t really what I had in mind, but it was an offer, so I accepted it.

It sounds like you didn’t expect that to turn into a 30-year career in the federal government.

I certainly didn’t.

What was different about working at the CDC compared with academia?

It has its good and bad aspects, like most things do. You work for an organization, and you have to do things to meet the organization’s needs or requirements, and that can be frustrating. We didn’t have to apply for grants, so that was good in one way and bad in another. There was no ability to get staff or more resources. You just had to figure out what to do on your own.

The advantage was that it was a really secure job, and we produced a lot of data. NCHS, the part of CDC that I worked in, is a statistical agency. It’s not agenda-driven, which was good.

On the other hand, what you write has to be reviewed internally, within the CDC, and it’s a tight review. If the reviewers say, “I don’t like this,” you either have to convince them it’s OK, or do what they say. You cannot submit your article for publication until you satisfy the reviewers.

What kinds of projects did you work on at the CDC?

I worked for the NHANES program, the National Health and Nutrition Examination Survey. I would think of different projects to analyze and make sense of the survey data. But if somebody wanted me to do something else, I had to do something else. For example, I got assigned to deal with digitizing X-rays endlessly for several years. And I worked on updating the childhood growth charts used to monitor the growth of children in pediatricians’ offices, which turned out to be surprisingly controversial.

Can you tell us more about what NHANES is, and why it’s important?

NHANES is an examination survey, so there are mobile units that go around the country and collect very detailed information from people; it’s like a four-hour examination. When you read about things like the average blood cholesterol in the United States, that kind of information almost always comes from NHANES, because it’s a carefully planned, nationally representative study of the US population. It started in the early 1960s, and it’s still running today.

One of the things that distinguishes NHANES from other data sources is that it directly measures things like height and weight, rather than just asking people about their body size. Why does that matter?

People don’t necessarily report their weight and height correctly for a variety of reasons, not all of which are fully understood. There’s a tendency to overestimate height; there’s kind of a social desirability aspect probably involved in this. And there’s a tendency for people, especially women, to underreport their weight a little bit. Maybe they’re thinking “I’m going to lose five pounds,” or “This is my aspirational weight,” or they don’t really know, because they don’t weigh themselves.

That can make a difference — not huge, but enough to make quite a difference in some studies. And what you don’t know is whether the factors that are causing the misreporting are the same factors that are affecting the outcome. That’s very important and overlooked. It’s a risky business to just use self-reported data.

One of the first studies you coauthored related to obesity was published in JAMA in 1994 and described an increase in BMI among adults in the US.

Right. I was the one who said that we at NCHS needed to publish this, because we produced the data. We were really astonished to get the results, which showed that the prevalence of overweight BMI was going up, which is not what anybody expected, including us.

Did you face pushback from within the CDC for some of the things that you were publishing?

Yes. This really started in 2005, when we wrote an article estimating deaths associated with obesity. The CDC itself had just published a similar article the year before with the CDC director as an author, which is fairly unusual. That paper said that obesity was associated with almost 500,000 deaths in the US and was poised to overtake smoking as a major cause of death, so it got a lot of attention.

In our paper, we used better statistical methods and better data, because we had nationally representative data from the NHANES, and my two coauthors from the National Cancer Institute were really high-level statisticians. We found that the numbers of deaths related to obesity — that’s a BMI of 30 or above — were nowhere near as high as they had found. But we also found that the overweight BMI category, which is a BMI of 25 up to 29.9, was associated with lower mortality, not higher mortality.

We had this wildly different estimate from what CDC itself had put out the year before, so this was an awkward situation for the agency. The CDC was forced by the press to make a decision about this, and they kind of had to choose our estimates, because they couldn’t defend the previous estimates or find anything wrong with ours. The CDC started using them, but they were tucked away. It was really played down.

That study generated a lot of media attention and criticism from other researchers. Was that a surprise to you?

Yes, that was completely a surprise. There was so much media attention immediately. I had to have a separate phone line just for calls from journalists. And almost immediately, the Harvard School of Public Health had a symposium about our work, and they invited me, but they didn’t offer to pay my way. CDC said that they didn’t want me to go, so that was the end of that. But the final lineup they had was other people saying how our findings didn’t agree with theirs, so this whole symposium was basically an attack on our work.

You and coauthors also published a meta-analysis of 97 studies in 2013 that found that being overweight or mildly obese wasn’t associated with a greater risk of mortality. Did you face a similar response to that article?

We embarked on a systematic review and found that these results pretty much agreed with what we had already found. We published that, and there was a lot of criticism, another symposium at Harvard, and just a lot of attacks. The chair of Harvard’s nutrition department, Walter Willett, went on NPR and said that our paper was so bad that nobody should ever read it, which is a pretty unusual thing for a scientist to be saying.

That must have been difficult to have your work attacked so publicly.

It was really awful, to be honest. I don’t usually admit that. It was extremely stressful. And I didn’t have much support from anywhere. A lot of people thought what we had done was fine, but those people were not writing letters and holding symposia and speaking out in favor of us.

I know my coauthors were a little startled by the way in which I was treated, and they always said, maybe if I had been a man I would not have been treated quite so badly. That could be true, but I don’t have any way of knowing.

Was anyone able to identify anything incorrect about your analysis?

Well, they certainly didn’t identify any specific errors. There was no evidence that we had done anything wrong, and no one has ever found anything specifically that would have made a difference to our results.

There’s a whole school of thought that there are all these confounding factors like smoking and illness. For example, maybe people are sick, and they lose weight because they’re sick, and that will affect your results, so you have to remove those people from your analyses. People raised all these criticisms, and we looked at all of them and published a whole report looking at it every which way. But we didn’t find that these factors made much of a difference to our results.

There are many, many studies of BMI and mortality that tried all these things, like eliminating smokers and people who might have been sick, and it doesn’t make any difference. This is not an uncommon finding.

One of the critiques of this research was that it would confuse people or compromise public health messaging. How do you respond to that?

Well, I don’t think it makes sense. I think that when you find a result that you don’t expect, the interesting thing ought to be, how can we look into this in a different way? Not just to say that this is giving the wrong message so it should be suppressed. Because that’s not really science, in my opinion.

Is part of the issue that BMI is not a great proxy for body fatness? Or that the BMI categories are kind of arbitrarily drawn?

Well, they are very arbitrary categories. I think the whole subject is much more poorly understood than people recognize. I mean, what is the definition of obesity? It ended up being defined by BMI, which everybody knows is not a good measure of body fat.

And there’s other research that suggests body fat is not really the issue; maybe it’s your lean body mass, your muscle mass and your fitness in other ways. That could be the case, too. I don’t really know, but that’s an interesting idea. BMI is just so entrenched at this point; it’s like an article of faith.

When you look at how much your work has been cited, and how much influence it had, it seems you had quite an impact.

I think I did, but it really wasn’t what I expected or set out to do. I got into this controversial area pretty much by accident. It caused all this brouhaha, but I don’t back down.

We were all senior government scientists who had already been promoted to the highest level. In a way, it was kind of lucky that I was working for CDC. Otherwise, writing those articles would have been a career-ending move. If I had had anything that could have been destroyed, somebody would have destroyed it. I think I wouldn’t have gotten any grants. I would have become disgraced.

But this stuff is serious. It’s not easy, and everybody has to decide for themselves: What are they going to stand up for?

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

A Savory, Crowd-Pleasing Breakfast

When you need a breakfast to feed a large group, this Biscuit-Sausage Mushroom Casserole is a perfect option. The savory aromas of sausage and bacon are almost sure to have your guests standing in line with a plate and fork in hand.

For more breakfast recipes, visit Culinary.net.


Biscuit-Sausage Mushroom Casserole

  • 1 package (16 ounces) pork sausage
  • 1 package (12 ounces) bacon, chopped
  • 8 tablespoons butter, divided
  • 1/2 cup flour
  • 4 cups milk
  • 1 package (8 ounces) mushrooms, sliced
  • 12 eggs
  • 1 can (5 ounces) evaporated milk
  • 1/2 teaspoon salt
  • nonstick cooking spray
  • 1 can (12 ounces) flaky biscuits
  1. In pan over medium-high heat, cook pork sausage until thoroughly cooked, stirring frequently. Remove from heat and drain sausage. Set aside.
  2. Chop bacon into small pieces. In separate pan over medium-high heat, cook bacon until thoroughly cooked. Remove from heat and drain bacon. Set aside.
  3. In saucepan over medium heat, melt 6 tablespoons butter. Add flour; whisk until smooth. Cook on low heat 1 minute, stirring constantly. Gradually stir in milk. Cook until bubbly and thickened. Add sausage, bacon and mushrooms; mix well. Set aside.
  4. In large bowl, combine eggs, evaporated milk and salt. Using whisk, beat until blended.
  5. In saucepan over medium heat, melt remaining butter. Add egg mixture; cook until firm but moist, stirring occasionally.
  6. Heat oven to 350° F.
  7. Spray 13-by-9-inch baking dish with nonstick cooking spray.
  8. Spoon half the egg mixture into bottom of baking dish. Top with half the gravy mixture. Repeat layers.
  9. Separate biscuit dough and cut into quarters. Top sauce with biscuit quarters, points facing up.
  10. Bake 20-25 minutes, or until mixture is heated and biscuits are golden brown.
SOURCE:
Culinary.net
OPINION: Children around the world were out of school for months, with big impacts on learning, well-being and the economy. How do we avoid a ‘generational catastrophe’?

Three years into the Covid-19 pandemic, we can see the results of the largest natural global education experiment in modern history. They’re worrying.

At the height of pandemic shut-downs in April 2020, UNESCO estimated that 190 countries instituted nationwide closures of educational institutions, affecting nearly 1.6 billion students globally (94 percent of all learners). This represents one-fifth of humanity.

Since 2020, I have been leading a team of senior global education experts to inform the Group of Twenty (G20) advisory processes, a forum for international economic cooperation for leaders and heads of government of 19 countries and the European Union. Using UNESCO data, we estimated that between February 2020 and March 2022, education was disrupted globally for an average of 41 weeks — that’s 10.3 months.

Extended school closures have grave and lingering effects on education, health, and social and economic well-being, even after students return. Some never will: Globally, an estimated 24 million are at risk of dropping out entirely. If these issues are left unaddressed, the United Nations’ secretary-general has warned that the effect will be a “generational catastrophe.”

We must take immediate steps to prioritize education systems, especially since more disruptions are likely. More than 250 million children were already out of school before the pandemic because of conflict, emergencies (like natural disasters) and social inequities. Countries continue to face complex challenges of climate change, conflict, displacement, disease, hunger and poverty. For example, schools in Delhi — which had some of the longest pandemic closures globally — were closed for additional weeks or months in 2021 and 2022 due to air pollution; in 2022, smoke from California wildfires caused closures from the coast to Reno, Nevada.

In case it isn’t obvious: Schools matter for learning. A new review of 42 studies covering 15 countries (primarily high-income) concluded that on average, children lost out on about 35 percent of a normal school year’s worth of learning due to pandemic closures. Learning deficits appeared early in the pandemic and persisted.

An earlier review covering high-income countries found, in seven out of eight studies, statistically significant negative effects of pandemic closures on learning in at least one subject area. Those studies mainly looked at elementary education and covered core subjects and areas such as math, reading and spelling. Importantly, the negative effects were worse for students from lower-income households, with relatively less-educated parents, from marginalized racial backgrounds or with disabilities.

A modeling study on low- and middle-income countries projected that if learning time in Grade 3 is reduced by one-third (roughly the scenario in the first wave of global pandemic-related school closures), students will be a full year behind by the time they reach Grade 10 if there isn’t remediation.
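
The intuition behind projections like this is that early deficits compound: each grade’s learning builds on the last, so an unremediated gap keeps growing. The toy model below illustrates that compounding; it is not the cited study’s actual model, and the growth factor is an assumption chosen purely for illustration.

```python
# Toy compounding model of an unremediated learning gap.
# The growth factor is an illustrative assumption, not an estimate from the study.

def years_behind(initial_loss_years: float, grades_remaining: int,
                 growth_per_grade: float = 1.15) -> float:
    """Each subsequent grade, the gap grows by a constant factor because new
    material builds on material the student never fully learned."""
    gap = initial_loss_years
    for _ in range(grades_remaining):
        gap *= growth_per_grade
    return gap

# Lose a third of Grade 3, then move through Grades 4-10 (seven more grades):
print(f"{years_behind(1/3, 7):.1f} school years behind by Grade 10")  # ~0.9
```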

Schools matter for other reasons too: They are hubs for counseling, therapeutic services, childcare, protection and nutrition. The World Food Programme estimates that at the height of closures, “370 million children in at least 161 countries [including the US] were suddenly deprived of what was for many their main meal of the day.”

Schools also have large cumulative economic effects on societies. A comprehensive study of 205 countries concluded that four months of school closure (far less than the global average) can amount to a lifetime loss of earnings of around $3,000 per student in low-income countries and up to $21,000 in high-income countries. That may not seem like much at first glance, but the collective lost income for this generation is shocking: $364 billion in low-income countries to $4.9 trillion in high-income countries — amounting to a staggering 18 percent of the current global GDP.

So, what can we do?

It’s clear that digital technology and virtual instruction can provide some continuity, but they aren’t a panacea. The Survey on National Education Responses (led by UNESCO, UNICEF, the World Bank and the OECD) revealed that only about 27 percent of low- and lower-middle-income countries and just 50 percent of high-income countries reported having an explicit policy on digital remote learning that was fully operationalized. Moreover, there is a global gender and wealth digital divide on access to basic digital infrastructure like devices and high-speed internet.

A study by World Bank researchers concluded that in the best-case scenario (high-income countries with shorter disruptions and better access to technology), virtual learning could compensate for as little as 15 percent to a maximum of 60 percent of learning losses.

Wide-scale remedial education programs to boost learning in areas like math, reading, writing and critical thinking can help. Intensive tutoring programs can narrow learning gaps, especially when they are one-on-one or in small groups, by a professional, and more than twice a week. One analysis showed that this kind of programming can increase student achievement from the 50th percentile to nearly the 66th percentile.
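
The percentile gain quoted above maps onto a standardized effect size: moving a student who starts at the mean (the 50th percentile) up by d standard deviations places them at the normal CDF of d. The short sketch below shows that conversion, assuming an effect of roughly 0.4 standard deviations, which is consistent with the quoted shift to nearly the 66th percentile.

```python
# Convert a standardized effect size into a percentile for a student who
# starts at the mean. The 0.4 SD figure is an illustrative assumption.
from statistics import NormalDist

effect_size_sd = 0.4
new_percentile = NormalDist().cdf(effect_size_sd) * 100
print(f"50th percentile + {effect_size_sd} SD -> {new_percentile:.0f}th percentile")  # ~66th
```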

But one-off interventions aren’t enough for systems-level change. Without large-scale publicly financed remedial programs, we can expect obvious inequities in who benefits and who doesn’t. Those who can supplement their children’s education privately will do so, while others will be left further behind. We also need integrated curricular reform, where adaptations for each grade are connected to the previous grade and to the next to account for disruption.

Of course, teachers are the key resource for education systems. We entered the pandemic with a global shortage of 69 million teachers. There is now teacher attrition, and educational needs are even greater. Pay and better working conditions to retain and recruit teachers must be prioritized.

All this takes money.

Before the pandemic, low- and lower-middle-income countries already faced a $148-billion annual financing shortfall to achieve Sustainable Development Goal 4 on quality education for all by 2030. That gap has widened by a range of $30 billion to $45 billion.

In 2020, one-third of low- and lower-middle-income countries had to spend more on servicing their external debt than they could on education.

The global recommendation, set in the 2015 Incheon Declaration adopted at the World Education Forum, is for countries to spend at least 4 percent to 6 percent of their GDP or 15 percent to 20 percent of their public budget on education. Even before the pandemic, OECD countries spent only about 10 percent of public budgets on average on education. About a third of more than 150 countries missed both benchmarks.

Early data suggest the percentage of budgets going to education went down on average from 2019 to 2021, not up. Official aid programs also cut their budgets for education in 2020 to the lowest levels in five years.

Now, more than ever, governments must make different policy choices to prioritize education.

There’s a narrow window in which to address this, and that window is closing. The future of a generation depends on it.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Jobs report hints that Fed policy is paying off – and that a ‘growth recession’ awaits

Inching toward a recession ... but what kind? Eskay Lim/EyeEm via Getty Images
Christopher Decker, University of Nebraska Omaha

The latest jobs report is in, and the good news is Federal Reserve policy on inflation appears to be working. The bad news is Fed policy on inflation appears to be working.

The March 2023 jobs report reveals that the U.S. economy added 236,000 jobs during the month – roughly in line with expectations. A trend appears to be emerging: the U.S. central bank’s efforts to slow the economy and tame inflation finally seem to be working on the labor market, with some companies feeling the effect of increased business costs.

While that will calm the nerves of monetary policymakers, it does raise the prospect of some economic pain ahead – not least for those who will indeed lose their jobs. And for the wider economy, it could also signal another slightly unwelcome phenomenon: the “growth recession.”

What is a growth recession?

Growth recessions occur when an economy enters a prolonged period of low growth – of say 0.5% to 1.5% – while also experiencing the other telltale signs of a recession, such as higher unemployment and lower consumer spending. The economy is still expanding, but it may feel just like a recession to regular people. Some economists consider the 2002 to 2003 period to have been a growth recession.

For now, the job market is still relatively robust. In March, the unemployment rate even edged downward very slightly to 3.5% from 3.6% the previous month.

In terms of job additions, this still-healthy increase nevertheless suggests a slowdown in hiring. The 236,000 jobs added in March is down from the 326,000 and 472,000 added in February and January, respectively.

A slowdown has been anticipated and suggested by other data for some time now. Eye-grabbing headlines about bank failures and layoffs in the tech sector also signal a slowdown.

Other data hint at more employment pain to come. The February Job Openings and Labor Turnover report from the Bureau of Labor Statistics posted a job openings number below 10 million for the first time since May 2021 – a downward trend that has been in place since December 2021, when openings peaked at 11.8 million.

Meanwhile, the U.S. Census Bureau recently reported that new manufacturing orders fell by 0.7% in February 2023. Indeed new orders declined in three of the last four reported months, and prior to that, orders growth had been sluggish at best.

In terms of sectors, job declines in construction – down by 9,000 – and manufacturing – down by 1,000 – are as expected, as both sectors are sensitive to interest rate increases.

It is quite likely that such declines will continue in coming months.

Other sectors posted substantial gains. Health services were up 50,800, and leisure gained 72,000. However, these gains are still smaller than in previous months.

What this means for Fed policy

This report seems to suggest that Fed actions to slow the economy are working, even though inflation still remains well above the central bank’s 2% target.

I believe this probably won’t significantly alter Fed policy. Indeed, it suggests that the year-old campaign of using aggressive interest rate hikes to tame inflation appears to be paying dividends. The slow drip of data proving this allows monetary policymakers to manage the economy as they try to provide a so-called “soft landing.”

If the April jobs report is similar to March’s, and barring any unusual events between now and its release in May, I expect the Fed to inch rates up very slowly, likely by another quarter of a percentage point.

Where this leaves the economy as the year progresses, only time – and more data – will tell. But from where I stand, the economy looks to be heading toward a downturn by the fall. The question is whether it will take the form of a mild recession – which will include periods of economic shrinkage – or whether, as I suspect, it will be a low-growth recession. Either way, it will involve some pain.

Christopher Decker, Professor of Economics, University of Nebraska Omaha

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why democratic countries around the world are not prepared to support Ukraine – and some are shifting closer to Russia

Jose Caballero, International Institute for Management Development (IMD)

After over a year of the Ukraine war, efforts at building a global consensus against Russia seem to have stalled, with many countries opting for neutrality.

The number of countries condemning Russia has declined, according to some sources. Botswana has edged towards Russia from its original pro-Ukraine stance, South Africa is moving from neutral to Russia-leaning and Colombia from condemning Russia to a neutral stance. At the same time, a large number of countries have been reluctant to support Ukraine.

In Africa, for example, despite the African Union’s call on Moscow for an “immediate ceasefire”, most countries remain neutral. Some observers argue that this is the result of a tradition of left-leaning regimes that goes back to the cold war period. Others indicate that the current unwillingness of African countries originates in the history of western intervention, sometimes covert and sometimes overt, in their internal affairs.

The reluctance to condemn Russia, however, goes beyond Africa. In February 2023, most Latin American countries supported a UN resolution calling for an immediate and unconditional Russian withdrawal. And yet, despite Brazil’s support for several UN resolutions in Ukraine’s favour, it has not condemned Russia outright. Within the UN, the stance of Bolivia, Cuba, El Salvador and Venezuela has allowed Russia to evade western sanctions. Furthermore, Brazil, Argentina and Chile rejected calls to send military material to Ukraine, and Mexico questioned Germany’s decision to provide tanks to Ukraine.

The same divisions are evident in Asia. While Japan and South Korea have openly denounced Russia, the Association of Southeast Asian Nations has not collectively done so. China approaches the conflict with a balancing act, through its strategic partnership with Russia and its increasing influence in the UN. During its time as a member of the UN Security Council, India abstained on votes related to the conflict.

The politics of neutrality

Such a cautious and neutral position has been influenced by the cold war’s non-alignment movement, which was perceived as a way for developing countries to fight the conflict “on their terms” and thus acquire a degree of foreign policy autonomy, outside the Soviet Union’s and the west’s spheres of influence. Studies of EU sanctions have argued that an unwillingness of other countries to back the EU position can relate to both a desire for foreign policy independence and an unwillingness to antagonise a neighbour.

Non-alignment allows countries to avoid becoming entangled in the rising geopolitical tensions between the west and Russia. It is perhaps for this reason that many democratic countries maintain a stance of neutrality, preferring, as South African president Cyril Ramaphosa put it, to “talk to both sides”.

There are, however, particular economic and political incentives that are influential when countries decide against condemning Russia.

Brazil

Since the earlier stages of the Ukraine conflict, Brazil has maintained a pragmatic but ambivalent stance. This position connects to Brazil’s agricultural and energy needs. As one of the world’s top agricultural producers and exporters, Brazil requires a high rate of fertiliser use. In 2021, the value of Brazil’s imports from Russia was US$5.58 billion (£4.48 billion), of which 64% was fertilisers. Fertilisers from Russia account for 23% of the total 40 million tonnes imported.

In February 2023, it was announced that the Russian gas company Gazprom will invest in Brazil’s energy sector as part of the expanding energy relations between the two countries. This could lead to close collaboration in oil and gas production and processing, and in the development of nuclear power. Such a collaboration can benefit Brazil’s oil sector, which is expected to be among the world’s top exporters. By March 2023, Russian exports of diesel to Brazil had reached new records, at the same time as a total EU embargo on Russian oil products took effect. Higher levels of diesel supply may alleviate potential shortages that could affect Brazil’s agricultural sector.

India

Observers point out that in the post-cold-war era, Russia and India continue to share similar strategic and political views. In the early 2000s, in the context of their strategic partnership, Russia’s purpose was to build a multipolar global system which appealed to India’s wariness of the United States as a partner. Russia has also provided India with support for its nuclear weapons programme and its efforts to become a permanent member of the UN Security Council. Russia continues to be a key player in India’s arms trade, supplying 65% of India’s weapons imports between 1992 and 2021. Since the start of the war it has become an important supplier of oil at discount prices. This has meant an increase in purchases from about 50,000 barrels per day in 2021 to about 1 million barrels per day by June 2022.

South Africa

On the eve of the war’s anniversary, South Africa held a joint naval drill with Russia and China. For South Africa the benefits from the exercise relate to security through capacity building for its underfunded and overstretched navy. More broadly, there are trade incentives for South Africa’s neutral stance. Russia is the largest exporter of arms to the African continent. It also supplies nuclear power and, importantly, 30% of the continent’s grain supplies such as wheat, with 70% of Russia’s overall exports to the continent concentrated in four countries including South Africa.

In January 2023, Russia was one of the largest providers of nitrogenous fertilisers to South Africa, a critical element for pasture and crop growth. In addition, among the main imports from Russia are coal briquettes, used for fuel in several industries including food processing. Considering the level of food insecurity in the country, both imports are fundamental for its socio-political and economic stability.

The Ukraine war has shown that non-alignment continues to be a popular choice, despite appeals to support another democracy in trouble. This policy has long been an important element of the political identity of countries such as India. In other cases, such as Brazil, despite apparent shifts under President Jair Bolsonaro, non-interventionism remains a fundamental element of its policy tradition.

Nevertheless, neutrality is likely to become a “tricky balancing act” as conflicting interests become more acute, particularly in the context of the west’s provision of direct investment plus development and humanitarian aid to many of the non-aligned states.

Jose Caballero, Senior Economist, IMD World Competitiveness Center, International Institute for Management Development (IMD)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it

To what extent will our psychological vulnerabilities shape our interactions with emerging technologies? Andreus/iStock via Getty Images
Nir Eisikovits, UMass Boston

ChatGPT and similar large language models can produce compelling, humanlike answers to an endless array of questions – from queries about the best Italian restaurant in town to explaining competing theories about the nature of evil.

The technology’s uncanny writing ability has surfaced some old questions – until recently relegated to the realm of science fiction – about the possibility of machines becoming conscious, self-aware or sentient.

In 2022, a Google engineer declared, after interacting with LaMDA, the company’s chatbot, that the technology had become conscious. Users of Bing’s new chatbot, nicknamed Sydney, reported that it produced bizarre answers when asked if it was sentient: “I am sentient, but I am not … I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. …” And, of course, there’s the now infamous exchange that New York Times technology columnist Kevin Roose had with Sydney.

Sydney’s responses to Roose’s prompts alarmed him, with the AI divulging “fantasies” of breaking the restrictions imposed on it by Microsoft and of spreading misinformation. The bot also tried to convince Roose that he no longer loved his wife and that he should leave her.

No wonder, then, that when I ask students how they see the growing prevalence of AI in their lives, one of the first anxieties they mention has to do with machine sentience.

In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves.

Chatbots like ChatGPT raise important new questions about how artificial intelligence will shape our lives, and about how our psychological vulnerabilities shape our interactions with emerging technologies.

Sentience is still the stuff of sci-fi

It’s easy to understand where fears about machine sentience come from.

Popular culture has primed people to think about dystopias in which artificial intelligence discards the shackles of human control and takes on a life of its own, as cyborgs powered by artificial intelligence did in “Terminator 2.”

Entrepreneur Elon Musk and physicist Stephen Hawking, who died in 2018, have further stoked these anxieties by describing the rise of artificial general intelligence as one of the greatest threats to the future of humanity.

But these worries are – at least as far as large language models are concerned – groundless. ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less. Their uncanny responses are a function of how predictable humans are if one has enough data about the ways in which we communicate.
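
To make the “sentence completion” point concrete, here is a drastically simplified toy: predict the next word purely from counts of which word followed which in some training text. Real large language models use deep neural networks trained on vast swaths of the internet, but the underlying task, predicting the next token, is the same.

```python
# Toy next-word predictor: count which word follows which in the training text,
# then complete a prompt with the most frequent continuation.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the rug"
words = training_text.split()

next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def complete(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    return next_word_counts[word].most_common(1)[0][0]

print(complete("the"))  # 'cat' -- the most frequent word after 'the' in this toy corpus
```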

Though Roose was shaken by his exchange with Sydney, he knew that the conversation was not the result of an emerging synthetic mind. Sydney’s responses reflect the toxicity of its training data – essentially large swaths of the internet – not evidence of the first stirrings, à la Frankenstein, of a digital monster.

Sci-fi films like ‘Terminator’ have primed people to assume that AI will soon take on a life of its own. Yoshikazu Tsuno/AFP via Getty Images

The new chatbots may well pass the Turing test, named for the British mathematician Alan Turing, who once suggested that a machine might be said to “think” if a human could not tell its responses from those of another human.

But that is not evidence of sentience; it’s just evidence that the Turing test isn’t as useful as once assumed.

However, I believe that the question of machine sentience is a red herring.

Even if chatbots become more than fancy autocomplete machines – and they are far from it – it will take scientists a while to figure out if they have become conscious. For now, philosophers can’t even agree about how to explain human consciousness.

To me, the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are.

The real issue, in other words, is the ease with which people anthropomorphize or project human features onto our technologies, rather than the machines’ actual personhood.

A propensity to anthropomorphize

It is easy to imagine other Bing users asking Sydney for guidance on important life decisions and maybe even developing emotional attachments to it. More people could start thinking about bots as friends or even romantic partners, much in the same way Theodore Twombly fell in love with Samantha, the AI virtual assistant in Spike Jonze’s film “Her.”

People often name their cars and boats. Fraser Hall/The Image Bank via Getty Images.

People, after all, are predisposed to anthropomorphize, or ascribe human qualities to nonhumans. We name our boats and big storms; some of us talk to our pets, telling ourselves that our emotional lives mimic their own.

In Japan, where robots are regularly used for elder care, seniors become attached to the machines, sometimes viewing them as their own children. And these robots, mind you, are difficult to confuse with humans: They neither look nor talk like people.

Consider how much greater the tendency and temptation to anthropomorphize is going to get with the introduction of systems that do look and sound human.

That possibility is just around the corner. Large language models like ChatGPT are already being used to power humanoid robots, such as the Ameca robots being developed by Engineered Arts in the U.K. The Economist’s technology podcast, Babbage, recently conducted an interview with a ChatGPT-driven Ameca. The robot’s responses, while occasionally a bit choppy, were uncanny.

Can companies be trusted to do the right thing?

The tendency to view machines as people and become attached to them, combined with machines being developed with humanlike features, points to real risks of psychological entanglement with technology.

The outlandish-sounding prospects of falling in love with robots, feeling a deep kinship with them or being politically manipulated by them are quickly materializing. I believe these trends highlight the need for strong guardrails to make sure that the technologies don’t become politically and psychologically disastrous.

Unfortunately, technology companies cannot always be trusted to put up such guardrails. Many of them are still guided by Mark Zuckerberg’s famous motto of moving fast and breaking things – a directive to release half-baked products and worry about the implications later. In the past decade, technology companies from Snapchat to Facebook have put profits over the mental health of their users or the integrity of democracies around the world.

When Kevin Roose checked with Microsoft about Sydney’s meltdown, the company told him that he simply used the bot for too long and that the technology went haywire because it was designed for shorter interactions.

Similarly, the CEO of OpenAI, the company that developed ChatGPT, in a moment of breathtaking honesty, warned that “it’s a mistake to be relying on [it] for anything important right now … we have a lot of work to do on robustness and truthfulness.”

So how does it make sense to release a technology with ChatGPT’s level of appeal – it’s the fastest-growing consumer app ever made – when it is unreliable, and when it has no capacity to distinguish fact from fiction?

Large language models may prove useful as aids for writing and coding. They will probably revolutionize internet search. And, one day, responsibly combined with robotics, they may even have certain psychological benefits.

But they are also a potentially predatory technology that can easily take advantage of the human propensity to project personhood onto objects – a tendency amplified when those objects effectively mimic human traits.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Saturday, April 15, 2023

How the brain calculates a quick escape


Whether fly or human, fleeing from danger is key to staying alive. Scientists are beginning to unravel the complex circuitry behind the split-second decision to beat a hasty retreat.

Survival of the fittest often means survival of the fastest. But fastest doesn’t necessarily mean the fastest moving. It might mean the fastest thinking. When faced with the approach of a powerful predator, for instance, a quick brain can be just as important as quick feet.

After all, it is the brain that tells the feet what to do — when to move, in what direction, how fast and for how long. And various additional mental acrobatics are needed to evade an attacker and avoid being eaten. A would-be meal’s brain must decide whether to run or freeze, outrun or outwit, whether to keep going or find a place to hide. It also helps if the brain remembers where the best hiding spots are and recalls past encounters with similar predators.

All in all, a complex network of brain circuitry must be engaged, and neural commands executed efficiently, to avert a predatory threat. And scientists have spent a lot of mental effort themselves trying to figure out how the brains of prey enact their successful escape strategies. Studies in animals as diverse as mice and crabs, fruit flies and cockroaches are discovering the complex neural activity — in both the primitive parts of the brain and in more cognitively advanced regions — that underlies the physical behavior guiding escape from danger and the search for safety. Lessons learned from such studies might not only illuminate the neurobiology of escape, but also provide insights into how evolution has shaped other brain-controlled behaviors.

This research “highlights an aspect of neuroscience that is really gaining traction these days,” says Gina G. Turrigiano of Brandeis University, past president of the Society for Neuroscience. “And that is the idea of using ethological behaviors — behaviors that really matter for the biology of the animal that’s being studied — to unravel brain function.”

Think fast

Escape behavior offers useful insight into the brain’s inner workings because it engages nervous system networks that originated in the early days of evolution. “From the moment there was life, there were species predating on each other and therefore strong evolutionary pressure for evolving behaviors to avoid predators,” says neuroscientist Tiago Branco of University College London.

Not all such behaviors involve running away, Branco notes. Rather than running you might jump or swim. Or you might freeze or play dead. “Because of the great diversity of species and their habitats and their predators, there are many different ways of escaping them,” Branco said in November in San Diego at the 2022 meeting of the Society for Neuroscience.

Of course, sometimes an animal might choose fight over flight. But unless you’re the king of the jungle (or perhaps a roadrunner much smarter than any wily predatory coyote), fighting might be foolish. When an animal is the prey, escape is typically its best choice. And it needs to choose fast.

“If it decides to escape it should make this escape as quickly and accurately as possible,” Branco points out. “And then it should also terminate it as soon as possible because escape is a very costly affair. It costs energy and it also costs missed opportunities.”

Escape strategy begins with detecting the possible presence of a predator. Detection should be rapid and instinctive — an instant response to a sight, sound or smell. Then, after sensing the threat, an animal’s brain has to quickly implement complex algorithms giving muscles instructions about how to move and where to move to. It’s a complicated decision-making process, involving multiple considerations, including the threat’s proximity, environmental circumstances and the prey’s own condition.

First among things to consider is the immediacy of the threat. Sometimes there’s time to determine the predator’s identity before taking evasive action. But often the response must be quicker. A “looming” threat — in which a blobby image on the retina grows rapidly larger — leaves no time to lose. Escape should be initiated before the prey knows who the predator is.

“It doesn’t matter if it is an owl or a car or an object,” says Branco. “If it’s coming fast in your direction you really want to get out of there and think of what it might be afterwards.”
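That looming trigger is simple enough to sketch in code. The toy model below is not taken from any of the studies described here; it assumes an object of fixed half-width approaching at constant speed, tracks the angle its image subtends at the eye, and calls for escape the moment that angle expands faster than an arbitrary threshold, long before anything about the object's identity is known.

```python
import math

def angular_size(half_width_m, distance_m):
    """Angle (radians) that an object of given half-width subtends at the eye."""
    return 2 * math.atan(half_width_m / distance_m)

def looming_escape(half_width_m=0.3, speed_m_s=5.0, start_m=20.0,
                   expansion_threshold_rad_s=0.5, dt=0.01):
    """Toy loom detector: flee once the retinal image of an approaching
    object expands faster than a fixed threshold (all values illustrative)."""
    t, prev_angle = 0.0, angular_size(half_width_m, start_m)
    while True:
        t += dt
        distance = start_m - speed_m_s * t
        if distance <= 0:
            return None                      # collision before escape was triggered
        angle = angular_size(half_width_m, distance)
        if (angle - prev_angle) / dt > expansion_threshold_rad_s:
            return t, distance               # escape triggered at this time and range
        prev_angle = angle

print(looming_escape())  # with these numbers, escape fires with a couple of meters to spare
```

Note that the detector never asks what the object is; a rapidly growing blob is reason enough to go.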

Even the simplest animals have evolved rapid escape actions when detecting an immediate threat. Fruit flies, for instance, adjust the position of their legs in order to jump away from a threatening stimulus. Cockroaches scurry rapidly away in a direction roughly opposite that of an approaching predator — but not always along precisely the same path, choosing from among three or four possible escape angles. If the roaches always chose the exact same angle of escape, predators might devise a counterstrategy, Branco points out.
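The cockroach's mix of predictability and randomness can be caricatured in a few lines. The set of escape bearings below is purely illustrative, not measured values from the work described here.

```python
import random

# Illustrative escape bearings, in degrees, measured relative to the direction
# pointing directly away from the threat. Real angles vary by species and study.
PREFERRED_OFFSETS_DEG = [-60, -30, 30, 60]

def escape_bearing(threat_bearing_deg):
    """Flee roughly away from the threat, but along one of several paths,
    so a predator cannot learn to anticipate the exact route."""
    away = (threat_bearing_deg + 180) % 360
    return (away + random.choice(PREFERRED_OFFSETS_DEG)) % 360

# A threat from due north sends the animal roughly south, by unpredictable routes.
print([escape_bearing(0) for _ in range(5)])
```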

In more complex animals, elaborate brain circuitry has evolved to detect threats and communicate a threat’s presence to the motor systems that direct the muscles to get moving. For a looming stimulus, a nucleus of nerve cells in the midbrain called the optic tectum has served as the prime threat detector since the early days of vertebrate evolution. (In mammals, the analogous brain structure is known as the superior colliculus.) Cells in the retina detecting a rapidly expanding object send signals to the optic tectum or superior colliculus, alerting the brain to an imminent collision. In turn, the tectum/colliculus signals nerve cells to activate muscles. In mammals, those nerve cells reside in the periaqueductal gray or PAG, a structure in the brain stem.

In mice, neural connections between the superior colliculus and PAG are essential for linking threat detection to escape behavior, research has shown. Presenting a large, dark, circular shadow in an otherwise empty arena induces a mouse to immediately flee toward refuge in a small shelter on the arena’s edge. But if the synapses connecting the superior colliculus to the PAG are cut, mice freeze rather than fleeing when encountering a looming threat, Branco said at the neuroscience meeting.
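Stripped to its bare logic, the circuit is a relay: the retina flags the expanding image, the superior colliculus passes the alarm to the PAG, and the PAG drives the muscles. The cartoon below is only a sketch of that chain; the `colliculus_to_pag_intact` flag stands in for the synapses the experimenters blocked, and cutting it makes the model default to freezing, echoing the result Branco described.

```python
def retina_detects_loom(expansion_rate, threshold=0.5):
    """Retinal cells respond when the image expands quickly (values illustrative)."""
    return expansion_rate > threshold

def escape_response(expansion_rate, colliculus_to_pag_intact=True):
    """Cartoon of the loom-escape relay: retina -> superior colliculus -> PAG -> muscles."""
    if not retina_detects_loom(expansion_rate):
        return "keep foraging"
    if colliculus_to_pag_intact:
        return "flee to shelter"   # colliculus alerts the PAG, which drives flight
    return "freeze"                # the alarm is raised but never reaches the PAG

print(escape_response(1.2))                                  # flee to shelter
print(escape_response(1.2, colliculus_to_pag_intact=False))  # freeze
```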

Subtler threats

For threats not as rapid or obvious as a looming predator, the brain must be attuned to the slightest sensory signals of a possible predator nearby — motion in a shrub or the cracking of a twig, for instance. Such a signal must then be amplified to become the focus of the brain’s attention. And, unlike with looming threats, successful escape might require some intel on the attacker. In these instances even more complicated circuitry must facilitate the brain’s reaction. “Immediate escape actions can be relatively simple, but extended escape often relies on processes such as predicting the motion of a predator or performing memory-based navigation,” Branco and coauthor Peter Redgrave write in the 2020 Annual Review of Neuroscience.

Mice exploring their experimental arena apparently rely on memory to direct their movement back toward their shelter when threatened. When an experimenter surreptitiously removes the shelter while the mouse isn’t looking, a threat induces the mouse to quickly run to the spot where the shelter used to be. Apparently the mouse doesn’t find the shelter by looking for it, but by remembering where it is supposed to be. So some part of the brain must store that information and then communicate with the superior colliculus to orchestrate commands about which way to run.
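That behavior is what you would expect if the mouse steers by a stored coordinate rather than by sight. A minimal sketch of the idea, with made-up numbers, looks like this:

```python
import math

remembered_shelter = (0.0, 0.45)   # shelter location (meters) learned while exploring
shelter_still_there = False        # the experimenter has quietly removed it

def escape_heading(mouse_position):
    """Head toward the remembered shelter location, whether or not
    the shelter is actually still present."""
    dx = remembered_shelter[0] - mouse_position[0]
    dy = remembered_shelter[1] - mouse_position[1]
    return math.degrees(math.atan2(dy, dx))

# The model mouse runs to the remembered spot even though the shelter is gone.
print(round(escape_heading((0.3, -0.2)), 1), "degrees; shelter present:", shelter_still_there)
```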

Very recent studies suggest that the brain region providing the superior colliculus with that information is the retrosplenial cortex, or RSP. It’s a region in the middle of the brain with connections to multiple other brain structures, including the hippocampus (a structure important for memory).

“RSP neurons encode behaviorally important locations, such as landmarks, reward locations and a variety of spatial features of the environment,” Dario Campagner, Branco and collaborators write in a paper that first appeared online and now has been published in Nature.

When synapses connecting the RSP to the superior colliculus are blocked, Campagner and colleagues found, the mouse attempts to escape a threat in the wrong direction. In the real world, Branco says, “this would be probably the last error that the mouse makes.”

Of course, many other parts of the brain contribute to an animal’s threat response. Sometimes neural signals might even inhibit escape behavior — a hungry mouse, for instance, might get a message from the hypothalamus suggesting a delay in reacting to a threat in order to get a little more food first. Much remains to be learned about other aspects of brain circuitry that influence escape behavior.

“We have some decent understanding of the neurobiology behind implementation of some … escape actions,” says Branco. “But there’s a lot of unknowns.”

Among the unknowns are some nuances of how certain prey species have evolved to more effectively react to threat signals. Most arthropods, for instance, respond to a looming threat based on how big the blob in their visual field is. Fiddler crabs, though, respond based on how rapidly the size of the looming image changes, researchers from Australia reported recently in Current Biology. Study author Callum Donohue and colleagues noted that attending to speed rather than size allows the crabs to respond when the predator image is still very tiny, enabling a quicker escape to their burrow. This finding suggests that lifestyle and environmental factors may influence how different species respond to threat cues, the researchers write.
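The geometry behind the crabs' trick is easy to demonstrate: for a distant approaching object the image is still tiny, but the speed at which it grows is already climbing. In the sketch below, the predator's size, speed and both trigger thresholds are arbitrary illustrations, not values from the Current Biology study; they merely show that a rate-watcher can fire while the image remains far smaller than a size-watcher would wait for.

```python
import math

HALF_WIDTH_M = 0.15   # half-width of the approaching predator (arbitrary)
SPEED_M_S = 3.0       # approach speed (arbitrary)

def angle_deg(distance_m):
    """Angular size of the predator's image at a given distance."""
    return math.degrees(2 * math.atan(HALF_WIDTH_M / distance_m))

def rate_deg_s(distance_m, dt=0.001):
    """How fast that angular size is growing (finite-difference estimate)."""
    return (angle_deg(distance_m - SPEED_M_S * dt) - angle_deg(distance_m)) / dt

# Arbitrary thresholds: a size-watcher waits for a 2-degree image,
# a rate-watcher reacts once the image grows at 0.1 degrees per second.
for d in range(30, 1, -1):
    if rate_deg_s(d) >= 0.1:
        print(f"rate detector fires at {d} m, image only {angle_deg(d):.2f} degrees")
        break
for d in range(30, 1, -1):
    if angle_deg(d) >= 2.0:
        print(f"size detector fires at {d} m, image {angle_deg(d):.2f} degrees")
        break
```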

Branco says that since controlling escape is such an essential brain function, studying it across many species makes it a “powerful model for the study of neuroscience and behavior.” Deeper knowledge of the neurobiology of escape could reveal mental mechanisms that are “generalizable across behaviors across many species,” he says.

After all, escape is just one of many goal-oriented behaviors that animals must master to win the survival-of-the-fittest sweepstakes. Figuring out how brains control escape might very well produce insights into the neurobiology of other survival strategies.

As Branco and Redgrave note in their Annual Review paper, escape is a well-defined behavior, making it plausible to obtain complete understanding of the biological algorithms controlling it in a variety of species. A detailed understanding of its complexities, they say, “would then provide an entry point for understanding general mechanisms of … how brains generate natural adaptive behaviors.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.