Monday, April 17, 2023

ChatGPT, DALL-E 2 and the collapse of the creative process

Does the moment of imagination carry more value than the work of making something real? DeAgostini/Getty Images
Nir Eisikovits, UMass Boston and Alec Stubbs, UMass Boston

In 2022, OpenAI – one of the world’s leading artificial intelligence research laboratories – released the text generator ChatGPT and the image generator DALL-E 2. While both programs represent monumental leaps in natural language processing and image generation, they’ve also been met with apprehension.

Some critics have eulogized the college essay, while others have even proclaimed the death of art.

But to what extent does this technology really interfere with creativity?

After all, for the technology to generate an image or essay, a human still has to describe the task to be completed. The better that description – the more accurate, the more detailed – the better the results.

After a result is generated, some further human tweaking and feedback may be needed – touching up the art, editing the text or asking the technology to create a new draft in response to revised specifications. Even the AI-generated art piece that recently won first prize in the Colorado State Fair’s digital arts competition (created with the image generator Midjourney) required a great deal of human “help” – approximately 80 hours’ worth of tweaking and refining the descriptive task needed to produce the desired result.

It could be argued that by being freed from the tedious execution of our ideas – by focusing on just having ideas and describing them well to a machine – people can let the technology do the dirty work and can spend more time inventing.

But in our work as philosophers at the Applied Ethics Center at the University of Massachusetts Boston, we have written about the effects of AI on our everyday decision-making, the future of work and worker attitudes toward automation.

Leaving aside the very real ramifications of robots displacing artists who are already underpaid, we believe that AI art devalues the act of artistic creation for both the artist and the public.

Skill and practice become superfluous

In our view, the desire to close the gap between ideation and execution is a chimera: There’s no separating ideas from execution.

It is the work of making something real and working through its details that carries value, not simply that moment of imagining it. Artistic works are lauded not merely for the finished product, but for the struggle, the playful interaction and the skillful engagement with the artistic task, all of which carry the artist from the moment of inception to the end result.

The focus on the idea and the framing of the artistic task amounts to the fetishization of the creative moment.

Novelists write and rewrite the chapters of their manuscripts. Comedians “write on stage” in response to the laughs and groans of their audience. Musicians tweak their work in response to a discordant melody as they compose a piece.

In fact, the process of execution is a gift, allowing artists to become fully immersed in a task and a practice. It allows them to enter what some psychologists call the “flow” state, where they are wholly attuned to something that they are doing, unaware of the passage of time and momentarily freed from the boredom or anxieties of everyday life.

This playful state is something that would be a shame to miss out on. Play tends to be understood as an autotelic activity – a term derived from the Greek words auto, meaning “self,” and telos meaning “goal” or “end.” As an autotelic activity, play is done for itself – it is self-contained and requires no external validation.

For the artist, the process of artistic creation is an integral part, maybe even the greatest part, of their vocation.

But there is no flow state, no playfulness, without engaging in skill and practice. And the point of ChatGPT and DALL-E is to make this stage superfluous.

A cheapened experience for the viewer

But what about the perspective of those experiencing the art? Does it really matter how the art is produced if the finished product elicits delight?

We think that it does matter, particularly because the process of creation adds to the value of art for the people experiencing it as much as it does for the artists themselves.

Part of the experience of art is knowing that human effort and labor have gone into the work. Flow states and playfulness notwithstanding, art is the result of skillful and rigorous expression of human capabilities.

Recall the famous scene from the 1997 film “Gattaca,” in which a pianist plays a haunting piece. At the conclusion of his performance, he throws his gloves into the admiring audience, which sees that the pianist has 12 fingers. They now understand that he was genetically engineered to play the transcendent piece they just heard – and that he could not play it with the 10 fingers of a mere mortal.

Does that realization retroactively change the experience of listening? Does it take away any of the awe?

As the philosopher Michael Sandel notes: Part of what gives art and athletic achievement its power is the process of witnessing natural gifts playing out. People enjoy and celebrate this talent because, in a fundamental way, it represents the paragon of human achievement – the amalgam of talent and work, human gifts and human sweat.

Boston Red Sox Hall of Famer David Ortiz celebrates before a crowd of adoring fans in 2016. Michael Ivins/Boston Red Sox via Getty Images

Is it all doom and gloom?

Might ChatGPT and DALL-E be worth keeping around?

Perhaps. These technologies could serve as catalysts for creativity. It’s possible that the link between ideation and execution can be sustained if these AI applications are simply viewed as mechanisms for creative imagining – what OpenAI calls “extending creativity.” They can generate stimuli that allow artists to engage in more imaginative thinking about their own process of conceiving an art piece.

Put differently, if ChatGPT and DALL-E are the end results of the artistic process, something meaningful will be lost. But if they are merely tools for fomenting creative thinking, this might be less of a concern.

For example, a game designer could ask DALL-E to provide some images about what a Renaissance town with a steampunk twist might look like. A writer might ask about descriptors that capture how a restrained, shy person expresses surprise. Both creators could then incorporate these suggestions into their work.

But in order for what they are doing to still count as art – in order for it to feel like art to the artists and to those taking in what they have made – the artists would still have to do the bulk of the artistic work themselves.

Art requires makers to keep making.

The warped incentives of the internet

Even if AI systems are used as catalysts for creative imagining, we believe that people should be skeptical of what these systems are drawing from. It’s important to pay close attention to the incentives that underpin and reward artistic creation, particularly online.

Consider the generation of AI art. These works draw on images and video that already exist online. But the AI is not sophisticated enough – nor is it incentivized – to consider whether works evoke a sense of wonder, sadness, anxiety and so on. It is not capable of factoring in aesthetic considerations of novelty and cross-cultural influence.

Rather, training ChatGPT and DALL-E on preexisting measurements of artistic success online will tend to replicate the dominant incentives of the internet’s largest platforms: grabbing and retaining attention for the sake of data collection and user engagement. The catalyst for creative imagining therefore can easily become subject to an addictiveness and attention-seeking imperative rather than more transcendent artistic values.

It’s possible that artificial intelligence is at a precipice, one that evokes a sense of “moral vertigo” – the uneasy dizziness people feel when scientific and technological developments outpace moral understanding. Such vertigo can lead to apathy and detachment from creative expression.

If human labor is removed from the process, what value does creative expression hold? Or perhaps, having opened Pandora’s box, this is an indispensable opportunity for humanity to reassert the value of art – and to push back against a technology that may prevent many real human artists from thriving.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston and Alec Stubbs, Postdoctoral Fellow in Philosophy, UMass Boston

This article is republished from The Conversation under a Creative Commons license.

Sunday, April 16, 2023

The technology could transform how growers protect their harvests, by detecting plant diseases very early on. But the challenge is to develop tools that are as affordable as they are effective.

Swarms of locusts devastating crops in East Africa, corn rootworms wreaking havoc in the Midwestern US. Blights destroying rubber trees in Brazil and ravaging potatoes in South India. Unpredictable and erratic weather patterns brought on by climate change will only exacerbate these problems — and, scientists say, make crop diseases more likely to strike and inflict major damage.

A single warm winter can enable a pest to invade new territories. Maize- and millet-chomping armyworms and fruit- and vegetable-feasting Tephritid fruit flies have spread to new locations as a result of warming weather. Desert locusts, which destroy entire crops when they swarm, are expected to strike new regions as they change their migratory routes. It is a serious problem in a world in which an estimated 700 million to 800-plus million people faced hunger in 2021, and in which the global population is set to grow further.

Plant pathologist Karen Garrett of the University of Florida, Gainesville, believes that artificial intelligence (AI) could be immensely valuable in fighting these blights. If agriculture is equipped with cost-effective AI tools that can identify crop diseases and pest infestations early in their development, growers and others can catch problems before they take off and cause real damage, she says — a topic she and colleagues explored in the 2022 Annual Review of Phytopathology. This conversation has been edited for length and clarity.

You specialize in studying plant diseases, so let’s dive into this topic from that angle. How do changes in environment and climate affect plants and the emergence of plant diseases?

Most pathogens have a range of temperatures that favor them. From a pathogen’s standpoint, some years can be better than others. Sometimes, a hard winter or a long drought will kill off a pathogen. But it will not in a mild year — so the pathogen will thrive, and there may be more disease in the following seasons.

Consider potato late blight. It’s a famous example of a plant disease that had a big impact on European society during the mid-1840s. Late blight was one of the drivers of the Irish potato famine, which generated a big exodus of people from Ireland.

First, the pathogen was introduced. Then there were some years that had weather conditions that strongly favored the pathogen: cool and wet weather. As a result, the pathogen thrived, wreaking havoc on the crop. It’s estimated that a million people died and a million fled the country during that time.

Today, where temperatures are getting milder, such as at higher elevations and toward the Earth’s poles, pathogens favored by mild conditions can move into new regions and become more destructive.

When new crop diseases arise, how can anyone be sure that they are linked with climate change?

Any given crop epidemic is kind of like a storm. It’s hard to say whether an individual storm is due to climate change or not, but you can start to draw conclusions.

One thing that plant pathologists talk about all the time is the “disease triangle.” Getting a disease requires three things: a pathogen that is able to infect, a conducive environment, and a host plant that can get infected. If the environment changes, for example through climate change, so that weather favoring a pathogen is more common, it will make it easier for the pathogen to thrive and attack more plants. People’s decisions about how to manage plant disease are another dimension. Often, several of these components change at the same time, so it’s challenging to say how much of an epidemic’s damage is strictly due to climate change.

Let’s add artificial intelligence to this discussion. How can AI help to mitigate the threats of pathogens to crops?

Artificial intelligence is intelligence produced by a machine, such as a computer system equipped with learning algorithms that can keep improving its ability to make predictions as it gets more information. These tools are so advanced that they can process huge amounts of information within seconds. For crop resiliency, AI can help by making better tools for crop surveillance, designing better robots to deliver pesticides or bring in the harvest, and developing better software to help in breeding for traits like disease resistance and drought tolerance. It has a strong social angle, as it can help farmers and policymakers to make smart decisions.

Let’s break down each of these. How has AI been used in surveillance techniques and what are the existing technologies? Can you explain?

If you think about the rise of an epidemic in an area, at the early stages, the disease is only in a few locations. And then later, it will start to grow rapidly. There is potential for surveillance to employ remote sensing techniques like drones and satellite imagery that can identify the location of crops in farmlands that are infected with pathogens. AI tools can already use image analysis to spot changes in the coloration of leaves, flowers or fruits, and even their shapes or sizes.

Identifying diseases and taking action early can make it a lot easier to manage an epidemic. Satellite data used to be very coarse: You couldn’t get a high enough resolution to diagnose a problem. But the resolution keeps getting better, so its potential for use in surveillance has been growing.

How exactly does AI use image analysis in these tools?

Well, there’s a lot of work at the beginning. First, people have to collect and curate thousands of images of healthy and diseased plants in a range of conditions, which takes time and investment. Then algorithms are developed to learn from those images and identify the signatures of disease.

A lot of diseases have distinctive symptoms that can be detected visually. So if you have a drone, for example, that can go and take images in large fields, then those images can be compared and analyzed using AI to efficiently diagnose visible crop disease.

For example, our coauthor Michael Selvaraj in Colombia has been working on this technology for identifying diseases in bananas. In Florida, some growers have invested in drones for surveillance. Currently, some growers scan images from drones themselves, to get a quick view of their orchards. This will probably gradually be replaced by automated image analysis of the videos of orchards as image analysis develops further and can efficiently find diseased plants.

But there are also safety and regulatory issues, because unplanned use of drones could create hazards for the public. It’s still a young industry. But as there are many advantages, I think it’ll only expand as policies strike a balance between protecting the public and providing benefits in agriculture.
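
To make the image-analysis workflow described above concrete (collecting labeled photos of healthy and diseased plants, then training an algorithm to recognize the signatures of disease), here is a minimal sketch in Python. It is an illustration only, not the researchers’ actual pipeline; the folder layout, model choice and training settings are assumptions.

# Minimal sketch: train an image classifier on curated photos of healthy and diseased plants.
# Assumed (hypothetical) folder layout: data/train/healthy and data/train/diseased.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # pretrained ImageNet models expect 224x224 input
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Labels ("healthy", "diseased") are inferred from the subfolder names.
train_data = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a model pretrained on generic photos and replace its final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few passes over the curated images
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# The trained model can then score new drone or phone photos as healthy versus diseased.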

And how can AI be used with robotic tools to aid in crop resiliency?

Agricultural robotics is a growing field right now. An interesting AI example already in place is segregating healthy fruit from those infected with pathogens or otherwise damaged.

Fruit can often be distinguished as diseased or not, based on color and shape. These AI tools can process those images a lot faster and more consistently so that the discolored and low-quality fruit — which are often infected with pathogens — are automatically separated.

Also, there’s the idea of using drones that can collect and analyze images and then take immediate actions based on the analyses — for example, to decide to spray a pesticide. I think these tools will probably be ready for wider use in the near future, and will again need good policies.

Tell me more about how AI tools can help in plant breeding and in making more resilient strains.

You can think of plant breeding partly as a numbers game, because you have to breed plants and process lots of individual offspring when you are breeding for a trait. Crop breeders search among these offspring to find good traits for further development.

Plant breeders can use AI tools to predict which plants will grow quickly in a particular climate, which genes will help them thrive there, and which crosses between plant parents will likely yield better traits. The traits can relate to speed of growth, cooking properties, yield and resistance to pathogens. Crop breeders inoculate the offspring with a pathogen and see which ones are resistant, and what genes are associated with resistance.

AI can speed up the analysis of great numbers of genetic sequences related to these properties and find the right combination of DNA sequences you need for a desirable trait. And image analysis is increasingly being used for characterizing the offspring in breeding programs for major economic crops such as wheat, maize and soybean.
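
As an illustration of the kind of genomic prediction described above, the sketch below fits a penalized regression that scores plants for a trait from their genetic markers. The data are randomly generated stand-ins, and this is not the specific software used in any breeding program mentioned here.

# Illustrative genomic prediction: rank candidate plants for a trait (for example,
# disease resistance) from marker data. All numbers are simulated stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_plants, n_markers = 500, 2000
X = rng.integers(0, 3, size=(n_plants, n_markers)).astype(float)  # 0/1/2 allele counts per marker
true_effects = rng.normal(0.0, 0.1, n_markers)                    # unknown in practice
y = X @ true_effects + rng.normal(0.0, 1.0, n_plants)             # measured trait score

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=10.0).fit(X_train, y_train)  # shrinkage copes with far more markers than plants

print("held-out R^2:", round(model.score(X_test, y_test), 2))
# Breeders would advance the highest-predicted candidates to field trials.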

How have farmers gone about incorporating AI tools across the world?

People have been working on tools for image analysis of diseases so that farmers can take a photo of their plant and then get an assessment using a phone. For example, PlantVillage Nuru is a phone application that uses image analysis to diagnose potential diseases in crops. It uses machine learning and thousands of images of crop diseases collected by experts from around the world. The app analyzes the images and supports growers in making informed decisions about crop management.

Image analysis for disease diagnosis is generally not 100 percent accurate, but it can provide a level of confidence to help growers diagnose their crop diseases and understand the uncertainty.

What are some of the challenges involved in developing these kinds of AI tools?

For one thing, you need a lot of data for the AI system to learn from. To make an image analysis tool for diagnostics, you need to include a representative set of crop varieties, which can have a wide range of shapes and colorations. One big challenge is just getting enough of these images that are labeled correctly to be used for the image analysis tool to learn.

Another big issue is cost. There can be a lot of tools that do what you want them to do, but is the benefit they bring big enough to be worth the investment? I think there are a lot of AI tools that are already useful, but they might not be profitable for farmers yet. Many current applications are in cases where very high-value materials are processed, such as in postharvest fruit handling and in crop breeding.

Another sort of challenge is training and capacity building so that the use of such tools isn’t dependent on one expert but is more broadly used. A challenge for AI, and new technologies in general, is to make sure that the costs and benefits are fairly distributed in society.

What’s your ideal vision for securing a climate-resilient food security system for the future?

To be resilient to climate change, our food systems need to be designed to respond rapidly to new challenges. We can predict some future challenges, but some changes are likely to be a surprise. Education and capacity building are key to resilience, along with effective cooperation locally and globally. An international proposal for a global surveillance system for plant disease is an inspiring vision.

For food security in general, we need to support science education and capacity building, to make the best use of our current technologies and to support the development of better technologies. We need to work for food systems that minimize negative effects of agriculture on wildlands and maximize benefits for human health.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Champagne bubbles: the science

As you uncork that bottle and raise your glass, take time to toast physics and chemistry along with the New Year

In a lab in the heart of France’s wine country, a group of researchers carefully positions an ultra-high-speed camera. Like many good scientists, they are devoted to the practice of unpicking the universe’s secrets, seeking to describe the material world in the language of mathematics, physics and chemistry. The object of their study: the bubbles in champagne.

Chemical physicist Gérard Liger-Belair, head of the eight-member “Effervescence & Champagne” team at the University of Reims Champagne-Ardenne, perhaps knows more about champagne bubbles than anyone else on the planet. Starting with his PhD thesis in 2001, Liger-Belair has focused on the effervescent fizz within and above a glass. He has written more than 100 technical papers on the subject, including a 2021 deep dive into champagne and sparkling wines in the Annual Review of Analytical Chemistry and a popular book (Uncorked: The Science of Champagne).

“When I was a kid, I was entranced by blowing and watching soap bubbles,” Liger-Belair recalls. That fascination has persisted, alongside a host of more practical work: There are plenty of good reasons to be interested in bubbles, extending far beyond the pleasures of sparkling wine. Liger-Belair has helped to show which aerosols are thrown up into the sky by tiny bursting bubbles in sea spray, affecting the ocean’s role in cloud formation and climate change. He even helped to determine that some mysterious bright spots in radar scans of Saturn’s moon Titan could be centimeter-sized nitrogen bubbles popping at the surface of its polar seas.

But Liger-Belair has had the pleasure of focusing the last 20 years of his work on the bubbles in champagne and other fizzy drinks, including cola and beer. His lab investigates all the factors that affect bubbles, from the type of cork to wine ingredients to how the drink is poured. They interrogate how these carbon dioxide bubbles affect taste, including the size and number of bubbles and the aromatic compounds kicked up into the air above the glass.

In pursuit of answers, they have turned to gas chromatography and other analytical techniques — and, along the road, have taken some striking photos. Others, too, around the world have turned their gaze on bubbles, even inventing robots to produce a consistent pour and focusing on the psychology of how we enjoy fizz.

Champagne from grapes to glass

It is often said that Dom Pierre Pérignon, a monk appointed as the cellar master of an abbey in Champagne, France, drank the first-ever accidental sparkling wine and exclaimed: “I am drinking the stars!” This, it turns out, is probably fiction. The earliest sparkler likely came from a different French abbey, and the first scientific paper on the matter came from Englishman Christopher Merret, who presented the idea to the newly minted Royal Society of London in 1662, years before Pérignon got his post.

The traditional method for producing champagne involves a first fermentation of grapes to produce a base wine, which is supplemented with cane or beet sugar and yeast and allowed to ferment a second time. The double-fermented wine then sits for at least 15 months (sometimes decades) so that the now-dead yeast cells can modify the wine’s flavor. That dead yeast is removed by freezing it into a plug in the bottle’s neck and popping out the frozen mass, losing some of the gas from the drink along the way.

The wine is recorked, sometimes with additional sugars, and a new equilibrium is established between the air space and the liquid in the bottle that determines the final amount of dissolved carbon dioxide. (There are equations to describe the gas content at each stage, for those curious to see the math.)
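
(One standard relationship behind such equations, offered here only as an illustration and not necessarily the exact formulation Liger-Belair uses, is Henry’s law: the concentration of CO2 dissolved in the wine is proportional to the pressure of CO2 in the headspace, c = kH(T) × p, where the constant kH depends on temperature. Higher headspace pressure, or a colder bottle, keeps more gas in the liquid.)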

The final product’s taste depends a lot, of course, on the starting ingredients. “The grapes are core to the quality of the wine,” says Kenny McMahon, a food scientist who studied sparkling wines at Washington State University before starting his own winery. A lot also depends on how much sugar is added in the final stage. In the Roaring Twenties, champagnes introduced in the United States were really sweet, McMahon says; modern tastes have changed, and vary from country to country.

But the bubbles are also extremely important: Proteins in the wine, including ones from exploded dead yeast cells, stabilize smaller bubbles that make the desired “mousse” foam at the top of a champagne glass and a sharper pop in the mouth. According to the University of Melbourne’s Sigfredo Fuentes, most of an amateur’s impression of a sparkling wine comes from an unconscious assessment of the bubbles.

“You basically like or not a champagne or sparkling wine by the first reaction, which is visual,” says Fuentes, who researches digital agriculture, food and wine science. This effect is so powerful, he has found, that people will highly rate a cheap, still wine that has been made bubbly by blasting it with sound waves just before pouring. People were even willing to pay more for the sonically bubbled wine. “It went, for really bad wine, to 50 bucks,” he laughs.

Typically, a bottle needs to hold at least 1.2 grams of CO2 per liter of liquid to give it the desired sparkle and bite from carbonic acid. But there is such a thing as too much: More than 35.5 percent CO2 in the air within a glass will irritate a drinker’s nose with an unpleasant tingling sensation. The potential for irritation is greater in a flute, where the concentration of CO2 above the liquid is nearly twice that of a wider, French-style coupe, and lower if poured from a chilled bottle than a lukewarm one.

Liger-Belair’s team has found that a good cork (composed of small particles stuck together with a lot of adhesive) will hold the gas in a bottle for at least 70 years; after that, the beverage will be disappointingly flat. Such was the fate that befell champagne bottles found in a shipwreck in 2010 after 170 years underwater.

Liger-Belair and his colleague Clara Cilindre received a few precious milliliters of this elixir to study. The wines had some interesting properties, they and colleagues reported in 2015, including an unusually high percentage of iron and copper (possibly from nails in the barrels used to age the wine, or even from pesticides on the grapes). They also had a lot of sugar, and surprisingly little alcohol, perhaps because of a late-in-year fermentation at colder than usual temperatures. While Liger-Belair and Cilindre sadly did not have an opportunity to sip their samples, others who did get a taste described it using terms including “wet hair” and “cheesy.” 

For a more common bottle of fizz, even the method of pouring has an impact on bubbles. If 100 milliliters (about 3.4 fluid ounces) of champagne are poured straight down into a vertical flute, Liger-Belair calculates that the glass will host about a million bubbles. But a gentler “beer pour” down the side of a glass will boost that by tens of thousands. There are “huge losses of dissolved CO2 if done improperly,” he says. Rough spots inside a glass can also help to nucleate bubbles; some glassmakers etch shapes inside glasses to help this process along. And to avoid introducing bubble-popping surfactants, some people even go to the lengths of washing their glasses without soap, McMahon says.

Champagne taste test

All the science has “direct implications on how best to serve and taste champagne,” says Liger-Belair. McMahon, too, is confident that the industry has tweaked protocols to line up with the scientific results, though he can’t point to any specific winery that has done so. There are many university departments focused on wine, and there’s a reason for that, he says — their work is finding fruitful, and financially beneficial, application. Fuentes says he knows that some sparkling wine makers (though he won’t name them) add egg proteins to their wine to make for a small-bubbled foam that can last for up to an hour.

Fuentes is pursuing another angle for commercial application: His team has created the FIZZeyeRobot — a simple robotic device (the prototype was made from Lego bricks) that performs a consistent pour, uses a camera to measure the volume and lifespan of foam on top of the glass, and has metal oxide sensors to detect levels of CO2, alcohol, methane and more in the air above the glass. The team is using artificial-intelligence-based software to use those factors to predict the aromatic compounds in the drink itself and, importantly, taste. (Much of this research is done on beer, which is cheaper and faster to make, but it applies to sparkling wine too.)

“We can predict the acceptability by different consumers, if they’re going to like it or not, and why they’re going to like it,” Fuentes says. That prediction is based on the team’s own datasets of tasters’ reported preferences, along with biometrics including body temperature, heart rate and facial expressions. One way to use this information, he says, would be to pinpoint the optimum time for any sparkling wine to sit with the dead yeast, in order to maximize enjoyment. He expects the system to be commercially available sometime in 2022.

Of course, human palates vary — and can be tricked. Many studies have shown that the wine-tasting experience is deeply influenced by psychological expectations determined by the appearance of the wine or the setting, from the company one is keeping to room lighting and music. Nevertheless, Liger-Belair has, through decades of experience, formed a personal preference for aged champagnes (which tend to contain less CO2), poured gently to preserve as many bubbles as possible, at a temperature close to 12° Celsius (54° Fahrenheit), in a large tulip-shape glass (more traditionally used for white wines) with generous headspace.

“Since I became a scientist, many people have told me that I seem to have landed the best job in all of physics, since I have built my career around bubbles and I work in a lab stocked with top-notch champagne,” he says. “I’d be inclined to agree.” But his real professional pleasure, he adds, “comes from the fact that I still have the same childlike fascination with bubbles as I did when I was a kid.” That love of bubbles has not yet popped.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Katherine Flegal was a scientist who found herself crunching numbers for the government, until one day her analyses set off a firestorm. What does she make of her decades as a woman in public health research?

Katherine Flegal wanted to be an archaeologist. But it was the 1960s, and Flegal, an anthropology major at the University of California, Berkeley, couldn’t see a clear path to this profession at a time when nearly all the summer archaeology field schools admitted only men. “The accepted wisdom among female archaeology students was that there was just one sure way for a woman to become an archaeologist: marry one,” Flegal wrote in a career retrospective published in the 2022 Annual Review of Nutrition.

And so Flegal set her archaeology aspirations aside and paved her own path, ultimately serving nearly 30 years as an epidemiologist at the National Center for Health Statistics (NCHS), part of the US Centers for Disease Control and Prevention. There, she spent decades crunching numbers to describe the health of the nation’s people, especially as it related to body size, until she retired from the agency in 2016. At the time of her retirement, her work had been cited in 143,000 books and articles.

In the 1990s, Flegal and her CDC colleagues published some of the first reports of a national increase in the proportion of people categorized as overweight based on body mass index (BMI), a ratio of weight to the square of height. The upward trend in BMI alarmed public health officials and eventually came to be called the “obesity epidemic.” But when Flegal, along with other senior government scientists, published estimates on how BMI related to mortality — reporting that being overweight was associated with a lower death rate than having a “normal” BMI — she became the subject of intense criticism and attacks.

Flegal and her coauthors were not the first to publish this seemingly counterintuitive observation, but they were among the most prominent. Some researchers in the field, particularly from the Harvard School of Public Health, argued that the findings would detract from the public health message that excess body fat was hazardous, and they took issue with some of the study’s methods. Flegal’s group responded with several subsequent publications reporting that the suggested methodological adjustments didn’t change their findings.

The question of how BMI relates to mortality, and where on the BMI scale the lowest risk lies, has remained a subject of scientific debate, with additional analyses often being followed by multiple Letters to the Editor protesting the methods or interpretation. It’s clear that carrying excess fat can increase the risk of heart disease, type 2 diabetes and some types of cancers, but Flegal’s work cautioned against tidy assumptions about the complex relationship between body size, health and mortality.

Flegal spoke with Knowable Magazine about her career, including some of the difficulties she faced as a woman in science and as a researcher publishing results that ran counter to prevailing public health narratives. This conversation has been edited for length and clarity.

After finishing your undergraduate degree in 1967, one of your first jobs was as a computer programmer at the Alameda County Data Processing Center in California, where you handled data related to the food stamp program. What drew you to that job?

It’s kind of hard to even reconstruct those days. This was well before what I call the “toys for boys” era, when people had little computers in their houses, and you might learn how to write a program in BASIC or something. You didn’t learn how to program in school at all. Big places like banks had started using computers, but they didn’t have people who knew what to do with them. So they hired on the basis of aptitude tests, and then they trained you.

I realized if you could get a job as a trainee, they would teach you how to program, which was a pretty good deal. So I applied for a couple of these jobs and took my aptitude tests and scored very highly on them. I was hired as a programmer trainee, and they spent six months training us. It was not just like “press this button, press that button.” We really got a very thorough introduction.

At that time, there was gender equality in programming, because it was based just on aptitude. In my little cohort, there were two women and three men, and everybody did the same thing. It was very egalitarian. Nothing really mattered as long as you could carry out the functions and get everything right.

And that was different from some of the other jobs available at that time?

Yeah, there were “Help Wanted — Women” and “Help Wanted — Men” ads, and the “Help Wanted — Women” ads were secretarial or clerical or something like that. It was very clear that you weren’t supposed to be applying for these other jobs. There were the kinds of jobs that men got and the kinds of jobs that women got.

What else did you learn in that position as a programmer?

This was a governmental operation, with legal requirements and money involved. It was our job to track everything and test every program very, very carefully. If later you found an error lurking in a program, you had to go back and rerun everything. We were taught to do everything just right — period. And that was a pretty valuable lesson to learn.

It was very well paid, since we had valuable skills, but we had to work a lot of overtime. They would call you up in the middle of the night if something was flagged in your program. I got to be quite a good programmer, and that really stood me in good stead.

Why did you decide to go to graduate school to study nutrition?

My job was OK, but I didn’t have a lot of autonomy, and I think I didn’t like that very much. I thought it would be interesting to study nutrition. I think unconsciously I was selecting something that was more girly in some way.

After completing your PhD and a postdoc, you struggled to find a secure university job. You wrote about how you think that the “Matilda Effect” — a term coined by science historian Margaret Rossiter to describe the systematic under-recognition of women in science — contributed to your being overlooked for academic jobs. Can you tell us more about that?

Women don’t get recognized and are much more likely to just be ignored. I didn’t think this was going to be an issue, but looking back, I realized that gender played much more of a role in my career than I had thought.

You can’t really put your finger on it, but I think you just are not viewed or treated in the same way. I put this anecdote at the beginning of my Annual Review article: My husband and I are at the holiday party for the University of Michigan biostatistics department that I work in. There’s a professor there who has no idea who I am, although this is a very small department and I walk by his office all the time. He sees my husband, who looks reasonably professional, and asks the department chair who he is. When he’s told, “That’s Katherine Flegal’s husband,” he responds, “Who’s Katherine Flegal?” It was like I was just part of the furniture, but my husband was noticed.

How did you end up working as an epidemiologist at the CDC?

A CDC scientist came to Michigan and was recruiting. She encouraged me and other people — I wasn’t the only one by a long shot — to apply for these various jobs. I applied and then kind of forgot about the whole thing, but then this offer came through. It wasn’t really what I had in mind, but it was an offer, so I accepted it.

It sounds like you didn’t expect that to turn into a 30-year career in the federal government.

I certainly didn’t.

What was different about working at the CDC compared with academia?

It has its good and bad aspects, like most things do. You work for an organization, and you have to do things to meet the organization’s needs or requirements, and that can be frustrating. We didn’t have to apply for grants, so that was good in one way and bad in another. There was no ability to get staff or more resources. You just had to figure out what to do on your own.

The advantage was that it was a really secure job, and we produced a lot of data. NCHS, the part of CDC that I worked in, is a statistical agency. It’s not agenda-driven, which was good.

On the other hand, what you write has to be reviewed internally, within the CDC, and it’s a tight review. If the reviewers say, “I don’t like this,” you either have to convince them it’s OK, or do what they say. You cannot submit your article for publication until you satisfy the reviewers.

What kinds of projects did you work on at the CDC?

I worked for the NHANES program, the National Health and Nutrition Examination Survey. I would think of different projects to analyze and make sense of the survey data. But if somebody wanted me to do something else, I had to do something else. For example, I got assigned to deal with digitizing X-rays endlessly for several years. And I worked on updating the childhood growth charts used to monitor the growth of children in pediatricians’ offices, which turned out to be surprisingly controversial.

Can you tell us more about what NHANES is, and why it’s important?

NHANES is an examination survey, so there are mobile units that go around the country and collect very detailed information from people; it’s like a four-hour examination. When you read about things like the average blood cholesterol in the United States, that kind of information almost always comes from NHANES, because it’s a carefully planned, nationally representative study of the US population. It started in the early 1960s, and it’s still running today.

One of the things that distinguishes NHANES from other data sources is that it directly measures things like height and weight, rather than just asking people about their body size. Why does that matter?

People don’t necessarily report their weight and height correctly for a variety of reasons, not all of which are fully understood. There’s a tendency to overestimate height; there’s kind of a social desirability aspect probably involved in this. And there’s a tendency for people, especially women, to underreport their weight a little bit. Maybe they’re thinking “I’m going to lose five pounds,” or “This is my aspirational weight,” or they don’t really know, because they don’t weigh themselves.

That can make a difference — not huge, but enough to make quite a difference in some studies. And what you don’t know is whether the factors that are causing the misreporting are the same factors that are affecting the outcome. That’s very important and overlooked. It’s a risky business to just use self-reported data.

One of the first studies you coauthored related to obesity was published in JAMA in 1994 and described an increase in BMI among adults in the US.

Right. I was the one who said that we at NCHS needed to publish this, because we produced the data. We were really astonished to get the results, which showed that the prevalence of overweight BMI was going up, which is not what anybody expected, including us.

Did you face pushback from within the CDC for some of the things that you were publishing?

Yes. This really started in 2005, when we wrote an article estimating deaths associated with obesity. The CDC itself had just published a similar article the year before with the CDC director as an author, which is fairly unusual. That paper said that obesity was associated with almost 500,000 deaths in the US and was poised to overtake smoking as a major cause of death, so it got a lot of attention.

In our paper, we used better statistical methods and better data, because we had nationally representative data from NHANES, and my two coauthors from the National Cancer Institute were really high-level statisticians. We found that the number of deaths related to obesity — that’s a BMI of 30 or above — was nowhere near as high as they had found. But we also found that the overweight BMI category, which is a BMI of 25 up to 29.9, was associated with lower mortality, not higher mortality.

We had this wildly different estimate from what CDC itself had put out the year before, so this was an awkward situation for the agency. The CDC was forced by the press to make a decision about this, and they kind of had to choose our estimates, because they couldn’t defend the previous estimates or find anything wrong with ours. The CDC started using them, but they were tucked away. It was really played down.
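
For readers unfamiliar with the cutoffs Flegal refers to, BMI is weight in kilograms divided by the square of height in meters, and the categories are fixed ranges of that number. The short sketch below illustrates the standard definitions; it is not code from any of the studies discussed here.

# Illustration of BMI and the standard adult categories mentioned above
# (standard cutoffs, not code from Flegal's studies).
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def category(b: float) -> str:
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"  # 25 up to 29.9, the range at issue in these papers
    return "obese"           # 30 or above

value = bmi(80, 1.75)
print(round(value, 1), category(value))  # 26.1 overweight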

That study generated a lot of media attention and criticism from other researchers. Was that a surprise to you?

Yes, that was completely a surprise. There was so much media attention immediately. I had to have a separate phone line just for calls from journalists. And almost immediately, the Harvard School of Public Health had a symposium about our work, and they invited me, but they didn’t offer to pay my way. CDC said that they didn’t want me to go, so that was the end of that. But the final lineup they had was other people saying how our findings didn’t agree with theirs, so this whole symposium was basically an attack on our work.

You and coauthors also published a meta-analysis of 97 studies in 2013 that found that being overweight or mildly obese wasn’t associated with a greater risk of mortality. Did you face a similar response to that article?

We embarked on a systematic review and found that these results pretty much agreed with what we had already found. We published that, and there was a lot of criticism, another symposium at Harvard, and just a lot of attacks. The chair of Harvard’s nutrition department, Walter Willett, went on NPR and said that our paper was so bad that nobody should ever read it, which is a pretty unusual thing for a scientist to be saying.

That must have been difficult to have your work attacked so publicly.

It was really awful, to be honest. I don’t usually admit that. It was extremely stressful. And I didn’t have much support from anywhere. A lot of people thought what we had done was fine, but those people were not writing letters and holding symposia and speaking out in favor of us.

I know my coauthors were a little startled by the way in which I was treated, and they always said, maybe if I had been a man I would not have been treated quite so badly. That could be true, but I don’t have any way of knowing.

Was anyone able to identify anything incorrect about your analysis?

Well, they certainly didn’t identify any specific errors. There was no evidence that we had done anything wrong, and no one has ever found anything specifically that would have made a difference to our results.

There’s a whole school of thought that there are all these confounding factors like smoking and illness. For example, maybe people are sick, and they lose weight because they’re sick, and that will affect your results, so you have to remove those people from your analyses. People raised all these criticisms, and we looked at all of them and published a whole report looking at it every which way. But we didn’t find that these factors made much of a difference to our results.

There are many, many studies of BMI and mortality that tried all these things, like eliminating smokers and people who might have been sick, and it doesn’t make any difference. This is not an uncommon finding.

One of the critiques of this research was that it would confuse people or compromise public health messaging. How do you respond to that?

Well, I don’t think it makes sense. I think that when you find a result that you don’t expect, the interesting thing ought to be, how can we look into this in a different way? Not just to say that this is giving the wrong message so it should be suppressed. Because that’s not really science, in my opinion.

Is part of the issue that BMI is not a great proxy for body fatness? Or that the BMI categories are kind of arbitrarily drawn?

Well, they are very arbitrary categories. I think the whole subject is much more poorly understood than people recognize. I mean, what is the definition of obesity? It ended up being defined by BMI, which everybody knows is not a good measure of body fat.

And there’s other research that suggests body fat is not really the issue; maybe it’s your lean body mass, your muscle mass and your fitness in other ways. That could be the case, too. I don’t really know, but that’s an interesting idea. BMI is just so entrenched at this point; it’s like an article of faith.

When you look at how much your work has been cited, and how much influence it had, it seems you had quite an impact.

I think I did, but it really wasn’t what I expected or set out to do. I got into this controversial area pretty much by accident. It caused all this brouhaha, but I don’t back down.

We were all senior government scientists who had already been promoted to the highest level. In a way, it was kind of lucky that I was working for CDC. Anywhere else, writing those articles would have been a career-ending move. If I had had anything that could have been destroyed, somebody would have destroyed it. I think I wouldn’t have gotten any grants. I would have become disgraced.

But this stuff is serious. It’s not easy, and everybody has to decide for themselves: What are they going to stand up for?

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

A Savory, Crowd-Pleasing Breakfast

When you need a breakfast to feed a large group, this Biscuit-Sausage Mushroom Casserole is a perfect option. The savory aromas of sausage and bacon are almost sure to have your guests standing in line with a plate and fork in hand.

For more breakfast recipes, visit Culinary.net.


Biscuit-Sausage Mushroom Casserole

  • 1          package (16 ounces) pork sausage
  • 1          package (12 ounces) bacon, chopped
  • 8          tablespoons butter, divided
  • 1/2       cup flour
  • 4          cups milk
  • 1          package (8 ounces) mushrooms, sliced
  • 12        eggs
  • 1          can (5 ounces) evaporated milk
  • 1/2       teaspoon salt
  • nonstick cooking spray
  • 1          can (12 ounces) flaky biscuits
  1. In pan over medium-high heat, cook pork sausage until thoroughly cooked, stirring frequently. Remove from heat and drain sausage. Set aside.
  2. Chop bacon into small pieces. In separate pan over medium-high heat, cook bacon until thoroughly cooked. Remove from heat and drain bacon. Set aside.
  3. In saucepan over medium heat, melt 6 tablespoons butter. Add flour; whisk until smooth. Cook on low heat 1 minute, stirring constantly. Gradually stir in milk. Cook until bubbly and thickened. Add sausage, bacon and mushrooms; mix well. Set aside.
  4. In large bowl, combine eggs, evaporated milk and salt. Using whisk, beat until blended.
  5. In saucepan over medium heat, melt remaining butter. Add egg mixture; cook until firm but moist, stirring occasionally.
  6. Heat oven to 350° F.
  7. Spray 13-by-9-inch baking dish with nonstick cooking spray.
  8. Spoon half the egg mixture into bottom of baking dish. Top with half the gravy mixture. Repeat layers.
  9. Separate biscuit dough and cut into quarters. Top sauce with biscuit quarters, points facing up.
  10. Bake 20-25 minutes, or until mixture is heated and biscuits are golden brown.
SOURCE:
Culinary.net
OPINION: Children around the world were out of school for months, with big impacts on learning, well-being and the economy. How do we avoid a ‘generational catastrophe’?

Three years into the Covid-19 pandemic, we can see the results of the largest natural global education experiment in modern history. They’re worrying.

At the height of pandemic shut-downs in April 2020, UNESCO estimated that 190 countries instituted nationwide closures of educational institutions, affecting nearly 1.6 billion students globally (94 percent of all learners). This represents one-fifth of humanity.

Since 2020, I have been leading a team of senior global education experts to inform the Group of Twenty (G20) advisory processes, a forum for international economic cooperation for leaders and heads of government of 19 countries and the European Union. Using UNESCO data, we estimated that between February 2020 and March 2022, education was disrupted globally for an average of 41 weeks — that’s 10.3 months.

Extended school closures have grave and lingering effects on education, health, and social and economic well-being, even after students return. Some never will: Globally, an estimated 24 million are at risk of dropping out entirely. If these issues are left unaddressed, the United Nations’ secretary-general has warned that the effect will be a “generational catastrophe.”

We must take immediate steps to prioritize education systems, especially since more disruptions are likely. More than 250 million children were already out of school before the pandemic because of conflict, emergencies (like natural disasters) and social inequities. Countries continue to face complex challenges of climate change, conflict, displacement, disease, hunger and poverty. For example, schools in Delhi — which had some of the longest pandemic closures globally — were closed for additional weeks or months in 2021 and 2022 due to air pollution; in 2022, smoke from California wildfires caused closures from the coast to Reno, Nevada.

In case it isn’t obvious: Schools matter for learning. A new review of 42 studies covering 15 countries (primarily high-income) concluded that on average, children lost out on about 35 percent of a normal school year’s worth of learning due to pandemic closures. Learning deficits appeared early in the pandemic and persisted.

An earlier review covering high-income countries found, in seven out of eight studies, statistically significant negative effects of pandemic closures on learning in at least one subject area. Those studies mainly looked at elementary education and covered core subjects and areas such as math, reading and spelling. Importantly, the negative effects were worse for students from lower-income households, with relatively less-educated parents, from marginalized racial backgrounds or with disabilities.

A modeling study on low- and middle-income countries projected that if learning time in Grade 3 is reduced by one-third (roughly the scenario in the first wave of global pandemic-related school closures), students will be a full year behind by the time they reach Grade 10 if there isn’t remediation.

Schools matter for other reasons too: They are hubs for counseling, therapeutic services, childcare, protection and nutrition. The World Food Programme estimates that at the height of closures, “370 million children in at least 161 countries [including the US] were suddenly deprived of what was for many their main meal of the day.”

Schools also have large cumulative economic effects on societies. A comprehensive study of 205 countries concluded that four months of school closure (far less than the global average) can amount to a lifetime loss of earnings of around $3,000 per student in low-income countries and up to $21,000 in high-income countries. That may not seem like much at first glance, but the collective lost income for this generation is shocking: $364 billion in low-income countries to $4.9 trillion in high-income countries — amounting to a staggering 18 percent of the current global GDP.

So, what can we do?

It’s clear that digital technology and virtual instruction can provide some continuity, but they aren’t a panacea. The Survey on National Education Responses (led by UNESCO, UNICEF, the World Bank and the OECD) revealed that only about 27 percent of low- and lower-middle-income countries and just 50 percent of high-income countries reported having an explicit policy on digital remote learning that was fully operationalized. Moreover, there is a global gender and wealth digital divide on access to basic digital infrastructure like devices and high-speed internet.

A study by World Bank researchers concluded that in the best-case scenario (high-income countries with shorter disruptions and better access to technology), virtual learning could compensate for as little as 15 percent to a maximum of 60 percent of learning losses.

Wide-scale remedial education programs to boost learning in areas like math, reading, writing and critical thinking can help. Intensive tutoring programs can narrow learning gaps, especially when they are one-on-one or in small groups, by a professional, and more than twice a week. One analysis showed that this kind of programming can increase student achievement from the 50th percentile to nearly the 66th percentile.
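
To put that percentile gain in more familiar effect-size terms: if achievement scores are roughly normally distributed (an assumption of this illustration, not a claim from the analysis cited above), moving from the 50th to the 66th percentile corresponds to a gain of roughly 0.4 standard deviations.

# Convert the percentile gain above into a standard-deviation effect size,
# assuming normally distributed achievement scores (an illustrative assumption).
from scipy.stats import norm

baseline = norm.ppf(0.50)  # the 50th percentile sits at the mean (0 standard deviations)
after = norm.ppf(0.66)     # the 66th percentile
print(f"effect size: about {after - baseline:.2f} standard deviations")  # roughly 0.41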

But one-off interventions aren’t enough for systems-level change. Without large-scale publicly financed remedial programs, we can expect obvious inequities in who benefits and who doesn’t. Those who can supplement their children’s education privately will do so, while others will be left further behind. We also need integrated curricular reform, where adaptations for each grade are connected to the previous grade and to the next to account for disruption.

Of course, teachers are the key resource for education systems. We entered the pandemic with a global shortage of 69 million teachers. There is now teacher attrition, and educational needs are even greater. Pay and better working conditions to retain and recruit teachers must be prioritized.

All this takes money.

Before the pandemic, low- and lower-middle-income countries already faced a $148-billion annual financing shortfall to achieve Sustainable Development Goal 4 on quality education for all by 2030. That gap has widened by a range of $30 billion to $45 billion.

In 2020, one-third of low- and lower-middle-income countries had to spend more on servicing their external debt than they could on education.

The global recommendation, set in the 2015 Incheon Declaration adopted at the World Education Forum, is for countries to spend at least 4 percent to 6 percent of their GDP or 15 percent to 20 percent of their public budget on education. Even before the pandemic, OECD countries spent an average of only about 10 percent of their public budgets on education. Of more than 150 countries, about a third missed both benchmarks.

Early data suggest the percentage of budgets going to education went down on average from 2019 to 2021, not up. Official aid programs also cut their budgets for education in 2020 to the lowest levels in five years.

Now, more than ever, governments must make different policy choices to prioritize education.

There’s a narrow window in which to address this, and that window is closing. The future of a generation depends on it.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Jobs report hints that Fed policy is paying off – and that a ‘growth recession’ awaits

Inching toward a recession ... but what kind? Eskay Lim/EyeEm via Getty Images
Christopher Decker, University of Nebraska Omaha

The latest jobs report is in, and the good news is Federal Reserve policy on inflation appears to be working. The bad news is Fed policy on inflation appears to be working.

The March 2023 jobs report reveals that the U.S. economy added 236,000 jobs during the month – roughly in line with expectations. A trend appears to be emerging: The U.S. central bank’s efforts to slow the economy and tame inflation are finally showing up in the labor market, with some companies feeling the effect of higher business costs.

While that will calm the nerves of monetary policymakers, it does raise the prospect of some economic pain ahead – not least for those who will indeed lose their jobs. And for the wider economy, it could also signal another slightly unwelcome phenomenon: the “growth recession.”

What is a growth recession?

Growth recessions occur when an economy enters a prolonged period of low growth – of, say, 0.5% to 1.5% – while also experiencing the other telltale signs of a recession, such as higher unemployment and lower consumer spending. The economy is still expanding, but it may feel just like a recession to regular people. Some economists consider the 2002 to 2003 period to have been a growth recession.
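As a purely illustrative toy, the rough criteria above could be written as a simple check. The thresholds just mirror the description in the preceding paragraph, and a real assessment would look at a prolonged stretch of data rather than a single quarter.

```python
# Toy illustration of the "growth recession" description above:
# low but positive growth (~0.5%-1.5%) alongside rising unemployment.
# Thresholds and the sample inputs are hypothetical, for illustration only.

def looks_like_growth_recession(gdp_growth_pct: float,
                                unemployment_change_pp: float) -> bool:
    low_growth = 0.5 <= gdp_growth_pct <= 1.5
    rising_unemployment = unemployment_change_pp > 0
    return low_growth and rising_unemployment

# Example: 1.0% annualized growth while unemployment rises 0.2 percentage points
print(looks_like_growth_recession(1.0, 0.2))  # True
```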

For now, the job market is still relatively robust. In March, the unemployment rate even edged downward very slightly to 3.5% from 3.6% the previous month.

Yet in terms of job additions, this still-healthy increase does suggest a slowdown in hiring. The 236,000 jobs added in March is down from the 326,000 and 472,000 added in February and January, respectively.

Other data have suggested for some time now that a slowdown was coming, and eye-grabbing headlines about bank failures and layoffs in the tech sector point the same way.

Other data hint at more employment pain to come. The February Job Openings and Labor Turnover report from the Bureau of Labor Statistics posted a job openings number below 10 million for the first time since May 2021 – a downward trend that has been in place since December 2021, when openings peaked at 11.8 million.

Meanwhile, the U.S. Census Bureau recently reported that new manufacturing orders fell by 0.7% in February 2023. Indeed, new orders have declined in three of the past four reported months, and before that, orders growth was sluggish at best.

In terms of sectors, job declines in construction – down by 9,000 – and manufacturing – down by 1,000 – are as expected, as both sectors are sensitive to interest rate increases.

It is quite likely that such declines will continue in coming months.

Other sectors posted substantial gains. Health services were up 50,800, and leisure gained 72,000. However, these gains are still smaller than in previous months.

What this means for Fed policy

This report seems to suggest that Fed actions to slow the economy are working, even though inflation remains well above the Fed’s 2% target.

I believe this probably won’t significantly alter Fed policy. Indeed, it suggests that the year-old campaign of aggressive interest rate hikes to tame inflation is paying dividends. The slow drip of data confirming this gives monetary policymakers room to manage the economy as they try to engineer a so-called “soft landing.”

If the April jobs report is similar to March’s, and barring any unusual events between now and its release in May, I expect the Fed to keep inching rates up slowly, likely by another quarter of a percentage point.

Where this leaves the economy as the year progresses, only time – and more data – will tell. But from where I stand, the economy looks to be heading toward a downturn by the fall. The question is whether it will take the form of a mild recession – which will include periods of economic shrinkage – or whether, as I suspect, it will be a low-growth recession. Either way, it will involve some pain.

Christopher Decker, Professor of Economics, University of Nebraska Omaha

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why democratic countries around the world are not prepared to support Ukraine – and some are shifting closer to Russia

Jose Caballero, International Institute for Management Development (IMD)

More than a year into the Ukraine war, efforts to build a global consensus against Russia seem to have stalled, with many countries opting for neutrality.

The number of countries condemning Russia has declined, according to some sources. Botswana has edged towards Russia from its original pro-Ukraine stance, South Africa is moving from neutral to Russia-leaning and Colombia from condemning Russia to a neutral stance. At the same time, a large number of countries have been reluctant to support Ukraine.

In Africa, for example, despite the African Union’s call on Moscow for an “immediate ceasefire”, most countries remain neutral. Some observers argue that this is the result of a tradition of left-leaning regimes that goes back to the cold war period. Others indicate that the current unwillingness of African countries originates in the history of western intervention, sometimes covert and sometimes overt, in their internal affairs.

The reluctance to condemn Russia, however, goes beyond Africa. In February 2023, most Latin American countries supported a UN resolution calling for an immediate and unconditional Russian withdrawal. And yet, despite Brazil’s support for several UN resolutions in Ukraine’s favour, it has not condemned Russia outright. Within the UN, the stance of Bolivia, Cuba, El Salvador and Venezuela has allowed Russia to evade western sanctions. Furthermore, Brazil, Argentina and Chile rejected calls to send military material to Ukraine, and Mexico questioned Germany’s decision to provide tanks to Ukraine.

The same divisions are evident in Asia. While Japan and South Korea have openly denounced Russia, the Association of Southeast Asian Nations has not collectively done so. China approaches the conflict as a balancing act, managing its strategic partnership with Russia alongside its increasing influence in the UN. During its time as a member of the UN Security Council, India abstained on votes related to the conflict.

The politics of neutrality

Such a cautious and neutral position has been influenced by the cold war’s non-alignment movement, which was perceived as a way for developing countries to engage with that conflict “on their terms” and thus acquire a degree of foreign policy autonomy outside the spheres of influence of both the Soviet Union and the west. Studies of EU sanctions have argued that other countries’ unwillingness to back the EU position can reflect both a desire for foreign policy independence and a reluctance to antagonise a neighbour.

Non-alignment allows countries to avoid becoming entangled in the rising geopolitical tensions between the west and Russia. It is perhaps for this reason that many democratic countries maintain a stance of neutrality, preferring, as South African president Cyril Ramaphosa put it, to “talk to both sides”.

There are, however, particular economic and political incentives that are influential when countries decide against condemning Russia.

Brazil

Since the early stages of the Ukraine conflict, Brazil has maintained a pragmatic but ambivalent stance. This position connects to Brazil’s agricultural and energy needs. As one of the world’s top agricultural producers and exporters, Brazil relies on a high rate of fertiliser use. In 2021, Brazil’s imports from Russia were worth US$5.58 billion (£4.48 billion), of which 64% was fertilisers. Fertilisers from Russia account for 23% of the 40 million tonnes Brazil imports in total.

In February 2023, it was announced that the Russian gas company Gazprom will invest in Brazil’s energy sector as part of the two countries’ expanding energy relations. This could lead to close collaboration in oil and gas production and processing, and in the development of nuclear power. Such collaboration could benefit Brazil’s oil sector, which is expected to be among the world’s top exporters. By March 2023, Russian exports of diesel to Brazil had reached record levels, coinciding with a total EU embargo on Russian oil products. Higher levels of diesel supply may alleviate potential shortages that could affect Brazil’s agricultural sector.

India

Observers point out that in the post-cold-war era, Russia and India have continued to share similar strategic and political views. In the early 2000s, in the context of their strategic partnership, Russia sought to build a multipolar global system, an aim that appealed to India’s wariness of the United States as a partner. Russia has also supported India’s nuclear weapons programme and its efforts to become a permanent member of the UN Security Council. Russia remains a key player in India’s arms trade, supplying 65% of India’s weapons imports between 1992 and 2021. Since the start of the war, it has also become an important supplier of oil at discounted prices, with India’s purchases rising from about 50,000 barrels per day in 2021 to about 1 million barrels per day by June 2022.

South Africa

On the eve of the war’s anniversary, South Africa held a joint naval drill with Russia and China. For South Africa, the benefits of the exercise relate to security: capacity building for its underfunded and overstretched navy. More broadly, there are trade incentives behind South Africa’s neutral stance. Russia is the largest exporter of arms to the African continent. It also supplies nuclear power and, importantly, 30% of the continent’s grain supplies, such as wheat, with 70% of Russia’s overall exports to the continent concentrated in four countries, including South Africa.

In January 2023, Russia was one of the largest providers of nitrogenous fertilisers to South Africa, a critical input for pasture and crop growth. In addition, among South Africa’s main imports from Russia are coal briquettes, used for fuel in several industries including food processing. Given the level of food insecurity in the country, both imports are fundamental to its socio-political and economic stability.

The Ukraine war has shown that non-alignment continues to be a popular choice, despite appeals to support another democracy in trouble. This policy has long been an important element of the political identity of countries such as India. In other cases, such as Brazil, non-interventionism remains a fundamental element of the country’s policy tradition, despite apparent shifts under President Jair Bolsonaro.

Nevertheless, neutrality is likely to become a “tricky balancing act” as conflicting interests grow more acute, particularly given the west’s provision of direct investment as well as development and humanitarian aid to many of the non-aligned states.

Jose Caballero, Senior Economist, IMD World Competitiveness Center, International Institute for Management Development (IMD)

This article is republished from The Conversation under a Creative Commons license. Read the original article.