Tuesday, April 25, 2023

How algorithms discern our mood from what we write online

Researchers and companies are harnessing computers to identify the emotions behind our written words. While sentiment analysis is far from perfect, it manages to distill meaning from huge amounts of data — and could one day even monitor mental health.

Many people have declared 2020 the worst year ever. While such a description may seem hopelessly subjective, according to one measure, it’s true.

That yardstick is the Hedonometer, a computerized way of assessing both our happiness and our despair. It runs day in and day out on computers at the University of Vermont (UVM), where it scrapes some 50 million tweets per day off Twitter and then gives a quick-and-dirty read of the public’s mood. According to the Hedonometer, 2020 has been by far the most horrible year since it began keeping track in 2008.

The Hedonometer is a relatively recent incarnation of a task computer scientists have been working on for more than 50 years: using computers to assess words’ emotional tone. To build the Hedonometer, UVM computer scientist Chris Danforth had to teach a machine to understand the emotions behind those tweets — no human could possibly read them all. This process, called sentiment analysis, has made major advances in recent years and is finding more and more uses.

In addition to taking Twitter users’ emotional temperature, researchers are employing sentiment analysis to gauge people’s perceptions of climate change and to test conventional wisdom such as, in music, whether a minor chord is sadder than a major chord (and by how much). Businesses that covet information about customers’ feelings are harnessing sentiment analysis to assess reviews on platforms like Yelp. Some are using it to measure employees’ moods on internal social networks at work. The technique might also have medical applications, such as identifying depressed people in need of help.

Sentiment analysis is allowing researchers to examine a deluge of data that was previously time-consuming and difficult to collect, let alone study, says Danforth. “In social science we tend to measure things that are easy, like gross domestic product. Happiness is an important thing that is hard to measure.”

Deconstructing the ‘word stew’

You might think the first step in sentiment analysis would be teaching the computer to understand what humans are saying. But that’s one thing that computer scientists cannot do; understanding language is one of the most notoriously difficult problems in artificial intelligence. Yet there are abundant clues to the emotions behind a written text, which computers can recognize even without understanding the meaning of the words.

The earliest approach to sentiment analysis is word-counting. The idea is simple enough: Count the number of positive words and subtract the number of negative words. An even better measure can be obtained by weighting words: “Excellent,” for example, conveys a stronger sentiment than “good.” These weights are typically assigned by human experts and are part of creating the word-to-emotion dictionaries, called lexicons, that sentiment analyses often use.
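To make the idea concrete, here is a minimal sketch of a weighted word-counting scorer in Python. The tiny lexicon and its weights are invented for illustration; they are not drawn from the Hedonometer or any published lexicon.

```python
# A minimal lexicon-based sentiment scorer (illustration only).
# The words and weights below are hypothetical, not from a real lexicon.
LEXICON = {
    "excellent": 3.0,
    "good": 1.5,
    "happy": 2.0,
    "old": -0.5,
    "ugly": -2.0,
    "terrible": -3.0,
}

def sentiment_score(text: str) -> float:
    """Sum the weights of known words; higher means happier text."""
    words = (w.strip(".,!?\"'") for w in text.lower().split())
    return sum(LEXICON.get(w, 0.0) for w in words)

if __name__ == "__main__":
    print(sentiment_score("The food was excellent and the service was good!"))  # 4.5
    print(sentiment_score("I'm so happy that my iPhone is nothing like my old ugly Droid."))  # -0.5
```

With this toy lexicon, the phone review discussed in the next paragraph comes out slightly negative, which previews exactly the kind of mistake word-counting makes.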

But word-counting has inherent problems. One is that it ignores word order, treating a sentence as a sort of word stew. And word-counting can miss context-specific cues. Consider this product review: “I’m so happy that my iPhone is nothing like my old ugly Droid.” The sentence has three negative words (“nothing,” “old,” “ugly”) and only one positive (“happy”). A human immediately recognizes that “old” and “ugly” refer to a different phone, but to the computer, the sentence looks negative. And comparisons present additional difficulties: What does “nothing like” mean? Does it mean the speaker is not comparing the iPhone with the Droid? The English language can be so confusing.

To address such issues, computer scientists have increasingly turned to more sophisticated approaches that take humans out of the loop entirely. They are using machine learning algorithms that teach a computer program to recognize patterns, such as meaningful relationships between words. For example, the computer can learn that pairs of words such as “bank” and “river” often occur together. These associations can give clues to meaning or to sentiment: If “bank” appears alongside “money” rather than “river,” it is probably a financial institution rather than a riverbank.

A major step in such methods came in 2013, when Tomas Mikolov of Google Brain applied machine learning to construct a tool called word embeddings. These convert each word into a list of 50 to 300 numbers, called a vector. The numbers are like a fingerprint that describes a word, and particularly the other words it tends to hang out with.

To obtain these descriptors, Mikolov’s program looked at millions of words in newspaper articles and tried to predict the next word of text, given the previous words. Mikolov’s embeddings recognize synonyms: Words like “money” and “cash” have very similar vectors. More subtly, word embeddings capture elementary analogies — that king is to queen as boy is to girl, for example — even though the program cannot define those words (a remarkable feat given that such analogies were part of how SAT exams assessed performance).
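The geometry behind those analogies can be illustrated in a few lines of Python. The four-dimensional vectors below are made-up toy values, not real word2vec embeddings (which have 50 to 300 dimensions learned from large text corpora), but they show how synonym similarity and analogy arithmetic work.

```python
import numpy as np

# Toy "word embeddings" with invented values, for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "boy":   np.array([0.2, 0.8, 0.1, 0.7]),
    "girl":  np.array([0.2, 0.1, 0.8, 0.7]),
    "money": np.array([0.1, 0.5, 0.4, 0.9]),
    "cash":  np.array([0.1, 0.4, 0.4, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synonyms end up with very similar vectors.
print(round(cosine(vectors["money"], vectors["cash"]), 3))

# Analogies fall out of vector arithmetic: king - boy + girl lands near queen.
target = vectors["king"] - vectors["boy"] + vectors["girl"]
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # "queen" with these toy numbers
```

Real embeddings do the same thing in higher dimensions, with the numbers learned automatically from text rather than written by hand.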

Mikolov’s word embeddings were generated by what’s called a neural network with one hidden layer. Neural networks, which are loosely modeled on the human brain, have enabled stunning advances in machine learning, including AlphaGo (which learned to play the game of Go better than the world champion). Mikolov’s network was deliberately shallow, so it could be useful for a variety of tasks, such as translation and topic analysis.

Deeper neural networks, with more layers of “cortex,” can extract even more information about a word’s sentiment in the context of a particular sentence or document. A common reference task is for the computer to read a movie review on the Internet Movie Database and predict whether the reviewer gave it a thumbs up or thumbs down. The earliest lexicon methods achieved about 74 percent accuracy. The most sophisticated ones got up to 87 percent. The very first neural nets, in 2011, scored 89 percent. Today they perform with upwards of 94 percent accuracy — approaching that of a human. (Humor and sarcasm remain big stumbling blocks, because the written words may literally express the opposite of the intended sentiment.)
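For a sense of what that benchmark task looks like in code, here is a minimal sketch using scikit-learn (an assumption of this illustration); the handful of toy reviews below stand in for the real IMDb data. A bag-of-words logistic regression like this is far cruder than the lexicons and neural nets described above, but it shows the thumbs-up/thumbs-down prediction setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for movie reviews (1 = thumbs up, 0 = thumbs down).
reviews = [
    "a wonderful, moving film with excellent acting",
    "funny, charming and beautifully shot",
    "one of the best movies I have seen this year",
    "a dull, boring mess with terrible dialogue",
    "awful pacing and a predictable, ugly ending",
    "I hated every tedious minute of it",
]
labels = [1, 1, 1, 0, 0, 0]

# Turn each review into word counts, then fit a simple classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
model = LogisticRegression().fit(X, labels)

# Predict the sentiment of an unseen review.
test = vectorizer.transform(["an excellent and charming film"])
print(model.predict(test))  # [1] -> thumbs up, with this toy data
```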

Despite the benefits of neural networks, lexicon-based methods are still popular; the Hedonometer, for instance, uses a lexicon, and Danforth has no intention of changing it. While neural nets may be more accurate for some problems, they come at a cost. The training period alone is one of the most computationally intensive tasks you can ask a computer to do.

“Basically, you’re limited by how much electricity you have,” says the Wharton School’s Robert Stine, who covers the evolution of sentiment analysis in the 2019 Annual Review of Statistics and Its Application. “How much electricity did Google use to train AlphaGo? The joke I heard was, enough to boil the ocean,” Stine says.

In addition to the electricity needs, neural nets require expensive hardware and technical expertise, and there’s a lack of transparency because the computer is figuring out how to tackle the task, rather than following a programmer’s explicit instructions. “It’s easier to fix errors with a lexicon,” says Bing Liu of the University of Illinois at Chicago, one of the pioneers of sentiment analysis.

Measuring mental health

While sentiment analysis often falls under the purview of computer scientists, it has deep roots in psychology. In 1962, Harvard psychologist Philip Stone developed the General Inquirer, the first computerized general-purpose text analysis program for use in psychology; in the 1990s, social psychologist James Pennebaker developed an early program for sentiment analysis (the Linguistic Inquiry and Word Count) as a view into people’s psychological worlds. These earlier assessments revealed and confirmed patterns that experts had long observed: Patients diagnosed with depression had distinct writing styles, such as using the pronouns “I” and “me” more often. They used more words with negative affect, and sometimes more death-related words.

Researchers are now probing mental health’s expression in speech and writing by analyzing social media posts. Danforth and Harvard psychologist Andrew Reece, for example, analyzed Twitter posts written, with the participants’ consent, by people who later received formal diagnoses of depression or post-traumatic stress disorder. Signs of depression began to appear as many as nine months before the diagnosis. And Facebook has an algorithm to detect users who seem to be at risk of suicide; human experts review the cases and, if warranted, send the users prompts or helpline numbers.

Yet social network data is still a long way from being used in patient care. Privacy issues are of obvious concern. Plus, there’s still work to be done to show how useful these analyses are: Many studies assessing mental health fail to define their terms properly or don’t provide enough information to replicate the results, says Stevie Chancellor, an expert in human-centered computing at Northwestern University and coauthor of a recent review of 75 such studies. But she still believes that sentiment analysis could be useful for clinics, for example, when triaging a new patient. And even without personal data, sentiment analysis can identify trends such as the general stress level of college students during a pandemic, or the types of social media interactions that trigger relapses among people with eating disorders.

Reading the moods

Sentiment analysis is also addressing more lighthearted questions, such as weather’s effects on mood. In 2016, Nick Obradovich, now at the Max Planck Institute for Human Development in Berlin, analyzed some 2 billion posts from Facebook and 1 billion posts from Twitter. An inch of rain lowered people’s expressed happiness by about 1 percent. Below-freezing temperatures lowered it by about twice that amount. In a follow-up — and more disheartening — study, Obradovich and colleagues looked to Twitter to understand feelings about climate change. They found that after about five years of increased heat, Twitter users’ sense of “normal” changed and they no longer tweeted about a heat wave. Nevertheless, users’ sense of well-being was still affected, the data show. “It’s like boiling a frog,” Obradovich says. “That was one of the more troubling empirical findings of any paper I’ve ever done.”

Monday’s reputation as the worst day of the week was also ripe for investigation. Although “Monday” is the weekday name that elicits the most negative reactions, Tuesday was actually the day when people were saddest, an early analysis of tweets by Danforth’s Hedonometer found. Friday and Saturday, of course, were the happiest days. But the weekly pattern changed after the 2016 US presidential election. While there’s probably still a weekly signal, “Superimposed on it are events that capture our attention and are talked about more than the basics of life,” says Danforth. Translation: On Twitter, politics never stops. “Any day of the week can be the saddest,” he says.

Another truism put to the test is that in music, major chords are perceived as happier than minor chords. Yong-Yeol Ahn, an expert in computational social science at Indiana University, tested this notion by analyzing the sentiment of the lyrics that accompany each chord of 123,000 songs. Major chords indeed were associated with happier words, scoring 6.3 compared with 6.2 for minor chords on a 1-to-9 scale. Though the difference looks small, it is about half the difference in sentiment between Christmas and a normal weekday on the Hedonometer. Ahn also compared genres and found that 1960s rock was the happiest; heavy metal was the most negative.

Business acumen

The business world is also taking up the tool. Sentiment analysis is becoming widely used by companies, but many don’t talk about it, so precisely gauging its popularity is hard. “Everyone is doing it: Microsoft, Google, Amazon, everyone. Some of them have multiple research groups,” Liu says. One readily accessible measure of interest is the sheer number of commercial and academic sentiment analysis software programs that are publicly available: A 2018 benchmark comparison detailed 28 such programs.

Some companies use sentiment analysis to understand what their customers are saying on social media. As a possibly apocryphal example, Expedia Canada ran a marketing campaign in 2013 that went viral in the wrong way, because people hated the screechy background violin music. Expedia quickly replaced the annoying commercial with new videos that made fun of the old one — for example, they invited a disgruntled Twitter user to smash the violin. It is frequently claimed that Expedia was alerted to the social media backlash by sentiment analysis. While this is hard to confirm, it is certainly the sort of thing that sentiment analysis could do.

Other companies use sentiment analysis to keep track of employee satisfaction, say, by monitoring intra-company social networks. IBM, for example, developed a program called Social Pulse that monitored the company’s intranet to see what employees were complaining about. For privacy reasons, the software only looked at posts that were shared with the entire company. Even so, this trend bothers Danforth, who says, “My concern would be the privacy of the employees not being commensurate with the bottom line of the company. It’s an ethically sketchy thing to be doing.”

It’s likely that ethics will continue to be an issue as sentiment analysis becomes more common. And companies, mental health professionals and any other field considering its use should keep in mind that while sentiment analysis is endlessly promising, delivering on that promise can still be fraught. The mathematics that underlies the analyses is the easy part. The hard part is understanding humans. As Liu says, “We don’t even understand what is understanding.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

A summary of “Quantum-Matter Heterostructures” by H. Boschker and J. Mannhart that appears in the 2017 Annual Review of Condensed Matter Physics

For decades, physicists have appreciated the power of sandwiching layers of different substances to create materials with novel properties. Such sandwiches, called heterostructures because they combine layers of dissimilar materials, are essential components in a wide range of modern technologies. Heterostructure materials are used in various products relying on transistors, from supercomputers to cell phones, and are crucial materials for such devices as electronic sensors and solar cells.

Nature provides about 90 different kinds of atoms (the naturally occurring chemical elements) and a huge number of compounds made from them. But even the vast range of natural substances does not always offer the precise combination of properties needed for a specific technological task. Heterostructures’ usefulness stems from their ability to exhibit properties and perform feats that homogeneous substances — the elements and compounds provided by ordinary chemistry — cannot achieve on their own.

Magnetic, electric, optical and electronic properties of a substance generally depend on its arrangement of atoms and their electrons. Layering materials into sandwiches creates novel atomic positions and electron arrangements that engineers can exploit for technological purposes.

Traditionally, heterostructure materials have been built from layers of standard insulators, semiconductors and metals composed of such elements as silicon, gallium and aluminum. But in recent years physicists have turned to a new heterostructure strategy: building sandwiches that incorporate layers of “quantum matter.”

The quantum arena

Quantum matter heterostructures are opening a new arena of solid state physics, Hans Boschker and Jochen Mannhart write in the 2017 Annual Review of Condensed Matter Physics. “Unprecedented effects” can be achieved by stacking layers of quantum matter, they write, and “the phenomena thus induced are unforeseeable in their breadth and complexity.”

“Quantum matter” might seem at first a redundant label — all matter is ultimately “quantum,” composed of particles that form atoms and molecules obeying the bizarre rules of quantum mechanics. But physicists restrict the term “quantum matter” to substances in which odd quantum effects are observable on large scales. One such quantum effect has been known since ancient times: magnetism. Elements such as iron and nickel are examples of quantum matter because their magnetic ability depends on quantum properties of the electrons orbiting their atomic nuclei.

In the twentieth century, scientists found other observable properties rooted in quantum effects, such as superconductivity and superfluidity. Superconductors transmit electric current without resistance; superfluids exhibit weird flowing abilities. Both are inexplicable without invoking quantum processes. Usually such phenomena are observed only at very low temperatures — near absolute zero — but some materials, such as cuprate oxide ceramics, superconduct in the relatively balmy realm of temperatures above the boiling point of liquid nitrogen (77 kelvins, or minus 321 degrees Fahrenheit). Such quantum matter materials provide attractive candidates for making quantum heterostructure sandwiches.

Enthusiasm for quantum matter heterostructures, Boschker and Mannhart point out, has been accompanied by progress in enlisting more of the elements in the periodic table.

“We discern a trend toward growing multilayers using elements from throughout the periodic table, opening up an ever increasing choice of material combinations and stacking sequences,” they write. “This expansion of the material space is a grandiose, singular undertaking.… It provides new degrees of freedom and a toolset to tailor and create materials, phases, effects, and functionalities that nature would not make on her own.”

Researchers expect quantum matter heterostructures to have numerous intriguing applications. No doubt many such applications will be surprises, unforeseeable today. But already some structures are being devised to build improved transistors for various electronic uses or to make more effective catalysts. Transistors incorporating quantum heterostructures could pack more power into ever smaller devices. Enhanced quantum matter catalysts could be useful for energy storage and conversion, such as splitting water molecules to make hydrogen fuel. Multilayered structures based on the high-temperature superconducting cuprates are being designed for use in electric power transmission cables.

Manufacturing challenges

Achieving the theoretical potential of quantum matter heterostructures will require advances in the technologies used to build them. Refined methods will also be needed to test the new materials and to predict what combinations of layers will be likely to possess particular properties.

Making the most of heterostructures will require improvements in precision manufacturing on the atomic scale. In some cases, the slightest defect would ruin a structure’s usefulness, and with the complexity of heterostructure components, many different types of defects can occur. Better tools are needed to identify defects, Boschker and Mannhart write, and to determine which of a material’s properties are intrinsic to its composition and which are side effects of defects. In some cases, it may be that the very presence of defects, such as a missing atom in a particular spot, is just what’s needed to make the material work. Of course, even the tools used to measure a heterostructure’s features might alter its properties in the measuring process.

“In our opinion, the effects of defects and analytical tools on sample properties pose considerable challenges to the development of the field,” Boschker and Mannhart write. “Real-life heterostructures usually do not match the idealized Lego-like structures envisioned during design.” In fact, another major challenge is improving the match between theorists’ predictions of how a heterostructure will behave and its actual experimental performance.

All in all, Boschker and Mannhart expect that efforts to build quantum matter heterostructures will benefit from a series of advances that are “likely to happen” — including a wider range of higher-quality substrates on which to deposit layers of atoms, along with better methods of controlling and monitoring the growth process as layers are added.

“Merely by the vast possibilities and surprising discoveries that will be made, the exploration of quantum matter heterostructures will continue to be a burgeoning, highly rewarding field of science for decades to come,” Boschker and Mannhart write. “The field is veritably exploding in width and depth. Science has just scratched the surface of an enormously fertile ground of great application potential and unimaginable limits.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Sudan crisis explained: What’s behind the latest fighting and how it fits nation’s troubled past

Sudan army soldiers are fighting a rival paramilitary group. AFP via Getty Images
Christopher Tounsel, University of Washington

Days of violence in Sudan have resulted in the deaths of at least 180 people, with many more left wounded.

The fighting represents the latest crisis in the North African nation, which has contended with numerous coups and periods of civil strife since becoming independent in 1956.

The Conversation asked Christopher Tounsel, a Sudan specialist and interim director of the University of Washington’s African Studies Program, to explain the reasons behind the violence and what it means for the chances of democracy being restored in Sudan.

What is going on in Sudan?

It all revolves around infighting between two rival groups: the Sudanese army and a paramilitary group known as the RSF, or Rapid Support Forces.

Since a coup in the country in 2021, which ended a transitional government put in place after the fall of longtime dictator Omar al-Bashir two years earlier, Sudan has been run by the army, with coup leader General Abdel-Fattah Burhan as de facto ruler.

The RSF, led by General Mohammed Hamdan Dagalo – who is generally known by the name Hemedti – has worked alongside the Sudanese army to help keep the military in power.

Following Bashir’s ouster, the political transition was supposed to result in elections by the end of 2023, with Burhan promising a transition to civilian rule. But it appears that neither Burhan nor Dagalo has any intention of relinquishing power. Moreover, they are locked in a power struggle that turned violent on April 15, 2023.

Since then, members of the RSF and the Sudanese army have engaged in gunfights in the capital, Khartoum, as well as elsewhere in the country. Over the course of three days, the violence has spiraled.

The recent background to the violence was a disagreement over how RSF paramilitaries should be incorporated into the Sudanese army. Tensions boiled over after the RSF started deploying members around the country and in Khartoum without the express permission of the army.

But in reality, the violence has been brewing for a while in Sudan, with concern over the RSF seeking to control more of the country’s economic assets, notably its gold mines.

The developments in Sudan over the last few days are not good for the stability of the nation or its prospects for any transition to democratic rule.

Who are the two men at the center of the dispute?

Dagalo rose to power within the RSF beginning in the early 2000s, when he was at the head of the militia known as the Janjaweed – a group responsible for human rights atrocities in the Darfur region.

While then-Sudanese President Bashir was the face of the violence against people in Darfur – and was later indicted for crimes against humanity by the International Criminal Court – the Janjaweed is also held responsible by the ICC for alleged acts of genocide. As those atrocities unfolded, Dagalo was rising up the ranks.

As head of the RSF, Dagalo has faced accusations of overseeing the bloody crackdown on pro-democracy activists, including the massacre of 120 protesters in 2019.

The actions of Burhan, similarly, have seen the military leader heavily criticized by human rights groups. As the head of the army in power and the country’s de facto head of government for the last two years, he oversaw a crackdown on pro-democracy activists.

One can certainly interpret both men to be obstacles to any chance of Sudan transitioning to civilian democracy. But this is first and foremost a personal power struggle.

To use an African proverb, “When the elephants fight, it is the grass that gets trampled.”

So this is about power rather than ideology?

In my opinion, very much so.

We are not talking about two men, or factions, with ideological differences over the future direction of the country. This cannot be framed as a left-wing versus right-wing thing, or about warring political parties. Nor is this a geo-religious conflict – pitting a majority Muslim North against a Christian South. And it isn’t racialized violence in the same way that the Darfur conflict was, with the self-identified Arab Janjaweed killing Black people.

Some observers are interpreting what is happening in Sudan – correctly, in my opinion – as a battle between two men who are desperate not to be ejected from the corridors of power by means of a transition to an elected government.

How does the violence fit Sudan’s troubled past?

One thing that is concerning about the longer dynamics at play in Sudan is that the violence now forms part of a history that fits the trope of the “failed African nation.”

Sudan has, to my knowledge, had more coups than any other African nation. Since gaining independence from the U.K. in 1956, there have been coups in 1958, 1969, 1985, 1989, 2019 and 2021.

The coup in 1989 brought Bashir to power for a three-decade run as dictator during which the Sudanese people suffered from the typical excesses of autocratic rule – secret police, repression of the opposition, corruption.

When Bashir was deposed in 2019, it was shocking to many observers – myself included – who assumed he would die in power, or that his rule would end only by assassination.

But any hopes that the end of Bashir would mean democratic rule were short-lived. Two years after his ouster – when elections were due to be held – the army decided to take power for itself, claiming it was stepping in to avert a civil war.

As striking as the recent violence is now, in many ways what is playing out is not unusual in the context of Sudan’s history.

The army has long been at the center of political transitions in Sudan. And resistance to civilian rule has been more or less the norm since independence in 1956.

Is there a danger the violence will escalate?

A coalition of civilian groups in the country has called for an immediate halt to the violence – as has the U.S. and other international observers. But with both factions dug in, that seems unlikely. Similarly, the prospect of free and fair elections in Sudan seems some ways off.

There doesn’t appear to be an easy route to a short-term solution, and what makes it tougher is that you have two powerful men, both with a military at their disposal, fighting each other for power that neither seems prepared to relinquish.

The concern is that the fighting might escalate and destabilize the region, jeopardizing Sudan’s relations with its neighbors. Chad, which borders Sudan to the west, has already closed its border with Sudan. Meanwhile, a couple of Egyptian soldiers were captured in northern Sudan while violence was happening in Khartoum. Ethiopia, Sudan’s neighbor to the east, is still reeling from a two-year war in the Tigray region. And the spread of unrest in Sudan will be a concern to those watching an uneasy peace deal in South Sudan – which gained independence from Sudan in 2011 and has been beset by ethnic fighting ever since.

As such, the stakes in the current unrest could go beyond the immediate future of Burhan, Dagalo and even the Sudanese nation. The stability of the region could also be put at risk.

Christopher Tounsel, Associate Professor of History, University of Washington

This article is republished from The Conversation under a Creative Commons license. 

4 Steps You Can Take to Control Your Asthma

Did you know that asthma affects 1 in 13 people in the United States (U.S.)? Asthma is a long-term condition that can make it harder for you to breathe because the airways of your lungs become inflamed and narrow. If you have the disease – or think you do – don’t tough it out. While there’s no cure for asthma, it can usually be managed by taking a few key steps that can help you live a full and active life.

Here are some important facts to know first:

  • Asthma affects some communities more than others. Black people and American Indian/Alaska Native people have the highest asthma rates of any racial or ethnic group, according to the Centers for Disease Control and Prevention (CDC). In fact, Black people are over 40% more likely to have asthma than white people.
  • Asthma rates vary within some communities. For example, Puerto Rican Americans have twice the asthma rate of the overall U.S. Hispanic/Latino population.
  • Some groups are more likely to have serious consequences from asthma. The CDC found Black people are almost four times more likely to be hospitalized because of their asthma than white people.
  • Almost twice as many women as men have asthma.

Talk to a health care provider. You can work with a health care provider to set up an asthma action plan. This plan explains how to manage your asthma, what medicines to take and when, and what to do if your symptoms get worse. It also tells you what to do in an emergency.

Know and track your asthma symptoms. Are you experiencing symptoms such as coughing, wheezing, chest tightness or shortness of breath? Tell a health care provider about them and make sure to keep track of any changes. That way you and the provider can know if your treatment plan is working.

Identify and manage your triggers. Some common asthma triggers include dust, mold, pollen, pests like cockroaches or rodents and pet hair. The asthma action plan can help you figure out what triggers make your asthma worse and how to manage them.

Avoid cigarette smoke. If you smoke, talk to a health care provider about ways to help you quit. If you have loved ones who smoke, ask them to quit. Do your best to avoid smoke in shared indoor spaces, including your home and car.

Asthma doesn’t have to stop you from leading a full and active life. Find out more about asthma and how to manage it from NHLBI’s Learn More Breathe Better® program at nhlbi.nih.gov/breathebetter.
  

SOURCE:
National Heart, Lung, and Blood Institute

A Comforting Casserole

(Culinary.net) Almost nothing says comfort food quite like a freshly baked casserole. Next time your family asks for a warm, comforting meal, try this Rotisserie Chicken-Biscuit Casserole with just a handful of ingredients and less than 15 minutes of cook time.

Find more comfort food recipes at Culinary.net.

Watch video to see how to make this recipe!

Rotisserie Chicken-Biscuit Casserole

  • 1 whole rotisserie chicken
  • 8 refrigerated biscuits
  • 1 can (10 3/4 ounces) cream of mushroom soup
  • 1/2 cup milk
  • 1/4 cup sour cream
  • 2 cups frozen vegetables
  • 1/2 teaspoon dried basil
  • 1/8 teaspoon pepper
  1. Heat oven to 450° F.
  2. Remove meat from rotisserie chicken and shred; set aside. Discard bones.
  3. Cut biscuits into quarters; set aside.
  4. In saucepan, stir soup, milk, sour cream, chicken, vegetables, basil and pepper. Cook until boiling.
  5. Spoon chicken mixture into baking dish. Arrange quartered biscuits over filling.
  6. Bake 10-12 minutes, or until biscuits are golden brown.
SOURCE:
Culinary.net

How green are biofuels? Scientists are at loggerheads

Replacing gasoline with ethanol has changed landscapes across the globe as grasslands and forests give way to cornfields. Researchers are deeply divided over what this means for the planet. Here's the science behind the conflict.

Tyler Lark, a geographer at the University of Wisconsin-Madison, grew up among farms, working on a neighbor’s dairy, vaguely aware of the tension between clearing land to grow food and preserving nature. As an engineering student working on water projects in Haiti, he saw an extreme version of that conflict: forests cleared for firewood or to grow crops, producing soil erosion, environmental denudation and worsening poverty. “I think it was that experience that told me, ‘Hey, land use is important,’” he says.

He decided to study how farmers transform landscapes through their collective decisions to plow up grasslands, clear trees or drain wetlands — decisions that lie at the heart of some of the planet’s greatest environmental challenges, and also provoke controversy. Lark carries professional scars from recently stumbling into one of the fiercest of these fights: the debate over growing crops that are used to make fuel for cars and trucks.

About 15 years ago, government incentives helped to launch a biofuel boom in the United States. Ethanol factories now consume about 130 million metric tons of corn every year. It’s about a third of the country’s total corn harvest, and growing that corn requires more than 100,000 square kilometers of land. In addition, more than 4 million metric tons of soybean oil is turned into diesel fuel annually, and that number is growing fast.

Scientists have long warned that biofuel production on this scale involves costs: It claims land that otherwise could grow food or, alternatively, grass and trees that capture carbon from the air and provide a home for birds and other wildlife. But government agencies, relying on the results of economic models, concluded that those costs would be modest, and that replacing gasoline with ethanol or biodiesel would help to meet greenhouse gas reduction goals.

Lark and a group of colleagues recently jolted this debate back to life. In a February 2022 study, they concluded that the law that unleashed the ethanol boom persuaded farmers to plant corn on millions of acres of land that would otherwise have remained grassland. Environmentalists had long feared that biofuel production could lead to deforestation abroad; this paper showed a similar phenomenon happening within the United States.

That land conversion, the scientists concluded, would have released large amounts of carbon dioxide and other greenhouse gases into the air, making ethanol fuel every bit as bad for the climate as the gasoline it’s intended to replace.

Farmers and biofuel trade groups lashed out against these findings — and against Lark himself. A biofuel industry association demanded that he and one of his coauthors be blackballed from a government expert review panel on renewable fuels.

The dispute came at a moment when world events laid bare the tradeoffs of biofuels. Less than two weeks after Lark’s paper appeared, Russia invaded Ukraine, provoking a spike in prices for both food and fuel — which already had been scarce and expensive because of the pandemic. Biofuel supporters have called for incentives to blend more ethanol into gasoline in order to bring down gasoline prices. Anti-hunger advocates are demanding less biofuel production, in order to free up land to grow more food. And natural ecosystems continue to disappear. 

As the controversy roils on, a more technical debate among scientists and economists is simmering out of public view: How reliable are the economic models used to evaluate biofuels anyway? Their users defend them; others disagree. “The results coming out of these models are driven more by assumptions than by actual information,” says Stephanie Searle, an ecologist specializing in biofuel sustainability at the International Council on Clean Transportation (ICCT). She and others say that one influential model, in particular, adopts assumptions that whitewash the fuels’ environmental risks.

Optimism and early warnings

America’s biofuel boom launched in 2005 as Congress passed a law that created the Renewable Fuel Standard (RFS), which required sharp increases in the use of biofuels over the following decade. Congress increased those biofuel targets in 2007. Fuel companies could satisfy the law by mixing more ethanol into gasoline, or by supplementing standard diesel fuel with a version of diesel made from plant oil or animal fat.

The law rested on a foundation of mixed goals. Farmers wanted new markets for their crops. Others hoped that biofuels could be a homegrown, cleaner alternative to foreign oil. Biofuels were supposed to cut greenhouse gas emissions because the carbon contained in them is recycled: It had previously been captured from the air by growing the corn or soybeans to begin with. And even though the factories that turn corn into ethanol require lots of energy and typically burn fossil fuels, it was assumed there would still be a net climate benefit.

At the time, “you could easily envision an incredibly optimistic view” of the future, says Sivan Kartha, an environmental scientist with the Stockholm Environment Institute. Bioenergy supporters promised fuels made from plants that were similar to those in native ecosystems, delivering the environmental benefits of grasslands, for instance, while simultaneously replacing fossil fuels.

Yet Kartha could also imagine a darker future, with profit-driven plantations of biofuel crops displacing native forests. He urged caution in an article published in the  Annual Review of Environment and Resources in 2007. “Bioenergy has the potential to contribute to sustainable development,” he wrote. But “the fulfillment of this potential cannot be presumed.”

As US ethanol production headed toward the RFS-mandated goal of roughly 15 billion gallons a year, scientists grew increasingly worried that the appetite for biofuel, added to rising demand for food, could consume vast amounts of land. “It got us thinking about what the consequences might be, for the climate,” says Jason Hill, an environmental scientist at the University of Minnesota. In 2010, Hill and coauthors wrote in the Annual Review of Ecology, Evolution, and Systematics  that “the largest ecological impact of biofuel production may well come from … land-use change.”

Scientists have been trying to measure that impact ever since, but it’s surprisingly difficult. New ethanol factories don’t clear land directly. They merely buy corn. Those purchases, however, can drive up corn prices and persuade farmers to expand their fields in pursuit of profits.

And the impact of ethanol production can easily be lost amid many other factors affecting the price of corn, including weather disasters and demand from cattle feedlots and dairy farmers. “You can’t go out on the landscape and say, ‘This parcel was converted 100 percent due to this policy,’” says Lark.

So, in their search for biofuel’s fingerprints, researchers have turned to computer simulations of the global economy, such as one created by the Global Trade Analysis Project at Purdue University. GTAP-BIO, as it’s called, has been specifically adapted to study biofuels and their effect on land. Some government agencies — notably, the California Air Resources Board — rely on it to calculate the “carbon intensity,” or climate impact, of biofuels.

GTAP-BIO is like a giant spreadsheet of the world economy. It contains data on production and consumption of goods and services across the entire globe, along with assumptions about the mathematical relationships between them — between, for instance, the area of land devoted to growing corn and how it is used.

In this simulated world, researchers can change just one element, such as corn demand from new ethanol factories, and watch the model calculate the cascade of consequences. They can create alternate versions of history, such as one in which the ethanol boom didn’t happen, and see whether farmers still expanded their cornfields. They can also use it to predict what will happen if biofuel production expands in the future.

Over the past decade, refinements of the GTAP-BIO model have delivered increasingly reassuring verdicts. They find that biofuel production induces only a modest amount of land-clearing. When ethanol factories expand, they do bid up the price of corn, but then the world adjusts. Other buyers of corn, such as cattle feedlots, cut back on their purchases. Farmers find ways to boost crop yields, perhaps by investing in better seeds or more effective weed control. This all reduces the need for additional land.

In addition, even when US farmers do expand their cornfields, GTAP-BIO shows them often claiming marginal land called cropland-pasture, so named because farmers use it for either purpose, depending on circumstances or economic conditions. In the model, this land lacks the carbon-rich soil of native prairie, accumulated from many generations of deep-rooted grasses. When you dig it up to plant corn, very little carbon dioxide is released into the air. 

Yet several of these assumptions have come in for harsh criticism. Chris Malins, a UK-based mathematician who has worked as a consultant on biofuels for environmental groups and the European Commission, says the GTAP-BIO team’s work exhibits a pro-biofuel bias. He says they readily adopt assumptions that produce lower estimates of greenhouse gas emissions from biofuels, while challenging evidence that would move its calculations in the opposite direction. As a result, GTAP-BIO has made ethanol look better and better over the past decade, Malins says. 

A prime example, he says, is GTAP-BIO’s conclusion that cropland-pasture releases relatively little carbon when it’s converted to cornfields. One version of the model, in fact, calculates that converting this land actually tends to capture carbon dioxide from the atmosphere, rather than releasing more of it. In a study published in 2020, Malins and two coauthors wrote that this result rests on a “bizarre” assumption that the land had already been used to grow crops for several decades before switching to corn for ethanol. In reality, Malins and other scientists say, much of this land previously had been covered in grasses for many years and had relatively carbon-rich soil. 

GTAP-BIO’s critics also doubt that farmers actually boost their yields of corn in response to higher prices. Yields have indeed increased steadily, researchers say, but not because prices went up. They’ve increased during periods of low prices and high prices alike.

Richard Plevin, a biofuel expert now retired from the University of California, Berkeley, says that GTAP-BIO also ignores the reality of land-grabbing and deforestation in countries like Brazil. The model classifies large areas of natural forest as “inaccessible” — and assumes that this land, by definition, cannot be converted into cropland. This assumption also results in low estimates of deforestation and carbon emissions.

Farzad Taheripour, an agricultural economist at Purdue and a key member of the GTAP-BIO team, rejects these criticisms out of hand. The assumptions in the model, he says, are based on the best evidence that the team can find, and nobody is trying to make biofuels look more climate-friendly than they really are. “All the changes,” he says, “are based on facts.” 

Taheripour adds that history validates the model: Thanks to steady increases in crop yields, farmers have been able to satisfy demand for both food and fuel without destroying natural ecosystems, at least within the United States. “That’s the lesson of the past 15 years,” he says. “We produced more food, we produced more biodiesel, more ethanol. We eat more meat. Where are those coming from? From yield improvement. The only significant land conversion in the United States has been conversion of unused cropland to cropland. So, then, why do I have to be worried?”

There’s little dispute that in the US, the ethanol boom has mainly affected land that was farmed at some time in the past, and that higher-yielding crops have helped to meet the growing demand for fuel. But that’s not the end of the argument. There’s another question, one that Lark and his colleagues also explored: If ethanol factories had not claimed the expanding harvest of corn, what other benefits might that land have delivered?

The changing landscape

In his office at the University of Wisconsin-Madison’s Center for Sustainability and the Global Environment, Lark brings up images of agricultural land on his computer screen and zooms in on a small river winding through several square kilometers of grassland in South Dakota.

This land could have been a wheat field in 1932, when the footprint of American agriculture reached its peak, with  375 million acres planted in crops. But at some point, its owners let the grass grow again, perhaps to graze cattle.

They weren’t alone. Following the epic disasters of the Dust Bowl and Great Depression, areas of cropland in the US shrank by 22 percent. Cropland almost returned to its all-time peak in 1981, then fell again by 13 percent, in fits and starts, for two-and-a-half decades — until 2007, when Congress approved the final version of the Renewable Fuel Standard. At that point, the area of cropland stabilized.

The photo Lark is examining was taken about a decade ago. With the aid of Google Earth, he does a bit of time travel, scrolling forward through images captured in later years. As he scrolls, much of the grassland disappears, replaced by fields of corn or soybeans. “It looks like, here, 2012, still in grass; 2014, pretty clearly eaten up into the surrounding fields,” he says.

South Dakota was a hot spot of land conversion during those years, but people noticed similar trends across other parts of the Midwest, and they wondered why. “We always got asked, ‘What portion of this is due to biofuels?’” Lark says. “It’s a really tough question. We never really had a good answer.” The National Wildlife Federation gave him a grant to find that answer.

Lark and his team of economists and soil experts sidestepped global economic models with their complicated assumptions. They started with what Lark knew from his previous work — actual shifts in land use during the years when ethanol production was expanding. They then used a simple model of supply and demand for major crops to describe what might have happened if the Renewable Fuel Standard had never become law.
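The study itself rests on detailed crop supply-and-demand estimates, but the core counterfactual logic can be sketched in a few lines of Python. Everything in this snippet (the function name, the yield figure, the share of new demand absorbed without planting new land) is an illustrative assumption, not a number from Lark's paper or from GTAP-BIO.

```python
# Toy counterfactual: how many extra acres might added ethanol demand imply?
# All values are illustrative assumptions, not figures from Lark et al.

def extra_cropland_acres(extra_corn_bushels: float,
                         yield_bushels_per_acre: float = 175.0,
                         demand_offset_share: float = 0.3) -> float:
    """Acres newly planted after other buyers and yield gains absorb some demand.

    demand_offset_share is the fraction of new ethanol demand met without
    planting new land (other buyers cut back, yields improve, and so on).
    """
    net_bushels = extra_corn_bushels * (1.0 - demand_offset_share)
    return net_bushels / yield_bushels_per_acre

if __name__ == "__main__":
    # Suppose ethanol plants demand an extra 1 billion bushels per year.
    print(f"{extra_cropland_acres(1e9):,.0f} additional acres")  # 4,000,000 with these toy numbers
```

Much of the dispute described below comes down to what values such parameters should take, and how much carbon each newly converted acre actually releases.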

Part of their answer was unsurprising. Without the ethanol boom, the pre-2007 trend in land use would have continued. More land — 5 million acres — would have remained in grass between 2008 and 2016, rather than being converted to grow crops.

The attention-grabbing part was their estimate of the change in greenhouse gas emissions for the path that was actually taken. In contrast to GTAP-BIO, they found that many of the newly expanded cornfields contained soil rich in carbon because it had been grassland for a decade or more. Tilling and fertilizing that additional land released a burst of carbon dioxide and nitrous oxide — so much, in fact, that ethanol produced from that corn was just as bad for the climate as gasoline, and likely more than 20 percent worse.

When the paper appeared in the  Proceedings of the National Academy of Sciences, the decade-old battle over biofuels erupted anew. Taheripour, joined by other scientists, posted a  critique of the paper online, slamming its methodology and arguing that it systematically overestimated carbon emissions from land conversion. Industry groups cited that criticism in their own attacks. The Renewable Fuels Association called Lark’s study a “hit piece” on its industry and  asked the Environmental Protection Agency to exclude Lark from a review panel on biofuels that the agency was organizing because his work “suffered from known flaws and inaccuracies.”

When Lark and his coauthors responded, defending their methods and conclusions, Taheripour’s group rebutted with an even harsher  35-page critique.

Much of the dispute involves technical issues involved in calculating carbon emissions from land conversion. But Lark and Taheripour also have deeper differences, rooted in different priorities for the country’s land.

Taheripour warns of a return to the years before the biofuel boom, when US farmers were plagued by a glut of grain, driving down prices. “There was no market for corn,” he says. “We started to produce biofuels to not throw away our crops into the ocean.”

If ethanol production plants weren’t there to buy corn, he says, farmers would have to idle some of their land — and idle land, he says, “doesn’t have any value.”

But the counterfactual scenario in Lark’s paper — the path not taken — implicitly makes a different point. If land is freed from the need to supply ethanol plants, it can deliver vital environmental benefits. Grasslands can capture carbon from the atmosphere and store it in the soil, a kind of natural climate solution that also cleans up waterways and provides habitat for birds, pollinators and other wildlife. Such solutions are a crucial part of many scenarios for reaching net zero emissions goals.

The hard part — and Lark and Taheripour agree on this point — is figuring out ways to measure those environmental benefits and pay landowners for them, just as they get paid for growing corn. To some extent, the US Department of Agriculture does this already, with programs that pay farmers to preserve areas of grassland or forest. Such initiatives are set to expand; the Inflation Reduction Act, which Congress passed in August, gives them an extra $18 billion in funding.

A grass that’s greener

There is one version of biofuel that both Lark and Taheripour would welcome: energy from perennial vegetation such as native prairie grasses. The grass could be harvested, leaving the roots to grow undisturbed, building up carbon-rich organic matter in the soil and avoiding most of the environmental damage that results from converting land into cornfields. That harvested cellulosic biomass could be fermented to produce ethanol or simply burned in power plants. “You’d have all these environmental benefits of reduced runoff, improved water quality, providing some wildlife habitat, and still be able to harvest that and use it for bioenergy,” says Lark.

Biofuel enthusiasts have dreamed of such fuels for decades, and research on them continues, including at the Great Lakes Bioenergy Research Center, right down the street from Lark’s office. So far, though, they haven’t been commercially successful. Unlike starchy kernels of corn, stalks of grass have to go through additional stages of processing before ethanol-producing microbes can feed on them, and that’s expensive.

Instead, enthusiasm has shifted to another version of biofuel, called renewable diesel. It’s made in oil refineries that have been configured to process soybean or corn oil, or animal fats like tallow from beef slaughterhouses.

But unfortunately, renewable diesel doesn’t end the competition for land. If anything, it intensifies that conflict, because renewable diesel increasingly is manufactured directly from vegetable oils that might otherwise nourish people. Its use currently is rising more steeply than that of ethanol.

Production of renewable diesel is still relatively small, but it’s growing fast thanks to financial incentives from California’s Low Carbon Fuel Standard, the centerpiece of the state’s effort to cut greenhouse emissions from transportation.

California relies on Purdue University’s GTAP-BIO model to calculate the greenhouse gas emission scores for every type of biofuel produced at individual factories. The model typically gives good scores to renewable diesel — which means that companies earn lots of lucrative carbon credits for making it.

Stephanie Searle, from the ICCT, says those scores are far too favorable. The environmental impact of renewable diesel, she says, will be felt as far away as the forests of Indonesia. Renewable diesel refineries are bidding up the price of soybean oil, she says, and it’s pushing traditional users of that oil to buy palm oil instead.

This boost in demand for palm oil, in turn, could threaten Indonesia’s tropical forests — including areas of carbon-rich peat soils that release massive amounts of carbon dioxide when cultivated.

Production capacity of renewable diesel doubled in the past year. Together with other, similar, renewable biofuels, it has surpassed 2 billion gallons a year. It, and an earlier version of biomass-based diesel called biodiesel, now account for nearly a third of all diesel fuel sold in California. Canada and Oregon are implementing similar laws that will also boost demand.

Even more alarming, critics say, is that — unlike the Renewable Fuel Standard, which merely mandated a minimum amount of biofuel use — California’s incentives could drive an unchecked upward spiral in biofuel production. “It unintentionally supports this massive expansion of use of vegetable oils for renewable diesel,” Searle says.

It’s this possibility — that a blind quest for alternatives to fossil fuels could drive explosive growth in demand for biofuels — that worries Kartha, of the Stockholm Environment Institute. “Our appetite for energy, as we know, is pretty insatiable,” he says. Switching to electric cars will cut demand for ethanol, but there’s a new push to deploy biofuels in places where batteries struggle to do the job, such as aircraft, ships and long-haul trucks.

According to Kartha, the world’s croplands, which have claimed vast ecosystems, cover less than half an acre per person on the planet. Producing enough biofuel to power one typical passenger car, meanwhile, requires more than 1.2 acres. (Photovoltaic solar arrays produce many times more usable energy per acre of land than biofuels, and can also be located in dry areas that can’t grow food.)

It’s clear, Kartha says, that relying on crops to fuel the world’s cars would massively multiply the demand for fertile land — with potentially disastrous consequences for those who depend on that land to survive.

It is also becoming clearer to the scientists who’ve been debating biofuels that they’ll never resolve their differences on the exact effects of biofuel production on greenhouse emissions. “It’s a very polarized question,” says Madhu Khanna, an agricultural economist at the University of Illinois at Urbana-Champaign who coauthored the critiques of Lark’s paper. For some, she says, concerns will remain, “no matter what the evidence is.”

Searle, for her part, says attempts to fine-tune economic models and calculate the impacts of biofuels are “an exercise in futility” and she thinks that governments should stop relying so heavily on models to calculate economic incentives for biofuels. Instead, they should limit production to a level that won’t provoke more destructive land-clearing. Searle and her colleagues are calling on California to put a cap on the amount of plant-based oil that can be legally processed into fuel. “Maybe it could be something like current usage, increasing very slightly over time,” she says. “Just find some way to limit the explosive growth.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

How ChatGPT robs students of motivation to write and think for themselves

AI writing tools may carry hidden dangers that harm the creative process. Guillaume via Getty Images
Naomi S. Baron, American University

When the company OpenAI launched its new artificial intelligence program, ChatGPT, in late 2022, educators began to worry. ChatGPT could generate text that seemed like a human wrote it. How could teachers detect whether students were using language generated by an AI chatbot to cheat on a writing assignment?

As a linguist who studies the effects of technology on how people read, write and think, I believe there are other, equally pressing concerns besides cheating. These include whether AI, more generally, threatens student writing skills, the value of writing as a process, and the importance of seeing writing as a vehicle for thinking.

As part of the research for my new book on the effects of artificial intelligence on human writing, I surveyed young adults in the U.S. and Europe about a host of issues related to those effects. They reported a litany of concerns about how AI tools can undermine what they do as writers. However, as I note in my book, these concerns have been a long time in the making.

Users see negative effects

Tools like ChatGPT are only the latest in a progression of AI programs for editing or generating text. In fact, the potential for AI to undermine both writing skills and the motivation to do your own composing has been decades in the making.

Spellcheck and now sophisticated grammar and style programs like Grammarly and Microsoft Editor are among the most widely known AI-driven editing tools. Besides correcting spelling and punctuation, they identify grammar issues as well as offer alternative wording.

AI text-generation developments have included autocomplete for online searches and predictive texting. Enter “Was Rome” into a Google search and you’re given a list of choices like “Was Rome built in a day.” Type “ple” into a text message and you’re offered “please” and “plenty.” These tools inject themselves into our writing endeavors without being invited, incessantly asking us to follow their suggestions.
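For readers curious about the mechanics, predictive text in its simplest form is just ranked prefix matching against a frequency-weighted vocabulary. The sketch below is a toy illustration in Python, not how Google Search or phone keyboards actually work (real systems rely on large statistical language models), and the tiny word list and frequency counts are invented for the example.

```python
# Toy prefix-based word completion: suggest the most frequent vocabulary
# entries that begin with whatever the user has typed so far.
# The vocabulary and frequency counts are invented for illustration only.
VOCAB = {
    "please": 900,
    "plenty": 450,
    "pleasant": 120,
    "plead": 40,
}

def suggest(prefix: str, k: int = 2) -> list[str]:
    """Return the k most frequent words starting with `prefix`."""
    matches = [w for w in VOCAB if w.startswith(prefix.lower())]
    return sorted(matches, key=lambda w: VOCAB[w], reverse=True)[:k]

print(suggest("ple"))  # ['please', 'plenty'], echoing the example above
```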

Young adults in my surveys appreciated AI assistance with spelling and word completion, but they also spoke of negative effects. One survey participant said that “At some point, if you depend on a predictive text [program], you’re going to lose your spelling abilities.” Another observed that “Spellcheck and AI software … can … be used by people who want to take an easier way out.”

One respondent mentioned laziness when relying on predictive texting: “It’s OK when I am feeling particularly lazy.”

Personal expression diminished

AI tools can also affect a person’s writing voice. One person in my survey said that with predictive texting, “[I] don’t feel I wrote it.”

A high school student in Britain echoed the same concern about individual writing style when describing Grammarly: “Grammarly can remove students’ artistic voice. … Rather than using their own unique style when writing, Grammarly can strip that away from students by suggesting severe changes to their work.”

In a similar vein, Evan Selinger, a philosopher, worried that predictive texting reduces the power of writing as a form of mental activity and personal expression.

“[B]y encouraging us not to think too deeply about our words, predictive technology may subtly change how we interact with each other,” Selinger wrote. “[W]e give others more algorithm and less of ourselves. … [A]utomation … can stop us thinking.”

In literate societies, writing has long been recognized as a way to help people think. Many people have quoted author Flannery O’Connor’s comment that “I write because I don’t know what I think until I read what I say.” A host of other accomplished writers, from William Faulkner to Joan Didion, have also voiced this sentiment. If AI text generation does our writing for us, we diminish opportunities to think out problems for ourselves.

One eerie consequence of using programs like ChatGPT to generate language is that the text is grammatically perfect. A finished product. It turns out that lack of errors is a sign that AI, not a human, probably wrote the words, since even accomplished writers and editors make mistakes. Human writing is a process. We question what we originally wrote, we rewrite, or sometimes start over entirely.

Challenges in schools

When undertaking school writing assignments, ideally there is ongoing dialogue between teacher and student: Discuss what the student wants to write about. Share and comment on initial drafts. Then it’s time for the student to rethink and revise. But this practice often doesn’t happen. Most teachers don’t have time to fill a collaborative editorial – and educational – role. Moreover, they might lack interest or the necessary skills, or both.

Conscientious students sometimes undertake aspects of the process themselves – as professional authors typically do. But the temptation to lean on editing and text generation tools like Grammarly and ChatGPT makes it all too easy for people to substitute ready-made technology results for opportunities to think and learn.

Educators are brainstorming how to make good use of AI writing technology. Some point up AI’s potential to kick-start thinking or to collaborate. Before the appearance of ChatGPT, an earlier version of the same underlying program, GPT-3, was licensed by commercial ventures such as Sudowrite. Users can enter a phrase or sentence and then ask the software to fill in more words, potentially stimulating the human writer’s creative juices.

A fading sense of ownership

Yet there’s a slippery slope between collaboration and encroachment. Writer Jennifer Lepp admits that as she increasingly relied on Sudowrite, the resulting text “didn’t feel like mine anymore. It was very uncomfortable to look back over what I wrote and not really feel connected to the words or ideas.”

Students are even less likely than seasoned writers to recognize where to draw the line between a writing assist and letting an AI text generator take over their content and style.

As the technology becomes more powerful and pervasive, I expect schools will strive to teach students about generative AI’s pros and cons. However, the lure of efficiency can make it hard to resist relying on AI to polish a writing assignment or do much of the writing for you. Spellcheck, grammar check and autocomplete programs have already paved the way.

Writing as a human process

I asked ChatGPT whether it was a threat to humans’ motivation to write. The bot’s response:

“There will always be a demand for creative, original content that requires the unique perspective and insight of a human writer.”

It continued: “[W]riting serves many purposes beyond just the creation of content, such as self-expression, communication, and personal growth, which can continue to motivate people to write even if certain types of writing can be automated.”

I was heartened to find the program seemingly acknowledged its own limitations.

My hope is that educators and students will as well. The purpose of making writing assignments must be more than submitting work for a grade. Crafting written work should be a journey, not just a destination.

Naomi S. Baron, Professor of Linguistics Emerita, American University

This article is republished from The Conversation under a Creative Commons license. 

Global shipping is under pressure to stop its heavy fuel oil use fast – that’s not simple, but changes are coming

Don Maier, University of Tennessee

Most of the clothing and gadgets you buy in stores today were once in shipping containers, sailing across the ocean. Ships carry over 80% of the world’s traded goods. But they have a problem – the majority of them burn heavy fuel oil, which is a driver of climate change.

While cargo ships’ engines have become more efficient over time, the industry is under growing pressure to eliminate its carbon footprint.

The European Union Parliament this year voted to require an 80% drop in shipping fuels’ greenhouse gas intensity by 2050 and to require shipping lines to pay for the greenhouse gases their ships release. The International Maritime Organization, the United Nations agency that regulates international shipping, also plans to strengthen its climate strategy this summer. The IMO’s current goal is to cut shipping emissions 50% by 2050. President Joe Biden said on April 20, 2023, that the U.S. would push for a new international goal of zero emissions by 2050 instead.

We asked maritime industry researcher Don Maier if the industry can meet those tougher targets.

Why is it so hard for shipping to transition away from fossil fuels?

Economics and the lifespan of ships are two primary reasons.

Most of the big shippers’ fleets are less than 20 years old, but even the newer builds don’t necessarily have the most advanced technology. It takes roughly a year and a half to come out with a new build of a ship, and it will still be based on technology from a few years ago. So, most of the engines still run on fossil fuel oil.

If companies do buy ships that run on alternative fuels, such as hydrogen, methanol and ammonia, they run into another challenge: There are only a few ports so far with the infrastructure to provide those fuels. Without a way to refuel at all the ports that a ship might use, companies will lose their return on investment, so they will keep using the same technology instead.

It isn’t necessarily that the maritime industry doesn’t want to go the direction of cleaner fuels. But their assets – their fleets – were purchased with a long lifespan in mind, and alternative fuels aren’t yet widely available.

Ships are being built that can run on liquefied natural gas (LNG) and methanol, and even hydrogen is coming online. Often these are dual-fuel – ships that can run on either alternative fuels or fossil fuels. But so far, not enough of this type of ship is being ordered for the costs to make financial sense for most builders or buyers.

The costs of alternative fuels, like methanol and hydrogen fuels made with renewable energy (as opposed to being made with natural gas), are also still significantly higher than fuel oil or LNG. But the good news is those costs are starting to decline, and they should drop further as production ramps up.

Can tougher regulations and carbon pricing effectively push the industry to change?

A little bit of pressure on the industry can be helpful, but too much, too fast can really make things more disruptive.

Like most industries, shipping lines want standardized rules they can count on not to change next year. Some of these companies have invested millions of dollars in new ships in recent years, and they’re now being told that those ships might not meet the new standards – even though the ships may be almost brand new.

Another concern with the EU’s moves is whether it has a grasp on all the “what if” scenarios. For example, if the EU has stricter rules than other countries, that affects which ships companies can use on European routes. Any vessels that they put on routes to Europe will have to meet those emissions standards. If there’s a greater demand for products in Europe, they may have fewer vessels they could use.

An interactive map created by London-based data visualization studio Kiln and the UCL Energy Institute shows where different ship types travel.

I do think the change will be coming soon in the industry, but changes have to make financial sense to the shipping lines and their customers, too.

Economists have estimated that the cost of cutting emissions 50% by 2050 is anywhere from US$1 trillion to, more realistically, over $3 trillion, and the cost of full decarbonization would be even higher. Many of those costs will be passed down to charterers, shippers and eventually consumers – meaning you and me.

Are there ways companies can cut emissions now while preparing to upgrade their fleets?

There are a number of options ship companies are using now to lower emissions.

One that has been used for at least 10 years is putting higher quality paint on the hulls, which reduces the friction between the hull and the water. With less friction, the engine isn’t working as hard, which reduces emissions.

Another is slowing down. If ships run at a higher speed, their engines work harder, which means they use more fuel and release more emissions. So shipping lines practice “slow steaming.” Most of the time, ships will slow down when they’re close to shore to reduce emissions that cause smog in port cities like Los Angeles. On the open ocean, they will go back to normal speed.
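To get a sense of why slowing down pays off so well, a widely cited engineering rule of thumb (not mentioned in this article, so treat it as an assumption) holds that a ship’s fuel burn per hour rises roughly with the cube of its speed. Because a slower voyage also takes longer, fuel burned over a fixed distance then scales roughly with the square of speed. A minimal sketch of that arithmetic, with illustrative speeds:

```python
# Rough estimate of per-voyage fuel (and hence CO2) savings from slow steaming,
# assuming the common cube-law rule of thumb: fuel burn per hour ~ speed**3.
# Over a fixed distance, voyage time ~ 1/speed, so fuel per voyage ~ speed**2.
# The speeds below are illustrative and not taken from the article.

def voyage_fuel_savings(normal_knots: float, slow_knots: float) -> float:
    """Fraction of fuel saved on a fixed route by steaming at slow_knots
    instead of normal_knots, under the cube-law assumption."""
    return 1.0 - (slow_knots / normal_knots) ** 2

saving = voyage_fuel_savings(normal_knots=24, slow_knots=18)
print(f"Slowing from 24 to 18 knots saves roughly {saving:.0%} of voyage fuel")
# Prints ~44%; the trade-off is a voyage that takes about a third longer.
```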

Workers at the Port of Long Beach, Calif., prepare to plug in a container ship.

Another option common in the U.S. and Europe is shutting down the ship’s engines while in port and plugging into the port’s electricity. It’s called “cold ironing.” It avoids burning the ship’s fuel at the dock, which improves local air quality. The Ports of Los Angeles and Long Beach, where smog from idling ships has been a health concern, have been a big driver of electrification. It’s also less expensive for shipping companies than burning their fuel while in port.

As simple as those may sound, they have made huge improvements in terms of emissions, but they aren’t enough on their own.

Will a higher goal set by the IMO be enough to pressure the industry to change?

I used to work in shipping, and I know the maritime industry is a very old-school industry with roots going back centuries. But in recent years it has invested millions in new ships with the most effective technology available.

When the IMO began requiring all ships using heavy fuel in global trade to shift to low-sulfur fuel, the industry pivoted to meet the rule, even though retrofits were costly and time consuming. Many shipping lines complied by installing “scrubbers,” which essentially filter the ship’s exhaust, and new ships were built to run on the low-sulfur fuel oil.

Now, the industry is being told the standards are changing again.

All industries want consistency so they can be confident investing in a new technology. The shipping lines will follow what the IMO says. They will push back, but they will still do it. That’s in part because the IMO supports the maritime industry, too.

Don Maier, Associate Professor of Business, University of Tennessee

This article is republished from The Conversation under a Creative Commons license.

5 things worth knowing about empathy

Empathy is a skill that can be learned and enhanced. But it has its limits, and can even promote conflict. Here’s what some experts say about how it works.

A tortoise lies on its back, legs waving in distress, until a second tortoise crawls up to turn it over. Millions have watched this scene on YouTube, with many leaving heartfelt comments. “Great sense of solidarity,” says one. “There is hope,” says another.

The viewers are responding to what many interpret as empathy — a sign that even in the animal world, life isn’t just dog-eat-dog. Alas, they’re probably wrong. As one reptile expert observed, the second tortoise’s motives were likely more sexual than sympathetic.

Consider it a cautionary tale for our times, in which politicians urge us to cultivate more empathy, and scientists churn out volumes of work on the subject, with more than 2,000 published papers in 2019 alone. For all its popularity, empathy isn’t nearly as simple as so many blogs and books make it seem. Researchers can’t even agree on what empathy means: one paper noted 43 different definitions, ranging from basic shared emotions to more lofty mixtures of concern and kindness.

Whatever definition we choose, do we really need more empathy? Knowable Magazine checked in with several experts to help elucidate this surprisingly elusive concept. Here are the top take-aways:

1) Empathy is primitive…

Evidence of the most basic sort of empathy — “emotional contagion,” or the sharing of another being’s emotions — has been found in many species, suggesting it’s innate in humans.

“We are biologically programmed to have empathy. It’s something we can’t suppress,” says Frans de Waal, a primatologist at Emory University in Atlanta.

Clues about empathy’s mechanisms emerged in the early 1990s, when Italian neuroscientists studying the brains of macaque monkeys discovered a class of brain cells that fired both when the monkeys moved and when they observed another monkey in motion. De Waal and other experts propose that these monkey-see, monkey-do “mirror neurons” may be part of the cellular basis of shared feelings. Some researchers have gone on to study whether mirror neuron dysfunction helps explain the social challenges of people with autistic spectrum disorders.

Abundant evidence exists for “emotional contagion” in animals. Rats that watch other rats suffer electric shocks show their shared fear by freezing in place. Rats will even forgo pressing a lever that dispenses a sugar pellet if pressing it means another rat gets shocked, in what scientists suggest is an effort to avoid that shared fear and pain. That vicarious sense of pain is evident in humans as well: Even newborn infants will cry reflexively on hearing another infant cry.

Empathy evolved because of all the ways it served our ancestors, de Waal argued in an article on the evolution of empathy in the 2008 Annual Review of Psychology. The ability to feel others’ feelings helps parents be more sensitive to the needs of their children, increasing the chance that their genes will endure. This basic sort of empathy also inspires us to take care of friends and relatives, encouraging cooperation that helps our tribe survive.

2) … but empathy isn’t automatic

Despite its deep and ancient roots, the quality of human empathy can vary, depending on the context.

Some studies have suggested that we get less skillful at empathy as adulthood progresses, note the German psychologists Michaela Riediger and Elisabeth Blanke, who explore this question in the 2020 Annual Review of Developmental Psychology. That may be because empathy demands cognitive skills such as paying attention, processing information and holding that information in memory, all resources that usually become scarcer with age. Older adults can perform equally well in those skills, however, when a topic of conversation is more relevant or pleasant for them — in other words, when they care more, which presumably increases their willingness to invest those resources.

In a 2013 study, researchers in Hong Kong tested 49 younger people (aged 15 to 28) and 49 older people (aged 60 to 83) on their ability to read several different expressions of emotions on other people’s faces. The younger participants, as expected, were more skillful overall, but the older ones caught up if they were told in advance that the people they were observing “share a lot of common interests with you.” In other words, when sufficiently motivated, older adults can do just as well as younger people.

The nature of empathy also appears to have changed throughout human history. Harvard psychologist Steven Pinker argues that our ability to empathize with others has expanded over the past several centuries, due to trends such as increasing literacy and global commerce that make people more interdependent.

But other scientists contend that empathy has been waning among young people in recent years. In a much-publicized study first published in 2010, Sara Konrath, then a researcher at the University of Michigan, compared students’ responses spanning three decades to statements expressing empathy, such as “I often have tender, concerned feelings for people less fortunate than me,” and “I sometimes try to understand my friends better by imagining how things look from their perspective.” Students’ scores on a measure of empathy called Empathic Concern declined between 1979 and 2009, with the steepest drop after the year 2000, she found. Other researchers reported a similar trend in 2012.

Even today, Konrath says she knows of no research that has pinned down the reasons for this apparent shift. She suspects, however, that one reason might be simple “burnout” due to a variety of new pressures on young people, such as increasing income inequality, greater economic competition and sharply increasing college tuition rates, which she says may also be tied to a rise in mental illness over the same period. “When I’ve talked to college students about this, they tell me I’ve got it right,” she says. “These pressures are crowding out their focus on their ability to care for others, because they’re just too focused on trying to make it.”

3) Empathy is often selfish

Declining modern rates of empathy are often cited by those complaining about the alleged selfishness of millennials, whom one writer dubbed the “Me Me Me generation.” Yet empathy itself tends to be selfish, in that it’s usually directed toward those we care about the most — reflecting those evolutionary drives to care for children, relatives and others similar to ourselves. Researchers have illustrated this point by studying such simple measures of empathy as contagious yawning in humans, which occurs at a much higher rate in response to kin than to strangers. (Another blow to tortoises: They don’t catch yawns at all.)

De Waal points to a similar sort of preference in other primates, which often lick and clean each other’s wounds, which helps them heal. In one study, scientists described how a macaque that was injured while trying to enter a new group retreated to its former group, where it was cared for.

In our polarized times, the innate drive to empathize more with one’s in-group may worsen political divisions. In a study first published in 2019, a team of US researchers randomly assigned 1,232 people to read one of two versions of an article describing a college campus protest against an inflammatory political speech. In both versions the protest turns violent and police are called, but in one version the speaker is criticizing Democrats while in the other the target is Republicans. The subjects were more likely to want to stop the speech when the speaker was attacking their own party — but only if the subjects scored high on a measure of empathy.

Besides its bias toward those nearest, dearest and most familiar, empathy also shows a preference for individuals over groups. Donations from all over the world flooded to refugee aid organizations after the publication of a photo of a drowned Syrian toddler on a Turkish beach. Yet they leveled off after six weeks, even as the media continued to report on the deaths of many other would-be migrants.

4) Empathy can be learned

Despite the controversies over empathy, most people say they want to be more empathetic, says Jamil Zaki, a psychologist at Stanford University. The good news is that they can be. The first step is believing that empathy is a skill that can be improved, he says. People who believe they can “grow” their empathy, Zaki has found, will spend more time and effort expending empathy in challenging situations, such as trying to understand someone from a different political party.

Other researchers have found that a meditation practice can also help enhance empathy, or at least improve people’s accuracy at reading emotions from facial expressions.

Through the years, studies have found that readers of fiction tend to be more skilled in empathy. In a 2009 study, researchers showed that people exposed to fiction performed better on an empathy test. The idea is that reading about other people helps us extend empathy to a wider circle.

There’s even an app for that. Konrath helped design a free one called Random App of Kindness (RAKi) to help teach empathy. It offers a series of games in which players help characters through an interconnected journey, with each game providing a way to practice basic forms of empathy, such as identifying emotions on other peoples’ faces and reading the signals of crying babies. Konrath’s preliminary research suggests that after young people had been randomly assigned to play the game for two months, they had more empathetic emotional responses to someone in distress compared to those who had played a more emotionally neutral game.

5) Empathy only goes so far

In its simplest form, as emotional contagion, empathy may fail to lead to altruistic action, because altruism often demands some sort of sacrifice, argues Jesse Prinz, a philosopher at the Graduate Center of the City University of New York. “When I ask my students, how many of you have given money to a homeless person, every hand goes up,” he says. “But when I ask, how many of you have crossed the street to do so, or if you see someone who isn’t squarely in your path? There will often be no hands at all.”

Instead of more research on empathy, Prinz wants to see more work on understanding what he says are more powerful moral drivers, such as anger, disgust, contempt, guilt, the joy many people feel in helping others, and solidarity, the sense of agreement among people with a common interest. Amid today’s renewed concern about racial justice, it’s less helpful for a white person to tell a Black person: “I feel your pain,” than to say something like: “I can’t imagine what it’s like to be you. I see what’s happening and will not stand for it,” Prinz says.

“Empathy just doesn’t get people out in the streets,” he says. “I’m in favor of the full emotional arsenal.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.