Sunday, April 30, 2023

Genes, microbes and other factors govern how each person’s body processes nutrients. Understanding the connections could help optimize diets — and health.

For many years, researchers and clinicians assumed that nutrition was a one-size-fits-all affair. Everybody needs the same nutrients from their food, they thought, and a vitamin pill or two could help dispense with any deficiencies.

But now scientists are learning that our genes and environment, along with the microbes that dwell in us and other factors, alter our individual abilities to make and process nutrients. These differences mean that two given people can respond to identical diets in different ways, contributing to varied health outcomes and patterns of disease.

Until recently, scientists didn’t fully appreciate that individual metabolic differences can have a big impact on how diet affects the risk for chronic diseases, says Steven Zeisel, director of the Nutrition Research Institute at the University of North Carolina, Chapel Hill. The new knowledge is resolving long-standing mysteries about human health and paving the way toward a world of “precision nutrition,” Zeisel writes in a recent article in the Annual Review of Food Science and Technology.

Although the findings are unlikely to lead all the way to hyper-individualized dietary recommendations, they could help to tailor nutrition to subsets of people depending on their genetics or other factors: Zeisel’s company, SNP Therapeutics, is working on a test for the genetic patterns of 20-odd variants that can identify individuals at risk of fatty liver disease, for example. Knowable Magazine spoke with Zeisel about our developing understanding of precision nutrition.

This interview has been edited for length and clarity.

Why has nutrition lagged behind other research areas in medicine?

Nutrition studies have always had a problem with variability in experimental results. For instance, when infants were given the fatty acid DHA [docosahexaenoic acid], some had an improvement in their cognitive performance and others didn’t. Because some showed improvements, it was added to infant formula. But we didn’t understand why they were responding differently, so scientists continued to debate why we did this if only 15 percent of children improved and 85 percent showed no response.

The confusion came from an expectation that everybody was essentially the same. People didn’t realize that there were predictable sources of variation that could separate those who responded to something from those who did not. For DHA, it turned out that if the mother had a difference in her genes that made her slow to produce DHA, then her baby needed extra DHA and responded when given it. That gene difference occurs in about 15 percent of women — and, it turns out, it’s their babies that get better when given DHA.

How are researchers starting to make sense of this variability?

Studying differences in human genetics is one way. We conducted a series of studies that found a good deal of variation in the amounts of choline [an essential nutrient] that people required: Men and postmenopausal women got sick when deprived of it, but only half of young women became sick.

We found that some women can make choline because the hormone estrogen turns on the gene to make choline. Other women have a difference in this gene that makes it unresponsive to estrogen. Men and postmenopausal women need to get the nutrient another way — by eating it — because they have minimal amounts of estrogen.

If I had initially done the choline study and chosen only young women participants, I would have found that half needed choline, half didn’t, and had a lot of noise in my data. Now that we can explain it, it makes sense. What seemed to be noisy data can be better described using a precision nutrition approach.

Are there other nutritional conundrums that suggest these sorts of variations are common?

There are some things for which we already know the underlying genetic reasons. For example, there’s a great deal of information on genetic differences that make some people’s cholesterol go up when they eat a high-fat diet while other people’s doesn’t. Researchers are discovering genetic variants that account for why some people need more vitamin D than others to get the same levels in their blood.

Every metabolic step is controlled by such variants. So, when we find people who seem to be responding differently in our studies, that’s a hint that there is some underlying variation. Rather than throwing the data away or saying participants didn’t comply with the study protocol, we can look at the data to discover some of the genetic reasons for these differences. Precision nutrition is really a change in how we do nutrition research, in that we’re starting to identify why some people respond and some don’t.

Besides genetic variants, are there other factors that precision nutrition needs to take into account?

Right now, much of our ability to be more precise comes from better tools to understand genetic variation. But genetics alone doesn’t determine your response to nutrients. It interacts with other factors too.

The microbiome [the community of bacteria and other microbes that live in and on our body] clearly also affects how nutrients work. Most microbiome research until now has been to name the organisms in the gut, but it’s now getting to the point where researchers can measure what microbial genes are switched on, what nutrients are made by gut microbes, and so on. As that research matures, we’ll be able to get much better recommendations than we do now.

Our environment could be a very important factor as well. We’re starting to be able to measure different environmental exposures by testing for thousands of chemicals in a drop of blood. Epigenetics, which is the science of chemical marks placed on DNA to turn genes on and off, will also likely contribute to important differences. It’s been a hard field because these marks vary in different tissues, and we can’t easily get a sample of liver or heart tissue for a nutrition test. We have to track these changes in the bloodstream, and estimate whether they’re the same changes that occurred in the organs themselves.

We’ll have to include each of these factors to improve our predictions of who will or won’t respond to a certain nutrient. Eventually, precision nutrition will have all of these inputs at its early stages.

There are various precision nutrition tests now being sold by different companies. Do they have anything useful to offer?

Right now, most tests look at one gene at a time in a database and say, “You have this gene difference and it makes you more susceptible to something.” But the metabolic pathways for most nutrients are not controlled by a single gene. There may be 10 or 20 steps that all add up to how you respond to sugars, for example, and any one of those steps can cause a problem. Knowing where you have variations all along the pathway can help us predict how likely you are to have a problem metabolizing sugar. It’s more sophisticated, but it’s also harder to do.

Are there ethical concerns with precision nutrition?

Once I know something about a person’s genetics for nutrition, I may be able to predict that they’re more likely to develop a disease or a health problem. That could change whether an insurance company wants to cover them. We have to try to make that risk clear to people, and also work on improving privacy so their information isn’t available to anybody but them.

The other problem is that wealthier people can afford to do these genetic tests and others can’t. But we can use precision nutrition to find alternate solutions. For instance, women who can’t turn choline production genes on with the hormone estrogen are at higher risk of having babies with neural tube defects and poor brain development. If we need a test for only that one gene difference, a test like that could be reduced to a few dollars and made widely available. Or we might choose to just give everybody choline supplements, if that proves to be a more cost-effective solution.

In the long run, will these discoveries help prevent disease?

There is an advantage in seeking more precise advice for some problems right now. With obesity, for instance, we know that as people gain weight, they develop a group of problems called metabolic syndrome that’s related to the accumulation of fat in the liver. We know that because of genetic differences, about 20 percent of the population is much more likely to develop fatty liver — and is at higher risk for developing these related problems. If we can test for these gene differences, then we can identify those who will benefit the most from changes in diet and weight loss and treat them, either with supplements, drugs or lifestyle changes.

Salt sensitivity is another example. About 10 percent of people have higher blood pressure when they eat high-salt diets. Right now, because we don’t know the metabolic differences that contribute, we say everybody should stay away from salt. But the truth is, only about 10 percent of people are benefiting from that recommendation, and 90 percent are getting bland food that they don’t like. If we could do genetic testing and tell whether a person is salt-sensitive, then they know that effort is worth it for their health. I think that helps to make people comply with recommendations and change their lifestyles.

Unlike some drugs, which have an all-or-nothing effect, nutrition’s effects tend to be modest. But it’s clearly an important, easy intervention. And if we don’t fix a diet, then we have to treat the problems that arise from a bad diet.

Nutrition is always going to be a tough field to get precise results. It isn’t going to be perfect until we can get all the variables identified. Part of what precision nutrition is doing is helping to refine the tools we have to understand these complex systems.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Justice Clarence Thomas and his wife have bolstered conservative causes as he is poised to lead the Supreme Court in rolling back more landmark rulings

U.S. Supreme Court Justice Clarence Thomas at the White House on Oct. 26, 2020. Jonathan Newton/The Washington Post via Getty Images
Neil Roberts, University of Toronto

With the opening of the U.S. Supreme Court’s new session on Oct. 3, 2022, Clarence Thomas is arguably the most powerful justice on the nation’s highest court.

In 1991, when Thomas became an associate justice, and only the second African American to do so, his eventual power was improbable to almost everyone except him and his wife, Virginia “Ginni” Thomas.

He received U.S. Senate confirmation despite lawyer Anita Hill’s explosive workplace sexual harassment allegations against him.

Today, Thomas rarely speaks during oral arguments, yet he communicates substantively through his prolific written opinions that reflect a complicated mix of self-help, racial pride and the original intent of America’s Founding Fathers.

He isn’t chief justice. John Roberts Jr. is.

But with Thomas’ nearly 31 years of service, he’s the longest-serving sitting justice and on track to have the lengthiest court tenure ever.

June Jordan, pioneering poet and cultural commentator, observed in 1991 when President George H.W. Bush nominated Thomas that people “focused upon who the candidate was rather than what he has done and will do.”

As a scholar of political theory and Black politics, I contend we haven’t learned from this vital insight.

Conservative activism

Thomas’ service is under increasing scrutiny as his wife, a conservative activist, testified on Sept. 27, 2022, before the House committee investigating the Jan. 6 attack on the U.S. Capitol that she still believes false claims that the 2020 election was rigged against Donald Trump.

According to documents obtained by that committee, Ginni Thomas was instrumental in coordinating efforts to keep former President Donald Trump in office. Her efforts included sending emails to not only former White House Chief of Staff Mark Meadows but also state officials in Arizona and Wisconsin.

Of particular concern to the Jan. 6 committee is testimony from Ginni Thomas on her email correspondence with John Eastman, her husband’s former law clerk, who is considered to be the legal architect of Trump’s last-ditch bid to subvert the 2020 election.

In my view, Clarence and Ginni Thomas’ intertwined lives highlight a distressing underside to their personal union: the blurring of their professional and personal lives, which has had the appearance of fracturing the independence of the executive and judicial branches of government.

In this light, Thomas’ sole dissent in the case involving Trump’s turning over documents to the Jan. 6 committee is all the more alarming.

‘What he has done and will do’

Clarence Thomas has cultivated a distinct judicial philosophy and vision of the world – and a view of his place in it.

From what can be gleaned from his own writings and speeches, his vision has been derived from Black nationalism, capitalism, conservatism, originalism and his own interpretations of the law.

Since Thomas’ confirmation, his ideas and rulings have attracted many critics.

A white woman dressed in a full-length green gown walks next to a middle-aged Black man wearing a black tuxedo.
Supreme Court Justice Clarence Thomas and Virginia Thomas arrive at White House dinner in 2019. Paul Morigi/Getty Images

But his interpretations of the law are now at the center of the high court’s jurisprudence.

In his concurring opinion of the court’s decision to overturn Roe v. Wade, Thomas argued that the court should reconsider reversing other related landmark rulings, including access to contraception in Griswold v. Connecticut, LGBTQ+ sexual behavior and sodomy laws in Lawrence v. Texas and same-sex marriage in Obergefell v. Hodges.

In short, Thomas’ sentiments reveal a broader ultraconservative agenda to roll back the social and political gains that marginalized communities have won since the 1960s.

The rulings in those cases, Thomas wrote, relied on the due process clause of the 14th Amendment and “were demonstrably erroneous decisions.”

“In future cases,” Thomas explained, “we should reconsider all of this Court’s substantive due process precedents, including Griswold, Lawrence, and Obergefell … we have a duty to ‘correct the error’ established in those precedents.”

Other recent Supreme Court rulings, on Second Amendment rights, Miranda rights, campaign finance regulations and tribal sovereignty, are also evidence of Thomas’ impact on the nation’s highest court.

The long game

In his memoir and public speeches, Thomas identifies as a self-made man.

Though he has benefited from affirmative action programs – and the color of his skin played a role in his Supreme Court nomination – Thomas has staunchly opposed such efforts to remedy past racial discrimination. Like other notable Black conservatives, Thomas argues that group-based preferences reward those who seek government largesse rather than individual initiative.

With the exception of the guidance of Catholic Church institutions and of his grandfather Myers Anderson, Thomas claims he earned his accomplishments through effort, hard work and his own initiative.

In a 1998 speech, Thomas foreshadowed his judicial independence and made clear that his attendance before the National Bar Association, the nation’s largest Black legal association, was not to defend his conservative views – or further anger his critics.

“But rather,” he explained, “to assert my right to think for myself, to refuse to have my ideas assigned to me as though I was an intellectual slave because I’m black.”

“I come to state that I’m a man, free to think for myself and do as I please,” Thomas went on. “I’ve come to assert that I am a judge and I will not be consigned the unquestioned opinions of others. But even more than that, I have come to say that, isn’t it time to move on?”

But like many of Thomas’ complexities, his own self-made narrative distorts the ideas of the first prominent Black Republican, who remains one of his intellectual heroes: Frederick Douglass, the statesman, abolitionist and fugitive ex-slave whose portrait has hung on the wall of Thomas’ office.

In “Self-Made Men,” a speech he first delivered in 1859, Douglass disagreed with the idea that accomplishments result solely from individual upliftment.

“Properly speaking,” Douglass wrote, “there are in the world no such men as self-made men. That term implies an individual independence of the past and present which can never exist.”

Law against the people

Thomas’ view of the law is rooted in the originalism doctrine of an immutable rather than living U.S. Constitution.

For Thomas, modern America since the 1776 Declaration of Independence has been predominantly a republic, where laws are made for the people through their elected representatives. Unlike in a pure democracy, where the people vote directly and the majority rules, the rights of the minority are protected in a republic.

Dating back to ancient Rome, the history of republicanism is a story of denouncing domination, rejecting slavery and championing freedom.

Yet in my view, American republicanism has an underside: its long-standing basis in inequality that never intended its core ideals to apply beyond a small few.

An elderly Black man and an elderly white man stand next to each other as onlookers applaud.
Clarence Thomas is seen here with GOP leader Mitch McConnell. Drew Angerer/Getty Images

Thomas claims consistency with America’s original founding.

In my view, Thomas’ perilous conservative activism works against a fundamental principle of the U.S. Constitution – “to form a more perfect union.”


Neil Roberts, Professor of Political Science, University of Toronto

This article is republished from The Conversation under a Creative Commons license.

‘Got polio?’ messaging underscores a vaccine campaign’s success but creates false sense of security as memories of the disease fade in US

For much of the 20th century, Americans were used to seeing people bearing the signs of past polio infection. Genevieve Naylor/Corbis via Getty Images
Katherine A. Foss, Middle Tennessee State University

Got Polio? Me neither. Thanks, Science.

Messages like this are used in memes, posters, T-shirts and even some billboards to promote routine vaccinations. As this catchy statement reminds people of once-feared diseases of the past, it – perhaps unintentionally – conveys the message that polio has been relegated to the history books.

Leonardo DiCaprio meme 'Remember that time you got polio? Nope? Me neither? Thanks Science!'
This pro-science message uses a popular ‘cheers’ meme format.

Phrasing that aims to encourage immunizations by highlighting their accomplishments implies that some diseases are no longer a threat.

Few people today know much about polio. In 2022, only one-third of surveyed adults in the U.S. were aware that polio has no cure. Moreover, a 2020 poll found that 84% of adults viewed vaccinating children as important, a 10% decline from 2001. The COVID-19 pandemic amplified anti-vaccination messaging, while also delaying routine immunizations.

Vaccine-preventable diseases are far from eradicated. Measles outbreaks in unvaccinated or under-vaccinated American communities have begun resurfacing in the past few years, despite a 2000 declaration that the virus had been eliminated in the U.S. Pertussis cases have been on the rise, with more than 18,000 cases reported in 2019. And in July 2022, polio reappeared in an unvaccinated New York man – the first U.S. diagnosis since 1979. This case helped return attention to polio, causing at least some young adults to wonder about their own vaccination status.

A shift in focus to immunization in developing countries has further lulled Americans into a false sense of security. While global approaches have been effective and are certainly needed, as the author of “Constructing the Outbreak: Epidemics in Media and Collective Memory,” I suggest that the celebratory messaging is no longer as effective as it once was and runs the risk of making it seem as if polio only lives in history books.

semicircle of patients in iron lungs use mirrors to watch a TV
Polio patients at Baltimore’s Children’s Hospital watched television from inside the iron lungs that breathed for them. Bettmann via Getty Images

Campaigning against a devastating disease

Before vaccines, polio – called infantile paralysis or poliomyelitis – was the most feared childhood disease in the U.S. Frequently affecting elementary school kids, the disease sometimes presented like a cold or flu – fever, sore throat and headache. In other cases, limb or spinal pain and numbness first indicated that something was wrong. Paralysis of legs, arms, neck, diaphragm or a combination could occur and, depending on the area affected, render patients unable to walk, lift their arms, or breathe outside of an iron lung.

magazine ad with images of kids with polio asks for donations
Full page ads like this one from 1953 solicited funds to help polio patients. March of Dimes

Only time could reveal whether the paralysis was permanent or would recede, sometimes to return decades later as Post-Polio Syndrome. Enough people were infected in outbreaks in the 1930s, 1940s and early 1950s that the effects of paralytic polio were quite visible in everyday life in the form of braces, crutches, slings and other mobility devices.

Thanks to the National Foundation for Infantile Paralysis, beating polio became a national priority. The NFIP grew out of President Franklin Delano Roosevelt’s Warm Springs Foundation. Roosevelt himself had been partially paralyzed by polio, and the NFIP provided funds for public education, research and survivors’ rehabilitation.

Eleanor Roosevelt smiles with a young boy holding a 'Mothers March on Polio' scroll
Eleanor Roosevelt helped inaugurate the Mothers’ March on Polio to raise money to fight the disease. Bettmann/CORBIS via Getty Images

Its campaigns were prolific and diverse, combining interpersonal and mass communication strategies.

From FDR “Birthday Ball” celebrations to parades and elementary school fundraising competitions, various groups raised money. High schoolers performed polio-themed plays, putting the disease itself on trial in “The People vs. Polio.” People passed around collection boxes at movie theaters and other public gatherings.

text of three 'I will not' and 'I will' points
An ad placed in Vogue in 1952 laid out the ‘Polio Pledge.’ National Foundation for Infantile Paralysis

Campaigns used every medium. Brochures and short films raised awareness of the threat of polio, emphasizing the need for funding to support patient rehabilitation and scientific research. The National Foundation for Infantile Paralysis generated scores of radio scripts and hired Frank Sinatra, Elvis Presley and other famous voices to read them. Judy Garland, Mickey Rooney, Lucille Ball and other Hollywood stars also joined the fight. Comic strips and cartoons featuring Mickey Mouse and Donald Duck rallied for March of Dimes funds to help polio patients.

Starting in 1946, the NFIP featured children with crutches and braces who had survived polio as “poster children” asking for funds to help them walk again. News stories covered outbreaks and polio epidemics, detailing the devastation of the disease on individuals, families and communities, while advising families how to reduce risk through the “Polio Pledge for Parents,” which provided a list of do’s and don'ts during summer months.

From public enemy No. 1 to success story

The work of the National Foundation for Infantile Paralysis yielded unprecedented and continuous success, providing hospitals with equipment during epidemics and supporting the development of vaccines. Following the largest vaccine trial in history, on April 12, 1955, the Poliomyelitis Vaccine Evaluation Center announced that Jonas Salk’s vaccine was 80%-90% effective against paralytic polio and officially ready for general use.

families in line outside a school with a sign 'Entrance for polio shots' in 1955
Once a vaccine was available, people lined up to protect themselves and their families from the virus. Bettmann via Getty Images

Over the next decade, the NFIP shifted its focus to widespread immunization, again using both mass media and local campaigns. With Salk’s vaccine, and then Albert Sabin’s, polio cases fell quickly, from the peak of 57,879 cases in 1952 to only 72 cases in 1965, with the last naturally occurring U.S. case in 1979.

The repeated declaration of what polio vaccines could and were accomplishing was strategically effective in persuading more people to get their shots. The American public of the 1960s and 1970s had lived through repeated polio epidemics and knew both the fear of contracting the disease and its visible aftereffects. As of 2021, 92.7% of Americans were fully protected by the vaccine, though these rates have been in decline since 2010 and fluctuate by region.

Public health rhetoric that focused on this vaccine success story worked around the world in the late 1980s and 1990s. Gradually, though, the perceived threat in the U.S. of polio and other vaccine-preventable diseases dissipated over generations as vaccinations largely eliminated the risk. Most people in developed countries lack firsthand experiences of just how terrifying these diseases are, having never experienced polio, diphtheria, measles or pertussis, or lost family members to them.

At the same time that polio has been largely forgotten in the U.S., anti-vaccination messages have been spreading disinformation that distorts the risk of vaccines, ignoring the realities of the diseases they immunize against.

Rhetoric from polio vaccine campaigns in the 1950s and 1960s emphasized the risks of not getting immunized – acute illness, life-changing pain and paralysis or even death. In the 21st century U.S., immunization campaigns no longer emphasize these risks, and it’s easy to forget the potentially deadly repercussions of skipping vaccines.

I believe pervasive public health messaging can counter anti-vaccination disinformation. A reminder for the American public about this still dangerous disease can help ensure that “Got Polio?” does not become a serious question.

Katherine A. Foss, Professor of Media Studies, Middle Tennessee State University

This article is republished from The Conversation under a Creative Commons license. 

An Easy-to-Make Summer Sweet

(Culinary.net) When it’s beyond hot outside and the kids are begging for a delicious afternoon snack, sometimes it’s difficult to know where to turn. The pantry is full and the refrigerator is stocked, yet nothing sounds appetizing when it’s scorching outside.

Combining three simple ingredients you probably already have in your kitchen can save the day and provide a refreshing and scrumptious snack.

Try this 3-Ingredient Strawberry Ice Cream on warm days ahead. It’s chilled to perfection with fresh strawberries and fluffy whipping cream to create a creamy texture perfect for the kiddos.

Start by pureeing 1 pound of fresh strawberries. Add 1 pint of whipping cream and the sweetened condensed milk to a mixing bowl, then beat until stiff peaks form.

Fold the strawberry puree in with the whipping cream mixture. Pour into a loaf pan and freeze for 5 hours.

Before serving, let ice cream soften for 5-10 minutes.

It’s delicious, rich and has sweet strawberry flavor that can help satisfy nearly any sweet tooth. It’s a wonderful treat after long summer days spent playing outside, splashing in the pool or just relaxing, soaking up the sun.

Find more summer dessert recipes at Culinary.net.

If you made this recipe at home, use #MyCulinaryConnection on your favorite social network to share your work.


3-Ingredient Strawberry Ice Cream

Servings: 4-6

  • 1 pound fresh strawberries, stems removed
  • 1 pint heavy whipping cream
  • 1 can (14 ounces) sweetened condensed milk
  1. In blender, puree strawberries.
  2. In bowl of stand mixer, beat whipping cream and sweetened condensed milk until stiff peaks form. Fold in strawberry puree. Pour into loaf pan. Freeze 5 hours.
  3. Before serving, let ice cream soften 5-10 minutes.
SOURCE:
Culinary.net

The controversial technology of reflecting sunlight away from the planet could help blunt the worst impacts of climate change

For decades, climate scientist David Keith of Harvard University has been trying to get people to take his research seriously. He’s a pioneer in the field of geoengineering, which aims to combat climate change through a range of technological fixes. Over the years, ideas have included sprinkling iron in the ocean to stimulate plankton to suck up more carbon from the atmosphere or capturing carbon straight out of the air.

Keith founded a company that develops technology to remove carbon from the air, but his specialty is solar geoengineering, which involves reflecting sunlight away from Earth to reduce the amount of heat that gets trapped in the atmosphere by greenhouse gases. The strategy hasn’t been proven, but modeling suggests it will work. And because major volcanic eruptions can have the same effect, there are some real-world data to anchor the idea.

In the near future, Keith and his colleagues hope to launch one of the first tests of the concept: a high-altitude balloon that would inject tiny, reflective particles into the layer of the upper atmosphere known as the stratosphere. The place and time for the experiment are still to be determined, but it would be a baby step toward showing whether artificial stratospheric particles could help cool the planet the way eruptions do naturally.

But the idea of using a technological fix for climate change is controversial. Talking about — let alone researching — geoengineering has long been considered taboo for fear that it would dampen efforts to fight climate change in other ways, particularly the critical work of reducing carbon emissions. That left geoengineering on the fringes of climate research. But people’s attitudes may be changing, Keith says. He argues that while geoengineering by itself cannot solve the problem of climate change, it could help mitigate the damage if implemented carefully alongside emissions reductions.

In 2000, Keith published an overview of geoengineering research in the Annual Review of Energy and the Environment, in which he noted that major climate assessments up until that point had largely ignored it. Earlier this year, he spoke in Seattle about the current state of the field at the annual meeting of the American Association for the Advancement of Science. Knowable Magazine talked with Keith about how the scientific, technological and geopolitical landscape has changed in the intervening decades.

This conversation has been edited for length and clarity.

Twenty years ago you called geoengineering “deeply controversial.” How has the controversy changed since then?

Back then it was something that a pretty small group of people who thought about climate knew about — and mostly agreed they wouldn’t talk about. And that was it. Now it’s much more widely discussed. I think the taboo is reduced, for sure. It’s certainly still controversial, but my sense is that there has been a real shift. An increasing number of people who are in climate science or in public policy around climate or in environmental groups now agree that this is something we should talk about, even if many think it should never be implemented. There’s even growing agreement that research should happen. It feels really different.

Why was there a taboo against talking about geoengineering, and do you think it was valid?

I think it’s well-intentioned; people are right to worry that talking about geoengineering might reduce the effort to cut emissions. But I don’t think this concern about moral hazard is a valid reason not to do research. There were people who argued that we shouldn’t allow the AIDS triple-drug cocktail to be distributed in Africa because it would be misused, creating resistance. Others argued against the introduction of airbags because people would drive faster. There is a long history of arguing against all sorts of potentially risk-reducing technologies because of the potential for risk compensation — the possibility that people will respond to a safety measure by taking on more risk. I think it’s an ethically confused argument.

For me, the most serious concern is that some entities — like big fossil-fuel companies that have a political interest in blocking emissions cuts — will attempt to exploit the potential of geoengineering as an argument against emissions cuts. This concern has likely been the primary reason that some big civil-society groups want to block or contain discussion of this stuff so it doesn’t enter more widely into the climate debate. For me the concern is entirely justified, but I think the right answer is to confront it head-on rather than avoid the debate. I don’t want a world where decisions are made by elites talking behind closed doors.

Has the amount of geoengineering research increased in the past two decades?

Dramatically, even in the last couple of years. When I wrote that Annual Reviews paper in 2000, there was virtually zero organized research. There were a few researchers occasionally getting interested and putting in like 1 percent of their time.

Now there are little research programs almost everywhere you care to mention. There’s a Chinese program that’s pretty serious; there’s an Australian one that’s better funded than anything in the United States; there are several in Europe.

What has been the biggest surprise over the past 20 years in how solar geoengineering might work?

The big surprise has been recent results, including two studies I was involved in, showing that the effects of a global solar geoengineering program wouldn’t be as geographically unequal as was feared. What matters for real public policy is who is made worse off.

For one paper published last year in Nature Climate Change, we used a very high-resolution computer model, and we compared, over all the land surface, two worlds: one world where we have two times preindustrial levels of carbon dioxide and the other world where we have enough solar geoengineering to reduce the temperature change by half. For each of the 33 geographical study regions designated by the Intergovernmental Panel on Climate Change, we tried to look at whether solar geoengineering would move a particular climate variable back toward preindustrial levels, which we call “moderated,” or move it further away from preindustrial, which we call “exacerbated.”

We focused on some of the most important climate variables: change in extreme temperature, change in average temperature, change in water availability and change in extreme precipitation. And what we found seems almost too good to be true: There wasn’t a single variable in a single region that was exacerbated. That was a surprise.

In a paper published in March in Environmental Research Letters, we did the same analysis with another model, and we found that with solar geoengineering, everything is moderated in all regions except four. But all four of those are dry regions that get wetter. So my guess is many residents of those regions would actually prefer that outcome because in general people are more worried about getting drier than wetter.
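The moderated-versus-exacerbated bookkeeping described above boils down to a simple comparison of absolute changes. A minimal sketch in Python, with a hypothetical function name and illustrative values rather than anything taken from the actual models:

```python
def classify(change_2xco2, change_geo):
    """Classify one climate variable in one region.

    Both arguments are changes relative to preindustrial levels:
    one under doubled CO2 alone, one under doubled CO2 plus
    solar geoengineering.
    """
    if abs(change_geo) < abs(change_2xco2):
        return "moderated"    # moved back toward preindustrial
    if abs(change_geo) > abs(change_2xco2):
        return "exacerbated"  # moved further away from preindustrial
    return "unchanged"
```

Repeating a check like this for each variable in each of the 33 regions is what yields the kind of region-by-region tally the studies report.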

Now, what the model shows may or may not be true in the real world. But if there is a single reason to really look at these technologies and evaluate them in experiments, it’s results like this showing that you can reduce many or almost all of the major perturbations of climate without making any region significantly worse off. That’s quite a thing.

How would your planned real-world experiment, known as the Stratospheric Controlled Perturbation Experiment (SCoPEx), work?

SCoPEx is a stratospheric balloon experiment to put aerosols in the stratosphere and measure their interaction over the first hours and the first kilometer or so after release in a plume. It involves a high-altitude balloon that will lift a gondola carrying a package of scientific instruments to an altitude of 20 kilometers. It will release a very small amount of materials such as ice, calcium carbonate (essentially powdered limestone) or sulfuric acid droplets known as sulfates. The gondola will be fitted with propellers that were originally made for airboats so that it can fly through the plume of released materials to take measurements.

The amount of released material will be on the order of 1 kilogram, which is far too small to have any direct health or environmental impact once released. The goal is not to change climate or even to see if you can reflect any sunlight. The goal is simply to improve our models of the way aerosols form in the stratosphere, especially in plumes, which is very relevant for understanding how solar geoengineering would work. We hope to launch the experiment soon. But when and where that will happen depends on balloon availability and recommendations from an advisory committee.

We know there are health risks related to sulfuric acid pollution in the lower atmosphere. Are there potential health risks from injecting sulfate aerosols into the stratosphere?

Anything we put in the stratosphere will end up coming down to the surface, and that’s one of the risks we must consider. A full-scale solar geoengineering program might involve injecting around 1.5 million tons of sulfur and sulfuric acid into the stratosphere per year. This could be done using a fleet of aircraft; roughly 100 aircraft would need to continuously fly payloads up to about 20 kilometers (12 miles) altitude. You would not be wrong to think this sounds crazy. We know that sulfuric acid pollution in the lower atmosphere kills many people every year, so putting sulfuric acid into the stratosphere is obviously a risk. But it’s important to understand how much 1.5 million tons a year really is.

The 1991 eruption of Mount Pinatubo, in the Philippines, poured about 8 million tons of sulfur in one year into the stratosphere. It cooled the climate and had implications for all sorts of systems. Current global emissions of sulfur are about 50 million tons a year into the lower atmosphere, and that kills several million people every year from fine particulate air pollution. So the relative risk from solar geoengineering is fairly small, and it has to be weighed against the risk of not doing solar geoengineering.
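Putting the figures quoted above side by side makes the scale argument concrete. A back-of-the-envelope comparison, using only the numbers in the text (millions of tons of sulfur per year):

```python
geoengineering = 1.5      # proposed full-scale program, into the stratosphere
pinatubo_1991 = 8.0       # Mount Pinatubo eruption, roughly one year
current_emissions = 50.0  # present-day pollution, into the lower atmosphere

# The proposed program is a small fraction of either benchmark.
share_of_pinatubo = geoengineering / pinatubo_1991       # about 19%
share_of_emissions = geoengineering / current_emissions  # 3%
```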

How quickly could a full-scale solar geoengineering program get off the ground?

It could happen very fast, but all the ways it happens very fast are bad cases, basically where one country just jumps on it very quickly. It’s obvious that what would be best is for countries not to just start doing it but to articulate clear plans and build in checks and balances and so on. 

If there were much wider research over the next half-decade to decade — which is possible because attitudes really are changing — then it’s plausible that some coalition of countries could begin to inch toward real implementation with serious, visible plans that can be critiqued by the scientific community starting by the end of this decade. I don’t expect it will happen that fast, but I think it’s possible.

How does geoengineering fit in with other efforts to combat climate change such as reducing fossil-fuel emissions and removing carbon from the air?

The first, and by far the most important, thing we do about climate change is decarbonizing the economy, which breaks the link between economic activity and carbon emissions. There’s nothing I can say about solar geoengineering that changes the fact that we have to reduce emissions. If we do not do that, we’re done.

Then carbon removal, which involves capturing and storing carbon that has already been emitted, could break the link between emissions and the amount of carbon dioxide in the atmosphere. Large-scale carbon removal really makes sense when emissions are clearly heading toward zero, and we’re getting toward the harder chunk of the economy to mitigate. And then solar geoengineering is a thing that might partially and imperfectly weaken, but not break, the link between the amount of carbon dioxide in the atmosphere and climate changes — changes in sea level, changes in extreme events, changes in temperature, etc.

So if you look at the curve of overall greenhouse gases in the atmosphere, you can think of emissions cuts as flattening the curve. Carbon removal takes you down the other side of the curve. And then solar geoengineering can cut off the top of the curve, which would reduce the risk of the carbon dioxide that is in the air already.

Some people think we should use it only as a get-out-of-jail card in an emergency. Some people think we should use it to quickly try to get back to a preindustrial climate. I’m arguing we use solar geoengineering to cut the top off the curve by gradually starting it and gradually ending it.

Do you feel optimistic about the chances that solar geoengineering will happen and can make a difference in the climate crisis?

I’m not all that optimistic right now because we seem to be so much further away from an international environment that’s going to allow sensible policy. And that’s not just in the US. It’s a whole bunch of European countries with more populist regimes. It’s Brazil. It’s the more authoritarian India and China. It’s a more nationalistic world, right? It’s a little hard to see a global, coordinated effort in the near term. But I hope those things will change.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

ChatGPT could be an effective and affordable tutor

Educational software has a long history, but chatbots could help students excel like never before. Fabio Principe / EyeEm via Getty Images
Anne Trumbore, University of Virginia

Imagine a private tutor that never gets tired, has access to massive amounts of data and is free for everyone. In 1966, Stanford philosophy professor Patrick Suppes did just that when he made this prediction: One day, computer technology would evolve so that “millions of schoolchildren” would have access to a personal tutor. He said the conditions would be just like the young prince Alexander the Great being tutored by Aristotle.

Now, ChatGPT, a new artificial intelligence-powered chatbot with advanced conversational abilities, may have the capability to become such a tutor. ChatGPT was trained on huge amounts of data on a wide range of topics and can pass graduate school exams. As a researcher who studies how computers can be used to help people learn, I think ChatGPT can be used to help students excel academically. However, in its current form, ChatGPT has yet to show that it can stay focused on one particular task, let alone on tutoring.

Philosophy, engineering and artificial intelligence scholars envisioned using the computer as an “intelligent tutor” well before the internet became a global commercial network in the 1990s. I believe lessons from developing those early tutoring systems can offer insight into how students and educators can best make use of ChatGPT as a tutor in the future.

Computers as tutors

Suppes – the Stanford philosophy professor – was a pioneer of a field called “computer-assisted instruction.” He developed some of the earliest educational software. That software provided individual instruction via computer and led students to have better test results than those who didn’t use the program. I worked for Suppes in developing software and other online programs from 2004 to 2012.

Since then, experiments in building “intelligent tutors” to help students have driven advances in artificial intelligence, social networks and computer hardware. And today, the abilities of ChatGPT to write essays, answer philosophical questions and solve computer coding problems may finally achieve Suppes’ goal of truly personalized tutoring via computer.

Early versions of personalized learning

In 1972, a new personalized learning system called PLATO, for Programmed Logic for Automated Teaching Operations, made its debut. It was the first such system to be widely available.

Created by Don Bitzer, a professor of electrical engineering at the University of Illinois, PLATO allowed up to 1,000 students to be logged onto a mainframe computer simultaneously. Each student could complete different online courses in foreign languages, music, math and many other subjects while receiving feedback from the computer on their work.

PLATO enabled students to reach the same level of achievement as in-person classes in less time. And most students preferred this mode of instruction over sitting in a large lecture class. Yet, the system was too expensive to be used by many colleges and universities. Each computer terminal was marketed at over US$8,000 – about $58,000 today – and schools were charged additional fees every time a student used the system. Still, PLATO’s success with students inspired a number of companies to create software that provided a similar kind of tutoring, including the College Curriculum Corporation, which was co-founded by Suppes.

Popular personal computer brands, such as Apple and Commodore, advertised the availability of educational software as a reason for families to invest in a home computer.

By 1985, researchers at Carnegie Mellon University were designing software using advances in artificial intelligence and cognitive psychology. They claimed that the technology had advanced to a level where computer systems could be designed to serve as tutors as effective as humans. However, even though there were over 10,000 pieces of educational software available at the time, much of it was of fairly low quality and did not provide real tutoring.

Although the more advanced designs of the educational software developed at Carnegie Mellon enabled students to learn significantly more than students in traditional classrooms, they were not widely used in schools.

In the 1980s and 1990s, a school would need a sizable number of expensive, high-powered computer workstations for students to use an intelligent tutor. Today, the computers are much more powerful and much less expensive.

And early intelligent tutors were used primarily in math, science and technology courses, which limited their appeal. Students could not ask the software programs a question. However, students could get feedback on answers to quizzes and tests.

In 2001, the next generation of intelligent tutoring systems was able to have conversations with students in written English. These systems, early chatbots, used advances in natural language processing to have dialogues with students about a range of subjects. Each system was designed specifically for one subject, such as physics. Adding another subject, like English, would require its own specialized tutoring system.

The next few years brought three developments, which enabled a forward leap in achieving the dream of computerized tutoring. One was the availability of broadband internet, which was faster than the dial-up connections of the 1990s. The second was the lower cost of more powerful computers. The third was investment by technology companies and government funding for university research in artificial intelligence, which led to the development of what we now call chatbots.

The first chatbots

By 2007, early AI chatbots provided tutoring by talking students through their answers to questions. Research showed these chatbots supported learning outcomes similar to those produced by human tutors. Even so, using chatbot technology to provide tutoring was primarily experimental.

During the 2010s, social networks and advances in artificial intelligence dramatically changed the way people interact with technology. Chatbots are now integrated into many parts of our lives and are no longer experimental. People talk to their phones, to their cars and to their home appliances. Many people have virtual assistants at home. They can ask Alexa, Siri or Google for directions or the answers to simple homework questions.

Today, ChatGPT has advanced abilities that enable more complex conversations and can provide individualized tutoring much like what Suppes predicted in 1966.

I asked ChatGPT why it would be a good tutor, and it responded: “I am a good tutor for a student because I am able to understand and respond to a wide range of questions and prompts, and I can provide detailed and accurate information on a wide range of subjects. Additionally, I am able to communicate this information in a clear and easy-to-understand manner, making it accessible for students of all levels of understanding. Because I am a machine-learning model, I can also provide a large amount of information quickly and efficiently, which can be especially helpful for students who are trying to learn a lot of new material in a short amount of time.”

By contrast, Siri said, “Hmm … I don’t have an answer for that. Is there something else I can help you with?”

Recent research in using chatbots in college courses shows that how chatbots are used is important in encouraging students to learn more deeply. For instance, AI chatbots enabled significant improvements in learning in a graduate course on cloud computing at Carnegie Mellon. These learning gains occurred when these chatbots asked students to build on an existing argument or to provide more information about a claim they had made. In this case, the chatbot asked the student a question, rather than vice versa.

Many educators are worried about students learning less with ChatGPT since it can be used to cheat on assignments and papers. Others are worried about ChatGPT giving wrong answers or spreading misinformation.

Yet the history and research of intelligent tutors show that using the right design to harness the power of chatbots like ChatGPT can make deeper, individualized learning available to almost anyone. For example, if people use ChatGPT to ask students questions that prompt them to revise or explain their work, students will have better learning gains. Since ChatGPT has access to far more knowledge than Aristotle ever did, it has great potential for providing tutoring to students to help them learn more than they would otherwise.

Anne Trumbore, Chief Digital Learning Officer, Sands Institute for Lifelong Learning, University of Virginia

This article is republished from The Conversation under a Creative Commons license. 

Banking crises rooted in a system that rewards excessive risk-taking – as First Republic’s precarious situation shows

Another bank on the brink. AP Photo/Jeff Chiu
Alexandra Digby, University of Rochester; Dollie Davis, Minerva University, and Robson Hiroshi Hatsukami Morgan, Minerva University

First Republic Bank is on the brink of collapse, a victim of the panic that has roiled small and midsize banks since the failure of Silicon Valley Bank in March 2023.

Should First Republic fail, it would underscore how the impact of risky decisions at one bank can quickly spread into the broader financial system. It should also provide the impetus for policymakers and regulators to address a systemic problem that has plagued the banking industry from the savings and loan crisis of the 1980s to the financial crisis of 2008 to the recent turmoil following SVB’s demise: incentive structures that encourage excessive risk-taking.

The Federal Reserve’s top regulator seems to agree. On April 28, 2023, the central bank’s vice chair for supervision delivered a stinging report on the collapse of Silicon Valley Bank, blaming its failures on its weak risk management, as well as supervisory missteps.

We are professors of economics who study and teach the history of financial crises. In each of the financial upheavals since the 1980s, the common denominator was risk. Banks provided incentives that encouraged executives to take big risks to boost profits, with few consequences if their bets turned bad. In other words, all carrot and no stick.

One question we are grappling with now is what can be done to keep history from repeating itself and threatening the banking system, economy and jobs of everyday people.

S&L crisis sets the stage

The precursor to the banking crises of the 21st century was the savings and loan crisis of the 1980s.

The so-called S&L crisis, like the collapse of SVB, began in a rapidly changing interest rate environment. Savings and loan banks, also known as thrifts, provided home loans at attractive interest rates. When the Federal Reserve under Chairman Paul Volcker aggressively raised rates in the late 1970s to fight raging inflation, S&Ls were suddenly earning less on fixed-rate mortgages while having to pay higher interest to attract depositors. At one point, their losses topped US$100 billion.

To help the teetering banks, the federal government deregulated the thrift industry, allowing S&Ls to expand beyond home loans to commercial real estate. S&L executives were often paid based on the size of their institutions’ assets, and they aggressively lent to commercial real estate projects, taking on riskier loans to grow their loan portfolios quickly.

In the late 1980s, the commercial real estate boom turned bust. S&Ls, burdened by bad loans, failed in droves, requiring the federal government to take over banks and delinquent commercial properties and sell the assets to recover money paid to insured depositors. Ultimately, the bailout cost taxpayers more than $100 billion.

Short-term incentives

The 2008 crisis is another obvious example of incentive structures that encourage risky strategies.

At all levels of mortgage financing – from Main Street lenders to Wall Street investment firms – executives prospered by taking excessive risks and passing them to someone else. Lenders passed mortgages made to people who could not afford them onto Wall Street firms, which in turn bundled those into securities to sell to investors. It all came crashing down when the housing bubble burst, followed by a wave of foreclosures.

Incentives rewarded short-term performance, and executives responded by taking bigger risks for immediate gains. At the Wall Street investment banks Bear Stearns and Lehman Brothers, profits grew as the firms bundled increasingly risky loans into mortgage-backed securities to sell, buy and hold.

As foreclosures spread, the value of these securities plummeted, and Bear Stearns collapsed in early 2008, providing the spark of the financial crisis. Lehman failed in September of that year, paralyzing the global financial system and plunging the U.S. economy into the worst recession since the Great Depression.

Executives at the banks, however, had already cashed in, and none were held accountable. Researchers at Harvard University estimated that top executive teams at Bear Stearns and Lehman pocketed a combined $2.4 billion in cash bonuses and stock sales from 2000 to 2008.

A familiar ring

That brings us back to Silicon Valley Bank.

Executives tied up the bank’s assets in long-term Treasury and mortgage-backed securities, failing to protect against rising interest rates that would undermine the value of these assets. The interest rate risk was particularly acute for SVB, since a large share of depositors were startups, whose finances depend on investors’ access to cheap money.

When the Fed began raising interest rates last year, SVB was doubly exposed. As startups’ fundraising slowed, they withdrew money, which required SVB to sell long-term holdings at a loss to cover the withdrawals. When the extent of SVB’s losses became known, depositors lost trust, spurring a run that ended with SVB’s collapse.
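The mechanism behind those losses is standard bond math: when market interest rates rise, previously issued fixed-rate securities are worth less, and a bank forced to sell early locks in the difference. A minimal sketch with illustrative numbers, not SVB's actual holdings:

```python
def bond_price(face, coupon_rate, market_rate, years):
    """Present value of a fixed-rate bond: discounted coupons plus principal."""
    coupons = sum(face * coupon_rate / (1 + market_rate) ** t
                  for t in range(1, years + 1))
    principal = face / (1 + market_rate) ** years
    return coupons + principal

# A 10-year bond bought when rates were 1.5% trades at par (100)...
at_par = bond_price(100, 0.015, market_rate=0.015, years=10)

# ...but is worth substantially less if market rates jump to 4.5%,
# which is the loss realized when the bond must be sold before maturity.
after_hike = bond_price(100, 0.015, market_rate=0.045, years=10)
```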

For executives, however, there was little downside in discounting or even ignoring the risk of rising rates. The cash bonus of SVB CEO Greg Becker more than doubled to $3 million in 2021 from $1.4 million in 2017, lifting his total earnings to $10 million, up 60% from four years earlier. Becker also sold nearly $30 million in stock over the past two years, including some $3.6 million in the days leading up to his bank’s failure.

The impact of the failure was not contained to SVB. Share prices of many midsize banks tumbled. Another American bank, Signature, collapsed days after SVB did.

First Republic survived after it was rescued by a consortium of major banks led by JPMorgan Chase, but the damage was already done. First Republic recently reported that depositors withdrew more than $100 billion in the six weeks following SVB’s collapse, and now it appears that it could soon fail too.

The crisis isn’t over yet. Banks had over $620 billion in unrealized losses at the end of 2022, largely due to rapidly rising interest rates.

The big picture

So, what’s to be done?

We believe the bipartisan bill recently filed in Congress, the Failed Bank Executives Clawback, would be a good start. In the event of a bank failure, the legislation would empower regulators to claw back compensation received by bank executives in the five-year period preceding the failure.

Clawbacks, however, kick in only after the fact. To prevent risky behavior, regulators could require executive compensation to prioritize long-term performance over short-term gains. And new rules could restrict the ability of bank executives to take the money and run, including requiring executives to hold substantial portions of their stock and options until they retire.

The Fed’s new report on what led to SVB’s failure points in this direction. The 102-page report recommends new limits on executive compensation, saying leaders “were not compensated to manage the bank’s risk,” as well as stronger stress-testing and higher liquidity requirements.

We believe these are also good steps, but probably not enough.

It comes down to this: Financial crises are less likely to happen if banks and bank executives consider the interest of the entire banking system, not just themselves, their institutions and shareholders.

Alexandra Digby, Adjunct Assistant professor of Economics, University of Rochester; Dollie Davis, Associate Dean of Faculty, Minerva University, and Robson Hiroshi Hatsukami Morgan, Assistant Professor of Social Sciences, Minerva University

This article is republished from The Conversation under a Creative Commons license. 

8 Ways to Make Your Pet Feel Like Part of the Family

It’s said that pets can be people’s best friends, but the relationships between people and their four-legged friends often run even deeper. In fact, many consider pets to be an integral part of their families.

From eating and playing together to matching outfits and more, consider these practical ways to make dogs and cats feel like they’re important members of your household so you can become two peas in a pod.

Play Often
Playtime is an easy (and fun) way to bond with pets, whether your furry friends prefer fetch, tug-of-war or any other game that keeps you both active. On top of the emotional benefits, it can help keep them in better physical shape – you might even be able to get a little exercise, too.

Feed Human-Grade Food
Sharing a meal with people is a common way to connect, and that same connection can be made with pets. Feeding them at the same time you sit down for a meal can create a special bond. Though it’s tempting to feed them off your plate, consider a human-grade food option like Freshly Crafted from Full Moon Pet. Created for pet parents who seek homemade-quality ingredients free from additives or preservatives, this fresh frozen food includes 100% human-grade ingredients like real USDA chicken breast and blueberries. It looks, smells and tastes just like food you would cook in your own kitchen while providing the complete nutrition pets need to thrive.

With flavors including Homestead Turkey, Ranch-Raised Beef and Free-Range Chicken, these curated meals promote skin and coat health, support immune health and encourage muscle growth.

Set Up a Special Space
Just like humans, pets need a little quiet time now and then. Creating an area they can retreat to with a bed, blankets and favorite toys offers a “safe space” where they can be comfortable. Plus, it’s a perfect spot for them to nap when you’re away and the house is empty.

Schedule Regular Wellness Visits
The veterinarian’s office may not be your furry friend’s favorite hangout, but scheduling regular checkups is an important part of owning pets. A balanced, quality diet can reduce trips to the vet to just essential visits – often annually or bi-annually – and help your pets live happy and healthy lives.

Show Affection
Belly rubs, freshly brushed coats and simply spending time together are all effective ways to show pets your affection. Another way to share your love is to reward dogs with their favorite treats like Full Moon Chicken Jerky, which is a high-protein, meaty treat made with few ingredients, including USDA chicken breast, so you can feel good about providing your pets with a treat that tastes good and is good for them.

Take Pets on Vacation
Skip the anxiety of leaving pets behind during your next trip by letting them join in on the fun. Exploring the world together is an exciting way to bond as you can visit new places, find dog parks and discover pet-friendly restaurants. If it’s your pet’s first road trip, consider a shorter excursion to ensure he or she is comfortable in the car, and make sure to bring all the essentials like food, water, bowls, bedding, toys and medications.

Wear Matching Clothes
When you’re dressed to feel your best, you’re likely to feel more confident in yourself. Provide that same feeling of satisfaction to pets by dressing them in the same clothes you plan to wear for a walk around the block or a trip to the dog park. Next time there’s a family photo session, plan to include furry friends in matching outfits so everyone can be dressed to impress.

Celebrate Special Occasions
Birthdays, “gotcha days” or holidays are perfect moments for celebrating your bond with your pets. Make the day extra special by indulging in favorites like going for a walk, heading to the dog park, gifting a new toy, enjoying specialty treats and more.

To find more ways to bond with your pets, visit FullMoonPet.com.

SOURCE:
Full Moon Pet