Sunday, April 30, 2023

The road to low-carbon concrete

For thousands of years, humanity has had a love affair with cement and concrete. But now, industry groups and researchers are seeking solutions to the huge amounts of carbon dioxide that cement-making generates.

Nobody knows who did it first, or when. But by the 2nd or 3rd century BCE, Roman engineers were routinely grinding up burnt limestone and volcanic ash to make caementum: a powder that would start to harden as soon as it was mixed with water.

They made extensive use of the still-wet slurry as mortar for their brick- and stoneworks. But they had also learned the value of stirring in pumice, pebbles or pot shards along with the water: Get the proportions right, and the cement would eventually bind it all into a strong, durable, rock-like conglomerate called opus caementicium or — in a later term derived from a Latin verb meaning “to bring together” — concretum.

The Romans used this marvelous stuff throughout their empire — in viaducts, breakwaters, coliseums and even temples like the Pantheon, which still stands in central Rome and still boasts the largest unreinforced concrete dome in the world.

Two millennia later, we’re doing much the same, pouring concrete by the gigaton for roads, bridges, high-rises and all the other big chunks of modern civilization. Globally, in fact, the human race is now using an estimated 30 billion metric tons of concrete per year — more than any other material except water. And as fast-developing nations such as China and India continue their decades-long construction boom, that number is only headed up.

Unfortunately, our long love affair with concrete has also added to our climate problem. The variety of caementum that’s most commonly used to bind today’s concrete, a 19th-century innovation known as Portland cement, is made in energy-intensive kilns that generate more than half a ton of carbon dioxide for every ton of product. Multiply that by gigaton global usage rates, and cement-making turns out to contribute about 8 percent of total CO2 emissions.
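
For a rough sense of how those figures combine, here is a minimal back-of-envelope sketch in Python. The cement share of concrete mass and the global emissions total are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope estimate of cement's share of global CO2 emissions.
# Every input is a rough, illustrative assumption, not an authoritative figure.

concrete_use_gt = 30.0     # global concrete use, gigatons/year (from the article)
cement_fraction = 0.14     # assumed cement share of concrete mass (~10-15%)
co2_per_ton_cement = 0.6   # tons CO2 per ton of cement ("more than half a ton")
global_co2_gt = 37.0       # assumed total global CO2 emissions, gigatons/year

cement_gt = concrete_use_gt * cement_fraction      # ~4.2 Gt cement/year
cement_co2_gt = cement_gt * co2_per_ton_cement     # ~2.5 Gt CO2/year
share = cement_co2_gt / global_co2_gt

print(f"Cement CO2: {cement_co2_gt:.1f} Gt/yr, ~{share:.0%} of global emissions")
```

With these assumed inputs the sketch lands near 7 percent, in line with the commonly cited 8 percent.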

Granted, that’s nowhere near the fractions attributed to transportation or energy production, both of which are well over 20 percent. But as the urgency of addressing climate change heightens public scrutiny of cement’s emissions, along with potential government regulatory pressures in both the United States and Europe, it’s become too big to ignore. “Now it’s recognized that we need to cut net global emissions to zero by 2050,” says Robbie Andrew, a senior researcher at the CICERO Center for International Climate Research in Oslo, Norway. “And the concrete industry doesn’t want to be the bad guy, so they’re looking for solutions.” 

Major industry groups like the London-based Global Cement and Concrete Association and the Illinois-based Portland Cement Association have now released detailed road maps for reducing that 8 percent to zero by 2050. Many of their strategies rely on emerging technologies; even more are a matter of scaling up alternative materials and underutilized practices that have been around for decades. And all can be understood in terms of the three chemical reactions that characterize concrete’s life cycle: calcination, hydration and carbonation.

The direct approach: Eliminate emissions from the start

Portland cement is made in giant rotary kilns that carry out the calcination reaction:

calcium carbonate (limestone, chalk; CaCO3) + heat → calcium oxide (quicklime; CaO) + carbon dioxide (CO2).

The carbonate-rich rock is ground up and placed in the kiln along with clay, which fuses with the quicklime and contributes minerals that will eventually help the concrete resist cracks and weathering. The end result is “clinker”: pale, grayish nodules that are ground to make cement powder.
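
The scale of the process emissions follows directly from the stoichiometry. A minimal sketch using only standard molar masses shows how much CO2 calcination releases per ton of feedstock; the arithmetic is textbook chemistry, not an industry figure:

```python
# Stoichiometry of calcination: CaCO3 -> CaO + CO2.
# Molar masses alone show how much CO2 is chemically baked into clinker-making.

M_Ca, M_C, M_O = 40.08, 12.01, 16.00
M_CaCO3 = M_Ca + M_C + 3 * M_O   # ~100.1 g/mol
M_CaO   = M_Ca + M_O             # ~56.1 g/mol
M_CO2   = M_C + 2 * M_O          # ~44.0 g/mol

print(f"CO2 per ton of limestone calcined: {M_CO2 / M_CaCO3:.2f} t")  # ~0.44
print(f"CO2 per ton of quicklime produced: {M_CO2 / M_CaO:.2f} t")    # ~0.79
```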

About 40 percent of a kiln’s CO2 emissions arise from the “heat” term in this equation, and it’s been a tough fraction to cut. Clinker production requires peak temperatures of 1,450 degrees Celsius, hotter than molten lava, and kiln operators have long assumed that the only practical way to get there is to burn coal or natural gas. Biomass like wood doesn’t burn consistently hot enough. And standard electric heaters powered with renewable sources like wind or solar get their heat from electrical resistance in current-carrying wires. “You can’t get much out before the wire just falls apart,” Andrew says.

Yet the industry has now begun to explore all-electric options that can be powered by renewables. In May, for example, the Swedish green-tech firm SaltX Technology demonstrated that it can produce clinker with its Electric Arc Calciner: a proprietary system similar to the plasma torches widely used by automakers and other manufacturers for cutting metal. Plasma torches pass an electric current through a jet of inert gas, typically nitrogen or argon, which ionizes the gas and heats it to temperatures over 20,000 degrees Celsius. In June, SaltX announced a partnership with the Swedish limestone supplier SMA Mineral to accelerate commercialization of its technology.

And in 2021, the German firm HeidelbergCement demonstrated that it could make clinker by replacing the fossil fuels with hydrogen, which burns at over 2,000 degrees Celsius. Hydrogen is mostly made from natural gas at the moment. But it can also be made via the electrolysis of water. So as clean energy prices fall and the generation of lots of hydrogen with green electricity becomes more plausible, Andrew says, the interest of cement companies is growing.

But even then, there’s work to be done before cement-makers around the country and the world can switch over to hydrogen wholesale, says Richard Bohan, who leads the Portland Cement Association’s sustainability efforts. The systems aren’t yet set up for it. “Hydrogen would be great — and right off the bat could reduce our carbon footprint by 40 percent,” he says. “Hydrogen, though, requires infrastructure — either pipelines or a very robust electric grid that in some areas of the country we don’t have yet.” It could help, experts say, if Congress passes proposed measures to expedite energy projects.

To tackle the other 60 percent of cement emissions — the CO2 that’s released on the right-hand side of the calcination reaction — the industry is beginning to revive some old alternatives for cement’s raw materials.

Simply by adding some powdered, unbaked limestone to its final product, for example, a kiln can reduce its carbon footprint by as much as 10 percent. (Limestone alone is relatively inert but will help Portland cement harden when it’s mixed with water.) This Portland-limestone cement is already commonly used in Europe and is now taking off in the United States. “We’re seeing regions of the country where Portland-limestone cements are the predominant material and we’re hearing individual plants say that they’re only going to produce this type from now on,” Bohan says.

Kiln operators are also taking a fresh look at replacing some of their limestone-based cement with mineral-rich industrial waste products. One commonly used example is blast-furnace slag from steel mills, which is rich in calcium and hardens like standard cement when it’s mixed with water. Another is fly ash from coal-fired power plants, which doesn’t harden on its own, but does when it’s mixed with water and standard cement. Either way, the resulting cement yields concrete that is at least as strong and durable as the standard variety, if somewhat more abrasive and slower to cure, while potentially trimming emissions by another 15 or even 20 percent.

Granted, a lot of carbon dioxide was emitted during the original creation of these wastes. But using them in cement doesn’t produce any new carbon dioxide. And two-plus centuries of industrialization have left a substantial backlog of slag and ash, even if we eventually phase out coal entirely. “It’s a win-win. If you have the waste, then replacing your clinker with it is cheaper than producing new clinker,” says Andrew. Indeed, this technique is already widely used in fast-growing countries like Brazil and China, which are producing mountains of slag and ash as they build up their industries.
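
The logic of substitution is simple to sketch: if process emissions scale roughly with clinker content, every ton of slag or fly ash blended in avoids about a ton’s worth of clinker emissions. The per-ton figure below is an assumed round number, not plant data:

```python
# Sketch: if kiln CO2 scales with clinker content, substituting slag or
# fly ash cuts emissions roughly in proportion. Numbers are illustrative.

clinker_co2 = 0.9    # assumed t CO2 per t clinker (process + fuel)

for scm_share in (0.0, 0.15, 0.35, 0.50):   # slag/fly-ash share of the blend
    blend_co2 = clinker_co2 * (1 - scm_share)
    print(f"{scm_share:.0%} substitution -> {blend_co2:.2f} t CO2 per t cement")
```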

By themselves, however, the kinds of substitutions just mentioned can’t cut more than about a fifth of that 60 percent of the total carbon dioxide released on the right side of the chemical reaction. So, with an eye on that 2050 zero-emission goal, industry researchers have been investigating at least half a dozen recipes for alternative cements that could minimize or eliminate the 60 percent — often by eliminating the Portland-cement ingredient that produces it, calcium carbonate.

This is definitely a long-term solution, cautions environmental scientist Jeffrey Rissman, who studies industrial greenhouse gas emissions at Energy Innovation, a climate policy think tank in San Francisco. “These newer technologies are at various stages of R&D and commercialization,” he says. “So they still need more technology refinements to help them scale up and drive down their costs.”

Still, some alternatives are considerably further along than others. Among the best-developed are geopolymers, which are hard materials that result when various oxides of silicon and aluminum are soaked in an alkaline solution such as lye (sodium hydroxide), and respond by linking themselves into long chains and networks. The need to use alkali solutions instead of plain water does make geopolymer cements trickier to handle at construction sites. Even so, they have been successfully used in a number of construction projects. And industry interest has been rising fast over the past decade: Not only do geopolymers have a total carbon footprint as much as 80 percent smaller than ordinary Portland cement, but they are also quite a bit stronger. They are also more resistant to water, fire, weathering and chemicals — which is why geopolymers have been commercially produced since the 1970s for encapsulating toxic wastes, sealing ordinary concrete against the elements, and a variety of other, non-cement applications.

And there is no shortage of raw materials: Silicon oxides and aluminum oxides are abundant in slag and fly ash, and they are found in clay, discarded glass and even agricultural by-products. (Burnt rice hulls are so rich in silica that they’re a respiratory danger to anyone who breathes them in.) So in addition to cutting carbon emissions, the widespread use of geopolymer cement could be a handy way to get rid of quite a few troublesome waste products. 

The indirect approach: Maximize concrete efficiency

Once it reaches the construction site, cement begins to fulfill its intended purpose in the hydration reaction:

cement (CaO and minerals) + water (H2O) + aggregate (sand or gravel) + air → concrete.

The cement, water and aggregate are blended into a thick slurry (or delivered that way in a cement mixer truck), poured into a mold, and left untouched for days or weeks while water and cement react to form concrete. This process also locks in the aggregate, which is included for strength and bulk, along with any reinforcements like steel rebar.

Aside from the transportation required to truck materials to the site, there’s nothing here that generates any further CO2. But the hydration equation does highlight an indirect way of reducing a building’s cement usage, and thus its carbon footprint: Use concrete as sparingly as possible.

Careful attention to concrete efficiency could deliver nearly a quarter of the reductions required to meet the industry’s 2050 zero-emissions goal, according to estimates in the Global Cement and Concrete Association’s climate road map. But that’s not the norm yet, says Cécile Faraud, who leads clean construction efforts at the international climate action group C40 Cities. “Business as usual is, ‘Oh, let’s pour a bit more concrete, just to be on the safe side.’”

That it is, agrees Bohan of Portland Cement — and for good reason: “Contractors, material suppliers, architects and engineers are naturally very risk-averse,” he says, as are the agencies that write building codes. “They want the built environment to last for a very long time” — decades, if not centuries. And, as demonstrated in 2021 in Surfside, Florida, when a 40-year-old high-rise condominium collapsed, killing 98 residents, the consequences of structural failure can be very high.

Still, adds Bohan, attitudes have begun to shift in the face of climate change. “The industry has begun to realize they can have safety, security and resilience, and have a sustainable built environment,” he says. They also have to work with a growing number of climate-conscious cities that are legislating change: In 2016, for example, Vancouver targeted the emissions produced by concrete and other structural materials for a 40 percent reduction by 2030.

Builders and engineers are trying out a lot of ways to economize on concrete without compromising safety. One is through careful design. For example, says Rissman, higher-strength concrete mixes often have a higher cement content — and thus, a larger carbon footprint. “You can reserve those mixes for structural elements like support pillars and use a lower-strength mix for walkways or stairs that don’t need to support heavy weight,” he says.
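
A hypothetical example shows why this kind of mix allocation matters. The cement contents and the building’s volume split below are invented for illustration; only the emissions factor echoes the article’s figure:

```python
# Sketch: carbon saved by matching concrete strength to structural need.
# Mix cement contents and the volume split are illustrative assumptions.

co2_per_kg_cement = 0.6   # kg CO2 per kg cement (article: >0.5 t per t)
high_mix = 400            # kg cement per m3, high-strength mix (assumed)
low_mix = 280             # kg cement per m3, lower-strength mix (assumed)
volume = 1000             # m3 of concrete in a hypothetical building
structural_share = 0.3    # fraction that truly needs the high-strength mix

everywhere = volume * high_mix * co2_per_kg_cement
targeted = volume * (structural_share * high_mix +
                     (1 - structural_share) * low_mix) * co2_per_kg_cement

print(f"High-strength mix everywhere: {everywhere/1000:.0f} t CO2")
print(f"Strength matched to need:     {targeted/1000:.0f} t CO2")
print(f"Saving: {1 - targeted/everywhere:.0%}")   # ~21% with these assumptions
```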

A more high-tech way to achieve a similar result was demonstrated in May by researchers at the Graz University of Technology in Austria, who found that they could reduce a concrete building’s carbon footprint by as much as 50 percent through the use of construction-scale 3D printers. In these systems, which have been attracting worldwide interest in recent years as a fast, affordable way to build homes and other structures from local materials, robot-controlled nozzles extrude streams of wet concrete to build up walls and other elements, layer by layer. The Graz team achieved their savings by using this method to create intricate, void-filled walls and ceilings that placed concrete exactly where it was needed for strength and safety, but nowhere else. The team has also shown that the printers can extrude thin steel wires along with the wet slurry, thus reinforcing those parts of the structure where concrete alone isn’t strong enough — and without the need for conventional steel reinforcement rods, or rebar.

An even higher-tech approach is to use concrete made with water that contains suspended flakes of graphene: a super-strong form of carbon in which the atoms bind to one another in a hexagonal lattice one atom thick. In 2018, a team of researchers at the University of Exeter in the United Kingdom announced that they had used such a graphene suspension to produce concrete that was 146 percent stronger than the conventional variety. If ways can be found to mass-produce graphene at a low-enough price to make its use routine — and lots of groups are working to get those costs down — then the team’s calculations suggest that an entire building made of such concrete would need only about half as much cement as a conventionally built one to achieve the same structural strength. That could have a major impact on CO2 emissions.

There is even a no-tech approach: Keep using the structures we’ve already built for as long as possible. After all, “the more durable your buildings are, the less concrete you will need for new buildings,” says Diana Ürge-Vorsatz, an environmental scientist at the Central European University in Vienna and coauthor of a look at ways to achieve a net-zero construction industry in the 2020 Annual Review of Environment and Resources.

In developed nations like the US, says Ürge-Vorsatz, who is also a vice chair of the emission-mitigation working group of the Intergovernmental Panel on Climate Change, this will require tax policies and other incentives that reward reuse instead of endlessly building what’s shiny and new. And in fast-growing countries like China and India, she says, increasing buildings’ longevity means shifting the focus from speed to quality. “When you just want to expand quickly then you do it the cheapest and fastest way,” she says. “Here in Eastern Europe, we had a big construction rush in the 1960s and ’70s and a lot of those buildings are already crumbling.”

And then there is the no-concrete approach: Completely replace the gray stuff with something more renewable. One emerging option is mass timber: the generic name for a variety of wood products that have been glued or otherwise bonded into giant structural elements that can equal or exceed the performance of concrete and steel. Since its development by Austrian researchers in the early 1990s, mass timber has been widely used in Europe and is drawing increased attention in the US — especially in states like Oregon and Washington that have extensive forests and many idled sawmills. The world’s tallest wood-frame building, an 87-meter apartment-retail tower completed in Milwaukee, Wisconsin, in July 2022, may not hold that distinction long: Taller mass-timber buildings have been proposed — including one that would rise 80 stories over the Chicago waterfront.

The cutting-edge approach: Exploit the carbonation reaction

Appearances to the contrary, concrete is not chemically inert. Even as it starts to harden, for example, it’s already participating in the carbonation reaction:

Ca(OH)2 (in concrete) + CO2 (in air) → CaCO3 + H2O (water vapor)

In effect, says Andrew, this is a spontaneous reversal of the cement-making process: As soon as calcium compounds in the concrete are exposed to CO2 in the air, he says, “they will try to close the loop and form calcium carbonate again.”

This happens rapidly on a fresh concrete surface, adds Andrew, then slows as the carbon dioxide molecules have to diffuse deeper and deeper into the solid mass to find unreacted calcium. But it never stops completely — which means that all those concrete structures scattered around the planet are actually pulling CO2 out of the atmosphere and undoing some of the climate damage they caused. In its road map, the Portland Cement Association estimates that older concrete structures have already absorbed about 10 percent of the CO2 produced to build them. But that’s a deliberately conservative number, says Bohan; other estimates range as high as 43 percent.
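
That slowdown follows textbook diffusion behavior: carbonation depth is commonly modeled as growing with the square root of time, x = k√t. A minimal sketch, with the coefficient k assumed as a typical-order value rather than a measured one:

```python
import math

# Sketch of the diffusion-limited slowdown described above: carbonation
# depth is commonly modeled as x = k * sqrt(t). k is an assumed typical value.

k = 4.0   # assumed carbonation coefficient, mm per sqrt(year)

for years in (1, 10, 25, 50, 100):
    print(f"{years:>3} years: ~{k * math.sqrt(years):.0f} mm carbonated")
```

The first millimeters carbonate within a year or two; reaching ten times that depth takes a century.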

For builders, it’s true, carbonation is often viewed as an enemy to be fought — especially in big, heavy structural elements like foundations, pillars and retaining walls, all of which have to be reinforced with steel rebar. In fresh concrete, which provides an alkaline environment, this steel is surrounded with a protective oxide layer. But in carbonated concrete, the chemistry shifts and dissolves the protective layer. This leaves the steel wide open to rust and corrosion, which can eventually lead to a structure’s collapse.

And yet at least half a dozen startup companies have been launched over the past decade with technologies intended to enhance the carbonation reaction — and thereby make concrete into a significant repository for atmospheric CO2.

One of the best-established of these startups is Nova Scotia-based CarbonCure, which has already sold more than 700 systems for installation at concrete plants worldwide to inject fresh, wet concrete mixes with CO2 captured from industrial sources. The injected CO2 immediately starts reacting with the slurry, filling it within minutes with a blizzard of solid calcium-carbonate nanocrystals. These nanocrystals, in turn, will enhance the strength of the concrete as it cures — meaning, says CarbonCure, that builders can use around 5 percent less Portland cement with no loss of safety margin. Furthermore, the company says that its concrete mix can be used with standard steel rebar, since the solid nanocrystals will not degrade that protective oxide layer the way that atmospheric CO2 does.
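
A rough sketch suggests where most of the climate benefit in such systems comes from: the cement reduction, rather than the small injected dose itself. All per-cubic-meter quantities below are assumptions for illustration, not CarbonCure specifications:

```python
# Rough split of the climate benefit in CO2-injected concrete.
# All quantities below are illustrative assumptions.

cement_per_m3 = 300        # kg cement per m3 of concrete (assumed)
co2_per_kg_cement = 0.6    # kg CO2 per kg cement (article figure)
cement_cut = 0.05          # ~5% cement reduction (article figure)
co2_injected = 0.6         # kg CO2 mineralized per m3 (assumed small dose)

avoided = cement_per_m3 * cement_cut * co2_per_kg_cement   # ~9 kg CO2/m3
print(f"Avoided via cement reduction: {avoided:.1f} kg CO2/m3")
print(f"Stored via injected CO2:      {co2_injected:.1f} kg CO2/m3")
```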

In Los Gatos, California, Blue Planet Systems is hoping to achieve much more dramatic reductions by focusing not on the cement part of concrete but the aggregate: the inert filler of sand or gravel that makes up most of concrete’s volume. The company’s process is proprietary, but the basic idea is to start with any calcium-rich waste product, such as slag or concrete rubble from a demolition site, soak it in a “capture solution” and expose it to the raw flue gas coming out of a cement kiln, power plant, steel mill or any other emission source. The solution helps the calcium ions pull the CO2 directly out of the flue gas and bind it into calcium carbonate.

The end result, after the capture solution is recovered for reuse, is solid nodules that are 44 percent calcium carbonate. When used as aggregate, says Blue Planet, which is constructing its first demonstration plant in Pittsburg, California, these nodules yield a concrete that has bound as much or more carbon dioxide as went into making it — nearly 670 kilograms per cubic meter.

It remains to be seen whether innovations like these can really get the concrete industry to a place where it emits no net carbon dioxide. Yet industry observers and insiders alike find plenty of room for optimism, if only because the momentum for change has built so rapidly. Remember, says Andrew, that as recently as a decade ago there seemed to be no feasible, climate-friendly alternatives to Portland cement at all. The stuff was cheap, familiar and had a huge infrastructure already in place — hundreds of quarries, thousands of kilns, whole fleets of trucks fanning out to deliver pre-mixed concrete slurry to building sites. “So for a long time, decarbonizing cement production was in the ‘too hard’ basket,” he says.

Yet today, says Bohan, “because of this intense attention to the climate issue, people are now going back and saying, ‘Wow, we didn’t realize all these options were available.’”

Editor’s note: This article was updated on November 21, 2022, to clarify the number of systems sold by CarbonCure, and to what types of facilities. CarbonCure has sold more than 700 of its systems, for operation at hundreds of concrete plants worldwide. The article originally said that CarbonCure has equipped 418 Portland cement plants with its systems. A change was also made to clarify that the source of the carbon dioxide used by these systems comes from various industrial sources, not mostly power plants as implied.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Genes, microbes and other factors govern how each person’s body processes nutrients. Understanding the connections could help optimize diets — and health.

For many years, researchers and clinicians assumed that nutrition was a one-size-fits-all affair. Everybody needs the same nutrients from their food, they thought, and a vitamin pill or two could help dispense with any deficiencies.

But now scientists are learning that our genes and environment, along with the microbes that dwell in us and other factors, alter our individual abilities to make and process nutrients. These differences mean that two people can respond to identical diets in different ways, contributing to varied health outcomes and patterns of disease.

Until recently, scientists didn’t fully appreciate that individual metabolic differences can have a big impact on how diet affects the risk for chronic diseases, says Steven Zeisel, director of the Nutrition Research Institute at the University of North Carolina, Chapel Hill. The new knowledge is resolving long-standing mysteries about human health and paving the way toward a world of “precision nutrition,” Zeisel writes in a recent article in the Annual Review of Food Science and Technology.

Although the findings are unlikely to lead all the way to hyper-individualized dietary recommendations, they could help to tailor nutrition to subsets of people depending on their genetics or other factors: Zeisel’s company, SNP Therapeutics, is working on a test for the genetic patterns of 20-odd variants that can identify individuals at risk of fatty liver disease, for example. Knowable Magazine spoke with Zeisel about our developing understanding of precision nutrition.

This interview has been edited for length and clarity.

Why has nutrition lagged behind other research areas in medicine?

Nutrition studies have always had a problem with variability in experimental results. For instance, when infants were given the fatty acid DHA [docosahexaenoic acid], some had an improvement in their cognitive performance and others didn’t. Because some showed improvements, it was added to infant formula. But we didn’t understand why they were responding differently, so scientists continued to debate whether adding it made sense when only 15 percent of children improved and 85 percent showed no response.

The confusion came from an expectation that everybody was essentially the same. People didn’t realize that there were predictable sources of variation that could separate those who responded to something from those who did not. For DHA, it turned out that if the mother had a difference in her genes that made her slow to produce DHA, then her baby needed extra DHA and responded when given it. That gene difference occurs in about 15 percent of women — and, it turns out, it’s their babies that get better when given DHA.

How are researchers starting to make sense of this variability?

Studying differences in human genetics is one way. We conducted a series of studies that found a good deal of variation in the amounts of choline [an essential nutrient] that people required: Men and postmenopausal women got sick when deprived of it, but only half of young women became sick.

We found that some women can make choline because the hormone estrogen turns on the gene to make choline. Other women have a difference in this gene that makes it unresponsive to estrogen. Men and postmenopausal women need to get the nutrient another way — by eating it — because they have minimal amounts of estrogen.

If I had initially done the choline study and chosen only young women participants, I would have found that half needed choline and half didn’t, and I would have had a lot of noise in my data. Now that we can explain it, it makes sense. What seemed to be noisy data can be better described using a precision nutrition approach.

Are there other nutritional conundrums that suggest these sorts of variations are common?

There are some things for which we already know the underlying genetic reasons. For example, there’s a great deal of information on genetic differences that make some people’s cholesterol go up when they eat a high-fat diet while other people’s doesn’t. Researchers are discovering genetic variants that account for why some people need more vitamin D than others to get the same levels in their blood.

Every metabolic step is controlled by such variants. So, when we find people who seem to be responding differently in our studies, that’s a hint that there is some underlying variation. Rather than throwing the data away or saying participants didn’t comply with the study protocol, we can look at the data to discover some of the genetic reasons for these differences. Precision nutrition is really a change in how we do nutrition research, in that we’re starting to identify why some people respond and some don’t.

Besides genetic variants, are there other factors that precision nutrition needs to take into account?

Right now, much of our ability to be more precise comes from better tools to understand genetic variation. But genetics alone doesn’t determine your response to nutrients. It interacts with other factors too.

The microbiome [the community of bacteria and other microbes that live in and on our body] clearly also affects how nutrients work. Most microbiome research until now has been to name the organisms in the gut, but it’s now getting to the point where researchers can measure what microbial genes are switched on, what nutrients are made by gut microbes, and so on. As that research matures, we’ll be able to get much better recommendations than we do now.

Our environment could be a very important factor as well. We’re starting to be able to measure different environmental exposures by testing for thousands of chemicals in a drop of blood. Epigenetics, which is the science of chemical marks placed on DNA to turn genes on and off, will also likely contribute to important differences. It’s been a hard field because these marks vary in different tissues, and we can’t easily get a sample of liver or heart tissue for a nutrition test. We have to track these changes in the bloodstream, and estimate whether they’re the same changes that occurred in the organs themselves.

We’ll have to include each of these factors to improve our predictions of who will or won’t respond to a certain nutrient. Eventually, precision nutrition will draw on all of these inputs; for now, the field is at its early stages.

There are various precision nutrition tests now being sold by different companies. Do they have anything useful to offer?

Right now, most tests look at one gene at a time in a database and say, “You have this gene difference and it makes you more susceptible to something.” But the metabolic pathways for most nutrients are not controlled by a single gene. There may be 10 or 20 steps that all add up to how you respond to sugars, for example, and any one of those steps can cause a problem. Knowing where you have variations all along the pathway can help us predict how likely you are to have a problem metabolizing sugar. It’s more sophisticated, but it’s also harder to do.
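
The contrast Zeisel draws can be pictured as the difference between a single-gene lookup and an additive score across a whole metabolic pathway. A toy sketch follows; the variant names, effect weights and genotype are entirely hypothetical, invented only to illustrate the idea:

```python
# Illustrative sketch: scoring variants across a whole metabolic pathway
# instead of checking one gene. All names and weights are hypothetical.

pathway_variants = {        # variant -> assumed effect weight on sugar metabolism
    "STEP1_rsA": 0.8,
    "STEP2_rsB": 0.3,
    "STEP7_rsC": 1.1,
    "STEP12_rsD": 0.5,
}

def pathway_risk_score(genotype):
    """Sum effect weights over the risk alleles a person carries (0, 1 or 2)."""
    return sum(weight * genotype.get(variant, 0)
               for variant, weight in pathway_variants.items())

person = {"STEP1_rsA": 1, "STEP7_rsC": 2}    # hypothetical genotype
print(f"Pathway score: {pathway_risk_score(person):.1f}")   # 0.8 + 2.2 = 3.0
```

A single-gene test would report only one of these entries; the pathway score aggregates every step where a variant can cause trouble.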

Are there ethical concerns with precision nutrition?

Once I know something about a person’s genetics for nutrition, I may be able to predict that they’re more likely to develop a disease or a health problem. That could change whether an insurance company wants to cover them. We have to try to make that risk clear to people, and also work on improving privacy so their information isn’t available to anybody but them.

The other problem is that wealthier people can afford to do these genetic tests and others can’t. But we can use precision nutrition to find alternate solutions. For instance, women who can’t turn choline production genes on with the hormone estrogen are at higher risk of having babies with neural tube defects and poor brain development. If we need a test for only that one gene difference, a test like that could be reduced to a few dollars and made widely available. Or we might choose to just give everybody choline supplements, if that proves to be a more cost-effective solution.

In the long run, will these discoveries help prevent disease?

There is an advantage in seeking more precise advice for some problems right now. With obesity, for instance, we know that as people gain weight, they develop a group of problems called metabolic syndrome that’s related to the accumulation of fat in the liver. We know that because of genetic differences, about 20 percent of the population is much more likely to develop fatty liver — and is at higher risk for developing these related problems. If we can test for these gene differences, then we can identify those who will benefit the most from changes in diet and weight loss and treat them, either with supplements, drugs or lifestyle changes.

Salt sensitivity is another example. About 10 percent of people have higher blood pressure when they eat high-salt diets. Right now, because we don’t know the metabolic differences that contribute, we say everybody should stay away from salt. But the truth is, only about 10 percent of people are benefiting from that recommendation, and 90 percent are getting bland food that they don’t like. If we could do genetic testing and tell whether a person is salt-sensitive, then they know that effort is worth it for their health. I think that helps to make people comply with recommendations and change their lifestyles.

Unlike some drugs, which have an all-or-nothing effect, nutrition’s effects tend to be modest. But it’s clearly an important, easy intervention. And if we don’t fix a diet, then we have to treat the problems that arise from a bad diet.

Nutrition is always going to be a tough field to get precise results. It isn’t going to be perfect until we can get all the variables identified. Part of what precision nutrition is doing is helping to refine the tools we have to understand these complex systems.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Justice Clarence Thomas and his wife have bolstered conservative causes as he is poised to lead the Supreme Court in rolling back more landmark rulings

U.S. Supreme Court Justice Clarence Thomas at the White House on Oct. 26, 2020. Jonathan Newton/The Washington Post via Getty Images
Neil Roberts, University of Toronto

With the opening of the U.S. Supreme Court’s new session on Oct. 3, 2022, Clarence Thomas is arguably the most powerful justice on the nation’s highest court.

In 1991, when Thomas became an associate justice, only the second African American to serve on the court, his rise to power seemed improbable to almost everyone except him and his wife, Virginia “Ginni” Thomas.

He received U.S. Senate confirmation despite lawyer Anita Hill’s explosive workplace sexual harassment allegations against him.

Today, Thomas rarely speaks during oral arguments, yet he communicates substantively through his prolific written opinions that reflect a complicated mix of self-help, racial pride and the original intent of America’s Founding Fathers.

He isn’t chief justice. John Roberts Jr. is.

But with Thomas’ nearly 31 years of service, he’s the longest-serving sitting justice and on track to have the lengthiest court tenure ever.

June Jordan, pioneering poet and cultural commentator, observed in 1991 when President George H.W. Bush nominated Thomas that people “focused upon who the candidate was rather than what he has done and will do.”

As a scholar of political theory and Black politics, I contend we haven’t learned from this vital insight.

Conservative activism

Thomas’ service is under increasing scrutiny as his wife, a conservative activist, testified on Sept. 27, 2022, before the House committee investigating the Jan. 6 attack on the U.S. Capitol that she still believes false claims that the 2020 election was rigged against Donald Trump.

According to documents obtained by that committee, Ginni Thomas was instrumental in coordinating efforts to keep former President Donald Trump in office. Her efforts included sending emails to not only former White House Chief of Staff Mark Meadows but also state officials in Arizona and Wisconsin.

Of particular concern to the Jan. 6 committee is testimony from Ginni Thomas on her email correspondence with John Eastman, her husband’s former law clerk, who is considered to be the legal architect of Trump’s last-ditch bid to subvert the 2020 election.

In my view, Clarence and Ginni Thomas’ intertwined lives highlight a distressing underside to their personal union: the blurring of their professional and personal lives, which has had the appearance of fracturing the independence of the executive and judicial branches of government.

In this light, Thomas’ sole dissent in the case involving Trump’s turning over documents to the Jan. 6 committee is all the more alarming.

‘What he has done and will do’

Clarence Thomas has cultivated a distinct judicial philosophy and vision of the world – and a view of his place in it.

From what can be gleaned from his own writings and speeches, his vision has been derived from Black nationalism, capitalism, conservatism, originalism and his own interpretations of the law.

Since Thomas’ confirmation, his ideas and rulings have attracted many critics.

Supreme Court Justice Clarence Thomas and Virginia Thomas arrive at White House dinner in 2019. Paul Morigi/Getty Images

But his interpretations of the law are now at the center of the high court’s jurisprudence.

In his concurring opinion of the court’s decision to overturn Roe v. Wade, Thomas argued that the court should reconsider reversing other related landmark rulings, including access to contraception in Griswold v. Connecticut, LGBTQ+ sexual behavior and sodomy laws in Lawrence v. Texas and same-sex marriage in Obergefell v. Hodges.

In short, Thomas’ sentiments reveal a broader ultraconservative agenda to roll back the social and political gains that marginalized communities have won since the 1960s.

The rulings in those cases, Thomas wrote, relied on the due process clause of the 14th Amendment and “were demonstrably erroneous decisions.”

“In future cases,” Thomas explained, “we should reconsider all of this Court’s substantive due process precedents, including Griswold, Lawrence, and Obergefell … we have a duty to ‘correct the error’ established in those precedents.”

Other recent Supreme Court rulings, on Second Amendment rights, Miranda rights, campaign finance regulations and tribal sovereignty, are also evidence of Thomas’ impact on the nation’s highest court.

The long game

In his memoir and public speeches, Thomas identifies as a self-made man.

Though he has benefited from affirmative action programs – and the color of his skin played a role in his Supreme Court nomination – Thomas has staunchly opposed such efforts to remedy past racial discrimination. Like other notable Black conservatives, Thomas argues that group-based preferences reward those who seek government largesse rather than individual initiative.

Apart from the guidance of Catholic Church institutions and of his grandfather Myers Anderson, Thomas claims he earned his accomplishments through effort, hard work and his own initiative.

In a 1998 speech, Thomas foreshadowed his judicial independence and made clear that his attendance before the National Bar Association, the nation’s largest Black legal association, was not to defend his conservative views – or further anger his critics.

“But rather,” he explained, “to assert my right to think for myself, to refuse to have my ideas assigned to me as though I was an intellectual slave because I’m black.”

“I come to state that I’m a man, free to think for myself and do as I please,” Thomas went on. “I’ve come to assert that I am a judge and I will not be consigned the unquestioned opinions of others. But even more than that, I have come to say that, isn’t it time to move on?”

But like many of Thomas’ complexities, his own self-made narrative distorts the ideas of the first prominent Black Republican, who remains one of his intellectual heroes: Frederick Douglass, the statesman, abolitionist and fugitive ex-slave whose portrait has hung on the wall of Thomas’ office.

In “Self-Made Men,” a speech he first delivered in 1859, Douglass disagreed with the idea that accomplishments result solely from individual upliftment.

“Properly speaking,” Douglass wrote, “there are in the world no such men as self-made men. That term implies an individual independence of the past and present which can never exist.”

Law against the people

Thomas’ view of the law is rooted in the originalism doctrine of an immutable rather than living U.S. Constitution.

For Thomas, America since the 1776 Declaration of Independence has been predominantly a republic, where laws are made for the people through their elected representatives. Unlike a pure democracy, where the people vote directly and the majority rules, the rights of the minority are protected in a republic.

Dating back to ancient Rome, the history of republicanism is a story of denouncing domination, rejecting slavery and championing freedom.

Yet in my view, American republicanism has an underside: its long-standing basis in inequality that never intended its core ideals to apply beyond a small few.

Clarence Thomas is seen here with GOP leader Mitch McConnell. Drew Angerer/Getty Images

Thomas claims consistency with America’s original founding.

In my view, Thomas’ perilous conservative activism works against a fundamental principle of the U.S. Constitution – “to form a more perfect union.”

Neil Roberts, Professor of Political Science, University of Toronto

This article is republished from The Conversation under a Creative Commons license.

‘Got polio?’ messaging underscores a vaccine campaign’s success but creates false sense of security as memories of the disease fade in US

For much of the 20th century, Americans were used to seeing people bearing the signs of past polio infection. Genevieve Naylor/Corbis via Getty Images
Katherine A. Foss, Middle Tennessee State University

Got Polio? Me neither. Thanks, Science.

Messages like this are used in memes, posters, T-shirts and even some billboards to promote routine vaccinations. As this catchy statement reminds people of once-feared diseases of the past, it – perhaps unintentionally – conveys the message that polio has been relegated to the history books.

Leonardo DiCaprio meme: ‘Remember that time you got polio? Nope? Me neither. Thanks, Science!’ This pro-science message uses a popular ‘cheers’ meme format.

Phrasing that aims to encourage immunizations by highlighting their accomplishments implies that some diseases are no longer a threat.

Few people today know much about polio. In 2022, only one-third of surveyed adults in the U.S. were aware that polio has no cure. Moreover, a 2020 poll had found that 84% of adults viewed vaccinating children as important, a decline of 10 percentage points from 2001. The COVID-19 pandemic amplified anti-vaccination messaging, while also delaying routine immunization.

Vaccine-preventable diseases are far from eradicated. Measles outbreaks in unvaccinated or under-vaccinated American communities have begun resurfacing in the past few years, despite a 2000 declaration that the virus had been eliminated in the U.S. Pertussis cases have been on the rise, with more than 18,000 cases reported in 2019. And in July 2022, polio reappeared in an unvaccinated New York man – the first U.S. diagnosis since 1979. This case helped return attention to polio, causing at least some young adults to wonder about their own vaccination status.

A shift in focus to immunization in developing countries has further lulled Americans into a false sense of security. While global approaches have been effective and are certainly needed, as the author of “Constructing the Outbreak: Epidemics in Media and Collective Memory,” I suggest that the celebratory messaging is no longer as effective as it once was and runs the risk of making it seem as if polio only lives in history books.

Polio patients at Baltimore’s Children’s Hospital watched television from inside the iron lungs that breathed for them. Bettmann via Getty Images

Campaigning against a devastating disease

Before vaccines, polio – called infantile paralysis or poliomyelitis – was the most feared childhood disease in the U.S. Frequently affecting elementary school kids, the disease sometimes presented like a cold or flu – fever, sore throat and headache. In other cases, limb or spinal pain and numbness first indicated that something was wrong. Paralysis of legs, arms, neck, diaphragm or a combination could occur and, depending on the area affected, render patients unable to walk, lift their arms, or breathe outside of an iron lung.

Full page ads like this one from 1953 solicited funds to help polio patients. March of Dimes

Only time could reveal whether the paralysis was permanent or would recede, sometimes to return decades later as Post-Polio Syndrome. Enough people were infected in outbreaks in the 1930s, 1940s and early 1950s that the effects of paralytic polio were quite visible in everyday life in the form of braces, crutches, slings and other mobility devices.

Thanks to the National Foundation for Infantile Paralysis, beating polio became a national priority. The NFIP grew out of President Franklin Delano Roosevelt’s Warm Springs Foundation. Roosevelt himself had been partially paralyzed by polio, and the NFIP provided funds for public education, research and survivors’ rehabilitation.

Eleanor Roosevelt helped inaugurate the Mothers’ March on Polio to raise money to fight the disease. Bettmann/CORBIS via Getty Images

Its campaigns were prolific and diverse, combining interpersonal and mass communication strategies.

From FDR “Birthday Ball” celebrations to parades and elementary school fundraising competitions, various groups raised money. High schoolers performed polio-themed plays, putting the disease itself on trial in “The People vs. Polio.” People passed around collection boxes at movie theaters and other public gatherings.

An ad placed in Vogue in 1952 laid out the ‘Polio Pledge.’ National Foundation for Infantile Paralysis

Campaigns used every medium. Brochures and short films raised awareness of the threat of polio, emphasizing the need for funding to support patient rehabilitation and scientific research. The National Foundation for Infantile Paralysis generated scores of radio scripts and hired Frank Sinatra, Elvis Presley and other famous voices to read them. Judy Garland, Mickey Rooney, Lucille Ball and other Hollywood stars also joined the fight. Comic strips and cartoons featuring Mickey Mouse and Donald Duck rallied for March of Dimes funds to help polio patients.

Starting in 1946, the NFIP featured children with crutches and braces who had survived polio as “poster children” asking for funds to help them walk again. News stories covered outbreaks and polio epidemics, detailing the devastation of the disease on individuals, families and communities, while advising families how to reduce risk through the “Polio Pledge for Parents,” which provided a list of do’s and don'ts during summer months.

From public enemy No. 1 to success story

The work of the National Foundation for Infantile Paralysis yielded unprecedented and continuous success, providing hospitals with equipment during epidemics and supporting the development of vaccines. Following the largest vaccine trial in history, on April 12, 1955, the Poliomyelitis Vaccine Evaluation Center announced that Jonas Salk’s vaccine was 80%-90% effective against paralytic polio and officially ready for general use.

Once a vaccine was available, people lined up to protect themselves and their families from the virus. Bettmann via Getty Images

Over the next decade, the NFIP shifted its focus to widespread immunization, again using both mass media and local campaigns. With Salk’s vaccine, and then Albert Sabin’s, polio cases fell quickly, from the peak of 57,879 cases in 1952 to only 72 cases in 1965, with the last naturally occurring U.S. case in 1979.

The repeated declaration of what polio vaccines could and were accomplishing was strategically effective in persuading more people to get their shots. The American public of the 1960s and 1970s had lived through repeated polio epidemics and knew both the fear of contracting the disease and its visible aftereffects. As of 2021, 92.7% of Americans were fully protected by the vaccine, though these rates have been in decline since 2010 and fluctuate by region.

Public health rhetoric that focused on this vaccine success story worked around the world in the late 1980s and 1990s. Gradually, though, the perceived threat in the U.S. of polio and other vaccine-preventable diseases dissipated over generations as vaccinations largely eliminated the risk. Most people in developed countries lack firsthand experiences of just how terrifying these diseases are, having never experienced polio, diphtheria, measles or pertussis, or lost family members to them.

At the same time that polio has been largely forgotten in the U.S., anti-vaccination messages have been spreading disinformation that distorts the risk of vaccines, ignoring the realities of the diseases they immunize against.

Rhetoric from polio vaccine campaigns in the 1950s and 1960s emphasized the risks of not getting immunized – acute illness, life-changing pain and paralysis or even death. In the 21st century U.S., immunization campaigns no longer emphasize these risks, and it’s easy to forget the potentially deadly repercussions of skipping vaccines.

I believe pervasive public health messaging can counter anti-vaccination disinformation. A reminder for the American public about this still dangerous disease can help ensure that “Got Polio?” does not become a serious question.

Katherine A. Foss, Professor of Media Studies, Middle Tennessee State University

This article is republished from The Conversation under a Creative Commons license. 

An Easy-to-Make Summer Sweet

(Culinary.net) When it’s beyond hot outside and the kids are begging for a delicious afternoon snack, sometimes it’s difficult to know where to turn. The pantry is full and the refrigerator is stocked, yet nothing sounds appetizing when it’s scorching outside.

Combining three simple ingredients you probably already have in your kitchen can save the day and provide a refreshing and scrumptious snack.

Try this 3-Ingredient Strawberry Ice Cream on warm days ahead. It’s chilled to perfection with fresh strawberries and fluffy whipping cream to create a creamy texture perfect for the kiddos.

Start by pureeing 1 pound of fresh strawberries. Add 1 pint of whipping cream and a can of sweetened condensed milk to a mixing bowl, then beat until stiff peaks form.

Fold the strawberry puree in with the whipping cream mixture. Pour into a loaf pan and freeze for 5 hours.

Before serving, let ice cream soften for 5-10 minutes.

It’s delicious, rich and has sweet strawberry flavor that can help satisfy nearly any sweet tooth. It’s a wonderful treat after long summer days spent playing outside, splashing in the pool or just relaxing, soaking up the sun.

Find more summer dessert recipes at Culinary.net.

If you made this recipe at home, use #MyCulinaryConnection on your favorite social network to share your work.

3-Ingredient Strawberry Ice Cream

Servings: 4-6

  • 1 pound fresh strawberries, stems removed
  • 1 pint heavy whipping cream
  • 1 can (14 ounces) sweetened condensed milk

  1. In blender, puree strawberries.
  2. In bowl of stand mixer, beat whipping cream and sweetened condensed milk until stiff peaks form. Fold in strawberry puree. Pour into loaf pan. Freeze 5 hours.
  3. Before serving, let ice cream soften 5-10 minutes.
SOURCE:
Culinary.net

The controversial technology of reflecting sunlight away from the planet could help blunt the worst impacts of climate change

For decades, climate scientist David Keith of Harvard University has been trying to get people to take his research seriously. He’s a pioneer in the field of geoengineering, which aims to combat climate change through a range of technological fixes. Over the years, ideas have included sprinkling iron in the ocean to stimulate plankton to suck up more carbon from the atmosphere or capturing carbon straight out of the air.

Keith founded a company that develops technology to remove carbon from the air, but his specialty is solar geoengineering, which involves reflecting sunlight away from Earth to reduce the amount of heat that gets trapped in the atmosphere by greenhouse gases. The strategy hasn’t been proven, but modeling suggests it will work. And because major volcanic eruptions can have the same effect, there are some real-world data to anchor the idea.

In the near future, Keith and his colleagues hope to launch one of the first tests of the concept: a high-altitude balloon that would inject tiny, reflective particles into the layer of the upper atmosphere known as the stratosphere. The place and time for the experiment are still to be determined, but it would be a baby step toward showing whether artificial stratospheric particles could help cool the planet the way eruptions do naturally.

But the idea of using a technological fix for climate change is controversial. Talking about — let alone researching — geoengineering has long been considered taboo for fear that it would dampen efforts to fight climate change in other ways, particularly the critical work of reducing carbon emissions. That left geoengineering on the fringes of climate research. But people’s attitudes may be changing, Keith says. He argues that while geoengineering by itself cannot solve the problem of climate change, it could help mitigate the damage if implemented carefully alongside emissions reductions.

In 2000, Keith published an overview of geoengineering research in the Annual Review of Energy and the Environment, in which he noted that major climate assessments up until that point had largely ignored it. Earlier this year, he spoke in Seattle about the current state of the field at the annual meeting of the American Association for the Advancement of Science. Knowable Magazine talked with Keith about how the scientific, technological and geopolitical landscape has changed in the intervening decades.

This conversation has been edited for length and clarity.

Twenty years ago you called geoengineering “deeply controversial.” How has the controversy changed since then?

Back then it was something that a pretty small group of people who thought about climate knew about — and mostly agreed they wouldn’t talk about. And that was it. Now it’s much more widely discussed. I think the taboo is reduced, for sure. It’s certainly still controversial, but my sense is that there has been a real shift. An increasing number of people who are in climate science or in public policy around climate or in environmental groups now agree that this is something we should talk about, even if many think it should never be implemented. There’s even growing agreement that research should happen. It feels really different.

Why was there a taboo against talking about geoengineering, and do you think it was valid?

I think it’s well-intentioned; people are right to worry that talking about geoengineering might reduce the effort to cut emissions. I don’t think this concern about moral hazard is a valid reason not to do research. There were people who argued that we shouldn’t allow the AIDS triple-drug cocktail to be distributed in Africa because it would be misused, creating resistance. Others argued against implementation of airbags, because people would drive faster. There is a long history of arguing against all sorts of potentially risk-reducing technologies because of the potential for risk compensation — the possibility that people will change behavior by taking on more risks. I think it’s an ethically confused argument.

For me, the most serious concern is that some entities — like big fossil-fuel companies that have a political interest in blocking emissions cuts — will attempt to exploit the potential of geoengineering as an argument against emissions cuts. This concern has likely been the primary reason that some big civil-society groups want to block or contain discussion of this stuff so it doesn’t enter more widely into the climate debate. For me the concern is entirely justified, but I think the right answer is to confront it head-on rather than avoiding debate. I don’t want a world where decisions are made by elites talking behind closed doors.

Has the amount of geoengineering research increased in the past two decades?

Dramatically, even in the last couple of years. When I wrote that Annual Reviews paper in 2000, there was virtually zero organized research. There were a few researchers occasionally getting interested and putting in like 1 percent of their time.

Now there are little research programs almost everywhere you care to mention. There’s a Chinese program that’s pretty serious; there’s an Australian one that’s better funded than anything in the United States; there are several in Europe.

What has been the biggest surprise over the past 20 years in how solar geoengineering might work?

The big surprise has been recent results, including two studies I was involved in, showing that the effects of a global solar geoengineering program wouldn’t be as geographically unequal as was feared. What matters for real public policy is who is made worse off.

For one paper published last year in Nature Climate Change, we used a very high-resolution computer model, and we compared, over all the land surface, two worlds: one world where we have two times preindustrial levels of carbon dioxide and the other world where we have enough solar geoengineering to reduce the temperature change by half. For each of the 33 geographical study regions designated by the Intergovernmental Panel on Climate Change, we tried to look at whether solar geoengineering would move a particular climate variable back toward preindustrial levels, which we call “moderated,” or move it further away from preindustrial, which we call “exacerbated.”

We focused on some of the most important climate variables: change in extreme temperature, change in average temperature, change in water availability and change in extreme precipitation. And what we found seems almost too good to be true: There wasn’t a single variable in a single region that was exacerbated. That was a surprise.

In a paper published in March in Environmental Research Letters, we did the same analysis with another model, and we found that with solar geoengineering, everything is moderated in all regions except four. But all four of those are dry regions that get wetter. So my guess is many residents of those regions would actually prefer that outcome because in general people are more worried about getting drier than wetter.

Now, what the model shows may or may not be true in the real world. But if there is a single reason to really look at these technologies and evaluate them in experiments, it’s results like these, showing that you can reduce many, or almost all, of the major climate perturbations without making any region significantly worse off. That’s quite a thing.
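
To make the bookkeeping behind those results concrete, here is a minimal Python sketch of the moderated-versus-exacerbated comparison the studies describe. The classify helper and all the numbers are hypothetical illustrations, not data or code from the papers:

def classify(preindustrial, doubled_co2, geoengineered):
    # "Moderated" means solar geoengineering moves the variable back toward
    # its preindustrial value relative to the 2xCO2-only world;
    # "exacerbated" means it moves further away.
    if abs(geoengineered - preindustrial) < abs(doubled_co2 - preindustrial):
        return "moderated"
    return "exacerbated"

# One hypothetical region; the studies ran this comparison over model
# output for all 33 IPCC regions and several climate variables.
region = {
    #                     (preindustrial, 2xCO2 alone, 2xCO2 + geoengineering)
    "mean_temp_c":        (14.0, 17.0, 15.4),  # warms 3.0 C alone, 1.4 C with SG
    "water_availability": (1.00, 0.80, 1.25),  # dries alone; overshoots wetter with SG
}
for variable, (pre, co2, geo) in region.items():
    print(variable, "->", classify(pre, co2, geo))
# mean_temp_c -> moderated
# water_availability -> exacerbated (a dry region that gets wetter, like the
# four exceptions in the second study)
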

How would your planned real-world experiment, known as the Stratospheric Controlled Perturbation Experiment (SCoPEx), work?

SCoPEx is a stratospheric balloon experiment to put aerosols in the stratosphere and measure their interaction over the first hours and the first kilometer or so after release in a plume. It involves a high-altitude balloon that will lift a gondola carrying a package of scientific instruments to an altitude of 20 kilometers. It will release a very small amount of materials such as ice, calcium carbonate (essentially powdered limestone) or sulfuric acid droplets known as sulfates. The gondola will be fitted with propellers that were originally made for airboats so that it can fly through the plume of released materials to take measurements.

The amount of released material will be on the order of 1 kilogram, which is far too small to have any direct health or environmental impact once released. The goal is not to change climate or even to see if you can reflect any sunlight. The goal is simply to improve our models of the way aerosols form in the stratosphere, especially in plumes, which is very relevant for understanding how solar geoengineering would work. We hope to launch the experiment soon. But when and where that will happen depends on balloon availability and recommendations from an advisory committee.

We know there are health risks related to sulfuric acid pollution in the lower atmosphere. Are there potential health risks from injecting sulfate aerosols into the stratosphere?

Anything we put in the stratosphere will end up coming down to the surface, and that’s one of the risks we must consider. A full-scale solar geoengineering program might involve injecting around 1.5 million tons of sulfur and sulfuric acid into the stratosphere per year. This could be done using a fleet of aircraft; roughly 100 aircraft would need to continuously fly payloads up to about 20 kilometers (12 miles) altitude. You would not be wrong to think this sounds crazy. We know that sulfuric acid pollution in the lower atmosphere kills many people every year, so putting sulfuric acid into the stratosphere is obviously a risk. But it’s important to understand how much 1.5 million tons a year really is.

The 1991 eruption of Mount Pinatubo, in the Philippines, poured about 8 million tons of sulfur into the stratosphere in a single year. It cooled the climate and had implications for all sorts of systems. Current global emissions of sulfur are about 50 million tons a year into the lower atmosphere, and that kills several million people every year through fine particulate air pollution. So the relative risk from solar geoengineering is fairly small, and it has to be weighed against the risk of not doing solar geoengineering.
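
For a rough sense of scale, the ratios implied by those figures are easy to check. The quantities come from the interview; the few lines of Python below are just illustrative arithmetic:

# Sulfur quantities cited above, in metric tons.
geoengineering_per_year = 1.5e6   # hypothetical full-scale program, stratosphere
pinatubo_1991 = 8e6               # Mount Pinatubo eruption, roughly one year
global_emissions_per_year = 50e6  # current emissions into the lower atmosphere

print(f"vs. Pinatubo: {geoengineering_per_year / pinatubo_1991:.0%}")
print(f"vs. global emissions: {geoengineering_per_year / global_emissions_per_year:.0%}")
# -> about 19% of Pinatubo's one-year injection, and 3% of annual global
#    sulfur emissions into the lower atmosphere.
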

How quickly could a full-scale solar geoengineering program get off the ground?

It could happen very fast, but all the ways it happens very fast are bad cases, basically where one country just jumps on it very quickly. It’s obvious that what would be best is for countries not to just start doing it but to articulate clear plans and build in checks and balances and so on. 

If there were much wider research over the next half-decade to decade — which is possible because attitudes really are changing — then it’s plausible that some coalition of countries could begin to inch toward real implementation with serious, visible plans that can be critiqued by the scientific community starting by the end of this decade. I don’t expect it will happen that fast, but I think it’s possible.

How does geoengineering fit in with other efforts to combat climate change such as reducing fossil-fuel emissions and removing carbon from the air?

The first, and by far the most important, thing we do about climate change is decarbonizing the economy, which breaks the link between economic activity and carbon emissions. There’s nothing I can say about solar geoengineering that changes the fact that we have to reduce emissions. If we do not do that, we’re done.

Then carbon removal, which involves capturing and storing carbon that has already been emitted, could break the link between emissions and the amount of carbon dioxide in the atmosphere. Large-scale carbon removal really makes sense when emissions are clearly heading toward zero and we’re getting into the harder-to-mitigate parts of the economy. And then solar geoengineering is a thing that might partially and imperfectly weaken, but not break, the link between the amount of carbon dioxide in the atmosphere and climate changes — changes in sea level, changes in extreme events, changes in temperature, etc.

So if you look at the curve of overall greenhouse gases in the atmosphere, you can think of emissions cuts as flattening the curve. Carbon removal takes you down the other side of the curve. And then solar geoengineering can cut off the top of the curve, which would reduce the risk of the carbon dioxide that is in the air already.

Some people think we should use it only as a get-out-of-jail-free card in an emergency. Some people think we should use it to quickly try to get back to a preindustrial climate. I’m arguing that we use solar geoengineering to cut the top off the curve by gradually starting it and gradually ending it.

Do you feel optimistic about the chances that solar geoengineering will happen and can make a difference in the climate crisis?

I’m not all that optimistic right now because we seem to be so much further away from an international environment that’s going to allow sensible policy. And that’s not just in the US. It’s a whole bunch of European countries with more populist regimes. It’s Brazil. It’s the more authoritarian India and China. It’s a more nationalistic world, right? It’s a little hard to see a global, coordinated effort in the near term. But I hope those things will change.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.