Saturday, April 22, 2023

What will it take to recycle millions of worn-out EV batteries?


In Nevada and other US states, entrepreneurs are anticipating the coming boom in retired lithium-ion batteries from electric cars and hoping to create a market for recycled minerals

Thirty miles east of Reno, Nevada, past dusty hills patched with muted blue sage and the occasional injury-lawyer billboard, a large concrete structure rises prominently in the desert landscape. When fully constructed, it will be a pilot for a business that entrepreneurs envision as a major facet of America’s future green economy: lithium-ion battery recycling.

Construction manager Chuck Leber points out bays where trucks will drop off batteries, and deep drains in rooms to catch leaking chemicals. He shows me a two-foot concrete slab under the building — a hefty foundation so that workers can move equipment and adapt the plant while refining the recycling process. Later this year, the first batteries will pass through the facility; the goal is to ramp up to handle 20,000 metric tons of batteries a year.

The 60,000-square-foot plant owned by the American Battery Technology Company is an optimistic endeavor to address the inconvenient environmental downside of electric vehicles — their resource-demanding battery packs. It is also a test of whether business leaders can live up to their promises to help build a circular economy: one in which materials are reused indefinitely, minimizing the need to continually pry more minerals from the earth.

Since 2019, electric vehicles — EVs — have more than tripled their share of the auto market, and 6.6 million were sold globally in 2021. Facing pressure and sometimes outright regulation to reduce their climate footprint, many automakers have pledged to stop sales of new combustion-engine vehicles by 2040. “In five years, we are aiming for having tens of millions of electric vehicles on the roads,” says Alexandre Milovanoff of the sustainability consulting company Anthesis Group, who has studied how an EV transition would affect America’s electrical grid. “We’re talking about a market that is exploding.”

To feed the rising EV battery demand, the US government and companies are investing in domestic mining for the needed minerals — including nickel, manganese, cobalt and lithium (the price of which more than quadrupled in 2021). But they are also looking for ways to reduce dependence on newly mined materials through recycling. In March 2022, President Joe Biden invoked the Defense Production Act to bolster supplies of the in-demand minerals, directing domestic investments both in mining and in other forms of recovery.

Researchers say that figuring out recycling could help to avoid the environmental risks of more mining and a buildup of hazardous battery waste — but reprocessing these batteries and refining the metals they contain for reuse is difficult and costly, and many remain skeptical of how truly circular that supply chain can ever be. “An electric vehicle battery is a very complex piece of technology with a lot of different components in it — so a recycling facility is going to be very complicated,” says Michael McKibben, a geologist at the University of California, Riverside. “In the long run, that’s going to be important, but in the short run, it’s got a ways to go.”

Sourcing specific minerals

To power a car, electrons in the battery move from the negative electrode, the anode, to the positive electrode, the cathode. Typically, the anode is made of copper and graphite, while the cathode consists of a class of compounds called lithium metal oxides — ones that contain lithium plus other metals such as cobalt, manganese and nickel.

All of these metals must be sourced — and recycling alone cannot yet meet market needs. Though the US has numerous copper mines (and obtains a sizable chunk of copper from scrap recycling), nearly all of the other metals in lithium-ion batteries come from mines in other countries. More than 80 percent of global lithium comes from Chile, Australia and China, while more than 60 percent of cobalt comes from the Democratic Republic of Congo.

This overseas reliance can come with costs. Much of the lithium mined today, for example, comes from the fragile Atacama Desert in Chile, where the metal is recovered by evaporating salty brine in massive ponds. It’s cost-effective, but researchers and local communities have raised concerns over toxic wastes and the depletion and contamination of water supplies; by one estimate, it takes 500,000 gallons of water — largely lost to evaporation — to concentrate a single metric ton of lithium. Sourcing battery metals also has been connected with human rights abuses in some locations, such as cobalt mining in the Democratic Republic of Congo, where companies have been accused of using child labor, paying workers poorly and failing to provide basic safety equipment.

There’s also a greenhouse-gas price to pay for the long-distance transport of materials: Before anyone even gets in the driver’s seat of a brand-new electric vehicle, some EV battery materials have already traveled tens of thousands of miles. (Still, electric vehicles — with few exceptions — have a smaller carbon footprint than gas-powered cars, and electrifying transportation is key to slashing carbon emissions to stave off disastrous levels of climate change.)

For now, mining remains necessary, and researchers think it’s possible to reduce its impacts through domestic operations and new technologies. But they say it is also crucial to ramp up the technology and business models for recycling. After thousands of charges and discharges, cells of lithium-ion batteries dry up and cracks form in the cathode materials, until the battery can neither hold nor deliver enough charge. Millions of EV batteries will soon be reaching this point, and if they’re deposited at the dump, they can leach toxic chemicals and even catch fire. A few US companies collect batteries for recycling, but this capacity lags behind the volume of spent lithium-ion batteries from cars, phones, computers and other electronics. In 2019, US recycling companies diverted only about 15 percent of all retired lithium-ion batteries from landfills.

The challenges of recycling

Profitability is a major barrier. Though lithium-ion batteries contain valuable metals, they are challenging to take apart and the minerals are hard to extract from the tight layers of inorganic and organic compounds. By one estimate, the cost of recycled lithium is five times that of virgin lithium from brine-mining. Compare that with lead-acid batteries in combustion cars, which are almost entirely diverted from landfills and recycled. “It’s easy as pie to recycle a lead-acid battery in comparison to a lithium-ion battery,” says geologist Jens Gutzmer, director of the Helmholtz Institute Freiberg for Resource Technology in Germany and coauthor of an article about building a circular metals economy in the Annual Review of Materials Research.

Another problem is that today’s main lithium-ion battery recycling processes are not particularly efficient. A process used by many recyclers, pyrometallurgy, involves melting down the batteries and burning off plastic separators to extract the coveted metals. Pyrometallurgy is energy-intensive, emits toxic gases and can’t recover some valuable minerals, including lithium, at all.

With growing EV sales, a massive wave of dead electric car batteries will soon exacerbate recycling problems. By 2028, researchers predict that the world will have more than a million metric tons of them to deal with. “I like to compare it to the plastic industry — we have a lot of plastic waste, and people are not really dealing with that — and I’m just worried that this will be happening also with batteries,” says Laura Lander, a materials scientist at King’s College London. And yet if it could be made profitable, scaling up EV battery recycling could, by 2040, reduce the global need for newly mined lithium by 25 percent, and for cobalt and nickel by 35 percent, according to one report prepared by the Institute for Sustainable Futures at the University of Technology Sydney in Australia.

Efforts to improve battery recycling are underway at the Department of Energy’s ReCell Center, a collaboration with national labs and universities launched in 2019. There, researchers are working to scale up what’s called “direct recycling.” This method aims to recapture the cathode material — a carefully manufactured powder — without melting or dissolving the whole battery and destroying the powder in the process. “They put a lot of time and effort into making these beautiful, spherical particles that are about 10 microns in diameter with the right crystal structure,” says Albert Lipson, a materials scientist at Argonne National Laboratory and the ReCell Center.

Lipson’s research team developed a chemical process to successfully recover cathode powder, which can then be rejuvenated by adding fresh lithium — returning the charging capacity that was lost as the original battery aged. The direct recycling method could make it more profitable to recover battery components while producing fewer greenhouse gas emissions than other recycling processes that use energy-intensive steps to re-manufacture cathode materials. (These involve, at one point, putting the material in a massive furnace.) ReCell’s direct recycling is being done only in laboratory-size batches right now, but Lipson says his team is working with companies to scale up the process.

Battery recycling startups, for their part, are primarily using a technique called hydrometallurgy that dissolves the batteries in acid. Liquid solvents are then used to extract the minerals. Though hydrometallurgy isn’t new, the recycling companies — including American Battery Technology Company and Redwood Materials, both based in northern Nevada and headed by former Tesla engineers — say they are making the process more efficient and recovering more material than in the past.

Ryan Melsert, who heads American Battery Technology, says his time at Tesla’s Gigafactory — a gargantuan facility assembling batteries outside Sparks, Nevada — clued him in to ways to improve the recycling technology. Instead of shredding batteries as old-school hydrometallurgical recycling does, his company will use machines to break down used batteries, and then will separate and sell the lower-value components such as plastic, aluminum and steel. Proprietary chemical reactions will then extract nickel, cobalt, manganese and lithium.

“Instead of just dropping a battery in a furnace or a shredder,” Melsert says, “what our team has done is essentially take many of the same techniques we developed on the manufacturing side and we now operate them in reverse order.” He says the process can recover more than 90 percent of the high-value elements.

Getting the batteries from cars

But to recycle batteries, these startups will need to ensure that the packs make it to their facilities to begin with — a challenge in and of itself because facilities that process junked cars today don’t have protocols for EVs, including how to handle the batteries. Melsert says his company hopes to build on new partnerships with General Motors, Ford and Stellantis (which owns several brands including Dodge, Jeep and Maserati) to ensure that when a car is traded in, the battery will be sent for recycling. And Redwood Materials has announced collaborations with Volkswagen, Toyota, Ford and other automakers on battery collection and recycling.

As I walk with Leber, the construction manager, across American Battery Technology’s future recycling facility, he shows me where the finished goods warehouse will be located, across the building from the truck bays. During the first phase of operation, these finished goods will consist of “black mass,” a crumbly mixture of the valuable metals that will be sold to smelter companies for further refining and resale to battery manufacturers. Eventually, the company plans to add a second stage that will further refine this mix into separate minerals on-site.

American Battery Technology, Redwood Materials, Retriev Technologies and Canada-based Li-Cycle — the four main builders of EV battery recycling capacity in the US — all present visions on their websites of striving toward an infinitely recyclable supply chain. But is truly infinite reuse of battery minerals possible? Experts like Milovanoff and Gutzmer say that’s unlikely, given barriers like labor costs and energy needs. Still, it is technically possible to scale up and recycle more than 90 percent of the lithium, cobalt, nickel and copper in batteries, Lipson says — as long as the economics works out.

Ultimately, the success of battery recycling rests on whether it can be done cheaply enough. Even with improved technology, recyclers may face difficulties making their products cost-competitive with virgin minerals, says Aimee Boulanger, executive director of the Initiative for Responsible Mining Assurance, a coalition that works with companies to improve environmental and labor standards of mining projects. Incentives and regulations may also be needed: In the European Union, regulators have proposed guidelines for sustainable batteries that would require them to contain a proportion of recycled materials.

Melsert is optimistic. He thinks that since most battery minerals are mined internationally now, the transportation and import costs of virgin minerals will make domestically recycled materials competitive — a calculation supported by some research. In about another two years, he hopes to start building a facility that’s an order of magnitude larger to keep up with growing EV sales. And with demand for minerals outpacing what recycling will, for now, be able to provide, his company also has stakes in mining lithium in central Nevada.

“Some of the largest companies in the world are buying as much recycled battery metals as available,” he says. “The challenge, right now, is really about who can scale up the quickest.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

SpaceX launches most powerful rocket in history in explosive debut – like many first liftoffs, Starship’s test was a successful failure

Starship, the most powerful rocket ever built, launched from a spaceport in Texas. AP Photo/Eric Gay
Wendy Whitman Cobb, Air University

On April 20, 2023, a new SpaceX rocket called Starship exploded over the Gulf of Mexico three minutes into its first flight ever. SpaceX is calling the test launch a success, despite the fiery end result. As a space policy expert, I agree that the “rapid unscheduled disassembly” – the term SpaceX uses when its rockets explode – was a very successful failure.

The full Starship stack comprises a Starship spacecraft (in black) on top of a rocket dubbed Super Heavy (in silver) and is nearly 400 feet (120 meters) tall. Hotel Marmot/Flickr, CC BY-SA

The most powerful rocket ever built

This launch was the first fully integrated test of SpaceX’s new Starship. Starship is the most powerful rocket ever developed and is designed to be fully reusable. It is made of two different stages, or sections. The first stage, called Super Heavy, has 33 individual engines and provides more than twice the thrust of a Saturn V, the rocket that sent astronauts to the Moon in the 1960s and 1970s.

The first stage is designed to get the rocket to about 40 miles (65 kilometers) above Earth. Once Super Heavy’s job is done, it is supposed to separate from the rest of the craft and land safely back on the surface to be used again. At that point the second stage, called the Starship spacecraft, is supposed to ignite its own engines to carry the payload – whether people, satellites or anything else – into orbit.

An explosive first flight

While parts of Starship have been tested previously, the launch on April 20, 2023, was the first fully integrated test with the Starship spacecraft stacked on top of the Super Heavy rocket. If it had been successful, once the first stage was spent, it would have separated from the upper stage and crashed into the Gulf of Mexico. Starship would then have continued on, eventually crashing 155 miles (250 kilometers) off of Hawaii.

During the SpaceX livestream, the team stated that the primary goal of this mission was to get the rocket off the launch pad. It accomplished that goal and more. Starship flew for more than three minutes, passing through what engineers call “max Q” – the moment at which a rocket experiences the most physical stress from acceleration and air resistance.

The Starship spacecraft and Super Heavy rocket were unable to separate during the flight, so engineers blew up the full rocket. AP Photo/Eric Gay

According to SpaceX, a few things went wrong with the launch. First, multiple engines went out sometime before the point at which the Starship spacecraft and the Super Heavy rocket were supposed to separate from each other. The two stages were also unable to separate at the predetermined moment, and with the two stages stuck together, the rocket began to tumble end over end. It is still unclear what specifically caused this failure.

Starship is almost 400 feet (120 meters) tall and weighs 11 million pounds (4.9 million kilograms). An out-of-control rocket full of highly flammable fuel is a very dangerous object, so to prevent any harm, SpaceX engineers triggered the self-destruct mechanism and blew up the entire rocket over the Gulf of Mexico.

All modern rockets have mechanisms built into them that allow engineers to safely destroy the rocket in flight if need be. SpaceX itself has blown up many of its own rockets during testing.

Success or failure?

Getting to space is hard, and it is not at all unusual for new rockets to experience problems. In the past two years, both South Korea and Japan have attempted to launch new rockets that also failed to reach orbit. Commercial companies such as Virgin Orbit and Relativity Space have also lost rockets recently. None of these were crewed missions, and in most of these failed launches, flight engineers purposefully destroyed the rockets after problems arose.

SpaceX’s approach to testing is different from that of other groups. Its company philosophy is to fail fast, find problems and fix them with the next rocket. This is different from the more traditional approach taken by organizations such as NASA that spend far more time identifying and planning for possible problems before attempting a launch.

The traditional approach tends to be slow. The development of NASA’s Space Launch System – the rocket that will take astronauts to the Moon as part of the Artemis program – took more than 10 years before its first launch this past November. SpaceX’s method has allowed the company to move much faster but can be costlier because of the time and resources it takes to build new rockets.

SpaceX engineers will look to identify the specific cause of the problem so that they can fix it for the next test launch. With this approach, launches like this first Starship test are successful failures that will help SpaceX reach its eventual goal of sending astronauts to Mars.

Wendy Whitman Cobb, Professor of Strategy and Security Studies, Air University

This article is republished from The Conversation under a Creative Commons license. 

The Supreme Court rules mifepristone can remain available – here’s how 2 conflicting federal court decisions led to this point

The Supreme Court is the latest court to take up the question of regulating a medication used for abortions. Kent Nishimura/Los Angeles Times via Getty Images
Naomi Cahn, University of Virginia and Sonia Suter, George Washington University

The U.S. Supreme Court issued an emergency ruling on April 21, 2023, that allows continued access to the abortion pill mifepristone in states where abortion is legal.

The court’s decision, which included few details and only indicated that Justices Clarence Thomas and Samuel Alito did not concur, follows a whirlwind legal process about whether people should be able to purchase mifepristone, one of two drugs used in a two-dose series for inducing a medical abortion.

On April 7, two federal district court judges halfway across the country from each other issued conflicting rulings about the validity of the Food and Drug Administration’s approval of mifepristone.

Within a week, yet another court issued a third opinion, which allowed mifepristone to continue to be prescribed, but under more limited circumstances. Two days after that, on April 14, the U.S. Supreme Court issued a fourth, divergent opinion, albeit a temporary one, maintaining that the drug should be kept available while the court considered the most recent emergency ruling.

As scholars of reproductive justice, we have been carefully following these cases to make sense of what they mean for the FDA’s authority to approve drugs – and where that leaves access to medication abortion, which is used in more than half of all abortions today.

One issue that confuses many people is how different courts can rule in contradictory ways.

But in fact, there are many instances when federal courts in one part of the country hand down decisions that conflict with those of other jurisdictions.

The federal system

It’s first useful to understand how the federal court system in the U.S. works. State-run court systems are entirely separate from the federal judicial system, which is where the mifepristone rulings are playing out.

Federal courts handle a variety of issues, including those relating to the United States government, the Constitution or federal laws, or controversies between states or between the U.S. government and foreign governments.

There are 94 federal district courts, organized into 12 regional circuits. The district courts are trial courts, where cases are presented to a judge or jury. Their decisions are bound by the legal doctrine established by their respective circuit courts, which handle appeals of cases from their constituent district courts. All of these courts are bound by Supreme Court decisions.

If there is no prior ruling to establish a precedent on the matter, federal district court judges can issue rulings based on their independent legal judgment. Consequently, district courts in different circuits can end up issuing separate rulings that contradict each other.

It’s relatively common for differences to arise between district courts – or even for different circuit courts to rule differently on appeals in similar cases.

Only the Supreme Court can issue an opinion that binds all circuits. So when there are disagreements between circuit courts, the Supreme Court can step in and make a decision for the whole country.

For example, the 6th Circuit, which serves Kentucky, Ohio, Michigan and Tennessee, upheld same-sex marriage bans in all four states in 2014. By then, four other circuits had reached the opposite result and struck down same-sex marriage bans. This set up, as one commentator explained, an “almost certain review by the Supreme Court,” particularly because this was “an issue of fundamental constitutional significance.”

Until the Supreme Court decided the issue in 2015, however, same-sex marriage was legal in some states, but not in others.

Other examples

There are many other examples where federal circuit courts disagree.

In 2018, the 7th Circuit Court of Appeals, which serves Illinois, Indiana and Wisconsin, ruled that an Indiana state law that banned abortions based on genetic anomalies was not constitutional. The Supreme Court decided not to take Indiana’s appeal of that ruling.

But in 2021, the 6th Circuit Court of Appeals upheld an Ohio law banning abortions based on one kind of genetic anomaly, Down syndrome. That created a circuit-court split of a sort usually resolved by the Supreme Court.

However, the Dobbs decision, which resolved a different abortion case, essentially dissolved the conflict by holding that the U.S. Constitution does not prevent states from banning abortions for any reason: They simply must show a “rational basis” that “would serve legitimate state interests.”

One other thing that confuses many people is how district courts can issue orders that go beyond the borders of their districts, and even their circuits, sometimes applying nationally. There is some scholarly dispute about this. Nevertheless, many judges have issued nationwide rulings on a wide range of issues, including migrant protection protocols, loan forgiveness and mask-wearing mandates.

The case of mifepristone

With this latest example of courts butting heads, Federal District Judge Matthew Kacsmaryk in Texas ruled first, on April 7. His decision took the form of a preliminary injunction, which is essentially a temporary ruling, until the court has a chance to go through a full trial. Kacsmaryk concluded that the FDA had exceeded its authority in approving mifepristone in 2000 and in loosening the prescribing restrictions over the years. As a result, he ruled that the drug’s approval should be revoked entirely.

Within an hour of Kacsmaryk’s ruling, Federal District Judge Thomas Rice in Washington state issued a contradictory ruling, which was also a preliminary injunction, declaring that the FDA’s approval of the drug and its uses should not be revoked.

While Kacsmaryk’s ruling applied nationwide, Rice’s ruling applied only to the 17 states and the District of Columbia that were the plaintiffs in the suit he was handling. He noted that he had authority to make his ruling nationwide, but he also had discretion to limit the reach of the ruling to the parties that brought suit.

Where the issues stand

The Supreme Court’s ruling means mifepristone will remain as widely available as it was before. Fifteen states already restrict access to medication abortions.

“As a result of the Supreme Court’s stay, mifepristone remains available and approved for safe and effective use while we continue this fight in the courts,” President Joe Biden said in a White House statement.

But that decision is only in effect while the case is being decided by the 5th Circuit. Undoubtedly, that decision will be appealed to the Supreme Court again.

So far, no one has appealed the Washington district court opinion, although a potential future Supreme Court ruling after the 5th Circuit decision would also affect that case’s outcome. And the situation gets even more complicated, with a third lawsuit filed in a federal court in Maryland on April 19. That case was brought by GenBioPro, the manufacturer of a generic version of mifepristone, which the FDA approved in 2019. GenBioPro is seeking to preserve the approval of its drug, despite all the conflicting and confusing court rulings.

Although the Supreme Court majority said that it had hoped that the Dobbs opinion would end federal battles over abortion rights, there is more confusion and conflict than ever, in every corner of the country. And the confusion may continue for a while.

Naomi Cahn, Professor of Law, University of Virginia and Sonia Suter, Professor of Law, George Washington University

This article is republished from The Conversation under a Creative Commons license.

The ancient pathogens in old graves are as dead as the people they once infected. Still, they tell a vivid tale.

From the Black Death to the Spanish flu, waves of infectious disease have repeatedly laid waste to human populations. Scientists from many disciplines have long been intrigued by the possibility of disclosing the exact identity of the responsible pathogens and figuring out what made them so deadly. Yet even after sequencing ancient DNA became possible, the omnipresence of microbes made it challenging to pinpoint the historical culprits.

New technology has now made it much easier and cheaper to sequence large amounts of DNA. And by tracking the damage that accumulates in genetic material as it ages, researchers have found ways to distinguish truly old DNA from that of modern contaminants, finally allowing them to identify the pathogens behind infamous scourges.

One of the pioneers of the field of microbial archaeology is geneticist Johannes Krause, founding director of the Max Planck Institute for the Science of Human History in Jena, Germany. Earlier this month, he published a paper in Nature Communications tracing the spread of the Black Death, which killed half the European population — 30 million to 50 million people — in less than five years, starting in 1347. Krause and coauthors examine the challenges and revelations to be had in exploring ancient pathogens in recent issues of the Annual Review of Microbiology and the Annual Review of Genomics and Human Genetics.

This interview has been edited for length and clarity.

The job of the average archaeologist, to uncover the ancient remains of humans and all of their artifacts, is hard enough. But how do you find microbes that infected people thousands of years ago?

We extract all the DNA we can get from those same human remains, often fossilized teeth or bone, and we sequence it. This allows us to distinguish human DNA from the DNA of the pathogens we’re looking for, and then to try and reconstruct their genomes. This way, we are building a molecular fossil record that can tell us how pathogens have changed through time. And that provides important information about the biology of the microbial villains that have caused major epidemics in the past.

Ancient DNA is often highly fragmented. How do you know which bits of the genome go where?

There are different ways of doing this. You can try to let the computer put the pieces together based on overlaps. But like a jigsaw puzzle, that can be challenging when pieces are missing. So that’s when we need to look at the puzzle box, so to speak, and try to fit the fragments to the DNA of a modern relative instead. That means it is all but impossible to discover a new species, or to recognize a species with genes that mutate very fast, as the sequences may have changed so much we have no idea what it is.

The first thing many people might think of when they hear the words “microbial” and “archaeology” in the same sentence is pathogens escaping from ancient graves, “curse of the pharaohs”-style. Is this something you need to take precautions for?

It is certainly something we thought about early on. There have been some studies, in the 1980s, where people tried to grow ancient bacteria or viruses. But nobody has been able to revive a pathogen that is more than a hundred years old, so I think it is very unlikely that this will happen.

There also is not a single case in which anybody got infected from an old skeleton, and there are thousands of archaeologists and anthropologists in the world handling ancient human bones on a daily basis. These people often don’t use gloves, and some of them even touch tiny fragments with the tongue to find out whether the fragments are made of stone or bone — bone is a spongy material, so it will take up liquid from your tongue and stick to it.

The pathogens really appear to be as dead as the person is.

So the largest risk, in fact, may be the reverse: Ancient tissues of people who died from a disease you’re interested in could be contaminated by other microbes that interfere with the analysis?

Yes. Microbial DNA is everywhere — ancient tissue samples usually contain up to 99 percent microbial DNA, much of it modern. With the older approaches, almost everything used to show up as positive for the bacteria causing tuberculosis, for example — even stones or plants. That is in part because many pathogens have harmless relatives that are not in our databases yet.

So it is extremely important to make sure that DNA is indeed from the past. We have developed several approaches to do so, including one that looks at DNA damage. In 2011, we were able to show that the damage patterns in ancient bacterial DNA were identical to those we see in human DNA of the same age. That was the first time we could authenticate ancient bacterial DNA, and it changed the field. Now, if DNA does not have this damage, we don’t believe it is old.

When deciding on the first pathogen to target using the brand new ancient DNA toolbox you developed, how did you choose, as the saying goes, between plague and cholera?

Our main motivation to study plague was that when we started this research, it wasn’t really clear what had caused the Black Death. There was much discussion among historians on whether it was some sort of virus, or a disease that is unknown today. An important advantage was that we had access to 50 bodies from the famous East Smithfield cemetery in London, which was used only during the Black Death pandemic, leaving little doubt about what the people buried there had died from. In about half of these people, we could identify the plague bacterium Yersinia pestis. So that was very likely the cause.

Does your research also reveal where the Black Death may have come from, originally?

The oldest historical records are from a city called Kaffa in Crimea, a region that was often disputed in the past, as it is today. In the first half of the 14th century, it was a Genoese colony, besieged from the east by the Golden Horde. According to historians, the assailants ended up bombarding Kaffa with dead bodies, which may have spread the disease within the city. This forced the Genoese to retreat to Italy, bringing the plague to Europe, where it spread very quickly, killing half the population in only five years.

“The Black Death was sort of the Big Bang for the plague.”

Maria Spyrou, now a postdoc in my lab, collected ancient Yersinia pestis samples from different parts of Europe, and one of the genomes she looked at was a 14th century strain from the Samara region in Russia, about 1,500 kilometers northeast of Crimea. When she added that strain to the Yersinia family tree, it turned out to be ancestral to the Black Death, corroborating the idea that the disease may have come from the east.

All the other genomes she got from the Black Death period, from many different places in Europe, are 100 percent identical, showing how fast it must have spread. And though the bacteria did change later on, the strain from that time appears to be the common ancestor of most of the strains in the world today. So the Black Death was sort of the Big Bang for the plague.

Interestingly, the genomes from that period don’t have anything you don’t find in daughter strains today, which means the Black Death is still around.

Does that mean these bacteria could still cause a similar epidemic today?

Theoretically, I think they still could, certainly in a context similar to medieval Europe. Even today, there are about 2,500 human cases every year, and most of them are from related strains. The bacteria that infected a few hundred people in Madagascar in 2017 were very similar in their biology to those that caused the Black Death.

Fortunately, we now have good antibiotics, because without treatment, 60 percent of people die of plague within seven to 10 days, and plague occurs in rodent populations almost all over the world. In the Grand Canyon, for example, there are signs saying you shouldn’t touch the squirrels, because they carry Yersinia pestis. It really is a rodent disease — humans get infected only by accident. We don’t live with as many rodents as we used to, and the black rat, which was once very common and lived almost like a mouse, inside people’s houses, has since largely been replaced by the brown rat, which usually resides underground.

Last but not least, fleas have also nearly disappeared in many places thanks to improved hygiene. So I think these factors are probably more important than any genetic change in the bacteria — or in people.

In one of the reviews, you mention that a very close relative of Yersinia pestis, Yersinia pseudotuberculosis — which you initially used to piece together some of the early plague genomes — commonly occurs in the environment, including on “improperly washed” vegetables. Can your genetic analysis teach us why pestis is so dangerous and pseudotuberculosis is as good as harmless?

Yersinia pseudotuberculosis appears to be very bad at escaping the human immune system. There is no known case of it entering the blood, which is how pestis causes the tissue death that results in the black hands and feet that gave the Black Death its name.

Y. pseudotuberculosis also does not have the genes that are necessary for flea transmission. After a flea sucks blood from an individual infected with Y. pestis, the bacteria produce a biofilm that clogs the flea’s gut, preventing it from swallowing any more blood. So the flea is starving, and it starts biting hundreds of times a day, and every time it bites it brings the blood in contact with the biofilm, then spits it out again, transmitting the bacteria into the new bite mark. As Yersinia pseudotuberculosis does not have the genes to make this biofilm, it could not have been transmitted by flea bites.

Interestingly, we have recently found that Yersinia pestis bacteria from the Bronze Age and the Late Stone Age were missing some of those genes as well. They may instead have infected the lungs, and spread through the air, as some plague bacteria still do today. This is quite exciting: We are really starting to see how Yersinia pestis has emerged to become a dangerous human pathogen.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

A Full, Fresh Menu Fit for a Brunch Feast

Birthdays, holidays or just casual Saturdays are all perfect excuses to enjoy brunch with your favorite people. Bringing everyone together with quiches, pastries, appetizers, desserts and more offers an easy way to kick back and relax on a warm weekend morning.

These recipes for Easy Brunch Quiche, Savory Cheese Balls and Lemon Blueberry Trifle provide a full menu to feed your loved ones from start to finish, regardless of the occasion.

Find more brunch inspiration by visiting Culinary.net.

A Savory Way to Start the Celebration

Serving up exquisite flavor doesn’t have to mean spending hours in the kitchen. You can bring the cheer and favorite tastes with simple appetizers that are equal measures delicious and visually appealing.

These Savory Cheese Balls are easy to make and perfect for get-togethers and brunch celebrations. Texture and color are the name of the game with this recipe, and the result is a beautiful array of red, gold and green, all on one plate.

To find more recipes fit for brunch, visit Culinary.net.

Savory Cheese Balls

Servings: 6-12

  • 2 packages (8 ounces each) cream cheese
  • 2 tablespoons caraway seeds
  • 1 teaspoon poppy seeds
  • 2 cloves garlic, minced, divided
  • 1/4 cup parsley, chopped
  • 2 teaspoons thyme leaves, chopped
  • 1 teaspoon rosemary, chopped
  • 1/4 cup dried cranberries, chopped
  • 2 tablespoons pecans, chopped
  • crackers (optional)
  • fruit (optional)
  • vegetables (optional)
  1. Cut each cream cheese block into three squares. Roll each square into ball.
  2. In small bowl, combine caraway seeds, poppy seeds and half the garlic.
  3. In second small bowl, combine parsley, thyme, rosemary and remaining garlic.
  4. In third small bowl, combine cranberries and pecans.
  5. Roll two cheese balls in seed mixture, two in herb mixture and two in cranberry mixture.
  6. Cut each ball in half and serve with crackers, fruit or vegetables, if desired.

Finish Brunch with a Light, Layered Treat

After enjoying eggs, bacon, French toast and pancakes or any other brunch combination you crave, it’s tough to top a fresh, fruity treat to round out the meal. Dish out a delicious dessert to cap off the morning and send guests out on a sweet note that’s perfectly light and airy.

The zesty zip of lemon curd in this Lemon Blueberry Trifle brings out the sweetness of whipped cream made with Domino Golden Sugar, fresh blueberries and cubed pound cake for a vibrant, layered bite. Plus, it’s a bright, beautiful centerpiece you can feel proud of as soon as guests try their first bite.

Find more dessert recipes fit for brunch and other favorite occasions at DominoSugar.com.

Lemon Blueberry Trifle

Prep time: 45 minutes
Servings: 8-10

Lemon Curd:

  • 1 cup Domino Golden Sugar
  • 2 tablespoons cornstarch
  • 1/4 cup freshly squeezed lemon juice
  • 1 tablespoon lemon zest
  • 6 tablespoons water
  • 1/4 teaspoon salt
  • 6 egg yolks
  • 1/2 cup (1 stick) unsalted butter, at room temperature, cut into 1/2-inch cubes

Whipped Cream:

  • 2 cups heavy whipping cream, cold
  • 2 tablespoons Domino Golden Sugar
  • 2 teaspoons pure vanilla extract

Trifle:

  • 1 cup blueberry jam
  • 12 ounces fresh blueberries, plus additional for garnish, divided
  • 1 pound cake, cubed
  • lemon slices, for garnish
  • mint, for garnish
  1. To make lemon curd: In medium saucepan, stir sugar and cornstarch. Stir in lemon juice, lemon zest, water and salt. Cook over medium heat, stirring constantly, until thickened. Remove from heat and gradually stir in three egg yolks; mix well until combined. Stir in remaining egg yolks. Return to heat and cook 2 minutes, stirring constantly. Remove from heat.
  2. Stir in butter; mix until incorporated. Cover with plastic wrap touching surface of lemon curd to prevent skin from forming. Refrigerate until completely cool.
  3. To make whipped cream: In large bowl, beat cream, sugar and vanilla until soft peaks form. Do not overbeat.
  4. To make trifle: Mix blueberry jam with 12 ounces fresh blueberries. Place one layer cubed pound cake in bottom of trifle dish. Top with layer of blueberries. Add dollops of lemon curd and whipped cream. Repeat layering, ending with whipped cream.
  5. Decorate trifle with lemon slices, fresh blueberries and mint.

Say Goodbye to Basic Brunch

The same old brunch menu can become tiresome and dull. Adding something new to the table with fresh ingredients and simple instructions can enhance your weekend spread and elevate brunch celebrations.

Try this Easy Brunch Quiche that is sure to have your senses swirling with every bite. This quiche is layered with many tastes and a variety of ingredients to give it crave-worthy flavor, from broccoli and bacon to mushrooms, eggs and melty cheese.

Visit Culinary.net to find more brunch recipes.

Easy Brunch Quiche

Serves: 12

  • 1 package (10 ounces) frozen broccoli with cheese
  • 12 slices bacon, chopped
  • 1/2 cup green onions, sliced
  • 1 cup mushrooms, sliced
  • 4 eggs
  • 1 cup milk
  • 1 1/2 cups shredded cheese, divided
  • 2 frozen deep dish pie shells (9 inches each)
  1. Heat oven to 350 F.
  2. In medium bowl, add broccoli and cheese contents from package. Microwave 5 minutes, or until cheese is saucy. Set aside.
  3. In skillet, cook chopped bacon 4 minutes. Add green onions; cook 2 minutes. Add mushrooms; cook 4 minutes, or until bacon is completely cooked and mushrooms are tender. Drain onto paper towel over plate. Set aside.
  4. In medium bowl, whisk eggs and milk until combined. Add broccoli and cheese mixture. Add 1 cup cheese. Stir to combine. Set aside.
  5. In pie shells, divide drained bacon mixture evenly. Divide broccoli mixture evenly and pour over bacon mixture. Sprinkle remaining cheese over both pies.
  6. Bake 40 minutes.
  7. Cool at least 12 minutes before serving.

Note: To keep edges of crust from burning, place aluminum foil over pies for first 20 minutes of cook time. Remove after 20 minutes and allow pies to cook uncovered until done.

SOURCE:
Domino Sugar

How lunar cycles guide the spawning of corals, worms and more


Many sea creatures release eggs and sperm into the water on just the right nights of the month. Researchers are starting to understand the biological rhythms that sync them to phases of the moon.

It’s evening at the northern tip of the Red Sea, in the Gulf of Aqaba, and Tom Shlesinger readies to take a dive. During the day, the seafloor is full of life and color; at night it looks much more alien. Shlesinger is waiting for a phenomenon that occurs once a year for a plethora of coral species, often several nights after the full moon.

Guided by a flashlight, he spots it: coral releasing a colorful bundle of eggs and sperm, tightly packed together. “You’re looking at it and it starts to flow to the surface,” Shlesinger says. “Then you raise your head, and you turn around, and you realize: All the colonies from the same species are doing it just now.”

Some coral species release bundles of a pinkish-purplish color, others release ones that are yellow, green, white or various other hues. “It’s quite a nice, aesthetic sensation,” says Shlesinger, a marine ecologist at Tel Aviv University and the Interuniversity Institute for Marine Sciences in Eilat, Israel, who has witnessed the show during many years of diving. Corals usually spawn in the evening and night within a tight time window of 10 minutes to half an hour. “The timing is so precise, you can set your clock by the time it happens,” Shlesinger says.

Moon-controlled rhythms in marine critters have been observed for centuries. There is informed speculation, for example, that in 1492 Christopher Columbus encountered a kind of glowing marine worm engaged in a lunar-timed mating dance, like the “flame of a small candle alternately raised and lowered.” Diverse animals such as sea mussels, corals, polychaete worms and certain fishes are thought to synchronize their reproductive behavior by the moon. The crucial reason is that such animals — for example, over a hundred coral species at the Great Barrier Reef — release their eggs before fertilization takes place, and synchronization maximizes the probability of an encounter between eggs and sperm.

How does it work? That has long been a mystery, but researchers are getting closer to understanding. They have known for at least 15 years that corals, like many other species, contain light-sensitive proteins called cryptochromes, and have recently reported that in the stony coral Dipsastraea speciosa, a period of darkness between sunset and moonrise appears key for triggering spawning some days later.

Now, with the help of the marine bristle worm Platynereis dumerilii, researchers have begun to tease out the molecular mechanism by which myriad sea species may pay attention to the cycle of the moon.

The bristle worm originally comes from the Bay of Naples but has been reared in laboratories since the 1950s. It is particularly well-suited for such studies, says Kristin Tessmar-Raible, a chronobiologist at the University of Vienna. During its reproductive season, it spawns for a few days after the full moon: The adult worms rise en masse to the water surface at a dark hour, engage in a nuptial dance and release their gametes. After reproduction, the worms burst and die.

The tools the creatures need for such precision timing — down to days of the month, and then down to hours of the day — are akin to what we’d need to arrange a meeting, says Tessmar-Raible. “We integrate different types of timing systems: a watch, a calendar,” she says. In the worm’s case, the requisite timing systems are a daily — or circadian — clock along with another, circalunar clock for its monthly reckoning.

To explore the worm’s timing, Tessmar-Raible’s group began experiments on genes in the worm that carry instructions for making cryptochromes. The group focused specifically on a cryptochrome in bristle worms called L-Cry. To figure out its involvement in synchronized spawning, they used genetic tricks to inactivate the l-cry gene and observe what happened to the worm’s lunar clock. They also carried out experiments to analyze the L-Cry protein.

Though the story is far from complete, the scientists have evidence that the protein plays a key role in something very important: distinguishing sunlight from moonlight. L-Cry is, in effect, “a natural light interpreter,” Tessmar-Raible and coauthors write in a 2023 overview of rhythms in marine creatures in the Annual Review of Marine Science.

The role is a crucial one, because in order to synchronize and spawn on the same night, the creatures need to be able to stay in step with the patterns of the moon on its roughly 29.5-day cycle — from full moon, when the moonlight is bright and lasts all night long, to the dimmer, shorter-duration illuminations as the moon waxes and wanes.

When L-Cry was absent, the scientists found, the worms didn’t discriminate appropriately. The animals synchronized tightly to artificial lunar cycles of light and dark inside the lab — ones in which the “sunlight” was dimmer than the real sun and the “moonlight” was brighter than the real moon. In other words, worms without L-Cry latched onto unrealistic light cycles. In contrast, the normal worms that still made L-Cry protein were more discerning and did a better job of synchronizing their lunar clocks correctly when the nighttime lighting more closely matched that of the bristle worm’s natural environment.

The researchers accrued other evidence, too, that L-Cry is an important player in lunar timekeeping, helping to discern sunlight from moonlight. They purified the L-Cry protein and found that it consists of two protein strands bound together, with each half holding a light-absorbing structure known as a flavin. The sensitivity of each flavin to light is very different. Because of this, L-Cry can respond to both strong light akin to sunlight and dim light equivalent to moonlight — light spanning five orders of magnitude of intensity — but with very different consequences.

“I find it very exciting that we could describe a protein that can measure moon phases.”

Eva Wolf

After four hours of dim “moonlight” exposure, for example, light-induced chemical reactions in the protein — photoreduction — occurred, reaching a maximum after six hours of continuous “moonlight” exposure. Six hours is significant, the scientists note, because the worm would only encounter six hours’ worth of moonlight at times when the moon was full. This therefore would allow the creature to synchronize with monthly lunar cycles and pick the right night on which to spawn. “I find it very exciting that we could describe a protein that can measure moon phases,” says Eva Wolf, a structural biologist at IMB Mainz and Johannes Gutenberg University Mainz, and a collaborator with Tessmar-Raible on the work.

How does the worm know that it’s sensing moonlight, though, and not sunlight? Under moonlight conditions, only one of the two flavins was photoreduced, the scientists found. In bright light, by contrast, both flavin molecules were photoreduced, and very quickly. Furthermore, these two types of L-Cry ended up in different parts of the worm’s cells: the fully photoreduced protein in the cytoplasm, where it was quickly destroyed, and the partly photoreduced L-Cry proteins in the nucleus.

All in all, the situation is akin to having “a highly sensitive ‘low light sensor’ for moonlight detection with a much less sensitive ‘high light sensor’ for sunlight detection,” the authors conclude in a report published in 2022.

Many puzzles remain, of course. For example, though presumably the two distinct fates of the L-Cry molecules transmit different biological signals inside the worm, researchers don’t yet know what they are. And though the L-Cry protein is key for discriminating sunlight from moonlight, other light-sensing molecules must be involved, the scientists say.

In a separate study, the researchers used cameras in the lab to record the burst of swimming activity (the worm’s “nuptial dance”) that occurs when a worm sets out to spawn, and followed it up with genetic experiments. And they confirmed that another molecule is key for the worm to spawn during the right one- to two-hour window — the dark portion of that night between sunset and moonrise — on the designated spawning nights.

Called r-Opsin, the molecule is extremely sensitive to light, the scientists found — about a hundred times more than the melanopsin found in the average human eye. It modifies the worm’s daily clock by acting as a moonrise sensor, the researchers propose (the moon rises successively later each night). The notion is that combining the signal from the r-Opsin sensor with the information from the L-Cry on what kind of light it is allows the worm to pick just the right time on the spawning night to rise to the surface and release its gametes.

Resident timekeepers

As biologists tease apart the timekeepers needed to synchronize activities in so many marine creatures, the questions bubble up. Where, exactly, do these timekeepers reside? In species in which biological clocks have been well studied — such as Drosophila and mice — that central timekeeper is housed in the brain. In the marine bristle worm, clocks exist in its forebrain and peripheral tissues of its trunk. But other creatures, such as corals and sea anemones, don’t even have brains. “Is there a population of neurons that acts as a central clock, or is it much more diffuse? We don’t really know,” says Ann Tarrant, a marine biologist at the Woods Hole Oceanographic Institution who is studying the chronobiology of the sea anemone Nematostella vectensis.

Scientists are also interested in knowing what roles are played by microbes that might live with marine creatures. Corals like Acropora, for example, often have algae living symbiotically within their cells. “We know that algae like that also have circadian rhythms,” Tarrant says. “So when you have a coral and an alga together, it’s complicated to know how that works.”

Researchers are worried, too, about the fate of spectacular synchronized events like coral spawning in a light-polluted world. If coral clock mechanisms are similar to the bristle worm’s, how would creatures be able to properly detect the natural full moon? In 2021, researchers reported lab studies demonstrating that light pollution can desynchronize spawning in two coral species — Acropora millepora and Acropora digitifera — found in the Indo-Pacific Ocean.

Shlesinger and his colleague Yossi Loya have seen just this in natural populations, in several coral species in the Red Sea. Reporting in 2019, the scientists compared four years’ worth of spawning observations with data from the same site 30 years earlier. Three of the five species they studied showed spawning asynchrony, leading to fewer — or no — instances of new, small corals on the reef.

Along with artificial light, Shlesinger believes there could be other culprits involved, such as endocrine-disrupting chemical pollutants. He’s working to understand that — and to learn why some species remain unaffected.

Based on his underwater observations to date, Shlesinger believes that about 10 of the 50-odd species he has looked at may be asynchronizing in the Red Sea, the northern portion of which is considered a climate-change refuge for corals and has not experienced mass bleaching events. “I suspect,” he says, “that we will hear of more issues like that in other places in the world, and in more species.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Adolescents’ brains are highly capable, if inconsistent, during this critical age of exploration and development. They are also acutely tuned into rewards.

The ability to set a goal and pursue it without getting derailed by temptations or distractions is essential to nearly everything we do in life, from finishing homework to driving safely in traffic. It also places complex demands on the brain, requiring skills like working memory — the ability to keep small amounts of information in mind to perform a task — as well as impulse control and being able to rapidly adapt when rules or circumstances change.

Taken together, these elements add up to something researchers call executive function. We all struggle with executive function sometimes, for example when we’re stressed or don’t get enough sleep. But in teenagers, these powers are still a work in progress, contributing to some of the contradictory behaviors and lapses in judgment — “My honor roll student did what on TikTok?” — that baffle many parents.

This erratic control can be dangerous, especially when teens make impulsive choices. But that doesn’t mean the teen brain is broken, says Beatriz Luna, a developmental cognitive neuroscientist at the University of Pittsburgh and coauthor of a review on the maturation of one aspect of executive function, called cognitive control, in the 2015 Annual Review of Neuroscience.

Adolescents have all the basic neural circuitry needed for executive function and cognitive control, Luna says. In fact, they have more than they need — what’s lacking is experience, which over time will strengthen some neural pathways and weaken or eliminate others. This winnowing serves an important purpose: It tailors the brain to help teens handle the demands of their unique, ever-changing environments and to navigate situations their parents may never have encountered. Luna’s research suggests that teens’ inconsistent cognitive control is key to becoming independent, because it encourages them to seek out and learn from experiences that go beyond what they’ve been actively taught.

Knowable Magazine asked Luna to share what she’s learned about the development of the brain’s executive control system — and why we might not want to rush the process, even if we could. This conversation has been edited for clarity and length.

It seems like executive function isn’t just one thing — it’s more complex than that. How do you define it? And what’s the difference between executive function and cognitive control?

Executive function and cognitive control overlap, and sometimes refer to the same exact thing. One way to understand it is that while some of our behaviors are generated externally — something like a visual stimulus or, say, someone screaming at you — and you react, the rest of our behaviors are internally driven. This means that there is a plan and there’s a goal, and you have to engage particular systems in the brain to generate your behavior, while ignoring external distractors. Those systems are pretty much what executive function is.

Cognitive control underscores a very important aspect of executive function, which is the ability of brain regions like the prefrontal cortex to exert control over more reactive parts of the brain, like the ventral striatum, which is active when we do something rewarding or pleasurable, or even think about doing something rewarding.

How do you study cognitive control in your own lab?

It’s very basic neuroscience: We say, “Here’s a light, don’t look at it.”

What a simple thing, right? But it’s a very elegant and strong way to probe the parts of the brain that perform executive function, and specifically cognitive control. When there’s a light, your whole brain wants to look at it — but you have this instruction: Don’t look at it. To do that you have to invoke cognitive control and say, “I’m not going to look at it, I’m going to look to the other side.”

This has been a very important way to look at development, particularly through adolescence. Adolescents and kids, they’re really smart. On many typical neuropsychological sorts of tests, they appear to be at an adult level. But you can’t fool the eye movement system that responds, or not, to the experimental command to not look at a light. And we see, over and over again, that the teenagers are still not performing at adult levels.

Do researchers see similar differences in other types of tests? When does executive function, or cognitive control, reach adult levels?

We have a paper that will be coming out soon where we took behavioral data from big data repositories, so we have tens of thousands of individuals, and we applied very high-level analytics to answer that question. And we found that no matter how you’re assessing executive function, you get better and better through childhood and adolescence until around 18 to 20 years old. At that point, the number of errors you make in each test levels out.

There are two things going on here: First, there’s a lot of variability between teens, so different kids are developing differently. But there’s also a lot of variability within each individual kid. In some trials, teens show adultlike responses, and in other trials, they don’t. Adults, in contrast, tend to perform at about the same level over lots of different trials. Moreover, adult error rates are still stable when we test them 18 months later.

That’s telling us a few things. Number one, it means that the circuitry you need to produce an executive response is already there in adolescence. Second, what’s changing during development is the ability to access these systems in a sustained and reliable fashion. That happens only through the maturation of brain circuitry that, as you develop, works more consistently but less flexibly.

“The circuitry you need to produce an executive response is already there in adolescence ... what’s changing during development is the ability to access these systems in a sustained and reliable fashion.”

How do the brain systems needed for executive function develop over the lifespan? Are there certain key ages when they are being built in the brain?

During childhood, brain and behavior are driven primarily by a process of accumulation. You’re learning new things — how to walk, how to talk. You’re learning how to tap into all these cognitive abilities. Your brain is growing.

By the time you reach adolescence, everything is there. Now that you have the basic neural architecture, there’s a reversal from accumulation to specialization, based on experience. It’s a time when synaptic pruning — the elimination of connections between neurons — in the prefrontal cortex is occurring. And the connections between regions are starting to decrease, as the brain is specializing, and some connections are getting stronger.

We believe that what’s occurring is that the adolescent brain is actively exploring its environment: “Let me try it this way. Oh, now let me try this way. Oh, wait a minute, I think it worked better here.” Eventually, after much experience, the brain says, “OK. You know what, this is the optimal way, so we’re going to myelinate this circuitry.” That is like insulating the tracts, so signals are going faster, and you’re not losing so much signal on the way. But it’s also cementing it, preventing it from changing. That’s what provides the stability, and the reliability of being able to engage executive function.

What parts of the brain are important for executive function? We hear a lot about the prefrontal cortex — part of the wrinkly layer in the front of the brain. Is that the most important or only region?

Yes, we have the prefrontal cortex right here, behind your forehead. But the prefrontal cortex can’t do anything on its own. That’s not its role. Its role is to be a conductor.

One way I explain it is that, in my lab, I’m the prefrontal cortex. I’m not doing the analyses. Instead, everyone’s coming to me and telling me, “This is what we found.” I’m putting it all together and writing grants and coming up with theoretical models and so forth.

That’s what the prefrontal cortex is doing: It’s listening and organizing and telling various brain regions, “Hey, I need more from you, and I need less from you.” It’s talking to the rest of the brain. Cognitive control is really the ability of the prefrontal cortex to engage with all parts of the brain — from the reward circuits to the parietal cortex, which has to do with attention, to sensorimotor areas that control things like eye movement — whatever is needed.

Are there times when teenagers are better at cognitive control than adults?

In any laboratory, including ours, we always find the same result: Adolescents just are not as good as adults at cognitive control — except in studies where we say, if you do this trial correctly, we’re going to give you extra points for more money. And, miraculously, adolescents can then do it like adults.

How is that possible? What we have found from different studies is that the minute they see there’s a short-term reward involved, they push their system to an even greater level than adults do. When we’ve looked at dopamine, a neurotransmitter involved in reward, we have found that kids with higher levels of dopamine in neurons of the basal ganglia are the ones really benefiting from that extra push.

Are there other types of rewards that affect how adolescents perform on these kinds of tasks?

That brings up an important question: When you’re an adolescent, what are the rewards that matter? There’s the monetary incentive. But peers are another important one, because you have all these hormones that are telling your brain it’s time for you to start making a network of peers to survive, with the intention of finding a partner and reproducing.

There’s one study from my colleague’s lab I think is great, where they looked at simulated driving. In the test, the light turns red and if you don’t stop, you lose. What they found is that adolescents performed just like adults except when their peers were present — then, all of a sudden, they became way riskier, and activity in the part of the brain that has to do with reward was elevated.

That suggests that in some circumstances, sensitivity to reward is helping with cognitive control, but in other circumstances it can be detrimental: In the driving study, enhancing reward through the presence of peers undermined cognitive control because the reward that mattered more was peer approval, not winning at the game.

How do these kinds of behavioral tests relate to how teens fare in everyday life?

In real life, behaviors like doing well in school are very complex. But at the core, even complex behaviors involve these brain processes: inhibitory control, working memory, task-switching. If you’re concerned — saying “What’s wrong with this kid?” — you need to focus on each one of those processes individually, to understand what’s not working. If a core component is not optimal, then complex behaviors that engage these components are also not going to be optimal.

How do you define “normal” or typical executive function?

That’s a great question. Our main interest is to map typical trajectories of development, with the long-term goal of having a pediatric growth chart for executive function. I’m in a psychiatric department, and so it’s important for us to understand the emergence of major mental illnesses, many of which appear in adolescence and involve deficits in executive function. One of the ideas of this pediatric growth chart is to identify risk and then find ways to fortify any weaknesses in executive function.

How does someone’s genetic background affect their executive function, and how does that relate to their risk of mental illness?

Development through childhood is two things — genetics and environment — trying to work together. When you reach adolescence, the brain says OK, you’ve had a lot of time, now we have to start making some decisions about which circuits are going to predominate. This occurs through the system that psychologist Donald Hebb famously described in the 1940s, where neural connections that get used more get stronger, and connections that don’t get used get weaker.

The brain doesn’t know what’s good or bad. If you have experiences of sadness over and over, for example, the brain’s like, “Oh, you use that circuitry a lot, we are going to make this a predominant circuit.” When it comes time for further physical reinforcement, that circuit is going to be myelinated because you’ve used it so much, like a muscle that gets stronger with use.
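The “use it or lose it” dynamic described above can be illustrated with a toy simulation. This is only a sketch of the general Hebbian principle — the function name, learning rate and decay rate are invented for illustration, not taken from the research discussed here:

```python
def hebbian_step(w, pre, post, lr=0.1, decay=0.02):
    """One Hebbian-style update: a connection strengthens when the
    neurons on both sides are active together, and slowly weakens
    otherwise (use it or lose it)."""
    return w + lr * pre * post - decay * w

# Two connections start out equally strong.
w_used, w_unused = 0.5, 0.5

for _ in range(100):
    # This circuit is exercised on every trial (both sides co-active)...
    w_used = hebbian_step(w_used, pre=1.0, post=1.0)
    # ...while this one is never engaged.
    w_unused = hebbian_step(w_unused, pre=0.0, post=0.0)

# After repeated experience, the exercised connection dominates
# and the idle one fades toward zero.
```

In this toy version, the frequently used connection settles at a high strength while the unused one decays — a crude stand-in for which circuits the brain later “cements” through myelination.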

So, for example, you might have a genetic predisposition for depression, and live in a household where a parent has depression. You’re being exposed to negative affect daily so the circuitry is being used a lot, and that might lead, through a Hebbian process, to you developing depression as well. But hypothetically, maybe at school or through therapy that same individual gets experiences of engaging cognitive control, strengthening other circuits. We all experience negative affect, so that circuitry exists for all of us. It’s a matter of how important a role it plays.

When you’re talking about things like bipolar disorder or schizophrenia, there can be a very strong inherited predisposition. But there’s a good reason that diagnosis of these disorders is not done until adulthood. It’s because the brain hasn’t decided yet. For example, if you have ADHD when you’re young, depending on your environment and experience that could turn into a typical brain, or into a wide range of different things, like substance use or even bipolar disorder.

So yes, it’s a period of risk, but also a period of opportunity to strengthen alternative, helpful systems like cognitive control.

Are there things we can do during adolescence that can reduce the risk of mental illness?

I’m not a clinician, so this is not my area of expertise, but the idea is that if you do something like cognitive behavioral therapy, CBT, which trains you to start to observe your emotional reactions and to get your prefrontal executive system to engage, your control will get stronger. That can help build resilience and ways to cope, even if you have a genetic predisposition to mental illness.

Should we be trying to accelerate the development of executive function in children?

There are some colleagues who have proposed, based on certain types of training, that perhaps you can get executive function earlier. But my take on it is, why? Why would we want this to come on earlier? It’s important to not always have executive function at the forefront, especially when you should be experimenting and trying all your circuits, so that you can have a very well-informed brain as it makes its decisions about which circuits it needs to strengthen and which ones it doesn’t. So I’m not convinced you can really push executive function, and even if you could, I’m not sure it would be the right thing to do.

If the brain is still developing past 18, as you say, what does that mean for how much responsibility teens and young adults have for their actions and decisions?

There are important nuances. In the juvenile justice system, one of the arguments against harsh sentences for young offenders is that we don’t know who that kid is — what they did at the time may not be who they are really going to become. So a life sentence doesn’t seem very useful, because the offense might have just been part of adolescent risk-taking. Yes, we keep an eye on them, but we don’t lock them up for 60 or 70 years.

That’s one part of the story. Another part of the story that my students have become passionate about is legislation on gender-affirming care. Some people have used the work that my colleagues and I have done to say, “Hey, look, the brain is not done until the 20s, so adolescents cannot make these sorts of decisions.” But we argue that when teenagers have time to deliberate — when they’re surrounded not by peers who are more reward-driven, but by adults who have more stable access to cognitive control — we think teenagers can make these sorts of long-term decisions. It’s not easy, but it’s doable.

We think that the decision to seek gender-affirming care is a good model for what teenagers can do with adult support, because it’s something that takes months, even years, to plan out and deliberate about. For many teenagers this is something they have known since they were very young. We agree we need to help teenagers avoid making impulsive decisions about gender-affirming therapy — but there are a lot of things that make it distinct from delinquent behavior in teens, which is usually about impulsive decision-making. So, there are all these subtleties about what the implications might be.

I think we all know people who, as adults, are more comfortable taking risks, or more easily distracted, than others. Does that mean their executive function is deficient or they are somehow less “mature”?

That is a really interesting point. I was a crazy risk-taking adolescent. I think what occurred over development is, I’m still a risk taker but now I do it in science. I was also the one who could be very distractible. Like, for me, my biggest fear was boredom, and it still is. When I write a grant, I’m not going to write the next logical step. I want to be risky. I want to move science in big jumps, not little steps.

So I feel like my risk-taking kind of stayed, but it transformed.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Preserving Biodiversity Through Sustainable Development

Many industries around the globe are prioritizing sustainable development to ensure a healthy planet for the future.

For the Mexican avocado industry, success depends on the conservation of natural resources, soil, forests and water, which is why it is working to reduce its impact to protect the natural environment. One important priority is supporting biodiversity – the variety of plants and animals in one region or ecosystem – through environmentally friendly and responsible practices.

Most Mexican Hass avocados exported to the U.S. begin their journey in Michoacán, Mexico, an area known for its flourishing ecosystem. The Mexican avocado industry, including APEAM and MHAIA, is committed to preserving and enhancing biodiversity and to promoting sustainable development and forest conservation by:

Protecting Pollinators
Bees and wild pollinators like butterflies are essential to giving avocado trees the resources and support they need to grow. Avocado orchard production increases 25% when pollinators are present, and 80% of Mexican avocado production depends on pollinators.

On top of that, about 30% of avocado orchards in Michoacán have added beehives or work with local beekeepers to rent hives, increasing the presence of bees on the farms. Avocado farmers also take care to apply plant- and flower-friendly agrochemicals at times when pollinators aren’t active.

Through its partnership with Forests For Monarchs, MHAIA has planted more than 1.2 million trees to protect the environment and reforest the area close to the reserve of the monarch butterfly, an important native pollinator in Mexico.

Maintaining Habitats Through Forest Preservation
APEAM’s efforts to preserve more than 1.3 million acres of the “Avocado Strip” include preventing and responding to fires, creating a biological corridor and researching sustainable practices for soil and water. The industry has supported planting nearly 2.9 million trees throughout Michoacán.

Water Conservation
Water use is also critical: approximately 61% of avocado orchards in Michoacán rely on rainfall and natural irrigation, and another 36% use sustainable irrigation.

Learn more about the avocado industry’s sustainability practices at avocadoinstitute.org.

SOURCE:
Avocados From Mexico