Wednesday, April 12, 2023

Getting lab-grown meat — and milk — to the table



Beef, chicken and dairy made from cultured cells could offer a smaller footprint than conventional farms. Companies are working on scaling up and bringing prices down.

Diners at the swanky Atelier Crenn restaurant in San Francisco expect to be served something unusual. After all, the venue boasts three Michelin stars and is widely considered to be one of the world’s top restaurants.

But if all goes according to plan, there will soon be a new dish on the menu that truly is remarkable: chicken that was never part of a living bird.

That peculiar piece of meat — likely to be the first of its kind ever sold in the US — comes from a radical sort of food technology now in development, in which meat is produced by culturing muscle cells in vast tanks of nutrients. A similar effort — to culture mammary cells — is also underway and may soon yield real milk without cows.

The company behind Crenn’s chicken, California-based Upside Foods, got a thumbs-up in November 2022 from the US Food and Drug Administration, which said it had no concerns about the safety of the technology. (The company’s manufacturing facility still requires a certificate of inspection from the US Department of Agriculture.)

This cellular agriculture, as some of its proponents call it, faces formidable technical obstacles before it can ever be more than a curiosity. But if it does reach the mainstream, it offers the prospect of a cruelty-free source of meat and dairy — potentially with a smaller environmental footprint than conventional animal products.

Conceptually, cellular agriculture is straightforward. Technicians take a small tissue sample from a chicken, cow or other animal. From that, they isolate individual cells that go into a bioreactor — basically a big vat of nutrient solution — where the cells multiply manyfold and, eventually, mature into muscle, fat or connective tissue that can be harvested for people to eat.

Products in which these cells are jumbled together, as in ground meat, are easiest to make, and that’s what most cellular meat companies are developing, at least initially. But Upside has a more ambitious goal: to create chicken with whole muscle fibers. “We’ve figured out ways to produce that textural experience,” says Eric Schulze, Upside’s vice president of product and regulation. He declines to explain exactly how they do it.

The process takes two to three weeks from start to finish, regardless of whether they are making chicken or beef. That’s much faster than the eight to 10 weeks required to raise a fryer chicken, or the 18 to 36 months needed for a cow. “We’re doing a cow’s worth of meat in 21 days or less,” says Schulze.

One cellular meat product is already available commercially, though not in the US. In Singapore, a few restaurants and street vendors now offer a chicken nugget that contains a mix of cellular meat and plant-based ingredients. The product sells for about the same price as organic, farm-raised chicken, but the true cost of production is higher. “We’re selling it at a loss, for sure,” says Vítor Espírito Santo, senior director of cellular agriculture at Good Meat, the US-based company producing the nugget.

But the cost should come down once the company expands to larger scale, Santo says. “Everything we do right now is more expensive because we are using a 1,200-liter bioreactor. Once we are producing in 250,000 liters, it will be competitive with conventional meat.” The company is now working on gaining approval in the US.

Meat isn’t the only animal product that can come from cell cultures. Several companies are working to produce milk by culturing mammary cells and collecting the milk they secrete. For example, Opalia, a Montreal-based company, grows mammary cells on the surface of a three-dimensional, branched structure that resembles the lobules of a real udder, says CEO Jennifer Côté. The cells secrete milk into the structure’s lobules, where it can be collected and drawn off. Some other companies, such as North Carolina-based BioMilq, are using a similar technology with human mammary cells to produce human breast milk. None are yet on the market.

In some ways, the process for making milk is easier than producing meat because the cells themselves don’t need to be harvested and replaced. “The cells we use can stay alive for multiple months on end,” says Côté. That means the company can concentrate on developing cells that secrete a lot of milk, rather than ones that divide rapidly. Moreover, she adds, because the cells themselves are not part of the product, Opalia can genetically modify its cells without the milk itself being a GMO product.

Proponents hope that cellular meat and milk can eventually offer several big advantages over the conventional versions. By cutting animals out of the process, cultured products do away with most of the animal-welfare issues that beset modern factory farms. Meat and milk that come from clean culture facilities instead of manure-laden farmyards should also be less likely to carry food-borne diseases, says Elliot Swartz, lead scientist for cultivated meat technology at the Good Food Institute, a Washington DC-based nonprofit organization supporting alternatives to meat.

Enthusiasts also claim that cell-based products should be more sustainable than conventional animal products, because farmers will no longer need to feed, water and house entire animals just to harvest their muscles. It’s hard to know whether this benefit will pan out in reality, since the technology is still under development. Only a few studies have tried to estimate the environmental impact of cell-based meat, and all have made huge assumptions about what future technologies will look like.

One thing seems clear, however. Cell-based meat relies heavily on electricity for tasks like heating or cooling culture tanks and pumping cells from place to place. If that electricity comes from renewables, the overall carbon footprint of cell-based meat will be much less than if it comes from fossil fuels, says Swartz.

Assuming a relatively green electric grid, though, one careful study of cell-based meat’s potential, by the Dutch consulting company CE Delft, suggests that its environmental footprint is likely to be roughly the same as that of conventional pork or poultry — among the greener conventional meats, by most reckonings — and far less than that of beef.

So far, however, companies and academic researchers have only taken baby steps toward cellular agriculture. If the industry is ever to grow big enough to change the face of global agriculture, it will need to overcome several major hurdles, says David Block, a chemical engineer at the University of California, Davis, who works on the technology behind cultured meat.

One of the biggest challenges, most experts agree, is finding an inexpensive way to supply the nutrients and growth factors the growing cells need. Existing culture media are far too costly and often depend on calves’ blood for molecules such as fibroblast growth factor and insulin-like growth factor 1, which are essential for cell growth and maintenance. Researchers are hoping that relatively unprocessed sources like plant or yeast extract can eventually provide most of the nutrients and vitamins they need, and that they can find a cheaper way to produce the growth factors.

As a step in that direction, Dutch researchers have developed a growth medium using no serum — just off-the-shelf chemicals — to which they add more than a dozen growth factors and other nutrients. Their new medium allowed cow muscle cells to grow almost as well as on calf serum, they reported recently.

Scaling up from research-sized cultures to big commercial operations — an essential step to keeping costs down — may also present problems. The larger the bioreactor, the more difficult it is to ensure that waste products like ammonia are removed, says Ricardo San Martin, a chemical engineer who directs the Alternative Meats Lab at the University of California, Berkeley. Even merely stirring extremely large bioreactors can subject the cells to damaging shear forces, he notes.

The nutrient-supply problem gets even tougher for whole-muscle meats such as steaks or whole chicken breasts. In the animal, such thick slabs of muscle have networks of blood vessels snaking through them, so that every muscle cell is close to a blood supply. Many researchers in the field think replicating that 3D structure in culture poses serious challenges that have yet to be overcome. “I don’t think we are close to growing a steak, and I don’t see it in the next 10 or 15 years,” says San Martin.

Still, proponents remain optimistic that those problems will be settled soon. “Technologically, we’re not concerned,” says Schulze. “With enough time and scientific ingenuity, somebody, somewhere, will find a way to make this work. The cost is the main issue for everyone.”

But cost remains a big stumbling block. The first lab-grown burger patty, produced by a Dutch team in 2013, cost an estimated 250,000 euros (about $330,000). And while costs have fallen since then, they remain much higher than for conventional meat. In a study that has not yet been peer-reviewed, Block and his colleagues estimated that producing a ground-beef product in a 42,000-liter bioreactor — almost twice as big as the largest in use today for mammalian cells — would cost about $13.80 per pound. To bring the cost down under $6 per pound, only a little pricier than conventional ground beef, would require a much larger, 260,000-liter bioreactor.
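The scaling logic behind those numbers is that growth-medium costs rise with reactor volume while labor, cleaning and equipment amortization are largely fixed per batch, so bigger reactors dilute the fixed share. The Python sketch below is a toy cost model, not the one used in Block's study; every number in it is a placeholder chosen only so the output lands in the same ballpark as the figures quoted above.

# Toy cost model: per-pound cost of cultured meat versus bioreactor volume.
# All parameter values are illustrative placeholders, not data from the study.
def cost_per_pound(volume_l, media_cost_per_l=1.0, yield_lb_per_l=0.25,
                   fixed_cost_per_batch=100_000.0):
    pounds_per_batch = volume_l * yield_lb_per_l        # meat harvested per run
    variable_cost = media_cost_per_l * volume_l         # cost that scales with volume
    return (variable_cost + fixed_cost_per_batch) / pounds_per_batch

for volume in (42_000, 260_000):
    print(f"{volume:>7,} L bioreactor -> about ${cost_per_pound(volume):.2f} per pound")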

But cultured meat may not have to match the price of ground beef or chicken to be commercially viable. Some consumers will probably pay higher prices to avoid the ethical and environmental costs of conventional meat, just as they do today for plant-based meat substitutes like Impossible and Beyond Meat. And some conventional products such as caviar, foie gras or bluefin tuna are so expensive that cultured versions could probably be cost-competitive pretty soon, says Swartz. That would give manufacturers a way to bring in some profits even as they work to bring costs down further.

Another intermediate step could be to use cultured meats to enhance the flavor of plant-based products, as Good Meat is doing now with the part-cultured-meat, part-plant-based meat patties they sell in Singapore. Manufacturers could also add cultured animal fat cells to give a meatier flavor to a plant-based product. “You only need maybe 5 percent animal fat to achieve that,” says Swartz. Such hybrid products, he thinks, are likely to be the dominant application of cellular meat in the next decade.

Similar first steps could help cultured-milk companies generate revenue before they can match cow’s milk in price. Breast milk offers enough advantages over infant formula, says Swartz, that many consumers are likely to pay high prices for cultured human milk from BioMilq and other companies. “There are a variety of proteins and fatty acids and sugars that are simply not there if you don’t have breast milk,” says Nurit Argov-Argaman, a lactation physiologist at the Hebrew University of Jerusalem. Argov-Argaman is also chief scientist at Wilk, an Israeli company that is culturing human breast cells to extract high-value components such as fatty acids and lactoferrin, a protein essential to iron uptake, to enrich infant formula.

A few of these cell-cultured meat and milk products should make it to supermarket shelves within the next few years, experts say. But as promising as these first steps are, no one really knows whether cellular meat and milk will eventually grab a significant share of the global market for animal-based foods.

“There are certainly immense challenges — no one’s denying that,” says Schulze. “But our plan is to work on that as an industry. It’s effectively a space race for food. The difference here is we will attempt to rationally solve these challenges one by one in a reasonable time frame — and do it safely, of course, since it’s food.”

Editor’s note: A caption for an image in this article was updated on March 21, 2023, to clarify that the $330,000 estimated cost in US dollars of a burger patty made from cultured meat was based on 2013 exchange rates between the euro and US dollar.


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Why more and more Americans are painting their lawns

Americans – especially those living in areas affected by drought – are turning to paint to give their grass that perfect green sheen. Justin Sullivan/Getty Images
Ted Steinberg, Case Western Reserve University

To paint or not to paint?

That is the question that many homeowners are facing as their dreams for perfect turf are battered – whether it’s from inflation pushing pricier lawn care options out of reach, or droughts leading to water shortages.

Increasingly, many are turning in the spreader for the paint can, opting, according to a report in The Wall Street Journal, for shades of green with names like “Fairway” and “Perennial Rye.”

Where does this yen for turning the outside of the house into a trim green carpet come from?

Some years ago, I decided to investigate and the result was my book “American Green: The Obsessive Quest for the Perfect Lawn.”

What I found was that lawns extend far back in American history. Former presidents George Washington and Thomas Jefferson had lawns, but these were not perfect greenswards. It turns out that the ideal of perfect turf – a weed-free, supergreen monoculture – is a recent phenomenon.

The not-so-perfect lawns of Levittown

Its beginnings can largely be traced to the post–World War II era, when suburban developments such as the iconic Levittown, New York, got their start.

Levittown was the brainchild of the Levitt family, which viewed landscaping – a word that only entered the English language in the 1930s – as a form of “neighborhood stabilization,” or a way of bolstering property values. The Levitts, who built 17,000 homes between 1947 and 1951, thus insisted that homeowners mow the yard once a week between April and November and included the stricture in covenants accompanying their deeds.

But the Levitts took the obsession with the lawn only so far. “I don’t believe in being a slave to the lawn,” wrote Abraham Levitt. Clover was, to him, “just as nice” as grass.

The developers of Levittown required homeowners to mow their yards once a week between April and November. ClassicStock/Getty Images

Engineering perfection

All of which is to say that the quest for the perfect lawn did not come naturally. It had to be engineered, and one of the greatest influencers in this regard was the Scotts Co. of Marysville, Ohio, which took agricultural chemicals and created concoctions that homeowners could spread over their yards.

Formulators like Scotts had one great advantage: Turfgrass is not native to North America, and growing it on the continent is, for the most part, an uphill ecological battle. Homeowners thus needed a lot of help in the quest for perfection.

But first Scotts had to help lodge the idea of perfect turf in the American imagination. Scotts was able to tap into postwar trends in brightly colored consumer products. From yellow slacks to blue Jell-O, colored products became status symbols and a sign that the consumer had rejected the drab black-and-white world of urban life for the modern suburb and its kaleidoscopic colors – which included, of course, the vibrant green lawn.

Architectural trends also helped the perfect turf aesthetic take root. A blurring of indoor and outdoor space occurred in the postwar era as patios and eventually sliding glass doors invited homeowners to treat the yard as an extension of their family room. What better way to achieve a comfy outdoor living space than to carpet the yard in a nice greensward?

In 1948, the perfect lawn took a giant step forward when the Scotts Co. began selling its “Weed and Feed” lawn care product, which allowed homeowners to eliminate weeds and fertilize simultaneously.

The development was probably one of the worst things ever to happen, ecologically speaking, to the American yard. Now homeowners were spreading the toxic herbicide 2,4-D – which has since been linked to cancer, reproductive harm and neurological impairment – on their lawns as a matter of course, whether they were having an issue with weeds or not.

Selective herbicides like 2,4-D killed broadleaf “weeds” like clover and left the grass intact. Clover and bluegrass, a desirable turf species, evolved together, with the former capturing nitrogen from the air and adding it to the soil as fertilizer. Killing it off sent homeowners back to the store for more artificial fertilizer to make up for the deficit.

That was bad news for homeowners, but a good business model for the companies selling lawn care products, which, on the one hand, handicapped homeowners by killing off the clover and, on the other, sold them more chemical inputs to replace what could have occurred naturally.

The “perfect” lawn had come of age.

The meaning of grass painting

By the early 1960s, homeowners were already looking for ways of achieving perfect turf on the cheap.

A 1964 article in Newsweek pointed out that green grass paint was being sold in 35 states. The magazine opined that because a homeowner “needs a Bachelor of Chemistry to comprehend the bewildering variety of weed and bug destroyers now fogging the market,” paint was becoming an attractive alternative.

So the interest in grass painting is not entirely new.

Suburban tract houses in Centerville, Md. Edwin Remsberg/The Image Bank via Getty Images

What is new, however, is that the recent interest in painting the lawn is taking place in a context in which a more pluralistic vision of the yard has taken root.

People fed up with corporate-dominated lawn care are turning back the clock and cultivating their yards with clover, a plant that is resistant to drought and provides nutrients to the lawn, to boot. And so the clover lawn has been making a comeback, with videos on TikTok tagged #cloverlawn boasting 78 million views.

Taken together, the return of grass painting and the resurgent interest in clover lawns suggest that the ideal of the resource-intensive perfect lawn is an ecological conceit that the country may no longer be able to afford.

Ted Steinberg, Professor of History, Case Western Reserve University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, April 11, 2023

OPINION: After cryptoassets, a wave of central bank digital currencies is set to revolutionize our ideas about what money is and how to manage it

In October 2020, the Bahamas released a new kind of digital currency: “sand dollars.” These digital tokens are issued by the country’s central bank and are legal tender, with the same legal status as their old-fashioned money — paper notes and coins. The sand dollar is cash; it just doesn’t have a physical form. Residents of the Bahamas can now download an e-wallet onto their phones, load it with sand dollars, and spend away with a simple tap.

For now, only two countries have officially launched such central bank digital currencies (CBDCs): the Bahamas and Nigeria. But many more are actively running CBDC trials, including China and the countries of the Eastern Caribbean Currency Union. More than 100 countries are exploring the idea. In the US, the Federal Reserve issued a discussion paper on CBDCs in January 2022, considering all the risks and opportunities.

This is an exciting moment in the evolution of currency. Recent years have seen a boom in cryptoassets (often called “cryptocurrencies”) like bitcoin; now companies are innovating with less risky alternatives, including so-called stablecoins, and nations are exploring CBDCs. All of these are likely to exist side by side in a new continuum, with today’s two most common forms of money (cash and bank deposits) facing tough competition. This evolving landscape comes with the promise of making international payments easier, improving access to microloans and reducing transaction costs. But there are also great risks to avoid.

Money, let us not forget, has already come a long way, evolving from beads to gold coins to paper bills and credit cards. In the early 1900s, national currencies were typically backed by commodities; that is no longer true. The United States dollar, for example, was divorced from the gold standard in the early 1930s, becoming a “fiat” currency whose value is backed solely by the word of the government. Then, as computers rose in power and use, electronic payments became ubiquitous.

Recent years have seen a flurry of activity in the rise of cryptoassets (while often called cryptocurrencies, they aren’t really currencies, but rather assets with speculative value and appeal). These are privately issued and secured by cryptography — decentralized assets that allow peer-to-peer transactions without an intermediary like a bank. Since bitcoin’s launch in 2009, an estimated 14,000 different types of cryptoassets have been issued, from litecoin to ethereum, holding an estimated market value of US$2.3 trillion at the end of 2021. They are highly volatile, infrequently accepted and carry a high transaction cost.

The volatility of cryptoassets has created interest in stablecoins, which are typically issued by an entity such as a payment operator or bank, and attempt to offer price stability by linking their value to a fixed asset such as US dollars or gold. Tether gold and PAX gold are two of the most liquid gold-backed stablecoins. There are many stablecoins with various shades of stability, and their growth is exponential. Stablecoins, unlike cryptoassets, have the potential to become global payments instruments.

CBDCs can be thought of as a new type of fiat money that expands digital access to central bank reserves, making them available to the public at large instead of just commercial banks. A CBDC would combine the digital nature of banking with the peer-to-peer transactions of cash. But there are still many questions about how any given country’s CBDC might work: Would funds exist in an account at the bank, or would they come closer to cash, materializing as digital tokens? Would CBDCs pay interest rates like a bank deposit does, or not? In the Bahamas, the sand dollar is run through the country’s central bank, has certain quantity restrictions and does not pay interest.

There are some important advantages to CBDCs: They have the potential to make payment systems more cost-effective, competitive and resilient. They would reduce, for example, a nation’s cost of managing physical cash, a sizable expense for some countries that have a large land mass or many dispersed islands.

CBDCs could help improve cross-border payments, which currently rely on multilayered banking relationships, creating long payment chains that are slow, costly and hard to track. CBDCs could also help make payment systems more resilient through the establishment of a decentralized platform, essentially fortifying the payments infrastructure against operational risks and cyberattacks.

Many countries have large numbers of people without bank accounts: The “unbanked” often have no access to loans, interest or other financial and payment services. CBDCs could transform their lives by bringing them into the financial system.

But there are risks, too. A prominent one is that people might decide to hold a lot of CBDCs and suddenly withdraw their money from banks. Banks would then have to raise interest rates on deposits to retain customers, or charge higher interest rates on loans. Fewer people would get credit and the economy could slow. Also, if CBDCs decrease the costs of holding and transacting in foreign currency, countries with weak institutions, high inflation or volatile exchange rates might watch as consumers and firms abandon their domestic currencies wholesale.

There are ways to get around these problems. For instance, central banks could offer lower interest rates on CBDC holdings (these show up as liabilities on a central bank’s balance sheet) than on other forms of the central bank’s liabilities, or only distribute CBDCs through existing financial institutions.

Institutions are now racing to draw up new rules and regulations to cover all these contingencies and figure out how new forms of money should be treated: as deposits, securities or commodities. The intergovernmental watchdog Financial Action Task Force, for example, has amended its anti-money-laundering and counter-terrorist-financing standards in light of virtual assets; the Basel Committee on Banking Supervision has issued a paper on how banks can prudently limit their exposure to cryptoassets. The International Monetary Fund (where I work) is on the case, providing independent analysis of these issues.

Everyone will have to think fast and on their feet. Central banks will have to become more like Apple or Microsoft to keep CBDCs on the frontier of technology and in the wallets of users. Future money may be transferred in entirely new ways, including automatically by chips embedded in everyday products. This will require frequent tech redesigns and a diversity of currency types. Whatever form your money currently takes, in your bank, your wallet and your phone, expect the near future to look quite different.


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Viruses and bacteria travel in fluids, such as the air we breathe. Studying exhalations, toilet flushes and rain drops, with math and modeling, can sharpen the big-picture view of how to prevent infections.

When we think of the air we breathe, we usually don’t think fluids. But air is a fluid. And bacteria and viruses are carried by fluids. So understanding the dynamics of fluids — how they flow under the influence of various forces, such as gravity and any initial momentum imparted to the fluid — is crucial to understanding how viruses or other pathogens spread from place to place, from person to person.

Lydia Bourouiba made the connection while studying fluid dynamics at McGill University in Montreal. In 2003, she was partway through her PhD when the SARS epidemic struck. She realized then that she wanted her work to have an impact on public health.

Bourouiba now leads the Fluid Dynamics of Disease Transmission Laboratory at the Massachusetts Institute of Technology (MIT). For more than a decade, she has focused her attention on how fluids can help disease move from one host or reservoir to the next. Armed with high-speed cameras, some fancy mathematics and old-fashioned grit, Bourouiba studies everything from the motion of droplets that are ejected when we breathe, cough or sneeze to how splashes of water droplets from leaves can spread pathogens from plant to plant. She explored the current state of such knowledge in papers in the 2021 Annual Review of Fluid Mechanics and the Annual Review of Biomedical Engineering.

Knowable Magazine spoke to Bourouiba on how understanding the dynamics of fluids can inform public health measures and help limit the spread of infectious diseases such as Covid-19.

This conversation has been edited for length and clarity.

How did you get interested in fluid dynamics and infectious diseases?

When I took my first class on fluid dynamics, I fell in love with the topic, because of its beauty and universality. The beauty is tied to the mathematics of fluid dynamics, and that you may find fluid processes working at the scale of stars and galaxies, as well as at the cellular level.

Despite the age of the field, so many fundamental questions remain to be answered. I find such universality and depth beautiful. I have also always felt strongly about human rights, and in particular about equity in access to public health and education. It’s another side of who I am.

Coming back to fluids: Pathogens travel in fluids, whether they’re in the body or outside, and through air, which is a fluid as well. In combining fluid dynamics with applied questions about disease transmission and other topics, I saw a way to apply myself to fundamental open questions in fluid dynamics and mathematics. After much exploration, I embarked on a scientific journey that aligned with who I am and my values.

What was the state of knowledge when you began working on these problems?

When it comes to respiratory diseases, I found that the world of public health had dogmas about the spread of pathogens in droplets of mucus and saliva from person to person. There was also this notion of pathogens spreading via aerosols — the solid residues remaining in the air after the liquid in small droplets has evaporated. But there wasn’t much modern scientific evidence regarding the behavior of droplets or aerosols. The prevailing idea was that when we exhale, the droplets that come out follow isolated trajectories — that means a pathway influenced only by gravity’s pull and the drag of air, and not by the turbulent cloud of gas that’s emitted with them. These droplets can carry pathogens (the SARS-CoV-2 virus, for example, is about 100 nanometers across, orders of magnitude smaller than the droplets).

Our work showed that a central assumption in many infection control regimes — that droplets follow isolated trajectories unaffected by the gas cloud — was wrong.

If the droplets are considered in isolation, how far one droplet will go depends only on its initial momentum. When you do the aerodynamic calculation, you get a distance of 1 to 2 meters for the larger droplets. Using the same calculations, one can show that droplets and aerosols less than 50 micrometers in size would not travel more than a few centimeters, even if the droplets are ejected at very high speeds, because of the huge drag on them relative to their size. Our work showed that the prevailing notion — that droplets are emitted in isolation and follow individual trajectories — was wrong, and that this physical picture would have to be revised to accurately assess the risk posed by droplets laden with infectious pathogens.
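As a rough illustration of that isolated-droplet estimate, the short Python sketch below puts numbers on it using Stokes drag, which strictly holds only at low Reynolds number; the ejection speed and droplet properties are generic assumptions, so the results are order-of-magnitude figures rather than the lab's actual calculation.

# Order-of-magnitude estimate of how far an isolated small droplet travels
# before air drag removes its initial momentum (Stokes-drag approximation).
MU_AIR = 1.8e-5      # dynamic viscosity of air, Pa*s
RHO_DROP = 1000.0    # droplet density, kg/m^3 (roughly water)

def stopping_distance(diameter_m, ejection_speed=10.0):
    # Relaxation time tau = rho * d^2 / (18 * mu); penetration ~ v0 * tau.
    tau = RHO_DROP * diameter_m ** 2 / (18.0 * MU_AIR)
    return ejection_speed * tau

for d_um in (10, 30, 50):
    x = stopping_distance(d_um * 1e-6)
    print(f"{d_um:>3} um droplet ejected at 10 m/s stops after ~{x * 100:.1f} cm")

Even at 10 meters per second, droplets in the tens-of-micrometers range stall within centimeters on this estimate, which is exactly why the old isolated-trajectory picture could not account for longer-range spread.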

How did you begin studying what happens when we exhale?

At MIT, I had access to a center for high-speed imaging, the Edgerton Center, where I could use advanced imaging approaches to reveal what we can’t see with the naked eye. There are many ways to reveal what people exhale. For example, to study the liquid droplets in isolation, there is shadowgraphy or scattering, which involves imaging the shadows cast by the droplets.

I also investigated and developed other approaches to reveal not just the liquid phase but also the gas phase, including camera sensors that use a very high frame rate to capture these extremely fast processes. The gas emitted in a breath encapsulates, traps and transports the liquid droplets, and so is clearly critical. We’re talking about emissions — a high-momentum, turbulent movement of the droplet-laden puff of warm and moist air that we exhale when we breathe, talk, sing, cough or sneeze — that occur within 100 to 200 milliseconds.

Some imaging techniques can capture the light scattered by the microdroplets of the exhaled cloud; others can capture changes in air density (due to changes in temperature and moisture). Combining approaches and algorithms, we can separate the largest liquid droplets from the gas puff and its cargo of droplets, some invisible to the naked eye.

These techniques allowed me to start modeling the physical process: the emergence of the exhalation and its spread in the form of a multiphase, turbulent gas cloud rather than in isolated droplet trajectories. The cloud actually governs the distance that droplets of most sizes can reach. The exhalation’s movement is initially influenced by the momentum of the gas phase and then by the background indoor airflow in a more passive, turbulent dispersal pattern.

How did you get humans to produce the necessary exhalations for the studies?

Coughing, talking and breathing are, of course, straightforward. For sneezing, it varies. Some individuals are sensitive to light and respond to it by sneezing. Others need to tickle their nose. Those involved found their own trick. Because it’s a reflex, once the sneeze is triggered it proceeds with little difference from a “natural” one.

What did you find once you did all the imaging and modeling?

I found that these earlier studies had not accounted for the presence of a gas cloud. From the point of view of fluid dynamics, most of the momentum is not in the liquid phase (the droplets). It’s mostly in the gas phase, which traps droplets within it and carries them forward in a concentrated localized packet. (That’s in contrast to the previous understanding, which was that the droplets would be spread out fairly uniformly in an indoor space.) And therefore, the overall evolution of this ejecta — its motion in space and time — depends on the physics of the gas cloud, at least at this first phase of exhalation.

Some of these drops, of course, can escape from the gas cloud and settle on surfaces, but where they escape, how they escape and where they end up, is primarily driven, again, by the physics of the cloud. The distribution, distances and timescales associated with a gas cloud laden with droplets are dramatically different than those of isolated, individual drops. The old paradigm did not account for this.

The early stage of the cloud is dominated by the very high momentum of the exhalation cloud itself, not by the background indoor airflow, which may be just a few centimeters per second and much slower than the average speed of a breath. So, initially the dynamics of the cloud dominates the dispersal of pathogen-laden droplets. The cloud can span a room in seconds to minutes. As it moves forward, the cloud draws in ambient air and slows down.

Eventually there’s a point of transition, where the exhaled cloud speed becomes comparable to that of the background air. And only then does the background airflow take over in what is a more chaotic dispersal of the droplets or aerosols that were concentrated in the respiratory cloud up to that point. This is when the concentrated packets of particulates begin to break apart and start following the pattern of airflow.

Our first observations were surprising, as clearly the reality looked very different from the existing descriptions that essentially ignored the physics of the exhaled cloud.

What kind of distances did you measure?

In the earlier scenario, small, isolated droplets, even emitted at the highest exhalation speeds, can be shown to go only a few centimeters before air resistance brings them down. Isolated, larger droplets at similar speeds are less sensitive to drag and can go further, up to 1 to 2 meters (about 3.3 to 6.6 feet).

Violent exhalations — coughing, sneezing, shouting or singing — may send some disease-laden droplets hurtling as much as 25 feet from the source.

How far the droplets in the cloud travel, however, is governed by the cloud’s dynamics — except for huge blobs more than a millimeter in diameter. These large blobs immediately leave the cloud. But the range of most of the smaller drops is enhanced by the gas cloud. For the most violent exhalations caused by coughing, sneezing, shouting or even singing, drops smaller than 20 or 30 micrometers across can go 200 times farther than they would if they were emitted in isolation.

In fact, the cloud and its payload can reach distances of up to 6 to 8 meters (about 19 to 26 feet) for the most violent exhalations! Even with normal breathing, the gas cloud can easily spread 2 meters with its payload of small, suspended droplets.

What happens to the gas cloud over time?

As the cloud moves forward, it sweeps up ambient air, expands and slows down. So drops moving faster than the mean speed of the cloud can escape, leaving fewer and fewer drops trapped in the cloud. When the background air flow takes over — when it’s moving faster than the cloud — the opposing forces become the ambient air flow speed versus the settling speed of the suspended particles. Droplets invisible to the naked eye, less than 10 micrometers in size (but still more than 100 times larger than most viruses), can remain suspended in the air for hours to days, depending on the background airflow.
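The slow settling of the smallest droplets can be illustrated with the same kind of back-of-the-envelope estimate: under the Stokes-drag approximation, the terminal settling speed in still air grows with the square of the droplet diameter. The sizes and the 1.5-meter fall height below are illustrative assumptions, and real suspension times also depend on the background airflow and evaporation described above.

# Still-air settling-time estimate for small droplets (Stokes-drag regime).
G = 9.81             # gravitational acceleration, m/s^2
MU_AIR = 1.8e-5      # dynamic viscosity of air, Pa*s
RHO_DROP = 1000.0    # droplet density, kg/m^3 (roughly water)

def settling_time_hours(diameter_m, fall_height_m=1.5):
    # Terminal settling speed v = rho * g * d^2 / (18 * mu).
    v_settle = RHO_DROP * G * diameter_m ** 2 / (18.0 * MU_AIR)
    return fall_height_m / v_settle / 3600.0

for d_um in (1, 3, 5):
    t = settling_time_hours(d_um * 1e-6)
    print(f"{d_um} um droplet: roughly {t:.1f} hours to fall 1.5 m in still air")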

Is this what one means by pathogen-carrying droplets being airborne?

Once you’re talking about a respiratory disease, you’re always exhaling pathogen-carrying droplets into the air. But, to cause infection, they need to be inhaled and reach their target tissue in the respiratory system. The question is one of route and level of exposure. That depends on understanding the dynamics of the gas cloud and the fate of its payload of drops and their contents, as well as how the pathogen interacts with the environment. This is dynamic, not static, so we need to incorporate such dynamic thinking about these questions to develop more robust fundamentals that can lead to improved surveillance and mitigation.

For example, the fact that SARS-CoV-2–containing droplets can remain in the air for hours with the virus potentially still being viable and dispersed indoors means that healthcare workers caring for Covid-19 patients should use high-grade respirators, and they should be putting them on well before coming face-to-face with the infected individual, not just when they are within 6 feet of the patient.

Should the rest of us wear masks?

Even at this stage of the pandemic, and given the new variants, it is key to still wear masks as an effective means of disease control in addition to personal protection. But it is important to understand that fluids follow the path of least resistance — “fluids are lazy” as we say. If a mask is not sealed — it’s open on the sides — most of the fluid passes through the largest openings, not the mask’s filter material.

However, encountering an obstacle does lower the exhalation cloud’s momentum, which reduces its range and means the cloud can be overtaken by the room’s air flow earlier in its trajectory. If most of the flow passes through the mask’s filter, as happens in well-sealed masks, what comes out is a gas flow with lowered viral particle content.  

How about ventilation in indoor spaces? What effect does it have on the spread of the droplets?

Most buildings in the US have mechanical mixing ventilation. That means that the inlet and outlet are both near the ceiling. We already know that displacement ventilation might be better at ensuring that the contaminants stay in the upper room levels rather than in the breathing zone. In displacement ventilation, cooler clean air is slowly injected from the floor or lower levels and exits from the ceiling or upper room levels. At a steady state in an ideal setting, you can create a kind of stratification, such that the breathing zone is fresher, with fewer contaminants, than the upper layer of air even with people in the room.

Obviously, in an emergency response setting, one has to work with whatever ventilation system is in place. So it is important to ensure that there’s enough fresh air coming in from the outside per unit time per person. We know from studies of tuberculosis that at least 10 to 15 liters of fresh air per second per person is needed to reduce airborne transmission of respiratory diseases indoors. That’s achievable with modern ventilation systems and even with good portable air purifiers.
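To get a feel for what 10 to 15 liters per second per person implies, it can be converted into air changes per hour for a given room; the classroom dimensions and occupancy in this Python sketch are invented for illustration and are not from the interview.

# Convert a per-person fresh-air rate into air changes per hour (ACH).
def air_changes_per_hour(persons, fresh_air_lps_per_person, room_volume_m3):
    flow_m3_per_hour = persons * fresh_air_lps_per_person * 3.6  # 1 L/s = 3.6 m^3/h
    return flow_m3_per_hour / room_volume_m3

# Example: a 7 m x 10 m room with a 3 m ceiling (210 m^3) holding 25 people.
for lps in (10, 15):
    ach = air_changes_per_hour(25, lps, 210.0)
    print(f"{lps} L/s per person -> about {ach:.1f} air changes per hour")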

Is this a concern only for hospitals or also for other indoor spaces, like grocery stores, restaurants and schools?

It’s a concern everywhere, particularly in smaller, older buildings that are not up to basic ventilation standards and that are planning to return to full or even half occupancy. Generally, building ventilation standards are not optimized for reduction of respiratory diseases, but for comfort levels. For normal occupancy during a pandemic, you need to exceed those basic standards.

You have also studied how non-respiratory infections can spread in hospitals. Tell us about that.

We looked at mechanisms that could spread spores of Clostridium difficile, a bacterium that causes serious, sometimes life-threatening infections of the colon. Hospitals are often important contributors to the transmission of this gastrointestinal disease. In North America, many hospitals use high-pressure flushes in toilets, for energy efficiency. And, again surprisingly, little work had been done on the problem of emissions from these flushes from the fluid dynamics and design points of view.

We wanted to see if the design of the devices could in fact play a role in the airborne route of transmission of C. difficile, rather than just in surface contamination. We used light-scattering and high-speed imaging and other methods to study the fragmentation — the generation of airborne droplets — from a range of flush systems.

We found a very interesting pattern. We quantified when and how contaminating droplets are created by the fluid fragmentation that is enhanced by the current designs. Plumes of small droplets are created throughout the flush process and these droplets are carried around by the background airflow and can remain suspended in a room for a long time.

The issue we revealed is that typical hospital cleaning protocols may enhance such emissions. Cleaning agents, or surfactants, reduce the surface tension of the water, so subsequent flushes can end up aerosolizing the fluid more extensively. While certain detergents may kill viruses and bacteria, they typically do not neutralize bacterial spores. So current toilet designs and cleaning protocols can enhance the emission of such spores.

Theoretically, those spores may end up infecting someone else. We need to be more systematic in studying these effects and developing fundamental science insights that can one day lead to improved patient-management and infection-control protocols at the frontline.

Tell us about your work with plants.

The question of contamination and disease transmission holds for animals and plants as well. I got particularly interested in the transmission of leaf diseases like rust in plants such as wheat. The connection between droplets and transmission became clear when we learned about the empirical evidence linking rainfall to the appearance, a few weeks later, of lesions on wheat and other primary crops.

In the lab, we started studying details of how drops of water behave as they fall on plants. The high-speed imaging revealed a rich set of processes of breakup and fragmentation of drops that had never been reported and were surprising. Most plant leaves have been thought to be super-hydrophobic, as in water-resistant, built to let water drops slide off like a raincoat. Lotus leaves are a good example of that. Yet, we found that most common crop leaves are somewhere between the extremes: neither fully wetting (where water spreads into a thin film that coats the leaf) nor fully hydrophobic. So, fluids interact with these leaves and fragment in more complex ways than would be anticipated if leaves were super-hydrophobic.

Also, leaves and stems are compliant: they move and oscillate when hit by a rain or irrigation drop. We discovered that the interaction between drops of water impacting the leaves and the wetting and mechanical properties of the leaves can cause water to fragment in a way that may be particularly effective for spreading pathogens. Disturbed by an impact, a contaminated, standing drop of water on a leaf may stretch out in a crescent shape that helps disperse any disease agents within it.

Depending on the balance of the wetting and mechanical properties, in particular the compliance or stiffness of their leaves, plants can favor short-range transmission of large drops that contain a lot of pathogens or long-range transmission of smaller droplets each containing comparatively fewer pathogens but dispersed over a greater area.

Without even knowing about the genetic susceptibility of plants to a particular or emerging leaf pathogen, one can leverage information about the dynamics of the leaves to select for crop combinations in fields. The goal is to set up contamination barriers while reducing yield losses by designing a polyculture that integrates firewalls — strategically placed crops associated with a shorter range of contaminant dispersal via droplet fragmentation.

These results involve some pretty diverse phenomena, whether it’s the transmission of respiratory diseases, or the spread of infections in hospitals, or transmission of diseases in plants. Is there a common theme?

All of these insights are linked by their fascinating fluid dynamics and interfacial physics — what happens when fluids and solids meet. It’s curiosity-driven and focused on fundamentals. When a crisis hits, it is not obvious early on what kinds of basic research will become important. So, it’s crucial to support research that may not appear ready for immediate use.

The type of research I do, and did for years prior to this pandemic, is focused on the intersection of fundamental fluid dynamics, biophysics and infectious disease. It was not particularly popular or mainstream, nor was it funded by traditional sources. Nevertheless, we carried on, and the insights we gained turned out to be central to key safety measures and led to an explosion of research in this area that will enable us to better prepare and respond to future crises.

When we face new challenges, it is often the insights from basic, scientist-driven research that can enable or suggest solutions. That is why it is so vital to be wary of group-think and nurture intellectual freedom and diversity in the research enterprise.

Given that you are studying the transmission of respiratory diseases, how have the many months of the pandemic been for you?

Very busy and grueling. There’s still a sense of urgency, particularly given the resurgence of Covid-19 infections with the fourth wave and with the newer, highly transmissible variants. But there is also a sense of moral obligation and duty to educate, communicate and share in any way we can. This is a mission way beyond the usual ivory tower of academia. The pandemic and the knowledge needed to combat it are both still unfolding. Staying focused on giving back to society should be core to the mission of universities, particularly in this time of need.

Editor’s note: This article was revised on September 6, 2021, to correct two errors. The piece should have said that the previous understanding was that droplets, not the gas cloud, would be distributed fairly uniformly through a room. In addition, the description of a droplet's trajectory considered in isolation should have said that it would not have been influenced by the gas cloud emitted with it as opposed to being influenced by other droplets, as was stated originally.

This article is part of Reset: The Science of Crisis & Recovery, an ongoing Knowable Magazine series exploring how the world is navigating the coronavirus pandemic, its consequences and the way forward. Reset is supported by a grant from the Alfred P. Sloan Foundation.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Estonia’s e-governance revolution is hailed as a voting success – so why are some US states pulling in the opposite direction?

Estonian Prime Minister Kaja Kallas reacts to e-vote results on March 5, 2023. Raigo Pajula/AFP via Getty Images
Erik S. Herron, West Virginia University

Estonia, a small country in northern Europe, reached a digital milestone when the country headed to the polls on March 5, 2023.

For the first time, over 50% of voters cast their ballots online in a national parliamentary election.

As a political science researcher who focuses on elections, I was in Estonia to learn about the process of internet voting. In the capacity of an international election observer, I visited standard polling places and also attended the final internet vote count held in the parliament building.

As someone who also regularly volunteers as a poll worker in the United States, I found the contrast between Estonia’s integrated information systems and internet voting, and the patchwork system operating in the U.S., to be notable. And with several U.S. states withdrawing from the Electronic Registration Information Center, or ERIC, that contrast is growing sharper.

I believe Estonia offers America an important example of how information sharing can be used to enhance the integrity of elections.

Estonia’s e-governance system

Estonia has long been seen as a pioneer in digitizing the democratic process.

Internet voting, which began in Estonia in 2005, is just a small part of the e-governance ecosystem that all Estonians access regularly. Using a government-issued ID card that allows Estonians to identify themselves and securely record digital signatures, they can register a newborn baby, sign up for social benefits, access health records and conduct almost any other business they have with a government agency. This ID card is mandatory for all citizens.

Central to the success of Estonia’s digitization revolution is a secure data-sharing system known as the X-Road.

Government agencies collect only the personal information they require to provide their services, and if another agency has already gathered a piece of information, then it is accessible through the X-Road. In other words, each piece of personal information is collected only once and then shared securely when it is needed. A person’s home address, for example, is collected by the population register and no other government entity. If it’s needed by election administrators, health care workers, a school or any other agency, those organizations request it from the population register online.

So, imagine that you are applying for admission to a university, which requires both your date of birth and your school grades. These are stored by two different agencies. By using your ID card, you can auto-populate the application using data that the system instantaneously pulls in from the two agencies that store that information.
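In software terms, the principle is that each piece of data has exactly one owning registry and every other service queries it on demand rather than storing its own copy. The Python sketch below is only a toy version of that idea: the registry names, identifiers and fields are invented, and the real X-Road exchanges signed, logged messages between agencies' security servers rather than in-memory objects.

from dataclasses import dataclass

@dataclass
class Registry:
    # One agency is the single source of truth for each field it holds.
    name: str
    records: dict   # personal_id -> {field_name: value}

    def query(self, personal_id, field):
        return self.records[personal_id][field]

# Hypothetical registries and data, for illustration only.
population = Registry("population register", {"39001010000": {"birth_date": "1990-01-01"}})
education = Registry("education register", {"39001010000": {"grades": "A, A, B"}})

def prefill_university_application(personal_id):
    # The application form stores nothing itself; each field is pulled
    # from the one registry responsible for it, as in the example above.
    return {
        "birth_date": population.query(personal_id, "birth_date"),
        "grades": education.query(personal_id, "grades"),
    }

print(prefill_university_application("39001010000"))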

Because of this information sharing, election officials know who is eligible to vote and which online ballot they should receive no matter where they live in the country.

A decentralized approach in U.S.

For many reasons, the U.S. system of election management is very different from Estonia’s, and online voting is rare.

Developing and maintaining an e-governance system requires technical, political and social forces to align. Because each U.S. state manages its own elections, and decisions can vary at the county level or below, it is difficult to envision a consistent technical solution. It is also more challenging to coordinate a solution across such a large country and safely implement secure online voting given current U.S. internet voting technology.

Additionally, concerns about federal interference in state matters have prompted political and social pushback on recent election reforms. Public consensus on instituting a nationally mandated electronic ID similar to the one that forms the foundation of Estonia’s internet voting appears unlikely.

Research shows that most Estonians trust their e-governance systems, although there are skeptics. Some critiques focus on perceived security shortcomings.

The internet voting process has also become politicized. In the most recent election, one political party that had discouraged its voters from using online voting – and unsurprisingly trailed its rivals in the online count – challenged the process in court. Its effort to annul internet voting failed. The U.S. witnessed a similar dynamic around absentee ballots in the 2020 elections.

Nearly all U.S. voters vote in person or by absentee or mail-in ballot. Michael M. Santiago/Getty Images

Balancing security, efficiency and access

While the United States’ decentralized approach has its advantages, it also creates shortcomings in security, efficiency and access.

Secure elections mean that only people who have the right to vote are able to cast a ballot and that they aren’t improperly influenced in the process. Efficient elections mean the process is smooth — voters don’t have to wait in long lines, and their ballots are counted quickly and accurately. And access means that people who have the right to vote can register, gather the information they need in order to vote, and successfully cast their ballot.

Sometimes changes to voting practices that enhance one of these values – say, security – may create impediments for another – say, access. Requiring a photo ID to vote, for example, may reduce the small likelihood of voter impersonation, but it also risks preventing a legitimate voter who forgets to bring, or doesn’t have, a valid photo ID from exercising their right to vote. Finding an acceptable balance among these values is a challenge for citizens and policymakers alike.

Misinformation derails digital efforts

Several states, including my own state of West Virginia, recently made a decision that I believe undermines all three of these values by making our elections less secure, less efficient and less accessible.

In early March, West Virginia joined Florida, Missouri, Alabama and Louisiana in withdrawing from the Electronic Registration Information Center. ERIC is a multistate, data-sharing effort to make voter rolls more accurate and encourage eligible citizens to vote. The 28 participating states and the District of Columbia provide voter registration and driver’s license data to ERIC and receive an analysis that shows who has moved, who has died and who is eligible to vote but has not registered.

These reports help states clean up their voter rolls, identify incidents of fraud and provide unregistered voters with information about how to vote.

In other words, ERIC is designed to enhance security, efficiency and access. However, over the past year, unsubstantiated claims have circulated that ERIC is being used as a partisan tool to undermine election integrity.

ERIC was established, however, as a nonpartisan information provider with bipartisan support. States that exit ERIC may be sacrificing the integrity of their election process based on unfounded conspiracy theories.

The U.S. can learn a tremendous amount from Estonia’s e-governance revolution. Estonia faces a hostile security environment with an antagonistic Russia next door. But its integrated systems have helped balance security, efficiency and access in a wide range of government services. With the decision to withdraw from ERIC, some states are in danger of pulling the U.S. in the other direction.

Erik S. Herron, Professor of Political Science, West Virginia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Misuse of Adderall promotes stigma and mistrust for patients who need it – a neuroscientist explains the science behind the controversial ADHD drug

Many people with ADHD are finding it difficult to get their Adderall prescriptions filled amid the shortage. AP Photo/Jenny Kane
Habibeh Khoshbouei, University of Florida

The nationwide shortages of Adderall that began in fall 2022 have brought renewed attention to the beleaguered drug, which is used to treat attention-deficit/hyperactivity disorder and narcolepsy.

Adderall became a go-to drug for ADHD over the past two decades but quickly came under fire because of overprescription and misuse. In some cases, people who do not have a proper ADHD diagnosis are using the drug for its perceived cognitive-enhancing effects, leading to an increase in its abuse rates and drug dependence.

Not only has misuse of Adderall led to its stigmatization as a drug of abuse, but it can also lead to negative physical side effects, including cardiovascular complications, sleep disturbances and addiction.

I am a neuroscientist with a focus on studying the dopamine system in both the brain and peripheral immune system. My research specifically examines the short- and long-term effects of psychostimulant drugs like methamphetamine on a protein that transports dopamine, a chemical messenger that is not properly regulated in people with ADHD.

Through this work, I aim to better understand the complex interplay between drug use and the dopamine system, which may ultimately lead to new treatments for drug addiction and related disorders. Unfortunately, I’ve seen that the stigma and false narratives surrounding Adderall have made it more difficult for patients who need this medication to access it.

A surge in demand for Adderall during the pandemic, along with supply chain issues, has led to a nationwide shortage.

How Adderall treats ADHD

Adderall is the commercial name of a mixture of a few types of amphetamines, which are stimulants that increase dopamine levels in the brain to help address deficits in those with ADHD.

The underlying processes that lead to ADHD are poorly understood. The core symptoms include hyperactivity, inattention, mood swings, temper, disorganization, stress sensitivity and impulsivity.

Multiple studies suggest that these symptoms may be due to the improper regulation of dopamine levels in the brain.

Neurons have a protein called the dopamine transporter that normally functions like a vacuum cleaner, sucking the chemical back into the neuron. But people with ADHD have a leaky dopamine transporter, meaning that dopamine gets pushed out of the neuron into the surrounding environment of the synapse – the space between neurons where chemical messages are passed back and forth.

Adderall is thought to work by blocking this leaky transporter so that dopamine no longer spews out of the neuron, stabilizing dopamine levels in the brains of ADHD patients and reducing their debilitating symptoms.

Adderall helps stabilize dopamine levels in the brains of people with ADHD.

The paradoxical effects of Adderall

People who don’t have ADHD usually have a functioning dopamine transporter that is able to maintain balanced levels of this chemical inside and outside of the neuron. When they use amphetamines like Adderall, however, the drug can disrupt the transporter’s ability to remove dopamine from the synapse as well as cause it to work backward and push dopamine out of the neuron. This results in too much dopamine in the synapse, which can lead to feelings of euphoria and increased wakefulness.

While these effects might sound good on the surface, misusing the drug is problematic because it can lead to cardiovascular problems. Current evidence suggests that Adderall doesn’t significantly increase cardiovascular disease risk for people with ADHD. But people without ADHD who misuse Adderall can develop a dependence on the drug and take it at dangerous dosages.

Adderall misuse doesn’t just set up a harmful cycle in which the drug’s rewarding effects reinforce further use. It also entrenches dependence by causing negative emotional states that some researchers have dubbed the “dark side” of addiction. Excessive activation of the brain’s reward system disrupts how it normally functions, resulting in a decrease in overall sensitivity to reward signals. It also leads to persistent activation of the brain’s stress systems, which results in feelings of anxiety and restlessness in the absence of the drug.

Adderall works when you need it

Other drugs like methylphenidate, known by the brand name Ritalin, also treat ADHD by targeting the dopamine transporter.

While Adderall and Ritalin reduce the hyperactive, impulsive and inattentive symptoms in people with ADHD by stabilizing dopamine levels, they do so using different mechanisms. Ritalin reduces the dopamine transporter’s leakiness by directly blocking entry. Adderall also reduces leakiness, but by competing with dopamine for entry into the transporter.

In people without ADHD, both Ritalin and Adderall significantly increase brain dopamine and induce euphoria, hyperactivity and other symptoms. However, both drugs are equally beneficial to patients with ADHD.

To treat anxiety, depression, narcolepsy and other neuropsychiatric diseases, millions of patients worldwide take medications that target the transport of dopamine and other neurotransmitters like norepinephrine and serotonin, yet those drugs carry no stigma from recreational misuse.

Because Adderall can induce euphoria and hyperactivity in people who do not need it, its misuse and abuse have unfortunately promoted false narratives about the drug among those who do need it. For patients with ADHD, however, it can reduce their symptoms and greatly improve quality of life.

Habibeh Khoshbouei, Professor of Neuroscience, University of Florida

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Blessed are the (tiny) cheesemakers

Cheese is not just a tasty snack — it’s an ecosystem. And the fungi and bacteria within that ecosystem play a big part in shaping the flavor and texture of the final product.

Some cheeses are mild and soft like mozzarella, others are salty-hard like Parmesan. And some smell pungent like Époisses, a funky orange cheese from the Burgundy region in France.

There are cheeses with fuzzy rinds such as Camembert, and ones marbled with blue veins such as Cabrales, which ripens for months in mountain caves in northern Spain.

Yet almost all of the world’s thousand-odd kinds of cheese start the same, as a white, rubbery lump of curd.

How do we get from that uniform blandness to this cornucopia? The answer revolves around microbes. Cheese teems with bacteria, yeasts and molds. “More than 100 different microbial species can easily be found in a single cheese type,” says Baltasar Mayo, a senior researcher at the Dairy Research Institute of Asturias in Spain. In other words: Cheese isn’t just a snack, it’s an ecosystem. Every slice contains billions of microbes — and they are what makes cheeses distinctive and delicious.

People have made cheese since the late Stone Age, but only recently have scientists begun to study its microbial nature and learn about the deadly skirmishes, peaceful alliances and beneficial collaborations that happen between the organisms that call cheese home.

To find out what bacteria and fungi are present in cheese and where they come from, scientists sample cheeses from all over the world and extract the DNA they contain. By matching the DNA to genes in existing databases, they can identify which organisms are present in the cheese. “The way we do that is sort of like microbial CSI, you know, when they go out to a crime scene investigation, but in this case we are looking at what microbes are there,” Ben Wolfe, a microbial ecologist at Tufts University, likes to say.
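As a rough, purely illustrative sketch of that matching idea (not the pipeline researchers actually use), the toy Python snippet below checks a few made-up DNA reads against a tiny invented reference of marker-gene fragments. The species names are cheese microbes mentioned in this article, but the sequences are fabricated, and the exact-substring test stands in for the alignment searches that real studies run against curated sequence databases.

```python
# Toy "microbial CSI": figure out which organisms a batch of DNA reads
# could have come from by matching them against reference marker sequences.
# Everything here is invented for illustration; real analyses use full
# sequence databases and alignment tools rather than exact substring tests.

reference = {
    # hypothetical marker-gene fragments (not real sequences)
    "Lactococcus lactis":    "ATGGCTCAGGACGAACGCTGGCGGCGTG",
    "Brevibacterium linens": "ATGGCGTGCTTAACACATGCAAGTCGAA",
    "Geotrichum candidum":   "TTGGTCATTTAGAGGAAGTAAAAGTCGT",
}

reads = [
    "CAGGACGAACGCTGGCGG",  # fragment of the L. lactis marker above
    "TTAACACATGCAAGTCGA",  # fragment of the B. linens marker above
    "GGGGGGGGGGGGGGGGGG",  # matches nothing in the reference
]

# Count how many reads match each reference species.
counts = {species: 0 for species in reference}
for read in reads:
    for species, marker in reference.items():
        if read in marker:
            counts[species] += 1

# Report species from most to fewest matching reads.
for species, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{species}: {n} matching read(s)")
```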

Early on, that search yielded surprises. For example, cheesemakers often add starter cultures of beneficial bacteria to freshly formed curds to help a cheese on its way. Yet when Wolfe’s group and others examined ripened cheeses, they found that the microbial mixes — microbiomes — of the cheeses showed only a passing resemblance to those cultures. Often, more than half of the bacteria present were microbial “strangers” that had not been in the starter culture. Where did they come from?

Many of these microbes turned out to be old acquaintances, but ones we usually know from places other than cheese. Take Brachybacterium, a microbe present in Gruyère that is more commonly found in soil, seawater and chicken litter (and perhaps even an Etruscan tomb). Or bacteria of the genus Halomonas, which are usually associated with salt ponds and marine environments.

Then there’s Brevibacterium linens, a bacterium that has been identified as a central contributor to the stinkiness of Limburger. When not on cheese, it can often be found in damp areas of our skin such as between our toes. B. linens also adds characteristic notes to the odor of sweat. So when we say that dirty feet smell “cheesy,” there’s truth to it: The same organisms are involved. In fact, as Wolfe once pointed out, the bacteria and fungi on feet and cheese “look pretty much the same.” (An artist in Ireland demonstrated this some years ago by culturing cheeses with organisms plucked from people’s bodies.)

Initially, researchers were dumbfounded by how some of these microbes ended up on and in cheese. Yet, as they sampled the environment of cheesemaking facilities, a picture began to emerge. The milk of cows (or goats or sheep) contains some microbes from the get-go. But many more are picked up during the milking and cheesemaking process. Soil bacteria lurking in a stable’s straw bedding might attach themselves to the teats of a cow and end up in the milking pail, for example. Skin bacteria fall into the milk from the hand of the milker or get transferred by the knife that cuts the curd. Other microbes enter the milk from the storage tank or simply drift down off the walls of the dairy facility.

Some microorganisms are probably brought in from surprisingly far away. Wolfe and other researchers now suspect that marine microbes such as Halomonas get to the cheese via the sea salt in the brine that cheesemakers use to wash down their cheeses.

A simple, fresh white cheese like petit-suisse from Normandy might mostly contain microbes of a single species or two. But in long-ripened cheeses such as Roquefort, researchers have detected hundreds of different kinds of bacteria and fungi. In some cheeses, more than 400 different kinds have been found, says Mayo, who has investigated microbial interactions in the cheese ecosystem. Furthermore, through repeated testing, scientists have observed that a cheese can host a succession of microbial settlements whose rise and fall can rival that of empires.

Consider Bethlehem, a raw milk cheese made by Benedictine nuns in the Abbey of Regina Laudis in Connecticut. From the day it gets made (or “born,” as cheesemakers say) to when it’s fully ripe about a month later, Bethlehem changes from a rubbery, smooth disk to one with a dusty white rind sprouting tiny fungal hairs, and eventually to a darkly mottled surface. If you were to look with a strong microscope, you could watch as the initially smooth rind becomes a rugged, pocketed terrain so densely packed with organisms that they form biofilms similar to the microbial mats around bathroom drains. A single gram of rind from a fully ripened cheese might contain a good 10 billion bacteria, yeasts and other fungi.

But the process usually starts simply. Typically, the first microbial settlers in milk are lactic acid bacteria (LABs). These LABs feed on lactose, the sugar in the milk, and as their name implies, they produce acid from it. The increasing acidity causes the milk to sour, making it inhospitable for many other microbes. That includes potential pathogens such as Escherichia coli, says Paul Cotter, a microbiologist at the Teagasc Food Research Centre in Ireland who wrote about the microbiology of cheese and other foods in the 2022 Annual Review of Food Science and Technology.

However, a select few microorganisms can abide this acid environment, among them certain yeasts such as Saccharomyces cerevisiae (baker’s yeast). These microbes move into the souring milk and feed on the lactic acid that LABs produce. In doing so, they neutralize the acidity, eventually allowing other bacteria such as B. linens to join the cheesemaking party.

As the various species settle in, territorial struggles can ensue. A study in 2020 that looked at 55 artisanal Irish cheeses found that almost one in three cheese microbes possessed genes needed to produce “weapons” — chemical compounds that kill off rivals. At this point it isn’t clear whether, or how many of, these genes are switched on, says Cotter, who was involved in the project. (Should these compounds be potent enough, he hopes they might one day become sources for new antibiotics.)

But cheese microbes also cooperate. For example, the Saccharomyces cerevisiae yeasts that eat the lactic acid produced by the LABs return the favor by manufacturing vitamins and other compounds that the LABs need. In a different sort of cooperation, threadlike fungal filaments can act as “roads” for surface bacteria to travel deep into the interior of a cheese, Wolfe’s team has found.

By now you might have started to suspect: Cheese is fundamentally about decomposition. Like microbes on a rotten log in the woods, the bacteria and fungi in cheese break down their environment — in this case, the milk fats and proteins. This makes cheeses creamy and gives them flavor.

Mother Noella Marcellino, a longtime Benedictine cheesemaker at the Abbey of Regina Laudis, put it this way in a 2021 interview with Slow Food: “Cheese shows us what goodness can come from decay. Humans don’t want to look at death, because it means separation and the end of a cycle. But it’s also the start of something new. Decomposition creates this wonderful aroma and taste of cheese while evoking a promise of life beyond death.”

Exactly how the microbes build flavor is still being investigated. “It’s much less understood,” says Mayo. But a few things already stand out. Lactic acid bacteria, for example, produce volatile compounds called acetoin and diacetyl that can also be found in butter and accordingly give cheeses a rich, buttery taste. A yeast called Geotrichum candidum brings forth a blend of alcohols, fatty acids and other compounds that impart the moldy yet fruity aroma characteristic of cheeses such as Brie or Camembert. Then there’s butyric acid, which smells rancid on its own but enriches the aroma of Parmesan, and volatile sulfur compounds whose cooked-cabbage smell blends into the flavor profile of many mold-ripened cheeses like Camembert. “Different strains of microbe can produce different taste components,” says Cotter.

All a cheesemaker does is set the right conditions for the “rot” of the milk. “Different bacteria and fungi thrive at different temperatures and different humidity levels, so every step along the way introduces variety and nuance,” says Julia Pringle, a microbiologist at the artisan Vermont cheesemaker Jasper Hill Farm. If a cheesemaker heats the milk to over 120 degrees Fahrenheit, for example, only heat-loving bacteria like Streptococcus thermophilus will survive — perfect for making cheeses like mozzarella.

Cutting the curd into large chunks means that it will retain a fair amount of moisture, which will lead to a softer cheese like Camembert. On the other hand, small cubes of curd drain better, resulting in a drier curd — something you want for, say, a cheddar.

Storing the young cheese at warmer or cooler temperatures will again encourage some microbes and inhibit others, as does the amount of salt that is added. So when cheesemakers wash their ripening rounds with brine, it not only imparts seasoning but also promotes colonies of salt-loving bacteria like B. linens that promptly create a specific kind of rind: “orangey, a bit sticky, and kind of funky,” says Pringle.

Even the tiniest changes in how a cheese is handled can alter its microbiome, and thus the cheese itself, cheesemakers say. Switch on the air exchanger in the ripening room by mistake so that more oxygen flows around the cheese, and suddenly molds that weren’t there before will sprout.

But surprisingly, as long as the conditions remain the same, the same communities of microbes will show up again and again, researchers have found. Put differently: The same microbes can be found almost everywhere. If a cheesemaker sticks to the recipe for a Camembert — always heats the milk to the relevant temperature, cuts the curd to the right size, ripens the cheese at the appropriate temperature and moisture level — the same species will flourish and an almost identical kind of Camembert will develop, whether it’s on a farm in Normandy, in a cheesemaker’s cave in Vermont or in a steel-clad dairy factory in Wisconsin.

Some cheesemakers had speculated that cheese was like wine, which famously has a terroir — that is, a specific taste that is tied to its geography and is rooted in the vineyard’s microclimate and soil. But apart from subtle nuances, if everything goes well in production, the same cheese type always tastes the same no matter where or when it’s made, says Mayo.

By now, some microbes have been making cheese for people for so long that they have become — in the words of microbiologist Vincent Somerville at the University of Lausanne in Switzerland — “domesticated.” Somerville studies genomic changes in cheese starter cultures used in his country. In Switzerland, cheesemakers traditionally hold back part of the whey from a batch of cheese to use again when making the next one. It’s called backslopping, “and some starter cultures have been continuously backslopped for months, years, and even centuries,” says Somerville. During that time, the backslopped microbes have lost genes that are no longer useful for them in their specialized dairy environment, such as some genes needed to metabolize carbohydrates other than lactose, the only sugar found in milk.

But not only has cheesemaking become tamer over time, it is also cleaner than it used to be — and this has had consequences for its ecosystem. These days, many cows are milked by machines and the milk is siphoned directly into the closed systems of hermetically sealed, ultra-filtered storage tanks, protected from the steady rain of microbes from hay, humans and walls that settled on the milk in more traditional times.

Often the milk is pasteurized, too — that is, briefly heated to high temperatures to kill the bacteria that come naturally with it. Those native bacteria are then replaced with standardized starter cultures.

All of this has made cheesemaking more controlled. But alas, it also means that there’s less diversity of microbes in our cheeses. Many of our cheddars, provolones and Camemberts, once wildly proliferating microbial meadows, have become more like manicured lawns. And because every microbe contributes its own signature mix of chemical compounds to a cheese, less diversity also means less flavor — a big loss.


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

How direct admission is changing the process of applying for college

A college admission letter might come from a school you haven’t applied to – or even heard of. Antonio_Diaz / iStock / Getty Images Plus via Getty Images
Mary L. Churchill, Boston University

For students and families who are considering college, a relatively new option for admission is gaining popularity. In addition to the long-standing regular admissions process and various options for early admission decisions, there is now something called “direct admission.”

The Conversation asked Mary Churchill, a scholar of higher education administration at Boston University, to explain what direct admission is and how it works.

What is direct admission?

In direct admission, soon-to-be high school graduates can be accepted into a college or university without having to submit an application.

This often happens during a student’s senior year of high school, but some colleges make these offers during junior year.

Direct admission is one of several strategies colleges and universities use to make it easier for high school graduates to go to college. They are also hoping it can help reverse a trend of declining higher education enrollment in the U.S.

Applying to college can take a lot of money and time, and it requires students to navigate an application process that can be complex. The fear of rejection also discourages some people from applying.

With direct admission, this fear of rejection is removed because qualified students receive an acceptance letter from a college without needing to apply.

One university’s explanation of the direct admission process.

How can students qualify?

In some cases, all a student has to do is graduate from high school. In other cases, students have to achieve a certain GPA or score on the ACT or SAT.

Students don’t typically know that they have qualified until they receive an acceptance letter. Many community colleges are charged with offering educational opportunities to any member of the public. So they will often directly admit all students who successfully graduate from a given high school or district. Other colleges are more selective and may admit all graduates with grades or standardized testing scores above a minimum target.

In some states, all students who graduate from a public high school are offered admission to a set of public colleges and universities. Idaho was the first to do this, in 2015.

What are the benefits for colleges?

One of the biggest advantages is that colleges get more direct access to the students they want to attract, which can differ from college to college. Often the most desirable students are top scholars, people from a particular geographic area or people with some combination of demographic attributes, like racial or ethnic background and family economic status.

This enables colleges to reach more students than they would through high school visits, college fairs or direct marketing alone.

In addition, the college has an opportunity to reach potential students from more demographically diverse backgrounds than its usual applicants.

For example, colleges can target schools that have a lot of students from a particular group that is underrepresented on campus and that the college hopes to attract – and offer direct admissions to all the students in a graduating class.

If a college wanted to enroll more male students, it could offer direct admissions to all-boys high schools. If it wanted to enroll more Black and Latino boys, it could offer direct admissions to all-boys high schools with larger populations of Black and Latino students.

What are the benefits for students?

Direct admission does not require students or their families to fill out an application or pay application fees. Of course, students who accept their admission must complete paperwork and pay tuition and other costs associated with enrolling – but they need not do anything to receive an admission letter from the college.

When an unexpected welcome letter arrives from a well-known college, it can help students who didn’t see college in their future begin to envision themselves as college students.

Some colleges target students for direct admission even earlier than their junior years, because they know that students often decide whether they want to go to college or not as early as middle school.

Evidence shows that direct admission programs lead to more students being admitted to college and more students attending.

When Idaho launched its statewide direct admissions program in 2015, overall college enrollment grew by about 8%.

Is this the future of college admissions?

For colleges that are nonselective, the answer is yes.

Direct admission is a relatively inexpensive way for an individual college, or an entire state, to make college opportunities more clearly available to more students. Colleges can get the attention of their ideal student populations.

As direct admission becomes more common, colleges – especially community colleges – will likely need additional staff and money to handle the large-scale influx of admissions.

Some institutions are even partnering with education management companies, such as Concourse, Sage Solutions and The Common Application. These colleges may be able to spend less on marketing and recruitment over time. But initially, they will need to spend more to process students admitted directly.

Students may find themselves receiving admission letters from colleges they’ve never applied to – and perhaps never even heard of. This may lead students to turn more to guidance counselors to help them decide which direct admission offers to accept based on a school’s cost, academic programs and other factors.

Mary L. Churchill, Associate Dean, Strategic Partnerships and Community Engagement and Professor of the Practice, Boston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.