Monday, April 17, 2023

How animals follow their nose

It’s not easy to find the source of a swirling scent plume. Scientists are using experiments and simulations to uncover the varied strategies that animals employ.

On October 2, 2022, four days after Hurricane Ian hit Florida, a search-and-rescue Rottweiler named Ares was walking the ravaged streets of Fort Myers when the moment came that he had been training for. Ares picked up a scent within a smashed home and raced upstairs, with his handler trailing behind, picking his way gingerly through the debris.

They found a man who had been trapped inside his bathroom for two days after the ceiling caved in. Some 152 people died in Ian, one of Florida’s worst hurricanes, but that lucky man survived thanks to Ares’ ability to follow a scent to its source.

We often take for granted the ability of a dog to find a person buried under rubble, a moth to follow a scent plume to its mate or a mosquito to smell the carbon dioxide you exhale. Yet navigating by nose is more difficult than it might appear, and scientists are still working out how animals do it.

“What makes it hard is that odors, unlike light and sound, don’t travel in a straight line,” says Gautam Reddy, a biological physicist at Harvard University who coauthored a survey of the way animals locate odor sources in the 2022 Annual Review of Condensed Matter Physics. You can see the problem by looking at a plume of cigarette smoke. At first it rises and travels in a more or less straight path, but very soon it starts to oscillate and finally it starts to tumble chaotically, in a process called turbulent flow. How could an animal follow such a convoluted route back to its origin?

Over the last couple of decades, a suite of new high-tech tools, ranging from genetic modification to virtual reality to mathematical models, has made it possible to explore olfactory navigation in radically different ways. The strategies that animals use, as well as their success rates, turn out to depend on a variety of factors, including the animal’s body shape, its cognitive abilities and the amount of turbulence in the odor plume. One day, this growing understanding may help scientists develop robots that can accomplish tasks that we now depend on animals for: dogs to search for missing people, pigs to search for truffles and, sometimes, rats to search for land mines.

The problem of tracking an odor seems as if it should have an elementary solution: Simply sniff around and head in the direction where the scent is strongest. Continue until you find the source.

This strategy — called gradient search or chemotaxis — works quite well if the odor molecules are distributed in a well-mixed fog, which is the end stage of a process known as diffusion. But diffusion occurs very slowly, so thorough mixing can take a long time. In most natural situations, odors flow through the air in a narrow and sharply delineated stream, or plume. Such plumes, and the smells they convey, travel much more quickly than they would by diffusion. In some respects, this is good news for a predator, which can’t afford to wait hours to track its prey. But the news is not all good: Odor plumes are almost always turbulent, and turbulent flow makes searching by gradient wildly inefficient. At any given point, it’s quite possible that the direction in which the scent increases most rapidly could point away from the source.
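The contrast is easiest to see against the case where gradient search does work. Below is a minimal Python sketch of chemotaxis over a smooth, well-mixed odor field; the field, step size and sniff count are illustrative assumptions, not from the article. The searcher sniffs in four directions and steps toward the strongest reading, which reliably homes in on the source of a diffusion-smoothed field. In a turbulent plume, the same greedy rule fails because the local gradient often points away from the source.

```python
import math

def concentration(x, y, src=(0.0, 0.0)):
    """Smooth, well-mixed odor field: strength falls off with distance from the source."""
    d2 = (x - src[0]) ** 2 + (y - src[1]) ** 2
    return math.exp(-d2)

def chemotaxis(x, y, step=0.1, n_sniffs=200):
    """Greedy gradient search: sniff in four directions, move toward the strongest."""
    for _ in range(n_sniffs):
        moves = [(x + step, y), (x - step, y), (x, y + step), (x, y - step)]
        x, y = max(moves, key=lambda p: concentration(*p))
    return x, y

# Starting well away from the source, the searcher walks almost straight to it.
x, y = chemotaxis(3.0, 4.0)
```

On a turbulent field, the `concentration` function would be riddled with local maxima and the same loop would stall far from the source.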

Animals can call on a variety of other strategies. Flying insects, such as moths in search of a mate, adopt a “cast-and-surge” strategy, which is a form of anemotaxis, or response based on air currents. When a male moth detects a female’s pheromones, he will immediately start flying upwind, assuming there is a wind. If he loses the scent — which probably will happen, especially when he is far away from the female — he will then start “casting” from side to side in the wind. When he finds the plume again, he will resume flying upwind (the “surge”) and repeat this behavior until he sees the female.
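The cast-and-surge behavior described above can be written as a tiny decision rule. This is a simplified sketch: the function name, the vector convention and the alternating `casting_side` flag are my own illustrative choices, not the moths' actual control law.

```python
def cast_and_surge(detects_odor, wind_dir, casting_side=1):
    """One decision step of the moth's strategy (simplified sketch).

    detects_odor: whether pheromone is currently sensed
    wind_dir: unit vector pointing where the wind blows toward
    casting_side: +1 or -1, flipped by the caller each time the scent is lost
    Returns a heading vector: an upwind surge, or a crosswind cast."""
    if detects_odor:
        # Surge: head straight upwind, i.e. opposite the wind vector.
        return (-wind_dir[0], -wind_dir[1])
    # Cast: move perpendicular to the wind, sweeping side to side.
    return (-wind_dir[1] * casting_side, wind_dir[0] * casting_side)
```

With wind blowing east, a moth that smells pheromone heads west; one that has lost the plume sweeps north or south until it finds it again.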

Some land-bound insects may use a strategy called tropotaxis, which could be thought of as smelling in stereo: Compare the strength of the smell at the two antennae and turn toward the antenna getting the strongest signal. Mammals, which typically have nostrils that are more narrowly spaced relative to body size than an insect’s antennae, often use a comparison-shopping strategy called klinotaxis: Turn your head and sniff on one side, turn your head and sniff on the other side, and turn your body in the direction of the stronger smell. This requires a slightly higher level of cognition because of the need to retain a memory of the most recent sniff.
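The difference between the two strategies is mostly about when the two readings are taken. In this minimal sketch (function names are illustrative), tropotaxis compares two simultaneous readings, while klinotaxis compares a current sniff against a remembered one, which is the extra cognitive cost mentioned above.

```python
def tropotaxis_turn(left_antenna, right_antenna):
    """Stereo smelling: compare two simultaneous readings, turn toward the stronger."""
    if left_antenna > right_antenna:
        return "left"
    if right_antenna > left_antenna:
        return "right"
    return "straight"

def klinotaxis_turn(remembered_sniff, current_sniff):
    """Sequential comparison with closely spaced nostrils: sniff with the head
    turned one way, remember it, sniff the other way, then turn toward the
    side that smelled stronger."""
    return "toward previous side" if remembered_sniff > current_sniff else "toward current side"
```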

Odor-sensing robots may have another strategy they can draw on — one that nature might never have come up with. In 2007, physicist Massimo Vergassola of the École Normale Supérieure in Paris proposed a strategy called infotaxis, in which olfaction meets the information age. While most of the other strategies are purely reactive, in infotaxis the navigator creates a mental model of where the source is likeliest to be, given the information it has previously collected. It will then move in the direction that maximizes information about the source of the smell.

The robot will either move toward the most likely direction of the source (exploiting its previous knowledge) or toward the direction about which it has the least information (exploring for more information). Its goal is to find the combination of exploitation and exploration that maximizes the expected gain in information. In the early stages, exploration is better; as the navigator gets closer to the source, exploitation is the better bet. In simulations, navigators using this strategy travel paths that look a lot like the cast-and-surge trajectories of moths.

In Vergassola’s earliest version, the navigator needs to make a mental map of its surroundings and calculate a mathematical quantity called Shannon entropy, a measure of unpredictability that is high in directions the navigator has not explored and low in directions it has explored. This probably requires cognitive abilities that animals do not possess. But Vergassola and others have developed newer versions of infotaxis that are less computationally demanding. An animal, for example, “can take short cuts, maybe approximate the solution to within 20 percent, which is pretty good,” says Vergassola, a coauthor of the Annual Reviews article.
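The core quantities in infotaxis (a belief over possible source locations, a Bayesian update from each odor observation, and the Shannon entropy of that belief) can be sketched in a few lines. This toy example uses four candidate cells and made-up likelihoods; it is only meant to show that an informative observation lowers the entropy the navigator is trying to drive down.

```python
import math

def shannon_entropy(belief):
    """Entropy (in bits) of a probability distribution over candidate source
    locations. High entropy means the navigator is very uncertain."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def bayes_update(belief, likelihoods):
    """Fold one odor observation into the belief: multiply each cell's prior
    by the likelihood of that observation there, then renormalize."""
    posterior = [p * l for p, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Four candidate source cells, initially equally likely: entropy is 2 bits.
belief = [0.25, 0.25, 0.25, 0.25]

# An odor hit that is far more likely if the source sits in cell 0
# (likelihood values are invented for illustration).
belief = bayes_update(belief, [0.9, 0.1, 0.05, 0.05])
# The observation concentrated the belief, so entropy drops below 2 bits.
```

A full infotaxis navigator would go further, scoring each candidate move by its expected entropy reduction and taking the best one.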

Infotaxis, klinotaxis, tropotaxis, anemotaxis … which taxis will get you to your destination first? One way to figure that out is to go beyond qualitative observations of animal behavior and to program a virtual critter. Researchers can then figure out the success rate of various strategies under a variety of situations in both air and water. “We can manipulate far more things,” says Bard Ermentrout, a mathematician at the University of Pittsburgh and a member of Odor2Action, a 72-person research group organized by John Crimaldi, a fluid dynamicist at the University of Colorado, Boulder. For example, researchers can test how well a fly’s strategy would work underwater, or they can ramp up the turbulence of the fluid and see when a particular search strategy starts to fail.

So far, simulations show that when turbulence is low, both stereo smelling and comparison shopping work most of the time — though, as expected, the former works better for animals with widely spaced sensors (think insects) and the latter works better for animals with closely spaced sensors (think mammals). For high turbulence, though, a simulated animal doesn’t perform well with either approach. Yet real mice hardly seem fazed by a turbulent plume, lab experiments show. This suggests that mice may still have tricks we don’t know about, or that our description of klinotaxis is too simple.

Furthermore, while simulations can tell you what an animal might do, they don’t necessarily say what it does. And we still don’t have a way to ask the animal, “What is your strategy?” But high-tech experiments with fruit flies are getting closer and closer to that Dr. Dolittle-style dream.

Fruit flies are in many ways ideal organisms for smell research. Their olfactory systems are simple, with only about 50 kinds of receptors (compared to about 400 in humans, and more than 1,000 in mice). Their brains are also relatively simple, and the connections between neurons in their central brain have been mapped: The fruit fly’s connectome, a sort of wiring diagram for its central brain, was published in 2020. “You can look up any neuron and see whom it’s connected to,” says Katherine Nagel, a neuroscientist at New York University and another of the Odor2Action team members. Before, the brain was a black box; now researchers like Nagel can just look the connections up.

One of the puzzles about flies is that they appear to use a different version of the cast-and-surge strategy than moths. “We noticed that flies, when they encounter an odor plume, would usually turn toward the center line of the plume,” says Thierry Emonet, a biophysicist at Yale University. Once they find the center line, the source is most likely to be directly upwind. “[We] asked, how the heck does the fly know where the center of the plume is?”

Emonet and his collaborator Damon Clark (a physicist whose lab is next door) have answered this question with an ingenious combination of virtual reality and genetically modified flies. In the early 2000s, researchers developed mutant flies with olfactory neurons that respond to light. “It turns the antenna into a primitive eye, so we can study olfaction the way that we study vision,” says Clark.

This solved one of the biggest problems in smell research: You usually can’t see the odor plume that an animal is responding to. Now you can not only see it, you can project a movie of any odor landscape you want. The genetically modified fly will perceive this virtual reality as a smell and respond to it accordingly. Another mutation rendered the flies blind, so that their actual vision wouldn’t interfere with the visual “odor.”

In their experiments, Clark and Emonet put these genetically modified flies in a container that confines their motion to two dimensions. After the flies got accustomed to the arena, the researchers presented them with a visual odor landscape consisting of moving stripes. They found that the flies always walked toward the oncoming stripes.

Next, Clark and Emonet presented a more realistic odor landscape, with turbulent twists and swirls copied from real plumes. The flies were able to navigate successfully to the center of the plume. Finally, the researchers projected a time-reversed movie of the very same plume, so that the average motion of the odor in the virtual plume was toward the center, rather than away — an experiment that could not possibly be done with a real odor plume. The flies were confused by this bizarro-world plume and moved away from the center rather than toward it.

Flies, Clark and Emonet concluded, must sense the motion of odor packets, as Emonet calls discrete clumps of odor molecules. Think about this for a second: When you smell the neighbor’s barbecue, can you tell whether the smoke particles passing your nose are traveling from left to right, or right to left? It’s not obvious. But a fly can tell — and olfaction researchers have previously overlooked this possibility.

How does sensing the motion of odor molecules help the fly find the center of the plume? The key point is that at any given time, there are more odor molecules traveling away from the center of the plume than toward it. As Emonet explains, “the number of packets in the center line is higher than away from it. So you get a lot of packets in the center moving away, and not as many from the outside moving in. Each packet individually has equal probability of moving in any direction, but collectively there is a dispersion away from the center.”
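Emonet's point about collective dispersion can be checked with a toy simulation. The packet counts and step rule below are invented for illustration: each packet steps left or right with equal probability, yet because more packets start on the center line, more of them are observed moving outward than inward.

```python
import random

random.seed(0)

# Packets are densest on the plume's center line (position 0) and sparser
# toward the edges -- a toy stand-in for a real plume's cross-section.
positions = [0] * 60 + [-1] * 20 + [1] * 20

# Each packet steps left or right with equal probability...
moved_outward = 0
for pos in positions:
    step = random.choice([-1, 1])
    if abs(pos + step) > abs(pos):
        moved_outward += 1

# ...but since the 60 center-line packets can only move outward, a majority
# of all observed motion is away from the center, just as Emonet describes.
```

A fly that senses this net outward drift and turns against it will end up on the center line.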

In fact, the flies are processing the incoming sensory information in a remarkably sophisticated way. In a windy environment, the direction the fly travels is actually a combination of two distinct directions, the direction of air flow and the average direction the odor packets are moving. By using the fly connectome, Nagel has pinpointed one of the places in the brain where this processing must occur. The fly’s wind-sensing neurons crisscross over its olfactory direction-sensing neurons at a particular place in the brain that’s descriptively called the “fan-shaped body.” Together, the two sets of neurons tell the fly which direction to move in.

In other words, the fly is not just reacting to its sensory inputs but also combining them. Since each set of directions is what mathematicians call a vector, the combination is a vector sum. It’s possible, says Nagel, that the flies are literally adding vectors. If so, their neurons are performing a calculation that human college students learn how to do in vector calculus.
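Here is a hypothetical sketch of that vector sum. The explicit weights, and the idea of a simple weighted average, are assumptions for illustration; the article only says that the two direction signals are combined in the fan-shaped body.

```python
def combined_heading(upwind, odor_drift, w_wind=0.5, w_odor=0.5):
    """Hypothetical weighted vector sum of the two direction estimates:
    the upwind direction and the average direction the odor packets drift.
    The 50/50 weights are illustrative, not measured."""
    return (w_wind * upwind[0] + w_odor * odor_drift[0],
            w_wind * upwind[1] + w_odor * odor_drift[1])

# Upwind points north; odor packets drift east to west relative to the fly.
# The combined heading splits the difference: northwest.
heading = combined_heading((0.0, 1.0), (-1.0, 0.0))
```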

Nagel plans to look next for similar neural structures in the brains of crustaceans. “The odor is completely different, the locomotion is different, but this central complex region is conserved,” she says. “Are they doing fundamentally the same thing as flies?”

While the connectome and virtual-reality experiments are producing amazing insights, there are many questions left to be answered. How do dogs like Ares track a smell that is partly on the ground and partly in the air? How do they allocate their time between sniffing the ground and sniffing the air? For that matter, how does “sniffing” work? Many animals actively disturb the airflow, rather than just passively receiving it; mice, for example, “whisk” with their whiskers. How do they use this information?

And what other non-human abilities might animals possess, akin to the flies’ ability to detect the motion of an odor packet? These and many more mysteries are likely to keep biologists, physicists and mathematicians sniffing for answers for a long time.

Dana Mackenzie is a mathematician who went rogue and became a science writer. He likes learning about unexpected ways in which math pops up in everyday life.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Israel’s judicial reform efforts could complicate its relationship with US – but the countries have faced other bumps along the road

The United States and Israel are close allies, but their alliance has had ups and downs over the years. iStock/Getty Images Plus
Boaz Dvir, Penn State

President Joe Biden startled many Americans and Israelis when he recently asked the Jewish state’s new far-right government to make its controversial attempts to reform the judicial system disappear like leavened products before Passover.

Biden’s unexpected request came in response to Israeli Prime Minister Benjamin Netanyahu’s proposal to weaken the ability of Israel’s Supreme Court to review or toss out laws. The country has no written constitution, so some observers think this could throw its checks and balances into disarray.

Netanyahu’s plan would also stop the court from overruling the government’s legislative and executive branches and allow politicians to appoint judges.

Viewing these efforts as attacks on democracy, hundreds of thousands of Israelis have clogged urban arteries in unceasing, unprecedented protests.

Appearing to side with the protesters, Biden said at the end of March 2023 that the Israeli government “cannot continue down this road.”

So disorienting were Biden’s remarks that, within 24 hours, they propelled the administration into damage control. National Security Council spokesperson John Kirby lauded Netanyahu for holding meetings with other politicians in a futile attempt to reach a compromise.

The reforms Netanyahu proposed, along with dozens of other reactionary proposals, threaten to weaken what many observers consider to be the Middle East’s sole democracy.

Still, in the 21st century, an American president has rarely pointed his finger so directly at Israel, one of the United States’ top allies. Biden’s break from etiquette has prompted Democrats and Republicans alike to wonder: Do the proposed reforms represent a grave risk to American and Israeli relations?

The answer, based on my research into the history of U.S.-Israel relations, is that it’s complicated.

Protesters take part in ongoing demonstrations against proposed judicial reforms in Tel Aviv on April 8, 2023. Gil Cohen-Magen/AFP via Getty Images

Eight sides to this story

On the surface, Israel’s shift to a de facto autocracy would – and some say, should – undermine Israel’s U.S. relations.

A perception exists in both countries that, to a large extent, the U.S. and Israel’s alliance stems from and is sustained by their shared democratic values.

But the U.S. has several allies that are not democratic, including Pakistan, Saudi Arabia and Honduras. Israel was always a democracy, yet it took America nearly two decades after the country’s founding to warm up to Israel’s democratic values.

The narrative of democracy uniting Israel with the U.S. tells only one part of a larger story that dates back to the 1947 United Nations Partition Plan, which paved the way for the creation of the Jewish state.

A complex relationship

The U.S. and Israel have had tumultuous relations from the start.

U.S. representatives voted for the U.N.’s Partition Plan on Nov. 29, 1947. This called for splitting Palestine between the Arabs and Jews. But the U.S. quickly reversed course and proposed replacing the U.N. plan with an international trusteeship, which would have prevented Israel’s creation.

The U.S. also declared an arms embargo on the Middle East on Dec. 5, 1947. As I showed in my PBS documentary “A Wing and a Prayer,” the embargo spared the Arabs, who received military supplies and training from the United Kingdom and France. But the embargo stifled the Jews, who lacked weapons and allies.

Only in the Cold War’s second decade did Washington start thawing its relations with Jerusalem. This began in 1962, when President John F. Kennedy sold Israel defensive missiles.

Before that thaw, during the 1956 Suez Crisis – when Israel joined the U.K. and France in fighting Egypt – Washington sided with Cairo, which had just switched from a monarchy to a political dictatorship.

Succumbing to American pressure, Israel received nothing in March 1957 when it gave up the Sinai Peninsula, which it had conquered a few months earlier during the Suez Crisis.

Benefiting from much better U.S. relations two decades later, Israel secured a game-changing peace agreement with Egypt for the same piece of land, which by then it had retaken during the 1967 Six-Day War.

Where they diverge

Despite their similarities, the U.S. and Israel differ on several fundamental fronts. For instance, until the 1980s, the Jewish state’s economy looked nothing like America’s. It resembled communist Russia’s, with enough governmental controls to pacify Karl Marx.

Even today, decades after the Reagan administration compelled Israel to institute free-market changes, the Jewish state offers such socialist programs as nationalized health care.

Although the two countries share interests, their priorities sometimes diverge and even clash.

For example, it has long been in Washington’s interest to resolve the Palestinian-Israeli conflict through the implementation of the two-state solution. Among other benefits, it would win the U.S. favor with such key allies as Saudi Arabia, which supports Palestinian people’s right to statehood.

Yet the lack of progress on the Palestinian issue has had no effect on America’s financial support for Israel: Every year the U.S. gives Israel US$3.3 billion and an additional $500 million as part of a defensive missile development collaboration.

Then-Vice President Joe Biden and Israeli Prime Minister Benjamin Netanyahu shake hands in Jerusalem in 2016. Debbie Hill/AFP via Getty Images

Maintaining ties

So, what keeps the U.S. and Israel so close?

According to Dennis Ross, a Washington Institute for Near East Policy distinguished fellow who served as special assistant to President Obama, the alliance comes down to strategic cooperation, which started under President Ronald Reagan in the 1980s.

As part of this partnership, the U.S. and Israel help each other achieve geopolitical goals in the Middle East and beyond. They assist each other in maintaining security at home and abroad, share intelligence, conduct military exercises and collaborate on technological pursuits.

“Every administration after that, even if the president doesn’t have the warmest relations with the Israelis – it’s true for George H.W. Bush, it’s true for Barack Obama – nonetheless they build on that basic foundation,” Ross said during an interview for “Israel Survived an Early Challenge,” a documentary short I co-produced with Retro Report.

Growth over time

The two countries’ strategic partnership has only grown through the decades.

The U.S. counts on Israel now more than ever for military, intelligence and diplomatic cooperation. With Russia digging its claws into Ukraine and China flashing its sharp teeth at Taiwan, America must have a reliable, capable ally in the Middle East. So the U.S. has no choice but to maintain close ties with Israel.

Strategically speaking, the Jewish state has it all: second-to-none military intelligence, Hollywood-worthy espionage, sci-fi-like technology and advanced, seasoned armed forces.

For the U.S., such a partnership has proved priceless, and its value only keeps going up.

Washington’s latest geopolitical gaffe – Pentagon memos leaked over the holiday weekend showing the U.S. has been spying on allies such as South Korea, France and Israel – only accentuates the vitality and durability of its relations with the Jewish state.

Americans and Israelis know their relations are strong enough to easily withstand the leaked memos crisis. After all, they survived the 1980s Jonathan Pollard affair, during which the U.S. Navy Intelligence employee gave the Jewish state classified documents, some of which reportedly fell into Soviet hands. So, while the U.S. will certainly go to great lengths to preserve Israel’s democratic nature, it is unlikely to walk away from this strategic partnership.

Boaz Dvir, Assistant Professor in Journalism, Penn State

This article is republished from The Conversation under a Creative Commons license.

What’s the fittest fitness for the oldest old?

Even for 60ish youngsters, researchers reaffirm that exercise is essential. But just walking won’t cut it — break out the weights and go for strength training too.

Like many in her age range, Sylvia McGregor, a 97-year-old in Sydney, Australia, deals with her share of maladies — in her case, arthritis, osteoporosis, hearing loss, macular degeneration, lung disease, hypothyroidism, chronic kidney disease, heart disease and two total knee replacements. But unlike most nonagenarians, she does intensive strength training twice a week.

She credits the exercises, which she’s been doing for 12 years, with allowing her to live independently. “I still live by myself, and I take care of myself,” she says. “It was only when I was in hospital last year that they said I had to have a walker to go back home alone. So I said, ‘That’s OK by me.’”

McGregor is in one of the fastest-growing age groups — individuals age 80 years and older. By 2050, this “oldest old” group is expected to triple in number to 447 million worldwide.

Their longevity reflects improved management of chronic health conditions that lets older adults live longer even if they have serious health problems. But physical function deteriorates as people age, and many older adults become unable to take care of themselves — eroding the quality of those extra years and decades. “Maintaining independence is so important to people,” says public health scientist Rebecca A. Seguin-Fowler, director of the Healthy Living program at the Texas A&M AgriLife Institute for Advancing Health Through Agriculture, and of StrongPeople, which runs community-based nutrition and physical activity programs. “Even if they live in a retirement community and then eventually maybe in assisted living, they still want to be able to do things on their own as much as possible.”

Exercise is the best prescription for maintaining independence, researchers say. But what is the right dose — in terms of frequency, intensity and duration? What type of exercise is best? At what age do you need to start — and how late is too late?

There are too few studies about exercise among the oldest old to offer definitive guidelines for that age group, says Erin Howden, a researcher and exercise physiologist at the Baker Heart and Diabetes Institute and coauthor of an overview of exercise in octogenarians in the 2022 Annual Review of Medicine. But evidence for the “younger older” — people ages 60 to 75 — is sufficient to provide good, basic advice to anyone who wants to still be working in their garden at 97.

Independent living requires the ability to perform the activities of daily life — bathing or showering, dressing, getting in and out of bed or a chair, walking, using the toilet, and eating. Doing these things takes four physical attributes: cardiorespiratory fitness (how well the cardiovascular system and breathing apparatus supply oxygen during physical exertion); muscle strength and power; flexibility; and dynamic balance, meaning the ability to remain stable while moving.

Biological aging takes a toll on each of these. Cardiovascular fitness — the ability of heart and blood vessels to distribute and use oxygen during exertion — declines throughout adulthood as our circulatory capacity decreases. That decline speeds considerably late in life. Over 70, cardiovascular fitness falls by more than 21 percent per decade — and that’s for healthy people. Prolonged inactivity and common chronic conditions such as heart failure, diabetes and obesity make the situation worse. It is common for octogenarians to have cardiovascular function so low that it plays a part in preventing them from performing basic activities like vacuuming and cooking.

Dynamic balance, essential for walking, stair-climbing and avoiding falls, declines also, thanks to deterioration of the musculoskeletal system and of neurologic function. And muscle mass decreases by about 3 to 8 percent per decade after 30, with decline accelerating after 60. That often reduces both muscular strength — the ability of muscles to exert force, allowing us to lift objects — and muscular power, the ability to do work quickly, which we need to climb stairs. The more immobile you are, the faster this wasting can proceed. This muscle loss, known as sarcopenia, is why walking, one of the most popular forms of exercise, may not be enough to keep us operating independently. “People think ‘Oh, I walk,’ but walking will not help you build muscle,” Seguin-Fowler says.
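To see what those per-decade percentages compound to, here is a quick back-of-envelope calculation; the 5 percent figure is an assumed midpoint of the 3-to-8-percent range quoted above.

```python
def remaining_muscle(decades, loss_per_decade=0.05):
    """Compound the per-decade muscle loss (3 to 8 percent; 5 percent assumed
    here) to estimate the fraction of age-30 muscle mass that remains."""
    return (1 - loss_per_decade) ** decades

# From 30 to 70 is four decades: roughly 81 percent of muscle mass remains,
# and that is before the post-60 acceleration the text describes.
fraction = remaining_muscle(4)
```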

Lifelong exercisers have the best shot at maintaining functional independence in old age. Over the years, they have built up greater physical capacity — strength, range of motion, stamina and balance — and enhanced organ function. But that’s not most Americans. In fact, in 2018 only about a quarter of Americans 18 and over met the federal government’s exercise guidelines for adults, according to the Centers for Disease Control and Prevention.

Those recommendations: at least 150 minutes a week of moderate-intensity aerobic activity (or 75 minutes of vigorous activity), along with muscle-strengthening exercises such as lifting weights or working with resistance bands — at least 8 to 12 repetitions for each exercise — at least two days per week. To that, adults 65 and older should add balance and flexibility training — think tai chi, Pilates or yoga — about three days a week.

If that prescription sounds daunting, Howden offers this perspective: Any amount of physical activity is better than nothing, and it’s never too late to start. And older people should always be pushing themselves to do more. “Whether you are walking or cycling or whatever your activity, keep extending the amount of time that you are doing it — and then one or two days a week, try to do something that’s a bit more intense,” she says.

There are lots of ways to tick the aerobic exercise box. An analysis of 41 clinical trials involving older adults with an average age of 67 found that many regimens work, including walking, running, dancing and other activities, at different intensity levels and durations. In general, the more frequently a person exercised, the greater the benefit.

The bottom line: A healthy but sedentary 67-year-old who engages in aerobic exercise three times a week for 30 to 35 minutes per session, working at moderate intensity, for 16 to 20 weeks, might expect to improve their aerobic fitness by about 16 percent compared to people who did nothing.

Aerobic training earlier in life is better to prevent — and at younger ages, even reverse — the normal, age-related stiffening of the arteries that is a risk factor for hypertension and stroke. For example, in one study, 10 healthy but sedentary people 65 and older who worked up, over the course of a year, to 200 weekly minutes of vigorous aerobic exercise improved their cardiovascular fitness, but the training had no effect on their arterial stiffness. In contrast, a small study of adults ages 49 to 55 found that cardiovascular fitness improved and cardiac stiffness lessened through a combination of high-, moderate- and low-intensity aerobic exercise for 150 to 180 minutes per week for two years.

Howden, who led the second study, sees a clear takeaway: “Middle age and late middle age is when we need to get serious about incorporating a structured exercise program into our daily lives.”

And muscles? Two decades of research have shown that resistance training can prevent and even reverse the loss of muscle mass, power and strength that people typically experience as they age. Here is what works, according to an analysis of 25 studies involving people age 60 and older, with an average age of 70: Exercisers should have two sessions of machine-weight training per week, with a training intensity of 70 to 79 percent of their “one-rep max” — the maximum load that they could fully lift if they were only doing it once. Each session includes two to three sets of each exercise and seven to nine repetitions per set.
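Translating that prescription into a working weight is simple arithmetic. A sketch, with 0.75 assumed here as a midpoint of the 70-to-79-percent range:

```python
def training_load(one_rep_max, intensity=0.75):
    """Working weight for a set, as a fraction of the one-rep max.
    The analysis above used intensities of 70 to 79 percent; the default
    of 0.75 is an illustrative midpoint, not a clinical recommendation."""
    return round(one_rep_max * intensity, 1)

# A lifter whose leg-press one-rep max is 100 kg would train at about 75 kg,
# for two to three sets of seven to nine repetitions, twice a week.
load = training_load(100)
```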

As for fitness for the oldest old, the first study of this group was a clinical trial with 100 frail, elderly nursing home residents in Boston. The average age was just over 87, and more than a third of participants were 90 or older. The vast majority — 83 percent — used a cane, walker or wheelchair; half had arthritis; many had pulmonary disease, bone fractures, hypertension, cognitive impairment or depression.

Individuals assigned to the exercise group completed a regimen of high-intensity resistance training of hip and knee muscles three days per week for 10 weeks. For each of the muscle groups, resistance machines were set at 80 percent of the one-rep max. The training was progressive, meaning that the load was increased at each training session if the individual could tolerate it. Sessions lasted 45 minutes, and at each session, the exerciser completed three sets of eight lifts for each muscle group.

By the end of the trial, exercisers had significantly increased muscle strength and mobility in their hips and knees compared to a group of non-exercisers. Four participants no longer used walkers after the training, getting by with a cane instead.

The lead investigator for that study was Maria A. Fiatarone Singh, now a geriatrician at the University of Sydney. For older people, she says, strength training, which helps with balance, is the top-priority exercise because it makes other forms of activity possible. “Most people, including healthcare professionals, still have this idea that the most important thing is to help people to walk around, but that is only important if they actually can walk around,” she says. “You have to have strength and balance first.”

Fiatarone Singh started the strength training program in which McGregor and her elderly peers press weights twice a week, and nobody is getting off easy. “We actually increase the weight every time we see somebody when they’re first starting,” Fiatarone Singh says. “At some point, their gains are less steep, but they still gain muscle mass if you continue to increase the weight.”

When she looks at a graph of McGregor’s muscle mass over time — “Hers is rock solid” — Fiatarone Singh sees inspiration. “When somebody who is in their nineties sees themselves getting stronger,” she says, “they will tell you how good it feels.”


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Peanut Butter Perfection

(Culinary.net) If you’ve ever taken a bite of something and the only word that came to mind was “yum,” you know what it’s like to experience this dessert. It’s fluffy, sweet, perfectly crumbly and tastes delicious. It’s rich but light. It’s a dessert that will likely never go out of style.

You can stop guessing what it may be: this treat is a scrumptious bite of Fluffy Peanut Butter Pie drizzled with chocolate syrup. You will understand the craze once you sink your fork into the chilled triangle resting on your plate. With a chocolate cookie crust and a thick, delicious peanut butter filling, this pie is everything many people want in a dessert.

Although it tastes like you have been in the kitchen all day, it’s a simple-to-make, delightful treat with luscious peanut butter flavor that melts in your mouth.

To make this pie, remove the filling from 20 chocolate cookies and crush them with a rolling pin until they are just crumbs. Mix cookie crumbs with melted butter and mold into a pie dish to create the crust.

Next, in a mixer, combine cream cheese and reserved cookie filling. Then add sweetened condensed milk, peanut butter, lemon juice and vanilla extract while you continue mixing.

In a mixing bowl, beat whipping cream until stiff peaks form. Fold peanut butter mixture in with the whipping cream.

Layer the peanut butter and whipping cream mixture on top of the crust in the pie pan. Chill for about 4 hours then drizzle with chocolate syrup just before serving.

This dessert is perfect for anyone with a sweet tooth. Whether for house guests, birthday parties or simply a treat after a meal, it’s an any-occasion kind of pie.

Find more dessert recipes at Culinary.net.


Fluffy Peanut Butter Pie

Servings: 6-8

  • 20 chocolate cream-filled cookies
  • 1/4 cup butter, melted
  • 1 package (8 ounces) cream cheese, softened
  • 1 cup smooth peanut butter
  • 1 can (14 ounces) sweetened condensed milk
  • 3 tablespoons lemon juice
  • 1 teaspoon vanilla extract
  • 1 cup whipping cream
  • chocolate syrup
  1. Remove cream filling from chocolate cookies; set aside. With rolling pin, finely crush chocolate cookies.
  2. In medium bowl, combine finely crushed cookies with melted butter.
  3. Press crumb mixture firmly into bottom and sides of 9-inch pie plate; chill while preparing filling.
  4. In large bowl, beat cream cheese until fluffy. Add reserved cookie cream filling, peanut butter and sweetened condensed milk; beat until smooth. Stir in lemon juice and vanilla extract.
  5. In medium bowl, beat whipping cream until stiff peaks form. Fold whipped cream into peanut butter mixture. Mix until combined.
  6. Pour into crust. Chill 4 hours, or until set. Drizzle chocolate syrup over pie before serving.
  7. Cover leftovers and store in refrigerator.
SOURCE:
Culinary.net
There are two types of wildfire in the state, and they’re on the rise for different reasons. Each needs a distinct management approach, a researcher says.

About five years ago, the infamous Thomas fire swept through California’s Santa Barbara and Ventura counties, burning more than 440 square miles and causing $2.2 billion in damages before finally being contained in January 2018. It was the largest wildfire in modern Californian history at the time — a title it lost just months later to another massive fire that broke out that summer. At least six other California wildfires now surpass it for sheer size.

Megafires are becoming ever more common in California, and the total area burned each year is rising fast. From 2000 to 2020, over five million hectares burned across the state — double that of the previous 20 years, says fire ecologist Jon Keeley of the US Geological Survey at Sequoia National Park, coauthor of a 2022 overview of fire ecology and evolution in the Annual Review of Ecology, Evolution, and Systematics.

The common stories told about California’s wildfire problem center on forest fires, where climate change and a buildup of burnable fuels from a century of intense fire suppression are the culprits. But there’s another story to tell too, says Keeley. Wind-driven fires like the Thomas fire that strike California’s chaparral landscape — shrub, grass and woodland common to the coast and southern region of the state — are also on the rise. For this type of fire, Keeley says, climate change and fuel aren’t primarily to blame; the culprits are a growing population and faulty power lines.

Big, wind-driven wildfires used to strike once every 30 to 130 years or so, says Keeley; now it’s every 10 to 15 years. And that’s proving lethal to several native plants. Without time to recover between fires, whole landscapes are being transformed and taken over by invasive species.

Keeley spoke with Knowable Magazine about these two distinct kinds of California wildfire, and why he thinks they need to be managed differently. This interview has been edited for length and clarity.

Your work emphasizes that there are two distinct types of extreme wildfire events in California. How are they different?

One type is those fires that strike mostly in the forests of the Sierra Nevada mountains. These are driven by anomalous accumulation of fuels — plant material and forest debris — that has built up because of more than a century of effective fire suppression. These fires are often ignited by lightning.

Then we have fires that are largely not in forests, but in the shrubland vegetation and oak woodlands of coastal central and Southern California. These chaparral landscapes tend to be juxtaposed with large metropolitan areas, and the fires are started by people. These fires are not driven by fuel accumulation; they are driven by extreme wind events that typically happen in the autumn and winter. They are called Santa Ana winds in Southern California, and North or Diablo winds in Northern California.

Once these fires are started, they’re really hard to extinguish. And as a result, even though we’ve had the same policy of suppression for these fires, we have not been effective in excluding fires in the chaparral like we have in forest.

Both types of fire are on the rise, but for different reasons. What has changed for the forest fires of the Sierras?

Historically, these fires would burn in the understory, and there was not enough fuel to carry that fire up into the canopy. We would have low-intensity surface fires.

But then, in the early 1900s, state and federal agencies embarked on a program of trying to eliminate fires from our forest. It was so effective that the area burned in the forested landscapes dropped to very low levels, far lower than historically was ever the case. Without fire, more and more dead material accumulated. That has resulted in fires now spreading from the surface up into the canopy and changing the fire regime from one of a low-intensity surface fire to a high-intensity crown fire.

Is climate change also playing a role?

In the forests, yes. We know that those forest fires are sensitive to temperatures in the spring and summer: The higher the temperatures, the more area burns. We have 100 years of data showing this pattern. As temperatures are increasing due to global warming, we should expect more area to burn in these forests.

What about for the wind-driven fires of the chaparral?

We have no evidence that climate change is playing a role for these fires. We did a very extensive study published in 2021 that examined 70 years of wind events and area burned. We looked at the temperatures and the precipitation levels and we found no relationship between climate and area burned during these autumn Santa Ana wind fires. Global warming is not a big factor because there are other things that override it in these coastal areas.

Why, then, are these wind-driven fires becoming more common?

These fires are 100 percent ignited by humans. We’ve had an increase of 6 million more people in California over the past 20 years. That translates into a great increase of the power grid, and more power lines means more opportunities for fire. Maintenance is probably the No. 1 issue. PG&E has been faced with a number of lawsuits related to failure to maintain these lines. The 2018 Camp fire, for example, was ignited by a power line that wasn’t being properly maintained.

Can wildfire be good for the landscape?

Well, it depends. In the forests of the Sierra Nevada, frequent, low-intensity surface fires can be very valuable. The primary advantage of frequent fires in forests is that they keep the surface fuels down and prevent them from accumulating to the point where fire spreads from the surface to the canopy. Under these conditions, fire also creates openings in the forest, maybe 10 to 20 acres in size, and those are sites where certain species like ponderosa pine regenerate.

The difference in chaparral is, it’s a crown fire system, meaning that when fire burns through, it burns the entire canopy. The site requires several decades to recover to the point where it can withstand another fire. That period of time is something we’re working on now to try and determine: How short is too short in these systems? In general, it appears you need 20 to 30 years without a fire for that system to recover fully.

What happens to chaparral when wildfires are more frequent than that?

Chaparral has a range of different regeneration strategies for different species. Some species like oaks resprout from the base after a fire, and it almost doesn’t matter what the frequency of fires is — they seem to resprout very well. So, in those species, frequency of fire is not a threat.

But then we have another group that represents the majority of the diversity of woody vegetation: obligate seeding shrubs of Ceanothus, which is sometimes called California lilac or buckbrush, and Arctostaphylos, which is known as manzanita. The majority of species in those two genera depend 100 percent on seedlings coming up after the fire, and to do that, they need time to accumulate a seed bank.

We’ve done a series of studies over the last two years, looking at regeneration in stands of different ages after they burn, and have found that in Ceanothus, for example, if the fires burn at less than 15-year intervals, there’s just not enough seed in the soil. And we get sometimes complete extirpation of those species from the site. For manzanitas, it’s a similar pattern. It often requires even longer periods, maybe 20 years or more, in order to accumulate sufficient seeds.

What are the implications of that?

When you wipe out these plants, you can wipe out a huge part of the landscape. For example, in Orange County, we have data from the 2020 Bond fire that over-burned the 2007 Santiago fire; it was a 13-year interval between those fires. And we showed that there was something like several thousands of skeletons of Ceanothus per hectare. They had no seedlings at all. They were wiped out from the site.

These systems do not fill in with native species; you end up opening up an ecological vacuum and non-native weedy species come in, and then you convert it from chaparral to non-native grasslands. That changes the water-holding capacity of the landscape. It increases the probability of erosion and flooding. And also, shrublands generally only have a fire season of about six months out of the year; grasslands have a 12-month fire season because these grasses are capable of drying out very rapidly. You can get a rainfall event and a week later, it can burn.

You just change a lot of things in the ecosystem that are not desirable. You eliminate a lot of native species: bird species, as well as lizards and rodents.

What does all this mean for prescription burning, the practice of intentionally lighting controlled fires to burn an area? Is that a good or bad idea?

Prescription burning is a really important management tool in forests. I work in Sequoia National Park, and there we have clear evidence that prescription burning, which has been going on for 50 years, has been highly effective at preventing catastrophic fires. For example, the 2021 KNP Complex fire burned something like 80,000 acres, mainly in the Sequoia and Kings Canyon National Parks. It destroyed vast stretches of forest, but when it hit the Giant Forest, a landscape that has been subjected to prescription burning, it died down. Giant Forest is the poster child of why you want to do prescription burning. If you can burn frequently enough, you can keep the fuels down and prevent massive crown fires.

When you get to Southern California, there is a belief by some fire management agencies that they work the same way: that we need to put fire onto the landscape. But these are landscapes that are already suffering from excessive burning due to human ignitions. There’s the potential for prescription burning to actually be damaging to resources. If agencies are going to burn in chaparral, they need to try and focus on areas that are older than a couple of decades.

Is that a hard message for policymakers to hear?

The problem is that chaparral is an extremely dangerous fuel. And the older the vegetation is, the harder it is to control the fire. So agencies don’t want to wait that long between fires, for the sake of buildings and other infrastructure that might be damaged.

The US Forest Service and National Park Service are very concerned about natural resource issues. For example, the National Park Service in the Santa Monica Mountains about two decades ago rewrote their management plan to greatly reduce prescription burning based on our research, because they realized they’re already getting way too much fire for the resources on their landscapes.

For most of Southern and Central California, fire protection is the responsibility of Cal Fire (the California Department of Forestry and Fire Protection). They mostly deal with rural lands, which are private property. And so resource issues aren’t always their primary concern when it comes to management decisions. What we need to do is balance fire hazard reduction with resource conservation.

Why is it so difficult to change people’s perceptions about California’s wildfires?

Well, there’s absolutely no question that policymakers, agencies and journalists like a simple story. They like a story where one model fits all. And so generally, mostly, what they hear is the story about fires in forest, which says: We don’t have enough fire, we need to put fire back into the system, we need to do prescription burning to get it back. And very few people hear the story that for some landscapes it’s exactly the opposite.

We have two different landscapes with two very different fire regimes that require two very different management practices. That’s really what we’re trying to focus on.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Keep Your Home Office Organized for Increased Productivity

When temperatures creep up again, it signals time for an annual tradition: spring cleaning. While big projects like washing the windows are hard to overlook, don’t forget smaller areas that need attention, too, such as your home office.

Making sense of a year’s worth of paperwork and clutter can take some serious time, especially as many people have been working from home more than usual, but getting organized can help you tackle home management tasks more efficiently. Making the office a priority can also reduce the frustration of the extra hours you spend there.

These five tips can help get you started:

  1. Make sure you have furniture that can adequately store your stuff, including plenty of space for files, reference books and computer equipment. Pieces need not be costly to be functional and there are plenty of attractive options available online and at both small and major retailers.
     
  2. Arrange the space with its intended use and your own work style in mind. For example, if you don’t need ample space to spread out over a large, flat work area, eliminate that space – it’s simply an invitation for clutter.
     
  3. Place items you rely on frequently, such as a calculator or ruler, within arm’s reach so they can easily be put away between uses. Capture these items in containers and bins to keep the space looking neat and free of clutter.
     
  4. Establish a filing system that lets you keep track of important papers you need to keep and have a shredder handy to help you discard any sensitive documents. Whether you alphabetize, color code or use some other method, group paperwork into segments for categories such as bills, banking, health care, auto, insurance and so on for easy access in the future.
     
  5. Tangled cords can make even the most organized spaces look messy, and they may pose a fire or tripping hazard. Get control of your cords by storing devices you don’t use regularly and securing the remaining cords with twist ties or clips. Remember to use a surge-protected power strip to minimize the chance of damage should a power surge occur.

Find more tips to make your workspace tidy and organized at eLivingtoday.com.

Why does money exist?

Cash is pretty convenient. Dilok Klaisataporn/EyeEm via Getty Images
M. Saif Mehkari, University of Richmond

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


Why do money and trading exist? – Vanessa C., age 10, Gilbert, Arizona


Imagine a world without money. With no way to buy stuff, you might need to produce everything you wear, eat or use unless you could figure out how to swap some of the things you made for other items.

Just making a chicken sandwich would require spending months raising hens and growing your own lettuce and tomatoes. You’d need to collect your own seawater to make salt.

You wouldn’t just have to bake the bread for your sandwich. You’d need to grow the wheat, mill it into flour and figure out how to make the dough rise without store-bought yeast or baking powder.

And you might have to build your own oven, perhaps fueled by wood you chopped yourself after felling some trees. If that oven broke, you’d probably need to fix it or build another one yourself.

Even if you share the burden of getting all this done with members of your family, it would be impossible for a single family to internally produce all the goods and provide all the services everyone is used to enjoying.

To maintain anything like today’s standard of living, your family would need to include a farmer, a doctor and a teacher. And that’s just a start.

This account of making a sandwich ‘completely from scratch’ in six months required a lot of machines built by others.

Specializing and bartering

Economists like me believe that using money makes it a lot easier for everyone to specialize, focusing their work on a specific activity.

A farmer is better at farming than you are, and a baker is probably better at baking. When they earn money, they can pay others for the things they don’t produce or do.

As economists have known since David Ricardo’s work in the 19th century, there are gains for everyone from exchanging goods and services – even when you end up paying someone who is less skilled than you. By making these exchanges easy to do, money makes it possible to consume more.

People have traded goods and services with one kind of money or another, whether trinkets, shells, coins or paper cash, for tens of thousands of years.

People have always obtained things without money too, usually through barter. It involves swapping something, such as a cookie or a massage, for something else – like a pencil or a haircut.

Bartering sounds convenient. It can be fun if you enjoy haggling. But it’s hard to pull off.

Let’s say you’re a carpenter who makes chairs and you want an apple. You would probably find it impossible to buy one because a chair would be so much more valuable than that single piece of fruit. And just imagine what a hassle it would be to haul several of the chairs you’ve made to the shopping mall in the hopes of cutting great deals through barter with the vendors you’d find there.

Paper money is far easier to carry. You might be able to sell a chair for, say, $50. You could take that $50 bill to a supermarket, buy two pounds of apples for $5 and keep the $45 in change to spend on other stuff later. Another advantage money has over bartering is that you can use it more easily to store your wealth and spend it later. Stashing six $50 bills takes up less room than storing six unsold chairs.

Nowadays, of course, many people pay for things without cash or coins. Instead, they use credit cards or make online purchases. Some simply wave a smartwatch at a designated device. Still others use bitcoins and other cryptocurrencies. But all of these are just different forms of money that don’t require paper.

No matter what form it takes, money ultimately helps make the trading of goods and services go more smoothly for everyone involved.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

M. Saif Mehkari, Associate Professor of Economics, University of Richmond

This article is republished from The Conversation under a Creative Commons license. 

ChatGPT, DALL-E 2 and the collapse of the creative process

Does the moment of imagination carry more value than the work of making something real? DeAgostini/Getty Images
Nir Eisikovits, UMass Boston and Alec Stubbs, UMass Boston

In 2022, OpenAI – one of the world’s leading artificial intelligence research laboratories – released the text generator ChatGPT and the image generator DALL-E 2. While both programs represent monumental leaps in natural language processing and image generation, they’ve also been met with apprehension.

Some critics have eulogized the college essay, while others have even proclaimed the death of art.

But to what extent does this technology really interfere with creativity?

After all, for the technology to generate an image or essay, a human still has to describe the task to be completed. The better that description – the more accurate, the more detailed – the better the results.

After a result is generated, some further human tweaking and feedback may be needed – touching up the art, editing the text or asking the technology to create a new draft in response to revised specifications. Even the AI-generated art piece that recently won first prize in the Colorado State Fair’s digital arts competition required a great deal of human “help” – approximately 80 hours’ worth of tweaking and refining the descriptive task needed to produce the desired result.

It could be argued that by being freed from the tedious execution of our ideas – by focusing on just having ideas and describing them well to a machine – people can let the technology do the dirty work and can spend more time inventing.

But in our work as philosophers at the Applied Ethics Center at University of Massachusetts Boston, we have written about the effects of AI on our everyday decision-making, the future of work and worker attitudes toward automation.

Leaving aside the very real ramifications of robots displacing artists who are already underpaid, we believe that AI art devalues the act of artistic creation for both the artist and the public.

Skill and practice become superfluous

In our view, the desire to close the gap between ideation and execution is a chimera: There’s no separating ideas and execution.

It is the work of making something real and working through its details that carries value, not simply that moment of imagining it. Artistic works are lauded not merely for the finished product, but for the struggle, the playful interaction and the skillful engagement with the artistic task, all of which carry the artist from the moment of inception to the end result.

The focus on the idea and the framing of the artistic task amounts to the fetishization of the creative moment.

Novelists write and rewrite the chapters of their manuscripts. Comedians “write on stage” in response to the laughs and groans of their audience. Musicians tweak their work in response to a discordant melody as they compose a piece.

In fact, the process of execution is a gift, allowing artists to become fully immersed in a task and a practice. It allows them to enter what some psychologists call the “flow” state, where they are wholly attuned to something that they are doing, unaware of the passage of time and momentarily freed from the boredom or anxieties of everyday life.

This playful state is something that would be a shame to miss out on. Play tends to be understood as an autotelic activity – a term derived from the Greek words auto, meaning “self,” and telos meaning “goal” or “end.” As an autotelic activity, play is done for itself – it is self-contained and requires no external validation.

For the artist, the process of artistic creation is an integral part, maybe even the greatest part, of their vocation.

But there is no flow state, no playfulness, without engaging in skill and practice. And the point of ChatGPT and DALL-E is to make this stage superfluous.

A cheapened experience for the viewer

But what about the perspective of those experiencing the art? Does it really matter how the art is produced if the finished product elicits delight?

We think that it does matter, particularly because the process of creation adds to the value of art for the people experiencing it as much as it does for the artists themselves.

Part of the experience of art is knowing that human effort and labor has gone into the work. Flow states and playfulness notwithstanding, art is the result of skillful and rigorous expression of human capabilities.

Recall the famous scene from the 1997 film “Gattaca,” in which a pianist plays a haunting piece. At the conclusion of his performance, he throws his gloves into the admiring audience, which sees that the pianist has 12 fingers. They now understand that he was genetically engineered to play the transcendent piece they just heard – and that he could not play it with the 10 fingers of a mere mortal.

Does that realization retroactively change the experience of listening? Does it take away any of the awe?

As the philosopher Michael Sandel notes: Part of what gives art and athletic achievement its power is the process of witnessing natural gifts playing out. People enjoy and celebrate this talent because, in a fundamental way, it represents the paragon of human achievement – the amalgam of talent and work, human gifts and human sweat.

Boston Red Sox Hall of Famer David Ortiz celebrates before a crowd of adoring fans in 2016. Michael Ivins/Boston Red Sox via Getty Images

Is it all doom and gloom?

Might ChatGPT and DALL-E be worth keeping around?

Perhaps. These technologies could serve as catalysts for creativity. It’s possible that the link between ideation and execution can be sustained if these AI applications are simply viewed as mechanisms for creative imagining – what OpenAI calls “extending creativity.” They can generate stimuli that allow artists to engage in more imaginative thinking about their own process of conceiving an art piece.

Put differently, if ChatGPT and DALL-E are the end results of the artistic process, something meaningful will be lost. But if they are merely tools for fomenting creative thinking, this might be less of a concern.

For example, a game designer could ask DALL-E to provide some images about what a Renaissance town with a steampunk twist might look like. A writer might ask about descriptors that capture how a restrained, shy person expresses surprise. Both creators could then incorporate these suggestions into their work.

But in order for what they are doing to still count as art – in order for it to feel like art to the artists and to those taking in what they have made – the artists would still have to do the bulk of the artistic work themselves.

Art requires makers to keep making.

The warped incentives of the internet

Even if AI systems are used as catalysts for creative imagining, we believe that people should be skeptical of what these systems are drawing from. It’s important to pay close attention to the incentives that underpin and reward artistic creation, particularly online.

Consider the generation of AI art. These works draw on images and video that already exist online. But the AI is not sophisticated enough – nor is it incentivized – to consider whether a work evokes a sense of wonder, sadness, anxiety and so on. It is not capable of factoring in aesthetic considerations of novelty and cross-cultural influence.

Rather, training ChatGPT and DALL-E on preexisting measurements of artistic success online will tend to replicate the dominant incentives of the internet’s largest platforms: grabbing and retaining attention for the sake of data collection and user engagement. The catalyst for creative imagining therefore can easily become subject to an addictiveness and attention-seeking imperative rather than more transcendent artistic values.

It’s possible that artificial intelligence is at a precipice, one that evokes a sense of “moral vertigo” – the uneasy dizziness people feel when scientific and technological developments outpace moral understanding. Such vertigo can lead to apathy and detachment from creative expression.

If human labor is removed from the process, what value does creative expression hold? Or perhaps, having opened Pandora’s box, this is an indispensable opportunity for humanity to reassert the value of art – and to push back against a technology that may prevent many real human artists from thriving.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston and Alec Stubbs, Postdoctoral Fellow in Philosophy, UMass Boston

This article is republished from The Conversation under a Creative Commons license.

Sunday, April 16, 2023

The technology could transform how growers protect their harvests, by detecting plant diseases very early on. But the challenge is to develop tools that are as affordable as they are effective.

Swarms of locusts devastating crops in East Africa, corn rootworms wreaking havoc in the Midwestern US. Blights destroying rubber trees in Brazil and ravaging potatoes in South India. Unpredictable and erratic weather patterns brought on by climate change will only exacerbate these problems — and, scientists say, make crop diseases more likely to strike and inflict major damage.

A single warm winter can enable a pest to invade new territories. Maize- and millet-chomping armyworms and fruit- and vegetable-feasting Tephritid fruit flies have spread to new locations as a result of warming weather. Desert locusts, which destroy entire crops when they swarm, are expected to strike new regions as they change their migratory routes. It is a serious problem in a world in which an estimated 700 million to 800-plus million people faced hunger in 2021, and with the global population set to grow further.

Plant pathologist Karen Garrett of the University of Florida, Gainesville, believes that artificial intelligence (AI) could be immensely valuable in fighting these blights. If agriculture is equipped with cost-effective AI tools that can identify crop diseases and pest infestations early in their development, growers and others can catch problems before they take off and cause real damage, she says — a topic she and colleagues explored in the 2022 Annual Review of Phytopathology. This conversation has been edited for length and clarity.

You specialize in studying plant diseases, so let’s dive into this topic from that angle. How do changes in environment and climate affect plants and the emergence of plant diseases?

Most pathogens have a range of temperatures that favor them. From a pathogen’s standpoint, some years can be better than others. Sometimes, a hard winter or a long drought will kill off a pathogen. But it will not in a mild year — so the pathogen will thrive, and there may be more disease in the following seasons.

Consider potato late blight. It’s a famous example of a plant disease that had a big impact on European society during the mid-1840s. Late blight was one of the drivers of the Irish potato famine, which generated a big exodus of people from Ireland.

First, the pathogen was introduced. Then there were some years that had weather conditions that strongly favored the pathogen: cool and wet weather. As a result, the pathogen thrived, wreaking havoc on the crop. It’s estimated that a million people died and a million fled the country during that time.

Today, in places where temperatures are getting milder, such as at higher elevations and toward the Earth’s poles, pathogens favored by mild conditions can move into new regions and become more destructive.

When new crop diseases arise, how can anyone be sure that they are linked with climate change?

Any given crop epidemic is kind of like a storm. It’s hard to say whether any individual storm is due to climate change, but across many events you can start to draw conclusions.

One thing that plant pathologists talk about all the time is the “disease triangle.” Getting a disease requires three things: a pathogen that is able to infect, a conducive environment, and a host plant that can get infected. If the environment changes, for example through climate change, so that weather favoring a pathogen is more common, it will make it easier for the pathogen to thrive and attack more plants. People’s decisions about how to manage plant disease are another dimension. Often, several of these components change at the same time, so it’s challenging to say how much of an epidemic’s damage is strictly due to climate change.

Let’s add artificial intelligence to this discussion. How can AI help to mitigate the threats of pathogens to crops?

Artificial intelligence is intelligence produced by a machine, such as a computer system equipped with learning algorithms that can keep improving its ability to make predictions as it gets more information. These tools are so advanced that they can process huge amounts of information within seconds. For crop resiliency, AI can help by making better tools for crop surveillance, designing better robots to deliver pesticides or harvest, and better software to help in breeding for traits like disease resistance and drought tolerance. It has a strong social angle, as it can help farmers and policymakers to make smart decisions.

Let’s break down each of these. How has AI been used in surveillance, and what technologies already exist?

If you think about the rise of an epidemic in an area, at the early stages, the disease is only in a few locations. And then later, it will start to grow rapidly. There is potential for surveillance to employ remote sensing techniques like drones and satellite imagery that can identify the location of crops in farmlands that are infected with pathogens. AI tools can already use image analysis to spot changes in the coloration of leaves, flowers or fruits, and even their shapes or sizes.

Identifying diseases and taking action early can make it a lot easier to manage an epidemic. Satellite data used to be very coarse: You couldn’t get a high enough resolution to diagnose a problem. But the resolution keeps getting better, and as a result its potential for use in surveillance has been growing.

How exactly does AI use image analysis in these tools?

Well, there’s a lot of work at the beginning. First, people have to collect and curate thousands of images of healthy and diseased plants in a range of conditions, which takes time and investment. Then algorithms are developed to learn from these images and identify signatures of disease.

A lot of diseases have distinctive symptoms that can be detected visually. So if you have a drone, for example, that can go and take images in large fields, then those images can be compared and analyzed using AI to efficiently diagnose visible crop disease.
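As a toy illustration of that learning step (made-up synthetic data, not any group’s actual pipeline), the sketch below learns a decision threshold on a simple color feature, the fraction of browning pixels, from labeled "leaf" images:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_leaf(diseased, size=32):
    """Synthetic 'leaf' image: mostly green pixels, with brownish
    lesion pixels mixed in when the leaf is diseased."""
    img = np.tile(np.array([40, 160, 40]), (size, size, 1)).astype(float)
    if diseased:
        mask = rng.random((size, size)) < 0.3   # ~30% lesion coverage
        img[mask] = [150, 90, 30]               # brownish lesion color
    img += rng.normal(0, 10, img.shape)         # sensor noise
    return np.clip(img, 0, 255)

def lesion_fraction(img):
    """Feature: fraction of pixels where red exceeds green,
    a crude signature of browning."""
    return np.mean(img[..., 0] > img[..., 1])

# "Training": collect labeled examples and put the threshold midway
# between the two classes' mean feature values.
healthy = [lesion_fraction(make_leaf(False)) for _ in range(50)]
sick = [lesion_fraction(make_leaf(True)) for _ in range(50)]
threshold = (np.mean(healthy) + np.mean(sick)) / 2

def diagnose(img):
    return "diseased" if lesion_fraction(img) > threshold else "healthy"
```

Real systems use deep convolutional networks trained on thousands of curated photos, but the principle is the same: learn a decision rule from labeled healthy and diseased examples, then apply it to new images.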

For example, our coauthor Michael Selvaraj in Colombia has been working on this technology for identifying diseases in bananas. In Florida, some growers have invested in drones for surveillance. Currently, some growers scan images from drones themselves, to get a quick view of their orchards. This will probably gradually be replaced by automated image analysis of the videos of orchards as image analysis develops further and can efficiently find diseased plants.

But there are also safety and regulatory issues, because unplanned use of drones can create hazards for the public. It’s still a young industry, but because the advantages are many, I think it will only expand as policies strike a balance between protecting the public and providing benefits in agriculture.

And how can AI be used with robotic tools to aid in crop resiliency?

Agricultural robotics is a growing field right now. An interesting AI example already in place is segregating healthy fruit from those infected with pathogens or otherwise damaged.

Fruit can often be distinguished as diseased or not, based on color and shape. AI tools can process images of fruit a lot faster and more consistently than people can, so that discolored, low-quality fruit — which is often infected with pathogens — is automatically separated.

Also, there’s the idea of using drones that can collect and analyze images and then take immediate actions based on the analyses — for example, to decide to spray a pesticide. I think these tools will probably be ready for wider use in the near future, and will again need good policies.

Tell me more about how AI tools can help in plant breeding and in making more resilient strains.

You can think of plant breeding partly as a numbers game, because you have to breed plants and process lots of individual offspring when you are breeding for a trait. Crop breeders search among these offspring to find good traits for further development.

Plant breeders can use AI tools to predict which plants will grow quickly in a particular climate, which genes will help them thrive there, and which crosses between plant parents will likely yield better traits. The traits can relate to speed of growth, cooking properties, yield and resistance to pathogens. Crop breeders inoculate the offspring with a pathogen and see which ones are resistant, and what genes are associated with resistance.

AI can speed up the analysis of great numbers of genetic sequences related to these properties and find the right combination of DNA sequences you need for a desirable trait. And image analysis is increasingly being used for characterizing the offspring in breeding programs for major economic crops such as wheat, maize and soybean.
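One common formulation of this, genomic prediction, treats breeding as a regression from marker genotypes to trait values. A minimal sketch with simulated markers and a ridge penalty (all data and parameter values here are made up for illustration; real programs use far larger panels and models such as GBLUP):

```python
import numpy as np

rng = np.random.default_rng(1)

n_plants, n_markers = 200, 50
# Marker matrix: 0/1/2 copies of the alternate allele at each locus.
X = rng.integers(0, 3, size=(n_plants, n_markers)).astype(float)
# A sparse set of loci truly affects the trait (e.g. a resistance score).
true_effects = rng.normal(0, 1, n_markers) * (rng.random(n_markers) < 0.2)
y = X @ true_effects + rng.normal(0, 0.5, n_plants)  # observed phenotypes

# Ridge regression: solve (X'X + lambda*I) b = X'y for marker effects.
lam = 1.0
b = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)

# Predict breeding values for new, unphenotyped offspring and rank
# them to choose candidates for the next round of crosses.
X_new = rng.integers(0, 3, size=(20, n_markers)).astype(float)
predicted = X_new @ b
best = np.argsort(predicted)[::-1][:5]   # indices of the top five candidates
```

The workflow has the shape described above: fit marker effects on plants that have been phenotyped (for example, inoculated and scored for resistance), then rank unphenotyped offspring by predicted breeding value instead of growing and testing every one.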

How have farmers gone about incorporating AI tools across the world?

People have been working on tools for image analysis of diseases so that farmers can take a photo of their plant and then get an assessment using a phone. For example, PlantVillage Nuru is a phone application that uses image analysis to diagnose potential diseases in crops. It uses machine learning and thousands of images of crop diseases collected by experts from around the world. The app analyzes the images with AI and supports growers in making informed decisions about crop management.

Image analysis for disease diagnosis is generally not 100 percent accurate, but it can provide a level of confidence to help growers diagnose their crop diseases and understand the uncertainty.
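One simple way a diagnostic tool can report such a confidence is to map a classifier’s raw disease score through a logistic function; a hypothetical sketch (the threshold and sharpness values are illustrative, not taken from any real app):

```python
import math

def confidence(score, threshold=0.5, sharpness=10.0):
    """Map a raw disease score in [0, 1] to a probability-like
    confidence that the sample is diseased. Scores near the
    threshold yield confidences near 0.5, signaling uncertainty."""
    return 1.0 / (1.0 + math.exp(-sharpness * (score - threshold)))
```

A score well above the threshold produces a confidence near 1, a score well below it a confidence near 0, and a borderline score tells the grower the diagnosis is uncertain and worth confirming.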

What are some of the challenges involved in developing these kinds of AI tools?

For one thing, you need a lot of data for the AI system to learn from. To make an image analysis tool for diagnostics, you need to include a representative set of crop varieties, which can have a wide range of shapes and colorations. One big challenge is just getting enough of these images that are labeled correctly to be used for the image analysis tool to learn.

Another big issue is cost. There can be a lot of tools that do what you want them to do, but is the benefit they bring big enough to be worth the investment? I think there are a lot of AI tools that are already useful, but they might not be profitable for farmers yet. Many current applications are in cases where very high-value materials are processed, such as in postharvest fruit handling and in crop breeding.

Another sort of challenge is training and capacity building so that the use of such tools isn’t dependent on one expert but is more broadly used. A challenge for AI, and new technologies in general, is to make sure that the costs and benefits are fairly distributed in society.

What’s your ideal vision for securing a climate-resilient food security system for the future?

To be resilient to climate change, our food systems need to be designed to respond rapidly to new challenges. We can predict some future challenges, but some changes are likely to be a surprise. Education and capacity building are key to resilience, along with effective cooperation locally and globally. An international proposal for a global surveillance system for plant disease is an inspiring vision.

For food security in general, we need to support science education and capacity building, to make the best use of our current technologies and to support the development of better technologies. We need to work for food systems that minimize negative effects of agriculture on wildlands and maximize benefits for human health.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.