Wednesday, May 10, 2023

These four challenges will shape the next farm bill – and how the US eats

Small-scale farmers, organic producers and local markets receive a tiny fraction of farm bill funding. Edwin Remsberg/VWPics/Universal Images Group/Getty Images
Kathleen Merrigan, Arizona State University

For the 20th time since 1933, Congress is writing a multiyear farm bill that will shape what kind of food U.S. farmers grow, how they raise it and how it gets to consumers. These measures are large, complex and expensive: The next farm bill is projected to cost taxpayers US$1.5 trillion over 10 years.

Modern farm bills address many things besides food, from rural broadband access to biofuels and even help for small towns to buy police cars. These measures bring out a dizzying range of interest groups with diverse agendas.

Umbrella organizations like the American Farm Bureau Federation and the National Farmers Union typically focus on farm subsidies and crop insurance. The National Sustainable Agriculture Coalition advocates for small farmers and ranchers. Industry-specific groups, such as cattlemen, fruit and vegetable growers and organic producers, all have their own interests.

Environmental and conservation groups seek to influence policies that affect land use and sustainable farming practices. Hunger and nutrition groups target the bill’s sections on food aid. Rural counties, hunters and anglers, bankers and dozens of other organizations have their own wish lists.

As a former Senate aide and senior official at the U.S. Department of Agriculture, I’ve seen this intricate process from all sides. In my view, with the challenges in this round so complex and with critical 2024 elections looming, it could take Congress until 2025 to craft and enact a bill. Here are four key issues shaping the next farm bill, and through it, the future of the U.S. food system.

The price tag

Farm bills always are controversial because of their high cost, but this year the timing is especially tricky. In the past two years, Congress has enacted major bills to provide economic relief from the COVID-19 pandemic, counter inflation, invest in infrastructure and boost domestic manufacturing.

These measures follow unprecedented spending for farm support during the Trump administration. Now legislators are jockeying over raising the debt ceiling, which limits how much the federal government can borrow to pay its bills.

Agriculture Committee leaders and farm groups argue that more money is necessary to strengthen the food and farm sector. If they have their way, the price tag for the next farm bill would increase significantly from current projections.

On the other side, reformers argue for capping payments to farmers, which The Washington Post recently described as an “expensive agricultural safety net,” and restricting payment eligibility. In their view, too much money goes to very large farms that produce commodity crops like wheat, corn, soybeans and rice, while small and medium-size producers receive far less support.

Food aid is the key fight

Many people are surprised to learn that nutrition assistance – mainly through the Supplemental Nutrition Assistance Program, formerly known as food stamps – is where most farm bill money is spent. Back in the 1970s, Congress began including nutrition assistance in the farm bill to secure votes from an increasingly urban nation.

Today, over 42 million Americans depend on SNAP, including nearly 1 in every 4 children. Along with a few smaller programs, SNAP will likely consume 80% of the money in the new farm bill, up from 76% in 2018.

Why have SNAP costs grown? During the pandemic, SNAP benefits were increased on an emergency basis, but that temporary arrangement expired in March 2023. Also, in response to a directive included in the 2018 farm bill, the Department of Agriculture recalculated what it takes to afford a healthy diet, known as the Thrifty Food Plan, and determined that it required an additional $12-$16 per month per recipient, or 40 cents per meal.

Because it’s such a large target, SNAP is where much of the budget battle will play out. Most Republicans typically seek to rein in SNAP; most Democrats usually support expanding it.

Anti-hunger advocates are lobbying to make the increased pandemic benefits permanent and defend the revised Thrifty Food Plan. In contrast, Republicans are calling for SNAP reductions, and are particularly focused on expanding work requirements for recipients.

Jaqueline Benitez puts away groceries at her home in Bellflower, Calif., Feb. 13, 2023. Benitez, 21, works as a preschool teacher and depends on SNAP benefits to help pay for food. AP Photo/Allison Dinner

Debating climate solutions

The 2022 Inflation Reduction Act provided $19.5 billion to the Department of Agriculture for programs that address climate change. Environmentalists and farmers alike applauded this investment, which is intended to help the agriculture sector embrace climate-smart farming practices and move toward markets that reward carbon sequestration and other ecosystem services.

This big pot of money has become a prime target for members of Congress who are looking for more farm bill funding. On the other side, conservation advocates, sustainable farmers and progressive businesses oppose diverting climate funds for other purposes.

There also is growing demand for Congress to require USDA to develop better standards for measuring, reporting and verifying actions designed to protect or increase soil carbon. Interest is rising in “carbon farming” – paying farmers for practices such as no-till agriculture and planting cover crops, which some studies indicate can increase carbon storage in soil.

But without more research and standards, observers worry that investments in climate-smart agriculture will support greenwashing – misleading claims about environmental benefits – rather than a fundamentally different system of production. Mixed research results have raised questions as to whether establishing carbon markets based on such practices is premature.

A complex bill and inexperienced legislators

Understanding farm bills requires highly specialized knowledge about issues ranging from crop insurance to nutrition to forestry. Nearly one-third of current members of Congress were first elected after the 2018 farm bill was enacted, so this is their first farm bill cycle.

I expect that, as often occurs in Congress, new members will follow more senior legislators’ cues and go along with traditional decision making. This will make it easier for entrenched interests, like the American Farm Bureau Federation and major commodity groups, to maintain support for Title I programs, which provide revenue support for major commodity crops like corn, wheat and soybeans. These programs are complex, cost billions of dollars and go mainly to large-scale operations.

How the U.S. became a corn superpower.

Agriculture Secretary Tom Vilsack’s current stump speech spotlights the fact that 89% of U.S. farmers failed to make a livable profit in 2022, even though total farm income set a record at $162 billion. Vilsack asserts that less-profitable operations should be the focus of this farm bill – but when pressed, he appears unwilling to concede that support for large-scale operations should be changed in any way.

When I served as deputy secretary of agriculture from 2009 to 2011, I oversaw the department’s budget process and learned that investing in one thing often requires defunding another. My dream farm bill would invest in three priorities: organic agriculture as a climate solution; infrastructure to support vibrant local and regional markets and shift away from an agricultural economy dependent on exporting low-value crops; and agricultural science and technology research aimed at reducing labor and chemical inputs and providing new solutions for sustainable livestock production.

In my view, it is time for tough policy choices, and it won’t be possible to fund everything. Congress’ response will show whether it supports business as usual in agriculture, or a more diverse and sustainable U.S. farm system.

Kathleen Merrigan, Executive Director, Swette Center for Sustainable Food Systems, Arizona State University

This article is republished from The Conversation under a Creative Commons license. 

Gain-of-function research is more than just tweaking risky viruses – it’s a routine and essential tool in all biology research

Gain-of-function experiments in the lab can help researchers get ahead of viruses naturally gaining the ability to infect people in the wild. KTSDesign/Science Photo Library via Getty Images
Seema Lakdawala, University of Pittsburgh and Anice Lowen, Emory University

The term “gain of function” is often taken to refer to research with viruses that puts society at risk of an infectious disease outbreak for questionable gain. Some research on emerging viruses can result in variants that gain the ability to infect people, but this does not necessarily mean the research is dangerous or unfruitful. Concerns have focused on lab research involving the virus that causes bird flu, beginning in 2012, and the virus that causes COVID-19, beginning in 2020. The National Institutes of Health previously implemented a three-year moratorium on gain-of-function research on certain viruses, and some U.S. legislators have proposed bills prohibiting gain-of-function research on “potentially pandemic pathogens.”

The possibility that a genetically modified virus could escape the lab needs to be taken seriously. But it does not mean that gain-of-function experiments are inherently risky or the purview of mad scientists. In fact, gain-of-function approaches are a fundamental tool in biology used to study much more than just viruses, contributing to many, if not most, modern discoveries in the field, including penicillin, cancer immunotherapies and drought-resistant crops.

As scientists who study viruses, we believe that misunderstanding the term “gain of function” as something nefarious comes at the cost of progress in human health, ecological sustainability and technological advancement. Explaining what gain-of-function research really is can help show why it is an essential scientific tool.

What is gain of function?

To study how a living thing operates, scientists can change a specific part of it and then observe the effects. These changes sometimes result in the organism’s gaining a function it didn’t have before or losing a function it once had.

For example, if the goal is to enhance the tumor-killing ability of immune cells, researchers can take a sample of a person’s immune cells and modify them to express a protein that specifically targets cancer cells. This modified immune cell, called a CAR-T cell, thereby “gains the function” of being able to bind to cancerous cells and kill them. The advance of similar immunotherapies that help the immune system attack cancer cells is based on the exploratory research of scientists who synthesized such “Frankenstein” proteins in the 1980s. At that time, there was no way to know how useful these chimeric proteins would be to cancer treatment today, some 40 years later.

CAR-T cell therapy involves giving a patient’s immune cells an increased ability to target cancer cells.

Similarly, by adding specific genes into rice, corn or wheat plants that increase their production in diverse climates, scientists have been able to produce plants that are able to grow and thrive in geographical regions they previously could not. This is a critical advance to maintain food supplies in the face of climate change. Well-known examples of food sources that have their origins in gain-of-function research include rice plants that can grow in high flood plains or in drought conditions or that contain vitamin A to reduce malnutrition.

Medical advances from gain-of-function research

Gain-of-function experiments are ingrained in the scientific process. In many instances, the benefits that stem from gain-of-function experiments are not immediately clear. Only decades later does the research bring a new treatment to the clinic or a new technology within reach.

The development of most antibiotics has relied on the manipulation of bacteria or mold in gain-of-function experiments. Alexander Fleming’s initial discovery that the mold Penicillium rubens could produce a compound toxic to bacteria was a profound medical advance. But it wasn’t until scientists experimented with growth conditions and mold strains that therapeutic use of penicillin became feasible. Using a specific growth medium allowed the mold to gain the function of increased penicillin production, which was essential for its mass production and widespread use as a drug.

Gain-of-function research played a key role in the development and mass production of penicillin. Wesley/Stringer/Hulton Archive via Getty Images

Research on antibiotic resistance also relies heavily on gain-of-function approaches. Studying how bacteria gain resistance against drugs is essential to developing new treatments microbes are unable to evade quickly.

Gain-of-function research in virology has also been critical to the advancement of science and health. Oncolytic viruses are genetically modified in the laboratory to infect and kill cancerous cells like melanoma. Similarly, the Johnson & Johnson COVID-19 vaccine contains an adenovirus altered to produce the spike protein that helps the COVID-19 virus infect cells. Scientists developed live attenuated flu vaccines by adapting them to grow at low temperatures and thereby lose the ability to grow at human lung temperatures.

By giving viruses new functions, scientists were able to develop new tools to treat and prevent disease.

Nature’s gain-of-function experiments

Gain-of-function approaches are needed to advance understanding of viruses in part because these processes already occur in nature.

Many viruses that infect such nonhuman animals as bats, pigs, birds and mice have the potential to spill over into people. Every time a virus copies its genome, it makes mistakes. Most of these mutations are detrimental – they reduce a virus’s ability to replicate – but some may allow a virus to replicate faster or better in human cells. Variant viruses with these rare, beneficial mutations will spread better than other variants and therefore come to dominate the viral population – that is how natural selection works.

If these viruses can replicate even a little bit within people, they have the potential to adapt and thereby thrive in their new human hosts. That is nature’s gain-of-function experiment, and it is happening constantly.
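A minimal simulation sketch of that selection dynamic follows (it is not part of the original research; the mutation rate and replication advantage are arbitrary, illustrative values, not measurements from any real virus). It shows how a rare variant with even a modest copying advantage comes to dominate a replicating population.

```python
# Toy model of the natural selection described above.
# MUTATION_RATE and FITNESS_ADVANTAGE are arbitrary assumptions for illustration.

MUTATION_RATE = 1e-4       # chance per copy that a genome picks up the beneficial change
FITNESS_ADVANTAGE = 1.10   # variant genomes are copied 10% more efficiently

variant_freq = 0.0
for generation in range(1, 101):
    # mutation: a small slice of non-variant genomes becomes variant
    variant_freq += (1 - variant_freq) * MUTATION_RATE
    # selection: variant genomes contribute proportionally more copies to the next generation
    variant_weight = variant_freq * FITNESS_ADVANTAGE
    wildtype_weight = 1 - variant_freq
    variant_freq = variant_weight / (variant_weight + wildtype_weight)
    if generation % 25 == 0:
        print(f"generation {generation:3d}: variant frequency = {variant_freq:.3f}")

# The printed frequency climbs from roughly 1% at generation 25 to more than 90%
# by generation 100: an initially rare, slightly better-replicating variant takes over.
```

This is the same logic that makes surveillance for particular mutation signatures valuable: the earlier a better-replicating variant is spotted, the fewer rounds of copying selection has had to work with.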

Gain-of-function experiments in the lab can help scientists anticipate the changes viruses may undergo in nature by understanding what specific features allow them to transmit between people and infect them. In contrast to nature’s experiments, these are conducted in highly controlled lab conditions designed to limit infection risk to laboratory personnel and others, including air flow control, personal protective equipment and waste sterilization.

Researchers and public health officials are concerned that the bird flu virus is evolving to more readily infect people. Guadalupe Pardo/AP Photo

It is important that researchers carefully observe lab safety to minimize the theoretical risk of infecting the general population. It is equally important that virologists continue to apply the tools of modern science to gauge the risk of natural viral spillovers before they become outbreaks.

A bird flu outbreak is currently raging across multiple continents. While the H5N1 virus is primarily infecting birds, some people have gotten sick too. More spillover events can change the virus in ways that would allow it to transmit more efficiently among people, potentially leading to a pandemic.

Scientists have a better appreciation of the tangible risk of bird flu spillover because of gain-of-function experiments published a decade ago. Those lab studies showed that bird flu viruses could be transmitted through the air between ferrets within a few feet of one another. They also revealed multiple features of the evolutionary path the H5N1 virus would need to take before it becomes transmissible in mammals, informing what signatures researchers need to look out for during surveillance of the current outbreak.

Oversight on gain of function

Perhaps this sounds like a semantic argument, and in many respects it is. Many researchers would likely agree that gain of function as a general tool is an important way to study biology that should not be restricted, while also arguing that it should be curtailed for research on specific dangerous pathogens. The problem with this argument is that pathogen research needs to include gain-of-function approaches in order to be effective – just as in any area of biology.

Oversight of gain-of-function research on potential pandemic pathogens already exists. Multiple layers of safety measures at the institutional and national levels minimize the risks of virus research.

While updates to current oversight are not unreasonable, we believe that blanket bans or additional restrictions on gain-of-function research do not make society safer. They may instead slow research in areas ranging from cancer therapies to agriculture. Clarifying which specific research areas are of concern regarding gain-of-function approaches can help identify how the current oversight framework can be improved.

Seema Lakdawala, Associate Professor of Microbiology and Immunology at Emory University and Adjunct Professor of Microbiology and Molecular Genetics, University of Pittsburgh and Anice Lowen, Associate Professor of Microbiology and Immunology, Emory University

This article is republished from The Conversation under a Creative Commons license. 

Tuesday, May 9, 2023

Our bodies crave more food if we haven’t had enough protein, and this can lead to a vicious cycle — especially if we’re reaching for ultraprocessed instead of high-fiber whole foods

This story starts in an unusual place for an article about human nutrition: a cramped, humid and hot room somewhere in the Zoology building of the University of Oxford in England, filled with a couple hundred migratory locusts, each in its own plastic box.

It was there, in the late 1980s, that entomologists Stephen Simpson and David Raubenheimer began working together on a curious job: rearing these notoriously voracious insects, to try and find out whether they were picky eaters.

Every day, Simpson and Raubenheimer would weigh each locust and feed it precise amounts of powdered foods containing varying proportions of proteins and carbohydrates. To their surprise, the young scientists found that whatever food the insects were fed, they ended up eating almost exactly the same amount of protein.

In fact, locusts feeding on food that was low in protein ate so much extra in order to reach their protein target that they ended up overweight — not chubby on the outside, since their exoskeleton doesn’t allow for bulges, but chock-full of fat on the inside.

Inevitably, this made Simpson and Raubenheimer wonder whether something similar might be causing the documented rise in obesity among humans. Many studies had reported that even as our consumption of fats and carbohydrates increased, our consumption of protein did not.

Might it be that, like locusts, we are tricked into overeating, in our case by the irresistible, low-protein, ultraprocessed foods on the shelves of the stores where we do most of our foraging? That’s what Raubenheimer and Simpson, both now at the University of Sydney, argue in their recent book “Eat Like the Animals” and in an overview in the Annual Review of Nutrition.

Simpson took us through the reasoning and the data in an interview with Knowable Magazine. This conversation has been edited for length and clarity.

How does an entomologist end up studying nutrition in humans?

My interest in feeding behavior goes all the way back to my undergraduate years in Australia, where I was studying the food choices of sheep blowfly maggots, which hatch from eggs laid in the wool of sheep and eat the sheep alive. For my PhD, I took an opportunity at the University of London, England, to study appetite and food intake control in migratory locusts, which exist in two extreme forms — one solitary and one aggregating in swarms that create devastating plagues.

Since they had this reputation for being absolutely voracious, we surely did not expect them to have a lot of nuance in the way they control what they eat. But I started to explore whether they could sense the requirement for different nutrients and use it to regulate their intake. That led to experiments with artificial diets of different nutrient compositions, which showed that locusts have nutrient-specific appetites for protein and carbohydrate: Their food tastes different to them depending on what they need, and that enables them to balance their diets.

In 1987, I started working with David Raubenheimer at Oxford to find out what happens if you put locusts on a diet that forces different appetites to compete, by feeding the animals mixtures of proteins and carbohydrates in relative amounts that do not match their intake target. We made 25 different diets, measured how much the locusts ate, how quickly they developed, and how big they grew, and found that when protein and carbohydrate appetites compete, protein wins.

What that means is that if you put animals on a low-protein, high-carb diet, they’ll eat more calories to get that limiting protein, and they’ll end up obese. Likewise, if you put them on a high-protein, low-carb diet, they don’t need to eat as much to get to their protein target, and they end up losing weight. It was at that point that we knew we had discovered a powerful new way of looking at nutrition.
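The arithmetic behind that leverage is simple enough to sketch. Below is a minimal Python illustration (not from the researchers; the daily protein target and the diet compositions are hypothetical round numbers) of how much total energy an eater must take in to satisfy a fixed protein appetite as protein is diluted by carbs and fat.

```python
# Protein-leverage arithmetic with hypothetical numbers.
# Assumption: intake continues until a fixed daily protein target is met.

PROTEIN_TARGET_KCAL = 300.0   # assumed protein target, in kcal/day (roughly 75 g of protein)

def energy_needed(protein_fraction: float) -> float:
    """Total kcal/day required to hit the protein target on a diet
    where `protein_fraction` of the energy comes from protein."""
    return PROTEIN_TARGET_KCAL / protein_fraction

for label, fraction in [("25% protein", 0.25), ("15% protein", 0.15), ("10% protein", 0.10)]:
    print(f"{label:12s} -> {energy_needed(fraction):4.0f} kcal/day to satisfy the protein appetite")

# 25% protein -> 1200 kcal, 15% -> 2000 kcal, 10% -> 3000 kcal:
# diluting protein with carbs and fat pushes total energy intake up sharply,
# while a protein-rich diet satisfies the appetite on fewer calories.
```

Real eaters compensate only partially rather than fully, but the direction of the effect is the same.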

We started looking at lots of different species of insects, and found that they, too, had the capacity to regulate their intake of protein and carbohydrate, and that protein was often, but not always, the prioritized nutrient.

By now, we have studied species from cats, dogs and free-ranging primates to fish in aquaculture to slime molds to humans, in a variety of contexts — from understanding health and disease to optimizing animal feed to conservation biology.

You’ve found that the nutrient levels that animals aim for are the ones at which they grow, survive or reproduce best. Just by following their appetite, they eat exactly what they need. Why don’t we?

There are two possibilities. Either our biology is broken, or it still works but we’re in the wrong environment. What we’ve shown in our studies is the latter. What has happened is our appetites, which evolved in natural environments, have now been subjected to highly engineered food environments which have been designed, in many ways, to hack our biology, to subvert our appetites.

One of our favorite examples came from a study we did in Sydney. We confined people in a sleep center for three four-day periods and provided them with foods and menus which were varied and matched in palatability, but were all of the same nutrient composition for a given week.

We had a 25 percent protein week, a 15 percent protein week, and a 10 percent protein week, and the subjects didn’t know that was going on. As far as they were concerned, they were allowed to eat what they wanted, everything tasted equally good and there were lots of choices. But it turned out that during the low-protein week, people ate more, because their protein appetite would drive them to eat more calories, to try and get enough protein. They largely did this by increasing snacking between meals, and selectively on savory-flavored snacks.

We’ve subsequently discovered that when you’re low in protein, as is the case on a 10 percent protein diet, you have elevated levels of a hormone called FGF21, which is mainly released from the liver. What we’ve shown in mouse experiments and confirmed in humans is that FGF21 switches on savory-seeking behavior, which is a proxy for eating protein.

Now, if you have that response and the nearest savory thing is a bag of barbecue-flavored potato crisps, that’s a protein decoy. You’ll be misdirected to eat that, but you’ll not get any substantial amount of protein. You’ll remain protein-hungry, and you’ll have to eat more to satisfy that protein appetite. That means you’re accumulating excess calories, and that is precisely what happens to us in our modern food environment.

You argue that ultraprocessed foods are especially likely to make us consume too many calories. Why would that be so?

Over the last couple of years, population survey data have shown that the average person in the US, Australia or the UK gets more than half their calories from highly processed foods — in some cases it’s 90 percent or more. As the proportion of ultraprocessed food in the diet increases, protein intake remains largely the same, but energy intake goes up steeply because of the dilution of protein by the fats and carbs in these foods. So this protein appetite we discovered initially in locusts operates in us too. In our modern food environment, it drives us to overconsume energy, and that sets up a vicious cycle.

What we find is that as people become overweight, their metabolism becomes dysregulated. Their tissues become less responsive to insulin, which normally regulates protein metabolism. This makes protein metabolism less efficient, causing the body to break down lean tissues like muscle and bone and burn protein to produce energy.

That increases people’s protein target, so they’ll eat even more, put on more weight, become even more metabolically dysregulated, start craving more protein, and so on.

We’ve since taken that basic idea and used it in a paper at the end of last year to propose a new understanding of why women are prone to put on weight during menopause. That’s a period when protein breakdown rates go steeply upwards in bone and muscle because of the decline in reproductive hormones. And it is driving the same sort of outcome that I just described.

You also see it in aging, you see it in people who smoke, you see it with excess alcohol intake — these are all circumstances in which FGF21 goes up, protein appetite goes up, protein breakdown goes up, and you’ll end up in this sort of vicious cycle.

As an entomologist, how did you manage to convince colleagues in nutrition science this matters?

It’s just the accumulation of evidence. Last fall, we spoke at the Royal Society in London at a big obesity conference, and the response to our talk indicated to me that protein leverage is now accepted as one of the main, credible underlying explanations for obesity. Our evidence comes from pre-clinical studies, it comes from clinical studies, it comes from cohort studies, it comes from population-level analyses, it comes from deep mechanistic biology — it’s now unanswerably there. The remaining question is: Of the various influences that drive obesity, is protein appetite a main one? We think it probably is.

Why would protein be the strongest driver of our appetites? What would be the biological logic?

All three macronutrients — fat, carbs and protein — contain calories, so we can burn any of them to yield energy, and we can use any of them to make glucose, which is the preferred fuel for our cells and brain.

But only protein has nitrogen, which we need for many other purposes, from maintaining our cells to producing offspring. You don’t want to eat too little protein.

That leaves the question of why we don’t overeat it. Why do we eat fewer calories than we need on a high-protein diet, rather than eat excess protein? To us, that implied there is a cost to eating too much protein, and we set out to discover that cost in fruit flies. We designed a large experiment where we confined a thousand flies to one of 28 diets varying in the ratio of protein and carbohydrate, the two major macronutrients for flies. What we found was that flies lived longest on a lower-protein, high-carbohydrate diet, but laid most eggs on a higher-protein, lower-carbohydrate diet. A really-high-protein diet, finally, wasn’t better for either outcome.

That overturned a hundred years of thinking around restricting calories and aging: The dominant view was that reduced calories were what prolonged life, but our data showed that the type of calories matter, notably the ratio of protein to carbs. And it created quite a stir at the time — the paper came out in 2008.

We set out to do the same experiment in mice. To do that, we had to add fat as a third nutrient dimension to the dietary design. That involved an enormous study. We took more than 700 mice and put them on one of 25 different diets varying in the concentration and ratio of protein, carbohydrate and fat. It took 6 metric tons of experimental diet to run that study over the three or four years before the oldest mice died.

That was the first of a whole series of huge mouse experiments where we looked at different types of carbohydrate, different ratios of amino acids, and so on. The long and the short of it was that the mice lived longer on low-protein, high-carbohydrate diets, but reproduced better on high-protein, low-carbohydrate diets — very similar to the flies.

Importantly, the benefit of low protein was only realized when the carbohydrates were harder-to-digest complex carbohydrates like fiber and starch, not simple sugars. If you translate that into human populations and look across the world for human populations that live the longest, lo and behold they’re the ones on diets low in protein and high in healthy carbohydrates and fats, such as Mediterranean-style diets and the traditional Okinawan diet.

I’m sure they’re all very healthy, but how do people on these diets manage their appetites?

That’s a really interesting question. The Okinawans certainly are hungry for protein. In traditional Japanese cuisine, there is an almost religious prominence given to umami flavors, which are the signature of protein, the savory characteristic in foods. So that’s like a societal protein appetite.

The other question is: On a 10 percent protein diet like the Okinawan diet, why aren’t they all suffering obesity because they have to eat far more to get their protein? The answer is that the traditional diet is low in energy, and high in fiber. By eating more to try and attain their protein target, they get more fiber instead of more calories, until their stomach is full. That’s a crucial distinction with the modern industrialized food environment, which isn’t just low in protein, but also low in fiber — and high in fats and carbs.

If low protein and low fiber content are the main problem, would it help to just increase them in ultraprocessed foods? Or would that not be sufficient?

Science has already nudged the industry in that direction in a couple of ways that are not altogether helpful. The high-protein snack industry is a phenomenon which reflects this science. Their response was: We’ve got a new market now for high-protein bars. Whether or not that’s ultimately going to help the world’s waistline is less clear at the moment, as the food environment as a whole remains replete with low-protein, low-fiber, ultrapalatable processed foods.

The principal driver for reducing protein content in ultraprocessed foods was that protein is more expensive than fats and carbohydrates. It was cheaper to take some of the protein out and add a little more fat and carbs, particularly when you can make things taste fantastic by mixing sugar and fat and a bit of salt together.

Some of the big providers of lifestyle interventions have shifted towards increasing the percent protein in the diet. And of course, all of the commercially successful fad diets of recent decades have been high-protein diets. But none of them takes account of the fact that there’s potentially a cost to a higher-protein diet.

As we’ve shown originally in flies and mice, a higher protein-to-carbohydrate ratio than we need speeds up aging in our tissues. That being said, if you’re suffering obesity and diabetes, the benefits of a high-protein diet in terms of weight loss may outweigh the costs. It’s a matter of understanding the relative costs and benefits associated with different diet compositions, relating them to personal goals and breaking away from some of the crazy diet zealotry that goes on online and is promoted by many of the fad diet industries.

So you’d recommend eating more fiber and fewer carbs and fats rather than eating more protein? How does that affect your own choice of snacks outside of mealtimes?

I have a deep love of food, cooking, and even hunting and gathering — I’m a fisherman. But I’m as susceptible to the siren call of ultraprocessed foods and beverages as everyone else. These products have been designed to be irresistible, so I avoid them, except on occasions. They are not in the house or my shopping trolley.

As a family, we eat whole foods, plenty of fruits and vegetables, pulses, nuts and grains, as well as dairy and high-quality meat, fish and poultry. There are many ways to mix a nutritionally balanced and delicious diet without the use of apps or computer programs. After all, no species in the history of life on Earth ever needed those.

The trick is to take advantage of our evolved biology of appetite by creating an environment in which our appetites can guide us to a healthy and balanced diet. We need to help our appetites work for ourselves and our health, not the profits of the food and beverage industries.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. 

More than just a distraction, mind-wandering (and its cousin, daydreaming) may help us prepare for the future

When psychologist Jonathan Smallwood set out to study mind-wandering about 25 years ago, few of his peers thought that was a very good idea. How could one hope to investigate these spontaneous and unpredictable thoughts that crop up when people stop paying attention to their surroundings and the task at hand? Thoughts that couldn’t be linked to any measurable outward behavior?

But Smallwood, now at Queen’s University in Ontario, Canada, forged ahead. He used as his tool a downright tedious computer task that was intended to reproduce the kinds of lapses of attention that cause us to pour milk into someone’s cup when they asked for black coffee. And he started out by asking study participants a few basic questions to gain insight into when and why minds tend to wander, and what subjects they tend to wander toward. After a while, he began to scan participants’ brains as well, to catch a glimpse of what was going on in there during mind-wandering.

Smallwood learned that unhappy minds tend to wander in the past, while happy minds often ponder the future. He also became convinced that wandering among our memories is crucial to help prepare us for what is yet to come. Though some kinds of mind-wandering — such as dwelling on problems that can’t be fixed —  may be associated with depression, Smallwood now believes mind-wandering is rarely a waste of time. It is merely our brain trying to get a bit of work done when it is under the impression that there isn’t much else going on.

Smallwood, who coauthored an influential 2015 overview of mind-wandering research in the  Annual Review of Psychology, is the first to admit that many questions remain to be answered. 

This conversation has been edited for length and clarity. 

Is mind-wandering the same thing as daydreaming, or would you say those are different?

I think it’s a similar process used in a different context. When you’re on holiday, and you’ve got lots of free time, you might say you’re daydreaming about what you’d like to do next. But when you’re under pressure to perform, you’d experience the same thoughts as mind-wandering.

I think it is more helpful to talk about the underlying processes: spontaneous thought, or the decoupling of attention from perception, which is what happens when our thoughts separate from our perception of the environment. Both these processes take place during mind-wandering and daydreaming.

It often takes us a while to catch ourselves mind-wandering. How can you catch it to study it in other people?

In the beginning, we gave people experimental tasks that were really boring, so that mind-wandering would happen a lot. We would just ask from time to time, “Are you mind-wandering?” while recording the brain’s activity in an fMRI scanner. 

But what I’ve realized, after doing studies like that for a long time, is that if we want to know how thinking works in the real world, where people are doing things like watching TV or going for a run, most of the data we have are never going to tell us very much. 

So we are now trying to study these situations. And instead of doing experiments where we just ask, “Are you mind-wandering?” we are now asking people a lot of different questions, like: “Are your thoughts detailed? Are they positive? Are they distracting you?” 

How and why did you decide to study mind-wandering? 

I started studying mind-wandering at the start of my career, when I was young and naive. 

I didn’t really understand at the time why nobody was studying it. Psychology was focused on measurable, outward behavior then. I thought to myself: That’s not what I want to understand about my thoughts. What I want to know is: Why do they come, where do they come from, and why do they persist even if they interfere with attention to the here and now?

Around the same time, brain imaging techniques were developing, and they were telling neuroscientists that something happens in the brain even when it isn’t occupied with a behavioral task. Large regions of the brain, now called the default mode network, did the opposite: If you gave people a task, the activity in these areas went down. 

When scientists made this link between brain activity and mind-wandering, it became fashionable. I’ve been very lucky, because I hadn’t anticipated any of that when I started my PhD, at the University of Strathclyde in Glasgow. But I’ve seen it all pan out.

Would you say, then, that mind-wandering is the default mode for our brains?

It turns out to be more complicated than that. Initially, researchers were very sure that the default mode network rarely increased its activity during tasks. But these tasks were all externally focused — they involved doing something in the outside world. When researchers later asked people to do a task that doesn’t require them to interact with their environment — like think about the future — that activated the default mode network as well.

More recently, we have identified much simpler tasks that also activate the default mode network. If you let people watch a series of shapes like triangles or squares on a screen, and every so often you surprise them and ask something — like, “In the last trial, which side was the triangle on?”— regions within the default mode network increase activity when they’re making that decision. That’s a challenging observation if you think the default mode network is just a mind-wandering system. 

But what both situations have in common is the person is using information from memory. I now think the default mode network is necessary for any thinking based on information from memory — and that includes mind-wandering. 

Would it be possible to demonstrate that this is indeed the case?

In a recent study, instead of asking people whether they were paying attention, we went one step further. People were in a scanner reading short factual sentences on a screen. Occasionally, we’d show them a prompt that said, “Remember,” followed by an item from a list of things from their past that they’d provided earlier. So then, instead of reading, they’d remember the thing we showed them. We could cause them to remember. 

What we find is that the brain scans in this experiment look remarkably similar to mind-wandering. That is important: It gives us more control over the pattern of thinking than when it occurs spontaneously, like in naturally occurring mind-wandering. Of course, that is a weakness as well, because it’s not spontaneous. But we’ve already done lots of spontaneous studies.

When we make people remember things from the list, we recapitulate quite a lot of what we saw in spontaneous mind-wandering. This suggests that at least some of the activity we see when minds wander is indeed associated with the retrieval of memories. We now think the decoupling between attention and perception happens because people are remembering. 

Have you asked people what their minds are wandering toward?

The past and future seem to really dominate people’s thinking. I think things like mind-wandering are attempts by the brain to make sense of what has happened, so that we can behave better in the future. I think this type of thinking is a really ingrained part of how our species has conquered the world. Almost nothing we’re doing at any moment in time can be pinpointed as only mattering then.

That’s a defining difference. By that, I don’t mean that other animals can’t imagine the future, but that our world is built upon our ability to do so, and to learn from the past to build a better future. I think animals that focused only on the present were outcompeted by others that remembered things from the past and could focus on future goals, for millions of years — until you got humans, a species that’s obsessed with taking things that happened and using them to gain added value for future behavior. 

People are also, very often, mind-wandering about social situations. This makes sense, because we have to work with other people to achieve almost all of our goals, and other people are much more unpredictable than the Sun rising in the morning.

Though it is clearly useful, isn’t it also very depressing to keep returning to issues from the past? 

It certainly can be. We have found that mind-wandering about the past tends to be associated with negative mood. 

Let me give you an example of what I think may be happening. For a scientist like me, coming up with creative solutions to scientific problems through mind-wandering is very rewarding. But you can imagine that if my situation changes and I end up with a set of problems I can’t fix, the habit of going over the past may become difficult to break. My brain will keep activating the problem-solving system, even if it can’t do anything to fix the problem, because now my problems are things like getting divorced and my partner doesn’t want any more to do with me. If such a thing happens and all I’ve got is an imaginative problem-solving system, it’s not going to help me, it’s just going to be upsetting. I just have to let it go.

That’s where I think mindfulness could be useful, because the idea of mindfulness is to bring your attention to the moment. So if I’d be more mindful, I’d be going into problem-solving mode less often.

If you spend long enough practicing being in the moment, maybe that becomes a habit. It’s about being able to control your mind-wandering. Cognitive behavioral therapy for depression, which aims to help people change how they think and behave, is another way to reduce harmful mind-wandering.

Nowadays, it seems that many of the idle moments in which our minds would previously have wandered are now spent scrolling our phones. How do you think that might change how our brain functions?

The interesting thing about social media and mind-wandering, I think, is that they may have similar motivations. Mind-wandering is very social. In our studies, we’re locking people in small booths and making them do these tasks and they keep coming out and saying, “I’m thinking about my friends.” That’s telling us that keeping up with others is very important to people.

Social groups are so important to us as a species that we spend most of our time trying to anticipate what others are going to do, and I think social media is filling part of the gap that mind-wandering is trying to fill. It’s like mainlining social information: You can try to imagine what your friend is doing, or you can just find out online. Though, of course, there is an important difference: When you’re mind-wandering, you’re ordering your own thoughts. Scrolling social media is more passive.

Could there be a way for us to suppress mind-wandering in situations where it might be dangerous? 

Mind-wandering can be a benefit and a curse, but I wouldn’t be confident that we know yet when it would be a good idea to stop it. In our studies at the moment, we are trying to map how people think across a range of different types of tasks. We hope this approach will help us identify when mind-wandering is likely to be useful or not — and when we should try to control it and when we shouldn’t.

For example, in our studies, people who are more intelligent don’t mind-wander as often when the task is hard but do so more when tasks are easy. It is possible that they are using the idle time, when the external world is not demanding their attention, to think about other important matters. This highlights the uncertainty about whether mind-wandering is always a bad thing, because this sort of result implies it is likely to be useful under some circumstances.

This map — of how people think in different situations — has become very important in our research. This is the work I’m going to focus on now, probably for the rest of my career.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

What is insider trading? Two finance experts explain why it matters to everyone

Financier Ivan Boesky was the real-life inspiration for Gordon Gekko of ‘Wall Street.’ Yves Gellie/Gamma-Rapho via Getty Images
Alexander Kurov, West Virginia University and Marketa Wolfe, Skidmore College

Insider trading is the term used to describe the illegal act in which someone relies on market-moving, nonpublic information to decide whether to buy or sell a financial asset.

For example, say you work as an executive at a company that plans to make an acquisition. If it’s not public, that would count as inside information. It becomes a crime if you either tell a friend about it – and that person then buys or sells a financial asset using that information – or if you make a trade yourself.

Punishment, if you’re convicted for insider trading, can range from a few months to over a decade behind bars.

Insider trading became illegal in the U.S. in 1934 after Congress passed the Securities Exchange Act in the wake of the worst sustained decline in stocks in history.

From Black Monday 1929 through the summer of 1932, the stock market lost 89% of its value. The act was meant to prevent a whole litany of abuses from recurring, including insider trading.

While insider trading typically involves trading stocks of individual companies based on information about them, it can involve any kind of information about the economy, a commodity or anything else that moves markets.

Insider trading was dramatized in Oliver Stone’s 1987 classic movie “Wall Street.” Here, ruthless financier Gordon Gekko explains why information is so valuable.

Why insider trading matters

Insider trading is not a victimless crime. People trading on inside information benefit at the expense of others.

A key characteristic of well-functioning financial markets is high liquidity, which means it is easy to make large trades at low transaction costs. But when traders fear losing money to counterparts with inside information, they charge higher transaction costs, which leads to less liquidity and lower investor returns. And since a lot of people have a stake in financial markets – about half of U.S. families own stocks either directly or indirectly – this behavior hurts most Americans.
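That mechanism can be made concrete with a stylized market-maker calculation, in the spirit of standard market-microstructure models (a sketch with made-up numbers, not a model from the article): the larger the share of order flow a dealer suspects comes from insiders, the wider the bid-ask spread must be for the dealer to break even, and that spread is a cost paid by every other investor.

```python
# Stylized adverse-selection example (illustrative numbers only).
# A dealer quotes bid/ask prices for a stock that will turn out to be
# worth either 90 or 110 with equal probability.

V_HIGH, V_LOW = 110.0, 90.0
FAIR_VALUE = (V_HIGH + V_LOW) / 2

def break_even_quotes(informed_share: float) -> tuple[float, float]:
    """Bid and ask at which the dealer breaks even when `informed_share`
    of incoming orders come from traders who already know the true value."""
    # Informed traders only buy when the value is high and only sell when it is low,
    # so a buy order shades the expected value up and a sell order shades it down.
    ask = informed_share * V_HIGH + (1 - informed_share) * FAIR_VALUE
    bid = informed_share * V_LOW + (1 - informed_share) * FAIR_VALUE
    return bid, ask

for share in (0.0, 0.1, 0.2, 0.4):
    bid, ask = break_even_quotes(share)
    print(f"{share:4.0%} informed -> bid {bid:5.1f}, ask {ask:5.1f}, spread {ask - bid:4.1f}")

# With no suspected insiders the spread is zero; at 40% informed order flow it is 8.
# Wider spreads mean higher trading costs and less liquidity for everyone else.
```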

Insider trading also makes it more expensive for companies to issue stocks and bonds. If investors think that insiders might be trading bonds of a company, they will demand a higher return on the bonds to compensate for their disadvantage – increasing the cost to the company. As a result, the company has less money to hire more workers or invest in a new factory.

There are also broader impacts of insider trading. It undermines public confidence in financial markets and feeds the common view that the odds are stacked in favor of the elite and against everyone else.

Furthermore, since inside traders profit from privileged access to information rather than work, this makes people believe that the system is rigged.

Martha Stewart was convicted in 2004 on charges stemming from an insider trading investigation. AP Photo/Bebeto Matthews

Hard to prove

Research shows that insider trading is common and profitable yet notoriously hard to prove and prevent.

A recent study estimated that overall only about 15% of insider trading in the U.S. is detected and prosecuted but suggested more of it is coming to light in recent years because of increased enforcement.

One of the more famous – and few – examples of insider trading being prosecuted involved businesswoman and media personality Martha Stewart, who was convicted in 2004 on charges stemming from her sale of shares based on an illegal tip from a broker.

The sudden collapse of several banks in 2023 has also caught the attention of authorities. The Securities and Exchange Commission is reportedly investigating executives at both Silicon Valley Bank and First Republic Bank, which was seized and sold on May 1, for potential insider trading.

And, so, the cat-and-mouse game between regulators and those who want to game the system continues.

This is an updated and shortened version of an article that was originally published on Feb. 18, 2022.

Alexander Kurov, Professor of Finance and Fred T. Tattersall Research Chair in Finance, West Virginia University and Marketa Wolfe, Associate Professor of Economics, Skidmore College

This article is republished from The Conversation under a Creative Commons license. 

What is that voice in your head when you read?

Reading becomes faster when you don’t have to say each word out loud. Gary Waters/Science Photo Library via Getty Images
Beth Meisinger, University of Memphis and Roger J. Kreuz, University of Memphis

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


What is that voice in your head when you read? – Luiza, age 14, Goiânia, Brazil


When you first begin reading, you read out loud.

Reading aloud can make the text easier to understand when you’re a beginning reader or when you are reading something that’s challenging. Listening to yourself as you read helps with comprehension.

After that, you might “mumble read.” That’s when you mumble, whisper or move your lips as you read. But this practice slowly fades as your reading skills develop, and you start to read silently “in your head.” That’s when your inner voice comes into play.

As experts in reading and language, we see this transition from reading out loud to silently all the time. It’s a normal part of the development of reading skills. Usually, kids are good at reading silently by the fourth or fifth grade.

The shift from reading out loud to reading silently is very similar to how kids develop thinking and speaking skills.

Young children often speak to themselves as a way to think through challenges. Lev Vygotsky, a Russian psychologist, called this “private speech.” And kids aren’t the only ones who talk to themselves. Just watch an adult try to put together a new vacuum cleaner. You might hear them muttering to themselves as they try to understand the assembly instructions.

As kids become better thinkers, they shift to talking inside their heads instead of out loud. This is called “inner speech.”

Once you’re a good reader, it’s a lot easier to read silently. Reading becomes faster because you don’t have to say each word. And you can jump back to reread parts without disrupting the flow of reading. You can even skip over short familiar words.

Silent reading is more flexible, and it allows you to focus on what’s most important. And it’s during silent reading that you may discover your inner voice.

Developing an inner voice

Hearing an inner voice while reading is relatively common. In fact, one study found that 4 in 5 people say they often or always hear an inner voice when they read silently to themselves.

It’s also been suggested that there are many types of inner voices. Your inner voice might be your own: It might sound similar to, or exactly like, your spoken voice. Or it might assume a different tone or timbre altogether.

A study of adult readers found that the voice you hear in your head may change depending on what you are reading. For example, if the lines in a book are spoken by a specific character, you may hear that character’s voice in your head.

So, fear not if you start hearing a bunch of voices in your head when you dive into a book – it means you’ve already become a skilled silent reader.

Beth Meisinger, Associate Professor of Psychology, University of Memphis and Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license.

Monday, May 8, 2023

Online predators target children’s webcams, study finds

Children’s webcams are a safety risk. Peter Dazeley/The Image Bank via Getty Images
Eden Kamar, Hebrew University of Jerusalem and Christian Jordan Howell, University of South Florida

There has been a tenfold increase in sexual abuse imagery created with webcams and other recording devices worldwide since 2019, according to the Internet Watch Foundation.

Social media sites and chatrooms are the most common methods used to facilitate contact with kids, and abuse occurs both online and offline. Increasingly, predators are using advances in technology to engage in technology-facilitated sexual abuse.

Having gained access to a child’s webcam, a predator can use it to record, produce and distribute child pornography.

We are criminologists who study cybercrime and cybersecurity. Our current research examines the methods online predators use to compromise children’s webcams. To do this, we posed online as children to observe active online predators in action.

Chatbots

We began by creating several automated chatbots disguised as 13-year-old girls. We deployed these chatbots as bait for online predators in various chatrooms frequently used by children to socialize. The bots never initiated conversations and were programmed to respond only to users who identified as over 18 years of age.

We programmed the bots to begin each conversation by stating their age, sex and location. This is common practice in chatroom culture and ensured the conversations logged were with adults over the age of 18 who were knowingly and willingly chatting with a minor. Though it’s possible some subjects were underage and posing as adults, previous research shows online predators usually represent themselves as younger than they actually are, not older.

A section of dialogue between a self-identified adult and the researchers’ chatbot posing as a 13-year-old. Eden Kamar, CC BY-ND

Most prior studies of child sexual abuse rely on historical data from police reports, which provides an outdated depiction of the tactics currently used to abuse children. In contrast, the automated chatbots we used gathered data about active offenders and the current methods they use to facilitate sexual abuse.

Methods of attack

In total, our chatbots logged 953 conversations with self-identified adults who were told they were talking with a 13-year-old girl. Nearly all the conversations were sexual in nature with an emphasis on webcams. Some predators were explicit in their desires and immediately offered payment for videos of the child performing sexual acts. Others attempted to solicit videos with promises of love and future relationships. In addition to these commonly used tactics, we found that 39% of conversations included an unsolicited link.

We conducted a forensics investigation of the links and found that 19% (71 links) were embedded with malware, 5% (18 links) led to phishing websites, and 41% (154 links) were associated with Whereby, a video conferencing platform operated by a company in Norway.

Editor’s note: The Conversation reviewed the author’s unpublished data and confirmed that 41% of the links in the chatbot dialogues were to Whereby video meetings, and that a sample of the dialogues with the Whereby links showed subjects attempting to entice what they were told were 13-year-old girls to engage in inappropriate behavior.

It was immediately obvious to us how some of these links could help a predator victimize a child. Online predators use malware to compromise a child’s computer system and gain remote access to their webcam. Phishing sites are used to harvest personal information, which can aid the predator in victimizing their target. For example, phishing attacks can give a predator access to the password to a child’s computer, which could be used to access and remotely control the child’s camera.

Whereby video meetings

At first, it was unclear why Whereby was favored among online predators or whether the platform was being used to facilitate online sexual abuse.

After further investigation, we found that online predators could exploit known functions in the Whereby platform to watch and record children without their active or informed consent.

This method of attack can simplify online sexual abuse. The offender does not need to be technically savvy or socially manipulative to gain access to a child’s webcam. Instead, someone who can persuade a victim to visit a seemingly innocuous site could gain control of the child’s camera.

Having gained access to the camera, a predator can violate the child by watching and recording them without actual – as opposed to technical – consent. This level of access and disregard for privacy facilitates online sexual abuse.

Based on our analysis, it is possible that predators could use Whereby to control a child’s webcam by embedding a livestream of the video on a website of their choosing. We had a software developer run a test with an embedded Whereby account, which showed that the account host can embed code that allows him to turn on the visitor’s camera. The test confirmed that it is possible to turn on a visitor’s camera without their knowledge.

We have found no evidence suggesting that other major videoconferencing platforms, such as Zoom, BlueJeans, WebEx, GoogleMeet, GoTo Meeting and Microsoft Teams, can be exploited in this manner.

Control of the visitor’s camera and mic is limited to within the Whereby platform, and there are icons that indicate when the camera and mic are on. However, children might not be aware of the camera and mic indicators and would be at risk if they switched browser tabs without exiting the Whereby platform or closing that tab. In this scenario, a child would be unaware that the host was controlling their camera and mic.

Editor’s note: The Conversation reached out to Whereby, and a spokesperson there disputed that the feature could be exploited. “Whereby and our users cannot access a user’s camera or microphone without receiving clear permission from the user to do so via their browser permissions,” wrote Victor Alexandru Truică, Information Security Lead for Whereby. He also said that users can see when the camera is on and can “close, revoke, or ‘turn off’ that permission at any time.”

A lawyer for the company also wrote that Whereby disputes the researchers’ claims. “Whereby takes the privacy and safety of its customers seriously. This commitment is core to how we do business, and it is central to our products and services.”

Revoking access to the webcam following initial permission requires knowledge of browser caches. A recent study reported that although children are considered fluent new media users, they lack digital literacy in the area of safety and privacy. Since caches are a more advanced safety and privacy feature, children should not be expected to know to clear browser caches or how to do so.

Keeping your kids safe online

Awareness is the first step toward a safe and trustworthy cyberspace. We are reporting these attack methods so parents and policymakers can protect and educate an otherwise vulnerable population. Now that videoconferencing companies are aware of these exploits, they can reconfigure their platforms to avoid such exploitation. Moving forward, an increased prioritization of privacy could prevent designs that can be exploited for nefarious intent.

There are several ways people can spy on you through your webcam.

Here are some recommendations to help keep your kid safe while online. For starters, always cover your child’s webcam. While this does not prevent sexual abuse, it does prevent predators from spying via a webcam.

You should also monitor your child’s internet activity. The anonymity provided by social media sites and chatrooms facilitates the initial contact that can lead to online sexual abuse. Online strangers are still strangers, so teach your child about stranger danger. More information about online safety is available on our labs’ websites: Evidence-Based Cybersecurity Research Group and Sarasota Cybersecurity.

Eden Kamar, Postdoctoral research fellow, Hebrew University of Jerusalem and Christian Jordan Howell, Assistant Professor in Cybercrime, University of South Florida

This article is republished from The Conversation under a Creative Commons license.