Thursday, April 27, 2023

Challenging the FDA’s authority isn’t new – the agency’s history shows what’s at stake when drug regulation is in limbo

In addition to evaluating new drug applications, the FDA also inspects drug manufacturing facilities. The U.S. Food and Drug Administration/Flickr
Christine Coughlin, Wake Forest University

Political pressure is nothing new for the U.S. Food and Drug Administration. The agency has frequently come under fire for its drug approval decisions, but attacks on its decision-making process and science itself have increased during the COVID-19 pandemic.

Recent challenges to the FDA’s authority have emerged in the context of reproductive rights.

On Nov. 18, 2022, a group of anti-abortion doctors and medical groups filed a lawsuit against the FDA challenging the approval it granted more than 20 years ago to mifepristone, a drug taken in combination with another medication, misoprostol, to treat miscarriages and to induce abortion; it is used in more than 50% of early-stage abortions in the U.S.

It is widely believed that the plaintiffs filed the lawsuit in the Northern District of Texas so District Judge Matthew J. Kacsmaryk, a well-known abortion opponent, could oversee the litigation. While Kacsmaryk did issue a preliminary injunction ruling that the FDA lacked the authority to approve mifepristone, an appeal partially reversed the decision and the Supreme Court stayed Kacsmaryk’s order. The case now sits at the 5th U.S. Circuit Court of Appeals and will likely return to the Supreme Court.

The FDA is the government’s oldest consumer protection agency. The effects of this lawsuit could reach far beyond mifepristone – undermining the agency’s authority could threaten its entire drug approval process and change access to commonly used drugs, ranging from amoxicillin and Ambien to prednisone and Paxlovid.

I am a legal scholar whose research focuses in part on the law and ethics of the FDA’s drug approval process. Examining the FDA’s history reveals the unprecedented nature of the current challenges to the agency’s authority.

Chart titled 'Data for Decisions' depicting sources the FDA considers in its decision-making
Then FDA Commissioner George Larrick used this chart during 1964 Senate testimony to illustrate the range of sources the agency uses in evaluating proposals. The U.S. Food and Drug Administration/Flickr

Events shaping FDA’s focus on safety

In its early years, the FDA focused primarily on balancing the competing goals of consumer safety and access to experimental treatments. The priority was strengthening consumer protection to keep tragedies from recurring.

For instance, at the turn of the 20th century, Congress passed the Biologics Control Act of 1902, providing the federal government the authority to regulate vaccines. This law was introduced after 13 children died from inadvertently contaminated diphtheria antitoxin, which was made from the blood of a horse infected with tetanus.

A few years later, after investigative journalists publicized the unsanitary conditions and food-handling practices in meatpacking plants, Congress passed the Pure Food and Drug Act of 1906, which prohibited the marketing and sale of misbranded and contaminated foods, drinks and drugs.

Similarly, in 1937, approximately 71 adults and 34 children died from ingesting S.E. Massengill’s antibacterial elixir, which contained a poisonous solvent with raspberry flavoring added to sweeten the taste. In response, Congress passed the Federal Food, Drug and Cosmetic Act of 1938, requiring manufacturers to show that drugs are safe before they go on the market. This act marked the beginning of modern drug regulations and the birth of the FDA as a regulatory agency.

FDA scientist Frances Oldham Kelsey’s decision to not approve thalidomide for use in the U.S. protected Americans from the birth defects that affected newborns in other countries.

Then, in 1962, Dr. Frances Oldham Kelsey, a pharmacologist, physician and medical officer working at the FDA, refused to approve thalidomide, a drug marketed in Europe, Canada, Japan and other countries to alleviate morning sickness in pregnant women but later found to cause severe birth defects. Shocking revelations of children born without limbs or suffering from other debilitating conditions motivated Congress to pass the Kefauver-Harris Drug Amendments of 1962, which ushered in a more cautious approach to the drug approval process.

FDA’s turn toward expanding access

During the 1970s, questions about the limits of safety versus an individual’s right to access arose when cancer patients who wanted access to Laetrile, an unapproved drug derived from apricot pits, sued the FDA. The agency had blocked the drug’s shipment and sale because it was not approved for use in the U.S. At that time, the Supreme Court upheld the FDA’s protective authority, holding that an unproven therapy is unsafe for all patients, including the terminally ill.

The 1980s, however, marked the FDA’s shift toward increasing access following reports of an emerging disease – AIDS – which primarily affected gay men. In the first nine years of the AIDS epidemic, over 100,000 Americans died. AIDS patients and their advocates became vocal critics of the FDA, arguing that the agency was too paternalistic and restrictive following events like the thalidomide scare.

ACT UP protestors lying on the ground with tombstone-shaped signs demanding the FDA allow access to experimental HIV/AIDS drugs
Protests from HIV/AIDS activists like ACT UP spurred the FDA to develop expedited drug approval tracks to meet urgent public health needs. Mikki Ansin/Peter Ansin via Getty Images

After massive protests, Dr. Anthony Fauci, then director of the National Institute of Allergy and Infectious Diseases, proposed a parallel track program allowing eligible patients access to unapproved experimental treatments. This, along with other existing FDA mechanisms, helped lay the path for other alternative approval pathways, such as Emergency Use Authorization, which played a large role in permitting use of vaccines and medications pending full FDA approval during the COVID-19 pandemic.

Future of the FDA

Despite the FDA’s shift toward increased access, the political right has in recent years argued that the agency remains too bureaucratic and paternalistic and should be deregulated – an argument seemingly contrary to the reasoning underlying Kacsmaryk’s recent order that the FDA did not sufficiently evaluate the safety of mifepristone in its approval.

Mifepristone, which has overwhelming data supporting its safety, could remain available to some people in some states regardless of the outcome of this lawsuit. While the FDA approves drugs for consumer use, it does not regulate the general practice of medicine. Doctors can prescribe FDA-approved drugs off-label, meaning they could prescribe a drug with a different dose, in a different way or for a different use than what the FDA has approved it for.

The mifepristone case has broad implications for the FDA’s future and could have devastating effects on health in the U.S. Due in part to FDA involvement, public health interventions have led to a 62% increase in life expectancy in the 20th century. These include vaccines and medications for childhood illnesses and infectious diseases such as HIV, increased regulation of tobacco, and over-the-counter Narcan to combat the opioid crisis, among others.

The FDA needs to be able to use its scientific expertise to make data-driven decisions that balance safety and access, without the ability of a single judge to potentially gut the system. The agency’s history is an important reminder of the need for strong administrative agencies and ongoing vigilance to protect everyone’s health.

Christine Coughlin, Professor of Law, Wake Forest University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Serve a Sweet Treat

(Culinary.net) Surprise your family with a dessert fit for the season. This Carrot Cake recipe is a traditional take on the timeless treat, made with everyday ingredients.

Find more dessert inspiration at Culinary.net.


Carrot Cake

  • 2 1/4 cups flour
  • 1 teaspoon baking soda
  • 1/2 teaspoon salt
  • 1 teaspoon cinnamon
  • 1/2 teaspoon baking powder
  • 1 cup vegetable oil
  • 1 1/4 cups sugar
  • 3 eggs
  • 1 1/2 cups carrots, shredded
  • 1 cup crushed pineapple with juice
  • 2/3 cup walnuts

Frosting:

  • 2 packages (8 ounces each) cream cheese
  • 3/4 cup butter, softened
  • 1 teaspoon vanilla extract
  • 5 1/2 cups powdered sugar

  1. Heat oven to 350° F.
  2. In large bowl, sift flour, baking soda, salt, cinnamon and baking powder.
  3. In mixing bowl, cream together oil and sugar. Add eggs one at a time. Gradually add in carrots and crushed pineapple.
  4. Add dry mixture to wet ingredients and beat until smooth. Fold in walnuts.
  5. Pour batter into two lightly greased 8-inch round cake pans and bake 25-30 minutes, or until knife inserted in center comes out clean. Allow cakes to cool completely. Remove cakes from pans and slice off tops to level cakes.
  6. To make frosting: In mixing bowl, cream together cream cheese, butter and vanilla. Gradually add in powdered sugar and mix until smooth.
  7. Spread two large spoonfuls of frosting over the top of one cake and stack the second cake on top. Frost the entire cake with the remaining frosting.
SOURCE:
Culinary.net

5 Tips to Encourage Picky Eaters

(Culinary.net) Feeding an entire family can be difficult enough on its own with busy evenings full of hustle and bustle. One additional factor that can cause even more headaches is dealing with a picky eater, especially a child whose preferred menu ranges from hot dogs to candy.

If you’re looking to widen the palate of your picky eater (or eaters), consider these tips to start down a path toward a more expansive slate of family meals.

Start Slow
Loading up your little ones’ plates with steamed veggies and sauteed fish may be a surefire way to send them to the pantry for a less nutritious snack. Instead, try combining personal favorites with small portions of foods you’d like to introduce, such as chicken nuggets alongside green beans or topping pizza with black olives.

Don’t Force It
While it can be frustrating to constantly hear “no” to fruits and veggies, forcing them upon children may turn them away for good. In addition, a struggle over eating certain foods may create a constant sense of frustration around mealtime, which may only decrease a child’s desire to expand his or her horizons.

Create a Fun Experience
Remember not every meal has to include something new. On occasion, mix up mealtime by serving your children’s favorites, even if it’s as simple as a hamburger or as creative as breakfast for dinner.

Bring Your Sidekicks to the Store
Introducing your children to the place your family’s food comes from may help them feel more comfortable with new flavors. Plus, by letting them in on the shopping process, you can have some help choosing foods they’re more likely to be willing to try.  

Let Children Help Cook
Much like choosing their own ingredients increases the likelihood they’ll try something new, perhaps becoming part of the cooking process can help children see how a meal comes together. It doesn’t have to be a gourmet experience – simply seasoning roasted asparagus with salt and olive oil, for example, can introduce your up-and-coming chefs to the kitchen while helping make the cooking (and eating) process a fun adventure.

For more food tips and kid-friendly recipes, visit Culinary.net.

SOURCE:
Culinary.net

In protecting land for wildlife, size matters – here’s what it takes to conserve very large areas

A bison herd on the American Prairie reserve in Montana. Amy Toensing/Getty Images
David Jachowski, Clemson University

Driving north on state Highway 66 through the Fort Belknap Indian Reservation in central Montana, it’s easy to miss a small herd of bison lounging just off the road behind an 8-foot fence. Each winter, heavy snows drive bison out of Wyoming’s Yellowstone National Park – the only place in the U.S. where they have lived continuously since prehistoric times – and into Montana, where they are either killed or shipped off to tribal lands to avoid conflict with cattle ranchers.

In the winter of 2022-2023 alone, over 1,500 bison have been “removed,” about 25% of Yellowstone’s entire population. The bison at Fort Belknap are refugees that have been trucked 300 miles to the reservation from past Yellowstone winter culls.

Although bison are the U.S. national mammal, they exist in small and fragmented populations across the West. The federal government is working to restore healthy wild bison populations, relying heavily on sovereign tribal lands to house them.

Indeed, tribal lands are the great wildlife refuges of the prairie. Fort Belknap is the only place in Montana where bison, critically endangered black-footed ferrets and swift foxes, which occupy about 40% of their historic range, all have been restored.

A black-footed ferret looks out of a burrow.
Black-footed ferrets, which once ranged across the Great Plains, are one of the most endangered species in the U.S. J. Michael Lockhart, USFWS/Flickr, CC BY

But Indigenous communities can’t and shouldn’t be solely responsible for restoring wildlife. As an ecologist who studies prairie ecosystems, I believe that conserving grassland wildlife in the U.S. Great Plains and elsewhere will require public and private organizations to work together to create new, larger protected areas where these species can roam.

Rethinking how protected areas are made

At a global scale, conservationists have done a remarkable job of conserving land, creating over 6,000 terrestrial protected areas per year over the past decade. But small has become the norm. The average size of newly created protected areas over that time frame is 23 square miles (60 square kilometers), down from 119 square miles (308 square kilometers) during the 1970s.

Chart showing number and average size of new protected areas from 1900-2020
From the 1970s through 2020, the annual rate of protected area creation on land (solid purple bars) increased, but these areas’ average size (hollow bars) decreased. David Jachowski/Data from Protected Planet, CC BY-ND

Creating large new protected areas is hard. As the human population grows, fewer and fewer places are available to be set aside for conservation. But conserving large areas is important because it makes it possible to restore critical ecological processes like migration and to sustain populations of endangered wildlife like bison that need room to roam.

Creating an extensive protected area in the Great Plains is particularly difficult because this area was largely passed over when the U.S. national park system was created. But it’s becoming clear that it is possible to create large protected areas through nontraditional methods.

Consider American Prairie, a nonprofit that is working to stitch together public and tribal lands to create a Connecticut-sized protected area for grassland wildlife in Montana. Since 2004, American Prairie has made 37 land purchases and amassed a habitat base of 460,000 acres (about 720 square miles, or 1,865 square kilometers).

The American Prairie initiative is working to create a protected zone of prairie grassland the size of Connecticut by knitting together public and private lands where ranchers and others are still working.

Similarly, in Australia, nonprofits are making staggering progress in conserving land while government agencies struggle with funding cuts and bureaucratic hurdles. Today, Australia is second only to the U.S. in its amount of land managed privately for conservation.

Big ideas make room for smaller actions

Having worked to conserve wildlife in this region for over 20 years, I have seen firsthand that by setting a sweeping goal of connecting 3.2 million acres (5,000 square miles, or 13,000 square kilometers), American Prairie has reframed the scale at which conservation success is measured in the Great Plains. By raising the bar for land protection, they have made other conservation organizations seem more moderate and created new opportunities for those groups.

One leading beneficiary is The Nature Conservancy, which owns the 60,000-acre Matador Ranch within the American Prairie focal area. When the conservancy first purchased the property, local ranchers were skeptical. But that skepticism has turned to support because the conservancy isn’t trying to create a protected area.

Instead, it uses the ranch as a grassbank – a place where ranchers can graze cattle at a low cost, and in return, pledge to follow wildlife-friendly practices on their own land, such as altering fences to allow migratory pronghorn to slip underneath. Via the grassbank, ranchers are now using these wildlife conservation techniques on an additional 240,000 acres of private property.

Using smooth wire instead of barbed wire for prairie fences enables pronghorn to cross under them with less chance of injury.

Other moderate conservation organizations are also working with ranchers. For example, this year the Bezos Earth Fund has contributed heavily to the National Fish and Wildlife Foundation’s annual grants program, helping to make a record US$16 million available to reward ranchers for taking wildlife-friendly actions.

A collective model for achieving a large-scale protected area in the region has taken shape. American Prairie provides the vision and acts to link large tracts of protected land for restoring wildlife. Other organizations work with surrounding landowners to increase tolerance toward wildlife so those animals can move about more freely.

Instead of aiming to create a single polygon of protected land on a map, this new approach seeks to assemble a large protected area with diverse owners who all benefit from participating. Rather than excluding people, it integrates local communities to achieve large-scale conservation.

A global pathway to 30x30

This Montana example is not unique. In a recent study, colleagues and I found that when conservationists propose creating very large protected areas, they transform conservation discussions and draw in other organizations that together can achieve big results.

Many recent successes started with a single actor leading the charge. Perhaps the most notable example is the recently created Cook Islands Marine Park, also known as Marae Moana, which covers 735,000 square miles (1.9 million square kilometers) in the South Pacific. The reserve’s origin can be traced back to Kevin Iro, an outspoken former professional rugby player and member of the islands’ tourism board.

While some individual conservation organizations have found that this strategy works, global, national and local policymakers are not setting comparable large-scale targets as they discuss how to meet an ambitious worldwide goal of protecting 30% of the planet for wildlife by 2030. The 30x30 target was adopted by 190 countries at an international conference in 2022 on saving biodiversity.

Critics argue that large protected areas are too complicated to create and too expensive to maintain, or that they exclude local communities. However, new models show that there is a sustainable and inclusive way to move forward.

In my view, 30x30 policymakers should act boldly and include large protected area targets in current policies. Past experience shows that failing to do so will mean that future protected areas become smaller and smaller and ultimately fail to address Earth’s biodiversity crisis.

David Jachowski, Associate Professor of Wildlife Ecology, Clemson University

This article is republished from The Conversation under a Creative Commons license.

Fun facts about bones: More than just scaffolding

A new vision of the skeleton as a dynamic organ that sends and receives messages suggests potential therapies for osteoporosis and other problems

Bones: They hold us upright, protect our innards, allow us to move our limbs and generally keep us from collapsing into a fleshy puddle on the floor. When we’re young, they grow with us and easily heal from playground fractures. When we’re old, they tend to weaken, and may break after a fall or even require mechanical replacement.

If that structural role was all that bones did for us, it would be plenty.

But it’s not. Our bones also provide a handy storage site for calcium and phosphorus, minerals essential for nerves and cells to work properly. And each day their spongy interior, the marrow, churns out hundreds of billions of blood cells — which carry oxygen, fight infections and clot the blood in wounds — as well as other cells that make up cartilage and fat.

Even that’s not all they do. Over the past couple of decades, scientists have discovered that bones are participants in complex chemical conversations with other parts of the body, including the kidneys and the brain; fat and muscle tissue; and even the microbes in our bellies.

It’s as if you suddenly found out that the studs and rafters in your house were communicating with your toaster.

Scientists are still deciphering all the ways that bone cells can signal other organs, and how they interpret and respond to molecular messages coming from elsewhere. Already, physician-scientists are starting to consider how they might take advantage of these cellular conversations to develop new treatments to protect or strengthen bone.

“It’s a whole new area of exploration,” says Laura McCabe, a physiologist at Michigan State University in East Lansing. The recent work has convinced scientists that bone is far more dynamic than once thought, McCabe says — or, as a student of hers used to say, “Bone is not stone.”

Early evidence that bone has something to say

Bone is a unique tissue: It contains not only cells that build the hard matrix that gives the skeleton its strength, but also cells that break it down — enabling bone to reshape itself as a child grows, and to repair itself throughout life. The bone builders are called osteoblasts, and the disassembly crew consists of cells known as osteoclasts. When the balance between the actions of the two is off-kilter, the result is too little (or too much) bone. This happens, for example, in osteoporosis, a common condition of weak and brittle bones that results when bone synthesis fails to keep up with degradation of old bone.

In addition to osteoblasts and osteoclasts, bone contains another cell type, the osteocytes. While these cells comprise 90 percent or more of bone cells, they weren’t studied much until about 20 years ago, when a cell biologist named Lynda Bonewald got interested. Colleagues told her not to waste her time, suggesting that osteocytes probably only played some mundane role like sensing mechanical forces to regulate bone remodeling. Or maybe they were just kind of there, not doing much of anything.

Bonewald, now at Indiana University in Indianapolis, decided to investigate them anyway. Osteocytes do, in fact, sense mechanical load, as she and other researchers have found. But as Bonewald says, “They do so much more.” She recently wrote about the importance of osteocytes to the kidneys, pancreas and muscles in the Annual Review of Physiology.

Her first finding regarding osteocyte communication with other organs, reported in 2006, was that the cells make a growth factor called FGF23. This molecule then cruises the bloodstream to the kidneys. If the body has too much FGF23 — as happens in an inherited form of rickets — the kidneys release too much phosphorus into urine, and the body starts to run out of the essential mineral. The resulting symptoms include softened bones, weak or stiff muscles, and dental problems.

Around the same time that Bonewald was diving into osteocyte research, physiologist Gerard Karsenty began investigating a potential relationship between bone remodeling and energy metabolism. Karsenty, now at Columbia University in New York, suspected that the two would be related, because destroying and re-creating bone is an energy-intensive process.

In a 2000 study, Karsenty investigated whether a hormone called leptin could be a link between these two biological processes. Leptin is produced by fat cells and is best known as a depressor of appetite. It also emerged in evolution around the same time as bone. In experiments with mice, Karsenty found that leptin’s effects in the brain put the brakes on bone remodeling.

The recent work has convinced scientists that bone is far more dynamic than once thought, McCabe says — or, as a student of hers used to say, “Bone is not stone.”

Using leptin in this way, Karsenty suggests, would have allowed the earliest bony creatures to suppress bone growth alongside appetite when food was scarce, saving their energy for day-to-day functions.

His group found support for this idea when they took X-rays of the hand and wrist bones of several children who lack fat cells, and thus leptin, due to a genetic mutation. In every case, radiologists unfamiliar with the people’s true ages ranked the bones as months or years older than they were. Without leptin, their bones had sped ahead, acquiring characteristics like higher density that are more typical of older bones.

That was a case of bone listening to other organs, but in 2007, Karsenty proposed that bone also has something to say about how the body uses energy. He found that mice lacking a bone-made protein called osteocalcin had trouble regulating their blood sugar levels.

In further research, Karsenty discovered that osteocalcin also promotes male fertility via its effects on sex hormone production, improves learning and memory by altering neurotransmitter levels in the brain, and boosts muscle function during exercise. He described these messages, and other conversations that bone participates in, in the Annual Review of Physiology in 2012.

It’s a spectacular set of functions for one molecule to handle, and Karsenty thinks they’re all linked to a stress response that early vertebrates — animals with backbones — evolved for survival. “Bone may be an organ defining a physiology of danger,” he says.

Karsenty proposes that osteocalcin’s effects allowed early vertebrates, both male and female, to respond to the sight of a predator by amping up energy levels, through the effects of testosterone, as well as muscle function. They’d be able to run away, and later remember (and avoid) the place where they’d encountered that threat.

Researchers in Karsenty’s lab did these studies with genetically modified osteocalcin-deficient mice that he developed, and several labs have replicated his results in various ways. However, labs in the U.S. and in Japan, working with different strains of mice that don’t make osteocalcin, didn’t see the same widespread effects on fertility, sugar processing or muscle mass. The scientists haven’t yet been able to explain the disparities, and the danger-response hypothesis remains somewhat controversial.

Whether or not osteocalcin played the big role in vertebrate evolution that Karsenty proposes, these studies have inspired other scientists to examine all kinds of ways that bone listens to and talks to the rest of the body.

Crosstalk between muscle and bone

Bone and muscle, partners in movement, have long been known to interact physically. Muscles tug on bone, and as muscles get stronger and larger, bone responds to this increased physical pull by becoming bigger and stronger too. That allows bone to adapt to an animal’s physical needs, so that muscle and bone stay in proportion and continue to work together effectively.

But it turns out that there’s also a chemical conversation going on. For example, skeletal muscle cells make a protein called myostatin that keeps them from growing too large. In experiments with rodents, alongside observations of people, researchers have found that myostatin also keeps bone mass in check.

During exercise, muscles also make a molecule called beta-aminoisobutyric acid (BAIBA) that influences fat and insulin responses to the increased energy use. Bonewald has found that BAIBA protects osteocytes from dangerous byproducts of cellular metabolism called reactive oxygen species. In young mice that were immobilized — which normally causes atrophy of bone and muscle — providing extra BAIBA kept both bones and muscle healthy.

In additional studies, Bonewald and colleagues found that another muscle molecule that increases with exercise, irisin, also helps osteocytes to stay alive in culture and promotes bone remodeling in intact animals.

The conversation isn’t all one-way, either. In return, osteocytes make prostaglandin E2, which promotes muscle growth, on a regular basis. They boost production of this molecular messenger when they experience an increase in the tug from working muscles.

What bone gets from the gut

The human body contains about as many microbial cells as human ones, and the trillions of bacteria and other microorganisms inhabiting the gut — its microbiome — function almost like another organ. They help to digest food and prevent bad bacteria from taking hold — and they talk to other organs, including bone.

So far, the bone-microbiome conversation seems to be one-way; no one has observed bone sending messages back to the microbes, says Christopher Hernandez, a biomechanics expert at Cornell University in Ithaca, New York. But the skeleton can learn a lot of useful things from the gut, McCabe says. For example, suppose a person gets a nasty case of food poisoning. They need all their resources to fight off the infection. “It’s not the time to build bone,” says McCabe.

There are plenty of complex conversations occurring between bone cells and gut microbes, and researchers are just starting to explore that complexity.

The first hints of a bone-microbiome connection came from a 2012 study of mice raised in a sterile environment, without any microbes at all. These animals had fewer bone-destroying osteoclasts, and thus higher bone mass. Giving the mice a full complement of gut microbes restored bone mass to normal, in the short term.

But the long-term effects were a bit different. The microbes released molecules called short-chain fatty acids that caused the liver and fat cells to make more of a growth factor called IGF-1, which promoted bone growth.

Gut microbes also appear to moderate another signal that affects bone: parathyroid hormone (PTH), from the parathyroid glands at the base of the neck. PTH regulates both bone production and breakdown. But PTH can only promote bone growth if mice have a gut full of microbes. Specifically, the microbes make a short-chain fatty acid called butyrate that facilitates this particular conversation. (Incidentally, that FGF23 made by osteocytes also acts on the parathyroid glands, tuning down their secretion of PTH.)

While scientists have uncovered many important roles for the gut microbiome in recent years, it wasn’t a given that they’d influence the skeleton, says Bonewald: “Boy, were we surprised to see effects on bone.” Now it’s clear there are plenty of complex conversations occurring between bone cells and gut microbes, and researchers are just starting to explore that complexity and what it might mean for overall health, says McCabe.

Can doctors join the conversation?

The most thrilling thing about these organ-to-organ messages, says McCabe, is that it suggests novel ways to help bone with medicines that act on different parts of the body. “We could be even more creative therapeutically,” she says.

The Centers for Disease Control and Prevention estimates that nearly 13 percent of Americans over 50 suffer from osteoporosis, and while there are several medications that slow the breakdown of bone, as well as some that speed buildup, they can have side effects and they’re not used nearly as much as they could be, says Sundeep Khosla, an endocrinologist at the Mayo Clinic in Rochester, Minnesota. That’s why he says new approaches are needed.

One obvious place to start is with the gut. Probiotics and other foods containing cultured microbes, such as the fermented milk kefir, can help to build a healthy microbiome. McCabe’s group found that a particular probiotic bacterium, Lactobacillus reuteri, protected mice from the bone loss that normally follows antibiotic treatment. Another group tried a combination of three types of Lactobacillus in post-menopausal women, the segment of the population most susceptible to osteoporosis, and those on the treatment experienced no bone loss during the yearlong study, whereas those in a placebo group did.

Hernandez has been investigating another therapeutic approach that would improve bone’s resilience, but not by adding mass or preventing breakdown. The work grew out of a series of experiments in which he used antibiotics to perturb, but not eliminate, the gut microbiome in mice. He predicted this would cause the mice to lose bone mass, but the results surprised him. “It didn’t change the density or the size of the bone,” he says, “but it changed how strong the bone was.” The bones of the antibiotic-treated animals were weak and brittle.

Investigating further, Hernandez’s team found that when mice receive antibiotics, their gut bacteria stop making as much vitamin K as they normally do, and so less of the vitamin reaches the large intestine, liver and kidneys. The result is alterations to the precise shape of mineral crystals in the bone. Hernandez is now investigating whether the source of the vitamin K — either from gut microbes or dietary sources like leafy greens — matters for bone crystallization. If people need the bacterial version, then probiotics or even fecal transplants might help, he suggests.

Karsenty’s work, meanwhile, has inspired an entirely different strategy. As he observed early on, leptin from fat cells slows bone formation via the brain. In response to leptin, the brain sends a signal that ultimately activates bone cells’ beta-adrenergic receptors, shutting off bone-building osteoblasts and stimulating bone-clearing osteoclasts.

These same beta-adrenergic receptors exist in various parts of the body, including the heart, and drugs that block them are commonly used to reduce blood pressure. To investigate whether these drugs might also prevent osteoporosis, Khosla tested a few different beta blockers in 155 post-menopausal women, and two of the drugs seemed to keep bones strong. He’s now running a larger study with 420 women; half will receive one of those drugs, atenolol, and the other half will get a placebo, for two years. The scientists will monitor them for changes in bone density in the hip and lower spine.

Khosla has another idea, based on the fact that as bone ages, it accumulates old, senescent osteocytes that produce inflammation. That inflammation, in turn, can affect the constant buildup and breakdown of bone, contributing to their imbalance in osteoporosis.

Senolytics are drugs that cause those old cells to kill themselves, and Khosla recently co-authored a summary of their potential for the Annual Review of Pharmacology and Toxicology. In a study in older mice, for example, this kind of medication boosted bone mass and strength. Khosla has another trial going, with 120 women age 70 or older, to test the ability of senolytics to increase bone growth or minimize its destruction.

Scientists still have plenty to learn about the conversations between bone and the rest of the body. With time, this research may lead to more therapies to keep not just the skeleton, but also the other conversationalists, healthy and strong.

But what’s clear already is that the skeleton is not just a nice set of mechanical supports. Bones constantly remodel themselves in response to the body’s needs, and they’re in constant communication with other parts of the body. Bone is a busy tissue with broad influence, and it’s working behind the scenes during the most basic daily activities.

So the next time you enjoy a cup of yogurt, work out or even empty your bladder, be sure to spare a moment to thank your bones for responding to microbial signals, conversing with your muscles and keeping your phosphorus supplies from going down the drain.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Sudan’s conflict has its roots in three decades of elites fighting over oil and energy

The opening of a hydro-electric dam on the Nile River at Merowe, north of Khartoum, in 2009. Ashraf Shazly/AFP via Getty Images
Harry Verhoeven, Columbia University

Sudan stands on the brink of yet another civil war sparked by the deadly confrontation between the Sudan Armed Forces of General Abdelfatah El-Burhan and the Rapid Support Forces of Mohamed Hamdan Dagalo (“Hemedti”).

Much of the international news coverage has focused on the clashing ambitions of the two generals. Specifically, differences over the integration of the paramilitary Rapid Support Forces into the regular army triggered the current conflict on April 15, 2023.

I am a professor teaching at Columbia University and my research focuses on the political economy of the Horn of Africa. A forthcoming paper of mine in the Journal of Modern African Studies details the strategic calculus of the Sudan Armed Forces in managing revolution and democratisation efforts, today as well as in past transitions. Drawing on this expertise, it is important to underline that three decades of contentious energy politics among rival elites forms a crucial background to today’s conflict.

The current conflict comes after a decade-long recession which has drastically lowered the living standards of Sudanese citizens as the state teetered on the brink of insolvency.

How energy has shaped Sudan’s violent political economy

Long gone are the heady days when Sudan emerged as one of Africa’s top oil producers. Close to 500,000 barrels were pumped every day by 2008. Average daily production in the last year has hovered around 70,000 barrels.

In the late 1990s, amid a devastating civil war, President Omar Al-Bashir’s military-Islamist regime announced that energy would help birth a new economy. It had already paved the way for this reality, ethnically cleansing the areas where oil would be extracted. The regime struck partnerships with Chinese, Indian and Malaysian national oil companies. Growing Asian demand was met with Sudanese crude.

Petrodollars poured in. The regime in power between 1989 and 2019 oversaw a boom. This enabled it to weather internal political crises, increase the budgets of its security agencies and to spend lavishly on infrastructure. Billions of dollars were channelled to the construction and expansion of several hydro-electric dams on the Nile and its tributaries.

These investments were intended to enable the irrigation of hundreds of thousands of hectares. Food crops and animal fodder were to be grown for Middle Eastern importers. Electricity consumption in urban centres was transformed; production in Sudan was boosted by thousands of megawatts. The regime spent more than US$10 billion on its dam programme. That’s a phenomenal sum and testament to its belief that the dams would become the centrepiece of Sudan’s modernised political economy.

South Sudan secedes

Then, in 2011, South Sudan seceded – along with three-quarters of Sudan’s oil reserves. This exposed the illusions on which these dreams of hydro-agricultural transformation rested. The regime lost half of its fiscal revenues and about two-thirds of its international payment capacity.

The economy shrank by 10%. Sudan was also plagued by power cuts as the dams proved very costly and produced much less than promised. Lavish fuel subsidies were maintained but as evidence shows, these disproportionately benefited select constituencies in Khartoum and failed to protect the poor.

As the regime sank ever deeper into economic crisis, its security agencies concentrated on accumulating the means they deemed essential to survive, and to compete with each other. Both the Sudan Armed Forces and Rapid Support Forces deepened their involvement in Sudan’s political economy. They took control of key commercial activities. These included meat processing, information and communication technology and gold smuggling.

Soaring fuel, food and fertiliser prices

This economic crisis fuelled a popular uprising which led to the overthrow of Al-Bashir. After the 2018-2019 revolution, the international community oversaw a power-sharing arrangement. This brought together Sudan Armed Forces, Rapid Support Forces and a civilian cabinet. Reforms were tabled to reduce spending on fuel imports and address the desperate economic situation.

However, the proposals for economic reform competed for government and international attention with calls to fast-track the “de-Islamisation” of Sudan, and to purge collaborators of the ousted regime from civil service ranks.

Inflationary pressures worsened as food and energy prices rose. The crisis also strengthened a growing regional black market in which fuel, wheat, sesame and much else was illicitly traded across borders. At the same time, divisions grew in Sudan’s political establishment and among protesters in its streets.

The government’s efforts to push back against growing control of economic activities by the Sudan Armed Forces and Rapid Support Forces ultimately contributed to the October 2021 coup against Prime Minister Abdallah Hamdok.

Overlapping crises

The coup only deepened the crisis. So too did global supply shocks, such as those caused by the COVID-19 pandemic and the Russia-Ukraine conflict, which sent the prices of fuel, food and fertiliser skyrocketing globally, including in Sudan. Fertiliser prices increased by more than 400%. The state’s retreat from subsidising essential inputs for agricultural production, such as diesel and fertiliser, led farmers to drastically reduce their planting, further exacerbating the food production and affordability crunch.

Amid these overlapping energy, food and political crises, Sudan’s Armed Forces and Rapid Support Forces have been violently competing for control of the political economy’s remaining lucrative niches, such as key import-export channels. Both believe the survival of their respective institutions is essential to preventing the country from descending into total disintegration.

In view of such contradictions and complexity, there are no easy solutions to Sudan’s multiple crises. The political, economic and humanitarian situation is likely to worsen further.

A version of this article was first published by the Center on Global Energy Policy.

Harry Verhoeven, Senior Research Scholar at the Center on Global Energy Policy, Columbia University

This article is republished from The Conversation under a Creative Commons license.

Just how partisan is the press, and should the public be worried?

From anonymous online commenters up to the highest levels of American leadership, public discourse is awash today in accusations that the media are politically slanted. But to what extent is that true and how does it affect politics?

To explore those questions, Knowable Magazine turned to political economist David Strömberg of Stockholm University, who researches media influence on politics and policy. Strömberg examined media bias around the globe and through history in a 2015 article in the Annual Review of Economics. His findings are counterintuitive, revealing that ideological bias may have less impact than we expect. This interview has been edited for length and clarity.

How long have researchers studied media bias?

Researchers started thinking about these things because of the rise of fascism in Europe in the 1930s, which was also around the time radio was introduced. People saw that the Nazis and fascists in Europe were using radio a lot for propaganda. This raised fears from onlooking nations that their own citizens could be brainwashed with propaganda. And so the first studies set out to test that.

Some early research interviewed U.S. voters before a presidential election, looked at the news material they consumed, and asked whether their opinions shifted based on whatever news material they had been exposed to. They found that people’s ideological or political standpoints rarely changed. The reason for that was that Republicans were reading Republican newspapers and Democrats were reading Democratic papers, so people were basically just becoming more firm in whatever belief they had before.

How do the media function when everything is working?

The ideal role of the media is to provide information. It’s almost impossible for people to inform themselves about what politicians are doing directly, and it’s just boring to many people. Media is a very effective way of having a few big actors with lots of resources finding stories of interest and presenting them to voters. This makes politicians much more accountable.

And what does damaging bias look like?

The way that economists have defined damaging bias, it typically means that you’re misrepresenting the facts, suppressing facts or just lying. This would be different than, let’s say, a conservative newspaper endorsing a Republican candidate — this kind of ideological bias is not necessarily suppressing any information or lying.

Today we’re hearing a lot of concern that many media outlets in the U.S. are ideologically biased. What does research tell us — are these concerns well-founded?

Researchers look at things like what candidates and ballot propositions newspapers endorse and what kind of think tanks they cite, and then compare newspapers to members of Congress and Supreme Court judges. There has been an increase in the polarization of U.S. media, to the left and right. But if you look at empirical studies of how biased the U.S. media are, on average it’s been pretty close to centrist.

These studies look at a large set of U.S. newspapers. There are, of course, extreme cases where outlets are ideologically biased. But if you’re thinking about the average influence of the media, you should consider the average position of the media.

What about more implicit measures, like a 2014 survey that found that, while most American journalists identified as independent in 2013, four times as many identified as liberal as identified as conservative?

There’s been a long discussion of slants of different important actors in the newsmaking process. Typically, journalists are left of center, whereas advertisers and owners may be more right of center. Of course, the end output that we care about is what is in the newspapers. Typically, studies find that nowadays in the U.S., newspapers are slightly to the left of center, but still moderate compared to other actors like politicians and Supreme Court judges.

Is this the first time the U.S. has been dealing with media bias?

No. If you go back to the late 1800s, most newspapers had a party affiliation and their content was much more slanted. Between the 1890s and the 1920s, the share of newspapers that had strong party affiliations dropped drastically, though even then there were newspapers with some Republican or Democratic leaning.

What caused the shift away from explicit party affiliations?

People have speculated. There is some evidence that advertising revenue and competition between papers may be what was driving the change. As it became more profitable to write stories that interested people, biases fell. One study found that in areas where ad revenues were higher, the slant was less.

And since then, it’s been standard for media outlets to strive for balance in their coverage?

Actually, that’s really an American norm. In Europe, that shift didn’t happen — most newspapers have a clear political affiliation. But that doesn’t mean that they were incredibly slanted — certainly not biased in the sense of producing fake news. It just means that they have a standpoint that’s transparent.

When a news outlet has an ideological bias, how much power does it have to carry that over to its audience?

It’s always been the dream of researchers to find big effects of media on how people are voting. I mean these are the hypotheses the studies in the ’40s started with. But one of the consistent findings all the way back to the first big studies is that it’s very difficult to change people’s voting intentions — for example, what political party they prefer. If you are a right-wing person and you get left-wing media, or vice versa — first of all, you just don’t read it. And even if you were exposed to some news that is not aligned with your ideological positions, you just wouldn’t take it in.

But that doesn’t necessarily mean it doesn’t affect election results, because it could be that the media increase voter turnout by energizing voters.

You argue in your review article that this tendency of voters to seek out media that are aligned with their own biases might, counterintuitively, be good for voters. How?

Political accountability works best if voters don’t make mistakes. By mistake, I mean voting for a candidate that is not the better candidate for you. If you have a media source that has the same exact preference as you do, and you just follow their endorsement, for example, you would never make a mistake.

So it can benefit voters to seek out news sources that share their views. But are there drawbacks to that as well?

One drawback is that ideological media outlets polarize the electorate by reaffirming voters’ pre-held positions. This means that the media are likely to make the Democratic voters support Democrats more strongly and the Republican voters support the Republicans more strongly.

And a very polarized electorate is not good for political accountability, because strongly partisan voters will not be very responsive to important nonpartisan factors. A politician might be slightly corrupt or not be very efficient in policymaking, but very few voters care — because they only care about his strong ideological positions on, let’s say, abortion.

What happens when slant crosses into damaging bias — not just ideological, but misrepresenting facts, suppressing facts or lying? Can the media harm society?

Some evidence exists that the media could provide worse outcomes for society, but this comes mainly from totalitarian states where there’s little media competition — there’s just one media outlet and the rulers use it to implement terrible actions. One example was how radio was used by the Rwandan government in the 1990s to, basically, tell people to commit genocide. One study found that where there was better radio reception, there were more civilian deaths. Another study from 2013 shows that the Nazi control over radio broadcasts increased support for anti-Semitic policies in places where there was a prior history of rioting and attacks targeting Jewish communities. But these are extreme examples that don’t tend to happen in democracies with a free press.

What do negative media effects look like in democracies with a free press?

There are instances of people being misguided by the media. For example, in the run-up to the Iraq War, much of the mainstream U.S. media coverage uncritically supported the government line that Iraq was developing weapons of mass destruction. Outlets like the New York Times later apologized for mistakes in their reporting.

“Because we are emotionally outraged by media bias, we tend to think about it a lot and forget about newspaper deaths and journalist layoffs.”

David Strömberg

But the negative effects of media in democracies usually have more to do with influencing politicians to focus on the “wrong” issues. Obviously, some issues are intrinsically more newsworthy, like plane crashes or volcanic eruptions as opposed to events like traffic accidents, famines or endemic hunger, which are just constant problems — not “news.” News coverage puts a spotlight on certain issues and incentivizes politicians to work on them.

In democracies with a free press, what should people do if they’re worried their news is biased?

It’s like with any other business: Consumers apply pressure by what they consume. So if you don’t like what a media source is doing, you have to switch to a different one.

How does media bias compare to other issues facing journalism today?

Because we are emotionally outraged by media bias, we tend to think about it a lot and forget about newspaper deaths and journalist layoffs. But there is much less evidence that media bias matters significantly. There is quite a bit more evidence that the volume of news matters more for making democracy work.

The media market in the U.S. is very mature, and there are many newspapers. And even consumers of the most biased media news sites often consume other news. So in a media market like the U.S., if you add one additional news source with a Republican or Democratic slant to the many others, it makes little difference.

And even newspapers with strong Republican or Democratic news slants are typically still covering their in-party corruption scandals. So it’s much more important that you have a local newspaper than if it has slant one way or the other.

Of course, if you go someplace like Russia where they don’t have many media outlets, then media slant matters much more. If you go from zero opposition television stations to one, it makes a huge difference.

And a higher volume of financially viable presses means the press is harder to silence.

How so?

A theoretical argument is that you have to silence each and every outlet in order for information not to get out. You can shut down outlets and throw journalists in jail — like we’ve seen happening in Turkey — or bribe the press, or hire editors more politically aligned with leadership. But if even one news outlet covers a piece of information, then the news will be out.

A clear example is a paper that looked at bribery by leadership in Peru. In the 1990s, Peru’s secret police chief, Vladimiro Montesinos, paid out many bribes and he recorded it all — all the bribes paid to different branches of government and the press to keep them silent about corruption. One important finding in that paper is that it was much more costly to silence the media than to silence judges and legislators. Typical bribes to TV channels were about 100 times more than bribes to politicians.

You wrote your review in 2015. Have there been important changes since then?

There are two big ones — falling advertisement revenues and social media. I think social media is changing the media landscape both in countries like the U.S., but even more so in countries like China that don’t have a mature and independent press and media market. It’s much more difficult to censor millions of users than it is to censor a few media outlets.

But leaders in dictatorial regimes can also use social media for surveillance, paying firms to look for posts that predict protests and other similar events before they happen. This is a big industry in China, where calls for protests have been picked up by politicians, who then take measures like scheduling workdays on the weekend and requiring students to be in school to prevent a protest.

And what about the spread of false information on social media?

There are two obvious reasons why you see this kind of fake news on social media. One is that those who post fake news want to influence an election, or something else. The other is that it’s profitable. We have these fake news-producing sites in Macedonia who know nothing about U.S. politics but they know how to write a news article or post that will get many clicks. The thing that makes social media conducive to fake news is there’s much less of a reputational concern for users on Facebook or Twitter. They can write something crazy and then just open up another account, and write something crazy. A media outlet in the U.S., like a local newspaper, cannot do that. Reputation is what makes people pay for their articles.

And what is the impact of falling ad revenues?

Falling advertisement revenues are a major issue now in the media industry, leading to a less financially viable press. So we have to think about whether government needs to start subsidizing the media. There’s an economic argument for this, based on economic theories about behaviors that affect other people. If you are better informed, then the politicians will become less corrupt – this is good for you and good for everyone else. Just as you might put a tax on something that is bad, like polluting, you want to have a subsidy on something that is good, like getting informed.

But the effects of government subsidies on bias are not clear. It’s a topic for future research.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Wednesday, April 26, 2023

Hot Jupiters were the first kind of exoplanet found. A quarter-century later, they still perplex and captivate — and their origins hold lessons about planet formation in general.

In 1995, after years of effort, astronomers made an announcement: They’d found the first planet circling a sun-like star outside our solar system. But that planet, 51 Pegasi b, was in a quite unexpected place — it appeared to be just around 4.8 million miles away from its home star and able to dash around the star in just over four Earth-days. Our innermost planet, Mercury, by comparison, is 28.6 million miles away from the sun at its closest approach and orbits it every 88 days.

What’s more, 51 Pegasi b was big — half the mass of Jupiter, which, like its fellow gas giant Saturn, orbits far out in our solar system. For their efforts in discovering the planet, Michel Mayor and Didier Queloz were awarded the 2019 Nobel Prize in Physics alongside James Peebles, a cosmologist. The Nobel committee cited their “contributions to our understanding of the evolution of the universe and Earth’s place in the cosmos.”

The phrase “hot Jupiter” came into parlance to describe planets like 51 Pegasi b as more and more of them were discovered in the 1990s. Now, more than two decades later, we know of 4,000-plus exoplanets, with many more to come, thanks to a trove of planet-seeking telescopes in space and on the ground: the now-defunct Kepler, and current ones such as TESS, Gaia, WASP, KELT and more. Just over 400 of them meet the rough definition of a hot Jupiter — a planet with a 10-day-or-less orbit and a mass at least 25 percent that of our own Jupiter. While these close-in, hefty worlds represent about 10 percent of the exoplanets detected so far, it’s thought they account for just 1 percent of all planets.
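That rough definition is easy to apply to a planet catalog. Here is a minimal sketch in Python, assuming a CSV exported from the NASA Exoplanet Archive; the column names pl_orbper (orbital period in days) and pl_bmassj (mass in Jupiter masses) follow the archive's conventions, but check them against your own export:

```python
import pandas as pd

# Apply the rough hot-Jupiter cut to a planet catalog.
# Assumes a CSV exported from the NASA Exoplanet Archive with columns
# pl_orbper (orbital period, days) and pl_bmassj (mass, Jupiter masses).
catalog = pd.read_csv("exoplanets.csv")

hot_jupiters = catalog[
    (catalog["pl_orbper"] <= 10.0)     # 10-day-or-less orbit
    & (catalog["pl_bmassj"] >= 0.25)   # at least 25% of Jupiter's mass
]

frac = 100 * len(hot_jupiters) / len(catalog)
print(f"{len(hot_jupiters)} of {len(catalog)} planets pass the cut ({frac:.1f}%)")
```

Run against a current catalog, a cut like this should recover the few-hundred-planet, roughly 10 percent share described above.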

Still, hot Jupiters stand to tell us a lot about how planetary systems form — and what kinds of conditions cause extreme outcomes. In a 2018 paper in the Annual Review of Astronomy and Astrophysics, astronomers Rebekah Dawson of the Pennsylvania State University and John Asher Johnson of Harvard University took a look at hot Jupiters and how they might have formed — and what that means for the rest of the planets in the galaxy. Knowable Magazine spoke with Dawson about the past, present and future of planet-hunting, and why these enigmatic hot Jupiters remain important. This conversation has been edited for length and clarity.

What is a hot Jupiter?

A hot Jupiter is a planet that’s around the mass and size of Jupiter. But instead of orbiting far out, as our own Jupiter does around the sun, it’s very close to its star. The exact definitions vary, but for the purpose of the Annual Review article we say it’s a Jupiter within about 0.1 astronomical units of its star. An astronomical unit is the distance between Earth and the sun, so a hot Jupiter is at most about one-tenth as far from its star as Earth is from the sun.
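To put that cutoff in the same units used above: $0.1\,\mathrm{AU} \approx 0.1 \times 93\ \text{million miles} \approx 9.3$ million miles, which comfortably contains 51 Pegasi b’s roughly 4.8-million-mile orbit.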

What does being so close to their star do to these planets?

That’s an interesting and debated question. A lot of these hot Jupiters are much larger than our own Jupiter, which is often attributed to radiation from the star heating and expanding their gas layers.

It can have some effects on what we see in the atmosphere as well. These planets are tidally locked, so that the same side always faces the star, and depending on how much the heat gets redistributed, the dayside can be much hotter than the nightside.

Some hot Jupiters show evidence of hydrogen gas escaping from their atmospheres, and some of the very hottest show a thermal inversion, where the temperature increases with altitude. At such high temperatures, molecules like water vapor and titanium oxide, and metals like sodium and potassium in the gas phase, can be present in the atmosphere.

What might explain how a planet ends up so close to its star?

There are three categories of models that people have come up with. One is that maybe these planets form close to their stars to begin with. Originally, people sort of dismissed this. But more recently, astronomers have been taking this theory a bit more seriously as more studies and simulations have shown the conditions under which this could happen.

Another explanation is that during the stage when the planetary system was forming out of a disk of gas and dust, the Jupiter was pulled in closer to its star.

The last explanation is that the Jupiter could have started far away from the star and then gotten onto a very elliptical orbit — probably through gravitational interactions with other bodies in the system — so that it passed very close to the host star. It got so close that the star could raise strong tides on the Jupiter, just like the moon raises tides on the Earth. That could shrink and circularize its orbit so that it ended up close to the star, in the position we observe.

Are there things we see in the planetary systems that have hot Jupiters that other systems don’t have?

There are some trends. One is that most hot Jupiters don’t have other small planets nearby, in contrast to other types of planetary systems we see. If we see a small hot planet, or if we see a gas giant that’s a bit farther away from its star, it often has other planets nearby. So hot Jupiters are special in being so lonely.

The loneliness trend ties in to how hot Jupiters got so close to their stars. In the scenario where the planet gets onto an elliptical orbit that shrinks and circularizes, it would probably wipe out any small planets in the way. That said, there are a few systems where a hot Jupiter does have a small planet nearby, and for those, that scenario isn’t a good explanation.

Planetary systems with hot Jupiters often have other giant planets farther out in the system, typically beyond where the Earth would be. Perhaps, if hot Jupiters originated on highly eccentric orbits, those faraway planets are what excited their eccentricities to begin with. Or the planets responsible could have been ejected from the system in the process, so we wouldn’t necessarily still see them.

Another big trend is that hot Jupiters tend to orbit stars that are more metal-rich. Astronomers call any element heavier than hydrogen or helium a metal. When there’s more iron and other heavy elements in the star, we think that is reflected in the disk of gas and dust the planets formed out of. More solids are available, and that could facilitate forming giant planets by providing material for their cores, which would then accrete gas and become gas giants.

Having more metals in the system could enable the creation of multiple giant planets. That could cause the type of gravitational interaction that would put a hot Jupiter onto a high-eccentricity orbit.

Hot Jupiters like 51 Pegasi b were the first type of planet discovered around sun-like stars. What led to their discovery?

It occurred after astronomers started using a technique called the radial velocity method to look for extrasolar planets. They expected to find analogs to our own Jupiter, because giant planets like that would produce the biggest signal. Finding hot Jupiters, which produce an even larger signal on a shorter timescale, was a surprising but fortuitous discovery.

Can you explain the radial velocity method?

It detects the motion of the host star due to the planet. We often think of a star sitting still with a planet orbiting around it, but the star is actually tracing its own little orbit around the two bodies’ common center of mass, and that’s what the radial velocity method picks up. More specifically, it detects the Doppler shift of the star’s light as the star moves toward or away from us along that orbit.
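To put a number on that wobble, here is a back-of-the-envelope sketch of the standard semi-amplitude formula for a circular orbit with the planet much lighter than its star; the 51 Pegasi b-like inputs are approximate values assumed for illustration:

```python
import math

# Back-of-the-envelope radial-velocity semi-amplitude K for a circular
# orbit with the planet much lighter than its star:
#   K = (2*pi*G / P)^(1/3) * m_p * sin(i) / M_star^(2/3)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg

def rv_semi_amplitude(period_days, planet_mjup, star_msun, sin_i=1.0):
    """Stellar wobble speed in m/s induced by a planet on a circular orbit."""
    p = period_days * 86400.0
    m_p = planet_mjup * M_JUP
    m_star = star_msun * M_SUN
    return (2.0 * math.pi * G / p) ** (1.0 / 3.0) * m_p * sin_i / m_star ** (2.0 / 3.0)

# Rough 51 Pegasi b-like inputs, assumed for illustration:
# P ~ 4.23 days, m*sin(i) ~ 0.46 Jupiter masses, host star ~ 1.1 solar masses.
print(f"K ~ {rv_semi_amplitude(4.23, 0.46, 1.1):.0f} m/s")  # prints K ~ 54 m/s
```

A wobble of tens of meters per second repeating every few days is exactly the kind of large, fast signal described above; for comparison, Earth tugs the sun around at only about 0.1 meters per second over a full year.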

One of the other common ways to find planets is the transit method, which looks for the dimming of a star’s light due to a planet passing in front of it. It’s easier to find hot Jupiters than smaller planets this way because they block more of the star’s light. And if they are close to the star they transit more frequently in a given period of time, so we’re more likely to detect them.
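The size advantage is easy to quantify: the transit depth is roughly the squared ratio of planet radius to star radius. Assuming, for illustration, a slightly inflated hot Jupiter of 1.2 Jupiter radii crossing a sun-like star:

$$\text{depth} \approx \left(\frac{R_p}{R_\star}\right)^2 \approx \left(\frac{1.2\,R_{\mathrm{Jup}}}{R_\odot}\right)^2 \approx \left(\frac{1.2}{9.7}\right)^2 \approx 1.5\%,$$

versus roughly $0.008\%$ for an Earth-size planet crossing the same star, a dip nearly 200 times shallower.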

In the 1990s, many of the exoplanets astronomers discovered were hot Jupiters. Since then, we’ve found more and different kinds of planets — hot Jupiters are relatively rare compared with Neptune-sized worlds and super-Earths. Why is it still important to find and study them?

One big motivation is the fact that they’re out there and that they weren’t predicted from our theories of how planetary systems form and evolve, so there must be some major pieces missing in those theories.

Those missing ingredients probably affect many planetary systems even if the outcome isn’t a hot Jupiter — a hot Jupiter, we think, is probably an extreme outcome. If we don’t have a theory that can make hot Jupiters at all, then we’re probably missing out on those important processes.

A helpful thing about hot Jupiters is that they are a lot easier to detect and characterize using transits and radial velocity, and we can look at the transit at different wavelengths to try to study the atmosphere. They are really useful windows into planet characterization.

Hot Jupiters are always going to be the planets we can probe in the most detail. So even though people don’t necessarily get excited about the discovery of a new hot Jupiter anymore, growing the sample lets us gather more details about their orbits, compositions, sizes and the rest of their planetary systems, to test theories of their origins. In turn, they’re teaching us about processes that affect all sorts of planetary systems.

What questions will we be able to answer about hot Jupiters as next-generation observatories, such as the James Webb Space Telescope and larger ground-based telescopes, come online?

With James Webb, the hope is to be able to characterize the atmospheric properties of a huge number of hot Jupiters, which might help us test where they formed and what their formation conditions were like. And my understanding is that James Webb can observe hot Jupiters very quickly, so it could build a really big sample and help statistically test some of these questions.

The Gaia mission will be really helpful for characterizing the outer parts of these planetary systems; in particular, it can help us measure whether massive, distant planets lie in the same plane as a transiting hot Jupiter, and different theories make different predictions about whether that should be the case. Gaia is special in being able to give us three-dimensional information, when usually we have only a two-dimensional view of a planetary system.

TESS [the Transiting Exoplanet Survey Satellite] is operating right now, and its discoveries are around really bright stars, so it becomes possible to study the whole system hosting a hot Jupiter with the radial velocity method and better characterize the overall architecture of the planetary system. Knowing what’s farther out will help us test some of the ideas about hot Jupiter origins.

TESS and other surveys also include more young stars in their samples, so we can see the occurrence rate and properties of hot Jupiters closer to when they formed. That, too, will help us distinguish between different formation scenarios.

They’re alien worlds to us, but what can hot Jupiters tell us about the origins of our own solar system? These days, many missions are concentrating on Earth-sized planets.

What we’re all still struggling to see is: Where does our solar system fit into a bigger picture of how planetary systems form and evolve, and what produces the diversity of planetary systems we see? We want to build a very complete blueprint that can explain everything from our solar system, to a system with hot Jupiters, to a system more typical of what [the retired space telescope] Kepler found, which are compact, flat systems of a bunch of super-Earths.

We still don’t have a great explanation for why our solar system doesn’t have a hot Jupiter and other solar systems do. We’d like some broad theory that can explain all types of planetary systems that we’ve observed. By identifying missing processes or physics in our models of planet formation that allow us to account for hot Jupiters, we’re developing that bigger picture.

Do you have any other thoughts?

The one thing I might add is that, as we put together all the evidence for our review, we found that none of the theories can explain everything. And that motivates us to believe that there are probably multiple ways to make a hot Jupiter — so it’s all the more important to study them.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.