Thursday, April 20, 2023

Will the Earth last forever?

‘Earthrise,’ a photo of the Earth taken by Apollo 8 astronaut Bill Anders, Dec. 24, 1968. NASA/Bill Anders via Wikipedia
Shichun Huang, University of Tennessee

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


Will the Earth last forever? – Solomon, age 5, California


Everything that has a beginning has an end. But the Earth will last for a very long time, and its end will come billions of years after anyone who is alive here now is gone.

Before we talk about the future of our planet, let’s review its history and when life appeared on it. The history of human beings is very, very short compared with that of Earth.

4.6 billion years old

Our planet formed from a giant cloud of gas and dust in space, which is called a nebula, about 4.6 billion years ago. The first continent might have formed on its surface as early as 4.4 billion years ago.

The atmosphere of the early Earth did not contain oxygen, so it would have been toxic to human beings if they had been present then. It was very different from Earth’s atmosphere today, which is about 21% oxygen. Many life forms, including humans, need oxygen to live.

Where did that oxygen come from? Scientists believe that atmospheric oxygen started to rise about 2.4 billion years ago in a shift they call the Great Oxidation Event.

Tiny microorganisms had already existed on Earth’s surface for a while. Some of them developed the ability to produce energy from sunlight, the way plants do today. As they did, they released oxygen, which built up in the atmosphere and made it possible for more complex life forms to evolve.

Cyanobacteria, also known as blue-green algae, were the first organisms that produced oxygen on Earth. Today you can find them all around – even in a pond in New York City’s Central Park.

This took a long time. The first animals, which may have been sea sponges, probably appeared about 660 million years ago. Depending on how we define humans, humans emerged in Africa about 200,000 years to 2 million years ago, and spread out everywhere from there.

Humans have only been present on Earth for a tiny fraction of our planet’s history.

Billions more to go

Now, as we think about the future of the Earth, we know there are two essential factors that humans need to live here.

First, the Sun provides most of the energy that living things on Earth need to survive. Plants use sunlight to grow and to produce oxygen. Animals, including humans, rely directly or indirectly on plants for food and oxygen.

The other thing that makes the Earth habitable for life is that our planet’s surface keeps moving and shifting. This ever-changing surface environment produces weather patterns and chemical changes in the oceans and on the continents that have enabled life to evolve on Earth.

The movement of the giant pieces of Earth’s outer layer, which are called plates, is driven by heat in the interior of the Earth, much of it released by the decay of radioactive elements. That heat will keep the Earth’s interior hot for billions of years.

So, what will change? Scientists estimate that the Sun will keep shining for another 5 billion years. But it will gradually get brighter and brighter, and warm the Earth more and more.

This warming is so slow that we wouldn’t even notice it. In about 1 billion years, our planet will be too hot to maintain oceans on its surface to support life. That’s a really long time away: an average human lifetime is about 73 years, so a billion years is more than 13 million human lifetimes.

Long after that – about 5 billion years from now – our Sun will expand into an even bigger star that astronomers call a “red giant,” which eventually will engulf the Earth. Just as our planet existed for over 4 billion years before humans appeared, it will last for another 4 billion to 5 billion years, long after it becomes uninhabitable for humans.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Shichun Huang, Associate Professor of Earth and Planetary Sciences, University of Tennessee

This article is republished from The Conversation under a Creative Commons license.

Darwin’s finches fall prey to a blood-sucking parasite

An invasive fly could mean the loss of bird species on the Galápagos Islands. To save them, scientists may introduce another invasive insect.

They come at night like a swarm of bogeymen from under the bed. Dozens of hungry fly maggots wriggling up through the maze of a bird’s nest once the chicks — their prey — have nestled in for the night. It’s the maggots’ time to feed, and every night the menu is the same: blood and tissue.

Newly hatched maggots take up temporary residence in a chick’s nostril or ear canal, or under a feather quill, all prime places to find blood. More mature maggots linger in the nest, harassing the chicks from below. These larger fly larvae dine until morning and then squirm back into the entwined twigs where they wait until night comes again.

In the end, chicks may lose anywhere from 18 to 55 percent of their blood during these nightly raids. On average, more than half of the nestlings in an infested nest die because of fly larvae. Those that do survive are often left with a permanent reminder of their battle—a hole in their beak.

The night-marauding larvae of the parasitic fly Philornis downsi are known to plague birds in Brazil, Argentina, Trinidad and Tobago, and parts of Central America. Common targets include nestlings of the rufous-tailed jacamar and the smooth-billed ani, which have evolved to tolerate the pest. But on the Galápagos Islands, over 550 miles from the fly’s native home on mainland Ecuador, birds are defenseless.

Galápagos birds are no strangers to invasive species. Since the islands were discovered in 1535, more than 1,500 foreign species have found their way there — some accidentally introduced, others intentionally. Among those that persist are 810 plants, 27 vertebrates, 499 insects and 70 other land invertebrates. But few have threatened the birds as much as this fly, and even fewer have invaded the remote isles with such virulence.

Flies have now infected nests of every land bird species on the four inhabited Galápagos islands, says Sonia Kleindorfer, an evolutionary biologist at Flinders University in Australia. That includes the islands’ most famous and iconic birds: Darwin’s finches — a “world heritage system that shaped human thought about how life evolved on planet Earth” that is now under threat, says Kleindorfer.

Survey data show that finches with the highest density of the parasite are suffering dramatic declines, Kleindorfer notes. If nothing is done, “it’s likely that bird populations will become extinct on different islands in the next decades.”

In 2012, scientists founded an organization called the Philornis Working Group to coordinate their efforts to solve the problem. In early 2018, more than two dozen members of the group, hailing from 15 institutions around the globe, gathered in the Galápagos to discuss progress, new findings and a way to save these birds and preserve the islands’ ecosystem.

And, ever the biologists, they hope to use the crisis as an opportunity to learn more about the impacts of invasive species and how the dynamics of host-parasite relationships may alter evolution.

“It’s a kind of laboratory for studying host defense and parasite virulence,” says Dale Clayton, an evolutionary biologist at the University of Utah who has studied this epidemic for the past decade. Studying the battle between bird and fly, then, isn’t just about rescuing a group of birds. The Galápagos have offered yet again a natural experiment, Clayton says, by which scientists can learn how nature works.

Reading the beak

The Galápagos finches comprise 18 related species (depending on how you divide them up — some only recognize 14) across 19 islands. Finches may be best known for the diversity of their beaks, which are adapted for specific diets or skills. The large ground finch, for example, has a short, stout beak for crushing seeds like a nutcracker. The woodpecker finch has a long, broad beak for drumming on trees and using twigs or cactus spines to pry out insects hidden under bark. And the green warbler-finch has a tweezer-like beak for snatching insects.

Darwin was the first to note these beak differences, which later inspired his theory of evolution by natural selection, and they still dazzle researchers today. For the past 80 years, scientists have scrupulously measured and documented the beaks of Darwin’s finches, literally watching the birds evolve. And their meticulous records also helped deduce something else: when the P. downsi fly arrived.

In a 2016 study, Kleindorfer and a colleague compared the nostril sizes of small ground finches and medium tree finches — regular targets of the flies — measured between 2004 and 2014 with those of museum specimens collected between 1899 and 1962, roughly the year when scientists suspect the fly arrived. In each year’s group of birds measured this century, at least a few carried signature holes in their beaks. In specimens collected before 1962, the scientists found none.

That finding lined up fairly well with the first recorded sightings of the fly in 1964, when scientists on the Galápagos reported it buzzing around Santa Cruz Island. They caught and collected eight flies that year, but nobody thought anything of it. Surveys done decades later, though, showed that the fly had spread to other islands. Then in 1997, the first maggots were found lurking in the nostrils of two woodpecker finch nestlings, and the fly’s parasitic nature revealed itself.

It’s unclear how the flies first arrived. Boats, planes and imported animals, plants or fresh produce — means that brought other invasive species — have all been suggested. Another suspect is the smooth-billed ani, a known host of the fly. The ani was deliberately introduced to the Galápagos in the 1960s in a misguided attempt to control ticks on cattle (the birds eat insects as well as vertebrates such as frogs).

Whatever the introduction route, the fly’s invasion has been swift, in biological terms. In less than 60 years, the flies have invaded 13 of the 15 islands surveyed and have infected every species of land bird that scientists have looked at. Some species, like the medium ground finch, have been dramatically affected. “One year — of the nests we were studying — not a single nest produced a single offspring that fledged from the nest,” Clayton says.

At least the medium ground finch, which the International Union for Conservation of Nature considers of “least concern,” has maintained a stable population despite the fly’s attack. The situation is far more critical for a few highly threatened species. The medium tree finch, with only about 2,500 individuals left, and the mangrove finch, estimated at 100 or fewer, consistently fall victim to a high density of fly larvae. Mangrove finches lost 14 percent of their young to flies between 2007 and 2008. If this continues, scientists estimate that these birds will be gone in roughly 50 to 100 years. “We’re looking at the probable extinction of the first species of bird in the Galápagos,” Clayton says.

Learning from the arms race

Parasites generally live in a constant balancing act. They infect their hosts for food, and that may kill the hosts. The trick is to not kill hosts too soon, and certainly not to kill them all. Otherwise, the parasites might not survive either.

Yet the flies right now are teetering toward that extreme, killing too many of their hosts — presumably, Clayton says, because they are new to the islands. “The hosts,” he says, “are sitting ducks.”

Birds in P. downsi’s native habitat have evolved ways to avoid, tolerate or adapt to the pest and its approximately four dozen other parasitic fly relatives. Some birds, such as Costa Rica’s white-throated magpie jay, start their breeding season before the fly’s breeding season. Other birds have decreased the number of eggs that they lay, perhaps so they can feed more to each chick to compensate for blood loss.

On the Galápagos, however, there has been scant time for bird and fly to coevolve to a sustainable situation, and the birds have proved much more vulnerable. Only a few exceptions exist, such as the Galápagos mockingbird, which seems less affected by fly infections. This may be because mockingbird chicks, when parasitized, beg for food more earnestly.

Aided by fly larva data amassed since 1997, scientists now have a rare opportunity to study how bird and fly adapt to each other. Ecological theory predicts that epidemics like this will end if the host evolves defenses, says Kleindorfer. But it’s an open question how quickly that can happen.

In the Galápagos bird-fly system, scientists have already seen some changes. In 2012, scientists saw finch species rubbing themselves with leaves from guayabillo trees, plants with natural repellents that ward off mosquitoes and, it turns out, P. downsi larvae. Video recorders placed within nests revealed chicks climbing on top of one another to avoid the larvae, often leaving the weakest of them to be the targeted victim.

Kleindorfer and five of her colleagues documented a new hybrid tree finch species on Floreana Island in a 2014 paper. The hybrid bird, a cross between the critically endangered medium tree finch and the common small tree finch, showed fewer fly infestations than either parent species. The hybrid’s numbers are increasing: They represented 19 percent of tree finches in 2005 and 49 percent in 2014.

The flies have also responded. In 2012, scientists noticed fly larvae in nests when female birds were still incubating their eggs. The flies were laying their own eggs earlier, so instead of the larvae just feeding on the blood of chicks, they also fed on the mother bird.

It’s hard to predict what will ultimately happen, Clayton says. Many factors could lead to either bird or fly gaining the upper hand. But with the fly right now in the lead, some researchers are finding ways to intervene.

The extermination order

Among the attendees at the Philornis Working Group meeting in February was George Heimpel, an entomologist from the University of Minnesota. He was there to discuss one thing: how to eliminate the fly.

Numerous eradication methods have been considered, including the release of sterilized male flies and the setting of fly traps. In 2013, a few clever biologists coated cotton balls with the insecticide permethrin, then offered the cotton to birds building their nests. Permethrin kills fly larvae and leaves the birds unaffected, but it’s a short-term solution, Heimpel says — possibly a dangerous one. Permethrin lasts for a single finch breeding season, but eliminating all the flies would take years and entail repeated use of permethrin. That might kill the island’s endemic insects, Heimpel says. The consequences of exposing birds to the pesticide over many years are unknown.

Instead, Heimpel suggests an unusual solution that he thinks will be most effective: Introduce the fly’s own parasitic enemy, a parasitoid wasp. These wasps target the larvae of another species, such as a spider or fly, and inject their eggs into the growing larva or the larval cocoon. The wasp eggs then hatch, and the young feast on the developing host.

For the past six years, Heimpel and his team have investigated the behavior of Conura annulifera, a parasitoid wasp found on mainland Ecuador. Heimpel started studying the wasp in 2012 because rumor had it that the wasp parasitized only the larvae of P. downsi. Between 2015 and 2017, Heimpel and his students performed lab and field studies to confirm this favoritism. They offered the wasps larvae from many kinds of flies, moths and other insects in addition to P. downsi larvae, then collected the larvae and let them pupate into adults.

Many insects emerged from the pupae, but C. annulifera emerged from only one: P. downsi, confirming that the wasps target these flies alone. “Those pupae all look about the same to us,” Heimpel emphasizes. But not to this little wasp, he says.

The wasp’s specialization holds promise for removing the fly, but the irony of purposely introducing another species to eliminate an accidental one is hard to miss. “At this stage, there’s no other alternative,” says Mark Hoddle, an entomologist at the University of California, Riverside. The parasitoid wasp “offers the best chance for finding a sustainable, highly targeted solution to this seemingly intractable pest problem.”

Hoddle’s own team released an Australian ladybug species on the islands starting in 2002 to help curb the plant-decimating appetite of the cottony cushion scale insect, another Australian native. The ladybug was known to especially target the scale insect, and after years of quarantined tests to verify this fact and confirm that it wouldn’t cause undesired impacts on other Galápagos species, the bug was finally introduced. After seven years, the scale insect population dropped between 60 and 98 percent.

Heimpel and his team are now applying for permits to perform similar quarantined experiments with the parasitoid wasps. If all goes well, the wasps will be released into the wild within a few years. “Our job is to think very deeply about the risks and to only contemplate a release if the risks are much lower than the benefits,” says Heimpel. After all, if nothing’s done, the risk is tremendous: losing Darwin’s iconic finches.

Editor's note: This story was updated on May 24 to correct an error and to clarify concerns over permethrin and an experiment with wasp larvae. George Heimpel expressed concern over permethrin's effects on insects, not on crabs or spiders, as the story originally said.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Overconfidence dictates who gets ‘top jobs’ and research shows men benefit more than women

fizkes/Shutterstock
Nikki Shure, UCL and Anna Adamecz, UCL

There has been a steady stream of popular literature in recent years telling women to “lean in”, be more confident, and not worry about “imposter syndrome”.

Men, on the other hand, are often seen as overconfident compared with women. Our recent research shows they are 19% more likely than women to rate their abilities as higher than they actually are – and this difference can affect career outcomes for men and women.

We already know that women are less likely to make partner at law firms and reach corporate leadership positions. Roles such as chief executive, production manager, senior police officer, lawyer and doctor tend to be well paid and secure, and the over-representation of men in such jobs may be an important driver of inequalities in the labour market such as the gender pay gap.

Our research shows that 24% of men versus 16% of women are in such “top jobs” by the age of 42. It also indicates that factors leading to this trend actually start showing up in adolescence. In fact, we believe ours is one of the first studies to link overconfidence captured in adolescence to real job market outcomes in mid-career.

We used data on approximately 3,600 people born in Great Britain who are taking part in the 1970 British Cohort Study. This means we can follow them from birth into the labour market and have access to information about their family background, the circumstances in which they grew up, and the life choices they make.

We constructed a measure of overconfidence using their test scores on a range of cognitive assessments taken at ages five, ten and 16. We compared this to data they provided rating their own ability in several domains. We found that overconfident people were more likely, on average, to be in top jobs at the age of 42 compared to similar adults who didn’t overrate their talents according to our overconfidence scale.

When it comes to explaining the gender gap in top jobs, our measure of overconfidence represented up to 11% of the significant 8 percentage point gender gap in top jobs at age 42 (with men taking more of these top jobs). These results highlight the importance of overconfidence for predicting such achievements, but they also provide some insight into the factors that affect career-related confidence levels.

Confidence factors: university, industry and children

Once we accounted for university attendance and subject, our measure of overconfidence explained 6% of the gender gap in top jobs. This shows the importance of success at school and choice of university subject and institution in paving the way to a top job by mid-career.

In fact, university participation and subject choice matter quite a lot, according to our findings. The gender gap in top jobs is considerably larger among graduates (15 percentage points) compared to non-graduates (6.5 percentage points), while the role of overconfidence mattered more for those who had attended university.

For example, male graduates were 58% more likely than female graduates to be in a top job in the field of science, technology, engineering and maths (STEM), and had 34% greater odds of being in a senior role in law, economics and management (LEM). Interestingly, while overconfidence explained 12% of the gender gap in top LEM roles, it did not matter for top jobs in STEM. This may be down to the more technical nature of these jobs compared to those in LEM.

Apart from industry, other factors also seem to contribute to career gender gaps. Unsurprisingly, having children counts. With many adults having families with children still living at home by middle age, working mothers were 27% less likely than working fathers to be in a top job by mid-career. However, overconfidence did not explain any of this gender gap. This suggests that women are simply more likely than men to change their working patterns once they start a family.

Woman doing paperwork in modern office, co-workers in the background.
Businesses can help build employee confidence. Kateryna Onyshchuk/Shutterstock

How employers can help

Research highlights how men are more likely to assess their abilities favourably and communicate this to others. And since overconfident people may put themselves forward more often and sooner for promotions, this exacerbates the gender gap in top jobs.

So, our findings suggest that employers should rethink how they recruit and promote people. Employers could give more regular performance-based feedback and encourage women to apply for promotions sooner than they might choose to on their own, for example. This is especially relevant for LEM jobs where we found that overconfidence explained the largest portion of the gender gap.

And since overconfidence loses its importance among those who have children, lack of childcare and flexibility in the workplace clearly remains a substantial barrier to career progression for women.

Requiring women to “lean in” or engage in confidence-building interventions is not the solution. Focusing on imposter syndrome or women being underconfident puts the onus on them to change. Instead, we all need to find ways to change the system.

Nikki Shure, Associate Professor in Economics, UCL and Anna Adamecz, Research Associate in Economics, UCL

This article is republished from The Conversation under a Creative Commons license.

Speaker McCarthy lays out initial cards in debt ceiling debate: 5 essential reads on why it’s a high-stakes game

Speaker Kevin McCarthy said the House would vote on a debt ceiling bill ‘within weeks.’ AP Photo/Seth Wenig
Bryan Keogh, The Conversation and Matt Williams, The Conversation

House Speaker Kevin McCarthy has laid out an opening gambit in what is likely to be a lengthy battle over the debt ceiling, suggesting that Republicans are open to a deal – but at a very high price.

On April 17, 2023, McCarthy told a gathering at the New York Stock Exchange that the Republican-controlled House would vote “in the coming weeks” on a bill to “lift the debt ceiling into the next year.” The catch? The Democrats would have to agree to freeze spending at 2022 levels and roll back regulations, among other conditions.

It is unlikely that such a bargain would get through the Democratic-controlled Senate or get the signature of President Joe Biden. As such, McCarthy’s comments might best be viewed as an early salvo in what could be protracted negotiations to avert a debt ceiling crisis.

Explaining why the U.S. has a debt ceiling in the first place – and why it is a constant source of political wrangling – is a complicated matter. Here are five articles from The Conversation’s archive that provide some of the answers.

1. What exactly is the debt ceiling?

So, some basics. The debt ceiling was established by the U.S. Congress in 1917. It limits the total national debt by setting out a maximum amount that the government can borrow.

Steven Pressman, an economist at The New School, explained the original aim was “to let then-President Woodrow Wilson spend the money he deemed necessary to fight World War I without waiting for often-absent lawmakers to act. Congress, however, did not want to write the president a blank check, so it limited borrowing to $11.5 billion and required legislation for any increase.”

Since then, the debt ceiling has been increased dozens of times. It currently stands at US$31.4 trillion – a limit the government has already hit. As a result, the Treasury has taken “extraordinary measures” to enable it to keep borrowing without breaching the ceiling. Such measures, however, can only be temporary – meaning at some point Congress will have to act to lift the ceiling or the government will default on its debt obligations, which could happen as early as July or August.

2. ‘Catastrophic’ consequences

How bad could it be if the U.S. does default on its debt obligations? Well, pretty bad, according to Michael Humphries, deputy chair of business administration at Touro University, who wrote two articles on the consequences.

“The knock-on effect of the U.S. defaulting would be catastrophic. Investors such as pension funds and banks holding U.S. debt could fail. Tens of millions of Americans and thousands of companies that depend on government support could suffer. The dollar’s value could collapse, and the U.S. economy would most likely sink back into recession,” he wrote.

3. Undermining the dollar

And that’s not all.

Such a default could undermine the U.S. dollar’s position as a “unit of account,” which makes it a widely used currency in global finance and trade. Loss of this status would be a severe economic and political blow to the U.S. But Humphries conceded that putting a dollar value on the price of a default is hard:

“The truth is, we really don’t know what will happen or how bad it will get. The scale of the damage caused by a U.S. default is hard to calculate in advance because it has never happened before.”

4. Can McCarthy make a deal?

To win the speakership in January 2023, McCarthy made a series of concessions to hard-line members of his party. Many of these concessions are known, such as allowing a single member of the House to call for a vote to remove him as speaker. But there may be others that remain secret and could be influencing McCarthy’s decision-making, argued Stanley M. Brand, a law professor at Penn State and former general counsel for the House. These could make it much harder to reach a deal with Biden over the debt ceiling.

“Some of the new rules spawned by McCarthy’s concessions may appear to democratize the procedures for considering and passing legislation. But they are likely to make it difficult for members to get the working majority necessary to pass legislation,” Brand explained. “That could make things such as raising the statutory debt ceiling, which is necessary to avert a government shutdown and financial crisis, and passing legislation to fund the government, difficult.”

5. The GOP endgame: A balanced budget

Another condition McCarthy agreed to in January is to push for a “balanced budget” within 10 years. His most recent speech on the debt ceiling made no mention of this, but it’s likely hardliners within his party will continue to demand it – putting his ability to negotiate a compromise in jeopardy.

The U.S. government hasn’t had a balanced budget since 2001, the year President Bill Clinton left office. Linda J. Bilmes, a senior lecturer in public policy and public finance at Harvard Kennedy School who worked in the Clinton administration from 1997 to 2001, explained how they achieved that rare feat and why it’s unlikely to be repeated today.

“Back in 1997, after the smoke cleared, both the Clinton administration and the Republicans in Congress were able to claim some political credit for the resulting budget surpluses,” she wrote. “But – crucially – both parties recognized that a deal was in the best interest of the country and were able to line up their respective members to get the votes in Congress needed to approve it. The contrast with the current political landscape is stark.”

Bryan Keogh, Deputy Managing Editor and Senior Editor of Economy and Business, The Conversation and Matt Williams, Senior Breaking News and International Editor, The Conversation

This article is republished from The Conversation under a Creative Commons license. 

AI has social consequences, but who pays the price? Tech companies’ problem with ‘ethical debt’

You don’t have to see the future to know that AI has ethical baggage. Wang Yukun/Moment via Getty Images
Casey Fiesler, University of Colorado Boulder

As public concern about the ethical and social implications of artificial intelligence keeps growing, it might seem like it’s time to slow down. But inside tech companies themselves, the sentiment is quite the opposite. As Big Tech’s AI race heats up, it would be an “absolutely fatal error in this moment to worry about things that can be fixed later,” a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.

In other words, it’s time to “move fast and break things,” to quote Mark Zuckerberg’s old motto. Of course, when you break things, you might have to fix them later – at a cost.

In software development, the term “technical debt” refers to the implied cost of making future fixes as a consequence of choosing faster, less careful solutions now. Rushing to market can mean releasing software that isn’t ready, knowing that once it does hit the market, you’ll find out what the bugs are and can hopefully fix them then.

However, negative news stories about generative AI tend not to be about these kinds of bugs. Instead, much of the concern is about AI systems amplifying harmful biases and stereotypes and students using AI deceptively. We hear about privacy concerns, people being fooled by misinformation, labor exploitation and fears about how quickly human jobs may be replaced, to name a few. These problems are not software glitches. Realizing that a technology reinforces oppression or bias is very different from learning that a button on a website doesn’t work.

As a technology ethics educator and researcher, I have thought a lot about these kinds of “bugs.” What’s accruing here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.

Off to the races

As soon as OpenAI’s ChatGPT was released in November 2022, the starter pistol for today’s AI race, I imagined the debt ledger starting to fill.

Within months, Google and Microsoft released their own generative AI programs, which seemed rushed to market in an effort to keep up. Google’s stock prices fell when its chatbot Bard confidently supplied a wrong answer during the company’s own demo. One might expect Microsoft to be particularly cautious when it comes to chatbots, considering Tay, its Twitter-based bot that was almost immediately shut down in 2016 after spouting misogynist and white supremacist talking points. Yet early conversations with the AI-powered Bing left some users unsettled, and it has repeated known misinformation.

Not all AI-generated writing is so delightful. Smith Collection/Gado/Archive Photos via Getty Images

When the social debt of these rushed releases comes due, I expect that we will hear mention of unintended or unanticipated consequences. After all, even with ethical guidelines in place, it’s not as if OpenAI, Microsoft or Google can see the future. How can someone know what societal problems might emerge before the technology is even fully developed?

The root of this dilemma is uncertainty, which is a common side effect of many technological revolutions, but magnified in the case of artificial intelligence. After all, part of the point of AI is that its actions are not known in advance. AI may not be designed to produce negative consequences, but it is designed to produce the unforeseen.

However, it is disingenuous to suggest that technologists cannot accurately speculate about what many of these consequences might be. By now, there have been countless examples of how AI can reproduce bias and exacerbate social inequities, but these problems are rarely publicly identified by tech companies themselves. It was external researchers who found racial bias in widely used commercial facial analysis systems, for example, and in a medical risk prediction algorithm that was being applied to around 200 million Americans. Academics and advocacy or research organizations like the Algorithmic Justice League and the Distributed AI Research Institute are doing much of this work: identifying harms after the fact. And this pattern doesn’t seem likely to change if companies keep firing ethicists.

Speculating – responsibly

I sometimes describe myself as a technology optimist who thinks and prepares like a pessimist. The only way to decrease ethical debt is to take the time to think ahead about things that might go wrong – but this is not something that technologists are necessarily taught to do.

Scientist and iconic science fiction writer Isaac Asimov once said that sci-fi authors “foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” Of course, science fiction writers do not tend to be tasked with developing these solutions – but right now, the technologists developing AI are.

So how can AI designers learn to think more like science fiction writers? One of my current research projects focuses on developing ways to support this process of ethical speculation. I don’t mean designing with far-off robot wars in mind; I mean the ability to consider future consequences at all, including in the very near future.

Learning to speculate about tech’s consequences – not just for tomorrow, but for the here and now. Maskot/Getty Images

This is a topic I’ve been exploring in my teaching for some time, encouraging students to think through the ethical implications of sci-fi technology in order to prepare them to do the same with technology they might create. One exercise I developed is called the Black Mirror Writers Room, where students speculate about possible negative consequences of technology like social media algorithms and self-driving cars. Often these discussions are based on patterns from the past or the potential for bad actors.

Ph.D. candidate Shamika Klassen and I evaluated this teaching exercise in a research study and found that there are pedagogical benefits to encouraging computing students to imagine what might go wrong in the future – and then brainstorm about how we might avoid that future in the first place.

However, the purpose isn’t to prepare students for those far-flung futures; it is to teach speculation as a skill that can be applied immediately. This skill is especially important for helping students imagine harm to other people, since technological harms often disproportionately impact marginalized groups that are underrepresented in computing professions. The next steps for my research are to translate these ethical speculation strategies for real-world technology design teams.

Time to hit pause?

In March 2023, an open letter with thousands of signatures advocated pausing the training of AI systems more powerful than GPT-4. Unchecked, AI development “might eventually outnumber, outsmart, obsolete and replace us,” or even cause a “loss of control of our civilization,” its writers warned.

As critiques of the letter point out, this focus on hypothetical risks ignores actual harms happening today. Nevertheless, I think there is little disagreement among AI ethicists that AI development needs to slow down – that developers throwing up their hands and citing “unintended consequences” is not going to cut it.

We are only a few months into the “AI race” picking up significant speed, and I think it’s already clear that ethical considerations are being left in the dust. But the debt will come due eventually – and history suggests that Big Tech executives and investors may not be the ones paying for it.

This article is republished from The Conversation under a Creative Commons license. 

Wednesday, April 19, 2023

Robots designed to self-construct


Robot researcher Mark Yim offers a look inside the promising field of modular reconfigurable robotics — bots that can shift form to tackle an array of tasks

For most of us, the word “robot” conjures something like C-3PO — a humanoid creature programmed to interact with flesh-and-blood people in a more or less human way. But the roster of real-world robots is considerably more varied. The list includes Boston Dynamics’ dog-inspired robots, Dalek-like security bots, industrial arms on an assembly line and any number of flying insect-inspired robots. If a machine is designed to do a complicated task in an automated fashion, it’s a robot.

A robot, it turns out, doesn’t even need to have a fixed shape. That’s the vision of researchers who work in modular reconfigurable robotics and are pursuing bots that can assemble themselves, by rearranging similar or identical parts into whatever shape suits the task at hand. These robots can take the form of snakes, lattices, trusses and more, and can be set to any challenge — providing construction support, doing repair work or scouring for survivors after a natural disaster.

Or rather, they will hopefully do all that some day. In May 2019, the Annual Review of Control, Robotics, and Autonomous Systems surveyed both the promise of modular reconfigurable robots (MRR) and the many barriers that remain before they can become a reality. To get a sense of where these reconfigurable robots are right now, and where they could be in the future, Knowable Magazine spoke with University of Pennsylvania roboticist Mark Yim, one of the coauthors of the report and director of Penn’s General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory.

This conversation has been edited for length and clarity.

What are modular reconfigurable robots?

LEGO is one analogy that people often use. LEGO bricks are modular and can be rearranged in lots of different ways, so that would be kind of like the modular reconfigurable robots, except these would be LEGO bricks that can rearrange themselves. Very often there will be one type of module, with hundreds of identical units; the reconfiguring of those modules is a big part of the research.

Another analogy comes from the very first modular robotics paper in 1988, when Toshio Fukuda and his colleagues in Japan had this idea called CEBOTS, or cellular robotic systems. The idea was that you have different cells in your body. Similarly, you could have robots that could come together and have a whole organism.

If you are not among that particular community, modular reconfigurable robots can be a little bit broader. In the Annual Reviews article, we talked about a typical standard milling machine where we have different types of tools that go into it. It’s modular in the sense that these tools can be swapped out, but they’re not identical. It’s a very different type of thing.

What would the goal of a modular reconfigurable robotic system be?

You’ve actually hit on one of the important, yet difficult issues with modular reconfigurable robots. One of the characteristics is that they reconfigure into different shapes. They can do lots and lots of different things. Usually that means that they don’t do most of them very well, at least not optimally. So the types of situations in which they work the best are those that require lots of different uses. The examples we’ve given in the past are things like planetary exploration. If you want to send something to Mars, you don’t know what you will find there, so a robot that can do lots of different things will be useful.

A near-term thing that most robotics researchers always talk about is search and rescue. You’ve got a building that’s fallen. You don’t know if you’ll need a wheeled robot or a legged robot. You don’t know if you need something small enough to go through tight spaces or under rubble, or big enough to move that rubble, or tall enough to look over a wall to see if something’s on the other side. You don’t know if you need to lift up objects. So having a system that can adapt, rearrange itself and change its functionality for the situation would be one in which modular reconfigurable robots would excel.

Can you describe some of the main types of modular robots?

Maybe the most popular type among the reconfigurable robotics research community is what we call the lattice configuration. This is a little bit more like LEGO bricks. The modules form a lattice as you put them together. The reason why they may be most popular is because they are a bit easier to program. It’s easier to think about how you get them to rearrange.

There was a Carnegie Mellon group that came up with an application they called “telepario.” If television is a visual medium, telepario was the idea of making a system that could make a physical shape that you could change. So instead of taking a video of a person and transmitting it somewhere, you could take a 3D capture of that person and use lattice-style robots to translate it into a remote 3D system that would move as the person moves. Of course, that’s really hard, and we’re not that close to making that system. Part of the reason it would just be used for 3D representation is because we don’t know how well that system could work in an environment. Could it move objects? Maybe not.

There’s another kind, which we call a chain-style reconfigurable robot. They form shapes more like standard robot arms and industrial arms. That one is better for doing typical robot tasks and assembly. The modules tend to need to be larger — usually you can’t have millions of modules; you have 10, 20 or 30.

What does it take to make something self-reconfigurable, rather than something that has to be reconfigured by hand?  

The dividing line is not that crisp, actually. Having something reconfigure itself requires basically a docking mechanism that you can actuate, and you can get two parts to attach and detach automatically. There are a variety of things like that around now. Like refueling on jet fighters: They have this thing where a fighter jet flies up behind a tanker with a trailing refueling tube and docks with the end of the tube. That’s a very simple, one-element docking, and it’s definitely self-reconfiguring. But is it a modular reconfigurable robot? Not quite as we think of it. In our research, you need more than a handful of modules, all of which can attach, detach and rearrange themselves.

How do you make a good docking system?

That’s really tricky. With magnets, you can get strong attachments, but sometimes it’s hard to make them detach. There’s also glue, you can weld things or you can have screws — there’s lots of different ways to get things to attach, but getting them to detach is an interesting problem.

For example, the classic mechanical engineering style is to have hooks or latches that grab onto things and release. One of the issues that we’ve had with this is that those systems can be large: Over 50 percent of the volume of a particular module might be the latching and docking mechanism. So a good portion of that robot is not useful for the task that it’s doing, because it’s made for the reconfiguration part. So that’s another way in which things aren’t necessarily the most efficient. But those things can be very strong and release with zero force.

There are people trying to come up with something like a switchable glue. Often what will happen is you have something that’s not terribly strong as a glue; it’s kind of like an adhesive. And then when it releases, it’s not completely releasing either. It’s still kind of tacky. The ratio of bonding strength to releasing forces is not great, but it takes very, very little space. Depending on the mechanism for actuating it, you could release the bond with electricity, heat or sometimes light.

One group also came up with melting a low-temperature alloy, which can be very strong and doesn’t take a huge amount of space. But detaching it means melting it and that does take a lot of energy. And there’s some question about how many times you can repeat that before it starts to fail. So you may run into robustness issues.

A lot of people dream about making tiny, tiny modules. Science fiction movies often come into play. People think about the Terminator movies with a liquid metal robot — molecule-sized modules turn into a Terminator. People are trying to make the module as small as possible, but reaching that small of a size is still far in the future.

What areas of technological development are helping to bring these modular robotic systems into a real-world setting?

The thing that I think is exciting is the truss reconfiguration systems.

Trusses are naturally modular. Making them reconfigurable basically gets past the constraint of having modular systems that are small, weak and require millions of moving parts, which just makes things very difficult from a practical point of view.

The variable topology truss project we’ve been working on doesn’t have to be field-ready or so strong that it can shore up buildings. It can be at a human scale. The robots could move objects around that weigh tens of pounds instead of tons. Even 10 pounds can be a large load for most of these really small modular systems. So scaling things up to human size is making things more practical.

I think if we take a step back, there’s probably an intermediary approach where we can do things that are not quite so ambitious, but more practical and may be nearer term as well.

What is the first of these systems going to look like when it becomes a reality?

That’s a tricky question, partly because what it looks like, or what shape it is — that’s the variable. It’s a blob, so it can take whatever shape we want it to. I guess the bigger question is, what do we think it might do? We want to think visually, imagine that something is the size of a person or a dog or something like that.

But if it’s a bunch of trusses, the beams can change their length and rearrange their attachments. So it could be any shape you wanted.

What do you envision the farther future looks like?

The dream is that they become useful for the average consumer. One of my former colleagues came up with a concept he called “the bucket of stuff.” It’s just a bucket of modules and you say, “I need to make the bed.” The robot climbs out, makes the bed and goes back into the bucket. Or “Change the oil in the car — take whatever form you need to do that task” — a kind of generically useful robot system that doesn’t have to be humanoid shape, but can be anything, and therefore is more versatile. I think when you can get something to be low-cost and that useful, it takes a while, but that’s the dream.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

If 1% of COVID-19 cases result in death, does that mean you have a 1% chance of dying if you catch it? A mathematician explains the difference between a population statistic and your personal risk

The risk of dying from COVID-19 varies from person to person. Jasmin Merdan/Moment via Getty Images
Joseph Stover, Gonzaga University

As of April 2023, about 1% of people who contracted COVID-19 ended up dying. Does that mean you have a 1% chance of dying from COVID-19?

That 1% is what epidemiologists call the case fatality rate, calculated by dividing the number of confirmed COVID-19 deaths by the number of confirmed cases. The case fatality rate is a statistic, or something that is calculated from a data set. Specifically, it is a type of statistic called a sample proportion, which measures the proportion of data that satisfies some criteria – in this case, the proportion of COVID-19 cases that ended with death.
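To make the arithmetic concrete, a sample proportion is just one count divided by another. Here is a minimal sketch in Python, using invented counts rather than the actual April 2023 totals:

```python
# Illustrative only: these counts are made up, not the real global figures.
confirmed_deaths = 6_900_000
confirmed_cases = 690_000_000

# The case fatality rate is a sample proportion: deaths divided by cases.
case_fatality_rate = confirmed_deaths / confirmed_cases
print(f"{case_fatality_rate:.1%}")  # prints "1.0%"
```

Any statistic of this form answers the same kind of question: out of the cases in this particular data set, what fraction satisfied the criterion?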

The goal of calculating a statistic like case fatality rate is normally to estimate an unknown proportion. In this case, if every person in the world were infected with COVID-19, what proportion would die? However, some people also use this statistic as a guide to estimate personal risk as well.

It is natural to think of such a statistic as a probability. For example, popular statements that you are more likely to get struck by lightning than die in a terrorist attack, or die driving to work than get killed in a plane crash, are based on statistics. But is it accurate to take these statements literally?

I’m a mathematician who studies probability theory. During the pandemic, I watched health statistics become a national conversation. The public was inundated with ever-changing data as research unfolded in real time, calling attention to specific risk factors such as preexisting conditions or age. However, using these statistics to accurately determine your own personal risk is nearly impossible since it varies so much from person to person and depends on intricate physical and biological processes.

The mathematics of probability

In probability theory, a process is considered random if it has an unpredictable outcome. This unpredictability could simply be due to difficulty in getting the necessary information to accurately predict the outcome. Random processes have observable events that can each be assigned a probability, or the tendency for that process to give that particular result.

A typical example of a random process is flipping a coin. A coin flip has two possible outcomes, each assigned a probability of 50%. Even though most people might think of this process as random, knowing the precise force applied to the coin can allow an observer to predict the outcome. But a coin flip is still considered random since measuring this force is not practical in real-life settings. A slight change can result in a different outcome for the coin flip.

You could predict the outcome of a coin toss if you had the right information.

A common way to think about the probability of heads being 50% is that, when a coin is flipped several times, you would expect 50% of those flips to be heads. For a large number of flips, in fact, very close to 50% of the flips will be heads. A mathematical theorem called the law of large numbers guarantees this, stating that the running proportion of outcomes will get closer and closer to the actual probability when the process is repeated many times. The more you flip the coin, the closer the running percentage of flips that are heads will get to 50%, essentially with certainty. This depends on each repeated coin flip happening in essentially identical conditions, though.
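The law of large numbers is easy to watch in action with a short simulation. This is an illustrative sketch, not anything from the article; the seed is arbitrary and only makes the run repeatable:

```python
import random

random.seed(42)  # arbitrary seed so the run repeats exactly

# Flip a fair coin and report the running proportion of heads at checkpoints.
heads = 0
flips = 0
for checkpoint in (10, 100, 1_000, 100_000):
    while flips < checkpoint:
        heads += random.random() < 0.5  # heads with probability 0.5
        flips += 1
    print(f"after {flips:>7,} flips: {heads / flips:.3f}")
```

The early checkpoints can wander well away from 0.500, but by 100,000 flips the running proportion sits very close to it.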

The 1% case fatality rate of COVID-19 can be thought of as the running percentage of COVID-19 cases that have resulted in death. It doesn’t represent the true average probability of death, though, since the virus, and the global population’s immunity and behavior, have changed so much over time. The conditions are not constant.

Only if the virus stopped evolving, everyone’s immunity and risk of death were identical and unchanging over time, and there were always people available to become infected would the case fatality rate, by the law of large numbers, get closer to the true average probability of death over time.

A 1% chance of dying?

The biological process of a disease leading to death is complex and uncertain. It is unpredictable and therefore random. Each person has a real physical risk of dying from COVID-19, though this risk varies over time and place and between individuals. So, at best, 1% could be the average probability of death within the population.

Health risks vary among demographic groups, too. For example, elderly individuals have a much higher risk of death than younger individuals. Tracking COVID-19 infections and how they end for a large number of people that are demographically similar to you would give a better estimate of personal risk.

You have a much smaller chance of dying from a car accident if you aren’t near any roads or cars. georgeclerk/E+ via Getty Images

Case fatality rate is a probability, but only when you look at the specific data set it was directly calculated from. If you were to write the outcome of every COVID-19 case in that data set on a strip of paper and randomly select one from a hat, you have a 1% chance of selecting a case that ended in death. Doing this only for cases from a particular group, such as a group of older adults with a higher risk or young children with a lower risk, would cause the percentage to be higher or lower. This is why 1% may not be a great estimate of personal risk for every person across all demographic groups.
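The hat-drawing thought experiment can also be sketched in code. The group sizes and death counts below are invented so that the overall rate comes out to 1% while the subgroup rates differ sharply:

```python
import random

# Hypothetical data set: each "strip of paper" records an age group and
# whether the case ended in death. All numbers are invented for illustration.
cases = (
    [("older", True)] * 40 + [("older", False)] * 960        # 4% died
    + [("younger", True)] * 10 + [("younger", False)] * 3990  # 0.25% died
)

overall = sum(died for _, died in cases) / len(cases)
print(f"overall: {overall:.2%}")  # prints "overall: 1.00%"

random.seed(0)
strip = random.choice(cases)  # drawing one strip from the hat
print(strip)

for group in ("older", "younger"):
    subset = [died for g, died in cases if g == group]
    print(f"{group}: {sum(subset) / len(subset):.2%}")
```

Drawing one strip at random gives a 1% chance of a death overall, but restricting the hat to a single group changes the odds considerably, which is exactly why a population-wide statistic can misstate an individual’s risk.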

We can apply this logic to car accidents. The chance of getting into a car crash on a 1,000-mile road trip is about 1 in 366. But if you are never anywhere near roads or cars, then you would have a 0% chance. This is really a probability only in the sense of drawing names from a hat. It also applies unevenly across the population – say, due to differences in driving behavior and local road conditions.

Although a population statistic is not the same thing as a probability, it might be a good estimate of it. But only if everyone in the population is demographically similar enough so that the statistic doesn’t change much when calculated for different subgroups.

The next time you’re confronted with such a population statistic, recognize what it actually is: It’s just the percent of a particular population that satisfies some criteria. Chances are, you’re not average for that population. Your own personal probability could be higher or lower.

Joseph Stover, Associate Professor of Mathematics, Gonzaga University

This article is republished from The Conversation under a Creative Commons license.

Yet another case of mishandling classified documents and alleged violations of the Espionage Act: 3 essential reads


Jack Teixeira is suspected of leaking classified U.S. documents on Western allies and the war in Ukraine. Stefani Reynolds/AFP via Getty Images
Howard Manly, The Conversation

The stunning arrest of 21-year-old Massachusetts Air National Guardsman Jack Teixeira on charges of illegally sharing U.S. intelligence has once again renewed questions on the handling of classified documents.

Questions about the vulnerability of the nation’s most sensitive intelligence, first raised a decade ago when Edward Snowden leaked top-secret documents, only intensified after a variety of classified papers were found earlier this year in the possession of former U.S. President Donald Trump at his home at Mar-a-Lago in Florida.

Teixeira is accused of the “alleged unauthorized removal, retention and transmission of classified national defense information.” He has not yet entered a plea to the charges, which involve the leaking of U.S. intelligence, including documents on Russian efforts in Ukraine and spying on U.S. allies.

The charges carry a maximum penalty of up to 15 years in prison.

Over the years, The Conversation U.S. has published numerous stories exploring the nature of classified documents – and how different motivations play a part in an individual’s decision to mishandle the nation’s secrets. Here are selections from those articles.

1. What are classified documents?

Before coming to academia, Jeffrey Fields worked for many years as an analyst at both the State Department and the Department of Defense.

In general, Fields writes, classified information is “the kind of material that the U.S. government or an agency deems sensitive enough to national security that access to it must be controlled and restricted.”

Of the three levels of classification, a “confidential” designation is the lowest and applies to information whose release could damage U.S. national security, Fields explains.

The next level is “secret” and refers to information whose disclosure could cause “serious” damage to U.S. national security.

The most serious designation is “top secret” and means disclosure of the document could cause “exceptionally grave” damage to national security.

2. Violations of the Espionage Act

On April 14, 2023, U.S. prosecutors charged Teixeira in connection with violations of the Espionage Act.

Joseph Ferguson and Thomas A. Durkin are both attorneys who specialize in and teach national security law. They explain the Espionage Act.

Typically, violations of the act apply to the unauthorized gathering, possessing or transmitting of certain sensitive government information and fall under 18 U.S.C. section 793.

Ferguson and Durkin also urge patience before rendering judgment on any case involving violations of the Espionage Act, in part because of the classified nature of the potential evidence and the risk that further exposure would have on U.S. national security.

“The Espionage Act is serious and politically loaded business,” they write. “These cases are controversial and complicated in ways that counsel patience and caution before reaching conclusions.”

3. How to fight future leaking

Cassandra Burke Robertson is a scholar of legal ethics who has studied ethical decision-making in the political sphere.

She points out that criminal prosecutions alone may not be the only way to prevent the flow of classified information.

It all depends on an individual’s motivation.

But unlike Snowden, Reality Leigh Winner or Chelsea Manning, Teixeira does not appear to have wanted to right a perceived wrong or become what is known as a whistleblower.

In cases where the motive is unclear, Robertson suggests that a potential deterrent is establishing a workplace environment that encourages employees to bring potential ethical and legal violations to an internal authority for review.

Known as internal whistleblowing, such actions may prove effective not only in protecting classified information from reaching the public but also in preventing another national security embarrassment.

Howard Manly, Race + Equity Editor, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Does online opioid treatment work?

The Covid-19 pandemic brought a sudden shift to virtual health care. That has increased access — and possibly outcomes, too — for patients with opioid use disorder.

While Covid-19’s death toll grabbed the spotlight these past two years, another epidemic continued marching grimly onward in America: deaths from opioid overdose. A record 68,630 individuals died from opioid overdoses in 2020, partly as a result of the isolation and social distancing forced by the pandemic; early data suggest that death rates in many states were even worse in the first half of 2021.

But the coronavirus pandemic may also have had a paradoxical benefit for those addicted to opioids: Because Covid-19 made in-person health care unsafe, US telehealth regulations were relaxed so that more services — including addiction treatment — could be provided online. As a result, people with opioid use disorder are accessing medication and support across the country in greater numbers than ever before. While it’s too soon to know for sure whether this helps more people kick their addiction, early signs are promising.

The federal government estimates that 2.7 million Americans — nearly 1 percent of the population — have opioid use disorder, also known as opioid addiction. It is a chronic brain disease that develops over time because of repeated use of prescription opioids such as hydrocodone, oxycodone and morphine or illicit fentanyl and heroin. A person with opioid use disorder has a 20 times greater risk of death from overdose, infectious diseases, trauma and suicide than one who does not.

Fortunately, two medications — methadone and buprenorphine, both approved by the US Food & Drug Administration — help individuals manage withdrawal symptoms and control or eliminate their compulsive opioid use. Patients who receive these medications fare better than those who do not on a long list of outcomes, says Eric Weintraub, who heads the Division of Alcohol and Drug Abuse at the University of Maryland School of Medicine. They have fewer overdoses; less injection drug use; reduced risk for disease transmission; decreased criminal activity; lower rates of illegal drug use; and better treatment-retention rates. Indeed, people with opioid use disorder receiving long-term treatment with methadone or buprenorphine are up to 50 percent less likely to die from an overdose.

“The first thing you do with the patient is give them an opportunity at medication,” says Dennis McCarty, an addiction researcher at Oregon Health & Science University who coauthored a look at treatments for opioid use disorder in the 2018 Annual Review of Public Health. “And then you can talk to them about counseling and their other needs — but first, let’s get them on medication.”

Yet only about 19 percent of adults with opioid use disorder received medication treatment in 2019, according to the National Survey on Drug Use and Health. One big reason for this is that regulations require physicians to see patients in person before prescribing medications in most cases. That makes it difficult for many people, particularly those in rural areas, to get treatment. Nearly 90 percent of rural communities have too few treatment programs to meet demand, for example, and about 30 percent of Americans live in counties without any buprenorphine providers.

Weintraub and his colleagues, based in Baltimore, used to travel 70 miles to Denton, Maryland, to see opioid use disorder patients at the Caroline County Health Department. “But we found that people as close as Federalsburg and Greensboro, which were 15 or 20 minutes away, couldn’t make it there because there is no public transportation,” he says. Some individuals were driving tractors or bikes to get to treatment.

The barrier to buprenorphine treatment began to melt away in early 2020 when Covid-19 first swept across the country. Shortly after the government declared an official public health emergency, it relaxed regulations, allowing approved doctors to prescribe buprenorphine via telehealth without an initial in-person visit. For the first time, too, telehealth services could be performed across state lines, giving patients access to doctors across the country. Government and private insurers agreed to pay for services delivered via telehealth. These changes allowed providers to quickly pivot to telemedicine to help people in their communities get treatment in a totally new way.

At Sadler Health Center in south-central Pennsylvania, for example, patients seeking treatment before the pandemic had to submit a urine sample for drug screening before every appointment. All sessions — held weekly to monthly, depending on the patient’s stage of recovery — were face-to-face. When the pandemic hit, the state stopped requiring drug screens, and the federal government allowed Sadler clinicians to prescribe medication via telehealth without an initial in-person visit.

In addition to traditional providers like Sadler, for-profit companies — which had emerged before the pandemic, recognizing a business opportunity in the huge unmet need for opioid use disorder treatment — began expanding rapidly. By allowing companies to prescribe from afar, the change in rules has allowed much more explosive growth, McCarty thinks.

Bicycle Health, for example, was founded in Redwood City, California, in 2017, and by early 2020, the company still had just one clinic that served about 100 neighborhood residents. After a patient’s first in-person visit and prescription, treatment and support services were delivered online. But when that in-person visit was no longer required during the pandemic, Bicycle grew, and now serves more than 10,000 patients in 23 states. Other companies such as Halcyon Health, Bright Heart Health, Ophelia, Workit Health and Eleanor Health also serve clients in multiple states, primarily via phone apps or webcams.

Although online treatment clearly removes some of the barriers that keep people with opioid use disorder from starting drug therapy, researchers are still not sure whether it works as well as in-person treatment for ongoing care. McCarty and colleagues recently reviewed nine studies to determine the effectiveness of telemedicine for medication-assisted treatment of opioid use disorder. In every study they looked at, telemedicine programs performed similarly to in-person treatment in adherence to medication and retention in the program, and in lowering rates of illicit drug use.

However, they noted, none of the studies was particularly robust, because very few clinicians had used telehealth before the pandemic. The researchers found just two randomized controlled trials — generally considered the strongest research design — and both had only a few participants; most of the other studies were observational accounts of single programs, which mostly also had few participants.

Research comparing in-person treatment for opioid addiction, pre-Covid, with telehealth treatment during the pandemic is beginning to emerge. One rural health center in Pennsylvania that pivoted to telehealth saw 91 percent of patients still in the program three months later, for example, compared with 94 percent for its pre-Covid in-person program. More definitive evidence is on the way, thanks to a larger study launched by the National Drug Abuse Treatment Clinical Trials Network to investigate the effectiveness of virtual treatment for opioid use disorder. Their results are expected in a few years.

David Kan, chief medical officer for Bright Heart Health, thinks that the emerging research about telemedicine in general offers encouragement for its use in treating opioid use disorder. “The research around telemedicine, no matter the disorder, has shown that people do as well as in-person care, but customer satisfaction tends to be higher,” he says. Kan also points to a review of 13 studies that examined psychotherapy and medication prescriptions delivered via telemedicine to address nicotine, alcohol and opioid use disorders. The authors of that review concluded that most patients were highly satisfied with telemedicine treatments, making them an effective alternative — especially when in-person treatments may be impractical.

For opioid use disorder treatment, the fact that patients like online treatment is significant, Kan says. “We have to make treatment easier than access to drugs themselves. If we can’t make treatment easier, then the alternative — which is continued use — will be more appealing.”

While medication is the best evidence-based treatment, most patients also need therapy and other support, says Shannon Schwoeble, recovery coach program manager at Bicycle Health. She oversees 24 peer-led online support groups, available to patients six days a week, and thinks online therapy may have some advantages over in-person sessions. “When I hear their stories, a lot of times it’s, ‘I was embarrassed. I don’t want to be seen going into a clinic,’ or ‘I’m not ready for my family to know that I’m doing this yet, but I really need this because I know it’s going to help me.’”

Medication reduces the cravings, says Schwoeble, who is in long-term recovery herself. “Then the real work starts, and you’re able to start unpacking all of the stuff that caused you to use in the first place.”

Patients may wonder whether they can receive effective support over Zoom, but Schwoeble sees it work. “You have your computer and you’re able to separate yourself a little bit,” she says. “We have people from all over the country, all demographics, walks of life, coming together and providing each other with support.”

But telemedicine may not work for everyone, says Karen Scott, a physician and president of the New York-based Foundation for Opioid Response Efforts (FORE), which gives grants to organizations fighting America’s opioid epidemic. FORE focuses on programs targeting the most high-risk opioid users, many of whom have additional social, mental health and other problems that must be addressed alongside their addiction.

“This takes much more than a 15-minute visit and writing a prescription for buprenorphine,” she says. “For the populations that are at highest risk of death and at highest risk of not staying in treatment, it requires a whole package of wraparound services as well as building a relationship.”

Several organizations that received FORE grants did use telemedicine to connect with patients early in the pandemic. “But from what we hear from a number of our grantees, as soon as they were able to safely transition back to in-person visits, they really wanted to do that because they were worried,” Scott says.

Currently, the telemedicine regulations are loosened only until the government-declared public health emergency comes to an end. McCarty and others expect that, at least for opioid use disorder, regulations permitting telehealth will be continued. But that hasn’t happened yet, leaving today’s burgeoning online treatment programs at risk.

Considering that some 2 million Americans need treatment and 90 percent don’t get it, says Ankit Gupta, founder of Bicycle Health, “I don’t see how you can fill that access gap without telemedicine.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

10 Tips to Recognize Ripe Fruits

(Culinary.net) Keeping fresh fruit around the house provides a healthier alternative when your sweet tooth comes calling. Understanding how and when to buy at the peak of ripeness (or just before, in some cases) can help you avoid food waste while keeping your doctor happy.

Consider these simple tips for recognizing ripe fruits:

  • Strawberries: Check the area at the top of the berry near the stem and leaves. A ripe strawberry is fully red; green or white near the top means the fruit is underripe.
  • Watermelon: The “field spot,” or the area where the melon sat on the ground, should be yellow, and a tap on the rind should produce a hollow sound.
  • Cherries: Flesh should appear dark crimson in color and feel firm.
  • Blueberries: Similar to cherries, color should deepen to dark blue. A reddish or pink color may be visible in unripe berries.
  • Blackberries: Look for a smooth texture without any red appearance. Because blackberries don’t ripen after being picked, they tend to spoil quickly.
  • Cantaloupe: You should detect a sweet smell, and the melon should feel heavy upon lifting.
  • Peaches: A sweet, fragrant odor should be apparent. Skin should feel tender but not soft.
  • Pineapple: Smell is again an important factor for pineapple – a sweet scent shows it’s ready, but a vinegary one likely means it’s overripe.
  • Raspberries: Generally follow the same rules as blackberries. A bright red color indicates ripe berries, which are best eaten within a couple of days of purchase.
  • Bananas: A ripe banana features a lightly spotted peel without significant bruising. Your best bet may be to purchase bananas still slightly green and allow them to ripen at home.

Find more food tips, tricks, recipes and videos at Culinary.net.

SOURCE:
Culinary.net