Saturday, November 4, 2023

Pursuing fusion power

Scientists have been chasing the dream of harnessing the reactions that power the Sun since the dawn of the atomic era. Interest, and investment, in the carbon-free energy source is heating up.

For the better part of a century now, astronomers and physicists have known that a process called thermonuclear fusion has kept the Sun and the stars shining for millions or even billions of years. And ever since that discovery, they’ve dreamed of bringing that energy source down to Earth and using it to power the modern world.

It’s a dream that’s only become more compelling today, in the age of escalating climate change. Harnessing thermonuclear fusion and feeding it into the world’s electric grids could help make all our carbon dioxide-spewing coal- and gas-fired plants a distant memory. Fusion power plants could offer zero-carbon electricity that flows day and night, with no worries about wind or weather — and without the drawbacks of today’s nuclear fission plants, such as potentially catastrophic meltdowns and radioactive waste that has to be isolated for thousands of centuries.

In fact, fusion is the exact opposite of fission: Instead of splitting heavy elements such as uranium into lighter atoms, fusion generates energy by merging various isotopes of light elements such as hydrogen into heavier atoms.

To make this dream a reality, fusion scientists must ignite fusion here on the ground — but without access to the crushing levels of gravity that accomplish this feat at the core of the Sun. Doing it on Earth means putting those light isotopes into a reactor and finding a way to heat them to hundreds of millions of degrees Celsius — turning them into an ionized “plasma” akin to the insides of a lightning bolt, only hotter and harder to control. And it means finding a way to control that lightning, usually with some kind of magnetic field that will grab the plasma and hold on tight while it writhes, twists and tries to escape like a living thing.

Both challenges are daunting, to say the least. It was only in late 2022, in fact, that a multibillion-dollar fusion experiment in California finally got a tiny isotope sample to put out more thermonuclear energy than went in to ignite it. And that event, which lasted only about one-tenth of a nanosecond, had to be triggered by the combined output of 192 of the world’s most powerful lasers.

Today, though, the fusion world is awash in plans for much more practical machines. Novel technologies such as high-temperature superconductors are promising to make fusion reactors smaller, simpler, cheaper and more efficient than once seemed possible. And better still, all those decades of slow, dogged progress seem to have passed a tipping point, with fusion researchers now experienced enough to design plasma experiments that work pretty much as predicted.

“There is a coming of age of technological capability that now matches up with the challenge of this quest,” says Michl Binderbauer, CEO of the fusion firm TAE Technologies in Southern California.

Indeed, more than 40 commercial fusion firms have been launched since TAE became the first in 1998 — most of them in the past five years, and many with a power-reactor design that they hope to have operating in the next decade or so. “I keep thinking that, oh sure, we’ve reached our peak,” says Andrew Holland, who maintains a running count as CEO of the Fusion Industry Association, an advocacy group he founded in 2018 in Washington, DC. “But no, we keep seeing more and more companies come in with different ideas.”

None of this has gone unnoticed by private investment firms, which have backed the fusion startups with some $6 billion and counting. This combination of new technology and private money creates a happy synergy, says Jonathan Menard, head of research at the Department of Energy’s Princeton Plasma Physics Laboratory in New Jersey, and not a participant in any of the fusion firms.

Compared with the public sector, companies generally have more resources for trying new things, says Menard. “Some will work, some won’t. Some might be somewhere in between,” he says. “But we’re going to find out, and that’s good.”

Granted, there’s ample reason for caution — starting with the fact that none of these firms has so far shown that it can generate net fusion energy even briefly, much less ramp up to a commercial-scale machine within a decade. “Many of the companies are promising things on timescales that generally we view as unlikely,” Menard says.

But then, he adds, “we’d be happy to be proven wrong.”

With more than 40 companies trying to do just that, we’ll know soon enough if one or more of them succeeds. In the meantime, to give a sense of the possibilities, here is an overview of the challenges that every fusion reactor has to overcome, and a look at some of the best-funded and best-developed designs for meeting those challenges.

Prerequisites for fusion

The first challenge for any fusion device is to light the fire, so to speak: It has to take whatever mix of isotopes it’s using as fuel, and get the nuclei to touch, fuse and release all that beautiful energy.

This means literally “touch”: Fusion is a contact sport, and the reaction won’t even begin until the nuclei hit head on. What makes this tricky is that every atomic nucleus contains positively charged protons and — Physics 101 — positive charges electrically repel each other. So the only way to overcome that repulsion is to get the nuclei moving so fast that they crash and fuse before they’re deflected.

This need for speed requires a plasma temperature of at least 100 million degrees C. And that’s just for a fuel mix of deuterium and tritium, the two heavy isotopes of hydrogen. Other isotope mixes would have to get much hotter — which is why “DT” is still the fuel of choice in most reactor designs.

But whatever the fuel, the quest to reach fusion temperatures generally comes down to a race between researchers’ efforts to pump in energy with an external source such as microwaves, or high-energy beams of neutral atoms, and plasma ions’ attempts to radiate that energy away as fast as they receive it.

The ultimate goal is to get the plasma past the temperature of “ignition,” which is when fusion reactions will start to generate enough internal energy to make up for that radiating away of energy — and power a city or two besides.

But this just leads to the second challenge: Once the fire is lit, any practical reactor will have to keep it lit — as in, confine these superheated nuclei so that they’re close enough to maintain a reasonable rate of collisions for long enough to produce a useful flow of power.

In most reactors, this means protecting the plasma inside an airtight chamber, since stray air molecules would cool down the plasma and quench the reaction. But it also means holding the plasma away from the chamber walls, which are so much colder than the plasma that the slightest touch will also kill the reaction. The problem is, if you try to hold the plasma away from the walls with a non-physical barrier, such as a strong magnetic field, the flow of ions will quickly get distorted and rendered useless by currents and fields within the plasma.

Unless, that is, you’ve shaped the field with a great deal of care and cleverness — which is why the various confinement schemes account for some of the most dramatic differences between reactor designs.

Finally, practical reactors will have to include some way of extracting the fusion energy and turning it into a steady flow of electricity. Although there has never been any shortage of ideas for this last challenge, the details depend critically on which fuel mix the reactor uses.

With deuterium-tritium fuel, for example, the reaction produces most of its energy in the form of high-speed particles called neutrons, which can’t be confined with a magnetic field because they don’t have a charge. This lack of an electric charge allows the neutrons to fly not only through the magnetic fields but also through the reactor walls. So the plasma chamber will have to be surrounded by a “blanket”: a thick layer of some heavy material like lead or steel that will absorb the neutrons and turn their energy into heat. The heat can then be used to boil water and generate electricity via the same kind of steam turbines used in conventional power plants.
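
To put a number on that “most of its energy” claim, here is a back-of-envelope illustration of ours (standard textbook values, not figures from the article): each deuterium-tritium reaction releases roughly 17.6 million electron volts, and momentum conservation splits that energy between the two products in inverse proportion to their masses, so the uncharged neutron flies off with about four-fifths of it.

# Rough energetics of D + T -> helium-4 + neutron (textbook values, not from the article).
# Momentum conservation gives the lighter product the larger share of the released energy.
Q_MEV = 17.6                        # approximate total energy released per reaction, in MeV
M_NEUTRON, M_HELIUM4 = 1.0, 4.0     # product masses in atomic mass units (rounded)

e_neutron = Q_MEV * M_HELIUM4 / (M_NEUTRON + M_HELIUM4)   # ~14.1 MeV
e_helium4 = Q_MEV * M_NEUTRON / (M_NEUTRON + M_HELIUM4)   # ~3.5 MeV

print(f"neutron:  {e_neutron:.1f} MeV ({e_neutron / Q_MEV:.0%} of the total)")
print(f"helium-4: {e_helium4:.1f} MeV")

That lopsided split is why the blanket, rather than the magnetic field, ends up catching most of the reactor’s power.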

Many DT reactor designs also call for including some lithium in the blanket material, so that the neutrons will react with that element to produce new tritium nuclei. This step is critical: Since each DT fusion event consumes one tritium nucleus, and since this isotope is radioactive and doesn’t exist in nature, the reactor would soon run out of fuel if it didn’t exploit this opportunity to replenish it.

The complexities of DT fuel are cumbersome enough that some of the more audacious fusion startups have opted for alternative fuel mixes. Binderbauer’s TAE, for example, is aiming for what many consider the ultimate fusion fuel: a mix of protons and boron-11. Not only are both ingredients stable, nontoxic and abundant, their sole reaction product is a trio of positively charged helium-4 nuclei whose energy is easily captured with magnetic fields, with no need for a blanket.

But alternative fuels present different challenges, such as the fact that TAE will have to get its proton-boron-11 mix up to fusion temperatures of at least a billion degrees Celsius, roughly 10 times higher than the DT threshold.

A plasma donut

The basics of these three challenges — igniting the plasma, sustaining the reaction, and harvesting the energy — were clear from the earliest days of fusion energy research. And by the 1950s, innovators in the field had begun to come up with any number of schemes for solving them — most of which fell by the wayside after 1968, when Soviet physicists went public with a design they called the tokamak.

Like several of the earlier reactor concepts, tokamaks featured a plasma chamber something like a hollow donut — a shape that allowed the ions to circulate endlessly without hitting anything — and controlled the plasma ions with magnetic fields generated by current-carrying coils wrapped around the outside of the donut.

But tokamaks also featured a new set of coils that caused an electric current to go looping around and around the donut right through the plasma, like a circular lightning bolt. This current gave the magnetic fields a subtle twist that went a surprisingly long way toward stabilizing the plasma. And while the first of these machines still couldn’t get anywhere close to the temperatures and confinement times a power reactor would need, the results were so much better than anything seen before that the fusion world pretty much switched to tokamaks en masse.

Since then, more than 200 tokamaks of various designs have been built worldwide, and physicists have learned so much about tokamak plasmas that they can confidently predict the performance of future machines. That confidence is why an international consortium of funding agencies has been willing to commit more than $20 billion to build ITER (Latin for “the way”): a tokamak scaled up to the size of a 10-story building. Under construction in southern France since 2010, ITER is expected to start experiments with deuterium-tritium fuel in 2035. And when it does, physicists are quite sure that ITER will be able to hold and study burning fusion plasmas for minutes at a time, providing a unique trove of data that will hopefully be useful in the construction of power reactors.

But ITER was also designed as a research machine with a lot more instrumentation and versatility than a working power reactor would ever need — which is why two of today’s best-funded fusion startups are racing to develop tokamak reactors that would be a lot smaller, simpler and cheaper.

First out of the gate was Tokamak Energy, a UK firm founded in 2009. The company has received some $250 million in venture capital over the years to develop a reactor based on “spherical tokamaks” — a particularly compact variation that looks more like a cored apple than a donut.

But coming up fast is Commonwealth Fusion Systems in Massachusetts, an MIT spinoff that wasn’t even launched until 2018. Although Commonwealth’s tokamak design uses a more conventional donut configuration, access to MIT’s extensive fundraising network has already brought the company nearly $2 billion.

Both firms are among the first to generate their magnetic fields with cables made of high-temperature superconductors (HTS). Discovered in the 1980s but only recently available in cable form, these materials can carry an electrical current without resistance even at a relatively torrid 77 Kelvins, or -196 degrees Celsius, warm enough to be achieved with liquid nitrogen or helium gas. This makes HTS cables much easier and cheaper to cool than the ones that ITER will use, since those will be made of conventional superconductors that need to be bathed in liquid helium at 4 Kelvins.

But more than that, HTS cables can generate much stronger magnetic fields in a much smaller space than their low-temperature counterparts — which means that both companies have been able to shrink their power plant designs to a fraction of the size of ITER.

As dominant as tokamaks have been, however, most of today’s fusion startups are not using that design. They’re reviving older alternatives that could be smaller, simpler and cheaper than tokamaks, if someone could make them work.

Plasma vortices

Prime examples of these revived designs are fusion reactors based on smoke-ring-like plasma vortices known as the field-reversed configuration (FRC). Resembling a fat, hollow cigar that spins on its axis like a gyroscope, an FRC vortex holds itself together with its own internal currents and magnetic fields — which means there’s no need for an FRC reactor to keep its ions endlessly circulating around a donut-shaped plasma chamber. In principle, at least, the vortex will happily stay put inside a straight cylindrical chamber, requiring only a light-touch external field to hold it steady. This means that an FRC-based reactor could ditch most of those pricey, power-hungry external field coils, making it smaller, simpler and cheaper than a tokamak or almost anything else.

In practice, unfortunately, the first experiments with these whirling plasma cigars back in the 1960s found that they always seemed to tumble out of control within a few hundred microseconds, which is why the approach was mostly pushed aside in the tokamak era.

Yet the basic simplicity of an FRC reactor never fully lost its appeal. Nor did the fact that FRCs could potentially be driven to extreme plasma temperatures without flying apart — which is why TAE chose the FRC approach in 1998, when the company started on its quest to exploit the 1-billion-degree proton-boron-11 reaction.

Binderbauer and his TAE cofounder, the late physicist Norman Rostoker, had come up with a scheme to stabilize and sustain the FRC vortex indefinitely: Just fire in beams of fresh fuel along the vortex’s outer edges to keep the plasma hot and the spin rate high.

It worked. By the mid-2010s, the TAE team had shown that those particle beams coming in from the side would, indeed, keep the FRC spinning and stable for as long as the beam injectors had power — just under 10 milliseconds with the lab’s stored-energy supply, but as long as they want (presumably) once they can siphon a bit of spare energy from a proton-boron-11-burning reactor. And by 2022, they had shown that their FRCs could retain that stability well above 70 million degrees C.

With the planned 2025 completion of its next machine, the 30-meter-long Copernicus, TAE is hoping to actually reach burn conditions above 100 million degrees (albeit using plain hydrogen as a stand-in). This milestone should give the TAE team essential data for designing their DaVinci machine: a reactor prototype that will (they hope) start feeding p-B11-generated electricity into the grid by the early 2030s.

Plasma in a can

Meanwhile, General Fusion of Vancouver, Canada, is partnering with the UK Atomic Energy Authority to construct a demonstration reactor for perhaps the strangest concept of them all, a 21st-century revival of magnetized target fusion. This 1970s-era concept amounts to firing a plasma vortex into a metal can, then crushing the can. Do that fast enough and the trapped plasma will be compressed and heated to fusion conditions. Do it often enough to get a more or less continuous string of fusion energy pulses out, and you’ll have a power reactor.

In General Fusion’s current concept, the metal can will be replaced by a molten lead-lithium mix that’s held by centrifugal force against the sides of a cylindrical container spinning at 400 RPM. At the start of each reactor cycle, a downward-pointing plasma gun will inject a vortex of ionized deuterium-tritium fuel — the “magnetized target” — which will briefly turn the whirling, metal-lined container into a miniature spherical tokamak. Next, a forest of compressed-air pistons arrayed around the container’s outside will push the lead-lithium mix into the vortex, crushing it from a diameter of three meters down to 30 centimeters within about five milliseconds, and raising the deuterium-tritium to fusion temperatures.
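
Some rough arithmetic of ours, based only on the figures above, conveys how violent that squeeze is: shrinking the vortex from about 3 meters across to 30 centimeters in roughly 5 milliseconds implies an average inward wall speed of a few hundred meters per second and, for a roughly spherical vortex, about a thousandfold drop in volume.

# Back-of-envelope numbers for the compression step (our illustration; assumes a roughly
# spherical vortex and a constant wall speed, which the real machine won't have).
d_start_m, d_end_m = 3.0, 0.30    # vortex diameter before and after compression
duration_s = 5e-3                 # compression time, about five milliseconds

avg_wall_speed = (d_start_m - d_end_m) / 2 / duration_s   # radial travel / time: ~270 m/s
volume_factor = (d_start_m / d_end_m) ** 3                # ~1000x reduction in volume

print(f"average inward wall speed: ~{avg_wall_speed:.0f} m/s")
print(f"volume compression factor: ~{volume_factor:.0f}x")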

The resulting blast will then strike the molten lead-lithium mix, pushing it back out to the rotating cylinder walls and resetting the system for the next cycle — which will start about a second later. Meanwhile, on a much slower timescale, pumps will steadily circulate the molten metal to the outside so that heat exchangers can harvest the fusion energy it’s absorbed, and other systems can scavenge the tritium generated from neutron-lithium interactions.

All these moving parts require some intricate choreography, but if everything works the way the simulations suggest, the company hopes to build a full-scale, deuterium-tritium-burning power plant by the 2030s.

It’s anybody’s guess when (or if) the particular reactor concepts mentioned here will result in real commercial power plants — or whether the first to market will be one of the many alternative reactor designs being developed by the other 40-plus fusion firms.

But then, few if any of these firms see the quest for fusion power as either a horse race or a zero-sum game. Many of them have described their rivalries as fierce, but basically friendly — mainly because, in a world that’s desperate for any form of carbon-free energy, there’s plenty of room for multiple fusion reactor types to be a commercial success.

“I will say my idea is better than their idea. But if you ask them, they will probably tell you that their idea is better than my idea,” says physicist Michel Laberge, General Fusion’s founder and chief scientist. “Most of these guys are serious researchers, and there’s no fundamental flaw in their schemes.” The actual chance of success, he says, is improved by having more possibilities. “And we do need fusion on this planet, badly.”

Editor’s note: This story was changed on November 2, 2023, to correct the amount of compression that General Fusion is aiming for in its reactor; it is 30 centimeters, not 10. The text was also changed to clarify that the blast of energy leads to the resetting of the magnetized target reactor.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Vampire viruses prey on other viruses to replicate themselves – and may hold the key to new antiviral therapies

The satellite virus MiniFlayer (purple) infects cells by attaching itself to the neck of its helper virus, MindFlayer (gray). Tagide deCarvalho, CC BY-SA
Ivan Erill, University of Maryland, Baltimore County

Have you ever wondered whether the virus that gave you a nasty cold can catch one itself? It may comfort you to know that, yes, viruses can actually get sick. Even better, as karmic justice would have it, the culprits turn out to be other viruses.

Viruses can get sick in the sense that their normal function is impaired. When a virus enters a cell, it can either go dormant or start replicating right away. When replicating, the virus essentially commandeers the molecular factory of the cell to make lots of copies of itself, then breaks out of the cell to set the new copies free.

Sometimes a virus enters a cell only to find that its new temporary dwelling is already home to another dormant virus. Surprise, surprise. What follows is a battle for control of the cell that can be won by either party.

But sometimes a virus will enter a cell to find a particularly nasty shock: a viral tenant waiting specifically to prey on the incoming virus.

I am a bioinformatician, and my laboratory studies the evolution of viruses. We frequently run into “viruses of viruses,” but we recently discovered something new: a virus that latches onto the neck of another virus.

A world of satellites

Biologists have known of the existence of viruses that prey on other viruses – referred to as viral “satellites” – for decades. In 1973, researchers studying bacteriophage P2, a virus that infects the gut bacterium Escherichia coli, found that this infection sometimes led to two different types of viruses emerging from the cell: phage P2 and phage P4.

Bacteriophage P4 is a temperate virus, meaning it can integrate into the chromosome of its host cell and lie dormant. When P2 infects a cell already harboring P4, the latent P4 quickly wakes up and uses the genetic instructions of P2 to make hundreds of its own small viral particles. The unsuspecting P2 is lucky to replicate a few times, if at all. In this case, biologists refer to P2 as a “helper” virus, because the satellite P4 needs P2’s genetic material to replicate and spread.

Bacteriophages are viruses that infect bacteria.

Subsequent research has shown that most bacterial species have a diverse set of satellite-helper systems, like that of P4-P2. But viral satellites are not limited to bacteria. Shortly after the largest known virus, mimivirus, was discovered in 2003, scientists also found its satellite, which they named Sputnik. Plant viral satellites that lurk in plant cells waiting for other viruses are also widespread and can have important effects on crops.

Viral arms race

Although researchers have found satellite-helper viral systems in pretty much every domain of life, their importance to biology remains underappreciated. Most obviously, viral satellites have a direct impact on their “helper” viruses, typically maiming them but sometimes making them more efficient killers. Yet that is probably the least of their contributions to biology.

Satellites and their helpers are also engaged in an endless evolutionary arms race. Satellites evolve new ways to exploit helpers and helpers evolve countermeasures to block them. Because both sides are viruses, the results of this internecine war necessarily include something of interest to people: antivirals.

Recent work indicates that many antiviral systems thought to have evolved in bacteria, like the CRISPR-Cas9 molecular scissors used in gene editing, may have originated in phages and their satellites. Somewhat ironically, with their high turnover and mutation rates, helper viruses and their satellites turn out to be evolutionary hot spots for antiviral weaponry. Trying to outsmart each other, satellite and helper viruses have come up with an unparalleled array of antiviral systems for researchers to exploit.

MindFlayer and MiniFlayer

Viral satellites have the potential to transform how researchers understand antiviral strategies, but there is still a lot to learn about them. In our recent work, my collaborators and I describe a satellite bacteriophage completely unlike previously known satellites, one that has evolved a unique, spooky lifestyle.

Undergraduate phage hunters at the University of Maryland, Baltimore County isolated a satellite phage called MiniFlayer from the soil bacterium Streptomyces scabiei. MiniFlayer was found in close association with a helper virus called bacteriophage MindFlayer that infects the Streptomyces bacterium. But further research revealed that MiniFlayer was no ordinary satellite.

This image shows Streptomyces satellite phage MiniFlayer (purple) attached to the neck of its helper virus, Streptomyces phage MindFlayer (gray). Tagide deCarvalho, CC BY-SA

MiniFlayer is the first satellite phage known to have lost its ability to lie dormant. Not being able to lie in wait for your helper to enter the cell poses an important challenge to a satellite phage. If you need another virus to replicate, how do you guarantee that it makes it into the cell around the same time you do?

MiniFlayer addressed this challenge with evolutionary aplomb and horror-movie creativity. Instead of lying in wait, MiniFlayer has gone on the offensive. Borrowing from both “Dracula” and “Alien,” this satellite phage evolved a short appendage that allows it to latch onto its helper’s neck like a vampire. Together, the unwary helper and its passenger travel in search of a new host, where the viral drama will unfold again. We don’t yet know how MiniFlayer subdues its helper, or whether MindFlayer has evolved countermeasures.

If the recent pandemic has taught us anything, it is that our supply of antivirals is rather limited. Research on the complex, intertwined and at times predatory nature of viruses and their satellites, like the ability of MiniFlayer to attach to its helper’s neck, has the potential to open new avenues for antiviral therapy.

Ivan Erill, Professor of Biological Sciences, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license.

The remaining frontiers in fighting hepatitis C

A scientist whose work was key to identifying, studying and finding treatments for this life-threatening virus discusses the scientific journey and challenges that persist

A, B, C, D, E: It’s a short, menacing alphabet representing the five types of virus causing viral hepatitis, a sickness afflicting some 400 million people around the world today.

Hepatitis viruses are a set of very different pathogens that kill 1.4 million people annually and infect more than HIV and the malaria pathogen do combined. Most of the deaths are from cirrhosis of the liver or hepatic cancer due to chronic infections with hepatitis viruses B or C, picked up through contact with contaminated blood.

Hepatitis B was the first of the five to be discovered, in the 1960s, by biochemist Baruch S. Blumberg. Hepatitis A, which is most commonly spread through contaminated food and water, was next, discovered in 1973 by researchers Stephen Mark Feinstone, Albert Kapikian and Robert Purcell.

Screening tests for those two types of viruses paved the way to discovering a third. In the 1970s, hematologist Harvey Alter examined unexplained cases of hepatitis in patients after blood transfusions and found that only 25 percent of such cases were caused by the hepatitis B virus, and none were linked to the hepatitis A virus. The rest were caused by an unidentified transmissible agent that could persist in the body as a chronic infection and lead to liver cirrhosis and liver cancer.

The agent behind this disease, named non-A, non-B hepatitis, remained a mystery for a decade until Michael Houghton, a microbiologist working at the biotechnology company Chiron Corporation, and his team sequenced the agent’s genome in 1989 after years of intensive investigation. They identified it as a novel virus of the family to which yellow fever virus belongs: the flaviviruses, a group of RNA viruses often transmitted through the bite of infected arthropods.

But there was more to the story. Scientists needed to show that this new virus could, indeed, cause hepatitis C on its own — a feat achieved in 1997, when Charles M. Rice, then a virologist at Washington University in St. Louis, and others succeeded in creating a form of the virus in the lab that could replicate in the only animal model for hepatitis C, the chimpanzee. When they injected the virus into the liver of chimpanzees, it triggered clinical hepatitis, demonstrating the direct connection between hepatitis C and non-A, non-B hepatitis.

The findings led to lifesaving hepatitis C tests to avert infections through transfusions with contaminated blood, as well as to the development of effective antiviral medications to treat the disease. In 2020, in the thick of the SARS-CoV-2 pandemic, Alter, Houghton and Rice received a Nobel Prize in Medicine for their work on identifying the virus.

To learn more about hepatitis C history and the treatment and prevention challenges that remain, Knowable Magazine spoke with Rice, now at the Rockefeller University, at the 72nd Lindau Nobel Laureate Meeting in Germany in June 2023. This conversation has been edited for length and clarity.

What were the challenges at the time you began your research on hepatitis C?

The realization that an agent was behind non-A, non-B hepatitis had initiated a virus hunt to try and figure out what the causative agent was. Michael Houghton and his group at Chiron won that race and reported the partial sequence of the virus in 1989 in Science.

It was an interesting kind of a dilemma for me as an early-stage assistant professor at Washington University in St. Louis, where I’d been working on yellow fever. All of a sudden, we had this new human virus that dropped into our laps and joined the flavivirus family; we had to decide if we were going to shift some of our attention to work on this virus. Initially, people in the viral hepatitis field invited us to meetings, but only because we were doing work on the related virus, yellow fever, not because we were considered major players in the field.

The main challenge was that we could not grow the virus in cell culture. And the only experimental model was the chimpanzee, so it was really difficult for laboratories to study this virus.

There were two major goals. One was to establish a cell culture system where you could replicate the virus and study it. And the other was to try to create a system where we could do genetics on the virus. It was shown to be an RNA virus, and the collection of tools available for modifying RNA at that time, in the early 1990s, was not the same as it was for DNA. Now that’s changed to some extent, with modern editing technologies.

If there’s one lesson to be learned from this hepatitis C story, it’s that persistence pays off.

This journey started with an unknown virus and ended up with treatment in a relatively short period of time.

I don’t think it was a short period of time, between all of the failures to actually get a cell culture system and to show that we had a functional clone. From 1989, when the virus sequence was reported, to 2011, when the first antiviral compounds were produced, was 22 years.

And then, that initial generation of treatment compounds was not the greatest, and they were combined with the treatment that we were trying to get rid of — interferon — which made people quite ill and didn’t always cure them. They only had about a 50 percent cure rate.

It was 2014 when the interferon-free cocktails came about. And that was really amazing.

There were people who thought, “You are not going to be able to develop a drug cocktail that can eliminate this virus.” It was presumptuous to think that one could, but it was accomplished by biotech and the pharmaceutical industry. So it is really quite a success story, but I wish it could have been faster.

What are the current challenges in combating hepatitis C?

One thing that was a little sobering and disappointing for me was that when these medical advances are made and shown to be efficacious, it is not possible to get these drugs to everybody who needs them and successfully treat them. It’s a lot more complicated, in part because of the economics — how much the companies decide to charge for the drugs.

Also, it’s difficult to identify people who are infected with hepatitis C, because it’s often asymptomatic. Even when identified, getting people into treatment is challenging given public health capabilities that vary at the local, national and global levels. So we have wonderful drugs that can basically cure anybody, but I think we still could use a vaccine for hepatitis C.

During the first year of the Covid-19 pandemic, you won the Nobel Prize for the discovery of the hepatitis C virus. What was that experience like?

It was December 2020, and we were working on SARS-CoV-2 at the peak of the pandemic in New York City. My spouse and the dogs were off at our house in Connecticut, and I was living in the apartment in Manhattan. And I got this call at 4:30 in the morning. It was pretty shocking.

The pandemic made people more aware of what a highly infectious, disease-causing virus can do to our world. It encouraged the rapid dissemination of research results and more open publications. It also really made us appreciate how the same virus does different things depending upon who’s infected: In the case of Covid-19, it’s not good to be old, for example.

After many decades working with viruses, what would you say is the next frontier in virology?

There’s a lot that we don’t understand about these viruses. The more we study them, the more we understand about ourselves, our cells and our antiviral defense systems.

And there’s also great power in terms of being able to diagnose new viruses. The sequencing technology, the functional genomics technologies, all of those things, when applied to virology, give us a much richer picture of how these viruses interact with cells. I think it’s a golden age.

You have been working with flaviviruses (dengue, Zika, yellow fever and hepatitis C) for many decades. Zika and dengue pose an ongoing threat worldwide and, in particular, Latin America. Based on the successful example of hepatitis C, what can scientific research do to mitigate the impact of these viruses?

For viruses like Zika, developing a vaccine is probably going to be fairly straightforward — except that since Zika is so transient, it makes it hard to prove that your vaccine works. You would have to do a human challenge study, in which volunteers are deliberately exposed to an infection in a safe way with health-care support.

For dengue, it’s much more difficult, because there are four different serotypes — different versions of the same virus — and infection with one serotype can put you at increased risk of more severe disease if you get infected with a second serotype. Eliciting a balanced response that would protect you against all four dengue serotypes is the holy grail of trying to develop a dengue vaccine.

People are using various approaches to accomplish that. The classic one is to take live attenuated versions — weakened forms of viruses that have been modified so they can’t cause severe illness but can still stimulate the immune system — of each of the four serotypes and mix them together. Another is to make chimeric viruses: a combination of genetic material from different viruses, resulting in new viruses that have features of each of the four dengue serotypes, engineered into the backbone of the yellow fever vaccine. But this hasn’t worked as well as people have hoped. I think the cocktail of live, attenuated dengue variants is probably the most advanced approach. But I would guess that given the success of Covid-19 mRNA vaccines, the mRNA approach will also be tried out.

These diseases are not going to go away. You can’t eradicate every mosquito. And you can’t really immunize every susceptible vertebrate host. So occasionally there’s going to be spillover into the human population. We need to keep working on these because they are big problems.

You began your career at the California Institute of Technology studying RNA viruses, such as the mosquito-borne Sindbis virus, and then flaviviruses that cause encephalitis, polyarthritis, yellow fever and dengue fever. Later on, you also studied hepatitis C virus. Is there any advantage for virologists in changing the viruses they study throughout their career?

They’re all interesting, right? And they are all different in their own ways. I say that my career has been a downward spiral of tackling increasingly intricate viruses. Initially, the alphaviruses — a viral family which includes chikungunya virus, for example — were easy. The classical flaviviruses — like yellow fever, dengue fever, West Nile viruses and Zika virus, among others — were a little more difficult, but the hepatitis C virus was impossible for 15 years, until we, and others, finally achieved a complete replication system in the laboratory.

We coexist daily with viruses, but the pandemic may have given people the idea that all these microorganisms are invariably life-threatening.

We have to treat them with respect. We’ve seen what can happen with the emergence of a novel coronavirus that can spread during an asymptomatic phase of infection. You can’t be prepared for everything, but in some respects our response was a lot slower and less effective than it could have been.

If there’s anything that we’ve learned over the last 10 years with the new nucleic acid sequencing technologies, it’s that our past view of the virosphere was very narrow. And if you really look at what’s out there, the estimated virus diversity is a staggering number, like 10³¹ types. Although most of them are not pathogenic to humans, some are. We have to take this threat seriously.

Is science prepared?

I think so, but there has to be an investment, a societal investment. And that investment has to be not only in infrastructure that can react quickly to something new, but also in establishing a repository of protective antibodies and small molecules against viruses that we know could be future threats.

Often, these things go in cycles. There’s a disaster, like the Covid-19 pandemic, people are changed by the experience, but then they think “Oh, well, the virus has faded into the background, the threat is over.” And that’s just not the case. We need a more sustained plan rather than a reactive stance. And that’s hard to do when resources and money are limited.

What is the effect of science illiteracy, conspiracy theories and lack of science information on the battle against viruses?

These are huge issues, and I don’t know the best way to combat them and educate people. Any combative, confrontational kind of response — it’s just not going to work. People will get more resolute in their entrenched beliefs and not hear or believe compelling evidence to the contrary.

It’s frustrating. I think that we have amazing tools and the power to make really significant advances to help people. It is more than a little discouraging for scientists when there’s a substantial fraction of people who don’t believe in things that are well-supported by facts.

It’s in large part an educational problem. I think we don’t put enough money into education, particularly early education. A lot of people don’t understand how much of what we take for granted today is underpinned by science. All this technology — good, bad or ugly — is all science.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

What is a virtual power plant? An energy expert explains

A large-scale battery storage system in Long Beach, Calif., provides renewable electricity during peak demand periods. Patrick T. Fallon/AFP via Getty Images
Daniel Cohan, Rice University

After nearly two decades of stagnation, U.S. electricity demand is surging, driven by growing numbers of electric cars, data centers and air conditioners in a warming climate. But traditional power plants that generate electricity from coal, natural gas or nuclear energy are retiring faster than new ones are being built in this country. Most new supply is coming from wind and solar farms, whose output varies with the weather.

That’s left power companies seeking new ways to balance supply and demand. One option they’re turning to is virtual power plants.

These aren’t massive facilities generating electricity at a single site. Rather, they are aggregations of electricity producers, consumers and storers – collectively known as distributed energy resources – that grid managers can call on as needed.

Some of these sources, such as batteries, may deliver stored electric power. Others may be big electricity consumers, such as factories, whose owners have agreed to cut back their power use when demand is high, freeing up energy for other customers. Virtual power sources typically are quicker to site and build, and can be cleaner and cheaper to operate, than new power plants.

Virtual power plants are more resilient against service outages than large, centralized generating stations because they distribute energy resources across large areas.

A growing resource

Virtual power plants aren’t new. The U.S. Department of Energy estimates that there are already 30 to 60 gigawatts of them in operation today. A gigawatt is 1 billion watts – roughly the output of 2.5 million solar photovoltaic panels or one large nuclear reactor.

Most of these virtual power plants are industrial customers that have agreed to reduce demand when conditions are tight. But as growing numbers of homes and small businesses add rooftop solar panels, batteries and electric cars, these energy customers can become not only consumers but also suppliers of power to the grid.

For example, homeowners can charge up their batteries with rooftop solar when it’s sunny, and discharge power back to the grid in the evening when demand is high and prices sometimes spike.

As smart thermostats and water heaters, rooftop solar panels and batteries enable more customers to participate in them, DOE estimates that virtual power plants could triple in scale by 2030. That could cover roughly half of the new capacity that the U.S. will need to cover growing demand and replace retiring older power plants. This growth would help to limit the cost of building new wind and solar farms and gas plants.
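
The arithmetic behind those figures is simple to check (our illustration of the numbers quoted above, not an official DOE calculation): a gigawatt spread over 2.5 million panels implies roughly 400 watts per panel, and tripling today’s 30 to 60 gigawatts of virtual power plants yields roughly 90 to 180 gigawatts by 2030.

# Simple check of the figures quoted above (our arithmetic, not DOE's model).
GW = 1e9                                         # watts in a gigawatt

watts_per_panel = GW / 2.5e6                     # implied output: ~400 W per solar panel
today_gw = (30, 60)                              # DOE estimate of existing virtual power plants
by_2030_gw = tuple(3 * gw for gw in today_gw)    # a tripling by 2030

print(f"implied output per panel: {watts_per_panel:.0f} W")
print(f"virtual power plants by 2030: roughly {by_2030_gw[0]}-{by_2030_gw[1]} GW")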

And because virtual power plants are located where electricity is consumed, they’ll ease the burden on aging transmission systems that have struggled to add new lines.

A battery display panel inside a model home in Menifee, Calif., where 200 houses in a development are all-electric, equipped with solar panels and batteries and linked by a microgrid that can power the community during outages. Watchara Phomicinda/MediaNews Group/The Press-Enterprise via Getty Images

New roles for power customers

Virtual power plants scramble the roles of electricity producers and consumers. Traditional power plants generate electricity at central locations and transmit it along power lines to consumers. For the grid to function, supply and demand must be precisely balanced at all times.

Customer demand is typically assumed to be a given that fluctuates with the weather but follows a fairly predictable pattern over the course of a day. To satisfy it, grid operators dispatch a mix of baseload sources that operate continuously, such as coal and nuclear plants, and more flexible sources such as gas and hydropower that can modulate their output quickly as needed.

Output from wind and solar farms rises and falls during the day, so other sources must operate more flexibly to keep supply and demand balanced. Still, the basic idea is that massive facilities produce power for millions of passive consumers.

Virtual power plants upend this model by embracing the fact that consumers can control their electricity demand. Industrial consumers have long found ways to flex their operations, limiting demand when power supplies are tight in return for incentives or discounted rates.

Now, thermostats and water heaters that communicate with the grid can let households modulate their demand too. For example, smart electric water heaters can heat water mostly when power is abundant and cheap, and limit demand when power is scarce.
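
As a loose illustration of the control logic involved (the rule, names and thresholds here are hypothetical, not any particular utility’s program), a grid-connected water heater might protect a minimum comfort temperature but otherwise heat only when power is plentiful and cheap:

# Hypothetical sketch of a price-responsive smart water heater rule (illustrative only).
def should_heat(tank_temp_c, price_per_kwh, min_temp_c=45.0, target_temp_c=60.0, cheap_price=0.10):
    """Return True if the heater should draw power right now."""
    if tank_temp_c < min_temp_c:
        return True                        # never let the tank fall below the comfort floor
    if tank_temp_c >= target_temp_c:
        return False                       # tank is already hot enough
    return price_per_kwh <= cheap_price    # otherwise heat only when power is cheap

# Midday solar surplus (cheap) vs. evening peak (expensive), with a lukewarm tank:
print(should_heat(tank_temp_c=50, price_per_kwh=0.05))   # True  -> heat now
print(should_heat(tank_temp_c=50, price_per_kwh=0.40))   # False -> wait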

In Vermont, Green Mountain Power is offering its customers incentives to install batteries that will provide power back to the grid when it’s needed most. In Texas, where I live, deadly blackouts in 2021 highlighted the importance of bolstering our isolated power grid. Now, utilities here are using Tesla Powerwalls to help turn homes into virtual power sources. South Australia aims to connect 50,000 homes with solar and batteries to build Australia’s largest virtual power plant.

People line up to refill propane tanks in Houston after a severe winter storm caused electricity blackouts and a catastrophic failure of Texas’ power grid in February 2021. Go Nakamura/Getty Images

Virtual power, real challenges

Virtual power plants aren’t a panacea. Many customers are reluctant to give up even temporary control of their thermostats, or have a delay when charging their electric car. Some consumers are also concerned about the security and privacy of smart meters. It remains to be seen how many customers will sign up for these emerging programs and how effectively their operators will modulate supply and demand.

There also are challenges at the business end. It’s a lot harder to manage millions of consumers than dozens of power plants. Virtual power plant operators can overcome that challenge by rewarding customers for allowing them to flex their supply and demand in a coordinated fashion.

As electricity demand rises to meet the needs of growing economies and replace fossil fuel-burning cars and furnaces, and reliance on renewable resources increases, grid managers will need all the flexibility they can get to balance the variable output of wind and solar generation. Virtual power plants could help reshape electric power into an industry that’s more nimble, efficient and responsive to changing conditions and customers’ needs.

Daniel Cohan, Associate Professor of Civil and Environmental Engineering, Rice University

This article is republished from The Conversation under a Creative Commons license. 

What makes an ideal main street? This is what shoppers told us

Irina Grotkjaer/Unsplash
Louise Grimmer, University of Tasmania; Martin Grimmer, University of Tasmania, and Paul J. Maginn, The University of Western Australia

A lot of dedication and effort goes into making main streets attractive. Local governments, planners, place makers, economic development managers, trade associations and retailers work hard to design, improve and revitalise main streets. The goal is to make them attractive places to increase shopper numbers, provide pleasant places for communities, and boost local economies.

Despite the efforts that go into planning, maintaining and marketing local shopping areas, the people who use these places are often not consulted about what they actually want and need on their main street. Our research is the only known Australian study to ask shoppers about the key elements, and the shops and services, they regard as contributing to the ideal main street.

So what types of stores and services do they want?

Pharmacies are the top choice. Intriguingly, four types of stores/services that are disappearing from main streets around Australia – the post office, bank, department store and newsagent – are in the top ten (out of 45 choices in our survey).

What are the key shops and services?

We wanted to find out what consumers see as their ideal local shopping street. What kinds of shops and services matter most for them? Which other elements of local shopping places do they want?

Curiously, users are often not asked these questions. Yet their answers are essential if we are to design new towns, suburbs and regional centres, and improve existing ones, so more people want to work, shop and visit them.

We surveyed a representative sample of 655 shoppers from around Australia about their local shopping preferences.

We provided a list of 45 different stores and services. Participants were asked to rank them in order of importance from 1 to 45.
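
For readers curious how such rankings are typically turned into a top-ten list, a minimal sketch (with made-up responses and store names, not our actual survey data or analysis code) is to average each store’s rank across respondents and sort, lowest mean rank first:

# Toy example of aggregating rank-order survey data (hypothetical responses only).
from statistics import mean

responses = [
    {"pharmacy": 1, "post office": 3, "bank": 2},
    {"pharmacy": 2, "post office": 1, "bank": 4},
    {"pharmacy": 1, "post office": 5, "bank": 3},
]

mean_ranks = {store: mean(r[store] for r in responses) for store in responses[0]}

for store, rank in sorted(mean_ranks.items(), key=lambda kv: kv[1]):
    print(f"{store}: mean rank {rank:.2f}")   # lower mean rank = more important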

Overwhelmingly, participants considered the pharmacy the most important store or service for an ideal main street. Across gender, age and location, pharmacies were consistently number one.

Similarly, four types of stores and services – the post office, bank, department store and newsagent – appeared in the top ten most important, regardless of demographics.

The top ten stores and services in an ideal main street. Louise Grimmer

What other key elements are important?

We then asked participants about the importance of different elements of main streets. We provided 21 elements and participants were asked to rate each on a Likert scale from 1, “not at all important”, to 7, “extremely important”.

Shoppers rated “cleanliness” as the most important element for their ideal shopping area. It was followed by “safety and security” and “parking”.

Aside from the “retail mix”, in most areas local councils have control over nine of the ten top elements. “Safety and security” also involves police and individual security services that centres and some stores employ.

The top ten elements of an ideal main street. Louise Grimmer

Motivation for shopping affects choices

We also tested for shoppers’ levels of hedonic and utilitarian orientation. Hedonic shoppers really enjoy the act of shopping. They experience euphoria and pleasure and they buy so they can go shopping, rather than shopping so they can buy.

Utilitarian shoppers, on the other hand, are rational and cognitive and they view shopping as a task or chore. Buying products they need is simply a “means to an end”. They get no great satisfaction from the activity.

Hedonic shoppers are more often women. Men tend to be more utilitarian. We tend to become more utilitarian as we get older.

We were interested to find out if people’s responses to our questions were different depending on whether they were hedonic (shop for pleasure) or utilitarian (shop for practical needs) shoppers.

For the most important store or service, hedonic and utilitarian shoppers both rated a pharmacy as number one. And they ranked similar stores and services in their top ten.

Top ten stores and services for hedonic shoppers. Louise Grimmer

But there were some differences. Hedonic shoppers included a lifestyle/gift store and department store in their top ten. Utilitarian shoppers did not. Instead they rated the post office and the newsagent as important.

This finding makes sense. Lifestyle stores, gift shops and department stores offer the hedonic shopper the chance to browse and enjoy quality surroundings and service. The post office and newsagent allow the utilitarian shopper to complete tasks quickly and easily – no browsing required.

Top ten stores and services for utilitarian shoppers. Louise Grimmer

Despite similarities in their top-ranked shops and services, hedonic and utilitarian shoppers’ rankings of the most important elements of local shopping areas were starkly different.

For hedonic shoppers, the complete visitor experience, including the surroundings and atmosphere, is an important aspect of their ideal shopping area. Their top ten elements reflected this. They selected a combination of tangible elements, including public art, aesthetics, greenery and lighting, to complement the more ephemeral such as events and activities, night-time economy, sustainability and history and culture.

The top ten elements for hedonic shoppers. Louise Grimmer

Utilitarian shoppers rated elements that help make a task-oriented shopping trip easier. Wayfinding (all the ways to help people navigate a space), signage and information, walkability, retail mix, and services and amenities were important for them.

The only two elements both groups agreed should be in the top ten were lighting, and seating and tables.

The top ten elements for utilitarian shoppers. Louise Grimmer

Making main streets the best they can be

There is an increasing understanding that retailing will not continue to be the main or sole reason people visit town centres. While still important, retail will more often complement services, attractions and “experiences” as the major factors that entice visitors.

This requires local councils, chambers of commerce and marketing organisations to perform a juggling act. They need to market shopping precincts as being attractive for shoppers while showcasing a range of services and attractions in these areas that appeal to other types of visitors.

Making shopping areas the best they can be is challenging work. Different people want different things from main streets.

Our findings provide insights for local councils, which have a primary policy responsibility for main streets, as well as developers, investors and individual store owners. This knowledge can help them better plan and improve the retail and service mix for everyone.

Louise Grimmer, Retail Scholar, University of Tasmania; Martin Grimmer, Pro Vice-Chancellor and Professor of Marketing, University of Tasmania, and Paul J. Maginn, Interim Director, UWA Public Policy Institute; Associate Professor & Programme co-ordinator (Masters of Public Policy), The University of Western Australia

This article is republished from The Conversation under a Creative Commons license. 

Smart Solutions for School

Must-have essentials for back-to-school season

With school bells ringing for students of all ages, it’s important to make sure your student has all the necessities to be successful this year.

While that often means running from store to store in search of supplies, stylish clothes and other essentials, these top picks for securing valuables, decorating dorm rooms, planning out schedules, getting necessary nutrition and staying hydrated can help ensure your student is geared up for success in the classroom and beyond.

Find more back-to-school essentials and tips for success in the classroom at eLivingtoday.com.

Protect New Purchases on Campus

A new school year brings plenty of excitement, but it can also be stressful for students moving away from home who need to safeguard valuables like tablets, smartphones, passports, or an emergency credit card. To help alleviate back-to-school worries, SentrySafe, a leading name in fire-resistant and security storage for more than 90 years, offers solutions to provide peace of mind for parents and students. An affordable, convenient, and fireproof option, the 1200 Fire Chest protects items against fires up to 1,500 F for 30 minutes. It also features a built-in key lock and convenient handle for added security and simplified transport. Find more back-to-school security solutions at sentrysafe.com.

Quick and Easy Meals That Deserve an A+

Keeping weeknight dinners and school lunches simple means more time for family and less stress during the week. Cook up quick and easy weeknight dinners, school lunches or on-the-go snacks with Minute Rice Cups. Ready in only 1 minute, the BPA-free cups are available in a variety of flavors such as Chicken & Herb, Cilantro & Lime, Jalapeno and more. Visit MinuteRice.com to get meal ideas today.

Make Organization Personal

Help your student keep notes, study times and test dates organized with a quality planner that also showcases his or her personality. Available in a myriad of trendy colors and patterns – like polka dots, stripes or chevron – as well as various calendar layouts like daily, weekly or monthly, the right planner can help students of all ages stay on track, achieve goals and preserve memories in one stylish and organized place.

Sleep in Style

Where a student sleeps may be one of the last things on his or her mind when thinking about the excitement that awaits in college, but getting plenty of sleep is key to success. Amp up the appeal of the dorm-issue mattress with stylish and comfy bedding that reflects your personality. Look for quality threads you can snuggle into, and coordinate with pillows to make your bed a cozy place to sit and study by day.

Take H2O on the Go

A durable reusable water bottle can make your back-to-school routine even easier. With a variety of sizes and styles available in a multitude of colors and designs, there’s almost certain to be an option for students of all ages and activity levels. Look for durable, leak-proof stainless steel or hard plastic options that offer different lid styles, including wide-opening or those with retractable straws, to make hydrating on the walk between classes a breeze.

SOURCE:
SentrySafe
Minute Rice

Despite his government’s failure to anticipate Hamas’ deadly attack, don’t count Netanyahu out politically


Brent E Sasley, University of Texas at Arlington

Since the brutal Hamas attack on Israel on Oct. 7, 2023, news analysts and the public have focused on Israeli Prime Minister Benjamin Netanyahu and his role in the intelligence failure that preceded the attack, in which 1,400 people were killed.

In other parliamentary democracies, a failure of this magnitude would normally cost leaders their jobs, or at least spark challenges to their leadership.

But a closer look at Netanyahu’s political history shows that he is not like other leaders.

Over the last 24 years, he has been able not only to survive the rough and hard-hitting Israeli political arena, but to stay on top of it. Despite numerous setbacks and challenges that might well have ended the careers of other leaders, Netanyahu has come back to lead his party and take the prime minister’s office, again and again. His first term, 1996 to 1999, ended in a humiliating defeat. But he returned to his party’s leadership at the end of 2005. Between 2009 and 2023, he was able to form a coalition government five times.

It is possible that this time might be different, and that the government’s failure has been so devastating for Israelis that Netanyahu will be unable to recover. A week after the Israel-Hamas war began, a small majority of Israelis wanted Netanyahu to resign.

But based on his history, he might survive this scandal.

In 2012, Time ran a cover story that called Benjamin Netanyahu ‘King Bibi.’ Screenshot, Time Magazine

Mr. Security?

Netanyahu won his first election in May 1996, beating Labor leader Shimon Peres by a narrow margin. It was the country’s first split-ticket vote, in which citizens voted both for a party to represent them in parliament and for an individual as prime minister. Netanyahu won by claiming he could better protect Israelis in the wake of a surge of terrorist attacks in February and March of that year that had killed over 50 citizens.

Since then, commentators, especially those abroad, have referred to him as something like a protector of Israel. In 2012, Time ran a cover story that called Netanyahu “King Bibi.” A post-Oct. 7 piece in Foreign Policy referred to him as “Mr. Security,” a name it said Israelis themselves use.

Netanyahu has never presided over any military or diplomatic process that strengthened Israeli security; quite the opposite. His tenures have been marked by several intelligence failures and miscalculations, including the Oct. 7 attack and an inconclusive war with Hamas in 2014. He was indicted on corruption charges in 2019, but his trial has yet to conclude.

As a scholar of Israeli politics, I have watched Netanyahu ride a right-wing wave to win power several times since the mid-1990s.

It’s clear to me that his ability to win elections is rooted not in his own political foresight or reputation as a successful defender of Israel, but in Israel’s political system and his ability to make wild promises to prospective coalition partners.

Route to power

Netanyahu’s political successes have often been the result of the public’s apparent decision that he is the best out of a set of poor choices.

The Israeli electoral system produces fragmented outcomes. It is common for dozens of parties to run in an election, and for 10 to win representation in the Knesset, Israel’s legislative body. A government is formed through bargaining between the parties, until a coalition obtains 61 votes – a simple majority – in the 120-seat Knesset.

The existence of so many parties, representing a range of views on religion in the public sphere, the Israeli-Palestinian conflict, Zionism and the relationship between the Jewish state and its Arab citizens, gives the person who aims to be prime minister options when trying to cobble together a coalition.

Because all the parties know this, and know they can threaten to join a government under someone else, would-be leaders must make promises to secure each party’s place in the government and its support in the Knesset.

These promises can include offering ministerial posts to leaders of the parties or commitments to provide more government funding to certain religious communities.

Protesters in Tel Aviv, Israel, call for a cease-fire, a hostage deal and, in Hebrew, Benjamin Netanyahu’s resignation, on Oct. 28, 2023. Alexi J. Rosenfeld/Getty Images

Promises made

Netanyahu has excelled at making promises in order to stay in or gain power, even when they have gone against what the majority of Israelis want and his own prior commitments.

The most egregious example occurred after the 2022 elections when Netanyahu formed a government with far-right and fascist parties. Some of his promises included creating a militia under the control of Itamar Ben Gvir, leader of the Otzma Yehudit party, widely known for its anti-Arab racism.

Another promise Netanyahu made to entice Knesset members to join him in a coalition was to overhaul the judiciary, reducing its independence and making it a tool of the government. This promise became legislation and has sparked weekly protests, drawing hundreds of thousands of Israelis who see the policy as a threat to Israeli democracy.

Netanyahu’s increasingly extreme promises indicate a desperation born out of fear of losing power. This is not surprising, since in every election since 2009, his party barely got a plurality of votes. If he could not form a majority coalition, another party and its leader could.

The highest percentage of the popular vote his Likud party has ever won was 29%, in 2020. Even then, Likud’s main rival, the Blue and White Party, won 27% of the vote. In other elections since then, Likud has won around 24% or 25%.

Netanyahu himself is more popular than his party, but not by much. In most of the elections Netanyahu contested as head of Likud, results showed that only a little more than half of voters supported him over his closest rivals.

In part, this support stems from his long years in politics. Netanyahu is a well-established figure, so there is some comfort for voters in choosing a candidate who is well known.

As head of Likud, he has been leader of one of the country’s oldest major parties. And though its share of seats has dropped over the years, Likud remains firmly entrenched in Israel’s political constellation. It can be difficult for observers to disentangle support for Netanyahu from support for the party.

Finally, no Israeli government has lasted its full four-year term since 1988, forcing new elections to be called. There is a constant fear among coalition partners that a new election will weaken them. Supporting Netanyahu and Likud has often been the best way to avoid another election.

It may be, then, that contrary to expectations, Netanyahu will be able to outlast disasters as he has before, and remain a player in Israeli politics.

Brent E Sasley, Associate Professor of Political Science, University of Texas at Arlington

This article is republished from The Conversation under a Creative Commons license. 

How to Make Higher-Quality Choices at the Grocery Store

Grocery shopping can be stressful when there are so many options, especially if you’re making a conscious effort to make high-quality food choices while you shop. Arming yourself with a plan and plenty of information can help you make smarter choices and feel good about the meals you prepare for your family.

According to the Food Marketing Institute’s Power of Meat Report, 62% of consumers are looking for better-for-you meat and poultry options. Consider these ways you can pick up higher-quality products on your next trip to the grocery store.

Make a list and stick to it. Going shopping without a plan is a surefire way to make the trip to the grocery store less productive. Creating a list and identifying high-quality products that fit your needs can help you avoid impulse purchases. List-making can also help save money if you plan meals that use ingredients across multiple recipes for minimal waste.

Pay attention to labels. Food labels contain insightful details that can help you make well-informed decisions about the foods you buy. Especially when it comes to fresh products like protein, you can learn a lot about how the food was raised simply from its label. For example, Perdue’s “No Antibiotics Ever” label is the gold standard when it comes to reducing antibiotic use in chicken farming, while the “no hormones or steroids” label simply reflects adherence to federal regulations.

“You can feel good about purchasing our products labeled No Antibiotics Ever knowing they were raised and fed in such a way that no antibiotics were ever needed,” said Dr. Bruce Stewart-Brown, senior vice president of technical services and innovation at Perdue Farms. “In order to achieve No Antibiotics Ever raised chickens, we worked hard to change our feed and care approach over the last 20 years.”

Know how to select fresh foods. If you find yourself overwhelmed when it comes to selecting produce and fresh meat, you’re not alone. When choosing fruits and vegetables, you generally want produce with a consistent color that is firm but not hard to the touch. Many fresh fruits and veggies emit an appealing fragrance at their peak ripeness.

When it comes to meat and poultry products, you can use a similar approach. For example, if you’re shopping for chicken, press down on the chicken in the package. If it’s plump and somewhat resilient, reverting to its shape, it’s a fresher pack. Also be wary of excess liquid in the pack, which can dilute the flavor or contribute to a soggy texture. You may also wonder which cuts are best. For a formal family meal, consider cooking a whole bird, which offers white and dark meat to please all appetites and can serve as a beautiful mealtime centerpiece.

Take some shortcuts. Even if you aim to prepare fresh, home-cooked meals most nights, there are sure to be some evenings when you need to squeeze in a quick meal around work, school and extracurriculars. Having a few simple go-to recipes can help. For example, an easy stir-fry with fresh chicken and frozen veggies can shave off prep time while still providing a hot, well-balanced meal. If you’re meal prepping for the week, marinate pre-cut chicken thighs or legs in different spices and seasonings to make cooking throughout the week simpler. Or try an option like Perdue’s Short Cuts, which include a variety of ready-to-eat, roasted, perfectly seasoned chicken breast strips.

Shop the store’s perimeter last. In most stores, fresh foods are located in refrigerated sections around the perimeter of the store. This is where you’ll find produce, fresh meat, poultry and dairy, giving you most of the essential ingredients for wholesome, well-balanced meals. Saving this section of the store for your last stop can help ensure perishable items spend less time away from refrigeration before you check out.  

Find chicken recipes and poultry shopping tips at perdue.com.

SOURCE:
Perdue Farms