Sunday, April 9, 2023

From the Big Bang to dark energy, knowledge of the cosmos has sped up in the past century — but big questions linger

“The first thing we know about the universe is that it’s really, really big,” says cosmologist Michael Turner, who has been contemplating this reality for more than four decades now. “And because the universe is so big,” he says, “it’s often beyond the reach of our instruments, and of our ideas.”

Certainly our current understanding of the cosmic story leaves some huge unanswered questions, says Turner, an emeritus professor at the University of Chicago and a visiting faculty member at UCLA. Take the question of origins. We now know that the universe has been expanding and evolving for something like 13.8 billion years, starting when everything in existence exploded outward from an initial state of near-infinite temperature and density — a.k.a. the Big Bang. Yet no one knows for sure what the Big Bang was, says Turner. Nor does anyone know what triggered it, or what came beforehand — or whether it’s even meaningful to talk about “time” before that initial event.

Then there’s the fact that the most distant stars and galaxies our telescopes can potentially see are confined to the “observable” universe: the region encompassing everything whose light has had time to reach us since the Big Bang. This is an almost inconceivably vast volume, says Turner, extending tens of billions of light-years in every direction. Yet we have no way of knowing what lies beyond. Just more of the same, perhaps, stretching out to infinity. Or realms that are utterly strange — right down to laws of physics that are very different from our own.

But then, as Turner explains in the 2022 Annual Review of Nuclear and Particle Science, mysteries are only to be expected. The scientific study of cosmology, the field that focuses on the origins and evolution of the universe, is barely a century old. It has already been transformed more than once by new ideas, new technologies and jaw-dropping discoveries — and there is every reason to expect more surprises to come.

Knowable Magazine recently spoke with Turner about how these transformations occurred and what cosmology’s future might be. This interview has been edited for length and clarity.

You say in your article that modern, scientific cosmology didn’t get started until roughly the 1920s. What happened then?

It’s not as though nothing happened earlier. People have been speculating about the origin and evolution of the universe for as long as we know of. But most of what was done before about 100 years ago we would now call galactic astronomy, which is the study of stars, planets and interstellar gas clouds within our own Milky Way. At the time, in fact, a lot of astronomers argued that the Milky Way was the universe — that there was nothing else.

But two big things happened in the 1920s. One was the work of a young astronomer named Edwin Hubble. He took an interest in the nebulae, which were these fuzzy patches of light in the sky that astronomers had been cataloging for hundreds of years. There had always been a debate about their nature: Were they just clouds of gas relatively close by in the Milky Way, or other “island universes” as big as ours?

Nobody had been able to figure that out. But Hubble had access to a new 100-inch telescope, which was the largest in the world at that time. And that gave him an instrument powerful enough to look at some of the biggest and brightest of the nebulae, and show that they contained individual stars, not just gas. By 1925, he was also able to estimate the distance to the very brightest nebula, in the constellation of Andromeda. It lay well outside the Milky Way. It was a whole other galaxy just like ours.

So that paper alone solved the riddle of the nebulae and put Hubble on the map as a great astronomer. In today’s terms, he had identified the fundamental architecture of the universe, which is that it consists of these collections of stars organized into galaxies like our own Milky Way — about 200 billion of them in the part of the universe we can see.

But he didn’t stop there. In those days there was this — well, “war” is probably too strong a word, but a separation between the astronomers who took pictures and the astrophysicists who used spectroscopy, which was a technique that physicists had developed in the 19th century to analyze the wavelengths of light emitted from distant objects. Once you started taking spectra of things like stars or planets, and comparing their emissions with those from known chemical elements in the laboratory, you could say, “Oh, not only do I know what it’s made of, but I know its temperature and how fast it’s moving towards or away from us.” So you could start really studying the object.

Just like in other areas of science, though, the very best people in astronomy use all the tools at hand, be they pictures or spectra. In Hubble’s case, he paid particular attention to an earlier paper that had used spectroscopy to measure the velocity of the nebulae. Now, the striking thing about this paper was that some of the nebulae were moving away from us at many hundreds of kilometers per second. In spectroscopic terms they had a high “redshift,” meaning that their emissions were shifted toward longer wavelengths than you’d see in the lab.

So in 1929, when Hubble had solid distance data for two dozen galaxies and reasonable estimates for more, he plotted those values against the redshift data. And he got a striking correlation: The further away a galaxy was, the faster it was moving away from us.

This was the relation that’s now known as Hubble’s law. It took a while to figure out what it meant, though.
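
In modern notation, that correlation is Hubble’s law: v = H0 × d, where a galaxy’s recession velocity grows in proportion to its distance. Here is a minimal sketch of the relation, assuming an illustrative modern value of H0 near 70 kilometers per second per megaparsec; Hubble’s own 1929 estimate was several times larger.

```python
# Minimal sketch of Hubble's law: recession velocity is proportional
# to distance. H0 = 70 km/s/Mpc is an assumed illustrative value, not
# Hubble's original (much larger) 1929 estimate.
H0 = 70.0  # km/s per megaparsec

for distance_mpc in (10, 100, 1000):
    velocity = H0 * distance_mpc  # recession velocity, km/s
    print(f"galaxy at {distance_mpc:>4} Mpc recedes at ~{velocity:,.0f} km/s")
```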

Why? Did it require a second big development?

Yes. A bit earlier, in 1915, Albert Einstein had put forward his theory of general relativity, which was a complete paradigm shift and reformulation of gravity. His key insight was that space and time are not fixed, as physicists had always assumed, but are dynamic. Matter and energy bend space and time around themselves, and the “force” we call gravity is just the result of objects being deflected as they move around in this curved space-time. As the late physicist John Archibald Wheeler famously said, “Space tells matter how to move, and matter tells space how to curve.”

It took a few years to connect Einstein’s theory with observation. But by the early or mid-1930s, it was clear that what Hubble had discovered was not that galaxies are moving away from us into empty space, but that space itself is expanding and carrying the galaxies along with it. The whole universe is expanding.

And at least a few scientists in the 1930s began to realize that Hubble’s discovery also meant there was a beginning to the universe.

The turning point was probably George Gamow, a Soviet physicist who defected to the US in the 1930s. He had studied general relativity as a student in Leningrad, and knew that Einstein’s equations implied that the universe had expanded from a “singularity” — a mathematical point where time began and the radius of the universe was zero. It’s what we now call the Big Bang.

But Gamow also knew nuclear physics, which he had helped develop before World War II. And around 1948, he and his collaborators started to combine general relativity and nuclear physics into a model of the universe’s beginning to explain where the elements in the periodic table came from.

Their key idea was that the universe started out hot, then cooled as it expanded the way gas from an aerosol can does. This was totally theoretical at the time. But it would be confirmed in 1965 when radio astronomers discovered the cosmic microwave background radiation. This radiation consists of high-energy photons that emerged from the Big Bang and cooled down as the universe expanded, until today they are at a temperature of about 3 kelvin, just above absolute zero — which is also the average temperature of the universe as a whole.

In this hot, primordial soup — called ylem by Gamow — matter would not exist in the form it does today. The extreme heat would boil atoms into their constituent components — neutrons, protons and electrons. Gamow’s dream was that nuclear reactions in the cooling soup would have produced all the elements, as neutrons and protons combined to make the nuclei of the various atoms in the periodic table.

But his idea came up short. It took a number of years and a village of people to get the calculations right. But by the 1960s, it was clear that what would come from these nuclear reactions was mostly hydrogen, plus a lot of helium — about 25 percent by weight, exactly what astronomers observed — plus a little bit of deuterium, helium-3 and lithium. Heavier elements such as carbon and oxygen were made later, by nuclear reactions in stars and other processes.

So by the early 1970s, we had the creation of the light elements in a hot Big Bang, the expansion of the universe and the microwave background radiation — the three observational pillars of what’s been called the standard model of cosmology, and what I call the first paradigm.

But you note that cosmologists almost immediately began to shift toward a second paradigm. Why? Was the Big Bang model wrong?

Not wrong — our current understanding still has a hot Big Bang beginning — but incomplete. By the 1970s the idea of a hot beginning was attracting the attention of particle physicists, who saw the Big Bang as a way to study particle collisions at energies you couldn’t hope to reach at accelerators here on Earth. So the field suddenly got a lot bigger, and people started asking questions that suggested the standard cosmology was missing something.

For example, why is the universe so smooth? The intensity and temperature of the microwave background radiation, which is the best measure we have of the whole universe, is almost perfectly uniform in every direction. There’s nothing in Einstein’s cosmological equations that says this has to be the case.

On the flip side, though — why is that cosmic smoothness only almost  perfect? After all, the most prominent features of the universe today are the galaxies, which must have formed as gravity magnified tiny fluctuations in the density of matter in the early universe. So where did those fluctuations come from? What seeded the galaxies?

Around this time, evidence had accumulated that neutrons and protons were made of smaller bits — quarks — which meant that the neutron-proton soup would eventually boil, too, becoming a quark soup at the earliest times. So maybe the answers lie in that early quark soup phase, or even earlier.

This is the possibility that led Alan Guth to his brilliant paper on cosmic inflation in 1981.

What is cosmic inflation?

Guth’s idea was that in the tiniest fraction of a second after the initial singularity, according to new ideas in particle physics, the universe ought to undergo a burst of accelerated expansion. This would have been an exponential expansion, far faster than in the standard Big Bang model. The size of the universe would have doubled and doubled and doubled again, enough times to take a subatomic patch of space and blow it up to the scale of the observable universe.
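
To get a feel for how fast repeated doubling runs away, here is a back-of-the-envelope sketch; both lengths are illustrative round numbers assumed for the arithmetic, not measured quantities.

```python
# Back-of-the-envelope: how many doublings stretch a subatomic patch
# to the size of today's observable universe? Both lengths are assumed
# round numbers for illustration only.
import math

start_m = 1e-30  # an arbitrarily tiny subatomic patch, in meters
end_m = 1e26     # roughly the observable universe today, in meters

doublings = math.log2(end_m / start_m)
print(f"about {doublings:.0f} doublings")  # about 186
```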

This explained the uniformity of the universe right away, just like if you had a balloon and blew it up until it was the size of the Earth or bigger: It would look smooth. But inflation also explained the galaxies. In the quantum world, it’s normal for things like the number of particles in a tiny region to bounce around. Ordinarily, this averages out to zero and we don’t notice it. But when cosmic inflation produced this tremendous expansion, it blew up these subatomic fluctuations to astrophysical scales, and provided the seeds for galaxy formation.

This result is the poster child for the connection between particle physics and cosmology: The biggest things in the universe — galaxies and clusters of galaxies — originated from quantum fluctuations that were unimaginably small.

You have written that the second paradigm has three pillars, cosmic inflation being the first. What about the other two?

When the details of inflation were being worked out in the early 1980s, people saw there was something else missing. The exponential expansion would have stretched everything out until space was “flat” in a certain mathematical sense. But according to Einstein’s general relativity, the only way the universe could be flat was if its mass and energy content averaged out to a certain critical density. This value was really small, equivalent to a few hydrogen atoms per cubic meter.
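
That figure can be sanity-checked against the textbook expression for the critical density from general relativity, rho_c = 3 H0^2 / (8 pi G). A minimal sketch, assuming an illustrative Hubble constant of 70 km/s/Mpc:

```python
# Minimal sketch: the critical density, rho_c = 3*H0^2 / (8*pi*G),
# for an assumed illustrative Hubble constant of 70 km/s/Mpc.
import math

KM_PER_MPC = 3.086e19    # kilometers in one megaparsec
H0 = 70.0 / KM_PER_MPC   # Hubble constant, converted to 1/s
G = 6.674e-11            # Newton's constant, m^3 kg^-1 s^-2
M_HYDROGEN = 1.67e-27    # mass of a hydrogen atom, kg

rho_c = 3 * H0**2 / (8 * math.pi * G)  # critical density, kg/m^3
print(f"critical density: {rho_c:.1e} kg/m^3")
print(f"about {rho_c / M_HYDROGEN:.1f} hydrogen atoms per cubic meter")
```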

But even that was a stretch: Astronomers’ best measurements of the mean density of all the planets, stars and gas in the universe — all the stuff made of atoms — weren’t even 10 percent of the critical density. (The modern figure is 4.9 percent.) So something else that was not made of atoms had to be making up the difference.

That something turned out to have two components, one of which astronomers had already begun to detect through its gravitational effects. Fritz Zwicky found the first clue back in the 1930s, when he looked at the motions of galaxies in distant clusters. Each of these galactic clusters was obviously held together by gravity, because their galaxies were all close and not flying apart. Yet the velocities Zwicky found were really high, and he concluded that the visible stars alone couldn’t produce nearly enough gravity to keep the galaxies bound. The extra gravity had to be coming from some form of “dark matter” that didn’t shine, but that outweighed the visible stars by a large factor.

Then Vera Rubin and Kent Ford really brought it home in the 1970s with their studies of rotation in ordinary nearby galaxies, starting with Andromeda. They found that the rotation rates were way too fast: There weren’t nearly enough stars and interstellar gas to hold these galaxies together. The extra gravity had to be coming from something invisible — again, dark matter.

Particle physicists loved the dark matter idea, because their unified field theories contained hypothetical particles with names like neutralino, or axion, that would have been produced in huge numbers during the Big Bang, and that had exactly the right properties. They wouldn’t give off light because they had no electric charge and very weak interactions with ordinary matter. But they would have enough mass to produce dark matter’s gravitational effects.

We haven’t yet detected these particles in the laboratory. But we do know some things about them. They’re “cold,” for example, meaning that they move slowly compared to the speed of light. And we know from computer simulations that without the gravity of cold dark matter, those tiny density fluctuations in the ordinary matter that emerged from the Big Bang would never have collapsed into galaxies. They just didn’t have enough gravity by themselves.

So that was the second pillar, cold dark matter. And the third?

As the simulations and the observations improved, cosmologists began to realize that even dark matter was only a fraction of the critical density needed to make the universe flat. (The modern figure is 26.8 percent.) The missing piece was found in 1998, when two groups of astronomers made very careful measurements of the distances and redshifts of exploding stars (Type Ia supernovae) in faraway galaxies, and found that the cosmic expansion was gradually accelerating.

So something — I suggested calling it “dark energy,” and the name stuck — is pushing the universe apart. Our best understanding is that dark energy leads to repulsive gravity, something that is built into Einstein’s general relativity. The crucial feature of dark energy is its elasticity or negative pressure. And further, it can’t be broken into particles — it is more like an extremely elastic medium.

While dark energy remains one of the great mysteries of cosmology and particle physics, it seems to be mathematically equivalent to the cosmological constant that Einstein suggested in 1917. In the modern interpretation, though, it corresponds to the energy of nature’s quantum vacuum. This leads to an extraordinary picture: the cosmic expansion speeding up rather than slowing, all caused by the repulsive gravity of a very elastic, mysterious component of the universe called dark energy. The equally extraordinary evidence for this extraordinary claim has built up ever since and the two teams that made the 1998 discovery were awarded the Nobel Prize in Physics in 2011.

So here is where we are: a flat, critical-density universe comprising ordinary matter at about 5 percent, particle dark matter at about 25 percent and dark energy at about 70 percent. The cosmological constant is still called lambda, the Greek letter that Einstein used. And so the new paradigm is referred to as the lambda-cold dark matter model of cosmology.

So this is your second paradigm — inflation plus cold dark matter plus dark energy?

Yes. And it’s this amazing, glass-half-full, half-empty situation. The lambda-cold dark matter paradigm has these three pillars that are well established with evidence, and that allow us to describe the evolution of the universe from a tiny fraction of a second until today. But we know we’re not done.

For example, you say, “Wow, cosmic inflation sounds really important. It’s why we have a flat universe today and explains the seeds for galaxies. Tell me the details.” Well, we don’t know the details. Our best understanding is that inflation was caused by some still-unknown field similar to the one associated with the Higgs boson discovered in 2012.

Then you say, “Yeah, this dark matter sounds really important. Its gravity is responsible for the formation of all the galaxies and clusters in the universe. What is it?” We don’t know. It’s probably some kind of particle left over from the Big Bang, but we haven’t found it.

And then finally you say, “Oh, dark energy is 70 percent of the universe. That must be really important. Tell me more about it.” And we say, it’s consistent with a cosmological constant. But really, we don’t have a clue why the cosmological constant should exist or have the value it does.

So now cosmology has left us with three physics questions: Dark matter, dark energy and inflation — what are they?

Does that mean we need a third cosmological paradigm to find the answers?

Maybe. It could be that everything’s done in 30 years because we just flesh out our current ideas. We discover that dark matter really is some particle like the axion, that dark energy really is just the constant quantum energy of empty space, and that inflation really was caused by the Higgs field.

But more likely than not, if history is any guide, we’re missing something and there’s a surprise on the horizon.

Some cosmologists are trying to find this surprise by following the really big questions. For example: What was the Big Bang? And what happened beforehand? The Big Bang theory we talked about earlier is anything but a theory of the Big Bang itself; it’s a theory of what happened afterwards.

Remember, the actual Big Bang event, according to Einstein’s general relativity, was this singularity that saw the creation of matter, energy, space and time itself. That’s the big mystery, which we struggle even to talk about in scientific terms: Was there a phase before this singularity? And if so, what was it like? Or, as many theorists think, does the singularity in Einstein’s equations represent the instant when space and time themselves emerged from something more fundamental?

Another possibility that has captured the attention of scientists and public alike is the multiverse. This follows from inflation, where we imagine blowing up a small bit of space to an enormous size. Could that happen more than once, at different places and times? And the answer is yes: You could have had different patches of the wider multiverse inflating into entirely different universes, maybe with different laws of physics in each one. It could be the biggest idea since Copernicus moved us out of the center of the universe. But it’s also very frustrating because right now, it isn’t science: These universes would be completely disconnected, with no way to access them, observe them or show that they actually exist.

Yet another possibility is in the title of my Annual Reviews  article: The road to precision cosmology. It used to be that cosmology was really difficult because the instruments weren’t quite up to the task. Back in the 1930s, Hubble and his colleague Milton Humason struggled for years to collect redshifts for a few hundred galaxies, in part because they were recording one spectrum at a time on photographic plates that collected less than 1 percent of the light. Now astronomers use electronic CCD detectors — the same kind that everyone carries around in their phone — that collect almost 100 percent of the light. It’s as if you increased your telescope size without any construction.

And we have projects like the Dark Energy Spectroscopic Instrument on Kitt Peak in Arizona that can collect the spectra of 5,000 galaxies at once — 35 million of them over five years.

So cosmology used to be a data-poor science in which it was hard to measure things with any reliable precision. And today, we are doing precision cosmology, with percent-level accuracy. And further, we are sometimes able to measure things in two different ways and see if the results agree, creating cross-checks that can confirm our current paradigm or reveal cracks in it.

A prime example of this is the expansion rate of the universe, what’s called the Hubble parameter — the most important number in cosmology. If nothing else, it tells us the age of the universe: The bigger the parameter, the younger the universe, and vice versa. Today we can measure it directly, with the velocities and distances of galaxies out to a few hundred million light-years, at the few-percent level.

But there is now another way to measure it with satellite observations of the microwave background radiation, which gives you the expansion rate when the universe was about 380,000 years old, at even greater precision. With the lambda-cold dark matter model you can extrapolate that expansion rate forward to the present day and see if you get the same number as you do with redshifts. And you don’t: The numbers differ by almost 10 percent — an ongoing puzzle that’s called the Hubble tension.
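
For a rough feel of the numbers, here is a sketch that uses the inverse of the Hubble parameter as a crude age estimate (exact ages require the full lambda-cold dark matter machinery). The two H0 values are approximate stand-ins for the direct and microwave-background routes, not the precise published figures.

```python
# Rough look at the Hubble tension. 1/H0 sets an approximate expansion
# age; the two H0 values below are approximate stand-ins for the direct
# (galaxy-distance) and microwave-background measurements.
KM_PER_MPC = 3.086e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16  # seconds in a billion years

measurements = {"direct (galaxies)": 73.0, "microwave background": 67.4}

for label, H0 in measurements.items():
    hubble_time = KM_PER_MPC / H0 / SEC_PER_GYR  # 1/H0, in Gyr
    print(f"{label}: H0 = {H0} -> Hubble time ~ {hubble_time:.1f} Gyr")

gap = measurements["direct (galaxies)"] / measurements["microwave background"] - 1
print(f"the two rates differ by about {gap:.0%}")
```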

So maybe that’s the loose thread — the tiny discrepancy in the precision measurements that could lead to another paradigm shift. It could be just that the direct measurements of galaxy distances are wrong, or that the microwave background numbers are wrong. But maybe we are finding something that’s missing from lambda-cold dark matter. That would be extremely exciting.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

3 Tips for Creating a Summer of Unplugged Fun

Between school, work and entertainment, screens can seem like a pervasive part of modern life. For all the positive aspects of technology, many parents also want their children to have stretches of unplugged learning and to take part in educational activities that do not require a screen.

Why Unplugged Learning Matters
“Unplugged learning is important to balance the screen time children may experience with other forms of learning; to promote physical activities, social interaction and creativity; and develop the essential skills that bolster them throughout their exploration and growth as individuals,” said Rurik Nackerud from KinderCare’s education team.

Summer can be an ideal time to focus on unplugged learning as it often brings a break from the traditional academic year and activities.

“We want summer to be a time when children can put down technology and connect with one another face-to-face, build important creativity skills and learn how to be social with one another without the buffer of screens,” said Khy Sline from KinderCare’s education team. “They can play, run, be immature and laugh with their friends, giggle at the silly things and find joys in those in-person interactions with one another.”

Tips for Creating Unplugged Fun as a Family

  1. Get Outdoors. Make time as a family to get outside and explore, even if it’s simply a walk around the block after dinner. Help children notice the little things like a bug on the sidewalk or the way the sun filters through tree leaves to make patterns on the ground. Ask them about the things they see and give your children the space to ask questions and work together to find the answers. This helps teach children collaborative learning skills: asking questions, sharing ideas and working together to reach an answer.
     
  2. Read Together. This could mean going to the library to check out new books or exploring your family’s bookshelves for old favorites. Snuggle up together for family story time. If children are old enough to read on their own, invite them to read to you or their younger siblings. Talk about the story or even act out favorite parts to help your children actively participate in story time, which may help them better understand the story’s concepts.
     
  3. Encourage Creative Thinking. Help children expand their ability to think creatively by working together to make a craft or project. For example, the next time a delivery box arrives at your home, encourage your children to turn it into something new using craft supplies on hand. A blanket could turn a box into a table for a pretend restaurant while some tape or glue could transform it into a rocket ship or train. When everyone’s done creating and playing, the box can be broken down for recycling. This activity can help children literally think outside of the box and apply their own unique ideas and creativity to create something new.

For more tips to encourage unplugged learning this summer, visit kindercare.com.

 

SOURCE:
KinderCare

What does ‘moral hazard’ mean? A scholar of financial regulation explains why it’s risky for the government to rescue banks

Cassandra Jones Havard, University of South Carolina

“Moral hazard” refers to the risks that someone or something becomes more inclined to take because they have reason to believe that an insurer will cover the costs of any damages.

The concept describes financial recklessness. It has its roots in the advent of private insurance companies about 350 years ago. Soon after they began to form, it became clear that people who bought insurance policies took risks they wouldn’t have taken without that coverage.

Here are some illustrative examples: Having workers’ compensation insurance could encourage some workers to stay out of work longer than their health requires. Or homeowners insurance may explain why a homeowner doesn’t bother spending their own money on a small repair that isn’t covered by their policy: They figure that over time it will turn into a larger problem that would be covered.

Or think of what happens when someone rents a car and parks it where it can easily be damaged. That carelessness reflects an assumption that the rental car company’s insurance policy will pay for the repairs.

Why moral hazard matters

U.S. banks are insured by the Federal Deposit Insurance Corporation, or FDIC, and the risk-takers are both the banks and their depositors.

Congress established the FDIC during the Great Depression, which began with a spate of bank runs. The goal was to boost confidence in the banking system.

The Dodd-Frank Financial Reform Act, enacted after the 2008 financial crisis, was supposed to reduce moral hazard. One way it did that was by making it clear that deposits above US$250,000 aren’t insured by the FDIC unless the bank’s failure presents a systemic risk to the financial system.

The implicit assumption behind the government’s insurance limit, which prior to 2008 stood at $100,000, is that depositors who have accounts worth more than the limit will bear the loss of bank failure along with the bank’s executives and shareholders. Yet boosting the size of the guarantee amount also made future bank bailouts more costly, which in turn increased moral hazard.

And when Silicon Valley Bank failed in March 2023, all its depositors got access to their funds – including those with accounts that exceeded the $250,000 limit – because the government made an exception.

‘Too big to fail’

I teach and write about moral hazard in the banking industry as a banking law professor. As it happens, my banking law class had discussed moral hazard and bank failure for three class sessions held before the 2023 spring break.

When the students returned from their vacation, news of Silicon Valley Bank’s failure appeared to be the start of what might become a bank crisis.

“What happened? It’s completely different from what you taught us!” the students in my class exclaimed, almost in unison. Questions tumbled out, demanding an explanation.

Why did the government apparently throw out concerns about moral hazard when SVB failed?

Any explanation would have to begin with what moral hazard means in the context of banking, where it is often summed up by the colloquial phrase “too big to fail.”

That controversial concept applies to how the government responds in the aftermath of the risky behavior of a bank – if the collapse of the bank is likely to harm the economy. Yet, in reducing the risk of a widespread financial crisis, the government can end up sending the message that it’s willing to protect banks that engage in reckless behavior – and to shield their customers from the consequences.

Cassandra Jones Havard, Professor of Law, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The ancient origins of glass

Featuring ingots, shipwrecks, pharaohs and an international trade in colors, the material’s rich history is being traced using modern archaeology and materials science

Today, glass is ordinary, on-the-kitchen-shelf stuff. But early in its history, glass was bling for kings.

Thousands of years ago, the pharaohs of ancient Egypt surrounded themselves with the stuff, even in death, leaving stunning specimens for archaeologists to uncover. King Tutankhamen’s tomb housed a decorative writing palette and two blue-hued headrests made of solid glass that may once have supported the heads of sleeping royals. His funerary mask sports blue glass inlays that alternate with gold to frame the king’s face.

In a world filled with the buff, brown and sand hues of more utilitarian Late Bronze Age materials, glass — saturated with blue, purple, turquoise, yellow, red and white — would have afforded the most striking colors other than gemstones, says Andrew Shortland, an archaeological scientist at Cranfield University in Shrivenham, England. In a hierarchy of materials, glass would have sat slightly beneath silver and gold and would have been valued as much as precious stones were.

But many questions remain about the prized material. Where was glass first fashioned? How was it worked and colored, and passed around the ancient world? Though much is still mysterious, in the last few decades materials science techniques and a reanalysis of artifacts excavated in the past have begun to fill in details.

This analysis, in turn, opens a window onto the lives of Bronze Age artisans, traders and kings, and the international connections between them.

Glass from the past

Glass, both ancient and modern, is a material usually made of silicon dioxide, or silica, that is characterized by its disorderly atoms. In crystalline quartz, atoms are pinned to regularly spaced positions in a repeating pattern. But in glass, the same building blocks — a silicon atom buddied up with oxygens — are arranged topsy-turvy.

Archaeologists have found glass beads dating to as early as the third millennium BCE. Glazes based on the same materials and technology date earlier still. But it was in the Late Bronze Age — 1600 to 1200 BCE — that the use of glass seems to have really taken off, in Egypt, Mycenaean Greece and Mesopotamia, also called the Near East (located in what’s now Syria and Iraq).

Unlike today, glass of those times was often opaque and saturated with color, and the source of the silica was crushed quartz pebbles, not sand. Clever ancients figured out how to lower the melting temperature of the crushed quartz to what could be reached in Bronze Age furnaces: They used the ash of desert plants, which contain high levels of salts such as sodium carbonate or bicarbonates. The plants also contain lime — calcium oxide — that made the glass more stable. Ancient glassmakers also added materials that impart color to glass, such as cobalt for dark blue, or lead antimonate for yellow. The ingredients melded in the melt, contributing chemical clues that researchers look for today.

“We can start to parse the raw materials that went into the production of the glass and then suggest where in the world it came from,” says materials scientist Marc Walton of Northwestern University in Evanston, Illinois, coauthor of an article about materials science and archaeological artifacts and artwork in the 2021 Annual Review of Materials Research.

But those clues have taken researchers only so far. When Shortland and colleagues were investigating glass’s origins around 20 years ago, glass from Egypt, the Near East and Greece appeared to be chemical lookalikes, difficult to distinguish based on the techniques available at the time.

The exception was blue glass, thanks to work by Polish-born chemist Alexander Kaczmarczyk who in the 1980s discovered that elements such as aluminum, manganese, nickel and zinc tag along with the cobalt that gives glass an abyssal blue hue. By examining the relative amounts of these, Kaczmarczyk’s team even tracked the cobalt ore used for blue coloring to its mineral source in specific Egyptian oases.

Picking up where Kaczmarczyk left off, Shortland set out to understand how ancient Egyptians worked with that cobalt ore. The material, a sulfate-containing compound called alum, won’t incorporate into the glass. But in the lab, Shortland and colleagues reproduced a chemical reaction that Late Bronze Age craftspeople may have used to create a compatible pigment. And they created a deep blue glass that did, in fact, resemble Egyptian blue glass.

In the first years of this century, a relatively new method offered more insights. Called laser ablation inductively coupled plasma mass spectrometry, or LA-ICP-MS, the technique uses a laser to remove a tiny speck of material, invisible to the naked eye. (“That’s very much more acceptable to a museum than getting the big hammer out and taking a piece off,” Shortland says.) It then uses mass spectrometry to measure a suite of elements, creating a chemical fingerprint of the sample.

Based on this method, in 2009 Shortland, Walton and others analyzed Late Bronze Age glass beads unearthed in Greece, which some researchers proposed had its own glass production workshops. The analysis revealed that the Grecian glass had either Near Eastern or Egyptian signatures, supporting the idea that Greece imported glass from both places and, though it may have worked the glass, did not make it locally. Egyptian glasses tended to have higher levels of lanthanum, zirconium and titanium, while Near Eastern glasses tended to have more chromium.
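
As a purely illustrative sketch of how such a chemical fingerprint becomes a provenance call: the marker elements below follow the paragraph above, but the concentrations, the element ratio and the threshold are hypothetical stand-ins, not the published criteria.

```python
# Purely illustrative: turning an LA-ICP-MS trace-element fingerprint
# into a provenance call. Marker elements follow the text (Egyptian
# glass: more lanthanum, zirconium, titanium; Near Eastern: more
# chromium); the ratio and threshold are hypothetical, not published.
def classify_origin(ppm: dict) -> str:
    cr_to_la = ppm["Cr"] / ppm["La"]  # chromium relative to lanthanum
    return "Near Eastern signature" if cr_to_la > 10.0 else "Egyptian signature"

bead = {"Cr": 120.0, "La": 4.0, "Zr": 35.0, "Ti": 400.0}  # made-up ppm values
print(classify_origin(bead))  # -> Near Eastern signature
```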

Obscure origins

But where was glass first birthed? For at least 100 years, researchers have debated two main contenders: the Near East and Egypt. Based on some beautiful, well-preserved glass artifacts dating from around 1500 BCE, Egypt was favored at first. But by the 1980s, researchers were placing their bets on the Near East after excavators found loads of glass at Nuzi, a Late Bronze Age provincial town in modern-day Iraq, thought to date from the 1500s BCE.

Around that same time, though, a reanalysis of archaeological texts revealed that Nuzi was 100 to 150 years younger than estimated, and the Egyptian glass industry from that time period seems to have been more advanced — favoring Egypt once again.

But that isn’t the end of the story. Glass can degrade, especially in wet conditions. Objects from Egypt’s ancient tombs and towns have lasted millennia, aided by the desert’s nearly ideal preservation environment. Near Eastern glass, on the other hand, from tombs on Mesopotamian floodplains, more frequently faced attacks by water, which can leach out stabilizing compounds and turn glass to flaky powder.

This deteriorated glass is difficult to identify and impossible to display, meaning lots of Near East glass may be missed. “I think a lot of the glass has effectively disappeared,” Shortland says. “Early excavations were less bothered about this flaky ex-glass than they might have been about other things.”

The bottom line: “You can’t really decide which is the earliest at the moment,” Shortland says.

Finding glassmaking

It’s even tricky to parse where glass was made at all. That’s partly because the material was frequently exchanged, both as finished objects and as raw glass to be worked into beads or vessels.

Glass helped to tie ancient empires together, says Thilo Rehren, an archaeological materials scientist at the Cyprus Institute in Nicosia who has examined the craftsmanship behind objects from Tut’s tomb, among others. Kings shipped materials to other rulers, expecting goods or loyalty in return, he says. Ancient inventories from the Late Bronze Age reveal an exchange of ivory, gems, wood, animals, people and more, and while the role of glass in this convention of gifting and tribute isn’t fully understood, the composition of artifacts supports glass swaps too.

In a glass bead necklace excavated in Gurob, Egypt, in an area thought to once have been a harem palace, Shortland and colleagues found the chemical signature associated with Mesopotamia: relatively high levels of chromium. The beads’ location implied that the bling was probably gifted to Pharaoh Thutmose III along with Near Eastern women who became the king’s wives. With chemistry on the case, “we’re now just beginning to see some of this exchange going on between Egypt and other areas,” Shortland says.

In the early 1980s, divers found the mother lode of such exchanges off the coast of Turkey in a sunken vessel from the 1300s BCE called the Uluburun shipwreck. Analysis of its contents reveals a global economy, says Caroline Jackson, an archaeologist at the University of Sheffield in England. Possibly a Phoenician ship on a gift-giving expedition, the vessel was hauling items from all over: ivory, copper, tin, even amber from the Baltic. From the wreck, excavators retrieved a load of colored glass — 175 unfinished blocks, called ingots, for glassworking.

Most of the ingots were cobalt-colored deep blue, but the ship was also ferrying purple and turquoise ingots. Jackson and her colleagues chipped a few small fragments off of three ingots and reported in 2010 that the raw glass blocks were Egyptian in origin, based on the concentration of trace metals.

Tracing glassmaking

Another reason why it’s tricky to identify sites for glassmaking is that the process makes little waste. “You get a finished object, and that, of course, goes into the museum,” Rehren says. That led him and archaeologist Edgar Pusch, working in a flea-ridden dig house on the Nile Delta about 20 years ago, to ponder pottery pieces for signs of an ancient glassmaking studio. The site, near present-day Qantir, Egypt, was the capital of Pharaoh Ramses II in the 1200s BCE.

Rehren and Pusch saw that many of the vessels had a lime-rich layer, which would have acted as a nonstick barrier between glass and the ceramic, allowing glass to be lifted out easily. Some of these suspected glassmaking vessels — including a reused beer jar — contained white, foamy-looking semi-finished glass. Rehren and Pusch also linked the color of the pottery vessels to the temperature they’d withstood in the furnace. At around 900 degrees Celsius, the raw materials could have been melted, to make that semi-finished glass. But some crucibles were dark red or black, suggesting they’d been heated to at least 1,000 degrees Celsius, a high enough temperature to finish melting the glass and color it evenly to produce a glass ingot.

Some crucibles even contained lingering bits of red glass, colored with copper. “We were able to identify the evidence for glassmaking,” Rehren says. “Nobody knew what it should have looked like.”

Since then, Rehren and colleagues have found similar evidence of glassmaking and ingot production at other sites, including the ancient desert city of Tell el-Amarna, known as Amarna for short, briefly the capital of Akhenaton during the 1300s BCE. And they noticed an interesting pattern. In Amarna’s crucibles, only cobalt blue glass fragments showed up. But at Qantir, where red-imparting copper was also worked to make bronze, excavated crucibles contain predominantly red glass fragments. (“Those people knew exactly how to deal with copper — that was their special skill,” Rehren says.) At Qantir, Egyptian Egyptologist Mahmoud Hamza even unearthed a large corroded red glass ingot in the 1920s. And at a site called Lisht, crucibles with glass remains contain primarily turquoise-colored fragments.

The monochrome finds at each site suggest that workshops specialized in one color, Rehren says. But artisans apparently had access to a rainbow. At Amarna, glass rods excavated from the site — probably made from re-melted ingots — come in a variety of colors, supporting the idea that colored ingots were shipped and traded for glassworking at many locations.

Glass on the ground

Archaeologists continue to pursue the story of glass at Amarna — and, in some cases, to more carefully repeat the explorations of earlier archaeologists.

In 1921-22, a British team led by archaeologist Leonard Woolley (most famous for his excavations at Ur) excavated Amarna. “Let’s put it bluntly — he made a total mess,” says Anna Hodgkinson, an Egyptologist and archaeologist at the Free University of Berlin. In a hurry and focused on more showy finds, Woolley didn’t do due diligence in documenting the glass. Excavating in 2014 and 2017, Hodgkinson and colleagues worked to pick up the missed pieces.

Hodgkinson’s team found glass rods and chips all over the area of Amarna they excavated. Some were unearthed near relatively low-status households without kilns, a head-scratcher because of the assumed role of glass in signifying status. Inspired by even older Egyptian art that depicted two metalworkers blowing into a fire with pipes, the archaeologists wondered whether small fires could be used to work glass. Sweating and getting stinky around the flames, they discovered they could reach high enough temperatures to form beads in smaller fires than those typically associated with glasswork. Such tiny fireplaces may have been missed by earlier excavators, Hodgkinson says, so perhaps glassworking was less exclusive than researchers have always thought. Maybe women and children were also involved, Hodgkinson speculates, reflecting on the many hands required to maintain the fire.

Rehren, too, has been rethinking whom glass was for, since Near Eastern merchant towns had so much of it and large amounts were shipped to Greece. “It doesn’t smell to me like a closely controlled royal commodity,” he says. “I’m convinced that we will, in 5, 10 years, be able to argue that glass was an expensive and specialist commodity, but not a tightly controlled one.” Elite, but not just for royalty.

Researchers are also starting to use materials science to track down a potential trade in colors. In 2020, Shortland and colleagues reported using isotopes — versions of elements that differ in their atomic weights — to trace the source of antimony, an element that can be used to create a yellow color or that can make glass opaque. “The vast majority of the very early glass — that’s the beginning of glassmaking — has antimony in it,” Shortland says. But antimony is quite rare, leading Shortland’s team to wonder where ancient glassmakers got it from.

The antimony isotopes in the glass, they found, matched ores containing antimony sulfide, or stibnite, from present-day Georgia in the Caucasus — one of the best pieces of evidence for an international trade in colors.

Researchers are continuing to examine the era of first glass. While Egypt has gotten a large share of the attention, there are many sites in the Near East that archaeologists could still excavate in search of new leads. And with modern-day restrictions on moving objects to other countries or even off-site for analysis, Hodgkinson and other archaeologists are working to apply portable methods in the field and develop collaborations with local researchers. Meanwhile, many old objects may yield new clues as they are analyzed again with more powerful techniques.

As our historical knowledge about glass continues to be shaped, Rehren cautions against certainty in the conclusions. Though archaeologists, aided by records and what’s known of cultural contexts, carefully infer the significance and saga of artifacts, only a fraction of a percent of the materials that once littered any given site even survives today. “You get conflicting information, conflicting ideas,” he says. All these fragments of information, of glass, “you can assemble in different ways to make different pictures.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

A Beautifully Baked Beef Dinner

(Culinary.net) Many families crave savory and delicious weeknight meals. After a long day of work and school, it’s time to gather around the table to share a mouthwatering meal and memories together.

For something truly wholesome, try this Beef Tenderloin with Roasted Cauliflower and Spinach Salad. It’s a full meal the whole family can enjoy, and you’ll be surprised at how easy it is to feed all the smiling faces.

This meal has layers of flavor and sneaks in a few vegetables like spinach and cauliflower, but even picky eaters can’t resist trying it.

Start with a beef tenderloin and drizzle it generously with olive oil. Add 1 teaspoon of pepper. Flip and repeat on the other side. Bake for 12 minutes at 475 F.

Next, add one head of cauliflower to a mixing bowl with five shallots cut into quarters. Add 2 tablespoons of olive oil; mix well with salt and pepper, to taste. Add this to the baking sheet with the beef tenderloin and bake 18-25 minutes.

While that’s cooking, add 3 tablespoons of olive oil to a mixing bowl with lemon juice, Dijon mustard, salt, pepper and baby spinach.

To plate, add baby spinach salad first then the cauliflower and shallot mixture and, finally, that juicy, perfectly cooked beef tenderloin. Garnish with cranberries for a splash of color.

This meal is satisfying and only requires some mixing bowls and a large sheet pan, making cleanup a breeze so you can focus on what matters most: time with your loved ones.

Find more recipes and savory main dishes at Culinary.net.

Beef Tenderloin with Roasted Cauliflower and Spinach Salad

Servings: 4-6

  • 1 beef tenderloin (4 pounds), wrapped with butcher’s twine
  • 9 tablespoons olive oil, divided
  • 4 teaspoons pepper, divided
  • 1 head cauliflower
  • 5 shallots, quartered
  • 2 teaspoons salt, divided
  • 3 tablespoons lemon juice
  • 2 teaspoons Dijon mustard
  • 1 package (5 1/2 ounces) baby spinach
  • dried cranberries, for garnish
  1. Heat oven to 475 F. Place beef on baking sheet. Rub 4 tablespoons olive oil and 2 teaspoons pepper into beef. Bake 12 minutes.
  2. In large bowl, toss cauliflower, shallots, 2 tablespoons olive oil, 1 teaspoon salt and 1 teaspoon pepper to combine. Scatter vegetables around beef and bake 18-25 minutes, or until desired doneness is reached. Allow meat to rest 15 minutes covered in aluminum foil.
  3. In medium bowl, whisk 3 tablespoons olive oil, lemon juice, mustard and remaining salt and pepper until combined. Add spinach; stir until combined.
  4. Serve by layering spinach topped with cauliflower and shallots then sliced tenderloin. Garnish with dried cranberries.
SOURCE:
Culinary.net

Saturday, April 8, 2023

April 26: Join a conversation about the teenage brain’s strengths and vulnerabilities, how adults can support teenagers with mental health issues, and how teens can help one another

April 26, 2023 | 12 p.m. Pacific | 3 p.m. Eastern | 7 p.m. UTC

REGISTER

It may be difficult for older adults to fathom, but today’s teenagers have never lived in a world where depression, anxiety and other mental health disorders weren’t rife — and on the rise — among their peers. Just a few decades ago, many psychiatrists thought depression was a condition that affected only adults. Now we know better: Researchers think more than half of mental health disorders, including depression, begin by age 14.

The teenage years are a dynamic period of brain development, when neuronal connections undergo intense remodeling and pruning. This flexibility allows teenagers to learn quickly and adapt to a changing environment, but it can also make them vulnerable. Many questions have yet to be answered, such as why the risk of mental illness increases severalfold during adolescence, why some teens appear more resilient to mental health problems than others, and when the brain should be considered “mature.”

On Wednesday, April 26, join leading neuroscientist BJ Casey and teen mental health advocate Diana Chao for a conversation with Knowable Magazine and Annual Reviews about the teen brain’s unique strengths and challenges, and why many experts have declared a global mental health emergency in children and adolescents. We’ll talk about what adults can do to support the teenagers in their lives — and crucially, how teens can help one another.

This event is the second in a series of events and articles exploring the brain across the lifespan. The series, “Inside the brain: A lifetime of change,” is supported by a grant from the Dana Foundation.

Register here for “The baby brain: Learning in leaps and bounds” and “The mature mind: Aging resiliently.” If you can’t attend the live events, please register to receive an email when the replays are available.

Speakers

BJ Casey

Neuroscientist, Barnard College-Columbia University

BJ Casey is the Christina L. Williams Professor of Neuroscience in the Department of Neuroscience and Behavior at Barnard College-Columbia University. She pioneered the use of functional magnetic resonance imaging to examine the developing human brain, particularly during adolescence. Her scientific discoveries have been published in top-tier journals, including Science, Nature Medicine, Nature Neuroscience and the Proceedings of the National Academy of Sciences. She has received the Association for Psychological Science Lifetime Achievement Mentor Award and the American Psychological Association Distinguished Scientific Contribution Award. She is an elected member of the American Academy of Arts and Sciences.

Diana Chao

Mental health activist and founder of Letters to Strangers

Diana Chao founded Letters to Strangers (L2S) when she was a sophomore in high school, after bipolar disorder and a blinding condition nearly ended her life. Today, L2S is the largest global youth-for-youth mental health nonprofit, impacting over 35,000 people annually on six continents and publishing the world’s first youth-for-youth mental health guidebook for free. Chao has been honored by two US presidents at the White House and named a 2021 Princess Diana Legacy Award winner, a 2020 L’Oréal Paris Women of Worth honoree and a 2019 Oprah Magazine Health Hero. Chao studied geosciences at Princeton University and works as a climate scientist for Kinetic Analysis Corporation.

Moderator

Emily Underwood

Science Content Producer, Knowable Magazine

Emily Underwood has been covering science for over a decade, including as a neuroscience reporter for Science. She has a master’s degree in science writing from Johns Hopkins University, and her reporting has won national awards, including a 2018 National Academies Keck Futures Initiatives Communication Award for magazine writing.

About

This event is part of an ongoing series of live events and science journalism from Knowable Magazine and Annual Reviews, a nonprofit publisher dedicated to synthesizing and integrating knowledge for the progress of science and the benefit of society.

The Dana Foundation is a private philanthropic organization dedicated to advancing neuroscience and society.


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Keep Your Car Safer and On the Road Longer

For many families, cars are huge, long-term investments second only to homes. Many are looking for ways to keep their cars on the road longer and make them safer to continue to serve their needs for years to come. 

No matter what or where you drive, you can keep your current vehicle looking and performing its best – and even update it to make it safer – with these tips inspired by eBay Motors’ Parts of America tour, a cross-country tour exploring unique car cultures across America.

Choose the Right Tires
If it’s time to trade your tires in, take the time to learn what options are available for your vehicle. For those in fair weather states, summer performance tires offer the best possible fuel efficiency all year round. Families living in milder states with occasional snow may consider all-season tires that trade efficiency for safety on a variety of surfaces. Finally, when it comes to driving in a winter wonderland, there is no substitute for specialized rubber and tread patterns – purchase a dedicated set of snow tires to ensure you’re safe all winter long. No matter your situation, a new set of tires can maximize safety and extend the life of your car.

New Look, New Ride
One way to breathe new life into your ride is to take it to the next level aesthetically. With enthusiast communities growing around nearly every make and model of vehicle, it’s easy to find parts to make your vision a reality. One of the most eye-catching additions is a new set of wheels, and there are thousands of brands, styles and sizes to choose from for every car. The addition of front, side and rear aerodynamics kits, such as front splitters or rear spoilers, can give any ride that athletic look. Upgrading stock headlight and taillight units – many fitted with high-visibility LEDs – has never been easier.

Upgrade Your Tech
Safety and creature comforts alike can add to your enjoyment of your vehicle, even if you’ve been driving it for several years. Many cars can be updated with the latest and greatest features available in new rides, including high-tech infotainment equipped with digital assistants, front and rear cameras, parking sensors, blind spot warning and even collision avoidance systems. As families look to extend their cars’ lifespans, these technology upgrades can make driving comfortable and safer.

Power and Performance
While looks and tech can bring new experiences to your car, no change has quite the same impact as improving its performance. Options abound for those looking to improve the power and handling of their ride, such as replacing the exhaust system, lowering springs, adding a coilover kit or conducting a full suspension replacement.

Find Purpose-Built Parts
Whether you’re an amateur DIY-er looking to maintain and make small upgrades to your vehicle or an expert looking to make bigger modifications, finding parts and accessories that fit your vehicle is crucial. From hard-to-find performance modifications to made-to-fit cosmetic accessories, eBay Motors offers parts and accessories for nearly any vehicle, skillset and project. The app offers an entire catalog of inventory with 122 million live parts listings at any given time, giving auto enthusiasts the ability to purchase from an expansive inventory from the convenience of a smartphone. What’s more, features like Buy It Now, My Garage and Fitment Finder enable users to easily search parts and accessories, verify the items fit their vehicle and make immediate purchases for what they need.

Skip the Wait
The global supply chain continues to recover from disruptions that have stretched back several years, and many customers are feeling the strain when it comes time to upgrade, maintain or repair their vehicles. Some shops around the country are quoting waiting times of several months just to have the right part delivered for service. However, families can find relief and get their car back on the road quicker by looking online to source their much-needed parts. In fact, many technicians work with customers to have parts delivered directly to their shop from online sources to expedite and simplify the process.

Auto enthusiasts can find more helpful tips, tricks and resources at ebaymotors.com.
SOURCE:
eBay Motors

A Family Favorite in Just 5 Minutes

(Culinary.net) Running short on time because of a busy schedule shouldn’t mean skipping out on your favorite desserts. In fact, it should be all the more reason to enjoy a sweet treat as a reward for all that hard work.

When you’re due for a bite into dark chocolate goodness, all it takes is a few minutes out of your day to make 5-Minute Dark Chocolate Cereal Bars. This quick and simple dessert makes it easy to celebrate the day’s accomplishments without added stress.

As a fun way for little ones to help in the kitchen, you can melt the butter, marshmallows and peanut butter together and stir in the cereal, then let the kiddos drizzle the key ingredient: melted chocolate. All that’s left to do is cut and serve, or pack a few off to school and work for an afternoon treat.

Find more seasonal dessert recipes at Culinary.net.

If you made this recipe at home, use #MyCulinaryConnection on your favorite social network to share your work.



5-Minute Dark Chocolate Cereal Bars

Recipe adapted from ScrummyLane.com
  • 4 tablespoons butter
  • 10 ounces marshmallows
  • 1/2 cup peanut butter
  • 6 cups cereal
  • 4 ounces milk chocolate, melted
  • 4 ounces dark chocolate, melted
  1. Heat saucepan over low heat. Add butter, marshmallows and peanut butter; stir to combine. Add cereal; mix until coated.
  2. Line 9-by-13-inch pan with parchment paper. Add cereal mixture to pan.
  3. In bowl, mix milk chocolate and dark chocolate. Drizzle chocolate over cereal mixture; spread evenly then allow to cool.
  4. Cut into bars and serve.
SOURCE:
Culinary.net

About Us

    Our site is always changing and growing. We change and construct live, so you may see changes happening; just refresh and keep going... We are just your normal, average locals who would like to bring information forward, connecting you with businesses and information here and around the world. We are more curious and seeking than opinionated, and that’s a little of who we are. You may not agree with everything you read here, but we believe it is important to see what others are writing about in the world, for it influences others. So if you’re ever talking to someone and they seem confident in their beliefs, maybe they are convinced by what they read or are being told. We urge you to do your own research before developing an opinion or reacting. We encourage anyone to THINK FOR THEMSELVES! BE FREE! BE OPEN! LEARN!

     We aim to be challenging and competitive: we will agree with many things, but we will not agree with everyone, every article or every conversation. Remaining open to another point of view helps us keep growing. To your best life and ours, we love you.

    We are all here right now together in this World and we believe WE ARE ONE.  We all think differently and have different views, likes, and dislikes. We can all agree to disagree and LIVE TOGETHER!

   Please enjoy the articles and information we present. Some may help you in your knowledge and some may anger you, but in the end we hope they will help make an even better world for all of us.

     Maybe you are close to home or far beyond our home. The world has so much to offer, and exploring it is one key to understanding.

   WE LOVE ALABAMA, SHELBY COUNTY AND ALL OUR SURROUNDING COMMUNITIES. WE BELIEVE WE HAVE THE BEST TO OFFER, AS DO MANY OTHER PLACES IN THE WORLD.

Email: team@shelbycountygazette.com

                                       Docendo discimus (“by teaching, we learn”)
               


How heat pumps of the 1800s are becoming the technology of the future

Innovative thinking has done away with problems that long dogged the electric devices — and both scientists and environmentalists are excited about the possibilities

It was an engineering problem that had bugged Zhibin Yu for years — but now he had the perfect chance to fix it. Stuck at home during the first UK lockdown of the Covid-19 pandemic, the thermal engineer suddenly had all the time he needed to refine the efficiency of heat pumps: electrical devices that, as their name implies, move heat from the outdoors into people’s homes.

The pumps are much more efficient than gas heaters, but standard models that absorb heat from the air are prone to icing up, which greatly reduces their effectiveness.

Yu, who works at the University of Glasgow, UK, pondered the problem for weeks. He read paper after paper. And then he had an idea. Most heat pumps waste some of the heat that they generate — and if he could capture that waste heat and divert it, he realized, that could solve the defrosting issue and boost the pumps’ overall performance. “I suddenly found a solution to recover the heat,” he recalls. “That was really an amazing moment.”

Yu’s idea is one of several recent innovations that aim to make 200-year-old heat pump technology even more efficient than it already is, potentially opening the door for much greater adoption of heat pumps worldwide. To date, only about 10 percent of space heating requirements around the world are met by heat pumps, according to the International Energy Agency (IEA). But due to the current energy crisis and growing pressure to reduce fossil fuel consumption in order to combat climate change, these devices are arguably more crucial than ever.

Since his 2020 lockdown brainstorming, Yu and his colleagues have built a working prototype of a heat pump that stores leftover heat in a small water tank. In a paper published in the summer of 2022, they describe how their design helps the heat pump to use less energy. Plus, by separately rerouting some of this residual warmth to part of the heat pump exposed to cold air, the device can defrost itself when required, without having to pause heat supply to the house.

The idea relies on the very principle by which heat pumps operate: If you can seize heat, you can use it. What makes heat pumps special is the fact that instead of just generating heat, they also capture heat from the environment and move it into your house — eventually transferring that heat to radiators or forced-air heating systems, for instance. This is possible thanks to the refrigerant that flows around inside a heat pump. When the refrigerant encounters heat — even a tiny amount in the air on a cold day — it absorbs that modicum of warmth.

A compressor then forces the refrigerant to a higher pressure, which raises its temperature to the point where it can heat your house. This works because compressing the gas does work on it: the molecules are pushed closer together and move faster, and faster-moving molecules mean a hotter gas. The refrigerant later expands again, cooling as it does so, and the cycle repeats. The entire cycle can run in reverse, too, allowing heat pumps to provide cooling when it’s hot in summer.

The magic of a heat pump is that it can move multiple kilowatt-hours of heat for each kWh of electricity it uses. Heat pump efficiencies are generally measured in terms of their coefficient of performance (COP). A COP of 3, for example, means 1 kWh of electricity yields 3 kWh of warmth, which is effectively 300 percent efficiency. The COP you get from a given device is not fixed: it falls as the gap between outdoor and indoor temperatures widens, which is why very cold weather is the hardest test for a heat pump.
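
To see the arithmetic at work, here is a minimal Python sketch of the COP bookkeeping, together with the textbook thermodynamic ceiling (the Carnot limit) that explains why a wider indoor-outdoor temperature gap drags efficiency down. The Carnot formula is standard physics rather than a figure from this article, and real devices achieve only a fraction of that ideal.

    # A minimal sketch of the COP arithmetic described above. The Carnot
    # limit is textbook thermodynamics, not a figure from the article;
    # real heat pumps reach only a fraction of it.

    def heat_delivered_kwh(electricity_kwh, cop):
        """Heat moved into the house per unit of electricity consumed."""
        return electricity_kwh * cop

    def carnot_cop_heating(indoor_c, outdoor_c):
        """Theoretical maximum heating COP between two temperatures."""
        t_hot = indoor_c + 273.15    # convert Celsius to kelvin
        t_cold = outdoor_c + 273.15
        return t_hot / (t_hot - t_cold)

    # The article's example: a COP of 3 turns 1 kWh of electricity
    # into 3 kWh of warmth.
    print(heat_delivered_kwh(1.0, 3.0))           # 3.0

    # The ideal ceiling shrinks as the indoor-outdoor gap widens,
    # which is why COP drops on colder days.
    print(round(carnot_cop_heating(21, 7), 1))    # 21.0 on a mild day
    print(round(carnot_cop_heating(21, -10), 1))  # 9.5 on a frigid day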

It’s a powerful concept, but also an old one. The British mathematician, physicist and engineer Lord Kelvin proposed using heat pump systems for space heating way back in 1852. The first heat pump was designed and built a few years later and used industrially to heat brine in order to extract salt from the fluid. In the 1950s, members of the British Parliament discussed heat pumps when coal stocks were running low. And in the years following the 1973-74 oil crisis, heat pumps were touted as an alternative to fossil fuels for heating. “Hope rests with the future heat pump,” one commentator wrote in the 1977 Annual Review of Energy.

Now the world faces yet another reckoning over energy supplies. When Russia, one of the world’s biggest sources of natural gas, invaded Ukraine in February 2022, the price of gas soared, which in turn shoved heat pumps into the spotlight because, with few exceptions, they run on electricity, not gas. The same month, environmentalist Bill McKibben wrote a widely shared blog post titled “Heat pumps for peace and freedom” in which, referring to the Russian president, he argued that the US could “peacefully punch Putin in the kidneys” by rolling out heat pumps on a massive scale, thereby lowering Americans’ dependence on fossil fuels. Heat pumps can draw power from domestic solar panels, for instance, or from a power grid supplied predominantly by renewables.

Running the devices on green electricity can help to fight climate change, too, notes Karen Palmer, an economist and senior fellow at Resources for the Future, an independent research organization in Washington, DC, who coauthored an analysis of policies to enhance energy efficiency in the 2018 Annual Review of Resource Economics. “Moving towards greater use of electricity for energy needs in buildings is going to have to happen, absent a technology breakthrough in something else,” she says.

The IEA estimates that, globally, heat pumps have the potential to reduce carbon dioxide emissions by at least 500 million metric tons in 2030, equivalent to the annual CO2 emissions produced by all the cars in Europe today.

Despite their long history and potential virtues, heat pumps have struggled to become commonplace in some countries. One reason is cost: The devices are substantially more expensive than gas heating units and, because natural gas has remained relatively cheap for decades, homeowners have had little incentive to switch.

There has also long been a perception that heat pumps won’t work as well in cold climates, especially in poorly insulated houses that require a lot of heat. In the UK, for example, where houses tend to be rather drafty, some homeowners have long considered gas boilers a safer bet because they can supply hotter water (around 140 to 160 degrees Fahrenheit) to radiators, which makes it easier to heat up a room. By contrast, heat pumps tend to be most efficient when heating water to around 100 degrees Fahrenheit.

The cold-climate problem is arguably less of an issue than some think, however, given that there are multiple modern air source devices on the market that work well even when outside temperatures drop as low as minus 10 degrees Fahrenheit. Norway, for example, is considered one of the world leaders in heat pump deployment. Palmer has a heat pump in her US home, along with a furnace as backup. “If it gets really cold, we can rely on the furnace,” she says.

Innovations in heat pump design are leading to units that are even more efficient, better suited to houses with low levels of insulation and — potentially — cheaper, too. For example, Yu says his and his colleagues’ novel air source heat pump design could improve the COP by between 3 percent and 10 percent, while costing less than existing heat pump designs with comparable functionality. They are now looking to commercialize the technology.

Yu’s work is innovative, says Rick Greenough, an energy systems engineer now retired from De Montfort University in the UK. “I must admit this is a method I hadn’t actually thought of,” he says.

And there are plenty more ideas afoot. Greenough, for instance, has experimented with storing heat in the ground during warmer months, where it can be exploited by a heat pump when the weather turns cool. His design uses a circulating fluid to transfer excess heat from solar hot-water panels into shallow boreholes in the soil. That raises the temperature of the soil by around 22 degrees Fahrenheit, to a maximum of roughly 66 degrees Fahrenheit, he says. Then, in the winter, a heat pump can draw out some of this stored heat to run more efficiently when the air gets colder. This technology is already on the market, offered by some installers in the UK, notes Greenough.

But most current heat pumps still only generate relatively low output temperatures, so owners of drafty homes may need to take on the added cost of insulation when installing a heat pump. Fortunately, a solution may be emerging: high-temperature heat pumps.

“We said, ‘Hey, why not make a heat pump that can actually one-on-one replace a gas boiler without having to really, really thoroughly insulate your house?’” says Wouter Wolfswinkel, program manager for business development at Swedish energy firm Vattenfall, which manufactures heat pumps. Vattenfall and its Dutch subsidiary Feenstra have teamed up to develop a high-temperature heat pump, expected to debut in 2023.

In their design, they use CO2 as a refrigerant. But the system’s hot, high-pressure operating conditions prevent the gas from condensing or otherwise cooling down easily, so the team had to find a way to lower the refrigerant’s temperature before it returns to the start of the loop, where it must be cool enough to absorb heat from the air once again. To this end, they added a “buffer” to the system: a water tank in which a layer of cooler water rests beneath hotter water above. The heat pump uses the cooler lower layer to adjust the temperature of the refrigerant as required, and it can send the hotter water at the top of the tank out to radiators at temperatures up to 185 degrees Fahrenheit.

The device is slightly less efficient than a conventional, lower-temperature heat pump, Wolfswinkel acknowledges, offering a COP of around 2.65 (265 percent) versus 3 (300 percent), depending on conditions. But that is still far better than a gas boiler, which is no more than 95 percent efficient; as long as electricity doesn’t cost more than roughly 2.8 times as much per kWh as gas (the ratio of those two efficiencies), the high-temperature heat pump is still cheaper to run. Moreover, the higher output temperature means that homeowners needn’t upgrade their insulation or upsize radiators right away, Wolfswinkel notes. This could help people make the transition to electrified heating more quickly.
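
To make that break-even arithmetic concrete, here is a small sketch using hypothetical placeholder prices (illustrative only, not figures from the article). The rule is simple: divide each fuel’s price per kWh by the efficiency of the device using it, and the smaller result wins.

    # A sketch of the running-cost comparison implied above. The prices
    # are hypothetical placeholders, not figures from the article.

    def cost_per_kwh_heat(fuel_price_per_kwh, efficiency):
        """Cost of delivering 1 kWh of heat; efficiency is the COP as a fraction."""
        return fuel_price_per_kwh / efficiency

    gas_boiler = cost_per_kwh_heat(fuel_price_per_kwh=0.10, efficiency=0.95)
    heat_pump = cost_per_kwh_heat(fuel_price_per_kwh=0.25, efficiency=2.65)

    print(f"gas boiler: {gas_boiler:.3f} per kWh of heat")  # ~0.105
    print(f"heat pump:  {heat_pump:.3f} per kWh of heat")   # ~0.094

    # Break-even: the heat pump is cheaper whenever electricity costs
    # less than 2.65 / 0.95, or roughly 2.8, times the gas price.

In this example each kWh of electricity costs two and a half times as much as gas, yet the heat pump still delivers heat slightly more cheaply, because every unit of electricity is leveraged 2.65-fold.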

A key test was whether Dutch homeowners would go for it. As part of a pilot trial, Vattenfall and Feenstra installed the heat pump in 20 households of different sizes in the town of Heemskerk, not far from Amsterdam. After a few years of testing, in June 2022 they gave homeowners the option of going back to their old gas boiler, which they had kept in their homes, or of using the high-temperature heat pump on a permanent basis. “All of them switched to the heat pump,” says Wolfswinkel.

In some situations, home-by-home installations of heat pumps might be less efficient than building one large system to serve a whole neighborhood. For about a decade, Star Renewable Energy, based in Glasgow, has been building district systems that draw warmth from a nearby river or sea inlet, including a district heating system connected to a Norwegian fjord. A Scandinavian fjord might not be the first thing that comes to mind if you say the word “heat” — but the water deep in the fjord actually holds a fairly steady temperature of 46 degrees Fahrenheit, which heat pumps can exploit.

Via a very long pipe, the district heating system draws in this water and uses it to warm the refrigerant, in this case ammonia. The refrigerant is then compressed to a pressure of 50 atmospheres, which raises its temperature to 250 degrees Fahrenheit. The hot refrigerant then passes its heat to water in the district heating loop, raising the temperature of that water to 195 degrees Fahrenheit. The sprawling system provides 85 percent of the hot water needed to heat buildings in the city of Drammen.

“That type of thing is very exciting,” says Greenough.

Not every home will be suitable for a heat pump. And not every budget can accommodate one, either. Yu himself says that the cost of replacing the gas boiler in his own home remains prohibitive. But it’s something he dreams of doing in the future. With ever-improving efficiencies, and rising sales in multiple countries, heat pumps are only getting harder for their detractors to dismiss. “Eventually,” says Yu, “I think everyone will switch to heat pumps.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.
