Thursday, September 25, 2025

2 newly launched NASA missions will help scientists understand the influence of the Sun, both from up close and afar

NASA’s IMAP mission is one of two launching in September 2025. NASA/Princeton University/Patrick McPike
Ryan French, University of Colorado Boulder

Even from 93 million miles (150 million kilometers) away, activity on the Sun can have adverse effects on technological systems on Earth. Solar flares – intense bursts of energy in the Sun’s atmosphere – and coronal mass ejections – eruptions of plasma from the Sun – can affect the communications, satellite navigation and power grid systems that keep society functioning.

On Sept. 24, 2025, NASA launched two new missions to study the influence of the Sun on the solar system, with further missions scheduled for 2026 and beyond.

I’m an astrophysicist who researches the Sun, which makes me a solar physicist. Solar physics is part of the wider field of heliophysics, which is the study of the Sun and its influence throughout the solar system.

The field investigates the conditions at a wide range of locations on and around the Sun – from its interior, surface and atmosphere to the constant stream of particles flowing outward from it, called the solar wind. It also investigates the interaction between the solar wind and the atmospheres and magnetic fields of planets.

The importance of space weather

Heliophysics intersects heavily with space weather, which is the influence of solar activity on humanity’s technological infrastructure.

In May 2024, scientists observed the strongest space weather event since 2003. Several Earth-directed coronal mass ejections erupted from the Sun, causing an extreme geomagnetic storm as they interacted with Earth’s magnetic field.

This event produced a beautiful light show of the aurora across the world, providing a view of the northern and southern lights to tens of millions of people at lower latitudes for the first time.

However, geomagnetic storms come with a darker side. The same event set off overheating alarms in power grids around the world and caused a loss in satellite navigation that may have cost the U.S. agricultural industry half a billion dollars.

Yet this was far from the worst space weather event on record: stronger events in 1989 and 2003 knocked out power grids in Canada and Sweden.

But even those events were small compared with the largest space weather event in recorded history, which took place in September 1859. Known as the Carrington Event, it is considered the worst-case scenario for extreme space weather. It produced widespread aurora, visible even close to the equator, and disrupted telegraph machines.

If an event like the Carrington Event occurred today, it could cause widespread power outages, losses of satellites, days of grounded flights and more. Because space weather can be so destructive to human infrastructure, scientists want to better understand these events.

NASA’s heliophysics missions

NASA has a vast suite of instruments in space that aim to better understand our heliosphere, the region of the solar system in which the Sun has significant influence. The most famous of these missions include the Parker Solar Probe, launched in 2018, the Solar Dynamics Observatory, launched in 2010, the Solar and Heliospheric Observatory, launched in 1995, and the Polarimeter to Unify the Corona and Heliosphere, launched on March 11, 2025.

The most recent additions to NASA’s collection of heliophysics missions launched on Sept. 24, 2025: Interstellar Mapping and Acceleration Probe, or IMAP, and the Carruthers Geocorona Observatory. Together, these instruments will collect data across a wide range of locations throughout the solar system.

IMAP is en route to a region in space called Lagrange Point 1. This is a location 1% closer to the Sun than Earth, where the balanced gravitational pulls of the Earth and Sun allow spacecraft to stay in a stable orbit.
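
That 1% figure is easy to sanity-check with the standard restricted three-body approximation for the L1 distance. Here is a quick back-of-the-envelope sketch in Python, using textbook constants rather than anything from the mission documentation:

    # Approximate distance of Lagrange Point 1 from Earth:
    # r ~ R * (m_earth / (3 * m_sun))**(1/3), where R is the Earth-Sun distance.
    m_earth = 5.972e24   # kg
    m_sun = 1.989e30     # kg
    R = 1.496e8          # Earth-Sun distance in km

    r = R * (m_earth / (3 * m_sun)) ** (1 / 3)
    print(f"L1 sits ~{r:,.0f} km sunward of Earth ({100 * r / R:.2f}% of the way)")
    # Prints roughly 1.5 million km - about 1% of the Earth-Sun distance.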

IMAP contains 10 scientific instruments with varying science goals, ranging from measuring the solar wind in real time to improve forecasting of space weather that could arrive at Earth, to mapping the outer boundary between the heliosphere and interstellar space.

This latter goal is unique – scientists have never attempted it before. IMAP will achieve it by measuring the origins of energetic neutral atoms, a type of uncharged particle. These particles are produced by plasma, a charged gas of electrons and protons, throughout the heliosphere. By tracking the origins of incoming energetic neutral atoms, IMAP will build a map of the heliosphere.

The Carruthers Geocorona Observatory is heading to the same Lagrange Point 1 orbit as IMAP, but with a very different science target. Instead of mapping the heliosphere all the way out to its edge, the observatory will watch Earth’s exosphere. The exosphere is the uppermost layer of Earth’s atmosphere, beginning about 375 miles (600 kilometers) above the ground. It borders outer space.

Specifically, the mission will observe ultraviolet light emitted by hydrogen within the exosphere, called the geocorona. The Carruthers Geocorona Observatory has two primary objectives. The first relates directly to space weather.

The observatory will measure how the exosphere – our atmosphere’s first line of defense from the Sun – changes during extreme space weather events. The second objective relates more to Earth sciences: The observatory will measure how water is transported from Earth’s surface up into the exosphere.

The first image of Earth’s outer atmosphere, the geocorona, taken from a telescope designed and built by the late American space physicist and engineer George Carruthers. The telescope took the image while on the Moon during the Apollo 16 mission in 1972. G. Carruthers (NRL) et al./Far UV Camera/NASA/Apollo 16, CC BY

Looking forward

IMAP and the Carruthers Geocorona Observatory are two heliophysics missions researching very different parts of the heliosphere. In the coming years, future NASA missions will launch to measure the object at the center of heliophysics – the Sun.

In 2026, the Sun Coronal Ejection Tracker is scheduled to launch. It is a small satellite the size of a shoebox – called a CubeSat – that aims to study how coronal mass ejections change as they travel through the Sun’s atmosphere.

In 2027, NASA plans to launch the much larger Multi-slit Solar Explorer to capture high-resolution measurements of the Sun’s corona using state-of-the-art instrumentation. This mission will work to understand the origins of solar flares, coronal mass ejections and heating within the Sun’s atmosphere.

Ryan French, Research Scientist, Laboratory for Atmospheric and Space Physics, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. 

Friday, September 5, 2025

Lessons from sports psychology research


Scientists are probing the head games that influence athletic performance, from coaching to coping with pressure

Since the early years of this century, it has been commonplace for computerized analyses of athletic statistics to guide a baseball manager’s choice of pinch hitter, a football coach’s decision to punt or pass, or a basketball team’s debate over whether to trade a star player for a draft pick.

But many sports experts who actually watch the games know that the secret to success is not solely in computer databases, but also inside the players’ heads. So perhaps psychologists can offer as much insight into athletic achievement as statistics gurus do.

Sports psychology has, after all, been around a lot longer than computer analytics. Psychological studies of sports appeared as early as the late 19th century. During the 1970s and ’80s, sports psychology became a fertile research field. And within the last decade or so, sports psychology research has exploded, as scientists have explored the nuances of everything from the pursuit of perfection to the harms of abusive coaching.

“Sport pervades cultures, continents, and indeed many facets of daily life,” write Mark Beauchamp, Alan Kingstone and Nikos Ntoumanis, authors of an overview of sports psychology research in the 2023 Annual Review of Psychology.

Their review surveys findings from nearly 150 papers investigating various psychological influences on athletic performance and success. “This body of work sheds light on the diverse ways in which psychological processes contribute to athletic strivings,” the authors write. Such research has the potential not only to enhance athletic performance, they say, but also to provide insights into psychological influences on success in other realms, from education to the military. Psychological knowledge can aid competitive performance under pressure, help evaluate the benefit of pursuing perfection and assess the pluses and minuses of high self-confidence.

Confidence and choking

In sports, high self-confidence (technical term: elevated self-efficacy belief) is generally considered to be a plus. As baseball pitcher Nolan Ryan once said, “You have to have a lot of confidence to be successful in this game.” Many a baseball manager would agree that a batter who lacks confidence against a given pitcher is unlikely to get to first base.

In fact, a lot of psychological research supports that view, suggesting that encouraging self-confidence is a beneficial strategy. Yet while confident athletes do seem to perform better than those afflicted with self-doubt, some studies hint that for a given player, excessive confidence can be detrimental. Artificially inflated confidence, unchecked by honest feedback, may cause players to “fail to allocate sufficient resources based on their overestimated sense of their capabilities,” Beauchamp and colleagues write. In other words, overconfidence may result in underachievement.

Other work shows that high confidence is usually most useful in the most challenging situations (such as attempting a 60-yard field goal), while not helping as much for simpler tasks (like kicking an extra point).

Of course, the ease of kicking either a long field goal or an extra point depends a lot on the stress of the situation. With time running out and the game on the line, a routine play can become an anxiety-inducing trial by fire. Psychological research, Beauchamp and coauthors report, has clearly established that athletes often exhibit “impaired performance under pressure-invoking situations” (technical term: “choking”).

In general, stress impairs not only the guidance of movements but also perceptual ability and decision-making. On the other hand, it’s also true that certain elite athletes perform best under high stress. “There is also insightful evidence that some of the most successful performers actually seek out, and thrive on, anxiety-invoking contexts offered by high-pressure sport,” the authors note. Just ask Michael Jordan or LeBron James.

Many studies have investigated the psychological coping strategies that athletes use to maintain focus and ignore distractions in high-pressure situations. One popular method is a technique known as the “quiet eye.” A basketball player attempting a free throw is typically more likely to make it by maintaining “a longer and steadier gaze” at the basket before shooting, studies have demonstrated.

“In a recent systematic review of interventions designed to alleviate so-called choking, quiet-eye training was identified as being among the most effective approaches,” Beauchamp and coauthors write.

Another common stress-coping method is “self-talk,” in which players utter instructional or motivational phrases to themselves in order to boost performance. Saying “I can do it” or “I feel good” can self-motivate a marathon runner, for example. Saying “eye on the ball” might help a baseball batter get a hit.

Researchers have found moderate benefits of self-talk strategies for both novices and experienced athletes, Beauchamp and colleagues report. Various studies suggest that self-talk can increase confidence, enhance focus, control emotions and initiate effective actions.

Moderate performance benefits have also been reported for other techniques for countering stress, such as biofeedback, and possibly meditation and relaxation training.

“It appears that stress regulation interventions represent a promising means of supporting athletes when confronted with performance-related stressors,” Beauchamp and coauthors conclude.

Pursuing athletic perfection

Of course, sports psychology encompasses many other issues besides influencing confidence and coping with pressure. Many athletes set a goal of attaining perfection, for example, but such striving can induce detrimental psychological pressures. One analysis found that athletes pursuing purely personal high standards generally achieved superior performance. But when perfectionism was motivated by fear of criticism from others, performance suffered.

Similarly, while some coaching strategies can aid a player’s performance, several studies have shown that abusive coaching can detract from performance, even for the rest of an athlete’s career.

Beauchamp and his collaborators conclude that a large suite of psychological factors and strategies can aid athletic success. And these factors may well be applicable to other areas of human endeavor where choking can impair performance (say, while performing brain surgery or flying a fighter jet).

But the authors also point out that researchers need to consider the adversarial nature of competition, which shapes performance in sports. A pitcher’s psychological strategies that are effective against most hitters might not fare so well against Shohei Ohtani, for instance.

Besides that, sports psychology studies (much like computer-based analytics) rely on statistics. As Adolphe Quetelet, a pioneer of social statistics, emphasized in the 19th century, statistics do not define any individual — average life expectancy cannot tell you when any given person will die. On the other hand, he noted, no single exceptional case invalidates the general conclusions from sound statistical analysis.

Sports are, in fact, all about the quest of the individual (or a team) to defeat the opposition. Success often requires defying the odds — which is why gambling on athletic events is such a big business. Sports consist of contests between the averages and the exceptions, and neither computer analytics nor psychological science can tell you in advance who is going to win. That’s why they play the games.

Knowable 

Monday, September 1, 2025

New forms of steel for stronger, lighter cars

Automakers are tweaking production processes to create a slew of new steels with just the right properties, allowing them to build cars that are both safer and more fuel-efficient

Like many useful innovations, it seems, the creation of high-quality steel by Indian metallurgists more than two thousand years ago may have been a happy confluence of clever workmanship and dumb luck.

Firing chunks of iron with charcoal in a special clay container produced something completely new, which the Indians called wootz. Roman armies were soon wielding wootz steel swords to terrify and subdue the wild, hairy tribes of ancient Europe.

Twenty-four centuries later, automakers are relying on electric arc furnaces, hot stamping machines and quenching and partitioning processes that the ancients could never have imagined. These approaches are yielding new ways to tune steel to protect soft human bodies when vehicles crash into each other, as they inevitably do — while curbing car weights to reduce their deleterious impact on the planet.

“It is a revolution,” says Alan Taub, a University of Michigan engineering professor with many years in the industry. The new steels, dozens of varieties and counting, combined with lightweight polymers and carbon fiber-spun interiors and underbodies, hark back to the heady days at the start of the last century when, he says, “Detroit was Silicon Valley.”

Such materials can reduce the weight of a vehicle by hundreds of pounds — and every pound of excess weight that is shed saves roughly $3 in fuel costs over the lifetime of the car, so the economics are hard to deny. The new maxim, Taub says, is “the right material in the right place.”

The transition to battery-powered vehicles underscores the importance of these new materials. Electric vehicles may not belch pollution, but they are heavy — the Volvo XC40 Recharge, for example, is 33 percent heavier than the gas version (and would be heavier still if the steel surrounding passengers were as bulky as it used to be). Heavy can be dangerous.

“Safety, especially when it comes to new transportation policies and new technologies, cannot be overlooked,” Jennifer Homendy, chief of the National Transportation Safety Board, told the Transportation Research Board in 2023. Plus, reducing the weight of an electric vehicle by 10 percent delivers roughly 14 percent improvement in range.
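
Those rules of thumb – roughly $3 saved per pound over a vehicle’s lifetime, and about 14 percent more range per 10 percent of mass removed from an EV – make for quick arithmetic. The sketch below applies them to a hypothetical 300-pound weight cut; the curb weight and base range are assumed figures for illustration, not numbers from the article:

    # Rules of thumb quoted above, applied to assumed (illustrative) numbers.
    saving_per_lb = 3.0      # lifetime fuel savings in dollars per pound shed
    pounds_removed = 300
    print(f"lifetime fuel savings ~ ${saving_per_lb * pounds_removed:,.0f}")

    # For an EV: ~14% more range per 10% mass reduction, treated linearly.
    curb_weight_lb = 4800    # assumed EV curb weight
    base_range_mi = 250      # assumed EV range
    gain = 1.4 * (pounds_removed / curb_weight_lb)
    print(f"estimated range ~ {base_range_mi * (1 + gain):.0f} miles")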

As recently as the 1960s, the steel cage around passengers was made of what automakers call soft steel. The armor from Detroit’s Jurassic period was not much different from what Henry Ford had introduced decades earlier. It was heavy and there was a lot of it.

With the 1965 publication of Ralph Nader’s Unsafe at Any Speed: The Designed-In Dangers of the American Automobile, big automakers realized they could no longer pursue speed and performance exclusively. The oil embargos of the 1970s only hastened the pace of change: Auto steel now had to be both stronger and lighter, requiring less fuel to push around.

In response, over the past 60 years, like chefs operating a sous vide machine to produce the perfect bite, steelmakers — their cookers arc furnaces reaching thousands of degrees Fahrenheit, with robots doing the cooking — have created a vast variety of steels to match every need. There are high-strength, hardened steels for the chassis; corrosion-resistant stainless steels for side panels and roofs; and highly stretchable metals in bumpers to absorb impacts without crumpling.

Tricks with the steel

Most steel is more than 98 percent iron. It is the other couple of percent — sometimes only hundredths of a single percent, in the case of metals added to confer desired properties — that make the difference. Just as important are treatment methods: the heating, cooling and processing, such as rolling the sheets prior to forming parts. Modifying each, sometimes by only seconds, changes the metal’s structure to yield different properties. “It’s all about playing tricks with the steel,” says John Speer, director of the Advanced Steel Processing and Products Research Center at the Colorado School of Mines.

At the most basic level, the properties of steel are about microstructure: the arrangement of different types, or phases, of steel in the metal. Some phases are harder, while others confer ductility, a measure of how much the metal can be bent and twisted out of shape without shearing and creating jagged edges that penetrate and tear squishy human bodies. At the atomic level, there are principally four phases of auto steel, including the hardest yet most brittle, called martensite, and the more ductile austenite. Carmakers can vary these by manipulating the times and temperatures of the heating process to produce the properties they want.

Academic researchers and steelmakers, working closely with automakers, have developed three generations of what is now called advanced high-strength steel. The first, adopted in the 1990s and still widely employed, had a good combination of strength and ductility. A second generation used more exotic alloys to achieve even greater ductility, but those steels proved expensive and challenging to manufacture.

The third generation, which Speer says is beginning to make its way onto the factory floor, uses heating and cooling techniques to produce steels that are stronger and more formable than the first generation; nearly ten times as strong as common steels of the past; and much cheaper (though less ductile) than second-generation steels.

Steelmakers have learned that cooling time is a critical factor in creating the final arrangements of atoms and therefore the properties of the steel. The most rapid cooling, known as quenching, freezes and stabilizes the internal structure before it undergoes further change during the hours or days it could otherwise take to reach room temperature.

One of the strongest types of modern auto steel — used in the most critical structural components, such as side panels and pillars — is made by superheating the metal with boron and manganese to a temperature above 850 degrees Celsius. After becoming malleable, the steel is transferred within 10 seconds to a die, or form, where the part is shaped and rapidly cooled.

In one version of what is known as transformation-induced plasticity, the steel is heated to a high temperature, cooled to a lower temperature and held there for a time and then rapidly quenched. This produces islands of austenite surrounded by a matrix of softer ferrite, with regions of harder bainite and martensite. This steel can absorb a large amount of energy without fracturing, making it useful in bumpers and pillars.

Recipes can be further tweaked by the use of various alloys. Henry Ford was employing alloys of steel and vanadium more than a century ago to improve the performance of steel in his Model T, and alloy recipes continue to improve today. One modern example of the use of lighter metals in combination with steel is the Ford Motor Company’s aluminum-intensive F-150 truck, the 2015 version weighing nearly 700 pounds less than the previous model.

A process used in conjunction with new materials is tube hydroforming, in which a metal is bent into complex shapes by the high-pressure injection of water or other fluids into a tube, expanding it into the shape of a surrounding die. This allows parts to be made without welding two halves together, saving time and money. A Corvette aluminum frame rail, the largest hydroformed part in the world, saved 20 percent in mass from the steel rail it replaced, according to Taub, who coauthored a 2019 article on automotive lightweighting in the Annual Review of Materials Research.

New alloys

More recent introductions are alloys such as those using titanium and particularly niobium, which increase strength by stabilizing a metal’s microstructure. In a 2022 paper, Speer called the introduction of niobium “one of the most important physical metallurgy developments of the 20th century.”

One tool now shortening the distance between trial and error is the computer. “The idea is to use the computer to develop materials faster than through experimentation,” Speer says. New ideas can now be tested down to the atomic level without workmen bending over a bench or firing up a furnace.

The ever-continuing search for better materials and processes led engineer Raymond Boeman and colleagues to found the Institute for Advanced Composites Manufacturing Innovation (IACMI) in 2015, with a $70 million federal grant. Also known as the Composites Institute, it is a place where industry can develop, test and scale up new processes and products.

“The field is evolving in a lot of ways,” says Boeman, who now directs the institute’s research on upscaling these processes. IACMI has been working on finding more climate-friendly replacements for conventional plastics such as the widely used polypropylene. In 1960, less than 100 pounds of plastic were incorporated into the typical vehicle. By 2017, the figure had risen to nearly 350 pounds, because plastic is cheap to make and has a high strength-to-weight ratio, making it ideal for automakers trying to save on weight.

By 2019, according to Taub, 10-15 percent of a typical vehicle was made of polymers and composites, everything from seat components to trunks, door parts and dashboards. And when those cars reach the end of their lives, their plastic and other difficult-to-recycle materials — known as automotive shredder residue, 5 million tons of it — end up in landfills, or, worse, in the wider environment.

Researchers are working hard to develop stronger, lighter and more environmentally friendly plastics. At the same time, new carbon fiber products are enabling these lightweight materials to be used even in load-bearing places such as structural underbody parts, further reducing the amount of heavy metal used in auto bodies.

Clearly, work remains to make autos less of a threat, both to human bodies and the planet those bodies travel over every day, to work and play. But Taub says he is optimistic about Detroit’s future and the industry’s ability to solve the problems that came with the end of the horse-and-buggy days. “I tell students they will have job security for a long time.”

Knowable Magazine

Friday, August 29, 2025

I’m an astrophysicist mapping the universe with data from the Chandra X-ray Observatory − clear, sharp photos help me study energetic black holes

NASA’s Chandra X-ray Observatory detects X-ray emissions from astronomical events. NASA/CXC & J. Vaughan
Giuseppina Fabbiano, Smithsonian Institution

When a star is born or dies, or when any other very energetic phenomenon occurs in the universe, it emits X-rays, which are high-energy light particles that aren’t visible to the naked eye. These X-rays are the same kind that doctors use to take pictures of broken bones inside the body. But instead of looking at the shadows produced by the bones stopping X-rays inside of a person, astronomers detect X-rays flying through space to get images of events such as black holes and supernovae.

Images and spectra – charts showing the distribution of light across different wavelengths from an object – are the two main ways astronomers investigate the universe. Images tell them what things look like and where certain phenomena are happening, while spectra tell them how much energy the photons, or light particles, they are collecting have. Spectra can also clue them in to how the events that emitted those photons formed. When studying complex objects, astronomers need both imaging and spectra.

Scientists and engineers designed the Chandra X-ray Observatory to detect these X-rays. Since 1999, Chandra’s data has given astronomers incredibly detailed images of some of the universe’s most dramatic events.

The Chandra spacecraft and its components. NASA/CXC/SAO & J.Vaughan

Stars forming and dying create supernova explosions that send chemical elements out into space. Chandra watches as gas and stars fall into the deep gravitational pulls of black holes, and it bears witness as gas that’s a thousand times hotter than the Sun escapes galaxies in explosive winds. It can see when the gravity of huge masses of dark matter trap that hot gas in gigantic pockets.

On the left is the Cassiopeia A supernova. The image is about 19 light years across, and different colors in the image identify different chemical elements (red indicates silicon, yellow indicates sulfur, cyan indicates calcium, purple indicates iron and blue indicates high energy). The point at the center could be the neutron star remnant of the exploded star. On the right are the colliding ‘Antennae’ galaxies, which form a gigantic structure about 30,000 light years across. Chandra X-ray Center

NASA designed Chandra to orbit around the Earth because it would not be able to see any of this activity from Earth’s surface. Earth’s atmosphere absorbs X-rays coming from space, which is great for life on Earth because these X-rays can harm biological organisms. But it also means that even if NASA placed Chandra on the highest mountaintop, it still wouldn’t be able to detect any X-rays. NASA needed to send Chandra into space.

I am an astrophysicist at the Smithsonian Astrophysical Observatory, part of the Center for Astrophysics | Harvard and Smithsonian. I’ve been working on Chandra since before it launched 25 years ago, and it’s been a pleasure to see what the observatory can teach astronomers about the universe.

Supermassive black holes and their host galaxies

Astronomers have found supermassive black holes, which have masses of 10 million to 100 million times that of our Sun, in the centers of all galaxies. These supermassive black holes are mostly sitting there peacefully, and astronomers can detect them by looking at the gravitational pull they exert on nearby stars.

But sometimes, stars or clouds fall into these black holes, which activates them and makes the region close to the black hole emit lots of X-rays. Once activated, they are called active galactic nuclei, AGN, or quasars.

My colleagues and I wanted to better understand what happens to the host galaxy once its black hole turns into an AGN. We picked one galaxy, ESO 428-G014, to look at with Chandra.

An AGN can outshine its host galaxy, which means that more light comes from the AGN than all the stars and other objects in the host galaxy. The AGN also deposits a lot of energy within the confines of its host galaxy. This effect, which astronomers call feedback, is an important ingredient for researchers who are building simulations that model how the universe evolves over time. But we still don’t quite know how much of a role the energy from an AGN plays in the formation of stars in its host galaxy.

Luckily, images from Chandra can provide important insight. I use computational techniques to build and process images from the observatory that can tell me about these AGNs.

Getting the ultimate Chandra resolution. From left to right, you see the raw image, the same image at a higher resolution and the image after applying a smoothing algorithm. G. Fabbiano

The active supermassive black hole in ESO 428-G014 produces X-rays that illuminate a large area, extending as far as 15,000 light years away from the black hole. The basic image that I generated of ESO 428-G014 with Chandra data tells me that the region near the center is the brightest, and that there is a large, elongated region of X-ray emission.

The same data, at a slightly higher resolution, shows two distinct regions with high X-ray emissions. There’s a “head,” which encompasses the center, and a slightly curved “tail,” extending down from this central region.

I can also process the data with an adaptive smoothing algorithm that brings the image into an even higher resolution and creates a clearer picture of what the galaxy looks like. This shows clouds of gas around the bright center.
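
To give a flavor of how adaptive smoothing works, here is a simplified sketch in Python – an illustration of the general technique, not the actual pipeline used on the ESO 428-G014 data – that smooths a sparse counts image at several scales and keeps, at each pixel, the finest scale that is bright enough to be significant:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(42)

    # Simulated sparse X-ray counts image with a bright central "nucleus."
    image = rng.poisson(lam=0.2, size=(128, 128)).astype(float)
    image[60:68, 60:68] += rng.poisson(lam=50.0, size=(8, 8))

    # Smooth at several scales, from fine to coarse.
    scales = [0.5, 1.0, 2.0, 4.0, 8.0]
    stack = [gaussian_filter(image, sigma=s) for s in scales]

    # Start from the coarsest map, then let finer scales overwrite wherever
    # their local value is bright enough to be significant. (A fixed
    # threshold here; real algorithms use local signal-to-noise.)
    threshold = 1.0
    result = stack[-1].copy()
    for smoothed in reversed(stack):
        mask = smoothed >= threshold
        result[mask] = smoothed[mask]

    print(result.min(), result.max())

The effect is that the bright nucleus keeps its sharp detail while faint extended emission, invisible pixel by pixel, emerges from the noise.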

My team has been able to see some of the ways the AGN interacts with the galaxy. The images show nuclear winds sweeping the galaxy, dense clouds and interstellar gas reflecting X-ray light, and jets shooting out radio waves that heat up clouds in the galaxy.

These images are teaching us how this feedback process operates in detail and how to measure how much energy an AGN deposits. These results will help researchers produce more realistic simulations of how the universe evolves.

The next 25 years of X-ray astronomy

The year 2024 marks the 25th year since Chandra started making observations of the sky. My colleagues and I continue to depend on Chandra to answer questions about the origin of the universe that no other telescope can.

By providing X-ray data, Chandra supplements information from the Hubble Space Telescope and the James Webb Space Telescope to give astronomers unique answers to open questions in astrophysics, such as where the supermassive black holes found at the centers of all galaxies came from.

For this particular question, astronomers used Chandra to observe a faraway galaxy first spotted by the James Webb Space Telescope. The light Webb captured left this galaxy 13.4 billion years ago, when the universe was young. Chandra’s X-ray data revealed a bright supermassive black hole in the galaxy and suggested that supermassive black holes may form from collapsing gas clouds in the early universe.

Sharp imaging has been crucial for these discoveries. But Chandra is expected to last only another 10 years. To keep the search for answers going, astronomers will need to start designing a “super Chandra” X-ray observatory that could succeed Chandra in future decades, though NASA has not yet announced any firm plans to do so.

Giuseppina Fabbiano, Senior Astrophysicist, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license.

Sunday, April 27, 2025

How does your brain create new memories? Neuroscientists discover ‘rules’ for how neurons encode new information

Neurons that fire together sometimes wire together. PASIEKA/Science Photo Library via Getty Images
William Wright, University of California, San Diego and Takaki Komiyama, University of California, San Diego

Every day, people are constantly learning and forming new memories. When you pick up a new hobby, try a recipe a friend recommended or read the latest world news, your brain stores many of these memories for years or decades.

But how does your brain achieve this incredible feat?

In our newly published research in the journal Science, we have identified some of the “rules” the brain uses to learn.

Learning in the brain

The human brain is made up of billions of nerve cells. These neurons conduct electrical pulses that carry information, much like how computers use binary code to carry data.

These electrical pulses are communicated to other neurons through connections called synapses. Individual neurons have branching extensions known as dendrites that can receive thousands of electrical inputs from other cells. Dendrites transmit these inputs to the main body of the neuron, which then integrates all of these signals to generate its own electrical pulses.

It is the collective activity of these electrical pulses across specific groups of neurons that form the representations of different information and experiences within the brain.

Neurons are the basic units of the brain. OpenStax, CC BY-SA

For decades, neuroscientists have thought that the brain learns by changing how neurons are connected to one another. As new information and experiences alter how neurons communicate with each other and change their collective activity patterns, some synaptic connections are made stronger while others are made weaker. This process of synaptic plasticity is what produces representations of new information and experiences within your brain.

In order for your brain to produce the correct representations during learning, however, the right synaptic connections must undergo the right changes at the right time. The “rules” that your brain uses to select which synapses to change during learning – what neuroscientists call the credit assignment problem – have remained largely unclear.

Defining the rules

We decided to monitor the activity of individual synaptic connections within the brain during learning to see whether we could identify activity patterns that determine which connections would get stronger or weaker.

To do this, we genetically encoded biosensors in the neurons of mice that would light up in response to synaptic and neural activity. We monitored this activity in real time as the mice learned a task that involved pressing a lever to a certain position after a sound cue in order to receive water.

We were surprised to find that the synapses on a neuron don’t all follow the same rule. For example, scientists have often thought that neurons follow what are called Hebbian rules, where neurons that consistently fire together, wire together. Instead, we saw that synapses on different locations of dendrites of the same neuron followed different rules to determine whether connections got stronger or weaker. Some synapses adhered to the traditional Hebbian rule where neurons that consistently fire together strengthen their connections. Other synapses did something different and completely independent of the neuron’s activity.
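
To make the distinction concrete, here is a toy sketch in Python – our illustration only, not the model or analysis from the paper – of one neuron with two groups of synapses that update under different rules, one Hebbian and one driven purely by local presynaptic activity:

    import numpy as np

    rng = np.random.default_rng(0)
    n_syn, eta = 10, 0.01
    w_hebb = np.full(n_syn, 0.5)    # group following a Hebbian rule
    w_local = np.full(n_syn, 0.5)   # group updated independently of output

    for _ in range(1000):
        x = (rng.random(2 * n_syn) < 0.2).astype(float)  # presynaptic spikes
        x_hebb, x_local = x[:n_syn], x[n_syn:]
        y = float(x_hebb @ w_hebb + x_local @ w_local > 2.0)  # output spike

        # Hebbian: potentiate where presynaptic input coincided with the
        # neuron's output, with a small decay so weights stay bounded.
        w_hebb += eta * x_hebb * y - eta * 0.1 * w_hebb
        # Non-Hebbian: update depends only on local presynaptic activity,
        # completely ignoring whether the neuron fired.
        w_local += eta * x_local * (1.0 - w_local)

    print("Hebbian weights:   ", w_hebb.round(2))
    print("output-independent:", w_local.round(2))

After training, the two groups of weights diverge even though they sit on the same model neuron – a crude analogue of synapses on different dendrites following different rules.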

Our findings suggest that neurons, by simultaneously using two different sets of rules for learning across different groups of synapses, rather than a single uniform rule, can more precisely tune the different types of inputs they receive to appropriately represent new information in the brain.

In other words, by following different rules in the process of learning, neurons can multitask and perform multiple functions in parallel.

Future applications

This discovery provides a clearer understanding of how the connections between neurons change during learning. Given that most brain disorders, including degenerative and psychiatric conditions, involve some form of malfunctioning synapses, this has potentially important implications for human health and society.

For example, depression may develop from an excessive weakening of the synaptic connections within certain areas of the brain that make it harder to experience pleasure. By understanding how synaptic plasticity normally operates, scientists may be able to better understand what goes wrong in depression and then develop therapies to more effectively treat it.

Changes to connections in the amygdala – colored green – are implicated in depression. William J. Giardino/Luis de Lecea Lab/Stanford University via NIH/Flickr, CC BY-NC

These findings may also have implications for artificial intelligence. The artificial neural networks underlying AI have largely been inspired by how the brain works. However, the learning rules researchers use to update the connections within the networks and train the models are usually uniform and also not biologically plausible. Our research may provide insights into how to develop more biologically realistic AI models that are more efficient, have better performance, or both.

There is still a long way to go before we can use this information to develop new therapies for human brain disorders. While we found that synaptic connections on different groups of dendrites use different learning rules, we don’t know exactly why or how. In addition, while the ability of neurons to simultaneously use multiple learning methods increases their capacity to encode information, what other properties this may give them isn’t yet clear.

Future research will hopefully answer these questions and further our understanding of how the brain learns.

William Wright, Postdoctoral Scholar in Neurobiology, University of California, San Diego and Takaki Komiyama, Professor of Neurobiology, University of California, San Diego

This article is republished from The Conversation under a Creative Commons license. 

Sunday, March 9, 2025

Colliding plasma ejections from the Sun generate huge geomagnetic storms − studying them will help scientists monitor future space weather

Shirsh Lata Soni, University of Michigan

The Sun periodically ejects huge bubbles of plasma from its surface that contain an intense magnetic field. These events are called coronal mass ejections, or CMEs. When two of these ejections collide, they can generate powerful geomagnetic storms that can lead to beautiful auroras but may disrupt satellites and GPS back on Earth.

On May 10, 2024, people across the Northern Hemisphere got to witness the impact of these solar activities on Earth’s space weather.

The northern lights, as seen here from Michigan in May 2024, are caused by geomagnetic storms in the atmosphere. Shirsh Lata Soni

Two merging CMEs triggered the largest geomagnetic storm in two decades, which manifested in brightly colored auroras visible across the sky.

I’m a solar physicist. My colleagues and I aim to track and better understand colliding CMEs with the goal of improving space weather forecasts. In the modern era, where technological systems are increasingly vulnerable to space weather disruptions, understanding how CMEs interact with each other has never been more crucial.

Coronal mass ejections

CMEs are long and twisted – kind of like ropes – and how often they happen varies with an 11-year cycle. At the solar minimum, researchers observe about one a week, but near the solar maximum, they can observe, on average, two or three per day.

When two or more CMEs interact, they generate massive clouds of charged particles and magnetic fields that may compress, merge or reconnect with each other during the collision. These interactions can amplify the impact of the CMEs on Earth’s magnetic field, sometimes creating geomagnetic storms.

Why study interacting CMEs?

Nearly one-third of CMEs interact with other CMEs or the solar wind, which is a stream of charged particles released from the outer layer of the Sun.

In my research team’s study, published in May 2024, we found that CMEs that do interact or collide with each other are much more likely to cause a geomagnetic storm – two times more likely than an individual CME. The mix of strong magnetic fields and high pressure in these CME collisions is likely what causes them to generate storms.

During solar maxima, when there can be more than 10 CMEs per day, the likelihood of CMEs interacting with each other increases. But researchers aren’t sure whether they become more likely to generate a geomagnetic storm during these periods.

Scientists can study interacting CMEs as they move through space and watch them contribute to geomagnetic storms using observations from space- and ground-based observatories.

In this study, we used the space-based observatory STEREO to look at three CMEs that interacted with each other as they traveled through space. We validated these observations with three-dimensional simulations.

The CME interactions we studied generated a complex magnetic field and a compressed plasma sheath – a layer of charged particles that interacts with Earth’s magnetic field once it reaches the planet’s upper atmosphere.

When this complex structure encountered Earth’s magnetosphere, it compressed the magnetosphere and triggered an intense geomagnetic storm.

Four images show three interacting CMEs, based on observations from the STEREO telescope. In images C and D, you can see the northeast flank of CME-1 and CME-2 that interact with the southwest part of CME-3. Shirsh Lata Soni

This same process generated the geomagnetic storm from May 2024.

Between May 8 and 9, multiple Earth-directed CMEs erupted from the Sun. When these CMEs merged, they formed a massive combined structure that arrived at Earth late on May 10, 2024. This structure triggered the extraordinary geomagnetic storm many people observed. Even people in parts of the southern U.S. were able to see the northern lights in the sky that night.

More technology and higher stakes

Scientists have an expansive network of space- and ground-based observatories, such as the Parker Solar Probe, Solar Orbiter, the Solar Dynamics Observatory and others, available to monitor the heliosphere – the region surrounding the Sun – from a variety of vantage points.

These resources, coupled with advanced modeling capabilities, provide timely and effective ways to investigate how CMEs cause geomagnetic storms. The Sun will reach its solar maximum in the years 2024 and 2025. So, with more complex CMEs coming from the Sun in the next few years and an increasing reliance on space-based infrastructure for communication, navigation and scientific exploration, monitoring these events is more important than ever.

Integrating the observational data from space-based missions such as Wind and ACE and data from ground-based facilities such as the e-Callisto network and radio observatories with state-of-the-art simulation tools allows researchers to analyze the data in real time. That way, they can quickly make predictions about what the CMEs are doing.

These advancements are important for keeping infrastructure safe and preparing for the next solar maximum. Addressing these challenges today ensures resilience against future space weather.

This article was updated to clarify how a compressed plasma sheath interacts with the Earth’s upper atmosphere and magnetic field.

Shirsh Lata Soni, Postdoctoral Research Fellow, University of Michigan

This article is republished from The Conversation under a Creative Commons license. 

Saturday, February 15, 2025

Property and sovereignty in space − as countries and companies take to the stars, they could run into disputes

As travel to the Moon grows more accessible, countries may have to navigate territorial disputes. Neil A. Armstrong/NASA via AP
Wayne N White Jr, Embry-Riddle Aeronautical University

Private citizens and companies may one day begin to permanently settle outer space and celestial bodies. But if we don’t enact governing laws in the meantime, space settlers may face legal chaos.

Many wars on Earth start over territorial disputes. In order to avoid such disputes in outer space, nations should consider enacting national laws that specify the extent of each settler’s authority in outer space and provide a process to resolve conflicts.

I have been researching and writing about space law for over 40 years. Through my work, I’ve studied ways to avoid war and resolve disputes in space.

Property in space

Space is an international area, and companies and individuals are free to land their space objects – including satellites, human-crewed and robotic spacecraft and human-inhabited facilities – on celestial bodies and conduct operations anywhere they please. This includes both outer space and celestial bodies such as the Moon.

Space objects include landers, rovers, satellites and other objects on the surface of or in orbit around a celestial body. Stocktrek Images/Stocktrek Images via Getty Images

The 1967 Outer Space Treaty prohibits territorial claims in outer space and on celestial bodies in order to avoid disputes. But without national laws governing space settlers, a nation might attempt to protect its citizens’ and companies’ interests by withdrawing from the treaty. It could then claim the territory where its citizens have placed their space objects.

Nations enforce territorial claims through military force, which would likely cost money and lives. An alternative to territorial claims, which I’ve been investigating and have come to prefer, would be to enact real property rights that are consistent with the Outer Space Treaty.

Territorial claims can be asserted only by national governments, while property rights apply to private citizens, companies and national governments that own property. A property rights law could specify how much authority settlers have and protect their investments in outer space and on celestial bodies.

The Outer Space Treaty

In 1967, the Outer Space Treaty went into effect. As of January 2025, 115 countries are party to this treaty, including the United States and most nations that have a space program.

The Outer Space Treaty is the main international agreement governing outer space.

The Outer Space Treaty outlines principles for the peaceful exploration and use of outer space and celestial bodies. However, the treaty does not specify how it will apply to the citizens and companies of nations that are parties to the treaty.

For this reason, the Outer Space Treaty is largely not a self-executing treaty. This means U.S. courts cannot apply the terms of the treaty to individual citizens and companies. For that to happen, the United States would need to enact national legislation that explains how the terms of the treaty apply to nongovernmental entities.

One article of the Outer Space Treaty says that participating countries should make sure that all of their citizens’ space activities comply with the treaty’s terms. Another article then gives these nations the authority to enact laws governing their citizens’ and companies’ private space activities.

This is particularly relevant to the U.S., where commercial activity in space is rapidly increasing.

UN Charter

It is important to note that the Outer Space Treaty requires participating nations to comply with international law and the United Nations Charter.

In the U.N. Charter, there are two international law concepts that are relevant to property rights. One is a country’s right to defend itself, and the other is the noninterference principle.

The international law principle of noninterference gives nations the right to exclude others from their space objects and the areas where they have ongoing activity.

But how will nations apply this concept to their private citizens and companies? Do individual people and companies have the right to exclude others in order to prevent interference with their activities? What can they do if a foreign person interferes or causes damage?

The noninterference principle in the U.N. Charter governs relations between nations, not individuals. Consequently, U.S. courts likely wouldn’t enforce the noninterference principle in a case involving two private parties.

So, U.S. citizens and companies do not have the right to exclude others from their space objects and areas of ongoing activity unless the U.S. enacts legislation giving them that right.

US laws and regulations

The United States has recognized the need for more specific laws to govern private space activities. It has sought international support for this effort through the nonbinding Artemis Accords.

The Artemis Accords outline a framework for the peaceful exploration of outer space. Brendan Smialowski/AFP via Getty Images

As of January 2025, 50 nations have signed the Artemis Accords.

The accords explain how important components of the Outer Space Treaty will apply to private space activities. One section of the accords allows for safety zones, where public and private personnel, equipment and operations are protected from harmful interference by other people. The rights to self-defense and noninterference from the U.N. Charter provide a legal basis for safety zones.

Aside from satellite and rocket-launch regulations, the United States has enacted only a few laws – including the Commercial Space Launch Competitiveness Act of 2015 – to govern private activities in outer space and on celestial bodies.

As part of this act, any U.S. citizen collecting mineral resources in outer space or on celestial bodies has a right to own, transport, use and sell those resources. This act is an example of national legislation that clarifies how the Outer Space Treaty applies to U.S. citizens and companies.

Property rights

Enacting property rights for outer space would make it clear what rights and obligations property owners have and the extent of their authority over their property.

All nations on Earth have a form of property rights in their legal systems. Property rights typically include the rights to possess, control, develop, exclude, enjoy, sell, lease and mortgage properties. Enacting real property rights in space would create a marketplace for buying, selling, renting and mortgaging property.

Because the Outer Space Treaty prohibits territorial claims, space property rights would not necessarily be “land grabs.” Property rights would operate a little differently in space than on Earth.

Property rights in space would have to be based on the authority that the Outer Space Treaty gives to nations. This authority allows them to govern their citizens and their assets by enacting laws and enforcing them in their courts.

Space property rights would include safety zones around property to prevent interference. So, people would have to get the property owner’s permission before entering a safety zone.

If a U.S. property owner were to sell a space property to a foreign citizen or company, the space objects on the property would have to stay on the property or be replaced with the purchaser’s space objects. That would ensure that the owner’s country still has authority over the property.

Also, if someone transferred their space objects to a foreign citizen or company, the buyer would have to change their objects’ international registration, which would give the buyer’s nation authority over the space objects and the surrounding property.

Nations could likely avoid some territorial disputes if they enact real property laws in space that clearly describe how national authority over property changes when it is sold. Enacting property rights could reduce the legal risks for commercial space companies and support the permanent settlement of outer space and celestial bodies.

U.S. property rights law could also contain a reciprocity provision, which would encourage other nations to pass similar laws and allow participating countries to mutually recognize each other’s property rights.

With a reciprocity provision, property rights could support economic development as commercial companies around the world begin to look to outer space as the next big area of economic growth.

Wayne N White Jr, Adjunct Professor of Aviation and Space Law, Embry-Riddle Aeronautical University

This article is republished from The Conversation under a Creative Commons license. 

Saturday, October 19, 2024

To make nuclear fusion a reliable energy source one day, scientists will first need to design heat- and radiation-resilient materials

A fusion experiment ran so hot that the wall materials facing the plasma retained defects. Christophe Roux/CEA IRFM, CC BY
Sophie Blondel, University of Tennessee

Fusion energy has the potential to be an effective clean energy source, as its reactions generate incredibly large amounts of energy. Fusion reactors aim to reproduce on Earth what happens in the core of the Sun, where very light elements merge and release energy in the process. Engineers can harness this energy to heat water and generate electricity through a steam turbine, but the path to fusion isn’t completely straightforward.

Controlled nuclear fusion has several advantages over other power sources for generating electricity. For one, the fusion reaction itself doesn’t produce any carbon dioxide. There is no risk of meltdown, and the reaction doesn’t generate any long-lived radioactive waste.

I’m a nuclear engineer who studies materials that scientists could use in fusion reactors. Fusion takes place at incredibly high temperatures. So to one day make fusion a feasible energy source, reactors will need to be built with materials that can survive the heat and irradiation generated by fusion reactions.

Fusion material challenges

Several types of elements can merge during a fusion reaction. The one most scientists prefer is deuterium plus tritium. These two elements have the highest likelihood of fusing at temperatures that a reactor can maintain. This reaction generates a helium atom and a neutron, which carries most of the energy from the reaction.

Humans have successfully generated fusion reactions on Earth since 1952 – some even in their garage. But the trick now is to make it worth it. You need to get more energy out of the process than you put in to initiate the reaction.

Fusion reactions happen in a very hot plasma, which is a state of matter similar to gas but made of charged particles. The plasma needs to stay extremely hot – over 100 million degrees Celsius – and condensed for the duration of the reaction.

To keep the plasma hot and condensed and create a reaction that can keep going, you need special materials for the reactor walls. You also need a cheap and reliable source of fuel.

While deuterium is very common and obtained from water, tritium is very rare. A 1-gigawatt fusion reactor is expected to burn 56 kilograms of tritium annually. But the world has only about 25 kilograms of tritium commercially available.
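
A back-of-the-envelope calculation – assuming a steady burn, no breeding and ignoring tritium’s roughly 12.3-year radioactive half-life – shows how quickly that inventory would run out:

    # Back-of-the-envelope from the figures above: one 1-gigawatt plant
    # burning 56 kg of tritium per year against a 25 kg world inventory.
    burn_rate_kg_per_year = 56
    world_inventory_kg = 25

    years = world_inventory_kg / burn_rate_kg_per_year
    print(f"{years:.2f} years of fuel")   # ~0.45 years - under six months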

Researchers need to find alternative sources for tritium before fusion energy can get off the ground. One option is to have each reactor generate its own tritium through a system called the breeding blanket.

The breeding blanket makes up the first layer of the plasma chamber walls and contains lithium that reacts with the neutrons generated in the fusion reaction to produce tritium. The blanket also converts the energy carried by these neutrons to heat.

Fusion devices also need a divertor, which extracts the heat and ash produced in the reaction. The divertor helps keep the reactions going for longer.

These materials will be exposed to unprecedented levels of heat and particle bombardment. And there aren’t currently any experimental facilities to reproduce these conditions and test materials in a real-world scenario. So, the focus of my research is to bridge this gap using models and computer simulations.

From the atom to full device

My colleagues and I work on producing tools that can predict how the materials in a fusion reactor erode, and how their properties change when they are exposed to extreme heat and lots of particle radiation.

As they get irradiated, defects can form and grow in these materials, which affect how well they react to heat and stress. In the future, we hope that government agencies and private companies can use these tools to design fusion power plants.

Our approach, called multiscale modeling, consists of looking at the physics in these materials over different time and length scales with a range of computational models.

We first study the phenomena happening in these materials at the atomic scale through accurate but expensive simulations. For instance, one simulation might examine how hydrogen moves within a material during irradiation.

From these simulations, we look at properties such as diffusivity, which tells us how much the hydrogen can spread throughout the material.

We can integrate the information from these atomic level simulations into less expensive simulations, which look at how the materials react at a larger scale. These larger-scale simulations are less expensive because they model the materials as a continuum instead of considering every single atom.

The atomic-scale simulations could take weeks to run on a supercomputer, while the continuum one will take only a few hours.
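
To give a flavor of the continuum step, here is a minimal sketch in Python – toy numbers throughout, with the diffusivity standing in for a value an atomic-scale simulation would supply – of one-dimensional hydrogen diffusion through a wall, solved with explicit finite differences on Fick’s second law:

    import numpy as np

    D = 1e-9              # m^2/s, assumed atomistically computed diffusivity
    L, nx = 1e-3, 101     # 1 mm thick wall, number of grid points
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D  # below the explicit-scheme stability limit

    c = np.zeros(nx)      # hydrogen concentration profile across the wall
    for _ in range(50_000):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = 1.0, 0.0   # plasma side saturated, far side pumped

    flux_out = D * (c[-2] - c[-1]) / dx  # hydrogen leaking out the far side
    print(f"outlet flux ~ {flux_out:.3e} (concentration units x m/s)")

The outlet flux here plays the role of the measurable quantity: it is what we would compare against a laboratory permeation experiment to decide whether the model can be trusted.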

All this modeling work happening on computers is then compared with experimental results obtained in laboratories.

For example, if one side of the material has hydrogen gas, we want to know how much hydrogen leaks to the other side of the material. If the model and the experimental results match, we can have confidence in the model and use it to predict the behavior of the same material under the conditions we would expect in a fusion device.

If they don’t match, we go back to the atomic-scale simulations to investigate what we missed.

Additionally, we can couple the larger-scale material model to plasma models. These models can tell us which parts of a fusion reactor will be the hottest or have the most particle bombardment. From there, we can evaluate more scenarios.

For instance, if too much hydrogen leaks through the material during the operation of the fusion reactor, we could recommend making the material thicker in certain places, or adding something to trap the hydrogen.

Designing new materials

As the quest for commercial fusion energy continues, scientists will need to engineer more resilient materials. The field of possibilities is daunting – engineers can manufacture multiple elements together in many ways.

You could combine two elements to create a new material, but how do you know what the right proportion is of each element? And what if you want to try mixing five or more elements together? It would take way too long to try to run our simulations for all of these possibilities.

Thankfully, artificial intelligence is here to assist. By combining experimental and simulation results, analytical AI can recommend combinations that are most likely to have the properties we’re looking for, such as heat and stress resistance.
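
As a concrete, entirely illustrative sketch of that screening idea – the data, element count and model choice below are assumptions, not the actual tools used in the field – one can train a surrogate model on known composition-property pairs and use it to rank untested compositions:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # Pretend training set: fractions of 5 alloying elements paired with a
    # measured (here, synthetic) figure of merit such as heat resistance.
    X_known = rng.random((200, 5))
    y_known = X_known @ np.array([2.0, -1.0, 0.5, 1.5, -0.3])
    y_known += 0.1 * rng.standard_normal(200)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_known, y_known)

    # Rank 10,000 untested compositions and keep the most promising five,
    # instead of simulating or synthesizing every one of them.
    X_candidates = rng.random((10_000, 5))
    scores = model.predict(X_candidates)
    top5 = X_candidates[np.argsort(scores)[-5:]]
    print(top5.round(2))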

The aim is to reduce the number of materials that an engineer would have to produce and test experimentally to save time and money.

Sophie Blondel, Research Assistant Professor of Nuclear Engineering, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. 
