Sunday, April 9, 2023

Scientific highs and lows of cannabinoids

Hundreds of these cannabis-related chemicals now exist, both natural and synthetic, inspiring researchers in search of medical breakthroughs — and fueling a dangerous trend in recreational use

Editor’s note: Raphael Mechoulam passed away on March 9, 2023, at the age of 92.

The 1960s was a big decade for cannabis: Images of flower power, the summer of love and Woodstock wouldn’t be complete without a joint hanging from someone’s mouth. Yet in the early ’60s, scientists knew surprisingly little about the plant. When Raphael Mechoulam, then a young chemist in his 30s at Israel’s Weizmann Institute of Science, went looking for interesting natural products to investigate, he saw an enticing gap in knowledge about the hippie weed: The chemical structure of its active ingredients hadn’t been worked out.

Mechoulam set to work.

The first hurdle was simply getting hold of some cannabis, given that it was illegal. “I was lucky,” Mechoulam recounts in a personal chronicle of his life’s work, published this month in the Annual Review of Pharmacology and Toxicology. “The administrative head of my Institute knew a police officer. ... I just went to Police headquarters, had a cup of coffee with the policeman in charge of the storage of illicit drugs, and got 5 kg of confiscated hashish, presumably smuggled from Lebanon.”

By 1964, Mechoulam and his colleagues had determined, for the first time, the full structure of both delta-9-tetrahydrocannabinol, better known to the world as THC (responsible for marijuana’s psychoactive “high”) and cannabidiol, or CBD.

That chemistry coup opened the door for cannabis research. Over the following decades, researchers including Mechoulam would identify more than 140 active compounds, called cannabinoids, in the cannabis plant, and learn how to make many of them in the lab. Mechoulam helped to figure out that the human body produces its own natural versions of similar chemicals, called endocannabinoids, that can shape our mood and even our personality. And scientists have now made hundreds of novel synthetic cannabinoids, some more potent than anything found in nature.

Today, researchers are mining the huge number of known cannabinoids — old and new, found in plants or people, natural and synthetic — for possible pharmaceutical uses. But, at the same time, synthetic cannabinoids have become a hot trend in recreational drugs, with potentially devastating impacts.

For most of the synthetic cannabinoids made so far, the adverse effects generally outweigh their medical uses, says biologist João Pedro Silva of the University of Porto in Portugal, who studies the toxicology of substance abuse and coauthored a 2023 assessment of the pros and cons of these drugs in the Annual Review of Pharmacology and Toxicology. But, he adds, that doesn’t mean there aren’t better things to come.

Cannabis’s long medical history

Cannabis has been used for centuries for all manner of reasons, from squashing anxiety or pain to spurring appetite and salving seizures. In 2018, a cannabis-derived medicine — Epidiolex, consisting of purified CBD — was approved for controlling seizures in some patients. Some people with serious conditions, including schizophrenia, obsessive compulsive disorder, Parkinson’s and cancer, self-medicate with cannabis in the belief that it will help them, and Mechoulam sees the promise. “There are a lot of papers on [these] diseases and the effects of cannabis (or individual cannabinoids) on them. Most are positive,” he tells Knowable Magazine.

That’s not to say cannabis use comes with zero risks. Silva points to research suggesting that daily cannabis users have a higher risk of developing psychotic disorders; one paper showed a 3.2 to 5 times higher risk, depending on the potency of the cannabis. Longtime chronic users can develop cannabinoid hyperemesis syndrome, characterized by frequent vomiting. Some public health experts worry about impaired driving, and some recreational forms of cannabis contain contaminants such as heavy metals, with nasty effects.

Finding medical applications for cannabinoids means understanding their pharmacology and balancing their pros and cons.

Mechoulam played a role in the early days of research into cannabis’s possible clinical uses. Based on anecdotal reports, stretching back into ancient times, of cannabis helping with seizures, he and his colleagues looked at the effects of THC and CBD on epilepsy. They started in mice and, since CBD showed no toxicity or side effects, moved on to people. In 1980, then at the Hebrew University of Jerusalem, Mechoulam co-published results from a tiny, 4.5-month trial of patients with epilepsy who weren’t being helped by existing drugs. The results seemed promising: Of the eight people taking CBD, four had almost no attacks throughout the study, and three saw partial improvement. Only one patient wasn’t helped at all.

“We assumed that these results would be expanded by pharmaceutical companies, but nothing happened for over 30 years,” writes Mechoulam in his autobiographical article. It wasn’t until 2018 that the US Food and Drug Administration approved Epidiolex for treating epileptic seizures in people with certain rare and severe medical conditions. “Thousands of patients could have been helped over the four decades since our original publication,” writes Mechoulam.

Drug approval is a necessarily long process, but for cannabis there have been the additional hurdles of legal roadblocks, as well as the difficulty in obtaining patent protections for natural compounds. The latter makes it hard for a pharmaceutical company to financially justify expensive human trials and the lengthy FDA approval process.

In the United Nations’ 1961 Single Convention on Narcotic Drugs, cannabis was slotted into the most restrictive categories: Schedule I (highly addictive and liable to abuse) and its subgroup, Schedule IV (with limited, if any, medicinal uses). The UN removed cannabis from Schedule IV only in December 2020 and, although cannabis has been legalized or decriminalized in several countries and most US states, it remains (controversially) on both the US’ and the UN’s Schedule I — the same category as heroin. The US’ cannabis research bill, passed into law in December 2022, is expected to help ease some of the issues in working with cannabis and cannabinoids in the lab.

To date, the FDA has licensed only a handful of medicinal drugs based on cannabinoids, and so far they’re based only on THC and CBD. Alongside Epidiolex, the FDA has approved synthetic THC and a THC-like compound to fight nausea in patients undergoing chemotherapy and weight loss in patients with cancer or AIDS. But there are hints of many other possible uses. The National Institutes of Health registry of clinical trials lists hundreds of efforts underway around the world to study the effect of cannabinoids on autism, sleep, Huntington’s disease, pain management and more.

In recent years, says Mechoulam, interest has expanded beyond THC and CBD to other cannabis compounds such as cannabigerol (CBG), which Mechoulam and his colleague Yehiel Gaoni discovered back in 1964. His team has made derivatives of CBG that have anti-inflammatory and pain relief properties in mice (for example, reducing the pain felt in a swollen paw) and can prevent obesity in mice fed high-fat diets. A small clinical trial of the impacts of CBG on attention-deficit hyperactivity disorder is being undertaken this year. Mechoulam says that the methyl ester form of another chemical, cannabidiolic acid, also seems “very promising” — in rats, it can suppress nausea and anxiety and act as an antidepressant in an animal model of the mood disorder.

But while the laundry list of possible benefits from the many cannabinoids is long, the hard work of proving their utility has not yet been done. “It’s been very difficult to try and characterize the effects of all the different ones,” says Sam Craft, a psychology PhD student who studies cannabinoids at the University of Bath in the UK. “The science hasn’t really caught up with all of this yet.”

A natural version in our bodies

Part of the reason that cannabinoids have such far-reaching effects is because, as Mechoulam helped to discover, they’re part of natural human physiology.

In 1988, researchers reported the discovery of a cannabinoid receptor in rat brains, CB1 (researchers would later find another, CB2, and map them both throughout the human body). Mechoulam reasoned there wouldn’t be such a receptor unless the body was pumping out its own chemicals similar to plant cannabinoids, so he went hunting for them. He would drive to Tel Aviv to buy pig brains being sold for food, he remembers, and bring them back to the lab. He found two molecules with cannabinoid-like activity: anandamide (named after the Sanskrit word ananda for bliss) and 2-AG.

These endocannabinoids, as they’re termed, can alter our mood and affect our health without us ever going near a joint. Some speculate that endocannabinoids may be responsible, in part, for personality quirks, personality disorders or differences in temperament.

Animal and cell studies hint that modulating the endocannabinoid system could have a huge range of possible applications, in everything from obesity and diabetes to neurodegeneration, inflammatory diseases, gastrointestinal and skin issues, pain and cancer. Studies have reported that endocannabinoids or synthetic creations similar to the natural compounds can help mice recover from brain trauma, unblock arteries in rats, fight antibiotic-resistant bacteria in petri dishes and alleviate opiate addiction in rats. But the endocannabinoid system is complicated and not yet well understood; no one has yet administered endocannabinoids to people, leaving what Mechoulam sees as a gaping hole of knowledge, and a huge opportunity. “I believe that we are missing a lot,” he says.

“This is indeed an underexplored field of research,” agrees Silva, and it may one day lead to useful pharmaceuticals. For now, though, most clinical trials are focused on understanding the workings of endocannabinoids and their receptors in our bodies (including how everything from probiotics to yoga affects levels of the chemicals).

‘Toxic effects’ of synthetics

In the wake of the discovery of CB1 and CB2, many researchers focused on designing new synthetic molecules that would bind to these receptors even more strongly than plant cannabinoids do. Pharmaceutical companies have pursued such synthetic cannabinoids for decades, but so far, says Craft, without much success — and with some missteps. A drug called rimonabant, which bound tightly to the CB1 receptor but acted in opposition to CB1’s usual effect, was approved in Europe and other countries (but not the US) in the early 2000s to diminish appetite and thereby fight obesity. It was withdrawn worldwide in 2008 due to serious psychiatric side effects, including provoking depression and suicidal thoughts.

Some of the synthetics invented originally by academics and drug companies have wound up in recreational drugs like Spice and K2. Such drugs have boomed and new chemical formulations keep popping up: Since 2008, 224 different ones have been spotted in Europe. These compounds, chemically tweaked to maximize psychoactive effects, can cause everything from headaches and paranoia to heart palpitations, liver failure and death. “They have very toxic effects,” says Craft.

For now, says Silva, there is scarce evidence that existing synthetic cannabinoids are medicinally useful: As most of the drug candidates worked their way up the pipeline, adverse effects have tended to crop up. Because of that, says Silva, most pharmaceutical efforts to develop synthetic cannabinoids have been discontinued.

But that doesn’t mean all research has stopped; a synthetic cannabinoid called JWH-133, for example, is being investigated in rodents for its potential to reduce the size of breast cancer tumors. It’s possible to make tens of thousands of different chemical modifications to cannabinoids, and so, says Silva, “it is likely that some of these combinations may have therapeutic potential.” The endocannabinoid system is so important in the human body that there’s plenty of room to explore all kinds of medicinal angles. Mechoulam serves on the advisory board of the Israel-based company EPM, for example, which aims specifically to develop medicines based on synthetic versions of cannabinoid acids.

With all this work underway on the chemistry of these compounds and their workings within the human body, Mechoulam, now 92, sees a coming explosion in understanding the physiology of the endocannabinoid system. And with that, he says, “I assume that we shall have a lot of new drugs.”


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

From the Big Bang to dark energy, knowledge of the cosmos has sped up in the past century — but big questions linger

“The first thing we know about the universe is that it’s really, really big,” says cosmologist Michael Turner, who has been contemplating this reality for more than four decades now. “And because the universe is so big,” he says, “it’s often beyond the reach of our instruments, and of our ideas.”

Certainly our current understanding of the cosmic story leaves some huge unanswered questions, says Turner, an emeritus professor at the University of Chicago and a visiting faculty member at UCLA. Take the question of origins. We now know that the universe has been expanding and evolving for something like 13.8 billion years, starting when everything in existence exploded outward from an initial state of near-infinite temperature and density — a.k.a. the Big Bang. Yet no one knows for sure what the Big Bang was, says Turner. Nor does anyone know what triggered it, or what came beforehand — or whether it’s even meaningful to talk about “time” before that initial event.

Then there’s the fact that the most distant stars and galaxies our telescopes can potentially see are confined to the “observable” universe: the region that encompasses objects such as galaxies and stars whose light has had time to reach us since the Big Bang. This is an almost inconceivably vast volume, says Turner, extending tens of billions of light-years in every direction. Yet we have no way of knowing what lies beyond. Just more of the same, perhaps, stretching out to infinity. Or realms that are utterly strange — right down to laws of physics that are very different from our own.

But then, as Turner explains in the 2022 Annual Review of Nuclear and Particle Science, mysteries are only to be expected. The scientific study of cosmology, the field that focuses on the origins and evolution of the universe, is barely a century old. It has already been transformed more than once by new ideas, new technologies and jaw-dropping discoveries — and there is every reason to expect more surprises to come.

Knowable Magazine recently spoke with Turner about how these transformations occurred and what cosmology’s future might be. This interview has been edited for length and clarity.

You say in your article that modern, scientific cosmology didn’t get started until roughly the 1920s. What happened then?

It’s not as though nothing happened earlier. People have been speculating about the origin and evolution of the universe for as long as we know of. But most of what was done before about 100 years ago we would now call galactic astronomy, which is the study of stars, planets and interstellar gas clouds within our own Milky Way. At the time, in fact, a lot of astronomers argued that the Milky Way was the universe — that there was nothing else.

But two big things happened in the 1920s. One was the work of a young astronomer named Edwin Hubble. He took an interest in the nebulae, which were these fuzzy patches of light in the sky that astronomers had been cataloging for hundreds of years. There had always been a debate about their nature: Were they just clouds of gas relatively close by in the Milky Way, or other “island universes” as big as ours?

Nobody had been able to figure that out. But Hubble had access to a new 100-inch telescope, which was the largest in the world at that time. And that gave him an instrument powerful enough to look at some of the biggest and brightest of the nebulae, and show that they contained individual stars, not just gas. By 1925, he was also able to estimate the distance to the very brightest nebula, in the constellation of Andromeda. It lay well outside the Milky Way. It was a whole other galaxy just like ours.

So that paper alone solved the riddle of the nebulae and put Hubble on the map as a great astronomer. In today’s terms, he had identified the fundamental architecture of the universe, which is that it consists of these collections of stars organized into galaxies like our own Milky Way — about 200 billion of them in the part of the universe we can see.

But he didn’t stop there. In those days there was this — well, “war” is probably too strong a word, but a separation between the astronomers who took pictures and the astrophysicists who used spectroscopy, which was a technique that physicists had developed in the 19th century to analyze the wavelengths of light emitted from distant objects. Once you started taking spectra of things like stars or planets, and comparing their emissions with those from known chemical elements in the laboratory, you could say, “Oh, not only do I know what it’s made of, but I know its temperature and how fast it’s moving towards or away from us.” So you could start really studying the object.

Just like in other areas of science, though, the very best people in astronomy use all the tools at hand, be they pictures or spectra. In Hubble’s case, he paid particular attention to an earlier paper that had used spectroscopy to measure the velocity of the nebulae. Now, the striking thing about this paper was that some of the nebulae were moving away from us at many hundreds of kilometers per second. In spectroscopic terms they had a high “redshift,” meaning that their emissions were shifted toward longer wavelengths than you’d see in the lab.

So in 1929, when Hubble had solid distance data for two dozen galaxies and reasonable estimates for more, he plotted those values against the redshift data. And he got a striking correlation: The further away a galaxy was, the faster it was moving away from us.

This was the relation that’s now known as Hubble’s law. It took a while to figure out what it meant, though.
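Hubble’s law is the simple proportionality v = H0 × d: a galaxy’s recession velocity grows in direct proportion to its distance. Here is a minimal sketch of the relation, assuming a modern value of the Hubble constant of roughly 70 kilometers per second per megaparsec — a figure not quoted in this article:

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
# H0 ~ 70 km/s per megaparsec is an assumed modern value, for illustration only.
H0 = 70.0  # km/s per megaparsec

def recession_velocity(distance_mpc):
    """Recession velocity, in km/s, of a galaxy at a given distance in megaparsecs."""
    return H0 * distance_mpc

# The farther away a galaxy is, the faster it recedes:
for d in (1, 10, 100):
    print(f"{d:4d} Mpc -> {recession_velocity(d):7.0f} km/s")
```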

Why? Did it require a second big development?

Yes. A bit earlier, in 1915, Albert Einstein had put forward his theory of general relativity, which was a complete paradigm shift and reformulation of gravity. His key insight was that space and time are not fixed, as physicists had always assumed, but are dynamic. Matter and energy bend space and time around themselves, and the “force” we call gravity is just the result of objects being deflected as they move around in this curved space-time. As the late physicist John Archibald Wheeler famously said, “Space tells matter how to move, and matter tells space how to curve.”

It took a few years to connect Einstein’s theory with observation. But by the early or mid-1930s, it was clear that what Hubble had discovered was not that galaxies are moving away from us into empty space, but that space itself is expanding and carrying the galaxies along with it. The whole universe is expanding.

And at least a few scientists in the 1930s began to realize that Hubble’s discovery also meant there was a beginning to the universe.

The turning point was probably George Gamow, a Soviet physicist who defected to the US in the 1930s. He had studied general relativity as a student in Leningrad, and knew that Einstein’s equations implied that the universe had expanded from a “singularity” — a mathematical point where time began and the radius of the universe was zero. It’s what we now call the Big Bang.

But Gamow also knew nuclear physics, which he had helped develop before World War II. And around 1948, he and his collaborators started to combine general relativity and nuclear physics into a model of the universe’s beginning to explain where the elements in the periodic table came from.

Their key idea was that the universe started out hot, then cooled as it expanded, the way gas from an aerosol can does. This was totally theoretical at the time. But it would be confirmed in 1965, when radio astronomers discovered the cosmic microwave background radiation. This radiation consists of high-energy photons that emerged from the Big Bang and cooled down as the universe expanded, until today their temperature is just 3 degrees above absolute zero — which is also the average temperature of the universe as a whole.
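The cooling Turner describes follows a simple scaling: the radiation’s temperature falls in inverse proportion to the stretching of space, T(z) = T0 × (1 + z), where z is the redshift of the epoch in question. A back-of-the-envelope sketch, using the standard present-day temperature of about 2.7 kelvin and a recombination redshift of about 1,100 — both textbook values, not figures from this interview:

```python
# CMB photons cool as the universe expands: T(z) = T0 * (1 + z).
# T0 ~ 2.7 K today and z ~ 1100 at recombination are standard textbook
# values, assumed here for illustration.
T0 = 2.725  # kelvin, CMB temperature today

def cmb_temperature(z):
    """Temperature of the background radiation at redshift z, in kelvin."""
    return T0 * (1 + z)

print(cmb_temperature(0))     # today: ~2.7 K, a few degrees above absolute zero
print(cmb_temperature(1100))  # recombination era: ~3000 K, hot enough to ionize hydrogen
```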

In this hot, primordial soup — called ylem by Gamow — matter would not exist in the form it does today. The extreme heat would boil atoms into their constituent components — neutrons, protons and electrons. Gamow’s dream was that nuclear reactions in the cooling soup would have produced all the elements, as neutrons and protons combined to make the nuclei of the various atoms in the periodic table.

But his idea came up short. It took a number of years and a village of people to get the calculations right. But by the 1960s, it was clear that what would come from these nuclear reactions was mostly hydrogen, plus a lot of helium — about 25 percent by weight, exactly what astronomers observed — plus a little bit of deuterium, helium-3 and lithium. Heavier elements such as carbon and oxygen were made later, by nuclear reactions in stars and other processes.

So by the early 1970s, we had the creation of the light elements in a hot Big Bang, the expansion of the universe and the microwave background radiation — the three observational pillars of what’s been called the standard model of cosmology, and what I call the first paradigm.

But you note that cosmologists almost immediately began to shift toward a second paradigm. Why? Was the Big Bang model wrong?

Not wrong — our current understanding still has a hot Big Bang beginning — but incomplete. By the 1970s the idea of a hot beginning was attracting the attention of particle physicists, who saw the Big Bang as a way to study particle collisions at energies you couldn’t hope to reach at accelerators here on Earth. So the field suddenly got a lot bigger, and people started asking questions that suggested the standard cosmology was missing something.

For example, why is the universe so smooth? The intensity and temperature of the microwave background radiation, which is the best measure we have of the whole universe, is almost perfectly uniform in every direction. There’s nothing in Einstein’s cosmological equations that says this has to be the case.


On the flip side, though — why is that cosmic smoothness only almost perfect? After all, the most prominent features of the universe today are the galaxies, which must have formed as gravity magnified tiny fluctuations in the density of matter in the early universe. So where did those fluctuations come from? What seeded the galaxies?

Around this time, evidence had accumulated that neutrons and protons were made of smaller bits — quarks — which meant that the neutron-proton soup would eventually boil, too, becoming a quark soup at the earliest times. So maybe the answers lie in that early quark soup phase, or even earlier.

This is the possibility that led Alan Guth to his brilliant paper on cosmic inflation in 1981.

What is cosmic inflation?

Guth’s idea was that in the tiniest fraction of a second after the initial singularity, according to new ideas in particle physics, the universe ought to undergo a burst of accelerated expansion. This would have been an exponential expansion, far faster than in the standard Big Bang model. The size of the universe would have doubled and doubled and doubled again, enough times to take a subatomic patch of space and blow it up to the scale of the observable universe.
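The repeated doubling Guth proposed can be counted with a base-2 logarithm of the overall size ratio. A rough sketch, with purely illustrative sizes for the initial subatomic patch and the present observable universe — neither figure appears in the interview:

```python
import math

# Inflation doubles the size of space over and over. Counting the doublings
# needed to stretch a subatomic patch to the scale of the observable universe
# is just a base-2 logarithm of the size ratio. Both sizes below are
# illustrative assumptions, not figures from the interview.
initial_size = 1e-26  # meters, an illustrative subatomic scale
final_size = 1e27     # meters, roughly the diameter of the observable universe

doublings = math.log2(final_size / initial_size)
print(f"~{doublings:.0f} doublings")  # on the order of 176 doublings
```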

This explained the uniformity of the universe right away, just like if you had a balloon and blew it up until it was the size of the Earth or bigger: It would look smooth. But inflation also explained the galaxies. In the quantum world, it’s normal for things like the number of particles in a tiny region to bounce around. Ordinarily, this averages out to zero and we don’t notice it. But when cosmic inflation produced this tremendous expansion, it blew up these subatomic fluctuations to astrophysical scales, and provided the seeds for galaxy formation.

This result is the poster child for the connection between particle physics and cosmology: The biggest things in the universe — galaxies and clusters of galaxies — originated from quantum fluctuations that were unimaginably small.

You have written that the second paradigm has three pillars, cosmic inflation being the first. What about the other two?

When the details of inflation were being worked out in the early 1980s, people saw there was something else missing. The exponential expansion would have stretched everything out until space was “flat” in a certain mathematical sense. But according to Einstein’s general relativity, the only way the universe could be flat was if its mass and energy content averaged out to a certain critical density. This value was really small, equivalent to a few hydrogen atoms per cubic meter.

But even that was a stretch: Astronomers’ best measurements for the mean density of all the planets, stars and gas in the universe — all the stuff made of atoms — wasn’t even 10 percent of the critical density. (The modern figure is 4.9 percent.) So something else that was not made of atoms had to be making up the difference.
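The critical density Turner mentions comes from general relativity: ρ_c = 3H0²/(8πG). A quick sketch checking the “few hydrogen atoms per cubic meter” claim, assuming a modern Hubble constant of about 70 km/s per megaparsec and standard values for G and the hydrogen mass:

```python
import math

# Critical density of a flat universe: rho_c = 3 * H0^2 / (8 * pi * G).
# H0 ~ 70 km/s/Mpc is an assumed modern value; G and the hydrogen mass
# are standard physical constants.
G = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
Mpc = 3.086e22   # meters per megaparsec
m_H = 1.674e-27  # kg, mass of a hydrogen atom

H0 = 70.0 * 1000 / Mpc                 # Hubble constant converted to 1/s
rho_c = 3 * H0**2 / (8 * math.pi * G)  # kg per cubic meter

# Roughly 9e-27 kg/m^3 -- about five or six hydrogen atoms per cubic meter.
atoms_per_m3 = rho_c / m_H
print(f"critical density ~ {rho_c:.1e} kg/m^3 ~ {atoms_per_m3:.1f} H atoms/m^3")
```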

That something turned out to have two components, one of which astronomers had already begun to detect through its gravitational effects. Fritz Zwicky found the first clue back in the 1930s, when he looked at the motions of galaxies in distant clusters. Each of these galactic clusters was obviously held together by gravity, because their galaxies were all close and not flying apart. Yet the velocities Zwicky found were really high, and he concluded that the visible stars alone couldn’t produce nearly enough gravity to keep the galaxies bound. The extra gravity had to be coming from some form of “dark matter” that didn’t shine, but that outweighed the visible stars by a large factor.

Then Vera Rubin and Kent Ford really brought it home in the 1970s with their studies of rotation in ordinary nearby galaxies, starting with Andromeda. They found that the rotation rates were way too fast: There weren’t nearly enough stars and interstellar gas to hold these galaxies together. The extra gravity had to be coming from something invisible — again, dark matter.

Particle physicists loved the dark matter idea, because their unified field theories contained hypothetical particles with names like neutralino, or axion, that would have been produced in huge numbers during the Big Bang, and that had exactly the right properties. They wouldn’t give off light because they had no electric charge and very weak interactions with ordinary matter. But they would have enough mass to produce dark matter’s gravitational effects.

We haven’t yet detected these particles in the laboratory. But we do know some things about them. They’re “cold,” for example, meaning that they move slowly compared to the speed of light. And we know from computer simulations that without the gravity of cold dark matter, those tiny density fluctuations in the ordinary matter that emerged from the Big Bang would never have collapsed into galaxies. They just didn’t have enough gravity by themselves.

So that was the second pillar, cold dark matter. And the third?

As the simulations and the observations improved, cosmologists began to realize that even dark matter was only a fraction of the critical density needed to make the universe flat. (The modern figure is 26.8 percent.) The missing piece was found in 1998, when two groups of astronomers made very careful measurements of the distances and redshifts of far-off supernovae, and found that the cosmic expansion was gradually accelerating.

So something — I suggested calling it “dark energy,” and the name stuck — is pushing the universe apart. Our best understanding is that dark energy leads to repulsive gravity, something that is built into Einstein’s general relativity. The crucial feature of dark energy is its elasticity or negative pressure. And further, it can’t be broken into particles — it is more like an extremely elastic medium.

While dark energy remains one of the great mysteries of cosmology and particle physics, it seems to be mathematically equivalent to the cosmological constant that Einstein suggested in 1917. In the modern interpretation, though, it corresponds to the energy of nature’s quantum vacuum. This leads to an extraordinary picture: the cosmic expansion speeding up rather than slowing, all caused by the repulsive gravity of a very elastic, mysterious component of the universe called dark energy. The equally extraordinary evidence for this extraordinary claim has built up ever since and the two teams that made the 1998 discovery were awarded the Nobel Prize in Physics in 2011.

So here is where we are: a flat, critical-density universe comprising ordinary matter at about 5 percent, particle dark matter at about 25 percent and dark energy at about 70 percent. The cosmological constant is still called lambda, the Greek letter that Einstein used. And so the new paradigm is referred to as the lambda-cold dark matter model of cosmology.
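The budget Turner quotes has to add up to the full critical density in a flat universe. Using the article’s own precise figures for ordinary matter (4.9 percent) and dark matter (26.8 percent), the dark energy share falls out as the remainder:

```python
# In a flat universe the components must sum to 100 percent of the
# critical density. The first two figures are from the article; the
# dark energy share is simply the remainder.
ordinary_matter = 4.9  # percent of critical density
dark_matter = 26.8     # percent of critical density
dark_energy = 100.0 - ordinary_matter - dark_matter

print(f"dark energy ~ {dark_energy:.1f}%")  # ~68.3%, consistent with 'about 70 percent'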

So this is your second paradigm — inflation plus cold dark matter plus dark energy?

Yes. And it’s this amazing, glass-half-full, half-empty situation. The lambda-cold dark matter paradigm has these three pillars that are well established with evidence, and that allow us to describe the evolution of the universe from a tiny fraction of a second until today. But we know we’re not done.

For example, you say, “Wow, cosmic inflation sounds really important. It’s why we have a flat universe today and explains the seeds for galaxies. Tell me the details.” Well, we don’t know the details. Our best understanding is that inflation was caused by some still unknown field similar to the Higgs boson discovered in 2012.

Then you say, “Yeah, this dark matter sounds really important. Its gravity is responsible for the formation of all the galaxies and clusters in the universe. What is it?” We don’t know. It’s probably some kind of particle left over from the Big Bang, but we haven’t found it.


And then finally you say, “Oh, dark energy is 70 percent of the universe. That must be really important. Tell me more about it.” And we say, it’s consistent with a cosmological constant. But really, we don’t have a clue why the cosmological constant should exist or have the value it does.

So now cosmology has left us with three physics questions: Dark matter, dark energy and inflation — what are they?

Does that mean we need a third cosmological paradigm to find the answers?

Maybe. It could be that everything’s done in 30 years because we just flesh out our current ideas. We discover that dark matter really is some particle like the axion, that dark energy really is just the constant quantum energy of empty space, and that inflation really was caused by the Higgs field.

But more likely than not, if history is any guide, we’re missing something and there’s a surprise on the horizon.

Some cosmologists are trying to find this surprise by following the really big questions. For example: What was the Big Bang? And what happened beforehand? The Big Bang theory we talked about earlier is anything but a theory of the Big Bang itself; it’s a theory of what happened afterwards.

Remember, the actual Big Bang event, according to Einstein’s general relativity, was this singularity that saw the creation of matter, energy, space and time itself. That’s the big mystery, which we struggle even to talk about in scientific terms: Was there a phase before this singularity? And if so, what was it like? Or, as many theorists think, does the singularity in Einstein’s equations represent the instant when space and time themselves emerged from something more fundamental?

Another possibility that has captured the attention of scientists and public alike is the multiverse. This follows from inflation, where we imagine blowing up a small bit of space to an enormous size. Could that happen more than once, at different places and times? And the answer is yes: You could have had different patches of the wider multiverse inflating into entirely different universes, maybe with different laws of physics in each one. It could be the biggest idea since Copernicus moved us out of the center of the universe. But it’s also very frustrating because right now, it isn’t science: These universes would be completely disconnected, with no way to access them, observe them or show that they actually exist.

Yet another possibility is in the title of my Annual Reviews article: The road to precision cosmology. It used to be that cosmology was really difficult because the instruments weren’t quite up to the task. Back in the 1930s, Hubble and his colleague Milton Humason struggled for years to collect redshifts for a few hundred galaxies, in part because they were recording one spectrum at a time on photographic plates that collected less than 1 percent of the light. Now astronomers use electronic CCD detectors — the same kind that everyone carries around in their phone — that collect almost 100 percent of the light. It’s as if you increased your telescope size without any construction.

And we have projects like the Dark Energy Spectroscopic Instrument on Kitt Peak in Arizona that can collect the spectra of 5,000 galaxies at once — 35 million of them over five years.

So cosmology used to be a data-poor science in which it was hard to measure things with any reliable precision. And today, we are doing precision cosmology, with percent-level accuracy. And further, we are sometimes able to measure things in two different ways, and see if the results agree, creating cross-checks that can confirm our current paradigm or reveal cracks in it.

A prime example of this is the expansion rate of the universe, what’s called the Hubble parameter — the most important number in cosmology. If nothing else, it tells us the age of the universe: The bigger the parameter, the younger the universe, and vice versa. Today we can measure it directly with the velocities and distances of galaxies out to a few hundred million light-years, at the few percent level.

But there is now another way to measure it with satellite observations of the microwave background radiation, which gives you the expansion rate when the universe was about 380,000 years old, at even greater precision. With the lambda-cold dark matter model you can extrapolate that expansion rate forward to the present day and see if you get the same number as you do with redshifts. And you don’t: The numbers differ by almost 10 percent — an ongoing puzzle that’s called the Hubble tension.
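The arithmetic behind the tension is simple enough to sketch. The Hubble time, 1/H0, sets the rough age scale of the universe. The snippet below converts an expansion rate into an age; the two H0 values used are commonly quoted illustrative figures for the local and microwave-background measurements, not numbers taken from this interview.

```python
# The Hubble time 1/H0 sets the rough age scale of the universe:
# a bigger expansion rate implies a younger universe, and vice versa.
# The two H0 values below are illustrative, commonly quoted figures
# (local distance-ladder vs. microwave-background extrapolation).

KM_PER_MPC = 3.0857e19       # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0_km_s_mpc):
    """Convert an expansion rate in km/s/Mpc to a Hubble time in billions of years."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SECONDS_PER_YEAR / 1e9

local_h0 = 73.0   # from galaxy distances and redshifts
cmb_h0 = 67.4     # extrapolated forward from the microwave background

print(f"Local measurement: {hubble_time_gyr(local_h0):.1f} billion years")
print(f"CMB extrapolation: {hubble_time_gyr(cmb_h0):.1f} billion years")
print(f"H0 disagreement:   {(local_h0 - cmb_h0) / cmb_h0:.1%}")
```

A disagreement of under 10 percent in H0 thus shifts the inferred age of the universe by about a billion years, which is part of why the tension matters.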

So maybe that’s the loose thread — the tiny discrepancy in the precision measurements that could lead to another paradigm shift. It could be just that the direct measurements of galaxy distances are wrong, or that the microwave background numbers are wrong. But maybe we are finding something that’s missing from lambda-cold dark matter. That would be extremely exciting.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

3 Tips for Creating a Summer of Unplugged Fun

Between school, work and entertainment, there are times when screens can seem like a pervasive part of modern life. For all the positive aspects of technology, there can also be a desire for children to have stretches of unplugged learning and participate in educational activities that do not require a screen.

Why Unplugged Learning Matters
“Unplugged learning is important to balance the screen time children may experience with other forms of learning; to promote physical activities, social interaction and creativity; and develop the essential skills that bolster them throughout their exploration and growth as individuals,” said Rurik Nackerud from KinderCare’s education team.

Summer can be an ideal time to focus on unplugged learning as it often brings a break from the traditional academic year and activities.

“We want summer to be a time when children can put down technology and connect with one another face-to-face, build important creativity skills and learn how to be social with one another without the buffer of screens,” said Khy Sline from KinderCare’s education team. “They can play, run, be immature and laugh with their friends, giggle at the silly things and find joys in those in-person interactions with one another.”

Tips for Creating Unplugged Fun as a Family

  1. Get Outdoors. Make time as a family to get outside and explore, even if it’s simply a walk around the block after dinner. Help children notice the little things like a bug on the sidewalk or the way the sun filters through tree leaves to make patterns on the ground. Ask them about the things they see and give your children the space to ask questions and work together to find the answers. This helps teach children collaborative learning skills: asking questions, sharing ideas and working together to reach an answer.
     
  2. Read Together. This could mean going to the library to check out new books or exploring your family’s bookshelves for old favorites. Snuggle up together for family story time. If children are old enough to read on their own, invite them to read to you or their younger siblings. Talk about the story or even act out favorite parts to help your children actively participate in story time, which may help them better understand the story’s concepts.
     
  3. Encourage Creative Thinking. Help children expand their ability to think creatively by working together to make a craft or project. For example, the next time a delivery box arrives at your home, encourage your children to turn it into something new using craft supplies on hand. A blanket could turn a box into a table for a pretend restaurant while some tape or glue could transform it into a rocket ship or train. When everyone’s done creating and playing, the box can be broken down for recycling. This activity can help children literally think outside of the box and apply their own unique ideas and creativity to create something new.

For more tips to encourage unplugged learning this summer, visit kindercare.com.

 

SOURCE:
KinderCare

What does ‘moral hazard’ mean? A scholar of financial regulation explains why it’s risky for the government to rescue banks

A real payload. tiero/iStock via Getty Images Plus
Cassandra Jones Havard, University of South Carolina

“Moral hazard” refers to the risks that someone or something becomes more inclined to take because they have reason to believe that an insurer will cover the costs of any damages.

The concept describes financial recklessness. It has its roots in the advent of private insurance companies about 350 years ago. Soon after they began to form, it became clear that people who bought insurance policies took risks they wouldn’t have taken without that coverage.

Here are some illustrative examples: Having worker’s compensation insurance could encourage some workers to stay out of work longer than their health requires. Or a homeowner might not bother spending their own money on a small repair that isn’t covered by their insurance policy, figuring that over time it will turn into a larger problem that would be covered.

Or think of what happens when someone rents a car and parks it where it can easily be damaged. That carelessness reflects an assumption that the rental car company’s insurance policy will pay for the repairs.

Why moral hazard matters

U.S. banks are insured by the Federal Deposit Insurance Corporation, or FDIC, and the risk-takers are both the banks and their depositors.

Congress established the FDIC during the Great Depression, which began with a spate of bank runs. The goal was to boost confidence in the banking system.

The Dodd-Frank Financial Reform Act, enacted after the 2008 financial crisis, was supposed to reduce moral hazard. One way it did that was by making it clear that accounts of more than US$250,000 aren’t insured by the FDIC unless the bank’s failure presents a systemic risk to the financial system.

The implicit assumption behind the government’s insurance limit, which prior to 2008 stood at $100,000, is that depositors who have accounts worth more than the limit will bear the loss of bank failure along with the bank’s executives and shareholders. Yet boosting the size of the guarantee amount also made future bank bailouts more costly, which in turn increased moral hazard.

And when Silicon Valley Bank failed in March 2023, all its depositors got access to their funds – including those with accounts that exceeded the $250,000 limit – because the government made an exception.

‘Too big to fail’

I teach and write about moral hazard in the banking industry as a banking law professor. As it happens, my banking law class had discussed moral hazard and bank failure for three class sessions held before the 2023 spring break.

When the students returned from their vacation, news of Silicon Valley Bank’s failure appeared to be the start of what might become a bank crisis.

“What happened? It’s completely different from what you taught us!” the students in my class exclaimed, almost in unison. Questions tumbled from their heads demanding an explanation.

Why did the government apparently throw out concerns about moral hazard when SVB failed?

Any explanation would have to begin with what moral hazard can mean in the context of banking, where it often summons the colloquial phrase “too big to fail.”

That controversial concept applies to how the government responds in the aftermath of the risky behavior of a bank – if the collapse of the bank is likely to harm the economy. Yet, in reducing the risk of a widespread financial crisis, the government can end up sending the message that it’s willing to protect banks that engage in reckless behavior – and to shield their customers from the consequences.

Cassandra Jones Havard, Professor of Law, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The ancient origins of glass

Featuring ingots, shipwrecks, pharaohs and an international trade in colors, the material’s rich history is being traced using modern archaeology and materials science

Today, glass is ordinary, on-the-kitchen-shelf stuff. But early in its history, glass was bling for kings.

Thousands of years ago, the pharaohs of ancient Egypt surrounded themselves with the stuff, even in death, leaving stunning specimens for archaeologists to uncover. King Tutankhamen’s tomb housed a decorative writing palette and two blue-hued headrests made of solid glass that may once have supported the heads of sleeping royals. His funerary mask sports blue glass inlays that alternate with gold to frame the king’s face.

In a world filled with the buff, brown and sand hues of more utilitarian Late Bronze Age materials, glass — saturated with blue, purple, turquoise, yellow, red and white — would have afforded the most striking colors other than gemstones, says Andrew Shortland, an archaeological scientist at Cranfield University in Shrivenham, England. In a hierarchy of materials, glass would have sat slightly beneath silver and gold and would have been valued as much as precious stones were.

But many questions remain about the prized material. Where was glass first fashioned? How was it worked and colored, and passed around the ancient world? Though much is still mysterious, in the last few decades materials science techniques and a reanalysis of artifacts excavated in the past have begun to fill in details.

This analysis, in turn, opens a window onto the lives of Bronze Age artisans, traders and kings, and the international connections between them.

Glass from the past

Glass, both ancient and modern, is a material usually made of silicon dioxide, or silica, that is characterized by its disorderly atoms. In crystalline quartz, atoms are pinned to regularly spaced positions in a repeating pattern. But in glass, the same building blocks — a silicon atom buddied up with oxygens — are arranged topsy-turvy.

Archaeologists have found glass beads dating to as early as the third millennium BCE. Glazes based on the same materials and technology date earlier still. But it was in the Late Bronze Age — 1600 to 1200 BCE — that the use of glass seems to have really taken off, in Egypt, Mycenaean Greece and Mesopotamia, also called the Near East (located in what’s now Syria and Iraq).

Unlike today, glass of those times was often opaque and saturated with color, and the source of the silica was crushed quartz pebbles, not sand. Clever ancients figured out how to lower the melting temperature of the crushed quartz to what could be reached in Bronze Age furnaces: They used the ash of desert plants, which contain high levels of salts such as sodium carbonate or bicarbonates. The plants also contain lime — calcium oxide — that made the glass more stable. Ancient glassmakers also added materials that impart color to glass, such as cobalt for dark blue, or lead antimonate for yellow. The ingredients melded in the melt, contributing chemical clues that researchers look for today.

“We can start to parse the raw materials that went into the production of the glass and then suggest where in the world it came from,” says materials scientist Marc Walton of Northwestern University in Evanston, Illinois, coauthor of an article about materials science and archaeological artifacts and artwork in the 2021 Annual Review of Materials Research.

But those clues have taken researchers only so far. When Shortland and colleagues were investigating glass’s origins around 20 years ago, glass from Egypt, the Near East and Greece appeared to be chemical lookalikes, difficult to distinguish based on the techniques available at the time.

The exception was blue glass, thanks to work by Polish-born chemist Alexander Kaczmarczyk who in the 1980s discovered that elements such as aluminum, manganese, nickel and zinc tag along with the cobalt that gives glass an abyssal blue hue. By examining the relative amounts of these, Kaczmarczyk’s team even tracked the cobalt ore used for blue coloring to its mineral source in specific Egyptian oases.

Picking up where Kaczmarczyk left off, Shortland set out to understand how ancient Egyptians worked with that cobalt ore. The material, a sulfate-containing compound called alum, won’t incorporate into the glass. But in the lab, Shortland and colleagues reproduced a chemical reaction that Late Bronze Age craftspeople may have used to create a compatible pigment. And they created a deep blue glass that did, in fact, resemble Egyptian blue glass.

In the first years of this century, a relatively new method offered more insights. Called laser ablation inductively coupled mass spectrometry, or LA-ICP-MS, the technique uses a laser to remove a tiny speck of material, invisible to the naked eye. (“That’s very much more acceptable to a museum than getting the big hammer out and taking a piece off,” Shortland says.) It then uses mass spectrometry to measure a suite of elements, creating a chemical fingerprint of the sample.

Based on this method, in 2009 Shortland, Walton and others analyzed Late Bronze Age glass beads unearthed in Greece, a region that some researchers proposed had its own glass production workshops. The analysis revealed that the Grecian glass had either Near Eastern or Egyptian signatures, supporting the idea that Greece imported glass from both places and, though it may have worked the glass, did not make it locally. Egyptian glasses tended to have higher levels of lanthanum, zirconium and titanium, while Near Eastern glasses tended to have more chromium.
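That provenance logic can be caricatured as a simple decision rule. The sketch below is a toy illustration only: the marker elements follow the pattern described above, but the scoring rule, the implied threshold and the example concentrations are all invented for the example; real analyses compare full multi-element fingerprints statistically.

```python
# Toy sketch of trace-element provenance: Egyptian glasses tend to run higher
# in lanthanum, zirconium and titanium, Near Eastern glasses higher in
# chromium. The scoring rule and the example numbers are invented for
# illustration, not taken from any published analysis.

def likely_origin(trace_ppm):
    """Guess a glass sample's origin from trace-element levels (illustrative only)."""
    egyptian_markers = (trace_ppm.get("La", 0.0)
                        + trace_ppm.get("Zr", 0.0)
                        + trace_ppm.get("Ti", 0.0))
    near_eastern_markers = trace_ppm.get("Cr", 0.0)
    # Hypothetical rule: whichever marker group dominates decides the call.
    return "Egyptian" if egyptian_markers > near_eastern_markers else "Near Eastern"

# Made-up fingerprints for two samples, in parts per million
sample_a = {"La": 5.0, "Zr": 40.0, "Ti": 300.0, "Cr": 10.0}
sample_b = {"La": 1.0, "Zr": 8.0, "Ti": 60.0, "Cr": 250.0}

print(likely_origin(sample_a))
print(likely_origin(sample_b))
```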

Obscure origins

But where was glass first birthed? For at least 100 years, researchers have debated over two main contenders: the Near East and Egypt. Based on some beautiful, well-preserved glass artifacts dating from around 1500 BCE, Egypt was favored at first. But by the 1980s, researchers were placing their bets on the Near East after excavators found loads of glass at Nuzi, a Late Bronze Age provincial town in modern-day Iraq, thought to date from the 1500s BCE.

Around that same time, though, a reanalysis of archaeological texts revealed that Nuzi was 100 to 150 years younger than estimated, and the Egyptian glass industry from that time period seems to have been more advanced — favoring Egypt once again.

But that isn’t the end of the story. Glass can degrade, especially in wet conditions. Objects from Egypt’s ancient tombs and towns have lasted millennia, aided by the desert’s nearly ideal preservation environment. Near Eastern glass, on the other hand, from tombs on Mesopotamian floodplains, more frequently faced attacks by water, which can leach out stabilizing compounds and turn glass to flaky powder.

This deteriorated glass is difficult to identify and impossible to display, meaning lots of Near East glass may be missed. “I think a lot of the glass has effectively disappeared,” Shortland says. “Early excavations were less bothered about this flaky ex-glass than they might have been about other things.”

The bottom line: “You can’t really decide which is the earliest at the moment,” Shortland says.

Finding glassmaking

It’s even tricky to parse where glass was made at all. That’s partly because the material was frequently exchanged, both as finished objects and as raw glass to be worked into beads or vessels.

Glass helped to tie ancient empires together, says Thilo Rehren, an archaeological materials scientist at the Cyprus Institute in Nicosia who has examined the craftsmanship behind objects from Tut’s tomb, among others. Kings shipped materials to other rulers, expecting goods or loyalty in return, he says. Ancient inventories from the Late Bronze Age reveal an exchange of ivory, gems, wood, animals, people and more, and while the role of glass in this convention of gifting and tribute isn’t fully understood, the composition of artifacts supports glass swaps too.

In a glass bead necklace excavated in Gurob, Egypt, in an area thought to once have been a harem palace, Shortland and colleagues found the chemical signature associated with Mesopotamia: relatively high levels of chromium. The beads’ location implied that the bling was probably gifted to Pharaoh Thutmose III along with Near Eastern women who became the king’s wives. With chemistry on the case, “we’re now just beginning to see some of this exchange going on between Egypt and other areas,” Shortland says.

In the early 1980s, divers found the mother lode of such exchanges off the coast of Turkey in a sunken vessel from the 1300s BCE called the Uluburun shipwreck. Analysis of its contents reveals a global economy, says Caroline Jackson, an archaeologist at the University of Sheffield in England. Possibly a Phoenician ship on a gift-giving expedition, the vessel was hauling items from all over: ivory, copper, tin, even amber from the Baltic. From the wreck, excavators retrieved a load of colored glass — 175 unfinished blocks, called ingots, for glassworking.

Most of the ingots were cobalt-colored deep blue, but the ship was also ferrying purple and turquoise ingots. Jackson and her colleagues chipped a few small fragments off of three ingots and reported in 2010 that the raw glass blocks were Egyptian in origin, based on the concentration of trace metals.

Tracing glassmaking

Another reason why it’s tricky to identify sites for glassmaking is that the process makes little waste. “You get a finished object, and that, of course, goes into the museum,” Rehren says. That led him and archaeologist Edgar Pusch, working in a flea-ridden dig house on the Nile Delta about 20 years ago, to ponder pottery pieces for signs of an ancient glassmaking studio. The site, near present-day Qantir, Egypt, was the capital of Pharaoh Ramses II in the 1200s BCE.

Rehren and Pusch saw that many of the vessels had a lime-rich layer, which would have acted as a nonstick barrier between glass and the ceramic, allowing glass to be lifted out easily. Some of these suspected glassmaking vessels — including a reused beer jar — contained white, foamy-looking semi-finished glass. Rehren and Pusch also linked the color of the pottery vessels to the temperature they’d withstood in the furnace. At around 900 degrees Celsius, the raw materials could have been melted, to make that semi-finished glass. But some crucibles were dark red or black, suggesting they’d been heated to at least 1,000 degrees Celsius, a high enough temperature to finish melting the glass and color it evenly to produce a glass ingot.

Some crucibles even contained lingering bits of red glass, colored with copper. “We were able to identify the evidence for glassmaking,” Rehren says. “Nobody knew what it should have looked like.”

Since then, Rehren and colleagues have found similar evidence of glassmaking and ingot production at other sites, including the ancient desert city of Tell el-Amarna, known as Amarna for short, briefly the capital of Akhenaton during the 1300s BCE. And they noticed an interesting pattern. In Amarna’s crucibles, only cobalt blue glass fragments showed up. But at Qantir, where red-imparting copper was also worked to make bronze, excavated crucibles contain predominantly red glass fragments. (“Those people knew exactly how to deal with copper — that was their special skill,” Rehren says.) At Qantir, Egyptian Egyptologist Mahmoud Hamza even unearthed a large corroded red glass ingot in the 1920s. And at a site called Lisht, crucibles with glass remains contain primarily turquoise-colored fragments.

The monochrome finds at each site suggest that workshops specialized in one color, Rehren says. But artisans apparently had access to a rainbow. At Amarna, glass rods excavated from the site — probably made from re-melted ingots — come in a variety of colors, supporting the idea that colored ingots were shipped and traded for glassworking at many locations.

Glass on the ground

Archaeologists continue to pursue the story of glass at Amarna — and, in some cases, to more carefully repeat the explorations of earlier archaeologists.

In 1921-22, a British team led by archaeologist Leonard Woolley (most famous for his excavations at Ur) excavated Amarna. “Let’s put it bluntly — he made a total mess,” says Anna Hodgkinson, an Egyptologist and archaeologist at the Free University of Berlin. In a hurry and focused on more showy finds, Woolley didn’t do due diligence in documenting the glass. Excavating in 2014 and 2017, Hodgkinson and colleagues worked to pick up the missed pieces.

Hodgkinson’s team found glass rods and chips all over the area of Amarna they excavated. Some were unearthed near relatively low-status households without kilns, a headscratcher because of the assumed role of glass in signifying status. Inspired by even older Egyptian art that depicted two metalworkers blowing into a fire with pipes, the archaeologists wondered whether small fires could be used to work glass. Sweating and getting stinky around the flames, they discovered they could reach high enough temperatures to form beads in smaller fires than those typically associated with glasswork. Such tiny fireplaces may have been missed by earlier excavators, Hodgkinson says, so perhaps glassworking was less exclusive than researchers have always thought. Maybe women and children were also involved, Hodgkinson speculates, reflecting on the many hands required to maintain the fire.

Rehren, too, has been rethinking whom glass was for, since Near Eastern merchant towns had so much of it and large amounts were shipped to Greece. “It doesn’t smell to me like a closely controlled royal commodity,” he says. “I’m convinced that we will, in 5, 10 years, be able to argue that glass was an expensive and specialist commodity, but not a tightly controlled one.” Elite, but not just for royalty.

Researchers are also starting to use materials science to track down a potential trade in colors. In 2020, Shortland and colleagues reported using isotopes — versions of elements that differ in their atomic weights — to trace the source of antimony, an element that can be used to create a yellow color or that can make glass opaque. “The vast majority of the very early glass — that’s the beginning of glassmaking — has antimony in it,” Shortland says. But antimony is quite rare, leading Shortland’s team to wonder where ancient glassmakers got it from.

The antimony isotopes in the glass, they found, matched ores containing antimony sulfide, or stibnite, from present-day Georgia in the Caucasus — one of the best pieces of evidence for an international trade in colors.

Researchers are continuing to examine the era of first glass. While Egypt has gotten a large share of the attention, there are many sites in the Near East that archaeologists could still excavate in search of new leads. And with modern-day restrictions on moving objects to other countries or even off-site for analysis, Hodgkinson and other archaeologists are working to apply portable methods in the field and develop collaborations with local researchers. Meanwhile, many old objects may yield new clues as they are analyzed again with more powerful techniques.

As our historical knowledge about glass continues to be shaped, Rehren cautions against certainty in the conclusions. Though archaeologists, aided by records and what’s known of cultural contexts, carefully infer the significance and saga of artifacts, only a fraction of a percent of the materials that once littered any given site even survives today. “You get conflicting information, conflicting ideas,” he says. All these fragments of information, of glass, “you can assemble in different ways to make different pictures.”


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

A Beautifully Baked Beef Dinner

(Culinary.net) Many families crave savory and delicious weeknight meals. After a long day of work and school, it’s time to gather around the table to share a mouthwatering meal and memories together.

For something truly wholesome, try this Beef Tenderloin with Roasted Cauliflower and Spinach Salad. It’s a full meal the whole family can enjoy, and you’ll be surprised at how easy it is to feed all the smiling faces.

This meal has layers of flavor and sneaks in a few vegetables like spinach and cauliflower, but even picky eaters can’t resist trying it.

Start with a beef tenderloin and drizzle it generously with olive oil. Add 1 teaspoon of pepper. Flip and repeat on the other side. Bake for 12 minutes at 475 F.

Next, add one head of cauliflower to a mixing bowl with five shallots cut into quarters. Add 2 tablespoons of olive oil; mix well with salt and pepper, to taste. Add this to the baking sheet with the beef tenderloin and bake 18-25 minutes.

While that’s cooking, add 3 tablespoons of olive oil to a mixing bowl with lemon juice, Dijon mustard, salt, pepper and baby spinach.

To plate, add baby spinach salad first then the cauliflower and shallot mixture and, finally, that juicy, perfectly cooked beef tenderloin. Garnish with cranberries for a splash of color.

This meal is satisfying and only requires some mixing bowls and a large sheet pan to make cleanup a breeze so you can focus on what matters most: time with your loved ones.

Find more recipes and savory main dishes at Culinary.net.


Beef Tenderloin with Roasted Cauliflower and Spinach Salad

Servings: 4-6

  • 1          beef tenderloin (4 pounds), wrapped with butcher’s twine
  • 9          tablespoons olive oil, divided
  • 4          teaspoons pepper, divided
  • 1          head cauliflower
  • 5          shallots, quartered
  • 2          teaspoons salt, divided
  • 3          tablespoons lemon juice
  • 2          teaspoons Dijon mustard
  • 1          package (5 1/2 ounces) baby spinach
  • dried cranberries, for garnish
  1. Heat oven to 475 F. Place beef on baking sheet. Rub 4 tablespoons olive oil and 2 teaspoons pepper into beef. Bake 12 minutes.
  2. In large bowl, toss cauliflower, shallots, 2 tablespoons olive oil, 1 teaspoon salt and 1 teaspoon pepper to combine. Scatter vegetables around beef and bake 18-25 minutes, or until desired doneness is reached. Allow meat to rest 15 minutes covered in aluminum foil.
  3. In medium bowl, whisk 3 tablespoons olive oil, lemon juice, mustard and remaining salt and pepper until combined. Add spinach; stir until combined.
  4. Serve by layering spinach topped with cauliflower and shallots then sliced tenderloin. Garnish with dried cranberries.
SOURCE:
Culinary.net

Saturday, April 8, 2023

April 26: Join a conversation about the teenage brain’s strengths and vulnerabilities, how adults can support teenagers with mental health issues, and how teens can help one another

April 26, 2023 | 12 p.m. Pacific | 3 p.m. Eastern | 7 p.m. UTC


It may be difficult for older adults to fathom, but today’s teenagers have never lived in a world where depression, anxiety and other mental health disorders weren’t rife — and on the rise — among their peers. Just a few decades ago, many psychiatrists thought depression was a condition that affected only adults. Now we know better: Researchers think more than half of mental health disorders, including depression, begin by age 14.

The teenage years are a dynamic period of brain development, when neuronal connections undergo intense remodeling and pruning. This flexibility allows teenagers to learn quickly and adapt to a changing environment, but it can also make them vulnerable. Many questions have yet to be answered, such as why the risk of mental illness increases severalfold during adolescence, why some teens appear more resilient to mental health problems than others, and when the brain should be considered “mature.”

On Wednesday, April 26, join leading neuroscientist BJ Casey and teen mental health advocate Diana Chao for a conversation with Knowable Magazine and Annual Reviews about the teen brain’s unique strengths and challenges, and why many experts have declared a global mental health emergency in children and adolescents. We’ll talk about what adults can do to support the teenagers in their lives — and crucially, how teens can help one another.

This event is the second in a series of events and articles exploring the brain across the lifespan. The series, “Inside the brain: A lifetime of change,” is supported by a grant from the Dana Foundation.

Register here for “The baby brain: Learning in leaps and bounds” and “The mature mind: Aging resiliently.” If you can’t attend the live events, please register to receive an email when the replays are available.

Speakers

BJ Casey

Neuroscientist, Barnard College-Columbia University

BJ Casey is the Christina L. Williams Professor of Neuroscience in the Department of Neuroscience and Behavior at Barnard College-Columbia University. She pioneered the use of functional magnetic resonance imaging to examine the developing human brain, particularly during adolescence. Her scientific discoveries have been published in top-tier journals, including Science, Nature Medicine, Nature Neuroscience and the Proceedings of the National Academy of Sciences. She has received the Association for Psychological Science Lifetime Achievement Mentor Award and the American Psychological Association Distinguished Scientific Contribution Award. She is an elected member of the American Academy of Arts and Sciences.

Diana Chao

Mental health activist and founder of Letters to Strangers

Diana Chao founded Letters to Strangers (L2S) when she was a sophomore in high school, after bipolar disorder and a blinding condition nearly ended her life. Today, L2S is the largest global youth-for-youth mental health nonprofit, reaching over 35,000 people annually on six continents and publishing the world’s first youth-for-youth mental health guidebook for free. Chao has been honored by two US presidents at the White House and named a 2021 Princess Diana Legacy Award winner, a 2020 L’Oréal Paris Women of Worth honoree and a 2019 Oprah Magazine Health Hero. Chao studied geosciences at Princeton University and works as a climate scientist for Kinetic Analysis Corporation.

Moderator

Emily Underwood

Science Content Producer, Knowable Magazine

Emily Underwood has been covering science for over a decade, including as a neuroscience reporter for Science. She has a master’s degree in science writing from Johns Hopkins University, and her reporting has won national awards, including a 2018 National Academies Keck Futures Initiatives Communication Award for magazine writing.

About

This event is part of an ongoing series of live events and science journalism from Knowable Magazine and Annual Reviews, a nonprofit publisher dedicated to synthesizing and integrating knowledge for the progress of science and the benefit of society.

The Dana Foundation is a private philanthropic organization dedicated to advancing neuroscience and society.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.