Sunday, April 9, 2023

Making computer chips act more like brain cells

Flexible organic circuits that mimic biological neurons could increase processing speed and might someday hook right into your head

The human brain is an amazing computing machine. Weighing only three pounds or so, it can process information a thousand times faster than the fastest supercomputer, store a thousand times more information than a powerful laptop, and do it all using no more energy than a 20-watt lightbulb.

Researchers are trying to replicate this success using soft, flexible organic materials that can operate like biological neurons and someday might even be able to interconnect with them. Eventually, soft “neuromorphic” computer chips could be implanted directly into the brain, allowing people to control an artificial arm or a computer monitor simply by thinking about it.

Like real neurons — but unlike conventional computer chips — these new devices can send and receive both chemical and electrical signals. “Your brain works with chemicals, with neurotransmitters like dopamine and serotonin. Our materials are able to interact electrochemically with them,” says Alberto Salleo, a materials scientist at Stanford University who wrote about the potential for organic neuromorphic devices in the 2021 Annual Review of Materials Research.

Salleo and other researchers have created electronic devices using these soft organic materials that can act like transistors (which amplify and switch electrical signals), memory cells (which store information) and other basic electronic components.

The work grows out of an increasing interest in neuromorphic computer circuits that mimic how human neural connections, or synapses, work. These circuits, whether made of silicon, metal or organic materials, work less like those in digital computers and more like the networks of neurons in the human brain.

Conventional digital computers work one step at a time, and their architecture creates a fundamental division between calculation and memory. This division means that ones and zeroes must be shuttled back and forth between locations on the computer processor, creating a bottleneck for speed and energy use.

The brain does things differently. An individual neuron receives signals from many other neurons, and all these signals together add up to affect the electrical state of the receiving neuron. In effect, each neuron serves as both a calculating device — integrating the value of all the signals it has received — and a memory device: storing the value of all of those combined signals as an infinitely variable analog value, rather than the zero-or-one of digital computers.

Researchers have developed a number of different “memristive” devices that mimic this ability. Running electric current through them changes their electrical resistance. Like biological neurons, these devices calculate by adding up the values of all the currents they have been exposed to, and they remember that running total through the value their resistance ends up taking.

A simple organic memristor, for example, might have two layers of electrically conducting materials. When a voltage is applied, electric current drives positively charged ions from one layer into the other, changing how easily the second layer will conduct electricity the next time it is exposed to an electric current. “It’s a way of letting the physics do the computing,” says Matthew Marinella, a computer engineer at Arizona State University in Tempe who researches neuromorphic computing.

The technique also liberates the computer from strictly binary values. “When you have classical computer memory, it’s either a zero or a one. We make a memory that could be any value between zero and one. So you can tune it in an analog fashion,” Salleo says.
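To make this concrete, here is a minimal Python sketch of the behavior described above — an analog state that drifts with each applied pulse and can be read back as a graded value between zero and one. It is a toy model under a simple linear-drift assumption, with made-up parameter names; it is not the actual physics of the organic devices discussed in the article.

```python
# Toy model of a memristive "synapse": an analog weight between 0 and 1
# that is nudged by each voltage pulse (computation by accumulation)
# and persists between pulses (memory). Purely illustrative.

class ToyMemristor:
    def __init__(self, weight=0.5, sensitivity=0.05):
        self.weight = weight            # analog state, clamped to 0.0 .. 1.0
        self.sensitivity = sensitivity  # how strongly a pulse shifts the state

    def apply_pulse(self, voltage):
        """A positive pulse drives ions one way and raises conductance;
        a negative pulse drives them back and lowers it."""
        self.weight += self.sensitivity * voltage
        self.weight = min(1.0, max(0.0, self.weight))

    def read_current(self, read_voltage=0.1):
        """A small read voltage returns a current proportional to the
        stored analog state without disturbing it much."""
        return self.weight * read_voltage


if __name__ == "__main__":
    device = ToyMemristor()
    for pulse in [1.0, 1.0, -0.5, 1.0]:   # a short history of input signals
        device.apply_pulse(pulse)
    print(f"stored analog weight: {device.weight:.2f}")   # state after the pulses
    print(f"read-out current:     {device.read_current():.4f}")
```

In this sketch the device never stores a bare zero or one; the stored weight is whatever analog value its input history left behind, which is the property the researchers are exploiting.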

At the moment, most memristors and related devices aren’t based on organic materials but use standard silicon chip technology. Some are even used commercially as a way of speeding up artificial intelligence programs. But organic components have the potential to do the job faster while using less energy, Salleo says. Better yet, they could be designed to integrate with your own brain. The materials are soft and flexible, and also have electrochemical properties that allow them to interact with biological neurons. 

For instance, Francesca Santoro, an electrical engineer now at RWTH Aachen University in Germany, is developing a polymer device that takes input from real cells and “learns” from it. In her device, the cells are separated from the artificial neuron by a small space, similar to the synapses that separate real neurons from one another. As the cells produce dopamine, a nerve-signaling chemical, the dopamine changes the electrical state of the artificial half of the device. The more dopamine the cells produce, the more the electrical state of the artificial neuron changes, just as you might see with two biological neurons. “Our ultimate goal is really to design electronics which look like neurons and act like neurons,” Santoro says.

The approach could offer a better way to use brain activity to drive prosthetics or computer monitors. Today’s systems use standard electronics, including electrodes that can pick up only broad patterns of electrical activity. And the equipment is bulky and requires external computers to operate.

Flexible, neuromorphic circuits could improve this in at least two ways. They would be capable of translating neural signals in a much more granular way, responding to signals from individual neurons. And the devices might also be able to handle some of the necessary computations themselves, Salleo says, which could save energy and boost processing speed.

Low-level, decentralized systems of this sort — with small, neuromorphic computers processing information as it is received by local sensors — are a promising avenue for neuromorphic computing, Salleo and Santoro say. “The fact that they so nicely resemble the electrical operation of neurons makes them ideal for physical and electrical coupling with neuronal tissue,” Santoro says, “and ultimately the brain.”


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Paleogenomic research has expanded rapidly over the past two decades, igniting heated debate about handling remains. Who gives consent for study participants long gone — and who should speak for them today?

The 2022 Nobel Prize in Physiology or Medicine has brought fresh attention to paleogenomics, the sequencing of DNA from ancient specimens. Swedish geneticist Svante Pääbo won the coveted prize “for his discoveries concerning the genomes of extinct hominins and human evolution.” In addition to sequencing the Neanderthal genome and identifying a previously unknown group of early humans, the Denisovans, Pääbo found that genetic material from these now extinct hominins had mixed with that of our own Homo sapiens after our ancestors migrated from Africa some 70,000 years ago.

The study of ancient DNA has also shed light on other migrations, as well as the evolution of genes involved in regulating our immune system and the origin of our tolerance to lactose, among many other things. The research has also ignited ethical questions. Clinical research on living people requires the informed consent of participants and compliance with federal and institutional rules.

But what do you do when you’re studying the DNA of people who died a long time ago? That gets complicated, says anthropologist Alyssa Bader, coauthor of an article about ethics in human paleogenomics in the 2022 Annual Review of Genomics and Human Genetics.

“Consent takes on new meaning” when participants are no longer around to make their voices heard, Bader and colleagues write. Scientists instead must regulate themselves, and navigate the sometimes contradictory guidelines — some of which prioritize research outcomes; others, the wishes of descendants, even very distant ones, and local communities. There are no clear-cut, ironclad rules, says Bader, now at McGill University in Montreal, Canada: “We don’t necessarily have one unified field standard for ethics.”

Take, for example, research at Pueblo Bonito, a massive stone great house in Chaco Canyon in New Mexico, where a community thrived from 828 to 1126 AD under the rule of ancestral Puebloan peoples. In the late 1800s, archaeologists from the American Museum of Natural History started excavations there, unearthing more than 50,000 tools, ritual objects and other belongings, as well as the remains of 14 people. These human bones remained stored in boxes and drawers, allowing non-Indigenous researchers to study them. Recently, a research team extracted and analyzed their DNA. The study, published in 2017, suggested an exciting finding: The remains found in Pueblo Bonito once belonged to members of a matrilineal dynasty, and leadership at Chaco Canyon was likely passed through a female line that persisted for hundreds of years until the society collapsed.

But the research sparked fierce ethical discussions. Several anthropologists and geneticists, Bader included, criticized the study for its lack of tribal consultation — the Puebloan and Diné communities, who still live in the area, were not asked for permission to carry out the research. The critics also cited the dehumanizing language (such as “cranium 8” or “burial 14”) that authors used to describe the Pueblo ancestors and warned that the controversy would exacerbate feelings of distrust toward scientists.

Bader spoke with Knowable Magazine about what we can learn from research on ancient DNA and why considering the ethics around it is such an urgent task for the field. This conversation has been edited for length and clarity.

What is ancient DNA? And where can we find it?

Well, ancient DNA is the DNA that’s been preserved over hundreds or thousands of years. And it can be from humans, from animals, from plants, from microbes, viruses, bacteria.

An easy explanation would be “DNA from non-living beings.” We have DNA from woolly mammoths, we have ancient DNA from Neanderthals, and we have DNA from more recent human ancestors. So it’s a huge span.

If we’re talking about DNA from humans, we can get it from teeth, from bone, from hair. We find it from coprolites, which is poop. We can get it from something that someone chewed on. Any way that you would leave your DNA now as a living human could also potentially be preserved for the future as well.

With the development of next-generation sequencing technology, there has been a dramatic proliferation of research on ancient genomes from ancestral humans, from very few published before 2009 to more than 1,000 by 2017. What have we learned from peeking into the genomes of ancestral humans?

Oh, there’s a lot of different types of questions that we can address by looking at the DNA from human ancestors. We can see how closely or distantly related they were across continents, across time spans. We can see population movements. We can see how humans and their environments interacted.

But all of it, I think, boils down to just understanding a little bit more about what makes humans who we are now. Ancient DNA is simply using a genomic perspective to understand what things happened in the past to shape what humans are today.

This is something that you’ve also tried to do with your own research, right?

Yes. Part of my family is Tsimshian from southeast Alaska. I grew up in Washington state. But when I was a kid, and even now, one of the things that I really enjoy doing to maintain a close relationship with my family in Alaska is go up and spend the summer fishing. I just went fishing with my grandpa, my uncles and my dad last summer.

That influenced the research I do now, which is thinking about how traditional foods — such as salmon, in my family — shape people’s oral microbiome, the community of bacteria that are in our mouths. And there’s been research showing those bacteria can impact our health outside of the mouth. If they get out of balance, they can cause problems in other areas of your body. They can also support your health.

My research looks at the relationship between traditional food in Indigenous communities of the Pacific Northwest, mostly in Alaska and British Columbia, and how they can support the biological resilience and health of these communities. In short, understanding the way that our diet might be impacting our health on a microbial level.

And you’re also studying the oral microbiome of Tsimshian ancestors.

Yes, we’re comparing the microbes that we find in our ancestors’ mouths with microbes in descendant communities, and trying to answer what folks are eating now, what ancestors were eating in the past, how that stayed the same, how that’s changed through time, and then how that correlates with the microbiome.

When scientists study the DNA of living people, some sort of institutional committee reviews those projects to make sure they are carried out in an ethical manner. What happens when the people you study have been dead for a long time?

The idea of consent and what it means in the context of ancient DNA research is a big challenge in the field. Ancestors themselves don’t have a way to either consent to being part of research or to withhold their consent, the way that a living person who opts into genetic research can. We don’t have a good way to do that with ancestors.

There are a lot of different approaches that researchers take to that, though the one I advocate for, and model my own research practice after, is what we call community-collaborative research. Here, descendant communities stand in for the ancestors, and part of that is because data from ancestors can impact these modern communities.

In what way, exactly?

Well, we can’t really act as if ancestors just exist in this prehistoric or historic bubble and that understanding or learning new things about them doesn’t impact folks who are living now.

These things can tell us a lot about a specific group of ancestors, sure, but they might also be part of the history of living communities. For example, there are researchers looking at relatedness between communities, looking at population histories and migration and movements.

How do you approach your research on the field and with the communities involved?

My approach is about building the relationship with the community as research partners. So I’m not just approaching for permission.

For example, for one of the communities that I worked with, I went out there, introduced myself and had community meetings. I talked about my research expertise and the types of things I was interested in — but I also heard the kind of research that they were interested in. Then we were able to chat about what methods could be used to explore those mutual research interests and plan the project together. I got formal permission to go to the museum where their ancestors were, to be able to look at them and collect samples from them.

I provided updates about where we were in the research process. This was before Covid, so I went out every summer to provide updates. And then, when we started to get data from these analyses, I was interpreting that data with the community. Instead of me presenting it as, you know, “These are the results; this is what the science says,” I was like, “This is the data, this is how we generated it, this is how it’s often interpreted. How should we think about it in the context of community history and community-held knowledge?” That enhances the scientific outcomes.

Did it help that you’re Tsimshian and were familiar with community values, culture and traditions?

I think that the biggest influence that has had is that it’s shaped how I hold myself accountable to my community research partners. So when I’m doing the work and talking to people, I think: “If someone was approaching my family, how would I want them to be treated?” That has a big impact on the way I construct my research collaborations, and also on the way I have turned away from, or pushed away, some of the extractive processes where communities aren’t consulted or are treated as a resource, as something researchers use as they need.

You also mentioned that researchers may follow different approaches to these kinds of ethical issues. Can you talk about this apparent lack of consensus?

The thing about ethics is that they’re culturally constructed. Two different people might have different ideas of what is or is not ethical. And those ideas can also change over time. I think we see a little bit of that with research.

In the review article, we talk about there being some tension. Some folks really orient the research around stakeholders like local and descendant communities, and how it impacts them. You can also take the approach that research is done for the sake of knowledge, regardless of how or who it impacts.

So, depending on how you orient yourself around these perspectives, you might shift your research practices in a specific way. But we don’t have one set of rules or something that everyone is held accountable to. There are no formal consequences if you don’t abide by one of the ethical guidelines, some of which even conflict with one another.

I think there are benefits to there not being one concrete thing, because that means you can adapt to different situations. If you write one set of rules, that also creates limitations. But it also means that it’s difficult for folks to sometimes figure out what they should be doing.

What kind of mistakes have been made?

Particularly in the context of North America, the remains of Indigenous human ancestors have been taken from their communities and used as a resource for researchers. Sometimes communities knew about it and objected. Sometimes communities didn’t even know where their ancestors were, or what they were being used for.

These remains have been collected in museums, displayed in ways that communities didn’t approve of or felt were disrespectful. And in a museum context like that, non-Indigenous scientists didn’t necessarily have to go to a descendant community and ask for permission to do their research. This just continued a history of violence, harm and exploitation.

As ancient DNA came along, then those ancestors’ bones and remains became a source for genomic research. But we don’t want these harms, which came out of archaeological research more broadly, to continue to proliferate in genomic research. We want them to stop.

How can this community-centered approach you advocate for facilitate a more collaborative research?

Genomic data is just one form of information, right? If you think about what makes you as a person, your genes that come from your family and ancestry are one part of what makes you who you are. And I think the same is true for paleogenomic research. The genomes that we study using ancient DNA are one part of a really big story.

When you collaborate with communities and you include community-held knowledge or histories, that improves the narratives that we’re able to tell using genomic information. It can only improve things, because we have more depth, more perspective on the story that we’re trying to tell through genomes.

In my view, the people who should have the most voice in research are the people who potentially bear the most risk from research. Researchers can cause harm by taking samples from ancestors, excluding communities from giving permission, or excluding them from being involved in the research process.

In a deeply collaborative approach, communities are our partners. They’re not only giving consent for samples from ancestors to be taken, but also helping to shape the research questions. Maybe the methods. They are involved in interpreting the data. Or preparing results for publication. Of course, that all depends on how deeply a community does or does not want to be involved in the process.

For you, what does it mean to think ethically about ancient DNA research?

When I think about what may or may not be ethical, I try to think about the way that harm has happened in the past.

So, when I think about how I want to do my research now, I hold myself accountable to communities when I do my work. I don’t think about research as being just a value-neutral thing. I try to think about how my research will impact other people: who will benefit from it, and how I can prevent harms in doing it.

It’s this kind of restorative-justice approach where you say, “Folks were excluded in the past and we want to include them as much as possible now to heal that harm.” To me, that can be achieved by finding new ways to break down the barriers between who is being researched and who is doing the research.


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

How the NRA evolved from backing a 1934 ban on machine guns to blocking nearly all firearm restrictions today

NRA conventiongoers, like these at the gun group’s 2018 annual meeting, browse firearms exhibits. Loren Elliott/AFP via Getty Images
Robert Spitzer, State University of New York Cortland

The mass shootings at a Buffalo, New York, supermarket and an elementary school in Uvalde, Texas, just 10 days apart, are stirring the now-familiar national debate over guns seen after the tragic 2012 and 2018 school shootings in Newtown, Connecticut, and Parkland, Florida.

Inevitably, if also understandably, many Americans are blaming the National Rifle Association for thwarting stronger gun laws that might have prevented these two recent tragedies and many others. And despite the proximity in time and location to the Texas shooting, the NRA is proceeding with its plans to hold its annual convention in Houston on May 27-29, 2022. The featured speakers include former President Donald Trump and Sen. Ted Cruz, a Texas Republican.

After spending decades researching and writing about how and why the NRA came to hold such sway over national gun policies, I’ve seen this narrative take unexpected turns in the last few years that raise new questions about the organization’s reputation for invincibility.

People delivered boxes of petitions calling for stronger gun control rules to former Florida Gov. Rick Scott after the 2018 mass shooting in Parkland. AP Photo/Gerald Herbert

Three phases

The NRA’s more than 150-year history spans three distinct eras.

At first the group was mainly concerned with marksmanship. It later played a relatively constructive role regarding safety-minded gun ownership restrictions before turning into a rigid politicized force.

The NRA was formed in 1871 by two Civil War veterans from Northern states who had witnessed the typical soldier’s inability to handle guns.

The organization initially leaned on government support, which included subsidies for shooting matches and surplus weaponry. These freebies, which lasted until the 1970s, gave gun enthusiasts a powerful incentive to join the NRA.

The NRA played a role in fledgling political efforts to formulate state and national gun policy in the 1920s and 1930s after Prohibition-era liquor trafficking stoked gang warfare. It backed measures like requiring a permit to carry a gun and even a gun purchase waiting period.

And the NRA helped shape the National Firearms Act of 1934, with two of its leaders testifying before Congress at length regarding this landmark legislation. They supported, if grudgingly, its main provisions restricting gangster weapons, which included a national registry for machine guns and sawed-off shotguns and heavy taxes on both. But they opposed handgun registration, which was stripped out of the nation’s first significant national gun law.

Decades later, in the legislative battle held in the aftermath of President John F. Kennedy’s assassination and amid rising concerns about crime, the NRA opposed another national registry provision that would have applied to all firearms. Congress ultimately stripped it from the Gun Control Act of 1968.

Throughout this period, however, the NRA remained primarily focused on marksmanship, hunting and other recreational activities, although it did continue to voice opposition to new gun laws, especially to its membership.


A sharp right turn

By the mid-1970s, a dissident group within the NRA believed that the organization was losing the national debate over guns by being too defensive and not political enough. The dispute erupted at the NRA’s 1977 annual convention, where the dissidents deposed the old guard.

From this point forward, the NRA became ever more political and strident in its defense of so-called “gun rights,” which it increasingly defined as nearly absolute under the Second Amendment.

One sign of how much the NRA had changed: The Second Amendment right to bear arms never came up in the 166 pages of congressional testimony regarding the 1934 gun law. Today, the organization treats those words as its mantra, constantly citing them.

And until the mid-1970s, the NRA supported waiting periods for handgun purchases. Since then, however, it has opposed them. It fought vehemently against the ultimately successful enactment of a five-business-day waiting period and background checks for handgun purchases in 1993.

The NRA’s influence hit a zenith during George W. Bush’s gun-friendly presidency, which embraced the group’s positions. Among other things, his administration let the ban on assault weapons expire, and it supported the NRA’s top legislative priority: enactment in 2005 of special liability protections for the gun industry, the Protection of Lawful Commerce in Arms Act.

People attending the National Rifle Association Leadership Forum in 2017 paid rapt attention to President Donald Trump’s address. AP Photo/Evan Vucci

Having a White House ally isn’t everything

Despite past successes, the NRA has suffered from a series of mostly self-inflicted blows that have precipitated an existential crisis for the organization.

Most significantly, an investigation by the New York attorney general, which led to a lawsuit filed in 2020, has revealed extensive allegations of rampant cronyism, corruption, sweetheart deals and fraud. Partly as a result of these revelations, NRA membership has apparently declined to roughly 4.5 million, down from a high of about 5 million.

Despite this trend, however, the grassroots gun community is no less committed to its agenda of opposition to new gun laws. Indeed, the Pew Research Center’s findings in 2017 suggested that about 14 million people identify with the group. By any measure, that’s still a small minority of the nation’s nearly 260 million adults.

But support for gun rights has become a litmus test for Republican conservatism and is baked into a major political party’s agenda. This laserlike focus on gun issues continues to enhance the NRA’s influence even when the organization faces turmoil. This means that the protection and advancement of gun rights are propelled by the broader conservative movement, so that the NRA no longer needs to carry the ball by itself.

Like Bush, Trump maintained a cozy relationship with the NRA. It was among his 2016 presidential bid’s most enthusiastic backers, contributing US$31 million to his presidential campaign.

When Trump directed the Justice Department to draft a rule banning bump stocks, and indicated his belated support for improving background checks for gun purchases after the Parkland shooting, he was sticking with NRA-approved positions. He also supported arming teachers, another NRA proposal.

Only one sliver of light emerged between the Trump administration and the NRA: his apparent willingness to consider raising the minimum age to buy assault weapons from 18 to 21 – which has not happened. In 2022, a year after Trump left office, 18-year-olds, including the gunmen allegedly responsible for the mass shootings in Uvalde and Buffalo, were able to legally purchase firearms.

In politics, victory usually belongs to whoever shows up. And by showing up, the NRA has managed to strangle every federal effort to restrict guns since the Newtown shooting.

Nevertheless, the NRA does not always win. At least 25 states had enacted their own new gun regulations within five years of that tragedy.

Supreme Court ruling’s repercussions

These latest mass shootings may stir gun safety supporters to mobilize public outrage and turn out voters favoring stricter firearm regulations during the 2022 midterm elections.

But there is a wild card: The Supreme Court will soon rule on New York State Rifle & Pistol Association v. Bruen, the most significant case regarding gun rights it has considered in years. It’s likely that the court will strike down a long-standing New York pistol permit law, broadening the right to carry guns in public across the United States.

Such a decision could galvanize gun safety supporters while also emboldening gun rights activists – making the debate about guns in America even more tumultuous.

This is an updated version of an article originally published on February 23, 2018.

Robert Spitzer, Distinguished Service Professor Emeritus of the Political Science Department, State University of New York Cortland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Scientific highs and lows of cannabinoids

Hundreds of these cannabis-related chemicals now exist, both natural and synthetic, inspiring researchers in search of medical breakthroughs — and fueling a dangerous trend in recreational use

Editor’s note: Raphael Mechoulam passed away on March 9, 2023, at the age of 92.

The 1960s was a big decade for cannabis: Images of flower power, the summer of love and Woodstock wouldn’t be complete without a joint hanging from someone’s mouth. Yet in the early ’60s, scientists knew surprisingly little about the plant. When Raphael Mechoulam, then a young chemist in his 30s at Israel’s Weizmann Institute of Science, went looking for interesting natural products to investigate, he saw an enticing gap in knowledge about the hippie weed: The chemical structure of its active ingredients hadn’t been worked out.

Mechoulam set to work.

The first hurdle was simply getting hold of some cannabis, given that it was illegal. “I was lucky,” Mechoulam recounts in a personal chronicle of his life’s work, published this month in the Annual Review of Pharmacology and Toxicology. “The administrative head of my Institute knew a police officer. ... I just went to Police headquarters, had a cup of coffee with the policeman in charge of the storage of illicit drugs, and got 5 kg of confiscated hashish, presumably smuggled from Lebanon.”

By 1964, Mechoulam and his colleagues had determined, for the first time, the full structure of both delta-9-tetrahydrocannabinol, better known to the world as THC (responsible for marijuana’s psychoactive “high”) and cannabidiol, or CBD.

That chemistry coup opened the door for cannabis research. Over the following decades, researchers including Mechoulam would identify more than 140 active compounds, called cannabinoids, in the cannabis plant, and learn how to make many of them in the lab. Mechoulam helped to figure out that the human body produces its own natural versions of similar chemicals, called endocannabinoids, that can shape our mood and even our personality. And scientists have now made hundreds of novel synthetic cannabinoids, some more potent than anything found in nature.

Today, researchers are mining the huge number of known cannabinoids — old and new, found in plants or people, natural and synthetic — for possible pharmaceutical uses. But, at the same time, synthetic cannabinoids have become a hot trend in recreational drugs, with potentially devastating impacts.

For most of the synthetic cannabinoids made so far, the adverse effects generally outweigh their medical uses, says biologist João Pedro Silva of the University of Porto in Portugal, who studies the toxicology of substance abuse and coauthored a 2023 assessment of the pros and cons of these drugs in the Annual Review of Pharmacology and Toxicology. But, he adds, that doesn’t mean there aren’t better things to come.

Cannabis’s long medical history

Cannabis has been used for centuries for all manner of reasons, from squashing anxiety or pain to spurring appetite and salving seizures. In 2018, a cannabis-derived medicine — Epidiolex, consisting of purified CBD — was approved for controlling seizures in some patients. Some people with serious conditions, including schizophrenia, obsessive compulsive disorder, Parkinson’s and cancer, self-medicate with cannabis in the belief that it will help them, and Mechoulam sees the promise. “There are a lot of papers on [these] diseases and the effects of cannabis (or individual cannabinoids) on them. Most are positive,” he tells Knowable Magazine.

That’s not to say cannabis use comes with zero risks. Silva points to research suggesting that daily cannabis users have a higher risk of developing psychotic disorders, depending on the potency of the cannabis; one paper showed a 3.2 to 5 times higher risk. Longtime chronic users can develop cannabinoid hyperemesis syndrome, characterized by frequent vomiting. Some public health experts worry about impaired driving, and some recreational forms of cannabis contain contaminants like heavy metals with nasty effects.

Finding medical applications for cannabinoids means understanding their pharmacology and balancing their pros and cons.

Mechoulam played a role in the early days of research into cannabis’s possible clinical uses. Based on anecdotal reports stretching back into ancient times of cannabis helping with seizures, he and his colleagues looked at the effects of THC and CBD on epilepsy. They started in mice and, since CBD showed no toxicity or side effects, moved on to people. In 1980, then at the Hebrew University of Jerusalem, Mechoulam co-published results from a tiny, 4.5-month trial of patients with epilepsy who weren’t being helped by current drugs. The results seemed promising: Out of eight people taking CBD, four had almost no attacks throughout the study, and three saw partial improvement. Only one patient wasn’t helped at all.

“We assumed that these results would be expanded by pharmaceutical companies, but nothing happened for over 30 years,” writes Mechoulam in his autobiographical article. It wasn’t until 2018 that the US Food and Drug Administration approved Epidiolex for treating epileptic seizures in people with certain rare and severe medical conditions. “Thousands of patients could have been helped over the four decades since our original publication,” writes Mechoulam.

Drug approval is a necessarily long process, but for cannabis there have been the additional hurdles of legal roadblocks, as well as the difficulty in obtaining patent protections for natural compounds. The latter makes it hard for a pharmaceutical company to financially justify expensive human trials and the lengthy FDA approval process.

In the United Nations’ 1961 Single Convention on Narcotic Drugs, cannabis was slotted into the most restrictive categories: Schedule I (highly addictive and liable to abuse) and its subgroup, Schedule IV (with limited, if any, medicinal uses). The UN removed cannabis from Schedule IV only in December 2020 and, although cannabis has been legalized or decriminalized in several countries and most US states, it still remains (controversially) on both the US’s and the UN’s Schedule I — the same category as heroin. The US cannabis research bill, passed into law in December 2022, is expected to help ease some of the issues in working with cannabis and cannabinoids in the lab.

To date, the FDA has only licensed a handful of medicinal drugs based on cannabinoids, and so far they’re based only on THC and CBD. Alongside Epidiolex, the FDA has approved synthetic THC and a THC-like compound to fight nausea in patients undergoing chemotherapy and weight loss in patients with cancer or AIDS. But there are hints of many other possible uses. The National Institutes of Health registry of clinical trials lists hundreds of efforts underway around the world to study the effect of cannabinoids on autism, sleep, Huntington’s Disease, pain management and more.

In recent years, says Mechoulam, interest has expanded beyond THC and CBD to other cannabis compounds such as cannabigerol (CBG), which Mechoulam and his colleague Yehiel Gaoni discovered back in 1964. His team has made derivatives of CBG that have anti-inflammatory and pain relief properties in mice (for example, reducing the pain felt in a swollen paw) and can prevent obesity in mice fed high-fat diets. A small clinical trial of the impacts of CBG on attention-deficit hyperactivity disorder is being undertaken this year. Mechoulam says that the methyl ester form of another chemical, cannabidiolic acid, also seems “very promising” — in rats, it can suppress nausea and anxiety and act as an antidepressant in an animal model of the mood disorder.

But if the laundry list of possible benefits of all the many cannabinoids is huge, the hard work has not yet been done to prove their utility. “It’s been very difficult to try and characterize the effects of all the different ones,” says Sam Craft, a psychology PhD student who studies cannabinoids at the University of Bath in the UK. “The science hasn’t really caught up with all of this yet.”

A natural version in our bodies

Part of the reason that cannabinoids have such far-reaching effects is because, as Mechoulam helped to discover, they’re part of natural human physiology.

In 1988, researchers reported the discovery of a cannabinoid receptor in rat brains, CB1 (researchers would later find another, CB2, and map them both throughout the human body). Mechoulam reasoned there wouldn’t be such a receptor unless the body was pumping out its own chemicals similar to plant cannabinoids, so he went hunting for them. He would drive to Tel Aviv to buy pig brains being sold for food, he remembers, and bring them back to the lab. He found two molecules with cannabinoid-like activity: anandamide (named after the Sanskrit word ananda for bliss) and 2-AG.

These endocannabinoids, as they’re termed, can alter our mood and affect our health without us ever going near a joint. Some speculate that endocannabinoids may be responsible, in part, for personality quirks, personality disorders or differences in temperament.

Animal and cell studies hint that modulating the endocannabinoid system could have a huge range of possible applications, in everything from obesity and diabetes to neurodegeneration, inflammatory diseases, gastrointestinal and skin issues, pain and cancer. Studies have reported that endocannabinoids or synthetic creations similar to the natural compounds can help mice recover from brain trauma, unblock arteries in rats, fight antibiotic-resistant bacteria in petri dishes and alleviate opiate addiction in rats. But the endocannabinoid system is complicated and not yet well understood; no one has yet administered endocannabinoids to people, leaving what Mechoulam sees as a gaping hole of knowledge, and a huge opportunity. “I believe that we are missing a lot,” he says.

“This is indeed an underexplored field of research,” agrees Silva, and it may one day lead to useful pharmaceuticals. For now, though, most clinical trials are focused on understanding the workings of endocannabinoids and their receptors in our bodies (including how everything from probiotics to yoga affects levels of the chemicals).

‘Toxic effects’ of synthetics

In the wake of the discovery of CB1 and CB2, many researchers focused on designing new synthetic molecules that would bind to these receptors even more strongly than plant cannabinoids do. Pharmaceutical companies have pursued such synthetic cannabinoids for decades, but so far, says Craft, without much success — and some missteps. A drug called Rimonabant, which bound tightly to the CB1 receptor but acted in opposition to CB1’s usual effect, was approved in Europe and other nations (but not the US) in the early 2000s to help to diminish appetite and in that way fight obesity. It was withdrawn worldwide in 2008 due to serious psychotic side effects, including provoking depression and suicidal thoughts.

Some of the synthetics invented originally by academics and drug companies have wound up in recreational drugs like Spice and K2. Such drugs have boomed and new chemical formulations keep popping up: Since 2008, 224 different ones have been spotted in Europe. These compounds, chemically tweaked to maximize psychoactive effects, can cause everything from headaches and paranoia to heart palpitations, liver failure and death. “They have very toxic effects,” says Craft.

For now, says Silva, there is scarce evidence that existing synthetic cannabinoids are medicinally useful: As most of the drug candidates worked their way up the pipeline, adverse effects have tended to crop up. Because of that, says Silva, most pharmaceutical efforts to develop synthetic cannabinoids have been discontinued.

But that doesn’t mean all research has stopped; a synthetic cannabinoid called JWH-133, for example, is being investigated in rodents for its potential to reduce the size of breast cancer tumors. It’s possible to make tens of thousands of different chemical modifications to cannabinoids, and so, says Silva, “it is likely that some of these combinations may have therapeutic potential.” The endocannabinoid system is so important in the human body that there’s plenty of room to explore all kinds of medicinal angles. Mechoulam serves on the advisory board of Israel-based company EPM, for example, which is specifically aimed at developing medicines based on synthetic versions of types of cannabinoid compounds called synthetic cannabinoid acids.

With all this work underway on the chemistry of these compounds and their workings within the human body, Mechoulam, now 92, sees a coming explosion in understanding the physiology of the endocannabinoid system. And with that, he says, “I assume that we shall have a lot of new drugs.”


This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

From the Big Bang to dark energy, knowledge of the cosmos has sped up in the past century — but big questions linger

“The first thing we know about the universe is that it’s really, really big,” says cosmologist Michael Turner, who has been contemplating this reality for more than four decades now. “And because the universe is so big,” he says, “it’s often beyond the reach of our instruments, and of our ideas.”

Certainly our current understanding of the cosmic story leaves some huge unanswered questions, says Turner, an emeritus professor at the University of Chicago and a visiting faculty member at UCLA. Take the question of origins. We now know that the universe has been expanding and evolving for something like 13.8 billion years, starting when everything in existence exploded outward from an initial state of near-infinite temperature and density — a.k.a. the Big Bang. Yet no one knows for sure what the Big Bang was, says Turner. Nor does anyone know what triggered it, or what came beforehand — or whether it’s even meaningful to talk about “time” before that initial event.

Then there’s the fact that the most distant stars and galaxies our telescopes can potentially see are confined to the “observable” universe: the region that encompasses objects such as galaxies and stars whose light has had time to reach us since the Big Bang. This is an almost inconceivably vast volume, says Turner, extending tens of billions of light-years in every direction. Yet we have no way of knowing what lies beyond. Just more of the same, perhaps, stretching out to infinity. Or realms that are utterly strange — right down to laws of physics that are very different from our own.

But then, as Turner explains in the 2022 Annual Review of Nuclear and Particle Science, mysteries are only to be expected. The scientific study of cosmology, the field that focuses on the origins and evolution of the universe, is barely a century old. It has already been transformed more than once by new ideas, new technologies and jaw-dropping discoveries — and there is every reason to expect more surprises to come.

Knowable Magazine recently spoke with Turner about how these transformations occurred and what cosmology’s future might be. This interview has been edited for length and clarity.

You say in your article that modern, scientific cosmology didn’t get started until roughly the 1920s. What happened then?

It’s not as though nothing happened earlier. People have been speculating about the origin and evolution of the universe for as long as we know of. But most of what was done before about 100 years ago we would now call galactic astronomy, which is the study of stars, planets and interstellar gas clouds within our own Milky Way. At the time, in fact, a lot of astronomers argued that the Milky Way was the universe — that there was nothing else.

But two big things happened in the 1920s. One was the work of a young astronomer named Edwin Hubble. He took an interest in the nebulae, which were these fuzzy patches of light in the sky that astronomers had been cataloging for hundreds of years. There had always been a debate about their nature: Were they just clouds of gas relatively close by in the Milky Way, or other “island universes” as big as ours?

Nobody had been able to figure that out. But Hubble had access to a new 100-inch telescope, which was the largest in the world at that time. And that gave him an instrument powerful enough to look at some of the biggest and brightest of the nebulae, and show that they contained individual stars, not just gas. By 1925, he was also able to estimate the distance to the very brightest nebula, in the constellation of Andromeda. It lay well outside the Milky Way. It was a whole other galaxy just like ours.

So that paper alone solved the riddle of the nebulae and put Hubble on the map as a great astronomer. In today’s terms, he had identified the fundamental architecture of the universe, which is that it consists of these collections of stars organized into galaxies like our own Milky Way — about 200 billion of them in the part of the universe we can see.

But he didn’t stop there. In those days there was this — well, “war” is probably too strong a word, but a separation between the astronomers who took pictures and the astrophysicists who used spectroscopy, which was a technique that physicists had developed in the 19th century to analyze the wavelengths of light emitted from distant objects. Once you started taking spectra of things like stars or planets, and comparing their emissions with those from known chemical elements in the laboratory, you could say, “Oh, not only do I know what it’s made of, but I know its temperature and how fast it’s moving towards or away from us.” So you could start really studying the object.

Just like in other areas of science, though, the very best people in astronomy use all the tools at hand, be they pictures or spectra. In Hubble’s case, he paid particular attention to an earlier paper that had used spectroscopy to measure the velocity of the nebulae. Now, the striking thing about this paper was that some of the nebulae were moving away from us at many hundreds of kilometers per second. In spectroscopic terms they had a high “redshift,” meaning that their emissions were shifted toward longer wavelengths than you’d see in the lab.

So in 1929, when Hubble had solid distance data for two dozen galaxies and reasonable estimates for more, he plotted those values against the redshift data. And he got a striking correlation: The further away a galaxy was, the faster it was moving away from us.

This was the relation that’s now known as Hubble’s law. It took a while to figure out what it meant, though.
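In modern notation, Hubble’s law is a simple proportionality between a galaxy’s recession velocity and its distance. The numerical value of the constant below is today’s approximate figure, included here only for illustration rather than taken from the interview:

\[
v = H_0\,d, \qquad H_0 \approx 70~\mathrm{km\,s^{-1}\,Mpc^{-1}},
\]

so a galaxy 100 megaparsecs away (about 326 million light-years) recedes at roughly 7,000 kilometers per second.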

Why? Did it require a second big development?

Yes. A bit earlier, in 1915, Albert Einstein had put forward his theory of general relativity, which was a complete paradigm shift and reformulation of gravity. His key insight was that space and time are not fixed, as physicists had always assumed, but are dynamic. Matter and energy bend space and time around themselves, and the “force” we call gravity is just the result of objects being deflected as they move around in this curved space-time. As the late physicist John Archibald Wheeler famously said, “Space tells matter how to move, and matter tells space how to curve.”

It took a few years to connect Einstein’s theory with observation. But by the early or mid-1930s, it was clear that what Hubble had discovered was not that galaxies are moving away from us into empty space, but that space itself is expanding and carrying the galaxies along with it. The whole universe is expanding.

And at least a few scientists in the 1930s began to realize that Hubble’s discovery also meant there was a beginning to the universe.

The turning point was probably George Gamow, a Soviet physicist who defected to the US in the 1930s. He had studied general relativity as a student in Leningrad, and knew that Einstein’s equations implied that the universe had expanded from a “singularity” — a mathematical point where time began and the radius of the universe was zero. It’s what we now call the Big Bang.

But Gamow also knew nuclear physics, which he had helped develop before World War II. And around 1948, he and his collaborators started to combine general relativity and nuclear physics into a model of the universe’s beginning to explain where the elements in the periodic table came from.

Their key idea was that the universe started out hot, then cooled as it expanded the way gas from an aerosol can does. This was totally theoretical at the time. But it would be confirmed in 1965 when radio astronomers discovered the cosmic microwave background radiation. This radiation consists of high-energy photons that emerged from the Big Bang and cooled down as the universe expanded, until today their temperature is just 3 degrees above absolute zero (3 kelvins) — which is also the average temperature of the universe as a whole.
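The cooling tracks the expansion in a simple way: the radiation temperature falls in inverse proportion to the size of the universe. In terms of the standard cosmological scale factor a and redshift z (symbols used here for illustration),

\[
T \propto \frac{1}{a}, \qquad T(z) = T_0\,(1+z),
\]

so radiation we receive from an epoch when the universe was a thousand times smaller than it is now was emitted roughly a thousand times hotter than today’s 3 kelvins.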

In this hot, primordial soup — called ylem by Gamow — matter would not exist in the form it does today. The extreme heat would boil atoms into their constituent components — neutrons, protons and electrons. Gamow’s dream was that nuclear reactions in the cooling soup would have produced all the elements, as neutrons and protons combined to make the nuclei of the various atoms in the periodic table.

But his idea came up short. It took a number of years and a village of people to get the calculations right. But by the 1960s, it was clear that what would come from these nuclear reactions was mostly hydrogen, plus a lot of helium — about 25 percent by weight, exactly what astronomers observed — plus a little bit of deuterium, helium-3 and lithium. Heavier elements such as carbon and oxygen were made later, by nuclear reactions in stars and other processes.

So by the early 1970s, we had the creation of the light elements in a hot Big Bang, the expansion of the universe and the microwave background radiation — the three observational pillars of what’s been called the standard model of cosmology, and what I call the first paradigm.

But you note that cosmologists almost immediately began to shift toward a second paradigm. Why? Was the Big Bang model wrong?

Not wrong — our current understanding still has a hot Big Bang beginning — but incomplete. By the 1970s the idea of a hot beginning was attracting the attention of particle physicists, who saw the Big Bang as a way to study particle collisions at energies you couldn’t hope to reach at accelerators here on Earth. So the field suddenly got a lot bigger, and people started asking questions that suggested the standard cosmology was missing something.

For example, why is the universe so smooth? The intensity and temperature of the microwave background radiation, which is the best measure we have of the whole universe, is almost perfectly uniform in every direction. There’s nothing in Einstein’s cosmological equations that says this has to be the case.


On the flip side, though — why is that cosmic smoothness only almost perfect? After all, the most prominent features of the universe today are the galaxies, which must have formed as gravity magnified tiny fluctuations in the density of matter in the early universe. So where did those fluctuations come from? What seeded the galaxies?

Around this time, evidence had accumulated that neutrons and protons were made of smaller bits — quarks — which meant that the neutron-proton soup would eventually boil, too, becoming a quark soup at the earliest times. So maybe the answers lie in that early quark soup phase, or even earlier.

This is the possibility that led Alan Guth to his brilliant paper on cosmic inflation in 1981.

What is cosmic inflation?

Guth’s idea was that in the tiniest fraction of a second after the initial singularity, according to new ideas in particle physics, the universe ought to undergo a burst of accelerated expansion. This would have been an exponential expansion, far faster than in the standard Big Bang model. The size of the universe would have doubled and doubled and doubled again, enough times to take a subatomic patch of space and blow it up to the scale of the observable universe.

This explained the uniformity of the universe right away, just like if you had a balloon and blew it up until it was the size of the Earth or bigger: It would look smooth. But inflation also explained the galaxies. In the quantum world, it’s normal for things like the number of particles in a tiny region to bounce around. Ordinarily, this averages out to zero and we don’t notice it. But when cosmic inflation produced this tremendous expansion, it blew up these subatomic fluctuations to astrophysical scales, and provided the seeds for galaxy formation.

This result is the poster child for the connection between particle physics and cosmology: The biggest things in the universe — galaxies and clusters of galaxies — originated from quantum fluctuations that were unimaginably small.

You have written that the second paradigm has three pillars, cosmic inflation being the first. What about the other two?

When the details of inflation were being worked out in the early 1980s, people saw there was something else missing. The exponential expansion would have stretched everything out until space was “flat” in a certain mathematical sense. But according to Einstein’s general relativity, the only way the universe could be flat was if its mass and energy content averaged out to a certain critical density. This value was really small, equivalent to a few hydrogen atoms per cubic meter.
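For reference, the critical density follows directly from general relativity for a flat universe. Plugging in today’s approximate expansion rate (H₀ ≈ 70 km/s per megaparsec, a value added here for illustration, not taken from the interview) gives

\[
\rho_c = \frac{3H_0^2}{8\pi G} \approx 9\times10^{-27}~\mathrm{kg\,m^{-3}},
\]

which works out to roughly five or six hydrogen atoms per cubic meter.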

But even that was a stretch: Astronomers’ best measurements for the mean density of all the planets, stars and gas in the universe — all the stuff made of atoms — wasn’t even 10 percent of the critical density. (The modern figure is 4.9 percent.) So something else that was not made of atoms had to be making up the difference.

That something turned out to have two components, one of which astronomers had already begun to detect through its gravitational effects. Fritz Zwicky found the first clue back in the 1930s, when he looked at the motions of galaxies in distant clusters. Each of these galactic clusters was obviously held together by gravity, because their galaxies were all close and not flying apart. Yet the velocities Zwicky found were really high, and he concluded that the visible stars alone couldn’t produce nearly enough gravity to keep the galaxies bound. The extra gravity had to be coming from some form of “dark matter” that didn’t shine, but that outweighed the visible stars by a large factor.

Then Vera Rubin and Kent Ford really brought it home in the 1970s with their studies of rotation in ordinary nearby galaxies, starting with Andromeda. They found that the rotation rates were way too fast: There weren’t nearly enough stars and interstellar gas to hold these galaxies together. The extra gravity had to be coming from something invisible — again, dark matter.

Particle physicists loved the dark matter idea, because their unified field theories contained hypothetical particles with names like neutralino, or axion, that would have been produced in huge numbers during the Big Bang, and that had exactly the right properties. They wouldn’t give off light because they had no electric charge and very weak interactions with ordinary matter. But they would have enough mass to produce dark matter’s gravitational effects.

We haven’t yet detected these particles in the laboratory. But we do know some things about them. They’re “cold,” for example, meaning that they move slowly compared to the speed of light. And we know from computer simulations that without the gravity of cold dark matter, those tiny density fluctuations in the ordinary matter that emerged from the Big Bang would never have collapsed into galaxies. They just didn’t have enough gravity by themselves.

So that was the second pillar, cold dark matter. And the third?

As the simulations and the observations improved, cosmologists began to realize that even dark matter accounted for only a fraction of the critical density needed to make the universe flat. (The modern figure is 26.8 percent.) The missing piece was found in 1998, when two groups of astronomers made very careful measurements of the brightness and redshift of distant supernova explosions and found that the cosmic expansion was gradually accelerating.

So something — I suggested calling it “dark energy,” and the name stuck — is pushing the universe apart. Our best understanding is that dark energy leads to repulsive gravity, something that is built into Einstein’s general relativity. The crucial feature of dark energy is its elasticity or negative pressure. And further, it can’t be broken into particles — it is more like an extremely elastic medium.

While dark energy remains one of the great mysteries of cosmology and particle physics, it seems to be mathematically equivalent to the cosmological constant that Einstein suggested in 1917. In the modern interpretation, though, it corresponds to the energy of nature’s quantum vacuum. This leads to an extraordinary picture: the cosmic expansion speeding up rather than slowing, all caused by the repulsive gravity of a very elastic, mysterious component of the universe called dark energy. The equally extraordinary evidence for this extraordinary claim has built up ever since, and the two teams that made the 1998 discovery were awarded the Nobel Prize in Physics in 2011.

So here is where we are: a flat, critical-density universe comprising ordinary matter at about 5 percent, particle dark matter at about 25 percent and dark energy at about 70 percent. The cosmological constant is still called lambda, the Greek letter that Einstein used. And so the new paradigm is referred to as the lambda-cold dark matter model of cosmology.

So this is your second paradigm — inflation plus cold dark matter plus dark energy?

Yes. And it’s this amazing, glass-half-full, half-empty situation. The lambda-cold dark matter paradigm has these three pillars that are well established with evidence, and that allow us to describe the evolution of the universe from a tiny fraction of a second until today. But we know we’re not done.

For example, you say, “Wow, cosmic inflation sounds really important. It’s why we have a flat universe today and explains the seeds for galaxies. Tell me the details.” Well, we don’t know the details. Our best understanding is that inflation was caused by some still unknown field similar to the Higgs boson discovered in 2012.

Then you say, “Yeah, this dark matter sounds really important. Its gravity is responsible for the formation of all the galaxies and clusters in the universe. What is it?” We don’t know. It’s probably some kind of particle left over from the Big Bang, but we haven’t found it.

And then finally you say, “Oh, dark energy is 70 percent of the universe. That must be really important. Tell me more about it.” And we say, it’s consistent with a cosmological constant. But really, we don’t have a clue why the cosmological constant should exist or have the value it does.

So now cosmology has left us with three physics questions: Dark matter, dark energy and inflation — what are they?

Does that mean we need a third cosmological paradigm to find the answers?

Maybe. It could be that everything’s done in 30 years because we just flesh out our current ideas. We discover that dark matter really is some particle like the axion, that dark energy really is just the constant quantum energy of empty space, and that inflation really was caused by the Higgs field.

But more likely than not, if history is any guide, we’re missing something and there’s a surprise on the horizon.

Some cosmologists are trying to find this surprise by following the really big questions. For example: What was the Big Bang? And what happened beforehand? The Big Bang theory we talked about earlier is anything but a theory of the Big Bang itself; it’s a theory of what happened afterwards.

Remember, the actual Big Bang event, according to Einstein’s general relativity, was this singularity that saw the creation of matter, energy, space and time itself. That’s the big mystery, which we struggle even to talk about in scientific terms: Was there a phase before this singularity? And if so, what was it like? Or, as many theorists think, does the singularity in Einstein’s equations represent the instant when space and time themselves emerged from something more fundamental?

Another possibility that has captured the attention of scientists and public alike is the multiverse. This follows from inflation, where we imagine blowing up a small bit of space to an enormous size. Could that happen more than once, at different places and times? And the answer is yes: You could have had different patches of the wider multiverse inflating into entirely different universes, maybe with different laws of physics in each one. It could be the biggest idea since Copernicus moved us out of the center of the universe. But it’s also very frustrating because right now, it isn’t science: These universes would be completely disconnected, with no way to access them, observe them or show that they actually exist.

Yet another possibility is in the title of my Annual Reviews article: The road to precision cosmology. It used to be that cosmology was really difficult because the instruments weren’t quite up to the task. Back in the 1930s, Hubble and his colleague Milton Humason struggled for years to collect redshifts for a few hundred galaxies, in part because they were recording one spectrum at a time on photographic plates that captured less than 1 percent of the light. Now astronomers use electronic CCD detectors — the same kind that everyone carries around in their phone — that collect almost 100 percent of the light. It’s as if you increased your telescope size without any construction.

And we have projects like the Dark Energy Spectroscopic Instrument on Kitt Peak in Arizona that can collect the spectra of 5,000 galaxies at once — 35 million of them over five years.

So cosmology used to be a data-poor science in which it was hard to measure anything with reliable precision. And today, we are doing precision cosmology, with percent-level accuracy. And further, we are sometimes able to measure the same quantity in two different ways and see whether the results agree, creating cross-checks that can confirm our current paradigm or reveal cracks in it.

A prime example of this is the expansion rate of the universe, what’s called the Hubble parameter — the most important number in cosmology. If nothing else, it tells us the age of the universe: The bigger the parameter, the younger the universe, and vice versa. Today we can measure it directly, at the few-percent level, from the velocities and distances of galaxies out to a few hundred million light-years.
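
As a rough guide to that relationship (ignoring how the expansion has slowed down and sped up over cosmic history), the age of the universe is approximately the inverse of the Hubble parameter:

t_0 \approx \frac{1}{H_0} \approx 14 \ \text{billion years} \qquad \text{for } H_0 \approx 70 \ \mathrm{km/s/Mpc}

The full lambda-cold dark matter calculation, which accounts for the changing expansion rate, gives about 13.8 billion years.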

But there is now another way to measure it, using satellite observations of the microwave background radiation, which give you the expansion rate when the universe was about 380,000 years old, at even greater precision. With the lambda-cold dark matter model you can extrapolate that expansion rate forward to the present day and see if you get the same number as you do with the direct galaxy measurements. And you don’t: The numbers differ by almost 10 percent — an ongoing puzzle that’s called the Hubble tension.
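
For a sense of scale, taking representative recent values (the exact figures shift a bit from analysis to analysis): the direct galaxy measurements give roughly 73 kilometers per second per megaparsec, while the extrapolation from the microwave background gives about 67, so

\frac{73 - 67}{67} \approx 0.09,

a gap of nearly 10 percent that sits well outside the quoted uncertainties of either measurement.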

So maybe that’s the loose thread — the tiny discrepancy in the precision measurements that could lead to another paradigm shift. It could be just that the direct measurements of galaxy distances are wrong, or that the microwave background numbers are wrong. But maybe we are finding something that’s missing from lambda-cold dark matter. That would be extremely exciting.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

3 Tips for Creating a Summer of Unplugged Fun

Between school, work and entertainment, screens can seem like a pervasive part of modern life. For all the positive aspects of technology, many families also want their children to have stretches of unplugged learning and to take part in educational activities that do not require a screen.

Why Unplugged Learning Matters

“Unplugged learning is important to balance the screen time children may experience with other forms of learning; to promote physical activities, social interaction and creativity; and develop the essential skills that bolster them throughout their exploration and growth as individuals,” said Rurik Nackerud from KinderCare’s education team.

Summer can be an ideal time to focus on unplugged learning as it often brings a break from the traditional academic year and activities.

“We want summer to be a time when children can put down technology and connect with one another face-to-face, build important creativity skills and learn how to be social with one another without the buffer of screens,” said Khy Sline from KinderCare’s education team. “They can play, run, be immature and laugh with their friends, giggle at the silly things and find joys in those in-person interactions with one another.”

Tips for Creating Unplugged Fun as a Family

  1. Get Outdoors. Make time as a family to get outside and explore, even if it’s simply a walk around the block after dinner. Help children notice the little things like a bug on the sidewalk or the way the sun filters through tree leaves to make patterns on the ground. Ask them about the things they see and give your children the space to ask questions and work together to find the answers. This helps teach children collaborative learning skills: asking questions, sharing ideas and working together to reach an answer.
     
  2. Read Together. This could mean going to the library to check out new books or exploring your family’s bookshelves for old favorites. Snuggle up together for family story time. If children are old enough to read on their own, invite them to read to you or their younger siblings. Talk about the story or even act out favorite parts to help your children actively participate in story time, which may help them better understand the story’s concepts.
     
  3. Encourage Creative Thinking. Help children expand their ability to think creatively by working together to make a craft or project. For example, the next time a delivery box arrives at your home, encourage your children to turn it into something new using craft supplies on hand. A blanket could turn a box into a table for a pretend restaurant while some tape or glue could transform it into a rocket ship or train. When everyone’s done creating and playing, the box can be broken down for recycling. This activity can help children literally think outside of the box and apply their own unique ideas and creativity to create something new.

For more tips to encourage unplugged learning this summer, visit kindercare.com.

 

SOURCE:
KinderCare

What does ‘moral hazard’ mean? A scholar of financial regulation explains why it’s risky for the government to rescue banks

Cassandra Jones Havard, University of South Carolina

“Moral hazard” refers to the risks that someone or something becomes more inclined to take because they have reason to believe that an insurer will cover the costs of any damages.

The concept describes financial recklessness. It has its roots in the advent of private insurance companies about 350 years ago. Soon after they began to form, it became clear that people who bought insurance policies took risks they wouldn’t have taken without that coverage.

Here are some illustrative examples: Having workers’ compensation insurance could encourage some workers to stay out of work longer than their health requires. Or a homeowner might not bother spending their own money on a small repair that isn’t covered by their insurance policy, figuring that if it eventually grows into a larger problem, the insurer will pay for it.

Or think of what happens when someone rents a car and parks it where it can easily be damaged. That carelessness reflects an assumption that the rental car company’s insurance policy will pay for the repairs.

Why moral hazard matters

U.S. banks are insured by the Federal Deposit Insurance Corporation, or FDIC, and the risk-takers are both the banks and their depositors.

Congress established the FDIC during the Great Depression, which began with a spate of bank runs. The goal was to boost confidence in the banking system.

The Dodd-Frank Wall Street Reform and Consumer Protection Act, enacted after the 2008 financial crisis, was supposed to reduce moral hazard. One way it did that was by making clear that accounts of more than US$250,000 aren’t insured by the FDIC unless the bank’s failure presents a systemic risk to the financial system.

The implicit assumption behind the government’s insurance limit, which prior to 2008 stood at $100,000, is that depositors who have accounts worth more than the limit will bear the loss of bank failure along with the bank’s executives and shareholders. Yet boosting the size of the guarantee amount also made future bank bailouts more costly, which in turn increased moral hazard.

And when Silicon Valley Bank failed in March 2023, all its depositors got access to their funds – including those with accounts that exceeded the $250,000 limit – because the government made an exception.

‘Too big to fail’

I teach and write about moral hazard in the banking industry as a banking law professor. As it happens, my banking law class had discussed moral hazard and bank failure for three class sessions held before the 2023 spring break.

When the students returned from their vacation, Silicon Valley Bank had just failed, and its collapse appeared to be the start of what might become a banking crisis.

“What happened? It’s completely different from what you taught us!” the students in my class exclaimed, almost in unison. Questions tumbled out, demanding an explanation.

Why did the government apparently throw out concerns about moral hazard when SVB failed?

Any explanation would have to begin with what moral hazard can mean in the context of banking, where it often evokes the colloquial phrase “too big to fail.”

That controversial concept applies to how the government responds in the aftermath of the risky behavior of a bank – if the collapse of the bank is likely to harm the economy. Yet, in reducing the risk of a widespread financial crisis, the government can end up sending the message that it’s willing to protect banks that engage in reckless behavior – and to shield their customers from the consequences.

Cassandra Jones Havard, Professor of Law, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.