Saturday, April 22, 2023

Raw materials, or sacred beings? Lithium extraction puts two worldviews into tension

A salt pyramid in Uyuni, Bolivia. The rainy season produces a mirror effect in the salt flat. Mario Orospe Hernandez, CC BY-NC-ND
Mario Orospe Hernández, Arizona State University

Located in the heart of South America, Bolivia contains the largest lithium deposits in the world – an enviable position, in many countries’ eyes, as the market for electric vehicles takes off. Though EVs emit fewer greenhouse gases than fuel-powered vehicles, their batteries require more minerals – especially lithium, which is also used to make batteries for smartphones and computers.

Unlike its neighbors Chile and Argentina, Bolivia has yet to become a major player in the global lithium market. In part, this is because its high-altitude salt flats aren’t suited to the usual extraction method, solar evaporation.

But that looks poised to change: In January 2023, state company YLB signed an agreement with the Chinese consortium CBC, which includes the world’s largest producer of lithium-ion batteries, to introduce a new method called direct lithium extraction.

It may prove an economic boon. But since colonial times, the legacy of mineral abundance in Bolivia has also been one of pollution, poverty and exploitation. While some residents are hopeful about the potential benefits of the growing lithium industry, others are concerned about extraction’s local impact. In particular, direct lithium extraction demands a great deal of fresh water, potentially endangering surrounding ecosystems as has happened in other parts of South America’s “lithium triangle.”

The pale expanse of a salt flat beneath a bright blue sky.
Lithium lies in the underground brine beneath this salt flat. Mario Orospe Hernandez, CC BY-NC-ND

A rapid escalation of lithium extraction in the Bolivian Andes also represents a looming clash between two fundamentally different views of nature: modern industrial society’s and that of the Indigenous communities who call the region home – a focus of my current research collaborations and dissertation project.

The Pachamama

Bolivia is home to 36 ethnic groups across its highland and lowland regions. Aymara and Quechua peoples comprise most of the Indigenous communities in the Andes Mountains.

For these cultures, nature is not a means to human ends. Instead, it is seen as a group of beings with personhood, history and power beyond human reach. For example, the female divinity of fertility, to whom people owe respect, is the Pachamama. Since she sustains and secures the reproduction of life, Andean Indigenous people make offerings to the Pachamama in ancestral rituals known as “challas” that seek to reinforce their connection with her.

A handful of people bend over rows of crops while working in a hillside area.
Local food producers in Chicani, a village on the outskirts of La Paz, Bolivia. Mario Orospe Hernandez, CC BY-NC-ND

Similarly, highland groups recognize mountains not as a set of inert rocks, but as ancestral guardians called “Achachilas” in Aymara and “Apus” in Quechua. Each Andean community praises a nearby mountain whom they believe protects and oversees their lives.

In Uyuni, for example, where one of the two new lithium plants will be constructed, Indigenous communities acknowledge the presence of these sacred beings. To this day, worshipers in the nearby Lipez region explain the salt flat’s origin with a traditional legend: It is the mother’s milk of their Apu, a female volcano named Tunupa.

However, religious concepts such as “sacred” or “divine” do not necessarily capture the relationships that Andean Indigenous people have long established with these more-than-human beings, who have been known since pre-colonial times as “huacas.” These entities are not considered “gods,” or thought of as dealing with otherworldly beliefs. Rather, they are treated as integral to people’s earthly everyday life.

A small stack of stones sits before a sandy-colored hill.
A Quechua huaca, also known as the sanctuary of the sacred rock, on the Island of the Sun in Lake Titicaca. Mario Orospe Hernandez, CC BY-NC-ND

For instance, before meals, Quechua and Aymara peoples throw coca leaves or spill their drinks on the ground to share their food with these beings as a sign of gratitude and reciprocity.

Lifeless matter

In industrial societies, on the other hand, nature is understood as something external to humanity – an object that can be mastered through science and technology. The modern economy turns nature into a source of raw materials: morally and spiritually inert matter that is there to be extracted and mobilized worldwide. Within this framework, a mineral like lithium is a resource to be developed in the pursuit of economic gains for human beings.

In fact, the history of these competing notions is deeply entwined with the history of the colonial era, as different cultures came into violent conflict. As the Spanish discovered the mineral bounty of the so-called New World, like gold and silver, they began an intensive extraction of its riches, relying on forced labor from local people and imported slaves.

The concept of “raw materials” can be traced to the theological notion of “prime matter.” The term originally comes from Aristotle, whose work was introduced to Christianity via Latin translations around the 12th century. In the way Christians adapted his idea of prime matter, everything was ordered by its level of “perfection,” ranging from the lowest level – prime matter, the most basic “stuff” of the world – to rocks, plants, animals, humans, angels and, finally, God.

A black and white engraving shows people working in a mine with a ladder leading to the entrance.
A silver mine at Potosi, New Spain – now Bolivia – depicted by Theodor de Bry around 1590. ullstein bild/ullstein bild via Getty Images

The Catholic Church and the Spanish Empire later used this medieval understanding of matter as something passive, without spirit, to justify the extraction of resources during colonial times. The closer things were to prime matter, their argument supposed, the more they needed human imprint and an external purpose to make them valuable.

This notion was also used by Christian colonizers who were intent on destroying traditions that they saw as idolatrous. In their eyes, reverence toward a mountain or the earth itself was worshiping a mere “thing,” a false god. The church and the empire believed it was critical to desacralize these more-than-human beings and treat them as mere resources.

This flattened vision of nature served as the basis for the modern economic concept of raw materials, which was introduced in the 18th century with the birth of economics as a social science.

The road ahead

Bolivia’s lithium projects pose a new potential clash of worldviews. However, extraction initiatives have faced severe setbacks in the last few years, including social protests, the 2019 political crisis and a lack of necessary technology. The Chinese deal represents a new milestone, yet its outcomes are still uncertain: for the economy, for local communities and for the Earth.

Today, electric vehicles are widely considered part of the solution to the climate crisis. Yet they will necessitate a mining surge to meet their battery demands. If societies really want a greener future, technological shifts such as EVs will be just part of the answer, alongside other changes like more sustainable urban planning and improved public transportation.

But in addition, perhaps other cultures could learn from Andean relations with nature as more-than-human beings: an inspiration to rethink development and turn our own way of living into something less destructive.

Mario Orospe Hernández, Ph.D. Candidate in Religious Studies, Arizona State University

This article is republished from The Conversation under a Creative Commons license.

Friday, April 21, 2023

Each generation in Northern Ireland has reflected on the ‘troubles’ in its own way – right up to ‘Derry Girls’

A mural in Derry commemorating the TV show ‘Derry Girls,’ which follows the lives of teenagers growing up amid Northern Ireland’s troubles. Dominic Bryan, CC BY-NC-ND
Joseph Patrick Kelly, College of Charleston

A 9-year-old boy lies on the floor of a working-class rowhouse in Belfast, Northern Ireland, wondrously watching American Westerns on TV. Outside, though, the world’s gone mad. Broken glass and shattered masonry. Barricades go up. Rifle-toting soldiers patrol the streets.

It’s August 1969, the summer that Northern Ireland’s ‘troubles’ flared into violence.

The scene is from “Belfast,” director Kenneth Branagh’s ode to growing up in the grinding conflict that would go on to kill several thousand people. Branagh’s Academy Award-winning film premiered in 2021, more than two decades after the Good Friday Agreement brought the troubles to a close on April 10, 1998 – 25 years ago this month.

This was the second period of so-called troubles in Ireland. The first involved a bloody guerrilla war that ended in 1921, with the island partitioned into an independent, mostly Catholic south and a mostly Protestant north that remained part of the United Kingdom.

But that division did little to settle the age-old war of cultural identity. Since then, each generation of artists has used theater, song and film to reflect on their states’ still-uneasy peace – made all the more complicated by Brexit.

‘Four green fields’

For hundreds of years, British culture stereotyped the “native” Irish as savage, bestial, childlike, lazy, belligerent and, above all else, unruly: a tribe that needed British civilization – and, therefore, its colonization. Irish nationalists like poet W.B. Yeats, who wanted to free the whole of Ireland from British rule, felt they had to flip this script by purging the island of “Anglo” influences, reviving the Irish language and promoting Celtic arts.

In 1902, Yeats wrote the masterpiece of this Celtic revival, “Cathleen ni Houlihan.” The one-act play dramatizes traditional songs and legends about a poor old woman driven from her farm by strangers. Cathleen recruits a groom – on the eve of his wedding day, no less – to help fight to retrieve her “four beautiful green fields.”

A black and white picture of a woman holding up a lantern in a doorway to a room with three people in it.
A scene from ‘Cathleen ni Houlihan.’ Project Gutenberg/Wikimedia Commons

It’s an obvious allegory: She is Ireland, the fields are Ireland’s four provinces, and the strangers are the British. The blood of Irish martyrs nourishes the old woman, and at the play’s end, Cathleen transforms into a young girl “with the walk of a queen.”

Cultural pride helped fuel support for Irish independence, and the Irish Republican Army drove the British out of three of the island’s four provinces by 1922. But a majority of people in much of the final province, Ulster, identified as British, so a new national border was drawn to separate the two communities.

That gerrymandered border sparked a civil war in the new Irish Free State between the “die-hard” nationalists, who wanted to keep fighting the British till they abandoned the north, and the “Free Staters,” who compromised to make peace. Martin McDonagh’s 2022 film “The Banshees of Inisherin,” nominated for nine Academy Awards, can be viewed as an allegory of the Irish Civil War – the tragedy when brothers in arms turn their guns on one another.

Spiraling crisis

Many Protestants loyal to the U.K. viewed the culture of Northern Ireland’s minority Catholic population as a threat and treated them as second-class citizens. In the late 1960s, in part inspired by Martin Luther King Jr.’s civil rights activism in the U.S., Catholics began campaigning against discrimination. Their demands were met with violence, like the 1972 Bloody Sunday massacre, in which British soldiers shot and killed 14 unarmed protesters in Derry, also known as Londonderry – rival names that themselves reflect the sharp divide between communities.

A soldier stands on a street as two young children, one holding a fake shield, stand in front.
A soldier on patrol in Belfast in 1969. Bettmann/Getty Images

Tribal feelings spiraled higher, pitting mostly Protestant “unionists” loyal to the U.K. against Catholic “nationalists” who sought reunion with the Republic of Ireland. Neighborhoods were segregated and giant walls went up to keep Catholic and Protestant apart, but wave after wave of reprisals came anyway, including bombings and sniper attacks.

As the troubles intensified, folk musician Tommy Makem’s popular song “Four Green Fields” drew again on the legend of Ireland as a poor old woman:

“I have four green fields, one of them’s in bondage

In strangers’ hands, that tried to take it from me

But my sons have sons as brave as were their fathers

My fourth green field will bloom once again,” said she.

It became a nationalist battle call, and a sign of the times, as plenty of young men joined the IRA’s campaign against British control of Northern Ireland.

Nowhere was the “them and us” attitude more evident than on the gable ends of rowhouses, where nationalists and unionists each painted murals celebrating their heroes and remembering the atrocities perpetrated by the other side.

People in dark coats hold white crosses in front of a purple and red mural with people's faces painted in it.
Families of the victims and supporters walk past a mural featuring the 14 victims of Bloody Sunday as they commemorate the 50th anniversary of the massacre, in 2022. Charles McQuillan/Getty Images

‘Sing a new song’

In the mid-1970s, a group of writers and actors, including the Nobel laureate poet Seamus Heaney, tried to blaze a way out of this cultural death spiral. Calling themselves “Ireland’s Field Day,” they tried to create art that could be a “fifth province” of Ireland, a place that would transcend sectarian politics.

U2 wrote its hit song “Sunday, Bloody Sunday,” the first song on its 1983 album “War,” in the same spirit. It begins with images reminiscent of the massacre in Derry 11 years before:

Broken bottles under children’s feet

Bodies strewn across the dead-end street

In U2’s telling, the villain is not the other side. The enemy is the violence itself, generated by the feedback loop of nationalism and unionism. The only way out is to refuse “to heed the battle call.”

The album ends with the song “40,” a soulful echo of the Bible’s 40th Psalm: “I will sing … sing a new song.”

This kind of thinking helped lead the war-weary people of Northern Ireland to the Good Friday Agreement, also called the Belfast Agreement, in 1998. Its deals shaped the power-sharing system Northern Ireland has today, which legitimizes both identities. People in Northern Ireland can choose to be citizens of the U.K., citizens of the Republic of Ireland, or both.

A black and white photo shows a band performing on stage in front of a large illustration of a boy's face.
U2 performs on a television show in 1983, with an illustration from the cover of its ‘War’ album behind it. Erica Echenberg/Redferns via Getty Images

It has, by and large, worked. Over the years, this commitment to religious, political and racial equality tamped down the tribalism and violence. The border between Ireland and Northern Ireland became less and less relevant. By 2018, half of the people in Northern Ireland described themselves as “neither nationalist nor unionist.”

A new generation

Brexit, however, has turned the line between Ireland and Northern Ireland into the only land border between the U.K. and the EU. Both nationalist and unionist identities are on the uptick, and the proportion of people in Northern Ireland claiming neither identity has plummeted to 37%.

Even so, anthropologist Dominic Bryan, co-chair of Northern Ireland’s Commission on Flags, Identity, Culture, and Tradition, is optimistic that culture has built up a resistance to “us versus them” tribalism – reflected, in part, by how people remember the troubles.

He sent me a picture of a mural in Derry, painted one year after Brexit, which celebrates Lisa McGee’s hit TV show “Derry Girls.” Launched in 2018, the comedy follows the fictional lives of five teenagers growing up in the troubles. Though the show focuses on a Catholic community, it defuses the “us and them” way of thinking about identity. An episode called “Across the Barricades” satirizes facile attempts to get Catholic and Protestant kids to bond; it ends when they recognize their common enemy: parents.

In the last episode of the first season, while the kids deal with the anxieties of a high school talent show, the tone shifts dramatically. The adults are watching a TV news report of “one of the worst atrocities of the Northern Irish conflict.” A bomb has killed 12 people and injured many more, and “anyone with medical training” is urged to “come to the scene immediately.”

The audience doesn’t know if the bomb was detonated by Catholic terrorists or Protestant terrorists. It doesn’t matter. The violence is like a tornado or an earthquake: a disaster suffered by all of Derry’s citizens, who pick up the pieces together.

Joseph Patrick Kelly, Professor of Literature and Director of Irish and Irish American Studies, College of Charleston

This article is republished from The Conversation under a Creative Commons license. 

‘Effective altruism’ has caught on with billionaire donors – but is the world’s most headline-making one on board?

SpaceX founder Elon Musk speaks during a T-Mobile and SpaceX joint event on Aug. 25, 2022, in Texas. Michael Gonzalez/Getty Images
Nicholas G. Evans, UMass Lowell

One of the ways tech billionaire Elon Musk attracts supporters is the vision he seems to have for the future: people driving fully autonomous electric vehicles, colonizing other planets and even merging their brains with artificial intelligence.

Part of such notions’ appeal may be the argument that they’re not just exciting, or profitable, but would benefit humanity as a whole. At times, Musk’s high-tech mission seems to overlap with “longtermism” and “effective altruism,” ideas promoted by Oxford philosopher William MacAskill and several billionaire donors, such as Facebook co-founder Dustin Moskovitz and his wife, former reporter Cari Tuna. The effective altruism movement guides people toward doing the most good they can with their resources, and Musk has claimed that MacAskill’s philosophy echoes his own.

But what do these phrases really mean – and how does Musk’s record stack up?

The greatest good

Effective altruism is strongly related to the ethical theory of utilitarianism, particularly the work of the Australian philosopher Peter Singer.

In simple terms, utilitarianism holds that the right action is whichever maximizes net happiness. Like any moral philosophy, there is a dizzying array of varieties, but utilitarians generally share a couple of important principles.

First is a theory about which values to promote. “Hedonistic utilitarians” seek to promote pleasure and reduce pain. “Preference utilitarians” seek to satisfy as many individual preferences, such as to be healthy or lead meaningful lives, as possible.

Second is impartiality: One person’s pleasure, pain or preferences are as important as anyone else’s. This is often summed up by the expression “each to count for one, and none for more than one.”

Finally, utilitarianism ranks potential choices based on their outcomes, usually prioritizing whichever choice would lead to the greatest value – in other words, the greatest pleasure, the least amount of pain or the most preferences fulfilled.

In concrete terms, this means that utilitarians are likely to support policies like global vaccine distribution, rather than hoarding doses for particular populations, in order to save more lives.

A man in a dark-colored t-shirt speaks on a stage in front of a live audience, with two massive screens behind him.
Effective altruism philosopher William MacAskill gives a TED Talk in Vancouver in 2018. Lawrence Sumulong/Getty Images

Utilitarianism 2.0?

Utilitarianism shares a number of features with effective altruism. When it comes to making ethical decisions, both movements posit that no one person’s pleasure or pain counts more than anyone else’s.

In addition, both utilitarianism and effective altruism are agnostic about how to achieve their goals: what matters is achieving the greatest value, not necessarily how we get there.

Third, utilitarians and effective altruists often have a very wide “moral circle”: in other words, the kinds of living beings that they think ethical people should be concerned about. Effective altruists are frequently vegetarians; many are also champions of animal rights.

Long-term view

But what if people have ethical obligations not just toward sentient beings alive today – humans, animals, even aliens – but toward beings who will be born in a hundred, a thousand or even a billion years?

Longtermists, including many people involved in effective altruism, believe that those obligations matter just as much as our obligations to people living today. In this view, issues that pose an existential risk to humanity, such as a giant asteroid striking Earth, are particularly important to solve, because they threaten everyone who could ever live. Longtermists aim to guide humanity past these threats to ensure that future people can exist and live good lives, even in a billion years’ time.

Why do they care? Like utilitarians, effective altruists want to maximize happiness in the universe. If humanity goes extinct, then all those potentially good lives can’t happen. They can’t suffer – but they can’t have good lives, either.

A futuristic drawing of a green atmosphere enclosed in a large dome in a barren landscape.
Can Mars be part of the plan to save humanity? Steven Hobbs/Stocktrek Images via Getty Images

Measuring Musk

Musk has claimed that MacAskill’s effective altruism “is a close match for my philosophy.” But how close is it really? It’s hard to grade someone on their particular moral commitments, but the record seems choppy.

To start, the original motivation for the effective altruism movement was to help the global poor as much as possible.

In 2021, the director of the United Nations World Food Program mentioned Musk’s wealth in an interview, calling on him and fellow billionaire Jeff Bezos to donate US$6 billion. Musk’s net worth is currently estimated to be $180 billion.

The CEO of Tesla, SpaceX and Twitter tweeted that he would donate the money if the U.N. could provide proof that that sum would end world hunger. The head of the World Food Program clarified that $6 billion would not solve the problem entirely, but save an estimated 42 million people from starvation, and provided the organization’s plan.

Musk did not, the public record suggests, donate to the World Food Program, but he did soon give a similar amount to his own foundation – a move some critics dismissed as a tax dodge. It also sits uneasily with a core principle of effective altruism: giving only to organizations whose cost-effective impact has been rigorously studied.

Making money is hardly a problem in effective altruists’ eyes. They famously have argued that instead of working for nonprofits on important social issues, it may be more impactful to become investment bankers and use that wealth to advance social issues – an idea called “earning to give.” Nonetheless, Musk’s lack of transparency in that donation and his decision to then buy Twitter for seven times that amount have generated controversy.

Futuristic solutions

Musk has claimed that some of the innovations he has invested in are moral imperatives, such as autonomous driving technology, which could save lives on the road. In fact, he has suggested that negative media coverage of autonomous driving is tantamount to killing people by dissuading them from using self-driving cars.

In this view, Tesla seems to be an innovative means to a utilitarian end. But there are dozens of other ways to save lives on the road that don’t require expensive robot cars that just happen to enrich Musk himself: improved public transit, auto safety laws and more walkable cities, to name a few. His Boring Company’s attempts to build tunnels under Los Angeles, meanwhile, have been criticized as expensive and inefficient.

The most obvious argument for Musk’s supposed longtermism is his rocket and spacecraft company SpaceX, which he has tied to securing the human race’s future against extinction.

Yet some longtermists are concerned about the consequences of a corporate space race, too. Political scientist Daniel Deudney, for example, has argued that the roughshod race to colonize space could have dire political consequences, including a form of interplanetary totalitarianism as militaries and corporations carve up the cosmos. Some effective altruists are worried about these types of issues as humans move toward the stars.

Is anyone, not just Musk, living up to effective altruism’s ideals today?

Answering this question requires thinking about three core questions: Are their initiatives trying to do the most good for everyone? Are they adopting the most effective means to help or simply the most exciting? And just as importantly: What kind of future do they envision? Anyone who cares about doing the most good they can should have an interest in creating the right kinds of future, rather than just getting us to any old future.

Nicholas G. Evans, Assistant Professor of Philosophy, UMass Lowell

This article is republished from The Conversation under a Creative Commons license. 

A treasure hunt for microbes in Chile’s Atacama desert


The famously dry region has long been dismissed as a mostly lifeless wasteland, good for little more than mining of minerals and precious metals. To these researchers, however, it’s a microbial gold mine worthy of protection.

Benito Gómez-Silva is surrounded by nothing. For as far as the eye can see, no plants dot the landscape; no animals amble across the salt-crusted soil that stretches out to the base of distant mountains. Besides some weak wisps of clouds inching slowly past a blazing sun, nothing moves here. The scenery consists exclusively of dirt and rocks.

It’s easy to imagine why Charles Darwin, peering across a nearby expanse of emptiness 187 years ago, proclaimed this region — the Atacama Desert in northern Chile — a place “where nothing can exist.” Indeed, though scattered sources of water support some plant and animal life, for more than a century most scientists accepted Darwin’s conclusion that here in the Atacama’s driest section, called the hyper-arid core, even the most resilient life forms couldn’t last long.

But Darwin was wrong, and that’s why Gómez-Silva is here.

Rising before dawn to beat the day’s most brutal heat, we’ve driven for an hour along an increasingly deserted road, watching the terrain grow steadily emptier of plants and human-built structures. After heading south along Chile’s Coast Mountain range, we turn inland towards the Atacama’s heart. Here the University of Antofagasta desert microbiologist will search for a microscopic fungus that he hopes to isolate and grow in his lab.

We’re at the driest non-polar place on Earth, but Gómez-Silva knows there’s water here, hiding in the salt rocks around us. Just as the salt in a kitchen shaker soaks up water in humid weather, the salt rocks absorb tiny amounts of moisture blown in as night-time ocean fog. Then, sometimes for just a few hours, microscopic drops of water coalesce in the nanopores of the salt, creating “tiny swimming pools,” Gómez-Silva says — lifelines for microbes that find refuge in the rocks. When moisture and sunlight coincide, these microbes start to photosynthesize and grow their communities, seen as thin, dark lines across the faces of their salt-rock homes. With a gentle tap of the back of a hammer, Gómez-Silva dislodges a few small rocks with particularly prominent markings. They will head to his lab, where his team will break them down and try to extract the microbes inside and keep them alive in laboratory dishes.

Gómez-Silva is part of a small but strong contingent of scientists searching for living microbes here in the world’s oldest desert, a place that’s been dry since the late Jurassic, when dinosaurs roamed Earth some 150 million years ago. Anything trying to survive here has a host of challenges to contend with beyond the lack of water: intense solar radiation, high concentrations of noxious chemicals and key nutrients in scarce supply. Yet even so, unusual and tiny things do grow, and researchers like Gómez-Silva say that scientists have a lot to learn from them.

Part of unlocking those secrets involves changing the world’s view of the Atacama, he says, a region that historically has been valued for mining precious minerals above all else. Coauthor of a 2016 Annual Review of Microbiology paper on the desert’s microbial resources, Gómez-Silva is one of several researchers who believe that the Atacama should be prized for something altogether different: as a place to characterize unknown life forms. Describing such extremophiles — so named because of their ability to thrive in extreme, almost otherworldly conditions — has the potential to yield new tools in biotechnology, to answer questions about the very origins of life and to guide us on how to look for life on other planets.

“For centuries the Atacama was ‘lifeless,’” Gómez-Silva says. “We need to change this concept of the Atacama … because it’s full of microbial life. You just need to know where to look.” 

Extreme conditions

The Atacama stretches some 600 miles along the coast of South America — its borders aren’t precise — and is flanked on the east by the volcanic Altiplano of the Andes Mountain range and on the west by Chile’s Pacific shores. Roughly the size of Cuba, the desert is as varied as it is hostile.

Yet, despite the desolation, scattered treasures attract visitors from around the world. Near the town of San Pedro, about 150 miles to the east of Gómez-Silva’s university, tourists make trips to see the Atacama’s strange moonlike valleys, the lagunas that serve as oases for migrating flamingos and Chile’s El Tatio geyser field. The desert includes a series of plateaus, ranging in elevation from around sea level to more than 11,000 feet, making it one of the highest deserts in the world. Various international observatories take advantage of that altitude and the desert’s record-low moisture to snap clear pictures of the stars.

The Atacama’s harsh conditions are thanks to the features that mark its borders. Storm fronts moving in from the east rarely breach the towering peaks of the Andes Mountains, and a thick current of cold ocean waters moving up from Antarctica chills the air along Chile’s coastline, hampering its ability to carry moisture inland. Many parts of this desert receive mere millimeters of rain each year, if any at all. The Atacama Desert city of Arica, just below Peru’s border, holds the record for the world’s longest dry spell — researchers believe not a single drop of rain fell within its borders for more than 14 years in the early 1900s.

Without water, little should survive: Cells shrivel, proteins disintegrate and cellular components can’t move about. The atmosphere at the desert’s high altitudes does little to block the sun’s damaging rays. And the lack of flowing water leaves precious metals in place for mining companies, but means distribution of nutrients through the ecosystem is limited, as is the dilution of toxic compounds. Where water bodies do exist in the desert — often in the form of seasonal basins fed by subterranean rivers — they frequently have high concentrations of salts, metals and elements, including arsenic, that are toxic to many cells. Desert plants and animals that manage to make it in the region typically cling to the desert’s outskirts or to scattered fog oases, which are periodically quenched by dense marine fogs called camanchacas. 

Seeing such conditions on an 1850s expedition to the Atacama at the behest of the Chilean government, German-Chilean naturalist Rodulfo Philippi, who first documented many of the plants and animals that live in the desert’s less extreme parts, concluded that the Atacama’s value lay in mineral mining, even as he lamented how the region’s desolation would complicate the work of unearthing those riches.

Mineral mining was more than enough to make the Atacama desirable for Chile, which annexed the area in a bloody, nearly five-year war against Peru and Bolivia that ended in 1883. At the time, the three nations were vying to control reserves of saltpeter — a source of nitrates used in fertilizer and explosives and nicknamed “white gold” — due to massive global demand.

Saltpeter from the ground lost its appeal in the first half of the 20th century when scientists discovered a method for manufacturing nitrates industrially, eliminating the need to dig for them. That spelled death for the saltpeter mines and the towns built up around them. But mining still thrives in the Atacama: Today, Chile is the world’s No. 1 exporter of copper, among the top for lithium, and a major supplier of silver and iron, among other valuable metals and minerals.

Mining has made its mark all through the Atacama Desert. Viewed from space, the Salar de Atacama, a salt flat nearly four times the size of New York City, displays the pale-hued swatches of lithium mines. Gold and copper mines appear as cowlicks, scarring the desert’s surface. On the ground, too, relics of the region’s mining history are not hard to find. Near where Gómez-Silva collects fungi-streaked salt rocks in the Yungay region lies a cemetery with graves dating from the 1800s into the mid-20th century. They are the workers of the abandoned saltpeter mines and their families.

“Life here wasn’t easy,” Gómez-Silva says as he looks down at the headstones of young children lost during that time.

Hidden scientific wealth

A short drive down a dirt road carved through more dirt and small boulders, remnants of science past are also baking under the already punishing morning sun. In 1994, the University of Antofagasta set up a small research station in Yungay with support from NASA, whose astronomers were interested in the Atacama’s harsh, Mars-like conditions. The station was funded only for a few short years, but even after its abandonment the simple structures and the surrounding feeble trees, planted by the university, continued to serve as an unserviced outpost for researchers from all over the world who wanted to know if and how life could endure in such desolate conditions.

On the walls of rooms that once served as the station’s laboratory and kitchen, Gómez-Silva points out where visiting researchers across almost two decades marked their names on the now-peeling paint. Gómez-Silva has spent most of his career in Antofagasta and he fondly remembers a number of the visitors, some of whom have gone on to publish key studies on the limits of life in the desert. 

“When we came down to stay at the station starting in 2001, we brought everything with us: showers, toilets, generators, pumps, kitchen sink…” recalls Chris McKay, an astrogeophysicist at NASA’s Ames Research Center in Silicon Valley, whose name is still visible, written in ink on the Yungay research station wall. But despite the humble settings and the lack of water, “it was magical,” he says. “We would sit around after dinner and talk science. There was no phone, no internet, just us.”

It was NASA investigators who kicked off research into whether life might survive in the dry soils and rocks here in the mid-1960s. But it wasn’t until 2003, when a high-profile paper detailed why the desert was a good analog for Mars, that microbial research in the area really started to take off. Investigations of the Atacama have increased steadily since, with scientists from fields including ecology, genetics and microbiology joining the effort.

Still, scientists have just scratched the surface; the majority of life here is still unknown, says Cristina Dorador, an Atacama microbiologist at the University of Antofagasta. Dorador is one of 155 elected representatives who worked to draft a new constitution for Chile — now awaiting a public vote — after a 2020 vote to replace the nation’s current dictatorship-era document. Part of Dorador’s goal in joining Chile’s constitutional convention, she says, was to help promote the importance of preserving and studying rare environments, like those of the Atacama, that have traditionally been valued only for the resources that could be extracted from them.

“When the country makes an economic decision, they don’t think about what’s happening with bacteria,” Dorador says. “I’m trying to communicate why it’s important to know about and protect those ecosystems.”

Dorador studies microbial mats that thrive beneath the crust of the Atacama salars, or salt flats, that are sometimes submerged under a layer of brine. A slice through one of these mats yields what might be taken for an alien serving of gelatinous lasagna. Inside the pasta-dish-gone-wrong, which can grow to several centimeters thick and is held together in part by cell-exuded goo, live millions of microorganisms of various types. The species cluster together into distinct, colorful layers: Purple streaks often represent bacteria that can avoid oxygen; bright green stripes might indicate ones that produce it. Other colors hint at cells that can capture nitrogen from their surroundings, produce foul-smelling sulfur, or leak methane or carbon dioxide into the air.

The layering results in a community in which cells of different species can symbiotically exploit one another’s chemical byproducts. Sometimes the layers rearrange to take advantage of changing conditions, much as a plant might tilt its leaves to best capture the rays of the sun. “They’re just one of my favorite things in the world,” Dorador says.

They are also a glimpse into the past, as this layered community looks very much like what scientists believe were the earliest ecosystems to come about on Earth. As they grow, some microbial mats form mounds of layered sediment that can be left behind as lithified fossils, called stromatolites. The oldest of these stromatolites date back 3.7 billion years, to a time when Earth’s atmosphere was devoid of oxygen. Thus, living mats, still found in extreme environments the world over, are of great interest to researchers trying to piece together the puzzle of how life as we know it today came to be.

One of those researchers is University of Connecticut astrobiologist Pieter Visscher. Along with colleagues, he has amassed evidence from stromatolite fossils and modern microbial mats suggesting that early-Earth microbes might have relied on arsenic for photosynthesis at a time when atmospheric oxygen wasn’t yet around. Throughout his career, Visscher was plagued by a major conundrum in trying to connect today’s mats to their stromatolitic ancestors: The presence of oxygen in the waters around them, he says, meant that the naturally occurring mats he studied couldn’t really show him how those early lifeforms functioned.

Then, on a 2012 trip with Argentinian and Chilean colleagues, Visscher found what he was looking for in a vibrant purple microbial mat thriving below the surface of the Atacama’s La Brava, a hypersaline lake more than 7,500 feet above sea level. Unlike in previously studied microbial mats, Visscher could detect no oxygen in the La Brava mats or the waters around them, neither then nor during several subsequent visits at different times of the year. Thus they provide an ideal natural laboratory, he says, and have lent weight to earlier theories about the importance of arsenic for early life.

“I had been looking for well over 30 years to find the right analog,” he says. “This bright purple microbial mat may have been something that was on Earth very early on — 2.8 to 3 billion years ago.” 

No zoo for microbes

Creative survival strategies abound in the Atacama, attracting scientists keen to understand how life may have shifted over time. In 2010, a Chilean team reported the discovery of a new species of microbe living off the dew collecting on spiderweb threads in a coastal Atacama cave well-positioned to swallow early morning fog. The microbe, a green unicellular alga of the genus Dunaliella, was the first of its genus to be found living outside aquatic environments, and its discoverers suggested that its adaptation might resemble those that primitive plants made when first colonizing land.

Other microbes take an active role in seeking out water. In 2020, a group of scientists from the United States described in PNAS a bacterium living within gypsum rocks that secreted a substance to dissolve the minerals around it, releasing individual water molecules sequestered inside the rock.

“They’re almost like miners … digging for water,” says David Kisailus, a chemical and environmental engineer at the University of California, Irvine, and one of the study’s authors. “They can actually search out and find the water and extract the water from these rocks.”

Examples like these are just a taste of what Atacama’s microbes might teach us about survival at extremes, Kisailus says. And such lessons might prime us to recognize clues in the search for life on other worlds, or help us adapt to the environmental changes coming to our own. They’ve turned Dorador, who has seen unique salar ecosystems altered dramatically through water lost to mining and other industries, into an advocate for microbial conservation in the desert.

But it’s a challenge, she says, to argue for the protection and the value of life that can’t be seen. Perhaps if people could witness for themselves a cell foraging for nutrients in boiling water or springing to life from a desiccated state when moisture fills the air, they would be impressed and care about preserving those species. But preservation itself is complicated. The Atacama extremophiles are so specialized that most wouldn’t last long outside their alien environments — scientists can’t even keep many of them alive in the lab.

“We don’t have a zoo of microbes,” Dorador says. “To conserve microbes, we have to conserve their habitats.”

Thinking macroscopically 

Arguments for microbial conservation and exploration go beyond scientific curiosity, says Michael Goodfellow, an emeritus professor of microbial systematics at Newcastle University in the United Kingdom. Goodfellow spent much of his career searching for new species of microbes in extreme environments like the Atacama, Antarctica and deep ocean trenches in the hopes of identifying new molecules for use in antibiotics. He thinks such bioprospecting in extreme environments should be considered a critical strategy in confronting the world’s impending antibiotic resistance crisis, which kills at least 700,000 people a year globally.

On their early trips to the Atacama’s hyper-arid core, Goodfellow and his colleagues weren’t really expecting to find much, but still thought it prudent to visit the “neglected habitat” where “hardly any work had been done at all.” To their surprise, they were able to isolate a small number of bacteria from the Actinomycetes, a globally common group of soil microbes that has long been an important focus of antibiotics research. Since then, work on these microbes has turned up more than 40 new molecules, some of which inhibited common disease-causing bacteria in lab studies.

“Our hypothesis was that the harsh environmental factors were selecting for new organisms that produce new compounds,” says Goodfellow, a coauthor of the Annual Review of Microbiology paper. “Ten years later, I think we’ve proven that hypothesis.”

Bioprospecting in deserts like the Atacama has technological applications as well, says Michael Seeger, a biochemist at the Federico Santa Maria Technical University in Chile. A key example is the microbes responsible for around 10 percent of Chile’s copper production. Copper is often found in a mix of metals, and microbes can help extract it by eating away at other materials in the ore. By giving these microbes free rein over mounds of material left by mining processes, or over mixtures of ore containing only trace concentrations of copper, producers can ensure that little copper is left behind at their mining sites.

Such metal-munching microbes must be able to handle high levels of acidity, because they produce acid as a waste product that would be deadly for many microbes, says Seeger. To thrive in highly acidic conditions, these acidophiles need specialized adaptations: cell membranes that block acidic particles, pumps that quickly shunt those damaging elements out of the cell, and enzymes capable of making quick repairs to proteins and DNA.

The Atacama is likely to be full of extremophiles like these, with specialized capabilities that make them useful for industry and other practical purposes, says Seeger, who studies the potential of extremophiles to help clean up oil spills and produce bioplastics, among other things. Arsenic-loving microbes might be useful for purifying polluted water sources, and genes borrowed from salt- or drought-tolerant microbes, for example, could be transferred to soil bacteria to boost agriculture in a nation that is facing increasing desertification, he says.

Proteins that function well under extreme conditions could also have important medical applications. Covid PCR tests, for example, would not be possible without a bacterial enzyme that can build DNA strands in extreme temperatures and which was originally plucked from a Yellowstone hot spring. Biologists hope the study of similarly resilient enzymes from desert microbes could lead to additional biotechnological breakthroughs in the future. The Atacama, so extreme in so many different ways, is likely to harbor microbes that are capable of more than we know, Seeger argues, and so it’s crucial to find out what is there.

“When you know what you have, then you can think about what you can do with it,” he says. 

Gómez-Silva, for his part, plans to keep working on figuring out just what Chile has in the Atacama. For two years he was unable to visit his desert sampling sites because of strict pandemic lockdown restrictions. Now that they are lifted, he’s grateful to be back.

Heading back to the research truck at the end of his sampling trip to Yungay, Gómez-Silva stops and stoops to pick up one last salt rock with a large, dark streak painted across its top.

“How can we not take this one? It’s beautiful,” he says. Then a chuckle. “I don’t know if you can see beauty here. I can.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Defamation was at the heart of the lawsuit settled by Fox News with Dominion – proving libel in a court would have been no small feat

Election workers in Detroit test their equipment made by Dominion Voting Systems in August 2022. Jeff Kowalsky/AFP via Getty Images
Nicole Kraft, The Ohio State University

The aftershocks of the 2020 presidential election continue to reverberate in politics and the media with Fox News Network’s April 18, 2023, US$787.5 million settlement with U.S. Dominion Inc. The settlement puts an end to Dominion’s defamation suit against the network.

Ahead of opening arguments that were slated to begin April 18, Fox News agreed to pay Dominion for alleged defamation. The lawsuit rested on whether false claims Fox hosts and their guests made about Dominion’s voting machines after President Joe Biden was elected were defamatory. Dominion sued Fox for $1.6 billion.

Fox News hosts said on air that there were “voting irregularities” with Dominion’s voting machines – while privately saying that such claims were baseless.

The statements have already been proved false. Delaware Superior Court Judge Eric M. Davis ruled on March 31, 2023, that it “is CRYSTAL clear that none of the Statements relating to Dominion about the 2020 election are true.”

The question at hand was whether the statements harmed Dominion’s reputation enough to rise to the level of defamation.

I am a longtime journalist and journalism professor who teaches the realities and challenges of defamation law as it relates to the news industry. Being accused of defamation is among a journalist’s worst nightmares, but it is far easier to throw around as an accusation than it is to actually prove fault.

A blonde white woman stands facing an electronic voting booth.
A voter in Atlanta takes part in midterm elections in November 2022. Nathan Posner/Anadolu Agency via Getty Images

Understanding defamation

Defamation happens when someone publishes or publicly broadcasts falsehoods about a person or a corporation in a way that harms their reputation to the point of damage. When the false statements are written, it is legally considered libel. When the falsehoods are spoken or aired on a live TV broadcast, for example, it is called slander.

To be considered defamation, information or claims must be presented as fact, must be disseminated so that others read or see them, must identify the person or business and must be offered with a reckless disregard for the truth.

Defamation plaintiffs can be private, ordinary people, who must prove the reporting was done with negligence to win their suit. Public people like celebrities or politicians have a higher burden of proof, known as actual malice: knowingly publishing a falsehood, or publishing with reckless disregard for whether it was true.

The ultimate defense against defamation is truth, but there are others.

Opinion that is not provable fact is protected, for example.

Neutral reportage – a legal term that means the media reports fairly, if inaccurately, about public figures – can legally protect journalists.

But Davis rejected both of those arguments in the Dominion case.

Davis determined Fox aired falsehoods when it allowed Trump supporters to claim on air that Dominion rigged voting machines to increase President Joe Biden’s number of votes. He also said that these actions harmed Dominion’s reputation.

Proving actual malice

The primary question for the jury, which had already been seated, would have been whether Fox broadcasters knew the statements were false when they aired them. If they did, it would mean they acted with actual malice, the standard required to prove a case of defamation for a public person, entity or figure.

The U.S. Supreme Court established actual malice as a legal criterion of defamation in 1964 when L.B. Sullivan, a police commissioner in Alabama, felt his reputation had been harmed by a civil rights ad run in The New York Times that contained several inaccuracies. Sullivan sued and was awarded $500,000 by a jury. The state Supreme Court affirmed the decision and the Times appealed.

The U.S. Supreme Court ruled in 1964 that proof of defamation required evidence that the advertisement’s creator had serious doubts about the truth of the statement and published it anyway, with the goal of harming the subject’s reputation.

Simply put, the burden of proof shifted from the accused to the accuser.

And that is a hurdle most cannot overcome when claiming defamation.

Why proving defamation is so hard

It is incredibly hard to prove in court that someone set out to do harm by publishing statements that are ultimately proved to be untrue.

Most times, falsehoods in a story are the result of insufficient information at the time of reporting.

Sometimes an article’s inaccuracies are the result of bad reporting. Other times the errors are a result of actual negligence.

This happened when Rolling Stone magazine published an article in 2014 about the gang rape of a student at the University of Virginia. It turned out that many parts of the story were not true and not properly vetted by the magazine.

Nicole Eramo, the former associate dean of students at the University of Virginia, sued Rolling Stone, claiming the story falsely alleged that she knew about and covered up a gang rape at a fraternity on campus. The parties settled the lawsuit in 2017.

Not meeting the malice standard

There are also some recent examples of defamation lawsuits that did not meet the actual malice standard.

This includes Alaska politician Sarah Palin, who sued The New York Times over publication of an editorial in 2017 that erroneously stated her political rhetoric led to a mass shooting. The jury said the information might be inaccurate, but she had not met the actual malice standard.

Long before he was president, Donald Trump had a 2011 libel suit dismissed after a New Jersey appeals court said there was no proof a book author showed actual malice when he cited three unnamed sources who estimated Trump was a millionaire, not a billionaire.

It is so difficult for public figures to meet the actual malice standard and prove defamation that many defamation plaintiffs spend much of their legal preparation time trying to prove they are not actually in the public eye. Public figures’ reputations, according to the courts, are not as fragile as those of private people.

Private people must prove only negligence to be successful in a defamation lawsuit. That means that someone did not seriously try to consider whether a statement was true or not before publishing it.

Protesters gather outside the Fox News headquarters in New York City ahead of the Dominion trial. Erik McGregor/LightRocket via Getty Images

Defamation cases that went ahead

Some public figures, however, have prevailed in proving defamation.

American actress Carol Burnett won the first-ever defamation suit against the National Enquirer when a jury decided a 1976 gossip column describing her as intoxicated in a restaurant encounter with former Secretary of State Henry A. Kissinger was known to be false when it was published.

Most recently, Cardi B won a defamation lawsuit against a celebrity news blogger who posted videos falsely stating the Grammy-winning rapper used cocaine, had herpes and took part in prostitution.

The case of Dominion

Fox’s payout to Dominion – though only half of what Dominion sued for – reportedly shows that the voting machines company put together a strong case that Fox acted with actual malice.

Fox’s own pundits helped the plaintiff’s case by acknowledging they knew information was false before they aired it and leaving a copious trail of comments such as, “this dominion stuff is total bs.”

Fox’s position was that despite knowing claims made by guests about Dominion were false, the claims were newsworthy.

Does this qualify as actual malice or simply bad journalism?

The settlement seems to imply actual malice – and this could send shivers through the political media landscape for years to come.

This is an updated version of an article originally published on April 17, 2023.

Nicole Kraft, Associate Professor of Clinical Communication, The Ohio State University

This article is republished from The Conversation under a Creative Commons license.