For the first time, a majority of Ivy League schools will soon be led by women.
Starting July 1, 2023, Claudine Gay will assume the role of president at Harvard University, Nemat “Minouche” Shafik at Columbia University and Sian Leah Beilock at Dartmouth College. They will join current female presidents at Brown University, Cornell University and the University of Pennsylvania.
Felecia Commodore, an associate professor of higher education at Old Dominion University, explains what this means for gender equity in the college presidency – and why U.S. colleges and universities still have a long way to go.
However, the Ivy League is not new to selecting female presidents – they have been doing so for a few decades. Judith Rodin was the first, in 1994, when she became president of the University of Pennsylvania. She was followed by Ruth Simmons at Brown University and Shirley Tilghman at Princeton University, both in 2001. Rodin was succeeded by another woman, Amy Gutmann, in 2004.
Still, one reason this moment may be one to watch is that Ivy League institutions are often seen as exemplars of elite, complex institutions. So seeing what one could consider a critical mass of female leaders in the Ivy League could signal the benefit of women in leadership to other boards that are hesitant or slow to hire women as presidents.
How unusual is this across higher ed?
I think it would be more surprising to see mostly female presidents at the majority of large public research universities, or at a majority of the schools in the Power 5 athletic conferences.
Despite what may seem like a boom in women leading institutions, the percentage of women in the presidency at colleges and universities more broadly has plateaued at between 25% and 30% for the past decade. This was after increasing from 9.5% in 1986 to 19% in 1998.
Judith Rodin, right, former president of the University of Pennsylvania, and Valerie Jarrett, former senior adviser in the Obama administration, discuss gender parity in the C-suite in 2016. Riccardo Savi/Getty Images for Concordia Summit
What are the biggest challenges that college presidents face?
The biggest priority or challenge really depends on the individual college or university. However, all institutions must ensure they are financially healthy and identify opportunities to strengthen their financial resources. College presidents have reported that they spend the most time on budget and financial management, followed by fundraising.
Particularly in the current higher education marketplace, where the average cost of college runs over US$35,000 per year, college leaders must work to keep their institutions fiscally strong and also competitive and affordable. This may involve, for example, building new infrastructure, creating new programs and cultivating new sources of funding.
What effect does having a woman in the top seat have?
For colleges that have only ever had a man in the president’s role, hiring their first woman as president can signal that the institution embraces change and evolution. This can be an especially important message to send to funders, alumni donors, philanthropists, state legislators and corporate partners, who all play a role in ensuring a particular college’s financial vitality.
Female presidents add to the diversity of the college presidency. They add different perspectives to conversations that shape practices and policies both within their college and across higher education. They might, for example, provide their particular perspective regarding compensation for female faculty members of color, who tend to engage in more unpaid service work on campuses.
Southeast Asia’s Mekong may be the most important river in the world. Known as the “mother of waters,” it is home to the world’s largest inland fishery, and the huge amounts of sediments it transports feed some of the planet’s most fertile farmlands. Tens of millions of people depend on it for their livelihoods.
But how valuable is it in monetary terms? Is it possible to put a dollar value on the multitude of ecosystem services it provides, to help keep those services healthy into the future?
Understanding the value of a river is essential for good management and decision-making, such as where to develop infrastructure and where to protect nature. This is particularly true of the Mekong, which has come under enormous pressure in recent years from overfishing, dam building and climate change, and where decisions about development projects often do not take environmental costs into account.
The Mekong River winds through six countries, across 2,700 miles (about 4,350 kilometers) from the mountains to the sea. Leisa Tyler/LightRocket via Getty Images
“Rivers such as the Mekong function as life-support systems for entire regions,” said Rafael Schmitt, lead scientist at the Natural Capital Project at Stanford University, who has studied the Mekong system for many years. “Understanding their values, in monetary terms, can be critical to fairly judge the impacts that infrastructure development will have on these functions.”
Calculating that value isn’t simple, though. Most of the natural benefits that a river brings are, naturally, under water, and thus hidden from direct observation. Ecosystem services may be hard to track because rivers often flow over large distances and sometimes across national borders.
Enter natural capital accounting
The theory of natural capital suggests that ecosystem services provided by nature – such as water filtration, flood control and raw materials – have economic value that should be taken into account when making decisions that affect these systems.
Proponents maintain that natural capital accounting puts a spotlight on natural systems’ value when weighed against commercial pressures. They say it brings visibility to natural benefits that are otherwise hidden, using language that policymakers can better understand and utilize.
More than a million people live on or around Tonle Sap lake, the world’s largest inland fishery. Climate change and dams can affect its water level and fish stocks. Tang Chhin Sothy/AFP via Getty Images
Several countries have incorporated natural capital accounting in recent years, including Costa Rica, Canada and Botswana. Often, that has led to better protection of natural resources, such as mangrove forests that protect fragile coastlines. The U.S. government also announced a strategy in 2023 to start developing metrics to account for the value of underlying natural assets, such as critical minerals, forests and rivers.
However, natural capital studies have largely focused on terrestrial ecosystems, where the trade-offs between human interventions and conservation are easier to see.
When valuing rivers, the challenges run much deeper. “If you cut down a forest, the impact is directly visible,” Schmitt points out. “A river might look pristine, but its functioning may be profoundly altered by a faraway dam.”
Accounting for hydropower
Hydropower provides one example of the challenges in making decisions about a river without understanding its full value. It’s often much easier to calculate the value of a hydropower dam than the value of the river’s fish, or sediment that eventually becomes fertile farmland.
The rivers of the Mekong Basin have been widely exploited for power production in recent decades, with a proliferation of dams in China, Laos and elsewhere. The Mekong Dam Monitor, run by the nonprofit Stimson Center, tracks dams and their environmental impacts across the basin in near-real time.
While hydropower is clearly an economic benefit – powering homes and businesses, and contributing to a country’s GDP – dams also alter river flows and block both fish migration and sediment delivery.
Droughts in the Mekong in recent years, linked to El Niño and exacerbated by climate change, were made worse by dam operators holding back water. That caused water levels to drop to historically low levels, with devastating consequences for fisheries. In the Tonlé Sap Lake, Southeast Asia’s largest lake and the heart of the Mekong fishery, thousands of fishers were forced to abandon their occupation, and many commercial fisheries had to close.
Hydropower dams like the one shown above in Cambodia can disrupt a river’s natural services. The Sesan River (Tonlé San) and Srepok River are tributaries of the Mekong. Satellite images taken before and after construction show how the dam changed the water flow. NASA Earth Observatory
One project under scrutiny now in the Mekong Basin is a small dam being constructed on the Sekong River, a tributary, in Laos near the Cambodian border. While the dam is expected to generate a very small amount of electricity, preliminary studies show it will have a dramatically negative impact on many migratory fish populations in the Sekong, which remains the last major free-flowing tributary in the Mekong River Basin.
Valuing the ‘lifeblood of the region’
The Mekong River originates in the Tibetan highlands and runs for 2,700 miles (about 4,350 kilometers) through six countries before emptying into the South China Sea.
Its ecological and biological riches are clearly considerable. The river system is home to over 1,000 species of fish, and the annual fish catch in just the lower basin, below China, is estimated at more than 2 million metric tons.
“The river has been the lifeblood of the region for centuries,” says Zeb Hogan, a biologist at the University of Nevada, Reno, who leads the USAID-funded Wonders of the Mekong research project, which I work on. “It is the ultimate renewable resource – if it is allowed to function properly.”
Establishing the financial worth of fish is more complicated than it appears, though. Many people in the Mekong region are subsistence fishers for whom fish have little to no market value but are crucial to their survival.
The river is also home to some of the largest freshwater fish in the world, such as the giant stingray and giant catfish, as well as critically endangered species. “How do you value a species’ right to exist?” asks Hogan.
Sediment, which fertilizes floodplains and builds up the Mekong Delta, has been relatively easy to quantify, says Schmitt, the Stanford scientist. According to his analysis, the Mekong, in its natural state, delivers 160 million tons of sediment each year.
However, dams let through only about 50 million tons, while sand mining in Cambodia and Vietnam extracts 90 million, meaning more sediment is blocked or removed from the river than is delivered to its natural destination. As a result, the Mekong Delta, which naturally would receive much of the sediment, has suffered tremendous erosion, with thousands of homes being swept away.
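A quick back-of-the-envelope tally makes the deficit plain. The sketch below simply combines the rounded figures reported above; the variable names and the simple subtraction are ours, not Schmitt’s published model.

```python
# Rough Mekong sediment budget, in millions of metric tons per year,
# using the rounded figures reported above.
natural_load = 160  # what the undammed river would deliver downstream
past_dams = 50      # what today's dams actually let through
sand_mining = 90    # what mining in Cambodia and Vietnam removes

net_to_delta = past_dams - sand_mining
print(f"Blocked by dams: {natural_load - past_dams} million tons/year")
print(f"Net sediment reaching the delta: {net_to_delta} million tons/year")
# Output: Blocked by dams: 110 million tons/year
#         Net sediment reaching the delta: -40 million tons/year
# The balance is negative: more sediment is removed than arrives,
# which is consistent with the delta erosion described above.
```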
A potential ‘World Heritage Site’ designation
A river’s natural services may also include cultural and social benefits that can be difficult to place monetary values on.
A new proposal seeks to designate a bio-rich stretch of the Mekong River in northern Cambodia as a UNESCO World Heritage Site. If successful, such a designation may bring with it a certain amount of prestige that is hard to put in numbers.
The complexities of the Mekong River make our project a challenging undertaking. At the same time, it is the rich diversity of natural benefits that the Mekong provides that make this work important, so that future decisions can be made based on true costs.
Dilbert, the put-upon chronicler of office life, has been given the pink slip.
On Feb. 26, 2023, Andrews McMeel Universal announced that it would no longer distribute the popular comic strip after its creator, Scott Adams, engaged in what many people viewed as a racist rant on his YouTube channel. Hundreds of newspapers had by then decided to quit publishing the strip.
It followed an incident in which Adams, on his program “Real Coffee with Scott Adams,” reacted to a survey by Rasmussen Reports that concluded only 53% of Black Americans agreed with the statement “It’s OK to be white.” If only about half thought it was OK to be white, Adams said, this qualified Black Americans as a “hate group.”
“I don’t want to have anything to do with them,” Adams added. “And I would say, based on the current way things are going, the best advice I would give to white people is to get the hell away from Black people, just get the f— away … because there is no fixing this.”
Adams later doubled down on his statements, writing on Twitter that “Dilbert has been cancelled from all newspapers, websites, calendars, and books because I gave some advice everyone agreed with.”
Adams is wrong. If everyone had agreed with him, “Dilbert” would still be appearing in newspapers.
The first “Dilbert” strip – a comic centered on mocking American office culture – appeared in 1989. It became a hit, and until recently, “Dilbert” ran in more than 2,000 daily newspapers across 65 countries.
Therein lies the moral of the story: Know thy audience.
Adams failed to grasp that being a social critic means your freedom of expression only goes as far as your audience is willing to accept it. Adams could say whatever he wanted to his YouTube audience because his listeners may have agreed with what he said.
Unfortunately for him, what he said on his program did not stay on his program.
But Adams’ comfortable salary depended on his satisfying a wider audience – many of whom found his opinions intolerable.
America’s tradition of free speech
In a country that prides itself on its tradition of free expression, it’s important to explore the limits of free expression in the United States. This can be done in part by looking at social criticism, as I did in my book “Drawn to Extremes: The Use and Abuse of Editorial Cartoons.”
Cartoonists are limited by their imagination, talent, taste and their senses of humor, morality and outrage. If they want an audience they must also consider the tastes and sensibilities of their editors and readers.
The United States may pride itself on its tradition of free speech, but cartoonists throughout the nation’s history have been jailed, beaten, sued and censored for their drawings.
In 1903, the governor of Pennsylvania, Samuel W. Pennypacker, called for restrictions against journalists after a Philadelphia newspaper cartoonist had depicted him as a parrot during the previous fall’s gubernatorial campaign. A state representative then introduced a bill that made it illegal to publish a cartoon “portraying, describing or representing any person … in the likeness of beast, bird, fish, insect or other inhuman animal” that exposed the person to “hatred, contempt, or ridicule.” Another cartoonist then drew the governor as a frothy stein of beer and the bill’s author as a small potato.
The bill failed to pass.
Cartoonists working for the socialist magazine The Masses were accused of undermining the war effort during World War I with their anti-war opinions and prosecuted under the Espionage Act.
And during the Cuban Missile Crisis of 1962, newspapers canceled Walt Kelly’s “Pogo” comic strip after Kelly drew Soviet Premier Nikita Khrushchev as a medal-wearing hog and Cuban leader Fidel Castro as a cigar-smoking goat; the papers feared the strip might jeopardize the peace process.
Perhaps no cartoonist – before the ax fell on “Dilbert” – has seen his strip canceled by more newspapers than Garry Trudeau, creator of “Doonesbury.” In 1984, dozens of newspapers canceled a series of strips in which Doonesbury’s dim-witted newsman Roland Burton Hedley took readers on a trip through then-President Ronald Reagan’s brain, finding “80 billion neurons, or ‘marbles,’ as they are known to the layman.” And Trudeau’s syndicate, Universal Press, refused to distribute a strip that satirized an anti-abortion documentary.
In other countries, cartoonists have been murdered in retaliation for their work. Famously, on Jan. 7, 2015, two French Muslim terrorists entered the Paris office of the satirical French newspaper Charlie Hebdo and killed 12 cartoonists, editors and police officers after the periodical published satirical drawings of the Prophet Muhammad.
The importance of context
Such controversies were generally caused by what cartoonists said in their cartoons. There have been exceptions. Al Capp, who created the comic strip “Li’l Abner,” saw his popularity wane in the 1960s and 1970s when he began expressing his far-right political opinions, both in his strip and, particularly, in his public appearances.
Adams was similarly punished not for what he included in his comic strip but rather for what he said on his YouTube program.
The context here is important. This was not the first time Adams had been censured for saying something deemed offensive. In May 2022, around 80 newspapers canceled “Dilbert” after Adams introduced the first Black character in the strip’s 30-plus-year run – a character who identified as white as a prank on his boss’s diversity goals.
Adams lost some newspapers when he decided to mock diversity in the business world. He lost his strip when he used racist language to attack Black people on his YouTube program.
From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, a group of artificial intelligence researchers and industry figures urged the industry on March 29, 2023, to pause further training of the latest AI technologies or, barring that, for governments to “impose a moratorium.”
These technologies – image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and don’t require technical knowledge to use.
Given the potential for widespread harm as technology companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it’s so important to get it right.
S. Shyam Sundar, Professor of Media Effects & Director, Center for Socially Responsible AI, Penn State
The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on “automation bias” or the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft is flying on autopilot.
Numerous studies in my lab have shown that when a machine, rather than a human, is identified as a source of interaction, it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on. It clouds the user’s judgment and results in the user overly trusting machines. However, simply disabusing people of AI’s infallibility is not sufficient, because humans are known to unconsciously assume competence even when the technology doesn’t warrant it.
Research has also shown that people treat computers as social beings when the machines show even the slightest hint of humanness, such as the use of conversational language. In these cases, people apply social rules of human interaction, such as politeness and reciprocity. So, when computers seem sentient, people tend to trust them, blindly. Regulation is needed to ensure that AI products deserve this trust and don’t exploit it.
AI poses a unique challenge because, unlike in traditional engineering systems, designers cannot be sure how AI systems will behave. When a traditional automobile was shipped out of the factory, engineers knew exactly how it would function. But with self-driving cars, the engineers can never be sure how they will perform in novel situations.
Lately, thousands of people around the world have been marveling at what large generative AI models like GPT-4 and DALL-E 2 produce in response to their prompts. None of the engineers involved in developing these AI models could tell you exactly what the models will produce. To complicate matters, such models change and evolve with more and more interaction.
All this means there is plenty of potential for misfires. Therefore, a lot depends on how AI systems are deployed and what provisions for recourse are in place when human sensibilities or welfare are hurt. AI is more of an infrastructure, like a freeway. You can design it to shape human behaviors in the collective, but you will need mechanisms for tackling abuses, such as speeding, and unpredictable occurrences, like accidents.
AI developers will also need to be inordinately creative in envisioning ways that the system might behave and try to anticipate potential violations of social standards and responsibilities. This means there is a need for regulatory or governance frameworks that rely on periodic audits and policing of AI’s outcomes and products, though I believe that these frameworks should also recognize that the systems’ designers cannot always be held accountable for mishaps.
Artificial intelligence researcher Joanna Bryson describes how professional organizations can play a role in regulating AI.
Combining ‘soft’ and ‘hard’ approaches
Cason Schmit, Assistant Professor of Public Health, Texas A&M University
Regulating AI is tricky. To regulate AI well, you must first define AI and understand anticipated AI risks and benefits.
Legally defining AI is important to identify what is subject to the law. But AI technologies are still evolving, so it is hard to pin down a stable legal definition.
Understanding the risks and benefits of AI is also important. Good regulations should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits might be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.
“Soft laws” are the alternative to traditional “hard law” approaches of legislation intended to prevent specific violations. In the soft law approach, a private organization sets rules or standards for industry members. These can change more rapidly than traditional lawmaking. This makes soft laws promising for emerging technologies because they can adapt quickly to new applications and risks. However, soft laws can mean soft enforcement.
My colleagues and I propose a different way forward: a model we call Copyleft AI with Trusted Enforcement, or CAITE, which combines two very different concepts in intellectual property – copyleft licensing and patent trolls. Copyleft licensing allows for content to be used, reused or modified easily under the terms of a license – for example, open-source software. The CAITE model uses copyleft licenses to require AI users to follow specific ethical guidelines, such as transparent assessments of the impact of bias.
In our model, these licenses also transfer the legal right to enforce license violations to a trusted third party. This creates an enforcement entity that exists solely to enforce ethical AI standards and can be funded in part by fines from unethical conduct. This entity is like a patent troll in that it is private rather than governmental and it supports itself by enforcing the legal intellectual property rights that it collects from others. In this case, rather than enforcement for profit, the entity enforces the ethical guidelines defined in the licenses – a “troll for good.”
This model is flexible and adaptable to meet the needs of a changing AI environment. It also enables substantial enforcement options like those of a traditional government regulator. In this way, it combines the best elements of hard and soft law approaches to meet the unique challenges of AI.
Though generative AI has been grabbing headlines of late, other types of AI have been posing challenges for regulators for years, particularly in the area of data privacy.
Four key questions to ask
John Villasenor, Professor of Electrical Engineering, Law, Public Policy, and Management, University of California, Los Angeles
The extraordinary recent advances in large language model-based generative AI are spurring calls to create new AI-specific regulation. Here are four key questions to ask as that dialogue progresses:
1) Is new AI-specific regulation necessary? Many of the potentially problematic outcomes from AI systems are already addressed by existing frameworks. If an AI algorithm used by a bank to evaluate loan applications leads to racially discriminatory loan decisions, that would violate the Fair Housing Act. If the AI software in a driverless car causes an accident, products liability law provides a framework for pursuing remedies.
2) What are the risks of regulating a rapidly changing technology based on a snapshot in time? A classic example of this is the Stored Communications Act, which was enacted in 1986 to address then-novel digital communication technologies like email. In enacting the SCA, Congress provided substantially less privacy protection for emails more than 180 days old.
The logic was that limited storage space meant that people were constantly cleaning out their inboxes by deleting older messages to make room for new ones. As a result, messages stored for more than 180 days were deemed less important from a privacy standpoint. It’s not clear that this logic ever made sense, and it certainly doesn’t make sense in the 2020s, when the majority of our emails and other stored digital communications are older than six months.
A common rejoinder to concerns about regulating technology based on a single snapshot in time is this: If a law or regulation becomes outdated, update it. But this is easier said than done. Most people agree that the SCA became outdated decades ago. But because Congress hasn’t been able to agree on specifically how to revise the 180-day provision, it’s still on the books over a third of a century after its enactment.
3) What are the potential unintended consequences? The Allow States and Victims to Fight Online Sex Trafficking Act of 2017 was a law passed in 2018 that revised Section 230 of the Communications Decency Act with the goal of combating sex trafficking. While there’s little evidence that it has reduced sex trafficking, it has had a hugely problematic impact on a different group of people: sex workers who used to rely on the websites knocked offline by FOSTA-SESTA to exchange information about dangerous clients. This example shows the importance of taking a broad look at the potential effects of proposed regulations.
4) What are the economic and geopolitical implications? If regulators in the United States act to intentionally slow the progress in AI, that will simply push investment and innovation — and the resulting job creation — elsewhere. While emerging AI raises many concerns, it also promises to bring enormous benefits in areas including education, medicine, manufacturing, transportation safety, agriculture, weather forecasting, access to legal services and more.
I believe AI regulations drafted with the above four questions in mind will be more likely to successfully address the potential harms of AI while also ensuring access to its benefits.
On April 3, 2023, NASA announced the four astronauts who will make up the crew of Artemis II, which is scheduled to launch in late 2024. The Artemis II mission will send these four astronauts on a 10-day mission that culminates in a flyby of the Moon. While they won’t head to the surface, they will be the first people in more than 50 years to leave Earth’s immediate vicinity and travel near the Moon.
This mission will test the technology and equipment that’s necessary for future lunar landings and is a significant step on NASA’s planned journey back to the surface of the Moon. As part of this next era in lunar and space exploration, NASA has outlined a few clear goals. The agency is hoping to inspire young people to get interested in space, to make the broader Artemis program more economically and politically sustainable and, finally, to continue encouraging international collaboration on future missions.
From my perspective as a space policy expert, the four Artemis II astronauts fully embody these goals.
Crew members of the Artemis II mission are NASA astronauts Christina Hammock Koch, Reid Wiseman and Victor Glover and Canadian Space Agency astronaut Jeremy Hansen. NASA
Who are the four astronauts?
The four members of the Artemis II crew are highly experienced, with three of them having flown in space previously. The one rookie flying onboard is notably representing Canada, making this an international mission, as well.
The commander of the mission will be Reid Wiseman, a naval aviator and test pilot. On his previous mission to the International Space Station, he spent 165 days in space and completed a record 82 hours of experiments in just one week. Wiseman also served as chief of the U.S. astronaut office from 2020 until late 2022.
Serving as pilot is Victor Glover. After flying more than 3,000 hours in more than 40 different aircraft, Glover was selected for the astronaut corps in 2013. He was the pilot for the Crew-1 mission, the first mission that used a SpaceX rocket and capsule to bring astronauts to the International Space Station, and served as a flight engineer on the ISS.
The lone woman on the crew is mission specialist Christina Hammock Koch. She has spent 328 days in space, more than any other woman, across three ISS expeditions. She has also participated in six different spacewalks, including the first three all-women spacewalks. Koch is an engineer by trade, having previously worked at NASA’s Goddard Space Flight Center.
The crew will be rounded out by a Canadian, Jeremy Hansen. Though a spaceflight rookie, he has participated in space simulations like NEEMO 19, in which he lived in a facility on the ocean floor to simulate deep space exploration. Before being selected to Canada’s astronaut corps in 2009, he was an F-18 pilot in the Royal Canadian Air Force.
These four astronauts have followed pretty typical paths to space. Like the Apollo astronauts, three of them began their careers as military pilots. Two, Wiseman and Glover, were trained test pilots, just as most of the Apollo astronauts were.
Mission specialist Koch, with her engineering expertise, is more typical of modern astronauts. The position of mission or payload specialist was created for the space shuttle program, making spaceflight possible for those with more scientific backgrounds.
The crew will make a single flyby of the Moon in an Orion capsule. NASA, CC BY-NC
A collaborative, diverse future
Unlike the Apollo program of the 1960s and 1970s, with Artemis, NASA has placed a heavy emphasis on building a politically sustainable lunar program by fostering the participation of a diverse group of people and countries.
The participation of other countries in NASA missions – Canada in this case – is particularly important for the Artemis program and the Artemis II crew. International collaboration is beneficial for a number of reasons. First, it allows NASA to lean on the strengths and expertise of engineers, researchers and space agencies of U.S. allies and divide up the production of technologies and costs. It also helps the U.S. continue to provide international leadership in space as competition with other countries – notably China – heats up.
The crew of Artemis II is also quite diverse compared with the Apollo astronauts. NASA has often pointed out that the Artemis program will send the first woman and the first person of color to the Moon. With Koch and Glover on board, Artemis II is the first step in fulfilling that promise and moving toward the goal of inspiring future generations of space explorers.
The four astronauts aboard Artemis II will be the first humans to return to the vicinity of the Moon since 1972. The flyby will take the Orion capsule in one pass around the far side of the Moon. During the flight, the crew will monitor the spacecraft and test a new communication system that will allow them to send more data and communicate more easily with Earth than previous systems.
If all goes according to plan, in late 2025 Artemis III will mark humanity’s return to the lunar surface, this time also with a diverse crew. While the Artemis program still has a way to go before humans set foot on the Moon once again, the announcement of the Artemis II crew shows how NASA intends to get there in a diverse and collaborative way.
Charges of media bias – that “the media” are trying to brainwash Americans by feeding the public only one side of every issue – have become as common as campaign ads in the run-up to the midterm elections.
Communications scholars have found that if you ask people in any community, using scientific polling methods, whether their local media are biased, you’ll find that about half say yes. But of that half, typically a little more than a quarter say that their local media are biased against Republicans, and a little less than a quarter say the same local media are biased against Democrats.
Research shows that Republicans and Democrats spot bias only in articles that clearly favor the other party. If an article tilts in favor of their own party, they tend to see it as unbiased.
Many people, then, define “bias” as “anything that doesn’t agree with me.” It’s not hard to see why.
‘Liberal bias’ in the media is a constant topic on Fox News.
In a 2016 Pew Research Center poll, 45% of Republicans said the Democratic Party’s policies are “so misguided that they threaten the nation’s well-being,” and 41% of Democrats said the same about Republicans. A poll conducted in midyear 2022 by Pew showed that “72% of Republicans regard Democrats as more immoral, and 63% of Democrats say the same about Republicans.”
That doesn’t mean that “the media” are biased. There are hundreds of thousands of media outlets in the U.S. – newspapers, radio, network TV, cable TV, blogs, websites and social media. These news outlets don’t all take the same perspective on any given issue.
If you want a very conservative news site, it is not hard to find one, and the same with a very liberal news site.
The Constitution’s First Amendment says Congress shall make no law limiting the freedom of the press. It doesn’t say that Congress shall require all media sources to be “unbiased.” Rather, it implies that as long as Congress does not systematically suppress any particular point of view, then the free press can do its job as one of the primary checks on a powerful government.
When the Constitution was written and for most of U.S. history, the major news sources – newspapers, for most of that time – were explicitly biased. Most were sponsored by a political party or a partisan individual.
The notion of objective journalism – that media must report both sides of every issue in every story – barely existed until the late 1800s. It reached full flower only in the few decades when broadcast television, limited to three major networks, was the primary source of political information.
Since that time, the media universe has expanded to include huge numbers of internet news sites, cable channels and social media posts. So if you feel that the media sources you’re reading or watching are biased, you can read a wider variety of media sources.
Thomas Jefferson described this partisan newspaper, The Gazette of the United States, as ‘a paper of pure Toryism … disseminating the doctrines of monarchy, aristocracy, and the exclusion of the people.’ Library of Congress, Chronicling America collection
If it bleeds, it leads
There is one form of actual media bias. Almost all media outlets need audiences in order to exist. Some can’t survive financially without an audience; others want the prestige that comes from attracting a big audience.
Thus, the media define as “news” the kinds of stories that will attract an audience: those that feature drama, conflict, engaging pictures and immediacy. That’s what most people find interesting. They don’t want to read a story headlined “Dog bites man.” They want “Man bites dog.”
The problem is that a focus on such stories crowds out what we need to know to protect our democracy, such as: How do the workings of American institutions benefit some groups and disadvantage others? In what ways do our major systems – education, health care, national defense and others – function effectively or less effectively?
These analyses are vital to citizens – if we fail to protect our democracy, our lives will be changed forever – but they aren’t always fun to read. So they get covered much less than celebrity scandals or murder cases – which, while compelling, don’t really affect the ability to sustain a democratic system.
Writer Dave Barry demonstrated this media bias in favor of dramatic stories in a 1998 column.
He wrote, “Let’s consider two headlines. FIRST HEADLINE: ‘Federal Reserve Board Ponders Reversal of Postponement of Deferral of Policy Reconsideration.’ SECOND HEADLINE: ‘Federal Reserve Board Caught in Motel with Underage Sheep.’ Be honest, now. Which of these two stories would you read?”
By focusing on the daily equivalent of the underage sheep, media can direct our attention away from the important systems that affect our lives. That isn’t the media’s fault; we are the audience whose attention media outlets want to attract.
But as long as we think of governance in terms of its entertainment value and media bias in terms of Republicans and Democrats, we’ll continue to be less informed than we need to be. That’s the real media bias.
(Culinary.net) They might not be the fanciest of foods, but when you eat a filling, protein-packed sandwich, you are usually left satisfied and full of energy. From ham and turkey to mayo and mustard, the possibilities are nearly endless when sandwiches are on the menu.
With so many customizable options for bread, meats, toppings and more, it’s easy to create the perfect sandwich. For example, this Croissant Chicken Salad Sandwich with Sprouts is served on a fluffy, light, mouthwatering croissant and features a hearty mixture of chicken, bacon and veggies to give you that boost you have been craving.
To make the sandwich, line six slices of bacon in a skillet. Cook until slightly crispy. Drain on a paper towel and crumble into pieces.
On a cutting board, cut cherry tomatoes in half and chop green onions.
In a mixing bowl, combine chicken, mayonnaise, chopped green onions, pepper, bacon crumbles and halved cherry tomatoes.
Cut croissants in half and scoop a generous amount of chicken salad onto the bottom of the croissant. Top with sprouts and replace top croissant.
The chicken is creamy, the bacon crumbles are crispy and the green onions give it crunch, making this sandwich perfect for nearly any occasion. Whether it’s a bridal shower, picnic at the park with family or just lunch on a weekend afternoon, it can give you the energy to go forward and finish your day strong.
From rural Pennsylvania to Los Angeles, more than 17 million Americans live within a mile of at least one oil or gas well. Since 2014, most new oil and gas wells have been fracked.
Fracking, short for hydraulic fracturing, is a process in which workers inject fluids underground under high pressure. The fluids fracture coal beds and shale rock, allowing the gas and oil trapped within the rock to rise to the surface. Advances in fracking launched a huge expansion of U.S. oil and gas production starting in the early 2000s but also triggered intense debate over its health and environmental impacts.
Fracking fluids are up to 97% water, but they also contain a host of chemicals that perform functions such as dissolving minerals and killing bacteria. The U.S. Environmental Protection Agency classifies a number of these chemicals as toxic or potentially toxic.
We study the oil and gas industry in California and Texas and are members of the Wylie Environmental Data Justice Lab, which studies fracking chemicals in aggregate. In a recent study, we worked with colleagues to provide the first systematic analysis of chemicals found in fracking fluids that would be regulated under the Safe Drinking Water Act if they were injected underground for other purposes. Our findings show that excluding fracking from federal regulation under the Safe Drinking Water Act is exposing the public to an array of chemicals that are widely recognized as threats to public health.
A schematic of a hydraulic fracking operation, with wastewater temporarily stored in a surface waste pit. wetcake via Getty Images
Averting federal regulation
Fracking technologies were originally developed in the 1940s but only entered widespread use for fossil fuel extraction in the U.S. in the early 2000s. Since the process involves injecting chemicals underground and then disposing of contaminated water that flows back to the surface, it faced potential regulation under multiple U.S. environmental laws.
In 1997, the 11th Circuit Court of Appeals ruled that fracking should be regulated under the Safe Drinking Water Act. This would have required oil and gas producers to develop underground injection control plans, disclose the contents of their fracking fluids and monitor local water sources for contamination.
In response, the oil and gas industry lobbied Congress to exempt fracking from regulation under the Safe Drinking Water Act. Congress did so as part of the Energy Policy Act of 2005.
This provision is widely known as the Halliburton Loophole because it was championed by former U.S. Vice President Dick Cheney, who previously served as CEO of oil services company Halliburton. The company patented fracking technologies in the 1940s and remains one of the world’s largest suppliers of fracking fluid.
Though researchers have produced numerous studies on the health effects of these chemicals, federal exemptions and sparse data still make it hard to monitor the impacts of their use. Further, much existing research focuses on individual compounds, not on the cumulative effects of exposure to combinations of them.
Chemical use in fracking
For our review we consulted the FracFocus Chemical Disclosure Registry, which is managed by the Ground Water Protection Council, an organization of state government officials. Currently, 23 states – including major producers like Pennsylvania and Texas – require oil and gas companies to report to FracFocus information such as well locations, operators and the masses of each chemical used in fracking fluids.
We used a tool called Open-FracFocus, which uses open-source coding to make FracFocus data more transparent, easily accessible and ready to analyze.
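As a rough illustration of the kind of aggregation such a tool makes possible, here is a minimal sketch of tallying disclosures from a simplified CSV export. The file name, column names and the abbreviated chemical list are hypothetical stand-ins for illustration, not the actual FracFocus or Open-FracFocus schema.

```python
import csv
from collections import defaultdict

# Hypothetical short list of Safe Drinking Water Act-listed chemicals;
# the actual analysis covered 28 regulated compounds.
SDWA_CHEMICALS = {"ethylene glycol", "acrylamide", "formaldehyde",
                  "naphthalene", "benzene"}

def summarize(path):
    """Tally the reported mass of SDWA-listed chemicals and the share
    of disclosures that used at least one of them. Assumes a CSV with
    one row per chemical per frack and columns 'disclosure_id',
    'ingredient_name' and 'mass_lbs' (an assumed, simplified format)."""
    total_mass = defaultdict(float)  # chemical -> pounds reported
    disclosures, flagged = set(), set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            disclosures.add(row["disclosure_id"])
            name = row["ingredient_name"].strip().lower()
            if name in SDWA_CHEMICALS:
                flagged.add(row["disclosure_id"])
                total_mass[name] += float(row["mass_lbs"] or 0)
    share = len(flagged) / len(disclosures) if disclosures else 0.0
    return share, dict(total_mass)

share, masses = summarize("fracfocus_export.csv")
print(f"{share:.0%} of disclosures used at least one SDWA-listed chemical")
for chemical, pounds in sorted(masses.items(), key=lambda kv: -kv[1]):
    print(f"{chemical}: {pounds:,.0f} lbs")
```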
This 2020 news report examines possible leakage of fracking wastewater from an underground injection well in west Texas.
We found that from 2014 through 2021, 62% to 73% of reported fracks each year used at least one chemical that the Safe Drinking Water Act recognizes as detrimental to human health and the environment. If not for the Halliburton Loophole, these projects would have been subject to permitting and monitoring requirements, providing information for local communities about potential risks.
In total, fracking companies reported using 282 million pounds of chemicals that would otherwise be regulated under the Safe Drinking Water Act from 2014 through 2021. This likely is an underestimate, since this information is self-reported, covers only 23 states and doesn’t always include sufficient information to calculate mass.
Chemicals used in large quantities included ethylene glycol, an industrial compound found in substances such as antifreeze and hydraulic brake fluid; acrylamide, a widely used industrial chemical that is also present in some foods, food packaging and cigarette smoke; naphthalene, a pesticide made from crude oil or tar; and formaldehyde, a common industrial chemical used in glues, coatings and wood products and also present in tobacco smoke. Naphthalene and acrylamide are possible human carcinogens, and formaldehyde is a known human carcinogen.
The data also show a large spike in the use of benzene in Texas in 2019. Benzene is such a potent human carcinogen that the Safe Drinking Water Act limits exposure to 0.001 milligrams per liter – equivalent to half a teaspoon of liquid in an Olympic-size swimming pool.
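The pool comparison is easy to verify with simple arithmetic. A quick sketch, assuming a 2.5-million-liter Olympic pool and benzene’s density of roughly 0.88 grams per milliliter:

```python
# How much benzene does the 0.001 mg/L limit allow in an Olympic pool?
LIMIT_MG_PER_L = 0.001      # Safe Drinking Water Act limit for benzene
POOL_LITERS = 2_500_000     # 50 m x 25 m x 2 m Olympic-size pool
BENZENE_G_PER_ML = 0.88     # approximate density of liquid benzene
HALF_TSP_ML = 2.46          # half a U.S. teaspoon, in milliliters

allowed_grams = LIMIT_MG_PER_L * POOL_LITERS / 1000   # 2.5 g
allowed_ml = allowed_grams / BENZENE_G_PER_ML         # about 2.8 mL
print(f"Allowed benzene: {allowed_grams:.1f} g, or {allowed_ml:.1f} mL")
print(f"Half a teaspoon: {HALF_TSP_ML} mL")
# About 2.8 mL in 2.5 million liters -- on the order of half a teaspoon,
# as the comparison above suggests.
```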
Many states – including states that require disclosure – allow oil and gas producers to withhold information about chemicals they use in fracking that the companies declare to be proprietary information or trade secrets. This loophole greatly reduces transparency about what chemicals are in fracking fluids.
We found that the share of fracking events reporting at least one proprietary chemical increased from 77% in 2015 to 88% in 2021. Companies reported using about 7.2 billion pounds of proprietary chemicals – more than 25 times the total mass of chemicals listed under the Safe Drinking Water Act that they reported.
Closing the Halliburton Loophole
Overall, our review found that fracking companies have reported using 28 chemicals that would otherwise be regulated under the Safe Drinking Water Act. Ethylene glycol was used in the largest quantities, but acrylamide, formaldehyde and naphthalene were also common.
Given that each of these chemicals has serious health effects, and that hundreds of spills are reported annually at fracking wells, we believe action is needed to protect public and environmental health, and to enable scientists to rigorously monitor and research fracking chemical use.
Based on our findings, we believe Congress should pass a law requiring full disclosure of all chemicals used in fracking, including proprietary chemicals. We also recommend disclosing fracking data in a centralized and federally mandated database, managed by an agency such as the EPA or the National Institute of Environmental Health Sciences. Finally, we recommend that Congress repeal the Halliburton Loophole and once again regulate fracking under the Safe Drinking Water Act.
As the U.S. ramps up liquefied natural gas exports in response to the war in Ukraine, fracking could continue for the foreseeable future. In our view, it’s urgent to ensure that it is carried out as safely as possible.
Updating the flooring can help infuse new life into tired, outdated bathrooms. For an upscale, polished look that doesn’t have to break the bank, consider installing tile flooring.
Before you get started, you’ll want to make some decisions about the look and feel of your flooring:
Ceramic or stone? Weigh factors such as porosity, how slippery the surface may be when wet and how well it retains heat or cold. Ultimately, your decision hinges on the needs and uses of your family.
Complement or contrast? Define the overall style you want as well as the colors and tones that will help best achieve your vision.
Big or small? Generally, the larger the tile, the fewer grout lines, and too many grout lines in a smaller space can create the illusion of clutter. However, smaller tiles can eliminate the need to make multiple awkward cuts, and small tiles are perfect for creating accent patterns or introducing a splash of color.
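To see how tile size changes the grout-line math, here is a rough planning sketch. The room dimensions, waste factor and the simplification of ignoring grout-joint width and doorways are all illustrative assumptions:

```python
import math

def tile_estimate(room_ft=(8, 10), tile_in=12, waste=0.10):
    """Rough tile count and interior grout-line length for a rectangular
    room. Ignores grout-joint width and doorways, so treat the output
    as a planning estimate only."""
    tiles_wide = room_ft[0] * 12 / tile_in
    tiles_long = room_ft[1] * 12 / tile_in
    count = math.ceil(tiles_wide * tiles_long * (1 + waste))
    # Interior grout lines run between tile columns and between rows.
    grout_ft = ((math.ceil(tiles_wide) - 1) * room_ft[1]
                + (math.ceil(tiles_long) - 1) * room_ft[0])
    return count, grout_ft

for size in (4, 12, 18):  # mosaic, standard and large-format tiles
    count, grout = tile_estimate(tile_in=size)
    print(f"{size}-inch tile: about {count} tiles, {grout} ft of grout lines")
```

In this example, moving from 4-inch to 18-inch tile in an 8-by-10-foot room cuts grout lines from roughly 462 to 98 linear feet, which illustrates why larger tiles read as less cluttered.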
When you’ve got your overall look and materials selected, keep these steps in mind as you begin laying the flooring:
Prepare your subfloor. Use a level to check for uneven spots; you need an even surface to prevent cracks in the tile or grout as well as rough spots that could pose tripping hazards. Use patching and leveling material to create a consistent surface. Apply a thin layer of mortar, then attach your cement backer board with screws. Cover joints with cement board tape, apply another thin layer of mortar, smooth and allow to dry.
To ensure square placement, draw reference lines on the subfloor using a level and carpenter square. Tile should start in the middle of the room and move out toward the walls, so make your initial reference lines as close to the center as possible. Mark additional reference lines as space allows, such as 2-foot-by-2-foot squares.
Do a test run with your chosen tile by laying it out on the floor. There are color variations in most tile patterns, so you’ll want to verify each tile blends well with the next.
Mix tile mortar and use the thin side of a trowel to apply mortar at a 45-degree angle. Use the combed side to spread evenly and return excess mortar to the bucket. Remember to apply mortar in small areas, working as you go, so it doesn’t dry before you’re ready to lay the tile.
When laying tile, use your reference lines as guides. Press and wiggle tile slightly for the best adherence.
Use spacers to create even lines between one tile and the next, removing excess mortar with a damp sponge or rag.
As you complete a section of tile, use a level and mallet to verify the tiles are sitting evenly.
Let mortar dry 24 hours before grouting.
Remove spacers then apply grout to joints, removing excess as you go.
Allow grout to dry per the manufacturer’s instructions then go back over tile with a damp sponge to set grout lines and clean grout residue.
Once grout has cured – usually at least a couple weeks – apply sealer to protect it.
Find more ideas and tips for updating your bathroom at eLivingtoday.com.