Tuesday, April 4, 2023

Regulating AI: 3 experts explain why it’s difficult to do and important to get right

The new generation of AI tools makes it a lot easier to produce convincing misinformation. Photo by Olivier Douliery/AFP via Getty Images
S. Shyam Sundar, Penn State; Cason Schmit, Texas A&M University, and John Villasenor, University of California, Los Angeles

From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, a group of artificial intelligence researchers and industry figures urged the industry on March 29, 2023, to pause further training of the latest AI technologies or, barring that, for governments to “impose a moratorium.”

These technologies – image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and don’t require technical knowledge to use.

Given the potential for widespread harm as technology companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it’s so important to get it right.

To jump ahead to each response, here’s a list:


Human foibles and a moving target
Combining “soft” and “hard” approaches
Four key questions to ask


Human foibles and a moving target

S. Shyam Sundar, Professor of Media Effects & Director, Center for Socially Responsible AI, Penn State

The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on “automation bias,” or the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft is flying on autopilot.

Numerous studies in my lab have shown that when a machine, rather than a human, is identified as a source of interaction, it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on. It clouds the user’s judgment and results in the user overly trusting machines. However, simply disabusing people of AI’s infallibility is not sufficient, because humans are known to unconsciously assume competence even when the technology doesn’t warrant it.

Research has also shown that people treat computers as social beings when the machines show even the slightest hint of humanness, such as the use of conversational language. In these cases, people apply social rules of human interaction, such as politeness and reciprocity. So, when computers seem sentient, people tend to trust them, blindly. Regulation is needed to ensure that AI products deserve this trust and don’t exploit it.

AI poses a unique challenge because, unlike in traditional engineering systems, designers cannot be sure how AI systems will behave. When a traditional automobile was shipped out of the factory, engineers knew exactly how it would function. But with self-driving cars, engineers can never be sure how the cars will perform in novel situations.

Lately, thousands of people around the world have been marveling at what large generative AI models like GPT-4 and DALL-E 2 produce in response to their prompts. None of the engineers involved in developing these AI models could tell you exactly what the models will produce. To complicate matters, such models change and evolve with more and more interaction.

All this means there is plenty of potential for misfires. Therefore, a lot depends on how AI systems are deployed and what provisions for recourse are in place when human sensibilities or welfare are hurt. AI is more of an infrastructure, like a freeway. You can design it to shape human behaviors in the collective, but you will need mechanisms for tackling abuses, such as speeding, and unpredictable occurrences, like accidents.

AI developers will also need to be inordinately creative in envisioning ways that the system might behave and try to anticipate potential violations of social standards and responsibilities. This means there is a need for regulatory or governance frameworks that rely on periodic audits and policing of AI’s outcomes and products, though I believe that these frameworks should also recognize that the systems’ designers cannot always be held accountable for mishaps.

Artificial intelligence researcher Joanna Bryson describes how professional organizations can play a role in regulating AI.


Combining ‘soft’ and ‘hard’ approaches

Cason Schmit, Assistant Professor of Public Health, Texas A&M University

Regulating AI is tricky. To regulate AI well, you must first define AI and understand anticipated AI risks and benefits. Legally defining AI is important to identify what is subject to the law. But AI technologies are still evolving, so it is hard to pin down a stable legal definition.

Understanding the risks and benefits of AI is also important. Good regulations should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits might be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.

Lawmakers are often too slow to adapt to the rapidly changing technological environment. Some new laws are obsolete by the time they are enacted or even introduced. Without new laws, regulators have to use old laws to address new problems. Sometimes this leads to legal barriers for social benefits or legal loopholes for harmful conduct.

“Soft laws” are the alternative to traditional “hard law” approaches of legislation intended to prevent specific violations. In the soft law approach, a private organization sets rules or standards for industry members. These can change more rapidly than traditional lawmaking. This makes soft laws promising for emerging technologies because they can adapt quickly to new applications and risks. However, soft laws can mean soft enforcement.

Megan Doerr, Jennifer Wagner and I propose a third way: Copyleft AI with Trusted Enforcement (CAITE). This approach combines two very different concepts in intellectual property — copyleft licensing and patent trolls.

Copyleft licensing allows for content to be used, reused or modified easily under the terms of a license – for example, open-source software. The CAITE model uses copyleft licenses to require AI users to follow specific ethical guidelines, such as transparent assessments of the impact of bias.

In our model, these licenses also transfer the legal right to enforce license violations to a trusted third party. This creates an enforcement entity that exists solely to enforce ethical AI standards and can be funded in part by fines from unethical conduct. This entity is like a patent troll in that it is private rather than governmental and it supports itself by enforcing the legal intellectual property rights that it collects from others. In this case, rather than enforcement for profit, the entity enforces the ethical guidelines defined in the licenses – a “troll for good.”

This model is flexible and adaptable to meet the needs of a changing AI environment. It also enables enforcement options as substantial as those of a traditional government regulator. In this way, it combines the best elements of hard and soft law approaches to meet the unique challenges of AI.

Though generative AI has been grabbing headlines of late, other types of AI have been posing challenges for regulators for years, particularly in the area of data privacy.


Four key questions to ask

John Villasenor, Professor of Electrical Engineering, Law, Public Policy, and Management, University of California, Los Angeles

The extraordinary recent advances in large language model-based generative AI are spurring calls to create new AI-specific regulation. Here are four key questions to ask as that dialogue progresses:

1) Is new AI-specific regulation necessary? Many of the potentially problematic outcomes from AI systems are already addressed by existing frameworks. If an AI algorithm used by a bank to evaluate loan applications leads to racially discriminatory loan decisions, that would violate the Fair Housing Act. If the AI software in a driverless car causes an accident, products liability law provides a framework for pursuing remedies.

2) What are the risks of regulating a rapidly changing technology based on a snapshot in time? A classic example of this is the Stored Communications Act, which was enacted in 1986 to address then-novel digital communication technologies like email. In enacting the SCA, Congress provided substantially less privacy protection for emails more than 180 days old.

The logic was that limited storage space meant that people were constantly cleaning out their inboxes by deleting older messages to make room for new ones. As a result, messages stored for more than 180 days were deemed less important from a privacy standpoint. It’s not clear that this logic ever made sense, and it certainly doesn’t make sense in the 2020s, when the majority of our emails and other stored digital communications are older than six months.

A common rejoinder to concerns about regulating technology based on a single snapshot in time is this: If a law or regulation becomes outdated, update it. But this is easier said than done. Most people agree that the SCA became outdated decades ago. But because Congress hasn’t been able to agree on specifically how to revise the 180-day provision, it’s still on the books over a third of a century after its enactment.

3) What are the potential unintended consequences? The Allow States and Victims to Fight Online Sex Trafficking Act of 2017 was a law passed in 2018 that revised Section 230 of the Communications Decency Act with the goal of combating sex trafficking. While there’s little evidence that it has reduced sex trafficking, it has had a hugely problematic impact on a different group of people: sex workers who used to rely on the websites knocked offline by FOSTA-SESTA to exchange information about dangerous clients. This example shows the importance of taking a broad look at the potential effects of proposed regulations.

4) What are the economic and geopolitical implications? If regulators in the United States act to intentionally slow the progress in AI, that will simply push investment and innovation — and the resulting job creation — elsewhere. While emerging AI raises many concerns, it also promises to bring enormous benefits in areas including education, medicine, manufacturing, transportation safety, agriculture, weather forecasting, access to legal services and more.

I believe AI regulations drafted with the above four questions in mind will be more likely to successfully address the potential harms of AI while also ensuring access to its benefits.

S. Shyam Sundar, James P. Jimirro Professor of Media Effects, Co-Director, Media Effects Research Laboratory, & Director, Center for Socially Responsible AI, Penn State; Cason Schmit, Assistant Professor of Public Health, Texas A&M University, and John Villasenor, Professor of Electrical Engineering, Law, Public Policy, and Management, University of California, Los Angeles

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Meet the next four people headed to the Moon – how the diverse crew of Artemis II shows NASA’s plan for the future of space exploration

The Artemis II mission will send four astronauts on a flyby of the Moon. NASA
Wendy Whitman Cobb, Air University

On April 3, 2023, NASA announced the four astronauts who will make up the crew of Artemis II, which is scheduled to launch in late 2024. The Artemis II mission will send these four astronauts on a 10-day mission that culminates in a flyby of the Moon. While they won’t head to the surface, they will be the first people in more than 50 years to leave Earth’s immediate vicinity and travel near the Moon.

This mission will test the technology and equipment that’s necessary for future lunar landings and is a significant step on NASA’s planned journey back to the surface of the Moon. As part of this next era in lunar and space exploration, NASA has outlined a few clear goals. The agency is hoping to inspire young people to get interested in space, to make the broader Artemis program more economically and politically sustainable and, finally, to continue encouraging international collaboration on future missions.

From my perspective as a space policy expert, the four Artemis II astronauts fully embody these goals.

Crew members of the Artemis II mission are NASA astronauts Christina Hammock Koch, Reid Wiseman and Victor Glover and Canadian Space Agency astronaut Jeremy Hansen. NASA

Who are the four astronauts?

The four members of the Artemis II crew are highly experienced, with three of them having flown in space previously. The one rookie flying onboard is notably representing Canada, making this an international mission, as well.

The commander of the mission will be Reid Wiseman, a naval aviator and test pilot. On his previous mission to the International Space Station, he spent 165 days in space and completed a record 82 hours of experiments in just one week. Wiseman was also the chief of the U.S. astronaut office from 2020 to 2023.

Serving as pilot is Victor Glover. After flying more than 3,000 hours in more than 40 different aircraft, Glover was selected for the astronaut corps in 2013. He was the pilot for the Crew-1 mission, the first mission that used a SpaceX rocket and capsule to bring astronauts to the International Space Station, and served as a flight engineer on the ISS.

The lone woman on the crew is mission specialist Christina Hammock Koch. She has spent 328 days in space, more than any other woman, across three ISS expeditions. She has also participated in six different spacewalks, including the first three all-women spacewalks. Koch is an engineer by trade, having previously worked at NASA’s Goddard Space Flight Center.

The crew will be rounded out by a Canadian, Jeremy Hansen. Though a spaceflight rookie, he has participated in space simulations like NEEMO 19, in which he lived in a facility on the ocean floor to simulate deep space exploration. Before being selected to Canada’s astronaut corps in 2009, he was an F-18 pilot in the Royal Canadian Air Force.

These four astronauts have followed pretty typical paths to space. Like the Apollo astronauts, three of them began their careers as military pilots. Two, Wiseman and Glover, were trained test pilots, just as most of the Apollo astronauts were.

Mission specialist Koch, with her engineering expertise, is more typical of modern astronauts. The position of mission or payload specialist was created for the space shuttle program, making spaceflight possible for those with more scientific backgrounds.

The crew will make a single flyby of the Moon in an Orion capsule. NASA, CC BY-NC

A collaborative, diverse future

Unlike the Apollo program of the 1960s and 1970s, with Artemis, NASA has placed a heavy emphasis on building a politically sustainable lunar program by fostering the participation of a diverse group of people and countries.

The participation of other countries in NASA missions – Canada in this case – is particularly important for the Artemis program and the Artemis II crew. International collaboration is beneficial for a number of reasons. First, it allows NASA to lean on the strengths and expertise of engineers, researchers and space agencies of U.S. allies and divide up the production of technologies and costs. It also helps the U.S. continue to provide international leadership in space as competition with other countries – notably China – heats up.

The crew of Artemis II is also quite diverse compared with the Apollo astronauts. NASA has often pointed out that the Artemis program will send the first woman and the first person of color to the Moon. With Koch and Glover on board, Artemis II is the first step in fulfilling that promise and moving toward the goal of inspiring future generations of space explorers.

The four astronauts aboard Artemis II will be the first humans to return to the vicinity of the Moon since 1972. The flyby will take the Orion capsule in one pass around the far side of the Moon. During the flight, the crew will monitor the spacecraft and test a new communication system that will allow them to send more data and communicate more easily with Earth than previous systems.

If all goes according to plan, in late 2025 Artemis III will mark humanity’s return to the lunar surface, this time also with a diverse crew. While the Artemis program still has a way to go before humans set foot on the Moon once again, the announcement of the Artemis II crew shows how NASA intends to get there in a diverse and collaborative way.

Wendy Whitman Cobb, Professor of Strategy and Security Studies, Air University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Republicans and Democrats see news bias only in stories that clearly favor the other party

If you detect news media bias, that perception may be a result of your own bias. Anna Moneymaker/Getty Images
Marjorie Hershey, Indiana University

Charges of media bias – that “the media” are trying to brainwash Americans by feeding the public only one side of every issue – have become as common as campaign ads in the run-up to the midterm elections.

As a political scientist who has examined media coverage of the Trump presidency and campaigns, I can say that this is what social science research tells us about media bias.

First, media bias is in the eye of the beholder.

Communications scholars have found that if you ask people in any community, using scientific polling methods, whether their local media are biased, you’ll find that about half say yes. But of that half, typically a little more than a quarter say that their local media are biased against Republicans, and a little less than a quarter say the same local media are biased against Democrats.

Research shows that Republicans and Democrats spot bias only in articles that clearly favor the other party. If an article tilts in favor of their own party, they tend to see it as unbiased.

Many people, then, define “bias” as “anything that doesn’t agree with me.” It’s not hard to see why.

‘Liberal bias’ in the media is a constant topic on Fox News.

‘Media’ is a plural word

American party politics has become increasingly polarized in recent decades. Republicans have become more consistently conservative, and Democrats have become more consistently liberal to moderate.

As the lines have been drawn more clearly, many people have developed hostile feelings toward the opposition party.

In a 2016 Pew Research Center poll, 45% of Republicans said the Democratic Party’s policies are “so misguided that they threaten the nation’s well-being,” and 41% of Democrats said the same about Republicans. A poll conducted in midyear 2022 by Pew showed that “72% of Republicans regard Democrats as more immoral, and 63% of Democrats say the same about Republicans.”

Not surprisingly, media outlets have arisen to appeal primarily to people who share a conservative view, or people who share a liberal view.

That doesn’t mean that “the media” are biased. There are hundreds of thousands of media outlets in the U.S. – newspapers, radio, network TV, cable TV, blogs, websites and social media. These news outlets don’t all take the same perspective on any given issue.

If you want a very conservative news site, it is not hard to find one, and the same with a very liberal news site.

First Amendment rules

“The media,” then, present a variety of different perspectives. That’s the way a free press works.

The Constitution’s First Amendment says Congress shall make no law limiting the freedom of the press. It doesn’t say that Congress shall require all media sources to be “unbiased.” Rather, it implies that as long as Congress does not systematically suppress any particular point of view, then the free press can do its job as one of the primary checks on a powerful government.

When the Constitution was written and for most of U.S. history, the major news sources – newspapers, for most of that time – were explicitly biased. Most were sponsored by a political party or a partisan individual.

The notion of objective journalism – that media must report both sides of every issue in every story – barely existed until the late 1800s. It reached full flower only in the few decades when broadcast television, limited to three major networks, was the primary source of political information.

Since that time, the media universe has expanded to include huge numbers of internet news sites, cable channels and social media posts. So if you feel that the media sources you’re reading or watching are biased, you can read a wider variety of media sources.

Front page of the April 15, 1789 edition of the Gazette of the United States
Thomas Jefferson described this partisan newspaper, The Gazette of the United States, as ‘a paper of pure Toryism … disseminating the doctrines of monarchy, aristocracy, and the exclusion of the people.’ Library of Congress, Chronicling America collection

If it bleeds, it leads

There is one form of actual media bias. Almost all media outlets need audiences in order to exist. Some can’t survive financially without an audience; others want the prestige that comes from attracting a big audience.

Thus, the media define as “news” the kinds of stories that will attract an audience: those that feature drama, conflict, engaging pictures and immediacy. That’s what most people find interesting. They don’t want to read a story headlined “Dog bites man.” They want “Man bites dog.”

The problem is that a focus on such stories crowds out what we need to know to protect our democracy, such as: How do the workings of American institutions benefit some groups and disadvantage others? In what ways do our major systems – education, health care, national defense and others – function effectively or less effectively?

These analyses are vital to citizens – if we fail to protect our democracy, our lives will be changed forever – but they aren’t always fun to read. So they get covered much less than celebrity scandals or murder cases – which, while compelling, don’t really affect the ability to sustain a democratic system.

Writer Dave Barry demonstrated this media bias in favor of dramatic stories in a 1998 column.

He wrote, “Let’s consider two headlines. FIRST HEADLINE: ‘Federal Reserve Board Ponders Reversal of Postponement of Deferral of Policy Reconsideration.’ SECOND HEADLINE: ‘Federal Reserve Board Caught in Motel with Underage Sheep.’ Be honest, now. Which of these two stories would you read?”

By focusing on the daily equivalent of the underage sheep, media can direct our attention away from the important systems that affect our lives. That isn’t the media’s fault; we are the audience whose attention media outlets want to attract.

But as long as we think of governance in terms of its entertainment value and media bias in terms of Republicans and Democrats, we’ll continue to be less informed than we need to be. That’s the real media bias.

This story is an updated version of an article that was originally published on Oct. 15, 2020.

Marjorie Hershey, Professor Emeritus of Political Science, Indiana University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

An Elevated Sandwich for Any Occasion

(Culinary.net) They might not be the fanciest of foods, but when you eat a filling, protein-packed sandwich, you are usually left satisfied and full of energy. From ham and turkey to mayo and mustard, the possibilities are nearly endless when sandwiches are on the menu.

With so many customizable options for bread, meats, toppings and more, it’s easy to create the perfect sandwich. For example, this Croissant Chicken Salad Sandwich with Sprouts is served on a fluffy, light, mouthwatering croissant and features a hearty mixture of chicken, bacon and veggies to give you that boost you have been craving.

To make the sandwich, line six slices of bacon in a skillet. Cook until slightly crispy. Drain over a paper towel and crush into pieces.

On a cutting board, cut cherry tomatoes in half and chop green onions.

In a mixing bowl, combine chicken, mayonnaise, chopped green onions, pepper, bacon crumbles and halved cherry tomatoes.

Cut croissants in half and scoop a generous amount of chicken salad onto the bottom of the croissant. Top with sprouts and replace top croissant.

The chicken is creamy, the bacon crumbles are crispy and the green onions give it crunch, making this sandwich perfect for nearly any occasion. Whether it’s a bridal shower, picnic at the park with family or just lunch on a weekend afternoon, it can give you the energy to go forward and finish your day strong.

Find more recipes at Culinary.net.

If you made this recipe at home, use #MyCulinaryConnection on your favorite social network to share your work.


Croissant Chicken Salad Sandwich with Sprouts

Servings: 6

  • 6 strips bacon
  • 1 rotisserie chicken, shredded
  • 1/2 cup mayonnaise
  • 1/4 cup green onions, chopped
  • 1/2 teaspoon pepper
  • 1/2 cup cherry tomatoes, quartered
  • 6 croissants
  • sprouts
  1. In skillet, arrange bacon and cook until slightly crispy. Drain bacon over paper towel; allow to dry. Crush into pieces.
  2. In large bowl, stir chicken, mayonnaise, green onions and pepper until combined. Add bacon and tomatoes; stir until combined.
  3. Cut croissants in half. Spoon generous portion of chicken salad over bottom croissant. Top with sprouts. Replace top croissant.
SOURCE:
Culinary.net

Companies that frack for oil and gas can keep a lot of information secret – but what they disclose shows widespread use of hazardous chemicals

A deep injection well used for disposal of fracking wastewater in Kern County, Calif. Citizens of the Planet/Education Images/Universal Images Group via Getty Images
Vivian R. Underhill, Northeastern University and Lourdes Vera, University at Buffalo

From rural Pennsylvania to Los Angeles, more than 17 million Americans live within a mile of at least one oil or gas well. Since 2014, most new oil and gas wells have been fracked.

Fracking, short for hydraulic fracturing, is a process in which workers inject fluids underground under high pressure. The fluids fracture coal beds and shale rock, allowing the gas and oil trapped within the rock to rise to the surface. Advances in fracking launched a huge expansion of U.S. oil and gas production starting in the early 2000s but also triggered intense debate over its health and environmental impacts.

Fracking fluids are up to 97% water, but they also contain a host of chemicals that perform functions such as dissolving minerals and killing bacteria. The U.S. Environmental Protection Agency classifies a number of these chemicals as toxic or potentially toxic.

The Safe Drinking Water Act, enacted in 1974, regulates underground injection of chemicals that can threaten drinking water supplies. However, Congress has exempted fracking from most federal regulation under the law. As a result, fracking is regulated at the state level, and requirements vary from state to state.

We study the oil and gas industry in California and Texas and are members of the Wylie Environmental Data Justice Lab, which studies fracking chemicals in aggregate. In a recent study, we worked with colleagues to provide the first systematic analysis of chemicals found in fracking fluids that would be regulated under the Safe Drinking Water Act if they were injected underground for other purposes. Our findings show that excluding fracking from federal regulation under the Safe Drinking Water Act is exposing the public to an array of chemicals that are widely recognized as threats to public health.

A schematic of a hydraulic fracking operation, with wastewater temporarily stored in a surface waste pit. wetcake via Getty Images

Averting federal regulation

Fracking technologies were originally developed in the 1940s but only entered widespread use for fossil fuel extraction in the U.S. in the early 2000s. Since the process involves injecting chemicals underground and then disposing of contaminated water that flows back to the surface, it faced potential regulation under multiple U.S. environmental laws.

In 1997, the 11th Circuit Court of Appeals ruled that fracking should be regulated under the Safe Drinking Water Act. This would have required oil and gas producers to develop underground injection control plans, disclose the contents of their fracking fluids and monitor local water sources for contamination.

In response, the oil and gas industry lobbied Congress to exempt fracking from regulation under the Safe Drinking Water Act. Congress did so as part of the Energy Policy Act of 2005.

This provision is widely known as the Halliburton Loophole because it was championed by former U.S. Vice President Dick Cheney, who previously served as CEO of oil services company Halliburton. The company patented fracking technologies in the 1940s and remains one of the world’s largest suppliers of fracking fluid.

Fracking fluids and health

Over the past two decades, studies have linked exposure to chemicals in fracking fluid with a wide range of health risks. These risks include giving birth prematurely and having babies with low birth weights or congenital heart defects, as well as heart failure, asthma and other respiratory illnesses among patients of all ages.

Though researchers have produced numerous studies on the health effects of these chemicals, federal exemptions and sparse data still make it hard to monitor the impacts of their use. Further, much existing research focuses on individual compounds, not on the cumulative effects of exposure to combinations of them.

Chemical use in fracking

For our review we consulted the FracFocus Chemical Disclosure Registry, which is managed by the Ground Water Protection Council, an organization of state government officials. Currently, 23 states – including major producers like Pennsylvania and Texas – require oil and gas companies to report to FracFocus information such as well locations, operators and the masses of each chemical used in fracking fluids.

We used a tool called Open-FracFocus, which uses open-source coding to make FracFocus data more transparent, easily accessible and ready to analyze.
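This kind of registry data lends itself to simple tabulation. As a rough illustration, here is a minimal Python sketch of how one might compute the share of disclosures reporting at least one regulated chemical; the column names (“DisclosureId”, “IngredientName”), the toy data and the three-chemical watch list are hypothetical stand-ins, not the actual FracFocus schema, the Open-FracFocus API or the study’s methodology:

```python
# Illustrative sketch only: FracFocus exports vary, and these column
# names and sample rows are assumptions made for this example.
import pandas as pd

# A toy stand-in for a FracFocus-style disclosure table: one row per
# chemical ingredient reported in a hydraulic fracturing job.
disclosures = pd.DataFrame({
    "DisclosureId": [1, 1, 2, 2, 3],
    "IngredientName": [
        "water", "ethylene glycol",
        "water", "guar gum",
        "formaldehyde",
    ],
})

# Hypothetical watch list of chemicals regulated under the Safe
# Drinking Water Act (the study's actual list contains 28 chemicals).
sdwa_chemicals = {"ethylene glycol", "formaldehyde", "benzene"}

# Flag rows naming a listed chemical, then count how many distinct
# disclosures contain at least one such row.
flagged = disclosures[disclosures["IngredientName"].isin(sdwa_chemicals)]
share = flagged["DisclosureId"].nunique() / disclosures["DisclosureId"].nunique()
print(f"{share:.0%} of disclosures report at least one SDWA chemical")
```

In this toy table, two of the three disclosures contain a listed chemical; the study applied the same kind of per-disclosure counting across millions of reported ingredients.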

This 2020 news report examines possible leakage of fracking wastewater from an underground injection well in west Texas.

We found that from 2014 through 2021, 62% to 73% of reported fracks each year used at least one chemical that the Safe Drinking Water Act recognizes as detrimental to human health and the environment. If not for the Halliburton Loophole, these projects would have been subject to permitting and monitoring requirements, providing information for local communities about potential risks.

In total, fracking companies reported using 282 million pounds of chemicals that would otherwise be regulated under the Safe Drinking Water Act from 2014 through 2021. This likely is an underestimate, since this information is self-reported, covers only 23 states and doesn’t always include sufficient information to calculate mass.

Chemicals used in large quantities included ethylene glycol, an industrial compound found in substances such as antifreeze and hydraulic brake fluid; acrylamide, a widely used industrial chemical that is also present in some foods, food packaging and cigarette smoke; naphthalene, a pesticide made from crude oil or tar; and formaldehyde, a common industrial chemical used in glues, coatings and wood products and also present in tobacco smoke. Naphthalene and acrylamide are possible human carcinogens, and formaldehyde is a known human carcinogen.

The data also show a large spike in the use of benzene in Texas in 2019. Benzene is such a potent human carcinogen that the Safe Drinking Water Act limits exposure to 0.001 milligrams per liter – equivalent to half a teaspoon of liquid in an Olympic-size swimming pool.
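The swimming-pool comparison can be sanity-checked with back-of-the-envelope arithmetic. The figures below are assumptions for illustration: a 2,500,000-liter Olympic pool, a 4.93 mL teaspoon and a benzene density of roughly 0.88 g/mL.

```python
# Rough check: how much benzene does 0.001 mg/L permit in an Olympic pool,
# and how does that compare to half a teaspoon of liquid?
POOL_L = 2_500_000          # assumed Olympic-pool volume, liters
LIMIT_MG_PER_L = 0.001      # exposure limit cited above
BENZENE_G_PER_ML = 0.88     # assumed density

allowed_mg = LIMIT_MG_PER_L * POOL_L            # total benzene allowed in the pool
allowed_mL = allowed_mg / 1000 / BENZENE_G_PER_ML  # mg -> g -> mL

half_teaspoon_mL = 4.93 / 2
print(f"{allowed_mL:.1f} mL allowed vs {half_teaspoon_mL:.2f} mL in half a teaspoon")
```

The two volumes come out within a fraction of a milliliter of each other, so the half-teaspoon analogy holds as an order-of-magnitude picture.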

Many states – including states that require disclosure – allow oil and gas producers to withhold information about chemicals they use in fracking that the companies declare to be proprietary information or trade secrets. This loophole greatly reduces transparency about what chemicals are in fracking fluids.

We found that the share of fracking events reporting at least one proprietary chemical increased from 77% in 2015 to 88% in 2021. Companies reported using about 7.2 billion pounds of proprietary chemicals – more than 25 times the total mass of chemicals listed under the Safe Drinking Water Act that they reported.
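The "more than 25 times" multiple is simple division on the two reported masses:

```python
# Ratio of reported proprietary-chemical mass to reported SDWA-listed mass.
proprietary_lb = 7_200_000_000   # ~7.2 billion pounds
sdwa_listed_lb = 282_000_000     # 282 million pounds

ratio = proprietary_lb / sdwa_listed_lb
print(round(ratio, 1))
```

In other words, for every pound of disclosed, would-be-regulated chemical, companies reported roughly 25 pounds whose identity is withheld.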

Closing the Halliburton Loophole

Overall, our review found that fracking companies have reported using 28 chemicals that would otherwise be regulated under the Safe Drinking Water Act. Ethylene glycol was used in the largest quantities, but acrylamide, formaldehyde and naphthalene were also common.

Given that each of these chemicals has serious health effects, and that hundreds of spills are reported annually at fracking wells, we believe action is needed to protect public and environmental health, and to enable scientists to rigorously monitor and research fracking chemical use.

Based on our findings, we believe Congress should pass a law requiring full disclosure of all chemicals used in fracking, including proprietary chemicals. We also recommend disclosing fracking data in a centralized and federally mandated database, managed by an agency such as the EPA or the National Institute of Environmental Health Sciences. Finally, we recommend that Congress repeal the Halliburton Loophole and once again regulate fracking under the Safe Drinking Water Act.

As the U.S. ramps up liquefied natural gas exports in response to the war in Ukraine, fracking could continue for the foreseeable future. In our view, it’s urgent to ensure that it is carried out as safely as possible.

Vivian R. Underhill, Postdoctoral Researcher in Social Science and Environmental Health, Northeastern University and Lourdes Vera, Assistant Professor of Sociology and Environment and Sustainability, University at Buffalo

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Installing Bathroom Tile Like a Pro

Updating the flooring can help infuse new life into tired, outdated bathrooms. For an upscale, polished look that doesn’t have to break the bank, consider installing tile flooring.

Before you get started, you’ll want to make some decisions about the look and feel of your flooring:

Ceramic or stone? Weigh factors such as porosity, how slippery the surface may be when wet and how well it retains heat or cold. Ultimately, your decision hinges on the needs and uses of your family.

Complement or contrast? Define the overall style you want as well as the colors and tones that will help best achieve your vision.

Big or small? Generally, the larger the tile, the fewer grout lines, and too many grout lines in a smaller space can create the illusion of clutter. However, smaller tiles can eliminate the need to make multiple awkward cuts, and small tiles are perfect for creating accent patterns or introducing a splash of color.

When you’ve got your overall look and materials selected, keep these steps in mind as you begin laying the flooring:

  1. Prepare your subfloor. Use a level to check for uneven spots; you need an even surface to prevent cracks in the tile or grout as well as rough spots that could pose tripping hazards. Use patching and leveling material to create a consistent surface. Apply a thin layer of mortar then attach your cement backer board with screws. Cover joints with cement board tape, apply another thin layer of mortar, smooth and allow to dry.
  2. To ensure square placement, draw reference lines on the subfloor using a level and carpenter square. Tile should start in the middle of the room and move out toward the walls, so make your initial reference lines as close to the center as possible. Mark additional reference lines as space allows, such as 2-foot-by-2-foot squares.
  3. Do a test run with your chosen tile by laying it out on the floor. There are color variations in most tile patterns, so you’ll want to verify each tile blends well with the next.
  4. Mix tile mortar and use the thin side of a trowel to apply mortar at a 45-degree angle. Use the combed side to spread evenly and return excess mortar to the bucket. Remember to apply mortar in small areas, working as you go, so it doesn’t dry before you’re ready to lay the tile.
  5. When laying tile, use your reference lines as guides. Press and wiggle tile slightly for the best adherence.
  6. Use spacers to create even lines between one tile and the next, removing excess mortar with a damp sponge or rag.
  7. As you complete a section of tile, use a level and mallet to verify the tiles are sitting evenly.
  8. Let mortar dry 24 hours before grouting.
  9. Remove spacers then apply grout to joints, removing excess as you go.
  10. Allow grout to dry per the manufacturer’s instructions then go back over tile with a damp sponge to set grout lines and clean grout residue.
  11. Once grout has cured – usually at least a couple weeks – apply sealer to protect it.
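Before buying materials, it also helps to estimate how many tiles the job will take. This is a minimal sketch with hypothetical room and tile dimensions, assuming the common rule of thumb of adding about 10% for cuts and breakage.

```python
import math

# Hypothetical job: an 8 ft x 10 ft bathroom floor, 12-inch square tiles.
room_w_ft, room_l_ft = 8.0, 10.0
tile_side_in = 12.0
waste_factor = 1.10   # assumed 10% allowance for cuts and breakage

tile_area_sqft = (tile_side_in / 12) ** 2
# Round before ceiling to avoid floating-point artifacts like 88.00000000000001.
tiles_needed = math.ceil(round(room_w_ft * room_l_ft / tile_area_sqft * waste_factor, 6))
print(tiles_needed)   # 88
```

Smaller tiles or a diagonal layout raise the waste allowance, so treat the 10% figure as a floor, not a ceiling.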

 
Find more ideas and tips for updating your bathroom at eLivingtoday.com.

 

Photo courtesy of Unsplash


How men’s golf has been shaken by Saudi Arabia’s billion-dollar drive for legitimacy

Augusta, home of The Masters – golf’s first major championship of the year. Slusing/Shutterstock
Leon Davis, Teesside University and Dan Plumley, Sheffield Hallam University

The first major tournament of 2023 in men’s professional golf could be a particularly tense affair. The Masters, held every April in the US city of Augusta (Georgia), sees the world’s finest players compete for a prize purse of around US$15 million (£12.1m), as well as the famous green jacket for the winner.

Approximately 90 players will compete for that jacket after a tumultuous 12 months for the sport, during which some of the best-known golfers have controversially broken away from the US-based PGA Tour, the biggest and most powerful organiser of professional golf events.

They chose instead to join LIV Golf, a new rival tour funded by Saudi Arabia’s sovereign wealth fund, causing a significant rift among golf’s leading male professionals. Now in the early stages of its crucial second season, with US$2 billion (£1.65 billion) having been invested, LIV Golf is taking a real swing at the golfing establishment.

Our recent research suggests that LIV Golf was constructed not simply to add an extra layer to the men’s professional game or create a breakaway league. In fact, it appears designed to reshape men’s professional golf entirely.

From the outset, LIV Golf promised to be “golf, but louder” – with shorter rounds, limited competitor fields, and lots and lots of money – to make the sport more attractive to new spectators.

A defensive PGA Tour immediately lashed out, banning any players who joined LIV Golf from its own competitions. LIV Golf responded by saying the PGA Tour was being “vindictive” and divisive.

Top players came out fighting for both sides. And while LIV Golf was initially labelled “dead in the water” by Northern Ireland’s four-time major champion Rory McIlroy in early 2022, not everyone agreed. High-profile defections from the PGA Tour to LIV Golf included major champions Phil Mickelson, Dustin Johnson, Brooks Koepka, Bryson DeChambeau, and the 2022 Open Champion Cameron Smith.

But despite huge cash prizes and big-name players, LIV Golf is not yet the roaring success it was designed to be. Sponsors and broadcasters are not desperate to get involved, and many of the first season’s events were only broadcast on Facebook and YouTube channels.

This rival tour still faces significant challenges. Encouraging more players to defect and gaining more broadcast deals – including improving on the one that involves LIV Golf paying an American network to cover its events – will be the gameplan.

Level playing field?

But this is not just about golf. The expensive creation of a rival tour is just part of Saudi Arabia’s ongoing push for legitimacy in the sporting world. And what some call “legitimacy”, others call “sportswashing” – the use of sport by oppressive governments or leaders to distract the rest of the world from human rights abuses in a bid for soft power.

For Saudi Arabia, LIV Golf is part of a wider economic strategy which seeks to diminish the Gulf state’s reliance on oil. Other sports including Formula 1, football and boxing are already in play.

Donald Trump on LIV Golf stage.
Former US president Donald Trump hosted a LIV event in 2022. L.E.Mormile/Shutterstock

But where does this leave the future of professional golf? LIV Golf claimed that its goal is to “improve the health of professional golf” and “help unlock the sport’s untapped potential”.

There is perhaps some truth in this. Despite the controversy, McIlroy has since reflected that it was a shakeup the PGA Tour needed in order to innovate and adapt. He now believes LIV has benefited everyone who plays professional golf at a high level.

As LIV Golf celebrated the beginning of a new season in February 2023, amid reports of possible financial penalties for LIV players if they decide they want to return to the PGA Tour, it is clear the organisation is not going away.

For the moment it continues to battle for supremacy of the men’s professional game, both on the course and in court. However – ethical issues and a lack of external commercial backing notwithstanding – it’s possible the PGA Tour and LIV Golf could eventually manage to co-exist.

The introduction of LIV Golf has also led to the men’s four major tournaments (the Masters, PGA Championship, US Open and the UK’s Open Championship) becoming even more crucial for players to win. LIV has also made the majors more important for golf fans, as these are now the only men’s events where they get to watch a full-strength field from all tours.

The 2023 Masters will be the first time the players from the PGA and LIV tours have competed against each other in almost nine months. In a game all about control and tradition, LIV Golf has succeeded in creating a fair amount of noise. In years to come, that noise could prove loud enough to completely transform an entire global sport.

Leon Davis, Senior Lecturer in Events Management, Teesside University and Dan Plumley, Principal Lecturer in Sport Finance, Sheffield Hallam University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sex, love and companionship … with AI? Why human-machine relationships could go mainstream

The California-based startup Replika has programmed chatbots to serve as companions. Olivier Douliery/AFP via Getty Images
Marco Dehnert, Arizona State University and Joris Van Ouytsel, Arizona State University

There was once a stigma attached to online dating: Less than a decade ago, many couples who had met online would make up stories for how they met rather than admit that they had done so via an app.

Not so anymore. Online dating is so mainstream that you’re an outlier if you haven’t met your partner on Tinder, Grindr or Hinge.

We bring up online dating to show just how quickly conventions around romance can change. With rapid advances in AI technology over the past few years, these norms may well evolve to include sex, love and friendships with AI-equipped machines.

In our research, we look at how people use technology to form and maintain relationships. But we also look at how people bond with machines – AI-equipped systems like Replika that essentially operate as advanced chatbots, along with physical robots like RealDollx or Sex Doll Genie.

We explore the different forms of sex, love and friendships that people can experience with AI-equipped machines, along with what drives people to forge these relationships in the first place – and why they might become much more common sooner than you’d think.

More than just a cure for loneliness

A common misconception is that people who are lonely and otherwise unsuccessful in relationships are the most likely to turn to AI-equipped machines for romantic and sexual fulfillment.

However, initial research shows that users of this technology differ in only small ways from nonusers, and there is no significant connection between feelings of loneliness and a preference for sex robots.

Someone’s willingness to use sex robots is also less influenced by their personality and seems to be tied to sexual preferences and sensation seeking.

In other words, it seems that some people are considering the use of sex robots mainly because they want to have new sexual experiences.

However, an enthusiasm for novelty is not the only driver. Studies show that people find many uses for sexual and romantic machines outside of sex and romance. They can serve as companions or therapists, or as a hobby.

In short, people are drawn to AI-equipped machines for a range of reasons. Many of them resemble the reasons people seek out relationships with other humans. But researchers are only beginning to understand how relationships with machines might differ from connecting with other people.

Relationships 5.0

Many researchers have voiced ethical concerns about the potential effects of machine companionship. They are concerned that the more that people turn to machine companions, the more they’ll lose touch with other humans – yet another shift toward an existence of being “alone together,” to use sociologist Sherry Turkle’s term.

Despite this apprehension, there is surprisingly little research that examines the effects of machine partners. We know quite a bit about how technology, in general, affects people in relationships, such as the benefits and harms of sexting among young adults, and the ways in which online dating platforms influence the long-term success of relationships.

Understanding the benefits and drawbacks of AI partners is a bit more complicated.

We are now in an age of what sociologist Elyakim Kislev calls “relationships 5.0” in which we are “moving from technologies used as tools controlling human surroundings and work to technologies that are our ecosystem in and of themselves.”

Elderly people in wheelchairs watch a white robot.
A humanoid robot named Pepper performs a comedy routine for residents at a nursing home in Minnesota. Mark Vancleave/Star Tribune via Getty Images

Therapeutic value is often mentioned as one benefit of romantic and sexual AI systems. One study discussed how sex robots for elderly or disabled folks could empower them to explore their sexuality, while almost half of physicians and therapists surveyed in another study could see themselves recommending sex robots in therapy. Robots could also be used in therapy with sexual offenders. But very limited research exists on these uses, which raise a range of ethical questions.

We also have very little knowledge about how human-to-robot relationships compare with human-to-human relationships. However, some of our early research suggests that people get just about the same gratification from sexting with a chatbot as they do with another human.

According to theories about how sexual relationships with artificial partners would work, one of the many factors that could affect the quality of the interactions – and, ultimately, the wider adoption of relationships with robots and AI chatbots – is the associated stigma.

While women are the main purchasers of sex toys – and their use has become a generally accepted practice – people who use what’s called “sextech,” or technology designed to enhance or improve human sexual experiences, are still stigmatized socially. That stigma is even stronger for romantic AI systems or sex robots.

Will you be my v-AI-lentine?

As we have seen with dating apps, technological advancements in the context of relationships initially face skepticism and disagreement. However, there’s no question that people seem capable of forming deep attachments with AI systems.

Take the app Replika. It’s been marketed as the “AI companion who cares” – a virtual boyfriend or girlfriend that promises to engage users in deeply personal conversations, including sexting and dirty talk.

In February, the Italian Data Protection Authority ordered that the app stop processing Italian users’ data. As a result, the developers changed how Replika interacts with its users – and some of these users went on to express feelings of grief, loss and heartbreak, not unlike the emotions felt after a breakup with a human partner.

Legislators are still figuring out how to regulate sex and love with machines. But if we have learned anything about the ways in which technology has already become integrated into our relationships, it is likely that sexual and romantic relationships with AI-equipped systems and robots will become more common in the not-so-distant future.

Marco Dehnert, PhD Candidate in Communication, Arizona State University and Joris Van Ouytsel, Assistant Professor of Interpersonal Communication, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, April 3, 2023

Too many digital distractions are eroding our ability to read deeply, and here’s how we can become aware of what’s happening — podcast

Constant distractions affect our ability to concentrate. (Shutterstock)
Nehal El-Hadi, The Conversation and Daniel Merino, The Conversation

Staying focused on a single task for a long period of time is a growing concern. We are confronted with and have to process incredible amounts of information daily, and our brains are often functioning in overdrive to manage the processing and decision-making required.

In an era of ceaseless notifications from apps, devices and social media platforms, as well as access to more information than we could possibly consider, how do we find ways to manage? And is the way we think, focus and process information changing as a result?

In this episode of The Conversation Weekly, we speak with three researchers who study human-computer interaction, technology design and literacy about how all of these demands on our attention are affecting us, and what we can do about it.

Enhancing learning

Maryanne Wolf is the director of the Center for Dyslexia, Diverse Learners and Social Justice at the University of California in the United States. Her book, Proust and the Squid, presents a history of how the reading brain developed. Since its publication in 2008, Wolf has published extensively on literacy and reading research.

Wolf believes that reading is important because it contributes to a person’s potential and enhances the ability to learn, think and be discerning:

“I’ve become, in essence, obsessed with the deep reading processes that expand the reading brain of the child to achieve their academic potential. But that foundation expands over time with everything we read and learn, so that we begin to be human beings who have the ability to take their background knowledge, use with logical thinking to infer what is the truth — or the lack of truth — in what they are reading.”

a child lying on the floor reads from a book
Reading can help children develop empathy and logical thinking. (Shutterstock)

Wolf is concerned that the amount of interaction we have with our screens and devices — and the speed at which we necessarily have to function — has changed us by removing from us the ability to be present.

“We have all changed. We don’t even realize it, but there’s a patience that’s needed inside ourselves to give attention to inference, empathy, critical analysis. It takes effort. And we’re so accustomed to going so fast that the immersiveness is difficult.”

Capturing attention

Kai Lukoff is an assistant professor at Santa Clara University in the U.S., where he directs the Human-Computer Interaction Lab. He researches how app, platform and technology designers attempt to capture a user’s attention.

“There are a thousand or more engineers, developers, designers on the other side of the screen who are purposefully or intentionally designing these services in order to capture your attention, to get you to spend more time on the site, to get you to click on more ads. And it can be difficult to resist or even understand what’s happening to you when you feel tempted or lost. But of course, that’s not by accident.”

And so as a response, we learn how to quickly sift through content. In other words, we skim as an adaptive strategy. Skimming undermines the kind of attention Wolf notes is required to reap the intellectual, mental and cognitive benefits of deeper reading.

a man holds two smartphones in his hand while sitting in front of a laptop showing charts on its screen
There’s a cognitive cost to media multi-tasking. (Shutterstock)

Cognitive cost

Daniel Le Roux, a senior lecturer at Stellenbosch University in South Africa, is a computer scientist who investigates the psychology of human-computer interaction. He looks at the effects of what we’re doing when we’re “media multitasking,” how we navigate multiple platforms, events and processes — both online and offline — at the same time.

“Everybody’s doing it, and it’s, in a large way, a natural adaptation to the technological environment that has been created around us.”

Media multi-tasking, like skimming, is an adaptive response to an environment inundated with information. And media multi-tasking comes at a cognitive cost, Le Roux points out.

“We incur what we might call a switch cost; that means our performance in our focal task is going to suffer. If you think of driving as the focal task, the reason we prohibit drivers from using their smartphones while they’re driving is because it distracts them from the task of driving.”


This episode was hosted by Nehal El-Hadi and written by Mend Mariwany. The executive producer is Mend Mariwany. Eloise Stevens does our sound design, and our theme music is by Neeta Sarl.

You can find us on Twitter @TC_Audio, on Instagram at theconversationdotcom or via email. You can also subscribe to The Conversation’s free daily email here. A transcript of this episode will be available soon.

Listen to “The Conversation Weekly” via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.

Nehal El-Hadi, Science + Technology Editor & Co-Host of The Conversation Weekly Podcast, The Conversation and Daniel Merino, Associate Science Editor & Co-Host of The Conversation Weekly Podcast, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Marburg virus outbreaks are increasing in frequency and geographic spread – three virologists explain

Marburg virus spreads through close contact with infected body fluids. NIAID/flickr, CC BY-SA
Adam Hume, Boston University; Elke Mühlberger, Boston University, and Judith Olejnik, Boston University

The World Health Organization confirmed an outbreak of the deadly Marburg virus disease in the central African country of Equatorial Guinea on Feb. 13, 2023. To date, there have been 11 deaths suspected to be caused by the virus, with one case confirmed. Authorities are currently monitoring 48 contacts, four of whom have developed symptoms and three of whom are hospitalized as of publication. The WHO and the U.S. Centers for Disease Control and Prevention are assisting Equatorial Guinea in its efforts to stop the spread of the outbreak.

Microscopy image of Marburg virus particles
Marburg virus is structurally similar to the Ebola virus. Photo12/Universal Images Group via Getty Images

Marburg virus and the closely related Ebola virus belong to the filovirus family and are structurally similar. Both viruses cause severe disease and death in people, with fatality rates ranging from 22% to 90% depending on the outbreak. Patients infected by these viruses exhibit a wide range of similar symptoms, including fever, body aches, severe gastrointestinal symptoms like diarrhea and vomiting, lethargy and sometimes bleeding.

We are virologists who study Marburg, Ebola and related viruses. Our laboratory has a long-standing interest in researching the underlying mechanisms of how these viruses cause disease in people. Learning more about how Marburg virus is transmitted from animals to humans and how it spreads between people is essential to preventing and limiting future outbreaks.

Marburg virus disease

Marburg virus spreads between people by close contact only after they show symptoms. It is transmitted through infected body fluids such as blood, and is not airborne. Contact tracing is a potent tool to combat outbreaks. The incubation time, or time between infection and the onset of symptoms, ranges from two to 21 days and typically falls between five and 10 days. This means that contacts must be observed for extended periods for potential symptoms.

Marburg virus cannot be detected before patients are symptomatic. One major cause of the spread of Marburg virus disease is postmortem transmission due to traditional burial procedures, where family and friends typically have direct skin-to-skin contact with people who have died from the disease.

There are currently no approved treatments or vaccines against Marburg virus disease. The most advanced vaccine candidates in development use strategies that have been shown to be effective at protecting against Ebola virus disease.

Without effective treatments or vaccines, Marburg virus outbreak control primarily relies on contact tracing, sample testing, patient contact monitoring, quarantines and attempts to limit or modify high-risk activities such as traditional funeral practices.

What causes Marburg virus outbreaks?

Marburg virus outbreaks have an unusual history.

The first recorded outbreak of Marburg virus disease occurred in Europe. In 1967, laboratory workers in Marburg and Frankfurt in Germany, as well as in Belgrade, Yugoslavia (now Serbia) were infected with a previously unknown pathogen after handling infected monkeys that had been imported from Uganda. This outbreak led to the discovery of the Marburg virus.

Identifying the virus took only three months, which, at the time, was incredibly fast considering the available research tools. Despite receiving intensive care, seven of the 32 patients died. This case fatality rate of 22% was relatively low compared to subsequent Marburg virus outbreaks in Africa, which have had a cumulative case fatality rate of 86%. It remains unclear if these differences in lethality are due to variability in patient care options or other factors such as distinct viral strains.

Subsequent Marburg virus disease outbreaks occurred in Uganda and Kenya, as well as the Democratic Republic of the Congo and Angola in Central Africa. In addition to the current outbreak in Equatorial Guinea, recent Marburg virus cases in the West African countries of Guinea in 2021 and Ghana in 2022 highlight that the Marburg virus is not confined to Central Africa.

Strong evidence shows that the Egyptian fruit bat, a natural animal reservoir of Marburg virus, might play an important role in spreading the virus to people. The location of all Marburg virus outbreaks coincides with the natural range of these bats. The large area of Marburg virus outbreaks is unsurprising, given the ecology of the virus. However, the mechanisms of zoonotic, or animal-to-human, spread of Marburg virus still remain poorly understood.

Researchers approaching Bat Cave in Queen Elizabeth National Park
A number of Marburg virus outbreaks are linked to human activity in caves where Egyptian fruit bats are known to roost. Bonnie Jo Mount/The Washington Post via Getty Images

The origin of a number of Marburg virus disease outbreaks is closely linked to human activity in caves where Egyptian fruit bats roost. More than half of the cases in a 1998 outbreak in the northeastern Democratic Republic of the Congo were among gold miners who had worked in Goroumbwa Mine. Intriguingly, the end of the nearly two-year outbreak coincided with the flooding of the cave and the disappearance of the bats in the same month.

Similarly, in 2007, four men who worked in a gold and lead mine in Uganda where thousands of bats were known to roost became infected with Marburg virus. In 2008, two tourists were infected with the virus after visiting Python Cave in the Maramagambo Forest in Uganda. Both developed severe symptoms after returning to their home countries – the woman from the Netherlands died and the woman from the United States survived.

The geographic range of Egyptian fruit bats extends to large portions of sub-Saharan Africa and the Nile River Delta, as well as portions of the Middle East. There is potential for zoonotic spillover events to occur in any of these regions.

More frequent outbreaks

Although Marburg virus disease outbreaks have historically been sporadic, their frequency has been increasing in recent years.

The increasing emergence and reemergence of zoonotic viruses, including filoviruses (such as Ebola, Sudan and Marburg viruses), coronaviruses (which cause SARS, MERS and COVID-19), henipaviruses (such as Nipah and Hendra viruses) and Mpox appear to be influenced by both human encroachment on previously undisturbed animal habitats and alterations to wildlife habitat ranges due to climate change.

Most Marburg virus outbreaks have occurred in remote areas, which has helped to contain the spread of the disease. However, the large geographic distribution of Egyptian fruit bats that harbor the virus raises concerns that future Marburg virus disease outbreaks could happen in new locations and spread to more densely populated areas, as seen by the devastating Ebola virus outbreak in 2014 in West Africa, where over 11,300 people died.

Adam Hume, Research Assistant Professor of Microbiology, Boston University; Elke Mühlberger, Professor of Microbiology, Boston University, and Judith Olejnik, Senior Research Scientist, Boston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.