Sunday, April 30, 2023

ChatGPT could be an effective and affordable tutor

Educational software has a long history, but chatbots could help students excel like never before. Fabio Principe / EyeEm via Getty Images
Anne Trumbore, University of Virginia

Imagine a private tutor that never gets tired, has access to massive amounts of data and is free for everyone. In 1966, Stanford philosophy professor Patrick Suppes imagined just that when he made this prediction: One day, computer technology would evolve so that “millions of schoolchildren” would have access to a personal tutor, under conditions just like those of the young prince Alexander the Great being tutored by Aristotle.

Now, ChatGPT, a new artificial intelligence-powered chatbot with advanced conversational abilities, may have the capability to become such a tutor. ChatGPT was trained on huge amounts of data covering a wide range of topics and can pass graduate school exams. As a researcher who studies how computers can be used to help people learn, I think ChatGPT can be used to help students excel academically. In its current form, however, ChatGPT struggles to stay focused on one particular task, let alone the sustained back-and-forth that tutoring requires.

Philosophy, engineering and artificial intelligence scholars envisioned using the computer as an “intelligent tutor” well before the internet became a global commercial network in the 1990s. I believe lessons from developing those early tutoring systems can offer insight into how students and educators can best make use of ChatGPT as a tutor in the future.

Computers as tutors

Suppes – the Stanford philosophy professor – was a pioneer of a field called “computer-assisted instruction.” He developed some of the earliest educational software. That software provided individual instruction via computer and led students to have better test results than those who didn’t use the program. I worked for Suppes in developing software and other online programs from 2004 to 2012.

Since then, experiments in building “intelligent tutors” to help students have driven advances in artificial intelligence, social networks and computer hardware. And today, the abilities of ChatGPT to write essays, answer philosophical questions and solve computer coding problems may finally achieve Suppes’ goal of truly personalized tutoring via computer.

Early versions of personalized learning

In 1972, a new personalized learning system called PLATO, for Programmed Logic for Automated Teaching Operations, made its debut. It was the first widely available personalized learning system of its kind.

Created by Don Bitzer, a professor of electrical engineering at the University of Illinois, PLATO allowed up to 1,000 students to be logged onto a mainframe computer simultaneously. Each student could complete different online courses in foreign languages, music, math and many other subjects while receiving feedback from the computer on their work.

PLATO enabled students to reach the same level of achievement as in-person classes in less time. And most students preferred this mode of instruction over sitting in a large lecture class. Yet, the system was too expensive to be used by many colleges and universities. Each computer terminal was marketed at over US$8,000 – about $58,000 today – and schools were charged additional fees every time a student used the system. Still, PLATO’s success with students inspired a number of companies to create software that provided a similar kind of tutoring, including the College Curriculum Corporation, which was co-founded by Suppes.
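The inflation conversion above can be reproduced with a simple consumer price index ratio. A minimal sketch in Python, where the CPI index values are approximate figures assumed for illustration rather than official statistics:

```python
# Rough inflation adjustment using the U.S. Consumer Price Index (CPI-U).
# Both index values below are approximations used only for illustration.
CPI_1972 = 41.8    # approximate CPI-U annual average for 1972
CPI_2023 = 301.0   # approximate CPI-U in early 2023

terminal_price_1972 = 8_000  # marketed price of a PLATO terminal, in 1972 dollars

# Scale the historical price by the ratio of the two index values.
adjusted = terminal_price_1972 * CPI_2023 / CPI_1972
print(f"${adjusted:,.0f}")  # roughly $58,000 in today's dollars
```

The same ratio method works for any pair of years, which is why the article's "$8,000 then, about $58,000 today" comparison holds up.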

Popular personal computer brands, such as Apple and Commodore, advertised the availability of educational software as a reason for families to invest in a home computer.

By 1985, researchers at Carnegie Mellon University were designing software using advances in artificial intelligence and cognitive psychology. They claimed that technology had advanced to the point where computer systems could be designed to be as effective as human tutors. However, even though there were over 10,000 pieces of educational software available at the time, much of it was of fairly low quality and did not provide real tutoring.

Although the more advanced designs of the educational software developed at Carnegie Mellon enabled students to learn significantly more than students in traditional classrooms, they were not widely used in schools.

In the 1980s and 1990s, a school would need a sizable number of expensive, high-powered computer workstations for students to use an intelligent tutor. Today, the computers are much more powerful and much less expensive.

And early intelligent tutors were used primarily in math, science and technology courses, which limited their appeal. Students could not ask the software programs a question. However, students could get feedback on answers to quizzes and tests.

In 2001, the next generation of intelligent tutoring systems was able to have conversations with students in written English. These systems, early chatbots, used advances in natural language processing to have dialogues with students about a range of subjects. Each system was designed specifically for one subject, such as physics. Adding another subject, like English, would require its own specialized tutoring system.

The next few years brought three developments, which enabled a forward leap in achieving the dream of computerized tutoring. One was the availability of broadband internet, which was faster than the dial-up connections of the 1990s. The second was the lower cost of more powerful computers. The third was investment by technology companies and government funding for university research in artificial intelligence, which led to the development of what we now call chatbots.

The first chatbots

By 2007, early AI chatbots provided tutoring by talking students through their answers to questions. Research showed these chatbots supported learning outcomes similar to the effects of human tutors. Even so, using chatbot technology to provide tutoring was primarily experimental.

During the 2010s, social networks and advances in artificial intelligence dramatically changed the way people interact with technology. Chatbots are integrated into many parts of our lives and are no longer experimental. People now talk to their phones, to their cars and their home appliances. Many people have virtual assistants at home. They can ask Alexa, Siri or Google for directions or the answers to simple homework questions.

Today, ChatGPT has advanced abilities that enable more complex conversations and can provide individualized tutoring much like what Suppes predicted in 1966.

I asked ChatGPT why it would be a good tutor, and it responded: “I am a good tutor for a student because I am able to understand and respond to a wide range of questions and prompts, and I can provide detailed and accurate information on a wide range of subjects. Additionally, I am able to communicate this information in a clear and easy-to-understand manner, making it accessible for students of all levels of understanding. Because I am a machine-learning model, I can also provide a large amount of information quickly and efficiently, which can be especially helpful for students who are trying to learn a lot of new material in a short amount of time.”

By contrast, Siri said, “Hmm … I don’t have an answer for that. Is there something else I can help you with?”

Recent research in using chatbots in college courses shows that how chatbots are used is important in encouraging students to learn more deeply. For instance, AI chatbots enabled significant improvements in learning in a graduate course on cloud computing at Carnegie Mellon. These learning gains occurred when these chatbots asked students to build on an existing argument or to provide more information about a claim they had made. In this case, the chatbot asked the student a question, rather than vice versa.

Many educators are worried about students learning less with ChatGPT since it can be used to cheat on assignments and papers. Others are worried about ChatGPT giving wrong answers or spreading misinformation.

Yet the history and research of intelligent tutors show that the right design can harness the power of chatbots like ChatGPT to make deeper, individualized learning available to almost anyone. For example, if people use ChatGPT to ask students questions that prompt them to revise or explain their work, students are likely to show greater learning gains. Since ChatGPT has access to far more knowledge than Aristotle ever did, it has great potential for providing tutoring that helps students learn more than they would otherwise.

Anne Trumbore, Chief Digital Learning Officer, Sands Institute for Lifelong Learning, University of Virginia

This article is republished from The Conversation under a Creative Commons license. 

Banking crises rooted in a system that rewards excessive risk-taking – as First Republic’s precarious situation shows

Another bank on the brink. AP Photo/Jeff Chiu
Alexandra Digby, University of Rochester; Dollie Davis, Minerva University, and Robson Hiroshi Hatsukami Morgan, Minerva University

First Republic Bank is on the brink of collapse, a victim of the panic that has roiled small and midsize banks since the failure of Silicon Valley Bank in March 2023.

Should First Republic fail, it would underscore how the impact of risky decisions at one bank can quickly spread into the broader financial system. It should also provide the impetus for policymakers and regulators to address a systemic problem that has plagued the banking industry from the savings and loan crisis of the 1980s to the financial crisis of 2008 to the recent turmoil following SVB’s demise: incentive structures that encourage excessive risk-taking.

The Federal Reserve’s top regulator seems to agree. On April 28, 2023, the central bank’s vice chair for supervision delivered a stinging report on the collapse of Silicon Valley Bank, blaming its failures on its weak risk management, as well as supervisory missteps.

We are professors of economics who study and teach the history of financial crises. In each of the financial upheavals since the 1980s, the common denominator was risk. Banks provided incentives that encouraged executives to take big risks to boost profits, with few consequences if their bets turned bad. In other words, all carrot and no stick.

One question we are grappling with now is what can be done to keep history from repeating itself and threatening the banking system, economy and jobs of everyday people.

S&L crisis sets the stage

The precursor to the banking crises of the 21st century was the savings and loan crisis of the 1980s.

The so-called S&L crisis, like the collapse of SVB, began in a rapidly changing interest rate environment. Savings and loan banks, also known as thrifts, provided home loans at attractive interest rates. When the Federal Reserve under Chairman Paul Volcker aggressively raised rates in the late 1970s to fight raging inflation, S&Ls were suddenly earning less on fixed-rate mortgages while having to pay higher interest to attract depositors. At one point, their losses topped US$100 billion.

To help the teetering banks, the federal government deregulated the thrift industry, allowing S&Ls to expand beyond home loans to commercial real estate. S&L executives were often paid based on the size of their institutions’ assets, and they aggressively lent to commercial real estate projects, taking on riskier loans to grow their loan portfolios quickly.

In the late 1980s, the commercial real estate boom turned bust. S&Ls, burdened by bad loans, failed in droves, requiring the federal government to take over banks and delinquent commercial properties and sell the assets to recover the money paid to insured depositors. Ultimately, the bailout cost taxpayers more than $100 billion.

Short-term incentives

The 2008 crisis is another obvious example of incentive structures that encourage risky strategies.

At all levels of mortgage financing – from Main Street lenders to Wall Street investment firms – executives prospered by taking excessive risks and passing them to someone else. Lenders passed mortgages made to people who could not afford them onto Wall Street firms, which in turn bundled those into securities to sell to investors. It all came crashing down when the housing bubble burst, followed by a wave of foreclosures.

Incentives rewarded short-term performance, and executives responded by taking bigger risks for immediate gains. At the Wall Street investment banks Bear Stearns and Lehman Brothers, profits grew as the firms bundled increasingly risky loans into mortgage-backed securities to sell, buy and hold.

As foreclosures spread, the value of these securities plummeted, and Bear Stearns collapsed in early 2008, providing the spark for the financial crisis. Lehman failed in September of that year, paralyzing the global financial system and plunging the U.S. economy into the worst recession since the Great Depression.

Executives at the banks, however, had already cashed in, and none were held accountable. Researchers at Harvard University estimated that top executive teams at Bear Stearns and Lehman pocketed a combined $2.4 billion in cash bonuses and stock sales from 2000 to 2008.

A familiar ring

That brings us back to Silicon Valley Bank.

Executives tied up the bank’s assets in long-term Treasury and mortgage-backed securities, failing to protect against rising interest rates that would undermine the value of these assets. The interest rate risk was particularly acute for SVB, since a large share of depositors were startups, whose finances depend on investors’ access to cheap money.

When the Fed began raising interest rates last year, SVB was doubly exposed. As startups’ fundraising slowed, they withdrew money, which required SVB to sell long-term holdings at a loss to cover the withdrawals. When the extent of SVB’s losses became known, depositors lost trust, spurring a run that ended with SVB’s collapse.

For executives, however, there was little downside in discounting or even ignoring the risk of rising rates. The cash bonus of SVB CEO Greg Becker more than doubled to $3 million in 2021 from $1.4 million in 2017, lifting his total earnings to $10 million, up 60% from four years earlier. Becker also sold nearly $30 million in stock over the past two years, including some $3.6 million in the days leading up to his bank’s failure.

The impact of the failure was not contained to SVB. Share prices of many midsize banks tumbled. Another American bank, Signature, collapsed days after SVB did.

First Republic survived after it was rescued by a consortium of major banks led by JPMorgan Chase, but the damage was already done. First Republic recently reported that depositors withdrew more than $100 billion in the six weeks following SVB’s collapse, and now it appears that it could soon fail too.

The crisis isn’t over yet. Banks had over $620 billion in unrealized losses at the end of 2022, largely due to rapidly rising interest rates.

The big picture

So, what’s to be done?

We believe the bipartisan bill recently filed in Congress, the Failed Bank Executives Clawback Act, would be a good start. In the event of a bank failure, the legislation would empower regulators to claw back compensation received by bank executives in the five-year period preceding the failure.

Clawbacks, however, kick in only after the fact. To prevent risky behavior, regulators could require executive compensation to prioritize long-term performance over short-term gains. And new rules could restrict the ability of bank executives to take the money and run, including requiring executives to hold substantial portions of their stock and options until they retire.

The Fed’s new report on what led to SVB’s failure points in this direction. The 102-page report recommends new limits on executive compensation, saying leaders “were not compensated to manage the bank’s risk,” as well as stronger stress-testing and higher liquidity requirements.

We believe these are also good steps, but probably not enough.

It comes down to this: Financial crises are less likely to happen if banks and bank executives consider the interests of the entire banking system, not just themselves, their institutions and their shareholders.

Alexandra Digby, Adjunct Assistant professor of Economics, University of Rochester; Dollie Davis, Associate Dean of Faculty, Minerva University, and Robson Hiroshi Hatsukami Morgan, Assistant Professor of Social Sciences, Minerva University

This article is republished from The Conversation under a Creative Commons license. 

8 Ways to Make Your Pet Feel Like Part of the Family

It’s said that pets can be people’s best friends, but the relationships between people and their four-legged friends often run even deeper. In fact, many consider pets to be an integral part of their families.

From eating and playing together to matching outfits and more, consider these practical ways to make dogs and cats feel like they’re important members of your household so you can become two peas in a pod.

Play Often
Playtime is an easy (and fun) way to bond with pets, whether your furry friends prefer fetch, tug-of-war or any other game that keeps you both active. On top of the emotional benefits, it can help keep them in better physical shape – you might even be able to get a little exercise, too.

Feed Human-Grade Food
Sharing a meal with people is a common way to connect, and that same connection can be made with pets. Feeding them at the same time you sit down for a meal can create a special bond. Though it’s tempting to feed them off your plate, consider a human-grade food option like Freshly Crafted from Full Moon Pet. Created for pet parents who seek homemade-quality ingredients free from additives or preservatives, this fresh frozen food includes 100% human-grade ingredients like real USDA chicken breast and blueberries. It looks, smells and tastes just like food you would cook in your own kitchen while providing the complete nutrition pets need to thrive.

With flavors including Homestead Turkey, Ranch-Raised Beef and Free-Range Chicken, these curated meals promote skin and coat health, support immune health and encourage muscle growth.

Set Up a Special Space
Just like humans, pets need a little quiet time now and then. Creating an area they can retreat to with a bed, blankets and favorite toys offers a “safe space” where they can be comfortable. Plus, it’s a perfect spot for them to nap when you’re away and the house is empty.

Schedule Regular Wellness Visits
The veterinarian’s office may not be your furry friend’s favorite hangout, but scheduling regular checkups is an important part of owning pets. A balanced, quality diet can reduce trips to the vet to just essential visits – often once or twice a year – and help your pets live happy and healthy lives.

Show Affection
Belly rubs, freshly brushed coats and simply spending time together are all effective ways to show pets your affection. Another way to share your love is to reward dogs with their favorite treats like Full Moon Chicken Jerky, which is a high-protein, meaty treat made with few ingredients, including USDA chicken breast, so you can feel good about providing your pets with a treat that tastes good and is good for them.

Take Pets on Vacation
Skip the anxiety of leaving pets behind during your next trip by letting them join in on the fun. Exploring the world together is an exciting way to bond as you can visit new places, find dog parks and discover pet-friendly restaurants. If it’s your pet’s first road trip, consider a shorter excursion to ensure he or she is comfortable in the car, and make sure to bring all the essentials like food, water, bowls, bedding, toys and medications.

Wear Matching Clothes
When you’re dressed to feel your best, you’re likely to feel more confident in yourself. Provide that same feeling of satisfaction to pets by dressing them in the same clothes you plan to wear for a walk around the block or a trip to the dog park. Next time there’s a family photo session, plan to include furry friends in matching outfits so everyone can be dressed to impress.

Celebrate Special Occasions
Birthdays, “gotcha days” or holidays are perfect moments for celebrating your bond with your pets. Make the day extra special by indulging in favorites like going for a walk, heading to the dog park, gifting a new toy, enjoying specialty treats and more.

To find more ways to bond with your pets, visit FullMoonPet.com.

SOURCE:
Full Moon Pet

Saturday, April 29, 2023

Cannabis-derived products like delta-8 THC and delta-10 THC have flooded the US market – two immunologists explain the medicinal benefits and potential risks

Thousands of cannabis-derived products are now on the market. skodonnell/E+ via Getty Images
Prakash Nagarkatti, University of South Carolina and Mitzi Nagarkatti, University of South Carolina

These days you see signs for delta-8 THC, delta-10 THC and CBD, or cannabidiol, everywhere – at gas stations, convenience stores, vape shops and online. Many people are rightly wondering which of these compounds are legal, whether it is safe to consume them and which of their supposed medicinal benefits hold up to scientific scrutiny.

The rapid proliferation of cannabis products makes clear the need for the public to better understand what these compounds are derived from and what their true benefits and potential risks may be.

We are immunologists who have been studying the effects of marijuana cannabinoids on inflammation and cancer for more than two decades.

We see great promise in these products in medical applications. But we also have concerns about the fact that there are still many unknowns about their safety and their psychoactive properties.

Parsing the differences between marijuana and hemp

Cannabis sativa, the most common type of cannabis plant, has more than 100 compounds called cannabinoids.

The most well-studied cannabinoids extracted from the cannabis plant include delta-9-tetrahydrocannabinol, or delta-9 THC, which is psychoactive. A psychoactive compound is one that affects how the brain functions, thereby altering mood, awareness, thoughts, feelings or behavior. Delta-9 THC is the main cannabinoid responsible for the high associated with marijuana. CBD, in contrast, is non-psychoactive.

Marijuana and hemp are two different varieties of the cannabis plant. In the U.S., federal regulations stipulate that cannabis plants containing greater than 0.3% delta-9 THC should be classified as marijuana, while plants containing less should be classified as hemp. The marijuana grown today has high levels – from 10% to 30% – of delta-9 THC, while hemp plants contain 5% to 15% CBD.

In 2018, the Food and Drug Administration approved the use of CBD extracted from the cannabis plant to treat epilepsy. In addition to being a source of CBD, hemp plants can be used commercially to develop a variety of other products such as textiles, paper, medicine, food, animal feed, biofuel, biodegradable plastic and construction material.

Recognizing the potential broad applications of hemp, when Congress passed the Agriculture Improvement Act, called the Farm Bill, in 2018, it removed hemp from the category of controlled substances. This made it legal to grow hemp.

When hemp-derived CBD saturated the market after passage of the Farm Bill, CBD manufacturers began harnessing their technical prowess to derive other forms of cannabinoids from CBD. This led to the emergence of delta-8 and delta-10 THC.

The chemical difference between delta-8, delta-9 and delta-10 THC is the position of a double bond on the chain of carbon atoms they structurally share. Delta-8 has this double bond on the eighth carbon atom of the chain, delta-9 on the ninth carbon atom, and delta-10 on the 10th carbon atom. These minor differences cause them to exert different levels of psychoactive effects.

Delta-9 THC is believed to be the primary cannabinoid that gives marijuana its psychoactive effects. Both CBD and marijuana have been shown in studies to be beneficial for various medicinal uses. About time/iStock via Getty Images Plus

The properties of delta-9 THC

Delta-9 THC was one of the first cannabinoids to be isolated from the cannabis plant, in 1964. The highly psychoactive property of delta-9 THC is based on its ability to activate certain cannabinoid receptors, called CB1, in the brain. The CB1 receptor is like a lock that can be opened only by a specific key – in this case, delta-9 THC – allowing the compound to affect certain cell functions.

Delta-9 THC mimics the cannabinoids, called endocannabinoids, that our bodies naturally produce. Because delta-9 THC emulates the actions of endocannabinoids, it also affects the same brain functions they regulate, such as appetite, learning, memory, anxiety, depression, pain, sleep, mood, body temperature and immune responses.

The FDA approved delta-9 THC in 1985 to treat chemotherapy-induced nausea and vomiting in cancer patients and, in 1992, to stimulate appetite in HIV/AIDS patients.

The National Academy of Sciences has reported that cannabis is effective in alleviating chronic pain in adults and improving muscle stiffness in patients with multiple sclerosis, an autoimmune disease. That report also suggested that cannabis may improve sleep and ease the symptoms of fibromyalgia, a medical condition in which patients experience fatigue and pain throughout the body. In fact, a combination of delta-9 THC and CBD has been used to treat muscle stiffness and spasms in multiple sclerosis. This medicine, called Sativex, is approved in many countries but not yet in the U.S.

Delta-9 THC can also activate another type of cannabinoid receptor, called CB2, which is expressed mainly on immune cells. Studies from our laboratory have shown that delta-9 THC can suppress inflammation through the activation of CB2. This makes it highly effective in the treatment of autoimmune diseases like multiple sclerosis and colitis as well as inflammation of the lungs caused by bacterial toxins.

However, delta-9 THC has not been approved by the FDA for ailments such as pain, sleep disorders, fibromyalgia and autoimmune diseases. This has led people to self-medicate for such ailments, for which there are currently no effective pharmacological treatments.

Delta-8 THC, a chemical cousin of delta-9

Delta-8 THC is found in very small quantities in the cannabis plant. The delta-8 THC that is widely marketed in the U.S. is a derivative of hemp CBD.

Delta-8 THC binds to CB1 receptors less strongly than delta-9 THC, which is what makes it less psychoactive than delta-9 THC. People who seek delta-8 THC for medicinal benefits seem to prefer it over delta-9 THC because delta-8 THC does not cause them to get very high.

However, delta-8 THC binds to CB2 receptors with a similar strength as delta-9 THC. And because activation of CB2 plays a critical role in suppressing inflammation, delta-8 THC could potentially be preferable over delta-9 THC for treating inflammation, since it is less psychoactive.

There are no published clinical studies thus far on whether delta-8 THC can be used to treat the clinical disorders such as chemotherapy-induced nausea or appetite stimulation in HIV/AIDS that are responsive to delta-9 THC. However, animal studies from our laboratory have shown that delta-8 THC is also effective in the treatment of multiple sclerosis.

The sale of delta-8 THC, especially in states where marijuana is illegal, has become highly controversial. Federal agencies consider all compounds isolated from marijuana, as well as synthetic forms similar to THC, to be Schedule I controlled substances, which means they currently have no accepted medical use and have considerable potential for abuse.

However, hemp manufacturers argue that delta-8 THC should be legal because it is derived from CBD isolated from legally cultivated hemp plants.

In this California-based recreational and medical cannabis store, cannabis gummies are “easily” the most popular product.

The emergence of delta-10 THC

Delta-10 THC, another chemical cousin to delta-9 and delta-8, has recently entered the market.

Scientists do not yet know much about this new cannabinoid. Delta-10 THC is also derived from hemp CBD. People have anecdotally reported feeling euphoric and more focused after consuming delta-10 THC. Also, anecdotally, people who consume delta-10 THC say that it causes less of a high than delta-8 THC.

And virtually nothing is known about the medicinal properties of delta-10 THC. Yet it is being marketed in similar ways as the other more well-studied cannabinoids, with claims of an array of health benefits.

The future of cannabinoid derivatives

Research and clinical trials using marijuana or delta-9 THC to treat many medical conditions have been hampered by their classification as Schedule I substances. In addition, the psychoactive properties of marijuana and delta-9 THC produce side effects on brain functions; the high associated with them makes some people feel sick, while others simply dislike the sensation. This limits their usefulness in treating clinical disorders.

In contrast, we feel that delta-8 THC and delta-10 THC, as well as other potential cannabinoids that could be isolated from the cannabis plant or synthesized in the future, hold great promise. With their strong activity against the CB2 receptors and their lower psychoactive properties, we believe they offer new therapeutic opportunities to treat a variety of medical conditions.

Prakash Nagarkatti, Professor of Pathology, Microbiology and Immunology, University of South Carolina and Mitzi Nagarkatti, Professor of Pathology, Microbiology and Immunology, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. 

A Fruity Sprinkle Surprise

(Culinary.net) To kids, birthday parties are a big deal and only happen once a year. From the decorations to their friends and all the sweet, delicious treats to devour, it can be an overwhelming amount of excitement and awe.

They receive gifts, get to have fun with their friends and family, and get to snack on treats they typically don’t have on a regular basis. This is part of what makes birthdays so fun.

It can be a lot of pressure for parents, though. You want everything to be perfect and fall in line with expectations, especially when it comes to the food and treats served to everyone that day.

At the next party you’re hosting, try this delightful Fruity Sprinkles Smoothie that fits the theme for nearly any colorful birthday bash.

It’s made with frozen blueberries, frozen strawberries and frozen mango for a healthier alternative to sugar-filled birthday cake. Topped with fluffy, fun whipped cream and mini sprinkles, it still provides a sweet, festive treat. Plus, this smoothie can be made in a matter of minutes using only one kitchen appliance for easy clean up.

To make it, blend frozen blueberries, frozen strawberries, frozen mango, milk and yogurt until well combined.

Pour the mixture into four smoothie glasses and garnish each with whipped cream and sprinkles to add some extra color.

It’s that easy to make and even better to enjoy while watching your kid make wonderful memories with friends and family.

Find more fun celebration recipes at Culinary.net.

If you made this recipe at home, use #MyCulinaryConnection on your favorite social network to share your work.

Fruity Sprinkles Smoothie

Servings: 4

  • 1 cup frozen blueberries
  • 2 cups frozen strawberries
  • 1 cup frozen mango
  • 1 1/2 cups milk
  • 1 carton (6 ounces) vanilla yogurt
  • whipped cream
  • sprinkles

  1. In blender, blend blueberries, strawberries, mango, milk and yogurt until combined.
  2. Pour smoothie into four glasses. Garnish with whipped cream and sprinkles.
SOURCE:
Culinary.net

The quest for autism’s causes, and what it reveals about all of us


The more researchers look, the more multifaceted the risk factors appear — and the more we learn about how the brain works and develops

As alarm grew over autism prevalence at the turn of this century, there was much public talk of a growing “epidemic.” That language has since softened, and it is now clear that many autistic people were there all along, their condition unrecognized until relatively recently.

But what is the cause? The emerging narrative today is that there is no single cause — rather, multiple factors, roughly sorted into the categories of genetics and environment, work together in complex ways. Because of this complexity and the hundreds of gene variants that have been implicated, developing human brains may follow many possible paths to arrive at a place on the autism spectrum.

And this may help explain something true about autism: It varies greatly from one person to the next.

As clinicians view it, autism involves communication deficits and formulaic, repetitive behaviors that present obstacles to establishing conventional relationships. The soft borders of that definition — where does communication difficulty cross over into communication deficit? — suggest blurred margins between people who are diagnosed with autism and those who approach, but never quite cross, the line into diagnostic territory.

Those who do have diagnoses display behaviors on a continuum of intensity. Their use of spoken language ranges from not speaking at all to being hyperverbal. They can have a unique interest in the finer details of window blinds or an intense but more socially tolerated fascination with dinosaurs. As with many human behaviors, each feature exists on a spectrum, and these spectra blend in a person to create what clinicians call autism.

By pinpointing risk-associated genes and uncovering their roles, studying the roots of autism also is providing new insights into the development of all human brains, autistic or not. Here is a taste of what we now know, and what we don’t, about autism’s causes — and what that search is teaching us about everybody’s neurology.

They know it when they see it

Despite the many and varied threads that may interweave to cause autism, the condition is largely identifiable. What clinicians are really saying when they diagnose autism, says James McPartland, a clinical psychologist at the Yale Child Study Center, is that they see a recognizable, if broadly defined, constellation of behaviors. “So really, there is something true about autism, and everyone who meets the diagnosis of autism shows these kinds of behaviors.”

At the same time, the subtle differences in how each autistic person manifests the telltale features make it highly individual, says Pauline Chaste, a child psychiatrist at Inserm U 894, the Centre de Psychiatrie et Neurosciences, in Paris. “We describe a specific behavior that exists — that kind of social impairment and rigidity. You can have more or less of it, but it definitely exists.”

The more or less of autism could trace, in part, to the types of gene variants that contribute to it in a given person. Some of these variants have a big effect by themselves, while others make tiny contributions, and any autistic person could have their own unique mix of both. One thing seems clear: Though there may be something true about autism, as McPartland puts it, the existence of “one true autism gene” or even one gene for each autism feature is unlikely.

Instead, there will be patterns of gene combinations and the results they produce, says epidemiologist Elise Robinson of the Harvard T.H. Chan School of Public Health and an associate member of the Broad Institute. People who have both autism and intellectual disability, for example, tend to have more big-effect gene mutations than people with autism alone.

Facial communication

Looking for these contributing gene variants isn’t simply an exercise in scientific curiosity or in finding potential targets for drug treatments. Because most of these genes direct how human brains develop and nerve cells communicate, learning about how they lead to autism can also reveal a lot about how everyone’s brain works.

For example, a key autism trait is atypical social behaviors, such as, sometimes, not focusing on “social” facial features like the eyes. Although the tendency to look into another person’s eyes seems like something we might learn simply from being around other people, autism research has revealed that genes underlie the instinct.

In a 2017 study, the authors first showed that identical twins are similar in how they look at a video with social content, such as faces. When viewing the same video, the identical twin pairs shifted their eyes with the same timing and focused on the same things far more than did two non-identical siblings or unrelated children. The fact that almost all twin pairs shared this tendency suggests solid genetic underpinnings for the behavior.

Having established a strong genetic contribution to this trait, the investigators, from Emory University and the Marcus Autism Center in Georgia and Washington University in St. Louis, then showed that the tendency to look at the eye and mouth areas of a human face is decreased in autistic children. They concluded that while not all of the inclination to look at certain parts of a face is genetic, much of it is.

Twin studies like this are powerful tools for evaluating how much genes dictate a feature, and such investigations reveal that the genetic contribution to autism is substantial. Autism also tends to cluster in non-twin family members: One in five infants who has an older sibling with autism also develops it.

Genetic determinants

Overall, genetics accounts for about 70 to 80 percent of factors contributing to autism, says neurologist Daniel Geschwind, director of UCLA’s autism research and treatment center. By comparison, a condition like depression has an underlying genetic contribution of about 50 percent, he says. Alessandro Gozzi, neuroscientist and group leader at the Istituto Italiano di Tecnologia, weighs the power of genes even more, placing the shared diagnosis rate between twins as high as 95 percent, depending on how strict the diagnostic boundaries are. But regardless of the precise value, he says that the “wide consensus” among autism researchers is that genetics is a powerful determinant of autism.

Going the next step — finding the specific genes involved — is a monumental task. It’s also one that yields dividends for understanding brain function more broadly.

Candidate gene variants are now numerous, but a few stand out for their potential to exert a large effect. Chaste cites fragile X syndrome and Rett syndrome as examples — both are genetic conditions (termed syndromes because they are defined by a cluster of traits) that are tied to variants of a single gene or chromosome region and are closely associated with autism.

The gene linked to fragile X syndrome lies on the X chromosome. Its name, FMR1, is easily forgettable, but the effects of its variants are not. Studies on the causes of fragile X reveal that the protein this gene encodes, FMRP, acts as a cellular shuttle for RNA molecules that are crucial for nerve-cell communication and plasticity of connections in the brain. In people with fragile X, cells don’t produce the protein, or make very little of it. The FMR1 variants underlying fragile X are the most common known genetic cause of intellectual disability and are implicated in 1 to 6 percent of autism cases.

Like FMR1, the genetic changes involved in Rett syndrome also affect brain development. A gene called methyl CpG binding protein 2, or MECP2, oversees the activity of many brain-related genes, turning them off or on. Because of this pivotal role for MECP2, mutations that affect its function can lead to broad effects. Some of the resulting features look so much like autism that Rett syndrome was categorized as an autism spectrum disorder until 2013.

Other genetic syndromes also include autism as a feature. Some are caused by variants in a gene called SHANK3, which, like most genes implicated in autism, is involved in brain development and function. The protein that it encodes helps to coax nerve extensions to form and take shape so that a nerve cell can communicate with others. The SHANK3 protein also provides a physical scaffold for those cells to link up. In populations of people with mutations that prevent SHANK3 protein production or who are missing the segment of chromosome 22 that contains the gene, most will have autism or Phelan-McDermid syndrome, which often includes autism.

Yet another syndrome arises from the loss or duplication of a chunk of chromosome 16. Researchers linked this chromosomal change to autism in studies comparing the DNA of people with and without the condition, singling out sequence alterations found only in autistic participants.

Despite their clear ties to autism, these syndromes are rare. “Collectively, they are found in about 5 percent of the total population of patients with autism,” Gozzi says. That leaves a great deal to explain.

Inheritance on a spectrum

So where do the other autistic people come from, genetically speaking? Robinson says that their genetics don’t fall neatly into one of two buckets: either a few genes with big effects or many genes with small effects. “It’s been well established at this point that it’s not either–or,” she says.

In fact, says Gozzi, varying combinations of big-effect mutations and lots of different, smaller-effect ones could explain the wide spectrum of differences observed among autistic people. The evidence supports such a range, he says: everything from a few heavy-hitting variations in some people, to an additive dose from many variants in others, and with overlap between the two patterns in still others.

Geschwind adds yet another layer of complexity: the role of the cellular environment that all the other gene variants in a person create, known as the background effect. For example, someone could have a mutation conferring high risk that is either enhanced or diminished by the background input from other genes not directly related to autism, to create a gradation of autism intensity.

Environmental influences

When researchers speak of environmental inputs to traits, diseases and disorders, they are referring to everything from pollutants in the air to subtle perturbations inside cells to cues from other cells. Finding such causative candidates for autism generally involves epidemiological studies that look for correlations between autism rates in a population and an environmental factor of interest.

These connections aren’t easy to locate. In the case of genes, if a study involves enough people, even rare genetic differences that make small contributions to autism can often be plucked from the pile. Not so for environmental influences whose effects are real but small, says Robinson. Within those epidemiological studies, you have to be able to detect that slight signal and assess its power against the larger background noise of countless other variations in the cell, body or outside environment that you might not even be aware of and that might not be relevant. “We don’t live in a simple, single-exposure world,” says Kristen Lyall, an epidemiologist at Drexel University in Philadelphia.

And even when a connection is made, its basis is still just math. That is certainly the first step in evaluating a link between an environmental factor and a condition such as autism: As one thing goes up, does the other follow? But two things that track together don’t necessarily share a biological association. (One of the silliest examples to illustrate how misleading correlation can be is how tightly the number of people killed by venomous spiders each year tracks with the number of letters in the winning word of the same year’s Scripps National Spelling Bee.)

In the case of genetic studies, gene changes with tiny effects can still be considered plausible if their usual role relates to brain function in some way. Environmental factors aren’t as well catalogued, measured and tracked. But the better epidemiological studies do look for correlations with credible and pre-identified factors of interest (so, not Scripps Spelling Bee words).

For feasibility’s sake, work on environmental factors in autism has tended to focus on inputs that have broad effects on brain development. Robinson points to extreme preterm birth, which is related to many kinds of neurodevelopmental disorders — autism among them.

Eventually, studies can add up to connect dots and arrive at a plausible story of cause and effect. For example, along with preterm birth, air pollution also has been linked to autism risk. Another recent study found that when coal- and oil-fired power plants close down, preterm births in the region drop. It’s therefore a reasonable hypothesis that very preterm birth operates as an intermediate between air pollution exposure and autism.

Lyall believes that prenatal exposures to environmental pollutants that can behave like hormones are particularly strong candidates for involvement in autism risk. These chemicals, collectively known as endocrine-disrupting compounds, include pesticides and even heavy metals, and they are pretty much everywhere — in air, land, water, food and us.

Some research suggests, for example, that exposure to the endocrine disruptor mercury in air pollution raises autism odds. The studies are few and the data haven’t overwhelmingly shown increases in risk, Lyall acknowledges, “but I think that it’s an interesting and important area for future research given the lack of regulation around these chemicals, their ubiquity in the environment and their known adverse effects on broader neurodevelopment.”

Researchers have also homed in on plausible biological bases for a couple of other potential environmental effects. Gozzi points to animal studies, mostly in mice, that bolster human work linking autism in a child with prenatal exposure to a mother’s ramped-up immune responses as a result of infections. Again, Gozzi stresses that the findings are far from definitive, and most studies involving humans have focused on infections severe enough to require hospitalization.

Another unearthed link is to paternal age at conception: Studies find that autism risk increases with the age of the father, usually starting in the thirties or forties, although the age range and magnitude of the increase vary among different studies. The cells that give rise to sperm tend to accumulate new mutations over the years, so the sperm contain sequence changes that pass to offspring but aren’t present in the father’s own body cells. Some of these changes involve regions or genes already implicated in autism risk. Sperm also show changes in the chemical tagging of DNA that controls the activity of genes.

Establishing environmental cause unequivocally is almost impossible, because of ethical constraints. It’s one thing to examine blood or tissue samples for genetic variants that track with autism diagnoses. It’s another thing entirely to manipulate factors to see if they induce autism or not. No one’s going to deliberately infect a pregnant woman or have a group of men specifically delay fatherhood just to test how these factors influence autism odds.

Researchers instead are stuck finding correlations between these factors and then looking at available measures, such as changes in gene activity, accrual of mutations over the lifespan and studies of autism-like behavior in animal models. And as they look at these associations, they often make discoveries that are relevant beyond autism — ones that have now been extended to studies of schizophrenia, aging and even human evolution. The link between autism and having an older father, for example, has led to studies examining how changes in sperm over time affect brain development in later generations.

While most environmental candidates remain just that — candidates — Lyall says emphatically that one factor is out of the running: vaccines. “That’s pretty conclusively been shown to have no association with autism,” she says, noting the numerous large epidemiological studies that have reached that conclusion.

The settled vaccine question is a small point of clarity in an otherwise blurred landscape of autism cause-and-effect research. Every new finding seems to open up yet more pathways, some leading toward autism, and some toward broader revelations about the brain and how hormones, the immune system, the air we breathe and more add up to make their mark on neural development. The network of genetic and environmental factors that converge and diverge to produce autism may reflect not only the multiplicity of ways of being autistic — but also, more broadly, of being human.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

The Federal Reserve and the art of navigating a soft landing … when economic data sends mixed signals

‘Surely we can avoid an economic crash? We can, but don’t call me Shirley!’ Paramount Pictures/Fathom Events
Christopher Decker, University of Nebraska Omaha

With inflation easing and the U.S. economy cooling, is the Federal Reserve done raising interest rates? After all, gently bringing down the trajectory of prices without crashing the economy was the central bank’s objective when it began jacking up rates over a year ago.

Gross domestic product, the broadest measure of an economy’s output, expanded at an annual pace of a mere 1.1% in the first quarter, according to data released April 27, 2023 – down from 2.6% recorded in the final three months of 2022. And the latest consumer price data, from March, shows inflation slowing to 5% on an annualized basis, the lowest reading in about a year.

Unfortunately for consumers and businesses weary of soaring borrowing costs, the Fed’s not likely done hiking rates quite yet. Financial markets are predicting another quarter-point hike when the Fed meets for a two-day meeting that ends May 3, 2023. And there could be several more increases to come.

But this does raise another important question: With all the recent, often conflicting, data and narratives regarding inflation, bank failures and layoffs in the tech sector, is the Fed close to engineering the “soft landing” it’s been hoping for?

The economy zigs then zags

The GDP data is a mixed bag and provides some clues to the answer.

Overall, the recent GDP figures suggest a likely economic slowdown going forward, due largely to a drawdown in inventories – that is, rather than ordering new goods, companies are relying more on stuff currently in storage, likely in anticipation of a slowdown in consumption. And business investment declined 12.5% in the quarter.

At the same time, consumer spending, which represents about two-thirds of GDP, grew at a healthy 3.7% pace, and investment in equipment such as computers and robotics increased by 11.2% – though this category is quite volatile and could easily turn in subsequent quarters.

Other data also points to a slowdown, such as a decline in new orders for manufactured goods. This, combined with the drawdown in inventories in the GDP report, might suggest that businesses are anticipating a slowdown in demand for goods and services.

When we look at the labor market, while job increases have been strong – 334,000 over the past six months – job openings have been declining. After peaking at about 12 million in March 2022, openings dropped to about 9.9 million as of February, according to the Bureau of Labor Statistics.

Inflation: Is it high or low?

In terms of inflation, we can also see conflicting numbers.

The headline consumer price index has indeed slowed steadily since peaking in June 2022 at 9.1%. But the core personal consumption expenditures index, the Fed’s favored measure of inflation, has remained stubbornly elevated. The latest data, released on April 28, 2023, showed the index, which excludes volatile food and energy prices, was up 4.6% in March from a year earlier and has barely budged in months.

Meanwhile, wages, which when rising can have a strong upward push on prices, climbed at an annualized 5.1% in the first quarter, also according to data released on April 28. That’s down from the peak of 5.7% in the second quarter of 2022 but is still about the fastest pace of wage gains in at least two decades.

More hikes to come

So what might all this suggest about Fed actions on interest rates?

The next meeting is scheduled to end on May 3, with the market odds greatly favoring another 0.25 percentage point increase – which would be the 10th straight hike since March 2022.

With the inflation rate still well above the Fed’s target of about 2%, combined with continued job growth and a low unemployment rate, the central bank is likely not done ratcheting up rates. I agree with the market odds pricing in a quarter-point hike for the May meeting. Future data will guide any future rate increases beyond that.

The good news is that, I believe, the larger rate increases are well in the past.

Landing softly – or at least mildly

That brings us back to the big question: How close is the Fed to sticking a soft landing, in which the U.S. economy manages to tame inflation without a recession?

Sadly, it’s too early to tell. Labor markets can be very volatile, and political and international events – such as potential gridlock on debt ceiling talks or further escalations in the Ukraine War – can turn things upside down. That said, we are either looking at a mild recession or a growth recession.

What’s the difference? A growth recession signals a weak economy but not enough to significantly drive up unemployment – and that’s preferable to even a mild recession of multiple quarterly drops in GDP and much higher unemployment.

We just don’t know which is more likely. What I think is true now, though, is that, barring any catastrophic and unpredictable events, a severe recession has been avoided.

Christopher Decker, Professor of Economics, University of Nebraska Omaha

This article is republished from The Conversation under a Creative Commons license. 

Criminologist Bruce Jacobs has spoken to carjackers in detail about their crimes. Here’s what he’s learned in two decades of study.

Almost as long as there have been cars, there have been carjackings — thefts of occupied automobiles committed through force, or threat of force. During Prohibition, shipments of alcohol were regularly intercepted by armed robbers, and other inventory-carrying commercial vehicles then became targets. Carjacking of personal vehicles became increasingly prominent in media reports in the 1990s after some high-profile incidents in which victims died during the robbery. The crime became a federal offense in 1992.

In recent years, reports of carjackings have increased in several cities. In Chicago, carjackings more than doubled in 2020 and continued to rise in 2021. Since 2019, carjackings in Philadelphia have more than tripled. Officials in New Orleans, Washington, DC, and Minneapolis have all reported similar spikes.

Bruce Jacobs, a criminologist at the University of Texas at Dallas, has extensively studied the crime. He started his research in the early 2000s, describing it as a natural progression from studying other street crimes in St. Louis, Missouri, where he had been researching illegal drug distribution, drug-related violence and robbery.

To understand the steps and motivations that drive carjackings, Jacobs and his collaborators used both crime-reporting data and interviews with active carjackers. Recently, Jacobs and Michael Cherbonneau, a criminologist at the University of North Florida, described insights into the scope and process of carjacking in the 2023 Annual Review of Criminology.

Knowable Magazine spoke with Jacobs to discuss what he’s learned about this crime and his takeaways for prevention. The following conversation has been edited for clarity and length.

You’ve been studying carjacking, off and on, for about 20 years now. What’s the most surprising thing you’ve learned?

I think maybe the most surprising thing is just the unpredictability of this crime. Other violent crimes usually have a certain pattern to them geographically or temporally. There may be some sort of interpersonal connection between the victim and the offender. Or, in stranger-on-stranger crimes, like robbery typically is, there are usually hotspots within a city that are more prone to experiencing those types of crimes.

With carjacking, it’s so spur-of-the-moment. The offender sees a vulnerable target and an opportunity to strike. It may not be in a so-called hotspot of a city. Really, anybody driving a car in public is potentially at risk. From the perspective of the victim, they may just be at the wrong place at the wrong time.

Why do people choose to carjack rather than steal an unoccupied car?

A lot of carjackers don’t like the ambiguity and the uncertainty of a potential victim coming out of their house or their business while they are stealing a car. Whereas with carjacking, the vehicle’s on, the keys are inside, the victim’s inside. It’s simply a matter of going up to them, displaying the weapon, telling them to get out of the car or throwing them out of the car, and taking the vehicle. It’s very quick. It’s very simple. That’s what some of our carjackers would say: It’s safer to carjack than to steal a vehicle off the street.

Cars today also have more security features than in the past. Does that also make carjacking a preferred option?

That’s what the evidence seems to show. Back in the day, you could break into an Oldsmobile or a Chevy, strip the ignition column, jam a screwdriver in there, and it starts in 30 seconds. With these modern cars, you can’t do that anymore. They require these chips and proximity readers. A lot of the electronics are much more advanced and not accessible to a thief with a screwdriver. So there does seem to be what might be called tactical displacement, where these offenders figure out, “If I identify the car I want, I’m just gonna take it by force. It’s already on and the keys are in it.”

Several major US cities have experienced dramatic rises in carjacking since the pandemic hit. What’s behind that trend?

First, the technology issue that we just talked about — it’s just getting harder and harder to steal cars off the street. Second, the pandemic, I think, played a large part as well. Due to the school shutdowns, younger at-risk offenders found themselves unsupervised with a lot of time on their hands. And the ubiquitous Covid mask allowed them substantially enhanced anonymity.

But it’s very difficult to do a year-to-year or city-to-city analysis because the data are not maintained that way on a federal level or even on a state or local level. Most jurisdictions don’t track carjacking separately from other forms of robbery. So we had to rely on reports from police officials who did track it in some of these cities.

To research carjacking, you’ve interviewed active offenders. How did you conduct these interviews?

Those active offenders were identified to us through a specially trained project field worker who I came to know over the years as part of my duties as a criminologist in St. Louis. He was an active offender himself. He had multiple and ongoing contacts with active offenders — not in jail or prison, but out on the street. He was trusted amongst the folks that he referred, and we had worked with him for many years.

So we relied on him to identify offenders and convey those respondents to us. Then we would interview them at length, through in-depth and semi-structured interviews, about why they did it, how they did it, where they did it, who they selected for targets.

What did you learn about decision-making and motives from your interviews?

The economic motives are probably primary — stealing the vehicle to chop it up for parts, or, not infrequently, we see these vehicles being stolen for their accessory items, like performance rims and high-end audio systems, which might be worth more than the car itself.

There are sometimes retaliatory motives where somebody is showboating their vehicle and driving in a way that is disrespectful to the would-be offender, and they'll just take it to teach a lesson. We’ve seen carjackings that are committed in the course of some other crime — for example, to escape. We’ve had carjackings done for thrills, especially among young offenders who are just looking for a rush.

There’s a variety of motives that energize this offense. It really depends on the offender and the situation that they’re in.

In terms of offender decision-making, it lines up with a lot of what we know about predatory violence more generally. Despite the opportunistic, spur-of-the-moment, crude way in which many of these offenses are carried out, there is a reasonable degree of calculation on the part of many of these offenders. There’s calculation in sizing up their targets, figuring out how to approach their target, figuring out the ideal place to commit these crimes to lower risk of detection, and then using force within the actual offense to maximize the likelihood of compliance.

It’s striking just how quickly these decisions are made. You’re talking about literally under a minute for most of the offenses to unfold and be done. I’ve reviewed thousands of police reports and video evidence of carjackings around the country. All that evidence seems to indicate most of these offenses are very, very quick.

You write that media accounts often give a skewed picture of carjacking. Why do you say that?

The carjackings that typically get reported in the media tend to be disproportionately violent, disproportionately graphic, because those kinds of stories generate eyeballs, and eyeballs generate ratings, which mean profit. You’ve got to be very careful not to suggest that those events are representative of the broader universe of carjackings.

Carjackings are very, very rarely fatal — in the tenths of a percent. And they very rarely involve serious victim injury: Only 1 percent of victims are hospitalized.

Based on your research, how can potential carjacking victims keep themselves safe?

Potential victims really have to educate themselves on being alert. It’s as simple as just being aware of your surroundings and people lurking on the periphery of your vehicle. There are certain points where you’re more vulnerable than others: when you get inside your vehicle, when you get outside of your vehicle, at traffic lights, at gas stations. That’s when you’re at highest risk. That’s what we’ve noticed not only in our interview-based research, but also from other researchers in the field, media sources and police sources.

Just being aware of those points, I think, can enhance victim safety. If you get that kind of gnawing feeling that something’s about to go down, I would listen to that sixth sense.

But can’t being super vigilant be exhausting and potentially cause us to view well-meaning strangers as threats?

There’s a balance. You don’t want to be paranoid. With paranoia, you can almost put the idea in the offender’s head. And we’ve seen that in our interviews, like, “Oh, I wasn't really even thinking about it, but this guy looked at me a certain way or looked paranoid or scared or afraid and then you know, the car was right there.” These crimes are so opportunistic and spur-of-the-moment that that can set them off.

You want to be able to be attuned to your surroundings so that you can react quickly if necessary. At least at certain times when you’re potentially vulnerable, just minimizing distractions that might undermine that situational awareness can help.

What about when the carjacking is already happening? What should victims do?

This is a difficult crime for a victim to manage. When you’re getting carjacked, you don’t know what’s happening. You might think you’re being abducted. And if the victim panics, things can escalate badly, and quickly. If the vehicle’s on and the driver’s inside, the car can be both a weapon and a shield, so that can encourage resistance on the victim’s part. Even with a gun in your face, if you think you’re being abducted or about to be killed, you might just floor it to get away. That can potentially escalate the violence against you as the victim. Offenders get mad when you’re non-compliant. If they have a gun, they’re liable to fire it. Ironically, it’s really the carjacker’s job to let you know, “Hey, I just want your car, get out and you’re not gonna get hurt.” Sometimes that doesn’t happen.

It’s hard to give universal advice; it’s very situational. The general best advice is not to resist: give them what they want.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.

Friday, April 28, 2023

AI is exciting – and an ethical minefield: 4 essential reads on the risks and concerns about this technology

Who’s in control? John Lund/Stone via Getty Images
Molly Jackson, The Conversation

If you’re like me, you’ve spent a lot of time over the past few months trying to figure out what this AI thing is all about. Large language models, generative AI, algorithmic bias – it’s a lot for the less tech-savvy among us to sort out as we try to make sense of the myriad headlines about artificial intelligence.

But understanding how AI works is just part of the dilemma. As a society, we’re also confronting concerns about its social, psychological and ethical effects. Here we spotlight articles about the deeper questions the AI revolution raises about bias and inequality, the learning process, its impact on jobs, and even the artistic process.

1. Ethical debt

When a company rushes software to market, it often accrues “technical debt”: the cost of having to fix bugs after a program is released, instead of ironing them out beforehand.

There are examples of this in AI as companies race ahead to compete with each other. More alarming, though, is “ethical debt,” when development teams haven’t considered possible social or ethical harms – how AI could replace human jobs, for example, or when algorithms end up reinforcing biases.

Casey Fiesler, a technology ethics expert at the University of Colorado Boulder, wrote that she’s “a technology optimist who thinks and prepares like a pessimist”: someone who puts in time speculating about what might go wrong.

That kind of speculation is an especially useful skill for technologists trying to envision consequences that might not impact them, Fiesler explained, but that could hurt “marginalized groups that are underrepresented” in tech fields. When it comes to ethical debt, she noted, “the people who incur it are rarely the people who pay for it in the end.”

2. Is anybody there?

AI programs’ abilities can give the impression that they are sentient, but they’re not, explained Nir Eisikovits, director of the Applied Ethics Center at the University of Massachusetts Boston. “ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less,” he wrote.

But saying AI isn’t conscious doesn’t mean it’s harmless.

“To me,” Eisikovits explained, “the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are.” Humans easily project human features onto just about anything, including technology. That tendency to anthropomorphize “points to real risks of psychological entanglement with technology,” according to Eisikovits, who studies AI’s impact on how people understand themselves.

People give names to boats and cars – and can get attached to AI, too. Yuichiro Chino/Moment via Getty Images

Considering how many people talk to their pets and cars, it shouldn’t be a surprise that chatbots can come to mean so much to people who engage with them. The next steps, though, are “strong guardrails” to prevent programs from taking advantage of that emotional connection.

3. Putting pen to paper

From the start, ChatGPT fueled parents’ and teachers’ fears about cheating. How could educators – or college admissions officers, for that matter – figure out if an essay was written by a human or a chatbot?

But AI sparks more fundamental questions about writing, according to Naomi Baron, an American University linguist who studies technology’s effects on language. AI’s potential threat to writing isn’t just about honesty, but about the ability to think itself.

American writer Flannery O'Connor sits with a copy of her novel ‘Wise Blood,’ published in 1952. Apic/Hulton Archive via Getty Images

Baron pointed to novelist Flannery O'Connor’s remark that “I write because I don’t know what I think until I read what I say.” In other words, writing isn’t just a way to put your thoughts on paper; it’s a process to help sort out your thoughts in the first place.

AI text generation can be a handy tool, Baron wrote, but “there’s a slippery slope between collaboration and encroachment.” As we wade into a world of more and more AI, it’s key to remember that “crafting written work should be a journey, not just a destination.”

4. The value of art

Generative AI programs don’t just produce text, but also complex images – which have even captured a prize or two. In theory, allowing AI to do nitty-gritty execution might free up human artists’ big-picture creativity.

Not so fast, said Eisikovits and Alec Stubbs, who is also a philosopher at the University of Massachusetts Boston. The finished object viewers appreciate is just part of the process we call “art.” For creator and appreciator alike, what makes art valuable is “the work of making something real and working through its details”: the struggle to turn ideas into something we can see.

Editor’s note: This story is a roundup of articles from The Conversation’s archives.

Molly Jackson, Religion and Ethics Editor, The Conversation

This article is republished from The Conversation under a Creative Commons license.