Thursday, May 4, 2023

Why are so many Gen Z-ers drawn to old digital cameras?

A student on a school bus holding a digital point-and-shoot camera. Jason Zhang/Wikimedia Commons
Tim Gorichanaz, Drexel University

The latest digital cameras boast ever-higher resolutions, better performance in low light, smart focusing and shake reduction – and they’re built right into your smartphone.

Even so, some Gen Z-ers are now opting for point-and-shoot digital cameras from the early 2000s, before many of them were born.

It’s something of a renaissance, and not just for older cameras. The digital camera industry as a whole is seeing a resurgence. Industry revenue peaked in 2010 and shrank every year through 2021, then returned to growth in 2022, and it is projected to keep growing in the coming years.

But why?

One explanation is nostalgia, or a yearning for the past. And indeed, nostalgia can be an effective coping strategy in times of change and upheaval – the COVID-19 pandemic is just one of the disorienting shifts of the past few decades.

But my research on people’s experiences with technology, which includes photography, suggests a deeper explanation: seeking meaning.

It’s not that these Gen Z-ers are longing to return to childhood, but that they are finding and expressing their values through their technological choices. And there’s a lesson here for everyone.

The human need for meaning

Humans have many needs – food, shelter, sex and so on. But humans also feel the urge to find meaning in life.

Meaning is different from happiness. Though happiness and meaning are often correlated, meaning doesn’t necessarily include the pleasure that characterizes happiness. Meaningful pursuits may involve struggle, suffering or even sacrifice. Meaning also lasts longer, whereas happiness is fleeting.

What does meaning do for people?

At its core, meaning is about identifying one’s values and making choices to develop oneself as a person. It allows a person to engage with the various aspects of their personality – “the multitudes” contained therein, as Walt Whitman wrote.

Put differently, meaning is about weaving a personal narrative from the facts of life. And it really is a need, not just something that’s nice to have. Meaning is what makes life feel valuable and worth living.

Seeking meaning with technology

Why do people adopt one technology over another? According to what scholars call the technology acceptance model, people consider two major aspects when choosing a technology: its perceived usefulness and its perceived ease of use.

But certainly there are other considerations, especially for personal technologies. People choose some technologies for the way they contribute to meaning. And the search for meaning extends beyond choosing a technology to the way a person uses and experiences it. For example, many people use social media in constructing their sense of self.

In my own research, I discerned four themes involved in people’s meaningful experiences with technology:

  1. Presence: People choose formats and technologies that will help them be more present and attentive during the experience.
  2. Centripetal force: A person’s relationship with the technology begins with a central practice but gradually expands to become a bigger part of their life. For example, as a person’s photography practice becomes more meaningful, they may find themselves printing photos, curating their collection and shopping for more equipment.
  3. Curiosity: A sense of wonder and interest guides the experience.
  4. Self-construction: Meaningful experiences with technology contribute to the person’s sense of self.

In my research on ultra-distance runners, who run races even longer than marathons, I saw all these elements at play. Runners chose particular shoes, GPS watches, sensors and software – or avoided them – in part to be more present with their bodies.

This can make the running itself more meaningful, along with other activities such as writing race recaps, keeping a training log and sharing photos.

Marathoner Youssef Sbaai checks his watch after winning the Sofia Marathon in October 2020. Artur Widak/NurPhoto via Getty Images

Over time, running becomes a central part of a person’s identity – they become “a runner.” In the end, long-distance running is not always enjoyable, but it is definitely meaningful.

And so technology, whether it’s the kind associated with running or some other activity, becomes a key way people can discern their values and make choices that support and better embody those values.

The meaning within old digital cameras

In this context, using a standalone digital camera immediately enhances the meaningfulness of an experience. Meaning is about exercising choice, and nowadays most people don’t own a camera at all – they just use their smartphone.

Digital cameras also enable presence: You need to remember to carry the camera around, and in return it won’t give you notifications or show you other apps while you’re shooting.

A 2008 Nikon Coolpix S520, one example of the kinds of digital cameras seeing a resurgence today. Simon Speed/Wikimedia Commons

That goes for any standalone camera. But old cameras, in particular, have a set of qualities that help users make meaning.

First, the image quality is poorer. But on social media, photos that get posted are less about polish and precision and more about sharing experiences and telling stories. As social media theorist Nathan Jurgenson writes in his book “The Social Photo,” “As a medium, social photography becomes an important means to experience something not representable as an image but instead as a social process: an appreciation of impermanence for its own sake.”

As a person chooses which photos to share and how to edit them, they are expressing their values and developing their sense of self. To some extent, smartphone photo filters allow for some of this expression, but old digital cameras produce different kinds of visual effects and lack the automated features designed to professionalize the look of each image.

Older cameras also introduce challenges in getting the images onto social media. They require cables, software and multiple steps to transfer the images. It’s a far cry from one-click image generation with artificial intelligence. What this means is that photography involves many more activities beyond simply taking photos. Photography becomes a bigger part of one’s life.

All this friction increases a person’s involvement in the process, inviting choices along the way. This is precisely the thinking behind the slow technology movement, which aims to design technology for goals like self-reflection, rather than efficiency or productivity. Research on meaningful design shows people form stronger attachments to products when they have to make more choices or get more involved.

When it comes to finding meaning in older forms of photography – whether you use a digital camera or a film camera – the value of the slower process of creating and sharing images outweighs the speed, efficiency and crisp imagery of smartphone cameras.

Crafting a more meaningful life

The meaning hidden within old digital cameras contains broader lessons.

In recent years, critics have bemoaned the rupturing of social institutions and the transformation of digital platforms into places that merely serve as vehicles to sell ads and collect data from users. During the pandemic, life itself threatened to go digital with all the hype surrounding the metaverse.

I believe that a key to living well in the near future is to identify where you can create choices, so you don’t feel like you’re drifting along at the mercy of algorithms and the whims of Big Tech.

Perhaps you could start a chapter of the Luddite Club – as a group of teens in Brooklyn recently did – and play board games in the park on weekends. Perhaps you could opt for a paper book rather than a podcast, specifically because you can’t do something else while you’re reading it.

On the surface, deliberately rejecting the latest, flashiest forms of technology may seem like a problem – “You’ll be left behind and miss out!”

But on the other hand, slowing down life by engaging with slower technology creates space to make choices more thoughtfully in relation to your values – and cultivate more meaningful involvement in your own life.

Tim Gorichanaz, Assistant Teaching Professor of Information Studies, Drexel University

This article is republished from The Conversation under a Creative Commons license.

Wednesday, May 3, 2023

Automation threatens to replace some workers but can grow overall employment. The one sure thing is that technology will change how we labor.

Back in the 1990s, when US banks started installing automated teller machines in a big way, the human tellers who worked in those banks seemed to be facing rapid obsolescence. If machines could hand out cash and accept deposits on their own, around the clock, who needed people?

The banks did, actually. It’s true that the ATMs made it possible to operate branch banks with many fewer employees: 13 on average, down from 20. But the cost savings just encouraged the parent banks to open so many new branches that the total employment of tellers actually went up.

You can find similar stories in fields like finance, health care, education and law, says James Bessen, the Boston University economist who called his colleagues’ attention to the ATM story in 2015. “The argument isn’t that automation always increases jobs,” he says, “but that it can and often does.”

That’s a lesson worth remembering when listening to the increasingly fraught predictions about the future of work in the age of robots and artificial intelligence. Think driverless cars, or convincingly human speech synthesis, or creepily lifelike robots that can run, jump and open doors on their own: Given the breakneck pace of progress in such applications, how long will there be anything left for people to do?

That question has been given its most apocalyptic formulation by figures such as Tesla and SpaceX founder Elon Musk and the late physicist Stephen Hawking. Both have publicly warned that the machines will eventually exceed human capabilities, move beyond our control and perhaps even trigger the collapse of human civilization. But even less dramatic observers are worried. In 2014, when the Pew Research Center surveyed nearly 1,900 technology experts on the future of work, almost half were convinced that artificially intelligent machines would soon lead to accelerating job losses — nearly 50 percent by the early 2030s, according to one widely quoted analysis. The inevitable result, they feared, would be mass unemployment and a sharp upswing in today’s already worrisome levels of income inequality. And that could indeed lead to a breakdown in the social order.

Or maybe not. “It’s always easier to imagine the jobs that exist today and might be destroyed than it is to imagine the jobs that don’t exist today and might be created,” says Jed Kolko, chief economist at the online job-posting site Indeed. Many, if not most, experts in this field are cautiously optimistic about employment — if only because the ATM example and many others like it show how counterintuitive the impact of automation can be. Machine intelligence is still a very long way from matching the full range of human abilities, says Bessen. Even when you factor in the developments now coming through the pipeline, he says, “we have little reason in the next 10 or 20 years to worry about mass unemployment.”

So — which way will things go?

There’s no way to know for sure until the future gets here, says Kolko. But maybe, he adds, that’s not the right question: “The debate over the aggregate effect on job losses versus job gains blinds us to other issues that will matter regardless” — such as how jobs might change in the face of AI and robotics, and how society will manage that change. For example, will these new technologies be used as just another way to replace human workers and cut costs? Or will they be used to help workers, freeing them to exercise uniquely human abilities like problem-solving and creativity?

“There are many different possible ways we could configure the state of the world,” says Derik Pridmore, CEO of Osaro, a San Francisco-based firm that makes AI software for industrial robots, “and there are a lot of choices we have to make.”

Automation and jobs: lessons from the past

In the United States, at least, today’s debate over artificially intelligent machines and jobs can’t help but be colored by memories of the past four decades, when the total number of workers employed by US automakers, steel mills and other manufacturers began a long, slow decline from a high of 19.5 million in 1979 to about 17.3 million in 2000 — followed by a precipitous drop to a low of 11.5 million in the aftermath of the Great Recession of 2007–2009. (The total has since recovered slightly, to about 12.7 million; broadly similar changes were seen in other heavily automated countries such as Germany and Japan.) Coming on top of a stagnation in wage growth since about 1973, the experience was traumatic.

True, says Bessen, automation can’t possibly be the whole reason for the decline. “If you go back to the previous hundred years,” he says, “industry was automating at as fast or faster rates, and employment was growing robustly.” That’s how we got to millions of factory workers in the first place. Instead, economists blame the employment drop on a confluence of factors, among them globalization, the decline of labor unions, and a 1980s-era corporate culture in the United States that emphasized downsizing, cost-cutting and quarterly profits above all else.

But automation was certainly one of those factors. “In the push to reduce costs, we collectively took the path of least resistance,” says Prasad Akella, a roboticist who is founder and CEO of Drishti, a start-up firm in Palo Alto, California, that uses AI to help workers improve their performance on the assembly line. “And that was, ‘Let’s offshore it to the cheapest center, so labor costs are low. And if we can’t offshore it, let’s automate it.’”

AI and robots in the workplace

Automation has taken many forms, including computer-controlled steel mills that can be operated by just a handful of employees, and industrial robots, mechanical arms that can be programmed to move a tool such as a paint sprayer or a welding torch through a sequence of motions. Such robots have been employed in steadily increasing numbers since the 1970s. There are currently about 2 million industrial robots in use globally, mostly in automotive and electronics assembly lines, each taking the place of one or more human workers.

The distinctions among automation, robotics and AI are admittedly rather fuzzy — and getting fuzzier, now that driverless cars and other advanced robots are using artificially intelligent software in their digital brains. But a rough rule of thumb is that robots carry out physical tasks that once required human intelligence, while AI software tries to carry out human-level cognitive tasks such as understanding language and recognizing images. Automation is an umbrella term that not only encompasses both, but also includes ordinary computers and non-intelligent machines.

AI’s job is toughest. Before about 2010, applications were limited by a paradox famously pointed out by the philosopher Michael Polanyi in 1966: “We can know more than we can tell” — meaning that most of the skills that get us through the day are practiced, unconscious and almost impossible to articulate. Polanyi called these skills tacit knowledge, as opposed to the explicit knowledge found in textbooks.

Imagine trying to explain exactly how you know that a particular pattern of pixels is a photograph of a puppy, or how you can safely negotiate a left-hand turn against oncoming traffic. (It sounds easy enough to say “wait for an opening in traffic” — until you try to define an “opening” well enough for a computer to recognize it, or to define precisely how big the gap must be to be safe.) This kind of tacit knowledge contained so many subtleties, special cases and things measured by “feel” that there seemed no way for programmers to extract it, much less encode it in a precisely defined algorithm.
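For the technically minded, a toy example makes Polanyi’s point concrete. Below is a deliberately naive attempt, in Python, to write the “wait for an opening” judgment as explicit rules; every threshold is an invented assumption, and the comments flag only a few of the tacit subtleties the rules cannot capture.

    # A deliberately naive, rule-based attempt to encode the "wait for an
    # opening" judgment. Every threshold is an invented assumption, and the
    # comments mark just a few of the tacit subtleties the rule ignores.

    def gap_is_safe(gap_seconds: float, oncoming_speed_mps: float,
                    road_is_wet: bool) -> bool:
        """Return True if a left turn through this gap 'should' be safe."""
        required = 6.0                  # why 6? a human never computes this
        if road_is_wet:
            required += 2.0             # a feel-based adjustment, made explicit
        if oncoming_speed_mps > 20.0:
            required += 1.5             # fast traffic closes gaps deceptively
        # Unhandled: driver eye contact, a ball rolling into the street, an
        # ambiguous turn signal, sun glare, a truck hiding a motorcycle...
        return gap_seconds >= required

    print(gap_is_safe(7.0, 15.0, road_is_wet=False))  # True
    print(gap_is_safe(7.0, 25.0, road_is_wet=True))   # False: needs 9.5 s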

Today, of course, even a smartphone app can recognize puppy photos (usually), and autonomous vehicles are making those left-hand turns routinely (if not always perfectly). What’s changed just within the past decade is that AI developers can now throw massive computer power at massive datasets — a process known as “deep learning.” This basically amounts to showing the machine a zillion photographs of puppies and a zillion photographs of not-puppies, then having the AI software adjust a zillion internal variables until it can identify the photos correctly.
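In code, that training loop looks roughly like the sketch below, which assumes the PyTorch library and uses random tensors as stand-ins for the labeled photographs. It illustrates the idea, not any particular production system.

    # A minimal sketch of the "deep learning" loop described above: show the
    # network labeled examples (puppy vs. not-puppy) and repeatedly adjust its
    # internal variables until its guesses match the labels.
    import torch
    import torch.nn as nn

    # Stand-in data: random 64x64 RGB "images" with made-up labels. In a real
    # system these would be the zillion labeled photographs.
    images = torch.randn(256, 3, 64, 64)
    labels = torch.randint(0, 2, (256, 1)).float()  # 1 = puppy, 0 = not-puppy

    # A tiny convolutional network; its weights are the "internal variables."
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(4),             # 64x64 feature maps -> 16x16
        nn.Flatten(),
        nn.Linear(8 * 16 * 16, 1),   # one output score: puppy or not
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # how wrong the weights are
        loss.backward()                        # compute a nudge per weight
        optimizer.step()                       # apply the nudges: "learning"
        print(f"epoch {epoch}: loss {loss.item():.3f}")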

Although this deep learning process isn’t particularly efficient — a human child only has to see one or two puppies — it’s had a transformative effect on AI applications such as autonomous vehicles, machine translation and anything requiring voice or image recognition. And that’s what’s freaking people out, says Jim Guszcza, US chief data scientist at Deloitte Consulting in Los Angeles: “Wow — things that before required tacit knowledge can now be done by computers!” Thus the new anxiety about massive job losses in fields like law and journalism that never had to worry about automation before. And thus the many predictions of rapid obsolescence for store clerks, security guards and fast-food workers, as well as for truck, taxi, limousine and delivery van drivers.

Meet my colleague, the robot

But then, bank tellers were supposed to become obsolete, too. What happened instead, says Bessen, was that automation via ATMs not only expanded the market for tellers, but also changed the nature of the job: As tellers spent less time simply handling cash, they spent more time talking with customers about loans and other banking services. “And as the interpersonal skills have become more important,” says Bessen, “there has been a modest rise in the salaries of bank tellers,” as well as an increase in the number of full-time rather than part-time teller positions. “So it’s a much richer picture than people often imagine,” he says.

Similar stories can be found in many other industries. (Even in the era of online shopping and self-checkout, for example, the employment numbers for retail trade are going up smartly.) The fact is that, even now, it’s very hard to completely replace human workers.

Steel mills are an exception that proves the rule, says Bryan Jones, CEO of JR Automation, a firm in Holland, Michigan, that integrates various forms of hardware and software for industrial customers seeking to automate. “A steel mill is a really nasty, tough environment,” he says. But the process itself — smelting, casting, rolling, and so on — is essentially the same no matter what kind of steel you’re making. So the mills have been comparatively easy to automate, he says, which is why the steel industry has shed so many jobs.

When people are better

“Where it becomes more difficult to automate is when you have a lot of variability and customization,” says Jones. “That’s one of the things we’re seeing in the auto industry right now: Most people want something that’s tailored to them,” with a personalized choice of color, accessories or even front and rear grilles. Every vehicle coming down the assembly line might be a bit different.

It’s not impossible to automate that sort of flexibility, says Jones. Pick a task, and there’s probably a laboratory robot somewhere that has mastered it. But that’s not the same as doing it cost-effectively, at scale. In the real world, as Akella points out, most industrial robots are still big, blind machines that go through their motions no matter who or what is in the way, and have to be caged off from people for safety’s sake. With machines like that, he says, “flexibility requires a ton of retooling and a ton of programming — and that doesn't happen overnight.”

Contrast that with human workers, says Akella. The reprogramming is easy: “You just walk onto the factory floor and say, ‘Guys, today we’re making this instead of that.’” And better still, people come equipped with abilities that few robot arms can match, including fine motor control, hand-eye coordination and a talent for dealing with the unexpected.

All of which is why most automakers today don’t try to automate everything on the assembly line. (A few of them did try it early on, says Bessen. But their facilities generally ended up like General Motors’ Detroit-Hamtramck assembly plant, which quickly became a debugging nightmare after it opened in 1985: Its robots were painting each other as often as they painted the Cadillacs.) Instead, companies like Toyota, Mercedes-Benz and General Motors restrict the big, dumb, fenced-off robots to tasks that are dirty, dangerous and repetitive, such as welding and spray-painting. And they post their human workers to places like the final assembly area, where they can put the last pieces together while checking for alignment, fit, finish and quality — and whether the final product agrees with the customer’s customization request.

To help those human workers, moreover, many manufacturers (and not just automakers) are investing heavily in collaborative robots, or “cobots” — one of the fastest-growing categories of industrial automation today.

Collaborative robots: Machines work with people

Cobots are now available from at least half a dozen firms. But they are all based on concepts developed by a team working under Akella in the mid-1990s, when he was a staff engineer at General Motors. The goal was to build robots that are safe to be around, and that can help with stressful or repetitive tasks while still leaving control with the human workers.

To get a feel for the problem, says Akella, imagine picking up a battery from a conveyor belt, walking two steps, dropping it into the car and then going back for the next one — once per minute, eight hours per day. “I've done the job myself,” says Akella, “and I can assure you that I came home extremely sore.” Or imagine picking up a 150-pound “cockpit” — the car’s dashboard, with all the attached instruments, displays and air-conditioning equipment — and maneuvering it into place through the car’s doorway without breaking anything.

Devising a robot that could help with such tasks was quite a novel research challenge at the time, says Michael Peshkin, a mechanical engineer at Northwestern University in Evanston, Illinois, and one of several outside investigators that Akella included in his team. “The field was all about increasing the robots’ autonomy, sensing and capacity to deal with variability,” he says. But until this project came along, no one had focused too much on the robots’ ability to work with people.

So for their first cobot, he and his Northwestern colleague Edward Colgate started with a very simple concept: a small cart equipped with a set of lifters that would hoist, say, the cockpit, while the human worker guided it into place. But the cart wasn’t just passive, says Peshkin: It would sense its position and turn its wheels to stay inside a “virtual constraint surface” — in effect, an invisible midair funnel that would guide the cockpit through the door and into position without a scratch. The worker could then check the final fit and attachments without strain.
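GM’s prototypes were hardware, not published code, but the geometry of a virtual constraint surface can be sketched in a few lines of Python. In this hypothetical 2-D toy, the worker pushes the payload freely while software clamps its sideways position to a funnel that narrows toward the install point; all dimensions are invented.

    # A toy 2-D "virtual funnel": the worker supplies the pushes, and the
    # cobot only permits positions inside a funnel that narrows toward the
    # install point at height y = 0. All dimensions here are invented.

    def funnel_half_width(y: float) -> float:
        """Allowed sideways play shrinks as the payload nears the target."""
        return 0.05 + 0.3 * y           # meters: 5 cm of play at the target

    def constrained_step(x, y, push_x, push_y):
        """Apply the worker's push, then clamp x to the funnel wall."""
        y_new = max(0.0, y + push_y)    # the worker controls the descent
        limit = funnel_half_width(y_new)
        x_new = max(-limit, min(limit, x + push_x))  # steering enforces this
        return x_new, y_new

    # The worker shoves sideways while lowering; the funnel guides it in.
    x, y = 0.3, 1.0
    for _ in range(10):
        x, y = constrained_step(x, y, push_x=0.08, push_y=-0.12)
        print(f"y={y:.2f} m  x={x:+.3f} m  (limit ±{funnel_half_width(y):.3f})")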

Another GM-sponsored prototype replaced the cart with a worker-guided robotic arm that could lift auto components while hanging from a movable suspension point on the ceiling. But it shared the same principle of machine assistance plus worker control — a principle that proved to be critically important when Peshkin and his colleagues tried out their prototypes on General Motors’ assembly line workers.

“We expected a lot of resistance,” says Peshkin. “But in fact, they were welcoming and helpful. They totally understood the idea of saving their backs from injury.” And just as important, the workers loved using the cobots. They liked being able to move a little faster or a little slower if they felt like it. “With a car coming along every 52 seconds,” says Peshkin, “that little bit of autonomy was really important.” And they liked being part of the process. “People want their skills to be on display,” he says. “They enjoy using their bodies, taking pleasure in their own motion.” And the cobots gave them that, he says: “You could swoop along the virtual surface, guide the cockpit in and enjoy the movement in a way that fixed machinery didn’t allow.”

AI and its limits

Akella’s current firm, Drishti, reports a similarly welcoming response to its AI-based software. Details are proprietary, says Akella. But the basic idea is to use advanced computer vision technology to function somewhat like a GPS for the assembly line, giving workers turn-by-turn instructions and warnings as they go. Say that a worker is putting together an iPhone, he explains, and the camera watching from overhead believes that only three out of four screws were secured: “We alert the worker and say, ‘Hey, just make sure to tighten that screw as well before it goes down the line.’”
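Drishti’s software is proprietary, so the sketch below is only a generic illustration of the alert logic that example implies: compare the steps a vision model reports against an expected checklist and flag whatever is missing. The step names and detections are hypothetical.

    # Generic sketch of assembly-line alerting (not Drishti's actual code).
    # In reality, `detected` would come from a vision model watching overhead.

    EXPECTED = {"screw_1", "screw_2", "screw_3", "screw_4"}

    def check_station(detected: set) -> list:
        """Return a human-readable alert for each step the camera missed."""
        missing = EXPECTED - detected
        return [f"Alert: tighten {step} before the unit moves down the line."
                for step in sorted(missing)]

    # Hypothetical frame: the model saw only three of the four screws.
    for alert in check_station({"screw_1", "screw_2", "screw_4"}):
        print(alert)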

This does have its Big Brother aspects, admits Drishti’s marketing director, David Prager. “But we’ve got a lot of examples of operators on the floor who become very engaged and ultimately very appreciative,” he says. “They know very well the specter of automation and robotics bearing down on them, and they see very quickly that this is a tool that helps them be more efficient, more precise and ultimately more valuable to the company. So the company is more willing to invest in its people, as opposed to getting them out of the equation.”

This theme — using technology to help people do their jobs rather than replacing people — is likely to be a characteristic of AI applications for a long time to come. Just as with robotics, there are still some important things that AI can’t do.

Take medicine, for example. Deep learning has already produced software that can interpret X-rays as well as or better than human radiologists, says Darrell West, a political scientist who studies innovation at the Brookings Institution in Washington, DC. “But we’re not going to want the software to tell somebody, ‘You just got a possible cancer diagnosis,’” he says. “You’re still going to need a radiologist to check on the AI, to make sure that what it observed actually is the case” — and then, if the results are bad, a cancer specialist to break the news to the patient and start planning out a course of treatment.

Likewise in law, where AI can be a huge help in finding precedents that might be relevant to a case — but not in interpreting them, or using them to build a case in court. More generally, says Guszcza, deep-learning-based AI is very good at identifying features and focusing attention where it needs to be. But it falls short when it comes to things like dealing with surprises, integrating many diverse sources of knowledge and applying common sense — “all the things that humans are very good at.”

And don’t ask the software to actually understand what it’s dealing with, says Guszcza. During the 2016 election campaign, to test Google’s Translate utility, he tried a classic experiment: Take a headline — “Hillary slams the door on Bernie” — then ask Google to translate it from English to Bengali and back again. Result: “Barney slam the door on Clinton.” A year later, after Google had done a massive upgrade of Translate using deep learning, Guszcza repeated the experiment with the result: “Hillary Barry opened the door.”
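The round-trip test itself is easy to reproduce with any translation backend. In this sketch, translate() is a mock that merely tags each hop, since real services such as Google’s Cloud Translation API require credentials; the English-to-Bengali-and-back structure is the point.

    # Guszcza's round-trip test, with a mock backend. Swap in a real
    # translation service to reproduce the experiment; the mock here only
    # tags each hop so the plumbing is visible and the script runs anywhere.

    def translate(text: str, src: str, dest: str) -> str:
        return f"[{src}->{dest}] {text}"   # stand-in for a real API call

    def round_trip(headline: str) -> str:
        """English -> Bengali -> English, as in the experiment above."""
        bengali = translate(headline, src="en", dest="bn")
        return translate(bengali, src="bn", dest="en")

    # With a real backend, this famously returned "Barney slam the door on
    # Clinton" in 2016 and "Hillary Barry opened the door" a year later.
    print(round_trip("Hillary slams the door on Bernie"))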

“I don’t see any evidence that we’re going to achieve full common-sense reasoning with current AI,” he says, echoing a point made by many AI researchers themselves. In September 2017, for example, deep learning pioneer Geoffrey Hinton, a computer scientist at the University of Toronto, told the news site Axios that the field needs some fundamentally new ideas if researchers ever hope to achieve human-level AI.

Job evolution

AI’s limitations are another reason why economists like Bessen don’t see it causing mass unemployment anytime soon. “Automation is almost always about automating a task, not the entire job,” he says, echoing a point made by many others. And while every job has at least a few routine tasks that could benefit from AI, there are very few jobs that are all routine. In fact, says Bessen, when he systematically looked at all the jobs listed in the 1950 census, “there was only one occupation that you could say was clearly automated out of existence — elevator operators.” There were 50,000 in 1950, and effectively none today.

On the other hand, you don’t need mass unemployment to have massive upheaval in the workplace, says Lee Rainie, director of internet and technology research at the Pew Research Center in Washington, DC. “The experts are hardly close to a consensus on whether robotics and artificial intelligence will result in more jobs, or fewer jobs,” he says, “but they will certainly change jobs. Everybody expects that this great sorting out of skills and functions will continue for as far as the eye can see.”

Worse, says Rainie, “the most worried experts in our sample say that we’ve never in history faced this level of change this rapidly.” It’s not just information technology, or artificial intelligence, or robotics, he says. It’s also nanotechnology, biotechnology, 3-D printing, communication technologies — on and on. “The changes are happening on so many fronts that they threaten to overwhelm our capacity to adjust,” he says.

Preparing for the future of work

If so, the resulting era of constant job churn could force some radical changes in the wider society. Suggestions from Pew’s experts and others include an increased emphasis on continuing education and retraining for adults seeking new skills, and a social safety net that has been revamped to help people move from job to job and place to place. There is even emerging support in the tech sector for some kind of guaranteed annual income, on the theory that advances in AI and robotics will eventually transcend the current limitations and make massive workplace disruptions inevitable, meaning that people will need a cushion.

This is the kind of discussion that gets really political really fast. And at the moment, says Rainie, Pew’s opinion surveys show that it’s not really on the public’s radar: “There are a lot of average folks, average workers saying, ‘Yeah, everybody else is going to get messed up by this — but I’m not. My business is in good shape. I can’t imagine how a machine or a piece of software could replace me.’”

But it’s a discussion that urgently needs to happen, says West. Just looking at what’s already in the pipeline, he says, “the full force of the technology revolution is going to take place between 2020 and 2050. So if we make changes now and gradually phase things in over the next 20 years, it’s perfectly manageable. But if we wait until 2040, it will probably be impossible to handle.”

Editor’s note: This story was updated on August 1 to correct the details of an experiment by Jim Guszcza. The story originally said that an experiment during the 2016 election campaign was conducted to see how much deep learning had improved Google’s Translate ability; in fact, the 2016 experiment was conducted before Google had fully upgraded Translate with deep learning. The initial test was done with the headline “Hillary slams the door on Bernie,” not “Bernie slams the door on Hillary” as originally stated. The headline that resulted after translation from English to Bengali and back again was "Barney slam the door on Clinton," not “Barry is blaming the door at the door of Hillary's door.” The deep-learning improvements were tested a year later with the same initial headline and the resulting headline after the translation to Bengali and back was “Hillary Barry opened the door.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. 

Humans beat robots, hands down

We can readily manipulate all kinds of objects; for them, versatility is a huge struggle. They need better mechanics — and a lot more of the intelligence that goes into handling things.

Like it or not, we’re surrounded by robots. Thousands of Americans ride to work these days in cars that pretty much drive themselves. Vacuum cleaners scoot around our living rooms on their own. Quadcopter drones automatically zip over farm fields, taking aerial surveys that help farmers grow their crops. Even scary-looking humanoid robots, ones that can jump and run like us, may be commercially available in the near future.

Robotic devices are getting pretty good at moving around our world without any intervention from us. But despite these newfound skills, they still come with a major weakness: The most talented of the bunch can still be stopped in their tracks by a simple doorknob.

The issue, says Matt Mason, a roboticist at Carnegie Mellon University, is that for all of robots’ existing abilities to move around the world autonomously, they can’t yet physically interact with objects in a meaningful way once they get there.

“What have we learned from robotics? The number one lesson is that manipulation is hard. This is contrary to our individual experience, since almost every human is a skilled manipulator,” writes Mason in a recent review article.

It’s a fair point. We humans manipulate the world around us without thinking. We grab, poke, twist, chop and prod objects almost unconsciously, thanks in part to our incredibly dexterous hands. As a result, we’ve built our worlds with those appendages in mind. All the cellphones, keyboards, radios and other tools we’ve handled throughout our lifetime have been designed explicitly to fit into our fingers and palms.

Not so for existing robots. At the moment, one of the most widely used robotic hand designs, called a “gripper,” is more or less identical to ones imagined on TV in the 1960s: a device made of two stiff metal fingers that pinch objects between them.

In a controlled environment like an assembly line, devices like these work just fine. If a robot knows that every time it reaches for a specific part, it’ll be in the same place and orientation, then grasping it is trivial. “It’s clear what kind of part is going to come down the conveyor belt, which makes sensing and perception relatively easy for a robot,” notes Jeannette Bohg, a roboticist at Stanford University.

The real world, on the other hand, is messy and full of unknowns. Just think of your kitchen: There may be piles of dishes drying next to the sink, soft and fragile vegetables lining the fridge, and multiple utensils stuffed into narrow drawers. From a robot’s perspective, Bohg says, identifying and manipulating that vast array of objects would be utter chaos.

“This is in a way the Holy Grail, right? Very often, you want to manipulate a wide range of objects that people commonly manipulate, and have been made to be manipulated by people,” says Matei Ciocarlie, a robotics researcher and mechanical engineer at Columbia University. “We can build manipulators for specific objects in specific situations. That’s not a problem. It’s versatility that’s the difficulty.”

To deal with the huge number of unique shapes and physical properties of those materials — whether they’re solid like a knife, or deformable, like a piece of plastic wrap — an ideal robotic appendage would necessarily be something that resembles what’s at the end of our arms. Even with rigid bones, our hands bend and flex as we grasp items, so if a robot’s hand can do the same, it could “cage” objects inside its grasp, and move them around on a surface by raking at them like an infant does her toys.

Engineering that versatility is no small feat. When engineers at iRobot — the same company that brought you the Roomba vacuum cleaner — developed a flexible, three-fingered “hand” several years ago, it was hailed as a major achievement. Today, roboticists continue to turn away from a faithful replica of the human hand, looking toward squishy materials and better computational tools like machine learning to control them.

The quest for soft, flexible “hands”

“Humanlike grippers tend to be much more delicate and much more expensive, because you have a lot more motors and they’re packed into a small space,” says Dmitry Berenson, who studies autonomous robotic manipulation at the University of Michigan. “Really, you’ve got to have a lot of engineering to make it work, and a lot of maintenance, usually.” Because of those limitations, he says, existing humanlike hands aren’t widely used by industry.

For a robotic hand to be practical and even come close to a human’s in ability, it would have to be firm but flexible; be able to sense cold, heat and touch at high resolutions; and be gentle enough to pick up fragile objects but robust enough to withstand a beating. Oh, and on top of all that, it would have to be cheap.

To get around this problem, some researchers are looking to create a happy medium. They’re testing hands that mimic some of the traits of our own, but are far simpler to design and build. Each one uses soft latex “fingers” driven by tendon-like cables that pull them open and closed. The advantage of these sorts of designs is their literal flexibility — when they encounter an object, they can squish around it, form to its complex shape, and scoop it up neatly.

Such squishy “hands” offer a major improvement over a hard metal gripper. But they only begin to solve the issue. Although a rubbery finger works great for picking up all sorts of objects, it will struggle with fine motor skills needed for simple tasks like placing a coin into a slot — which involves not just holding the coin, but also feeling the slot, avoiding its edges, and sliding the coin inside. For that reason, says Ciocarlie, creating sensors that tell robots more about the objects they touch is an equally important part of the puzzle.

Our own fingertips have thousands of individual touch receptors embedded within the skin. “We don’t really know how to build those kinds of sensors, and even if we did, we would have a very hard time wiring them and getting that info back out,” Ciocarlie says.

The sheer number of sensors required would raise a second, even knottier issue: what to do with all that information once you have it. Computational methods that let a robot use huge amounts of sensory data to plan its next move are starting to emerge, says Berenson. But getting those abilities up to where they need to be may trump all other challenges researchers face in achieving autonomous manipulation. Building a robot that can use its “hands” quickly and seamlessly — even in completely novel situations — may not be possible unless engineers can endow it with a form of complex intelligence.

That brainpower is something many of us humans take for granted. To pick up a pencil on our desk, we simply reach out and grab it. When eating dinner, we use tongs, forks, and chopsticks to grab our food with grace and precision. Even amputees who have lost upper limbs can learn to use prosthetic hooks for tasks that require fine motor skills.

“They can tie their shoes, they can make a sandwich, they can get dressed — all with the simplest mechanism. So we know it’s possible if you have the right intelligence behind it,” Berenson says.

Teaching the machine

Getting to that level of intelligence in a robot may require a leap in the current methods researchers use to control them, says Bohg. Until recently, most manipulation software has involved building detailed mathematical models of real-world situations, then letting the robot use those models to plan its motion. One recently built robot tasked with assembling an Ikea chair, for example, uses a software model that can recognize each individual piece, understand how it fits together with its neighbors, and compare it to what the final product looks like. It can finish the assembly job in about 20 minutes. Ask it to assemble a different Ikea product, though, and it’ll be completely flummoxed.

Humans develop skills very differently. Instead of having deep knowledge on a single narrow topic, we absorb knowledge on the fly from example and practice, reinforcing attempts that work, and dismissing ones that don’t. Think back to the first time you learned how to chop an onion — once you figured out how to hold the knife and slice a few times, you likely didn’t have to start from scratch when you encountered a potato. So how does one get a robot to do that?

Bohg thinks the answer may lie in “machine learning,” a sort of iterative process that allows a robot to understand which manipulation attempts are successful and which aren’t — and enables it to use that information to maneuver in situations it’s never encountered.

“Before machine learning entered the field of robotics, it was all about modeling the physics of manipulation — coming up with mathematical descriptions of an object and its environment,” she says. “Machine learning lets us give a robot a bunch of examples of objects that someone has annotated, showing it, ‘Here is a good place to grab.’” A robot could use these past data to look at an entirely new object and understand how to grasp it.
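As a toy illustration of that annotate-then-generalize recipe (not Bohg’s actual pipeline), the sketch below trains a scikit-learn classifier on a handful of invented grasp candidates labeled good or bad, then scores candidates on an unseen object. The features and data are assumptions chosen for readability.

    # Toy version of learning "where to grab" from human annotations.
    # Features per candidate grasp (all invented): required gripper opening
    # (m), surface slant (rad), distance from the object's center of mass (m).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([
        [0.04, 0.1, 0.02],   # narrow, flat, near the middle   -> good
        [0.05, 0.2, 0.03],   # similar                         -> good
        [0.11, 0.1, 0.02],   # too wide for the gripper        -> bad
        [0.04, 1.2, 0.02],   # steeply slanted surface         -> bad
        [0.05, 0.2, 0.15],   # far off-center, load will twist -> bad
    ])
    y = np.array([1, 1, 0, 0, 0])  # human labels: 1 = "good place to grab"

    model = LogisticRegression().fit(X, y)

    # Score grasp candidates on an object the model has never seen.
    new_candidates = np.array([[0.05, 0.15, 0.03], [0.12, 0.90, 0.10]])
    print(model.predict_proba(new_candidates)[:, 1])  # P(good) per candidate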

This method represents a major change from previous modeling techniques, but it may be a while before it’s sophisticated enough to let robots learn entirely on their own, says Berenson. Many existing machine-learning algorithms need to be fed vast amounts of data about possible outcomes — like all the potential moves in a chess game — before they can start to work out the best possible plan of attack. In other cases, they may need hundreds, if not thousands, of attempts to manipulate a given object before they stumble across a strategy that works.

That will have to change if a robot is to move and interact with the world as quickly as people can. Instead, Berenson says, an ideal robot should be able to develop new skills in just a few steps using trial and error, or be able to extrapolate new actions from a single example.

“The big question to overcome is, how do we update a robot’s models not with 10 million examples, but one?” he says. “To get it to a point where it says, ‘OK, this didn’t work, so what do I do next?’ That’s the real learning question I see.”

Mason, the roboticist from Carnegie Mellon, agrees. The challenge of programming robots to do what we do mindlessly, he says, is summed up by something called Moravec’s paradox (named after the robotics pioneer Hans Moravec, who also teaches at Carnegie Mellon). It states, in short, that what’s hard for humans to do is often handled with ease by robots, but what’s second nature for us is incredibly hard to program. A computer can play chess better than any person, for instance — but getting it to recognize and pick up a chess piece on its own has proved to be staggeringly difficult.

For Mason, that still rings true. Despite the gradual progress that researchers are making on robotic control systems, he says, the basic concept of autonomous manipulation may be one of the toughest nuts the field has yet to crack.

“Rational, conscious thinking is a relatively recent development in evolution,” he says. “We have all this other mental machinery that over hundreds of millions of years developed the ability to do amazing things, like locomotion, manipulation, perception. Yet all those things are happening below the conscious level.

“Maybe the stuff we think of as higher cognitive function, like being able to play chess or do algebra — maybe that stuff is dead trivial compared to the mechanics of manipulation.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. 

Rejected Oklahoma plea for death penalty commutation highlights clemency’s changing role in US death penalty system

Protesters demonstrate against the conviction and death sentence of Richard Glossip. Larry French/Getty Images for MoveOn.org
Austin Sarat, Amherst College

When the Oklahoma Pardon and Parole Board decided not to recommend clemency for death row inmate Richard Glossip, the case highlighted the role clemency plays in the death penalty system.

Glossip had asked the board to commute the sentence he had been given for his role in an alleged murder-for-hire plot. He was convicted of paying his co-defendant, Justin Sneed, to kill Barry Van Treese in 1997. Van Treese owned the motel where Glossip was the manager.

The board, which met April 26, 2023, was split 2-2 over recommending that Glossip’s sentence be changed to life in prison. The fifth member of the board recused himself because his spouse was involved in Glossip’s prosecution. A majority vote of three is required for a favorable clemency recommendation.

Because Oklahoma law does not permit clemency without a positive recommendation from the board, its decision sets the stage for Glossip’s execution on May 18.

From the start, Glossip, who had never before been arrested for any crime, maintained his innocence. His case has attracted wide attention, including from some of Oklahoma’s most conservative Republican legislators, who contend that if the state puts him to death it will be executing an innocent man.

Oklahoma’s case against Glossip rested on the testimony of Sneed, who was induced to be a witness with a promise of a reduced sentence. In addition, the prosecution destroyed evidence that would have supported Glossip’s claim of innocence, and new witnesses have come forward who further undermine confidence in the verdict.

An independent investigation by a law firm engaged by state legislators concluded that “no reasonable juror hearing the complete record would have convicted Richard Glossip of first-degree murder” and that his trial could not “provide a basis for the government to take … [his] life.”

Even the state’s Republican attorney general, Gentner Drummond, has said Glossip is probably innocent and that “it would be a grave injustice to allow the execution of a man whose trial was plagued by many errors.”

Drummond asked the Oklahoma Court of Criminal Appeals to vacate Glossip’s conviction and grant him a new trial. The court refused on April 20, 2023, which led to the parole board hearing the following week.

As someone who has studied the history of clemency in capital cases, I see three elements that make this case noteworthy: Attorney General Drummond’s actions, the attempt to use clemency to prevent a miscarriage of justice, and the fact that grants of clemency in death cases are today quite rare.

The role of the attorney general

Oklahoma Attorney General Gentner Drummond. AP Photo/Sue Ogrocki

Clemency hearings like Glossip’s are proceedings in which opposing sides – representing the condemned and the government prosecutors – present evidence and arguments. In Oklahoma, family members of the victim are also given time to make their views known.

In 1998, the U.S. Supreme Court gave its approval to that kind of procedure when it held that clemency hearings must afford due process to the participants. The court said the condemned person must be given an opportunity to convince a clemency board that the government should not put them to death – just as the government gets to defend its decision to do so.

And, as my research indicates, that is what the government has almost always done when its representatives participate in such a process.

But not in the Glossip case. Drummond, his state’s top prosecutor, took the unprecedented step of siding with the petitioner – even against other state officials.

“I want to acknowledge how unusual it is for the state to support a clemency application of a death row inmate,” Drummond told the Pardon and Parole Board. “I’m not aware of any time in our history that an attorney general has appeared before this board and argued for clemency. I’m also not aware of any time in the history of Oklahoma when justice would require it.”

Clemency as grace – or justice

I believe Drummond’s reference to justice would have surprised many of this country’s founders.

For them, doing justice was a matter for the courts. Clemency was about something else.

In United States v. Wilson, a decision from 1833 and the first case about clemency to be decided by the United States Supreme Court, Chief Justice John Marshall made that distinction clear. Instead of equating clemency and justice, he called clemency an “act of grace, proceeding from the power entrusted with the execution of the laws.”

Clemency, Marshall continued, “exempts the individual on whom it is bestowed from the punishment the law inflicts for a crime he has committed. It is … delivered to the individual for whose benefit it is intended, and not communicated officially to the court.”

A little more than 20 years after Marshall wrote that, another Supreme Court justice, James Wayne, reinforced this separation of clemency and justice. He noted that clemency was about “forgiveness, release and remission.” Wayne said it was a “work of mercy … [that] forgiveth any crime, offense, punishment, execution, right, title, debt or duty, temporal or ecclesiastical.”

But over the course of American history, both public and judicial understandings of the purpose of clemency have changed, with grace, forgiveness and mercy being replaced by justice.

Clemency, especially in capital cases, has come to be associated almost exclusively with correcting errors made in trials and other legal proceedings. Clemency hearings are now generally just another arena to which inmates like Richard Glossip can appeal for justice.

This view reached its height in the 1993 Supreme Court decision Herrera v. Collins, in which the court said that “A proper remedy for the claim of actual innocence … would be executive clemency” – a commutation or a pardon granted by a governor or the president.

Clemency, the court continued – using language that neither Marshall nor Wayne would have recognized – “is the historic remedy for preventing miscarriages of justice where judicial process has been exhausted.”

One example of this use of clemency occurred in 1998, when Gov. George W. Bush commuted the death sentence of Henry Lee Lucas after what Bush said were “serious concerns … about his guilt in this case.”

Clemency is rare in capital cases

Richard Glossip. Oklahoma Department of Corrections via AP

Glossip, joined by Attorney General Drummond, sought clemency in the hope of preventing a miscarriage of justice like the one Bush cited as a reason to save Lucas’ life. Given the facts of Glossip’s case, what the Pardon and Parole Board did shocked many observers. But, from the perspective of clemency’s recent record in capital cases, the result should not have been surprising.

As my research has shown, a century ago clemency was granted in about 25% of capital cases. But in more recent years, according to the nonprofit Death Penalty Information Center, clemencies in capital cases have been “rare.” The center notes, “Aside from the occasional blanket grants of clemency by governors concerned about the overall fairness of the death penalty, less than two have been granted on average per year since 1976. In the same period, more than 1,500 cases have proceeded to execution.”

While the center does not indicate how often clemency was sought in those cases, requesting clemency is often a standard part of the efforts death penalty defense lawyers make to try to save their clients.

It is hard to get clemency in capital cases because, as the center explains, “Governors are subject to political influence, and even granting a single clemency can result in harsh attacks.” As a result, “clemencies in death penalty cases have been unpredictable and immune from review.”

And what is true nationwide is also true in Oklahoma, where during the past half-century there have been only five grants of clemency in capital cases.

Following the denial of clemency, Glossip’s lawyers have promised to keep fighting and are asking both state and federal courts to stay his execution. Meanwhile, Gov. Kevin Stitt has said he will do nothing to delay Glossip’s date with death.

Austin Sarat, William Nelson Cromwell Professor of Jurisprudence and Political Science, Amherst College

This article is republished from The Conversation under a Creative Commons license. 

The Importance of Mental Wellness for a Healthy Heart and Brain

Research shows anxiety, stress and depression can have a negative impact on physical health and may even increase the risk for heart disease and stroke.

In fact, the American Heart Association, the world’s leading nonprofit organization focused on heart and brain health, identified a strong interconnection between the mind, heart and body in its scientific statement, “Psychological Health, Well-Being and the Mind-Heart-Body Connection.”

“Research has clearly demonstrated negative psychological factors, personality traits and mental health disorders can negatively impact cardiovascular health,” said volunteer chair of the statement writing committee Glenn N. Levine, M.D., FAHA, master clinician and professor of medicine at Baylor College of Medicine and chief of the cardiology section at the Michael E. DeBakey VA Medical Center. “The body’s biological reaction to stress, anxiety and other types of poor mental health can manifest physically through an irregular heart rate or rhythm, increased blood pressure and inflammation throughout the body. Negative psychological health is also associated with health behaviors that are linked to an increased risk for heart disease and stroke, such as smoking, lower levels of physical activity, unhealthy diet, being overweight and not taking medications as prescribed.”

Studies have found some people, including people of color, may face a greater risk of poor health outcomes due to chronic stress, depression and anxiety linked to psychosocial stressors, particularly those related to social and economic inequality, discrimination, systemic racism and other societal factors. A study published in the “Journal of the American Heart Association” found U.S. adults who reported feeling highly discriminated against at work had an increased risk of developing high blood pressure compared to those who reported low discrimination at work.

“Mental health includes our emotional, psychological and social well-being,” Levine said. “It affects how we think, feel and act. It also helps determine how we handle stress, relate to others and make choices. Practicing mindfulness in all forms allows one to be more aware of and have more control over emotional responses to the experiences of daily life.”

Consider these tips from Levine to improve your mind-heart-body connection:

  • Practice meditation regularly. Even simple actions such as communing with nature or sitting quietly and focusing on your breath can have a positive impact.
  • Get plenty of good, restful sleep. Set a regular bedtime, turn off or dim electronics as bedtime approaches and form a wakeup routine.
  • Make connections and stay in touch. Reach out and connect regularly with family and friends, or engage in activities to meet new people.
  • Practice mindful movement. There are many types of gentle mindful practices like yoga and tai chi that can be done just about anywhere with no special equipment to help ease your soul and muscles.
  • Spend time with your furry friend. Companion animals are often beloved members of the family and research shows pets may help reduce physiological reactions to stress as well as support improved physical activity.
  • Work it out. Regular physical activity – a recommended 150 minutes of moderate activity, 75 minutes of vigorous activity or a mix of both weekly – can help relieve tension, anxiety and depression, and give you an immediate exercise “high.”

“Wellness is more than simply the absence of disease,” Levine said. “It is an active process directed toward a healthier, happier and more fulfilling life. When we strive to reduce negative aspects of psychological health, we are promoting an overall positive and healthy state of being.”

Learn more about the importance of heart health at heart.org.

SOURCE:
American Heart Association

Cookies, chips, hot dogs and other ultraprocessed fare raise risk of runaway eating

From the earliest days of their evolution, guts and brains have been the best of friends.

It’s a mutually beneficial relationship. Guts prepare nourishment for delivery to the brain. And brains guide the behaviors needed to fill the gut with raw materials.

Even today, the primitive need to serve the gut’s hunger remains implanted in the human brain’s blueprint for directing behavior. But nowadays, food sometimes drives the brain to behave in ways that aren’t as useful for survival as the original evolutionary programming. In recent decades a mismatch has evolved between the food available to hungry humans and the brain circuitry designed to acquire it. Instead of scrounging for scarce sources of high-quality calories as in the era of hunting and gathering, modern humans are flooded with a glut of ultraprocessed foods, designed to appeal to the brain’s ancient evolutionary imperatives — whether the body needs the food or not.

“Ultraprocessed foods are the result of processing naturally occurring substances … and refining them into evolutionarily novel substances,” Ashley Gearhardt and Erica Schulte write in a review to appear in the 2021 Annual Review of Nutrition. Such substances tempt human taste buds with unnaturally high levels of ingredients that stimulate brain regions related to reward and motivation. As a result, consuming food today can engage behaviors similar to those accompanying addiction to drugs of abuse, write psychologists Gearhardt, of the University of Michigan, and Schulte, who recently moved from the University of Pennsylvania to Drexel University.

Traditionally, addiction experts have dismissed the notion that potato chips or ice cream could be addictive in the same sense as, say, heroin or alcohol. Those drugs can produce debilitating intoxication, and ceasing their use often leads to severe withdrawal symptoms. But by the 21st century, it became clear that tobacco also induces many features of addiction without intoxication. And quitting smoking might make people irritable, but it does not inflict intolerable suffering.

“There is now scientific consensus that tobacco is a highly addictive substance,” say Gearhardt and Schulte. “Like tobacco, ultraprocessed foods do not trigger intoxication and do not cause life-threatening physical withdrawal symptoms, but people are prone to compulsively consume them even in the face of significant negative consequences.”

Tobacco’s addictive power demonstrates that addiction is not a simple condition with a clear biological signature that can be measured. Rather, addiction encompasses a cluster of symptoms; not all addicts exhibit all the symptoms. Ultraprocessed foods, Gearhardt and Schulte report, can in some people promote many of the behavioral symptoms associated with addiction to nicotine and other drugs.

Of course, people are always driven to obtain food — like water, it is necessary to survive. Nobody would claim that water is therefore addictive. But brain circuits that evolved to seek high-calorie foods such as nuts, fruits and meat are also stimulated by artificial concoctions loaded with sugars and fats.

Those ultraprocessed foods — such as chips, cookies, pizza and pastries — exploit the desire for tasty, high-calorie meals. Just as addictive drugs hijack the brain’s motivation and reward-seeking circuitry, so do ultraprocessed foods. Many people therefore consume those foods compulsively, despite undesirable consequences including excessive weight gain and various related illnesses, from diabetes to heart disease.

“Ultraprocessed foods have been a key factor in the rising global rates of obesity, diet-related disease and poor health,” Gearhardt and Schulte declare.

Beyond the appeal of sugars and fat, ultraprocessed foods often incorporate additional ingredients, such as attractive coloring, flavor enhancers and stabilizers to make chewing easier — helping deliver the reward to the brain faster and more efficiently.

“Ultraprocessed foods are designed to optimize not only the magnitude of the reward signal in the brain through high doses of calorie-dense ingredients and additives but also the speed with which that reward is delivered,” Gearhardt and Schulte point out.

And while a natural food may be high in sugar or high in fat, ultraprocessed foods typically deliver both extra fat and extra sugar in the same package — ready to eat. Add in a vast marketing apparatus (unknown in prehistoric times), and the allure of ultraprocessed foods far exceeds the brain’s intrinsic desire for nutrition.

Foods that reward the brain

Remixing natural ingredients into novel concoctions with enhanced appeal was not invented by the food industry. It parallels the methods for preparing traditional addictive drugs by processing natural substances. Natural fruits can be fermented to make addictive drinks, for instance. Tobacco leaves are processed by drying to make nicotine delivery practical. Additional ingredients like sugars in alcoholic drinks and menthol in cigarettes boost the reward signal delivered to the brain. And just as making ultraprocessed foods mimics the manufacture of drugs of abuse, their excessive consumption evokes similar self-destructive behaviors, recent studies have shown.

Those studies have relied on a research tool developed at Yale University called the Yale Food Addiction Scale, based on the behavioral criteria used for diagnosing substance abuse disorders. Such behavioral indicators include lack of control over consuming a substance, continued consumption despite adverse consequences and unsuccessful attempts to cease use of the substance. No single symptom defines addiction; of the 11 symptoms the scale assesses, the presence of two or three indicates mild addiction, four or five moderate addiction, and six or more severe addiction.
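
As a rough illustration of those cutoffs, here is a minimal sketch in Python. The function name and the 0-to-11 count are ours, and the real instrument also weighs clinically significant impairment, so treat this as a summary of the thresholds above rather than the actual scale.

    def yfas_severity(symptom_count: int) -> str:
        """Map a symptom count to the severity bands described above.

        Illustrative only: thresholds follow the article's summary of
        the Yale Food Addiction Scale (11 possible symptoms); the real
        instrument also requires clinically significant impairment.
        """
        if not 0 <= symptom_count <= 11:
            raise ValueError("symptom count must be between 0 and 11")
        if symptom_count >= 6:
            return "severe"
        if symptom_count >= 4:
            return "moderate"
        if symptom_count >= 2:
            return "mild"
        return "below threshold"

    print(yfas_severity(3))  # -> mild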

Studies using the Yale scale show that obesity alone is not diagnostic of food addiction — not all obese people show addictive eating behaviors. But “there is consistent evidence that food addiction is higher for individuals with obesity,” Gearhardt and Schulte note. Such studies suggest that overall, about 15 percent of the US population may have food addiction (roughly the same prevalence as addiction to alcohol). And those studies consistently show that consuming ultraprocessed foods is linked to addictive eating behavior more often than consuming natural foods such as fruits, vegetables or lean meats.

Besides their addiction-related eating, people diagnosed with food addiction on the Yale scale show other similarities to people addicted to drugs, including “more intense cravings, higher levels of depression and a greater likelihood of experiencing trauma.” Brain scan studies also find similar neural activity patterns in drug addicts and those diagnosed with food addiction.

“Studies using self-report, behavioral and neuroimaging methods have consistently concluded that ultraprocessed foods are the most likely to be associated with features of addiction,” Gearhardt and Schulte report.

Defining the problem

There’s no doubt that ultraprocessed foods motivate more consumption than nutritional needs alone dictate. But some researchers contend that food craving isn’t really a substance abuse disorder like alcoholism, but rather a behavioral addiction, like uncontrolled gambling. Resolving that issue would require identifying which specific chemical substances within the ultraprocessed foods are the key addictive agents.

So far research has not definitively identified such agents. But ultraprocessed foods do appear to evoke chemical changes in the brain (such as altered cellular sensitivity to some signaling molecules) to an extent that natural foods do not. On the whole, Gearhardt and Schulte conclude, research to date provides more support for the view that food addiction is substance-based rather than just a hard-to-break behavior.

They note, though, that “food addiction” is a term in need of refinement. For one thing, more research is needed to clarify which types of food are addictive and how to define them. Gearhardt and Schulte use “ultraprocessed food” to refer to industrially produced products containing such ingredients as high-fructose corn syrup and other additives. Mostly these foods — such as soft drinks, salty snacks, hot dogs and many other fast foods — overlap with “highly processed” foods spiked with refined carbohydrates and fats. Some are also sometimes called “hyperpalatable foods.”

It’s unclear which labels best identify foods likely to cause addictive behavior. And whether homemade versions of foods such as cookies count as addictive can depend on whether the ingredients used to bake them, such as white flour, are themselves processed.

Other issues with terminology enter the debate, of course, with respect to defining “addiction.” Some researchers continue to insist that excessive eating is a behavioral disorder rather than a substance addiction and scoff at the notion that Oreo cookies have anything in common with oxycodone.

But in a larger sense, arguing about whether food is addictive is fruitless. Addiction is not a naturally defined, invariant feature of biology like gravity or electric charge in physics. Addiction is a word. Its use should not be constrained so rigidly that it can’t be deployed in ways that better aid those who suffer from self-destructive behaviors.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. 

The essential fly

Think before you swat: The much-maligned fly could be the key to ensuring future supplies of many of the world’s favorite foods

When entomologist Jonathan Finch turns his dust-caked car off the highway and onto the old wartime airstrip at Manbulloo, he knows what awaits him at the other end: 65,000 blooming mango trees, an indescribably horrible smell and the unmistakable buzz of excited blowflies.

These days, the old airstrip is the access road to the vast Manbulloo mango farm — 4 square kilometers of orchards near the town of Katherine in Australia’s Northern Territory. “It’s a beautiful place — remote, peaceful and blissfully shady beneath the trees,” Finch says. “But the smell is unbelievable. You just can’t get it off you.” Although we are talking on the phone, I get the impression he’s grinning. The loathsome odor, it turns out, is one he created himself. And it’s vital to his research into the pollinating prowess of flies.

Most of us don’t much like flies. Finch, though, is a big fan. He’s part of a team investigating the role that flies play in pollinating crops and whether, like honeybees, they might be managed to improve yields. He’s traveled from Western Sydney University on the other side of the continent to test a widely held belief among mango growers: If you leave out rotting carcasses, flies will come, and more flies mean more mangoes.

Mango growers realized way back that flies are important pollinators. “Some encourage flies by hanging large barrels from their trees and putting roadkill in them,” Finch says. “Other guys bring in a ton of fish and dump it in a heap in the middle of the orchard.” The farmers are convinced that the pungent bait makes a difference, and the biology of blowflies suggests that it might. Yet there’s no scientific proof that it does.

Blowflies are drawn to the smell of rotting flesh because they mate and lay their eggs on corpses and carcasses. They also forage among flowers to fill up on energy-boosting nectar and protein-rich pollen, transporting pollen from one flower to another in the process. So it seems fair to assume that extra flies will pollinate more flowers and the trees will bear more fruit. But do they?

To find out, Finch and his colleagues have co-opted the Manbulloo farmers’ bait barrels and filled them with a mix of fish and chicken. With the temperature hovering around 30°C (86°F), the scent of decay soon wafts through the trees and the team can put the idea to the test.

Reputation reboot

Flies generally get a bad rap. People associate them with dirt, disease and death. “No one except entomologists really likes flies,” Finch says. Yet there’s good reason why we should cherish, encourage, even nurture them: Our future food supply could depend on it. The past few years have seen growing recognition that flies make up a large proportion of wild pollinators — but also that we know little about that side of their lives. Which sorts of fly pollinate what? How effective are they at delivering pollen where it’s needed? Which flies might we harness to boost future harvests — and how to go about it? With insect populations plummeting and honeybees under pressure from multiple threats, including varroa mites and colony collapse disorder, entomologists and pollination specialists are urgently trying to get some answers.

Animals are responsible for pollinating around 76 percent of crop plants, including a large number of globally important ones. Birds, bats and other small mammals do their bit, but insects do much more — pollinating flowers of many fruits, vegetables and nuts, from almonds to avocados, mangoes and melons, cocoa and coconuts, as well as crops grown to provide seed for future vegetable harvests. In a recent analysis for the Annual Review of Entomology, Australia-based biologist Romina Rader and colleagues from Australia, New Zealand and the US calculated that the world’s 105 most widely planted food crops that benefit from insect pollination are worth some $800 billion a year.

Bees, especially honeybees, get most of the credit, but overlooked and underappreciated is a vast army of beetles, butterflies, moths, ants, flies and more. In Rader’s analysis, only a handful of crops were visited exclusively by bees; most were visited by both bees and other insects. She and her colleagues assessed the contribution of each type of insect and found that flies were the most important pollinators after bees, visiting 72 percent of the 105 crops.

The realization that flies perform such a vital service has prompted a big push to learn how to make the most of these unsung heroes, by attracting them to fields and orchards and putting them to work in greenhouses and growing tunnels. As demand for food rises, growers will increasingly rely on managed pollinators reared for the job, and not just honeybees, says Rader. Flies will be crucial to ensuring future food security, she says.

Flies are amazingly diverse and near ubiquitous, living in just about every sort of habitat. Hundreds of species belonging to dozens of families have been reported visiting one or more crops, but two fly families stand out: hoverflies and blowflies. Rader’s analysis showed that hoverflies visit at least 52 percent of the crops studied and blowflies some 30 percent. Some species visit many different crops around the world: One hoverfly, the common drone fly (Eristalis tenax), has been recorded visiting 28 of Rader’s 105 crops, while the marmalade hoverfly (Episyrphus balteatus) is close behind with 24, and the bluebottle Calliphora vicina (a blowfly) visits 8.

Hoverflies and blowflies visit flowers to drink nectar, which fuels energetic activities like flying, and eat pollen to get the nutrients needed for sexual maturation. Like bees, many of these flies are hairy and trap pollen on the head and thorax as they feed. Larger flies can collect — and carry — hundreds and sometimes thousands of pollen grains as they fly from flower to flower. Unlike bees, which must forage close to their hive or nest, flies don’t have to provide for their young and can roam more widely.

They have other advantages too: Some flies forage earlier and later in the day; they tolerate a wider range of temperatures and are active when it’s too cool for bees; and they’ll be out and about even in wet and windy weather that keeps bees at home. And for those growing crops under glass or plastic, there’s potentially another plus. “Bees hate glasshouses and are inclined to sting you,” says Finch. Flies might prove more tolerant of working indoors. And crucially, says Finch: “Flies don’t sting.”

For now, honeybees still tend to do a larger share of crop pollination. With colonies trucked from crop to crop, managed bees generally far outnumber wild pollinators. Yet that’s not always the case. Flies breed faster, and when conditions are good, they can reach high densities. “Some species have fast life cycles and are very adaptable to changing conditions,” says Rader. What’s more, some of the most important hoverfly species are migratory, so huge numbers can turn up and far outnumber honeybees at crucial times of the year.

Recent radar studies tracking the migration of common European hoverflies (including the marmalade hoverfly) found that up to 4 billion fly northward into southern Britain each spring, a number not far short of all the honeybees in the whole of Britain. There have also been reports of great hoverfly migrations in the US, Nepal and Australia, suggesting that the phenomenon is widespread.

Even better, hoverflies provide valuable services besides pollination, says ecologist Karl Wotton, who heads the Genetics of Migration Lab at the University of Exeter in southwest England. Many species have predatory larvae with a voracious appetite for aphids, caterpillars and other soft-bodied pests. Wotton has calculated that the larvae of those billions of hoverflies that turn up in Britain each spring consume around 6 trillion aphids in the all-important early part of the growing season. “That’s around 6,000 tonnes of aphids or 20 percent of the population at that time of year,” he says. Other hoverflies have semiaquatic larvae that feed on waste organic material, usefully recycling nutrients. “It’s hard to think of a more beneficial group of insects,” says Wotton. “They provide great services — for free.”   
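
Wotton’s tonnage is easy to sanity-check with back-of-envelope arithmetic. Assuming an average aphid weighs about a milligram (our assumption for illustration, not a figure from the article), the numbers line up:

    # Back-of-envelope check of the aphid figures quoted above.
    aphids_eaten = 6e12        # ~6 trillion aphids eaten each spring
    aphid_mass_mg = 1.0        # assumed average aphid mass, in milligrams
    tonnes = aphids_eaten * aphid_mass_mg / 1e9   # 1 tonne = 1e9 mg
    print(f"{tonnes:,.0f} tonnes")                # ~6,000 tonnes, as Wotton says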

But how to harness flies to maintain — and boost — food production? One way is to attract more of them to fields and orchards. Schemes that encourage farmers to plant wildflowers, keep remnant native vegetation and leave grasslands uncut can be very effective at increasing the number and diversity of insects and expanding the pool of potential pollinators. Hoverflies and blowflies need a few extras if they are to proliferate, though: carrion for blowflies, access to aphids for some hoverflies and ponds or streams containing dung, decaying vegetation or carcasses for others.

Making fields and orchards more fly-friendly won’t always be enough. With that in mind, researchers round the world are trying to identify flies that can be reared commercially and released where and when their services are needed. But where to start? The vast majority of pollination studies have focused on bees, and although many species of flies have been reported visiting crops, in most cases little is known about how good they are at transporting pollen, let alone whether their visits translate into more fruit and vegetables.

That’s beginning to change. Scattered studies have logged how often flies visit flowers, counted the pollen grains stuck to their bodies and recorded crop yields, and found that some flies give bees a run for their money — and in some cases, outdo them. Researchers studying avocados in Mexico, for instance, found that the large green blowfly Chrysomya megacephala (aka the oriental latrine fly) visited more flowers in a given time than bees and carried pollen grains on parts of the body that would contact the stigma of the next avocado flower it visited. Studies in Israel, Malaysia and India all suggest that blowflies are effective at pollinating mangoes, while trials in the US and New Zealand showed that the European blue blowfly (Calliphora vicina) produced as good a yield of leek and carrot seed as bees.

Hoverflies also show plenty of promise. In trials, a number of species have proved to be effective pollinators of seed crops, oilseed rape, sweet peppers and strawberries. Recent experiments in the UK, for instance, found that releasing a mixed bunch of hoverflies into cages of flowering strawberry plants increased the yield of fruit by more than 70 percent. What’s more, the strawberries were likely to be bigger, heavier and more perfectly formed.

Promise is one thing, practical application another. In Australia, researchers like Finch and Rader are working on a five-year, multi-institution project that, among other things, aims to match fly to crop, and then develop the best method of rearing them. At farms across the country, teams are putting candidate flies through their paces on crops as varied as mangoes and avocados, blueberries and vegetable seed.

At Manbulloo, Finch is focused on mangoes and whether the old farmers’ trick works. The stinking bait certainly attracted plenty of flies – but were they the same flies as those that growers saw visiting their mango flowers? They were. “Several large and common species seem to visit both carrion and flowers,” says Finch. Of those, one looked more promising than the others: the oriental latrine fly. “It’s big and hairy, which means it’s likely to carry and deposit a lot of pollen,” says Finch. “It’s also abundant, turns up in a lot of orchards and its larvae will eat anything that’s dead.”

After a temporary halt thanks to Covid-19, Finch plans to return to Manbulloo later this year to find out if the latrine flies live up to expectation. “They might just stick around the carrion all day, distracted by the disgusting smells,” he says. If they do venture through the orchard, he’ll monitor how many actually visit flowers and how often. The next test is whether the flies deliver pollen where it’s needed — on the stigmas of flowers that need fertilizing — a job that requires a microscope and plenty of patience. After all that, if the oriental latrine fly is still a contender, then it’s time to find out if its efforts pay off by releasing flies among trees protected from all other insects and measuring their success in mangoes.

The latrine fly might prove an effective pollinator, but that’s still not proof that the farmers’ carrion trick makes a difference. “For that, we’ll have to compare yields in orchards with carrion and without,” says Finch. If the growers are vindicated, then their cheap trick can be rolled out elsewhere. “If it turns out that they aren’t as good at depositing pollen as honeybees, then we may need to add more flies to compensate for their lower effectiveness.”

The idea of raising flies to produce food is slowly gaining traction, particularly for greenhouse crops. “Flies breed amazingly well and quickly on horrible things, which makes them cheap to use in glasshouses or release in fields,” says Finch. They are easy to transport as pupae and are expendable, unlike honeybees. Some growers are already reaping the benefits of purpose-bred flies. Tasmanian farmer Alan Wilson has been rearing his own blowflies for the past five years after discovering they improved his crop of high-value hybrid cauliflower seed. On the other side of the world in southern Spain, you can buy boxes of hoverfly pupae from Polyfly, the first company to produce hoverflies commercially for greenhouse crops.

Brilliant though flies are, they can have drawbacks. Those that attack livestock or people or are pests of other crops must be avoided at all costs. And of course there’s the yuck factor. In Spain, Polyfly has done some nifty rebranding of its hoverflies. The common drone fly — a poor choice of name for one of the world’s busiest pollinators — has been promoted to Queenfly, while its other offering, the large spotty-eyed dronefly, is sold as the Goldfly. Blowflies, linked in the public mind to death, decay and forensic examination of corpses, have a much bigger image problem. When the oriental latrine fly’s name comes up at a slick PR firm’s branding brainstorm, I’d like to be a fly on the wall.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.