Monday, May 8, 2023

AI’s next big leap

The unlikely marriage of two major artificial intelligence approaches has given rise to a new hybrid called neurosymbolic AI. It’s taking baby steps toward reasoning like humans and might one day take the wheel in self-driving cars.

A few years ago, scientists learned something remarkable about mallard ducklings. If one of the first things the ducklings see after birth is two objects that are similar, the ducklings will later follow new pairs of objects that are similar, too. Hatchlings shown two red spheres at birth will later show a preference for two spheres of the same color, even if they are blue, over two spheres that are each a different color. Somehow, the ducklings pick up and imprint on the idea of similarity, in this case the color of the objects. They can imprint on the notion of dissimilarity too.

What the ducklings do so effortlessly turns out to be very hard for artificial intelligence. This is especially true of a branch of AI known as deep learning or deep neural networks, the technology powering the AI that defeated the world’s Go champion Lee Sedol in 2016. Such deep nets can struggle to figure out simple abstract relations between objects and reason about them unless they study tens or even hundreds of thousands of examples.

To build AI that can do this, some researchers are hybridizing deep nets with what the research community calls “good old-fashioned artificial intelligence,” otherwise known as symbolic AI. The offspring, which they call neurosymbolic AI, are showing duckling-like abilities and then some. “It’s one of the most exciting areas in today’s machine learning,” says Brenden Lake, a computer and cognitive scientist at New York University.

Though still in research labs, these hybrids are proving adept at recognizing properties of objects (say, the number of objects visible in an image and their color and texture) and reasoning about them (do the sphere and cube both have metallic surfaces?), tasks that have proved challenging for deep nets on their own. Neurosymbolic AI is also demonstrating the ability to ask questions, an important aspect of human learning. Crucially, these hybrids need far less training data than standard deep nets and use logic that’s easier to understand, making it possible for humans to track how the AI makes its decisions.

“Everywhere we try mixing some of these ideas together, we find that we can create hybrids that are … more than the sum of their parts,” says computational neuroscientist David Cox, IBM’s head of the MIT-IBM Watson AI Lab in Cambridge, Massachusetts.

Each of the hybrid’s parents has a long tradition in AI, with its own set of strengths and weaknesses. As its name suggests, the old-fashioned parent, symbolic AI, deals in symbols — that is, names that represent something in the world. For example, a symbolic AI built to emulate the ducklings would have symbols such as “sphere,” “cylinder” and “cube” to represent the physical objects, and symbols such as “red,” “blue” and “green” for colors and “small” and “large” for size. Symbolic AI stores these symbols in what’s called a knowledge base. The knowledge base would also have a general rule that says that two objects are similar if they are of the same size or color or shape. In addition, the AI needs propositions, which are statements that assert something is true or false, to tell it that, in some limited world, there’s a big, red cylinder, a big, blue cube and a small, red sphere. All of this is encoded as a symbolic program in a programming language a computer can understand.

Armed with its knowledge base and propositions, symbolic AI employs an inference engine, which uses rules of logic to answer queries. A programmer can ask the AI if the sphere and cylinder are similar. The AI will answer “Yes” (because they are both red). Asked if the sphere and cube are similar, it will answer “No” (because they are not of the same size or color).
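
For readers who want to see the mechanics, here is a minimal sketch in Python of the kind of knowledge base, similarity rule and query answering described above. The object names and attributes mirror the duckling example; the code is illustrative only, not taken from any particular symbolic AI system.

```python
# A minimal sketch of the symbolic-AI setup described above, with an
# illustrative (hypothetical) knowledge base and similarity rule.

# Knowledge base: propositions asserting what exists in a limited world.
knowledge_base = {
    "cylinder": {"shape": "cylinder", "color": "red",  "size": "big"},
    "cube":     {"shape": "cube",     "color": "blue", "size": "big"},
    "sphere":   {"shape": "sphere",   "color": "red",  "size": "small"},
}

# General rule: two objects are similar if they share size, color or shape.
def similar(a, b):
    if a not in knowledge_base or b not in knowledge_base:
        return None  # knowledge is missing: the system cannot answer
    x, y = knowledge_base[a], knowledge_base[b]
    return any(x[attr] == y[attr] for attr in ("size", "color", "shape"))

# A tiny inference engine answering the queries from the text.
print(similar("sphere", "cylinder"))  # True: both are red
print(similar("sphere", "cube"))      # False: no shared attribute
print(similar("sphere", "pyramid"))   # None: "pyramid" is not in the knowledge base
```

Note that the last query returns no answer at all, which previews the brittleness discussed next.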

In hindsight, such efforts run into an obvious roadblock. Symbolic AI can’t cope with problems in the data. If you ask it questions for which the knowledge is either missing or erroneous, it fails. In the emulated duckling example, the AI doesn’t know whether a pyramid and cube are similar, because a pyramid doesn’t exist in the knowledge base. To reason effectively, therefore, symbolic AI needs large knowledge bases that have been painstakingly built using human expertise. The system cannot learn on its own.

On the other hand, learning from raw data is what the other parent does particularly well. A deep net, modeled after the networks of neurons in our brains, is made of layers of artificial neurons, or nodes, with each layer receiving inputs from the previous layer and sending outputs to the next one. Information about the world is encoded in the strength of the connections between nodes, not as symbols that humans can understand.

Take, for example, a neural network tasked with telling apart images of cats from those of dogs. The image — or, more precisely, the values of each pixel in the image — is fed to the first layer of nodes, and the final layer of nodes produces as an output the label “cat” or “dog.” The network has to be trained using pre-labeled images of cats and dogs. During training, the network adjusts the strengths of the connections between its nodes such that it makes fewer and fewer mistakes while classifying the images. Once trained, the deep net can be used to classify a new image.
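
As a rough illustration of that training process, the sketch below trains a single artificial neuron on made-up “pixel” data, nudging its connection strengths to reduce mistakes. A real image classifier would use many layers and millions of real labeled photos; everything here (the data, the labels, the learning rate) is a toy assumption.

```python
import math, random

# Toy "images": four pixel intensities per example, with a made-up label
# (1 = cat, 0 = dog). Purely illustrative data, not a real dataset.
random.seed(0)
def make_example(label):
    base = [0.8, 0.2, 0.7, 0.1] if label == 1 else [0.2, 0.9, 0.3, 0.8]
    return [v + random.uniform(-0.1, 0.1) for v in base], label

data = [make_example(random.choice([0, 1])) for _ in range(200)]

# One artificial neuron: connection strengths (weights) plus a bias term.
weights = [0.0] * 4
bias = 0.0
lr = 0.5  # learning rate

def predict(pixels):
    z = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 / (1 + math.exp(-z))  # probability that the image is a "cat"

# Training: nudge each connection strength to reduce classification error.
for epoch in range(50):
    for pixels, label in data:
        error = predict(pixels) - label
        for i in range(4):
            weights[i] -= lr * error * pixels[i]
        bias -= lr * error

# Once trained, the model classifies a new, unseen "image".
new_image, _ = make_example(1)
print("cat" if predict(new_image) > 0.5 else "dog")
```

The learned knowledge lives entirely in the numeric weights, which is exactly why such systems are hard for humans to interpret.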

Deep nets have proved immensely powerful at tasks such as image and speech recognition and translating between languages. “The progress has been amazing,” says Thomas Serre of Brown University, who explored the strengths and weaknesses of deep nets in visual intelligence in the 2019 Annual Review of Vision Science. “At the same time, because there’s so much interest, the limitations are becoming clearer and clearer.”

Acquiring training data is costly, sometimes even impossible. Deep nets can be fragile: Adding noise to an image that would not faze a human can stump a deep neural net, causing it to classify a panda as a gibbon, for example. Deep nets find it difficult to reason and answer abstract questions (are the cube and cylinder similar?) without large amounts of training data. They are also notoriously inscrutable: Because there are no symbols, only millions or even billions of connection strengths, it’s nearly impossible for humans to work out how the computer reaches an answer. That means the reasons why a deep net classified a panda as a gibbon are not easily apparent, for example.

Since some of the weaknesses of neural nets are the strengths of symbolic AI and vice versa, neurosymbolic AI would seem to offer a powerful new way forward. Roughly speaking, the hybrid uses deep nets to replace humans in building the knowledge base and propositions that symbolic AI relies on. It harnesses the power of deep nets to learn about the world from raw data and then uses the symbolic components to reason about it.

Researchers into neurosymbolic AI were handed a challenge in 2016, when Fei-Fei Li of Stanford University and colleagues published a task that required AI systems to “reason and answer questions about visual data.” To this end, they came up with what they called the Compositional Language and Elementary Visual Reasoning, or CLEVR, dataset. It contained 100,000 computer-generated images of simple 3-D shapes (spheres, cubes, cylinders and so on). The challenge for any AI is to analyze these images and answer questions that require reasoning. Some questions are simple (“Are there fewer cubes than red things?”), but others are much more complicated (“There is a large brown block in front of the tiny rubber cylinder that is behind the cyan block; are there any big cyan metallic cubes that are to the left of it?”).

It’s possible to solve this problem using sophisticated deep neural networks. However, Cox’s colleagues at IBM, along with researchers at Google’s DeepMind and MIT, came up with a distinctly different solution that shows the power of neurosymbolic AI.

The researchers broke the problem into smaller chunks familiar from symbolic AI. In essence, they had to first look at an image and characterize the 3-D shapes and their properties, and generate a knowledge base. Then they had to turn an English-language question into a symbolic program that could operate on the knowledge base and produce an answer. In symbolic AI, human programmers would perform both these steps. The researchers decided to let neural nets do the job instead.

The team solved the first problem by using a number of convolutional neural networks, a type of deep net that’s optimized for image recognition. In this case, each network is trained to examine an image and identify an object and its properties such as color, shape and type (metallic or rubber).

The second module uses something called a recurrent neural network, another type of deep net designed to uncover patterns in inputs that come sequentially. (Speech is sequential information, for example, and speech recognition programs like Apple’s Siri use a recurrent network.) In this case, the network takes a question and transforms it into a query in the form of a symbolic program. The output of the recurrent network is also used to decide which convolutional networks are tasked with examining the image, and in what order. This entire process is akin to generating a knowledge base on demand, and having an inference engine run the query on the knowledge base to reason and answer the question.
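
A simplified sketch of the symbolic half of that pipeline appears below. In the real system, neural networks produce both the scene representation (the knowledge base generated on demand) and the program compiled from the question; here both are written by hand, purely to show how such a program executes against a scene.

```python
# Illustrative sketch of the pipeline described above. The scene and the
# program are hypothetical, hand-written stand-ins for neural-net outputs.

# Knowledge base "generated on demand" from an image.
scene = [
    {"shape": "cube",     "color": "cyan", "size": "large", "material": "metal"},
    {"shape": "cylinder", "color": "red",  "size": "small", "material": "rubber"},
    {"shape": "sphere",   "color": "red",  "size": "small", "material": "metal"},
]

# Primitive operations that a question can be compiled into.
def filter_attr(objs, attr, value):
    return [o for o in objs if o[attr] == value]

# "Are there fewer cubes than red things?" expressed as a symbolic program.
program = [
    ("filter", "shape", "cube"),
    ("count",),
    ("filter", "color", "red"),
    ("count",),
    ("less_than",),
]

def run(program, scene):
    stack = []
    for op, *args in program:
        if op == "filter":
            stack.append(filter_attr(scene, *args))
        elif op == "count":
            stack.append(len(stack.pop()))
        elif op == "less_than":
            b, a = stack.pop(), stack.pop()
            stack.append(a < b)
    return stack.pop()

print(run(program, scene))  # True: 1 cube vs. 2 red things
```

Because the intermediate results are explicit lists and counts rather than connection strengths, a human can inspect every step of the reasoning.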

The researchers trained this neurosymbolic hybrid on a subset of question-answer pairs from the CLEVR dataset, so that the deep nets learned how to recognize the objects and their properties from the images and how to process the questions properly. Then, they tested it on the remaining part of the dataset, on images and questions it hadn’t seen before. Overall, the hybrid was 98.9 percent accurate — even beating humans, who answered the same questions correctly only about 92.6 percent of the time.

Better yet, the hybrid needed only about 10 percent of the training data required by solutions based purely on deep neural networks. When a deep net is being trained to solve a problem, it’s effectively searching through a vast space of potential solutions to find the correct one. This requires enormous quantities of labeled training data. Adding a symbolic component reduces the space of solutions to search, which speeds up learning.

Most important, if a mistake occurs, it’s easier to see what went wrong. “You can check which module didn’t work properly and needs to be corrected,” says team member Pushmeet Kohli of Google DeepMind in London. For example, debuggers can inspect the knowledge base or processed question and see what the AI is doing.

The hybrid AI is now tackling more difficult problems. In 2019, Kohli and colleagues at MIT, Harvard and IBM designed a more sophisticated challenge in which the AI has to answer questions based not on images but on videos. The videos feature the types of objects that appeared in the CLEVR dataset, but these objects are moving and even colliding. Also, the questions are tougher. Some are descriptive (“How many metal objects are moving when the video ends?”), some require prediction (“Which event will happen next? [a] The green cylinder and the sphere collide; [b] The green cylinder collides with the cube”), while others are counterfactual (“Without the green cylinder, what will not happen? [a] The sphere and the cube collide; [b] The sphere and the cyan cylinder collide; [c] The cube and the cyan cylinder collide”).

Such causal and counterfactual reasoning about things that are changing with time is extremely difficult for today's deep neural networks, which mainly excel at discovering static patterns in data, Kohli says.

To address this, the team augmented the earlier solution for CLEVR. First, a neural network learns to break up the video clip into a frame-by-frame representation of the objects. This is fed to another neural network, which learns to analyze the movements of these objects and how they interact with each other and can predict the motion of objects and collisions, if any. Together, these two modules generate the knowledge base. The other two modules process the question and apply it to the generated knowledge base. The team’s solution was about 88 percent accurate in answering descriptive questions, about 83 percent for predictive questions and about 74 percent for counterfactual queries, by one measure of accuracy. The challenge is out there for others to improve upon these results.

Good question

Asking good questions is another skill that machines struggle with but humans, even children, excel at. “It’s a way to consistently learn about the world without having to wait for tons of examples,” says Lake of NYU. “There’s no machine that comes anywhere close to the human ability to come up with questions.”

Neurosymbolic AI is showing glimmers of such expertise. Lake and his student Ziyun Wang built a hybrid AI to play a version of the game Battleship. The game involves a 6-by-6 grid of tiles, hidden under which are three ships one tile wide and two to four tiles long, oriented either vertically or horizontally. Each move, the player can either choose to flip a tile to see what’s underneath (gray water or part of a ship) or ask any question in English. For example, the player can ask: “How long is the red ship?” or “Do all three ships have the same size?” and so on. The goal is to correctly guess the location of the ships.

Lake and Wang’s neurosymbolic AI has two components: a convolutional neural network to recognize the state of the game by looking at a game board, and another neural network to generate a symbolic representation of a question.

The team used two different techniques to train their AI. For the first method, called supervised learning, the team showed the deep nets numerous examples of board positions and the corresponding “good” questions (collected from human players). The deep nets eventually learned to ask good questions on their own, but were rarely creative. The researchers also used another form of training called reinforcement learning, in which the neural network is rewarded each time it asks a question that actually helps find the ships. Again, the deep nets eventually learned to ask the right questions, which were both informative and creative.

Lake and other colleagues had previously solved the problem using a purely symbolic approach, in which they collected a large set of questions from human players, then designed a grammar to represent these questions. “This grammar can generate all the questions people ask and also infinitely many other questions,” says Lake. “You could think of it as the space of possible questions that people can ask.” For a given state of the game board, the symbolic AI has to search this enormous space of possible questions to find a good question, which makes it extremely slow. The neurosymbolic AI, however, is blazingly fast. Once trained, the deep nets far outperform the purely symbolic AI at generating questions.

Not everyone agrees that neurosymbolic AI is the best route to more powerful artificial intelligence. Serre, of Brown, thinks this hybrid approach will be hard-pressed to come close to the sophistication of abstract human reasoning. Our minds create abstract symbolic representations of objects such as spheres and cubes, for example, and do all kinds of visual and nonvisual reasoning using those symbols. We do this using our biological neural networks, apparently with no dedicated symbolic component in sight. “I would challenge anyone to look for a symbolic module in the brain,” says Serre. He thinks other ongoing efforts to add features to deep neural networks that mimic human abilities such as attention offer a better way to boost AI’s capacities.

DeepMind’s Kohli has more practical concerns about neurosymbolic AI. He is worried that the approach may not scale up to handle problems bigger than those being tackled in research projects. “At the moment, the symbolic part is still minimal,” he says. “But as we expand and exercise the symbolic part and address more challenging reasoning tasks, things might become more challenging.” For example, among the biggest successes of symbolic AI are systems used in medicine, such as those that diagnose a patient based on their symptoms. These have massive knowledge bases and sophisticated inference engines. The current neurosymbolic AI isn’t tackling problems anywhere near so big.

Cox’s team at IBM is taking a stab at it, however. One of their projects involves technology that could be used for self-driving cars. The AI for such cars typically involves a deep neural network that is trained to recognize objects in its environment and take the appropriate action; the deep net is penalized when it does something wrong during training, such as bumping into a pedestrian (in a simulation, of course). “In order to learn not to do bad stuff, it has to do the bad stuff, experience that the stuff was bad, and then figure out, 30 steps before it did the bad thing, how to prevent putting itself in that position,” says MIT-IBM Watson AI Lab team member Nathan Fulton. Consequently, learning to drive safely requires enormous amounts of training data, and the AI cannot be trained out in the real world.

Fulton and colleagues are working on a neurosymbolic AI approach to overcome such limitations. The symbolic part of the AI has a small knowledge base about some limited aspects of the world and the actions that would be dangerous given some state of the world. They use this to constrain the actions of the deep net — preventing it, say, from crashing into an object.
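
A minimal sketch of that idea, with entirely hypothetical state fields, actions and rules, might look like the following: a learned policy proposes actions, and the small symbolic knowledge base vetoes any that its rules say are dangerous in the current state.

```python
# Minimal sketch of a symbolic "shield" constraining a learned policy,
# in the spirit of the approach described above. The rule, state fields
# and policy ranking are all hypothetical placeholders.

ACTIONS = ["accelerate", "brake", "steer_left", "steer_right"]

def symbolic_safe(state, action):
    """Hand-written rule: never accelerate toward a close obstacle."""
    if action == "accelerate" and state["obstacle_distance_m"] < 10:
        return False
    return True

def learned_policy(state):
    """Stand-in for a deep net's ranking of actions (best first)."""
    return ["accelerate", "steer_left", "brake", "steer_right"]

def choose_action(state):
    # The deep net proposes actions; the symbolic component filters out
    # any action its knowledge base marks as dangerous in this state.
    for action in learned_policy(state):
        if symbolic_safe(state, action):
            return action
    return "brake"  # conservative fallback if everything is vetoed

state = {"obstacle_distance_m": 6.0}
print(choose_action(state))  # "steer_left": acceleration was ruled out
```

Because unsafe actions are never taken, the learner never has to experience their consequences, which is what cuts the training data required.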

This simple symbolic intervention drastically reduces the amount of data needed to train the AI by excluding certain choices from the get-go. “If the agent doesn’t need to encounter a bunch of bad states, then it needs less data,” says Fulton. While the project still isn’t ready for use outside the lab, Cox envisions a future in which cars with neurosymbolic AI could learn out in the real world, with the symbolic component acting as a bulwark against bad driving.

So, while naysayers may decry the addition of symbolic modules to deep learning as unrepresentative of how our brains work, proponents of neurosymbolic AI see its modularity as a strength when it comes to solving practical problems. “When you have neurosymbolic systems, you have these symbolic choke points,” says Cox. These choke points are places in the flow of information where the AI resorts to symbols that humans can understand, making the AI interpretable and explainable, while providing ways of creating complexity through composition. “That’s tremendously powerful,” says Cox.

Editor’s note: This article was updated October 15, 2020, to clarify the viewpoint of Pushmeet Kohli on the capabilities of deep neural networks.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. 




Sunday, May 7, 2023

Digital bank runs: social media played a role in recent financial failures but could also help investors avoid panic

Daniel Beunza, City, University of London

A crisis of confidence in the US banking sector led people to pull their money from banks including Silicon Valley Bank and Credit Suisse, and more recently, First Republic Bank and California-based PacWest Bancorp. The way these events have unfolded has created a new term in the vocabulary of finance: “digital bank run”.

Unlike traditional bank runs, which conjure up images of people queuing outside a branch to withdraw their money in person, digital bank runs snowball even faster due to social media chatter. This can add to the sense of panic around the run.

Posts on Twitter with negative information about Silicon Valley Bank contributed to depositor withdrawals totalling US$40 billion (£32 billion) – 23% of total deposits – in a matter of hours, culminating in the bank’s failure. In contrast, it took Washington Mutual nine days to lose $17 billion (9% of its deposits) in 2008.

Digital bank runs are the new threat to financial stability that keeps regulators and investors awake at night. Like the toxic assets of the 2008 global financial crisis, they stem from a combination of new technology – social media, Twitter in particular – and the old complexities of the financial sector, in this case “fractional banking”.

Banks only keep a fraction of the money entrusted to them, investing the rest for profit. This means if a sufficiently large number of depositors demand their money back – typically out of fear about failure – the bank will not have enough. And then it really will fail. Concerns about a single bank may spread to other banks, leading to panic about the industry, widespread bank failures, and eventually, economic recession.
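
A back-of-the-envelope example makes the arithmetic concrete. The figures below are invented for illustration (a 10% reserve ratio and a withdrawal wave matching the 23% seen at Silicon Valley Bank); they are not a model of any real bank.

```python
# Back-of-the-envelope illustration of fractional banking, with made-up numbers.
deposits = 100_000_000        # total customer deposits
reserve_ratio = 0.10          # fraction the bank keeps as liquid reserves
reserves = deposits * reserve_ratio

withdrawal_requests = 0.23 * deposits  # e.g. 23% of deposits demanded at once
shortfall = withdrawal_requests - reserves

print(f"Reserves available: ${reserves:,.0f}")
print(f"Withdrawals requested: ${withdrawal_requests:,.0f}")
print(f"Shortfall: ${shortfall:,.0f}")  # the bank cannot pay everyone back
```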

The Silicon Valley Bank crisis showed how the perils of fractional banking can be compounded by social media. Barring extreme situations such as queues of customers outside bank branches, it was difficult for negative information to spread from one customer to the next before social media.

But as a new, non-peer-reviewed research paper suggests, a rise in negative tweets about Silicon Valley Bank before its March 10 collapse was followed by a drop in its stock price, seen as a proxy for deposits. The authors conclude that “social media did, indeed, contribute to the run on Silicon Valley Bank”.

Avoiding a bank run

To avoid the kind of contagion that leads to digital bank runs, bank management, investors and regulators need to be careful about what they say. Even if they aren’t posting on social media, investors and other interested parties are, and their discussions can impact sentiment about a bank.

The failure of Credit Suisse was arguably kicked off by an ill-thought-out comment by the chair of a major investor in the bank, Ammar Al Khudairy of Saudi National Bank. He did not comment publicly on this issue but resigned within two weeks “due to personal reasons”, according to a statement to the Saudi stock exchange.

Communication – or a lack of it – was also associated with the more recent share price drop and loss of confidence in First Republic Bank. A management decision not to take questions after a critical presentation to investors on April 25 attracted media attention before the bank’s assets were seized by US regulators and sold to US banking giant JP Morgan on May 1.

Governments can also help prevent digital bank runs. Deutsche Bank experienced a sharp drop in its share price on April 24 2023, minutes after the cost of insuring its debt against default surged to a four-year high. But a run on Deutsche did not happen. Germany’s chancellor Olaf Scholz publicly dismissed any comparison between Deutsche and the failed Swiss bank, which seemed to reassure markets.

From the standpoint of an investor or depositor, an averted run also underscores the importance of expertise, contacts and industry knowledge. As the Deutsche Bank case shows, investors often use the price of insurance against a bank’s default to measure wider confidence in a bank – even though this cost may change for other reasons such as investors using trading strategies to protect themselves from different risks.

So getting opinions and information from a wide range of sources is key both for professional investors and for people trading with their own money.

Depositors queue on the street during a run on the Adolf Mandel Bank on New York’s Lower East Side. February 16 1912. Everett Collection/Shutterstock

In my own research, I have often encountered this kind of loss in translation, but professionals also have ways to communicate with each other and clarify such ambiguities. During fieldwork at a Wall Street trading floor, I saw traders making inferences about potential investments from price movements.

But before jumping on possible opportunities, they discussed them with fellow traders. Often, these colleagues sat on other desks (meaning they specialised in different strategies) so the traders could access diverse sources of knowledge.

For instance, when the stock prices of two merging companies failed to converge for two hours after the merger announcement, as they typically would, the traders I observed did not immediately assume they had found an opportunity.

First, they tried to rule out alternative explanations. They looked through their proprietary databases for damning news about either company, but did not find any. Then they spoke with colleagues, who confirmed that rival traders at other banks were not active in the stocks, indicating they had found a solid trading opportunity.

They then executed the trade. Such strategies helped this bank attain a combination of risk and returns that was well above its Wall Street peers.

Non-professional traders probably lack access to a handy expert at the next desk, but social media can arguably provide a rudimentary substitute. Instead of simply counting the number of tweets with “negative sentiment”, however, they can exploit Twitter’s ability to reveal a wider range of opinions, including other users’ reactions to one another and any emerging controversy.

With enough critical distance and appetite for diving into debate, it’s possible to see beyond anxious tweets and react more reflectively – just as Wall Street professionals do.

Daniel Beunza, Professor of Social Studies of Finance, City, University of London

This article is republished from The Conversation under a Creative Commons license.




What the Iraq War can teach the US about avoiding a quagmire in Ukraine – 3 key lessons

People protest outside of the United Nations headquarters in April 2023 demanding the return of Ukrainian children from Russia. Lev Radin/Pacific Press/LightRocket via Getty Images
Patrick James, USC Dornsife College of Letters, Arts and Sciences

Leaked Pentagon papers showed in early April 2023 that the U.S. is allegedly following the inner workings of Russia’s intelligence operations and is also spying on Ukraine, adding a new dimension to the United States’ involvement in the Ukraine war.

While the U.S. has not actually declared war against Russia, the documents show that it continues to support Ukraine with military intelligence as well as money and weapons against the Russian invasion.

There is no end in sight to the war between Ukraine and Russia – nor to U.S. involvement. While it is far from the first time that the U.S. became a third party to war, this scenario brings the Iraq War, in particular, to mind.

I am a scholar of international relations and an expert on international conflicts. A comparison with the Iraq War, I believe, offers a useful way to look at the case of Ukraine.

The Iraq and Ukraine wars have notable differences from a U.S. foreign policy perspective – chiefly, thousands of American soldiers died fighting in Iraq, while the U.S. does not have any ground troops in Ukraine. But assessing the Iraq War, and its long aftermath, can still help articulate concerns about the United States’ getting involved in intense violence in another faraway place.

Here are three key points to understand.

An Iraqi girl watches U.S. Army troops take cover in Mosul, Iraq, in 2003. Scott Nelson/Getty Images

1. Intervention doesn’t guarantee success

Around the time former President George W. Bush announced the U.S. would invade Iraq in 2003, Osama bin Laden, the wealthy Saudi Arabian Islamist who orchestrated the Sept. 11, 2001, attacks, remained at large. While not obviously connected, the fact that bin Laden continued to evade the U.S. contributed to a general sense of anger at hostile regimes. In particular, Saddam Hussein defied the U.S. and its allies.

The Iraqi dictator continued to evade inspections by the United Nations watchdog group the International Atomic Energy Agency, giving the impression that he had weapons of mass destruction. This proved maddening to the U.S. and its allies as the cat and mouse game dragged on.

Bush reportedly had intense concerns about whether Saddam could use alleged weapons of mass destruction to attack the U.S., causing even more harm than 9/11 did.

A U.S.-led coalition of countries that included the United Kingdom and Australia invaded Iraq in March 2003. The “coalition of the willing,” as it became known, won a quick victory and toppled Saddam’s regime.

Bush initially enjoyed a spike in public support immediately after the invasion, but his approval ratings soon began a downward trajectory as the war dragged on.

However, the U.S. showed very little understanding of the politics, society and other important aspects of the country that it had taken the lead in occupying and then trying to rebuild.

Many decisions, most notably the disbanding of the Iraqi Army in May 2003, revealed poor judgment and even outright ignorance because, with the sudden removal of Iraqi security forces, intense civil disorder ensued.

Disbanding the army caused insurgent militant forces to come out into the open. The fighting intensified among different Iraqi groups and escalated into a civil war, which ended in 2017.

Today, Iraq continues to be politically unstable and is not any closer to becoming a democracy than it was before the invasion.

2. Personal vendettas cannot justify a war

During his 24-year regime, Saddam lived an extravagant lifestyle coupled with oppression of civilians and political opponents. He engaged in genocide of Kurdish people in Iraq. Saddam was finally executed by his own people in 2006, after U.S. forces captured him.

Putin is equally notorious and even more dangerous. He has a long track record of violent oppression against his people and has benefited from leading one of the world’s most corrupt governments.

He also actually possesses weapons of mass destruction and has threatened multiple times to use them on foreign countries. Saddam and Putin have also both been the direct targets of U.S. political leaders, who displayed a fixation on toppling these foreign adversaries, which was evident long before the U.S. actually became involved in the Iraq and Ukraine wars.

The United States’ support for Ukraine is understandable because that country is fighting a defensive war with horrific civilian casualties. Backing Ukraine also makes sense from the standpoint of U.S. national security – it helps push back against an expansionist Russia that increasingly is aligned with China.

At the same time, I believe that it is important to keep U.S. involvement in this war within limits that reflect national interests.

Ukrainians mourn civilians killed by Russian strikes in the town of Uman on April 30, 2023. Oleksii Chumachenko/Anadolu Agency via Getty Images

3. It can divide the country

The Iraq War resulted in a rise in intense partisanship in the U.S. over foreign policy. In addition, recent opinion polls about the Iraq War show that most Americans do not think that the invasion made the U.S. any safer.

Now, the U.S. faces rising public skepticism about getting involved in the Ukraine war, another expensive overseas commitment.

Polls released in January 2023 show that the percentage of Americans who think the U.S. is providing too much aid to Ukraine has grown in recent months. About 26% of American adults said in late 2022 that the U.S. was giving too much to the Ukraine war, according to Pew Research Center. But three-fourths of those polled still supported the U.S. engagement.

The average American knows little to nothing about Iraq or Ukraine. Patience obviously can grow thin when U.S. support for foreign wars becomes ever more expensive and the threat of retaliation, even by way of tactical nuclear weapons, remains in the realm of possibility. Aid to Ukraine is likely to become embroiled in the rapidly escalating conflict in Washington over the debt ceiling.

On the flip side, if the U.S. does not offer sufficient support for Ukraine to fend off Russian attacks and maintain its independence, adversaries such as Russia, China and Iran may feel encouraged to be aggressive in other places.

I believe that the comparison between the wars in Iraq and Ukraine makes it clear that U.S. leadership should clearly identify the underlying goals of its national security to the American public while determining the amount and type of support that it will give to Ukraine.

While many people believe that Ukraine deserves support against Russian aggression, current policy should not ignore past experience, and the Iraq War tells a cautionary tale.

Patrick James, Dornsife Dean’s Professor of International Relations, USC Dornsife College of Letters, Arts and Sciences

This article is republished from The Conversation under a Creative Commons license. 




Saturday, May 6, 2023

Scientists look to new technologies to make food safer


From romaine to snack crackers, foodborne disease outbreaks have increasingly worried the public. Cold plasma and high-pressure systems might help reduce the risks.

Nearly every month, it seems, comes a new report. In March, there was news of contaminated romaine lettuce, which eventually led to five deaths and sickened over 200 people across the US and Canada. In May, about 100 people in California got sick after eating raw oysters shipped from British Columbia. Then, at the end of July, the baking company Pepperidge Farm issued a recall for a few flavors of its seemingly innocuous, kid-friendly snack, Goldfish crackers.

And the list goes on. At first blush, lettuce, oysters and Goldfish might not seem that similar — but they and a host of other foods, even such things as peanut butter, baby formula and potato chips, can all harbor safety risks. The romaine, shipped from Yuma, Arizona, was contaminated with Escherichia coli O157:H7. The raw oysters were contaminated with norovirus, a pathogen responsible for most foodborne illnesses in the US. The Goldfish snacks? They were recalled because Pepperidge Farm thought one of the seasoning ingredients might contain Salmonella.

The US Centers for Disease Control and Prevention estimates that each year one in six Americans gets sick after eating contaminated food, and over the past decade, the average annual number of food recalls has steadily climbed. A surprising array of food groups — from fresh produce and meat to packaged dry items — has been recalled after outbreaks or customer complaints, or when random testing turned up spores of bacteria or traces of viruses that can cause food poisoning. These pathogens include Salmonella, Listeria, Shigella, norovirus and the especially dangerous E. coli O157:H7. In 2017, there were 456 food recalls, about a third of which were due to microbe-related contamination, according to a report by Food Safety Magazine. (Many recalls are issued if a food might include — or have come into contact with — unlabeled potential allergens.)

The rise in recalls may partly be due to an uptick in vigilance and sensitivity. “We’ve really advanced in science, and have better methods for detecting outbreaks,” says Jeff Farber, a food microbiologist at the University of Guelph in Canada. Indeed, the United States and Canada have some of the safest food supplies in the world, mostly owing to strong federal surveillance systems, he says.

But despite this watchfulness, and the existence of food preservation technologies such as thermal processing and irradiation that normally work well, people can still sicken and die from contaminated food. Existing technologies can’t deal with all threats. Plus, new concerns over controlling viruses, a small uptick in recalls of dry foods, the high costs of safety recalls, and a shifting public appetite for more fresh foods have all created an urgent need for food researchers to seek new approaches, scientists say.

The challenge is to find scalable techniques that destroy microbial threats while preserving flavors and nutritional value. That’s not easy, since many methods that kill microbes also tend to degrade vitamins or change a food’s structure — boiling lettuce will help clean it, but the resulting slop might not appeal to salad lovers.

Among the many routes to food sterilization now being explored — everything from microwaves to pulsed UV light and ozone gas — two emerging technologies are attracting a lot of interest: cold plasma and high-pressure processing. Neither method will solve everything, but both could help improve the safety of the food supply.

Bringing the heat

To kill pathogens, US food manufacturers primarily turn to thermal processing, a technique that uses high temperatures to knock out potentially dangerous bacteria and viruses. The method can assure food safety and keep processed foods on the shelf longer, but works best for canned and frozen foods, precooked meats such as hot dogs, and various pasteurized and processed products, since it can distort the food’s texture, color and nutritional content. Thermal processing won’t work for fresh produce, and thus couldn’t have helped prevent the extensive E. coli outbreak linked to contaminated romaine.

Produce presents special challenges for scientists because of its fragility. “There’s a lot of things you can do with dried foods, meats and poultry that can’t be applied to produce,” says Brendan Niemira, a food microbiologist at the US Department of Agriculture’s Agricultural Research Service in Wyndmoor, Pennsylvania. “We want to improve the safety of the food, but we still want to preserve the quality of the food: The berries have to look and smell good.”

Niemira has been searching for better ways to keep pathogens off fruits and vegetables for nearly 20 years, and his attention has now turned toward cold plasma — what he and others sometimes call the “purple blow torch.” Interest in the method has been growing over the last decade: In one 2010 study, food scientists in Germany were able to get rid of more than 99.99 percent of some strains of B. subtilis that cause food poisoning with 20 seconds of plasma treatment.

Cold plasma processing

Plasma is a charged gas. Plasmas create the glow of neon signs and high-tech televisions. The sun is made of hot plasma. In food safety circles, however, scientists work with plasmas that have much lower temperatures. Cold plasma — a very reactive substance made up of photons, free electrons and charged atoms and molecules — can inactivate microorganisms without heat. (The word cold is relative to other types of plasma and can be misleading. Many cold plasma reactions are done at room temperature.)

Different sorts of cold plasma can be generated, depending on the gas you start with (common carrier gases include nitrogen and noble gases like helium, argon and neon) and the source of energy used to shift the gas into a plasma (electricity, microwaves and lasers all work). In the lab, scientists create a plume of plasma that interacts with the food, much like a blowtorch but without the heat. Some of the molecules in the plasma have antimicrobial activity. Reactive oxygen species can disintegrate bacterial cell membranes. Reactions within the plasma may also generate energy in the form of visible or UV light. The UV light, in turn, can damage DNA and other structures that help microbes survive.

Overall, “it’s a relatively inexpensive process that’s going to be chemical-free, residue-free, and doesn’t use any water,” Niemira says.

Cold plasma appears promising in the lab. But more work needs to be done before it will see widespread use. For one thing, it’s not yet approved for commercial use by the US Food and Drug Administration, which needs to see studies on how cold plasma affects food. Research is still in its early stages. “Our understanding is that the research about cold plasma in food has focused on its antimicrobial efficacy,” says FDA spokesperson Peter Cassell. “The FDA is not aware of research exploring other effects that would be important from a regulatory perspective, such as the chemical, toxicological and nutritional effects on the food.”

In addition to continuing to study applications of cold plasma, Niemira and others are looking to combine cold plasma with existing approaches such as high-intensity light, high pressure and chemical sanitizing processes to most efficiently kill as many pathogens as possible.

Under pressure

People commonly think that dry foods are safe, but that’s not necessarily true. Documented outbreaks in what are known in the field as low-moisture foods — cereals, dried fruits and vegetables, condiments and spices — have increased in recent years, as Farber writes in the 2018 Annual Review of Food Science and Technology. He links the increase to a growing awareness of dry foods’ susceptibility to foodborne pathogens and better detection methods.

One promising technique is high-pressure processing (HPP) — a mechanical process that applies a huge amount of pressure to food. HPP can retain the flavor and nutrition of foods, so researchers see it as another way to control microorganisms without heat in low-moisture food, meats and even some vegetables. (Tomatoes, for example, may hold up to HPP, but fragile leafy greens would not.) HPP is especially promising for inactivating viruses, which tend to be more resistant than bacteria to many current methods. It can also give foods a longer shelf life.

HPP is actually an old idea. Agricultural researcher Bert Holmes Hite first reported using it in 1899, while looking for ways to reduce spoilage in cow’s milk. He found that moist foods — such as fruit juices, meats and fruits — could be decontaminated when subjected to nearly 6,500 times standard atmospheric pressure at sea level for 10 minutes. But because it was so difficult to manufacture the equipment for this technology, research was discontinued for nearly a century until it picked up again in Japan in the late 1980s.

High-pressure processing is most commonly done in batches, where samples are put inside flexible pouches or containers before they are loaded into the pressure vessel. The processors use fluids such as water or oil to generate high amounts of pressure, which acts uniformly on the food that’s being decontaminated to destroy or disable bacteria and viruses.

HPP has been shown to inactivate hepatitis A virus, the mouse version of norovirus and other viruses in minutes. Poliovirus, however, appears able to withstand even high pressures for extended time periods.

Unlike cold plasma, HPP is FDA-approved and in commercial use for some food products, such as salsa, jam, guacamole and fruit juices. Now, researchers are working out ways that HPP can be used for low-moisture foods such as cocoa, flour and raw almonds as well. Some oyster producers have embraced HPP since it can get rid of pathogens, as well as make it easier to shuck the shellfish.

Scientists don’t fully understand how HPP inactivates bacteria and viruses while leaving the food intact. They know that the method attacks weaker chemical bonds that may be crucial for the functioning of bacterial enzymes and other proteins. But HPP has limited effects on covalent bonds — so the chemicals that affect color, flavor and nutritional value of foods are left mostly intact. And because plant cell walls are sturdier than microbial cell membranes, they appear to withstand high pressure better.

Despite the promise of HPP, one of the main barriers to widespread commercial adoption is cost. Farber quotes a price tag of about a million dollars for a single HPP unit. The process will also need to be optimized for different types of foods, with data provided to the FDA.

Killing pathogens is one piece of the puzzle

Sanitizing food, in any case, is just part of the food-safety picture, scientists stress. If a food item is deeply contaminated, even the best tech might not be able to sanitize it completely. Lettuce, apples and other produce are grown in an open environment, and are exposed to many potential pathogens. In some cases, outbreaks have been traced back to basic sanitation errors, such as employees failing to wash their hands. “Food safety doesn’t have one magic bullet,” says Karina Martino, a senior program manager for food processing at the Grocery Manufacturers Association, a US food industry trade group.

The simplest way to make food safe, scientists say, is to make sure it doesn’t get contaminated in the first place. Once pathogens end up in food, the door is wide open for cross-contamination. If produce is washed in large batches, a pathogen can spread all over the equipment and into previously uncontaminated food, leading to a massive outbreak. “Instead of having one pound of contaminated lettuce, you’ve got a million pounds of it,” Niemira says.

And then you have a mess on your hands. It took months for researchers to understand the source of the romaine lettuce outbreak — by the time they figured out that tainted water from a nearby cattle feedlot had contaminated the produce, the outbreak had spread to 36 states in the US, and several provinces in Canada.

Any one tool, be it HPP or cold plasma, will not be the only answer, says Niemira, and may never make our food 100 percent safe. But with them, he adds, “we can have an extra level of security and safety before it reaches the consumer.”

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.




Yellen puts Congress on notice over impending debt default date: 5 essential reads on what’s at stake

Treasury Secretary Janet Yellen doesn’t want to look back in anger over a debt deadline missed. Photo by Alex Wong/Getty Images
Matt Williams, The Conversation

Lawmakers have been given notice of a new deadline if they are to avoid a damaging default on U.S. debt: June 1, 2023.

If Congress fails to raise the nation’s borrowing limit by that date, Treasury Secretary Janet Yellen warned, then the federal government risks being “unable to continue to satisfy all of the government’s obligations.”

Giving herself a little wiggle room by saying that it is pretty hard to work out the exact date of default, Yellen was clear on the potential impact: “If Congress fails to increase the debt limit, it would cause severe hardship to American families, harm our global leadership position, and raise questions about our ability to defend our national security interests.”

Yikes!

The warning may spur leaders in Congress into action. House Speaker Kevin McCarthy fired the starting pistol on negotiations over the debt ceiling in April, laying out the criteria under which Republicans would accept an increase. But McCarthy’s proposals – which have since passed a narrow vote in the House – have been shot down by the Biden administration for having strings attached that Democrats deemed unacceptable.

Explaining why the U.S. has a debt ceiling in the first place – and why it is a constant source of political wrangling – is a complicated matter. Here are five articles from The Conversation’s archive that provide some of the answers.

1. What exactly is the debt ceiling?

So, some basics. The debt ceiling was established by the U.S. Congress in 1917. It limits the total national debt by setting out a maximum amount that the government can borrow.

Steven Pressman, an economist at The New School, explained the original aim was “to let then-President Woodrow Wilson spend the money he deemed necessary to fight World War I without waiting for often-absent lawmakers to act. Congress, however, did not want to write the president a blank check, so it limited borrowing to US$11.5 billion and required legislation for any increase.”

Since then, the debt ceiling has been increased dozens of times. It currently stands at $31.4 trillion – a figure already reached. As a result, the Treasury has taken “extraordinary measures” to enable it to keep borrowing without breaching the ceiling. Such measures, however, can only be temporary – meaning at some point Congress will have to act to lift the ceiling or default on its debt obligations, which Yellen now warns could happen as soon as June 1.

2. ‘Catastrophic’ consequences

How bad could it be if the U.S. does default on its debt obligations? Well, pretty bad, according to Michael Humphries, deputy chair of business administration at Touro University, who wrote two articles on the consequences.

“The knock-on effect of the U.S. defaulting would be catastrophic. Investors such as pension funds and banks holding U.S. debt could fail. Tens of millions of Americans and thousands of companies that depend on government support could suffer. The dollar’s value could collapse, and the U.S. economy would most likely sink back into recession,” he wrote.

3. Undermining the dollar

And that’s not all.

Such a default could undermine the U.S. dollar’s position as a “unit of account,” which makes it a widely used currency in global finance and trade. Loss of this status would be a severe economic and political blow to the U.S. But Humphries conceded that putting a dollar value on the price of a default is hard:

“The truth is, we really don’t know what will happen or how bad it will get. The scale of the damage caused by a U.S. default is hard to calculate in advance because it has never happened before.”

4. Can McCarthy make a deal?

Many of the concessions McCarthy made in January to win the House speakership are known, such as allowing a single member of the House to call for a vote to remove him as speaker. But there may be others that remain secret and could be influencing McCarthy’s decision-making, argued Stanley M. Brand, a law professor at Penn State and former general counsel for the House. These could make it much harder to reach a deal with Biden over the debt ceiling.

“Some of the new rules spawned by McCarthy’s concessions may appear to democratize the procedures for considering and passing legislation. But they are likely to make it difficult for members to get the working majority necessary to pass legislation,” Brand explained. “That could make things such as raising the statutory debt ceiling, which is necessary to avert a government shutdown and financial crisis, and passing legislation to fund the government, difficult.”

5. The GOP endgame: A balanced budget

Another condition McCarthy agreed to in January is to push for a “balanced budget” within 10 years.

The U.S. government hasn’t had a balanced budget since 2001, the year President Bill Clinton left office. Linda J. Bilmes, a senior lecturer in public policy and public finance at Harvard Kennedy School who worked in the Clinton administration from 1997 to 2001, explained how they achieved that rare feat and why it’s unlikely to be repeated today.

“Back in 1997, after the smoke cleared, both the Clinton administration and the Republicans in Congress were able to claim some political credit for the resulting budget surpluses,” she wrote. “But – crucially – both parties recognized that a deal was in the best interest of the country and were able to line up their respective members to get the votes in Congress needed to approve it. The contrast with the current political landscape is stark.”

Editor’s note: This story is a roundup of articles from The Conversation’s archives. Sections of this article appeared in a previous article published on April 19, 2023.

Matt Williams, Senior Breaking News and International Editor, The Conversation

This article is republished from The Conversation under a Creative Commons license. 

Advocating for Mental Health as a Universal Child Right

Mental health and psychological well-being are essential for children, adolescents and communities to thrive. With crises in locations such as Ukraine, Syria, Turkey and Afghanistan, the mental health and well-being of children and young people around the world are causes for concern.

Globally, more than 1 in 7 adolescents ages 10-19 live with mental health conditions, according to UNICEF. Children and youth globally, including those in the United States, face challenges bridging the gap in terms of mental health needs and proper access to quality services.

The COVID-19 pandemic coupled with school closures and disruptions in learning impacted nearly 1.6 billion children globally. Anxiety, depression and other mental health conditions actively threaten children’s ability to be healthy and happy. Addressing key mental health and psychosocial issues to support their development can allow them to meaningfully participate in society.

Together, UNICEF and UNICEF USA are advocating on a local, national and global scale to provide children with the tools they need to support mental health. On a global level, the organizations are calling on Congress to pass the Mental Health in International Development and Humanitarian Settings (MINDS) Act, the first federal legislation that addresses mental health and psychosocial support through U.S. foreign assistance. It focuses primarily on populations with increased risk factors for developing mental health disorders including children and caretakers in crisis-affected communities, gender-based violence survivors, displaced populations and more.

Raising awareness, engaging youth and sharing resources to support parents, adolescents and children are core ways to address the current state of global mental health. To learn more about how you can support these efforts and call on elected officials to prioritize mental health services for children and caregivers in U.S. foreign assistance, visit act.unicefusa.org/MINDSAct or text “MINDS” to 52886.

SOURCE:
UNICEF





The way we see and describe hues varies widely for many reasons: from our individual eye structure, to how our brain processes images, to what language we speak, or even whether we live near a body of water.

What color is a tree, or the sky, or a sunset? At first glance, the answers seem obvious. But it turns out there is plenty of variation in how people see the world — both between individuals and between different cultural groups.

A lot of factors feed into how people perceive and talk about color, from the biology of our eyes to how our brains process that information, to the words our languages use to talk about color categories. There’s plenty of room for differences, all along the way.

For example, most people have three types of cones — light receptors in the eye that are optimized to detect different wavelengths or colors of light. But sometimes, a genetic variation can cause one type of cone to be different, or absent altogether, leading to altered color vision. Some people are color-blind. Others may have color superpowers.

Our sex can also play a role in how we perceive color, as well as our age and even the color of our irises. Our perception can change depending on where we live, when we were born and what season it is.

To learn more about individual differences in color vision, Knowable Magazine spoke with visual neuroscientist Jenny Bosten of the University of Sussex in England, who wrote about the topic in the 2022 Annual Review of Vision Science. This conversation has been edited for length and clarity.

How many colors are there in the rainbow?

Physically, the rainbow is a continuous spectrum. The wavelengths of light vary smoothly between two ends within the visible range. There are no lines, no sharp discontinuities. The human eye can discriminate far more than seven colors within that range. But in our culture, we would say that we see seven color categories in the rainbow: red, orange, yellow, green, blue, indigo and violet. That’s historical and cultural.

Is that what you taught your own kids, now aged 10 and 5?

I didn’t teach them anything about color because I was interested in observing what they naturally thought about it. Like, for instance, my daughter, probably at the age of 5, said: “Are we going to the blue building?” To me, it looked white. But it was illuminated by a blue-sky light. There’s also an anecdote that I’ve heard — I don’t know if there’s any solid evidence for this — that children can sometimes initially call the sky white, and then later they learn to perceive it as blue. I was interested in observing all these potential things in my own children.

Surely most people around the world agree in general about the main, basic colors, like red, yellow and blue. Don’t they?

There are several big datasets out there looking at color categorization across cultures. And the consensus is that there are some commonalities. This implies that there might be some biological constraints on the way people learn to categorize color. But not every culture has the same number of categories. So, there’s also this suggestion that color categories are cultural, and cultures experience a kind of evolution in color terms. A language might initially make only two or three distinctions between colors, and then those categories build up in complexity over time.

In some languages, such as Old Welsh, there’s no distinction made between blue and green — they both fall into a kind of “grue” category. In other languages, a distinction is made between two basic color terms for blue: In Russian, it’s siniy for dark blue and goluboy for lighter blue. Do speakers who make that distinction actually perceive colors differently? Or is it just a linguistic thing? I think the jury’s still out on that.

There was an explosive debate online in 2015 about “The Dress,” and whether it was white and gold or blue and black. Why did people see it so differently?

Scientists got very interested in that particular image, too. And there’s been a lot of research on it: there’s even a special issue of a journal devoted to the dress. A consensus has emerged that the way you see the dress largely depends on what lighting you assume it to have. So, people who see it as blue and black see the dress as brightly illuminated by a yellowish light. And people who see it as white and gold see it as more dimly illuminated by a bluish, more shadowy light. Ultimately, it’s the brain that’s making the judgment about what kind of illumination is on the dress.

But then the question is, why do some people think that it is illuminated by bright yellow, and others by a dimmer blue? It could be your own experience with different lighting conditions, and which ones you’re more familiar with — whether you’re used to blue LED light or warm sunlight, for example. But it could also be influenced by other factors, like changes that happen to your eyes as you age.

One of the most obvious reasons why people might see color differently is because their cones might be different: There might be genetic variations that affect the biology of the light detectors in their eye. How many kinds of variations are there like this?

There are many, many combinations. There are three cone types. We know more about the variation in two of those: the ones that detect long and medium wavelengths, known as the L and M cone types. Each of those has a photosensitive opsin, which is the molecule that changes shape when light is received, and which determines the cell’s sensitivity to wavelength. The gene that codes for each opsin has seven sites that are polymorphic: They can have different letters of DNA. You can have different combinations of those seven variants. The total number is large.
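To get a rough sense of scale, here is a back-of-the-envelope count. It assumes, purely for illustration, that each of the seven polymorphic sites takes one of just two common variants (the interview does not say how many variants each site actually has):

\[
2^7 = 128 \ \text{possible sequences per opsin gene}, \qquad 128 \times 128 = 16{,}384 \ \text{possible L and M pairings}
\]

And the count grows further once you allow a person to carry different variants on each of their two X chromosomes.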

One common variation is red-green color blindness. What causes that?

That would be an abnormality in either the L or the M cone types. In dichromacy — that’s the severe form of red-green color vision deficiency — you’d be missing either the L or the M cones, or they’d be there but non-functional.

Red-green color vision deficiency is also called Daltonism, after John Dalton, the English chemist from the 1790s. It wasn’t super obvious to him that his color vision differed from the majority. But he noticed a few cases where his descriptions of color differed from those of other people around him but were shared with his brother. He thought it was to do with an extra filter within the eye. But then, many years later, others were able to sequence his DNA and they could show that he was a dichromat.

In the mild form, anomalous trichromacy, you’d still have two different cone types, but they would be much more similar to each other than normal in the wavelengths of light they are optimized to detect. So the range of perceived differences between red and green would just be reduced.

What does the world look like to those who have the more severe case?

A dichromat is essentially missing a whole axis of color vision, so their color vision is one-dimensional. In terms of how it looks, it’s quite hard to say, because we don’t know what, subjectively, the two poles of that dimension are. What’s preserved is the axis between violets and lime green in a normal color space. So that’s often how it’s portrayed. But really, it could be any two hues that are perceived. We just don’t really know.

There have been some cases where people have been dichromatic in one eye only. And then you can ask them to match the color they see from the dichromatic eye to colors presented to the normal, trichromatic eye. And in those cases, sometimes they see more from the dichromatic eye than we expect. But we don’t know whether that’s typical of a regular dichromat who doesn’t have the trichromatic eye to help wire up their brain.

Do these variations from the norm always make the world less rich in terms of color? Or can some genetic variations actually enhance color perception?

Anomalous trichromacy is an interesting case. For the most part, color discrimination is reduced. But in particular cases, because their cones are sensitive to different wavelengths, anomalous trichromats can actually discriminate certain colors that normal trichromats can’t. It’s a phenomenon called observer metamerism.

Then there’s tetrachromacy, where a person with two X chromosomes carries instructions for both an altered cone and a regular one, giving them four kinds of cones. We know that this definitely happens. But what we don’t know for sure is whether they can use that extra cone type to gain an extra dimension of color vision, and to see colors that normal trichromats can’t see or can’t discriminate.

The strongest evidence comes from a test where observers had to make a mixture of red and green light match a yellow; some individuals couldn’t find any mixture that would match the yellow. They would actually need three colors to mix together to make a match, instead of two. It’s as if there are four primary colors for them, instead of the usual three. But it’s hard to prove how and why that’s happening, or what exactly they see.
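One simplified way to see why an extra cone type can break a two-primary match is the standard linear model of color matching (a sketch under that assumption, not the actual protocol used in these experiments): a match holds only if every cone class is excited equally by the test light and by the mixture of primaries. For a cone class $k$ with spectral sensitivity $S_k(\lambda)$, a test light $T(\lambda)$ matches a mixture of primaries $P_i(\lambda)$ with intensities $\alpha_i$ when

\[
\int S_k(\lambda)\,T(\lambda)\,d\lambda \;=\; \sum_i \alpha_i \int S_k(\lambda)\,P_i(\lambda)\,d\lambda \quad \text{for every cone class } k.
\]

In the red-to-green part of the spectrum, a typical trichromat has only two strongly responding cone classes, so matching a yellow with red plus green light means solving two equations in two unknowns, which is generally possible. An extra cone class that also responds in that range adds a third equation, and with only two intensities to adjust there is generally no solution, so a third primary is needed.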

Do these people know they have color super-vision?

The women that we recruited didn’t know their color vision status. More than 50 percent of women have four cone types. But, usually, two of them are just very subtly different, so that may not be enough to generate tetrachromatic vision.

Your own subjective experience of color is so private, it’s hard to know how your color vision compares to the people around you. John Dalton was the first person to identify red-green color blindness, in 1798 — that’s really quite recent. He had a severe type. But even that wasn’t totally clear cut for him.

Are there other biological differences, beyond genes, that affect color vision?

Yes. The lens yellows with age, especially after the age of 40, and that reduces the amount of blue light that reaches the retina. There’s also the macular pigment, which absorbs short, blue wavelengths of light. Different people have different thicknesses of that pigment, depending on what they eat: The more lutein and zeaxanthin you eat, substances that come from vegetables like leafy greens, the thicker the pigment. Iris color also has a small correlation with color discrimination, so it could be a factor in determining your very precise experience of color: Blue-eyed people seem to do slightly better in tests of color discrimination than brown-eyed people.

Is our color perception also affected by the world around us? In other words, if I grow up in a green jungle, or a yellow desert, would I start to discriminate between more colors in those regions of the rainbow?

Yes, it can be. And that’s quite a hot topic of research at the moment in color science. For example, whether there’s a separate word for green and blue seems to depend, in part, on a culture’s proximity to large bodies of water. Again, that’s a linguistic thing — we don’t know whether that affects their actual perception.

There’s also a seasonal effect on perception of yellow. There was a study in York, which is quite gray and gloomy in the winter and nice and green in the summer, and it found that the wavelength that people perceived as pure yellow shifted with the season — only by a small but still measurable amount.

And there’s also been an effect observed from the season of your birth, especially if you were born in the Arctic Circle. That is probably to do with the color of light that you’re exposed to during your visual development.

The environment can affect perception in two opposite ways, though: Different environments can contribute to individual differences in perception, but a shared environment can also counteract biological differences to make people’s perceptions more similar.

Wow. There are so many differences, and it seems so hard to unpick it all, and know whether those differences are biological or cultural. It really makes you go back to that philosophical conundrum: When I see blue, is it the same blue that you see?

Yes. I’ve always seen color as something really fascinating, especially the subjective experience of color. It’s still a complete mystery, how the brain produces that. I’ve always wondered about it, long before I decided to commit to the topic academically.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.