Wednesday, March 29, 2023

Vinyl record sales keep spinning and spinning – with no end in sight

There are far easier ways to consume music than buying records, which takes time, money and effort. Alejandra Villa Loraca/Newsday via Getty Images
Jay L. Zagorsky, Boston University

Over the past decade, vinyl records have made a major comeback. People purchased US$1.2 billion of records in 2022, a 20% jump from the previous year.

Not only did sales rise, but they also surpassed CD sales for the first time since 1987, according to a new report from the Recording Industry Association of America.

Who saw that coming?

I certainly didn’t. In the mid-1990s, I sold off my family’s very large collection of records over my wife’s protests. I convinced her we needed the space, even if the buyer was picking up the whole stash for a song.

Back then, of course, there were far fewer options for listening to music – it was years before on-demand streaming and smartphones.

I now teach at a business school and follow the economy’s latest trends. Sales of records have been increasing since 2007, and the data shows the vinyl record industry’s rebound still has not peaked. Last year, the music industry sold 41.3 million albums, more than in any year since 1988.

This resurgence is just one chapter in a broader story about the growing popularity of older technologies. Not only are LP records coming back, but so are manual typewriters, board games and digital cameras from the late 1990s and early 2000s.

There are many theories about why records are making a comeback.

Most of them miss the point about their appeal.

Why records and not CDs?

One suggestion is that sales have been spurred by baby boomers, many of whom are now entering retirement and are eager to tap into the nostalgia of their youth.

Data shows this theory is not true.

First, the top-selling vinyl albums right now are current artists, not classic bands. As of this writing, Gorillaz, a band formed in the late 1990s, was at the top of the vinyl charts.

Second, data from the recording industry shows the most likely person to buy an LP record is in Gen Z – people born from 1997 to 2012.

Another theory is that records are cheap. While that might have been true in the past, today’s vinyl records command a premium. “Cracker Island,” the Gorillaz album that is currently topping the vinyl sales charts, lists for almost $22 – twice the cost of the CD. Plus, subscribing to an online service like Spotify for 15 bucks a month gives you access to millions of tracks.

A third explanation for the resurgence is the claim that records have better sound quality than digital audio files. Records are analog recordings that capture the entire sound wave. Digital files are sampled at periodic intervals, which means only part of the sound wave is captured.

In addition to sampling, many streaming services and most stored audio files compress the sound information of a recording. Compression allows people to put more songs on their phones and listen to streaming services without using up much bandwidth. However, compression eliminates some sounds.
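To make the sampling idea concrete, here is a minimal sketch. The 44,100 samples per second is CD audio’s real standard rate; the code itself is only an illustration, not how any actual player or encoder works:

```python
import math

SAMPLE_RATE = 44_100   # CD audio measures the sound wave 44,100 times per second
FREQUENCY = 440.0      # a 440 Hz tone, the A above middle C

def sample_tone(duration_s: float = 0.01) -> list[float]:
    """Record a continuous sine wave only at discrete instants."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * FREQUENCY * k / SAMPLE_RATE) for k in range(n)]

samples = sample_tone()
print(f"{len(samples)} discrete numbers stand in for 10 ms of continuous sound")
```

Everything between those instants is simply never recorded, and lossy compression formats such as MP3 then discard still more of what was captured.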

While LP records are not sampled or compressed, they do develop snap, crackle and popping sounds after being played multiple times. Records also skip, which is something that doesn’t happen with digital music.

If you’re really going for quality, CDs are usually a superior digital format because the audio data is not compressed and has much better fidelity than records.

Yet even though CDs are higher quality, CD sales have been falling steadily since their peak in 2000.

The ultimate status symbol

In my view, the most likely reason for the resurgence of records was identified by an economist over a century ago.

In the late 1890s, Thorstein Veblen looked at spending in society and wrote an influential book called “The Theory of the Leisure Class.”

In it, he explained that people often buy items as a way to gain and convey status. One of Veblen’s key ideas is that not everything in life is purchased because it is easy, fun or high quality.

Sometimes harder, more time-consuming or exotic items offer more status.

A cake is a great example. Say you offer to bring a cake to a party. You can buy a bakery-made cake that will look perfect and take only a few minutes to purchase. Or you could bake one at home. Even if it’s delicious, it won’t look as nice and will take hours to make.

But if your friends are like mine, they’ll gush over the homemade cake and not mention the perfect store-bought one.

Buying and playing vinyl records is becoming a status symbol.

Today, playing music is effortless. Just ask a voice assistant like Siri or Alexa, or tap an app on your smartphone.

Playing a record on a turntable takes time and effort. Building your collection requires thoughtful deliberation and money. A record storage cube alongside an accompanying record player also makes for some nice living room decor.

And now I – the uncool professor that I am – find myself bemoaning the loss of all of those albums I sold years ago.

Jay L. Zagorsky, Clinical Associate Professor, Boston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, March 28, 2023

War in Ukraine accelerates global drive toward killer robots

It wouldn’t take much to turn this remotely operated mobile machine gun into an autonomous killer robot. Pfc. Rhita Daniel, U.S. Marine Corps
James Dawes, Macalester College

The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a Department of Defense directive. The update, released Jan. 25, 2023, is the first in a decade to focus on autonomous weapons that use artificial intelligence. It follows a related implementation plan released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.”

Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in Ukraine and Nagorno-Karabakh: Weaponized artificial intelligence is the future of warfare.

“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of Article 36, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said.

Pressure of war

But as casualties mount in Ukraine, so does the pressure to achieve decisive battlefield advantages with fully autonomous weapons – robots that can choose, hunt down and attack their targets all on their own, without needing any human supervision.

This month, a key Russian manufacturer announced plans to develop a new combat version of its Marker reconnaissance robot, an uncrewed ground vehicle, to augment existing forces in Ukraine. Fully autonomous drones are already being used to defend Ukrainian energy facilities from other drones. Wahid Nawabi, CEO of the U.S. defense contractor that manufactures the semi-autonomous Switchblade drone, said the technology is already within reach to convert these weapons to become fully autonomous.

Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the war’s “logical and inevitable next step” and recently said that soldiers might see them on the battlefield in the next six months.

Proponents of fully autonomous weapons systems argue that the technology will keep soldiers out of harm’s way by keeping them off the battlefield. Such systems would also allow military decisions to be made at superhuman speed, radically improving defensive capabilities.

Currently, semi-autonomous weapons, like loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them.

By contrast, fully autonomous drones, like the so-called “drone hunters” now deployed in Ukraine, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapons systems.

Calling for a timeout

Critics like The Campaign to Stop Killer Robots have been advocating for more than a decade to ban research and development of autonomous weapons systems. They point to a future where autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. They argue that wartime decisions over life and death must remain in human hands. Turning them over to an algorithm amounts to the ultimate form of digital dehumanization.

Together with Human Rights Watch, The Campaign to Stop Killer Robots argues that autonomous weapons systems lack the human judgment necessary to distinguish between civilians and legitimate military targets. They also lower the threshold to war by reducing the perceived risks, and they erode meaningful human control over what happens on the battlefield.

This composite image shows a ‘Switchblade’ loitering munition drone launching from a tube and extending its folded wings. U.S. Army AMRDEC Public Affairs

The organizations argue that the militaries investing most heavily in autonomous weapons systems, including the U.S., Russia, China, South Korea and the European Union, are launching the world into a costly and destabilizing new arms race. One consequence could be this dangerous new technology falling into the hands of terrorists and others outside of government control.

The updated Department of Defense directive tries to address some of the key concerns. It declares that the U.S. will use autonomous weapons systems with “appropriate levels of human judgment over the use of force.” Human Rights Watch issued a statement saying that the new directive fails to make clear what the phrase “appropriate level” means and doesn’t establish guidelines for who should determine it.

But as Gregory Allen, an expert from the national defense and international relations think tank Center for Strategic and International Studies, argues, this language establishes a lower threshold than the “meaningful human control” demanded by critics. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human control considered appropriate “may be little to none.”

The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Article 36’s Moyes noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy.

The current legal framework does not make it clear, for instance, that commanders are responsible for understanding what will trigger the systems that they use, or that they must limit the area and time over which those systems will operate. “The danger is that there is not a bright line between where we are now and where we have accepted the unacceptable,” said Moyes.

Impossible balance?

The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapons systems and to complying with international humanitarian law. How the U.S. will balance these commitments, and if such a balance is even possible, remains to be seen.

The International Committee of the Red Cross, the custodian of international humanitarian law, insists that the legal obligations of commanders and operators “cannot be transferred to a machine, algorithm or weapon system.” Right now, human beings are held responsible for protecting civilians and limiting combat damage by making sure the use of force is proportional to military objectives.

If and when artificially intelligent weapons are deployed on the battlefield, who should be held responsible when needless civilian deaths occur? There isn’t a clear answer to that very important question.

James Dawes, Professor of English, Macalester College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Room-temperature superconductors could revolutionize electronics – an electrical engineer explains the materials’ potential

Room-temperature superconductors could make high-speed maglev trains more practical. Visual China Group via Getty Images
Massoud Pedram, University of Southern California

Superconductors make highly efficient electronics, but the ultralow temperatures and ultrahigh pressures required to make them work are costly and difficult to implement. Room-temperature superconductors promise to change that.

The recent announcement by researchers at the University of Rochester of a new material that is a superconductor at room temperature, albeit at high pressure, is an exciting development – if proved. If the material or one like it works reliably and can be economically mass-produced, it could revolutionize electronics.

Room-temperature superconducting materials would lead to many new possibilities for practical applications, including ultraefficient electricity grids, ultrafast and energy-efficient computer chips, and ultrapowerful magnets that can be used to levitate trains and control fusion reactors.

A superconductor is a material that conducts direct current without encountering any electrical resistance. Resistance is the property of the material that hinders the flow of electricity. Traditional superconductors must be cooled to extremely low temperatures, close to absolute zero.

In recent decades, researchers have developed so-called high-temperature superconductors, which only have to be chilled to minus-10 degrees Fahrenheit (minus-23 Celsius). Though easier to work with than traditional superconductors, high-temperature superconductors still require special thermal equipment. In addition to cold temperatures, these materials require very high pressure, 1.67 million times more than the atmospheric pressure of 14.6 pounds per square inch (1 bar).

As the name suggests, room-temperature superconductors don’t need special equipment to cool them. They do need to be pressurized, but only to a level that’s about 10,000 times more than atmospheric pressure. This pressure can be achieved by using strong metallic casings.

Where superconductors are used

Superconductor electronics refers to electronic devices and circuits that use superconducting materials to achieve levels of performance and energy efficiency orders of magnitude better than state-of-the-art semiconductor devices and circuits can deliver.

The lack of electrical resistance in superconducting materials means they can carry high electrical currents without any energy loss to resistance. This efficiency makes superconductors very attractive for power transmission.
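The physics behind that attraction is the resistive-loss formula P = I²R. A small sketch with invented numbers (no real grid data here) shows why zero resistance means zero transmission loss:

```python
def line_loss_watts(current_amps: float, resistance_ohms: float) -> float:
    """Resistive power lost as heat in a transmission line: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

current = 1_000.0   # hypothetical line current, in amps
copper_r = 0.5      # hypothetical resistance of a long copper run, in ohms

print(line_loss_watts(current, copper_r))   # 500000.0 W wasted as heat
print(line_loss_watts(current, 0.0))        # 0.0 W for a superconducting line
```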

Utility provider Commonwealth Edison installed high-temperature superconducting transmission lines and showcased technologies to bring power to Chicago’s north side for a one-year trial period. Compared to conventional copper wire, the upgraded superconducting wire can carry 200 times the electrical current. But the cost of maintaining the low temperatures and high pressures required for today’s superconductors makes even this efficiency gain impractical in most cases.

Because the resistance of a superconductor is zero, if a current is applied to a superconducting loop, the current will persist forever unless the loop is broken. This phenomenon can be used in various applications to make large permanent magnets.
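The “persists forever” claim follows from how a circulating current decays in a loop with resistance R and inductance L:

$$I(t) = I_0\, e^{-Rt/L},$$

so as R approaches zero, the decay time constant L/R grows without bound and the current stays at its initial value indefinitely.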

Today’s magnetic resonance imaging machines use superconductor magnets to achieve the magnetic field strength of a few teslas, which is needed for accurate imaging. For comparison, the Earth’s magnetic field has a strength, or flux density, of about 50 microteslas. The magnetic field produced by the superconducting magnet in a 1.5 tesla MRI machine is 30,000 times stronger than that produced by the Earth.
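That 30,000 figure is just the ratio of the two field strengths:

$$\frac{1.5\ \text{T}}{50\ \mu\text{T}} = \frac{1.5}{50 \times 10^{-6}} = 30{,}000.$$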

Superconductors, from theory to applications.

The scanner uses the superconducting magnet to generate a magnetic field that aligns hydrogen nuclei in a patient’s body. This process combined with radio waves produces images of tissue for an MRI exam. The strength of the magnet directly affects the strength of the MRI signal. A 1.5 tesla MRI machine requires longer scan times to create clear images than a 3.0 tesla machine.

Superconducting materials expel magnetic fields from inside themselves, which makes them powerful electromagnets. These super-magnets have the potential to levitate trains. Superconducting electromagnets generate 8.3 tesla magnetic fields – more than 100,000 times the Earth’s magnetic field. The electromagnets use a current of 11,080 amperes to produce the field, and a superconducting coil allows the high currents to flow without losing any energy. The Yamanashi superconducting Maglev train in Japan levitates 4 inches (10 centimeters) above its guideway and travels at speeds up to 311 mph (500 kph).

Superconducting circuits are also a promising technology for quantum computing because they can be used as qubits. Qubits are the basic units of quantum processors, analogous to but much more powerful than transistors in classical computers. Companies such as D-Wave Systems, Google and IBM have built quantum computers that use superconducting qubits. Though superconducting circuits make good qubits, they pose some technological challenges to making quantum computers with large numbers of qubits. A key issue is the need to keep the qubits at very low temperatures, which requires the use of large cryogenic devices known as dilution refrigerators.

Some quantum computer processors use superconducting circuits. Steve Jurvetson/Flickr, CC BY

Promise of room-temperature superconductors

Room-temperature superconductors would remove many of the challenges associated with the high cost of operating superconductor-based circuits and systems and make it easier to use them in the field.

Room-temperature superconductors would enable ultra-high-speed digital interconnects for next-generation computers and low-latency broadband wireless communications. They would also enable high-resolution imaging techniques and emerging sensors for biomedical and security applications, materials and structure analyses, and deep-space radio astrophysics.

Room-temperature superconductors would mean MRIs could become much less expensive to operate because they would not require liquid helium coolant, which is expensive and in short supply. Electrical power grids would be at least 20% more power efficient than today’s grids, resulting in billions of dollars saved per year, according to my estimates. Maglev trains could operate over longer distances at lower costs. Computers would run faster with orders of magnitude lower power consumption. And quantum computers could be built with many more qubits, enabling them to solve problems that are far beyond the reach of today’s most powerful supercomputers.

Whether and how soon this promising future of electronics can be realized depends in part on whether the new room-temperature superconductor material can be verified – and whether it can be economically mass-produced.

Massoud Pedram, Professor of Electrical and Computer Engineering, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Prosecuting Putin for abducting Ukrainian children will require a high bar of evidence – and won’t guarantee the children can come back home

Thousands of teddy bears with candles on display at a protest in Brussels in February 2023 represented abducted Ukrainian children. Nicolas Maeterlinck/Belga MAG/AFP via Getty Images
Stefan Schmitt, Florida International University

The International Criminal Court issued an arrest warrant for Russian President Vladimir Putin on March 17, 2023, over war crimes in Ukraine, alleging he bears “individual criminal responsibility” for abducting thousands of children from occupied parts of the country.

Russia’s Commissioner for Children’s Rights, Maria Alekseyevna Lvova-Belova, was also cited by the court on similar charges.

They mark the first arrest warrants the independent tribunal, based in The Hague, has issued since Russia launched a full-scale invasion of Ukraine in February 2022.

But the development will not guarantee the imminent arrest of Putin. The ICC, as it is often called, does not have its own police force and requires other supporting countries to enforce its warrants.

“The ICC is doing its part of work as a court of law. The judges issued arrest warrants. The execution depends on international cooperation,” the court’s president, Piotr Hofmanski, said in a statement on March 17.

Russian police aren’t likely to arrest their country’s leader, so as long as Putin remains inside Russia, he is probably safe.

Since Russia launched an invasion of Ukraine in February 2022, the Ukrainian government, Western powers and the United Nations have collected evidence of Russian violations of international humanitarian law, such as war crimes. This includes widespread sexual violence and the forced abduction and transfer of thousands of Ukrainian children to Russia.

Since 1998, I have worked in securing forensic evidence of these types of crimes in Afghanistan, Guatemala and other places. To me, it is apparent that identifying and collecting evidence of international crimes like killing civilians during conflict is beyond the capabilities and resources of local police crime scene teams, criminal investigators and prosecutors.

It’s also likely that the full extent of war crimes committed by both Ukraine and Russia won’t be credibly investigated and possibly prosecuted until after the war finally ends.

It surprises me that arrest warrants would be issued for the abduction of Ukrainian children. In order to successfully prosecute this crime, investigators will need to show that not only did the alleged abductors take the children against their will, but that they also did not intend to return the children to their legal guardians. This can be more challenging to prove than other kinds of war crimes.

To put these arrest warrants into perspective, it is also useful to remember that the ICC tends to focus on high-level cases against political leaders and is not tasked with providing answers to the families of all victims.

An exterior view of the International Criminal Court building in The Hague, Netherlands. Michel Porro/Getty Images

Proving war crimes

War crimes, under international law, happen when civilians, prisoners of war, hospitals or schools – essentially anyone and anything that isn’t involved in military activities – are targeted during a conflict.

The Ukrainian government and Donetsk People’s Republic, a Ukrainian breakaway region occupied by Russians, have prosecuted and convicted both Russian and Ukrainian soldiers for war crimes since February 2022.

Ukraine has so far convicted 25 Russian soldiers of war crimes in Ukraine. These prosecutions raise questions about how evidence is collected and handled to support these cases – and about credibility.

Ukraine has a history of government corruption, and Donetsk is both not recognized internationally and is backed by Russia, which has a judicial system known to tolerate torture.

I investigate cases in which law enforcement, military and police are alleged to have committed crimes against civilians without accountability. In many cases, these alleged crimes happen during a civil war, like the Guatemalan civil war in the late 1970s and early 1980s, or the Rwandan conflict and genocide in the mid-1990s.

This means that I often work with international organizations like the United Nations to travel to these places and document physical evidence of war crimes – take photographs, take notes, do measurements and draw sketches to illustrate a potential crime scene. The idea is that any other experts can pick up this evidence and reach their own conclusions about what happened there.

Crime scene investigators like me generally do not determine whether a war crime was committed. That is a decision reserved for the prosecutor or a judge who is given the evidence.

Ukrainian investigators exhume bodies from a mass grave in Bucha, Ukraine, in April 2022. Genya Savilov/AFP via Getty Images

Beyond political interests

Considering that this war is fought between Ukrainians and Russians – but involves other countries like the United States – any independent effort to investigate war crimes will raise questions of credibility.

In this context, one has to consider if an independent investigation and prosecution is even possible. The ICC is perhaps the best candidate, even though it is far from immune to political pressure, particularly from powerful countries.

The ICC has a specific mandate to go after people allegedly responsible “for the gravest crimes of concern to the international community.” This includes genocide, crimes against humanity and war crimes. The forced transfer and deportation of a group of people is a war crime.

But the ICC isn’t tasked with investigating the fate of victims on all sides of the war. That will take a separate effort involving decades of work and large amounts of money, and it will require the support of rich countries.

Since its inception in 2002, the ICC has indicted more than 40 people, all from Africa, and convicted 10 of them. While 123 countries are party to the ICC, meaning they have signed on to support its work, neither Russia nor Ukraine has ratified the treaty that allows the ICC to investigate crimes on their territories or by their forces.

Russia’s foreign ministry responded to the March 17 announcement by the ICC by saying that the arrest warrant does not “have meaning” for Russia, since it is not a party to the ICC.

The U.S. also never ratified the ICC’s founding treaty, with the justification that it would not accept prosecution of U.S. soldiers by a foreign court.

Ukraine, though, has given the ICC narrow jurisdiction to investigate crimes there since 2014.

In some cases, the ICC has not been able to successfully prosecute people even when it issues indictments. The court in 2009 and 2010, for example, issued indictments against Omar al-Bashir, former head of state in Sudan, for his role in carrying out genocide, and directing war crimes in Darfur. Yet, even though al-Bashir traveled internationally, no authority in any country he visited ever arrested him, despite the ICC’s arrest warrant.

A makeshift memorial dedicated to children killed, wounded, deported or missing in Ukraine is seen outside the Russian Embassy in Berlin in February 2023. Odd Andersen/AFP via Getty Images

Proving abductions took place

Russian forces have moved at least 6,000 Ukrainian children to camps and facilities across Russia for forced adoptions and military training, according to a March 2023 report by the Conflict Observatory, a program supported by the U.S. State Department.

Showing sufficient evidence that Russia forcibly abducted the children and did not intend to return them to their legal guardians would likely involve the children’s family members giving witness statements. That is, unless the ICC’s prosecutor has obtained Russian military documents or communications that clearly indicate that these are involuntary abductions.

Contrast this with trying to prosecute Russian military commanders and leaders for conducting multiple bombings of nonmilitary sites in Ukraine, such as hospitals or schools. It would be relatively simple to provide evidence that the attacks on these places constituted war crimes, as long as there is no evidence that these sites lost their protected status under international law, such as evidence that a bombed hospital or school had been used for military purposes.

The victims

War crimes involving massive numbers of casualties leave behind a multitude of surviving family members, all of whom have the right to know the fate of their loved ones.

But it is important to remember that the ICC’s prosecution of any war crime will not extend beyond the individual arrest and prosecution of soldiers and political leaders. The court is not responsible for repatriating children to their respective families.

This is an updated version of an article originally published on Aug. 5, 2022.

Stefan Schmitt, Project Lead for International Technical Forensic Services, Florida International University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Life After Stroke: 5 tips for recovery and daily living

(Family Features) In the weeks and months immediately following a stroke, an early rehabilitation program offers the best possible recovery outcomes. While each person’s stroke recovery journey is unique, starting the path toward rehabilitation as soon as it’s medically safe allows stroke survivors to mitigate the lasting effects.

According to the American Stroke Association, a division of the American Heart Association, each year, approximately 800,000 people in the United States have a stroke. Strokes can happen to anyone, at any age. In fact, globally about 1 in 4 adults over the age of 25 will have a stroke in their lifetime.

Early Intervention
The rehabilitation and support a survivor receives can greatly influence health outcomes and recovery. The first three months after a stroke are especially critical. Although recovery may continue for years after a stroke, this time in the immediate aftermath of a stroke is when the brain is most able to adjust to the damage done by the stroke so the survivor can learn new ways to do things.

Physical, Communication and Cognitive Changes
Following a stroke, a survivor may experience physical changes such as fatigue, seizures, weakness or paralysis on one side of the body, or spasticity (stiff or rigid muscles), which may cause difficulty completing daily activities and tasks. If experiencing fatigue, speak with your health care provider about ways to reduce it. Your care team may also be able to provide medications to help with seizures and spasticity. Physical therapy is also an option.

Challenges after a stroke depend on the severity and location of the stroke. In addition to various physical disabilities, stroke survivors may experience aphasia – problems with communication and thought related to speaking, listening, understanding or memory. Planning, organizing ideas or making decisions can also be harder.

“Remember to be patient when communicating with a stroke survivor,” said Elissa Charbonneau, M.S., D.O., chief medical officer of Encompass Health and an American Stroke Association national volunteer. “The impact of a stroke on cognitive, speech and language abilities can be significant and isolating. When connecting with a stroke survivor, some helpful practices include demonstrating tasks, breaking actions into smaller steps, enunciating, asking multiple-choice questions and using repetition.”

Customized Rehabilitation
Once a stroke survivor’s medical condition is stabilized and he or she is ready to leave the hospital, rehabilitation can help restore function and teach new ways to complete everyday tasks. Rehabilitation may take place in an inpatient facility, skilled nursing facility or long-term acute care facility. Outpatient clinics and home health agencies can also provide rehabilitative care in certain circumstances.

One patient’s rehab journey might include therapy to improve balance, strength or mobility while another might need speech or other therapies. A rehabilitation program designed for the individual is critical.

Preventing a Recurrence
After a first stroke, nearly 1 in 4 survivors will have another. Stroke survivors can help reduce their risk of having another stroke by working with their health care team to identify what caused the stroke and uncover personal risk factors.

Taking steps such as healthy eating, reducing sedentary time and taking medications as prescribed can help protect your brain and reduce your risk of a repeat stroke. Controlling conditions such as high blood pressure, diabetes and sleep apnea also reduces your risk of having another stroke.

Support During Your Journey
Caregivers and other loved ones can provide important long-term support during your recovery and rehabilitation.

Find resources for stroke rehab and recovery including the “Life After Stroke” guide, “Simply Good” cookbook and a support network to connect with other survivors at Stroke.org/Recovery.

 

Photo courtesy of Getty Images
 

SOURCE:
American Heart Association

A Twist on Traditional Burgers


Warm weather and grilling go hand-in-hand, and few dishes say summer like burgers. While traditional beef patties come to mind for many, there are also healthy protein options to satisfy that burger craving without sacrificing flavor.

For example, salmon is a nutritionally well-rounded alternative that offers a variety of health benefits, and an option like gluten-free Trident Seafoods Alaska Salmon Burgers are made with wild, sustainable, ocean-caught whole filets with no fillers and are lightly seasoned with a “just-off-the-grill,” smoky flavor. Topped with melted cheddar then piled on top of fresh arugula, peppered bacon and zesty mayo, these Alaskan Salmon Burgers with Peppered Bacon are a twist on tradition that can help you put a protein-packed, flavorful meal on the table in minutes.

Find more healthy seafood recipes at tridentseafoods.com.

Alaskan Salmon Burgers with Peppered Bacon

Prep time: 13 minutes
Servings: 4

• 1/2 cup mayonnaise
• 1 1/2 tablespoons lemon juice
• 1/2 teaspoon lemon zest
• salt
• pepper
• 1 box (11.2 ounces) Trident Seafoods Alaska Salmon Burgers
• 4 cheddar cheese slices
• 4 seeded burger buns, split and toasted
• 4 cups arugula
• 6 strips peppered bacon, cooked
• 12 bread-and-butter pickles, drained

1. In small bowl, combine mayonnaise, lemon juice and lemon zest. Season with salt and pepper. Set aside.
2. Cook salmon burgers according to package directions. When almost cooked through, top each with slice of cheese, cover and cook until melted.
3. Spread cut sides of buns with mayonnaise and top bottom buns with arugula. Cover with salmon burgers, bacon, pickles and top buns.
SOURCE:
Trident Seafoods

Why tornadoes are still hard to forecast – even though storm predictions are improving

A series of images in this photo montage shows the evolution of a tornado. JasonWeingart via Wikimedia, CC BY-SA
Chris Nowotarski, Texas A&M University

As a deadly tornado headed toward Rolling Fork, Mississippi, on March 24, 2023, forecasters saw the storm developing on radar and issued a rare “tornado emergency” warning. NOAA’s Weather Prediction and Storm Prediction centers had been warning for several days about the risk of severe weather in the region. But while forecasters can see the signs of potential tornadoes in advance, forecasting when and where tornadoes will form is still extremely difficult.

We asked Chris Nowotarski, an atmospheric scientist who works on severe thunderstorm computer modeling, to explain why – and how forecast technology is improving.

Why are tornadoes still so difficult to forecast?

Meteorologists have gotten a lot better at forecasting the conditions that make tornadoes more likely. But predicting exactly which thunderstorms will produce a tornado and when is harder, and that’s where a lot of severe weather research is focused today.

Often, you’ll have a line of thunderstorms in an environment that looks favorable for tornadoes, and one storm might produce a tornado but the others don’t.

The differences between them could be due to small differences in meteorological variables, such as temperature. Even changes in the land surface conditions – fields, forested regions or urban environments – could affect whether a tornado forms. These small changes in the storm environment can have large impacts on the processes within storms that can make or break a tornado.

One way scientists gather data for understanding tornadoes is by chasing storms. Annette Price/CIWRO, CC BY

One of the strongest predictors of whether a thunderstorm produces a tornado relates to vertical wind shear, which is how the wind changes direction or speed with height in the atmosphere.
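Forecasters commonly condense this into a “bulk shear” number: the magnitude of the vector difference between the winds at two heights, often the surface and 6 kilometers. Here is a minimal sketch with invented wind values; real calculations use full sounding data:

```python
import math

def bulk_shear(u_low: float, v_low: float, u_high: float, v_high: float) -> float:
    """Magnitude of the vector wind difference between two levels, in m/s."""
    return math.hypot(u_high - u_low, v_high - v_low)

# Invented example: a 5 m/s southerly wind at the surface
# turning into a 25 m/s westerly wind at 6 km.
surface = (0.0, 5.0)   # (east-west, north-south) components in m/s
six_km = (25.0, 0.0)

print(f"0-6 km bulk shear: {bulk_shear(*surface, *six_km):.1f} m/s")  # about 25.5
```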

How wind shear interacts with rain-cooled air within storms, which we call “outflow,” and how much precipitation evaporates can influence whether a tornado forms. If you’ve ever been in a thunderstorm, you know that right before it starts to rain, you often get a gust of cold air surging out from the storm. The characteristics of that cold air outflow are important to whether a tornado can form, because tornadoes typically form in that cooler portion of the storm.

How far in advance can you know if a tornado is likely to be large and powerful?

It’s complicated. Radar is still our biggest tool for determining when to issue a tornado warning – meaning a tornado is imminent in the area and people should seek shelter.

The vast majority of violent tornadoes form from supercells, thunderstorms with a deep rotating updraft, called a “mesocyclone.” Vertical wind shear can enable the midlevels of the storm to rotate, and upward suction from this mesocyclone can intensify the rotation within the storm’s outflow into a tornado.

If you have a supercell and it has strong rotation above the ground, that’s often a precursor to a tornado. Some research suggests that a wider mesocyclone is more likely to create a stronger, longer-lasting tornado than other storms.

Forecasters also look at the storm’s environmental conditions – temperature, humidity and wind shear. Those offer more clues that a storm is likely to produce a significant tornado.

What radar showed as a tornado headed toward Rolling Fork on March 24, 2023.

The percentage of tornadoes that receive a warning has increased over recent decades, due to Doppler radar, improved modeling and better understanding of the storm environment. About 87% of deadly tornadoes from 2003 to 2017 had an advance warning.

The lead time for warnings has also improved. In general, it’s about 10 to 15 minutes now. That’s enough time to get to your basement or, if you’re in a trailer park or outside, to find a safe facility. Not every storm will have that much lead time, so it’s important to get to shelter fast.

What are researchers discovering today about tornadoes that can help protect lives in the future?

If you think back to the movie “Twister,” in the early 1990s we were starting to do more field work on tornadoes. We were taking radar out in trucks and driving vehicles with roof-mounted instruments into storms. That’s when we really started to appreciate what we call the storm-scale processes – the conditions inside the storm itself, how variations in temperature and humidity in outflow can influence the potential for tornadoes.

Scientists can’t launch a weather balloon or send instruments into every storm, though. So, we also use computers to model storms to understand what’s happening inside. Often, we’ll run several models, referred to as ensembles. For instance, if nine out of 10 models produce a tornado, we know there’s a good chance the storm will produce tornadoes.
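The probability forecasters read off an ensemble is, at heart, just the fraction of members that produce the event. A toy sketch (a real numerical weather prediction ensemble is vastly more complex):

```python
def ensemble_probability(member_outcomes: list[bool]) -> float:
    """Fraction of ensemble members whose simulated storm produced a tornado."""
    return sum(member_outcomes) / len(member_outcomes)

# Hypothetical run: 9 of 10 model members spin up a tornado.
members = [True] * 9 + [False]
print(f"Tornado probability: {ensemble_probability(members):.0%}")  # 90%
```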

The National Severe Storms Laboratory has recently been experimenting with tornado warnings based on these models, called Warn-on-Forecast, to increase the lead time for tornado warnings.

An early warning can be the difference between life and death for people in homes without basements or cellars. Chandan Khanna/AFP via Getty Images

There are a lot of other areas of research. For example, to better understand how storms form, I do a lot of idealized computer modeling. For that, I use a model with a simplified storm environment and make small changes to the environment to see how that changes the physics within the storm itself.

There are also new tools in storm chasing. There’s been an explosion in the use of drones – scientists are putting sensors into unmanned aerial vehicles and flying them close to and sometimes into the storm.

The focus of tornado research has also shifted from the Great Plains – the traditional “tornado alley” – to the Southeast.

A map of severe tornadoes from 1986 to 2015 shows a large number in the Southeast. NOAA Storm Prediction Center

What’s different about tornadoes in the Southeast?

In the Southeast there are some different influences on storms compared with the Great Plains. The Southeast has more trees and more varied terrain, and also more moisture in the atmosphere because it’s close to the Gulf of Mexico. There tend to be more fatalities in the Southeast, too, because more tornadoes form at night.

We tend to see more tornadoes in the Southeast that are in lines of thunderstorms called “quasi-linear convective systems.” The processes that lead to tornadoes in these storms can be different, and scientists are learning more about that.

Some research has also suggested the start of a climatological shift in tornadoes toward the Southeast. It can be difficult to disentangle an increase in storms from better technology spotting more tornadoes, though. So, more research is needed.

Chris Nowotarski, Associate Professor of Atmospheric Science, Texas A&M University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Teacher pensions are becoming a bigger share of educational costs

Teacher pensions cost nearly $66 billion in 2020. Jose Luis Pelaez Inc/Getty Images
Michael Addonizio, Wayne State University

The 2022 stock market plunge has taken a toll on some of the nation’s largest state and municipal pension funds, making it harder for governments to pay for future retirement benefits to millions of K-12 teachers and other public employees.

Here, Michael Addonizio, an education policy expert at Wayne State University, provides insight on how teacher pensions are affecting K-12 school budgets overall and what, if anything, can be done to better manage pension systems and close funding gaps.

1. Is there enough money to pay teacher pensions?

Yes and no. There is enough money to pay pension benefits to current retirees. But there is not enough money to pay all promised benefits to future retirees.

U.S. teacher pension funds collectively manage about US$3 trillion in assets. These dollars are invested in stocks, bonds, real estate, foreign currency and other assets. But the assets held by the retirement plans are generally less than the plans’ liabilities – that is, the projected cost of benefits promised to future retirees. As of 2022, this gap between assets and liabilities is about $878 billion. Put another way, the ratio of assets to liabilities is about 77%. This ratio is down from about 84% in 2021, but is higher than in any other year since 2008.
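These figures fit together with simple arithmetic; the sketch below just restates the reported numbers, with liabilities implied by the assets plus the gap:

```python
assets = 3.0e12      # roughly $3 trillion in pension fund assets
shortfall = 878e9    # the reported $878 billion gap
liabilities = assets + shortfall

print(f"Funded ratio: {assets / liabilities:.0%}")  # about 77%, as reported
```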

The amount spent on teacher retirement costs in 2020 – $65.9 billion – represented 5.5% of total state and local K-12 spending.

The problem is that these retirement costs have been growing faster than total K-12 expenditures for decades. In 2001, retirement costs amounted to only 1.3% of total state and local school spending.

The growth in teacher retirement costs is due mostly to an increase in payments for unfunded pension liabilities, often referred to as pension debt. This is the amount of money that states and municipalities pay annually into their retirement systems to cover previously unfunded liabilities – that is, the shortfall that a pension fund needs to pay all future promised benefits.

2. How do these pension funding shortfalls occur?

Every year, pension planners have to make assumptions about how fast teacher salaries will grow, how many teachers will teach long enough to qualify for a pension, how long qualified retired teachers will live and collect benefits and how the pension fund’s investments will perform. If all these assumptions are correct and the plan’s expected assets cover its expected liabilities, the plan is considered fully funded.

But the typical teacher pension plan has not been fully funded at any point since about the year 2000. Overly optimistic investment assumptions are often the biggest part of unfunded liabilities. In response, states or big cities often redirect money from school operating budgets into the pension funds. But these governments often fail to make these payments in full.
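To see why optimistic investment assumptions loom so large, consider a toy present-value calculation; the benefit stream and the two return assumptions below are invented for illustration. The same promised benefits look markedly cheaper when discounted at a higher assumed return:

```python
def present_value(annual_benefit: float, years: int, assumed_return: float) -> float:
    """Present value of a level stream of promised benefit payments."""
    return sum(annual_benefit / (1 + assumed_return) ** t
               for t in range(1, years + 1))

benefit, horizon = 1_000_000, 30   # hypothetical $1M per year for 30 years

optimistic = present_value(benefit, horizon, 0.075)  # about $11.8 million
cautious = present_value(benefit, horizon, 0.05)     # about $15.4 million
print(f"Liability looks {cautious / optimistic - 1:.0%} larger at 5% than at 7.5%")
```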

States and cities face fiscal pressures from other spending demands and from tax collections that fail to keep pace. Pushing some unfunded pension liability costs into the future is often seen as less painful than cutting current government programs or raising taxes. But skimping on covering costs for future retirees often compounds the system’s liability problem over time.

In 2021, fully 69% of teacher retirement costs went to cover unfunded pension liabilities, up from 17% in 2001. In other words, the cost of future benefits is growing faster than the cost of current-year benefits.

Could it be due to increasingly generous retirement benefits? No. A recent report by the Equable Institute, a bipartisan nonprofit that studies public pensions and advises employees, communities and policymakers, concludes that the average value of lifetime benefits for new teachers is about $100,000 less than for their more senior colleagues.

Rather, unfunded pension liabilities can rise because of downturns in the financial markets, which lower the systems’ investment earnings. They may also increase when schools hire more teachers and support staff, increasing the number of workers in the pension system. Rising borrowing costs can add to the liabilities as well.

3. What does this mean for education funding?

As more public dollars flow to teacher retirement systems, fewer resources are available for schools and classrooms. From 2002 to 2020, total state and local K-12 spending rose 33%, while teacher retirement spending rose 220%. Nationally, and in most states, teacher pension costs have been rising faster than K-12 spending for the past two decades. States then take money from funds normally dedicated to school operations and move it to the pension fund. The result has been less spending for school operations, in the form of either spending cuts or a smaller share of a growing spending pie.

For example, in the 2022-23 fiscal year, my state of Michigan will pay nearly $3 billion from the state School Aid Fund into the state-administered Public School Employees Retirement System to cover future pension costs. However, while this move will lower the amount of unfunded liabilities in the system, these dollars will come directly from state funds intended to support general K-12 school operations.

This practice has been repeated in many states over the past two decades. According to the Equable Institute study, the “hidden cuts” of using K-12 funds to cover pension costs have risen from $457 per student in 2001 to $1,290 per student in 2021 – a 182% increase in constant 2021 dollars.

4. How can the problem be solved?

The solutions rest with the states, and there is no “one size fits all” remedy. Each state has its own K-12 funding system and teacher retirement plans, which are governed by many rules that are embedded in state constitutions and laws. These state laws vary. For example, teachers in 15 states, including California and Texas, aren’t covered by the Social Security system. But there are some common issues and ways to address them.

One common problem is transparency. While it’s usually relatively easy to see how much states, districts and schools are spending on operations, public data on teacher retirement costs – particularly pension liability costs – is remarkably scarce.

Pension dollars are as much a part of public education budgets as spending on teacher and staff salaries, books, buses and the rest. Careful monitoring and reporting of pension costs, both payments and liabilities, may improve management of these costs before they inflict more damage on budgets for teaching and learning.

Secondly, many states have reduced their financial support for K-12 schools in recent years. The share of personal income devoted to K-12 schools has steadily declined in 39 states since the 2007-2009 Great Recession.

States could protect school operating budgets by using general fund revenue to pay pension liability costs, not dedicated K-12 aid. Local districts could be responsible for the cost of current-year retirement benefits but cannot do much to manage unfunded pension liabilities. States could cover pension debt costs without reducing state aid for school operations, but it would require raising taxes or cutting programs in other areas.

To begin moving in this direction, states could restore their pre-recession levels of tax effort for K-12 education. A recent study by researchers from Rutgers University, the University of Miami and the Albert Shanker Institute concluded that had all states done this by 2016, schools would have reaped $288 billion in added funding.

Trading off pension support against school operating funds is not an inevitable result of rising pension costs. Whether states have the economic means or political will to address this problem effectively remains to be seen.

Michael Addonizio, Professor of Educational Leadership and Policy Studies, Wayne State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

4-day work week trials have been labelled a ‘resounding success’. But 4 big questions need answers

Shutterstock
Anthony Veal, University of Technology Sydney

A little more than a century ago, most people in industrialised countries worked 60 hours a week – six ten-hour days. A 40-hour work week of five eight-hour days became the norm, along with increased paid holidays, in the 1950s.

These changes were made possible by massive increases in productivity and hard-fought struggles by workers with bosses for a fair share of the expanding economic pie.

In the 1960s and ‘70s it was expected that this pattern would continue. It was even anticipated that, by the year 2000, there would be a “leisure society”. Instead, the trend towards reduced working hours ground to a halt.

But now there are suggestions we are on the cusp of another great leap forward – a 32-hour, four-day week for the same pay as working five days. This is sometimes referred to as the “100-80-100” model. You will continue to be paid 100% of your wages in return for working 80% of the hours but maintaining 100% production.
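The arithmetic implied by the model is worth spelling out: producing 100% of the output in 80% of the hours requires hourly productivity to rise by a quarter, since

$$\frac{100\%\ \text{output}}{80\%\ \text{hours}} = 1.25,$$

a 25% increase in output per hour worked.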

In Spain and Scotland, political parties have won elections with the promise of trialling a four-day week, although a similar move in the 2019 UK general election was unsuccessful. In Australia, a Senate committee inquiry has recommended a national trial of the four-day week.

Hopes of the four-day week becoming reality have been buoyed by glowing reports about the success of four-day week trials, in which employers have reported cutting hours but maintaining productivity.

However, impressive as the trial results may appear, it’s still not clear whether the model would work across the economy.

An employer-led movement

Unlike previous campaigns for a shorter work week, the four-day workweek movement is being led by employers in a few, mainly English-speaking, countries. Notable is Andrew Barnes, owner of a New Zealand financial services company, who founded the “4-Day Week Global” organisation.

It has coordinated a program of four-day week trials in six countries (Australia, Canada, Ireland, New Zealand, the United Kingdom and the United States). Almost 100 companies and more than 3,000 employees have been involved. (A highly publicised trial in Iceland was not coordinated by it.)

These trials are being monitored by an “international collaboration” of research teams at three universities: Boston College, Cambridge University, and University College Dublin. The Boston College team is led by work-time/leisure-time guru Juliet Schor, author of the 1991 bestseller The Overworked American.

A number of reports have been published, including one “global” report covering all six countries, and separate reports for the UK and Ireland. A report on the Australian trial is promised for April.

The latest report from the 4 Day Week Global organisation. 4 Day Week Global

Overall, these reports have declared the trials a “resounding success” – both for employers and employees.

Employees, unsurprisingly, were overwhelmingly positive. They reported less stress, burnout, fatigue and work-family conflict, and better physical and mental health.

More significant were the employers’ responses. They have generally reported improved employee morale and no loss of revenue. Nearly all have committed to, or are considering, continuing with the four-day-week model.

Four big questions

The trials do not, however, answer all the questions about the viability of the four-day week. The four main ones are as follows.

First, are the research results reliable?

Employers and employees were surveyed at the start, halfway through and at the end of the six-month trials. But only about half of the employees and two-thirds of employers completed the vital final round. So there’s some uncertainty about their representativeness.

Second, did the participating firms demonstrate the key productivity proposition: an increase of 25% in output per employee per hour worked?

The firms involved were not asked to provide “output” data, just revenue. This may be a reasonable substitute. But it may also have been affected by price movements (inflation was on the march in 2022).

Third, for those firms that achieved the claimed productivity increase, how did it come about? And is it sustainable?

Proponents of the four-day week argue that employees are more productive because they work in a more concentrated way, ignoring distractions. A much longer period than six months will be needed to establish whether this more intense work pattern is sustainable.

Fourth, is the four-day model likely to be applicable across the whole economy?

This is the key question, the answer to which will only emerge over time. The organisations involved in the trials were self-selected and unrepresentative of the economy as a whole. They employed mostly office-based workers. Almost four-fifths were in managerial, professional, IT and clerical occupations. Organisations in other sectors, with different occupational profiles, may find increased productivity through more intensive working difficult to emulate.

Take manufacturing: only three firms from this sector were included in the large UK trial. Since manufacturing has been subject to efficiency studies and labour-saving investment for a century or more, an overall 25% “efficiency gain” to be had across the board seems unlikely.

The productivity gains achieved in office environments may be harder to replicate in other settings such as manufacturing. Shutterstock

Then there are sectors that provide face-to-face services to the public, often seven days a week. They cannot close for a day, and their work intensity is often governed by health and safety concerns. Reduced hours are unlikely to be covered by individual productivity increases. To maintain operating hours, either staff will have to work overtime or more staff would need to be employed.

As for the public sector, in Australia and other countries “efficiency savings” involving budget cuts of about 2% a year have been common for decades. Any “slack” is likely to have been already squeezed out of the system. Again, reducing standard hours would result in the need to pay overtime rates or recruit extra staff, at extra cost.

So what now?

This does not mean the four-day week could not spread through the economy.

One scenario is that it could spread in those workplaces and sectors where productivity gains are achievable.

Those employers and sectors not offering reduced hours would find it harder to recruit staff. They would need to reduce hours, perhaps by stages, to compete. In the absence of productivity gains, they would be forced to absorb the extra costs or pass them on in increased prices.

The pace at which such change takes place would depend, as it always has, on the level of economic growth, productivity trends and labour market conditions.

But it is unlikely to happen overnight. And, as always, it will be accompanied by many employers and their representatives claiming the sky is about to fall in.

Anthony Veal, Adjunct Professor, Business School, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.