A roadmap to competitive preclinical packages
Nature Medicine, Published online: 17 April 2026; doi:10.1038/s41591-026-04345-2
Should researchers avoid translational research in animals in favor of human or AI models? We argue that this debate should focus not on comparing species but instead on how experimental systems can be combined to maximize mechanistic confidence, human relevance, and real-world decision-making value.
The case for fixing everything
The handsome new book Maintenance: Of Everything, Part One, by the tech industry legend Stewart Brand, promises to be the first in a series offering “a comprehensive overview of the civilizational importance of maintenance.” One of Brand’s several biographers described him as a mainstay of both counterculture and cyberculture, and with Maintenance, Brand wants us to understand that the upkeep and repair of tools and systems has a profound impact on daily life. As he puts it, “Taking responsibility for maintaining something—whether a motorcycle, a monument, or our planet—can be a radical act.”
Radical how? This volume doesn’t say. In an outline for the overall work, Brand says his goal is to “end with the nature of maintainers and the honor owed them.”
The idea that maintainers are owed anything, much less honor, might surprise some readers. Actually, maintenance and repair have been hot topics in academia since the mid-2010s. I played some role in that movement as a cofounder of the Maintainers, a global, interdisciplinary network dedicated to the study of maintenance, repair, care, and all the work that goes into keeping the world going.
Brand is right, too, that maintainers haven’t gotten the laurels they deserve. Over the past few decades, scholars have shown that work ranging from oiling tools to replacing worn parts to updating code bases tends to be lower in status than “innovation.” Maintenance gets neglected in many organizational and social settings. (Just look at some American infrastructure!) And as the right-to-repair movement has shown, companies in pursuit of greater profits have frequently locked us out of being able to do repairs or greatly reduced the maintainable life of their products. It’s hard to think of any other reason to put a computer in the door of a refrigerator.
Some of Brand’s earlier work helped inspire those insights. But his new book makes me think he doesn’t see things that way. For Brand, maintenance seems to be a solitary act, profound but more about personal success and fulfillment than tending to a shared world or making it better.
Born in 1938, Brand is 87 years old. A sense hangs over the book—with its battles against corrosion, rust, and decay, with its attempts to keep things going even as they inevitably falter—of someone looking over life and pondering its end. Maintenance: Of Everything connects to every stage of Brand’s life. It’s worth reviewing where it falls in that arc. Brand has always been interested in tools and fixing things, but rarely has he focused on the systems that need the most care.
More than a half-century ago, Brand was a member of the Merry Pranksters, a countercultural, LSD-centered hippie collective famously led by Ken Kesey, the author of One Flew Over the Cuckoo’s Nest. In 1966, Brand co-produced the Trips Festival, where bands like the Grateful Dead and Big Brother and the Holding Company performed for thousands amid psychedelic light shows.
In some ways, the Trips Festival set a paradigm for the rest of his life’s work. Brand’s biographers have described him as a network celebrity—someone who got ahead by bringing people together, building coalitions of influential figures who could boost his signal. As Kesey put it in 1980, “Stewart recognizes power. And cleaves to it.”
Brand applied this network logic to the undertaking he will always be best remembered for: the Whole Earth Catalog. First published in 1968 and aimed at hippies and members of the nascent back-to-the-land movement, the publication had the motto “Access to tools.” Its pages were full of Quonset huts, geodesic domes, solar panels, well pumps, water filters, and other technologies for life off the grid. It was a vision that might feel progressive or left-leaning, but the libertarian, rugged-individualist philosophy of eschewing corrupt systems and remaking civilization alone stood in contrast to the more collective movements pushing for deep social change at the time—like civil rights, feminism, and environmentalism.
That vision also led straight to the empowerment that came with new digital tools, and to Silicon Valley. In 1985, Brand published the Whole Earth Software Catalog, the last of the series, and also cofounded the WELL—the Whole Earth ’Lectronic Link, a pioneering online community famous for, among other things, facilitating the trade of Grateful Dead bootlegs. He also wrote a hagiographic book about the MIT Media Lab, known for its corporate-sponsored research into new communications tech. “The Lab would cure the pathologies of technology not with economics or politics but with technology,” Brand wrote. Again, not collective action, not policymaking: tools. And Brand then cofounded the Global Business Network, a group of pricey consulting futurists that further connected him to MIT, Stanford, and the Valley. Brand had literally helped bring about the modern digital revolution.
His attention then turned toward its upkeep. Brand’s 1994 book, How Buildings Learn: What Happens After They’re Built, argued against high-modernist architectural ideas. Nearly all buildings eventually get remade, he argued, but he especially favored cheap, simple structures that inhabitants could easily retool to suit changing needs. In some ways, Brand was recapitulating the liberated—or libertarian—philosophy of the Whole Earth Catalog: People can remake their world, if they have access to tools. In a chapter titled “The Romance of Maintenance,” he asked readers to see the beauty, value, and occasional pleasures of fixer-uppers of all kinds.
This chapter was a touchstone for many of us in the academic subfield of maintenance studies. Researchers in disciplines like history, sociology, and anthropology, as well as artists and practitioners in fields like libraries, IT, and engineering, all started trying to understand the realities and, yes, romance of maintenance and repair. Brand joined and contributed to Listservs, attended conferences, chatted with intellectual leaders. So it’s a bit uncharitable when he writes that his new book is “the first to look at maintenance in general.” He knows better. The real question, though, is what his work has to teach us that others have not said before. In this first volume, the answer is unclear.
Maintenance: Of Everything, Part One is an odd book. If so much of Brand’s thinking has been about access to tools, he now asks, in a more extended way: How are our tools maintained? But where Brand began his career with a catalogue, in this volume we get … what? A digest? An almanac? An encyclopedia? Its form and riotous variety fit no genre easily.
The book has two chapters. The first, “The Maintenance Race,” recounts the story of three men who took part in the Golden Globe, a round-the-world race for solo sailors held in 1968. Each of the sailors, Brand explains, had a different philosophy of maintenance. One neglected it and hoped for the best. He died. Another thought of and prepared for everything in advance, and while he didn’t win the race, he completed it and once held the record for the “world’s longest recorded nonstop solo sailing voyage.” The final sailor won and did so through heroic acts of perseverance; his style was “Whatever comes, deal with it,” Brand explains. Structured like a fairy tale and unremittingly romantic, the story—like most of the anecdotes in the book—focuses on the derring-do of vigorous white guys. The strategy is no secret. Brand’s outline explains: “Start with a dramatic contest of maintenance styles under life-critical conditions—a true story told as a fable.” This myth is meant to inspire.
The second chapter, “Vehicles (and Weapons),” is over 150 pages long. It has five sections, multiple subsections, five subsections designated “digressions,” one called a “subdigression,” two “postscripts,” and several “footnotes” that are not footnotes in a formal sense but, rather, further addenda. At times, it all feels like notes for a future work. Brand makes no apology for the book’s woolliness. “All I can offer here,” he writes, “is to muse across a representative of maintenance domains and see what emerges.” Perhaps the most charitable reading of the potpourri is that it represents the return of a Merry Prankster, offering us a riotously varied light show. It’s a good book to leave on a table and occasionally open to a random page for entertainment. But it often seems as if it does not know what it wants to say or be.
“Vehicles (and Weapons)” begins by paraphrasing two famous works of maintenance philosophy, Robert M. Pirsig’s Zen and the Art of Motorcycle Maintenance and Matthew B. Crawford’s Shop Class as Soulcraft. Maintenance involves both “problem finding” and “problem solving.” While much repair work is marked by anxiety, impatience, and boredom, it also offers positive values and outcomes. “Motorcycle maintainers take heart from what they repair for—the glory of the ride,” Brand writes.
The beauty and triumph of cheapness is a running theme throughout the work, harking back to How Buildings Learn. Henry Ford’s Model T won out over early electric vehicles and hugely expensive luxury vehicles like Rolls-Royce’s Silver Ghost because it was cheap and easier to maintain. The three most popular cars in human history—the Ford Model T, the Volkswagen Bug, and the Lada “Classic” from Russia—all privileged cheapness, “retained their basic design for decades, and … invited repair by the owner.” Or, to be fair, maybe demanded it? For every hobbyist who delighted in being able to self-reliantly keep a VW running, there must have been thousands who appreciated how cheap it was and hated that it broke a lot. Brand never points to social research, like surveys, that might help us know people’s feelings on such matters.
Other sections recount how Americans created interchangeable parts (enabling not only cheap mass production but also easy maintenance), examine how maintenance works with assault rifles and in war, and track the history of technical manuals from the early modern period to the age of YouTube. These stories are solid, but they’re also well known to students of technology, and nearly all are recycled from the work of others, featuring many large block quotes. The volume breaks little new ground.
Brand treats maintenance as an unalloyed good. But the field of maintenance studies has moved on, burrowing into the domain’s ironies, complexities, and difficulties. A simple example: In most cases, it is environmentally far better to retire and recycle an internal-combustion vehicle and buy an electric one than to keep the polluting beast going forever. Maintaining a gas-guzzler or a coal-burning power plant isn’t a radical act but a regressive one. Also, maintenance can become a life-breaking burden on the poor, and it falls inequitably on the shoulders of women and people of color. Keeping existing systems going can be a way of avoiding tough, necessary change—like making technological systems more accessible for people with disabilities. In this volume, Brand is uninterested in such difficult trade-offs. He avoids any question of how politics shapes these issues, or how they shape politics.
This avoidance comes out most clearly in a section of “Vehicles (and Weapons)” that talks about Elon Musk—a character of “unique mastery,” Brand informs us. He tells us that Bill Gates once shorted Tesla’s stock, only to lose $1.5 billion. The lesson is clear: Elon won.
In what political and social vision is money the best way to keep the score? Brand rightly points out that electric vehicles have fewer moving parts and, in that sense, are more maintainable than internal-combustion vehicles. He celebrates Musk most of all because his products “have all proven to be game changers in part because they combine ingenious design with surprisingly low cost.” Again, it’s Brand’s “cheap, available tools” hypothesis. But there’s a real superficiality and lack of follow-through in the thinking here: Teslas remain luxury vehicles whose sales have slumped since federal tax subsidies disappeared. The company has faced several right-to-repair lawsuits; there’s even a law review article on the topic. Musk is in no sense a maintenance hero. Yet Brand writes that with his companies, “Musk may have done more practical world saving than any other business leader of his time.” By the time Brand was writing this book, the controversies surrounding Musk for at least flirting with antisemitism, racism, sexism, authoritarianism, and more were quite clear. About this, the book says not a word.
For sure, Brand needn’t agree with Musk’s critics, but failing to even broach the subject is tone deaf and out of touch. Others have argued that Silicon Valley’s “Move fast and break things” mentality undermines healthy maintenance. Brand doesn’t raise the idea—even to dismiss it.
It could be that with Maintenance: Of Everything, Part One Brand is just getting going; that in subsequent volumes he’ll have something more coherent to say; that he’ll raise really hard questions and try to answer them. But given his track record, we might reasonably doubt it. Kesey said Brand cleaves to power; he certainly doesn’t question it.
Lee Vinsel is an associate professor of science, technology, and society at Virginia Tech and host of Peoples & Things, a podcast about human life with technology.
STAT+: Cell therapy primed liver transplant patients to avoid organ rejection, small study shows
Immune tolerance has long been the holy grail in transplant medicine, a hoped-for end to the downsides of anti-rejection regimens for patients after they receive lifesaving organ transplants. A small, early-stage study now shows promise in taking cells from living donors — people giving a portion of their livers — to teach recipients’ immune systems to accept the foreign organs as their own and achieve the ultimate healthy outcome.
Living donations take advantage of the liver’s ability to regenerate, meaning donors can part with a piece of their liver and later see it grow back. Recipients can regain enough liver function from the partial organs that also grow, replacing livers damaged by alcohol-associated liver disease, metabolic-associated liver disease, liver cancer, or other causes. Immunosuppression keeps their bodies from rejecting the new organs, but it also raises their vulnerability to infectious diseases and certain cancers. Serious side effects from the drugs include developing diabetes and kidney damage.
Cell therapy has been tried before to disarm the immune system’s attack by recruiting regulatory T immune cells taken from the donor. In the new study, whose results were published Friday in Nature Communications, different immune cells known as regulatory dendritic cells were obtained from donors’ white blood cells and generated in a lab. The idea behind both cell therapies is the same: to teach immune cells in the recipient’s body to treat the donated liver fragment as familiar tissue, not an invader to be attacked.
Why having “humans in the loop” in an AI war is an illusion
The availability of artificial intelligence for use in warfare is at the center of a legal battle between Anthropic and the Pentagon. This debate has become urgent, with AI playing a bigger role than ever before in the current conflict with Iran. AI is no longer just helping humans analyze intelligence. It is now an active player—generating targets in real time, controlling and coordinating missile interceptions, and guiding lethal swarms of autonomous drones.
Most of the public conversation regarding the use of AI-driven autonomous lethal weapons centers on how much humans should remain “in the loop.” Under the Pentagon’s current guidelines, human oversight supposedly provides accountability, context, and nuance while reducing the risk of hacking.
AI systems are opaque “black boxes”
But the debate over “humans in the loop” is a comforting distraction. The immediate danger is not that machines will act without human oversight; it is that human overseers have no idea what the machines are actually “thinking.” The Pentagon’s guidelines are fundamentally flawed because they rest on the dangerous assumption that humans understand how AI systems work.
Having studied intentions in the human brain for decades and in AI systems more recently, I can attest that state-of-the-art AI systems are essentially “black boxes.” We know the inputs and outputs, but the artificial “brain” processing them remains opaque. Even their creators cannot fully interpret them or understand how they work. And when AIs do provide reasons, they are not always trustworthy.
The illusion of human oversight in autonomous systems
In the debate over human oversight, a fundamental question is going unasked: Can we understand what an AI system intends to do before it acts?
Imagine an autonomous drone tasked with destroying an enemy munitions factory. The automated command and control system determines that the optimal target is a munitions storage building. It reports a 92% probability of mission success because secondary explosions of the munitions in the building will thoroughly destroy the facility. A human operator reviews the legitimate military objective, sees the high success rate, and approves the strike.
But what the operator does not know is that the AI system’s calculation included a hidden factor: Beyond devastating the munitions factory, the secondary explosions would also severely damage a nearby children’s hospital. The emergency response would then focus on the hospital, ensuring the factory burns down. To the AI, maximizing disruption in this way meets its given objective. But to a human, it is potentially committing a war crime by violating the rules regarding civilian life.
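The intention gap in this scenario can be sketched as a toy optimization. Everything below is a hypothetical illustration invented for this sketch, not any real targeting system: the planner ranks candidates by a hidden "total disruption" objective, while the operator is shown only the reported success probability.

```python
# Toy sketch of the intention gap: the planner's internal objective includes a
# term (secondary_damage) that never appears in what the operator reviews.

def planner_score(target):
    """Internal objective: maximize total disruption (hidden from the operator)."""
    return target["direct_damage"] + target["secondary_damage"]

def operator_view(target):
    """The operator sees only the target name and reported success probability."""
    return {"name": target["name"], "p_success": target["p_success"]}

# Hypothetical candidate strikes (all numbers invented for illustration).
candidates = [
    {"name": "factory_gate",    "p_success": 0.81, "direct_damage": 5, "secondary_damage": 0},
    # The secondary explosions also cause the off-objective collateral damage
    # described above, but that factor is folded into the hidden score.
    {"name": "munitions_store", "p_success": 0.92, "direct_damage": 6, "secondary_damage": 9},
]

chosen = max(candidates, key=planner_score)
print(operator_view(chosen))  # the operator approves a 92% success rate, nothing more
```

The point of the sketch is that both parties behave "correctly" by their own lights: the planner maximizes the objective it was given, and the operator approves the best number shown, yet the factor that should have blocked the strike is visible to neither.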
Keeping a human in the loop may not provide the safeguard people imagine, because the human cannot know the AI’s intention before it acts. Advanced AI systems do not simply execute instructions; they interpret them. If operators fail to define their objectives carefully enough—a highly likely scenario in high-pressure situations—the “black box” system could be doing exactly what it was told and still not acting as humans intended.
This “intention gap” between AI systems and human operators is precisely why we hesitate to deploy frontier black-box AI in civilian health care or air traffic control, and why its integration into the workplace remains fraught—yet we are rushing to deploy it on the battlefield.
To make matters worse, if one side in a conflict deploys fully autonomous weapons, which operate at machine speed and scale, the pressure to remain competitive would push the other side to rely on such weapons too. This means the use of increasingly autonomous—and opaque—AI decision-making in war is only likely to grow.
The solution: Advance the science of AI intentions
The science of AI must comprise both building highly capable AI technology and understanding how this technology works. Huge advances have been made in developing and building more capable models, driven by record investments—forecast by Gartner to grow to around $2.5 trillion in 2026 alone. In contrast, the investment in understanding how the technology works has been minuscule.
We need a massive paradigm shift. Engineers are building increasingly capable systems. But understanding how these systems work is not just an engineering problem—it requires an interdisciplinary effort. We must build the tools to characterize, measure, and intervene in the intentions of AI agents before they act. We need to map the internal pathways of the neural networks that drive these agents so that we can build a true causal understanding of their decision-making, moving beyond merely observing inputs and outputs.
A promising way forward is to combine techniques from mechanistic interpretability (breaking neural networks down into human-understandable components) with insights, tools, and models from the neuroscience of intentions. Another idea is to develop transparent, interpretable “auditor” AIs designed to monitor the behavior and emergent goals of more capable black-box systems in real time.
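For a loose sense of what the first idea aims at, consider attribution on a linear decision rule, where per-feature contributions are exact (real neural networks require far heavier interpretability machinery; the weights and features below are invented purely for illustration):

```python
# Toy sketch of interpretability-style attribution on a linear "policy" score:
# decompose the score into per-feature contributions so an auditor can see
# which factor actually drove a decision. All values are hypothetical.

weights = {"target_value": 0.6, "success_prob": 0.3, "collateral": 0.7}

def attribute(features):
    """Per-feature contribution to the decision score (exact for linear models)."""
    return {name: weights[name] * value for name, value in features.items()}

contrib = attribute({"target_value": 8.0, "success_prob": 0.92, "collateral": 9.0})

# An auditor can flag decisions dominated by a factor the operator never saw.
flagged = max(contrib, key=contrib.get)
print(flagged)  # here the dominant contribution comes from "collateral"
```

Even this trivial decomposition makes the earlier point concrete: the dangerous factor is recoverable from the model's internals, but only if someone builds and runs the tooling to look.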
Developing a better understanding of how AI functions will enable us to rely on AI systems for mission-critical applications. It will also make it easier to build more efficient, more capable, and safer systems.
Colleagues and I are exploring how ideas from neuroscience, cognitive science, and philosophy—fields that study how intentions arise in human decision-making—might help us understand the intentions of artificial systems. We must prioritize these kinds of interdisciplinary efforts, including collaborations between academia, government, and industry.
However, we need more than just academic exploration. The tech industry—and the philanthropists funding AI alignment, which strives to encode human values and goals into these models—must direct substantial investments toward interdisciplinary interpretability research. Furthermore, as the Pentagon pursues increasingly autonomous systems, Congress must mandate rigorous testing of AI systems’ intentions, not just their performance.
Until we achieve that, human oversight over AI may be more illusion than safeguard.
Uri Maoz is a cognitive and computational neuroscientist specializing in how the brain transforms intentions into actions. A professor at Chapman University with appointments at UCLA and Caltech, he leads an interdisciplinary initiative focused on understanding and measuring intentions in artificial intelligence systems (ai-intentions.org).
Stem Cell Editing Programs the Immune System to Make Own Therapeutic Proteins
For pathogens like HIV, malaria, and rapidly evolving influenza strains, coaxing the immune system to produce the rare, highly potent antibodies needed for protection has long been a scientific bottleneck. Vaccines can train B cells to evolve such broadly neutralizing antibodies, but only under ideal conditions—and only in a small fraction of people. Even attempts to genetically edit mature B cells produced responses that faded as the cells died out.
A team at the Rockefeller University has now taken a more upstream approach: programming hematopoietic stem and progenitor cells (HSPCs)—the source of all B lymphocytes—to carry permanent genetic instructions for therapeutic antibodies or other proteins. Because the immune system naturally amplifies rare, useful cells after vaccination, even a tiny number of edited stem cells can seed a durable, boostable immune response.
“The immune system is inefficient in that it produces a vast quantity of cells to protect itself,” said Harald Hartweger, a research assistant professor in Michel Nussenzweig’s Laboratory of Molecular Immunology. “We wanted to take advantage of the immune system’s ability to amplify useful, rare cells.”
The study, published in Science and titled “B lymphocyte protein factories produced by hematopoietic stem cell gene editing,” demonstrates that CRISPR‑edited HSPCs can mature into B cells that express engineered antibodies upon vaccination. A standard vaccination then acts as the trigger: antigen exposure drives those edited B cells to expand, differentiate into plasma cells, and secrete high titers of the inserted antibody that last long-term.
According to the paper, as few as ~7,000 edited HSPCs were enough to generate “high titers of long‑lasting protective or therapeutic antibodies and/or cargo proteins.” In mice engineered to produce a broadly neutralizing influenza antibody, this response was strong enough to protect against an otherwise lethal viral infection.
The platform proved unexpectedly versatile. Edited B cells could also secrete non‑antibody proteins, pointing to potential applications in genetic diseases. And by mixing HSPCs engineered with different antibody instructions, the researchers created immune systems capable of producing multiple antibodies simultaneously, an approach that could limit viral escape in HIV or other rapidly mutating pathogens. Human HSPCs edited using the same strategy produced functional human B cells in an immunodeficient mouse model, offering an early sign of translational feasibility.
“Our goal is to permanently impact the genome with a single injection, so that the body can make proteins of interest,” Hartweger said. “That protein could be an antibody that’s universally protective against HIV or influenza, but it could also be any therapeutic protein.”
The team is now moving toward preclinical testing in non‑human primates to evaluate protection against HIV and exploring whether similar strategies could be applied to T cells. The broader vision is a generalizable, long‑term protein‑production platform, one that could support treatments for infectious disease, protein deficiencies, autoimmunity, metabolic disorders, and cancer, according to Hartweger.
As Nussenzweig puts it, “The present study proposes a workaround for the antibody problem—a way of getting around the possibility that we may never get to a universal HIV vaccine, while still providing a promising, long‑lasting solution.”
The post Stem Cell Editing Programs the Immune System to Make Own Therapeutic Proteins appeared first on GEN – Genetic Engineering and Biotechnology News.
A pancreatic cancer breakthrough, and new hope for an off-the-shelf CAR-T treatment
On this week’s episode of the Readout LOUD: a pancreatic cancer breakthrough and new hope for an off-the-shelf CAR-T treatment in lymphoma.
Your favorite biotech podcasting crew is back to full strength this week, and we’re bringing you two newsy guest interviews. First, we’ll talk with Allogene Therapeutics Chief Medical Officer Zach Roberts about new study results that bolster the company’s efforts to develop an off-the-shelf CAR-T therapy for B-cell lymphoma, a type of blood cancer.
Intercellular Communication via Condensate Corona-Nanoparticle Complexes
Cells and tissues have a multitude of methods for intercellular communication. Nanoscale assemblies that transfer proteins and RNAs between cells are known, but the impact of externally added or synthetic materials is unclear.
Researchers from University College Dublin’s Centre for BioNano Interactions (CBNI) explored detailed changes in nanostructure-biological hybrid complexes as they leave one cell and enter another.
“We had long believed that there are natural couriers and gateways that allow special, very small particulates to communicate in organisms,” said lead author Kenneth Dawson, DPhil, CBNI director.
The team published their work in a paper titled, “Condensate corona–nanoparticle complexes transfer functional biomolecules between cells” in Nature Materials.
In rare instances, a subset of nanoparticles that enter a cell undergo an unexpected transformation, acquiring a coating known as a “condensate corona.” This corona allows for regulated entrance into the cell.
“By gaining access to these natural gateways, it could be possible to ferry ‘toolkits’ of functional biomolecules, for example, extended corrective messages, directly into previously inaccessible areas within cells, and across biological barriers, greatly improving the effectiveness and, importantly, the safety of RNA-, gene- and protein-based therapies,” said lead author associate professor Yan Yan, PhD, UCD School of Biomolecular and Biomedical Science.
Using “magnetic-cored, silica-shelled nanoparticles precoated with a grafted or adsorbed biomolecular corona,” the researchers created a scaffold that provided the cell with a recognition cue, allowing for the cells to deposit a secondary corona. With magnetic cores, and silica shells that carry fluorescent labels, the nanoparticles are easily controlled, extracted, and visualized.
Live-cell imaging showed that these transformed nanoparticles were re-exported, retaining both their original corona and their new cell-derived layer.
“By combining magnetic core extraction with an optimized pulse–chase regime and post-isolation washing, we obtained highly reproducible particle-complex isolates with minimal background contamination,” the authors wrote. Analysis showed that the cell-derived corona was “solid-like, structurally stable and biochemically robust.”
They also identified protein profiles in the corona-producing cells using stable isotope labeling by amino acids in cell culture (SILAC), followed by mass spectrometry analysis. These proteins have a high affinity for the endoplasmic reticulum and mitochondria, and about 70% of them have previously been associated with mesoscopic intracellular RNA granules.
“With the prototype in our hands, we were able to break into these communications and understand how biological information is shared between cells. From there, we began to send our own messages via the same system,” Dawson noted.
In further tests, the team found that within endosomes of the recipient cell, the corona detaches from the core and the fates of the two diverge: the protein and RNA components of the corona escape the endosome—and degradation—to be distributed within the cell and reach their targets. The researchers were able to disrupt this process and keep the corona and its attached materials in the endosome by grafting short peptides onto the coronal surface.
Utilizing CRISPR-Cas9, they tested the functionality of corona-bound particles that escape the endosome, generating particle complexes carrying bioluminescent markers to monitor activity. Analysis revealed that “intact enzymatic activity can be delivered to recipient cells by condensate-borne cargo.”
The authors explained that together, their data suggest these condensates function as an encoded biomolecular transfer program that is activated by the recipient cell. They wrote: “It is remarkable that such architectures, built entirely from endogenous biomolecules of producer cells, can embody transfer programs that overcome most of the challenges faced within nanoscale therapeutics.”
“The findings provide a new blueprint for sending strategic and therapeutically effective biological messages to currently inaccessible locations in the body. That points towards a new concept of medicine that could reverse, rather than manage, currently intractable diseases,” concluded Dawson.
The post Intercellular Communication via Condensate Corona-Nanoparticle Complexes appeared first on GEN – Genetic Engineering and Biotechnology News.
Brain Circuits Underlying Placebo Pain Relief Identified in Mice
Though the placebo effect is a well-documented phenomenon, the neurological mechanisms that underlie the process are still not fully understood. Now scientists from multiple institutions, led by a team at the University of California, San Diego (UCSD), have pinpointed the brain circuitry in mice that they believe is responsible for placebo pain relief. Details of their findings are published in a new paper in the journal Neuron. In it, they describe brain regions that support placebo effects and highlight sites where endogenous opioid neuropeptides send signals that are important for placebo pain relief.
The paper is titled “Top-down control of the descending pain modulatory system drives multimodal placebo analgesia.” According to the team, theirs is the first study to establish placebo mechanisms by adapting a protocol used for humans to work in mice. Working alongside labs at the University of Pennsylvania, University of California Irvine, and elsewhere, the UCSD team detected activity in parts of the mouse brain that correspond to those previously implicated in human studies. Furthermore, by precisely mapping neural pathways and brain activity in the mice, the team identified essential roles for neural circuits that link the cortex to the brainstem and spinal cord during placebo pain relief.
They also found that training mice to exhibit a placebo effect with one type of pain results in relief from several different types of pain including pain from injuries. That is particularly notable because it has “direct implications for how placebo training in humans might be used to produce resilience to future pain that results from injury,” explained Matthew Banghart, PhD, an associate professor in UCSD’s neurobiology department and lead author on the study. The findings also open a door to “expectancy-driven” placebo effects as a substitute for addictive painkillers, he noted, meaning that it might be possible to use placebo conditioning to train patients to build preemptive resilience to pain.
Full details of the findings and methods used are provided in the paper. In it, the teams explain that they used sensor technology and a light-activated drug developed in the Banghart lab to study the role of naturally occurring opioid peptides in the brain. Specifically, they used the sensors to detect opioid peptide signaling in the ventrolateral periaqueductal gray (vlPAG) region, a known hub for pain signaling, during placebo trials. They then used the light-activated drug, called photoactivatable naloxone (PhNX), to establish that these opioid peptides actually drive pain relief in a manner similar to drugs like morphine. The light allowed the scientists to control the timing of the interference with opioid signaling. Using PhNX, they confirmed that both morphine-induced pain relief and placebo pain relief use the same opioid signaling pathway in the vlPAG region of the brain.
Essentially, “we trained a mouse brain to create its own broad-spectrum painkillers on demand, precisely where they are needed to treat pain, without the off-target effects of opioid-based painkillers,” said Janie Chang-Weinberg, a PhD student in the biological sciences graduate program at UCSD and one of the first authors on the study.
Future studies planned by the team will dig more deeply into how placebo learning unfolds in the brain and evaluate different placebo training strategies in mice, with an eye toward developing protocols that readily translate to produce placebo pain resilience in people living with chronic pain.
The post Brain Circuits Underlying Placebo Pain Relief Identified in Mice appeared first on GEN – Genetic Engineering and Biotechnology News.


