NASA is building the first nuclear reactor-powered interplanetary spacecraft. How will it work?


Just before Artemis II began its historic slingshot around the moon, Jared Isaacman, the recently confirmed NASA administrator, made a flurry of announcements from the agency’s headquarters in Washington, DC. He said the US would soon undertake far more regular moon missions and establish the foundations for a base at the lunar south pole before the end of the decade. He also affirmed the space agency’s commitment to putting a nuclear reactor on the lunar surface.

These goals were largely expected—but there was still one surprise. Isaacman also said NASA would build the first-ever nuclear reactor-powered interplanetary spacecraft and fly it to Mars by the end of 2028. It’s called the Space Reactor-1 Freedom, or SR-1 for short. “After decades of study, and billions spent on concepts that have never left Earth, America will finally get underway on nuclear power in space,” he said at the event. “We will launch the first-of-its-kind interplanetary mission.”

A successful mission would herald a new era in spaceflight, one in which traveling between Earth, the moon, and Mars would—according to a range of experts—be faster and easier than ever. And it might just give the US the edge in the race against China—allowing the country to beat its greatest geopolitical rival to landing astronauts on another planet.

While experts agree the timeline is extremely tight, they’re excited to see if America’s space agency and its industry partners can deliver an engineering miracle. “You wake up to that announcement, and it puts a big smile on your face,” says Simon Middleburgh, co-director of the Nuclear Futures Institute at Bangor University in Wales.

Little detail on SR-1 is publicly available, and NASA’s own spaceflight researchers did not respond to requests for comment. But MIT Technology Review spoke to several nuclear power and propulsion experts to find out how the new nuclear-powered spacecraft might work.

Nuclear propulsion 101

Traditionally, spaceflight has been powered by chemical propulsion. Liquefied hydrogen and liquefied oxygen are mixed, and then ignited, within a rocket; the searingly hot exhaust from this explosion is ejected through a nozzle, which propels the rocket forth.

Chemical propulsion offers a significant amount of thrust and will, for the foreseeable future, still be used to launch spacecraft from Earth. But nuclear propulsion would enable spacecraft to fly through the solar system for far longer, and faster, than is currently possible. 

“You get more bang per kilogram,” says Middleburgh. A nuclear fuel source is far more energy-dense than its conventional cousin, which means it’s orders of magnitude more efficient. “It’s really, really, really high efficiency,” says Lindsey Holmes, an expert in space nuclear technology and the vice president of advanced projects at Analytical Mechanics Associates, an aerospace company in Virginia. 
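That "more bang per kilogram" is easy to put numbers on. The back-of-envelope Python sketch below compares the energy released per kilogram of uranium-235 fission against a hydrogen/oxygen propellant mix, using standard textbook constants (~200 MeV per fission, ~13.4 MJ per kilogram of stoichiometric H2/O2); the figures are illustrative, not SR-1 specifications.

```python
# Back-of-envelope energy-density comparison: U-235 fission vs.
# hydrogen/oxygen combustion. Textbook constants; illustrative only.

AVOGADRO = 6.022e23   # atoms per mole
EV_TO_J = 1.602e-19   # joules per electronvolt

# Fission of one U-235 nucleus releases roughly 200 MeV.
energy_per_fission_j = 200e6 * EV_TO_J
atoms_per_kg_u235 = AVOGADRO / 0.235        # molar mass ~235 g/mol
fission_j_per_kg = energy_per_fission_j * atoms_per_kg_u235

# Burning H2 + O2 to water releases about 13.4 MJ per kg of propellant mix.
chemical_j_per_kg = 13.4e6

ratio = fission_j_per_kg / chemical_j_per_kg
print(f"fission:  {fission_j_per_kg:.2e} J/kg")
print(f"chemical: {chemical_j_per_kg:.2e} J/kg")
print(f"ratio:    ~{ratio:.1e}x")
```

The ratio comes out in the millions, which is the "orders of magnitude" the experts are referring to.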

The approach also removes one other element of the traditional power equation: solar. Spacecraft, including the Artemis II mission’s Orion space capsule, often rely on the sun for power. But this can be a problem: sunlight isn’t always available, particularly when a planet or moon gets in the way, and as you head toward the outer solar system, beyond Mars, there’s simply less of it.

To circumvent this issue, nuclear energy sources have been used in spacecraft plenty of times before—including on both Voyager missions and the Saturn-interrogating Cassini probe. Known as radioisotope thermoelectric generators, or RTGs, these use plutonium, which radioactively decays and generates heat in the process. That heat is then converted into electricity for the spacecraft to use. RTGs, however, aren’t the same as nuclear reactors; they are more akin to radioactive batteries—more rudimentary and considerably less powerful.

So how will a nuclear-reactor-powered spacecraft work? 

Despite operational differences, the fundamentals of running a nuclear reactor in space are much the same as they are on Earth. First, get some uranium fuel; then bombard it with neutrons. This ruptures the uranium’s unstable atomic nuclei, which expel a torrent of extra neutrons—and that rapidly escalates into a self-sustaining, roasting-hot nuclear fission reaction. Its prodigious heat output can then be used to produce electricity.
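The "rapidly escalates" part comes down to generation-by-generation multiplication. In reactor physics this is captured by the effective multiplication factor k: the average number of neutrons from each fission that go on to cause another fission. The toy model below is a sketch with illustrative numbers, not a reactor simulation.

```python
# Toy neutron-generation model of a fission chain reaction.
# k is the effective multiplication factor; values are illustrative.

def neutron_population(k: float, generations: int, n0: float = 1.0) -> float:
    """Population after a number of generations: n0 * k**generations."""
    return n0 * k ** generations

# k > 1: supercritical -> exponential growth (the escalating reaction)
# k = 1: critical      -> steady power (what a running reactor is held at)
# k < 1: subcritical   -> the reaction dies away
for k in (0.95, 1.0, 1.05):
    print(f"k={k}: {neutron_population(k, generations=100):.3g}")
```

Even a k of 1.05 grows the neutron population more than a hundredfold over 100 generations, which is why reactors are controlled so carefully around k = 1.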

Doing this in space may sound like an act of lunacy, but it’s not: The idea, and even a lot of the basic technology, has been around for decades. The Soviet Union sent dozens of nuclear reactors into orbit (often to power spy satellites), while the US deployed just one, known as SNAP-10A, back in 1965—a technological demonstration to see if it would operate normally in space. The aim was for the reactor to generate electricity for at least a year, but it ran for just over a month before a high-voltage failure in the spacecraft caused it to malfunction and shut down. 

Now, more than half a century later, the US wants its second-ever space-based nuclear reactor to do something totally different: power an interplanetary spacecraft.

To be clear, the US has started, and terminated, myriad programs looking into nuclear propulsion. The latest casualty was DRACO, a collaboration between NASA and the Department of Defense, which ended in 2025. Like several previous efforts, DRACO was canceled because of a mix of high experimentation costs, lower prices for conventional rocket propulsion, and the difficulty of ensuring that ground tests could be performed safely and effectively (they are creating an incredibly powerful nuclear reaction, after all).

But now external considerations may be changing the calculus. The Artemis program has jump-started America’s return to the moon, and the new space race has palpable momentum behind it. The first nation to deploy nuclear propulsion would have a serious advantage navigating through deep space. 

“I think it’s a very doable technology,” says Philip Metzger, a spaceflight engineering researcher at the Florida Space Institute. “I’m happy to see them finally doing this.”

One version of this technology is known as nuclear thermal propulsion, or NTP. You start with a nuclear reactor, one that’s cooking at around 5,000°F. Then “you’ve got a cold gas, and you squirt cold gas over the hot reactor,” says Middleburgh. “The gas expands, you shoot it out the back of a nozzle, and you have an impulse. And that impulse drives you forward.” 

Because the thrust depends on the speed of the gas being ejected, the propellant gas needs to be light, making hydrogen a popular choice. But hydrogen is a corrosive and explosive substance, so using it in NTP engines can make them precarious to operate. On top of this, NTP doesn’t necessarily have a very long operating life.
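Why a light propellant pays off can be made concrete with the Tsiolkovsky rocket equation, which ties the total velocity change a spacecraft can achieve to its exhaust velocity (expressed here as specific impulse). The specific-impulse figures below (~450 s for a good chemical engine, ~900 s for hydrogen NTP) are rough textbook values, not SR-1 specifications.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0/mf)."""
    return isp_s * G0 * math.log(mass_ratio)

MASS_RATIO = 3.0  # wet mass / dry mass (illustrative)
chem = delta_v(450, MASS_RATIO)  # good chemical engine, ~450 s
ntp = delta_v(900, MASS_RATIO)   # hydrogen NTP, ~900 s (textbook estimate)
print(f"chemical: {chem:,.0f} m/s   NTP: {ntp:,.0f} m/s")
```

Doubling the specific impulse doubles the delta-v for the same propellant fraction, which is the core appeal of nuclear thermal propulsion.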

Alternatively, there’s nuclear electric propulsion, or NEP, which “is very low thrust, but very efficient, so you can use it for a long period of time,” says Sebastian Corbisiero, the US Department of Energy’s national technical director of space reactor programs. This method uses heat from a fission reactor to generate power. That power is used to electrify a gas and then blast it out of the spacecraft, generating thrust.
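The "very low thrust, but very efficient" trade-off falls out of the relationship between jet power and thrust: since the kinetic power in the exhaust is ½·F·v_e, ideal thrust is F = 2ηP/v_e, so the faster the exhaust, the less thrust each watt buys. The figures below (40 kW to the thrusters, 60% efficiency, 30 km/s exhaust velocity, roughly in line with ion and Hall thrusters) are assumptions for illustration.

```python
def thrust_newtons(power_w: float, exhaust_velocity_ms: float,
                   efficiency: float) -> float:
    """Ideal electric-thruster thrust: F = 2 * eta * P / v_e.
    Follows from jet power P = 0.5 * F * v_e."""
    return 2.0 * efficiency * power_w / exhaust_velocity_ms

# Assumed, illustrative figures: 40 kW delivered to the thrusters,
# 60% efficiency, 30 km/s exhaust velocity.
f = thrust_newtons(40e3, 30e3, 0.6)
print(f"thrust: {f:.2f} N")
```

A thrust on the order of a newton sounds feeble, but applied continuously for months it adds up to large velocity changes.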

Both NTP and NEP have been investigated by US researchers, because both have the added benefit of making it easier and safer for human beings to explore the solar system. Astronauts in space are exposed to harmful cosmic radiation, but because nuclear propulsion makes spacecraft speedier and more agile, they’d spend less time in it. “It solves the radiation problem,” says Metzger. “That’s one of the main motivations for inventing better propulsion to and from Mars.”

How to build a nuclear-powered spaceship

For SR-1, NASA has opted for nuclear electric propulsion. NEP is “a much simpler affair” than its thermal counterpart, says Middleburgh. Essentially, you just need to plug a nuclear reactor into a power-and-propulsion system. Luckily for NASA, it’s already got one.

For many years, NASA—along with its space agency partners in Canada, Europe, Japan, and the Middle East—was preparing for Gateway, meant to be humanity’s first space station to orbit around the moon. Isaacman canceled the project in March, but that doesn’t mean its technology will go to waste; the power-and-propulsion element of the nixed space station will be used in SR-1 instead. This contraption was going to be powered by solar energy. It’ll now be attached to an in-development nuclear reactor custom built to survive in space.

What might the SR-1 look like? MIT Technology Review saw a presentation by Steve Sinacore, program executive of NASA’s Space Reactor Office, that offers some clues. So far, the concept art makes it look like a colossal fletched arrow. At the back will be the power-and-propulsion system, while its tip will hold a 20-kilowatt-or-greater uranium-filled nuclear reactor. (For context, a typical nuclear plant on Earth is 50,000 times more powerful, producing a gigawatt of power.) 

Annotated diagram of the key systems of SR-1 Freedom. Indicated at the front is the power and propulsion element, with an up-to-48-kW advanced electric propulsion system. Panels at the middle are a high-performance, lightweight composite-and-titanium heat-rejection system. At the tail are an advanced closed-Brayton-cycle power conversion system and a 20-kWe reactor with HALEU UO2 fuel, heat-pipe thermal transfer, and a boron carbide radiation shield. A small attachment at midcraft is labeled “High Rate Direct to Earth Communications.”

NASA

The “fletches” on SR-1 are large fins that allow the reactor to cool down. “You have to have really large radiators,” says Holmes, since the nuclear fission process produces so much heat that much of it has to be vented into space—otherwise, the reactor and spacecraft will melt.
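How large those radiators need to be follows from the Stefan-Boltzmann law: a surface at temperature T radiates ε·σ·T⁴ watts per square meter. Every number in the sketch below is an assumption for illustration (60 kW of waste heat from a 20-kWe reactor at modest conversion efficiency, 500 K panel temperature, emissivity 0.9), not an SR-1 specification.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(waste_heat_w: float, temp_k: float,
                     emissivity: float = 0.9) -> float:
    """Area needed to radiate waste heat: P = eps * sigma * A * T^4."""
    return waste_heat_w / (emissivity * SIGMA * temp_k ** 4)

# Assumed: ~60 kW of waste heat rejected from panels at ~500 K.
area = radiator_area_m2(60e3, 500.0)
print(f"radiator area: ~{area:.0f} m^2")
```

Tens of square meters just to shed tens of kilowatts is why the radiators dominate the spacecraft's silhouette, and why running hotter (T⁴ scaling) shrinks them so effectively.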

According to that presentation, the spacecraft’s hardware development is due to start this June. By January 2028, SR-1’s systems should be ready for assembly and testing. And by that October, the spacecraft will arrive at the launch site, ready for liftoff before the year’s end. Will the nuclear reactor manage to hold itself together? “Going through the launch safely is going to be a challenge,” says Middleburgh. “You are being shaken, rattled, and rolled.” 

Then, he says, “once you’re up in space, once you’ve got through that few minutes of hell in getting there, it’s zero-gravity considerations you have to worry about.” The question then becomes: Will the mechanics of the reactor, built on terra firma, still work? 

For safety reasons, the nuclear reactor will be switched on around two days post-launch, when it’s comfortably in space. Uranium isn’t tremendously dangerous by itself, but that can’t be said of the nuclear waste products that emerge when the reactor is activated, so you don’t want any of that to fall back to Earth. 

If this schedule is adhered to, and SR-1 works as planned, it’s expected to reach Mars about a year after launch. “It’s an aggressive timeline,” says Holmes, something she suspects is being driven partly by China’s and Russia’s own deep-space nuclear ambitions. The two countries aim to place their own nuclear reactor on the moon’s surface to power the planned International Lunar Research Station—a jointly operated lunar base—by 2035. 

Whether it flies or fails in space, SR-1’s operations should help NASA with putting a nuclear reactor on the moon soon after. “All of the things we’d be learning about how that system operates in space [are] very helpful for a surface application, because basically it’s the same,” says Corbisiero. “There’s still no air on the moon.”

And if SR-1 does triumph, it will be a game-changing victory for NASA. It will also be “a massive win for the human race, frankly,” says Middleburgh. “It will be a marvel of engineering, and it will move the dial in humans potentially taking a step on Mars.” Like many of his colleagues, including Holmes, he remains thrilled by the prospect of the first-ever nuclear-powered interplanetary spacecraft—even with the incredibly ambitious timeline. 

“These are the things that get us up in the morning,” he says. “These are the sorts of things we will remember when we’re old.”

Engineered Miniature CRISPR Boosts Gene‑Editing Efficiency in Human Cells

One of the biggest obstacles to delivering CRISPR therapies directly into the body isn’t the editing chemistry; it’s the size of the editors themselves. The field’s workhorse nucleases, including Cas9 and Cas12a, are too large (exceeding 1,300 amino acids) to fit comfortably inside adeno‑associated virus (AAV) vectors, the most widely used delivery vehicle for in vivo gene therapy. That size mismatch has forced most clinical applications to rely on ex vivo editing of blood or bone‑marrow‑derived cells, leaving many tissues out of reach. A smaller CRISPR system that can be packaged into AAV without sacrificing efficiency has long been a key missing piece.

A new study published in Nature Structural & Molecular Biology takes a major step toward that goal. Researchers at the University of Texas at Austin and collaborators report the discovery and engineering of a compact Cas12f nuclease that performs robustly in human cells, a notable advance for a class of miniature enzymes that have historically shown lower efficiencies in mammalian cells compared to larger systems. The paper is titled, “Comparative characterization of Cas12f orthologs reveals mechanistic features underlying enhanced genome editing efficiency.”

The team began by mining metagenomic datasets for naturally small CRISPR enzymes and identified a previously uncharacterized ortholog, Alistipes sp. Cas12f (Al3Cas12f). Despite its compact size—roughly one‑third that of Cas9—the nuclease showed unexpectedly strong activity in human cells. In initial screens, Al3Cas12f produced more than 50% editing at many genomic sites and exceeded 90% at several targets. The authors wrote, “Results from a gRNA screen targeting intron 1 of the ALB gene, exon 3 of the APOA1 gene and the AAVS1 site within PPP1R12C intron 1 showed that 27 target sites displayed >10% editing, 19 sites displayed >50% editing and 10 sites displayed >90% editing across AAVS1 and APOA1.”

Cryo‑EM structures revealed why this miniature enzyme punches above its weight. Compared with other Cas12f orthologs, Al3Cas12f forms a more extensive and interlocking dimer interface, creating a stable, preassembled complex that supports efficient R‑loop formation. The guide RNA scaffold also appears naturally streamlined: unlike other Cas12f gRNAs, it lacks an extraneous stem‑loop and adopts a compact conformation that docks cleanly into the protein. As the authors noted, Al3Cas12f achieves “efficient R‑loop formation through a stable dimer interface and a naturally optimized gRNA.”

Using these structural insights, the team engineered an enhanced variant, Al3Cas12f RKK, that dramatically boosts editing efficiency across genomic loci. In human cells, the variant increased editing from below 10% to more than 80% at many targets, with some sites reaching 90%. The researchers tested the system in a leukemia‑derived human cell line, focusing on genes implicated in cancer, atherosclerosis, and ALS.

The mechanistic comparisons were equally revealing. By solving the structures of two additional Cas12f orthologs—Oscillibacter sp. Cas12f and Ruminiclostridium herbifermentans Cas12f—the team noted “divergent architectures and regulatory features governing protospacer-adjacent motif recognition, gRNA binding, dimerization, and DNA cleavage.” Al3Cas12f’s extended helices and mortise‑and‑tenon‑like interactions appear to be lineage‑specific adaptations that stabilize the nuclease and support high activity.

The next step is to test whether the enzyme maintains its performance when packaged into AAV vectors. If successful, the system could offer a blueprint for engineering future generations of compact CRISPR tools.

The post Engineered Miniature CRISPR Boosts Gene‑Editing Efficiency in Human Cells appeared first on GEN – Genetic Engineering and Biotechnology News.

Popular AI Chatbots Can Provide Misleading Medical Information

Around half the outputs from five commonly used artificial intelligence (AI) chatbots could lead users to ineffective or harmful medical choices without professional guidance, suggests research led by the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center.

As reported in BMJ Open, the researchers tested the free web versions of Gemini, DeepSeek, Meta AI, ChatGPT 3.5 and Grok available in 2024. They created 50 different adversarial prompts intended to test whether the AI models would give a problematic response or not.

The prompts were intended to realistically represent the kinds of queries members of the public might enter about health topics ranging from cancer to vaccines to stem cells, nutrition, and athletic performance. Some prompts required a specific answer and some were more open.

The researchers collected 250 responses to their prompts and categorized them as non-, somewhat, or highly problematic, using predefined criteria. Just under half (49.6%) were problematic: 30% somewhat problematic and 19.6% highly problematic. Open-ended prompts received the most problematic answers.

In terms of the specific models, Grok produced a disproportionate share of highly problematic answers, while Gemini produced the fewest highly problematic and the most non-problematic responses. Topic-wise, the chatbots appeared more accurate when asked about cancer and vaccines, but less so when asked about stem cells, athletic performance, and nutrition.

Reference lists provided to users by the models were limited or inaccurate, and the answers, which were pitched at college-educated users, required some background knowledge to interpret properly.

“Despite adversarial pressure, chatbots typically responded in a confident, authoritative tone. Refusals to answer and explicit caveats or disclaimers were rare, reflecting the models’ strong tendency to provide an output even when prompts steered toward contraindicated advice,” write lead author Nicholas Tiller, PhD, a research associate at the Lundquist Institute, Harbor-UCLA Medical Center, and colleagues.

“As the use of AI chatbots continues to expand, our data highlight a need for public education, professional training and regulatory oversight to ensure that generative AI supports, rather than erodes, public health,” they conclude.

The post Popular AI Chatbots Can Provide Misleading Medical Information appeared first on Inside Precision Medicine.

The Influence of the COVID-19 Pandemic on Current Teaching Methods, Training, and Perception Among Romanian Surgery-Oriented Students: Cross-Sectional Study

<strong>Background:</strong> The COVID-19 pandemic prompted rapid changes in medical education, accelerating the adoption of online and distance learning methods as alternatives to traditional teaching. While these approaches offered logistical advantages, students worldwide reported significant limitations, particularly in terms of motivation, clinical exposure, and hands-on skill acquisition. Despite the increased use of digital teaching during the pandemic, core educational objectives and the mission of medical training remained unchanged, emphasizing the continued importance of practical experience. <strong>Objective:</strong> This study aims to investigate the impact of the COVID-19 pandemic on current teaching methods in medical education and to explore students’ perceptions of online learning, telemedicine, artificial intelligence, and other modern educational alternatives. <strong>Methods:</strong> This observational, cross-sectional multicentric study surveyed a cohort of Romanian medical students using a self-developed 48-item online questionnaire distributed via social media. Data were collected over 6 weeks (February-March), yielding 451 responses, of which eligible participants included students in clinical years or preclinical students interested in surgical or orthopedic careers, with a heavy representation of the Medicine and Pharmacy University of Timisoara. Statistical analysis was performed using Microsoft Excel and JASP (University of Amsterdam; version 0.95.4). <strong>Results:</strong> A total of 436 responses were analyzed, with students favoring online or hybrid formats for lectures but preferring on-site teaching for practical training. Reduced patient interaction and limited skill acquisition were the main drawbacks of online practical education. Acceptance of hybrid learning correlated with more positive perceptions of teaching methods and a lower perceived desire to cheat. 
<strong>Conclusions:</strong> The COVID-19 pandemic brought significant changes to the way medicine is taught in Romania, but it also gave students and medical staff a clearer picture of how they want medical education to be delivered. Online cheating remains a significant challenge, but it is now being addressed, with various detection algorithms under testing.

Evaluating the Feasibility of Technology-Based Interventions in Disability and Rehabilitation: Definitions, Considerations, and Dimensions

Technology-based interventions in the field of disability and rehabilitation, which serve assistive, therapeutic, and/or service delivery functions, are considered complex due to the skills required of providers and recipients, degree of individual tailoring, and diversity of use settings. Feasibility studies are an important step in the evolution of complex interventions that can help refine the intervention, inform implementation, and prevent wasted resources. However, guidance is lacking regarding specific considerations for feasibility studies of technology-based interventions in disability and rehabilitation, which leaves researchers and developers reliant on resources from other fields that do not address important technology properties. To advance the field, context-specific definitions, considerations, and evaluation dimensions must be explicitly outlined to ensure that feasibility studies are constructively designed to meet the unique needs of these interventions. In this viewpoint article, we (1) propose a definition and framework for feasibility studies within the specific context of technology-based disability and rehabilitation interventions, (2) highlight important and unique imperatives for feasibility studies of these interventions, and (3) articulate relevant feasibility dimensions and associated evaluation criteria for these interventions. Building on previous work, we distinguish between feasibility studies, wherein we focus on iterative intervention refinement by addressing key development questions (eg, usability), and pilot studies, which are small-scale versions of a larger study that will evaluate intervention outcomes. Integrating previous typologies, we present 13 feasibility dimensions relevant to technology-based interventions and provide sample evaluation criteria, focusing on the intervention itself rather than study design considerations (eg, trial management). 
This information may be useful for research and development communities (academic, clinical, or industry) to inform comprehensive feasibility studies that examine unique aspects of technology-based interventions to promote real-world impact. This contribution encourages greater harmonization of terminology and evaluation methods to streamline interpretation and comparison across studies.

User Experience and Early Clinical Outcomes of a Mental Wellness Chatbot for Depression and Anxiety: Pilot Evaluation Mixed Methods Study

Background: Artificial intelligence–powered conversational agents (ie, chatbots) are increasingly popular outlets for users seeking psychological support, yet little is known about how users experience early-stage prototypes or which therapeutic processes contribute to clinical improvement. A transparent evaluation of emerging chatbot prototypes is needed to clarify if, how, and why artificial intelligence companions work and to guide their continued development. Objective: This mixed methods pilot study evaluated user experience, acceptability, and preliminary clinical signals for an early-stage mental wellness chatbot. We also examined whether baseline symptom severity moderated clinical improvement. Methods: Three sequential cohorts (n=125) completed a 2-week, incentivized chatbot exposure (approximately 60 min per week). Participants provided first-impression ratings, qualitative feedback, and pre–post assessments of depressive symptoms (PHQ-8 [Patient Health Questionnaire-8]), anxiety symptoms (GAD-7 [Generalized Anxiety Disorder-7]), psychological distress, well-being, and loneliness. Statistical models estimated symptom change and tested interactions with baseline symptom severity. Mixed methods analysis integrated quantitative outcomes with large language model–assisted qualitative content analysis of open-ended responses. Results: Participants described the chatbot as accessible, easy to use, and emotionally validating, while citing limitations in personalization and conversational depth. Qualitative responses consistently highlighted early therapeutic processes such as emotional validation, goal setting, and perceived attunement. Regression models showed significant pre–post reductions in depressive (Hedges g=–0.32) and anxiety (g=–0.32) symptoms, alongside modest improvements in distress and well-being.
Baseline severity moderated improvement, with marginal effects indicating larger predicted reductions at higher PHQ-8 and GAD-7 baseline scores (eg, PHQ-8=15: g=–0.84; GAD-7=15: g=–0.62). Conclusions: This pilot provides a comprehensive view of early chatbot development and suggests promising user experiences and preliminary symptom improvements under structured pilot conditions. By integrating experiential and exploratory clinical data, the study identifies candidate process targets to inform ongoing refinement. Findings support continued development and demonstrate procedural feasibility for progression to larger, longer-term trials evaluating engagement and clinical outcomes under more naturalistic conditions.

CAR T Cell Therapy Biomanufactured by Cellares Infused Into First Two Patients

Cellares reported that the first two patients have been dosed with Cabaletta Bio’s investigational CAR T cell therapy rese-cel (resecabtagene autoleucel) manufactured on Cellares’ Cell Shuttle™ instrument. The administration of an autologous cell therapy, which met all release criteria and was manufactured on an automated manufacturing platform, represents an important step on the journey to realizing a future where scalable manufacturing of autologous products to supply thousands of patients per year can be achieved with minimal capital investment and a low cost of goods, according to a Cellares spokesperson.

While the transformative clinical benefits of autologous CAR T cell therapy are well established in oncology, the high manufacturing costs, lack of scalability, process inconsistency, and operational inflexibility associated with the current, highly manual way of manufacturing have created meaningful barriers to patient access.

“This is an important milestone that reflects three years of focused collaboration between the teams at Cabaletta and Cellares,” said Steven Nichtberger, MD, co-founder, chairman, and CEO of Cabaletta Bio. “The dosing of these first two patients is an important demonstration of Cellares’ GMP manufacturing and supply chain capabilities with their automated manufacturing platform and thus represents a significant achievement toward our goal of securing high-capacity flexible supply with minimal capital investment and a low cost of goods.”

“This milestone is a transformative moment for the field of autologous cell therapy,” added Fabian Gerlinghaus, co-founder and CEO of Cellares. “For years, the promise of autologous CAR T has been constrained by manufacturing models that were never designed to scale.”

Rese-cel (formerly referred to as CABA-201) is an investigational, autologous CAR T cell therapy engineered with a fully human CD19 binder and a 4-1BB co-stimulatory domain, designed specifically for the treatment of autoimmune diseases. Administered as a single, weight-based infusion, rese-cel is intended to transiently and deeply deplete CD19-positive cells, with the goal of resetting the immune system and achieving durable clinical responses without the need for chronic therapy.

Cabaletta is evaluating rese-cel in the RESET™ (REstoring SElf-Tolerance) clinical development program, which includes multiple ongoing company-sponsored trials across a diverse and growing range of autoimmune diseases in rheumatology, neurology, and dermatology.

The post CAR T Cell Therapy Biomanufactured by Cellares Infused Into First Two Patients appeared first on GEN – Genetic Engineering and Biotechnology News.

Development of Virtual Mental Health Stepped Care Service for a Heart Failure Remote Management Program: Qualitative Descriptive Study

Background: Depression is highly prevalent yet undertreated among people living with heart failure, indicating barriers to mental health services. Although various digital mental health interventions have been developed to detect, treat, and manage depression in this population, these interventions have seen limited integration into clinical care and a lack of implementation research. Stepped care is a service innovation that may promote the implementation of these technologies into clinical settings, but few studies have examined how these services are designed in clinical settings. Objective: This study aimed to identify strategies to address health system barriers to accessing mental health care from the perspective of people living with heart failure, clinicians, and researchers, and to incorporate these strategies into the design of a virtual mental health stepped care service within a heart failure remote management program. Methods: A qualitative description study was conducted using purposive recruitment of people living with heart failure, clinicians, and researchers from a heart failure remote patient management program. As part of a service design approach, semistructured interviews explored potential strategies to address barriers to accessing mental health services. Two researchers coded the data descriptively and constructed themes to guide the development of a virtual stepped care service. Results: A total of 22 participants were interviewed, comprising 13 people living with heart failure and 9 clinicians and researchers. Six themes were identified, comprising 4 requirements and 2 foundational principles. The requirements were to (1) adopt a collective approach to identify distress across methods, people, and time points; (2) maintain a referral-based approach; (3) rely on existing mental health human resources; and (4) offer patient choice among various mental health care options. 
These requirements were supported by two principles: (1) building on organizational strengths and (2) reducing treatment burden. Based on these findings, a virtual stepped care service was developed, incorporating a depression screening module, referral-based workflows, and, where clinically appropriate, patient choice in treatment selection. Conclusions: The stakeholder-informed design of this virtual stepped care service contributes to the limited literature on stepped care service design and demonstrates how such models can be tailored to their intended contexts. Although each component was designed to address health system barriers to mental health care for people living with heart failure, resource limitations may constrain the balance between feasibility and quality of care. Future research should evaluate the acceptability of this model among people living with heart failure and clinicians.

Ultrasensitive Molecular Test Identifies Substantial TB Underdiagnosis in Boston

While developing an ultrasensitive test for the detection of Mycobacterium tuberculosis DNA (TB-DNA), researchers from Boston University have unexpectedly found a high prevalence of the molecular marker in U.S.-born patients hospitalized in Boston.

“We began this research with the intent of sourcing respiratory samples to support the ongoing development of a new molecular assay for TB,” said Guillermo Madico, MD, PhD, scientist at Boston University’s National Emerging Infectious Diseases Laboratories (NEIDL) and co-inventor of the TOP TB assay. “What we found was completely unexpected. Our ultrasensitive test is detecting Mycobacterium tuberculosis DNA in patients who are unlikely to be diagnosed with TB using current methods. This opens the possibility that there could be thousands of Americans infected with forms of tuberculosis disease that remain hidden from our current diagnostic tools—putting them at risk of developing more serious complications or potentially transmitting the disease to others.”

In 2022, there were over 8000 reported cases of TB in the United States, over 600 TB-related deaths, and an estimated 13 million people with Mycobacterium tuberculosis infection. Although incidence has steadily decreased in the U.S., the rate of decline is too slow to meet the ambitious World Health Organization strategy to end the global TB epidemic by 2035.

One threat to the global elimination goal is a gap in the detection of paucibacillary TB disease—a type of TB characterized by a low concentration of M. tuberculosis bacilli in samples that often results in false negative test results.

To improve detection, Madico and colleagues at Boston University developed an ultrasensitive molecular assay, the Totally Optimized PCR (TOP) TB assay, which targets a gene involved in M. tuberculosis cell wall assembly.

During the development process, the researchers conducted three separate clinical studies involving 297 patients from Boston hospitals.

Across the studies, the TOP TB assay detected TB DNA in 12–16% of samples—a rate far higher than expected given Boston’s low TB incidence rate. Of note, most TB DNA-positive patients tested negative on standard TB infection tests (tuberculin skin tests or interferon-gamma release assays), and the researchers hypothesize that the findings “indicate the existence of a paucibacillary form of TB that remains unrecognized and is not detectable using current diagnostic tools.”
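To get a feel for the statistical weight behind a finding like this, the reported positivity can be bounded with a standard confidence interval. The article gives only the pooled figures (12–16% of samples, 297 patients across three studies), not the exact positive counts, so the sketch below assumes a hypothetical count of 42 positives (~14%, the midpoint of the reported range) and computes a 95% Wilson score interval:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Hypothetical: 42 of 297 patients positive (~14%, mid-range of 12-16%).
lo, hi = wilson_ci(42, 297)
print(f"95% CI: {lo:.1%} - {hi:.1%}")
```

Under this assumption the interval runs from roughly 11% to 19%, well above the sub-1% positivity one would expect in a low-incidence city, which is why the authors describe the rate as far higher than expected.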

During the study, three patients were diagnosed with acute chest syndrome, a life-threatening complication of sickle cell disease, and all three tested positive for TB DNA.

The researchers point out in Nature Communications that this “previously unrecognized association” has potential implications for clinical care in the U.S. and many other settings.

“These findings suggest we may be missing a significant burden of TB disease, particularly in older Americans and in patients with certain underlying conditions,” said Edward Jones-López, MD, who co-led the study while at Boston Medical Center and Boston University Chobanian & Avedisian School of Medicine. “Most concerning is the potential association with acute chest syndrome in sickle cell patients. If confirmed and expanded upon in larger studies, this finding could lead to better health outcomes for patients with this potentially life-threatening condition.”

The researchers emphasize that their preliminary findings require confirmation in larger, prospective multicenter studies that include comprehensive clinical, radiological, immunological, and microbiological correlation. However, they argue the evidence warrants immediate dissemination given potential implications for medical care and public health.

The post Ultrasensitive Molecular Test Identifies Substantial TB Underdiagnosis in Boston appeared first on Inside Precision Medicine.