STAT+: Flawed study on the antidepressant Paxil came with a cautionary note — if you knew how to find it

File this under “hiding in plain sight.”

Last fall, the Journal of the American Academy of Child & Adolescent Psychiatry issued a so-called expression of concern about a controversial study that was published in 2001 about the widely prescribed antidepressant known as Paxil.

Such a step is taken when a study may have errors or include unreliable information. The notice, which followed a request for a retraction, indicated that a review was underway. Meanwhile, it served as a warning, of sorts, to health care providers who might consult the study when deciding whether to prescribe the medicine.

Continue to STAT+ to read the full story…

Opinion: Hosting the ‘intellectual wrestling match’ between MAHA, public health

The deep distrust between public health and the Make America Healthy Again movement may seem impossible to heal. But the podcast “Why Should I Trust You?” is trying to do just that by facilitating conversation between people who often view each other as enemies.

Brinda Adhikari and Tom W. Johnson launched “Why Should I Trust You?” in 2025. Since then, they’ve hosted big names from MAHA, the Trump administration, the anti-vaccine movement, and traditional health. They also bring on everyday Americans trying to keep their families healthy while navigating a confusing information ecosystem. “Everyone, when they come on the show, no matter what their quote-unquote expertise, they’re all equals. Everyone gets time to speak,” Adhikari said.

Read the rest…

STAT+: AI could check millions of CT scans for heart risk. Who will pay for it?

In a CT scan, coronary artery calcium shows up as distinct, bright pixels. It looks like salt in the pepper of the heart. The more calcium, the higher a patient’s risk of a heart attack. 

Often, a cardiologist looks for those bright spots on purpose: They’ll grab snapshots of the heart between beats, to get the clearest possible view of the coronary arteries. But calcium is also visible on zoomed-out chest CTs that aren’t synchronized with the heart. Every year, patients receive 19 million of those more general scans — to screen for lung cancer, or investigate a persistent cough — and an eagle-eyed radiologist can report any incidental calcium they spot.
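
As an illustration of what the underlying task looks like (this is the classic Agatston scoring method, a standard in the field, not STAT’s description of any specific AI product), a minimal sketch: voxels above the conventional 130-Hounsfield-unit threshold are treated as calcium, and each lesion’s area is weighted by its peak density.

```python
import numpy as np

def agatston_weight(peak_hu: float) -> int:
    """Standard Agatston density weighting, based on a lesion's peak HU."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0

def lesion_score(slice_hu: np.ndarray, mask: np.ndarray, mm2_per_pixel: float) -> float:
    """Score one lesion on one axial slice: area (mm^2) times density weight."""
    area_mm2 = mask.sum() * mm2_per_pixel
    return area_mm2 * agatston_weight(slice_hu[mask].max())

# Toy example: a 3x3 calcified spot (peak 420 HU) on a 0.5 mm^2-per-pixel grid.
slice_hu = np.zeros((10, 10))
slice_hu[4:7, 4:7] = [[150, 300, 150], [300, 420, 300], [150, 300, 150]]
mask = slice_hu >= 130  # the distinct, bright pixels described above
print(lesion_score(slice_hu, mask, 0.5))  # 9 pixels x 0.5 mm^2 x weight 4 = 18.0
```

In a real gated scan, scores are summed over all lesions and slices; on ungated chest CTs, motion blur and noise make that thresholding far less clean, which is where the AI models come in.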

But even as heart disease remains the top cause of death in the United States, an estimated 20% to 40% of that incidental calcium goes unreported. “We need to find more of these patients,” said Ami Bhatt, chair of the Food and Drug Administration’s Digital Health Advisory Committee and chief innovation officer of the American College of Cardiology. 

Continue to STAT+ to read the full story…

Building trust in the AI era with privacy-led UX

The practice of privacy-led user experience (UX) is a design philosophy that treats transparency around data collection and usage as an integral part of the customer relationship. An undertapped opportunity in digital marketing, privacy-led UX treats user consent not as a tick-box compliance exercise, but rather as the first overture in an ongoing customer relationship. For the companies that get it right, the payoff can bring something more intangible, valuable, and durable than simple consent rates: consumer trust.

The opportunities of privacy-led UX have only recently come into focus. Adelina Peltea, the chief marketing officer at Usercentrics, has seen enterprise sentiment shift: “Even just a few years ago, this space was viewed more as a trade-off between growth and compliance,” she says. “But as the market has matured, there’s been a greater focus on how to tie well-designed privacy experiences to business growth.”

And it turns out that well-designed, value-forward consent experiences routinely outperform initial estimates. Touchpoints for privacy-led UX often include consent management platforms, terms and conditions, privacy policies, data subject access request (DSAR) tools, and, increasingly, AI data use disclosures.

This report examines how data transparency builds trust with customers; how this, in turn, can support business performance; and how organizations can maintain this trust even as AI systems add complexity to consent processes.

Key findings include the following:

  • Privacy is evolving from a one-time consent transaction into an ongoing data relationship. Rather than asking users for broad permissions up front, leading organizations are introducing data-sharing decisions gradually, matching the depth of the ask to the stage of the customer relationship. Companies that take this tack tend to gather both a larger quantity and higher quality of consumer data, the value of which often compounds over time.
  • Privacy-led UX is a prerequisite for AI growth. The consumer data that organizations gather is rapidly becoming a core foundation upon which AI-powered personalization is built. Organizations that establish clear, enforceable privacy and data transparency policies now are better positioned to deploy AI responsibly and at scale in the future. This starts with correctly configured consent mode across ad platforms.
  • Agentic AI introduces new levels of both complexity and opportunity. As AI systems begin acting on users’ behalf, the traditional consent moment may never occur. Governing agent-generated data flows requires privacy infrastructure that goes well beyond the cookie banner.
  • Realizing the advantages of privacy-led UX requires cross-functional collaboration and clear leadership. Privacy-led UX touches marketing, product, legal, and data teams—but someone must own the strategy and weave the threads together. Chief marketing officers (CMOs) are often best positioned for that role, given their visibility across brand, data, and customer experience.
  • A practical framework can support businesses in getting it right. Organizations must define their data collection and usage strategies and ensure their UX incorporates data consent, including a focus on banner design. Following a blueprint for evaluating and improving privacy-led UX supports consistency at every consent touchpoint.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

NASA is building the first nuclear reactor-powered interplanetary spacecraft. How will it work?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Just before Artemis II began its historic slingshot around the moon, Jared Isaacman, the recently confirmed NASA administrator, made a flurry of announcements from the agency’s headquarters in Washington, DC. He said the US would soon undertake far more regular moon missions and establish the foundations for a base at the lunar south pole before the end of the decade. He also affirmed the space agency’s commitment to putting a nuclear reactor on the lunar surface.

These goals were largely expected—but there was still one surprise. Isaacman also said NASA would build the first-ever nuclear reactor-powered interplanetary spacecraft and fly it to Mars by the end of 2028. It’s called the Space Reactor-1 Freedom, or SR-1 for short. “After decades of study, and billions spent on concepts that have never left Earth, America will finally get underway on nuclear power in space,” he said at the event. “We will launch the first-of-its-kind interplanetary mission.”

A successful mission would herald a new era in spaceflight, one in which traveling between Earth, the moon, and Mars would—according to a range of experts—be faster and easier than ever. And it might just give the US the edge in the race against China—allowing the country to beat its greatest geopolitical rival to landing astronauts on another planet.

While experts agree the timeline is extremely tight, they’re excited to see if America’s space agency and its industry partners can deliver an engineering miracle. “You wake up to that announcement, and it puts a big smile on your face,” says Simon Middleburgh, co-director of the Nuclear Futures Institute at Bangor University in Wales.

Little detail on SR-1 is publicly available, and NASA’s own spaceflight researchers did not respond to requests for comment. But MIT Technology Review spoke to several nuclear power and propulsion experts to find out how the new nuclear-powered spacecraft might work.

Nuclear propulsion 101

Traditionally, spaceflight has been powered by chemical propulsion. Liquefied hydrogen and liquefied oxygen are mixed and then ignited within a rocket; the searingly hot exhaust from this combustion is ejected through a nozzle, which propels the rocket forward.

Chemical propulsion offers a significant amount of thrust and will, for the foreseeable future, still be used to launch spacecraft from Earth. But nuclear propulsion would enable spacecraft to fly through the solar system for far longer, and faster, than is currently possible. 

“You get more bang per kilogram,” says Middleburgh. A nuclear fuel source is far more energy-dense than its conventional cousin, which means it’s orders of magnitude more efficient. “It’s really, really, really high efficiency,” says Lindsey Holmes, an expert in space nuclear technology and the vice president of advanced projects at Analytical Mechanics Associates, an aerospace company in Virginia. 

The approach also removes one other element of the traditional power equation: solar. Spacecraft, including the Artemis II mission’s Orion space capsule, often rely on the sun for power. But this can be a problem, since sunlight isn’t always available, particularly when a planet or moon gets in the way; and as you head toward the outer solar system, beyond Mars, there’s simply less of it.

To circumvent this issue, nuclear energy sources have been used in spacecraft plenty of times before—including on both Voyager missions and the Saturn-interrogating Cassini probe. Known as radioisotope thermoelectric generators, or RTGs, these use plutonium, which radioactively decays and generates heat in the process. That heat is then converted into electricity for the spacecraft to use. RTGs, however, aren’t the same as nuclear reactors; they are more akin to radioactive batteries—more rudimentary and considerably less powerful.

So how will a nuclear-reactor-powered spacecraft work? 

Despite operational differences, the fundamentals of running a nuclear reactor in space are much the same as they are on Earth. First, get some uranium fuel; then bombard it with neutrons. This ruptures the uranium’s unstable atomic nuclei, which expel a torrent of extra neutrons—and that rapidly escalates into a self-sustaining, roasting-hot nuclear fission reaction. Its prodigious heat output can then be used to produce electricity.

Doing this in space may sound like an act of lunacy, but it’s not: The idea, and even a lot of the basic technology, has been around for decades. The Soviet Union sent dozens of nuclear reactors into orbit (often to power spy satellites), while the US deployed just one, known as SNAP-10A, back in 1965—a technological demonstration to see if it would operate normally in space. The aim was for the reactor to generate electricity for at least a year, but it ran for just over a month before a high-voltage failure in the spacecraft caused it to malfunction and shut down. 

Now, more than half a century later, the US wants its second-ever space-based nuclear reactor to do something totally different: power an interplanetary spacecraft.

To be clear, the US has started, and terminated, myriad programs looking into nuclear propulsion. The latest casualty was DRACO, a collaboration between NASA and the Department of Defense, which ended in 2025. Like several previous efforts, DRACO was canceled because of a mix of high experimentation costs, lower prices for conventional rocket propulsion, and the difficulty of ensuring that ground tests could be performed safely and effectively (they are creating an incredibly powerful nuclear reaction, after all).

But now external considerations may be changing the calculus. The Artemis program has jump-started America’s return to the moon, and the new space race has palpable momentum behind it. The first nation to deploy nuclear propulsion would have a serious advantage navigating through deep space. 

“I think it’s a very doable technology,” says Philip Metzger, a spaceflight engineering researcher at the Florida Space Institute. “I’m happy to see them finally doing this.”

One version of this technology is known as nuclear thermal propulsion, or NTP. You start with a nuclear reactor, one that’s cooking at around 5,000°F. Then “you’ve got a cold gas, and you squirt cold gas over the hot reactor,” says Middleburgh. “The gas expands, you shoot it out the back of a nozzle, and you have an impulse. And that impulse drives you forward.” 

Because performance depends on the speed of the gas being ejected, and lighter gases reach higher exhaust speeds at a given temperature, the propellant needs to be light, making hydrogen a popular choice. But hydrogen is a corrosive and explosive substance, so using it in NTP engines can make them precarious to operate. On top of this, NTP doesn’t necessarily have a very long operating life.
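
The preference for light propellants follows from standard thermal-rocket physics (a general relation, not an SR-1 specification): for a propellant heated to chamber temperature $T$ and expanded through a nozzle, the exhaust velocity scales roughly as

```latex
v_e \;\propto\; \sqrt{\frac{T}{M}}
```

so at a given reactor temperature, hydrogen, with the lowest molar mass $M$ of any propellant, gives the fastest exhaust and hence the highest specific impulse.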

Alternatively, there’s nuclear electric propulsion, or NEP, which “is very low thrust, but very efficient, so you can use it for a long period of time,” says Sebastian Corbisiero, the US Department of Energy’s national technical director of space reactor programs. This method uses heat from a fission reactor to generate power. That power is used to ionize a gas and then accelerate it out of the spacecraft, generating thrust.

Both NTP and NEP have been investigated by US researchers because both have the added benefit of making it easier and safer for human beings to explore the solar system. Astronauts in space are exposed to harmful cosmic radiation, but because nuclear propulsion makes spacecraft speedier and more agile, they’d spend less time exposed to it. “It solves the radiation problem,” says Metzger. “That’s one of the main motivations for inventing better propulsion to and from Mars.”

How to build a nuclear-powered spaceship

For SR-1, NASA has opted for nuclear electric propulsion. NEP is “a much simpler affair” than its thermal counterpart, says Middleburgh. Essentially, you just need to plug a nuclear reactor into a power-and-propulsion system. Luckily for NASA, it’s already got one.

For many years, NASA—along with its space agency partners in Canada, Europe, Japan, and the Middle East—was preparing for Gateway, meant to be humanity’s first space station to orbit around the moon. Isaacman canceled the project in March, but that doesn’t mean its technology will go to waste; the power-and-propulsion element of the nixed space station will be used in SR-1 instead. This contraption was going to be powered by solar energy. It’ll now be attached to an in-development nuclear reactor custom built to survive in space.

What might the SR-1 look like? MIT Technology Review saw a presentation by Steve Sinacore, program executive of NASA’s Space Reactor Office, that offers some clues. So far, the concept art makes it look like a colossal fletched arrow. At the back will be the power-and-propulsion system, while its tip will hold a 20-kilowatt-or-greater uranium-filled nuclear reactor. (For context, a typical nuclear plant on Earth is 50,000 times more powerful, producing a gigawatt of power.) 

Annotated diagram of the key systems of SR-1 Freedom. At the front is the power-and-propulsion element, with an up-to-48-kW advanced electric propulsion system. Panels at midcraft are a high-performance, lightweight composite-and-titanium heat-rejection system. At the tail are an advanced closed Brayton cycle power-conversion system and a 20-kWe reactor with HALEU UO2 fuel, heat-pipe thermal transfer, and a boron carbide radiation shield. A small attachment at midcraft is labeled “High Rate Direct to Earth Communications.”

NASA

The “fletches” on SR-1 are large fins that allow the reactor to cool down. “You have to have really large radiators,” says Holmes, since the nuclear fission process produces so much heat that much of it has to be vented into space—otherwise, the reactor and spacecraft will melt.
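
The radiator sizing follows from the Stefan–Boltzmann law, since thermal radiation is the only way to dump heat into a vacuum (a general relation, not an SR-1 figure):

```latex
P \;=\; \varepsilon \sigma A T^{4}
```

where $P$ is the heat rejected, $\varepsilon$ the surface emissivity, $\sigma$ the Stefan–Boltzmann constant, $A$ the radiator area, and $T$ its temperature. For a fixed operating temperature, shedding more heat means adding more area.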

According to that presentation, the spacecraft’s hardware development is due to start this June. By January 2028, SR-1’s systems should be ready for assembly and testing. And by that October, the spacecraft will arrive at the launch site, ready for liftoff before the year’s end. Will the nuclear reactor manage to hold itself together? “Going through the launch safely is going to be a challenge,” says Middleburgh. “You are being shaken, rattled, and rolled.” 

Then, he says, “once you’re up in space, once you’ve got through that few minutes of hell in getting there, it’s zero-gravity considerations you have to worry about.” The question then becomes: Will the mechanics of the reactor, built on terra firma, still work? 

For safety reasons, the nuclear reactor will be switched on around two days post-launch, when it’s comfortably in space. Uranium isn’t tremendously dangerous by itself, but that can’t be said of the nuclear waste products that emerge when the reactor is activated, so you don’t want any of that to fall back to Earth. 

If this schedule is adhered to, and SR-1 works as planned, it’s expected to reach Mars about a year after launch. “It’s an aggressive timeline,” says Holmes, something she suspects is being driven partly by China’s and Russia’s own deep-space nuclear ambitions. The two countries aim to place their own nuclear reactor on the moon’s surface to power the planned International Lunar Research Station—a jointly operated lunar base—by 2035. 

Whether it flies or fails in space, SR-1’s operations should help NASA with putting a nuclear reactor on the moon soon after. “All of the things we’d be learning about how that system operates in space [are] very helpful for a surface application, because basically it’s the same,” says Corbisiero. “There’s still no air on the moon.”

And if SR-1 does triumph, it will be a game-changing victory for NASA. It will also be “a massive win for the human race, frankly,” says Middleburgh. “It will be a marvel of engineering, and it will move the dial in humans potentially taking a step on Mars.” Like many of his colleagues, including Holmes, he remains thrilled by the prospect of the first-ever nuclear-powered interplanetary spacecraft—even with the incredibly ambitious timeline. 

“These are the things that get us up in the morning,” he says. “These are the sorts of things we will remember when we’re old.”

Engineered Miniature CRISPR Boosts Gene‑Editing Efficiency in Human Cells

One of the biggest obstacles to delivering CRISPR therapies directly into the body isn’t the editing chemistry; it’s the size of the editors themselves. The field’s workhorse nucleases, including Cas9 and Cas12a, are too large (exceeding 1,300 amino acids) to fit easily inside adeno‑associated virus (AAV) vectors, the most widely used delivery vehicle for in vivo gene therapy. That size mismatch has forced most clinical applications to rely on ex vivo editing of blood or bone‑marrow‑derived cells, leaving many tissues out of reach. A smaller CRISPR system that can be packaged into AAV without sacrificing efficiency has long been a key missing piece.

A new study published in Nature Structural & Molecular Biology takes a major step toward that goal. Researchers at the University of Texas at Austin and collaborators report the discovery and engineering of a compact Cas12f nuclease that performs robustly in human cells, a notable advance for a class of miniature enzymes that have historically shown lower efficiencies in mammalian cells compared to larger systems. The paper is titled, “Comparative characterization of Cas12f orthologs reveals mechanistic features underlying enhanced genome editing efficiency.”

The team began by mining metagenomic datasets for naturally small CRISPR enzymes and identified a previously uncharacterized ortholog, Alistipes sp. Cas12f (Al3Cas12f). Despite its compact size—roughly one‑third that of Cas9—the nuclease showed unexpectedly strong activity in human cells. In initial screens, Al3Cas12f produced more than 50% editing at many genomic sites and exceeded 90% at several targets. The authors wrote, “Results from a gRNA screen targeting intron 1 of the ALB gene, exon 3 of the APOA1 gene and the AAVS1 site within PPP1R12C intron 1 showed that 27 target sites displayed >10% editing, 19 sites displayed >50% editing and 10 sites displayed >90% editing across AAVS1 and APOA1.”

Cryo‑EM structures revealed why this miniature enzyme punches above its weight. Compared with other Cas12f orthologs, Al3Cas12f forms a more extensive and interlocking dimer interface, creating a stable, preassembled complex that supports efficient R‑loop formation. The guide RNA scaffold also appears naturally streamlined: unlike other Cas12f gRNAs, it lacks an extraneous stem‑loop and adopts a compact conformation that docks cleanly into the protein. As the authors noted, Al3Cas12f achieves “efficient R‑loop formation through a stable dimer interface and a naturally optimized gRNA.”

Using these structural insights, the team engineered an enhanced variant, Al3Cas12f RKK, that dramatically boosts editing efficiency across genomic loci. In human cells, the variant increased editing from below 10% to more than 80% at many targets, with some sites reaching 90%. The researchers tested the system in a leukemia‑derived human cell line, focusing on genes implicated in cancer, atherosclerosis, and ALS.

The mechanistic comparisons were equally revealing. By solving the structures of two additional Cas12f orthologs—Oscillibacter sp. Cas12f and Ruminiclostridium herbifermentans Cas12f—the team noted “divergent architectures and regulatory features governing protospacer-adjacent motif recognition, gRNA binding, dimerization, and DNA cleavage.” Al3Cas12f’s extended helices and mortise‑and‑tenon‑like interactions appear to be lineage‑specific adaptations that stabilize the nuclease and support high activity.

The next step is to test whether the enzyme maintains its performance when packaged into AAV vectors. If successful, the system could offer a blueprint for engineering future generations of compact CRISPR tools.

The post Engineered Miniature CRISPR Boosts Gene‑Editing Efficiency in Human Cells appeared first on GEN – Genetic Engineering and Biotechnology News.

Popular AI Chatbots Can Provide Misleading Medical Information

Around half the outputs from five commonly used artificial intelligence (AI) chatbots could lead users to ineffective or harmful medical choices without professional guidance, suggests research led by the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center.

As reported in BMJ Open, the researchers tested the free web versions of Gemini, DeepSeek, Meta AI, ChatGPT 3.5 and Grok available in 2024. They created 50 different adversarial prompts intended to test whether the AI models would give a problematic response or not.

The prompts were intended to realistically represent the kinds of queries members of the public might enter about health topics ranging from cancer to vaccines to stem cells, nutrition, and athletic performance. Some prompts required a specific answer and some were more open.

The researchers collected 250 responses to their prompts and categorized them as non-, somewhat, or highly problematic, using predefined criteria. Around half (49.6%) were problematic overall: 30% somewhat problematic and 19.6% highly problematic. Open-ended prompts received the most problematic answers.
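
A quick arithmetic check of those categories (assuming, as the article implies, that the percentages are shares of the 250 collected responses):

```python
# Sanity-check the reported breakdown of the 250 chatbot responses.
# Assumption: the percentages are shares of the 250 total, which the
# article implies but does not state explicitly.
total = 250
somewhat = round(0.30 * total)    # somewhat problematic -> 75 responses
highly = round(0.196 * total)     # highly problematic   -> 49 responses
problematic = somewhat + highly   # 124 responses overall
print(problematic, f"{problematic / total:.1%}")  # 124 49.6%
```

The two subcategories sum to 49.6%, which matches the “around half” headline figure.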

In terms of the specific models, Grok produced a disproportionate share of highly problematic answers, while Gemini produced the fewest highly problematic and the most non-problematic responses. Topic-wise, the chatbots appeared more accurate when asked about cancer and vaccines, but less so when asked about stem cells, athletic performance, and nutrition.

Reference lists provided to users by the models were limited or inaccurate, and the answers were pitched at college-educated users, requiring some background knowledge to interpret properly.

“Despite adversarial pressure, chatbots typically responded in a confident, authoritative tone. Refusals to answer and explicit caveats or disclaimers were rare, reflecting the models’ strong tendency to provide an output even when prompts steered toward contraindicated advice,” write lead author Nicholas Tiller, PhD, a research associate at the Lundquist Institute, Harbor-UCLA Medical Center, and colleagues.

“As the use of AI chatbots continues to expand, our data highlight a need for public education, professional training and regulatory oversight to ensure that generative AI supports, rather than erodes, public health,” they conclude.

The post Popular AI Chatbots Can Provide Misleading Medical Information appeared first on Inside Precision Medicine.

The Influence of the COVID-19 Pandemic on Current Teaching Methods, Training, and Perception Among Romanian Surgery-Oriented Students: Cross-Sectional Study

<strong>Background:</strong> The COVID-19 pandemic prompted rapid changes in medical education, accelerating the adoption of online and distance learning methods as alternatives to traditional teaching. While these approaches offered logistical advantages, students worldwide reported significant limitations, particularly in terms of motivation, clinical exposure, and hands-on skill acquisition. Despite the increased use of digital teaching during the pandemic, core educational objectives and the mission of medical training remained unchanged, emphasizing the continued importance of practical experience. <strong>Objective:</strong> This study aims to investigate the impact of the COVID-19 pandemic on current teaching methods in medical education and to explore students’ perceptions of online learning, telemedicine, artificial intelligence, and other modern educational alternatives. <strong>Methods:</strong> This observational, cross-sectional multicentric study surveyed a cohort of Romanian medical students using a self-developed 48-item online questionnaire distributed via social media. Data were collected over 6 weeks (February-March), yielding 451 responses, of which eligible participants included students in clinical years or preclinical students interested in surgical or orthopedic careers, with a heavy representation of the Medicine and Pharmacy University of Timisoara. Statistical analysis was performed using Microsoft Excel and JASP (University of Amsterdam; version 0.95.4). <strong>Results:</strong> A total of 436 responses were analyzed, with students favoring online or hybrid formats for lectures but preferring on-site teaching for practical training. Reduced patient interaction and limited skill acquisition were the main drawbacks of online practical education. Acceptance of hybrid learning correlated with more positive perceptions of teaching methods and a lower perceived desire to cheat. 
<strong>Conclusions:</strong> The COVID-19 pandemic brought significant changes to the way medicine is taught in Romania, but it also gave students and medical staff a clearer picture of how they want medical education to be delivered. Online cheating remains a significant challenge, but it is currently being addressed, with different detection algorithms under evaluation.

Evaluating the Feasibility of Technology-Based Interventions in Disability and Rehabilitation: Definitions, Considerations, and Dimensions

Technology-based interventions in the field of disability and rehabilitation, which serve assistive, therapeutic, and/or service delivery functions, are considered complex due to the skills required of providers and recipients, degree of individual tailoring, and diversity of use settings. Feasibility studies are an important step in the evolution of complex interventions that can help refine the intervention, inform implementation, and prevent wasted resources. However, guidance is lacking regarding specific considerations for feasibility studies of technology-based interventions in disability and rehabilitation, which leaves researchers and developers reliant on resources from other fields that do not address important technology properties. To advance the field, context-specific definitions, considerations, and evaluation dimensions must be explicitly outlined to ensure that feasibility studies are constructively designed to meet the unique needs of these interventions. In this viewpoint article, we (1) propose a definition and framework for feasibility studies within the specific context of technology-based disability and rehabilitation interventions, (2) highlight important and unique imperatives for feasibility studies of these interventions, and (3) articulate relevant feasibility dimensions and associated evaluation criteria for these interventions. Building on previous work, we distinguish between feasibility studies, wherein we focus on iterative intervention refinement by addressing key development questions (eg, usability), and pilot studies, which are small-scale versions of a larger study that will evaluate intervention outcomes. Integrating previous typologies, we present 13 feasibility dimensions relevant to technology-based interventions and provide sample evaluation criteria, focusing on the intervention itself rather than study design considerations (eg, trial management). 
This information may be useful for research and development communities (academic, clinical, or industry) to inform comprehensive feasibility studies that examine unique aspects of technology-based interventions to promote real-world impact. This contribution encourages greater harmonization of terminology and evaluation methods to streamline interpretation and comparison across studies.