Last fall, the Journal of the American Academy of Child & Adolescent Psychiatry issued a so-called expression of concern about a controversial study that was published in 2001 about the widely prescribed antidepressant known as Paxil.
Such a step is taken when a study may have errors or include unreliable information. The notice, which followed a request for a retraction, indicated that a review was underway. Meanwhile, it served as a warning, of sorts, to health care providers who might consult the study when deciding whether to prescribe the medicine.
The deep distrust between public health and the Make America Healthy Again movement may seem impossible to heal. But the podcast “Why Should I Trust You?” is trying to do just that by facilitating conversation between people who often view each other as enemies.
Brinda Adhikari and Tom W. Johnson launched “Why Should I Trust You?” in 2025. Since then, they’ve hosted big names from MAHA, the Trump administration, the anti-vaccine movement, and traditional health. They also bring on everyday Americans trying to keep their families healthy while navigating a confusing information ecosystem. “Everyone, when they come on the show, no matter what their, quote unquote, expertise, they’re all equals. Everyone gets time to speak,” Adhikari said.
In a CT scan, coronary artery calcium shows up as distinct, bright pixels. It looks like salt in the pepper of the heart. The more calcium, the higher a patient’s risk of a heart attack.
Often, a cardiologist looks for those bright spots on purpose: They’ll grab snapshots of the heart between beats, to get the clearest possible view of the coronary arteries. But calcium is also visible on zoomed-out chest CTs that aren’t synchronized with the heart. Every year, patients receive 19 million of those more general scans — to screen for lung cancer, or investigate a persistent cough — and an eagle-eyed radiologist can report any incidental calcium they spot.
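Spotting that calcium comes down to intensity: calcified plaque is far denser than the surrounding soft tissue, so its voxels score much higher on the CT brightness scale (Hounsfield units). As a rough, non-clinical sketch of the idea, the conventional Agatston scoring method flags voxels at or above about 130 HU; the tiny array below is made-up data, not a real scan:

```python
import numpy as np

# Toy illustration (not clinical software): flag candidate calcium
# voxels in a CT slice. The Agatston method conventionally uses a
# threshold of about 130 Hounsfield units (HU); these array values
# are invented for demonstration.
HU_THRESHOLD = 130

ct_slice = np.array([
    [ 40,  55, 300,  60],
    [ 35, 210,  45,  50],
    [ 30,  42,  38, 400],
])

calcium_mask = ct_slice >= HU_THRESHOLD
print(int(calcium_mask.sum()))  # prints 3: three bright candidate voxels
```

Real scoring also weights each flagged region by its peak density and area, but the thresholding step is why calcium stands out as “distinct, bright pixels” even on scans ordered for other reasons.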
But even as heart disease remains the top cause of death in the United States, an estimated 20% to 40% of that incidental calcium goes unreported. “We need to find more of these patients,” said Ami Bhatt, chair of the Food and Drug Administration’s Digital Health Advisory Committee and chief innovation officer of the American College of Cardiology.
The practice of privacy-led user experience (UX) is a design philosophy that treats transparency around data collection and usage as an integral part of the customer relationship. An undertapped opportunity in digital marketing, privacy-led UX treats user consent not as a tick-box compliance exercise, but rather as the first overture in an ongoing customer relationship. For the companies that get it right, the payoff can bring something more intangible, valuable, and durable than simple consent rates: consumer trust.
The opportunities of privacy-led UX have only recently come into focus. Adelina Peltea, the chief marketing officer at Usercentrics, has seen enterprise sentiment shift: “Even just a few years ago, this space was viewed more as a trade-off between growth and compliance,” she says. “But as the market has matured, there’s been a greater focus on how to tie well-designed privacy experiences to business growth.”
And it turns out that well-designed, value-forward consent experiences routinely outperform initial estimates. Touchpoints for privacy-led UX often include consent management platforms, terms and conditions, privacy policies, data subject access request (DSAR) tools, and, increasingly, AI data use disclosures.
This report examines how data transparency builds trust with customers; how this, in turn, can support business performance; and how organizations can maintain this trust even as AI systems add complexity to consent processes.
Key findings include the following:
Privacy is evolving from a one-time consent transaction into an ongoing data relationship. Rather than asking users for broad permissions up front, leading organizations are introducing data-sharing decisions gradually, matching the depth of the ask to the stage of the customer relationship. Companies that take this tack tend to gather both a larger quantity and higher quality of consumer data, the value of which often compounds over time.
Privacy-led UX is a prerequisite for AI growth. The consumer data that organizations gather is rapidly becoming a core foundation upon which AI-powered personalization is built. Organizations that establish clear, enforceable privacy and data transparency policies now are better positioned to deploy AI responsibly and at scale in the future. This starts with correctly configured consent mode across ad platforms.
Agentic AI introduces new levels of both complexity and opportunity. As AI systems begin acting on users’ behalf, the traditional consent moment may never occur. Governing agent-generated data flows requires privacy infrastructure that goes well beyond the cookie banner.
Realizing the advantages of privacy-led UX requires cross-functional collaboration and clear leadership. Privacy-led UX touches marketing, product, legal, and data teams—but someone must own the strategy and weave the threads together. Chief marketing officers (CMOs) are often best positioned for that role, given their visibility across brand, data, and customer experience.
A practical framework can support businesses in getting it right. Organizations must define their data collection and usage strategies and ensure their UX incorporates data consent, including a focus on banner design. Following a blueprint for evaluating and improving privacy-led UX supports consistency at every consent touchpoint.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.
Just before Artemis II began its historic slingshot around the moon, Jared Isaacman, the recently confirmed NASA administrator, made a flurry of announcements from the agency’s headquarters in Washington, DC. He said the US would soon undertake far more regular moon missions and establish the foundations for a base at the lunar south pole before the end of the decade. He also affirmed the space agency’s commitment to putting a nuclear reactor on the lunar surface.
These goals were largely expected—but there was still one surprise. Isaacman also said NASA would build the first-ever nuclear reactor-powered interplanetary spacecraft and fly it to Mars by the end of 2028. It’s called the Space Reactor-1 Freedom, or SR-1 for short. “After decades of study, and billions spent on concepts that have never left Earth, America will finally get underway on nuclear power in space,” he said at the event. “We will launch the first-of-its-kind interplanetary mission.”
A successful mission would herald a new era in spaceflight, one in which traveling between Earth, the moon, and Mars would—according to a range of experts—be faster and easier than ever. And it might just give the US the edge in the race against China—allowing the country to beat its greatest geopolitical rival to landing astronauts on another planet.
While experts agree the timeline is extremely tight, they’re excited to see if America’s space agency and its industry partners can deliver an engineering miracle. “You wake up to that announcement, and it puts a big smile on your face,” says Simon Middleburgh, co-director of the Nuclear Futures Institute at Bangor University in Wales.
Little detail on SR-1 is publicly available, and NASA’s own spaceflight researchers did not respond to requests for comment. But MIT Technology Review spoke to several nuclear power and propulsion experts to find out how the new nuclear-powered spacecraft might work.
Nuclear propulsion 101
Traditionally, spaceflight has been powered by chemical propulsion. Liquefied hydrogen and liquefied oxygen are mixed and ignited within a rocket; the searingly hot exhaust from this combustion is ejected through a nozzle, which propels the rocket forward.
Chemical propulsion offers a significant amount of thrust and will, for the foreseeable future, still be used to launch spacecraft from Earth. But nuclear propulsion would enable spacecraft to fly through the solar system for far longer, and faster, than is currently possible.
“You get more bang per kilogram,” says Middleburgh. A nuclear fuel source is far more energy-dense than its conventional cousin, which means it’s orders of magnitude more efficient. “It’s really, really, really high efficiency,” says Lindsey Holmes, an expert in space nuclear technology and the vice president of advanced projects at Analytical Mechanics Associates, an aerospace company in Virginia.
The approach also removes one other element of the traditional power equation: solar. Spacecraft, including the Artemis II mission’s Orion space capsule, often rely on the sun for power. But this can be a problem, since sunlight isn’t always available, particularly when a planet or moon gets in the way—and as you head toward the outer solar system, beyond Mars, there’s simply less of it.
To circumvent this issue, nuclear energy sources have been used in spacecraft plenty of times before—including on both Voyager missions and the Saturn-interrogating Cassini probe. Known as radioisotope thermoelectric generators, or RTGs, these use plutonium, which radioactively decays and generates heat in the process. That heat is then converted into electricity for the spacecraft to use. RTGs, however, aren’t the same as nuclear reactors; they are more akin to radioactive batteries—more rudimentary and considerably less powerful.
So how will a nuclear-reactor-powered spacecraft work?
Despite operational differences, the fundamentals of running a nuclear reactor in space are much the same as they are on Earth. First, get some uranium fuel; then bombard it with neutrons. This ruptures the uranium’s unstable atomic nuclei, which expel a torrent of extra neutrons—and that rapidly escalates into a self-sustaining, roasting-hot nuclear fission reaction. Its prodigious heat output can then be used to produce electricity.
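The balance of that chain reaction is usually summarized by a single number, the effective multiplication factor k: how many neutrons from one fission generation go on to cause the next. A back-of-the-envelope sketch (the numbers here are illustrative, not reactor physics) shows why small changes in k matter so much:

```python
# Toy model of a fission chain reaction: each generation's neutron
# population is the previous one multiplied by k, the effective
# multiplication factor. At k = 1 (criticality) the reaction is
# steady; at k > 1 it grows rapidly. Control systems hold an
# operating reactor at k = 1.
def neutron_population(n0: float, k: float, generations: int) -> float:
    """Neutron count after a number of generations: n0 * k**generations."""
    return n0 * k ** generations

print(round(neutron_population(1000, 1.00, 50)))  # critical: holds steady at 1000
print(round(neutron_population(1000, 1.05, 50)))  # slightly supercritical: grows more than tenfold
```

That exponential sensitivity is the whole engineering problem in miniature: the reactor must be kept exactly critical while producing enough heat to run the spacecraft.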
Doing this in space may sound like an act of lunacy, but it’s not: The idea, and even a lot of the basic technology, has been around for decades. The Soviet Union sent dozens of nuclear reactors into orbit (often to power spy satellites), while the US deployed just one, known as SNAP-10A, back in 1965—a technological demonstration to see if it would operate normally in space. The aim was for the reactor to generate electricity for at least a year, but it ran for just over a month before a high-voltage failure in the spacecraft caused it to malfunction and shut down.
Now, more than half a century later, the US wants its second-ever space-based nuclear reactor to do something totally different: power an interplanetary spacecraft.
To be clear, the US has started, and terminated, myriad programs looking into nuclear propulsion. The latest casualty was DRACO, a collaboration between NASA and the Department of Defense, which ended in 2025. Like several previous efforts, DRACO was canceled because of a mix of high experimentation costs, lower prices for conventional rocket propulsion, and the difficulty of ensuring that ground tests could be performed safely and effectively (they are creating an incredibly powerful nuclear reaction, after all).
But now external considerations may be changing the calculus. The Artemis program has jump-started America’s return to the moon, and the new space race has palpable momentum behind it. The first nation to deploy nuclear propulsion would have a serious advantage navigating through deep space.
“I think it’s a very doable technology,” says Philip Metzger, a spaceflight engineering researcher at the Florida Space Institute. “I’m happy to see them finally doing this.”
One version of this technology is known as nuclear thermal propulsion, or NTP. You start with a nuclear reactor, one that’s cooking at around 5,000°F. Then “you’ve got a cold gas, and you squirt cold gas over the hot reactor,” says Middleburgh. “The gas expands, you shoot it out the back of a nozzle, and you have an impulse. And that impulse drives you forward.”
Because the thrust depends on the speed of the gas being ejected, the propellant gas needs to be light, making hydrogen a popular choice. But hydrogen is a corrosive and explosive substance, so using it in NTP engines can make them precarious to operate. On top of this, NTP doesn’t necessarily have a very long operating life.
Alternatively, there’s nuclear electric propulsion, or NEP, which “is very low thrust, but very efficient, so you can use it for a long period of time,” says Sebastian Corbisiero, the US Department of Energy’s national technical director of space reactor programs. This method uses heat from a fission reactor to generate power. That power is used to electrify a gas and then blast it out of the spacecraft, generating thrust.
Both NTP and NEP have been investigated by US researchers, because both have the added benefit of making it easier and safer for human beings to explore the solar system. Astronauts in space are exposed to harmful cosmic radiation, but because nuclear propulsion makes spacecraft speedier and more agile, they’d spend less time exposed to it. “It solves the radiation problem,” says Metzger. “That’s one of the main motivations for inventing better propulsion to and from Mars.”
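The efficiency gap behind all of this can be made concrete with the Tsiolkovsky rocket equation, which converts an engine’s specific impulse (Isp) and a spacecraft’s propellant budget into total velocity change. The sketch below uses rough textbook figures—about 450 seconds for a hydrogen/oxygen chemical engine, a few thousand seconds for an electric thruster—not SR-1 specifications:

```python
import math

# Illustrative comparison (Isp values are typical textbook figures,
# not SR-1 design specs): the Tsiolkovsky rocket equation,
#   delta-v = Isp * g0 * ln(m_initial / m_final),
# shows how much more velocity change a high-efficiency engine
# extracts from the same propellant mass.
G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_seconds: float, m_initial: float, m_final: float) -> float:
    return isp_seconds * G0 * math.log(m_initial / m_final)

masses = (10_000, 7_000)  # kg: a spacecraft spending 3 tonnes of propellant
chemical = delta_v(450, *masses)   # hydrogen/oxygen chemical engine
electric = delta_v(3000, *masses)  # electric thruster fed by a reactor
print(round(chemical), round(electric))  # electric yields several times the delta-v
```

The catch, as Corbisiero notes, is thrust: an electric thruster delivers its delta-v as a gentle, continuous push over weeks or months, rather than the hard shove of a chemical burn.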
How to build a nuclear-powered spaceship
For SR-1, NASA has opted for nuclear electric propulsion. NEP is “a much simpler affair” than its thermal counterpart, says Middleburgh. Essentially, you just need to plug a nuclear reactor into a power-and-propulsion system. Luckily for NASA, it’s already got one.
For many years, NASA—along with its space agency partners in Canada, Europe, Japan, and the Middle East—was preparing for Gateway, meant to be humanity’s first space station to orbit around the moon. Isaacman canceled the project in March, but that doesn’t mean its technology will go to waste; the power-and-propulsion element of the nixed space station will be used in SR-1 instead. This contraption was going to be powered by solar energy. It’ll now be attached to an in-development nuclear reactor custom built to survive in space.
What might the SR-1 look like? MIT Technology Review saw a presentation by Steve Sinacore, program executive of NASA’s Space Reactor Office, that offers some clues. So far, the concept art makes it look like a colossal fletched arrow. At the back will be the power-and-propulsion system, while its tip will hold a 20-kilowatt-or-greater uranium-filled nuclear reactor. (For context, a typical nuclear plant on Earth is 50,000 times more powerful, producing a gigawatt of power.)
NASA
The “fletches” on SR-1 are large fins that allow the reactor to cool down. “You have to have really large radiators,” says Holmes, since the nuclear fission process produces so much heat that much of it has to be vented into space—otherwise, the reactor and spacecraft will melt.
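How large is “really large”? In vacuum there is no air to carry heat away, so a radiator can shed heat only by thermal radiation, governed by the Stefan–Boltzmann law. A rough sizing sketch (all figures assumed for illustration; these are not SR-1 design numbers) gives a feel for the scale:

```python
# Rough radiator-sizing sketch (illustrative numbers only, not SR-1
# figures): in vacuum, radiated power follows the Stefan-Boltzmann law,
#   P = emissivity * sigma * area * T**4
# (ignoring the near-zero temperature of the space background).
# Solving for area shows why the fins must be large.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(waste_heat_w: float, temp_k: float,
                  emissivity: float = 0.9) -> float:
    """Panel area (m^2) needed to radiate the given waste heat."""
    return waste_heat_w / (emissivity * SIGMA * temp_k ** 4)

# Suppose the reactor must dump 100 kW of waste heat from panels at 500 K:
print(round(radiator_area(100_000, 500), 1))  # roughly 31 square meters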
According to that presentation, the spacecraft’s hardware development is due to start this June. By January 2028, SR-1’s systems should be ready for assembly and testing. And by that October, the spacecraft will arrive at the launch site, ready for liftoff before the year’s end. Will the nuclear reactor manage to hold itself together? “Going through the launch safely is going to be a challenge,” says Middleburgh. “You are being shaken, rattled, and rolled.”
Then, he says, “once you’re up in space, once you’ve got through that few minutes of hell in getting there, it’s zero-gravity considerations you have to worry about.” The question then becomes: Will the mechanics of the reactor, built on terra firma, still work?
For safety reasons, the nuclear reactor will be switched on around two days post-launch, when it’s comfortably in space. Uranium isn’t tremendously dangerous by itself, but that can’t be said of the nuclear waste products that emerge when the reactor is activated, so you don’t want any of that to fall back to Earth.
If this schedule is adhered to, and SR-1 works as planned, it’s expected to reach Mars about a year after launch. “It’s an aggressive timeline,” says Holmes, something she suspects is being driven partly by China’s and Russia’s own deep-space nuclear ambitions. The two countries aim to place their own nuclear reactor on the moon’s surface to power the planned International Lunar Research Station—a jointly operated lunar base—by 2035.
Whether it flies or fails in space, SR-1’s operations should help NASA with putting a nuclear reactor on the moon soon after. “All of the things we’d be learning about how that system operates in space [are] very helpful for a surface application, because basically it’s the same,” says Corbisiero. “There’s still no air on the moon.”
And if SR-1 does triumph, it will be a game-changing victory for NASA. It will also be “a massive win for the human race, frankly,” says Middleburgh. “It will be a marvel of engineering, and it will move the dial in humans potentially taking a step on Mars.” Like many of his colleagues, including Holmes, he remains thrilled by the prospect of the first-ever nuclear-powered interplanetary spacecraft—even with the incredibly ambitious timeline.
“These are the things that get us up in the morning,” he says. “These are the sorts of things we will remember when we’re old.”
Background: Artificial intelligence–powered conversational agents (ie, chatbots) are increasingly popular outlets for users seeking psychological support, yet little is known about how users experience early-stage prototypes or which therapeutic processes contribute to clinical improvement. A transparent evaluation of emerging chatbot prototypes is needed to clarify if, how, and why artificial intelligence companions work and to guide their continued development. Objective: This mixed methods pilot study evaluated user experience, acceptability, and preliminary clinical signals for an early-stage mental wellness chatbot. We also examined whether baseline symptom severity moderated clinical improvement. Methods: Three sequential cohorts (n=125) completed a 2-week, incentivized chatbot exposure (approximately 60 min per week). Participants provided first-impression ratings, qualitative feedback, and pre–post assessments of depressive symptoms (PHQ-8 [Patient Health Questionnaire-8]), anxiety symptoms (GAD-7 [Generalized Anxiety Disorder-7]), psychological distress, well-being, and loneliness. Statistical models estimated symptom change and tested interactions with baseline symptom severity. Mixed methods analysis integrated quantitative outcomes with large language model–assisted qualitative content analysis of open-ended responses. Results: Participants described the chatbot as accessible, easy to use, and emotionally validating, while citing limitations in personalization and conversational depth. Qualitative responses consistently highlighted early therapeutic processes such as emotional validation, goal setting, and perceived attunement. Regression models showed significant pre–post reductions in depressive (Hedges g=–0.32) and anxiety (g=–0.32) symptoms, alongside modest improvements in distress and well-being.
Baseline severity moderated improvement, with marginal effects indicating larger predicted reductions at higher PHQ-8 and GAD-7 baseline scores (eg, PHQ-8=15: g=–0.84; GAD-7=15: g=–0.62). Conclusions: This pilot provides a comprehensive view of early chatbot development and suggests promising user experiences and preliminary symptom improvements under structured pilot conditions. By integrating experiential and exploratory clinical data, the study identifies candidate process targets to inform ongoing refinement. Findings support continued development and demonstrate procedural feasibility for progression to larger, longer-term trials evaluating engagement and clinical outcomes under more naturalistic conditions.
Background: Depression is highly prevalent yet undertreated among people living with heart failure, indicating barriers to mental health services. Although various digital mental health interventions have been developed to detect, treat, and manage depression in this population, these interventions have seen limited integration into clinical care and a lack of implementation research. Stepped care is a service innovation that may promote the implementation of these technologies into clinical settings, but few studies have examined how these services are designed in clinical settings. Objective: This study aimed to identify strategies to address health system barriers to accessing mental health care from the perspective of people living with heart failure, clinicians, and researchers, and to incorporate these strategies into the design of a virtual mental health stepped care service within a heart failure remote management program. Methods: A qualitative description study was conducted using purposive recruitment of people living with heart failure, clinicians, and researchers from a heart failure remote patient management program. As part of a service design approach, semistructured interviews explored potential strategies to address barriers to accessing mental health services. Two researchers coded the data descriptively and constructed themes to guide the development of a virtual stepped care service. Results: A total of 22 participants were interviewed, comprising 13 people living with heart failure and 9 clinicians and researchers. Six themes were identified, comprising 4 requirements and 2 foundational principles. The requirements were to (1) adopt a collective approach to identify distress across methods, people, and time points; (2) maintain a referral-based approach; (3) rely on existing mental health human resources; and (4) offer patient choice among various mental health care options. 
These requirements were supported by two principles: (1) building on organizational strengths and (2) reducing treatment burden. Based on these findings, a virtual stepped care service was developed, incorporating a depression screening module, referral-based workflows, and, where clinically appropriate, patient choice in treatment selection. Conclusions: The stakeholder-informed design of this virtual stepped care service contributes to the limited literature on stepped care service design and demonstrates how such models can be tailored to their intended contexts. Although each component was designed to address health system barriers to mental health care for people living with heart failure, resource limitations may constrain the balance between feasibility and quality of care. Future research should evaluate the acceptability of this model among people living with heart failure and clinicians.
Software engineering has experienced two seismic shifts this century. First was the rise of the open source movement, which gradually made code accessible to developers and engineers everywhere. Second, the adoption of development operations (DevOps) and agile methodologies took software from siloed to collaborative development and from batch to continuous delivery. Now, a third such shift looks to be taking shape with the adoption of agentic AI in software engineering.
Thus far, engineering teams have mainly used AI to assist with coding, testing, and other individual tasks, within tightly designed parameters. But with agentic capabilities, AI agents become reasoning, self-directing entities that can manage not just discrete tasks but entire software projects—and do so largely autonomously. If adopted and fully embraced by engineering teams, agentic AI will usher in end-to-end software process automation and, ultimately, agent-managed development and product lifecycle automation.
This report, which is based on a survey of 300 engineering and technology executives, finds that software engineering teams are seeing the potential in agentic AI and are beginning to put it to use, but so far in a mainly limited fashion. Their ambitions for it are high, but most realize it will take time and effort to reduce the barriers to its full diffusion in software operations. As with DevOps and agile, reaping the full benefits of agentic AI in engineering will require sometimes difficult organizational and process change to accompany technology adoption. But the gains to be won in speed, efficiency, and quality promise to make any such pain well worthwhile.
Key findings include the following:
Adoption momentum is building. While half of organizations deem agentic AI a top investment priority for software engineering today, it will be a leading investment for over four-fifths in two years. That spending is driving accelerated adoption. Agentic AI is in (mostly limited) use by 51% of software teams today, and 45% have plans to adopt it within the next 12 months.
Early gains will be incremental. It will take time for software teams’ investments in agentic AI to start bearing fruit. Over the next two years, most expect the improvements from agent use to be slight (14%) or at best moderate (52%). But around one-third (32%) have higher expectations, and 9% think the improvements will be game changing.
Agents will accelerate time-to-market. The chief gains from agentic AI use over that two-year time frame will come from greater speed. Nearly all respondents (98%) expect their teams’ delivery of software projects from pilot to production to accelerate, with the anticipated increase in speed averaging 37% across the group.
The goal for most is full agentic lifecycle management. Teams’ ambitions for scaling agentic AI are high. Most aim for AI agents to be managing the product development and software development lifecycles (PDLC and SDLC) end to end relatively quickly. At 41% of organizations, teams aim to achieve this for most or all products in 18 months. That figure will rise to 72% two years from now, if expectations are met.
Compute costs and integration pose key early challenges. For all survey respondents—but especially in early-adopter verticals such as media and entertainment and technology hardware—integrating agents with existing applications and the cost of computing resources are the main challenges they face with agentic AI in software engineering. The experts we interviewed, meanwhile, emphasize the bigger change management difficulties teams will face in changing workflows.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.