Opinion: Hosting the ‘intellectual wrestling match’ between MAHA, public health
The deep distrust between public health and the Make America Healthy Again movement may seem impossible to heal. But the podcast “Why Should I Trust You?” is trying to do just that by facilitating conversation between people who often view each other as enemies.
Brinda Adhikari and Tom W. Johnson launched “Why Should I Trust You?” in 2025. Since then, they’ve hosted big names from MAHA, the Trump administration, the anti-vaccine movement, and traditional health. They also bring on everyday Americans trying to keep their families healthy while navigating a confusing information ecosystem. “Everyone, when they come on the show, no matter what their, quote unquote, expertise, they’re all equals. Everyone gets time to speak,” Adhikari said.
Building trust in the AI era with privacy-led UX
The practice of privacy-led user experience (UX) is a design philosophy that treats transparency around data collection and usage as an integral part of the customer relationship. An undertapped opportunity in digital marketing, privacy-led UX treats user consent not as a tick-box compliance exercise, but rather as the first overture in an ongoing customer relationship. For the companies that get it right, the payoff can bring something more intangible, valuable, and durable than simple consent rates: consumer trust.

The opportunities of privacy-led UX have only recently come into focus. Adelina Peltea, the chief marketing officer at Usercentrics, has seen enterprise sentiment shift: “Even just a few years ago, this space was viewed more as a trade-off between growth and compliance,” she says. “But as the market has matured, there’s been a greater focus on how to tie well-designed privacy experiences to business growth.”
And it turns out that well-designed, value-forward consent experiences routinely outperform initial estimates.
Touchpoints for privacy-led UX often include consent management platforms, terms and conditions, privacy policies, data subject access request (DSAR) tools, and, increasingly, AI data use disclosures.
This report examines how data transparency builds trust with customers; how this, in turn, can support business performance; and how organizations can maintain this trust even as AI systems add complexity to consent processes.

Key findings include the following:
- Privacy is evolving from a one-time consent transaction into an ongoing data relationship. Rather than asking users for broad permissions up front, leading organizations are introducing data-sharing decisions gradually, matching the depth of the ask to the stage of the customer relationship. Companies that take this tack tend to gather both a larger quantity and higher quality of consumer data, the value of which often compounds over time.
- Privacy-led UX is a prerequisite for AI growth. The consumer data that organizations gather is rapidly becoming a core foundation upon which AI-powered personalization is built. Organizations that establish clear, enforceable privacy and data transparency policies now are better positioned to deploy AI responsibly and at scale in the future. This starts with correctly configured consent mode across ad platforms.
- Agentic AI introduces new levels of both complexity and opportunity. As AI systems begin acting on users’ behalf, the traditional consent moment may never occur. Governing agent-generated data flows requires privacy infrastructure that goes well beyond the cookie banner.
- Realizing the advantages of privacy-led UX requires cross-functional collaboration and clear leadership. Privacy-led UX touches marketing, product, legal, and data teams—but someone must own the strategy and weave the threads together. Chief marketing officers (CMOs) are often best positioned for that role, given their visibility across brand, data, and customer experience.
- A practical framework can support businesses in getting it right. Organizations must define their data collection and usage strategies and ensure their UX incorporates data consent, including a focus on banner design. Following a blueprint for evaluating and improving privacy-led UX supports consistency at every consent touchpoint.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
The problem with thinking you’re part Neanderthal
You’ve probably heard some version of this idea before: that many of us have an “inner Neanderthal.” That is to say, around 45,000 years ago, when Homo sapiens first arrived in Europe, they met members of a cousin species—the broad-browed, heavier-set Neanderthals—and, well, one thing led to another, which is why some people now carry a small amount of Neanderthal DNA.
This DNA is arguably the 21st century’s most celebrated discovery in human evolution. It has been connected to all kinds of traits and health conditions, and it helped win the Swedish geneticist Svante Pääbo a Nobel Prize.
But in 2024, a pair of French population geneticists called into question the foundation of the popular and pervasive theory.
Lounès Chikhi and Rémi Tournebize, then colleagues at the Université de Toulouse, proposed an alternative explanation for the very same genomic patterns. The problem, they said, was that the original evidence for the inner Neanderthal was based on a statistical assumption: that humans, Neanderthals, and their ancestors all mated randomly in huge, continent-size populations. That meant a person in South Africa was just as likely to reproduce with a person in West Africa or East Africa as with someone from their own community.
Archaeological, genetic, and fossil evidence all shows, though, that Homo sapiens evolved in Africa in smaller groups, cut off from one another by deserts, mountains, and cultural divides. People sometimes crossed those barriers, but more often they partnered up within them.
In the terminology of the field, this dynamic is called population structure. Because of structure, genes do not spread evenly through a population but can concentrate in some places and be totally absent from others. The human gene pool is not so much an Olympic-size swimming pool as a complex network of tidal pools whose connectivity ebbs and flows over time.
This dynamic greatly complicates the math at the heart of evolutionary biology, which long relied on assumptions like randomly mating populations to extract general principles from limited data. If you take structure into account, Chikhi told me recently, then there are other ways to explain the DNA that some living people share with Neanderthals—ways that don’t require any interspecies sex at all.
“I believe most species are spatially organized and structured in different, complex ways,” says Chikhi, who has researched population structure for more than two decades and has also studied lemurs, orangutans, and island birds. “It’s a general failure of our field that we do not compare our results in a clear way with alternative scenarios.” (Pääbo did not respond to multiple requests for comment.)
Chikhi and Tournebize’s argument is about population structure, yes, but at heart, it is actually one about methods—how modern evolutionary science deploys computer models and statistical techniques to make sense of mountains upon mountains of genetic data.
They’re not the only scientists who are worried. “People think we really understand how genomes evolve and can write sophisticated algorithms for saying what happened,” says William Amos, a University of Cambridge population geneticist who has been critical of the “inner Neanderthal” theory. But, he adds, those models are “based on simple assumptions that are often wrong.”
And if they’re wrong, what’s at stake is far more than a single evolutionary mystery.
A captivating story of interspecies passion
Back in 2010, Pääbo’s lab pulled off something of a miracle. The researchers were able to extract DNA from nuclei in the cells of 40,000-year-old Neanderthal bones. DNA breaks down quickly after death, but the group got enough of it from three different individuals to produce a draft sequence of the entire Neanderthal genome, with 4 billion base pairs.
As part of their study, they performed a statistical test comparing their Neanderthal genome with the genomes of five present-day people from different parts of the world. That’s how they discovered that modern humans of non-African ancestry had a small amount of DNA in common with Neanderthals, a species that diverged from the Homo sapiens line more than 400,000 years ago, that they did not share with either modern humans of African ancestry or our closest living relative, the chimpanzee.
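The comparison at the heart of the 2010 study is widely known as the D-statistic, or “ABBA-BABA” test: across the genome, it counts sites where two of the compared genomes share a derived allele (one not found in the outgroup, here the chimpanzee) and asks whether the sharing is lopsided. Here is a minimal sketch of the counting logic; the allele coding and toy data are illustrative, not the published pipeline:

```python
def d_statistic(p1, p2, p3, out):
    """D = (ABBA - BABA) / (ABBA + BABA) over biallelic sites.

    Alleles are coded 0 (ancestral, matching the outgroup) or 1 (derived).
    ABBA sites: p2 and p3 share the derived allele; BABA sites: p1 and p3
    share it.  If p3 (e.g. a Neanderthal) is equally distant from p1 and
    p2, the two patterns are equally likely and D is near zero.
    """
    abba = baba = 0
    for a1, a2, a3, o in zip(p1, p2, p3, out):
        if o != 0 or a3 != 1:   # keep only sites where p3 carries the derived allele
            continue
        if a1 == 0 and a2 == 1:
            abba += 1
        elif a1 == 1 and a2 == 0:
            baba += 1
    total = abba + baba
    return (abba - baba) / total if total else 0.0

# Toy genomes: 2 ABBA sites and 1 BABA site -> D = (2 - 1) / 3
p1  = [0, 0, 1, 0]
p2  = [1, 1, 0, 0]
p3  = [1, 1, 1, 0]
out = [0, 0, 0, 0]
print(round(d_statistic(p1, p2, p3, out), 3))  # 0.333
```

A significantly positive D says that p2 shares more derived alleles with p3 than p1 does. As the rest of this article explains, the contested question is why: gene flow is one explanation, but ancestral population structure can skew the same counts.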

Pääbo’s team interpreted this as evidence of sexual reproduction between ancient Homo sapiens and the Neanderthals they encountered after they expanded out of Africa. “Neanderthals are not totally extinct,” Pääbo said to the BBC in 2010. “In some of us, they live on a little bit.”
The discovery was monumental on its own—but even more so because it reversed a previous consensus. More than a decade earlier, in 1997, Pääbo had sequenced a much smaller amount of Neanderthal DNA, in that case from a cell structure called a mitochondrion. It was different enough from Homo sapiens mitochondrial DNA for his team to cautiously conclude there had been “little or no interbreeding” between the two species.
After 2010, though, the idea of hybridization, also called admixture, effectively became canon. Top journals like Science and Nature published study after study on the inner Neanderthal. Some scientists have argued that Homo sapiens would never have adapted to colder habitats in Europe and Asia without an infusion of Neanderthal DNA. Other research teams used Pääbo’s techniques to find genetic traces of interbreeding with an extinct group of hominins in Asia, called the Denisovans, and a mysterious “ghost lineage” in Africa. Biologists used similar tests to find evidence of interbreeding between chimpanzees and bonobos, polar and brown bears, and all kinds of other animals.
The inner-Neanderthal hypothesis also took a turn for the personal. Various studies linked Neanderthal DNA to a head-spinning range of conditions: alcoholism, asthma, autism, ADHD, depression, diabetes, heart disease, skin cancer, and severe covid-19. Some researchers suggested that Neanderthal DNA had an impact on hair and skin color, while others assigned individuals a “NeanderScore” that was correlated with skull shape and prevalence of schizophrenia markers. Commercial genetic testing companies like 23andMe started offering customers Neanderthal ancestry reports.
The inner Neanderthal became a story we could tell ourselves about our flaws and genetic destiny: Don’t blame me; blame the prognathic caveman hiding in my cells. Or as Latif Nasser, a host of the popular-science program Radiolab, put it when he was hospitalized with Crohn’s disease, another Neanderthal-associated condition: “I just keep imagining these tiny Neanderthals … just, like, stabbing me and drawing these little droplets of blood out of me.”
“These things become meaningful to people,” Chikhi says. “What we say will be important to how people view themselves.”
The pitfalls of simplistic solutions
When population geneticists built the theoretical framework for evolutionary biology in the early 20th century, genes were only abstract units of heredity inferred from experiments with peas and fruit flies. Population genetics developed theory far more quickly than it accumulated data. As a result, many data-driven scientists dismissed the study of evolution as a form of storytelling based on unexamined assumptions and preconceived ideas.
By the ’90s, though, genes were no longer abstractions but sequenced segments of DNA. Genomic sequencing grounded evolutionary studies in the kind of hard data that a chemist or physicist could respect.
Yet biologists could not simply read evolutionary history from genomes as though they were books. They were trying to determine which of a nearly infinite number of plausible histories was the most likely to have created the patterns they observed in a small sample of genomes. For that, they needed simplified, algorithmic models of evolution. The study of evolution shifted from storytelling to statistics, and from biology to computer science.
That suited Chikhi, who as a child was drawn to the predictable laws and numerical precision of math and science. He entered the field in the mid-’90s just as the first big studies of human DNA were settling old debates about human origins. DNA showed that Africa harbored far more genetic diversity than the entire rest of the planet. The new evidence supported the idea that modern humans evolved for hundreds of thousands of years in Africa and expanded to the other continents only in the last 100,000 years. For Chikhi, whose parents were Algerian immigrants, this discovery was a powerful challenge to the way some archaeologists and biologists talked about race. DNA could be used to deconstruct rather than encourage the pernicious idea that human races had deep-seated evolutionary differences based on their places of origin.
At the same time, though, he was wary of the tendency to treat DNA as the final verdict on open questions in evolution. Chikhi had been surprised when, back in 1997, Pääbo and his team used that small amount of mitochondrial DNA to rule out hybridization between Homo sapiens and Neanderthals. He didn’t think that the absence of Neanderthal DNA there necessarily meant it wouldn’t be found elsewhere in the Homo sapiens genome.
Chikhi’s own research in the aughts opened his eyes to the gaps between historical reality and models of evolution. For one, despite the assumption of random mating, none of the animals Chikhi studied actually mated randomly. Orangutans lived in highly fragmented habitats, which restricted their pool of potential mates, and female birds were often extremely picky about their male partners.
These factors could confound an evolutionary biologist’s traditional statistical tool kit. Scientists were starting to apply a mathematical technique to estimate historical population sizes for a species from the genome of just a single individual. This method showed sharp population declines in the histories of many different species. Chikhi realized, though, that the apparent declines could be an artifact of treating a structured population as one that evolved with random mating; in that case, the technique could indicate a bottleneck even if all the subgroups were actually growing in size. “This is completely counterintuitive,” he says.
That’s at least partly why, when Pääbo’s 2010 Neanderthal genome came out, Chikhi was impressed with the sheer technical accomplishment but also leery of the findings about hybridization. “It was the type of thing we conclude too quickly based on genetic data,” he says. Pääbo’s work mentioned population structure as a possible alternative explanation—but didn’t follow up.
Just a couple of years later, a pair of independent scientists named Anders Eriksson and Andrea Manica picked up the idea, building a model with simple population structure that explicitly excluded admixture. They simulated human evolution starting from 500,000 years ago and found that their model produced the same genomic patterns Pääbo’s group had interpreted as evidence of hybridization.
“Working with structured models is really out of the comfort zone of a lot of population geneticists,” says Eriksson, now a professor at the University of Tartu in Estonia.
Their research impressed Chikhi. “At the time, I thought people would focus on population structure in the evolution of humans,” he says. Instead, he watched as the inner-Neanderthal hypothesis took on a life of its own. Scientists produced new methods to quantify hybridization but rarely examined whether population structure would yield the same results. To Chikhi, this wasn’t science; it was storytelling, like some of the old narratives about the evolution of racial differences.
Chikhi and Tournebize decided to take a crack at the problem themselves. “I’ve always been very skeptical about science, and population genetics in particular,” says Tournebize, now a researcher at the French National Research Institute for Sustainable Development. “We make a lot of assumptions, and the models we use are very simplistic.” As detailed in a 2024 paper published in Nature Ecology & Evolution, they built a model of human evolution that replaced randomly mating continent-wide populations with many smaller populations linked by occasional migration. Then they let it run—a million times.
At the end of the simulation, they kept the 20 scenarios that produced genomes most similar to the ones in a sample of actual Homo sapiens and Neanderthals. Many of these scenarios produced long segments of DNA like the ones their peers argued could only have been inherited from Neanderthals. They showed that several statistics, which other scientists had proposed as measurements of Neanderthal DNA, couldn’t actually distinguish between hybridization and population structure. What’s more, they showed that many of the models that supported hybridization failed to accurately predict other known features of human evolution.
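The procedure described above, drawing parameters, simulating, and keeping only the runs whose outputs best match the observed data, is a form of approximate Bayesian computation (ABC) by rejection. A toy sketch of that logic, applied to a deliberately simple problem (estimating a normal mean, not their demographic model):

```python
import random

def rejection_abc(observed_stat, simulate, prior_draw, n_runs=5000, keep=20):
    """Rejection ABC: draw parameters from the prior, simulate data under
    each draw, and keep the `keep` draws whose summary statistic lands
    closest to the observed one."""
    runs = []
    for _ in range(n_runs):
        theta = prior_draw()
        runs.append((abs(simulate(theta) - observed_stat), theta))
    runs.sort(key=lambda r: r[0])
    return [theta for _, theta in runs[:keep]]

random.seed(42)

# "Observed" data: a sample mean generated under a true parameter of 3.0.
true_mean = 3.0
observed = sum(random.gauss(true_mean, 1.0) for _ in range(200)) / 200

def simulate(theta):
    # Simulate a dataset under theta and return its summary statistic.
    return sum(random.gauss(theta, 1.0) for _ in range(200)) / 200

accepted = rejection_abc(observed, simulate, lambda: random.uniform(-10, 10))
estimate = sum(accepted) / len(accepted)  # posterior-mean estimate, near 3.0
print(round(estimate, 1))
```

The catch the article describes applies here too: the accepted parameters are only as meaningful as the simulator. If two very different models (say, admixture versus population structure) can both reproduce the observed summary statistics, rejection sampling cannot tell them apart.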
“A model will say there was admixture but then predict diversity that is totally incompatible with what we actually know of human diversity,” Chikhi says. “Nobody seems to care.”
So how did Neanderthal DNA wind up in living people if not via interspecies passion? Chikhi and Tournebize think it’s more likely that it was inherited by both Neanderthals and some sapiens groups in Africa from a common ancestor living at least half a million years ago. If the sapiens groups carrying those genetic variants included the people who migrated out of Africa, then the two human species would have already had the DNA in common when they came into contact in Europe and Asia—no sex required.
“The interpretation of genetic data is not straightforward,” Chikhi says. “We always have to make assumptions. Nobody takes data and magically comes up with a solution.”
Embracing the uncertainty
Most of the half-dozen population geneticists I spoke with praised Chikhi and Tournebize’s ingenuity and appreciated the spirit of their critique. “Their paper forces us to think more critically about the model we use for inference and consider alternatives,” says Aaron Ragsdale, a population geneticist at the University of Wisconsin–Madison. His own work likewise suggests that the earliest Homo sapiens populations in Africa were probably structured—and that this is the likely reason for genomic patterns that other research groups had attributed to hybridization with a mysterious “ghost lineage” of hominins in Africa.
Yet most researchers still believe that modern humans and Neanderthals did probably have children with each other tens of thousands of years ago. Several pointed to the fact that fossil DNA of Homo sapiens who died thousands of years ago had longer chunks of apparent Neanderthal DNA than living people, which is exactly what you would expect if they had a more recent Neanderthal ancestor. (To address this possibility, Chikhi and Tournebize included DNA from 10 ancient humans in their study and found that most of them fit the structured model.) And while the Harvard population geneticist David Reich, who helped design the statistical test from Pääbo’s 2010 study, declined an interview, he did say he thought Chikhi and Tournebize’s model was “weak” and “very contrived,” adding that “there are multiple lines of evidence for Neanderthal admixture into modern humans that make the evidence for this overwhelming.” (Two other authors of that study, Richard Green and Nick Patterson, did not respond to requests for comment.)
Nevertheless, most scientists these days welcome the development of structured, or “spatially explicit,” models that account for the fact that any given member of a population is usually more closely related to individuals living nearby than to those living far away.
Other scientists also say that random mating isn’t the only assumption in population genetics that merits scrutiny. Models rarely factor in natural selection, which can also create genetic patterns that look like hybridization. Another common assumption is that everyone’s DNA mutates at the same, constant rate. “All the theory says the mutation rate is fixed,” says Amos, the Cambridge population geneticist. But he thinks that rate would have slowed drastically in the group of Homo sapiens that expanded to Europe around 45,000 years ago. This, too, could have created genomic patterns that other scientists interpret as evidence of interbreeding with Neanderthals.

The point here isn’t that a complex model of evolution with many moving pieces is necessarily better than a simple one. Scientists need to reduce complexity in order to see the underlying processes more clearly. But simple models require assumptions, and scientists need to reevaluate those assumptions in light of what they learn. “As you get more data, you can justify more complex models of the world,” says Mark Thomas, a population geneticist at University College London, who wrote a history of random mating in population genetics that highlighted how the field was starting to see it as “a limiting assumption as opposed to a simplifying one.”
It can feel discouraging to couch conversations about the past in confusing terms like “population structure” and “mutation rates.” It seems almost antithetical to the spirit of science to talk more about uncertainty at the same time we are developing powerful technologies and enormous data sets for analyzing evolution. These tools often yield novel answers, but they can also limit the questions we ask. The French archaeologist Ludovic Slimak, for example, has complained that the idea of the inner Neanderthal has domesticated our image of Neanderthals and made it difficult to imagine their humanity as distinct from our own. Investigating Neanderthal DNA is sexier to many young researchers than searching for archaeological and fossil evidence of how Neanderthals actually lived.
Loosening our attachment to certain narratives of evolution can create space for wonder at the sheer complexity of life’s history. Ultimately, that’s what Chikhi and Tournebize hope to do. After all, they don’t believe the question of population structure versus hybridization is either-or. It’s possible, and even likely, that both played a role in human evolution. “Our structured model does not necessarily mean that no admixture ever took place,” Chikhi and Tournebize wrote in their study. “What our results suggest is that, if admixture ever occurred, it is currently hard to identify using existing methods.”
Future methods might disentangle the different factors, but it’s just as important, Chikhi says, for scientists to be up-front about their assumptions and test alternatives. “There’s still so much uncertainty on so many aspects of the demographic history of Neanderthals and Homo sapiens,” he notes.
Keep that in mind the next time you read about your inner Neanderthal. The association between this DNA and some diseases may be real, of course—but would journals publish these studies without the additional claim that the DNA is from Neanderthals? Any good storyteller knows that sex sells, even in science.
Ben Crair is a science and travel writer based in Berlin.
Want to understand the current state of AI? Check out these charts.
If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. The 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence, AI’s annual report card, comes out today and cuts through some of that noise.
Despite predictions that AI development may hit a wall, the report says that the top models just keep getting better. People are adopting AI faster than they picked up the personal computer or the internet. AI companies are generating revenue faster than companies in any previous technology boom, but they’re also spending hundreds of billions of dollars on data centers and chips. The benchmarks designed to measure AI, the policies meant to govern it, and the job market are struggling to keep up. AI is sprinting, and the rest of us are trying to find our shoes.
All that speed comes at a cost. AI data centers around the world can now draw 29.6 gigawatts of power, enough to run the entire state of New York at peak demand. Annual water use from running OpenAI’s GPT-4o alone may exceed the drinking water needs of 12 million people. At the same time, the supply chain for chips is alarmingly fragile. The US hosts most of the world’s AI data centers, and one company in Taiwan, TSMC, fabricates almost every leading AI chip.
The data reveals a technology evolving faster than we can manage. Here’s a look at some of the key points from this year’s report.
The US and China are nearly tied
In a long, heated race with immense geopolitical stakes, the US and China are almost neck and neck on AI model performance, according to Arena, a community-driven ranking platform that allows users to compare the outputs of large language models on identical prompts. In early 2023, OpenAI had a lead with ChatGPT, but this gap narrowed in 2024 as Google and Anthropic released their own models. In February 2025, R1, an AI model built by the Chinese lab DeepSeek, briefly matched the top US model, ChatGPT. As of March 2026, Anthropic leads, trailed closely by xAI, Google, and OpenAI. Chinese models from DeepSeek and Alibaba lag only modestly. With the best AI models separated in the rankings by razor-thin margins, they’re now competing on cost, reliability, and real-world usefulness.
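Leaderboards like Arena’s are built from many pairwise human votes: a user sees two anonymous model outputs, picks a winner, and ratings are updated from the accumulated comparisons. A minimal Elo-style update illustrates the idea; this is a simplified rating scheme for intuition, not Arena’s exact methodology:

```python
def expected_score(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won, k=32):
    """Update both ratings after one head-to-head vote (zero-sum)."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

# Two hypothetical models start at 1000; model A wins three straight votes.
ra = rb = 1000.0
for _ in range(3):
    ra, rb = update(ra, rb, a_won=True)
print(round(ra), round(rb))  # 1044 956
```

Because each win against a higher-rated opponent moves the ratings less, the scheme converges toward small gaps between closely matched models, which is why the top of the leaderboard is separated by razor-thin margins.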

The index notes that the US and China have different AI advantages. While the US has more powerful AI models, more capital, and an estimated 5,427 data centers (more than 10 times as many as any other country), China leads in AI research publications, patents, and robotics.
As competition intensifies, companies like OpenAI, Anthropic, and Google no longer disclose their training code, parameter counts, or data-set sizes. “We don’t know a lot of things about predicting model behaviors,” says Yolanda Gil, a computer scientist at the University of Southern California who coauthored the report. This lack of transparency makes it difficult for independent researchers to study how to make AI models safer, she says.
AI models are advancing super fast
Despite predictions that development will plateau, AI models keep getting better and better. By some measures, they now meet or exceed the performance of human experts on tests that aim to measure PhD-level science, math, and language understanding. SWE-bench Verified, a software engineering benchmark for AI models, saw top scores jump from around 60% in 2024 to almost 100% in 2025. In 2025, an AI system produced a weather forecast on its own.
“I am stunned that this technology continues to improve, and it’s just not plateauing in any way,” says Gil.

However, AI still struggles in plenty of other areas. Because the models learn by processing enormous amounts of text and images rather than by experiencing the physical world, AI exhibits “jagged intelligence.” Robots are still in their early days and succeed in only 12% of household tasks. Self-driving cars are farther along: Waymos are now roaming across five US cities, and Baidu’s Apollo Go vehicles are shuttling riders around in China. AI is also expanding into professional domains like law and finance, but no model dominates the field yet.
But the way we test AI is broken
These reports of progress should be taken with a grain of salt. The benchmarks designed to track AI progress are struggling to keep up as models quickly blow past their ceilings, the Stanford report says. Some are poorly constructed—a popular benchmark that tests a model’s math abilities has a 42% error rate. Others can be gamed: when models are trained on benchmark test data, for example, they can learn to score well without getting smarter.
AI companies are also sharing less about how their models are trained, and independent testing sometimes tells a different story from what they report. “A lot of companies are not releasing how their models do in certain benchmarks, particularly the responsible-AI benchmarks,” says Gil. “The absence of how your model is doing on a benchmark maybe says something.”
AI is starting to affect jobs
Within three years of going mainstream, AI is now used by more than half of people around the world, an adoption rate faster than that of the personal computer or the internet. An estimated 88% of organizations now use AI, and four in five university students use it.
It’s early days for deployment, and AI’s impact on jobs is hard to measure. Still, some studies suggest AI is beginning to affect young workers in certain professions. According to a 2025 study by economists at Stanford, employment for software developers aged 22 to 25 has fallen nearly 20% since 2022. The decline might not be pinned on AI alone, as broader macroeconomic conditions could be to blame, but AI appears to be playing a part.

Employers say that hiring may continue to tighten. According to a 2025 survey conducted by McKinsey & Company, a third of organizations expect AI to shrink their workforce in the coming year, particularly in service and supply chain operations and software engineering. AI is boosting productivity by 14% in customer service and 26% in software development, according to research cited by the index, but such gains are not seen in tasks requiring more judgment. Overall, it’s still too early to understand the bigger economic impact of AI.
People have complicated feelings about AI
Around the world, people feel both optimistic and anxious about AI: 59% of people think that it will provide more benefits than drawbacks, while 52% say that it makes them nervous, according to an Ipsos survey cited in the index.
Notably, experts and the public see the future of AI very differently, according to a Pew survey. The biggest gap is around the future of work: While 73% of experts think that AI will have a positive impact on how people do their jobs, only 23% of the American public thinks so. Experts are also more optimistic than the public about AI’s impact on education and medical care, but they agree that AI will hurt elections and personal relationships.

Among all countries surveyed, Americans trust their government least to regulate AI appropriately, according to another Ipsos survey. More Americans worry federal AI regulation won’t go far enough than worry it will go too far.
Governments are struggling to regulate AI
Governments around the world are struggling to regulate AI, but there were some minor successes last year. The EU AI Act’s first prohibitions, which ban the use of AI in predictive policing and emotion recognition, took effect. Japan, South Korea, and Italy also passed national AI laws. Meanwhile, the US federal government moved toward deregulation, with President Trump issuing an executive order seeking to bar states from regulating AI.
Despite this federal action, state legislatures in the US passed a record 150 AI-related bills. California enacted landmark legislation, including SB 53, which mandates safety disclosures and whistleblower protections for developers of AI models. New York passed the RAISE Act, requiring AI companies to publish safety protocols and report critical safety incidents.

But for all the legislative activity, Gil says, regulation is running behind the technology because we don’t really understand how it works. “Governments are cautious to regulate AI because … we don’t understand many things very well,” she says. “We don’t have a good handle on those systems.”
Opinion: I’m a MAHA activist. I went into the public health lion’s den — and it changed how I think
The past few weeks have been nothing but discouraging for those of us who helped create the Make America Healthy Again movement, including a silly executive order on glyphosate that is anathema to what we have fought for. I’d be lying if I said that my heart hasn’t been bent toward repentance for my part in the whole thing. I helped champion Bobby Kennedy as a campaign volunteer, and when he joined up with then-candidate Donald Trump, I reluctantly decided that the trade-offs were worth what I believed Kennedy could advocate for within the walls of a Trump White House: the best fixes for a very sick and broken nation.
Yet I found myself recently, and reluctantly, headed to the citadel of arrogance: Washington (well, Arlington, Va., to be more specific). At the invitation of Brinda Adhikari — one of the hosts of the podcast “Why Should I Trust You?” — I attended the Association of Schools and Programs of Public Health’s annual meeting, where I spoke on a panel about engaging in civil conversation in a session called “A Dialogue Between Academic Public Health and MAHA.”
In Memoriam: Judith L. Rapoport, MD
Dr. Judith L. Rapoport has left an indelible mark on the field of obsessive compulsive disorder (OCD) — not only through her extraordinary scientific contributions, but through the compassion, curiosity, and humanity she brought to her work. For countless individuals and families, her legacy is not just measured in research breakthroughs, but in hope restored and lives changed.
At a time when OCD was widely misunderstood, often hidden, and rarely discussed, Dr. Rapoport helped bring it into the light. Through her pioneering work at the National Institute of Mental Health, she gave shape and voice to a condition that many struggled to name. She was among the first to recognize that OCD could affect children, and that these young people deserved understanding, accurate diagnosis, and effective care. This insight alone transformed the trajectory of the field and opened doors for earlier intervention and support for families who had long felt alone.
What set Dr. Rapoport apart was not only her intellect, but her deep commitment to the people behind the science. She approached each question with both rigor and empathy, helping to establish treatments that have since become the gold standard, including exposure and response prevention (ERP) and medication. Her work helped shift the narrative—away from blame or misunderstanding, and toward recognition of OCD as a real, treatable medical condition.
Beyond the lab and clinic, Dr. Rapoport had a rare gift for storytelling. Her book, The Boy Who Couldn’t Stop Washing, brought readers into the lived experience of OCD with clarity and care. For many, it was the first time they saw their own struggles reflected with such honesty and dignity. It helped families feel seen, understood, and less alone — an impact that continues to ripple outward today. The Boy Who Couldn’t Stop Washing impacted professionals as well, providing an eye-opening introduction and gateway to the world of working with OCD.
For these accomplishments and more, Dr. Rapoport received the IOCDF’s 2018 Career Achievement Award. Her influence extends through the many clinicians and researchers she mentored, each carrying forward her dedication to both excellence and empathy. Through them, her work continues to grow, shaping the future of OCD research and care in ways that are both profound and deeply human.
To honor Dr. Judith Rapoport is to honor a career defined not only by discovery, but by kindness and purpose. She helped the world better understand OCD — but more importantly, she helped people living with OCD feel understood. And in doing so, she changed lives in ways that will endure for generations.