On this week’s episode of the Readout LOUD: a pancreatic cancer breakthrough and new hope for an off-the-shelf CAR-T treatment in lymphoma.
Your favorite biotech podcasting crew is back to full strength this week, and we’re bringing you two newsy guest interviews. First, we’ll talk with Allogene Therapeutics Chief Medical Officer Zach Roberts about new study results that bolster the company’s efforts to develop an off-the-shelf CAR-T therapy for B-cell lymphoma, a type of blood cancer.
Thousands of genes are expressed differently in the brains of men and women, researchers have discovered.
The findings could help explain differences in neurodevelopmental, psychiatric, and neurodegenerative disorders between the sexes.
While men are more likely to experience schizophrenia, attention deficit hyperactivity disorder, and Parkinson’s disease, women are more prone to mood disorders and Alzheimer’s disease.
The U.S. study, in Science, is the first systematic single-cell survey of sex differences in gene expression across multiple regions of the human brain.
“Together, these findings provide a comprehensive map of molecular sex differences in the human brain and offer initial insight into their underlying mechanisms and potential functional consequences,” Alex DeCasien, PhD, from the National Institute of Mental Health in Bethesda, Maryland, told Inside Precision Medicine.
DeCasien and co-workers conducted a high-resolution analysis of gene expression in tissue samples from the brains of 15 men and 15 women using single-nucleus RNA sequencing.
They then used data from earlier large neuroimaging studies to select six cortical regions to sample: four that showed sex-related differences in grey matter volume and two that did not.
The team found subtle but widespread differences in gene activity between men and women. Biological sex explained very little of the variance in gene expression across the brain, at less than 1%, but differences were widespread—with more than 3000 genes showing different expression according to sex in at least one cortical region.
The greatest sex-related differences in gene expression were on the sex chromosomes. However, most of the genes showing sex-related variations in expression were autosomal—carried on one of the 22 numbered non-sex chromosomes.
Sex steroid hormones such as estrogen and testosterone were the predominant drivers of sex-biased expression of genes on these autosomal chromosomes.
Surprisingly, more than half of the X chromosome genes in women were expressed from both alleles in at least one cell type. This indicated that many had escaped X chromosome inactivation—a female phenomenon in which one of the two X chromosomes is switched off early in development so that women do not produce double the dose of X-linked gene products relative to men.
“That finding has implications for understanding sex-biased disease susceptibility because several genes implicated in neurodevelopmental disorders reside on the X chromosome,” commented Jessica Tollkuhn, PhD, from Cold Spring Harbor Laboratory, and S Marc Breedlove, from Michigan State University, in an accompanying Perspective article.
They noted that autosomal genes showing sex-biased expression were substantially enriched for extracellular matrix components, hormone signaling pathways, and metabolic processes. “Genes with greater expression in women were enriched for mitochondrial and synaptic functions, whereas male-biased genes were associated with metabolic and structural pathways,” the editorialists added.
“By pinpointing these sexually differentiated processes, the data provide a treasure trove for the discovery of biomarkers of and/or therapeutic targets for differential disease risk in men and women.”
DeCasien and team added: “These findings raise the possibility that sex differences in gene expression modulate the magnitude of genetic effects at risk loci, contributing to differences in disease vulnerability and to reduced portability of polygenic risk prediction across sexes.”
WASHINGTON — Health Secretary Robert F. Kennedy Jr. returned to Capitol Hill Thursday, where he defended the administration’s efforts to fight health care fraud and improve affordability — and worked to avoid discussions about vaccine policy.
An hours-long Ways and Means hearing Thursday morning covered a wide range of topics related to Kennedy’s Department of Health and Human Services and kicked off a marathon series of testimonies about the president’s proposed budget.
Later, during a hearing with the House Appropriations health subcommittee, Kennedy said the president would release the name of the nominee to lead the Centers for Disease Control and Prevention before the end of the week. (Soon after, Trump announced the nominee.)
The scientists whose work spurred the development of powerful obesity drugs like Eli Lilly’s Zepbound are now raising a provocative hypothesis: Perhaps targeting the GLP-1 hormone is actually not necessary to achieve effective weight loss.
A group of researchers led by Richard DiMarchi and Matthias Tschöp has created an experimental drug that activates receptors of the GIP and glucagon hormones. They propose — based on rodent and monkey studies — that this kind of molecule, when administered at high enough doses, may result in weight loss comparable to the weight loss seen with drugs that include GLP-1 as a target, and without the tolerability issues like nausea and vomiting that often come with the approved treatments, according to a peer-reviewed draft paper published this week.
The research, funded by a biotech called BlueWater Biosciences, would still need to be confirmed in humans; oftentimes results seen in animals don’t translate in the clinic. But the proposed approach, outlined in the journal Molecular Metabolism by some of the most well-known scientists in the field, is likely to stir controversy, as it challenges a central notion underpinning not just the development of approved obesity products but also next-generation versions.
There’s a fault line running through enterprise AI, and it’s not the one getting the most attention. The public conversation still tracks foundation models and benchmarks—GPT versus Gemini, reasoning scores, and marginal capability gains. But in practice, the more durable advantage is structural: who owns the operating layer where intelligence is applied, governed, and improved. One model treats AI as an on-demand utility; the other embeds it as an operating layer—the combination of operations software, data capture, feedback loops, and governance that sits between models and real work—that compounds with use.
Model providers like OpenAI and Anthropic sell intelligence as a service: you have a problem, you call an API, you get an answer. That intelligence is general-purpose, largely stateless, and only loosely connected to the day-to-day operations where decisions are made. It’s highly capable and increasingly interchangeable. The distinction that matters is whether intelligence resets on every prompt or accumulates over time.
Incumbent organizations, by contrast, can treat AI as an operating layer: instrumentation across operations, feedback loops from human decisions, and governance that turns individual tasks into reusable policy. In that setup, every exception, correction, and approval becomes a chance to learn—and intelligence can improve as the platform absorbs more of the organization’s work. The organizations most likely to shape the enterprise AI era are those that can embed intelligence directly into operational platforms and instrument those platforms so work generates usable signals.
The prevailing narrative says nimble startups will out-innovate incumbents by building AI-native from scratch. If AI is primarily a model problem, that story holds. But in many enterprise domains, AI is a systems problem—integrations, permissions, evaluation, and change management—where advantage accrues to whoever already sits inside high-volume, high-stakes operations and converts that position into learning and automation.
The inversion: AI executes, humans adjudicate
Traditional services organizations are built on a simple architecture: humans use software to do expert work. Operators log into systems, navigate operations, make decisions, and process cases. Technology is the medium. Human judgment is the product.
An AI-native platform inverts this. It ingests a problem, applies accumulated domain knowledge, executes autonomously what it can with high confidence, and routes targeted sub-tasks to human experts when the situation demands judgment that the system can’t yet reliably provide.
But inverting human-AI interaction isn’t just a UI redesign—it requires raw material. It’s only possible when the platform is built on a foundation of domain expertise, behavioral data, and operational knowledge accumulated over years.
The three compounding assets incumbents already own
AI-native startups begin with a clean architectural slate and can move quickly. What they can’t easily manufacture is the raw material that makes domain AI defensible at scale:
Proprietary operational data
A large workforce of domain experts whose day-to-day decisions generate training signals
Accumulated tacit knowledge about how complex work actually gets done
Services companies already have all three. But these ingredients aren’t moats on their own. They become an advantage only when a company can systematically convert messy operations into AI-ready signals and institutional knowledge—then feed the results back into operations so the system keeps improving.
Codifying expertise into reusable signals
In most services organizations, expertise is tacit and perishable. The best operators know things they cannot easily articulate: heuristics developed over the years, edge-case intuitions, and pattern recognition that operate below the level of conscious reasoning.
At Ensemble, the strategy for addressing this challenge is knowledge distillation: the systematic conversion of expert judgment and operational decisions into machine-readable training signals.
In health-care revenue cycle management, for example, systems can be seeded with explicit domain knowledge and then deepen their coverage through structured daily interaction with operators. In Ensemble’s implementation, the system identifies gaps, formulates targeted questions, and cross-checks answers across multiple experts to capture both consensus and edge-case nuance. It then synthesizes these inputs into a living knowledge base that reflects the situational reasoning behind expert-level performance.
Turning decisions into a learning flywheel
Once a system is constrained enough to be trusted, the next question is how it gets better without waiting for annual model upgrades. Every time a skilled operator makes a decision, they generate more than a completed task. They generate a potential labeled example—context paired with an expert action (and sometimes an outcome). At scale, across thousands of operators and millions of decisions, that stream can power supervised learning, evaluation, and targeted forms of reinforcement—teaching systems to behave more like experts in real conditions.
For example, if an organization processes 50,000 cases a week and captures just three high-quality decision points per case, that’s 150,000 labeled examples every week without creating a separate data-collection program.
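As a minimal sketch of that capture step (the `Decision` schema and field names are illustrative assumptions, not Ensemble's actual pipeline), pairing a branch-point decision with its context yields one supervised example, and the article's volume figure falls out of simple arithmetic:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    """One operator decision captured at a branch point (hypothetical schema)."""
    case_id: str
    context: dict                   # what the operator saw when deciding
    action: str                     # what the expert chose
    outcome: Optional[str] = None   # filled in later, if an outcome is tracked


def to_labeled_example(decision: Decision) -> dict:
    """Pair the context with the expert action: one supervised training example."""
    return {"input": decision.context, "label": decision.action}


# The article's back-of-the-envelope volume:
cases_per_week = 50_000
decision_points_per_case = 3
examples_per_week = cases_per_week * decision_points_per_case  # 150,000
```

The point of the sketch is that no separate labeling program is needed: the label is the action the expert took anyway.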
A more advanced human-in-the-loop design places experts inside the decision process, so systems learn not just what the right answer was, but how ambiguity gets resolved. Practically, humans intervene at branch points—selecting from AI-generated options, correcting assumptions, and redirecting operations. Each intervention becomes a high-value training signal. When the platform detects an edge case or a deviation from the expected process, it can prompt for a brief, structured rationale, capturing decision factors without requiring lengthy free-form reasoning logs.
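One way to sketch that branch-point routing (the threshold, function names, and record schema are assumptions for illustration, not a description of any specific product):

```python
from typing import Callable


def route_case(case: dict, confidence: float,
               expert: Callable[[dict], dict],
               threshold: float = 0.9) -> dict:
    """Act autonomously when the model is confident; otherwise hand the
    branch point to a human and record their choice plus a brief
    rationale as a labeled training signal."""
    if confidence >= threshold:
        # High confidence: execute without human involvement.
        return {"handled_by": "ai", "training_signal": None}
    # Below threshold: the human resolves the ambiguity, and the
    # intervention itself becomes a high-value training example.
    intervention = expert(case)  # e.g. {"choice": ..., "rationale": ...}
    return {
        "handled_by": "human",
        "training_signal": {
            "input": case,
            "label": intervention["choice"],
            "rationale": intervention.get("rationale"),
        },
    }
```

Because the rationale is captured as a short structured field at the moment of intervention, the system learns how ambiguity was resolved, not just which answer was chosen.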
Building toward expertise amplification
The goal is to permanently embed the accumulated expertise of thousands of domain experts—their knowledge, decisions, and reasoning—into an AI platform that amplifies what every operator can accomplish. Done well, this produces a quality of execution that neither humans nor AI achieve independently: higher consistency, improved throughput, and measurable operational gains. Operators can focus on more consequential work, supported by an AI that has already completed the analytical groundwork across thousands of analogous prior cases.
The broader implication for enterprise leaders is straightforward. Advantage in AI won’t be determined by access to general-purpose models alone. It will come from an organization’s ability to capture, refine, and compound what it knows (its data, decisions, and operational judgment) while building the controls required for high-stakes environments. As AI shifts from experimentation to infrastructure, the most durable edge may belong to the companies that understand the work well enough to instrument it and can turn that understanding into systems that improve with use.
This content was produced by Ensemble. It was not written by MIT Technology Review’s editorial staff.
The AI boom has hit across industries, and public sector organizations are facing pressure to accelerate adoption. At the same time, government institutions face distinct constraints around security, governance, and operations that set them apart from their business counterparts. For this reason, purpose-built small language models (SLMs) offer a promising path to operationalize AI in these environments.
A Capgemini study found that 79 percent of public sector executives globally are wary about AI’s data security, an understandable figure given the heightened sensitivity of government data and the legal obligations surrounding its use. As Han Xiao, vice president of AI at Elastic, says, “Government agencies must be very restricted about what kind of data they send to the network. This sets a lot of boundaries on how they think about and manage their data.”
The fundamental need for control over sensitive information is one of many factors complicating AI deployment, particularly when compared against the private sector’s standard operational assumptions.
Unique operational challenges
When private-sector entities expand AI, they typically assume certain conditions will be in place, including continuous connectivity to the cloud, reliance on centralized infrastructure, acceptance of incomplete model transparency, and limited restrictions on data movement. For many state institutions, however, accepting these conditions could be anything from dangerous to impossible.
Government agencies must ensure that their data stays under their control, that information can be checked and verified, and that operational disruptions are kept to an absolute minimum. At the same time, they often have to run their systems in environments where internet connectivity is limited, unreliable, or unavailable. These complexities prevent many promising public sector AI pilots from moving beyond experimentation. “Many people undervalue the operating challenge of AI,” Xiao says. “The public sector needs AI to perform reliably on all kinds of data, and then to be able to grow without breaking. Continuity of operations is often underestimated.” An Elastic survey of public sector leaders found that 65 percent struggle to use data continuously in real time and at scale.
Infrastructure constraints compound the problem. Government organizations may also struggle to obtain the graphics processing units (GPUs) used to train and access complex AI models. As Xiao points out, “Government doesn’t often purchase GPUs, unlike the private sector—they’re not used to managing GPU infrastructure. So accessing a GPU to run the model is a bottleneck for much of the public sector.”
A smaller, more practical model
The many nonnegotiable requirements in the public sector make large language models (LLMs) untenable. But SLMs can be housed locally, offering greater security and control. SLMs are specialized AI models that typically use billions rather than hundreds of billions of parameters, making them far less computationally demanding than the largest LLMs.
The public sector does not need to build ever-larger models housed in offsite, centralized locations. An empirical study found that SLMs performed as well as or better than LLMs. SLMs allow sensitive information to be used effectively and efficiently while avoiding the operational complexity of maintaining large models. Xiao puts it this way: “It is easy to use ChatGPT to do proofreading. It’s very difficult to run your own large language models just as smoothly in an environment with no network access.”
SLMs are purpose-built for the needs of the department or agency that will use them. The data is stored securely outside the model and accessed only when queried. Carefully engineered prompts ensure that only the most relevant information is retrieved, producing more accurate responses. Using methods such as smart retrieval, vector search, and verifiable source grounding, AI systems can be built that cater to public sector needs.
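A toy illustration of the retrieve-then-ground pattern described above (keyword overlap stands in for real vector search, and the prompt wording is an assumption; a production system would use embeddings and a vector index):

```python
def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Rank (doc_id, text) pairs by term overlap with the query.
    A real deployment would embed both sides and use vector search."""
    terms = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(terms & set(d[1].lower().split())),
                  reverse=True)[:k]


def grounded_prompt(query: str, documents: list) -> str:
    """Prepend the retrieved passages, tagged with source IDs, so the
    model must answer from verifiable sources rather than its weights."""
    hits = retrieve(query, documents)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return f"Answer using only the sources below.\n{context}\n\nQ: {query}"
```

Because every passage carries its source ID into the prompt, an answer can be audited back to the document it came from, which is the property the audit and GDPR discussion below turns on.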
Thus, the next phase of AI adoption in the public sector may be to bring the AI tool to the data, rather than sending the data out into the cloud. Gartner predicts that by 2027, small, specialized AI models will be used three times more than LLMs.
Superior search capabilities
“When people in the public sector hear AI, they probably think about ChatGPT. But we can be much more ambitious,” says Xiao. “AI can revolutionize how the government searches and manages the large amounts of data they have.”
Looking beyond chatbots reveals one of AI’s most immediate opportunities: dramatically improved search. Like many organizations, the public sector has mountains of unstructured data—including technical reports, procurement documents, minutes, and invoices. Today’s AI, however, can deliver results sourced from mixed media, like readable PDFs, scans, images, spreadsheets, and recordings, and in multiple languages. All of this can be indexed by SLM-powered systems to provide tailored responses and to draft complex texts in any language, while ensuring outputs are legally compliant. “The public sector has a lot of data, and they don’t always know how to use this data. They don’t know what the possibilities are,” says Xiao.
Even more powerful, AI can help government employees interpret the data they access. “Today’s AI can provide you with a completely new view of how to harness that data,” says Xiao. A well-trained SLM can interpret legal norms, extract insights from public consultations, support data-driven executive decision-making, and improve public access to services and administrative information. This can contribute to dramatic improvements in how the public sector conducts its operations.
The small-language promise
Focusing on SLMs shifts the conversation from how comprehensive the model can be to how efficient it is. LLMs incur significant performance and computational costs and require specialized hardware that many public entities cannot afford. Despite requiring some capital expenses, SLMs are less resource-intensive than LLMs, so they tend to be cheaper and reduce environmental impact.
Public sector agencies often face stringent audit requirements, and SLM algorithms can be documented and certified as transparent. Some countries, particularly in Europe, also have privacy regulations such as GDPR that SLMs can be designed to meet.
Tailored training data produces more targeted results, reducing the errors, bias, and hallucinations that AI is prone to. As Xiao puts it, “Large language models generate text based on what they were trained on, so there is a cut-off date when they were trained. If you ask about anything after that, it will hallucinate. We can solve this by forcing the model to work from verified sources.”
Risks are also minimized by keeping data on local servers, or even on a specific device. This isn’t about isolation but about strategic autonomy to enable trust, resilience, and relevance.
By prioritizing task-specific models designed for environments that process data locally, and by continuously monitoring performance and impact, public sector organizations can build lasting AI capabilities that support real-world decisions. “Do not start with a chatbot; start with search,” Xiao advises. “Much of what we think of as AI intelligence is really about finding the right information.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Cyberscammers are bypassing banks’ security with illicit tools sold on Telegram
Inside a money-laundering center in Cambodia, an employee opens a banking app on his phone. It asks for a photo linked to the account, so he uploads a picture of a 30-something Asian man.
The app then requests a video “liveness” check. The scammer holds up a static image of a woman who doesn’t match the account. After 90 seconds, he’s in.
The exploit relies on illicit hacking services sold on Telegram that break “Know Your Customer” (KYC) facial scans. MIT Technology Review found 22 channels and groups advertising these services. This is what we discovered.
—Fiona Kelliher
Is carbon removal in trouble?
—Casey Crownhart
Last week, news emerged that Microsoft was pausing carbon removal purchases. It was a bombshell—Microsoft effectively is the carbon removal market, single-handedly purchasing around 80% of all contracted carbon removal.
The report sparked fear across the industry, raising questions about the future of carbon removal and the role of Big Tech. Read the full story.
This story is from The Spark, our weekly newsletter exploring the technology that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.
The quest to measure our relationship with nature
—Emma Marris
Humans have done some destructive things to the ecosystems around us. But conservationists are learning that we can also be a force for good.
To understand how we work best with nature, a group of scientists, authors, and philosophers have developed new measurements of human-nonhuman relationships. Now, a team in the United Nations is continuing the work. Find out why—and what they hope to achieve.
This story is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands on Wednesday, April 22.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Ukraine says Russian troops have surrendered to robots
They claim a fully automated attack captured army positions for the first time in history. (404 Media)
+ Europe’s vision for future wars is full of drones. (MIT Technology Review)

2 Monkeys with BCIs are navigating virtual worlds using only their thoughts
The research could help people with paralysis. (New Scientist)
+ But these implants still face a critical test. (MIT Technology Review)

3 NASA wants to put nuclear reactors on the Moon
They could power lunar bases and extend spaceflight. (Wired $)
+ NASA is also building a nuclear-powered spacecraft. (MIT Technology Review)

4 Plans for online age verification in the US are raising red flags
Experts warn of compliance issues and potential data breaches. (NBC News)
+ In the EU, an age verification app is about to launch. (Reuters $)

5 An AI chip boom just pushed Taiwan’s stock market past the UK’s
It’s risen past $4 trillion to become the world’s seventh largest. (FT $)
+ Future AI chips could be built on glass. (MIT Technology Review)

6 The public backlash against data centers is intensifying in the US
Protests and litigation are blocking projects. (CNBC)
+ One potential solution? Putting them in space. (MIT Technology Review)

7 Five-minute EV charging is becoming a reality
China’s BYD has started rolling it out. (Gizmodo)
+ “Extended-range electric vehicles” are about to hit US streets. (Atlantic $)

8 Stealth signals are bypassing Iran’s internet blackout
Files hidden in satellite TV broadcasts keep information flowing. (IEEE)

9 Shoe brand Allbirds made a shock pivot to AI, sending stock up 700%
No bubble to see here, folks. (CNBC)
+ What even is the AI bubble? (MIT Technology Review)

10 The largest ever map of the universe is complete
It captures 47 million galaxies and quasars. (Space.com)
Quote of the day
“I like the internet as much as anybody, but we’ve got to go on an internet diet. We don’t need to pay for corporations to do their internet stuff.”
—Sylvia Whitt, a 78-year-old retiree based in Virginia, tells the Washington Post why she’s protesting against data centers.
One More Thing
AI and the future of sex
Some Republican lawmakers want to criminalize porn and arrest its creators. But what if porn is wholly created by an algorithm? In that case, whether it’s obscene, ethical, or safe becomes a secondary issue. The primary concern will be what it means for porn to be “real”—and what the answer demands from all of us.
Technological advances could even remove the “messy humanity” from sex itself. The rise of AI-generated porn may be a symptom of a new synthetic sexuality, not the cause. Read the full story.
—Leo Herrera
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ An animator turned his son’s drawings into epic anime characters.
+ Hundreds of baby green sea turtles made a spectacular first journey to the ocean.
+ You can now track rocket launches from take-off to orbit in real time.
+ These musical mistakes prove that even the classics aren’t perfect.
The superior colliculus (SC) plays a crucial role in multisensory integration, visual information processing, saccadic target selection, visual selective attention, and decision making. In particular, the SC has a key role in oculomotor coordination, following a rostro-caudal organization. The rostral SC, which corresponds to the foveal representation, is linked to fixation, microsaccades, smooth pursuit, and vergence adjustments. In contrast, the caudal SC, representing the more peripheral visual field, is associated with large gaze shifts (saccades). However, evidence regarding whether this functional gradient is preserved in the human SC remains limited. In this study, we employed a sequence-following visual-motor task to specifically engage SC activity. We measured blood oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) responses to brief neural activity, known as the hemodynamic response function (HRF). We showed a spatial gradient of positive BOLD HRFs (pHRFs) along the rostro-caudal axis of the SC. The pHRF was primarily located in the rostral SC and gradually weakened toward the caudal SC, where negative HRFs (nHRFs) were often observed. The systematic rostro-caudal evolution of HRFs was consistent both within and across subjects, in line with results from previous electrophysiological studies. Our work demonstrates the feasibility of using ultra-high-field fMRI to non-invasively examine neurovascular dynamics in small, deeply located subcortical structures of the human brain.