Forecasting Protein Aggregation with an Improved Algorithm

A new, improved algorithm for studying protein aggregation could help biologics manufacturers design better-performing products with less experimental effort. The software, developed by scientists in Barcelona, can analyze the aggregation of proteins drawn from the AlphaFold protein structure database and help companies identify more soluble alternatives.

“Protein aggregation is a bottleneck in the production and manufacturing of biologics,” explains Salvador Ventura, PhD, a professor in the department of biochemistry and molecular biology at the Autonomous University of Barcelona (UAB).

The problem, he explains, is that many proteins used as therapies evolved to be soluble at the concentrations found in the human body. But therapeutics, such as antibodies, are produced in as high a concentration as possible.

“We want the product to deliver the maximum dose with the minimum amount of injection,” he says. “But proteins aren’t designed to be soluble at these concentrations, and their aggregation causes different effects.”

These can include the patient’s immune system reacting negatively or the aggregated product ceasing to work.

To overcome this problem, Ventura says, companies and labs try to forecast protein aggregation, usually experimentally with high-throughput combinatorial assays. But these approaches are often impractical for startups and small spinoff companies.

A computational approach, such as his algorithm, now in its fourth generation, can help these companies predict and design around protein aggregation.

It can pull protein structures from AlphaFold and analyze their likely aggregation using molecular dynamics simulations. Users, he says, can also choose to mutate selected parts of a protein, identify other proteins in the same family, and even look at the possible impact of pH on solubility.

“Our lab is both computational and experimental, so most of the designs we’ve made, we’ve already proved by experiment,” Ventura says.

Limitations include the scarcity of high-quality experimental data available to train the algorithm, he explains.

Going forward, the team intends to model which solution and formulation conditions best maintain the stability of therapeutic proteins in manufacturing and clinical settings. “We’re working on these next steps already,” he says. “Although, as yet, we don’t have an algorithm for this.”

Ventura spoke about the latest version of his algorithm at the Bioprocessing Summit Europe in March.

Redefining Bioprocessing Using Reservoirs of Biochemical Diversity

In the global race to improve how medicines are made, scientists are turning to an unlikely source of innovation: the microscopic life thriving in some of the harshest soils on Earth. Beneath wild plants in Saudi Arabia’s arid landscapes, researchers have identified biological tools that could redefine bioprocessing.

A recent study by Saudi Arabia-based Rewaa S. Jalal, PhD, associate professor of biology at the University of Jeddah, and Fatimah M. Alshehrei, PhD, associate professor of microbiology at Umm al-Qura University, focuses on the rhizosphere—the thin layer of soil surrounding plant roots—where dense microbial communities interact with their host plants. These environments, shaped by extreme heat and limited water, are proving to be reservoirs of biochemical diversity with direct relevance to drug manufacturing.

The researchers zeroed in on enzymes known as glycosyltransferases, which play a central role in building complex sugar structures on proteins and other molecules. In pharmaceutical bioprocessing, this step—glycosylation—is crucial. It determines how therapeutic proteins behave, influencing everything from stability to effectiveness and immune compatibility.

What makes these enzymes especially compelling is their environmental pedigree. The microbes that produce them have adapted to survive under intense stress, evolving systems that remain functional in high temperatures and low-moisture conditions. These traits could translate into more robust and flexible bioprocessing workflows, where maintaining strict environmental control is often costly and technically demanding.

The study also reveals that different plant species cultivate distinct microbial communities, each enriched with unique enzyme families. For example, the rhizosphere of Moringa oleifera shows a different enzymatic profile compared to Abutilon fruticosum, highlighting how plant-microbe partnerships shape biochemical potential. For bioprocessing, this diversity could enable the selection of highly specific enzymes tailored to particular drug production needs.

Beyond protein modification, the identified enzymes are linked to the synthesis of key biomolecules such as cellulose, chitin, and β-glucans. These materials are already used in areas like drug delivery, wound care, and tissue engineering. Improving how they are produced through advanced bioprocessing could expand their applications and reduce manufacturing constraints.

Despite the promise, the researchers emphasize that their findings are based on computational analysis of genetic data. The real-world performance of these enzymes in industrial bioprocessing systems remains to be tested.

Still, the implications are significant. As pharmaceutical companies seek more sustainable and efficient ways to produce complex biologics, enzymes shaped by extreme environments might offer a powerful advantage. Instead of engineering solutions from scratch, scientists are increasingly uncovering them in nature—already optimized through evolution.

In this emerging vision of bioprocessing, the future of medicine might be shaped not only by cutting-edge technology but also by the resilient microbial ecosystems hidden beneath desert plants.

A new US Department of War-backed phase 2a study will test BXCL501’s efficacy in easing acute stress reactions and preventing PTSD.

Lung Screening Incidental Findings May Guide Follow-Up for Other Cancers

An analysis of the US National Lung Screening Trial (NLST) has found that the presence of certain types of abnormalities in regions outside of the lungs on low-dose computed tomography (LDCT) images may be associated with a significantly increased risk for extrapulmonary cancer.

The abnormalities, termed significant incidental findings (SIFs), could help clinicians decide when follow-up care is likely to catch extrapulmonary cancer early and when it may not be necessary.

“In this paper, we provide an evidence base for making decisions on abnormalities outside of the lungs that might be seen at lung screening,” said study author Ilana Gareen, PhD, a professor of epidemiology at Brown University School of Public Health. “The goal is to give physicians and patients better data so that they can make more informed choices about those abnormalities that should be considered for follow-up and those that most likely can be ignored.”

Writing in JAMA Network Open, Gareen and co-authors explain that LDCT lung cancer screening frequently detects SIFs unrelated to lung cancer; in the NLST, 34% of 26,455 patients screened with LDCT had SIFs reported but the nature of the SIFs varied.

And although there are recommendations for reporting and addressing SIFs, there is limited evidence for an association between SIFs detected at LDCT lung cancer screening and extrapulmonary cancer diagnoses.

To address this, Gareen and team analyzed data from 75,104 LDCT screening rounds performed in 26,445 individuals (mean age, 61 years; 59.0% men) who were randomly assigned to receive LDCT during the NLST. The participants had a history of heavy smoking (≥30 pack-years), meaning they were also at high risk for several extrapulmonary cancers, including pancreatic, bladder, and kidney cancer.

The researchers focused on SIFs that were labelled as potentially indicative of extrapulmonary cancer (cancer SIF), rather than those that possibly indicated emphysema or cardiovascular disease.

They report that cancer SIFs were recorded for 2265 (3.0%) screening rounds in 1807 (6.8%) participants across the three screening rounds they received.

Participants with cancer SIFs were significantly older than those with no cancer SIF (mean 62.1 vs. 61.4 years) and significantly more likely to have a history of a smoking-related disease (68.6% vs. 65.7%).

Within one year of a screening round, 1025 participants were diagnosed with an extrapulmonary cancer. Of these, 67 (6.5%) had a cancer SIF on LDCT, which corresponds to 3.0% of the screening rounds with a cancer SIF.

Overall, the risk for extrapulmonary cancer among the people with a cancer SIF was 29.6 per 1000 screening rounds compared with 13.3 per 1000 screening rounds in those without a cancer SIF. After adjustment for potential confounders, the marginal risk difference between the two groups was 13.9 per 1000 participants, suggesting that for every 1000 people screened, the presence of a cancer SIF is associated with 13.9 additional cases of extrapulmonary cancer.
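
For readers who want to see how the adjusted figure relates to the raw rates, the short Python sketch below takes the crude difference implied by the quoted per-1000 rates; it is illustrative arithmetic only, not the authors’ adjusted analysis.

    # Illustrative arithmetic only (not the study's adjusted model): the crude risk
    # difference implied by the per-1000 rates quoted above.
    risk_with_sif = 29.6 / 1000     # risk per screening round when a cancer SIF is present
    risk_without_sif = 13.3 / 1000  # risk per screening round with no cancer SIF

    crude_difference = (risk_with_sif - risk_without_sif) * 1000
    print(f"Unadjusted difference: {crude_difference:.1f} per 1000")  # ~16.3 per 1000

    # The paper's 13.9 per 1000 is smaller because it is a marginal risk difference
    # adjusted for confounders such as age and history of smoking-related disease.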

When the researchers looked at specific cancer types, they found that the marginal risk difference was substantially higher for urinary cancers, at 17.0 per 1000 participants. It was 5.0 for digestive cancer, 12.3 for breast cancer, and 13.8 for other cancers including lymphoma and leukemia.

“In general, if an abnormality is found that might indicate cancer, the patient receives additional imaging to evaluate that abnormality,” Gareen told Inside Precision Medicine. “Our paper provides additional information as to those abnormalities that should be considered to increase the risk of a cancer diagnosis.”

Importantly, mortality from extrapulmonary cancer accounted for 22.3% of the certified deaths in the LDCT arm of the NLST. Therefore “early detection of these cancers may facilitate early treatment and potentially reduce associated morbidity and mortality,” the authors write. “Identification of cancer SIFs associated with extrapulmonary cancers in NLST participants could be used to plan appropriate diagnostic evaluations for patients undergoing lung cancer screening.”

Gareen said the next step will be to determine if the findings are replicated in lung screening in the community, or if the rate in community screening is higher or lower.

In an accompanying commentary, Patrick Senior and Andrew Creamer, both from Gloucestershire Hospitals NHS Foundation Trust in Gloucester, United Kingdom, point out that the false positive rate for a cancer SIF was 97% but say “it is hard to imagine a scenario in which an incidental finding with even a possibility of representing cancer would be disregarded.”

However, they note that “when considered in the context of the numbers of people eligible for lung cancer screening programs around the world, acting on such findings poses a considerable additional burden on the health systems that must investigate them.”

Senior and Creamer say that the results “underscore the importance of both a robust health economics analysis of how screening programs manage such incidental findings and patient-centered research to understand the impact that such unexpected results may have on the individual. Further research is needed to ensure that screening programs are confident when faced with information they did not ask for.”

Childhood Dementia Explained by Synaptic Dysfunction, Opens New Therapies

In a new study published in Nature Communications titled, “Modelling synaptic dysfunction in childhood dementia using human iPSC-derived cortical networks,” researchers from Flinders University in Adelaide have uncovered how hyperactive and dysregulated synaptic circuits emerge in cortical neurons derived from children affected by Sanfilippo syndrome, a common form of childhood dementia.

In Australia, an estimated 1400 children currently live with childhood dementia, with hundreds of thousands of cases worldwide. Sanfilippo syndrome is a rare genetic condition that causes fatal brain damage. Children typically reach early developmental milestones before rapidly losing cognitive skills, speech, and mobility. Early symptoms often include hyperactivity and sleep disturbance. 

Alterations in synaptic communication play key roles in neurodegenerative disease progression and cognitive decline. Yet few studies have explored how imbalances between synaptic excitation and inhibition contribute to pediatric neurodegenerative disorders.

Cedric Bardy, PhD, professor and head of the Laboratory for Human Neurophysiology and Genetics at the South Australian Health and Medical Research Institute (SAHMRI), describes the study findings as “significant progress.” Chronic overactivity in the brain appears to be a fundamental mechanism contributing to cognitive deterioration in children with Sanfilippo syndrome.

Using human stem cell-derived cortical neurons and electrophysiology, the team demonstrated that excitatory synapses in the neurons of affected children become abnormally active during early brain development. 

While these neurons initially developed and functioned normally, they became increasingly overactive over time. Brain cell networks showed bursts of intense, highly synchronized electrical activity as they matured, mirroring the hyperactivity and neurological symptoms seen in children with the condition. 

“This hyperactivity offers a clear biological explanation for early behavioral changes, and it brings us closer to understanding the complex mechanisms contributing to childhood dementia,” said Bardy.

Results also demonstrated that these neurons are vulnerable to stress. When exposed to mild nutrient deprivation, excitatory synaptic abnormalities increased, suggesting that common illnesses or physiological stressors may accelerate neurological decline. 

“Our research shows that disrupted synaptic communication is not simply a byproduct of degeneration. It is an early driver of the disease,” Bardy says. 

Childhood Dementia Initiative CEO and founder, Megan Maack, is a co-author of the study and has been involved in guiding the project since its inception. 

“This research is significant not just for Sanfilippo syndrome, but for the field of childhood dementia as a whole,” said Maack. “By identifying the precise cellular mechanisms driving the disease, we are moving towards a personalized medicine approach—the kind of targeted treatment strategy that has transformed outcomes for children with cancer.”

Researchers are now evaluating whether drugs that are already on the market for use in other conditions could be repurposed for childhood dementia. Bardy says the team has already demonstrated that these synaptic imbalances can be corrected with certain medications in the laboratory, indicating that they represent a genuine therapeutic target. 

Mustafa Suleyman: AI development won’t hit a wall anytime soon—here’s why

We evolved for a linear world. If you walk for an hour, you cover a certain distance. Walk for two hours and you cover double that distance. This intuition served us well on the savannah. But it catastrophically fails when confronting AI and the core exponential trends at its heart.

From the time I began work on AI in 2010 to now, the amount of training compute that goes into frontier AI models has grown by a staggering 1 trillion times—from roughly 10¹⁴ flops (floating-point operations, the core unit of computation) for early systems to over 10²⁶ flops for today’s largest models. This is an explosion. Everything else in AI follows from this fact.

The skeptics keep predicting walls. And they keep being wrong in the face of this epic generational compute ramp. Often, they point out that Moore’s Law is slowing. They also mention a lack of data, or they cite limitations on energy.

But when you look at the combined forces driving this revolution, the exponential trend seems quite predictable. To understand why, it’s worth looking at the complex and fast-moving reality beneath the headlines.

Think of AI training as a room full of people working calculators. For years, adding computational power meant adding more people with calculators to that room. Much of the time those workers sat idle, drumming their fingers on desks, waiting for the numbers to come through for their next calculation. Every pause was wasted potential. Today’s revolution goes beyond more and better calculators (although it delivers those); it is actually about ensuring that all those calculators never stop, and that they work together as one.

Three advances are now converging to enable this. First, the basic calculators got faster. Nvidia’s chips have delivered an over sevenfold increase in raw performance in just six years, from 312 teraflops in 2020 to 2,250 teraflops today. Our own Maia 200 chip, launched this January, delivers 30% better performance per dollar than any other hardware in our fleet. Second, the numbers arrive faster thanks to a technology called HBM, or high bandwidth memory, which stacks chips vertically like tiny skyscrapers; the latest generation, HBM3, triples the bandwidth of its predecessor, feeding data to processors fast enough to keep them busy all the time. Third, the room of people with calculators became an office and then a whole campus or city. Technologies like NVLink and InfiniBand connect hundreds of thousands of GPUs into warehouse-size supercomputers that function as single cognitive entities. A few years ago this was impossible.

These gains all come together to deliver dramatically more compute. Where training a language model took 167 minutes on eight GPUs in 2020, it now takes under four minutes on equivalent modern hardware. To put this in perspective: Moore’s Law would predict only about a 5x improvement over this period. We saw 50x. We’ve gone from two GPUs training AlexNet, the image recognition model that kicked off the modern boom in deep learning in 2012, to over 100,000 GPUs in today’s largest clusters, each one individually far more powerful than its predecessors.

Then there’s the revolution in software. Research from Epoch AI suggests that the compute required to reach a fixed performance level halves approximately every eight months, much faster than the traditional 18-to-24-month doubling of Moore’s Law. The costs of serving some recent models have collapsed by a factor of up to 900 on an annualized basis. AI is becoming radically cheaper to deploy.

The numbers for the near future are just as staggering. Consider that leading labs are growing capacity at nearly 4x annually. Since 2020, the compute used to train frontier models has grown 5x every year. Global AI-relevant compute is forecast to hit 100 million H100-equivalents by 2027, a tenfold increase in three years. Put all this together and we’re looking at something like another 1,000x in effective compute by the end of 2028. It’s plausible that by 2030 we’ll bring an additional 200 gigawatts of compute online every year—akin to the peak energy use of the UK, France, Germany, and Italy put together.
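
One way a figure of that order can arise is to multiply the hardware and software trends together. The sketch below is my own rough arithmetic under stated assumptions (a roughly three-year horizon and the growth rates quoted above), not a calculation from the essay.

    # Rough arithmetic (my assumptions, not the author's calculation): combine raw
    # compute growth with algorithmic efficiency gains multiplicatively to estimate
    # the growth in "effective compute" over an assumed ~3-year horizon.
    YEARS = 3.0                # assumed horizon to the end of 2028
    RAW_GROWTH_PER_YEAR = 4.0  # "nearly 4x annually" capacity growth quoted above
    ALGO_HALVING_MONTHS = 8.0  # Epoch AI efficiency estimate cited earlier

    raw_gain = RAW_GROWTH_PER_YEAR ** YEARS              # ~64x more raw compute
    algo_gain = 2 ** (12 * YEARS / ALGO_HALVING_MONTHS)  # ~23x less compute needed per result
    print(f"Effective compute gain: ~{raw_gain * algo_gain:,.0f}x")
    # On the order of 1,000x or more, consistent with the figure cited above.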

What does all this get us? I believe it will drive the transition from chatbots to nearly human-level agents—semiautonomous systems capable of writing code for days, carrying out weeks- and months-long projects, making calls, negotiating contracts, managing logistics. Forget basic assistants that answer questions. Think teams of AI workers that deliberate, collaborate, and execute. Right now we’re only in the foothills of this transition, and the implications stretch far beyond tech. Every industry built on cognitive work will be transformed.

The obvious constraint here is energy. A single refrigerator-size AI rack consumes 120 kilowatts, roughly the average draw of 100 homes. But this hunger collides with another exponential: Solar costs have fallen by a factor of nearly 100 over 50 years; battery prices have dropped 97% over three decades. A pathway to clean scaling is coming into view.

The capital is deployed. The engineering is delivering. The $100 billion clusters, the 10-gigawatt power draws, the warehouse-scale supercomputers … these are no longer science fiction. Ground is being broken for these projects now across the US and the world. As a result, we are heading toward true cognitive abundance. At Microsoft AI, this is the world our superintelligence lab is planning for and building.

Skeptics accustomed to a linear world will continue predicting diminishing returns. They will continue being surprised. The compute explosion is the technological story of our time, full stop. And it is still only just beginning.

Mustafa Suleyman is CEO of Microsoft AI.

The Download: water threats in Iran and AI’s impact on what entrepreneurs make

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Desalination plants in the Middle East are increasingly vulnerable 

As the conflict in Iran has escalated, a crucial resource is under fire: the desalinization technology that supplies water in the region. 
 
President Donald Trump has threatened to destroy “possibly all desalinization plants” in Iran if the Strait of Hormuz is not reopened. The impact on farming, industry, and—crucially—drinking water in the Middle East could be severe. Find out why.

—Casey Crownhart 

This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. 

AI is changing how small online sellers decide what to make 

For small entrepreneurs, deciding what to sell and where to make it has traditionally been a slow, labor-intensive process. Now that work is increasingly being done by AI.   

Tools like Alibaba’s Accio compress weeks of product research and supplier hunting into a single chat. Business owners and e-commerce experts say they’re making sourcing more accessible—and slashing the time from product idea to launch.  

Read the full story on how AI is leveling the path to global manufacturing.

—Caiwei Chen 

The gig workers who are training humanoid robots at home 

When Zeus, a medical student in Nigeria, returns to his apartment from a long day at the hospital, he straps his iPhone to his forehead and records himself doing chores.  
 
Zeus is a data recorder for Micro1, which sells the data he collects to robotics firms. As these companies race to build humanoids, videos from workers like Zeus have become the hottest new way to train them.   
 
Micro1 has hired thousands of them in more than 50 countries, including India, Nigeria, and Argentina. The jobs pay well locally, but raise thorny questions around privacy and informed consent. The work can be challenging—and weird. Read the full story.  

—Michelle Kim 

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Anthropic’s new model found security problems in every OS and browser 
Claude Mythos has been heralded as a cybersecurity “reckoning.” (The Verge)  
+ Anthropic is limiting the rollout over hacking fears. (CNBC)
+ It’s also launching a project that lets Mythos flag vulnerabilities. (Gizmodo)
+ Apple, Google, and Microsoft have joined the initiative. (ZDNET)

2 Iranian hackers are targeting American critical infrastructure 
Their focus is on energy and water infrastructure. (Wired)
+ They’re targeting industrial control devices. (TechCrunch)  

3 Google’s AI Overviews deliver millions of incorrect answers per hour 
Despite a 90% accuracy rate. (NYT $) 
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)

4 Elon Musk is trying to oust OpenAI CEO Sam Altman in a lawsuit 
As remedies for Altman allegedly defrauding him. (CNBC)
+ Musk wants any damages given to OpenAI’s nonprofit arm. (WSJ $) 

5 ICE has admitted it’s using powerful spyware 
The tools can intercept encrypted messages. (NPR)
+ Immigration agencies are also weaponizing AI videos. (MIT Technology Review)

6 Greece has joined the countries banning kids from social media 
Under-15s will be blocked from 2027. (Reuters)
+ Australia introduced the world’s first social media ban for children. (Guardian)
+ Indonesia recently rolled out the first one in Southeast Asia. (DW)  
+ Experts say they’re a lazy fix. (CNBC)

7 Intel will help Elon Musk build his Terafab in Texas 
They aim to manufacture chips for AI projects. (Engadget)
+ Musk says it will be the largest-ever semiconductor factory. (Engadget)
+ Future AI chips could be built on glass. (MIT Technology Review)  

8 TikTok is building a second billion-euro data center in Finland 
It’s moving data storage for European users. (Reuters)
+ Finland has become a magnet for data centers. (Bloomberg $) 
+ But nobody wants one in their backyard. (MIT Technology Review)

9 Plans for Canada’s first “virtual gated community” have sparked a row 
The AI-powered surveillance system has divided neighbors. (Guardian)
+ Is the Pentagon allowed to surveil Americans with AI? (MIT Technology Review)

10 The high-tech engineering of the “space toilet” has been revealed 
Artemis II is the first mission to carry one around the Moon. (Vox)

Quote of the day 

“This case has always been about Elon generating more power and more money for what he wants. His lawsuit remains nothing more than a harassment campaign that’s driven by ego, jealousy and a desire to slow down a competitor.” 

—OpenAI criticizes Musk’s legal action in an X post

One More Thing 


Inside the US government’s brilliantly boring websites 

You may not notice it, but your experience on every US government website is carefully crafted. 

Each site combines an official web design system with a custom typeface. The aim is to make government websites not only good-looking but accessible and functional for all.

MIT Technology Review dug into the system’s history and features. Find out what we discovered.

—Jon Keegan 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ Rejoice in the splendor of the “Earthset” image captured by Artemis II. 
+ Meet the fearless cat chasing off bears. 
+ This document vividly explains what makes the octopus so unique. 
+ Revealed: the rhythmic secret that makes emo music so angsty.

Personalized audiovisual gamma stimulation enhances neural connectivity and entrainment beyond fixed 40 Hz protocols

Background: Conventional 40 Hz gamma stimulation is applied across individuals, potentially overlooking inter-individual neural variability.
Objective: This study evaluated conversation gamma frequency (CGF), a personalized gamma frequency derived from task engagement, against the fixed 40 Hz frequency and the individual gamma frequency (IGF) derived from auditory responses.
Methods: In Experiment 1, gamma center frequencies were measured under resting, reading, and conversation conditions. In Experiment 2, EEG was used to compare neural entrainment effects across CGF, 40 Hz, and IGF conditions.
Results: Conversation gamma frequency stimulation induced stronger neural activation and functional connectivity in the frontal, temporal, and parietal cortices compared to 40 Hz or IGF. Theta-gamma coupling analysis revealed significantly increased phase synchronization under CGF compared to 40 Hz, with enhanced connectivity. However, entrainment declined as the frequency difference between CGF and 40 Hz increased, emphasizing the limitation of fixed-frequency stimulation.
Conclusion: These findings provide EEG-based mechanistic evidence that individualized gamma stimulation may represent a hypothesis-generating strategy for future neurorehabilitation research in aging and neurodegenerative conditions.

STAT+: A decade ago, these drugs tore apart the FDA. Today, they might be some patients’ best hope 

A year after the worst day of her life, Debra Miller received a voicemail she couldn’t quite make out. In a thick accent, a man said something about research and left a phone number. She called but couldn’t get through. “I didn’t know what country code to put in,” she said.

Debra moved on, but the voice kept tumbling through her brain. She was desperate. Her first child, Hawken, had been diagnosed 13 months before with Duchenne muscular dystrophy. In blunt tones she would never forget, a doctor had told her that her 5-year-old boy would slowly lose the ability to walk and die by 18.

When she finally figured out the digits, a Dutch scientist explained he was launching a startup around one of the most counterintuitive ideas in modern genetics: that sometimes you can fix a broken gene by breaking it just a little bit more. 

That strategy, known as exon skipping, would taunt Debra for two decades, always promising a therapy just out of reach. It prompted her to raise $1.3 million for the Dutch scientist and helped turn her fledgling advocacy group, CureDuchenne, into a powerhouse. Eventually, the idea spread far beyond the Netherlands and Debra’s home in Newport Beach, Calif., stirring tenuous hope for a life-altering treatment. 

Exon-skipping drugs sparked a civil war within the Food and Drug Administration. Under pressure from advocates and companies, a top official overrode reviewers to approve the first of several candidates. One company, Sarepta Therapeutics, has since earned over $5.5 billion from drugs that may or may not provide much benefit. 

Throughout, buffeted by the fickle winds of scientific misfortune, mother and son remained waiting — until about two and a half years ago. That’s when Hawken enrolled in a clinical trial for a new exon-skipping drug Debra helped support. The results from him and 38 other patients have since stunned some of the field’s top experts. 

Continue to STAT+ to read the full story…