Synthetic microbial co-cultures for modular bioelectronic sensing in diverse environments
Nature Biotechnology, Published online: 17 April 2026; doi:10.1038/s41587-026-03075-7
Modular integration of bacterial strains expands the application range of whole-cell bioelectric sensors.
A roadmap to competitive preclinical packages
Nature Medicine, Published online: 17 April 2026; doi:10.1038/s41591-026-04345-2
Should researchers avoid translational research in animals in favor of human or AI models? We argue that this debate should focus not on comparing species but instead on how experimental systems can be combined to maximize mechanistic confidence, human relevance, and real-world decision-making value.
New gene-editing approaches for β-hemoglobinopathies
Nature Medicine, Published online: 17 April 2026; doi:10.1038/d41591-026-00021-7
Three phase 1/2 trials show that direct editing of HBG1 and HBG2 promoters is a promising disease‑agnostic strategy for treating β‑hemoglobinopathies such as sickle-cell disease and β-thalassemia.
Medical devices win 2026 Edison Awards for innovation
Medtronic, Abbott, Boston Scientific, Medical Microinstruments (MMI) and other medical device developers earned honors at the 2026 Edison Awards. They were among more than 150 finalists for the awards, which recognize “excellence in product and service innovation, marketing, and human-centered design” across a range of categories including health, medical and biotech, engineering and robotics, materials…
Microstructure makes ePTFE a versatile medtech material
By Matt Navarro, Aptyx Expanded polytetrafluoroethylene (ePTFE) has become a staple in the medical device industry for applications ranging from vascular grafts to stent encapsulations and more. It’s known for chemical inertness, biocompatibility, flexibility, and durability. What may surprise engineers is that ePTFE is not a single, uniform material. It takes several forms with varying…
Abbott’s device leader pay climbs again with double-digit sales growth
Abbott EVP and Medical Devices Group President Lisa Earnhardt’s pay package increased more than 20% in 2025 as device sales maintained their double-digit growth. That’s according to the latest executive compensation disclosure from Abbott, which was the world’s eighth-largest medical device company in Medical Design & Outsourcing‘s Medtech Big 100 ranking by revenue. That ranking…
MiniMed flexes with next-gen insulin pump after spinning off from Medtronic
Within two weeks of MiniMed’s initial public offering in March, the Medtronic spinoff received FDA clearance for its latest-generation MiniMed Flex automated insulin delivery system. The smaller, screenless pump system is a major milestone for one of the world’s largest diabetes businesses. “We have a long history with durable pumps,” MiniMed EVP, Chief Product and…
The MDO Nitinol Knowledge webinar returns with a Medtronic distinguished engineer
Medtronic Distinguished Engineer Ramesh Marrey is the featured guest of Medical Design & Outsourcing‘s 2026 Nitinol Knowledge webinar on May 14. Marrey works in the Medtronic Structural Heart & Aortic business unit’s R&D group and previously worked in the company’s Neurovascular business. He also worked at Cordis when it was part of Johnson & Johnson…
How robots learn: A brief, contemporary history
Roboticists used to dream big but build small. They’d hope to match or exceed the extraordinary complexity of the human body, and then they’d spend their career refining robotic arms for auto plants. Aim for C-3PO; end up with the Roomba.
The real ambition for many of these researchers was the robot of science fiction—one that could move through the world, adapt to different environments, and interact safely and helpfully with people. For the socially minded, such a machine could help those with mobility issues, ease loneliness, or do work too dangerous for humans. For the more financially inclined, it would mean a bottomless source of wage-free labor. Either way, a long history of failure left most of Silicon Valley hesitant to bet on helpful robots.
That has changed. The machines are yet unbuilt, but the money is flowing: Companies and investors put $6.1 billion into humanoid robots in 2025 alone, four times what was invested in 2024.
What happened? A revolution in how machines have learned to interact with the world.
Imagine you’d like a pair of robot arms installed in your home purely to do one thing: fold clothes. How would the system learn to do that? You could start by writing rules. Check the fabric to figure out how much deformation it can tolerate before tearing. Identify a shirt’s collar. Move the gripper to the left sleeve, lift it, and fold it inward by exactly this distance. Repeat for the right sleeve. If the shirt is rotated, turn the plan accordingly. If the sleeve is twisted, correct it. Very quickly the number of rules explodes, but a complete accounting of them could produce reliable results. This was the original craft of robotics: anticipating every possibility and encoding it in advance.
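To make that concrete, here is a toy sketch of the rule-based approach. The shirt attributes, helper names, and fold distances are invented for illustration, not taken from any real system; the point is that every contingency needs its own hand-written branch, and the list only grows.

```python
from dataclasses import dataclass

@dataclass
class Shirt:
    rotation_deg: float           # how far the shirt is rotated on the table
    left_sleeve_twisted: bool
    right_sleeve_twisted: bool

def fold_shirt(shirt: Shirt) -> list[str]:
    """Hand-written rule cascade: every contingency must be anticipated in advance."""
    plan = []
    if shirt.rotation_deg != 0:
        plan.append(f"rotate plan by {-shirt.rotation_deg} degrees")
    for side, twisted in (("left", shirt.left_sleeve_twisted),
                          ("right", shirt.right_sleeve_twisted)):
        if twisted:
            plan.append(f"untwist {side} sleeve")                    # one more special case
        plan.append(f"grip {side} sleeve, lift, fold inward 12 cm")  # hand-tuned distance
    plan.append("fold bottom up to collar")
    # Wrinkles, buttons, stretchy fabric, inside-out shirts ... each would need
    # its own rule, which is why the rule count explodes.
    return plan

print("\n".join(fold_shirt(Shirt(rotation_deg=15,
                                 left_sleeve_twisted=True,
                                 right_sleeve_twisted=False))))
```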
Around 2015, the cutting edge started to do things differently: Build a digital simulation of the robotic arms and the clothes, and give the program a reward signal every time it folds successfully and a ding every time it fails. This way, it gets better by trying all sorts of techniques through trial and error, with millions of iterations—the same way AI got good at playing games.
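A toy version of that loop might look like the following. The “simulator” here is just a made-up scoring function standing in for a real physics engine, and the learning rule is simple random search rather than a full reinforcement-learning algorithm, but the reward-driven trial-and-error structure is the same idea.

```python
import random

def simulate_fold(policy: list[float]) -> float:
    """Stand-in simulator: returns a reward that is higher when the (made-up)
    policy parameters are closer to the values that produce a neat fold."""
    target = [0.3, -0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(policy, target))

policy = [0.0, 0.0, 0.0]
best_reward = simulate_fold(policy)
for _ in range(100_000):                       # many iterations of trial and error
    candidate = [p + random.gauss(0, 0.05) for p in policy]
    reward = simulate_fold(candidate)          # reward for success, a "ding" for failure
    if reward > best_reward:                   # keep what worked, discard what didn't
        policy, best_reward = candidate, reward

print(f"learned policy: {policy}, reward: {best_reward:.4f}")
```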
The arrival of ChatGPT in 2022 catalyzed the current boom. Trained on vast amounts of text, large language models work not through trial and error but by learning to predict what word should come next in a sentence. Similar models adapted to robotics were soon able to absorb pictures, sensor readings, and the position of a robot’s joints and predict the next action the machine should take, issuing dozens of motor commands every second.
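Conceptually, the control loop looks something like the sketch below. The input shapes, the random linear “model,” and the seven-joint arm are placeholders rather than any particular lab’s architecture: observations go in, the next motor command comes out, dozens of times per second.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_observation(image: np.ndarray, sensors: np.ndarray,
                       joints: np.ndarray) -> np.ndarray:
    """Flatten and concatenate everything the policy conditions on."""
    return np.concatenate([image.ravel() / 255.0, sensors, joints])

# A stand-in "model": one random linear map from observation to 7 joint velocities.
obs_dim = 32 * 32 * 3 + 6 + 7
W = rng.normal(scale=0.01, size=(7, obs_dim))

def predict_next_action(obs: np.ndarray) -> np.ndarray:
    return np.tanh(W @ obs)            # bounded motor commands for 7 joints

# One step of the control loop (a real system repeats this dozens of times per second).
image = rng.integers(0, 256, size=(32, 32, 3))   # what the camera sees
sensors = rng.normal(size=6)                     # e.g. force/torque readings
joints = rng.normal(size=7)                      # current joint positions
action = predict_next_action(encode_observation(image, sensors, joints))
print("next motor command:", np.round(action, 3))
```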
This conceptual shift—to reliance on AI models that ingest large amounts of data—seems to work whether that helpful robot is supposed to talk to people, move through an environment, or even do complicated tasks. And it was paired with other ideas about how to accomplish this new way of learning, like deploying robots even if they aren’t yet perfect so they can learn from the environment they’re meant to work in. Today, Silicon Valley roboticists are dreaming big again. Here’s how that happened.
Jibo
Jibo
A movable social robot carried out conversations long before the age of LLMs.
An MIT robotics researcher named Cynthia Breazeal introduced an armless, legless, faceless robot called Jibo to the world in 2014. It looked, in fact, like a lamp. Breazeal’s aim was to create a social robot for families, and the idea pulled in $3.7 million in a crowdfunding campaign. Early preorders cost $749.
The early Jibo could introduce itself and dance to entertain kids, but that was about it. The vision was always for it to become a sort of embodied assistant that could handle everything from scheduling and emails to telling stories. It earned a number of devoted users, but ultimately the company shut down in 2019.

In retrospect, one thing that Jibo really needed was better language capabilities. It was competing against Apple’s Siri and Amazon’s Alexa, and all those technologies at the time relied on heavy scripting. In broad terms, when you spoke to them, software would translate your speech into text, analyze what you wanted, and create a response pulled from preapproved snippets. Those snippets could be charming, but they were also repetitive and simply boring—downright robotic. That was especially a challenge for a robot that was supposed to be social and family oriented.
What has happened since, of course, is a revolution in how machines can generate language. Voice mode from any leading AI provider is now engaging and impressive, and multiple hardware startups are trying (and failing) to build products that take advantage of it.
But that comes with a new risk: While scripted conversations can’t really go off the rails, ones generated by AI certainly can. Some popular AI toys have, for example, talked to kids about how to find matches and knives.
OpenAI
Dactyl
A robot hand trained with simulations tries to model the unpredictability and variation of the real world.
By 2018, every leading robotics lab was trying to scrap the old scripted rules and train robots through trial and error. OpenAI tried to train its robotic hand, Dactyl, virtually—with digital models of the hand and of the palm-size cubes Dactyl was supposed to manipulate. The cubes had letters and numbers on their faces; the model might be given a task like “Rotate the cube so the red side with the letter O faces upward.”
Here’s the problem: A robotic hand might get really good at doing this in its simulated world, but when you take that program and ask it to work on a real version in the real world, the slight differences between the two can cause things to go awry. Colors might be slightly different, or the deformable rubber in the robot’s fingertips could turn out to be stretchier than it was in simulation.

The solution is called domain randomization. You essentially create millions of simulated worlds that all vary slightly and randomly from one another: in each one the friction might be lower, the lighting harsher, or the colors darker. Exposure to enough of this variation means the robot is better able to manipulate the cube in the real world. The approach worked on Dactyl, and a year later the same core techniques let it do something harder: solve a Rubik’s Cube (though it worked only 60% of the time, and just 20% when the scrambles were particularly hard).
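In code, domain randomization amounts to little more than drawing fresh physics and rendering parameters for every training episode. The parameters and ranges below are illustrative, not the ones OpenAI used.

```python
import random

def sample_world(seed: int) -> dict:
    """Generate one slightly-perturbed simulated world (illustrative parameters)."""
    rng = random.Random(seed)
    return {
        "friction":          rng.uniform(0.5, 1.5),   # slipperier or grippier cube
        "fingertip_stretch": rng.uniform(0.8, 1.2),   # stiffer or softer rubber
        "light_intensity":   rng.uniform(0.3, 2.0),   # dimmer or harsher lighting
        "color_shift":       rng.uniform(-0.2, 0.2),  # slightly off colors
        "cube_size_cm":      rng.uniform(5.5, 6.5),
    }

# In training, every episode runs in a freshly randomized world, so the policy
# never gets to rely on any one simulator's quirks.
for episode in range(5):                  # real training uses vastly more episodes
    world = sample_world(episode)
    print(f"episode {episode}: {world}")
    # run_episode(policy, world)  <- placeholder for the actual simulation rollout
```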
Still, the limits of simulation mean that this technique plays a far smaller role today than it did in 2018. OpenAI shuttered its robotics effort in 2021 but has recently started the division up again—reportedly focusing on humanoids.
Google DeepMind
RT-2
Training on images from across the internet helps robots translate language into action.
Around 2022, Google’s robotics team was up to some strange things. It spent 17 months handing people robot controllers and filming the robots as they were steered through everything from picking up bags of chips to opening jars. The team ended up cataloguing 700 different tasks.
The point was to build and test one of the first large-scale foundation models for robotics. As with large language models, the idea was to take in lots of data, tokenize it into a format an algorithm could work with, and then generate an output. Google’s RT-1 received input about what the robot was looking at and how the many parts of the robotic arm were positioned; then it took an instruction and translated it into motor commands to move the robot. When it had seen tasks before, it carried out 97% of them successfully; it succeeded at 76% of the instructions it hadn’t seen before.
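The tokenization step is easier to see with a small example; the bin count and value ranges below are illustrative rather than the exact ones Google used. Continuous motor commands get mapped to a small set of discrete tokens, which a sequence model can then learn to predict much the way a language model predicts words.

```python
import numpy as np

N_BINS = 256                                   # illustrative vocabulary size
EDGES = np.linspace(-1.0, 1.0, N_BINS + 1)     # bin edges over a normalized action range

def action_to_tokens(action: np.ndarray) -> np.ndarray:
    """Map each continuous action dimension to an integer token in [0, N_BINS - 1]."""
    return np.clip(np.digitize(action, EDGES) - 1, 0, N_BINS - 1)

def tokens_to_action(tokens: np.ndarray) -> np.ndarray:
    """Map tokens back to bin centers, i.e. the motor command to execute."""
    centers = (EDGES[:-1] + EDGES[1:]) / 2
    return centers[tokens]

action = np.array([0.12, -0.83, 0.40])         # e.g. arm displacement plus gripper
tokens = action_to_tokens(action)
print("tokens:", tokens)                       # what the model is trained to predict
print("decoded:", np.round(tokens_to_action(tokens), 3))
```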

The second iteration, RT-2, came out the following year and went even further. Instead of training on data specific to robotics, it went broad: It trained on more general images from across the internet, like the vision-language models lots of researchers were working on at the time. That allowed the robot to interpret where certain objects were in the scene.
“All these other things were unlocked,” says Kanishka Rao, a roboticist at Google DeepMind who led work on both iterations. “We could do things now like ‘Put the Coke can near the picture of Taylor Swift.’”
In 2025, Google DeepMind further fused the worlds of large language models and robotics, releasing a Gemini Robotics model with improved ability to understand commands in natural language.
Covariant
RFM-1
An AI model that allows robotic arms to act like coworkers.
In 2017, before OpenAI shuttered its first robotics team, a group of its engineers spun out a project called Covariant, aiming to build not sci-fi humanoids but the most pragmatic of all robots: an arm that could pick up and move things in warehouses. After building a system based on foundation models similar to Google’s, Covariant deployed this platform in warehouses like those operated by Crate & Barrel and treated it as a data collection pipeline.
By 2024, Covariant had released a robotics model, RFM-1, that you could interact with like a coworker. If you showed an arm many sleeves of tennis balls, for example, you could then instruct it to move each sleeve to a separate area. And the robot could respond—perhaps predicting that it wouldn’t be able to get a good grip on the item and then asking for advice on which particular suction cups it should use.
This sort of thing had been done in experiments, but Covariant was launching it at significant scale. The company now had cameras and data collection machines in every customer location, feeding back even more data for the model to train on.

It wasn’t perfect. In a demo in March 2024 with an array of kitchen items, the robot struggled when it was asked to “return the banana” to its original location. It picked up a sponge, then an apple, then a host of other items before it finally accomplished the task.
It “doesn’t understand the new concept” of retracing its steps, cofounder Peter Chen told me at the time. “But it’s a good example—it might not work well yet in the places where you don’t have good training data.”
Chen and fellow founder Pieter Abbeel were soon hired by Amazon, which is currently licensing Covariant’s robotics model (Amazon did not respond to questions about how it’s being used, but the company runs an estimated 1,300 warehouses in the US alone).
Agility Robotics
Digit
Companies are putting this humanoid to the test in real-world settings.
The new investment dollars flowing to robotics startups are aimed largely at robots shaped not like lamps or arms but like people. Humanoid robots are supposed to be able to seamlessly enter the spaces and jobs where humans currently work, avoiding the need to retool assembly lines to accommodate new shapes such as giant arms.
It’s easier said than done. In the rare cases where humanoids appear in real warehouses, they’re often confined to test zones and pilot programs.

That said, Agility’s humanoid Digit appears to be doing some real work. The design—with exposed joints and a distinctly unhuman head—is driven more by function than by sci-fi aesthetics. Amazon, Toyota, and GXO (a logistics giant with customers like Apple and Nike) have all deployed it—making it one of the first examples of a humanoid robot that companies see as providing actual cost savings rather than novelty. Their Digits spend their days picking up, moving, and stacking shipping totes.
The current Digit is still a long way from the humanlike helper Silicon Valley is betting on, though. It can lift only 35 pounds, for example—and every time Agility makes Digit stronger, its battery gets heavier and it has to recharge more often. And standards organizations say humanoids need stricter safety rules than most industrial robots, because they’re designed to be mobile and spend time in proximity to people.
But Digit shows that this revolution in robot training isn’t converging on a single method. Agility relies on simulation techniques like those OpenAI used to train its hand, and the company has worked with Google’s Gemini models to help its robots adapt to new environments. That’s where more than a decade of experiments have gotten the industry: Now it’s building big.

