Background: An increasing number of rehabilitation technologies are being developed to support upper limb rehabilitation after stroke, with smart textile solutions for surface electromyography (sEMG) emerging as a promising approach. Early end-user involvement is crucial for developing user-friendly and clinically valid rehabilitation tools. Objective: This study aims to refine and evaluate the prototype design and usability of a smart textile biofeedback system for self-administered upper limb training after stroke. Methods: The training system includes a knitted smart textile sleeve with integrated electrodes over the forearm muscles, an sEMG unit, and tablet-based biofeedback software. An iterative co-design process was followed, including initial testing, demonstration sessions with end users (9 clinicians and 10 individuals with stroke), and a final evaluation of the co-design process. Participants’ experiences were gathered through semistructured interviews, analyzed using content analysis, and the User Experience Questionnaire. The co-design team included experts in stroke rehabilitation, textile engineering, biomedical engineering, software development, and human factors, as well as a research partner with lived experience after stroke. Results: The perspectives of the end users and the expert team were collectively integrated into prototype refinements of the sleeve and training software to meet the needs of the intended target group. The experiences of end users formed 2 main categories: “This could be an exciting new training tool for stroke rehabilitation” and “The tool works well, but some changes could enhance independent training.” End users found the smart textile sleeve and biofeedback system easy to use and saw potential for integrating it into their training routines. Both end-user groups rated the system as attractive, stimulating, and novel. 
Conclusions: The results of this study lay the necessary groundwork for developing a smart textile sEMG biofeedback system for self-administered upper limb training after stroke. Findings from the co-design process support the continued development and evaluation of the system as a self-administered upper limb training tool for individuals living with stroke.
<img src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/09868209cfa3cec73d3f73d20963ee99" />
Designing Psychologically Grounded Artificial Intelligence for Supporting Bystander-Based Cyberaggression Intervention: Mixed Methods Exploratory Study
Background: Cyberaggression poses a growing threat to mental health, contributing to increased distress, reduced self-esteem, and other adverse psychosocial outcomes. Although bystander intervention can mitigate the escalation and impact of cyberaggression, individuals often lack the confidence, strategies, or language to respond effectively in these high-stakes online interactions. Advances in generative artificial intelligence (AI) present a novel opportunity to facilitate digital behavior change by assisting bystanders with contextually appropriate, theory-informed intervention messages that promote safer online environments and support mental well-being. Objective: This mixed methods design study aimed to explore the feasibility of using generative AI to support bystander intervention in cyberaggression on social media. Specifically, we examined whether AI can generate effective responses aligned with established intervention strategies and how these responses are perceived in terms of their potential to de-escalate online harm and foster behavior change. Methods: We collected 1000 real-world cyberaggression examples from public social media datasets and generated bystander intervention responses using 3 distinct prompt strategies: a generic policy reminder, a baseline GPT prompt, and a theory-driven GPT prompt (AllyGPT). To evaluate the responses, we conducted computational linguistic analyses to assess their psycholinguistic features and carried out a mixed methods evaluation. Three trained coders rated each message on favorability, conversational impact, and potential to change behavior and later participated in semistructured interviews to reflect on their evaluation process and perceptions of intervention effectiveness. Results: Linguistic analyses revealed that baseline GPT responses exhibited more emotionally positive and authentic language compared to AllyGPT responses, which showed a more analytical and assertive tone. 
Policy reminder messages were linguistically rigid and lacked emotional nuance. Human evaluation showed that AllyGPT responses received the highest effectiveness ratings for low-incivility cyberaggression cases on 2 dimensions (favorability and likelihood of changing behavior). For medium- and high-incivility cases, baseline GPT responses received the highest ratings across all 3 dimensions of effectiveness (favorability, discussion-shifting potential, and likelihood of changing bullying behavior), followed by AllyGPT, with policy reminders rated lowest. Qualitative feedback further emphasized that baseline GPT responses were perceived as natural and inclusive, while AllyGPT responses, although grounded in psychological theory, were sometimes viewed as overly direct. Policy reminders were considered clear but lacked persuasive impact. Conclusions: Our work showed that designing effective AI-generated bystander interventions requires a deep sensitivity to platform culture, social context, and user expectations. By combining psychological theory with adaptive, conversational design and ongoing feedback loops, future systems can better support bystanders, delivering interventions that are not only contextually appropriate but also socially resonant and behaviorally impactful. As such, this work serves as a foundation for scalable, human-centered AI systems that promote safer online spaces and users’ mental well-being.
<img src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/2902e66b1ac5ac34f57f42bbc6adfe75" />
Comparing Large Language Models and Traditional Machine Translation Tools for Translating Medical Consultation Summaries: Quantitative Pilot Feasibility Study
Background: Translation of medical consultation summaries is essential for equitable health care communication in culturally and linguistically diverse populations. While machine translation (MT) tools and large language models (LLMs) are widely accessible, their feasibility and safety for health care contexts remain underexplored. Objective: This pilot study investigates the feasibility and limitations of using LLMs and traditional MT tools to translate medical consultation summaries from English into the most common languages other than English spoken in Australia—Arabic, Chinese (simplified written form), and Vietnamese. Methods: Two simulated summaries—a simple patient-facing summary and a complex clinician-oriented interprofessional letter—were translated using 3 LLMs (GPT-4o, Llama-3.1, and Gemma-2) and 3 MT tools (Google Translate, Microsoft Bing Translator, and DeepL). Translations were benchmarked against professional third-party interpreter translations using the Bilingual Evaluation Understudy (BLEU), Character-level F-score (chrF), and Metric for Evaluation of Translation with Explicit Ordering (METEOR) metrics. Results: The translation performance varied across languages, tools, and summary complexity when assessed using automatic evaluation metrics. Traditional MT tools outperformed LLMs on surface-level metrics, while LLMs showed relative strengths in semantic similarity for Vietnamese and Chinese. Arabic translations improved with complex input, suggesting morphological advantages. The metric-based evaluation highlighted feasibility but also risks, particularly in Chinese clinical contexts. Conclusions: This pilot study provides formative evidence of opportunities and limitations in applying artificial intelligence translation for health care communication. Findings underscore the importance of human oversight; domain-specific evaluation metrics; and further formative and clinical research to guide the safe, equitable use of artificial intelligence translation tools.
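Of the automatic metrics mentioned in this abstract, the character-level F-score is the simplest to illustrate. The sketch below is a minimal, simplified chrF-style scorer in pure Python, not the official sacreBLEU implementation used in published evaluations; the defaults (max_n=6, beta=2) follow the standard chrF formulation, and whitespace handling is deliberately simplified.

```python
from collections import Counter

def char_ngrams(text, n):
    """Character n-grams of a string, with spaces removed."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified character-level F-score (chrF-style).

    Averages character n-gram precision and recall for n = 1..max_n,
    then combines them with an F-beta score; beta=2 weights recall
    twice as heavily as precision, as in the standard chrF metric.
    """
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp = char_ngrams(hypothesis, n)
        ref = char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue
        # Multiset intersection counts the clipped n-gram matches.
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

Character-level matching is why chrF is often more forgiving than BLEU for morphologically rich languages such as Arabic: inflected word forms that share a stem still earn partial n-gram credit even when whole-word matches fail.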
<img src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/b86925d16f121fdeb31e2fefcf1227ba" />
STAT+: Trump goes soft on insurance, and a medical underwriting chart
This is the online version of STAT’s weekly email newsletter Health Care Inc. Sign up here.
We watched the Artemis II astronauts splash down safely last week. A reminder that legitimately amazing things can still happen. Parachute your thoughts here: bob.herman@statnews.com.
Tough talk, soft stance
A few months ago, President Trump confidently said he would be meeting with the country’s largest health insurance companies to pressure them to lower their premiums. The message was just that — a message to give the appearance that Trump officials were willing to crack down on health insurers, which have been at the center of Americans’ disdain for the health care system for decades.
Why opinion on AI is so divided
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
In an industry that doesn’t stand still, Stanford’s AI Index, an annual roundup of key results and trends, is a chance to take a breath. (It’s a marathon, not a sprint, after all.)
This year’s report, which dropped today, is full of striking stats. A lot of the value comes from having numbers to back up gut feelings you might already have, such as the sense that the US is gunning harder for AI than everyone else: It hosts 5,427 data centers (and counting). That’s more than 10 times as many as any other country.
There’s also a reminder that the hardware supply chain the AI industry relies on has some major choke points. Here’s perhaps the most remarkable fact: “A single company, TSMC, fabricates almost every leading AI chip, making the global AI hardware supply chain dependent on one foundry in Taiwan.” One foundry! That’s just wild.
But the main takeaway I have from the 2026 AI Index is that the state of AI right now is shot through with inconsistencies. As my colleague Michelle Kim put it today in her piece about the report: “If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock.” (The Stanford report notes that Google DeepMind’s top reasoning model, Gemini Deep Think, scored a gold medal in the International Math Olympiad but is unable to read analog clocks half the time.)
Michelle does a great job covering the report’s highlights. But I wanted to dwell on a question that I can’t shake. Why is it so hard to know exactly what’s going on in AI right now?
The widest gap seems to be between experts and non-experts. “AI experts and the general public view the technology’s trajectory very differently,” the authors of the AI Index write. “Assessing AI’s impact on jobs, 73% of U.S. experts are positive, compared with only 23% of the public, a 50 percentage point gap. Similar divides emerge with respect to the economy and medical care.”
That’s a huge gap. What’s going on? What do experts know that the public doesn’t? (“Experts” here means US-based researchers who took part in AI conferences in 2023 and 2024.)
I suspect part of what’s going on is that experts and non-experts base their views on very different experiences. “The degree to which you are awed by AI is perfectly correlated with how much you use AI to code,” a software developer posted on X the other day. Maybe that’s tongue-in-cheek, but there’s definitely something to it.
The latest models from the top labs are now better than ever at producing code. Because technical tasks like coding have right or wrong results, it is easier to train models to do them, compared with tasks that are more open-ended. What’s more, models that can code are proving to be profitable, so model makers are throwing resources at improving them.
This means that people who use those tools for coding or other technical work are experiencing this technology at its best. Outside of those use cases, you get more of a mixed bag. LLMs still make dumb mistakes. This phenomenon has become known as the “jagged frontier”: Models are very good at doing some things and less good at others.
The influential AI researcher Andrej Karpathy also had some thoughts. “Judging by my [timeline] there is a growing gap in understanding of AI capability,” he wrote in reply to that X post. He noted that power users (read: people who use LLMs for coding, math, or research) not only keep up to date with the latest models but will often pay $200 a month for the best versions. “The recent improvements in these domains as of this year have been nothing short of staggering,” he continued.
Because LLMs are still improving fast, someone who pays to use Claude Code will in effect be using a different technology from someone who tried using the free version of Claude to plan a wedding six months ago. Those two groups are speaking past each other.
Where does that leave us? I think there are two realities. Yes, AI is far better than a lot of people realize. And yes, it is still pretty bad at a lot of stuff that a lot of people care about (and it may stay that way). Anyone making bets about the future on either side should bear that in mind.
Neural Mechanism Underlying Sensory Behavior Revealed in C. elegans
Animal behavior reflects a complex interplay between an animal’s brain and its sensory surroundings. In a new study published in Nature Neuroscience titled “Neural sequences underlying directed turning in Caenorhabditis elegans,” researchers from the Massachusetts Institute of Technology (MIT) have shown how neural circuits in the C. elegans nematode worm respond to odors and generate movement as the worms navigate toward favorable smells and away from unfavorable ones. The results inform understanding of the basic principles of the sensory nervous system, with potential therapeutic applications.
“Across the animal kingdom, there are just so many remarkable behaviors,” said Steven Flavell, PhD, associate professor at the Picower Institute at MIT, Howard Hughes Medical Institute (HHMI) investigator, and corresponding author of the study. “With modern neuroscience tools, we are finally gaining the ability to map their mechanistic underpinnings.”
Whether moving toward a food source or away from a predator, animals must integrate sensory stimuli to navigate to favorable locations. The neural circuits for navigation are tasked with generating directed movement while simultaneously integrating sensory input to update behavior. Understanding how neural circuits select, execute and adapt sensory-guided navigation behaviors uncovers basic principles of how nervous systems are organized to integrate sensory information and control behavior.
In C. elegans, the authors identified error-correcting turns during navigation and used whole-brain calcium imaging and cell-specific perturbations to determine their neural underpinnings. Defined neurons activated in a stereotyped order during each turn. Distinct neurons in this sequence respond to the spatial distribution of attractive and aversive olfactory cues, anticipate upcoming turn directions and drive movement, linking key features of this sensorimotor behavior across time.
“One thing that really excited us about this study is that we were able to see what a sensorimotor arc looks like at the scale of a whole nervous system: all the bits and pieces, from responses to the sensory cue until the behavioral response is implemented,” Flavell said.
The electrical activity of more than 100 neurons was tracked during sensory-guided movement. Notably, C. elegans has only 302 neurons in total. Rather than moving randomly, the worms executed turns with advantageous timing and at well-chosen angles.
The activity of SAA neurons was crucial for integrating odor detection with planned movement and predicted the direction of upcoming turns. Several neurons showed different activity patterns depending on where odors were located and whether the worm was moving forward or in reverse.
Additionally, the neuromodulator, tyramine, was essential for turning and shifting gears. When the worms moved in reverse, tyramine from the neuron RIM enabled other neurons in the sequence to change their activity appropriately to execute the turns. In several experiments, the scientists knocked out RIM tyramine, which disrupted the navigation behaviors and the sequence of neural activity.
The post Neural Mechanism Underlying Sensory Behavior Revealed in <i>C. elegans</i> appeared first on GEN – Genetic Engineering and Biotechnology News.
Co-Design of a Depression Self-Management Tool for Adolescent and Young Adult Cancer Survivors: Rapid Qualitative Analysis of Interview Feedback on a Prototype
STAT+: GSK advancing ovarian cancer drug mo-rez
Want to stay on top of the science and politics driving biotech today? Sign up to get our biotech newsletter in your inbox.
It’s been a minute since I’ve wished you a good morning. Morning!
We’ve got some big news on Revolution Medicines’ pancreatic cancer treatment. But don’t miss GSK’s move to push an ovarian cancer ADC into five Phase 3 trials after striking early data. And Spyre Therapeutics released some competitive ulcerative colitis results.