Below is a lightly edited, AI-generated transcript of the “First Opinion Podcast” interview with Brinda Adhikari and Tom W. Johnson, hosts of the podcast “Why Should I Trust You?” Be sure to sign up for the weekly “First Opinion Podcast” on Apple Podcasts, Spotify, or wherever you get your podcasts. Get alerts about each new episode by signing up for the “First Opinion Podcast” newsletter. And don’t forget to sign up for the First Opinion newsletter, delivered every Sunday.
Torie Bosch: In 2025, the well-known emergency physician Craig Spencer found himself in an unexpected place: the Children’s Health Defense Conference in Austin, Texas. There, he chatted with anti-vaccine activists, MAHA supporters, and others with deep distrust of doctors and mainstream medicine. As he wrote in an essay for STAT about the experience, “I didn’t change any minds, nor did my convictions waver. But every conversation was honest and respectful.”
Background: Hypertension remains a leading global health challenge, particularly in low- and middle-income countries (LMICs), where limited health care infrastructure and resources restrict effective management. Community health workers (CHWs) are critical in delivering care in these settings, and when equipped with mobile health (mHealth) apps, they can greatly enhance chronic disease management. Involving CHWs in design and development at all stages is essential for the success of such programs. However, relatively little research discusses CHW feedback on mHealth interventions. Objective: This study aims to evaluate CHW feedback on a hypertension program using a novel tablet-based mHealth tool designed for CHW hypertension diagnosis and management in rural Guatemala. Methods: We conducted a mixed-methods analysis as part of a pilot study in San Lucas Tolimán, Guatemala, involving 6 CHWs over a 6-month period. Quantitative data were collected using the System Usability Scale and Likert-scale surveys before and after study completion. Qualitative data were gathered through written surveys and focus group interviews conducted in Spanish by bilingual team members. These methods assessed the app’s ease of use, workflow integration, and cultural appropriateness. CHWs provided detailed perspectives on technical challenges, training adequacy, and patient engagement, which guided iterative refinements to both the mHealth app and the hypertension management program. Results: The mHealth app was generally well received. Average System Usability Scale scores exceeded 70, surpassing established usability thresholds. Likert-scale data revealed that CHWs found the app useful and easy to use but identified training protocols as an area for improvement. Qualitative analysis of focus groups and written surveys revealed 3 dominant themes. First, CHWs identified practical short-term needs, including slower and more comprehensive training sessions, simplified medication dosing regimens to reduce pill burden, and streamlined survey questions to shorten patient visit times. Second, CHWs raised larger structural concerns, including retention challenges related to financial compensation and misalignment between required clinical data collection and the cultural appropriateness of certain app questions. Third, CHWs highlighted program benefits, including improved patient care and hypertension management, empowerment through educational tools, and increased pride and community trust associated with the program. Conclusions: Our findings suggest that iteratively integrating user feedback into the development of mHealth interventions is key to improving the usability, cultural appropriateness, and overall effectiveness of chronic disease management in resource-constrained settings. Given the small number of CHW participants and the reliance on self-reported perceptions, these findings should be interpreted as exploratory and hypothesis-generating rather than generalizable. This study contributes to the growing literature on mHealth apps for noncommunicable diseases in LMICs and provides insights into CHW experiences. Addressing the technical barriers and systemic challenges identified in this study can help improve future implementations of mHealth-enabled chronic disease programs in LMICs. Trial Registration:
Two years ago, I wrote in the New England Journal of Medicine that one of the greatest threats to childhood vaccination is the normalization of skepticism, even though it isn’t actually the norm. When credible outlets, trusted voices, and social media algorithms tell the public that most Americans doubt vaccines, some may start to wonder if they should, too. I watched that play out this week.
On Monday, Politico published a poll on vaccine attitudes titled, “More Americans doubt vaccine safety than trust it, Politico Poll finds,” followed by the subhead, “Health Secretary Robert F. Kennedy Jr.’s views are commonplace across the land.” I consider Politico a reputable news outlet, so this headline stopped me in my tracks.
The AI boom has hit across industries, and public sector organizations are facing pressure to accelerate adoption. At the same time, government institutions face distinct constraints around security, governance, and operations that set them apart from their business counterparts. For this reason, purpose-built small language models (SLMs) offer a promising path to operationalize AI in these environments.
A Capgemini study found that 79 percent of public sector executives globally are wary of AI’s data security, an understandable figure given the heightened sensitivity of government data and the legal obligations surrounding its use. As Han Xiao, vice president of AI at Elastic, says, “Government agencies must be very restricted about what kind of data they send to the network. This sets a lot of boundaries on how they think about and manage their data.”
The fundamental need for control over sensitive information is one of many factors complicating AI deployment, particularly when compared against the private sector’s standard operational assumptions.
Unique operational challenges
When private-sector entities scale up AI, they typically assume certain conditions will be in place: continuous connectivity to the cloud, reliance on centralized infrastructure, acceptance of incomplete model transparency, and limited restrictions on data movement. For many government institutions, however, accepting these conditions could be anything from dangerous to impossible.
Government agencies must ensure that their data stays under their control, that information can be checked and verified, and that operational disruptions are kept to an absolute minimum. At the same time, they often have to run their systems in environments where internet connectivity is limited, unreliable, or unavailable. These complexities prevent many promising public sector AI pilots from moving beyond experimentation. “Many people undervalue the operating challenge of AI,” Xiao says. “The public sector needs AI to perform reliably on all kinds of data, and then to be able to grow without breaking. Continuity of operations is often underestimated.” An Elastic survey of public sector leaders found that 65 percent struggle to use data continuously in real time and at scale.
Infrastructure constraints compound the problem. Government organizations may also struggle to obtain the graphics processing units (GPUs) needed to train and run complex AI models. As Xiao points out, “Government doesn’t often purchase GPUs, unlike the private sector—they’re not used to managing GPU infrastructure. So accessing a GPU to run the model is a bottleneck for much of the public sector.”
A smaller, more practical model
These nonnegotiable requirements make large language models (LLMs) untenable for much of the public sector. But SLMs can be housed locally, offering greater security and control. SLMs are specialized AI models that typically use billions rather than hundreds of billions of parameters, making them far less computationally demanding than the largest LLMs.
The public sector does not need to build ever-larger models housed in offsite, centralized locations. An empirical study found that SLMs can perform as well as or better than LLMs on specialized tasks. SLMs allow sensitive information to be used effectively and efficiently while avoiding the operational complexity of maintaining large models. Xiao puts it this way: “It is easy to use ChatGPT to do proofreading. It’s very difficult to run your own large language models just as smoothly in an environment with no network access.”
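To make that contrast concrete, here is a minimal sketch of fully local inference using the Hugging Face transformers library, assuming an SLM’s weights have already been copied to an on-premises directory; the path and prompt are hypothetical, and nothing leaves the machine at run time.

```python
# A minimal sketch of fully local SLM inference: with local_files_only=True,
# no network access is attempted at run time.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/agency-slm"  # hypothetical path to pre-downloaded weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize this procurement memo in three sentences:\n..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```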
SLMs are purpose-built for the needs of the department or agency that will use them. The data is stored securely outside the model and accessed only when queried. Carefully engineered prompts ensure that only the most relevant information is retrieved, producing more accurate responses. Using methods such as smart retrieval, vector search, and verifiable source grounding, agencies can build AI systems that cater to public sector needs.
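The sketch below, written in plain numpy, shows the retrieve-then-ask pattern this describes: documents stay outside the model, and only the passages most relevant to a query are placed in the prompt. The embed() function is a stand-in for a real local embedding model, and the documents are invented examples.

```python
# Retrieval sketch: keep documents outside the model, retrieve only the most
# relevant passages, and assemble a grounded prompt for the local SLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding; a real encoder maps similar texts to nearby vectors."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

documents = [  # invented examples of agency records
    "Procurement rules for IT contracts above the national threshold ...",
    "Minutes of the March infrastructure committee meeting ...",
    "Data-retention policy for citizen service records ...",
]
doc_vectors = np.stack([embed(d) for d in documents])  # index built once

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)  # cosine similarity on unit vectors
    top = np.argsort(scores)[::-1][:k]   # indices of the k best matches
    return [documents[i] for i in top]

query = "What are the retention rules for citizen records?"
context = "\n".join(retrieve(query))
prompt = f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"
# `prompt` would now go to the locally hosted SLM for generation.
```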
Thus, the next phase of AI adoption in the public sector may be to bring the AI tool to the data, rather than sending the data out into the cloud. Gartner predicts that by 2027, small, specialized AI models will be used three times more than LLMs.
Superior search capabilities
“When people in the public sector hear AI, they probably think about ChatGPT. But we can be much more ambitious,” says Xiao. “AI can revolutionize how the government searches and manages the large amounts of data they have.”
Looking beyond chatbots reveals one of AI’s most immediate opportunities: dramatically improved search. Like many organizations, the public sector has mountains of unstructured data, including technical reports, procurement documents, meeting minutes, and invoices. Today’s AI can deliver results sourced from mixed media, such as readable PDFs, scans, images, spreadsheets, and recordings, and in multiple languages. All of this can be indexed by SLM-powered systems to provide tailored responses and to draft complex texts in any language, while ensuring outputs are legally compliant. “The public sector has a lot of data, and they don’t always know how to use this data. They don’t know what the possibilities are,” says Xiao.
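As a sketch of what such an index could look like in practice, the example below uses the official Elasticsearch Python client against an assumed on-premises 8.x node; the index name, fields, and 384-dimension embeddings are illustrative assumptions, and embed() is the same kind of stand-in encoder as in the earlier sketch.

```python
# Hedged sketch: index heterogeneous documents with dense-vector embeddings in
# a local Elasticsearch 8.x node, then run a kNN (semantic) search over them.
import numpy as np
from elasticsearch import Elasticsearch

def embed(text: str) -> list[float]:
    """Stand-in encoder; a real deployment would use a local embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return (v / np.linalg.norm(v)).tolist()

es = Elasticsearch("http://localhost:9200")  # assumed on-premises node

es.indices.create(
    index="agency-docs",  # hypothetical index name
    mappings={"properties": {
        "title": {"type": "text"},
        "language": {"type": "keyword"},
        "embedding": {"type": "dense_vector", "dims": 384,
                      "index": True, "similarity": "cosine"},
    }},
)

es.index(index="agency-docs", document={
    "title": "2024 invoice register", "language": "sv",
    "embedding": embed("2024 invoice register ..."),
})

hits = es.search(
    index="agency-docs",
    knn={"field": "embedding", "query_vector": embed("unpaid invoices"),
         "k": 5, "num_candidates": 50},
    source=["title", "language"],
)
print([h["_source"]["title"] for h in hits["hits"]["hits"]])
```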
Even more powerful, AI can help government employees interpret the data they access. “Today’s AI can provide you with a completely new view of how to harness that data,” says Xiao. A well-trained SLM can interpret legal norms, extract insights from public consultations, support data-driven executive decision-making, and improve public access to services and administrative information. This can contribute to dramatic improvements in how the public sector conducts its operations.
The small-language promise
Focusing on SLMs shifts the conversation from how comprehensive the model can be to how efficient it is. LLMs incur significant performance and computational costs and require specialized hardware that many public entities cannot afford. Although SLMs still require some capital expenditure, they are less resource-intensive than LLMs, so they tend to be cheaper to run and carry a smaller environmental footprint.
Public sector agencies often face stringent audit requirements, and SLM algorithms can be documented and certified as transparent. Some countries, particularly in Europe, also have privacy regulations such as GDPR that SLMs can be designed to meet.
Tailored training data produces more targeted results, reducing the errors, bias, and hallucinations to which AI is prone. As Xiao puts it, “Large language models generate text based on what they were trained on, so there is a cut-off date when they were trained. If you ask about anything after that, it will hallucinate. We can solve this by forcing the model to work from verified sources.”
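A minimal sketch of that source-grounding step follows; the verified excerpts, reference labels, and refusal string are illustrative assumptions, and the assembled prompt would be sent to the locally hosted SLM.

```python
# Grounding sketch: constrain the model to dated, verified excerpts and give
# it an explicit refusal path when the sources do not contain the answer.
verified_sources = [  # hypothetical excerpts with citable references
    ("Regulation 2024/17, art. 4", "Agencies must log all automated decisions."),
    ("IT policy memo, 2025-02-10", "Model outputs require human sign-off."),
]

def grounded_prompt(question: str) -> str:
    cited = "\n".join(f"[{ref}] {text}" for ref, text in verified_sources)
    return (
        "Answer using ONLY the sources below, citing the reference in "
        "brackets. If the sources do not contain the answer, reply "
        "'Not found in verified sources.'\n\n"
        f"{cited}\n\nQuestion: {question}"
    )

print(grounded_prompt("Who must approve model outputs?"))
# Because the answer is anchored to material supplied at query time, the
# model's training cut-off no longer limits what it can answer about.
```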
Risks are also minimized by keeping data on local servers, or even on a specific device. This isn’t about isolation but about strategic autonomy to enable trust, resilience, and relevance.
By prioritizing task-specific models designed for environments that process data locally, and by continuously monitoring performance and impact, public sector organizations can build lasting AI capabilities that support real-world decisions. “Do not start with a chatbot; start with search,” Xiao advises. “Much of what we think of as AI intelligence is really about finding the right information.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
Background: Social cognition is increasingly recognized as part of the non-motor phenotype of essential tremor (ET). Available ET evidence suggests selective alterations in some socio-cognitive domains, whereas findings on self-reported empathy and alexithymia remain limited and inconsistent. Objectives: This cross-sectional study aimed to evaluate empathy and alexithymia in patients with ET compared with healthy controls (HC), and to explore their associations with global cognition and with each other. Methods: Forty ET patients and 40 HC underwent the Italian versions of the Montreal Cognitive Assessment (MoCA), the short Empathy Quotient (EQ-short), and the Toronto Alexithymia Scale (TAS-20). Results: ET patients had significantly lower MoCA scores than HC (22.1 ± 4.1 vs. 25.3 ± 3.2, p<0.001), whereas no between-group differences emerged for EQ-short or TAS-20 scores. In ET, MoCA was not significantly associated with empathy or alexithymia measures. In HC, higher MoCA scores were associated with greater emotional reactivity. Exploratory bivariate analyses suggested inverse associations between social skills and alexithymia in ET, but only the adjusted ET models remained significant. Conclusion: Our findings do not support a group-level deficit in self-reported empathy or alexithymia in ET. Rather, they suggest that socio-emotional functioning may be largely preserved at the group level, while the relationship between social skills and emotional self-description may differ in ET.
Background: The integration of artificial intelligence (AI) into clinical practice is contingent on public trust. This trust often depends on physician oversight, yet a significant gap exists between the need for AI-competent physicians and the current state of medical education. While the perspectives of students and experts on this gap are known, the views of the US general public remain largely unquantified. Objective: This study aimed to assess US public perceptions regarding AI in medicine and the corresponding emergent needs for medical education. We specifically sought to quantify public trust in different diagnostic scenarios, concerns about physician overreliance on AI, support for mandatory AI education, and priorities for the future focus of medical training. Methods: We conducted a cross-sectional, web-based survey of adults in the United States in November 2025. Participants (N=524) were recruited via SurveyMonkey Audience. We calculated descriptive statistics, frequencies, proportions (percentages), and 95% CIs for all main survey items. Results: A total of 524 participants completed the survey. Most (n=329, 62.8%; 95% CI 58.6%‐66.9%) placed the most trust in a physician’s diagnosis based on their expertise alone; only 7.8% (n=41; 95% CI 5.5%‐10.1%) trusted an AI-first diagnostic model. Trust was highly contingent on training: 93.9% (n=492) of participants rated formal physician training on AI limitations as “essential” or “very important.” Widespread concern about physician overreliance on AI was reported, with 81.1% (n=425) being “very concerned” or “extremely concerned.” Consequently, 85.1% (n=446) agreed or strongly agreed that training on AI use, ethics, and limitations should be mandatory in medical school. When asked about future educational priorities, 70.2% (n=368; 95% CI 66.3%‐74.1%) believed that medical education should focus on human-centered skills (eg, empathy and communication) over clinical skills. Conclusions: The US public expressed conditional trust in medical AI, strongly preferring physician-led and critically supervised models. These findings reveal a clear public mandate for medical education reform. The public expects future physicians to be mandatorily trained to appraise AI, understand its limitations, and refocus their professional development on the human-centered skills that technology cannot replace.
Military chaplaincy is an established yet multifaceted practice within military organizations and is exposed to particular stressors such as the use of violence, ethical dilemmas, loss, and existential vulnerability. This study examines how a Swedish normative framework for Military Soul Care (ACCES: advisory role, command and crisis support, ceremonies, education, and soul care conversations) interacts with Swedish military chaplains’ own experiences of what they perceive as most important and meaningful in their mission. The empirical material consists of qualitative questionnaire data collected in 2025 from 50 military chaplains. The material was analyzed using an abductive approach and organized thematically. The results show that conversations constitute the task to which the greatest amount of time is devoted across both main categories of military chaplains, and that conversations are understood broadly, ranging from informal everyday interactions to confidential individual soul care conversations. Various forms of ceremonies and crisis support related to death and grief were experienced as particularly meaningful and reflect a clearly articulated priestly identity. Educational tasks varied between categories, with time constraints and organizational priorities limiting opportunities depending on context. A central finding is that presence within the organization, aimed at building relationships and trust, emerges as a decisive prerequisite and contributes to many chaplains working beyond their contracted hours. The importance of presence is not explicitly articulated in the ACCES framework but rather permeates the mission implicitly. Against the backdrop of a changed security environment, the findings illustrate that ecclesial priestly competencies related to crisis response, death, grief, and funeral expertise constitute a particularly vital resource in situations of crisis and war.