Newsfeed Curation SNS Dashboard Journal

Health-care AI is here. We don’t know if it actually helps patients.

MIT Technology Review AI | ⚡ AI Services
#ai #doctors #health-care #hospitals #patients #review

Summary

AI tools are spreading rapidly through hospitals: ambient "AI scribes" transcribe and summarize doctor-patient conversations, predictive systems trawl patient records to flag people who may need support, and other tools interpret exam results and X-rays. In a paper in Nature Medicine, Jenna Wiens (University of Michigan) and Anna Goldenberg (University of Toronto) argue that although a growing number of studies show many of these tools deliver accurate results, providers are rarely assessing whether using them actually improves patient health outcomes. A January 2025 study led by Paige Nong found that around 65% of US hospitals used AI-assisted predictive tools, yet only two-thirds of those evaluated their accuracy, and even fewer checked for bias. Wiens does not want to halt adoption; she wants rigorous, setting-specific evaluation of how these tools affect clinical decision-making and patient care.

Why It Matters

Developer Perspective

Under review

Researcher Perspective

Under review

Business Perspective

Under review

Full Text

I don’t need to tell you that AI is everywhere. Or that it is being used, increasingly, in hospitals. Doctors are using AI to help them with notetaking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also used to interpret medical exam results and X-rays. A growing number of studies suggest that many of these tools can deliver accurate results. But there’s a bigger question here: Does using them actually translate into better health outcomes for patients? We don’t yet have a good answer.

That’s what Jenna Wiens, a computer scientist at the University of Michigan, and Anna Goldenberg of the University of Toronto, argue in a paper published in the journal Nature Medicine this week. Wiens tells me she has spent years investigating how AI might benefit health care. For the first decade of her career she tried to pitch the technology to clinicians. Over the last few years, she says, it’s as though “a switch flipped.” Health-care providers not only appear much more interested in the promise of these technologies, they have also begun rapidly deploying them. The problem is that many providers aren’t rigorously assessing how well they actually work.

Take “ambient AI” tools, for example. Also known as AI scribes, they “listen” to conversations between doctors and patients, then transcribe and summarize them. Multiple tools are available, and they are already being widely adopted by health-care providers. A few months ago, a staffer at a major New York medical center who develops AI tools for doctors told me that, anecdotally, medics are “overjoyed” by the technology—it allows them to focus all their attention on their patients during appointments, and it saves them from a lot of time-consuming paperwork. Early studies support these anecdotes and suggest that the tools can reduce clinician burnout. That’s all well and good. But what about patient health outcomes?

“[Researchers] have evaluated provider or clinician and patient satisfaction, but not really how these tools are affecting clinical decision-making,” says Wiens. “We just don’t know.” The same holds true for other AI-based technologies used in health-care settings. Some are used to predict patients’ health trajectories, others to recommend treatments. They are designed to make health care more effective and efficient. But even a tool that is “accurate” won’t necessarily improve health outcomes. AI might speed up the interpretation of a chest X-ray, for example. But how much will a doctor rely on its analysis? How will that tool affect the way a doctor interacts with patients or recommends treatment? And ultimately: What will this mean for those patients?

The answers to those questions might vary between hospitals or departments and could depend on clinical workflows, says Wiens. They might also differ between doctors at various stages of their careers. Take the AI scribes, as another example. Some research on AI use in education suggests that such tools can impact the way people cognitively process information. Could they affect the way a doctor processes a patient’s information? Will the tools affect the way medical students think about patient data in a way that impacts care? These questions need to be explored, says Wiens. “We like things that save us time, but we have to think about the unintended consequences of this,” she says.

In a study published in January 2025, Paige Nong at the University of Minnesota and her colleagues found that around 65% of US hospitals used AI-assisted predictive tools. Only two-thirds of those hospitals evaluated their accuracy. Even fewer assessed them for bias. The number of hospitals using these tools has probably increased since then, says Wiens. Those hospitals, or entities other than the companies developing the tools, need to evaluate how much they help in specific settings.

There’s a possibility that they could leave patients worse off, although it’s more likely that AI tools just aren’t as beneficial as health-care providers might assume they are, says Wiens. “I do believe in the potential of AI to really improve clinical care,” says Wiens, who stresses that she doesn’t want to stop the adoption of AI tools in health care. She just wants more information about how they are affecting people. “I have to believe that in the future it’s not all AI or no AI,” she says. “It’s somewhere in between.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
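The survey figures cited above imply a striking back-of-envelope number. This is a minimal sketch, not part of the study itself, that simply combines the two reported percentages (around 65% of hospitals using predictive AI tools, and only about two-thirds of those evaluating accuracy) to estimate the share of all hospitals that evaluate the tools they deploy:

```python
# Figures reported from the Nong et al. (January 2025) survey cited above:
using_ai = 0.65        # share of US hospitals using AI-assisted predictive tools
evaluating = 2 / 3     # share of those hospitals that evaluated tool accuracy

# Implied share of ALL surveyed hospitals that both use and evaluate the tools
share_evaluating_overall = using_ai * evaluating
print(f"{share_evaluating_overall:.0%} of all hospitals evaluated accuracy")
# roughly 43% — i.e., fewer than half of hospitals overall
```

In other words, under these figures, a majority of hospitals either don't use predictive AI tools or use them without evaluating their accuracy, which is the gap Wiens and Goldenberg highlight.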

Related Journal Reads

View All →