Writing Team | Oct 3, 2025 | 3 min read
We've spent years watching healthcare stumble through AI adoption like a drunk uncle at Thanksgiving—enthusiastic, well-meaning, but utterly unprepared for what comes next. The American Association of Physicists in Medicine (AAPM), American College of Radiology (ACR), Radiological Society of North America (RSNA), and Society for Imaging Informatics in Medicine (SIIM) just did something rare in medicine: they got ahead of the curve.
Their newly published framework, "Teaching AI for Radiology Applications: A Multisociety-Recommended Syllabus," drops this week across Medical Physics, Radiology: Artificial Intelligence, and the Journal of Imaging Informatics in Medicine. It's open-access, which means no paywalls between radiologists and the education that could determine whether AI becomes their ally or their replacement.
Here's the uncomfortable truth: most radiologists can read a CT scan better than they can evaluate an AI algorithm's training data. That's not a knock on their clinical expertise—it's an indictment of how medical education has ignored the single biggest technological shift since digital imaging itself.
The framework tackles this by segmenting education across four stakeholder groups: users applying AI in clinical workflows, purchasers evaluating technologies, clinical collaborators guiding development, and developers building algorithms. This isn't academic navel-gazing—it's recognition that AI literacy can't be one-size-fits-all when the stakes include patient outcomes and professional relevance.
Consider the purchasing role alone. A 2024 JACR study found that 68% of radiology departments had acquired at least one AI tool, but fewer than 30% had formal evaluation frameworks for algorithm performance in their specific patient populations. We're buying AI like we're shopping for lab coats—based on vendor promises rather than empirical fit.
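A formal evaluation framework doesn't have to be exotic. At minimum, it means running the candidate tool against radiologist-confirmed cases from your own population and checking whether the vendor's headline numbers survive contact with your data. Here's a minimal sketch of that kind of check; the case labels, vendor figure, and thresholds are hypothetical placeholders, not anything published by these societies:

```python
import math

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity from binary ground truth and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval, so a small local sample isn't over-trusted."""
    if n == 0:
        return float("nan"), float("nan")
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - margin, center + margin

# Hypothetical local validation set: radiologist-confirmed labels vs. the tool's output.
ground_truth = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
ai_output    = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

sens, spec = sensitivity_specificity(ground_truth, ai_output)
n_positive = sum(ground_truth)
low, high = wilson_interval(round(sens * n_positive), n_positive)

VENDOR_CLAIMED_SENSITIVITY = 0.95  # hypothetical number from the sales deck

print(f"Local sensitivity {sens:.2f} (95% CI {low:.2f}-{high:.2f}), specificity {spec:.2f}")
if VENDOR_CLAIMED_SENSITIVITY > high:
    print("Vendor claim exceeds the local confidence interval; ask for more evidence before buying.")
```

Even a toy version like this surfaces the right question: not "does the algorithm work," but "does it work here."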
What makes this framework notable isn't just what it teaches, but how it structures that teaching. The syllabus is deliberately flexible, allowing institutions to adapt content while maintaining consistent instruction on fundamentals: algorithm basics, clinical integration, regulatory compliance, and ethics.
That flexibility matters because radiology practices range from academic medical centers with dedicated AI teams to rural hospitals where one radiologist covers everything from chest X-rays to interventional procedures. Cookie-cutter education fails in environments that diverse.
The collaboration itself—four major societies coordinating across three journals—signals something we don't see often enough in healthcare: consensus that education can't wait for perfect knowledge. The alternative is radiologists learning AI through vendor sales pitches and Reddit threads, which is roughly where we've been operating until now.
Here's where it gets uncomfortable: this framework tacitly admits that most practicing radiologists weren't trained for the field they're now working in. That's not unique to medicine—ask any marketing director if their 2015 education prepared them for running AI-powered campaigns in 2025—but in healthcare, the gap between training and practice can have consequences beyond blown budgets.
The framework's emphasis on regulatory issues and ethical considerations is particularly sharp. FDA clearance doesn't mean an algorithm works well in your institution with your patient demographics. A 2023 Nature Medicine paper found that skin cancer detection algorithms trained predominantly on lighter skin tones suffered significant performance degradation when deployed in more diverse populations. Understanding those failure modes isn't optional—it's the difference between good medicine and algorithmic malpractice.
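Catching that kind of failure doesn't require exotic tooling, but it does require stratifying your evaluation by the populations you actually serve instead of trusting one pooled accuracy number. A minimal sketch of that audit follows; the subgroup names, records, and 10-point threshold are illustrative assumptions, not figures from the Nature Medicine paper or the syllabus:

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, ground-truth label, model prediction).
records = [
    ("lighter_skin", 1, 1), ("lighter_skin", 1, 1), ("lighter_skin", 0, 0),
    ("lighter_skin", 1, 1), ("lighter_skin", 0, 0), ("lighter_skin", 1, 1),
    ("darker_skin",  1, 0), ("darker_skin",  1, 1), ("darker_skin",  0, 0),
    ("darker_skin",  1, 0), ("darker_skin",  0, 1), ("darker_skin",  1, 1),
]

def sensitivity_by_subgroup(rows):
    """True-positive rate per subgroup: the number that pooled averages hide."""
    counts = defaultdict(lambda: {"tp": 0, "pos": 0})
    for group, truth, pred in rows:
        if truth == 1:
            counts[group]["pos"] += 1
            counts[group]["tp"] += int(pred == 1)
    return {g: c["tp"] / c["pos"] for g, c in counts.items() if c["pos"]}

DEGRADATION_THRESHOLD = 0.10  # illustrative cutoff: flag gaps bigger than 10 points

by_group = sensitivity_by_subgroup(records)
best = max(by_group.values())
for group, sens in sorted(by_group.items()):
    flag = "  <-- degraded, review before deployment" if best - sens > DEGRADATION_THRESHOLD else ""
    print(f"{group}: sensitivity {sens:.2f}{flag}")
```

The point isn't the code; it's the habit of asking where the pooled metric falls apart.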
The clinical collaborator role is equally vital. These are the radiologists who work alongside developers to ensure algorithms actually solve real problems rather than impressing VCs. Their competencies include translating clinical needs into technical requirements and evaluating algorithm outputs for clinical plausibility. In other words: making sure AI tools work in the messy reality of medicine, not just in perfectly curated datasets.
If you're reading this thinking "what does radiology education have to do with my content strategy," you're missing the point. Every industry adopting AI faces identical challenges: users who don't understand the tools, purchasers who can't evaluate them, collaborators who can't guide development, and developers who don't understand the domain.
The difference is that radiology's professional societies recognized the problem and built infrastructure before the crisis hits. Marketing is still in the "let everyone figure it out" phase, which explains why 70% of marketing AI investments fail to scale beyond pilot programs, according to Gartner's 2024 Marketing Technology Survey.
We could use a framework like this. Four stakeholder groups. Clear competencies. Flexible implementation. Open access. Instead, we're letting vendors define AI literacy through webinars designed to sell software.
Want help building AI competency across your marketing organization—not just buying more tools? Winsome Marketing's growth experts work with CMOs and marketing leaders to develop practical AI literacy that drives results, not just adoption metrics. Let's talk about what AI education actually looks like when you care about outcomes.