
Healthcare AI isn’t a tech problem—it’s a mirror reflecting how our health system already fails. Uncomfortable truths from Datapalooza 2025.
Summary
We’re asking the wrong questions about AI in healthcare. Instead of debating whether it’s good or bad, we need to examine the system-eating-its-tail contradictions we’ve created: locking away vital data so AI learns from everything except what matters most, demanding transparency from inherently secretive companies, and fearing tools could make us lazy instead of more capable. Privacy teams protect data, tech companies build tools, regulators write rules—everyone’s doing their part, but no one steps back to see the whole dysfunctional picture. AI in healthcare isn’t a technology problem; it’s a mirror reflecting how our health system already falls short with privacy rules that hinder progress, design processes that exclude patients, and institutions that fear transparency more than mediocrity. The real question is whether we’re brave enough to fix these underlying problems that AI makes impossible to ignore.
Click here to view the printable newsletter with images. It's more readable than the transcript, which you can also find below.
Please comment and ask questions:
- at the comment section at the bottom of the show notes
- on LinkedIn
- via email
- YouTube channel
- DM on Instagram or TikTok to @healthhats
Production Team
- Kayla Nelson: Web and Social Media Coach, Dissemination, Help Desk
- Leon van Leeuwen: editing and site management
- Oscar van Leeuwen: video editing
- Julia Higgins: Digital marketing therapy
- Steve Heatherington: Help Desk and podcast production counseling
- Joey van Leeuwen, Drummer, Composer, and Arranger, provided the music for the intro, outro, proem, and reflection
- Claude, Perplexity, Auphonic, Descript, Grammarly, DaVinci
Podcast episode on YouTube
Inspired by and Grateful to:
Christine Von Raesfeld, Mike Mittleman, Ame Sanders, Mark Hochgesang, Kathy Cocks, Eric Kettering, Steve Labkoff, Laura Marcial, Amy Price, Eric Pinaud, Emily Hadley.
Links and references
Academy Health’s Datapalooza 2025 Innovation Unfiltered: Evidence, Value, and the Real-World Journey of Transforming Health Care
Tableau, a visual analytics platform
Practical AI in Healthcare podcast hosted by Steven Labkoff, MD
Episode
Proem
Here’s the thing about AI in healthcare: it’s like that friend who offers to help you move, then shows up with a sports car. The Iron Woman meant well, but it doesn’t quite meet your actual needs. I spent September 5th at Academy Health’s 2025 Datapalooza conference about AI in healthcare, ‘Innovation Unfiltered: Evidence, Value, and the Real-World Journey of Transforming Health Care.’ Datapalooza is Academy Health’s strongest conference for people with lived experience. I’m grateful to Academy Health for the press pass that enabled me to attend.
I talked to attendees about how they use AI in their work and what keeps them up at night about AI. I recorded some of those conversations and the panels I attended. When I listened to the raw footage, I heard terrible recordings filled with crowd noise and loud table chatter, like dirty water spraying out of a firehose. Aghast, I thought, what is the story here? I was stumped. How can I make sense of this? I had to deliver something.
So, here’s how I use AI in my work as a podcaster/vlogger. I used the Auphonic app to clean up the audio and remove noise, and then the Descript app to create transcripts of all the recordings. I went into my Claude podcast Project (a Project is an ongoing thread with everything I’ve done with Claude for my podcast over the past three months). I attached the transcripts and prompted the AI platform to identify themes. OK, that was helpful, but dull. So, I prompted Claude to think like a tech-savvy teen with a sense of humor. Eureka! Now we’re getting somewhere. I edited heavily and then prompted Claude to identify clips of speakers that illustrated the themes. I used the Perplexity app for research. Finally, I did the last written edit with a polish from the Grammarly app.
For audio, I returned to the Descript app, found the recommended clips, and extracted them. Then I recorded a video of myself, again using Descript. Compilation editing of the video was done with the DaVinci app. I should give production credit to Auphonic, Claude, Descript, Grammarly, Perplexity, and DaVinci.
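I do all of this through the apps’ point-and-click interfaces, but if you prefer to script a step like theme identification, here’s a minimal sketch using the Anthropic Python SDK. It’s an illustration only, not what I actually ran: the transcript file name, model alias, and prompt wording are placeholders of my own.

```python
import anthropic

# The SDK reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Placeholder file: a single text file holding the cleaned-up transcripts.
with open("datapalooza_transcripts.txt") as f:
    transcripts = f.read()

# Ask the model to pull out recurring themes with illustrative quotes,
# in a lighter voice, the same request I made in my Claude Project.
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # substitute whatever model is current
    max_tokens=1500,
    messages=[
        {
            "role": "user",
            "content": (
                "You are helping edit a health-policy podcast. Read these "
                "conference interview transcripts and identify the recurring "
                "themes, each with one or two illustrative quotes. Write in "
                "the voice of a tech-savvy teen with a sense of humor.\n\n"
                + transcripts
            ),
        }
    ],
)

# Print the draft themes, ready for heavy human editing.
print(message.content[0].text)
```

Either way, the output is a starting point, not a finished script; the heavy editing still belongs to the human.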
Paradox, Irony, Catch-22
Datapalooza 2025 showcased the health and care industry’s intense focus on Artificial Intelligence, whatever that means. My podcast acts as a Rosetta Stone to share the excitement of what I learn and deem important in my journey toward best health. How can we use AI safely? Let’s jump in with some lessons I learned.
Burying the Treasure to Keep It Safe
There’s a Data Privacy Paradox. The very health data that could benefit most from AI faces the most restrictions. Sushmita Macheri works with Medicare/Medicaid data—information about some of our most vulnerable populations—but can’t use AI to identify errors that could improve their care. Meanwhile, commercial entities are freely training AI on whatever data they can scrape. Therefore, the most sensitive and valuable healthcare data remains locked away while AI trains on potentially biased and unrepresentative information.
Sushmita Macheri: I work with healthcare, Medicare, and Medicaid data. I would like to upload the data so I can understand what errors I’m getting, but I’m unable to do that due to the restrictions we have at work. So, if I were able to upload one, let’s say, like a file that I am having errors with.
Health Hats: So, what kind of errors, like missing data, what are the errors that you notice?
Sushmita Macheri: I work with Tableau, mostly. Sometimes, if I’m having issues with a calculated field, I would like to upload that calculated field or the logic behind it in the calculator to try to understand what the error is, but I’m unable to do so. For me, it’s the biggest challenge.
Bias, Treating the Chart, Not the Patient
Bob Stevens points out a harsh irony: AI makes decisions about patients while being trained on data that intentionally excludes patient perspectives. The people most affected by AI decisions had the least input in training the systems. It’s like having a medical advisory board that leaves out doctors and patients, then questioning why the recommendations fail.
Bob Stevens: I am concerned about bias, as I mentioned, and that really worries me for two reasons. First, AI uses all available content, and as patients, we know that patient perspective content has not been well represented. Now, as AI starts making decisions based on this, all the content it has is just what’s available. It’s gathering it all. We haven’t been well represented in that process. So, it’s going to stay biased, right? Without patient information and the patient perspective, that creates a bias.
Bob Stevens: The second type of bias is related to how it’s designed. It’s not being general because it’s a technology, while they’re asking for patient input. There’s also bias in the design process because of who is doing the designing. So, you have two levels. One can be considered intentional, but the other is the accumulation of all this data that is there. We’re not represented in and haven’t been represented in. And how do we change that? The incremental change in the AI dataset is expected to take decades. What bothers me is that we are now relying on AI to assign a label that can then trigger a response or action.
Bob Stevens: That’s a high-risk moment, asking AI to make a decision that’s inherently high-risk. So what AI should always do is say. Here’s what I see. Now consider this when going in. And that brings us to the second part of a PCORnet study that I was involved in, which focused on the ER. And we had our electronic health record, and depending on how certain things, it was called a natural language processing process. And it looked at all these different things, and then based on that, it said, look to this, or looked to that, or looked to the other. It was those AI prompts that were based on the information from the electronic health record, which was then entered into the electronic health record. For that physician in the ER, they would then need to do certain things.
Circular Dependence, Chasing Your Tail
Rolanda Clark hits on something profound: we need expertise to verify AI, yet AI is supposed to democratize expertise. She notes you “still have to educate yourself on how to check the information,” but if you already have that expertise, why do you need AI? And if you don’t have the expertise, how can you verify it? It’s a circular dependency that reveals AI’s limitations rather than its strengths.
Rolanda Clark: So, I’d say with AI, it’s not foolproof. You still have to educate yourself on how to verify the information that’s being presented, and that’s hard to do.
Health Hats: I’ve started saying, ‘What is wrong with your algorithm?’ Correct. And I get some kind of stuff I didn’t think about that makes me want to burrow in more.
Rolanda Clark: But I think that’s imperative. I think you must counter to mitigate this like bullshit.
Health Hats: Because you need to do that with experts anyway, because just because they’re experts in this little thing, they think they’re experts in way more.
Rolanda Clark: Exactly. As a patient advocate, I ask myself, ‘Why am I here?’ And then I realize I have common sense, and I can see when it’s bullshit. I definitely have value regardless. And you’ve had experiences that these organizations often don’t promote or share because they want to highlight all of the good.
Health Hats: So, transparency? That’s an honorable challenge. How do you be transparent, and how do you trust yourself? Push the boundaries of what you’re willing to be transparent about. For me to cross a line, I find it helpful to know who’s behind it, their motivation, how they make money, and what they’ve decided to keep protected. It’s the company’s value that you don’t want to share because it’s the secret sauce. You don’t want to share the recipe. Kentucky Fried Chicken, sure.
It Doesn’t Have to Make Sense
There’s a systemic irony where the most regulated industries that could benefit most from AI innovation are the least able to experiment with it. Healthcare organizations can’t risk HIPAA violations when exploring AI capabilities, so they often fall behind less-regulated sectors. Meanwhile, tech companies with no healthcare expertise are building health and care AI tools. I told my kids that life doesn’t have to make sense.
Grace Cordovana: In my advocacy work, I have the privilege of accompanying patients and their families to the point of care. I’ve been observing this anecdotally, essentially running my own informal study. I notice that when consent is asked for, it creates a really positive experience.
People are excited; patients and families are excited, and the doctor is excited, creating a spark of energy because now we can connect and talk as people. But what I’m noticing is that patients and families are now using their own tools, and they say, ‘That’s great, doc. You hit record, and I’m going to hit record too because I have my own tool.’
All hell breaks loose. Fractured relationship. Wait. We can’t do that. I, that’s not HIPAA compliant. We don’t have, hold on. No, we really can’t do that. I’m sorry. I’m not comfortable. Now, if this is a new patient encounter, it’s a major problem. Think about a new patient encounter where this patient has cancer, and this is their first appointment for a second opinion on an advanced cancer. And that’s how we’re starting off.
Throwing Out the Baby with the Bathwater
Madhu Jalan’s concern about her son reveals another contradiction: we’re afraid AI will make us lazy, so we avoid using tools that could boost our productivity. But this avoidance might actually make us less competitive and adaptable. It’s like refusing to use spell check because you want to be a better speller—you end up writing less, not better.
Madhu Jalan: I have a 17-year-old, and I worry that he won’t learn the skills he needs to get along. Critical thinking, for example, involves writing. I worry that it will just make him lazy and completely redundant. That’s what I fear. He’s 17, so he still needs to learn how to learn and shouldn’t take the lazy way out. That’s what I worry about.
Clear as Mud
A delicious irony is that most people call for AI transparency, yet the most successful AI companies are among the muddiest. We want to understand how AI works, what data it uses, and how it makes decisions—but the companies we think have the best AI are the ones most protective of their “secret sauce.” The transparency advocates have the least power to enforce transparency.
Grace Cordovana: I encourage you if you haven’t read it or haven’t heard about it, take a look at the Light Collective AI Rights for Patients document, and it’s rooted in seven pillars. So, we boiled the ocean down to the crux of what was important for us from a patient’s perspective in that setting. Not just the foundation but looking at what the apex of ethics and the apex of good would look like for the people for whom all of these tools and technologies were being developed.
So, we answered the question, What do patients want, need, expect, and demand? And we landed on patient-led governance.
We committed ourselves to uphold patients’ transparency and self-determination. This includes identity, security, privacy, the right of action, and shared benefits. When you explore the document further, you’ll find all the specific details. I can assure you that each of us carefully reviewed every word and statement, and we all consented to the final version.
Patients can and will do good work, laying foundations that set a precedent for other stakeholders. This approach is designed to work in multi-stakeholder settings. Public-private partnerships, which I will advocate for, should include public, private, and patient collaborations as we envision the future.
Redistricting to Democratize
There’s an ironic class dynamic where AI is supposed to democratize access to capabilities, but it actually requires significant skill to use effectively. It’s like learning a foreign language with characters that change shape with the weather. The people who most need AI help (like Yvonne McLean Florence learning scientific terms) are least able to verify its accuracy or notice its biases. Meanwhile, those most capable of using AI responsibly (like Madhu) are the most worried about its risks.
Yvonne McLean Florence: We attend specific conferences, and they put us in groups. We have to conduct research and learn new terms, so I use it for that. Help me understand the different scientific terminologies, as I don’t have a strong science background. Not that I have to be a professional when we’re at these conferences, but you do go there to learn.
Humanize Through the Looking Glass
Bob Stevens’ “Yogi the AI” example, in the clip below, reveals a stark irony: we humanize AI to make it seem safer while dehumanizing the process it’s meant to support. The committee gives the AI a cute mascot name and treats it like a team member, but the underlying process reduces complex human problems to data patterns and algorithmic responses.
Bob Stevens: What they’ve done with the AI is that the AI person sits there as a member of the committee, and they actually name them and interact with them as if they’re sitting at the computer, as if they’re present. They usually do something silly. The name of the school is Bucknell Bears. So, if they’re the Bears, then they would end up naming the AI Yogi, and it becomes a real person sitting in that meeting, providing that input. But the key is, they’re just one person on the team. What is AI? Is it one person on the team?
Driving While Looking into the Rearview Mirror
There’s an irony in the way we’re using 21st-century AI to perpetuate 20th-century biases. As Bob noted earlier, changing the training data will “take decades,” meaning that today’s AI systems will continue to reflect historical inequities even as they generate the predictions used to make forward-looking decisions about health, care, education, and social services.
A Million Interns Working for You
Finally, there’s Rolanda Clark’s observation about having “a million interns”—AI gives us unprecedented capability while making us more dependent. She can now act on creative ideas immediately, but what happens when the AI isn’t available? We gain productivity but lose resilience.
Rolanda Clark: I feel it’s so advantageous that I can come up with a thought, and that becomes the catalyst for so much more. Years ago, when I had these wonderful inklings, it was more imaginative because I usually didn’t have the time or energy to go full throttle. Now, you have a million interns digging for you.
What Keeps Me Up at Night About AI?
I should answer my own question. At night, my apocalyptic mind worries about the corporatization of AI, the enormous energy and water use that AI server farms require, and the potential collapse of our financial stability when the AI bubble bursts.
Reflection
We’re asking the wrong questions about AI in healthcare. Instead of debating whether it’s good or bad, safe or dangerous, we should examine the contradictions we’ve created: We lock away vital health data to protect people, so AI learns from everything except what matters most. We demand transparency from companies built on keeping secrets. We avoid tools that could make us more thoughtful because we fear becoming lazy.
People I talked to see some of these problems clearly. Sushmita can’t use AI to catch errors that could help vulnerable patients. Bob knows the bias is baked in from the start. Grace sees the HIPAA panic when patients want recording tools that their doctors already use. Rolanda understands that verifying AI requires expertise, undermining its promise of democratization.
So why aren’t we fixing this? Because each problem seems like someone else’s job. Privacy teams protect data. Tech companies build tools. Regulators write rules. Everyone’s doing their part, but no one is stepping back to realize, “Wait—the whole system is eating its own tail.”
AI in healthcare isn’t a technology problem. It’s a reflection of how our health system already falls short—privacy rules hinder progress, design processes exclude patients, and institutions fear transparency more than mediocrity.
The question isn’t whether we should use AI in healthcare. It’s whether we’re brave enough to fix the underlying problems that AI makes impossible to ignore.
See you at the PCORI 2025 Annual Meeting and the Camden Coalition’s Putting Care at the Center 2025, where I’ll keep asking uncomfortable questions.
Related episodes from Health Hats
Artificial Intelligence in Podcast Production
Health Hats, the Podcast, utilizes AI tools for production tasks such as editing, transcription, and content suggestions. While AI assists with various aspects, including image creation, most AI suggestions are modified. All creative decisions remain my own, with AI sources referenced as usual. Questions are welcome.
Creative Commons Licensing
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms. CC BY-NC-SA includes the following elements:
BY: credit must be given to the creator. NC: Only noncommercial uses of the work are permitted.
SA: Adaptations must be shared under the same terms.
Please let me know: danny@health-hats.com. Material on this site created by others is theirs, and use follows their guidelines.
Disclaimer
The views and opinions presented in this podcast and publication are solely my responsibility and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute® (PCORI®), its Board of Governors, or Methodology Committee. Danny van Leeuwen (Health Hats)