
Healthcare AI for Humans: Governance, Research, and Rights


Data scientist Emily Hadley on navigating AI in healthcare, offering practical advice for maintaining patient agency amid algorithmic decision-making.

Summary

This interview with data scientist Emily Hadley examines the intersection of artificial intelligence and healthcare through a deeply personal lens. Hadley’s journey began when her own health diagnosis coincided with her graduate studies in analytics, revealing how algorithm-driven systems often affect patient care—especially through insurance claim denials and clinical documentation. The conversation offers practical guidance for patients navigating AI-influenced healthcare, including reviewing AI-generated clinical notes for accuracy, challenging algorithmic insurance decisions, and insisting on human intervention when automated systems fail. Hadley advocates for preserving patient agency and rights within increasingly automated systems while highlighting how algorithm review boards are striving to provide governance in this largely unregulated space. The interview concludes with resources for staying informed about developments in healthcare AI, emphasizing that while AI tools are rapidly advancing, patient advocacy remains vital.

Please comment and ask questions:

Production Team

You know who you are. I’m grateful.

Podcast episode on YouTube

No video

Inspired by and Grateful to

Eric Pinaud, Laura Marcia, Amy Price, Dave deBronkart

Links and references

Prompt Engineering

Algorithm Review Boards at RTI

Dave deBronkart’s Patients Use AI

Episode

Proem

This year, I switched from Medicare Advantage to Traditional Medicare. I still needed to purchase a supplemental commercial plan to cover what Medicare Part B didn’t. However, the supplemental commercial plan denied some services the previous Medicare Advantage plan covered. Why? What algorithms did each plan use to determine coverage? How can I manage this?

Welcome to the third installment of Artificial Intelligence Can Work for You. We’ve explored how I use AI in my podcast productions and delved into some AI basics with Info-Tech leader Eric Pinaud.

I asked Emily Hadley, a data scientist at RTI specializing in AI algorithms for insurance coverage decisions, to join us. Early in her graduate studies, Emily was diagnosed with Crohn’s disease. This led to her interest in studying insurance algorithms.

A Data Scientist Awakes

Health Hats: How did you gain expertise in AI?

Emily Hadley: Great question. I was diagnosed right as I started a graduate program in analytics. In my undergraduate studies, I studied statistics and public policy. I liked the idea of using data to shape how policymakers make decisions, especially in the US. I had done some work with AmeriCorps and then went to grad school to really hone those skills. Being diagnosed at the same time that I was in grad school meant that I was navigating new territory that turned out to be informative and educational. And I think that’s when I really came to realize the power of data and the power of AI in shaping the way that organizations and people make decisions. We live in a really algorithm-fueled society. We constantly encounter technology and AI systems, even when we don’t realize it.

An example I give is that I’ve faced many problems getting insurance to cover the things it is supposed to. I didn’t realize until a couple of years ago that this is because many insurers have embraced algorithm-driven decision-making systems that often automatically deny coverage for services that should be included. Instead, they might say they don’t cover it because the appropriate code was not included in the billing. So, the insurer claims, “Oh, we don’t cover that because the code was missing,” even though it should have been included. I feel as though I’ve been a victim of some of these automated systems, which have significantly impacted my life and pushed me to understand that these AI systems are not hypothetical. We live with them every day, and we don’t have a lot of insight into them as consumers or citizens. That really pushed me into this responsible AI space of thinking: how do we develop and use algorithms that align with how people would treat each other? Not necessarily how algorithms and robots would treat each other.

Health Hats: Are you saying that this is a way to be more transparent about what’s in the algorithms?

Emily Hadley: That’s a piece of it.

Building Guardrails with AI Governance

Health Hats: In something you sent me to educate me more about what you’re doing, you talked about algorithm review boards, and I was trying to picture them. Who’s around the table? Can you tell us a little bit about what an algorithm review board is? Is it real? Is it theoretical?

Emily Hadley: Yeah, I’ll launch right into it. I’ve been passionate about and interested in this since I saw more companies embrace AI, especially in the United States, which doesn’t have laws to guide how companies, academic institutions, nonprofits, and government organizations use AI. Certainly, some legislation and rulemaking is probably coming, but in its absence, organizations need to decide how they will manage AI from a risk perspective. This includes reputational risks to the organization, its customers, and the population at large. Also, from an equity and justice perspective: how can AI systems align with our organization’s mission and values?

One of the things that I started noticing at my own organization was that we have something called the Data Governance Committee, which existed before ChatGPT became a big thing and before everyone talked about AI. The data governance committee was focused on how to protect data on the projects that we work on. Many projects involve private health information or other personally identifiable information. Even before ChatGPT, we needed to ensure we didn’t upload this information to the cloud or expose people’s data in a way that was not permitted.

This group has also adapted to become an AI review group, so when someone at our organization wants to use AI in a project, it goes through this review. For example, I recently wanted to use AI to help summarize some text responses that we were working on. Before I moved forward, I needed to check with the data governance committee to ensure that the work aligned with RTI policies and that I was using the data in a protected and secure way.

I assumed, and this research confirmed, that other organizations are doing the same thing. They are putting together groups of people, especially in the finance and health sectors. To your point, they don’t all look the same. Every organization is doing what works for them.

At my organization, the data governance committee includes our corporate counsel staff members, ethics officer, data privacy officer, and a couple of subject matter experts like myself, who bring a lot of different data and research perspectives to the table. Finance organizations, especially banks, have a long history of risk assessment committees for credit scoring and lending algorithms.

They’re mostly adapting an existing group, sometimes adding new AI expertise, but a lot of that expertise is already in-house. I would say the health groups have done some of the most interesting and innovative work in this space because this type of review is new for many of them. It’s similar to some FDA-type review work they’ve done.

Health Hats: Or IRB review.

Hallucinations and Validation with AI in Research

Emily Hadley: Exactly. As part of this research, we investigated whether IRBs could do this work. And what we heard was actually a resounding no; they did not consider it within their scope.

Health Hats: It’s a different focus. I’ve been on an IRB, and there is this business of being a generalist; there’s value in having a generalist or two in a group of many experts. Okay. So, what do you think the role of consumers is on review boards and algorithm review boards?

Emily Hadley: I’m noticing a focus on affected communities, especially in the health sector. This includes patients and clinicians, particularly those engaged in the work. It’s not an algorithm review board, but in the long COVID research you mentioned, we have patient representatives involved in all of our manuscripts. I was just at a clinician review meeting last Friday, and it’s incredibly helpful to have someone provide insight when determining whether we prepared the methodology correctly. Are these initial results what you expected? Do you feel you have a say in this process and how it’s being developed? I’ve also observed tech companies embrace that level of stakeholder involvement. It’s more consumer-driven; they want to create products that people will use. I am encouraged to see the participation of affected communities because I believe that’s where many revelations occur.

Health Hats: Let’s take a step back. What kinds of AI are used in research?

Emily Hadley: Yeah, that’s a great question. In research, we see it used in a couple of different areas. One of the biggest is information gathering, extraction, and summarization. We’ve been using it for literature reviews to help summarize or pull key points out of particular papers. We’ve been excited that it allows people with different educational or literacy backgrounds to interpret papers. It can be really frustrating to work with a peer-reviewed publication where you’re like, I don’t know what this means here. So, some of the generative AI summarization has been helpful. Another area we’ve seen some work in is the free-text responses I mentioned.

I am coming up on a project right now where we’ve got some Reddit data that we’re trying to summarize with ChatGPT and another generative AI model. Some of our biggest problems are related to hallucinations. You’ve probably heard of these, where the models make up stuff that seems right but isn’t there. And related to that is validation. I think that’s an area that requires more consistent methods. There isn’t a lot out there that says this is how you validate the output from a generative model in this context. So, right now, we’re manually reading through the ChatGPT summary and then going back to the original data, asking, okay, is this thing that was mentioned in the summary also in the data? We find some really interesting things. In our data, somebody mentioned going to grad school, but the summary said they graduated from grad school. They didn’t say they graduated; they just said they went.

Health Hats: That’s a small but big thing.

Prompt Engineering: Conversational AI

One of my friends, Amy Price, is a researcher at Dartmouth, and she does research in both engagement and AI. What she’s been preaching and teaching, and what I’ve been learning from her, is to treat my prompts as a conversation. I have grandkids, one of whom is a debater. He has to take and defend opposite sides of a question. So, when I’m with him and he’s espousing some political opinion, I’ll ask him, if you were going to debate against that, what would you say? I’ve been thinking about that and trying it with some of my queries: okay, if you were going to disagree with what you just said, what would you say? I try to treat it as a conversation, with my critical thinking hat on, as if it were you, Emily, I’m talking to. I’m skeptical, but I don’t want to call you a stupid jerk.

So when you think about that, are you thinking, on the one hand, about how to do it manually, and then, as the next step, how to create the next query to ask it? I don’t know. I’d like you to say more about that.

Emily Hadley: Yeah. What you’re getting at is the field of prompt engineering.

Health Hats: Someday, that’ll be a different hat.

Emily Hadley: Exactly. Exactly. It’s your exact point: figuring out how to structure your prompts to get back the answer you want or, on the flip side, an entirely different answer. Prompt engineering is a field that is also rapidly changing. It feels like some of these AI groups are releasing new models once a month, and when a new model comes out, it might disrupt all of the prompts you’ve written to that date. And so you have to reconfigure. It creates a lot of transparency and replicability problems. We put out a paper in a journal earlier this year, and I think the model we used isn’t available from OpenAI anymore. They discontinue these models and no longer support them. So, if somebody wanted to duplicate what we did in that paper, they could use a similar method, but it’d be hard to get the same results because the model no longer exists.

Health Hats: I never thought of that. Oh goodness, that is a challenge from a scientific perspective. Oh, man. It’s like a political change.

Emily Hadley: Yeah, exactly, and the models can change pretty drastically between releases as well.

Health Hats: What’s an example of a change? I’m having trouble. I get the idea. I can’t picture it, really.

Emily Hadley: Sure. One example: when ChatGPT first came out, people asked for citations and a link to the source. Oftentimes, it would entirely make up that this person wrote this paper when they didn’t write it at all. Recent GPT models are internet-connected, so they can search the internet for real links and accurate citations.

That’s not to say the hallucinations are gone, but they certainly have improved in more recent versions of the model. Now, if you ask it for a list of sources, you’ll probably get back actual, real papers rather than the made-up ones of the first versions.

Health Hats: One experience I’ve had, a while ago: I was trying to create an image of someone getting their blood pressure taken. It wouldn’t go through; it wouldn’t answer the question. It turned out that the word “blood” was banned because they didn’t want gory content. When I typed in “sphygmomanometer,” I could get a picture of someone with a blood pressure cuff.
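Before moving on, here is a minimal sketch of the conversational, ask-for-the-other-side prompting pattern discussed above. It assumes the OpenAI Python client; the model name and the prompts are illustrative.

```python
# A minimal sketch of "treat the prompt as a conversation": keep the chat
# history and ask the model to argue against its own first answer.
# Assumes the OpenAI Python client (openai>=1.0); model name is illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # pin a specific model snapshot for replicability

messages = [
    {"role": "user",
     "content": "Should I rely on AI-generated summaries of my doctor's notes?"},
]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content
print("First answer:\n", answer)

# Push back the way a debater takes the other side of the question.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user",
     "content": "Now argue against what you just said. What would a skeptic point out?"},
]
second = client.chat.completions.create(model=MODEL, messages=messages)
print("\nCounterargument:\n", second.choices[0].message.content)
```

Pinning a dated model snapshot helps with Emily’s replicability point, at least until the provider retires the model entirely.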

Verification and Vigilance

I’ve been to three or four conferences in the last two months, and I find people proselytizing for AI like it’s a cult and the solution to everything. And then some people are suspicious and don’t trust it at all. So, what about the curious consumer? What should we be paying attention to as we use these tools that supposedly help us make decisions about our health?

Emily Hadley: Yeah, that’s an excellent question. I think folks will want to pay attention to a couple of things. One of the first is noticing new places where AI is being used and recognizing the opportunity to opt out. One of the recent interesting cases has been automated AI transcription of doctor’s notes.

These systems generate notes as we come out of appointments, and in theory, your hospital is supposed to disclose to you that an AI system autogenerated the notes. You should also know that those notes have been wrong in almost every example that has been tested, and yet the systems are still being deployed in real-world settings. There’s not a lot of restriction because the tool isn’t treated as a clinical device; it isn’t interacting with your body at that point in time. So I would encourage people to review their doctor’s notes, especially AI-generated ones, catch the errors, and follow up with their practitioner.

Health Hats: There are many errors in what’s written, too. For me, the best clinicians are the ones who talk out loud while they’re writing their notes and say, “Let me know if I got something wrong or I’m not clear,” and then it’s right, because I won’t remember when I get home. I really appreciate that. I can say, no, I’m taking blah, blah, blah. Or I’m not taking that because it gives me a belly ache. Or that wasn’t my history; that was my wife’s history.

Emily Hadley: Yeah.

Health Hats: Then we will fix it right away. But that’s the rare clinician.

Emily Hadley: Yeah, exactly. Those historical notes can become very important when managing a condition over time. So I believe you are the best advocate for your own patient notes, which is frustrating. I don’t think it should necessarily be this way, but, you know, that’s how it has turned out, actually.

Health Hats: Wow. All right. What else?

Staying Informed

Emily Hadley: Yeah, sure. I mentioned insurance earlier. Educate yourself on how insurers are using algorithms. It’s a somewhat unregulated space, so insurers are just jumping in and saying, yes, we’re going to use algorithms instead of having a person review the billing. You get billed for all of these things that you shouldn’t be billed for. I was shocked by how much insurance companies do this with limited human intervention. Similarly, push to talk with a person whenever possible rather than a chatbot. In my experience, the chatbots have not been particularly helpful.

I worry about AI taking away people’s agency, especially in healthcare. I would love for people to remember that you have a lot of rights as a patient and when dealing with an insurer, and an AI system cannot take those rights away from you. You should continue to exert them: don’t pay for things you’re not supposed to pay for, and get the treatment you deserve. I’m hopeful that incoming government rules and regulations will recognize that and push to maintain the importance of the person in the healthcare system.

Health Hats: Where do you like to go when working with colleagues in this space and learning yourself? What groups do you find that appreciate your contribution and keep you up to date?

Emily Hadley: You mentioned conferences. We’re seeing more and more conferences grow in this space. I was at AcademyHealth earlier this year. We had some great AI conversations.

Health Hats: Which one?

Emily Hadley: AcademyHealth’s research meeting. I am a member of ACM, the Association for Computing Machinery, and IEEE, the Institute of Electrical and Electronics Engineers.

Those are big data science and computer science groups, but they have been doing a lot to push for standards in the health sector, and I enjoy being part of them. Then, from the US government, I pay a lot of attention to what NIST is up to. That’s the National Institute of Standards and Technology. They’ve led the way in developing a whole bunch of AI resources for the US government. The FDA, which plays a significant role in the medical space, has adopted much of their work. I’ve been excited about what I’ve seen coming out of them.

Health Hats: Thank you so much. This has been great. All right. Take care of yourself.

Emily Hadley: Thanks, Danny. Have a great afternoon.

Reflection

When I asked Emily to speak with us, I hoped to feel less overwhelmed about AI. If I kept this episode solely in the AI bucket, I failed. I’m beginning to sense that I know less than I thought before. However, if I included it in my advocacy bucket, I succeeded. I have some new tools. I can always utilize a new tool. Whether managing chronic pain, attending to my safety, advocating, or podcasting, I find that I need at least three tools in my toolkit. Nothing works every time. Nothing works for everyone. Three is the magic number for tools to feel confident that one will be effective.

Emily encourages us to pay attention, use common sense, continue learning, and advocate for ourselves in person. If I have any energy, I can do what is necessary.

How do you use AI to manage your health and care? What opportunities and challenges do you face? To join our chat, please open the Substack app or visit substack.com/chat on the web. Navigate to Health Hats publication’s chat section to participate in discussions.

Related episodes from Health Hats

From Dick Tracy to AI: Out of Mind to Beyond Mind

AI: Neither Artificial nor Intelligent. Useful and Sobering

Artificial Intelligence in Podcast Production

Health Hats, the Podcast, utilizes AI tools for production tasks such as editing, transcription, and content suggestions. While AI assists with various aspects, including image creation, most AI suggestions are modified. All creative decisions remain my own, with AI sources referenced as usual. Questions are welcome.

Creative Commons Licensing

CC BY-NC-SA

This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms. CC BY-NC-SA includes the following elements:

BY: credit must be given to the creator.

NC: only noncommercial uses of the work are permitted.

SA: adaptations must be shared under the same terms.

Please let me know. [email protected]. Material on this site created by others is theirs, and use follows their guidelines.

Disclaimer

The views and opinions presented in this podcast and publication are solely my responsibility and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute® (PCORI®), its Board of Governors, or Methodology Committee. Danny van Leeuwen (Health Hats)

Danny van Leeuwen

Patient/Caregiver activist: learn on the journey toward best health
