
Fear, Shame, Access, Connection – Privacy in Digital Exchange

January 27, 2024 (updated January 29, 2024) | Advocate, ePatient, Informaticist, Podcasts, Researcher, Video

Fred Trotter on balancing privacy and connection, the role of AI in societal judgment, and practical privacy protection strategies, with a nod to Mighty Casey

Watch two five-minute podcast clips on YouTube.

Click here to view or download the printable newsletter with associated images




How does YouTube know so much about me? I’m searching on my browser for solutions to my too-slow-responding Bluetooth mouse. In moments, YouTube feeds me shorts about solving Mac problems. I’m following a teen mental health Twitter chat, and my TikTok feed shows threads about mental health apps. How do they know? I’m getting personal comments about my mental health. My mental health is mostly good. Who else will know? Do I care? I live my life out loud. I don’t share what I wouldn’t want on a billboard, which, for me, is almost everything. When is that unsafe? When would I be embarrassed? I’m no longer looking for work, so I don’t care. Who can access my data? What should I share? What does privacy even mean? How does privacy impact the need for connection? Isn’t privacy a continuum – different needs at different times from different people?  So many questions.

Today’s guest, Fred Trotter, co-authored the seminal work Hacking Healthcare. Fred is a Healthcare Data Journalist and expert in Clinical Data Analysis, Healthcare Informatics, Differential Privacy, and Clinical Cybersecurity.

Podcast intro

Welcome to Health Hats, the Podcast. I’m Danny van Leeuwen, a two-legged cisgender old white man of privilege who knows a little bit about a lot of healthcare and a lot about very little. We will listen and learn about what it takes to adjust to life’s realities in the awesome circus of healthcare. Let’s make some sense of all of this.

Privacy in Digital Communication

Health Hats: I picture movement along a continuum when I think about Digital Privacy. Complete privacy is connecting with no one. That’s intolerable. No privacy is connecting with everyone about everything. That’s unsafe and exhausting. Privacy and risk tolerance go hand in hand for me alone and for me with my peeps and tribes. Risk tolerance isn’t fixed; it changes with context. My thoughts get muddier when I associate privacy and connection. They are flip sides of the same coin. I need community connection. But the more I connect (content and reach), the more complex privacy becomes. My approach to managing privacy involves harm reduction, a term borrowed from substance use treatment. So, based on my ever-changing risk tolerance and my need for connection, how do I reduce the harm privacy issues can cause?

Harm reduction, safety, data aggregation

Fred Trotter: It’s funny that you mention harm reduction. A college friend of mine, Elizabeth Chiarello, is an opioid researcher. She studies pharmacists and their situations in different regulatory contexts. She is a harm reductionist. In this conversation, I think harm reduction is like patient safety, in that there are two versions of the word. One is a term of art that comes from a particular clinical context. As you point out, harm reduction is usually discussed in the context of opioids, where it means let’s not criminalize this and instead focus on reducing the harm this complicated and miraculous class of drugs can cause. Patient safety is a similar term, where the specific clinical context is a set of procedures hospitals should follow to ensure that unnecessary harm doesn’t happen. Then there are the more general lessons that could come from these approaches. Perhaps the concept should have a life outside this context and become broader. Let’s take away some of the judgment in harm reduction, like the shame associated with some consequences, negative, arbitrary consequences tied to a particular clinical topic. Patient safety, like harm reduction, has a generalizable version in whatever context you are discussing: are you using best practices to reduce patient harm in this context? Honestly, very simple. But as you switch from an inpatient hospital to an outpatient context to the context of doing research and data aggregation, it’s unclear what patient safety means.

So, what do privacy and harm reduction mean? That’s something to chew on. These are terms that mean what you want them to mean in the context of a conversation. They’re pretty good terms. The internet, in general, has taken terms like health equity and made them politicized and controversial. The internet can tear a word apart and make it useless. People hear the same word and attach different meanings or stories to it. That makes good-faith communication difficult. Terms like patient safety, privacy, and harm reduction all carry powerful expectations.

Communication minimalists and maximalists

Fred Trotter: When you talk about a risk spectrum, I hear two privacy and cybersecurity camps I can’t entirely agree with. One camp says we’re going to communicate no matter what. Use HTTPS, an encrypted connection, as opposed to HTTP. But we’re going to communicate, we’re going to send data around, and we’re going to do what needs to be done. We’re not thinking about the implications of the data moving; it’s somebody else’s problem. These are the communication maximalists. Then there’s the camp I used to have problems with: let’s shut it all down. I want my medical bills to go over snail mail, please. I don’t want electronic anything happening to me. Let’s go to zero on communications if I can prevent it, and let’s wait until we can figure out how to secure it. These are the communication minimalists. The implication of saying you have a risk spectrum is that you want communication to happen, but not in all contexts, and you’re willing to trade off some communication to reduce risk in some contexts. Acknowledging that some balancing needs to occur is, from my perspective, the basis for a sophisticated conversation. A surprising number of people need to be convinced that any consideration of privacy, any balancing, is reasonable, because they’re either communication maximalists (any communication is good) or communication minimalists (no communication without absolute privacy).

Privacy in small villages during the Bronze Age

Fred Trotter: Suppose sociologists and anthropologists look backward in time and consider how things were when most of the world lived in small villages. It’s tough for the whole village not to know everything about you. If you look back to the Bronze Age, running a city was a logistical nightmare because you didn’t have trucks or anything else. You had grain carts coming in and out to feed people, so the vast majority lived in places with under 200 people, all cooperating to make some land work effectively. So, there was no privacy, but there was also no aggregated data. There was no harm at scale, for lack of a better word.

Privacy in the viral modern age

Fred Trotter: In the modern digital era, either of us could say something dumb and go viral in the wrong way on this call or any call we’re on. Or you do something where you think nobody’s watching, but somebody is watching, somebody does have a camera; you think you have privacy, you don’t, and it goes viral. That could ruin your life, and sometimes it should, right? On issues like police violence, where police officers are misbehaving, we have needed cameras for a lot longer than we’ve had them. I’m thankful that we have the cameras now. So, I’m not saying that going viral negatively, having mass consequences, your reputation destroyed before a million or a billion people at once, is necessarily a bad thing. In some cases, it’s warranted. But it is a new judicial engine: how we’re going to judge people and how we’re going to evaluate them.

Judicial engine

Health Hats: What do you mean by a judicial engine?

Fred Trotter: I think it is an alternative to the traditional rule of law, a system for judging. If you and I disagree and neither of us has committed a crime (if I hit you in public, that’s assault, and there’s a judicial process the government takes over once that crime has occurred), we can still sue each other in civil court, within this system of jurisprudence, the rule of law. Certain things are assumed, such as innocent until proven guilty. People fail to realize how much evolution has occurred. We have the concept of trial by judge, contrasting with trial by jury, and you can go to court and decide early in the process which of those two you prefer. Sometimes you can’t. Before that, there was trial by combat: the concept was that God would favor whoever was right in the argument. If you lop my head off, well, you were right. And vice versa. So, the judicial process has taken centuries to evolve, and it has significant variations across the globe. The judicial system in Singapore, for better or worse, is famously different from the one in the United States. And now we have this concept of adjudicating problems and potentially passing judgment on people on social media.

Privacy and shame

Health Hats: How would you define privacy?

Fred Trotter: Think back to that ancient Bronze Age village. If you screw up, it’s limited to 250 people, and you might have to switch villages. Then we get to the modern era, and there was this weird period where you could get a house in the suburbs and have a greater degree of privacy than you ever had in the village. Nobody knew your business. You were behind your closed doors, and you had your yard; the yards were buffers against information leaking out. Now we have a reduction from that temporary place of strong privacy to what we have today. There have been many revolutions in our understanding of shame. As we’ve studied it lately, we’ve understood what a powerful force it is, and shame is the mechanism by which this extra-judicial system works. The freedom to process the issues in your life, issues that might bring shame, whether I feel it myself or other people are attempting to make me feel it, can be so personal that the shame itself becomes the problem. One thing differentiating patients in how they come out on privacy is whether their medical condition is socially acceptable and socially welcomed, and of course society keeps changing its mind about what’s welcome and what’s not. So, I don’t think you can talk about privacy effectively without discussing shame and what we choose to shame in our culture. I think I’m unique in defining it that way.

Health Hats: I never thought about shame.

Fred Trotter: I have this long hallway in my house. If you look that way, there’s a long hallway; it’s not a big apartment, and I love it because of the long, thin hallway. Because I’ve forgotten my implements, I frequently find myself walking naked down this long hallway, and there’s one building on the other side with a giant window that looks straight down it. Now, I’m not ashamed of being naked. I’m okay with my body and everything else. But that doesn’t necessarily mean I’m keen to have somebody with a camera taking a photo. So, am I ashamed of my body? Do I have shame about my nakedness? What privacy means here, when I think about it, is the prospect of a photo on the internet that never gets taken down, the Barbra Streisand effect. That one probably well-meaning neighbor, whom I don’t know, could take the time to figure out how to get a picture through my window. And I think everyone’s windows are the same way, right?

I’m not unique in this situation; it’s just the one I’m thinking about. There’s probably an equivalent situation where you live, and every person has those neighbors unless they’ve taken a lot of effort to ensure they don’t. It’s not that people who are concerned with privacy don’t subject themselves to those variables. I think there is a lot of space for discussion. I’ve been thinking about shame for a long time, and this patient community has a lot of shame issues when they use their preferred [social media] platforms. Some people refuse the shame, such as people who have colostomy bags, having a digestive system that essentially is no longer a hundred percent inside, for lack of a better term. There are people now who go online and say, I will take pictures of myself in a bathing suit with my colostomy bag at the beach, which is marvelous. I applaud that, because what you’re trying to do there is refactor the shame. You’re trying to say, this is not something shameful. It’s just a fact of life for me. I won’t put it in your face, but, you know, I’m still going to the beach if I want to go to the beach.

Denied access

Health Hats: Okay, there’s this piece of it that’s shame, but then there’s a piece about what people do with the information. If I am denied access to something, I don’t get a job, or I can’t get insurance or something.

Fred Trotter: Well, I hope my definition extends to that. Because what I’m talking about is not just that for which you feel shameful.

Health Hats: Oh, you did say that.

Fred Trotter: It also covers the sense in which other people say that something in you is unacceptable, and we are going to judge you extra-judicially.

Health Hats: Oh, we’re back to the judicial. So now this is falling together for me. So, what do you think about this? The connection, the desire for connection, and your tolerance of privacy risk.

Peer-to-peer connection and privacy risk

Fred Trotter: I think you’re right about that vagary. There are two different underlying meanings for connection, with two very different implications. One is my need to communicate with you and share my stuff with you, the peer-to-peer connection. Society is still reeling whenever we get a new medium with different rules. TikTok works differently than Facebook, which is different from Instagram. Every time that happens, we have a different understanding of what it means to be peer-to-peer.

People-to-needs connection

One-on-one and peer-to-group communication is one kind, in terms of what clinical privacy might mean. However, when you say a need for connection, I also think of the boring stuff, which is, in many cases, a much, much bigger deal: you have a very dull need to connect to your health insurance company. There are people-to-people connections, and then there are people-to-needs connections. You switch clinics. That’s a connection. You get new insurance. That’s a connection. All these connections are tedious and happen in the background, and then there are the connections you willfully make: making a new friend, having a new romantic relationship.

It’s a new community when you’re a patient who’s just been diagnosed with X, Y, and Z and want to discover what other people are doing. Those are different, but they both fall under the definition of connection.

A connection you don’t know you have

Fred Trotter: I think there’s a middle ground where a connection is made that you assume is not one where your privacy is invested, but it is. Credit card companies and Facebook are perfect examples. It’s completely different from deciding to connect and share what’s going on in my personal life with a new person, individually or in a group. There are these supposedly boring and safe connections with your health insurers and people in the HIPAA world, where privacy is extracted as a business case. I think your paradigm is correct: there’s connection, there’s privacy, and there’s how they interact. When I’ve shared something personal with you, I’d rather you not tell the whole world. That’s privacy as a peer-to-peer phenomenon. Then there’s talking to my doctor or my health insurance, covered by HIPAA. And then there’s this new middle ground where Apple knows whether I have HIV, even though I’ve never explicitly told Apple and haven’t necessarily used their health tools. It knows because it follows me so completely. Google does the same for different reasons, and Amazon, Facebook, and many places you wouldn’t think of, famously Target, where you shop, are in this category of companies that can infer, with a very high degree of reliability, what your health conditions are and other things you might want to keep private. So, I think there are at least three big buckets of what connectivity and privacy mean, because of the regulatory and practical circumstances under which we live.

Health Hats: And they are.

Harm reduction

Fred Trotter: If we talk about harm reduction, it’s similar. There are multiple levels of harm. There’s harm to me, individual to individual. And there’s harm you don’t necessarily see, where you’re unaware of what’s happening: somebody knows something about you, sells it, and you’re denied something. That harm is hidden.

Health Hats: So, with reducing harm, there’s stuff you can control, and there’s stuff you can’t. I would be pressed to say what I can control and can’t. What do you think about the harm reduction in terms of this? We’re talking about a better understanding of how complex any of this is.

Fred Trotter: So, let me make some helpful oversimplifications, and I invite you to do the same; it helps with something this complicated, as long as you acknowledge that it is an oversimplification. I’m oversimplifying a bunch of things to make valuable points. Let’s oversimplify the peer-to-peer thing: if you’re rude at a birthday party, with all your friends and family there, they’ll shun you a little bit, right? The problem with peer-to-peer privacy, oversimplified, is that it now scales nearly infinitely. If I’m rude at a birthday party now, if I say something the parents don’t appreciate and the birthday child doesn’t appreciate, and somebody catches it on camera, that can scale, and the whole world knows that Fred was rude at a birthday party.

So, scaling is the problem with the peer-to-peer? Let’s assume that is all there is to this.

Health Hats: Right. No, I hear you. But that’s a good point.

Oversimplification of harm reduction

Fred Trotter: It’s a good oversimplification. What didn’t used to scale now scales. Let’s also assume, as an oversimplification, that your doctor and your insurance company are always on your side, that everything HIPAA covers works in your favor. That’s a dangerous oversimplification, because we know it’s not true. But let’s assume it is, and that the real problems with privacy lie in this much less regulated, much more opaque middle ground of big tech understanding things about you that you didn’t know they understood and didn’t explicitly tell them.


Fred Trotter: And I think the redlining problem is the problem. I’m referring to the racist past of the United States, where there were explicit rules in the financing industry to ensure that certain parts of town were available only to people of certain races. There’s a very explicit, racist history there, and there’s a study by FiveThirtyEight. You didn’t introduce me this way, but I’m a healthcare data journalist; I want to use data to understand things, and FiveThirtyEight are data journalists who cover hot healthcare topics. They found that, in general, the formerly explicit practice of redlining carries over into a modern world where redlining still happens.

The neighborhoods are still segregated, and it just continues. What I think is critical about redlining is that it sits in this zone of judicial processes that are not formally part of the judicial system. People are making societal judgments about people they don’t know. Of course, any community talks; if your community can never get mortgages in a particular area, it’s not like you don’t know that, right? But there’s also no formal judgment. You don’t understand exactly what’s going on. Who is doing it? Is it the government? Is it the banks? Is it the real estate agents? And of course, the answer we know now is all of the above. We’re all participating in it. So, I’m very concerned about two things. One is that an explicit policy, made illegal a long time ago and practiced even before that, still has impacts today; practically speaking, in some cases you could say the policy is not over. It embeds an unethical practice into a system that impacts everyone. And I’m very concerned that those unethical practices are being embedded into modern AI.

Artificial Intelligence (AI)

Fred Trotter: Of course, I’m not the first person to consider the possibility that modern artificial intelligence might be racist or sexist, or unethical and discriminatory in some other way. That’s what everyone’s talking about. As a healthcare data journalist, and in this conversation, I’m much more interested in discussing precisely how those problems can be healthcare-related, as opposed to real estate. I don’t know anything about redlining or real estate; that’s not my area, except to know that this is a huge problem affecting our society. It is also one of the areas where, even now, your zip code is more important than your blood pressure in terms of your healthcare, right? So I try to be at least somewhat informed, because these issues ultimately impact people’s health. I have a story about what I’ve recently learned about AI, which I will discuss extensively, because I think it’s essential to understand. I keep coming back to this judicial thing; you’ve picked it up four times. Thank you for that.

Call to action

I need your help to expand my audience to younger people in advocacy. I’m doing more in short-form videos. Please help by pointing me to communities of young advocates and the channels and hashtags they use so I can listen and learn. I now have one URL for all channels and media, where you can subscribe, access episodes, my website, and social media, and search the Health Hats archive. Your support is appreciated.

ChatGPT and health coverage

Fred Trotter: I’m thinking about these extra-judicial judgments, outside the formal court system, that we always make in society. The most important thing in healthcare is coverage decisions. Is your treatment going to be covered? Is your medication of choice going to be covered? Will the medication that works for you be covered, instead of one that doesn’t? I discussed this on my videocast on LinkedIn, where a physician used ChatGPT to write the letter he would send to an insurance company arguing that a procedure should be covered. I can’t remember the clinical topic; it doesn’t matter, and I didn’t understand it when I was talking about it. He said something like, take my side in a clinical argument.

Then ChatGPT constructed the clinical argument, used a respectful tone, and provided references. Sure enough, it spat out this thing. What I thought was interesting was that there was no question about whether he was right. He just told ChatGPT that he was right and then had ChatGPT argue for him. So, live streaming, I tested whether I could reverse the polarity entirely. I said, ChatGPT, I’m an insurance company, and here’s the clinical topic exactly as the physician described it. Show why the procedure is not necessary and provide references, right? Use a respectful tone. Sure enough, ChatGPT took the other side of the argument and explained why the procedure was unnecessary. One of the reasons I have been so focused on the judgments we make and how things get decided is that I think they’re going to be substantively outsourced to AI that has access to parts of your digital footprint that you wouldn’t necessarily want put together in a particular way.
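Fred’s polarity-reversal experiment can be sketched in a few lines. This is a hypothetical illustration, not his actual prompts: the `build_prompt` helper and both stance strings are invented for the example. The point it shows is that the identical clinical summary slots into opposing instructions, and neither prompt ever asks the model who is right.

```python
# Hypothetical sketch: the same clinical facts slotted into opposing
# advocacy prompts. Each prompt simply instructs the model which side
# to take; neither questions the underlying claim.

STANCES = {
    "physician": "Argue that this procedure is medically necessary.",
    "insurer": "Argue that this procedure is not medically necessary.",
}

def build_prompt(clinical_summary: str, side: str) -> str:
    """Assemble a one-sided coverage-dispute prompt for a language model."""
    return (
        f"{STANCES[side]} Use a respectful tone and provide references.\n\n"
        f"Clinical topic, exactly as described: {clinical_summary}"
    )

summary = "Patient with condition X requesting procedure Y."
letter_for = build_prompt(summary, "physician")    # the physician's letter
letter_against = build_prompt(summary, "insurer")  # polarity reversed
```

Swapping one key flips the entire argument while the facts stay untouched, which is exactly what makes AI-versus-AI coverage disputes so easy to industrialize.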

Aggregating information

Do I have a problem with people knowing that I’ve got an STD? Do I have a problem with the grocery store knowing that I bought ice cream and that I picked up a particular prescription at their pharmacy? They knew I was there at, say, two on a Tuesday in the early morning, not an average time for a professional to go to the grocery store or the pharmacy, but where I would go with an urgent matter. Am I comfortable with the grocery store knowing when I was there, what the medication was, and that I bought a particular item? Taken separately, I don’t have a problem with that. I have a problem with them putting it all together and knowing that I have an STD. Yeah, I do. That’s not their business. And they are, of course, putting that information together. Not to figure out specifically whether Fred has a particular STD, or a particular condition, or something else that might be considered shameful; they’re putting everything together for everyone. They want that picture because it’s a valuable picture they can sell. They can sell me more if they understand my problems, what interests me, and what I might buy.
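The aggregation Fred describes is mechanically trivial once records share an identifier. A toy sketch with invented data (the names, times, categories, and the `linked_profiles` helper are all hypothetical) shows how two mundane records combine into one sensitive inference:

```python
from datetime import datetime

# Invented toy records: each one alone is mundane.
store_visits = [
    {"person": "fred", "time": datetime(2024, 1, 23, 2, 10), "bought": "ice cream"},
]
pharmacy_pickups = [
    {"person": "fred", "time": datetime(2024, 1, 23, 2, 25), "category": "STD medication"},
]

def linked_profiles(visits, pickups, max_gap_minutes=60):
    """Join store visits to pharmacy pickups by person within a short
    time window: the step that turns harmless facts into sensitive ones."""
    linked = []
    for v in visits:
        for p in pickups:
            same_person = v["person"] == p["person"]
            close_in_time = (
                abs((v["time"] - p["time"]).total_seconds()) <= max_gap_minutes * 60
            )
            if same_person and close_in_time:
                linked.append({
                    "person": v["person"],
                    "inference": f"{p['category']} pickup during a {v['bought']} run at 2 a.m.",
                })
    return linked

profiles = linked_profiles(store_visits, pharmacy_pickups)
```

No single record here is private in any meaningful sense; the privacy harm appears only in the join, which is why regulating data collection one record at a time misses the problem.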

AI judicial processes by Insurers outside the courts

However, in certain circumstances, that information is super damaging. So, I’m very concerned about AI. We have all these judicial-like processes where you request to have your medication covered, and the insurance company sends it back and says, no, we’re not going to do that. Okay, you can appeal to a higher level and say, my doctors are now involved, my doctor is writing to you. And it goes back and forth like this until almost all of the judicial-like processes outside the courts have finished; then it switches to being in the courts. I foresee a human judge looking at a set of correspondence in which no human has written anything, where it’s been AI on both sides all the way to the top, and the first time a human weighs in is on a judgment reached by this deciding system outside the courts. I don’t want to say extra-judicial, because that has a specific meaning, but it’s hard not to.

Health Hats: I get it.

Fred Trotter: Outside systems decide, then it goes into the courts, and for the first time a judge is reading words that no human has ever written or read. So: I’ve been denied my medication, and I know we’re working on it. But what I mean by "we" is that the AI advocating for my doctor and the AI advocating for the insurance company have been interacting, trying to sort it out, and they can’t reach an agreement. Now we will go to court about whether this medication is covered under my insurance plan. I don’t think that’s a fantasy. That’s going to be a new normal.

What can I do to reduce potential harm?

Health Hats: I am awash in how complicated this is – how much risk there is and how evil it is sometimes. So, what can I do? I’m not so much asking you precisely what I can do; I’m not asking that yet. Staying with how we’ve been talking, I want to break down the buckets within what I can do. There’s what I can do at the level of password protection, individual things, and I’m not minimizing anything; password protection is enormous. But then there are policies and regulations influencing that. How would you break down the domains of what I can do to reduce potential harm to myself in this arena?

The Light Collective

Fred Trotter: That’s a difficult question. We formed The Light Collective, an organization intended to take a stand and provide some education about what you should be doing to protect yourself, to advocate for yourself, and to push for regulation, these kinds of things. I continue to endorse that organization. I don’t work with them as much as I did when Andrea Downing and I started it, but I continue to endorse their purpose and their actions, and I continue to be impressed with that team. If you want a corpus of stuff to study, go to The Light Collective. They’ve got a resource library; that’s probably what you want to read.

Password managers

Fred Trotter: Password managers are essential. I understand the problem from my cybersecurity background, yet I find myself perplexed about exactly how to approach this stuff. I’m dubious that education and learning will help because I’ve learned a lot, and I’m still in a position where I don’t know exactly what to do.

Health Hats: That’s quite a statement.

Fred Trotter: It’s a problem. Let me tell you some of my generalized approaches. I use a password manager. I do not use the password manager that is built into my browser, though for most people, using the one in the browser is probably good practice because it’s simpler.
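At bottom, a password manager does one simple thing: it generates and stores a long random secret per site so no human ever memorizes or reuses one. A minimal sketch using Python’s standard `secrets` module; the `vault` dict is a stand-in for the manager’s store, which a real manager would encrypt under a single master password:

```python
import secrets
import string

# Character pool for generated passwords: letters, digits, punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Produce a cryptographically random password; one per site, never reused."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Stand-in for the manager's store; a real manager encrypts this
# under a master password and fills it into login forms for you.
vault = {site: generate_password() for site in ("bank.example", "clinic.example")}
```

The site names are placeholders. The design point is that each site gets an independent secret, so a breach at one service leaks nothing reusable anywhere else.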


Fred Trotter: I choose to go one step further. I’ve started to formally embrace pseudonymity. I have two accounts on every device I have: there’s Fred Trotter, and then there’s another user I log in as, a separate private identity I use to look things up. If I’m concerned enough about my privacy to turn on anonymous mode, a private window, then I should do that in a user account on my computer that is separate from everything else. I do a substantial amount of browsing over there in that world. I have a different Amazon account. I use a VPN over there. I’m doing it because I want to break the linkage at least a little, to create a whole different identity so that it can’t be pegged down so quickly and precisely exactly what Fred Trotter is interested in.

Otherwise, they also know my social security number, and everything can be tied together. That’s an idea I’ve not yet run by the Collective to see if it should be default advice: separate your work life from your non-work life. I’m Fred Trotter, and I consult about health IT, privacy, etc. That’s one user. And I’m a different user when I’m watching Netflix and all that. I think that’s a good idea because there are many things you end up doing correctly without thinking about them. It’s a way to aggregate a bunch of good ideas, the VPN, the password manager, the different accounts, and everything else, into a simple system that’s easy to follow.

Low-tech approaches

Fred Trotter: Do you have any tips like that? What is the easiest way to ensure you’re naturally doing those?

Health Hats: Yes and no. One thing is that I don’t like to say or put anything on electronics that I wouldn’t want on a billboard. But that doesn’t deal with limits on access; it doesn’t deal with that at all.

Fred Trotter: But I think it’s exactly what I was suggesting with this idea: there’s a bunch of other things that you do correctly because of that. It’s like when I try, and don’t always succeed, to ask myself, when I’m discussing Danny behind Danny’s back, is this conversation something Danny would be comfortable hearing? Of course, I don’t talk about you much; I don’t talk about most people. But when I do occasionally talk about others, I try to think before the conversation begins: if this conversation were recorded and this person heard it, they would either feel nothing or feel good about what I said. I’m not hurting someone.

Health Hats: Is the mic on when you thought it was off?

Fred Trotter: Exactly. Then you’re okay. That’s a good policy for a dozen other reasons besides being excellent human policy. Instead of my two users on a single computer, the honest advice is: have a personal computer and a work computer. But I can’t afford to do that. Nobody can afford to do that. So two different users is as close as you can get. The other reason it’s good is that you can turn off the work computer. It’s a good thing to say: I’m not here right now. I’m over there. I’m on personal time. I think that’s positive. And again, I think the Light Collective has a lot of good stuff.

The Electronic Frontier Foundation

The EFF (Electronic Frontier Foundation) probably does the most for patient privacy without being labeled a patient privacy organization. They release many tools, think carefully, and constantly advocate. If you want something beyond the Light Collective to learn from, the EFF is powerful.

Inter-rater reliability in chart reviews

There’s a not-so-well-kept secret of the healthcare system: inter-rater reliability on chart reviews. Say your healthcare organization is going to run a study on your health conditions. Before that happens, somebody must review your chart and determine: Do you have the disease? Are you doing well or poorly? Do you have side effects that would prevent you from participating in this study? Do you have a secondary condition that excludes you? So researchers take people with clinical experience, doctors, PhDs, and nurses, and cross-train them. I heard at that same conference that one large institution has 50 full-time employees doing nothing but this. These chart reviews are essential for research organizations. The naughty secret about chart reviews is that when two people review the same chart, they get the same answer about 85% of the time, sometimes a little less, sometimes a little more. And that’s consistent: if you take those 50 full-time employees, test them repeatedly on duplicate records, and see how often they agree, you would think it would be something like 97% or 98%. Other industries that evaluate complex situations have rules of thumb that get up into the nineties, the high nineties. But in chart reviews, it’s low. 80-85% are typical numbers. That’s not great.
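The 85% Fred cites is simple percent agreement between two reviewers. A minimal sketch (hypothetical labels, plain Python, not anything from the episode) shows how that number is computed, along with Cohen’s kappa, the standard statistic that corrects agreement for chance:

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of charts on which two reviewers give the same label."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for what two raters would agree on by chance."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)  # observed agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently pick the same label.
    p_e = sum((counts_a[lbl] / n) * (counts_b[lbl] / n)
              for lbl in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical eligibility calls on 12 charts by two reviewers.
a = ["yes", "yes", "yes", "yes", "yes", "yes", "yes", "yes", "no", "no", "no", "no"]
b = ["yes", "yes", "yes", "yes", "yes", "yes", "no", "yes", "no", "no", "yes", "no"]

print(percent_agreement(a, b))  # 10/12 ≈ 0.833, roughly the 85% Fred describes
print(cohens_kappa(a, b))       # 0.625, notably lower once chance is removed
```

The kappa line illustrates why an 85% raw agreement is less reassuring than it sounds: once chance agreement is subtracted, the reviewers look considerably less consistent.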

Inter-rater reliability and AI

Fred Trotter: When you do a chart review for an observational study, you will look at data. Suppose you will use that data to recruit for a clinical trial. The starting status of the patients is foundational. Then we randomly assign people into groups. All the work across the six or seven different permutations of study types is grounded in this chart review process. At this conference, they revealed, which was news to me, that they tried several off-the-shelf models: ChatGPT and some other large language models you can download on your laptop and run. The inter-rater reliability between the large language model and the people was 85%. The problem I see with that is it’s one of these cases where we have not adequately gotten human intelligence to solve the problem. In healthcare research, we all live with this complexity: people can look at the same healthcare record and see different things. Now we’ve figured out how to make a large language model stand in as one of those 50 full-time reviewers.

AI can make a complex system faster, not better

You could also scale it out. You could fire half of your human raters, keep half, and instead of 25 replacements, have 250. You can say: AI, evaluate this chart the way Mary does. But, you know, like when she’s having a bad day, when she’s got a hangover, or when she’s feeling particularly pessimistic about people with diabetes, you can introduce that bias into these 250 large language model raters. And you keep, say, 50%, 30%, or 10% human, validating that the large language models are not going too far askew. You’re keeping a human in the mix to keep it from going crazy. You would probably improve your overall chart reviews. However, the improvements are limited to what human intelligence was able to accomplish, and human intelligence has not solved this problem. As I hear this, the insurance adjudication process concerns me, and so does the chart review adjudication process. In all these cases, we will very soon be taking humans out of the mix without ever getting to something fair, equitable, reasonable, reproducible, and decent for patients, providers, and health insurance companies. I’m not interested in having whatever the patients say go. But certainly, we are not in a place where patients are fully respected.
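Fred’s "keep a human in the mix" idea can be sketched as a spot-check loop: route a fixed fraction of AI-labeled charts back to a human reviewer and raise an alarm if agreement drifts below a threshold. This is a minimal illustration in plain Python; the labels, fraction, and threshold are all made-up assumptions, not anything from the episode:

```python
import random

def spot_check(ai_labels, human_review, fraction=0.1, alarm_below=0.8, seed=42):
    """Sample a fraction of AI-labeled charts for human review and flag
    the pipeline if human/AI agreement falls below a threshold.

    ai_labels: dict mapping chart_id -> AI label
    human_review: callable chart_id -> human label (the human rater)
    """
    rng = random.Random(seed)  # seeded so the audit sample is reproducible
    sample_size = max(1, int(len(ai_labels) * fraction))
    sampled = rng.sample(sorted(ai_labels), sample_size)
    agreements = [ai_labels[c] == human_review(c) for c in sampled]
    rate = sum(agreements) / len(agreements)
    return {"checked": sample_size, "agreement": rate, "alarm": rate < alarm_below}

# Hypothetical run: 100 charts labeled by the AI, and a human reviewer
# who happens to disagree on every chart id divisible by 7.
ai = {i: ("eligible" if i % 2 else "ineligible") for i in range(100)}

def human(chart_id):
    return "ineligible" if chart_id % 7 == 0 else ai[chart_id]

report = spot_check(ai, human, fraction=0.2)
print(report)
```

The design point is Fred’s: the human sample catches a model "going too far askew," but it can never push quality above what the human raters themselves achieve.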

Health Hats: When I first led an EHR implementation, from paper to electronic, I had enough sense to know that our core billing data sets were crap: too many duplicates and outdated patient and provider lists. I tried to insist, not knowing how vital my instinct was, that we clean them up before we automated. I was only somewhat successful. The data sets were messy, and they didn’t want to spend the resources to clean them up. So we ended up automating garbage. Faster garbage.

Situational awareness

Health Hats: Suppose somebody is trying to learn about privacy, risk, and self-protection. What would be your key takeaways?

Fred Trotter: Well, I think it’s essential to keep following the discussions about privacy and digital communication: following you, following me, especially since this is an area of shared interest; every time we get together, we talk about this. Following the EFF is essential. Following The Light Collective is important, and when I say follow, I mean listening to the podcasts from people associated with those organizations talking about these topics.

Health Hats: So, awareness.

Fred Trotter: Situational awareness. I think there will be a lot of QWERTY keyboard stuff where a technical decision seems like a good idea at the time, but it has negative long-term impacts when technology gets locked in. In the next ten years, we will make many decisions embedded for centuries, so everybody must be aware and plugged in. I think commenting on regulatory processes is probably more important than participating in political processes because our politics are so broken. There’s a vast number of complex issues that are handed down by CMS or FDA or agencies like that. Paying attention to regulations is good.

Expectations of organizations

Fred Trotter: Let me say this up front: every organization is dysfunctional. So when I refer you to an organization and you find out it’s dysfunctional, don’t resent me. That’s the way organizations are. Another organization worth listening to is the Society for Participatory Medicine, which is as close to a patient watering hole as we have, with patients from the various patient communities coming together a little bit. I think they’re worth following.

ChatGPT and Large Language Models

Fred Trotter: I advise people to interact with large language models and understand how they work. Get good, if you can, at ChatGPT. Learn how the prompting changes things and how these large language models behave. Returning to that story: when they first turned the large language model on and asked it to do chart reviews, it was getting something like 50% inter-rater reliability. Then they changed the prompts, and they got it up to 85%. So I think there will be programming with an English component, programming with natural language, which will come out of the prompting of these models. That will be a new skill, and following the conversation will help me understand it. I think that’s a good thing.

The Mighty Casey Quinlan Approach

Fred Trotter: If you have an issue where you are concerned that someone will use information against you, they will shame you systematically or make judgments against you, be careful. Think carefully about how you and your information flow and who has the information and who doesn’t. I think two approaches work there. One is to try to make sure the information doesn’t leave. But I would also say the way it should work is that just because you have the information go out, you can fight against the injustice in the judicial and extra-judicial processes and reduce harm. That is as important as we need people who are saying, yeah, I have my colostomy bag, and I’m not going to allow my workplace to use that to discriminate against me. I will be loud and annoying about that – the Mighty Casey approach – and we need people trying to protect their privacy. We also need people saying that just because you have information doesn’t mean you get to use it against me. So, we need people who are fighting and trying to get out of the fight and the people who are trying to get into the fight regarding information being used against you.

DALL.E – AI Images

Health Hats: I want to tell you a quick story. I have a 12-year-old grandson, and we get together for an hour every Sunday. We’ve been doing this since 2019. This time, we were playing with DALL·E, the AI image generator, trying to get it to draw a decent anime picture. We tried all sorts of ways to say what we wanted: being general, changing what the picture was about, specifying certain styles of anime to replicate, watercolor versus photo. According to my grandson, it was all garbage and did not reflect any decent anime. I’m telling that story because we think a lot about AI and words, but there are also images.

Privacy of creators

Fred Trotter: That’s important. Let’s generalize as far as we can. I think the future is that an AI will be something you can talk to, because that’s how humans communicate: in writing or spoken words. I think it will spread to the point where AI imitates, badly or well, almost any human creativity. I want a song that sounds like this. I want a picture that looks like this. I want a video that depicts this. I want a novel, something printed, something sewn. There’s a massive space for machines doing creative work. The other side of that coin is that every time you ask the app to do that, it violates the privacy of everyone who made the art. I’m not sure privacy is the right word, but you’re certainly taking from creators.

You’re prompting, but then the AI is outsourcing creativity by aggregating creativity: I’m going to look at a thousand pictures or sewings, take the creativity of a vast number of people, reverse-engineer it, and then produce something for you that is, in some sense, creative. But it’s not clear to what degree it’s creative de novo versus creative in the way of aggregated imitation. It’s not clear what that means.

Dangerously hopeful

Fred Trotter: It’s complicated. The people who believe AI will simply make people more productive, I think they’re woefully uninformed. They’re Pollyannish. Is that the right way to say it? It’s just dangerously hopeful.

Health Hats: Thanks, Fred.

Fred Trotter: All right.

Health Hats: We’ll have to do this again. Thank you so much.


How long a shelf life will this conversation have? The tension between community, learning, safety, shame, and technology, however you define those terms, will never cease. Significant changes in technology have unexpected ramifications. Imagine life before and after the introduction of fire, the wheel, the printing press, penicillin, light bulbs, the telephone, contraception, and batteries. All predate computers, and all affect privacy, fear, shame, and connection.

I appreciate Fred’s insistence on considering definition and context when discussing privacy, harm reduction, health equity, and justice. I can’t imagine a tribe without justice and equity. The concept of artificial intelligence as the rapid aggregation of human creativity is so seductive. Should I open my heart to that seduction a little, a lot, or not at all?

Perhaps Fred and I should have this conversation again in a year or two.

Podcast Outro

I host, write, and produce Health Hats the Podcast with assistance from Kayla Nelson and Leon and Oscar van Leeuwen. Music from Joey van Leeuwen. I play Bari Sax on some episodes alone or with the Lechuga Fresca Latin Band.

I buy my hats at Salmagundi Boston and my coffee from the Jennifer Stone Collective. Links are in the show notes. I’m grateful to you who have the critical roles of listeners, readers, and watchers. Subscribe and contribute. If you like it, share it. See you around the block.

Please comment and ask questions:

Production Team

  1. Kayla Nelson: Web and Social Media Coach, Dissemination, Help Desk 
  2. Leon van Leeuwen: article-grade transcript editing 
  3. Oscar van Leeuwen: video editing
  4. Julia Higgins: Digital marketing therapy
  5. Steve Heatherington: Help Desk and podcast production counseling
  6. Joey van Leeuwen, drummer, composer, and arranger, provided the music for the intro, outro, proem, and reflection, including Moe’s Blues for the proem and reflection and Bill Evans’s Time Remembered for on-mic clips.


I get my T-shirts at Mahogany Mommies. As mentioned in the podcast: drink water, love hard, fight racism.

Inspired by and Grateful to

Andrea Downing, Jill Holdren, Valencia Robinson, Ken Goodman, Virginia Lorenzi, Michael Mittelman

Links and references

Today’s guest, Fred Trotter, co-authored the seminal work Hacking Healthcare

The Light Collective

embrace pseudonymity

EFF Electronic Frontier Foundation


Creative Commons Licensing


This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms. CC BY-NC-SA includes the following elements: BY, credit must be given to the creator; NC, only noncommercial uses of the work are permitted; SA, adaptations must be shared under the same terms.

Please let me know. Material on this site created by others is theirs, and use follows their guidelines.


The views and opinions presented in this podcast and publication are solely my responsibility and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute®  (PCORI®), its Board of Governors, or Methodology Committee. Danny van Leeuwen (Health Hats)

Danny van Leeuwen

Patient/Caregiver activist: learn on the journey toward best health
