Podcast: Play in new window | Download (Duration: 45:37 — 45.5MB) | Embed
Subscribe: Apple Podcasts | Spotify | Email | RSS | More
Laura Marcial talks with us about making the tech sausage of Clinical Decision Support: guidelines, evidence, rules, knowledge engineers. Clinical decision-making still depends on human trust, time, talk, control, and connection.
Blog subscribers: Listen to the podcast here. Scroll down through show notes to read the post.
Subscribe to Health Hats, the Podcast, on your favorite podcast player
Episode Notes
Prefer to read, or experience impaired hearing or deafness?
Find FULL TRANSCRIPT at the end of the other show notes or download the printable transcript here
Contents with Time-Stamped Headings
to listen where you want to listen or read where you want to read (heading. time on podcast xx:xx. page # on the transcript)
Introducing Laura Marcial 00:56. 1
What is Clinical Decision Support from the inside? 06:16. 2
Potential: Technology and CDS 11:25. 3
CDS: Making the sausage 18:12. 4
Tech driven decisions impact more people 23:52. 6
Out of touch with life? 27:18. 6
Upstream and downstream of the decision 31:57. 7
Realistic expectations of CDS 24:33. 8
Please comment and ask questions
- at the comment section at the bottom of the show notes
- on LinkedIn
- via email
- DM on Instagram or Twitter to @healthhats
Credits
Music by permission from Joey van Leeuwen, New Orleans Drummer, Composer
from Oliver 1968, Fagin singing I Think I Better Think It Out Again
Thanks to these fine people who inspired me for this episode: Barry Blumenfeld, Blackford Middleton, Danny Sands, Edwin Lomotan, Frank Opelka, Geri Lynn Baumblatt, Ginny Meadows, Jerry Osheroff, Jodyn Platt, Maria Michaels, Pat Mastors, Sharon Sebastion
Links
Oliver 1968, Fagin singing I Think I Better Think It Out Again
Agency for Healthcare Research and Quality, Clinical Decision Support
Related podcasts
About the Show
Welcome to Health Hats, learning on the journey toward best health. I am Danny van Leeuwen, a two-legged, old, cisgender, white man with privilege, living in a food oasis, who can afford many hats and knows a little about a lot of healthcare and a lot about very little. Most people wear hats one at a time, but I wear them all at once. I’m the Rosetta Stone of Healthcare. We will listen and learn about what it takes to adjust to life’s realities in the awesome circus of healthcare. Let’s make some sense of all this.
To subscribe go to https://health-hats.com/
Creative Commons Licensing
The material found on this website created by me is Open Source and licensed under Creative Commons Attribution. Anyone may use the material (written, audio, or video) freely at no charge. Please cite the source as: 'From Danny van Leeuwen, Health Hats' (including the link to my website). I welcome edits and improvements. Please let me know. danny@health-hats.com. The material on this site created by others is theirs, and use follows their guidelines.
The Show
Introducing Laura Marcial
Several lifetimes ago I led the implementation of a couple of electronic health records. Implementation means all that needs to be done to shift from a paper record system to an electronic one, or to change electronic health records. It involves people, workflows, data, software, and hardware. Probably in that order. My first implementation was with an addiction treatment provider that also managed benefits for a managed care company. The second was in an urban health system. I learned to value the voice of all stakeholders in the implementation. I learned that workflows that don't work well, don't work well faster with automation and technology. Workflows are how people do their work, day-to-day. Picture frustration becoming faster frustration. Hence you need everyone who touches technology (stakeholders) around the table while you're implementing. I also learned that garbage data in becomes faster garbage with advanced automation. Core data sets (lists of patients, providers, staff, procedures) all have errors and duplicates. If you don't clean up the garbage, frustration and burnout ensue. So, with my background, I know enough about data and technology to be an early adopter and a profound skeptic. I enjoyed trying to solve a critical, complex puzzle with people who depend on it. There's more and less to it than meets the eye. So, dear reader/listener, I've been writing and talking about clinical decision support – the human side. I promised that I'd invite a guest to explain how the tech sausage of clinical decision support is made. Voila! Here's Laura Marcial, a data scientist from RTI (RTI is an independent, nonprofit institute that provides research, development, and technical services to government and commercial clients worldwide). Full disclosure: I serve as an independent consultant to this independent institute. Laura Marcial understands both the human and the tech sides of clinical decision support and has several years of experience trying to explain it to me. I've only been a fair student. Let's see what we can learn with Laura.
Health Hats: Good afternoon. Laura, it’s great that you’re here. We’ve worked together and talked to each other at least a couple of times a month for three years. You’re a health information scientist specializing in information search, human-computer interaction, and the use and management of scientific data. For me, that means when I don’t understand something, I call you. How would you describe your work?
Laura Marcial: While I am a human-centered design enthusiast and do work in that area, I also do clinical decision support development and implementation work and a fair amount of health IT (Information Technology) evaluation work.
Health Hats: I wanted to talk to you, record, and share it because I'm a guy who knows enough to be dangerous. One of my missions as an activist, as Health Hats, is to explain to people who don't live in the bubble what's going on and why it's important. I'm very good at explaining the human side of healthcare and explaining some of the technology parts, like usability. But when it comes to the nuts and bolts, especially in the clinical decision support space, I feel inadequate. But it's important. The industry puts so much energy into the technology. I'm hoping that this conversation will clarify some of the issues and mull over some of the challenges and dilemmas. Maybe we'll even feel like it's not so crazy that we don't get it.
Laura Marcial: Yes, absolutely.
What is Clinical Decision Support from the inside?
Health Hats: When you’re talking to people outside of the clinical decision support bubble, CDS, how do you describe CDS?
Laura Marcial: The concept of clinical decision support has changed over the years in a good way. We like to think of decision support more broadly and consider how research and evidence can help guide decision making in real-time. That's rarely easy in healthcare delivery. But, of course, that's where decisions get made – at the point of care when you're working with a clinician. Maybe you feel a lot of pressure to decide. I think those of us who've been working in clinical decision support for a long time like to think of it as a consumer guide for health care, a consumer report for health care. It requires a lot of tailoring, though, to the specific individuals, circumstances, environment, and context. We often fall short of those goals and provide maybe some help. But it tends to be limited.
Health Hats: Making decisions about clinical problems. I’m not feeling well. Either my function is off. I’m uncomfortable. I’m sad. I’m worried. I’ve tried stuff. It may or may not have worked. Now I need help. So, I go to a clinician to get help. The clinician’s job is to learn about me and what’s going on in my life, what’s my discomfort? Why am I there? We face decisions. The decisions can range from taking a pill that’s a prescription, taking a pill that isn’t a prescription, having therapy, changing your behavior. Is clinical decision support about any of that?
Laura Marcial: Yes. It's about those options, but being presented with options based on evidence. So the idea behind clinical decision support is to synthesize the evidence that's out there, apply it to a specific context – a person, a situation, a group – any of those things or all of them taken together, and then provide a set of recommendations that also have some documented support. So, characteristics that help a person think through those options and perhaps choose a decision. Maybe this is the first visit or the first decision you're making. And there are five options. And so, you're trying the first of these options. Maybe it's the lowest impact option or the lightest lift option.
Health Hats: Lightest lift meaning the least disruption in your life? Amputation would be a huge disruption.
Laura Marcial: Yes. With pain, for example, doing therapy before trying any therapeutic.
Health Hats: Is clinical decision support always with a patient and a medical person, like a doctor or nurse practitioner or PA? Or is it also with a chiropractor, with an acupuncturist, nutritionist, other licensed health care professionals, or is it mostly medical?
Laura Marcial: Traditionally, it's something delivered to the clinician, and the clinician decides. So, it's been in the medical domain. But that doesn't mean that that's where it should remain. It's clear it shouldn't. And you know, we're certainly seeing it moving into the behavioral health domain, and wondering whether it extends to all the services and community-based organizations that are providing other kinds of social supports.
Health Hats: When you say behavioral health, that’s addiction treatment and mental health services?
Laura Marcial: For example. Yes.
Health Hats: How have you used CDS in your life outside of your professional role, in your personal life or as a family member?
Laura Marcial: It's not a formal form of clinical decision support, but we all use clinical decision support. We don't necessarily call it that or give it that name. But I think it's looking for information, whether electronically or by talking to friends, family, or community members, to look for good options and potential problems or risks. For example, the most common and most prevalent CDS today is looking at what's the right drug for me to take for this new symptom that I have. Can I take something homeopathic? Can I take it with this other drug that I normally take? Is there going to be a problem? Do I have any allergies to this medication that I need to be aware of before I take it? Can I recommend it for a friend who's aspirin allergic, or something like that?
Health Hats: Okay. I’m thinking about how I’ve used it. Do I want to take the cholesterol medication? Do I want a colonoscopy? What medicine do I take for my multiple sclerosis? So those are all ripe for clinical decision support?
Laura Marcial: Yes, absolutely.
Potential: Technology and CDS
Health Hats: Okay. So, clinical decision support has a manual process, meaning a human process, that technology gets applied to because time is so short, and there’s just so much information that any one human being can’t take it in.
Laura Marcial: Yes. I have an example. My husband had a knee injury from early in his life. And there was no specific treatment. It was a tear, not a complete tear, but a stretch of his ACL, anterior cruciate ligament. He had done a little bit of therapy after. The knee was never the same again. It’s not that unusual for knee injuries like that. But he was still strong and having no real pain and no difficulty walking. Then, in our late forties, something was throwing a wrench in the works. Something was getting in the way of his knee joint. So, we spent a lot of time talking about it because the outcomes for knee surgery are poor. They’re kind of terrible. Most of the time, you go in and have something cleaned out and some repair done. But if you’re a person who develops arthritis, the arthritis will come back with a vengeance. So, we knew from the films, that he had very little cartilage. We knew that he was lucky to not be in chronic pain. But we also knew that there was the ghost or cloud of what looked like a random piece floating in his knee. So, after four opinions and lots of research and me saying consistently, “I don’t want you to have surgery on this knee because the outcomes don’t look good,” we finally got someone who agreed to go in and take a look. Look and see if they can remove any floating pieces. I think this is commonly called a joint mouse. There was this floating piece of cartilage probably from the early injury that was shaken loose when we were playing around in Baltimore on this piece of sculpture, and he landed on the knee. After that the joint mouse was floating. Anyway, they removed two joint mice from the knee and closed him up and sent him home. No more scraping or cleaning. He’s been much better.
Health Hats: What’s the clinical decision support of that? Did you say already, or did I miss it?
Laura Marcial: We did get four different orthopedic surgeon opinions. Over a long period, we looked at all the evidence, from a research standpoint and from the popular press, about how well people do and how well they feel after knee surgery. The overwhelming opinion was that it works for some and it doesn't work at all for others. It's a 50/50 chance that you'll have any success.
Health Hats: I feel schizophrenic in the sense that sometimes I think about what you’re saying, and I keep hearing, “this is humans doing work to learn more.” Then I sit around these tables where it’s about the technology of clinical decision support. I understand what you went through to figure out what your options were and then select one. It sounds all human. What’s the technology part of that?
Laura Marcial: The situation that I'm describing is the situation at present. However, there is a lot of good evidence; if it had been available to even one of the clinicians we saw, it could have been presented to us and examined carefully. It's not rocket science. The evidence is solid. It could have been clear early on, without four visits and without lots of consternation, what our options were, and we could have thought about a stepwise approach.
Now a word about our sponsor, ABRIDGE.
Make the most of time, trust, talk, control and connection with ABRIDGE. Push the big pink button and record the conversation with your doctor. Read the transcript or listen to clips when you get home. Abridge was created by patients, doctors, and caregivers. Check out the app at abridge.com or download it on the Apple App Store or Google Play Store. Record your health care conversations. Let me know how it went!
CDS: Making the sausage
Health Hats: Can you take us into the back room where clinical decision support experts work? What do they do? How is the sausage made?
Laura Marcial: As a rule, we look for robust, stable guidelines or recommendations. We start with the human part. Maybe it's taking evidence from lots of sources and adding some information about outcomes and about popular sentiment, looking at the widest range of treatment considerations. And so, you take these guidelines and you look at them with knowledge engineers, and you transform them into computer-based algorithms that can be triggered or inserted into a clinical workflow. So that means, for example, I call up and make an appointment with my orthopedic surgeon because I'm having pain. I describe the pain. I log that as a chief complaint for the visit. About 72 hours before my visit, I get a message: "Could you please log in to the patient portal and log this information about your pain?" So, I go in, and I log some information about my pain. I have a lot of good information to share because maybe I diary it myself, whether with an app or on paper. Then I get a little bit of patient education about how to describe my pain or how to better communicate my pain, or even some patient education about the range of treatment options that I'll discuss with my provider.
Health Hats: What’s a knowledge engineer?
Laura Marcial: There’s a transformation process that must happen. You can’t usually take a set of guidelines and then present them to a clinician.
Health Hats: So, a guideline is something like: here is your risk for having a stroke or heart attack and based on that, you should or shouldn’t take a cholesterol medication? So, that’s a guideline?
Laura Marcial: Correct.
Health Hats: Then somebody is taking the evidence, the research behind that, and all these different, what-ifs and characteristics. Then an engineer is somebody who takes that and codes it?
Laura Marcial: A knowledge engineer is going to basically write a set of rules that will then be coded, usually by a developer.
Health Hats: So, give me an example of a rule.
Laura Marcial: The mammogram or prostate cancer screening guidelines are rules and those rules can be applied. Those rules are pretty universal. They’re for anyone, once they reach a certain age, regardless of their other demographic characteristics. That recommendation will pop up. So even today, most electronic health records have rules for prostate screening and mammograms.
Health Hats: So, a rule is if this, then that?
Laura Marcial: That's right. If a patient just turned 50, they meet the rule, and the technology checks to see if there was a recent mammogram and checks to see if they've ever had a colonoscopy. There are several things that you tick off the list to make sure that they're compliant with those guidelines.
Health Hats: Then there’s a knowledge engineer that comes up with the rules. Then somebody must figure out where’s the data, what does it mean, and plug it into the rule?
Laura Marcial: The knowledge engineer will typically take the standard guideline and think about how to apply that in the clinical workflow. They’ll lay out some of the constraints for that clinical environment. Are we working in an emergency department? Are we working in an outpatient practice? We’ll make decisions about when it gets triggered, what kind of control the clinic or practice has to modify the trigger, and how the rules would get updated, maintained, and kept current. The knowledge engineer will communicate what the rule is, how it should function, and what information gets presented. They will consider questions to ask to make the recommendation to the clinician or the patient or both. Ultimately that becomes recommendations or specifications for a developer to build and integrate into an electronic health record.
Tech driven decisions impact more people
Decisions, decisions, decisions. Most of the decisions I make are thoughtless – without thought, not mean spirited. Daily, run-of-the-mill, life decisions. We just make them based on experience, intuition, inertia. When to do something. What route to take. It’s when decisions are unusual – like when you’re sick, then they’re decisions we hope can be thoughtful. Decisions based on previous, related decisions, or with people we trust. Then there’s personal research – asking others, going to the library or Googling. When decisions are made for communities – groups of people – that’s different. Then there’s equity, science, right and wrong, cost, policy, politics. Add tech to decisions and you can get good and bad faster with more impact. Laura talks about intervening. You need strong science when you intervene. It impacts lots of people.
Health Hats: Laura, am I wrong in thinking most healthcare decisions don't really have evidence or guidelines behind them? Maybe I'm exaggerating. But there are so many decisions in healthcare. It's like putting in a new kitchen with so many issues, so many decisions to make. It's overwhelming how many decisions there are. And in healthcare, it seems the same. There's a ton of decisions all the time. So, if there aren't guidelines and evidence for, let's say, half of them – it's 50/50, I have no idea – is CDS only for when there is evidence?
Laura Marcial: Clinical decision support is definitely intervening. And if it's universally intervening, like a universal screening, it needs to be strongly evidence-based, because you're intervening for nearly every member of the population. So, it's important. However, there are some things that we say we're leaving to common sense or that we're leaving to medical training in terms of what is the basic protocol, what is the right approach, what are the right decisions. But when it comes to lifelong management and treatment of chronic disease, there is usually good evidence. It's just getting to that evidence and bringing that evidence together into some meaningful, reusable, repurposable form that is, to some extent, a black box.
Out of touch with life?
Health Hats: When I sit around the table and talk to people who are clinical decision support experts, sometimes I have this overwhelming feeling that they’re out of touch with life. And that they’re so into the rules, the evidence, the app, the EHR. It just feels removed. What do you think of that?
Laura Marcial: There are a couple of things going on. They’re rooms full of people who’ve been working at this for decades now. So, their minds are always thinking, “how do we advance this? How do we continue to develop this?” So, they’ve been narrowing their own spheres and focusing. The degree of focus is sometimes overwhelming. And it does sometimes come across as losing touch with reality. On the other hand, that focus results in real ingenuity and change as with any major transition. We can make an analogy to Ford and his assembly line. There’s a sea change in the works. You visualize it and you have a sense that we’re just on the cusp of it. You want to push that envelope. You want to keep pushing at it. But we consistently have a hard time explaining what this is, describing it, and then even realizing the benefit of it. So, it’s been hard to describe.
Health Hats: I sit at these tables, and I’m the token patient. My drumbeat is, “I’m a charismatic, smart guy with one voice and one life experience. I’m happy to share my life experience and I’m not shy about my voice. On the other hand, there’s a sea of stuff I don’t know about life and other people’s lives.” In the CDS bubble, when you’ve seen groups of people or CDS experts making good use of this lived experience, how have you seen that work or not work?
Laura Marcial: The conversation about clinical decision support has been around a long time. Just over the last four years, the three that you and I have been working together, we've seen some really important shifts in the brain trust that has been carrying the torch for CDS. They're embracing the patient perspective in a way that they haven't conceived of before. I don't think they were ignoring it. I think that they were so intent on solving a problem at the clinical decision-making point that they neglected to involve the care and needs of the patient. Not that they didn't care about the outcome for the patient, but because there are so many balls to juggle already, they didn't think about adding the balls of the patient perspective, although they knew, to a large extent, that they were there. Part of that is because operationally you're trying, in a fairly paternalistic approach to care delivery, to provide the right tools at the point of care, at the point of decision-making, for the clinician. Little did we realize that key to being successful is whether the patient embraces the recommendation. And if it's one or five recommendations, how do you weigh, manage, and measure these options? Those are big shifts: involving the user in the process of thinking through what it should look like, how it should function, and when it should be delivered. Those are all great new transitional changes for clinical decision support.
Upstream and downstream of the decision
Health Hats: When I talk to people about making clinical decisions, there's so much focus on the decision. Sometimes I feel like a decision, and a buck and a quarter will buy you a Pepsi. You make a decision, but you've still got to do the work. And there's the upstream and there's a downstream. It seems that the more expert people are, the narrower they think. When they get up against patients who are expert in their lives, the decision is a drop, is a moment. What about who am I and what's important to me? That's upstream of the decision. What have I tried and what worked and what didn't work? Whether it's medical or nonmedical, legal, or illegal, there's all that stuff. That's downstream of the decision. Then there are so many different kinds of decisions. One-time decisions like "should I have my appendix out or not?" But then some decisions are made over and over again. Like, am I going to take the pill? Am I going to get the pills, and then am I going to take them? Let alone behavior change, which is over and over and over. Then if the decision doesn't turn out as expected, there's another set of decisions. There's a laser focus on the decision. I get that you have to be narrow because otherwise, it's just crazy overwhelming. But it doesn't seem real.
Laura Marcial: Yes, I agree. A big challenge is weight loss. It's an everyday, every-decision program. It makes a difference over a long period. The idea that it's a point-in-time decision is completely erroneous. It's an every-single-day, every-single-moment choice. You're right that, philosophically, we understand there's a difference between treatment decisions – making a life or death decision with a burst appendix versus "am I going to take my pain meds?"
Realistic expectations of CDS
Health Hats: When I communicate with people outside of the bubble, I need to be realistic about what we can expect from clinical decision support. I tend to sit there, and I think, "Oh, but you guys aren't thinking about this and you guys aren't thinking about that." I get frustrated. Then I think, "well, this is totally the wrong group of people to talk to about this stuff or to consider it. They're not expert in this." It's about right-sizing my expectations.
Laura’s going to talk about the N of one. Scientists study groups of people – say women over 50 years old for screening mammograms. That’s billions of people. Can’t study them all. So, they study a sample – some of them. N is the number in the sample. Say a thousand of the billions. The N is then a thousand. An N of One is you or me (if we were women). Not the population, not a sample, just one person. An N of 1.
Laura Marcial: It's so true. The reality here is that you've been an N of one in a sea of clinical decision support experts. You're trying to represent the biggest part of the population involved in and realizing any value from clinical decision support. And they're narrowly focused on the technology of CDS. It can feel like a lot of the message falls on deaf ears. The reality is that it is sinking in. There's this desire to bring a tool as a solution to a problem that's very human. You're alone, but more generally in engineering broadly, it took an engineer alone to figure out that you could solve that problem, and she created a whole new host of problems in doing so.
Health Hats: I’m a layperson who is an expert in my life. I want to make good use of clinical decision support for when I’m in that rare moment where I make choices with my professional team member. What should I do to be a better consumer? I don’t know about the words I’m using, but you get what I mean?
Laura Marcial: Yes. I think we’re looking for strong advocacy and input from patients. Give feedback to your clinical teams about the kinds of information that motivate you to adhere, to participate, to understand. In the information science world, we talk about having conversations about your information seeking behavior. What kind of information do you look for to prepare for your visit or to help you make decisions, and what do you trust or who do you trust?
Trust, trust, trust
Health Hats: Yes. Trust. Oh my God, isn’t that huge?
Laura Marcial: Yes, it is huge. That’s a critical piece of what people are really looking for in that interaction with the clinician. Is what my clinician’s recommending for me really the best there is out there? Can I trust them that they’ve done their research, or they’ve done their homework in making these recommendations?
So what? Do I know more now?
Health Hats: I’m trying to decide what more do I know now than I knew 45 minutes ago when we started this conversation? I need to right-size my expectations of clinical decision support and what it applies to. I feel a confirmation that it’s a human problem that we’re trying to automate in a busy and complex world. Trust is really important to this. I don’t know that we talked about this, but in CDS circles, I hear a lot about how we need to educate the patient. My frame is more that we need to learn about each other. I think I’ve told you this story before. When my neurologist diagnosed me with MS, he said, “I’m an expert in drugs and therapeutics to treat populations with MS. But I don’t know crap about you.” His goal was to learn about me, and my goal needed to be to learn about MS. I appreciated the two-way street of that. Sometimes when I listen to CDS experts, there’s not much of that two-way street: we have to learn about each other for it to work.
Laura Marcial: If we look at what is the motivation to make this automated solution, there are two core motivations. One of them is to improve quality overall. For example, I'm making the same recommendation in every healthcare facility throughout the nation. I'm providing the stepwise solution or treatment options with a certain level of fidelity – that is, sameness. Everywhere you go, there is at least the sense that you can move the needle on quality, universally. So, mammogram screening for everyone. So at least, on some level, it does catch more of the possible cases earlier and provides more treatment options for those few who will have positive screens. So, it's providing a level of quality and ensuring that quality throughout the healthcare system. That's a primary goal. Yet I think the practitioner understands that these recommendations are for populations. They're for large groups of people. When you get an individual, that N of one, into your practice, you have to figure out how that recommendation applies to that person and to that person's situation. The clinical decision support is intended to be a scaffold, a way to have a conversation. But it can't replace the conversation. It provides some critical treatment options, some decision points. Maybe it facilitates a conversation. But if you're not sharing bi-directionally some information that helps you tailor the decision to the person and the situation, then it can fail.
Health Hats: I have to process this very informative conversation. Thank you.
Laura Marcial: Yes. Thank you, Danny.
Clip from Oliver 1968, Fagin singing I Think I Better Think It Out Again
Reflections
Readers, you missed the brief clip in the podcast from the 1968 musical, Oliver, with Fagin singing: I'm reviewing the situation. …I think I better think it out again. Hey! OK. I'm processing. I'm back to the 3T's and 2C's – Trust, time, talk, control, and connection. People stuff. Tech still excites me – what potential! I'm grateful to the experts toiling to make sense of clinical decision support and deliver products that help us. I'm still a tech skeptic. I'm more convinced than ever that people at the center of care (patients, clinicians, and the people that support us) need to sit at every table. No one knows what they don't know – patients, clinicians, coders, knowledge engineers, or me. Tech makes humans faster with a bigger reach. But it's not magic. It can be mysterious. It can be remote and inaccessible. But it doesn't make bad good. Only humans can do that. I need to right-size my expectations.