
In the Wild: Data to Info to Action & Back & Again #158


Data is not info; info is not action. Data, cooked into info, could lead to action. People add context, values, culture, experiences, history, and biases to data and info.

Blog subscribers: Listen to the podcast here. Scroll down through show notes to read the post.

Subscribe to Health Hats, the Podcast, on your favorite podcast player

Please support my blog and podcast.

CONTRIBUTE HERE

Episode Notes

Prefer to read, or experience impaired hearing or deafness?

Find FULL TRANSCRIPT at the end of the other show notes or download the printable transcript here

Contents with Time-Stamped Headings

to listen where you want to listen or read where you want to read (heading. time on podcast xx:xx. page # on the transcript)

Proem. 1

Introducing Bryn Rhodes and Laura Marcial 04:02. 2

Realizing the fragility of health 06:41. 2

We made which decision? How did it turn out? 09:52. 3

End point? Decision, action, continual learning? 13:26. 4

What data to collect? When to collect it? 14:32. 4

Who does this work for in real life? 15:48. 4

Context matters for blood pressure 18:14. 5

Ongoing learning post research 22:32. 6

Spanning the gulf between specialized expertise 23:52. 6

Data needs infrastructure to become information 26:15. 6

Summarizing for the Public 30:17. 7

Hubris. Satisfied with stopping at results. 33:45. 8

Reflection 39:21. 9

Please comment and ask questions

Credits

Music by permission from Joey van Leeuwen, Drummer, Composer, Arranger

Web and Social Media Coach Kayla Nelson @lifeoflesion

The views and opinions presented in this podcast and publication are solely the responsibility of the author, Danny van Leeuwen, and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute®  (PCORI®), its Board of Governors or Methodology Committee.

Sponsored by Abridge

Inspired by and grateful to Lauren McCormack, Bill Lawrence, Lygeia Ricciardi, Cynthia Cullen, Juhan Sonin, Cheryl Damberg, Jack Needleman, Matthew Pickering, Aaron Carroll, Greg Merritt, Wesley Michael

Links

Health eDecisions

Clinical Quality Framework Initiatives (CQF)

FHIR® (Fast Healthcare Interoperability Resources).

Healthcare Triage Podcast: Science, Culture, and Reproducibility series. An excellent series about the challenges of the research-industrial complex's values and priorities. So much to learn here, even for me, eyeball-deep in research funding.

Related podcasts

Health Hats episodes about Clinical Decision Support

Clinical Decision Support Technology – Still Human

Humanity Before Technology – Clinical Decision Support

A Zebra, Not a Horse: Rare Patient Voice

About the Show

Welcome to Health Hats, learning on the journey toward best health. I am Danny van Leeuwen, a two-legged, old, cisgender, white man with privilege, living in a food oasis, who can afford many hats and knows a little about a lot of healthcare and a lot about very little. Most people wear hats one at a time, but I wear them all at once.  I’m the Rosetta Stone of Healthcare. We will listen and learn about what it takes to adjust to life’s realities in the awesome circus of healthcare.  Let’s make some sense of all this.

To subscribe go to https://health-hats.com/

Creative Commons Licensing

The material found on this website created by me is Open Source and licensed under Creative Commons Attribution. Anyone may use the material (written, audio, or video) freely at no charge. Please cite the source as: 'From Danny van Leeuwen, Health Hats' (including the link to my website). I welcome edits and improvements. Please let me know: danny@health-hats.com. The material on this site created by others is theirs, and use follows their guidelines.

The Show

Proem

Sometimes I feel absolutely disgusted with the hubris of academic research. How did we ever get to the place where we're satisfied when we define high-value evidence from research that controls for most real-life factors (variables)? Supposedly good research controls for circumstances, settings, and populations until the research becomes so vanilla as to be meaningless to me and you. How can we expect people to use vanilla research to make decisions when it excludes people like them? It excludes women, people of color, the homeless, the incarcerated, and those with rare diseases, and it only includes able-bodied people or mostly people treated in academic medical centers. Research barely knows how to use person-recorded data; can't figure out how to track people's decisions or the outcomes of those decisions over time; finds claims and medical records data to be the strongest data; can't figure out how to correct medical record data errors. I get how complex this is, but how did we end up with so few resources to continually learn and get out of this rut? It's just not good enough. I'm tired of the excuses: we don't have interoperable data, sufficient standards, tested methodology, or a single person ID. The excuses are truly massive challenges. Yet it's not good enough that we stop after we do the easy stuff. OK, I take that back. It's not easy. But it's what we have an industry for – systems, processes, and money – to do that kind of research. We just haven't figured out how to take the next step to make it meaningful to more people. OMG, what a rant.

Let's narrow this frustration to medical decision-making (I know, not so narrow). We know that making decisions/choices about our health is complicated. We know that making choices as individuals relying on studies of populations, groups of people, is fraught – as much art as science. Awareness of disparities in healthcare delivery and research just adds to the fraughtness. I don't think that's even a word. I've been perseverating for decades about decisions people and their clinician partners make, and about tracking those decisions and the resulting outcomes in real-time, or at least over time. In my naivete, I thought this was brilliant and simple. I bare my humility here once again. But like a dog with a bone, I can't let go. Today, I want to add to the complexity by speaking with two informaticists, asking them about the dilemma from their specialized point of view. Informaticists are experts in information, data, and decision-making. Bryn Rhodes and Laura Marcial specialize in creating, testing, and implementing apps that clinicians and patients use while making decisions together.

Introducing Bryn Rhodes and Laura Marcial

Bryn Rhodes has been a software developer for more than 20 years, focusing on information and database management systems and applications. He has been a member of the Health eDecisions and Clinical Quality Framework (CQF) initiatives, working on the problem of sharing executable clinical knowledge, and is currently working on bringing the standards developed in the CQF initiative to FHIR® (Fast Healthcare Interoperability Resources). FHIR standards define how healthcare information can be exchanged between different computer systems regardless of how it is stored in those systems. They allow healthcare information to be available securely to those who need to access it and have the right to do so, for the benefit of a patient receiving care.
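For readers who like to see what "exchanged between different computer systems" looks like in practice, here is a rough sketch of a FHIR blood pressure Observation being sent from one system to another. It is illustrative only: the server URL is hypothetical, the patient reference and values are made up, and a real integration involves authentication and error handling that I've left out.

```python
# Hypothetical sketch: a FHIR blood-pressure Observation being sent to a FHIR server.
# The server URL, patient reference, and values are made up for illustration.
import requests  # assumes the 'requests' package is installed

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9",
                         "display": "Blood pressure panel"}]},
    "subject": {"reference": "Patient/example"},    # who the reading is about
    "effectiveDateTime": "2021-01-15T09:30:00Z",     # when it was taken
    "component": [
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6",
                              "display": "Systolic blood pressure"}]},
         "valueQuantity": {"value": 150, "unit": "mmHg"}},
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8462-4",
                              "display": "Diastolic blood pressure"}]},
         "valueQuantity": {"value": 95, "unit": "mmHg"}},
    ],
}

# Any system that speaks FHIR can receive this same structure, no matter how
# it stores blood pressures internally.
response = requests.post("https://fhir.example.org/Observation",
                         json=observation,
                         headers={"Content-Type": "application/fhir+json"})
print(response.status_code)
```

The point is that the sending and receiving systems agree on this shared shape, even if each stores blood pressures completely differently inside.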

Laura Marcial is a data scientist from RTI (RTI is an independent, nonprofit institute that provides research, development, and technical services to government and commercial clients worldwide). Full disclosure, I serve as an independent consultant to this independent institute.  Laura describes herself as a human-centered design enthusiast, working in clinical decision support development, implementation, and health IT (Information Technology) evaluation. Laura Marcial understands both the human and the tech sides of clinical decision support and has several years of experience explaining it to me and us. Today’s episode is Laura’s third as our guest.

Realizing the fragility of health

Health Hats: Welcome. I appreciate both of you, Laura and Bryn, joining us this morning. I wanted to start with an introduction: when did you first realize that health was fragile? Laura, do you want to start?

Laura Marcial: Sure. I've been working in the health domain for most of my adult life, after growing up as a doctor's child. I would say that there was a period when I was a young mom trying to help my clinical father navigate a major procedure. We were trying to determine the right path and ended up going to dozens of clinical appointments to decide which path would work the best for him. It was an eye-opener how complicated the system is and how manual the process can be. One of the key issues was passing around medical images, getting them to the right people, making sure that everyone was looking at the same thing, making sure that it was the right kind of information to decide with. Yeah, that was a complicated process. And the whole time I was railing at it: why isn't this easier? Why is this so difficult that you have to know what you're doing and how to navigate the system?

Health Hats: Bryn, how about you?

Bryn Rhodes: I don't know. That's a tough question. I guess it would be when I was probably in elementary school, and my mother was diagnosed with MS. I didn't understand what was going on, but I knew it wasn't good, so that would probably be it.

Health Hats: You were in elementary school, so probably you weren’t thinking about decisions made. Now your career is involved with health decision-making. How did you start becoming aware that there were decisions to be made in health and medical care?

Bryn Rhodes: I guess it's probably pretty late in my life. But I remember taking my father-in-law to an appointment, and halfway through the appointment, the nurse realized that they couldn't perform the procedure that had been scheduled because of an operation my father-in-law had a long time ago. That's a decision that I knew was being made, but it was my first kind of firsthand experience with a mistake, if you will. Had the correct information been available, that wouldn't have happened. That's not a life-threatening situation, but it's an example of not sharing the appropriate information leading to a wrong decision. And that was something that kicked me in that direction. How do we make sure that this kind of information is available so that those kinds of very simple mistakes don't happen? And it goes up from there.

We made which decision? How did it turn out?

Health Hats: So, the reason I wanted to talk with the two of you was that it seems like all these decisions are being made, and there are decision aids and guidelines that help people, whether they are clinicians or patients or caregivers, make those decisions. And so, people choose: they choose A, choose B, choose C, or they don't choose at all. But then what happens? Choices are made for so many different reasons. Some of it is based on science, some of it based on circumstances, but a decision is made, and it doesn't seem like we systematically keep track of the decisions made and then what the outcomes were. And I guess my question for you guys is first, is that true? Is my assumption valid? In the informatics realm, what are the challenges of doing that? Bryn, do you want to start?

Bryn Rhodes: Sure. It's true in general. We don't tend to track the outcome in a way that lends itself to learning from that outcome from a system perspective. Many specific studies control all the variables, establish outcomes, and try to learn from them. And that's where we are in terms of contributing to decision-making. The challenges are: how do you get to the level of rigor that's needed to establish a result when you're not designing the experiment, so to speak, from the beginning, right? When you're just dealing with real-world data that's coming through. Part of it is that we don't capture the outcome. But part of that is because we don't necessarily know that we need to, and maybe we didn't track the particulars that need to be tracked. So, you get a lot of perspectives trying to understand what happened, using proxies for the real data you need. That's ambiguous, and not having that level of detail in the data means that you can't trust it to the level you need to establish a result. Oh, it was this kind of visit, so it must've been this kind of blood pressure reading.

End point? Decision, action, continual learning?

Health Hats: Perhaps the sequence to action based on research is: research completed; findings shared and used by people and clinicians; choice/decision made; then decision acted upon. But that's not the endpoint. What was the actual outcome of the decision acted upon? High blood pressure found. Guidelines say take drug X and change habit Y. Drug X prescribed, and habit Y changed. Did the blood pressure come down? What else happened, unintended or otherwise? What do we know about the person's context and the data? What did we learn? Bryn is talking about outcomes, not the decision made. I asked Laura to help me frame a question.

What data to collect? When to collect it?

Laura Marcial: Sure. I just have a couple of responses. So, I think Bryn, you're right that part of the nature of the problem is what to collect, when to collect it, and how to manage the outcome piece of things. And I think the gold standard here is a clinical trial, where everything is highly constrained, the outcomes of interest are defined, and we control for other factors. Still, with real-world data and real-world situations, it's much harder to control for other factors. That said, I think there's also this sort of series of parallel paths. So, we're working on looking at interventions that may disrupt that process in a way that helps us collect outcomes information for a specific decision, and the workflow associated with making a decision and then implementing a change as a result of that decision: a change in medication, an order for a lab, or a referral to another provider. Those aren't well connected yet: the ideas around generating information to support decision-making, executing on a decision, and then what happened as a result of the decision. They're all running in parallel paths and not very well connected, whether or not they're all running effectively.
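As an aside from me: here is a toy sketch of what "connecting the parallel paths" could look like as data. This is not a real standard, and not anything Laura's or Bryn's projects prescribe; the field names are hypothetical, just to show that a shared identifier is what lets a decision, the action taken, and the eventual outcome be looked at together.

```python
# Illustrative only: one way the "parallel paths" could be tied together as data.
# The field names are hypothetical, not a real standard.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    decision_id: str
    guideline: str     # e.g., "Hypertension guideline: drug X + habit Y"
    choice: str        # what the patient and clinician actually chose
    made_on: str       # ISO date

@dataclass
class Action:
    decision_id: str   # link back to the decision that prompted it
    kind: str          # "medication order", "lab order", "referral"
    detail: str
    done_on: str

@dataclass
class Outcome:
    decision_id: str   # the same link closes the loop
    measure: str       # e.g., "systolic blood pressure"
    value: float
    observed_on: str
    context: Optional[str] = None  # e.g., "reading taken after hand injury"

# With a shared decision_id, "what did we decide, what did we do, and how did it
# turn out?" becomes a simple join instead of guesswork with proxies.
decision = Decision("d-001", "Hypertension guideline", "Start drug X, change habit Y", "2021-01-15")
action = Action("d-001", "medication order", "Drug X 10 mg daily", "2021-01-15")
outcome = Outcome("d-001", "systolic blood pressure", 132.0, "2021-03-15")
```

In real systems, that linking is exactly the part that tends to be missing, which is why outcomes get reconstructed from proxies after the fact.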

Who does this work for in real life?

Health Hats: So, I come at this from a different point of view. OK, so these controlled studies have been done, and decision aids or guidelines are built around them, so the recommendations or the evidence are around specific diagnoses, specific circumstances. But then we're starting to think about the disparities in people's backgrounds, circumstances, and environments. OK, so let's get broader and be able to say, OK, who did this work for in real life? It just frustrates me, this need to control so much, as life is not that controlled. Wouldn't we be building a data set of broader circumstances and broader diversity if we were able to collect this information? And so, in a way, it would seem maybe it would be pretty good information versus Grade A information. I just don't know. I feel like we stop before it's significant to more people. Do you have some thoughts there, Bryn?

Bryn Rhodes: Yeah, it's challenging, because I hear you, and I understand the concern and the frustration with why we can't just share all the data with everybody. But every data point comes with a bunch of context. And the further away from the actual clinical process that data point is, in terms of where it's collected, the more context is lost, and even in cases where it's collected right at the point of care, the context that's needed isn't necessarily captured.

Health Hats: Can you give an example of context?

Context matters for blood pressure

Bryn Rhodes: Yes, blood pressure. I recently broke my right fifth metacarpal. I've been working in this field for a long time now, and I was thinking about all the decisions and what kind of measures I was going to show up on, and what kind of things the doctor was going to get dinged for if they didn't do them when they saw me. And one of the things was a blood pressure reading, and it was high. And they said that's normal for having an injury of this kind; it's not unexpected, so think nothing of it. But maybe it really is high. We don't know. And on a follow-up visit, no blood pressure was taken. Obviously, that's unrelated to the injury, but the blood pressure reading associated with that visit being abnormally high will probably show up on a measure somewhere. Was the context that it's part of an injury captured as part of that blood pressure reading? Again, it's a very simple example. But that's the kind of context that isn't always captured with the specificity needed to use the data, and even to say something like "make use of the data": for what purpose? If it's for a quality measure explicitly looking for blood pressure control, you need a lot of information about that blood pressure reading to make sure that it's valid. And that context is key, not only for just having it be available but for knowing what kinds of things you want to use it for. And that, I think, is the real challenge, because you can't just take data from an unrelated visit and apply it universally in any situation.

Health Hats: That makes so much sense.

Bryn Rhodes: So, it's hard, right? It's challenging. I think the more focused an application is, the more confidence you have that the data you're collecting as part of that application matches the clinical intent of the workflow in the application. And so, if you have an EHR, that's a very general-purpose tool, right? It covers thousands and thousands of use cases. The data you collect in those workflows is not necessarily going to match the clinical intent of the use case that's being served. So, it's challenging for lots and lots of reasons.

Health Hats: Let's unpack that for a second. The EHR, the Electronic Health Record, is a general-purpose tool covering thousands of use cases. Software engineers think in use cases. Most people think of a scenario: a situation with a patient, a diagnosis, and a clinician. Many people have similar use cases. High blood pressure after an injury is a use case. So is high blood pressure with diabetes. So is high blood pressure after a bonk on the head, or high blood pressure and glaucoma. The electronic medical record, a general-purpose tool, needs to work for all of those scenarios, also known as use cases. So, see where context fits in?
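To make the context point concrete, here is a simplified, hypothetical sketch in the same FHIR-ish spirit: the blood pressure reading points to the visit it came from, and the visit carries the reason, the injury. The resource contents and the toy rule are mine, not an actual quality measure specification.

```python
# Simplified sketch: the blood pressure reading references the visit (Encounter)
# it came from, and the visit carries the reason. Values and the rule are toy examples.
injury_encounter = {
    "resourceType": "Encounter",
    "id": "enc-injury",
    "reasonCode": [{"text": "Fracture of fifth metacarpal"}],
}

bp_reading = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "Blood pressure panel"},
    "encounter": {"reference": "Encounter/enc-injury"},  # the context lives here
    "component": [
        {"code": {"text": "Systolic blood pressure"},
         "valueQuantity": {"value": 150, "unit": "mmHg"}},
    ],
}

encounters = {"enc-injury": injury_encounter}

def counts_toward_bp_control_measure(observation) -> bool:
    """Toy rule: skip readings taken at visits whose reason was an acute injury."""
    ref = observation.get("encounter", {}).get("reference", "")
    encounter = encounters.get(ref.split("/")[-1], {})
    reasons = " ".join(r.get("text", "") for r in encounter.get("reasonCode", []))
    return "fracture" not in reasons.lower()

print(counts_toward_bp_control_measure(bp_reading))  # False: injured-visit reading excluded
```

Without that pointer to the encounter and its reason, a measure has no way to tell the after-an-injury reading from an everyday one.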

Ongoing learning post research

Health Hats: So, let's just posit that money, funding, is not a limitation and that what we want to do is increase the ongoing learning we can do post-research. We came up with these guidelines about treating blood pressure, and we did it with limits: again, limited populations, limited settings. We came up with something, and then we want to continue the study for ever-larger groups of people by collecting useful data routinely over time. What would that take? On the information/records level, what would it take to continue those studies?

Spanning the gulf between specialized expertise

Bryn Rhodes: To me, the real gap is the distance between the people who can make the systems behave the way they need to and the people who have the domain expertise and clinical understanding, who know what the workflow is, what the data means, and how it should be reasoned over. And that gap is too big because it's too specialized in both directions, right? There's this giant gulf between software engineers on the one hand and clinical experts on the other. And that's where informatics comes in. Think about the problem as: how can we best support the creation of systems that match clinical intent? If you start from an approach that recognizes that's what you're doing, building a clinical system, and you involve informatics and clinical expertise throughout that process in a way that lets them participate not only in the requirements gathering and all of that, but in the definition and expression of the logic involved and the reasoning that's required, then I think you start to close that gap. That's where I think you get to places where you have systems that collect data that matches clinical intent and captures appropriate context, because it's part of a clinical workflow. And if you have that, and you have a blood pressure measure and process, for example, that you're working through, and you've built it in that way, then you have some confidence in the collected data and that the appropriate context was established whenever that data was collected.

Health Hats: Interesting. Laura, what do you think?

Data needs infrastructure to become information

Laura Marcial: Just a couple of comments, to take off from what Bryn said. I know sometimes we've used the analogy, and maybe it breaks down at a certain point, but I think it does work at the level of basic building blocks. As with infrastructure like highways and bridges, we've been working toward some sort of rules of the road, some rules of engagement, some standards. These help us make data interoperable, move it from place to place, or exchange it in a meaningful way, and then some infrastructure to support that. What we're saying here is that some of that infrastructure is non-specific; it's not so specialized. And we've been working harder to make it more specific to the clinical problem and a clinical solution: being able to connect pieces of information from the patient and pieces of information from clinical training and bring them together in terms of deciding, and then chasing an outcome, looking toward a path that may improve quality of life for a specific person in a specific way. High blood pressure is a good example. It's important to keep trying to chase that down. But there are pressures at every step in the sequence. Pressure from the quality measure world to develop a way of looking at those outcomes as a succinct, concise quality measure. And then certainly there are pressures from the patient perspective to try to achieve an outcome that improves their lives: managing blood pressure, for example, and ensuring that they don't develop secondary or additional disease processes due to not managing the blood pressure. But I think the overall story here is that it's not super straightforward. It isn't a matter of just hopping in your car and driving down the road. Some things are the same, and there are some roads we can lay down here to make this easier. I think the path we're on is figuring out what kinds of roads we can lay down. So, in terms of a holy grail, with money not being a consideration and everyone playing nice together: is there an app store environment for clinicians to access the tools they think will serve them best in interacting with their patients? I know that clinicians are configuring electronic health record environments to the best of their ability to match the workflow they think will work best for their patient population. We've been working on developing applications that can support some of those kinds of activities and improve the movement of information from patients to providers, and then from providers back to patients, to make clinical decisions and support them.

Now a word about our sponsor, ABRIDGE.

Use Abridge to record your doctor's visit. Push the big pink button and record the conversation. Read the transcript or listen to clips when you get home. Check out the app at abridge.com or download it on the Apple App Store or Google Play Store. Record your health care conversations. Let me know how it went!

Summarizing for the Public

Health Hats: Laura, can you help me? Let's try to summarize a little bit. When I have talked about this to laypeople, they're like, it's a no-brainer. And when I talk to technical people, it's like, impossible. If we wanted to explain to laypeople why this is such a challenge, what would we say?

Laura Marcial: I think Bryn brings up a great point that some of this is just like we described in terms of RCTs, or randomized clinical trials. To do good science or research, or to develop a more solid, robust clinical practice guideline, you need to control outside factors. So, you do these pieces of research in silos, not completely disconnected but very highly controlled. And then you generalize from there: you take that guideline or principle and say, let's put it into practice. Placing that action into practice means that you're moving it into the wild; you're moving it into the real world. And I think what we underestimate is how to model the complexity of the real world in a meaningful way and end up with clinical practice guidelines that are robust enough to support and influence care decision-making. So, I guess that means that when a person has an individual problem that they're trying to solve and brings that to their clinician, they feel like it's straightforward. But clinicians are trained to think about other potential disease processes, other factors, maybe external, maybe biological, who knows, to help make a treatment decision, and they bring their experience with patients, their expertise from their clinical training, and what they've learned from and about the patient. What we're trying to do is say that we want to capture at least the decisions and their outcomes in that conversation or in that process. And decisions happen in real-time. Sometimes they happen very quickly. If we think about the situation with COVID and how quickly some treatment decisions must be made and actions taken, without a lot of solid evidence in many cases, we need a process that can turn information around quickly. So, in the perfect sense, in our heads, this idea that we can capture data that's meaningful at different time points would help us see how a clinical recommendation is connected to a decision, connected to an outcome. And then that outcome re-informs the clinical guidance we started with, that clinical practice guideline. It sounds good. It's easy to draw on a piece of paper, but it's much harder to build.

Hubris. Satisfied with stopping at results.

Health Hats: Thank you for that. Then there's such hubris in this. And here I am, eyeball-deep in research; I'm on the Board of PCORI. And yet we believe that by doing these studies and laying things out in narrow circumstances, decisions will work in real life, when there is so much variety and so much excluded. It's just hubris.

Bryn Rhodes: That's an interesting take. I think it's caution, right? Because what Laura is saying is that the complexity is recognized. You have to get to that level of control to establish a result with a level of statistical significance that you can use to say, yeah, this is an effective decision to be made in this case. But there are so many variables that are controlled to get to that level of confidence in the data. When we say apply this decision because we've established that it works in this scenario, I think there's a tremendous amount of caution that comes with a guideline. There's a huge amount of effort that goes into making sure that the evidence supports the recommendation being made, and there are whole frameworks, and whole fields of people, dedicated to figuring out how best to communicate the level of confidence and support that the evidence has for a particular decision, and to surfacing that when a recommendation is made as part of decision support.

Health Hats: I'll wrap this up, but I have to say that so many of these studies are not done, for example, with women, or with people who live rurally, or with people of color. And so then to think that... anyway. Thank you. This is wonderful. I appreciate this.

Health Hats: Thank you guys so much. I appreciate that. Thank you very much for taking the time. Yeah. Thank you. Thank you. It’s awesome.

Laura Marcial: Have a good day. Bye-bye, Happy New Year.

Reflection

Data is not information. Information is not action. Data by itself only takes up space – ink on paper and bytes on a drive. Data, cooked into information, could lead to action. Decisions made, habits changed, treatment completed, medicine taken, peace of mind attained: these are actions. To transform data into information, people – patients, clinicians, researchers, informaticists, designers, companies – add context, values, history, culture, biases. Sometimes people use algorithms – the superfast crunching of data – to transform that data into information. But it's still all done by people. Even with radical transparency, we can't possibly know all the context, values, history, culture, and biases that transform data into information. The bridge between data and information requires infrastructure: standards, methods, communities, hardwired collaborations between stakeholders, as Bryn said, among informaticists, clinicians, and patients, not to mention academic researchers, developers, health systems, and government. Action also requires infrastructure, communities of support, money, and hope. Perhaps we could put as much energy into understanding and building infrastructure and methods for context and action as we put into academic research. Would that get us closer to my pie-in-the-sky desire to better understand the ongoing impact of decisions on individuals and groups that look like each of us? Goodness, the complexity boggles my mind. Although I'm almost 70 and a white man of privilege, I often think of myself as little 8-year-old Danny van Leeuwen, bewildered and unsettled. What can I/we do? What action can I/we take? We can be curious, ask questions, listen, network, participate, expect, partner, adapt, invest, and hope. Thank you. Onward.

Danny van Leeuwen

Patient/Caregiver activist: learn on the journey toward best health
