Yesterday, in part one of our in-depth interview with Dr. Brian Hausfeld of Johns Hopkins Medicine, senior medical director of digital health and innovation and associate director of Johns Hopkins inHealth, we discussed the role of artificial intelligence in healthcare as a whole.
Today, Hausfeld, who is also a primary care physician in internal medicine and pediatrics at Johns Hopkins Community Physicians, turns his attention to Johns Hopkins itself, where he and multiple teams across the organization have implemented AI in ambient documentation and patient portal applications. They also are working with EHR giant Epic on deploying AI for chart summarization – a major undertaking.
Q. Let’s turn to AI at Johns Hopkins Medicine. You are using ambient scribe technology. How does it work in your workflow, and what kind of results are you seeing?
A. It’s a very fast-moving space, for sure. We’re seeing a lot of products adopting a broader strategy. We are similar to many who have taken some early steps in this space, recognizing that technology has not done what it should have done in healthcare.
Indeed, much of the data would say that, at least for the clinician, technology has done more harm than good in some ways, not least to our own workflows and our experience of delivering care. So we’re trying to identify the areas where we can bring technology back to its core purpose and make practicing medicine enjoyable again.
Again, many have recognized the documentation burden that sits on our clinicians with the explosion of EHR content, driven both by regulatory requirements and by general workflows across many large systems. So much of our focus has been on ambient AI: a listening device – the "ambient" part – captures the clinical encounter, whether it’s an outpatient visit, an ED history or inpatient rounds.
And on the backend, the AI tool – typically what is now known as a large language model, such as GPT – takes the spoken words exchanged between the parties and constructs them into newly generated text.
It uses the original function of these large language models: creating a passage of content, usually around a specific prompt – for example, giving the model, "Please write a history based on this medical conversation." We’ve currently deployed it in a number of ambulatory, or outpatient, clinics in a few different specialty areas, currently with our first product, and we’re thinking about how to understand different levels of functionality and how we might use more than one product.
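The prompt-driven pattern described above can be sketched in a few lines. This is a minimal illustration only: the prompt wording and the `call_llm()` stub are assumptions for this sketch, not the actual vendor or Johns Hopkins implementation.

```python
# Illustrative sketch of the ambient-scribe pattern: wrap a multi-speaker
# encounter transcript in a drafting instruction, then send it to a model.
# The prompt text and call_llm() are hypothetical placeholders.

def build_note_prompt(transcript: str) -> str:
    """Wrap an encounter transcript in a note-drafting instruction."""
    return (
        "Please write a history of present illness based on this "
        "clinical conversation. Flag anything uncertain for review.\n\n"
        f"Transcript:\n{transcript}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for the vendor's large language model endpoint."""
    return "DRAFT NOTE (requires clinician review): ..."

transcript = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a cough for two weeks."
)
draft = call_llm(build_note_prompt(transcript))
# The draft is only a starting point; the clinician reviews and edits it.
```

The key design point, echoed in the interview, is that the model output is always a draft: the clinician remains responsible for reviewing and editing before it enters the record.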
I had a clinic myself just this morning, and I was lucky enough to be using the ambient AI technology on a device – my smartphone, with our EHR on the phone – launching the ambient AI product, which listens and creates a draft note of the encounter. Of course, I am responsible for that note and need to review and edit it myself to ensure clinical accuracy. It’s really improving the clinical interaction a lot.
The ability to take your hands off the keyboard, look directly at the patient, and have an open conversation about a very intimate topic – their own personal health – really taking your eyes off the computer and onto the patient, is, in my mind, the main advantage so far.
Q. Johns Hopkins Medicine is also using AI for patient portal message draft responses. Please explain how doctors and nurses use it and what kind of results they get.
A. This enterprise tool is at an earlier stage for us. As many who follow HIMSS Media content are probably well aware by now, patient emails, or in-basket messages, generated through the patient portal have exploded through the pandemic.
Here at Hopkins, we have seen a nearly threefold increase in the number of messages sent by patients to our physicians, from our pre-COVID 2019 run rate to what we’re seeing now. And some of that is actually a good thing. We want our patients to engage with us. We want to know when they are feeling well or not, and to help with triage.
But again, clinical workflows – including payment models and clinical care models – were not designed for that constant contact, that continuous communication. They were built around visits. We acted in good faith, opening communication with our patients through a very simple channel, something we all use every day – email and text.
We became used to putting what we would otherwise say informally into written messages. But we didn’t really change the other side of it. The unintended consequence was dumping all of this volume onto an unchanged clinical practice system.
Now, we are all trying to figure out how to accelerate improvement in this important driver of clinician burnout while maintaining the benefit our patients get from easy contact with their clinical team.
So, a message comes in. Some are excluded, especially those with attachments and the like, because those types of messages are more difficult to interpret. Once a message reaches a member of the clinical care team, those with access to the pilot deployment of AI draft responses will see the option to select a draft response based on the content of the original message. The large language model looks at the original message and, following some instructions it has been given, tries to draft an appropriate reply.
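That workflow – exclude hard-to-interpret messages, then offer a model-drafted reply the clinician may start from – can be sketched as below. The exclusion rule, prompt text and `call_llm()` stub are assumptions for illustration, not the actual Epic in-basket implementation.

```python
# Minimal sketch of the in-basket draft-reply workflow described above.
# Eligibility rule and call_llm() are hypothetical placeholders.

def eligible_for_draft(message: dict) -> bool:
    """Messages with attachments are excluded, per the workflow above."""
    return not message.get("attachments")

def call_llm(prompt: str) -> str:
    """Placeholder for the vendor's large language model endpoint."""
    return "DRAFT REPLY (clinician must review before sending): ..."

def draft_reply(message: dict):
    """Return an AI draft the clinician may start from, or None."""
    if not eligible_for_draft(message):
        return None
    prompt = (
        "Draft a courteous reply to this patient portal message "
        "for clinician review:\n" + message["body"]
    )
    return call_llm(prompt)

msg = {"body": "Can I take my blood pressure pill with food?", "attachments": []}
draft = draft_reply(msg)
# The clinician can choose this draft or start from a blank message.
```

As in the interview, the draft is optional: the clinician can always discard it and start from a blank message.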
I, as a clinician, can choose to start with this draft or start with a blank message. Stanford just put out a paper on this that lays out the pros and cons pretty well. One benefit is the reduced cognitive load of trying to compose responses to very common types of messages.
We’ve also seen that clinicians who have picked up this tool and use it on a regular basis are definitely reporting reductions in in-basket burden and improvements in metrics of clinician wellbeing. But at the same time, I think there’s still minimal time saved, because draft responses are really applicable and really useful for only a minority of patient messages. In the published Stanford paper, it was 20% of the time.
We see our clinics range from low single-digit percentages to 30-40%, depending on the type of clinic, but still less than half. The tool isn’t perfect, the workflow isn’t perfect, and it’s going to take a rapid but iterative process to figure out how we apply these tools to the most useful scenarios.
Q. I understand Johns Hopkins Medicine is working on AI chart summarization, with an initial emphasis on inpatient course summarization. How will AI work here, and what are your expectations?
A. Of all the projects, this one is at the earliest stage. It’s a good example of how the application of the technology differs across the continuum of care, and of the depth of the problem being addressed.
In the previous examples – ambient documentation and in-basket draft responses – we’re really working on a very contained, transactional component of the clinical continuum: drafting a single visit and its associated discussion, or a single message and its response. That is a bounded amount of data.
When we start thinking about the broad topic of chart summarization, the sky is the limit – unfortunately or fortunately – in the problem to be solved and the depth of data that needs to be understood. And again, information needs to be extracted from the unstructured as well as the structured.
In fact, what we do as clinicians is that every time we interact with a chart, we move through it in different ways, extract what we feel we need to know, and summarize it anew. It is a complex task. So we’re trying to work in the most targeted area: during a patient admission, you’re inherently more time-bound than in other versions of chart summarization.
In an outpatient setting, you may have to summarize 10 years of chart information, depending on why the patient is seeing the physician or the reason for the visit. I had a new patient earlier today; I needed to know everything about his medical history. That is a large-scale chart summarization task.
With inpatients, we have the opportunity to put some time constraints on what needs to be summarized. So, to start, we scope it to the hospitalization – which may also include the reason for the admission – and the rest of the chart can be set aside.
Within the admission, there is the daily progression of your journey through your hospital stay and toward transition. These are communicated in daily progress notes and in handoffs between clinical teams. We can narrow the task to summarizing the things that change and happen from yesterday to today, even though there are so many possible inputs – imaging, labs, notes from the primary team, notes from consultants, notes from the nursing team.
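The time-bounding idea above – summarize only what changed since yesterday, across many input sources – can be sketched simply. The event fields, the 24-hour window and the prompt wording are assumptions for illustration, not Epic’s actual data model or Johns Hopkins’ implementation.

```python
# Illustrative sketch of time-bounded inpatient summarization: restrict
# chart events (labs, imaging, notes) to the last 24 hours of the
# admission and group them into a handoff-summary prompt for a model.
# All field names and the window length are hypothetical.
from datetime import datetime, timedelta

def events_since(events, now, hours=24):
    """Keep only chart events from the last `hours` hours."""
    cutoff = now - timedelta(hours=hours)
    return [e for e in events if e["time"] >= cutoff]

def build_handoff_prompt(events, now):
    """Assemble recent events into a summarization prompt."""
    recent = events_since(events, now)
    lines = [f'{e["source"]}: {e["text"]}' for e in recent]
    return (
        "Summarize what changed in the last 24 hours of this admission "
        "for a clinical handoff:\n" + "\n".join(lines)
    )

now = datetime(2024, 5, 2, 7, 0)
events = [
    {"time": datetime(2024, 5, 1, 9, 0), "source": "lab", "text": "Hgb 9.8"},
    {"time": datetime(2024, 4, 28, 9, 0), "source": "imaging", "text": "CXR clear"},
]
prompt = build_handoff_prompt(events, now)  # the older event is excluded
```

Bounding the window is what makes the problem tractable: the model sees only the interval relevant to the handoff rather than the whole chart.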
Doing this by hand is very time-consuming, so summarization could provide meaningful efficiency for inpatient teams, and it certainly addresses a known area of risk: handoffs. Whenever your clinical team changes during an inpatient stay – which is often, because in most cases we don’t ask clinicians to work 72 hours straight – we have an opportunity to assist in those areas of handoff risk.
So we’re trying to stay bounded in scope, and even in this limited case a lot of work is needed to develop a potentially useful tool for real clinical workflows, given, quite frankly, the breadth and depth of the available data. We’ve just begun this journey of discovery, working with our EHR partners at Epic, and we look forward to seeing what might be possible here.
To watch a video of this interview with bonus material not included in this story, click here.
Editor’s Note: This is the seventh in a series of features on top voices in health IT discussing the use of artificial intelligence in healthcare. To read the first feature, on Dr. John Halamka at the Mayo Clinic, click here. To read the second, with Dr. Alpin Patel at Geisinger, click here. To read the third, with Meditech’s Helen Waters, click here. To read the fourth, with Epic’s Rana, click here. To read the fifth, with Dr. Rebecca G. Mushoris of Mass General Brigham, click here. And to read the sixth, with Dr. Melek Somai of the Froedtert & Medical College of Wisconsin Health Network, click here.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS media publication.