This transcript has been edited for clarity.
Eric J. Topol, MD: Hello. I'm Eric Topol, editor-in-chief of Medscape, and I'm excited to be joined by Abraham Verghese from Stanford University for our first "Medicine and the Machine" podcast. We're envisioning a monthly podcast where we discuss the intersection of artificial intelligence (AI) with the practice of medicine.
This is obviously an important topic. It's timely, and we're thrilled to have the chance to have these conversations. Along the way, we will be bringing in other people and perspectives. Welcome, Abraham.
Abraham Verghese, MD: Thank you so much, Eric. It's great to be a part of this.
Topol: I thought we'd start out talking about the program you have been building at Stanford. It's called "Presence: The Art and Science of Human Connection." I had the privilege of joining one of your programs last year at Stanford and I'm sure you're continuing to build on that. Maybe you could tell us about what you've been building there, because it's perfect for the subject of medicine and machines.
Verghese: I'd love to tell you more about that. "Presence" came about for two reasons. One was our sense that the electronic medical record, at the time, was a technology that was so intrusive in the lives of physicians and was causing so much distress that we needed to better understand it. We needed to better explore the effects of technology—especially new technology—on the practice of medicine and anticipate what it would do to human beings, as opposed to what it did in terms of what it was designed to do.
That also led us to realize that on a campus like ours, where you have all these other schools—engineering, law, business, fine arts, and so on—so many other people were doing interesting work that never quite made it to the living laboratories of the hospital and clinic, even though it applied to us.
For example, a lawyer was doing fascinating work on how, if you're my age, let's say in your 60s, and you go to see a young orthopedist, you have a higher chance of immediately getting a recommendation for a knee replacement than if you see an older orthopedist, who might try other options before coming to that. And there was someone else in sociology doing wonderful work on trust.
Again, none of that was making it into our clinic, so we decided that we would build a center to encourage collaborations—not for the purpose of writing papers and academic exercises, but to see whether we can change what happens in the living laboratory of the hospital and the clinic. One of our biggest interests, as you can imagine, is very much the advent of AI, the advent of any new technology that comes with some kind of a cost, not always visible but very much there.
In a nutshell, that's what the Presence Center is about. I have some wonderful associates, and "Presence" is growing in many different directions, like a hydra, with all kinds of interests—more than I can keep up with sometimes.
Topol: That's fantastic. It's a great initiative and much needed.
It reminds me of an article about presence that you wrote a couple of years ago for "Narrative Matters," a section in Health Affairs. There you told the story of how you would take medical students, I believe, to an art museum to get them to be better observers and have more presence. Tell us about that.
Verghese: I have the great honor of rounding with the chief residents at Stanford every Thursday afternoon and the medical students on Wednesday afternoons. These rounds are very much about reading the body as a text, and so we deliberately walk into the patient's room without knowing the patient's condition. I ask them not to tell the group or me what they have—not so we can be clever and come up with the diagnosis, but because it's amazing how, when you put labels on a patient, you simply stop seeing other things. These are really exercises about looking, about the clinical gaze.
To give you an example of the kinds of things that are shocking in terms of what we don't notice, one time we were seeing a very pleasant Asian gentleman who was in the hospital for liver dysfunction. He was quite gracious and let us examine him. When I came out of the room, I asked the assembled crew, "What do you notice about the mother?" The mother was in the room and they hadn't really noticed anything about her, but the striking thing to me about the mother was that she had vitiligo. Her skin was white and her son's skin was brown. Sometimes our focus is so tight on the patient that we miss interesting observations.
So, I take them to the museum that's right next door to us, a modern art museum, and we look at various things. It's a nice way for them to appreciate that there is so much around for them to look at. And the interpretation is very much colored by the lens through which they look at the world. We spend a long time analyzing our own subjectivity based on our biases, our training. It's a fun thing we do.
Topol: That is right up our alley here, as we're discussing the narrow function of AI; it doesn't have the context and the human side of medicine, which can be cultivated and is so complementary, symbiotic with what AI tools can do.
For example, you have an AI reading of a scan, and it finds pulmonary nodules that would otherwise have been missed. In fact, 30% of scans interpreted by radiologists today contain false negatives, findings that are missed. Here you have the person, the human, making the interpretation but with a much wider context, not just with training for finding specific things. This idea of the man and machine combo—how does that help in presence, Abraham?
Verghese: One of the things that strikes me, Eric, is that I had the pleasure of coming to your annual conference at Scripps, which used to be the Future of Genomic Medicine and is now called the Future of Individualized Medicine. I remember being stunned because I felt I had arrived at the moment when the machines are indeed better than us. In select applications, such as the radiology example you mentioned, it was hard to argue that they weren't better than us.
But AI in medicine is different from AI in so many other industries. If you replace a postal delivery system with AI or if you replace car assembly with robots, there is a human cost as far as job displacement and so on. But in medicine, in the context of illness, you can't just plug in a machine and have it serve all your functions when illness itself is such an isolating, emotional, terrible experience. You very much need the kind of nurturing, the kind of hand-holding that you don't have to think about in the assembly of a car, for example.
So, as I sat in your conference, I was struck by where we are: We have come to this dystopian moment foretold in so many novels, and it isn't a novel anymore; it's here. But I was also struck by the idea that the great privilege of medicine is that we are admitted to people at their most vulnerable, delicate moments, and part of the therapy, part of the gift, is just being with them, willing to listen. It's not always knowledge. Oliver Wendell Holmes Sr. said, "It is the province of knowledge to speak, and it is the privilege of wisdom to listen." There is a very valuable function in listening and being.
I am excited about where AI is taking us, but I also think it is just the beginning of our understanding of what our role is going to be. It's going to be different, that's for sure, and I hark back to a figure in your book, Deep Medicine.[1] You include a very simple graphic that shows the course of medical and intellectual human development, of human knowledge, and then you have this rising parallel line of machine knowledge that, for many centuries, I would say, was well below human knowledge. But then this machine knowledge is rising, rising, and now the line has eclipsed human knowledge. You ask in your book, what is it that we need to do? We don't just throw up our hands and give up. We human beings need to be more human.
I'm hoping that our discussions in this podcast series will allow us to explore that realm, even as we catalog the many marvels that are coming down the pike with AI. I hope that it will also help us be less shy about talking about what it is we offer. As [William] Osler said famously, "It's much more important to know what sort of patient has this disease than what sort of disease the patient has." I think now the burden is going to be on us to do that part very well.
Topol: You touched on a critical concept, a component of this, which is listening. One of the issues is that we don't listen well because we have limited time. Listening is part of presence, and it seems that, as good as AI is, moving forward and gaining momentum—at least in my view, and I would be interested in yours as well—it will never truly be able to digitize the life story of a person.
Listening to that life story often may help us make the correct diagnosis that the patient is already well in touch with. But beyond that, it helps just to understand and cue into the story: what this person is all about, what they are worried about, what they are excited about. Yet we don't typically listen. We don't give time because we interrupt patients within seconds of them starting to talk. So, what about this listening? How does that build to presence?
Verghese: I think it's important to say at the outset—I know this is certainly true of you, but people have their doubts about me—I'm not a Luddite. I am actually a great fan of technology and I'm an early adopter of most new technology, but I believe it's worth saying and emphasizing that the great promise of AI for us is that it will free up time so we can spend better quality time with the patient. Free us up so we are not distracted by trying to enter the requisite things for Epic or Cerner or whatever system we're using. We aren't shifting our gaze back and forth, because in the background, natural language processing is capturing our whole encounter and putting it in the computer so that we don't have to be the highest-paid clerks in the hospital. I believe that is the greatest promise of AI—that it will enable us to spend more quality time with the patient, more listening.
Topol: I couldn't agree with you more. That was something we came together on in Deep Medicine,[1] in the wonderful foreword that you wrote. Some time ago, you hatched the idea of the iPatient, the idea that we really weren't cuing into the patient but all too often looking at a scan of the patient or a lab test, or doing card-flip rounds, instead of actually going into the room to the patient's bedside.
How do we get that back on track? Is that something we need to mandate, that there is no such thing as card-flip rounds? Do you think we could ever turn back to the way it used to be, where that was one of the fundamental parts—not only of teaching in an academic center, but a critical part of the human touch, the human factor, where the person, the patient wasn't talked about unless they were present?
Do you have card-flip rounds at Stanford?
Verghese: You know, I'm sure we do. I don't think anyone is willing to acknowledge it, and from time to time, I believe that it has a small place. But I believe that we have to be fair when we talk about this. I don't think there's a single medical student or resident who comes to medicine aspiring to spend 6 of their 8-10 hours at work sitting in front of a computer.
Eric, you and I did this to them. Our generation did this to them in the sense that our generation allowed this to happen. We allowed this creep of technology to happen such that for every hour we spend cumulatively with patients, data show that we're spending 1.5 hours on the computer and another hour of our personal time at night, at the beck and call of technology.
When I meet with the residents and medical students, I am struck that they come to medical school with all the right intentions, and they're actually quite excited by the old art of bedside medicine. They want to spend more time with the patients, and for the most part, given the chance, they would. It's we who stand in their way.
However, the longer this continues, the longer we go without this tradition of seeing the patient together—and I recognize that things are changing, nevertheless—the more abstract it becomes. If we don't see the patient, and we don't see them together with the trainee, the conversation doesn't really begin; you're not talking about a person. My hope is that when we have fewer chains keeping us in the room in front of that screen, we will have more time with that patient. That is a big promise.
I also wanted to mention that we have all of this wonderful coding for diagnoses—codes for whether you hurt your left foot or your right foot falling off the toilet in a trailer—giving exquisite detail in diagnosis and diagnostic codes, and yet we don't have an ontology for who the patient is in front of us. The best we do is their age, maybe their occupation, whether they smoke, how many family members they have. But we can't get a granular picture of this person in front of us: their aspirations, did they drop out of college, did they want to go to college, what were their parents like? We have an ontology fellow working with one of our bioinformatics people, trying to bring as much granularity as possible, so that the 67-year-old man in front of you with heart failure isn't the same as the next 67-year-old man with heart failure. They have rich stories that are completely different, and as you and I know, their stories will completely affect the outcome of the disease, the course of the disease, the approach to therapy.
So even as we talk about humanism and humanity, we can also use the best of technology to get better at understanding the humanism of our patients. I had a chance to speak to computer and bioinformatics people at a small convention and I challenged them. I said, welcome to our messy world where we're called to the emergency department to see a patient; the information is incomplete. EMS has some tablets that were found at the bedside. The ex-wife is on her way in. And meanwhile, labs are being drawn, those results are coming back, and we're making decisions in real time. That's the place that should be the frontier for bioinformatics and computer investigation. Not the neat areas of looking at the genome, which is all wonderful, but help us with the messy stuff. Human beings—we're intrinsically messy. Let's get in there.
Topol: There are so many ways that informatics and analytics can help. As you say, we're messy, we have all sorts of data over long periods of time. If the data were properly packaged or assembled, it would be great input for a deep learning about a person, making it easier for the clinician, doctor, and also the patient in the future. Dealing with the messiness is certainly a challenge.
There's a lot of skepticism about whether AI technology could actually make things better. Better productivity and efficiency can be gained with AI—for example, not having to type on a keyboard in the middle of an encounter, or assembling the data of a patient. These are just some of the many ways we could outsource some of these tasks to machines.
But many people are concerned that, as in the past, any gain in productivity will just be used to squeeze doctors more, to actually see more patients; or if you're a radiologist or pathologist, to read more scans or slides. Whatever you do, to do more. That is a major concern. For decades, that has been the primary objective of the administrators or managers who are financially responsible, as compared with the mission of medicine. What do you think is a workaround for that?
Verghese: It is a very real fear. Actually, we're seeing this already. Our graduating medical students used to talk about the ROAD to success, ROAD being the acronym for their aspirations, standing for radiology, orthopedics, anesthesia, and dermatology. Now, it's become the OAD to success, the R having dropped away. The radiology societies have a great concern. There are fewer trainees, and when they train, many more want to go straight to interventional radiology because they appreciate the interaction with the patient. But if you're just going to be in a room looking at images—even though you and I know that it's precious to walk in and interact with a skilled radiologist and give them the context, draw on their expertise, have them ask us questions—there's a great fear that this will vanish. I'm hoping that in this series we can talk about workflow patterns and how they've changed, and how AI might either improve or exacerbate some of these things.
Topol: I'm looking forward to delving into that because I think that is a big nut to crack. It may take multiple generations to restore medicine to the famous Francis Peabody quote that you turned me on to: "The secret of the care of the patient is in caring for the patient." But that may not happen without considerable activism.
The other thing I want to delve into with you in this series is burnout. That topic is very worrisome. It's partly, as you said, related to the data clerk function, but also to the overwhelming sense of not connecting with patients, of being constrained in providing care and losing our way. It is coupled with a doubling of medical errors; that has been established. It's a vicious cycle: where you have a tendency toward burnout, you make an error and you find out about it, and then the sense of burnout deepens, sometimes into clinical depression.
The question is, how do we get ourselves out of these repeat levels of burnout, and could it get even worse?
Verghese: It seems to go hand in hand with technology. When we founded the Presence Center, one of the reasons we picked that term, "presence," is that we were struck by how often it crept into both the patients' narrative and the doctors' narrative.
Doctors complained about not being allowed to be present, and patients complained that we were hardly present, that they were sent from here to there but were not necessarily seeing the doctor for as long as they wanted to see the doctor. If we don't address this issue of what I call meaning in medicine, we're losing sight of something very important. Most of us didn't go into medicine for the pure science of it, although I believe that many individuals who are true physician scientists do that. But you know, caring for our fellow human beings is an art and a science, and that was the draw.
What has happened is this creep of more keystrokes and more keystrokes and more QA, and more of this and that, and all of a sudden... It happened quite suddenly, because there was a long lag period where people just sucked it up and took on one more thing to fill out, one more drop-down box. And then all of a sudden, we saw this huge surge in people expressing dissatisfaction and people leaving for other careers.
In a way, it's been a blessing because everywhere I look, institutions have woken up to the cost of this. At Stanford, we know that if a physician is unhappy according to a very well-validated metric, they have a 30% or so chance of leaving within the next 2 years. Given how much time and effort it takes to recruit someone like that, it's a huge loss to us financially, let alone having someone leave unhappily. I think Stanford was the first medical center to hire a full-time chief wellness officer, in the form of a distinguished researcher, Tait Shanafelt, who has done many of the wellness studies.
Burnout will be an ongoing problem. But programs that discuss wellness and put it right there in front of us as a topic that matters to us, and talk about the patient's perception of us and their takeaway, all of these things are terribly important. We can't relegate that to the people who are interested in the touchy-feely stuff. It turns out, that's all of us.
Topol: I think it's good for us to end this first podcast with the acknowledgement that technology in some respects, if not many, created the mess. Both of us can agree that becoming data clerks and having keyboards and computer screens in office visits has detracted from the patient-doctor relationship. Now, ironically—counterintuitively and diametrically opposed to what got us into this mess—we're proposing that technology in the form of AI will provide some type of rescue, will restore some humanity to medicine.
That will be the subject of many discussions going forward. Can we get there? Is it possible? Is it crazy? Are we idealistic? We'll ponder that in our future discussions and we'll bring others into the discussion, and we'll certainly welcome comments from people who are listening in the Medscape audience.
So, with that, any closing remarks for our first podcast together, Abraham?
Verghese: This was fun, Eric. We set out the agenda, and I'd love to see us invite the most unusual types to come and share their views, because I think it's going to take more than science and bioinformatics to truly get at this. It will be all about humanism and finding people who can explain that to us, and who can tell us how we can better meet that essential part of what we do: the "art" in the art of medicine.
Topol: That's great. Thanks so much to you, Abraham, for joining us in this first "Medicine and the Machine" podcast. We look forward to a series of these, approximately once a month, and we'll be back with you, our audience, and incorporate your suggestions and comments as we go forward. Thank you very much.