
Deep Medicine: How AI Will Restore Intimacy to the Doctor-Patient Relationship

Eric J. Topol, MD; Siddhartha Mukherjee, MD, PhD; Moderator: Ivan Oransky, MD


March 28, 2019


This transcript has been edited for clarity.

Ivan Oransky: Good evening and welcome. I'm Ivan Oransky, vice president of editorial here at Medscape. It is my great pleasure to welcome you to a very special event, the launch of Deep Medicine,[1] the latest book by Eric Topol, editor-in-chief of Medscape.

In keeping with some of what I took away from this fascinating and important read, I've fed everything I know about our guests, about book launches, and about artificial intelligence (AI) into my deep neural network and asked it to spit out a set of opening remarks. First, it told me to wear a blue shirt; check. Then it said I should be brief; check. But what it said next really made me nervous and a little worried, because it told me to be funny. That was followed by an odd voice, which some of you may find familiar; it said, calmly, "I'm sorry, Ivan. I'm afraid you can't do that."

And just as Dr Topol tossed out his AI-informed diet advice when it suggested, I believe, that he eat cheesecake and bratwurst, I threw out that draft and decided just to introduce our speakers so that we can spend the most time with them rather than with me. (If you missed Dr Topol's article, "The A.I. Diet" in The New York Times, you should definitely read it.[2])

With us is Pulitzer Prize–winning author Dr Siddhartha Mukherjee, whose book The Emperor of All Maladies[3] quickly became a must-read for anyone interested in cancer, as well as a New York Times Best Seller. He is an oncology researcher and physician at Columbia University in New York City and is also the author of The Gene: An Intimate History.[4]

Tonight, Dr Mukherjee will be turning the tables on Dr Topol, who interviewed Dr Mukherjee in 2015 as part of our Medscape One-on-One series of interviews with medicine's leading thinkers. Dr Topol is a practicing cardiologist at Scripps Research Institute in La Jolla, California, and, as I mentioned, editor-in-chief here at Medscape. He is widely considered one of the world's most influential thought leaders in digital innovation and healthcare, and is the author of two previous books.[5,6]

I know we can look forward to a thought-provoking and thoughtful conversation. Please join me in welcoming Dr Mukherjee and Dr Topol.

Defining 'Deep Learning'

Siddhartha Mukherjee, MD, PhD: Eric, I enjoyed your book and your article. We have been talking about some of these issues for a long time—almost a decade, I would say. And now we have your book. Perhaps we should start with some definitions so that people are up to speed. When you talk about AI and deep learning or machine learning, what do you mean? What does it do?

Computers and computational algorithms have been in medicine for 20, if not 30, years. What's different now, and why? Explain how deep learning and AI are fundamentally different. Define it for us.

Eric J. Topol, MD: That's a good question, Sid. What we're talking about is the deep learning story that took root just under a decade ago at the University of Toronto in Canada, with Geoffrey Hinton and his colleagues. They came up with a whole new subtype of AI: the idea of taking data and putting it through layers of artificial neurons. Humans don't hand-pick the features; rather, the layers of artificial neurons determine what it takes to read the features, whether that is speech or an image.


This was applied initially to ImageNet, the fantastic labeled images that Fei-Fei Li and her colleagues at Stanford put together. They carefully annotated millions of images to use in visual object recognition software. Geoffrey was able to show that the AI can read images, interpret them, and classify them as well as human beings can, and then over time, over the past few years, even better than humans can.

That's what set the potential in medicine. You can have pattern recognition with this type of AI specifically, and that could be applied to medical scans, pathology slides, and skin lesions, for example. And the nice part about it is that human bias is not part of the neural network. You can program human bias as part of the input, but if you don't, it really lets the machine do the work. I believe it has potential that transcends these initial areas. Of course, there are other complementary aspects of deep learning and AI tools that are going to be transformative.

The deep learning side is quite new, and I believe that it can connect the data that we are flooded with in medicine and enable us to get back to the patient care that we have lost over time.

Mukherjee: We will come back to the patient care we've lost. It's an important feature of all of this and I want to mark out time to discuss it. But I noticed that you used a very narrow definition of deep learning and of AI. Geoffrey Hinton and I have been in conversation for a long time. I wrote a piece about Geoffrey's work.

Topol: In The New Yorker, 2017. Good article. "A.I. Versus M.D."[7] But it should be "AI plus MD."

Mukherjee: That's right. And we will talk about that in a while. I am obviously interested in the fact that you used pattern recognition—you used ImageNet—and the examples you used were diagnosis of skin lesions, of pathology slides, of radiology scans, and so on. Is it your impression that AI will be limited in this way, or will it expand outward and become broader? Will it ask the deeper, wider questions about medicine that we ask as doctors? In other words, is this a pattern-recognition tool—which is extraordinarily important; let's not be glib or flip about that—but one whose capacity will be limited?

In that New Yorker article, I talk about when a young dermatologist in training finds his or her first melanoma: they go from a case study of zero to a case study of one. But when a neural network that has ingested data on 578,000 melanomas sees another one, it goes from a case study of 578,000 to 578,001. So we understand the power of these data, but do you have a sense of how broad this will be?

Topol: That's a very important point because today, it is relatively narrow and that's partly because the datasets we have to work with in the medical sphere are relatively limited. We don't have these massive annotated sets of data. But it will go much more broadly. I believe that one of the greatest lessons we learned to date is that we can train machines to have vision that far surpasses that of humans.

What was started with some of the things I mentioned has now expanded. For example, from an electrocardiogram, you can tell not only the function of the heart but also the probability of a person developing a particular type of arrhythmia. This is something humans can't see.

Perhaps the greatest example of that is the retina. With this kind of algorithm, you can distinguish a man from a woman just by looking at the retina picture. This is something that no one has yet explained, and it highlights the black-box, explainability problem. If you ask retinal experts, international authorities, to look at retina pictures, they can't tell the difference between a man and a woman; they have a 50/50 chance of getting it right. But you can train an algorithm to be more than 97% or 98% accurate, and no one knows why.

When you say narrow definition, we are only beginning to imagine the things that we can train machines to do. Then when you start to bring in all of the different layers of a human being and the corpus of the medical literature—the sensors, genomics, microbiome, all these different things—then you have a setup that's much broader, both for the individual and the people who are providing care for that person.

23andMe


Mukherjee: One of the things we'll obviously touch on is privacy, which is an incredibly important arena, so let's chalk out some time for that later. My field is cancer, and I was impressed by the data that have come out of the UK Biobank in terms of breast cancer risk prediction. You discuss this in your book.

Just to give the audience a sense of how this world is moving—and this is also true for cardiovascular disease—imagine that you have breast cancer in your family. You know that it has crossed multiple generations. In the past, our capacity to predict whether you yourself, a woman or a man, were at risk for future breast cancer was limited to single highly penetrant genes such as BRCA1 and BRCA2. People would make important decisions in their lives—Angelina Jolie being one of them—based on that genetic diagnosis.
