This transcript has been edited for clarity.
Ivan Oransky: Good evening and welcome. I'm Ivan Oransky, vice president of editorial here at Medscape. It is my great pleasure to welcome you to a very special event, the launch of Deep Medicine,[1] the latest book by Eric Topol, editor-in-chief of Medscape.
In keeping with some of what I took away from this fascinating and important read, I've fed everything I know about our guests, about book launches, and about artificial intelligence (AI) into my deep neural network and asked it to spit out a set of opening remarks. First, it told me to wear a blue shirt; check. Then it said I should be brief; check. But what it said next really made me nervous, because it told me to be funny. That was followed by an odd voice, which some of you may recognize; it said, calmly, "I'm sorry, Ivan. I am afraid you can't do that."
And just as Dr Topol tossed out his AI-informed diet advice when it suggested, I believe, that he eat cheesecake and bratwurst, I threw out that draft and decided just to introduce our speakers so that we can spend the most time with them rather than with me. (If you missed Dr Topol's article, "The A.I. Diet," in The New York Times, you should definitely read it.[2])
With us is Pulitzer Prize–winning author Dr Siddhartha Mukherjee, whose book The Emperor of All Maladies[3] quickly became a must-read for anyone interested in cancer, as well as a New York Times best seller. He is an oncology researcher and physician at Columbia University in New York City and is also the author of The Gene: An Intimate History.[4]
Tonight, Dr Mukherjee will be turning the tables on Dr Topol, who interviewed Dr Mukherjee in 2015 as part of our Medscape One-on-One series of interviews with medicine's leading thinkers. Dr Topol is a practicing cardiologist at Scripps Research Institute in La Jolla, California, and, as I mentioned, editor-in-chief here at Medscape. He is widely considered one of the world's most influential thought leaders in digital innovation and healthcare, and is the author of two previous books.[5,6]
I know we can look forward to a thought-provoking and thoughtful conversation. Please join me in welcoming Dr Mukherjee and Dr Topol.
Defining 'Deep Learning'
Siddhartha Mukherjee, MD, PhD: Eric, I enjoyed your book and your article. We have been talking about some of these issues for a long time—almost a decade, I would say. And now we have your book. Perhaps we should start with some definitions so that people are up to speed. When you talk about AI and deep learning or machine learning, what do you mean? What does it do?
Computers and computational algorithms have been in medicine for 20, if not 30, years. What's different now, and why? Explain how deep learning and AI are fundamentally different. Define it for us.
Eric J. Topol, MD: That's a good question, Sid. What we're talking about is the deep learning story that took root just under a decade ago with Geoffrey Hinton and his colleagues at the University of Toronto in Canada. They came up with a whole new subtype of AI: the idea of taking data and putting it through layers of artificial neurons. Humans don't hand-select the features; rather, the layers of artificial neurons determine what it takes to read the features, whether that is speech or images.
This was applied initially to ImageNet, the fantastic set of labeled images that Fei-Fei Li and her colleagues at Stanford put together. They carefully annotated millions of images for use in visual object recognition software. Geoffrey was able to show that the AI can read images, interpret them, and classify them as well as human beings can, and then, over the past few years, even better than humans can.
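To make the idea of layered feature learning concrete, here is a minimal sketch in Python with PyTorch of a small image classifier of the general kind described here. The architecture and training loop are illustrative placeholders, not the actual ImageNet-winning models; it assumes 224 x 224 RGB inputs and 1000 ImageNet-style labels:

```python
# A minimal sketch of "layers of artificial neurons" learning visual
# features from labeled images. Illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layers: edges, textures
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layers: shapes, parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 1000),                # 1000 ImageNet-style classes
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(images, labels):
    """One pass over a batch of labeled images (N, 3, 224, 224): the
    network's errors are backpropagated so that every layer adjusts the
    features it detects; no human hand-codes those features."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```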
That's what set the potential in medicine. You can have pattern recognition with this type of AI specifically, and that could be applied to medical scans, pathology slides, and skin lesions, for example. And the nice part about it is that human bias is not part of the neural network. You can program human bias as part of the input, but if you don't, it really lets the machine do the work. I believe it has potential that transcends these initial areas. Of course, there are other complementary aspects of deep learning and AI tools that are going to be transformative.
The deep learning side is quite new, and I believe that it can connect the data that we are flooded with in medicine and enable us to get back to the patient care that we have lost over time.
Mukherjee: We will come back to the patient care we've lost. It's an important feature of all of this and I want to mark out time to discuss it. But I noticed that you used a very narrow definition of deep learning and of AI. Geoffrey Hinton and I have been in conversation for a long time. I wrote a piece about Geoffrey's work.
Topol: In The New Yorker, 2017. Good article. "A.I. Versus M.D."[7] But it should be "AI plus MD."
Mukherjee: That's right. And we will talk about that in a while. I am obviously interested in the fact that you used pattern recognition—you used ImageNet—and the examples you used were diagnosis of skin lesions, of pathology, of radiology, and so on. Is it your impression that AI will be limited in this way, or will it expand outward and become wider? Will it ask the deeper, wider questions about medicine that we ask as doctors? In other words, is this a pattern recognition tool—which is extraordinarily important; let's not be glib or flip about that—whose capacity will otherwise be limited?
In that New Yorker article, I talk about how, when a young dermatologist in training finds his or her first melanoma, he or she goes from a case study of zero to a case study of one. But when a neural network that has ingested data on 578,000 melanomas sees another one, it goes from a case study of 578,000 to 578,001. So we understand the power of these data, but do you have a sense of how broad this will be?
Topol: That's a very important point, because today it is relatively narrow, and that's partly because the datasets we have to work with in the medical sphere are relatively limited. We don't have these massive annotated sets of data. But it will go much more broadly. I believe that one of the greatest lessons we've learned to date is that we can train machines to have vision that far surpasses that of humans.
What started with some of the things I mentioned has now expanded. For example, from a cardiogram you can tell not only the function of the heart but also the probability of a person developing this or that type of arrhythmia. That is something humans can't see.
Perhaps the greatest example of that is the retina. With this kind of algorithm, you can distinguish a man from a woman just by looking at the retina picture. This is something that no one has yet explained, and it emphasizes the black-box explainability problem. If you get retinal experts, international authorities, to look at retina pictures, they can't tell the difference between a man and a woman; they have a 50/50 chance of getting it right. But you can train an algorithm to be more than 97% or 98% accurate, and no one knows why.
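As a rough illustration of how such a retina study might be set up, here is a hedged sketch in Python with PyTorch of fine-tuning a pretrained image model to classify sex from fundus photographs. The setup is hypothetical; this is not the published study's code, and the data loader is assumed to exist:

```python
# A sketch of transfer learning for the retina example: start from a
# network pretrained on ImageNet, then retrain it for a two-class task
# (male vs female) on retinal fundus photographs. Hypothetical setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head: 2 classes

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def evaluate(loader):
    """Accuracy on held-out retina images. Human experts score ~50%
    (chance) on this task; trained networks reportedly reach 97%-98%,
    for reasons no one has fully explained."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```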
When you say narrow definition, we are only beginning to imagine the things that we can train machines to do. Then when you start to bring in all of the different layers of a human being and the corpus of the medical literature—the sensors, genomics, microbiome, all these different things—then you have a setup that's much broader, both for the individual and the people who are providing care for that person.
23andMe
Mukherjee: One of the things we'll obviously touch on is privacy, which is an incredibly important arena, so let's chalk out some time for that later. My field is cancer, and I was impressed by the data that have come out of the UK Biobank in terms of breast cancer predictability. You discuss this in your book.
Just to give the audience a sense of how this world is moving—and this is also true for cardiovascular disease—imagine that you have breast cancer in your family. You know that it has crossed multiple generations. In the past, our capacity to predict whether you yourself, a woman or a man, were at risk for future breast cancer was limited to single highly penetrant genes such as BRCA1 and BRCA2. People would make important decisions in their lives—Angelina Jolie being one of them—based on that genetic diagnosis.
If you look at the pie chart of people with familial breast cancer, only about 10%-20% of it was predictable in terms of single, highly penetrant genetic changes. The rest was dark matter, to some extent. In other words, you could come to me and say, "I have breast cancer in my family; can you tell me what my risk is?" And I would say, "If you don't have BRCA1 or BRCA2 mutations, I can't tell you your risk. I can't tell whether you're in the highest quartile or the lowest quartile of risk."
One thing that's happened with the UK Biobank and other biobanks is that if you take genomes and then map fate along those genomes—and one aspect of fate could be breast cancer—you can now begin to make surprisingly deep predictions about who is in the highest quartile of risk, or likely to have breast cancer in the future: for example, a woman who, based on her genetic makeup, could have a ninefold risk of future breast cancer compared with the rest of the population.
This has happened in cardiovascular disease as well, and these algorithms, as you pointed out, are relatively simple; they are additive algorithms. Walk us through a scenario of what would happen once we create these gene fate maps and unleash the tools of AI on them. Walk us through what could be profound, and walk us through the problems.
Topol: Sure. Of course, you want to be careful not to put fate and genomics in the same sentence, perhaps, and you wrote about that eloquently in your book, The Gene: An Intimate History. But I believe the point you are getting at is that the polygenic risk score for breast cancer that is not related to the BRCA and rare mutations—
Mukherjee: Explain the polygenic risk score a little bit more, because it's important.
Topol: You don't even need to do a sequence for that. How many people have accessed 23andMe? A lot of people. From that or Ancestry.com, you can get 1 million letters of a genome through a chip, which can be run for as little as $20. You can find all of these variants—hundreds of changed letters that, collectively, can be the equivalent of having a BRCA1 or BRCA2 mutation.
Approximately 88% of women will never have breast cancer in their lives. Who are the 12% who are really at risk? Today we have this remarkably wasteful way of putting all women through mammography, with a 60% false-positive rate. But we can already see that with the well-characterized rare mutations plus this collection of common variants, together we can predict very close to those 12%—maybe 20%.
You would have a "bye" for all of these women who wouldn't have to be screened, or who would maybe be screened only every 10 years or so. The same goes for all of these conditions. There is an actionable path, and it's not just for breast cancer.
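To make "additive" concrete: a polygenic risk score is essentially a weighted sum of a person's risk-allele counts across many variants, and it is often reported as a percentile against a reference population (as with the Scripps app described next). Here is a minimal sketch in Python; the variant IDs and effect weights are hypothetical placeholders, not any published breast cancer score:

```python
# A minimal sketch of an additive polygenic risk score (PRS).
# The variant IDs and effect weights are hypothetical placeholders.
PRS_WEIGHTS = {
    "rs0000001": 0.12,   # per-allele effect size (e.g., log odds ratio)
    "rs0000002": -0.05,
    "rs0000003": 0.08,
}

def polygenic_risk_score(genotype):
    """genotype maps variant ID -> risk-allele count (0, 1, or 2),
    the kind of data a genotyping chip reports."""
    return sum(w * genotype.get(rsid, 0) for rsid, w in PRS_WEIGHTS.items())

def percentile(score, reference_scores):
    """Rank a score against a reference population, giving the kind of
    'score out of 100' a consumer app might report."""
    below = sum(1 for s in reference_scores if s <= score)
    return 100.0 * below / len(reference_scores)

person = {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}
print(polygenic_risk_score(person))   # 0.12*2 + 0.08*1 = 0.32
```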
Mukherjee: Talk about cardiovascular disease.
Topol: Heart disease is the one that is even more firmly established, so you can find the top 10% of people at risk. Our team at Scripps made an app; it's free, it takes a few minutes, and you can upload your 23andMe data and get your gene rank, a score out of 100. I did that, and my score was 92, which is very high risk. I have no heart disease in my family, so that was very dejecting for me. Because of this, I started taking a statin. It turns out that statins have a much bigger impact at higher risk. A lot of people are taking statins just because they have high LDL cholesterol levels, but the statins are going to have no benefit for them.
Mukherjee: What's interesting about this, of course, is that this score is a risk factor that's independent from LDL cholesterol. It is somewhat orthogonal as a risk factor.
Topol: It is orthogonal and additive. It's better than family history or most of the others, such as smoking, and even just plain LDL cholesterol. The point about this genomic layer of information is that even one layer provides a lot of data. That's just with a chip. Then, when you start taking the genome sequence, which AI, particularly deep learning, is doing so much to unravel, you say, wow, we're already doing this for a risk score for various common conditions. It's zooming forward. Where is this going to be in a year or two? In the book, the chapter on deep discovery is about the science, genomics, and cancer. That's where the biggest advances are happening right now. Someday this will be translated into far better prevention and far better, parsimonious use of resources, so we won't have the current, unwitting scare tactics.
Can We Prevent Ourselves From Becoming a 'Locus of Risk'?
Mukherjee: Talk about the flip side of that scenario. Our understanding of ourselves as human beings changes as we do this, as we unleash algorithms that we've invented ourselves. For me, there is a significant concern that we become, and imagine everyone as, a locus of risk.
An insurance company imagines everyone as a locus of risk, and that fundamentally changes who human beings are and how we conceive of ourselves. Then there is the proximal question of privacy. If you're a locus of risk and you happen to leave that backpack with your 23andMe app on the subway and someone finds it... Maybe that's a far-fetched scenario, but there are much more near-fetched scenarios. Let's say your polygenic risk score for breast cancer happens to be in the highest quartile. Is this information that you'd want to share with your spouse? It changes the structure of human relations if you decide not to act on knowledge that you have. What happens then?
Topol: I believe that this privacy issue is fundamental. Many of you have seen that privacy has been declared dead in other circles, but not yet in health and medicine because it is not acceptable in our world of healthcare.
There are ways to work around this problem. The cyber experts who understand hacking, breaches, data being held hostage at health systems, and all these sorts of things recommend that data be kept in the smallest units possible: An individual should own their data. We have to get there someday.
In Estonia, each person owns their data, and it's not in a mass server, which is the ideal target of cyber thieves. That's one way to preserve privacy, but the ownership is really important, and not just because of privacy. Right now, no one has all of their medical data from the time they were in the womb, when it's really important, through their lives to this moment. No one has it, because you would have to go to a lot of different people and different places for your data. If you're going to use AI to prevent conditions or better manage them, then you must have all of the input. We know that, and no one here has it.
We have to balance maximizing privacy with aggregating all of that data. Today, people are generating their own data through sensors, and if you get your genome sequenced, or even your chip from 23andMe or wherever, you don't want that in a medical record sitting in a hospital or health system, because it can be used against you. The only protections in this country cover health insurance and employment; life insurance and disability insurance are not covered, so you really don't want to put your genetic information into your electronic record. We need a secure place for it.
Right now, many of these datasets are homeless. We need a home, and that should be owned by the individual. We will get there someday. We are way behind Estonia, and now other countries such as Finland, Sweden, and Switzerland are moving in that direction, but we are not.
Mukherjee: What are the obstacles in the United States? From the standpoint of cancer genetics and cancer genomics, it was an embarrassment that the UK Biobank was created months, even years, before we created ours, and ever since, we've been accessing their data. And the United Kingdom's is not the only biobank; there are others. Why are we late to this game?
Topol: We are quite late. I just finished an almost 2-year review for the UK's National Health Service. Not only are they the world leader in genomics, having started their biobank, and then Genomics England, years ago, but they also commissioned the work I did with them to plan a digital and AI strategy for the next 20 years. They have put billions of pounds of resources into this; in the midst of Brexit, they are planning ahead for healthcare. In this country, we haven't put one dollar toward a national strategy. There was an announcement by our president recently about an AI strategy, with zero dollars and no specifics. The UK is zooming forward and we are being left behind.
Mukherjee: Is it a cultural problem? Can you diagnose the problem?
Topol: Right now, it seems that we are just trying to survive day to day in this country. Having spent a lot of time interacting with the British, I've been impressed that they take this planning very seriously and they respect the power of AI. For example, they got rid of keyboards in one of their very large emergency departments. You want to talk to some happy doctors and happy patients? No keyboards, all voice recognition. They showed that it is possible, and that was in the emergency setting, where you have diverse types of patients coming in. They are showing the world that their country will be the first to get rid of keyboards. In the old days, we would talk about getting rid of paper. We never got rid of paper. But keyboards are the enemy of doctors, patients, nurses, and everyone.
Speech recognition, which is deep learning, is so advanced today, and yet we are doing next to nothing with it in the United States. More than 20 US companies are working on this, including many tech companies, but as a country we are not behind the effort the way the United Kingdom and China are. We are slipping on this. It is a great opportunity, and I do not believe that we will have another one like it for many years, even generations, ahead.
The Disembodied Diagnosis
Mukherjee: You talk a lot about patients. I want to talk a bit about doctors as well. I recently wrote a big article[8] on how keyboarding has become one source of physician burnout and why we find ourselves automated, but also dehumanized, as we go through the whole day. I am going to read something from your book.
This is on page 306 of your book[1]:
We also need to rewire the minds of medical students so that they are human-oriented rather than disease-oriented. Hospital sign-out and continuity rounds are all too frequently conducted by "card flip," whereby one training doctor reviews the disease or the patient's status and relevant test results without ever going to the patient's bedside.
Even the diagnosis of disease is disembodied by looking at the scan or lab tests instead of laying hands on the person. Such routines are far quicker and easier than getting to know a human being. Rana Awdish, a physician in Detroit, laid this out well with two groups of medical students. One was called pathology and the other humanistic. The pathology group gets extraordinary training in recognizing diseases by recognizing a skin lesion, listening for murmurs, or knowing the clotting cascade. The humanistic group gets all that training but is also trained to pursue the context of the human being, letting patients talk and learning about what their lives are like, what is important to them, what worries them. Given a patient who starts crying, the pathology group can diagnose the disease but can't respond. The humanistic group, wired for emotion even before the tears begin, hears the tense pitch of vocal cords stretched by false bravery and comforts.
Is this your vision of how removing these burdens from medicine will end up restoring a kind of faith that is beginning to fray in our medical students?
Topol: Thank you for that passage. It captures the whole story, and that is that we've lost our way. In the 40 years since I finished medical school, I have seen a steady erosion, and we have gotten further and further away from the care, the true human bond. We have an opportunity—and I believe the only opportunity, at least in my lifetime—to turn it back and return to where we were. It is the gift of time, but not just that. As you said, it's about the human-centric aspect of medical care. You give doctors and patients time together, but that's not enough; it's got to be cultivated. It is so incredibly important, that restoration of trust, the presence, this precious relationship that used to be intimate. It used to be sacrosanct. What has happened?
Mukherjee: When I have medical students on rounds, on the last day of rounds I give them two options, with a week to prepare. One option is to choose a pathology topic: triple-negative breast cancer, acute leukemia in the elderly, whatever it may be. The other is to take any of the patients the student presented, but present them in full. I want to know where they were born, what their name is. Make a three-dimensional human being out of this patient.
About 5-7 years ago, everyone wanted to present triple-negative breast cancer or some other pathology topic; now everyone wants to present the patient. All of the medical students want to present the so-called three-dimensional patient. I tell them that they need to spend hours by the patient's bedside—this is cancer, of course—and talk to them about their anxiety, their worries, their future, whether they have children, whether they don't have children. Who pays the rent?
I ask [students] the kinds of questions you would ask if you were presenting triple-negative breast cancer: "What is the statistic that shows us the number of African American women who have triple-negative breast cancer?" But also: "How much rent do they pay? What is the cost of living where they live? What is the cost of a typical day's meal?" If the students don't know the answers, they have to go back and ask those questions.
Topol: We need more teachers like you.
Mukherjee: I am the most impatient teacher you could ever imagine.
We have covered vast ground. You have defined for us a new kind of medicine, one that liberates patients and liberates the idea of medicine itself, but that also has a powerful influence on doctors and how we think of ourselves. I will end with one last query: Who are the skeptics? Who doesn't believe you, and why? Why should we not believe you?
Topol: There's a good reason not to believe, and that is because we have a history of the administrators, the managers, the business people squeezing clinicians more and more. If there is more productivity and better efficiency and workflow, the natural default mode is going to be to see more patients, read more scans, read more slides. If we as a medical community—and that means the healthcare providers and the patients—don't stand up to this business force, then we won't see the potential here. That's the biggest challenge.
As for the naysayers, they are almost everyone. People have become pretty cynical. We have watched the electronic health record, the singular worst disaster to happen in medicine in recent decades, give digital a bad name; many people think it is somehow on a continuum with AI, when that couldn't be further from the truth.
Medscape © 2019 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Deep Medicine: How AI Will Restore Intimacy to the Doctor-Patient Relationship - Medscape - Mar 28, 2019.