
Farewell, Marcus Welby... but is that so bad?


When I was growing up in the ’60s and ’70s, the individual who epitomized the doctor’s doctor—the kind, compassionate, knowledgeable, and eminently trustworthy medical practitioner—was TV’s Marcus Welby. Well, you can’t talk to medical students today about Marcus Welby because they’ve never heard of him. That’s not their fault, though. Today’s learning environments don’t stress an appreciation of the cultural history that shapes our medical encounters, perhaps because it is technology, rather than the likes of Marcus Welby, that has become the contemporary mover and shaper of healthcare delivery.

Still, I continue to be struck by the way my senior physician-ethicist colleagues wax nostalgic over the Marcus Welby days. They seem to think that the doctor-patient relationship is eternally captured by a Welby-like persona, such that the further it recedes from our cultural memory, the worse off we will be.

If that is a widely shared belief, then I respectfully disagree. I’ve lived long enough to be impressed by the transitory nature of human institutions and cultural practices and the extent to which human societies inevitably experiment and make things up as they go (Rorty 1990). I very much think the “doctor-patient relationship” is one of these creations. Of course, I like it when my physician listens to me attentively, doesn’t interrupt, makes good eye contact (rather than being glued to his or her clinical notes), treats me as though I’m the only patient in the world, asks for my input about treatment planning, and the like. And I suspect that physicians who do these things have more trusting and treatment-adherent patients and are less likely to be sued when things go south (Banja 2005). But I also stubbornly believe that the provider-patient relationship is ultimately a scientifically based one in which we are out to identify and eradicate (or mitigate) some pathogenic bio-entity that is causing distress. So, what I really want is for my physician to be competent and capable in the ways of medical science and to know how to relieve my pain or suffering in the most expeditious and decisive way. Of course, if he or she evinces a Marcus Welby-like manner in the process, all the better. But it’s not my primary concern and, frankly, as the years go by, I think it will become increasingly unnecessary.

It’s important to raise this issue now in light of the technological revolution that many researchers, especially in artificial intelligence (AI), believe is inevitable and fast approaching—a revolution that will make the medical technology of the 20th and early 21st centuries look Cro-Magnon (Harari 2017). Futurists are predicting that in 20 to 40 years, maybe even sooner, people will visit a health professional who will perfunctorily jot down their symptoms and complaints and perhaps perform or order some diagnostic work, such as an imaging study or a blood or tissue culture. But that information will quickly be fed into a (1) “neural network” that will process and analyze it, (2) sync its findings with that individual’s DNA or biomarkers, which are (3) further synced with his or her medical record, all of which is (4) vetted against a massive database, like PubMed (Daher 2016; Harari 2017; Obermeyer and Emanuel 2016). In fact, the process won’t be the linear one I just described; rather, these technologies will recursively bounce the data and their AI-generated impressions back and forth every nano-step of the way. The system’s algorithms (or “neural networks”) will output a diagnosis and treatment recommendation (or request additional information) that will ultimately be reviewed and approved by a health professional. So, perhaps we’ll have a human, Marcus Welby-like presence at the front and back ends of the process. Possibly, though, a computer will function as the front-end professional, one that will perform an excellent history and physical because it knows exactly what questions to ask, where to look, and what to reasonably expect from treatment A, B, or C. The endpoint professional, however, might function as little more than a medical proofreader or quality control inspector who relies on the technology’s algorithms to do the heavy lifting (Chockley and Emanuel 2016; Harari 2017).
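
To make that recursive loop a bit more concrete, here is a minimal, purely illustrative sketch in Python. Every function name and data value in it is an invented stand-in (there is no real analyze_symptoms or vet_against_literature API, and the confidence numbers are toys); it only shows the shape of the back-and-forth described above, with a human clinician still reviewing the final output.

```python
# Hypothetical sketch of the recursive diagnostic pipeline described above.
# All functions and values are invented stand-ins, not any real clinical system.

def analyze_symptoms(findings: dict) -> dict:
    """Stand-in for the neural network's first-pass analysis."""
    return {"candidates": ["pneumonia", "bronchitis"], "confidence": 0.6}

def sync_with_biomarkers(impression: dict, dna_profile: dict) -> dict:
    """Stand-in for cross-referencing the patient's DNA or biomarkers."""
    if dna_profile.get("asthma_risk_variant"):
        impression["flags"] = ["airway-disease history plausible"]
    return impression

def sync_with_record(impression: dict, record: dict) -> dict:
    """Stand-in for merging the electronic medical record."""
    if "asthma" in record.get("history", []):
        impression["confidence"] += 0.1  # toy adjustment, not a real model
    return impression

def vet_against_literature(impression: dict) -> dict:
    """Stand-in for checking impressions against a PubMed-scale database."""
    impression["supported"] = True
    return impression

def diagnose(findings, dna_profile, record, max_rounds=5, threshold=0.9):
    """Bounce the data through each stage until confidence converges."""
    impression = analyze_symptoms(findings)
    for _ in range(max_rounds):
        impression = sync_with_biomarkers(impression, dna_profile)
        impression = sync_with_record(impression, record)
        impression = vet_against_literature(impression)
        if impression["confidence"] >= threshold:
            break
        impression["confidence"] += 0.05  # toy stand-in for model refinement
    return impression  # handed to a human clinician for final review

if __name__ == "__main__":
    print(diagnose({"symptoms": ["cough", "fever"]},
                   {"asthma_risk_variant": True},
                   {"history": ["asthma"]}))
```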

Now, if that seems dismal or scary, consider that some physicians of the early 19th century protested the invention of the stethoscope, not only because they thought it wouldn’t work but because prior to its appearance, the physician had to press his ear to the patient’s chest to listen to heart sounds (Holtz 2006). Eliminating that warm and comforting physical contact was thought by some to degrade the humanity of the clinical encounter by interposing an unfeeling piece of technology between doctors and their patients (Holtz 2006, 100). Just so, the technology that’s right around the corner will likely change the doctor-patient encounter forever. But it may also do so with a capability and quality that, at its maturity, is fantastically beyond today’s best clinical standards.

Predictably, and just as with the stethoscope, the reception from the professional community is likely to be guarded or downright skeptical. For instance, in a December 2017 JAMA Viewpoint article, Abraham Verghese and his co-authors argued that an algorithmic model of medical decision making “might classify patients with a history of asthma who present with pneumonia as having a lower risk of mortality than those with pneumonia alone, not registering the context that this is an artifact of clinicians admitting and treating such patients earlier and more aggressively” (Verghese, Shah, and Harrington 2017). And in the early phase of AI implementation, the system might indeed make those kinds of mistakes. But as the system matures and self-corrects from its errors, without needing to be taught or re-programmed, the AI’s algorithms will interface with the patient’s medical record and “know” the patient’s prior asthmatic history. Armed with that information, the system will alert treating clinicians that this (asthmatic) patient’s pneumonia is extremely concerning and must be treated aggressively (Obermeyer and Emanuel 2016).
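
To see why a naive model would learn that artifact, consider the following toy illustration in Python, with every number invented for the sake of the example: because asthmatic pneumonia patients were historically admitted and treated more aggressively, their recorded mortality is lower, and a model trained only on outcomes reads asthma as protective. A record-aware system of the kind sketched above would treat the same history as a reason to escalate.

```python
# Toy illustration (all numbers invented) of the confounding artifact
# described by Verghese, Shah, and Harrington: aggressive early treatment
# of asthmatic patients makes asthma look "protective" in raw outcome data.

# Invented historical outcomes as (has_asthma, died) pairs.
training_data = ([(True, False)] * 95 + [(True, True)] * 5
                 + [(False, False)] * 85 + [(False, True)] * 15)

def naive_mortality_rate(has_asthma: bool) -> float:
    """What an outcome-only model would estimate for this group."""
    group = [died for asthma, died in training_data if asthma == has_asthma]
    return sum(group) / len(group)

print(naive_mortality_rate(True))   # 0.05 -- asthma appears "protective"
print(naive_mortality_rate(False))  # 0.15

# A record-aware system would instead treat the asthma history itself
# as grounds for escalation rather than relaxation of care.
def triage(patient_record: dict) -> str:
    if "asthma" in patient_record.get("history", []):
        return "escalate: high-risk pneumonia, treat aggressively"
    return "standard pneumonia protocol"

print(triage({"history": ["asthma"]}))
```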

But will we forever lose something valuable, indeed “sacred,” by bidding Marcus Welby farewell? If you believe, as I do, that there is very little that humans have creatively embedded in their social relationships that is eternally valid or correct—other than “try really hard not to harm one another” (Rorty 1990)—we shouldn’t be terribly distressed if a Marcus Welby-like clinical encounter is replaced by something “artificial.” Yes, we will have traded a very commendable human being for a machine. But if the latter helps us to secure our welfare goals much more efficiently, successfully, and, indeed, less expensively, then I think we’ll have considerably gained in the trade-off.

Besides, Marcus Welby wasn’t real. I like to joke that Marcus Welby’s success was due to the fact that he treated only one patient a week. Consider, though, that the technology of the future, in addition to its unprecedented capability, will be available 24/7 and thus allow unprecedented global access to health care. Consider, too, that the algorithms will never tire, and when they are down, employers won’t have to pay them workers’ compensation, sick leave, or maternity benefits (Harari 2017; Urban 2015).

Of course, there is a dark side to the AI technologies of the next 100 years, as countless jobs as we know them will be eliminated or modified, perhaps to the point of unrecognizability. Worse still, some futurists, as well as Hollywood, believe these technologies portend an apocalypse. They expect that AI will take over and destroy us, or that some evil power will harness and unleash AI-generated weaponry that will wipe out humankind (and quickly, too, perhaps in a matter of hours) (Harari 2017; Urban 2015). But it is also quite possible that, with some cosmic luck and very determined ethical preparation, such a day will never occur. For now, though, it’s pretty clear that Western medicine is on the cusp of a dramatic transformation in how things are done. Marcus Welby was inspiring. But there is reason to think that he will be replaced by something considerably better.

 

References

Banja J. 2005. Medical Errors and Medical Narcissism. Sudbury, MA: Jones and Bartlett.

Chockley K and Emanuel E. 2016. The end of radiology? Three threats to the future practice of radiology. Journal of the American College of Radiology 13:1415-1420.

Daher NM. 2016. Deep learning in medical imaging: The not-so-near future. Diagnostic Imaging blog. Available at http://www.diagnosticimaging.com/pacs-and-informatics/deep-learning-medical-imaging-not-so-near-future.

Harari YN. 2017. Homo Deus: A Brief History of Tomorrow. New York, NY: Harper Collins.

Holtz A. 2006. The Medical Science of House MD. New York, NY: Berkley Press.

Obermeyer Z and Emanuel EJ. 2016. Predicting the future—Big data, machine learning, and clinical medicine. The New England Journal of Medicine 375(13):1216-1219.

Rorty R. 1990. The priority of democracy to philosophy. In Objectivity, Relativism and Truth: Philosophical Papers. Cambridge, UK: Cambridge University Press, pp. 175-196.

Urban T. 2015. The AI revolution: The road to superintelligence. Wait But Why blog. Available at https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.

Verghese A, Shah NH, and Harrington RA. 2017. What this computer needs is a physician: Humanism and artificial intelligence. JAMA Viewpoint (online), December 20. Available at https://jamanetwork.com/journals/jama/fullarticle/2666717.
