Artificial Intelligence — Alternative or Adjunct?

Joel Arun Sursas
Jun 13, 2019


Introduction

Artificial Intelligence (A.I.) has seen a resurgence in public and scientific discourse, particularly given its emerging potential to enhance existing medical services and healthcare delivery models globally. Sensationalist articles in peer-reviewed journals and popular news bulletins have painted a promising picture of A.I. — one that incites wonder and awe in the general public and simultaneously stirs unrest among healthcare professionals. We are led to believe that A.I. will replace most, if not all, medical services in the near future, due in part to the invisible hand of the technology world: Moore’s Law.

In 1965, Gordon Moore made a startling observation: the number of transistors that could fit on an integrated circuit was doubling roughly every two years. Based on this law, scientists propose that as the great technological leaps shift from hardware to software (specifically, to data), it won’t be long before exponential learning is within our reach. Indeed, IBM’s CEO, Ginni Rometty, has coined a new law — Watson’s Law, named after IBM’s A.I. system — to mark this transition.

What does this new era of data science, A.I., and machine learning mean for Medicine at large? Most people I speak to envision a future akin to an episode of “Black Mirror” — the British sci-fi anthology that examines the unforeseen sequelae of emerging and novel technologies. The inner geek in me revels in a future dotted with fantastical health technology that can diagnose patients with unsurpassed accuracy, interpret complex CT and MRI scans with greater sensitivity and specificity, and build robust predictive models that leverage existing health data (large swathes of which currently sit locked up in unwieldy electronic medical record systems) to enable intervention before crisis strikes.

A.I. — An Alternative or an Adjunct?

Simply put, healthcare is complex. One of the most significant barriers to implementing novel technologies in the field of healthcare is something entirely unscientific — people. Specifically, I refer to the concept of interoperability. Medicine is as much an art as it is a science, and large tertiary hospitals run on a panoply of business processes that are themselves composed of disparate and intertwined workflows. Implementing technologies in healthcare isn’t as simple as “plug and play” — it involves deep, intimate, and engaging conversations with key opinion leaders, stakeholders, and on-the-ground staff to ensure that workflows aren’t disrupted. Novel drugs and medical devices require oversight and compliance with stringent clinical trial design, robust data analysis, and post-market surveillance before they reach the market. New medical technology also requires an onboarding process within an institution so that it fits into the existing e-infrastructure. However, this challenge of accounting for and remedying interoperability is not insurmountable. Most, if not all, tertiary and specialist healthcare institutions have medical informatics teams dedicated to assessing, modifying, and implementing novel technologies to suit their needs.

Assuming that novel A.I. technologies are implemented in our healthcare institutions, what does this mean for our healthcare workers? Although there is considerable emphasis on the “data” half of the term “data science,” I believe the focus should be on the “science” instead. Data are only ever as good as the research question driving their procurement and analysis. In other words, doctors will retain a vital position as consultants of medicine and use data as tools to ply their trade. Just as a doctor today uses a stethoscope to auscultate a patient’s chest, gathering data on heart sounds to formulate a diagnosis, the doctor of the future will manipulate reams of data to answer specific clinical or research questions and draw robust conclusions at the individual or population level. A.I. can do powerful things with data, from analysis to prediction — but it still requires a trained human in the loop to craft an appropriate, impactful question that needs answering, and to feed it the training material on which it builds its neural network.
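To make that human-in-the-loop point concrete, here is a minimal, purely illustrative sketch in Python. Everything in it (the features, the toy labelling rule, and the example patient) is hypothetical and stands in for the judgement a clinician would bring; the code only shows that a model fits whatever question and labels it is handed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features a clinician deemed relevant to the question:
# heart rate (bpm), systolic blood pressure (mmHg), temperature (deg C).
X = rng.normal(loc=[80.0, 120.0, 37.0], scale=[15.0, 20.0, 0.6], size=(200, 3))

# Labels reflect human judgement of the outcome being asked about --
# a toy rule stands in here for clinician-adjudicated outcomes.
y = ((X[:, 0] > 90) | (X[:, 2] > 37.8)).astype(int)

# The model simply fits the question and labels it is given.
model = LogisticRegression(max_iter=1000).fit(X, y)

# A hypothetical new patient: the output is only meaningful relative to
# the question and labels the clinician framed above.
new_patient = np.array([[110.0, 95.0, 38.2]])
print("Predicted risk:", model.predict_proba(new_patient)[0, 1])
```

Change the clinical question, and the same data would need different labels and yield a different model — which is precisely why the “science” in data science stays with the clinician.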

What about A.I. that has already been trained and is ready to take over core functions of doctors? Two companies are interesting to discuss in this regard. Deep learning algorithms have been utilized in image analysis by Qure to detect common abnormalities in chest x-rays. Qure’s A.I. has purportedly been trained on over 1 million x-rays — far more than a radiology trainee fresh out of medical school would have seen. This technology is something to marvel at, but it will not replace radiologists. Instead, it serves as a useful adjunct for radiologists inundated with routine chest x-rays, freeing up their time for what matters: interpreting more complex CT, MRI, and PET scans, interacting with their patients, and doing other meaningful work within the hospital, such as teaching, conducting research, and championing quality improvement projects. Just as technology in the health space is evolving, the role of doctors must evolve too. As the eminent author H.G. Wells put it, “Adapt or perish.” In short, only healthcare workers who refuse to work with emerging technologies such as A.I. should feel threatened.
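To illustrate what “adjunct, not alternative” might look like in practice, here is a hedged, hypothetical sketch of a worklist triage step, offered as my own illustration rather than a description of Qure’s actual system. It assumes a model that emits an abnormality probability per study; the score never signs off on a film, it only decides which studies the radiologist sees first.

```python
from dataclasses import dataclass

@dataclass
class ChestXray:
    study_id: str
    abnormality_score: float  # assumed model output in [0, 1]

def triage(worklist, review_threshold=0.3):
    """Every study is still read by a radiologist; the score only sets priority."""
    flagged = [s for s in worklist if s.abnormality_score >= review_threshold]
    routine = [s for s in worklist if s.abnormality_score < review_threshold]
    # Highest-scoring studies first, then the remainder in their original order.
    return sorted(flagged, key=lambda s: s.abnormality_score, reverse=True) + routine

worklist = [ChestXray("CXR-001", 0.05), ChestXray("CXR-002", 0.92), ChestXray("CXR-003", 0.41)]
for study in triage(worklist):
    print(study.study_id, study.abnormality_score)
```

The design choice is the point: the radiologist remains the decision-maker, and the algorithm merely reshuffles the queue so that the likely-abnormal films are read sooner.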

Core Essentials in Medicine

Yet another element of healthcare that frequently escapes the hotly debated topic of A.I. is the core expertise that doctors bring with them. Medical students spend three years on average learning and crafting (I refrain from using the word “perfecting,” as these skills are continually built upon throughout a doctor’s career) the nuances of taking a history and performing a physical examination. Formulating a list of potential diagnoses (differentials) in a patient who presents with abdominal pain is contingent upon asking the right questions at the right time and performing the right physical manoeuvres properly. While a wearable gadget could have captured the patient’s heart rate, blood pressure, movements, and temperature over the preceding few days, that does little to detract from the value a curated history and a targeted physical exam bring to the bedside. So far, A.I. has done little to replace these niche skills. Even though companies such as Babylon have developed A.I. chatbots, they remain fraught with limitations and do not quite emulate the doctor-patient relationship built during these personal interactions.

Conclusion

“I am extremely excited about what the future brings us in the health technology space. I currently lead Clinical Affairs at a medical-technology startup that seeks to revolutionize the field of Obstetrics with advanced signal processing and novel metrics of fetal distress. In my discussions with global key opinion leaders in the field of Obstetrics, I always emphasize that our solution aims to enhance, not replace, the current model of healthcare delivery in maternal-fetal medicine,” shared Joel Arun Sursas. Doctors and healthcare workers should not paint a pessimistic vision of A.I.; instead, they should engage in optimistic conversations about A.I. and its potential to shape the future of healthcare — better health outcomes for patients, greater job satisfaction for physicians, and better economic outcomes for national healthcare services.


Written by Joel Arun Sursas


Joel Arun Sursas is a skilled Medical Doctor and Health Informatician motivated to solve administrative problems in healthcare.
