By Paul Weeks
Artificial intelligence (AI), machine learning, deep learning, and data analytics are hallmarks of any discussion surrounding health care right now. Technology, and the potential for its applications to ripple through and alter the landscape, is both tantalizing and overwhelming. Big ideas abound about how technology will shape the evolution of health care, affecting care delivery as well as operations.
Health care organizations and vendors alike are exploring opportunities to deliver AI solutions to patients and health care facilities. Partners Healthcare recently launched a $30 million investment vehicle to link researchers at Partners with vendors who can help deliver AI and other digital solutions. Meanwhile, other institutions and organizations are creating groups to study potential AI solutions that could impact their patient population. Looking at the industry as a whole, organizations with the capital to invest are looking to AI to help deliver better care as well as control costs, two objectives that don’t always align.
Examining the current state of AI in two primary areas, machine learning and natural language processing, reveals that the best lens through which to view it is one of enhancement rather than replacement. In its current state within health care, AI serves as a tool to optimize operations and provide insights into the patient population. It does not serve in a decision-making capacity, and humans remain very much involved in the process.
Before we go any further on this topic, let’s discuss exactly what AI is, and why it matters.
What is AI?
AI describes any process whereby a computer emulates human behavior and/or intelligence. There is no requisite level of sophistication for a process or system to qualify as AI, as long as it falls under that definition. In practice, people often speak about AI and its subsets as if they were interchangeable; that is not accurate, and it can lead to confusion about the AI processes an organization is actually using.
When you hear the term “AI,” you should think of a large umbrella that has a variety of tools and methods beneath it. Some of these tools overlap and are used in combination, while others are a more specialized version of another tool (think of a nesting doll, for instance). To say that an organization uses AI is not that meaningful without mentioning the problem being addressed and the specific AI tools being applied.
Why do we care?
The goal of AI is to apply human and computer intelligence to larger amounts of data with greater speed. Technology, in general, excels at scaling processes to be performed repeatedly with great accuracy. Humans have a great capacity for nuance and the ability to draw on many different sources of information to derive a solution. AI, then, is the marriage of the technological advantage (scalability) with the human elements of intelligence. As the examples below show, AI allows us to find a problem before it becomes a headwind, and to ingest millions of data points to help inform critical decisions. Machine learning is an ideal example of this.
One of the more prominent AI specialties is machine learning (ML), which refers to a computer learning from examples. Instead of writing voluminous amounts of code to perform a task, the computer is given examples in the form of massive datasets from which it learns. Notably, as an ML model consumes more data, it learns more about the specific task it is designed to perform. Within ML (once again, think of a nesting doll) is an area most commonly referred to as deep learning (DL). DL is a subset of ML: all DL is ML, but not all ML is DL. A DL algorithm uses a neural network (inspired by the human brain) that helps the computer analyze and extract the characteristics of the data it is given. A classic example of DL involves driverless cars, where a computer is trained to recognize a variety of images (such as pedestrians, other vehicles, and traffic signs). In DL, the computer learns on its own to isolate the different features of an image (like a car, pedestrian, or stoplight). In traditional ML, a human must manually define the features that distinguish a car from a pedestrian from a stoplight. This seemingly subtle difference is one of the things that makes DL attractive, because it requires less hand-built knowledge about the task.
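The distinction can be sketched in a few lines of Python. In a hypothetical traditional-ML setup, a human first engineers the features — here, an invented aspect ratio and relative area for each detected object — and the model learns only from those numbers; a DL model would instead learn its own features from raw pixels. All data, features, and labels below are invented for illustration.

```python
# Sketch of traditional ML with hand-crafted features (illustrative only).
# A human chose the two features (aspect_ratio, relative_area); the model
# just learns from labeled examples of those numbers. In deep learning,
# the network would discover its own features from raw image pixels.

import math

# Toy training examples: ((aspect_ratio, relative_area), label)
train = [
    ((2.5, 0.30), "car"),
    ((2.2, 0.28), "car"),
    ((0.4, 0.05), "pedestrian"),
    ((0.5, 0.06), "pedestrian"),
    ((0.3, 0.02), "stoplight"),
]

def classify(features):
    """1-nearest-neighbor: predict the label of the closest training example."""
    return min(train, key=lambda ex: math.dist(ex[0], features))[1]

print(classify((2.4, 0.29)))   # close to the "car" examples
print(classify((0.45, 0.055))) # close to the "pedestrian" examples
```

Even this toy model "learns from examples" in the sense the article describes: adding more labeled examples to `train` refines its behavior without changing any code.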
Examples of ML/DL in health care
A field that has continually shown great promise in utilizing AI is medical imaging. Interpreting medical images is a task well-suited to ML, given the relatively consistent input (a medical image) and output (the presence or absence of an abnormality). Additionally, there is a high volume of images available for which the diagnosis is already known, on which an ML algorithm can be trained to identify abnormalities.
For example, researchers at MIT, in collaboration with Massachusetts General Hospital, developed a DL model that can predict breast cancer from imaging studies nearly five years in advance of a traditional diagnosis. The team trained the model on data from over 60,000 patients and was able to significantly outperform other models in terms of accuracy. Effectively, the researchers are moving toward treating the future patient in advance and potentially reducing mortality risk. Those who developed the tool are optimistic about its application to other diseases, which is an exciting development.
Triage, patient monitoring, risk assessment
In addition to radiology, emergency departments (EDs) and intensive care units (ICUs) have started to develop AI tools to help triage patients and to identify those who are on the verge of becoming high-risk. By monitoring vital signs and laboratory reports, these systems are able to provide alerts to clinicians indicating which patients may need their immediate intervention.
For instance, Hospital Corporation of America (HCA), which owns over 185 hospitals in the United States, implemented an ML model to identify patients at risk of developing sepsis. The system monitors vital signs and lab reports and, if it believes a patient is in the beginning phases of sepsis, it alerts the clinician and presents the criteria that produced the sepsis determination. The decision of whether to intervene still lies with the clinician, with the tool serving in an enhancement role.
Additionally, the Duke Institute for Health Innovation has created an AI-based system to detect sepsis within ICUs and EDs. The innovation center plans to implement the system at Duke-affiliated hospitals along with other sites. Like the sepsis-identifying system developed by HCA, Duke's system is not the decision-maker in the care process. Rather, it analyzes current and historical data to determine the likelihood that sepsis is setting in. A risk score and other factors are presented to the human clinicians, who make the decisions surrounding next steps. Presenting the factors that triggered an alert is key to the usefulness of any AI algorithm: clinicians can determine whether the alert is a false positive, or take steps to intervene and prevent a precipitous decline in a patient's condition. This transparency helps to overcome one of the criticisms of AI technology, namely that it can sometimes be difficult to understand how it derived an answer.
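Neither the HCA nor the Duke implementation is public in detail, but the alert-plus-explanation pattern both describe can be sketched in a few lines. The thresholds, weights, and vital signs below are invented for illustration and are not clinical criteria; the point is that the system surfaces both a score and the reasons behind it, leaving the decision to the clinician.

```python
# Hypothetical sketch of an "alert with explanation" pattern.
# All thresholds and weights are illustrative, NOT clinical guidance.

RULES = [
    # (criterion name, predicate over patient data, weight)
    ("elevated heart rate", lambda p: p["heart_rate"] > 90, 1.0),
    ("elevated temperature", lambda p: p["temp_c"] > 38.0, 1.0),
    ("high lactate", lambda p: p["lactate"] > 2.0, 2.0),
    ("low blood pressure", lambda p: p["systolic_bp"] < 100, 2.0),
]
ALERT_THRESHOLD = 3.0

def assess(patient):
    """Return (should_alert, risk_score, triggered_criteria).

    The triggered criteria travel with the alert so the clinician can
    judge whether it is a false positive or warrants intervention.
    """
    triggered = [(name, w) for name, pred, w in RULES if pred(patient)]
    score = sum(w for _, w in triggered)
    return score >= ALERT_THRESHOLD, score, [name for name, _ in triggered]

patient = {"heart_rate": 104, "temp_c": 38.6, "lactate": 2.4, "systolic_bp": 112}
alert, score, why = assess(patient)
print(alert, score, why)
```

A production system would replace the hand-written rules with a trained model, but the interface — score plus contributing factors, human decides — is the enhancement role the article describes.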
The growing market and demand for wearable devices are not limited to the fitness-savvy; they have carried over into the health care sphere. In March 2019, the FDA approved a wearable device for use in the post-acute care setting. The device collects vital signs and transmits them to the cloud, where a provider can view the results in real time. The device also connects to a tablet that uses an automated chatbot to ask the patient questions about his or her health and to provide medication reminders. If a patient's health starts to decline, the device sends automated alerts to the care team to help facilitate an appointment and/or intervention.
Wearable devices can provide additional data for AI models to analyze and provide recommendations to clinicians. Once again, this is an example of AI technology being used to proactively manage patient populations by catching potential issues before they become significant events. The real-time capability of these devices can improve patient compliance (taking medications), reduce hospital admissions, and allow patients to proactively manage their health.
Natural language processing
Moving outside the walls of ML, another area under the AI umbrella is natural language processing (NLP). NLP crosses several different fields of study, including AI and linguistics (study of language).
This tool is incredibly important for health care because of the large blocks of unstructured data that exist within medical records. Many providers do not use the point-and-click features of an electronic medical record and prefer to type or dictate their notes into a free-text area. This data is effectively lost, because it typically is not structured in a way the computer can access. NLP allows the computer to "read" free text and other unstructured data sources, imposing enough structure that the data can be analyzed.
Examples of NLP in health care
Multiple companies have products in the marketplace that can extract text from a doctor's clinical note and recommend the correct coding (CPT and ICD). Ultimately, this could reduce coding errors as well as claims denials related to coding. For example, Nebraska Medicine recently adopted a vendor's NLP solution and was able to improve turnaround and decrease accounts receivable (AR).
This particular area has a wide variety of applications. A recent study published in BMC Medical Informatics and Decision Making outlined how researchers were able to use NLP to identify different types of fractures related to osteoporosis by scanning radiology reports. This method could be used to identify at-risk patients as well as potential participants for research studies. NLP can provide structure and extract information that is not being consistently tracked by current electronic medical records.
Chatbots (or AI assistants) are virtual workers that employ both NLP and ML to interact with patients and customers. Many of us have encountered them when calling a customer service line or logging on to a website. This technology can reduce the amount of time human workers spend on mundane tasks, freeing them to handle more complicated consumer questions. Within the health care space, there are great opportunities for investment in AI technologies that perform the work doctors and administrators are loath to perform.
Navigating health care
Continuing the theme of enhancement, one sometimes-overlooked role in health care is that of the patient navigator. These employees help patients connect with the correct providers as they traverse the health care system. Given the system's complexities, navigating those waters can be tough. To assist its patient navigators, Sarah Cannon instituted an NLP/ML model to sift through clinical reports and identify patients for follow-up by the human navigators. Previously, this process had to be performed manually, with navigators searching through patient records to identify at-risk patients.
This article was originally published on CLAconnect.com. The information contained herein is general in nature and is not intended, and should not be construed, as legal, accounting, investment, or tax advice or opinion provided by CliftonLarsonAllen LLP (CliftonLarsonAllen) to the reader. For more information, visit CLAconnect.com.
For more information about the healthcare industry in North Carolina, please contact Jeremy Hicks at [email protected].