
Uses of AI in the Healthcare Industry

Various use cases and applications for AI and machine learning in the healthcare industry are being proposed on a daily basis. This fast-changing technology can seem like something out of Minority Report, making it difficult for industry stakeholders to keep up.

In this article, we give a comprehensive overview of how and where AI is already being used in healthcare and detail how physicians, hospitals, and healthcare providers have benefitted.

Doctors, nurses, and other caregivers across various hospital departments currently use AI and machine learning for:

  • Transcribing physician notes
  • Planning for surgery
  • Analyzing medical images to find anomalies and take measurements
  • Managing emergencies and assessing patient condition
  • Helping doctors diagnose cancer and other diseases
  • Monitoring surgery for adherence to planned procedures and best practices

Medical Transcription

Physicians can use AI to transcribe their voice into medical documents, such as outpatient letters, or directly into electronic health records (EHRs). Currently, hospitals spend a significant amount of money on admin staff who transcribe documents from dictated audio, on outsourced medical transcription companies, or, in many cases, on clinicians transcribing their own notes and letters.

With AI-based automatic speech recognition (ASR) and natural language processing (NLP) applications used for voice recognition, physicians are not simply turning their voice directly into text. The latest AI-driven technology can now generate and format documents, learning the physician's style as they use it.
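
At a high level, such a system can be pictured as two stages: an ASR model that converts audio into raw text, and an NLP layer that structures the text into a formatted document. The minimal sketch below is purely illustrative; the function names, sample transcript, and letter template are hypothetical and not part of the T-Pro Speech API.

```python
# Illustrative two-stage dictation pipeline: ASR, then document formatting.
# All names and the sample transcript are hypothetical; this is not the
# T-Pro Speech API.

def transcribe(audio_samples: list) -> str:
    """Stand-in for an ASR model that maps audio to raw text."""
    # A real system would run acoustic and language models here.
    return "patient reviewed today bp 120 over 80 continue current medication"

def format_letter(raw_text: str, physician: str, patient: str) -> str:
    """Stand-in NLP layer that structures raw dictation into a letter."""
    body = raw_text[0].upper() + raw_text[1:] + "."
    return f"Re: {patient}\n\n{body}\n\nYours sincerely,\n{physician}"

print(format_letter(transcribe([]), "Dr. Example", "J. Doe"))
```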

Although AI for medical transcription seems trivial compared to other uses of the technology, hospitals are investing heavily in these products. This is because there is a huge administrative burden, which affects both budgets and physicians' ability and time to care for patients.

T-Pro Speech is a speech recognition technology delivered as part of the T-Pro workflow platform. It has been shown to reduce the amount of time physicians spend on documentation by up to 40% and has reduced transcription costs for the likes of The Galway Clinic, The Mater Hospital, and Beaumont Hospital.

The machine learning model behind T-Pro Speech has been trained on thousands of hours of transcribed speech in order to accurately recognise a physician’s voice. The voices used to train that model include multiple accents, inflections, and pitches so that the software can recognise the voice of any and all doctors. The deep domain models are also tuned to recognise medical terminology.

In the future, the technology will extend further, not only understanding speech and commands but also understanding context and inferring what the user wants without the need for an explicit command.

A doctor might one day be able to say, “0.15 units of insulin per kilogram,” and the NLP system might interpret that the doctor wants it to write a prescription for the patient she's talking with. It might comply because the doctor spoke the command slightly louder than the voice she used with the patient, and because of the context that the doctor is currently in conversation with a patient who has diabetes.

As such, the NLP system might pull up a prescription form that it could then print out for the patient and/or send along to an insurance company. Again, NLP software is unlikely to be able to do this now, but it is a future possibility.
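
One way to picture this kind of context-aware interpretation is a rule layer on top of the transcript that combines the recognized utterance with signals such as relative loudness and the active patient context. The sketch below is entirely hypothetical; the loudness margin, context fields, and logic are invented for illustration only.

```python
# Hypothetical sketch of context-aware command detection. The loudness
# margin, context fields, and logic are invented for illustration only.

def is_command(loudness_db: float, ambient_db: float) -> bool:
    """Treat speech noticeably louder than the conversation as a command."""
    return loudness_db - ambient_db > 6.0  # arbitrary margin

def interpret(utterance: str, loudness_db: float, ambient_db: float,
              patient_context: dict) -> str:
    if (is_command(loudness_db, ambient_db)
            and "insulin" in utterance
            and patient_context.get("condition") == "diabetes"):
        return f"Draft prescription for {patient_context['name']}: {utterance}"
    return "Logged as conversation"

ctx = {"name": "J. Doe", "condition": "diabetes"}
print(interpret("0.15 units of insulin per kilogram", 72.0, 60.0, ctx))
```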

Planning for Surgery

Surgeons could leverage AI and machine learning when planning surgery. These technologies could also help orthopedics departments with medical imaging and the identification of distinct body structures in those images.

In orthopedics, it is imperative to understand the exact shape of the patient's body structures, such as bones, cartilage, and ligaments, in the area where they are suffering from pain or disability. Medical images such as MRIs and X-rays can lack the detail, such as exact measurements and the depth of the imaged structures, required to fully assess the situation.

Machine learning solutions could help with measuring the patient’s bones, ligaments, and cartilage from those images to extract more information from them. For example, an orthopedic specialist would want to know the size of each bone, ligament, and muscle in their patient’s ankle in order to more accurately replace parts of it if need be.

Determining these measurements could reveal acute inflammation or other problems that may be hard to see with typical medical imaging technology.
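
As a simplified sketch of how such measurements might be derived: once a segmentation model has labeled each pixel of a scan by structure, lengths and areas can be read off the mask using the image's known pixel spacing. The mask and spacing below are toy values, not output from any real imaging product.

```python
import numpy as np

# Toy segmentation mask: 0 = background, 1 = bone, 2 = cartilage.
# The mask contents and pixel spacing are invented for illustration.
mask = np.zeros((100, 100), dtype=int)
mask[20:80, 40:50] = 1      # a 60 x 10 px "bone"
mask[15:20, 40:50] = 2      # a thin "cartilage" cap

SPACING_MM = 0.5  # hypothetical isotropic pixel spacing (mm per pixel)

def structure_area_mm2(mask: np.ndarray, label: int) -> float:
    """Area of a labeled structure: pixel count times pixel area."""
    return float((mask == label).sum()) * SPACING_MM ** 2

def structure_length_mm(mask: np.ndarray, label: int) -> float:
    """Longest axis-aligned extent of the structure, in mm."""
    rows, cols = np.nonzero(mask == label)
    return max(rows.ptp(), cols.ptp()) * SPACING_MM

print(f"bone area: {structure_area_mm2(mask, 1):.1f} mm^2")
print(f"bone length: {structure_length_mm(mask, 1):.1f} mm")
```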

In addition, healthcare providers may use AI-based predictive software to estimate the likely success of certain surgical procedures. This could allow healthcare companies to make better recommendations on how to approach surgery.

This type of software typically uses a scoring system to determine the likelihood of success for a given medical procedure, and could estimate the scores of alternative procedures that might serve the patient better.
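
A scoring system of this kind can be imagined as a weighted combination of patient risk factors applied to each candidate procedure, so scores become comparable across options. The weights, risk factors, and baseline scores below are invented and do not reflect any vendor's model.

```python
# Hypothetical success-likelihood scoring for candidate procedures.
# Weights and risk factors are invented; real models are trained on
# historical outcomes data.

RISK_WEIGHTS = {"age_over_65": -10, "smoker": -8, "diabetes": -6}

def procedure_score(base_success: int, patient_risks: list) -> int:
    """Start from a baseline score and subtract weighted risk factors."""
    return base_success + sum(RISK_WEIGHTS.get(r, 0) for r in patient_risks)

patient = ["age_over_65", "diabetes"]
candidates = {"total replacement": 85, "arthroscopic repair": 75}

ranked = sorted(candidates.items(),
                key=lambda kv: procedure_score(kv[1], patient), reverse=True)
for name, base in ranked:
    print(f"{name}: score {procedure_score(base, patient)}")
```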

Healthcare companies seeking predictive analytics solutions can also turn to vendors such as Medicrea for customized tools or implants for the surgeries in question, used in conjunction with those software solutions.

The company offers implants made specifically to fit each patient, in accordance with the surgical plan the client healthcare company created using Medicrea's UNiD platform.

OM1 offers predictive analytics software they call OM1 Medical Burden Index (OMBI), which they claim helps providers predict the outcomes of surgeries. Their website also states the software can create a plan suggestion for those surgeries using predictive analytics. The OMBI calculates an estimate of how much a patient’s ailments and diseases are affecting their daily life.

The OMBI evaluation is a measurement of the combined burden on an individual patient from all of their medical problems, and is scored from 1 to 1000.

The OMBI scoring metric was generated from analyzing over 200 million patient profiles across the United States. OM1’s website states the OMBI score is a strong predictor of future patient resource utilization and mortality.

Below is an image from OM1’s website that shows what the OMBI software would look like on a tablet:

OMBI software, including the OMBI Score

The machine learning model behind OMBI would have needed to be trained on tens of thousands of electronic medical and health records (EMRs/EHRs). Each factor that would debilitate a patient’s normal daily functions, such as being paralyzed or unable to speak, would be given a numerical value, and then run through the OMBI algorithm to provide the user with the OMBI score.
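
OM1 has not published the OMBI algorithm, but the general shape of such an index can be sketched: assign each debilitating condition a numeric weight, combine the weights, and map the result onto the 1 to 1000 scale. Every weight and formula in the sketch below is an assumption for illustration.

```python
# Hypothetical medical-burden index in the spirit described above.
# Condition weights and the scaling are invented; OM1's actual OMBI
# algorithm is proprietary and not shown here.

CONDITION_WEIGHTS = {
    "paralysis": 0.9,
    "unable_to_speak": 0.7,
    "chronic_pain": 0.5,
    "hypertension": 0.2,
}

def burden_index(conditions: list) -> int:
    """Combine per-condition weights and map to a 1..1000 score."""
    # Diminishing combination so the score saturates rather than
    # exceeding the scale when many conditions are present.
    remaining = 1.0
    for c in conditions:
        remaining *= 1.0 - CONDITION_WEIGHTS.get(c, 0.0)
    return max(1, round((1.0 - remaining) * 1000))

print(burden_index(["chronic_pain", "hypertension"]))   # moderate burden
print(burden_index(["paralysis", "unable_to_speak"]))   # high burden
```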

Analyzing Medical Images

With machine vision software becoming more widespread across the healthcare industry, it follows that medical imaging would be an applicable use case. Machine vision software for medical imaging typically consists of applications that scan medical images to extract more information from them, as we referred to in the previous section regarding orthopedics.

Some software may segment the medical images or 3D models it produces in order to highlight specific parts of the imaged area of the body or foreign or malignant bodies such as tumors or cancer cells.

RSIP Vision, for example, focuses primarily on orthopedic image scanning. Sigtuple, meanwhile, developed software called Shonit, which the company claims can extract information from images of blood. Sigtuple claims its solution can scan for blood cells in an image using microscopes programmed to upload pictures to the cloud.

By storing these medical images in the cloud, Sigtuple helps facilitate telepathology for its client healthcare providers. The company also claims to offer remote access to blood smear information if a provider needs a more detailed understanding of a patient's medical history, for example.

ECG Testing and Emergency Management

In the case of an emergency, accuracy and speed are always the highest priority. This is especially true in the healthcare industry, where medical professionals cannot wait extended periods for computer systems to finish processing, particularly in critical situations such as cardiac arrest or life-threatening injuries.

AI applications for these situations typically provide insights from the patient’s medical data using predictive analytics. A prescriptive analytics solution may also offer a recommendation for how to treat the patient.

For example, a patient’s ECG/EKG results could be run through a machine learning application to help doctors find abnormalities in the heartbeat.

If the software in question is a prescriptive analytics application, it may provide recommendations for how to treat the patient listed in order of priority. It follows that an AI application could extract insights and make recommendations based on other types of medical data, such as blood sugar or alcohol levels, blood pressure, or white blood cell count.
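
As a minimal illustration of automated rhythm screening, consider deriving beat-to-beat intervals from detected R-peaks and flagging intervals that deviate sharply from the average. Real diagnostic models are far more sophisticated; the peak times and tolerance below are invented.

```python
import numpy as np

# Toy rhythm check: flag R-R intervals that deviate strongly from the
# mean. Peak times are in seconds; the tolerance is invented.

def flag_irregular_beats(r_peak_times: list, tolerance: float = 0.2):
    intervals = np.diff(r_peak_times)  # beat-to-beat gaps (s)
    mean_rr = intervals.mean()
    # Flag any interval more than `tolerance` fraction away from the mean.
    return [i for i, rr in enumerate(intervals)
            if abs(rr - mean_rr) / mean_rr > tolerance]

peaks = [0.0, 0.8, 1.6, 2.4, 3.6, 4.4]  # one delayed beat at index 3
print(flag_irregular_beats(peaks))       # -> [3]
```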

Some applications might need to leverage machine vision technology in order to properly assess the patient's situation and recommend the best treatments. This could be imperative for determining blood cell counts and detecting lesions or other marks on the outside of the body.

Medical machine vision applications face the challenge of finding the best hardware to run them on. For some applications, an iPad is chosen for its portability and ease of use. Otherwise, a patient may need to be laid underneath a larger camera during care so that the doctors and the camera have a clear view of the patient.

Providing Information That Could Lead to Diagnosis

AI breakthroughs in assisting medical diagnostics continue to push the technology forward with innovations for each use case.

Perhaps one of the most prominent use cases for AI and ML applications in healthcare is in helping doctors with medical diagnostics. Currently, there are AI vendors offering mobile apps for information on patients’ symptoms, as well as chatbots that can be accessed through apps or company websites that offer a similar function.

Some smartphone apps are also built to process internet of things (IoT) sensor data through machine learning algorithms, which would allow the software to track heart rate. The IoT sensor could be attached to the smartphone itself, a smartwatch, or other wearable devices along the lines of a Fitbit. The results of measuring this data would then be displayed in the patient's smartphone app, which could include recommendations or reminders such as medication times.
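
Concretely, such an app might smooth a stream of heart rate readings from a wearable with a moving average and raise an alert or reminder when the smoothed value drifts out of a configured range. The sketch below is a hypothetical illustration, not any vendor's SDK.

```python
from collections import deque

# Hypothetical wearable heart-rate monitor: smooth incoming readings
# with a moving average and alert outside a configured range.
# Window size and thresholds are invented for illustration.

class HeartRateMonitor:
    def __init__(self, window: int = 5, low: int = 50, high: int = 120):
        self.readings = deque(maxlen=window)
        self.low, self.high = low, high

    def update(self, bpm: float):
        self.readings.append(bpm)
        avg = sum(self.readings) / len(self.readings)
        if avg < self.low:
            return f"Low heart rate: {avg:.0f} bpm"
        if avg > self.high:
            return f"High heart rate: {avg:.0f} bpm"
        return None

monitor = HeartRateMonitor()
for bpm in [72, 75, 130, 140, 145, 150]:
    alert = monitor.update(bpm)
    if alert:
        print(alert)
```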

Patients could use chatbot applications to quickly find information about their symptoms by typing them into the chat window. A machine learning model trained to recognize symptoms in typed text would then be able to find information on illnesses that correlate with the input symptoms.
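
Underneath, a simple version of this lookup could be a matching step between the symptoms mentioned in the user's message and a condition-to-symptom knowledge base, ranking conditions by overlap. The tiny knowledge base below is invented; a production system would use a trained NLP model and a clinical vocabulary.

```python
# Toy symptom matcher: rank conditions by overlap with the user's input.
# The mini knowledge base is invented for illustration.

KNOWLEDGE_BASE = {
    "common cold": {"cough", "sneezing", "sore throat", "runny nose"},
    "influenza":   {"fever", "cough", "fatigue", "body aches"},
    "migraine":    {"headache", "nausea", "light sensitivity"},
}

def rank_conditions(message: str):
    words = set(message.lower().replace(",", " ").split())
    scores = []
    for condition, symptoms in KNOWLEDGE_BASE.items():
        # A multi-word symptom matches only if all of its words appear.
        overlap = sum(1 for s in symptoms
                      if all(w in words for w in s.split()))
        if overlap:
            scores.append((condition, overlap))
    return sorted(scores, key=lambda kv: kv[1], reverse=True)

print(rank_conditions("I have a fever, a cough and fatigue"))
# -> [('influenza', 3), ('common cold', 1)]
```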

Monitoring Surgery

Monitoring surgery is a use case for which several machine vision applications may be applicable. As with oncology and radiology, machine vision software could be used to track the progress of surgery.

Some vendors may offer software that recommends the next step of a surgical procedure when it detects that the surgeon has completed the previous one. Others may monitor the surgery for patient vital signs or adherence to the hospital's best practices. Monitored conditions could include blood loss, blood sugar levels, heart rate, and blood oxygen levels.
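
A bare-bones version of the step-tracking idea is a simple state machine that advances to the next recommended step whenever the vision system flags the current one as complete. The procedure steps and detection signal below are invented for illustration.

```python
# Hypothetical step tracker for a monitored procedure: advance to the
# next recommended step once the current one is detected as complete.
# Step names and the detection signal are invented for illustration.

PROCEDURE = ["incision", "exposure", "implant placement", "closure"]

class StepTracker:
    def __init__(self, steps: list):
        self.steps = steps
        self.index = 0

    def on_step_detected_complete(self):
        """Called when the vision system flags the current step as done."""
        self.index += 1
        if self.index < len(self.steps):
            return f"Next recommended step: {self.steps[self.index]}"
        return "Procedure complete"

tracker = StepTracker(PROCEDURE)
print(tracker.on_step_detected_complete())  # Next recommended step: exposure
```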

Gauss offers their iPad software Triton, which they claim can help physicians keep track of patient blood loss using machine vision.

Physicians can purportedly point the iPad’s camera at used surgical sponges so that it can measure the patient’s blood loss and the rate at which they are losing blood. In order to determine these factors accurately, Gauss likely trained the machine learning model behind Triton on millions of images of surgical sponges with various amounts of blood on them. The algorithm could then discern the images that correlate to higher rates of blood loss.
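
At a high level, such a system maps visual features of a sponge image to an estimated blood volume using a trained model. The sketch below stands in for that model with a crude red-pixel heuristic and an invented calibration constant; it is not how Triton actually works.

```python
import numpy as np

# Crude stand-in for vision-based blood-loss estimation: estimate the
# blood-stained fraction of a sponge image from its red channel, then
# convert to milliliters with an invented calibration constant.
# A real system (such as Triton) uses a trained model, not this heuristic.

ML_PER_FULLY_SOAKED_SPONGE = 50.0  # hypothetical calibration

def estimate_blood_ml(image_rgb: np.ndarray) -> float:
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    stained = (r > 120) & (r > g * 1.5) & (r > b * 1.5)  # red-dominant pixels
    return stained.mean() * ML_PER_FULLY_SOAKED_SPONGE

# Synthetic sponge image: mostly white with a red patch.
img = np.full((100, 100, 3), 230, dtype=np.uint8)
img[30:70, 30:70] = (180, 40, 40)  # stained region (~16% of pixels)

print(f"estimated loss: {estimate_blood_ml(img):.1f} ml")
```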

Gauss does not display its marquee clients on its website, but the Triton software has been approved by the FDA. The company raised $51.5 million in venture capital and is backed by Softbank Ventures Korea and Polaris Partners.
