Most people associate augmented reality (AR) with smartphone applications and video games, perhaps even Pokémon Go. But AR is gaining traction in a number of industries, including manufacturing and healthcare.
The healthcare industry in particular may be ready to adopt AR into its workflows as a less expensive alternative to AI for some use cases. AI adoption often involves a lengthy, time-intensive integration, a full redesign of workflows, and retraining of staff.
This is not the case with AR; augmented reality solutions behave much more like conventional software: closer to plug and play. That said, some AR applications do incorporate AI, although these remain far less common in the industry.
One such potential AI-enhanced AR use case is Google Glass. Google Glass may have fallen short with consumers, but the healthcare industry has expressed interest in making it work for doctors. Although this use case is still in its nascent stages, healthcare networks hope to outfit their physicians with Google Glass software that would let them see patient notes without taking their eyes off the patient.
This could allow doctors to be more present with patients during visits. Down the line, AI startups may develop natural language processing (NLP) software and facial recognition technology that automatically pulls up a patient's record with a voice command and transcribes notes from the visit into an electronic health record system.
Some startups are looking to integrate NLP technology into their Google Glass-based services, but such use cases remain limited for now.
That said, a handful of vendors offer AR solutions to healthcare networks. In this article, we discuss two in particular: AccuVein and EchoPixel. The former claims its software can reduce the number of times a nurse or doctor leaves a patient's side with a needle to find a vein. The latter claims its software can render 3D images of a patient's organs and let doctors interact with those images. Let's start with AccuVein.
AccuVein offers a namesake vein visualization device it claims helps medical professionals at healthcare organizations more easily locate patients' veins for blood draws. According to the company, this improves the patient experience and spares healthcare providers multiple "sticks."
AccuVein claims that the handheld augmented reality device uses a near-infrared (NIR) 3D imaging system to detect veins. As the device is aimed at a patient's target area, an infrared laser detects hemoglobin in the veins beneath the patient's skin. A photodetector captures the light reflected from the target area, such as the arm or the back of the hand, as a contrast image. As the power of the first laser increases and the light penetrates deeper into the skin, the photodetector captures several vein images at different depths.
Image processing algorithms then layer the vein images according to their depth to create a single composite image. A second laser emits a different wavelength of light and, combined with the scanner, projects the composite image of the veins onto the target area.
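The depth-layering step described above can be illustrated with a minimal sketch. This is not AccuVein's actual algorithm; it simply assumes each scan depth yields a 2D contrast array, and that shallower veins should be drawn over deeper ones in the composite. The function name, threshold, and data layout are all hypothetical.

```python
import numpy as np

def composite_vein_image(depth_slices, depths):
    """Layer per-depth vein contrast images into one composite.

    depth_slices: list of 2D arrays (one per scan depth), values in [0, 1]
    depths: matching list of scan depths in mm; shallower slices win overlaps.
    """
    # Process slices from deepest to shallowest so shallow veins overwrite deep ones.
    order = np.argsort(depths)[::-1]
    composite = np.zeros_like(depth_slices[0], dtype=float)
    for i in order:
        mask = depth_slices[i] > 0.5  # hypothetical threshold: pixel belongs to a vein
        composite[mask] = depth_slices[i][mask]
    return composite

# Toy example: a deep vertical vein and a shallow horizontal vein that cross.
deep = np.array([[0.9, 0.0, 0.0],
                 [0.9, 0.0, 0.0],
                 [0.9, 0.0, 0.0]])
shallow = np.array([[0.0, 0.0, 0.0],
                    [0.6, 0.6, 0.6],
                    [0.0, 0.0, 0.0]])
result = composite_vein_image([deep, shallow], depths=[6.0, 2.0])
# Where the veins overlap, the shallow vein's contrast replaces the deep one's.
```

The deepest-first ordering is one simple way to resolve overlaps; a real system would likely blend slices by measured signal strength rather than overwrite them outright.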
AccuVein claims to have helped four hemophilia treatment centers in France reduce instances of difficult venous access (DVA) among patients. The study involved 450 participants. Venous access was reported as difficult in 165 of them because the patients had poor vein condition, were very young, or were overweight. Within this group, actual difficulty locating veins was encountered in 82.4% of cases, and a quarter of these patients required more than one puncture attempt.
With the device, veins were hard to locate in fewer DVA patients (76.0%) than without it (92.3%). The case study also reported that only 34% of DVA patients reported pain during the puncture when the device was used, compared with 55.4% when it was not.
EchoPixel offers True 3D, medical visualization software it claims helps healthcare professionals visualize and interact with 3D images depicting human tissue and organs in open space, as though they were real objects, using augmented reality and image processing.
EchoPixel says the software is used before surgery to simulate and evaluate surgical treatment options. Using it requires special glasses to view the images and a stylus to manipulate them.
According to the company, the software uses machine learning to record how doctors interacted with a generated 3D image. Other doctors can then interact with the image in the same way previous doctors did. This likely reduces the time it takes doctors across a healthcare network to familiarize themselves with the images the application generates.
EchoPixel claims to have helped doctors at Lucile Packard Children's Hospital Stanford precisely map the native vessels supplying the lungs of patients with pulmonary atresia (PA) and major aortopulmonary collateral arteries (MAPCAs) prior to surgery.
Since every individual has a unique vascular structure, doctors used EchoPixel's True 3D to capture computed tomography angiography (CTA) images of a patient's 774 vessel branches, which were then used to plan and perform the surgery.
The researchers found that, compared with traditional 2D tomography techniques, the images captured by the True 3D Viewer improved the surgeons' ability to see and interpret anatomical details from 81% to 90%.
The case study also noted that the time for physicians to interpret a tomographic readout fell to 13 ± 4 minutes using information from True 3D, compared with 22 ± 7 minutes previously.