SCIENTIFIC NEWS AND
INNOVATION FROM ÉTS
Assisting Surgeons with Artificial Intelligence - By: Roseline Olory Agomma, Carlos Vázquez, Thierry Cresson, Jacques De Guise

Assisting Surgeons with Artificial Intelligence


This paper is from one of the finalists in the 2017 SARA Abstract Contest. The author was awarded third place for the clarity and quality of her research project presentation. The other texts submitted for the SARA Contest are also available.

Roseline Olory Agomma
Roseline Olory Agomma is a PhD student in the Department of Software and IT Engineering at ÉTS. She is conducting her studies at the Imaging and Orthopedics Research Laboratory (LIO).

Carlos Vázquez
Carlos Vázquez is a professor in the Department of Software and IT Engineering at ÉTS. His research interests include digital image and video processing, stereoscopic and multiview imaging, 3D-TV systems, multiview video coding, computer vision, and GPGPU programming.

Thierry Cresson
Thierry Cresson is a research associate at the ÉTS LIO.

Jacques De Guise
Jacques De Guise is a full professor in the Department of Automated Manufacturing Engineering at ÉTS and an associate professor in the Department of Surgery of the Université de Montréal Faculty of Medicine. He holds the Canada Research Chair on 3D Imaging and Biomedical Engineering and the Marie-Lou and Yves Cotrel Montreal University and ÉTS Research Chair in Orthopaedics.

X-rays for interpretation by a radiologist.

The header image was provided by the author. Substance CC license applies.

SUMMARY

Identifying information on X-rays is essential for establishing a diagnosis and planning a medical procedure. This process, usually performed manually by a radiologist, is repetitive, time-consuming, and can produce highly variable results. The purpose of this work is to develop a fully automatic method based on convolutional neural networks (CNNs) to estimate the anatomical area of thirteen lower limb landmarks on frontal X-rays. To estimate these anatomical areas, we started with an automatic identification of salient points in a database of 180 frontal X-rays. Knowing the relative position of the thirteen landmark points manually labelled by an expert, the proposed approach was to train a CNN on the displacement of each salient point toward each of the thirteen landmarks. Once training is complete, the displacements predicted for each salient point can be combined to estimate the area where each landmark is likely to be found. The mean Euclidean distance between the thirteen predicted points and those identified by an expert is 29 ± 18 mm, which is acceptable for a reliable identification of the anatomical area of each landmark.

Misdiagnoses

Identifying clinical information on X-rays—e.g. the tibia, the center of the femoral head, the femur—is an essential task in medical imaging to establish a diagnosis and plan a medical procedure. This task is often done manually by a radiologist and is therefore user-dependent. Moreover, X-ray-based diagnoses and interpretations depend mainly on the radiologist’s visual perception and ability to identify existing clinical information [7], which contributes to misdiagnoses, given the high level of noise usually observed on X-rays and the limited capacity of the human eye. In areas where multiple bone structures are covered with soft tissue, it is sometimes difficult to visualize the actual clinical information.

In addition, the growing data flow—many X-rays, too few radiologists—causes radiologists visual fatigue: it becomes more difficult to complete the work on time, resulting in longer waiting times for interpreted images [6]. These factors lead to a decrease in the quality of diagnoses, which can have negative consequences for patients. A study [7] showed that 41% to 80% of fractures are misdiagnosed in emergency departments and that orthopaedic injuries alone account for 75% of misdiagnoses. To reduce diagnostic and interpretation errors, as well as waiting times, one solution would be to automate the manual, visual identification of information performed by radiologists—in other words, to provide a support tool that can assist with their diagnoses.

Imperfect Methods

Some work, [8] and [9], focused on providing a solution by developing semi-automatic methods, which still require some manual input. These methods assist radiologists in extracting clinical information and making diagnoses: radiologists only have to roughly locate points or areas of interest, and the methods take care of the rest—e.g., identifying the areas of interest. The problem with this type of method is that the end result usually depends on the radiologist’s ability to locate points or areas of interest [7], hence the need for fully automatic methods. Other work, [2] and [3], aimed to automate the task of locating and identifying areas of interest on X-ray images using methods such as random forests and deformable contours. However, automatic detection of landmarks and characteristic areas for proper diagnoses remains an issue because of the quality of X-ray images—poor contrast between overlapping objects [1] and bone structures covered with soft tissue.

The objective of this work is to develop a fully automatic method, based on convolutional neural networks (CNN), to estimate the anatomical area in which some landmarks of the lower limb can be seen on a frontal X-ray.

Radiologist examining the X-ray of a lower limb.

 

Proposed Method

Thumbnail centered over landmarks of lower limbs

Figure 1 Patch extraction

In order to obtain automatic estimates of the anatomical areas of thirteen landmarks—femoral head, proximal tibia, distal femur, etc.—salient points must first be automatically identified—using the method of Irrera (2015)—in a database of 180 frontal X-rays (see Figure 1). Patches measuring 93 × 93 mm centered over each salient point are then extracted. Knowing the relative position of the thirteen points manually digitized by an expert in relation to the salient points, the proposed approach is to train a CNN on the displacement of each salient point toward each of the thirteen landmarks (see Figure 2).

Convolutional neural network movement learning

Figure 2 Learning phase
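The patch-and-target construction described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function name and the pixel-based patch size are assumptions (the article specifies patches of 93 × 93 mm, so a real pipeline would first convert millimetres to pixels using the image resolution).

```python
import numpy as np

def extract_patches_and_targets(image, salient_points, landmarks, patch_px=93):
    """For each salient point, extract a square patch centred on it and
    compute the displacement (dx, dy) from that point to every landmark.
    These displacements are the CNN's regression targets."""
    half = patch_px // 2
    padded = np.pad(image, half, mode="edge")  # keep border patches full-sized
    patches, targets = [], []
    for (x, y) in salient_points:
        px, py = x + half, y + half  # coordinates shifted by the padding
        patches.append(padded[py - half:py + half + 1,
                              px - half:px + half + 1])
        # one (dx, dy) pair per landmark
        targets.append([(lx - x, ly - y) for (lx, ly) in landmarks])
    return np.asarray(patches), np.asarray(targets, dtype=float)
```

Each patch then becomes one CNN training sample, with the thirteen (dx, dy) displacements as its regression targets.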

Once the learning is complete, it is possible to predict the displacement of each salient point and to combine these predictions—using the DBSCAN algorithm [4]—to estimate the probable area in which each landmark is found on a frontal X-ray (see Figure 3).
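The aggregation step can be sketched with scikit-learn's DBSCAN implementation (the article's reference [4] is the DBSCAN algorithm). The function name and the `eps`/`min_samples` values are illustrative assumptions, not the parameters used in the study: each salient point "votes" for a landmark position (its own position plus the predicted displacement), and the centre of the densest vote cluster is taken as the estimated landmark area.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def aggregate_votes(votes, eps=10.0, min_samples=5):
    """Cluster the landmark positions voted by the salient points and
    return the centre of the largest dense cluster; fall back to the
    plain mean if DBSCAN labels every vote as noise."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(votes)
    valid = labels[labels >= 0]          # drop the noise label (-1)
    if valid.size == 0:
        return votes.mean(axis=0)
    best = np.bincount(valid).argmax()   # index of the largest cluster
    return votes[labels == best].mean(axis=0)
```

Density-based clustering suits this voting scheme because isolated, badly predicted displacements simply fall outside any dense region and are discarded as noise.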

In order to evaluate the predictive ability of the proposed method, we performed cross-validation on the 180 X-rays. The mean Euclidean distance between the thirteen predicted points and those identified by an expert is 29 ± 18 mm, which is acceptable for a reliable identification of the anatomical area of each landmark.

Results of the automatic identification method of areas of interest on X-rays.

Figure 3 Aggregation of predictions
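The reported accuracy (29 ± 18 mm) is the mean and standard deviation of per-landmark Euclidean distances. A short sketch of the metric, with a hypothetical function name (coordinates are assumed to already be expressed in millimetres):

```python
import numpy as np

def landmark_error(pred, truth):
    """Mean and standard deviation of the Euclidean distances between
    predicted and expert-identified landmark positions."""
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(truth, float),
                       axis=-1)
    return d.mean(), d.std()
```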

Conclusion

We proposed an automatic approach to estimate the anatomical areas of thirteen points of the lower limb on frontal X-rays. This solution could be used in the near future to initialize semi-automatic approaches for estimating clinical parameter values or for 3D lower limb reconstruction. This work is currently being extended to other areas of the body and to lateral X-rays. The goal is a tool capable of identifying anatomical areas and landmarks—e.g. the center of the femoral head, the center of the diaphysis—on frontal and lateral X-rays: a complete tool that can be integrated into clinical practice.

 

 
