05 Jul 2017 |
Research article |
Intelligent and Autonomous Systems
Image Segmentation Methods to Process Images from Mars
We recently published several articles on the research and technological developments conducted for the colonization of Mars, the Moon, and other planets. These papers addressed, among other things, two imaging systems designed to monitor the health of plants that would grow in autonomous greenhouses to sustain human life:
- Growing Plants on Mars,
- Inside the Arthur Clarke Mars Greenhouse (ACMG),
- M‐PHIS: A New Imaging System to Grow Plants on Mars,
- Experiments with the M-PHIS to Grow Plants on Mars.
These imaging systems generate a large number of images which need to be analyzed quickly and transmitted to Earth. The method described in this article facilitates image analysis and ensures transmission despite the limits of the communication link.
As humans continue to explore beyond the confines of our own planet, we face a number of challenges. One of these challenges is environmental engineering, which includes supporting human life in extraterrestrial habitats (Bamsey et al., 2009b). Plants are central to all life support systems that contain biological components. The Canadian Space Agency is studying the possibility of supporting a human presence on the Moon or Mars by establishing greenhouses (Bamsey et al., 2009a). The advantage of these greenhouses is their ability to provide, in a closed-loop system, the three pillars of life support:
- Producing edible biomass;
- Managing the atmosphere, principally CO2 and O2; and
- Producing drinking water (Bamsey et al., 2009b).
This explains the importance of understanding the metabolic issues that may affect plant growth and development in space. The "Transgenic Arabidopsis Gene Expression System" (TAGES) is a detector of plant health status (Manak et al., 2002). TAGES uses genetically modified thale cress (Arabidopsis thaliana) as a bio-monitor to determine the quality of the environment in which the plants are found (Figure 1).
After a mission of the TAGES imaging system (Paul et al., 2003), hundreds of images need to be analyzed. Indeed, a pair of images is captured every hour for 15 to 30 days, one in the visible spectrum and the other showing fluorescence (green fluorescent protein, GFP). The images have a resolution of 96 dpi and a bit depth of 8 (Figure 2).
Analyzing these images requires considerable time and energy from astronauts or scientists. Transmitting the images to Earth is another challenge: neither the stability nor the speed of the communication link between the station or space vehicle and Earth is sufficient to send large image files. Compression is not a viable workaround either, as it can result in a significant loss of information.
The goal is to simplify the analysis of the images captured by the system by distinguishing the different parts of the plant (leaves, stems, and roots). This information makes it easier to identify the conditions under which each part of the plant is most stressed (diseased). The program can then analyze the images on board and send the data to Earth in the form of numbers, not images, which addresses both the communication problem and the analysis time.
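A minimal sketch of this "numbers, not images" idea, assuming the segmentation produces a labelled image; the class encoding and names below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical encoding: 0 = background, 1 = leaves, 2 = stems, 3 = roots.
CLASS_NAMES = ["background", "leaves", "stems", "roots"]

def summarize_segmentation(labels: np.ndarray) -> dict:
    """Reduce a labelled image to a handful of numbers per class,
    small enough to transmit over a constrained link."""
    total = labels.size
    summary = {}
    for class_id, name in enumerate(CLASS_NAMES):
        count = int(np.sum(labels == class_id))
        summary[name] = {"pixels": count, "fraction": count / total}
    return summary

# Example: a tiny 4x4 labelled image standing in for a segmented frame.
labels = np.array([[0, 0, 1, 1],
                   [0, 2, 1, 1],
                   [3, 2, 2, 1],
                   [3, 3, 0, 0]])
summary = summarize_segmentation(labels)
```

A per-class pixel count and area fraction is a few dozen bytes per frame, compared with a full 8-bit image pair.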
Tested Segmentation Methods
Shades of gray between the different parts of the plant play an important role in segmentation. There is a difference between the grayscale range of each segment of the plant, but some pixels in one part overlap with those of another and share the same intensities. For example, in the GFP image, the gray shade of the leaves generally differs from that of the stems, but not everywhere: some parts of the stems have the same shade of gray as parts of the leaves. Therefore, separating the different parts of the plant with a simple thresholding method is not effective. As Figure 3 clearly shows, segmentation is possible, but at the cost of several errors between the stems and the leaves.
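The overlap problem can be illustrated with a small synthetic sketch. The intensity ranges below are illustrative assumptions, not measured values from the TAGES images: leaves and stems are assumed to overlap around 120-140, so any single threshold misclassifies pixels on both sides.

```python
import numpy as np

# Synthetic grayscale samples; the ranges are assumptions for illustration.
rng = np.random.default_rng(0)
leaves = rng.integers(100, 141, size=500)  # leaf pixel intensities
stems = rng.integers(120, 181, size=500)   # stem pixel intensities

# Any single cut-off splits both classes imperfectly because the
# ranges overlap in [120, 140].
threshold = 130
leaves_misclassified = int(np.sum(leaves >= threshold))
stems_misclassified = int(np.sum(stems < threshold))
error_rate = (leaves_misclassified + stems_misclassified) / 1000
```

Moving the threshold up or down only trades leaf errors for stem errors; no cut-off eliminates both, which is why the article turns to two-dimensional characterization instead.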
The four parts of the plant have different gray intensities; moreover, the same part of the plant has a different intensity in the visible image and in the GFP image. There are therefore two intensity dimensions for each part of the plant. By taking samples for each class of pixels (leaves, stems, roots, and background) in each type of image (visible and GFP), a vector containing the value of the visible pixel and the value of the same pixel in the GFP image is produced.
Figure 4 demonstrates that class separation can be done. Although the separation between the “root” class and the “background” class is difficult to see in the GFP image, it is easier in the visible image. Consequently, Figure 4 leads us to the conclusion that the characterization vectors will enable classifiers to clearly separate the different parts of the plant.
A sample of sixty pixels for each class was recorded, where each pixel is represented by a characterization vector that contains the value of the visible pixel and the value of the GFP pixel:
Characterization vector = [visible pixel, GFP pixel]
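The characterization step can be sketched as follows, assuming `visible` and `gfp` are co-registered grayscale arrays and `coords` lists the sampled pixel positions for one class (for example, 60 leaf pixels); the function and variable names are illustrative, not from the original implementation:

```python
import numpy as np

def characterization_vectors(visible, gfp, coords):
    """Return one [visible pixel, GFP pixel] vector per sampled pixel."""
    rows = [r for r, c in coords]
    cols = [c for r, c in coords]
    # Stack the two intensity readings side by side: shape (n_samples, 2).
    return np.stack([visible[rows, cols], gfp[rows, cols]], axis=1)

# Tiny example images standing in for the co-registered pair.
visible = np.array([[10, 20], [30, 40]], dtype=np.uint8)
gfp = np.array([[1, 2], [3, 4]], dtype=np.uint8)
vectors = characterization_vectors(visible, gfp, [(0, 0), (1, 1)])
```

Each row of `vectors` is one characterization vector in the form given above.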
Classification algorithms assign a label to a data item using a learning base (samples) for which the labels (classes) are known. Three different classification algorithms were implemented, tested, and compared:
- The Bayes quadratic classifier (Bayes-Q) (Cheriet et al., 2007);
- The k-nearest neighbours (k-NN) (Cheriet et al., 2007);
- The support vector machine (SVM) (Adankon, 2005; Cheriet et al., 2007).
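As an illustration of the first of these methods, here is a minimal NumPy sketch of a quadratic Bayes classifier: each class is modelled as a 2-D Gaussian over the [visible, GFP] characterization vectors, and a pixel is assigned to the class with the highest quadratic discriminant. This is a sketch under Gaussian assumptions with synthetic placeholder data, not the authors' implementation:

```python
import numpy as np

def fit_bayes_q(X, y):
    """Estimate mean, covariance, and prior for each class."""
    params = {}
    for label in np.unique(y):
        Xk = X[y == label]
        params[label] = (Xk.mean(axis=0),
                         np.cov(Xk, rowvar=False),
                         len(Xk) / len(X))
    return params

def predict_bayes_q(params, X):
    """Quadratic discriminant per class:
    g_k(x) = -0.5*ln|S_k| - 0.5*(x - m_k)^T S_k^{-1} (x - m_k) + ln P(k)."""
    labels = sorted(params)
    scores = []
    for label in labels:
        mean, cov, prior = params[label]
        inv = np.linalg.inv(cov)
        diff = X - mean
        # Mahalanobis term for every sample at once.
        maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(-0.5 * np.log(np.linalg.det(cov))
                      - 0.5 * maha + np.log(prior))
    return np.array(labels)[np.argmax(scores, axis=0)]

# Synthetic [visible, GFP] samples: 60 per class, as in the article.
# The class means are placeholders, not measured intensities.
rng = np.random.default_rng(1)
means = {0: (40, 10), 1: (120, 60), 2: (180, 150), 3: (80, 30)}
X = np.vstack([rng.normal(m, 8.0, size=(60, 2)) for m in means.values()])
y = np.repeat(list(means), 60)

params = fit_bayes_q(X, y)
accuracy = float(np.mean(predict_bayes_q(params, X) == y))
```

With well-separated class means, the quadratic discriminant classifies nearly all training samples correctly while needing only a few matrix operations per class, which is consistent with the low instruction count reported below for the Bayes quadratic.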
Results of the Three Classification Algorithms
The comparison of the classification methods is based on four criteria:
- The quality of the reconstructed image;
- The error rate, if any;
- The total time;
- The complexity of the algorithm (based on the number of loops), which impacts the energy consumed by the processor.
Figures 4, 5, and 6 show that the best classifier, the one producing the image with the fewest misclassified pixels, is the k-NN with k = 5. The Bayes quadratic is also effective, but shows slightly more misclassified pixels. However, the time and effort needed by the k-NN algorithm are 17 times greater than those required by the Bayes quadratic (581 s / 35 s = 16.6). The worst-performing algorithm is the SVM: not only does it require a great deal of time and energy, it also shows the highest error rate, with many misclassified pixels around the edges of the roots and even in the leaves.
These tests demonstrate that the Bayes quadratic has the best performance/error ratio: although it makes slightly more mistakes than the k-NN, the algorithm with the fewest errors, it requires far less time and energy. Instead of the k-NN's 581 seconds and 405,721,120 instructions, the Bayes quadratic classifier requires only 35 seconds and 2,144,348 instructions. As for the performance/error ratio of the SVM, it lags far behind, requiring 2,385 seconds and 116,467,596 instructions.
The results show that, with the designed characterization method and the use of classification algorithms, the goal of segmenting the plant was achieved with a very low error rate. The errors are mostly concentrated in the “root” class, because the fluorescence signal (GFP image) of the “root” class is weak and very close to the gray shade of the “background” class. The signal-to-noise ratio is therefore low, which causes more errors than in the other classes, where the signal is stronger. To reduce the error rate, one solution to consider is to filter the image at the beginning of the process, before characterization, and to increase the number of samples.
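The proposed pre-filtering step could look like the following sketch, using a simple 3×3 median filter to suppress isolated noise in the low-SNR GFP image before characterization; this is one of many possible filters, chosen here purely for illustration:

```python
import numpy as np

def median_filter_3x3(image: np.ndarray) -> np.ndarray:
    """Replace each pixel with the median of its 3x3 neighbourhood
    (edges padded by replication)."""
    padded = np.pad(image, 1, mode="edge")
    # View every 3x3 window without copying, then take the median.
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    return np.median(windows, axis=(2, 3)).astype(image.dtype)

# A flat region with one salt-noise pixel: the filter removes it.
noisy = np.full((5, 5), 10, dtype=np.uint8)
noisy[2, 2] = 255
clean = median_filter_3x3(noisy)
```

A median filter preserves edges better than a mean filter, which matters when the boundary between roots and background is already faint.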
For more information, see the following research article:
Abboud, Talal; Hedjam, Rachid; Noumeir, Rita; Berinstain, Alain. 2012. “Segmentation d’images de plantes capturées par un système d’imagerie fluorescente.” Presented at the IEEE Conference, Montréal, Canada, April 2012.
Or the following master’s thesis: Abboud, Talal (2013). Systèmes d’imagerie pour l’étude de la santé des plantes et la biologie spatiale. Master’s thesis, Montréal, École de technologie supérieure, 90 p.
Talal Abboud holds a bachelor’s degree and a master’s degree in engineering from the Electrical Engineering Department of the École de technologie supérieure (ÉTS). He is currently an electronics designer in the Centre of Excellence (CoE) department at Kongsberg Automotive.
Rachid Hedjam is a postdoctoral fellow in the Department of Geography at McGill University and a member of the Synchromedia laboratory at ÉTS.
Research laboratory: SYNCHROMEDIA – Multimedia Communication in Telepresence
Rita Noumeir is a professor in the Electrical Engineering Department at ÉTS. Her research includes applying artificial intelligence methods to create decision support systems as well as video and image processing.
Program: Electrical Engineering