SCIENTIFIC NEWS AND
INNOVATION FROM ÉTS
Multispectral Image Reconstruction from Color Images – By: Xu Liu, Abdelouahed Gherbi, Mohamed Cheriet

Multispectral Image Reconstruction from Color Images


Xu Liu
Xu Liu is a Ph.D. candidate in the Department of Software and IT Engineering at ÉTS. His research interests are high performance machine learning and image processing with machine learning.

Abdelouahed Gherbi
Abdelouahed Gherbi is a professor in the Department of Software and IT Engineering at ÉTS. His research domains include modeling and analysis of embedded software systems, model-driven software engineering and software systems security.

Mohamed Cheriet
Mohamed Cheriet is a professor in the Department of Systems Engineering at ÉTS and Director of Synchromedia. His research focuses on eco-cloud computing, knowledge acquisition, artificial intelligence systems, and learning algorithms.

Multispectral Image

Imaging Tech Solutions. No known restriction of usage.

SUMMARY

Multispectral images (MSIs) contain rich spectral information, which helps identify object features invisible to the naked eye. However, acquiring MSIs is a time-consuming, complicated, and expensive process. In contrast, RGB images are much easier to obtain with common consumer cameras, but typical RGB photos carry no spectral information. To bridge this gap, we leveraged deep learning to train a model on multispectral datasets. Since the model learns spectral information during training, it can be used to reconstruct MSIs from RGB images, dramatically cutting the cost and time of acquiring MSIs. Keywords: Multispectral Image, RGB Image, Spectral Reconstruction, Deep Learning, Variational Autoencoders (VAEs), Generative Adversarial Network (GAN).

A Severely Underconstrained Problem

Light of different wavelengths propagates, reflects, and refracts differently. When individual wavelength bands are used to record these properties, we obtain multispectral images (MSIs) containing rich spectral information. With this spectral data, MSIs can reveal features that RGB images cannot.

Multispectral images have a wide range of applications, such as agriculture management and non-invasive analysis. In agricultural production, MSIs can help determine whether crops are healthy, whether soil needs fertilizing, or whether land is waterlogged. In non-invasive analysis, MSIs can help identify the chemical composition of pigments in famous paintings, which in turn helps determine their age.

MSIs thus have broad application scopes. However, the tens or hundreds of MSI bands must be processed one by one, which consumes a significant amount of time and storage space and requires large, complicated, and expensive multispectral apparatus. In contrast, three-channel RGB images (RGBs) are obtained more quickly and cheaply with common consumer cameras. Because RGBs contain little spectral information, however, they cannot be applied directly to MSI applications.

For this reason, we believe that reconstructing MSIs from RGBs would be a good solution. However, as shown in Figure 1, the MSI space contains much more information than the RGB space, so there is no direct mapping between an RGB image and an MSI image: one RGB pixel may map to many possible MSI pixels. Reconstructing MSIs from RGBs is therefore a severely underconstrained problem.
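This underconstrained nature can be sketched numerically. The toy example below uses a hypothetical 3×31 sensitivity matrix (not an actual camera model; the 31-band count and matrix values are assumptions for illustration) to show that two very different spectra can project to exactly the same RGB triple:

```python
import numpy as np

# Hypothetical 3x31 sensitivity matrix S: projects a 31-band spectrum
# down to an RGB triple. Values are random, for illustration only.
rng = np.random.default_rng(0)
S = rng.random((3, 31))
s1 = rng.random(31)              # one possible MSI spectrum

# Perturb s1 inside the null space of S: the spectrum changes,
# but its RGB projection does not.
v = rng.random(31)
v_null = v - S.T @ np.linalg.solve(S @ S.T, S @ v)
s2 = s1 + v_null

rgb1, rgb2 = S @ s1, S @ s2
print(np.allclose(rgb1, rgb2))   # True: identical RGB values
print(np.allclose(s1, s2))       # False: clearly different spectra
```

Since the 31-dimensional spectrum is squeezed through only 3 channels, a 28-dimensional family of spectra collapses onto every RGB pixel — the inverse mapping the model must learn is one-to-many.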

Comparison between RGB and MSI images

Figure 1: Potential difference in space of RGB vs. MSI

The Reconstruction Approach

We designed a new neural network, VAE-GAN, which ingeniously merges a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN) to tackle this problem. Figure 2 shows its detailed architecture.

Overview of VAE-GAN

Figure 2: Detailed architecture of VAE-GAN [1]

As Figure 2 shows, VAE-GAN consists of four main parts – encoder, re-parameterization module, decoder, and discriminator. The encoder takes RGB images as input and extracts their core features. These features are then combined with random numbers sampled from a normal distribution to form varied latent vectors. The latent vectors are fed into the decoder and decoded into MSI-like images. These three parts constitute the generator. We train the generator by having the discriminator distinguish “true” from “false” MSIs.
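The generator pipeline can be sketched as follows. This is a minimal numpy toy, not the paper's architecture: the layer shapes (3 → 8 → 31), the linear encoder/decoder weights, and the per-pixel framing are all assumptions for illustration; only the re-parameterization step – mean plus noise-scaled standard deviation – is the standard VAE mechanism described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy dimensions: 3-channel RGB pixel -> 8-dim latent
# -> 31-band MSI pixel.
RGB_DIM, LATENT_DIM, MSI_DIM = 3, 8, 31

# "Encoder": maps an RGB input to the mean and log-variance of the latent.
W_mu = rng.normal(size=(LATENT_DIM, RGB_DIM))
W_logvar = rng.normal(size=(LATENT_DIM, RGB_DIM))
# "Decoder": maps a latent vector to an MSI-like output.
W_dec = rng.normal(size=(MSI_DIM, LATENT_DIM))

def reparameterize(mu, logvar):
    """Combine encoded features with fresh normal noise, so one RGB
    input yields many different latent vectors."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

x = rng.random(RGB_DIM)                  # one RGB pixel
mu, logvar = W_mu @ x, W_logvar @ x      # encode
z1 = reparameterize(mu, logvar)          # two samples from the same input...
z2 = reparameterize(mu, logvar)
msi1, msi2 = W_dec @ z1, W_dec @ z2      # ...decode to two MSI candidates

# A discriminator would then score these candidates against ground-truth
# MSIs to train the generator adversarially (omitted here).
```

The point of the sketch is the one-to-many behavior: the same RGB pixel produces distinct MSI candidates on each pass, which is what lets the adversarial training explore the underconstrained solution space.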

The whole training process resembles human growth. The input RGB image is like a pair of parents who give birth to many children, and encoding is like conceiving the babies: they inherit some common characteristics from their parents but also have their own unique traits. Decoding is like growing up – the common characteristics gradually fade while individual ones grow stronger, yet we can still tell the children belong to the same family through the common characteristics that remain.

In this way, one RGB image can be compressed and expanded into countless latent vectors. These latent vectors are then fed into the GAN network as sources to train the desired generator, effectively addressing the underconstrained problem described above.

Comparison between Reconstructed and Original Images

CAVE and ICVL are the two primary datasets used to evaluate reconstruction performance in the multispectral reconstruction field. The CAVE dataset consists of 32 indoor scenes, and the ICVL dataset has 201 mainly outdoor scenes. We separated each dataset into two parts – one for training and the other for testing – and conducted both qualitative and quantitative assessments.

We used five figures to demonstrate qualitative reconstruction performance. Figure 3 gives a brief view of VAE-GAN’s reconstruction performance. We combined eight selected spectral bands into one image and individually presented the ground-truth RGB image, the ground-truth MSI, the reconstructed MSI, and the error map. The error map shows the degree of dissimilarity between the ground-truth and reconstructed images: blue represents positive error, green zero error, and red negative error.
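A signed error map of this kind is straightforward to compute. The sketch below uses tiny hypothetical 2×2 arrays and one simple choice of sign-to-color mapping (blue for positive, green for zero, red for negative, per the convention above); the actual color scale used in the figures may differ:

```python
import numpy as np

gt  = np.array([[0.2, 0.5], [0.8, 0.4]])   # hypothetical ground-truth band
rec = np.array([[0.2, 0.6], [0.7, 0.4]])   # hypothetical reconstruction

err = gt - rec   # > 0: positive error, 0: exact match, < 0: negative error

# Map each pixel's signed error onto an RGB color:
color = np.zeros(err.shape + (3,))
color[..., 2] = np.clip(err, 0, None)        # blue  <- positive error
color[..., 0] = np.clip(-err, 0, None)       # red   <- negative error
color[..., 1] = (err == 0).astype(float)     # green <- zero error
```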

Reconstruction of MSI images

Figure 3: Example of MSI reconstruction

Figure 4 shows the performance of MSI reconstruction in the CAVE dataset.

Reconstruction of MSI images

Figure 4: MSI reconstruction in the CAVE dataset

Figure 5 shows the performance of RGB reconstruction in the CAVE dataset.

RGB images reconstructed from MSI images

Figure 5: RGB reconstruction in the CAVE dataset

Figure 6 shows the performance of MSI reconstruction in the ICVL dataset.

Reconstruction of MSI images

Figure 6: MSI reconstruction in the ICVL dataset

Figure 7 shows the performance of RGB reconstruction in the ICVL dataset.

RGB images reconstructed from MSI images

Figure 7: RGB reconstruction in the ICVL dataset

The above qualitative results show that our model works well whether reconstructing MSIs from RGBs or recovering RGBs from MSIs.

We also conducted several quantitative assessments – Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE) – and obtained state-of-the-art results on the test dataset while using only about one third of the data to train the model.
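Two of these metrics are simple enough to sketch directly. The snippet below assumes images scaled to [0, 1] and toy 4×4 arrays; SSIM involves local windowing and statistics, so in practice it is usually taken from an image-processing library rather than hand-rolled:

```python
import numpy as np

def rmse(gt, rec):
    """Root Mean Square Error: average per-pixel deviation."""
    return np.sqrt(np.mean((gt - rec) ** 2))

def psnr(gt, rec, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB (higher is better)."""
    mse = np.mean((gt - rec) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

gt  = np.full((4, 4), 0.5)   # toy ground-truth band
rec = np.full((4, 4), 0.6)   # toy reconstruction, off by 0.1 everywhere
print(round(rmse(gt, rec), 3))   # 0.1
print(round(psnr(gt, rec), 1))   # 20.0
```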

Conclusion

After introducing some concepts and exciting applications of multispectral images, we explained the difficulties of acquiring MSIs and the challenges of reconstructing MSIs from RGBs. We then presented our approach based on VAE and GAN and showed the reconstruction results for both multispectral and RGB images.

Additional Information

For more information on this research, please refer to the following journal article: Xu Liu, Abdelouahed Gherbi, Zhenzhou Wei, Wubin Li, Mohamed Cheriet. 2021. “Multispectral Image Reconstruction From Color Images Using Enhanced Variational Autoencoder and Generative Adversarial Network”. Journal of Sensors.

Xu Liu

Program: Software Engineering, Information Technology Engineering
Research laboratories: LASI – Computer System Architecture Research Laboratory

Abdelouahed Gherbi

Program: Software Engineering, Information Technology Engineering
Research laboratories: LASI – Computer System Architecture Research Laboratory

Mohamed Cheriet

Program: Automated Manufacturing Engineering
Research chair: Canada Research Chair in Smart Sustainable Eco-Cloud
Research laboratories: SYNCHROMEDIA – Multimedia Communication in Telepresence

