Multispectral satellite image generation using StyleGAN3
Alibani M. (first author; Methodology); Acito N. (second author; Conceptualization)
2024-01-01
Abstract
Satellite-based remote sensing images are essential for Earth surface analysis, serving diverse purposes in both civilian and military domains. They are used for analysis and decision making and are considered a reliable source of information. Recently, the field of image generation has developed increasingly sophisticated techniques, notably generative adversarial networks (GANs), to create synthetic images from scratch that appear almost real. Generative models have traditionally been applied to RGB or grayscale images, typically to generate fake images of faces, animals, or objects, and there are currently few studies on the application of GANs to multispectral satellite images. This work tests GAN models on the generation of multispectral satellite images; in particular, it explores the ability of the state-of-the-art StyleGAN3 model to produce high-quality synthetic Sentinel-2 images. The work covers the configuration, training process, and evaluation of StyleGAN3 using custom Sentinel-2 datasets. StyleGAN3 results are compared with those provided by the proGAN model, the only GAN model tested so far on multispectral satellite data. Evaluation methods include visual inspection, spectral signature analysis, and a modified Fréchet inception distance (FID) for quantitative assessment. Results show that StyleGAN3 outperforms the proGAN model, producing visually pleasing images. The quantitative comparison shows that StyleGAN3 achieves the best FID scores; in particular, its improvement over proGAN grows as the spatial extent and spectral dimension of the generated images increase.
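For context on the quantitative metric mentioned above: the Fréchet inception distance compares the Gaussian statistics (mean and covariance) of features extracted from real and generated images. The abstract does not detail how the metric is modified for multispectral Sentinel-2 data, so the minimal sketch below shows only the standard Fréchet distance between two feature sets; the choice of feature extractor and any multi-band adaptation are left to the caller and are assumptions here, not the paper's method.

    import numpy as np
    from scipy import linalg

    def frechet_distance(feats_real, feats_fake):
        # feats_*: (n_samples, n_features) arrays of features extracted from
        # real and generated image patches (extractor choice is up to the caller).
        mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
        sigma_r = np.cov(feats_real, rowvar=False)
        sigma_f = np.cov(feats_fake, rowvar=False)
        diff = mu_r - mu_f
        # Matrix square root of the covariance product; small imaginary parts
        # arising from numerical error are discarded.
        covmean, _ = linalg.sqrtm(sigma_r @ sigma_f, disp=False)
        if np.iscomplexobj(covmean):
            covmean = covmean.real
        return float(diff @ diff + np.trace(sigma_r + sigma_f - 2.0 * covmean))

    # Hypothetical usage: `extract` stands for any embedding network applied to
    # (selected bands of) Sentinel-2 patches; it is not defined in the source.
    # fid = frechet_distance(extract(real_patches), extract(generated_patches))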