FSDAN

Generative adversarial network-based full-space domain adaptation for land cover classification from multiple source remote sensing images

Introduction

We propose a novel end-to-end GAN-based full-space domain adaptation learning framework that aligns the distributions of the source and target images in the image space, the feature space, and the output space. It is applicable not only to land cover classification from multi-source remote sensing images, whether overlapped or non-overlapped, but also generalizes to many unsupervised domain adaptation (UDA) segmentation tasks. Our method can additionally generate stylized images that resemble the target-domain images, improving the performance of downstream deep learning tasks.
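The full-space idea can be illustrated with a minimal sketch of the combined adversarial objective: one discriminator loss per aligned space, summed with per-space weights. The weights, the binary cross-entropy formulation, and the function names below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def bce(logits, labels):
    """Numerically stable binary cross-entropy on raw logits."""
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

def full_space_adv_loss(img_logits, feat_logits, out_logits,
                        lam_img=1.0, lam_feat=0.1, lam_out=0.1):
    """Generator-side adversarial loss (hypothetical weighting): the
    target-domain logits from the image-, feature-, and output-space
    discriminators are all pushed toward the 'source' label 1."""
    ones = np.ones_like
    return (lam_img * bce(img_logits, ones(img_logits))
            + lam_feat * bce(feat_logits, ones(feat_logits))
            + lam_out * bce(out_logits, ones(out_logits)))
```

In this sketch the generator minimizes the weighted sum, so alignment pressure is applied in all three spaces at once rather than in the feature space alone.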

Overview

This repository contains a TensorFlow implementation of "Multi-source Remote Sensing Images land cover classification with GAN-based Full-space Domain Adaptation Network"; the original paper is published in TGRS. The proposed network was trained and tested on a single NVIDIA TITAN RTX GPU with 24 GB of memory.

Dependencies

  - Python
  - TensorFlow

Usage

  1. Download the code from the repository.
  2. Prepare the image datasets and convert them to TFRecord files with tfrecord_transfer.py.
  3. Run:

     python main.py
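
The steps above can be sketched as a shell session. tfrecord_transfer.py and main.py are the scripts named in the steps; any dataset paths they expect are repository-specific and not shown here:

```shell
# Step 2: convert the prepared image datasets to TFRecord files
python tfrecord_transfer.py

# Step 3: train and test the network
python main.py
```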