Abstract:
Generative adversarial networks (GANs) continue to drive advances in numerous artificial intelligence applications. However, in data-scarce scenarios, existing transfer algorithms often fail to adequately capture target-domain characteristics and suffer from insufficient diversity in their generated results. To address these limitations, a training algorithm based on mixing the source and target domains is proposed. Through an implicit-constraint strategy, it preserves prior knowledge from the source domain while giving the model the necessary flexibility. The algorithm incorporates two core techniques: Swap Adaptable Training (SAT) for the discriminator and Expanded Latent Distribution (ELD) for the generator. SAT implicitly mixes feature maps to align the deep discriminator layers with the pre-trained feature space, thereby preserving source-domain discrimination logic while mitigating overfitting. ELD enhances model fitting capacity and alleviates mode collapse by combining the high-diversity distribution of the source domain with distributions mined from the target domain via latent-space interpolation. The generation quality and diversity of the proposed algorithm were evaluated against several existing transfer methods across seven image-transfer tasks. Experimental results show that the ELD+SAT method achieves the best Fréchet Inception Distance (FID) scores on all tasks, significantly outperforming mainstream transfer methods such as MineGAN and FreezeD. This study offers a novel perspective on GAN transfer training under data-limited conditions, without requiring explicit loss-function design.
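The latent-space interpolation underlying ELD can be illustrated with a minimal sketch (function and variable names here are hypothetical, and the paper's actual sampling scheme is more involved): each expanded latent code is a convex combination of a latent drawn from the source prior and a latent mined from the target domain, broadening the distribution the generator sees during transfer.

```python
import numpy as np

def expand_latent_distribution(source_z, target_z, rng):
    """Illustrative sketch of ELD-style latent mixing.

    Each output row is a convex combination
        z = (1 - a) * z_source + a * z_target
    with a per-sample mixing coefficient a ~ U(0, 1), so the
    mixed latents span the segment between the source-prior
    latent and the target-mined latent.
    """
    n = source_z.shape[0]
    alphas = rng.uniform(0.0, 1.0, size=(n, 1))  # one coefficient per sample
    return (1.0 - alphas) * source_z + alphas * target_z

rng = np.random.default_rng(0)
source_z = rng.standard_normal((8, 4))   # latents sampled from the source prior
target_z = rng.standard_normal((8, 4))   # stand-in for latents mined from the target domain
mixed = expand_latent_distribution(source_z, target_z, rng)
```

Because each mixed latent lies on the segment between its two endpoints, this scheme interpolates between the high-diversity source distribution and the target distribution rather than replacing one with the other.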