DiffAug

Motivation

  • GANs rely heavily on vast quantities of diverse, high-quality training examples
  • Reducing the amount of training data results in drastic degradation in performance

Contribution

  • Proposes DiffAugment: differentiable data augmentation through which gradients backpropagate to update the weights of both D and G
  • Achieves state-of-the-art (SOTA) results

Method

Both G and D are optimized through the augmented samples, so the augmentation must be differentiable, allowing the loss to backpropagate through it all the way to G.
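As a minimal sketch of what "differentiable" means here, the snippet below builds a brightness jitter from pure tensor ops (modeled on the kind of color transform DiffAugment uses; the function name and shapes are illustrative assumptions) and checks that gradients flow through it:

import torch

def rand_brightness(x):
    # Random per-sample brightness shift built from pure tensor ops,
    # so the operation is differentiable w.r.t. the input images
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)

x = torch.randn(4, 3, 32, 32, requires_grad=True)  # stand-in for G's output
loss = rand_brightness(x).sum()
loss.backward()
assert x.grad is not None  # the loss backpropagates through the augmentation to G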

Augment reals only

Only the real images fed to D are augmented. D then never sees clean reals, so G is pushed to match the augmented distribution and the augmentation artifacts leak into the generated images (see the contrast sketch after the next variant).

Augment D only

Both the real and the generated images fed to D are augmented, but the G update still bypasses the augmentation, leaving the optimization of G and D unbalanced (contrasted in the sketch below).
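A hypothetical side-by-side of the two failing variants, using toy stand-ins for G, D, and a single differentiable transform (all names below are illustrative, not the paper's code):

import torch
import torch.nn as nn

G = nn.ConvTranspose2d(8, 3, 4)  # toy generator (illustrative stand-in)
D = nn.Conv2d(3, 1, 4)           # toy discriminator (illustrative stand-in)
aug = lambda x: x + (torch.rand(x.size(0), 1, 1, 1) - 0.5)  # one differentiable transform

reals = torch.randn(4, 3, 8, 8)
fakes = G(torch.randn(4, 8, 5, 5))

# "Augment reals only": D never sees clean reals,
# so G learns to reproduce the augmentation artifacts
real_scores, fake_scores = D(aug(reals)), D(fakes)

# "Augment D only": both of D's inputs are augmented when updating D,
# but the G update elsewhere bypasses aug, unbalancing G and D
real_scores, fake_scores = D(aug(reals)), D(aug(fakes))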

Differentiable Augmentation for GANs

Counterintuitively, both of the preceding variants make the model perform worse.

On top of augmenting D's inputs, DiffAugment also augments the generated images used to update G, applying the same differentiable transformations throughout so that the loss backpropagates through them to G:

from DiffAugment_pytorch import DiffAugment
# from DiffAugment_tf import DiffAugment
policy = 'color,translation,cutout' 
# Welcome to discover more DiffAugment transformations!
...
# Training loop: update D
reals = sample_real_images() # a batch of real images
z = sample_latent_vectors()
fakes = Generator(z) # a batch of fake images
real_scores = Discriminator(DiffAugment(reals, policy=policy))
fake_scores = Discriminator(DiffAugment(fakes, policy=policy))
# Calculating D's loss based on real_scores and fake_scores...
...
# Training loop: update G
z = sample_latent_vectors()
fakes = Generator(z) # a batch of fake images
fake_scores = Discriminator(DiffAugment(fakes, policy=policy))
# Calculating G's loss based on fake_scores...
...
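
Note the design: the same policy is applied to reals and fakes in the D step and to fakes again in the G step, and because every transform in the policy is built from differentiable tensor ops, the gradients of G's loss flow through DiffAugment back to the generator. As a rough illustration (an assumption for exposition, not the library's actual internals), a comma-separated policy string could dispatch to such transforms like this:

import torch

# Hypothetical registry keyed by policy name; the real DiffAugment_pytorch
# module ships 'color', 'translation', and 'cutout' transforms
AUGMENT_FNS = {
    'brightness': lambda x: x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5),
}

def diff_augment(x, policy=''):
    # Apply each differentiable transform named in the policy, in order
    for name in policy.split(','):
        if name:
            x = AUGMENT_FNS[name](x)
    return x

x = diff_augment(torch.randn(4, 3, 32, 32), policy='brightness')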

Compare

Experiment

ImageNet

CIFAR10/100

Model Size

