Multi-objective training of Generative Adversarial Networks with multiple discriminators

    Isabela Albuquerque
    João Monteiro
    Thang Doan
    Breandan Considine

    arXiv: Learning, 2018.

    Keywords:
    Generative Multi-Adversarial Networks, multiple gradient descent, generative modeling, Least-square GAN, multi-objective optimization

    Abstract:

    Recent literature has demonstrated promising results for training Generative Adversarial Networks by employing a set of discriminators, in contrast to the traditional game involving one generator against a single adversary. Such methods perform single-objective optimization on some simple consolidation of the losses, e.g. an arithmetic average. …

    Summary
    • Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) offer a new approach to generative modeling, using game-theoretic training schemes to implicitly learn a given probability density.
    • For the multi-objective training of GANs with multiple discriminators, we study alternatives such as maximizing the hypervolume in the region defined between a fixed, shared upper bound on the losses, which we refer to as the nadir point η∗, and each of the component losses.
    • Experiments performed on MNIST show that HVM presents a useful compromise between computational cost and sample quality when compared to average loss minimization, GMAN’s weighted average, and MGD.
    • The goal of the proposed weighting scheme is to favor the discriminators yielding higher losses to the generator, providing more useful gradients during training (see the first sketch after this list).
    • All of the previously described methods for training GANs with multiple discriminators, i.e. average loss minimization (Neyshabur et al., 2017), GMAN’s weighted average (Durugkar et al., 2016), and HVM, can be defined as MGD-like two-step algorithms consisting of: Step 1 - consolidate all gradients into a single update direction; Step 2 - update the parameters in the direction returned in Step 1 (see the second sketch after this list).
    • The same architecture, set of hyperparameters, and initialization were used for AVG, GMAN, and our proposed method; the only variation was the generator loss.
    • These results include our proposed approach and an implementation of (Miyato et al., 2018), alongside the FID measured using a ResNet classifier trained in advance on the CIFAR-10 dataset.
    • Adding the multiple-discriminator setting along with HV yields a clear improvement for the DCGAN-like generator, improving the evaluated metrics while the generator architecture is kept unchanged.
    • Computational cost: in Table 2 we present a comparison of the minimum FID obtained during training, along with computational cost in terms of time and space, for different GANs with both 1 and 24 discriminators.
    • We repeat the experiments in (Srivastava et al., 2017) to analyze how the number of discriminators affects the sample diversity of the corresponding generator when it is trained using the HV algorithm.
    • In this work we show that employing multiple discriminators in GAN training is a practical approach for directly trading extra capacity, and thereby extra computational cost, for higher quality and diversity of generated samples.
    • We introduced a multi-objective optimization framework for studying multiple-discriminator GANs and showed strong similarities between previous work using this setting and the MGD algorithm.
    • The proposed approach, namely a single-solution variation of hypervolume maximization, was observed to consistently yield higher-quality samples in terms of FID when compared to average loss minimization and GMAN’s aggregation rule.
    • We further observed that a higher number of discriminators increases sample diversity and generator robustness.
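
    First sketch: the hypervolume-maximization weighting mentioned in the list above can be illustrated in a few lines. This is a minimal PyTorch sketch rather than the authors' code: the function name hypervolume_loss, the toy loss values, and the fixed nadir point eta are assumptions made here; the only point taken from the summary is that minimizing -Σ_k log(η − l_k) weights each component loss by 1/(η − l_k), favoring discriminators with higher losses.

```python
import torch

def hypervolume_loss(losses, eta):
    """Single-solution hypervolume-maximization surrogate for the generator.

    losses: 1-D tensor with the generator's loss against each discriminator.
    eta:    nadir point, a shared upper bound strictly greater than every loss.

    Minimizing -sum(log(eta - l_k)) weights the gradient of each component
    loss by 1 / (eta - l_k), so discriminators yielding higher losses to the
    generator contribute more to the update direction.
    """
    return -torch.log(eta - losses).sum()


# Toy usage (assumed values): three per-discriminator losses below the nadir point.
losses = torch.tensor([0.9, 1.4, 2.1], requires_grad=True)
eta = 2.5  # fixed here for brevity; in the full method the nadir point is adapted during training
hypervolume_loss(losses, eta).backward()
print(losses.grad)  # weights 1/(eta - l_k): largest for the loss closest to the nadir point
```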
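
    Second sketch: the two-step view shared by average loss minimization, GMAN's weighted average, and HVM can be written as a single consolidate-then-update routine. This is a hedged sketch under assumptions, not the paper's implementation: the helper names aggregation_weights and generator_step, the softmax temperature, and the way the per-discriminator losses are passed in are placeholders; only the Step 1 / Step 2 structure and the three weighting rules follow the description above.

```python
import torch

def aggregation_weights(losses, rule, eta=None, temperature=1.0):
    """Step-1 weights used to consolidate the per-discriminator generator losses.

    rule="average": uniform weights, i.e. average loss minimization.
    rule="gman":    softmax over the losses, a GMAN-style weighted average.
    rule="hv":      1 / (eta - l_k), the hypervolume-maximization weighting.
    The weights are detached so they act as constants when the consolidated
    loss is differentiated.
    """
    if rule == "average":
        weights = torch.ones_like(losses) / losses.numel()
    elif rule == "gman":
        weights = torch.softmax(temperature * losses, dim=0)
    elif rule == "hv":
        weights = 1.0 / (eta - losses)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return weights.detach()


def generator_step(losses, optimizer, rule="hv", eta=None):
    """Step 1: consolidate all gradients into a single update direction.
    Step 2: update the generator parameters in that direction."""
    weights = aggregation_weights(losses, rule, eta)
    consolidated = (weights * losses).sum()  # single scalar objective
    optimizer.zero_grad()
    consolidated.backward()
    optimizer.step()


# Toy usage (assumed setup): a dummy "generator" parameter and synthetic losses.
theta = torch.zeros(3, requires_grad=True)
optimizer = torch.optim.SGD([theta], lr=0.1)
losses = (theta - torch.tensor([1.0, 2.0, 3.0])) ** 2  # one loss per discriminator
generator_step(losses, optimizer, rule="hv", eta=10.0)
```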
    Funding
    • A ResNet classifier was trained on the 10-class classification task of CIFAR-10 up to approximately 95% test accuracy.