Title:
Theoretical Analysis of Image-to-Image Translation with Adversarial Learning
Authors:
Xudong Pan, Mi Zhang, Daizong Ding
Publication:
This paper is included in the Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
Abstract:
Recently, a unified model for image-to-image translation tasks within the adversarial learning framework has aroused widespread research interest among computer vision practitioners. Its reported empirical success, however, lacks a solid theoretical interpretation of its inherent mechanism. In this paper, we reformulate the model from a new geometrical perspective and arrive at a full interpretation of several interesting but previously unexplained empirical phenomena observed in its experiments. Furthermore, by extending the definition of generalization for generative adversarial nets to a broader sense, we derive a condition that controls the generalization capability of the model. Based on this condition, we also propose several practical suggestions on model design and dataset construction as guidance for further empirical research.
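As background for the abstract above, the display below sketches the conditional adversarial objective commonly used for unified image-to-image translation (the standard pix2pix-style formulation); it is an illustrative form only, and the notation, losses, and assumptions analyzed in the paper itself may differ:

\[
\mathcal{L}_{\mathrm{cGAN}}(G,D) \;=\; \mathbb{E}_{x,y}\bigl[\log D(x,y)\bigr] \;+\; \mathbb{E}_{x}\bigl[\log\bigl(1 - D\bigl(x, G(x)\bigr)\bigr)\bigr],
\]
\[
G^{*} \;=\; \arg\min_{G}\,\max_{D}\; \mathcal{L}_{\mathrm{cGAN}}(G,D) \;+\; \lambda\,\mathbb{E}_{x,y}\bigl[\lVert y - G(x)\rVert_{1}\bigr],
\]

where \(x\) is a source-domain image, \(y\) its target-domain counterpart, \(G\) the translator network, \(D\) the conditional discriminator, and \(\lambda\) a weighting hyperparameter balancing the adversarial and reconstruction terms.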