Dynamic Unilateral Dual Learning for Text-to-Image Synthesis

2023 IEEE International Conference on Image Processing (ICIP 2023)

Abstract
Dual learning trains two mutually inverse tasks jointly so that each improves the other's performance. Two training paradigms currently exist in dual learning. The first directly couples two existing models and trains them in a dual manner; however, it cannot reliably guarantee that the selected models improve. The second manually designs the networks of both parties; but because both networks perform poorly in the early stage of training, this easily leads to unsatisfactory results. Moreover, most dual-learning research applies only to conversions between the same data type and cannot handle conversions between different data types. To address these issues, a paradigm called unilateral dual learning (UDL) is proposed and verified in the text-to-image (T2I) synthesis field. In UDL, one party's network is designed manually, while the other party invokes a pre-trained model to guide the training of the manually designed network toward satisfactory convergence. Experimental results on the Oxford-102 Flowers and Caltech-UCSD Birds datasets demonstrate the feasibility of the proposed UDL paradigm for T2I synthesis, with strong qualitative and quantitative performance.
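The paper's full architecture is not given in the abstract, but the core asymmetry of UDL — a trainable forward network supervised by a frozen pre-trained inverse model through a cycle-reconstruction loss — can be sketched in a toy linear setting. Everything below is hypothetical for illustration: `W` stands in for the manually designed text-to-image generator, and the fixed matrix `V` stands in for a pre-trained image-to-text model whose weights are never updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding dimensions (assumed, for illustration only).
TEXT_DIM, IMG_DIM = 8, 16

# "Manually designed" party: a learnable linear T2I generator G(t) = W @ t.
W = rng.normal(scale=0.1, size=(IMG_DIM, TEXT_DIM))

# "Pre-trained" party: a frozen image-to-text model F(x) = V @ x,
# standing in for an off-the-shelf captioner. It is never updated.
V = rng.normal(size=(TEXT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)

def dual_loss(W, t):
    """Reconstruction loss of the dual cycle text -> image -> text."""
    r = V @ (W @ t) - t
    return float(r @ r)

texts = rng.normal(size=(32, TEXT_DIM))  # toy batch of text embeddings
loss_start = float(np.mean([dual_loss(W, t) for t in texts]))

lr = 0.005
for _ in range(200):
    for t in texts:
        r = V @ (W @ t) - t                    # cycle residual F(G(t)) - t
        W -= lr * 2.0 * np.outer(V.T @ r, t)   # gradient step on G only

loss_end = float(np.mean([dual_loss(W, t) for t in texts]))
```

Only the forward generator receives gradients; the frozen inverse model supplies a stable training signal from the start, which is the property UDL uses to avoid the poor early-stage behavior of training both parties from scratch.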
Keywords
Unilateral Dual Learning, Text-to-Image Synthesis, Deep Learning