Multimodal Unsupervised Image-to-Image Translation: Supplementary Material

Semantic Scholar (2018)

Abstract
Proof. Let z1 denote the latent code, which is the concatenation of c1 and s1. We denote the encoded latent distribution by pE(z1), defined by z1 = E1(x1) with x1 sampled from the data distribution p(x1). We denote the latent distribution at generation time by p(z1), obtained by sampling s1 ∼ q(s1) and c1 ∼ p(c2). The generated image distribution pG(x1) = p(x2→1) is defined by x1 = G1(z1) with z1 sampled from p(z1). According to the change-of-variables formula for probability density functions:
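The equation that follows is cut off in this excerpt. As a sketch of the step being invoked, the standard change-of-variables formula for densities, assuming G1 is invertible and differentiable, would read:

```latex
% Standard change-of-variables formula (a sketch; the paper's exact
% displayed equation is not reproduced in this excerpt):
p_G(x_1) \;=\; p\!\left(z_1 = G_1^{-1}(x_1)\right)
\left| \det\!\left( \frac{\partial G_1^{-1}}{\partial x_1} \right) \right|
```

Here the density of the generated image x1 is the density of its preimage z1 under G1, weighted by the absolute Jacobian determinant of the inverse mapping.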