Zero Grads: Learning Local Surrogate Losses for Non-Differentiable Graphics
arXiv (2023)
Abstract
Gradient-based optimization is now ubiquitous across graphics, but
unfortunately cannot be applied to problems with undefined or zero gradients.
To circumvent this issue, the loss function can be manually replaced by a
“surrogate” that has similar minima but is differentiable. Our proposed
framework, ZeroGrads, automates this process by learning a neural approximation
of the objective function, which in turn can be used to differentiate through
arbitrary black-box graphics pipelines. We train the surrogate on an actively
smoothed version of the objective and encourage locality, focusing the
surrogate's capacity on what matters at the current training episode. The
fitting is performed online, alongside the parameter optimization, and
self-supervised, without pre-computed data or pre-trained models. As sampling
the objective is expensive (it requires a full rendering or simulator run), we
devise an efficient sampling scheme that allows for tractable run-times and
competitive performance at little overhead. We demonstrate optimizing diverse
non-convex, non-differentiable black-box problems in graphics, such as
visibility in rendering, discrete parameter spaces in procedural modelling or
optimal control in physics-driven animation. In contrast to other
derivative-free algorithms, our approach scales well to higher dimensions,
which we demonstrate on problems with up to 35k interlinked variables.
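The core loop described in the abstract can be illustrated compactly: sample the black-box objective around the current parameters, fit a small neural surrogate to those samples online, then descend the surrogate's gradient instead of the (undefined or zero) true gradient. The sketch below, in PyTorch, is a minimal illustration under stated assumptions, not the paper's implementation: the toy `black_box_loss`, the MLP architecture, and the hyperparameters (`sigma`, `n_samples`, learning rates) are all hypothetical, and the paper's active smoothing and efficient sampling scheme are simplified here to plain Gaussian perturbations around the current iterate.

```python
import torch

# Hypothetical stand-in for a full rendering or simulator run: a loss whose
# gradient is zero almost everywhere (rounding kills the gradient signal).
def black_box_loss(theta: torch.Tensor) -> torch.Tensor:
    return (theta.round() - 3.0).abs().sum()

dim = 8
theta = torch.zeros(dim, requires_grad=True)

# Small MLP surrogate g(theta) ~ f(theta), trained online and self-supervised.
surrogate = torch.nn.Sequential(
    torch.nn.Linear(dim, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

opt_theta = torch.optim.Adam([theta], lr=1e-2)
opt_surr = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
sigma = 0.5      # local smoothing scale: samples stay near the current theta
n_samples = 16   # objective evaluations per step (each is a full pipeline run)

for step in range(1000):
    # 1) Fit the surrogate locally: perturb theta, query the black box
    #    (no gradients flow through it), and regress the surrogate onto
    #    the sampled objective values.
    with torch.no_grad():
        xs = theta + sigma * torch.randn(n_samples, dim)
        ys = torch.stack([black_box_loss(x) for x in xs]).unsqueeze(-1)
    surr_loss = torch.nn.functional.mse_loss(surrogate(xs), ys)
    opt_surr.zero_grad(); surr_loss.backward(); opt_surr.step()

    # 2) Update theta by descending the surrogate, which is differentiable
    #    even though the underlying objective is not.
    opt_theta.zero_grad()
    surrogate(theta.unsqueeze(0)).mean().backward()
    opt_theta.step()
```

Because the samples are drawn in a neighborhood of the current iterate, the surrogate only needs to be accurate locally, which is what lets a small network track a high-dimensional, non-convex objective as the optimization moves.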