Visual to Sound: Generating Natural Sound for Videos in the Wild

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)

Citations: 208 | Views: 132
Abstract
As two of the five traditional human senses (sight, hearing, taste, smell, and touch), vision and sound are basic sources through which humans understand the world. Often correlated during natural events, these two modalities combine to jointly affect human perception. In this paper, we pose the task of generating sound given visual input. Such capabilities could help enable applications in virtual reality (generating sound for virtual scenes automatically) or provide additional accessibility to images or videos for people with visual impairments. As a first step in this direction, we apply learning-based methods to generate raw waveform samples given input video frames. We evaluate our models on a dataset of videos containing a variety of sounds (such as ambient sounds and sounds from people/animals). Our experiments show that the generated sounds are fairly realistic and have good temporal synchronization with the visual inputs.
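The abstract describes learning to map input video frames directly to raw waveform samples. The following is a minimal, hypothetical PyTorch sketch of that setup: a CNN frame encoder whose features condition an autoregressive waveform decoder over quantized samples. The module names, dimensions, and architecture here are illustrative assumptions, not necessarily the authors' exact model.

```python
# Hypothetical sketch of a frame-conditioned waveform generator.
# A CNN encodes each video frame; an autoregressive GRU predicts
# 256-way quantized waveform samples conditioned on frame features.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Encodes each video frame into a conditioning vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, frames):                        # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = self.conv(frames.flatten(0, 1)).flatten(1)  # (B*T, 64)
        return self.fc(x).view(b, t, -1)                # (B, T, feat_dim)

class WaveformDecoder(nn.Module):
    """Autoregressive decoder over 256-way quantized audio samples."""
    def __init__(self, feat_dim=128, hidden=256, quant=256):
        super().__init__()
        self.embed = nn.Embedding(quant, 64)
        self.rnn = nn.GRU(64 + feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, quant)

    def forward(self, samples, frame_feats):          # samples: (B, N) int64
        # Upsample frame features to one conditioning vector per audio sample.
        cond = nn.functional.interpolate(
            frame_feats.transpose(1, 2), size=samples.size(1),
            mode="linear", align_corners=False).transpose(1, 2)
        x = torch.cat([self.embed(samples), cond], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)                            # (B, N, quant) logits

# Teacher-forced training step on toy tensors.
enc, dec = FrameEncoder(), WaveformDecoder()
frames = torch.randn(2, 8, 3, 64, 64)                # 8 frames per clip
wave = torch.randint(0, 256, (2, 1600))              # quantized waveform
logits = dec(wave[:, :-1], enc(frames))              # predict next sample
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 256), wave[:, 1:].reshape(-1))
loss.backward()
```

Upsampling the per-frame features to the audio sample rate is one simple way to keep the generated waveform temporally aligned with the visual input, which is the synchronization property the abstract highlights.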
Keywords
human perception,virtual reality,virtual scenes,visual impairments,ambient sounds,generated sounds,natural sound,raw waveform samples,video frames