Audio and User Directed Sound Synthesis

ICMC (2003)

Abstract
We present techniques to simplify soundtrack production for video by re-targeting existing soundtracks. The source audio is analyzed and segmented into smaller chunks, or clips, which are then recombined to generate statistically similar variants of the original audio that satisfy user-specified constraints. These constraints take the form of large-scale properties of the sound texture: for instance, the user can specify where preferred clips from the source audio should be favored during synthesis, or define the desired audio properties (e.g. pitch, volume) at each instant of the new soundtrack. Alternatively, audio-driven synthesis is supported by matching selected audio properties of the generated sound texture to those of another soundtrack.
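The abstract describes clip-based resynthesis under user constraints. As a rough illustration (not the paper's actual algorithm), the sketch below segments a source signal into fixed-length clips and greedily selects, at each output position, the clip whose RMS volume best matches a user-specified target envelope; all function names and the example signal are hypothetical.

```python
import math

def segment(audio, clip_len):
    """Split a mono signal (a list of samples) into fixed-length clips."""
    return [audio[i:i + clip_len]
            for i in range(0, len(audio) - clip_len + 1, clip_len)]

def rms(clip):
    """Root-mean-square volume of a clip."""
    return math.sqrt(sum(x * x for x in clip) / len(clip))

def synthesize(clips, target_rms_curve):
    """Greedy resynthesis: for each position in the output, pick the
    source clip whose RMS volume is closest to the user's target.
    (A stand-in for the statistically informed selection in the paper.)"""
    out = []
    for target in target_rms_curve:
        best = min(clips, key=lambda c: abs(rms(c) - target))
        out.extend(best)
    return out

# Hypothetical source: a quiet region followed by a loud region.
source = [0.1 * math.sin(0.1 * i) for i in range(1000)] + \
         [0.9 * math.sin(0.1 * i) for i in range(1000)]
clips = segment(source, 200)

# User constraint: start loud, end quiet.
result = synthesize(clips, [0.6, 0.6, 0.1, 0.1])
```

A real system would match richer per-clip features (pitch, spectral shape) and enforce smooth transitions between consecutive clips; matching the target curve against features extracted from another soundtrack yields the audio-driven variant mentioned in the abstract.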