Learning Program Representations for Food Images and Cooking Recipes

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
In this paper, we are interested in modeling a how-to instructional procedure, such as a cooking recipe, with a meaningful and rich high-level representation. Specifically, we propose to represent cooking recipes and food images as cooking programs. Programs provide a structured representation of the task, capturing cooking semantics and sequential relationships of actions in the form of a graph. This allows them to be easily manipulated by users and executed by agents. To this end, we build a model that is trained to learn a joint embedding between recipes and food images via self-supervision and jointly generate a program from this embedding as a sequence. To validate our idea, we crowdsource programs for cooking recipes and show that: (a) projecting the image-recipe embeddings into programs leads to better cross-modal retrieval results; (b) generating programs from images leads to better recognition results compared to predicting raw cooking instructions; and (c) we can generate food images by manipulating programs via optimizing the latent code of a GAN. Code, data, and models are available online at http://cookingprograms.csail.mit.edu.
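
The abstract describes a model that maps recipes and food images into a joint embedding learned with self-supervision and decodes a cooking program from that embedding as a token sequence. Below is a minimal sketch of that idea in PyTorch; the encoders, program vocabulary, contrastive loss, and all names and dimensions are illustrative assumptions, not the authors' implementation.

# Minimal sketch (PyTorch) of a joint image-recipe embedding with a program decoder.
# All module choices and sizes here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbeddingProgramModel(nn.Module):
    def __init__(self, text_vocab=10000, prog_vocab=200, dim=256):
        super().__init__()
        # Toy image encoder: small CNN pooled to a single embedding vector.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
        # Toy recipe encoder: mean-pooled word embeddings.
        self.word_emb = nn.Embedding(text_vocab, dim)
        # Program decoder: autoregressive GRU conditioned on the joint embedding.
        self.prog_emb = nn.Embedding(prog_vocab, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, prog_vocab)

    def embed_image(self, images):            # (B, 3, H, W) -> (B, dim)
        return F.normalize(self.image_encoder(images), dim=-1)

    def embed_recipe(self, tokens):           # (B, T) -> (B, dim)
        return F.normalize(self.word_emb(tokens).mean(dim=1), dim=-1)

    def contrastive_loss(self, img_z, rec_z, temperature=0.07):
        # InfoNCE-style objective pulling matched image/recipe pairs together.
        logits = img_z @ rec_z.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

    def decode_program(self, z, program_tokens):
        # Teacher-forced decoding of program tokens from the joint embedding.
        h0 = z.unsqueeze(0)                   # (1, B, dim) initial hidden state
        x = self.prog_emb(program_tokens)     # (B, L, dim)
        out, _ = self.decoder(x, h0)
        return self.out(out)                  # (B, L, prog_vocab) logits

# Usage sketch with random data:
model = JointEmbeddingProgramModel()
imgs = torch.randn(4, 3, 64, 64)
recipes = torch.randint(0, 10000, (4, 20))
programs = torch.randint(0, 200, (4, 12))
img_z, rec_z = model.embed_image(imgs), model.embed_recipe(recipes)
loss = model.contrastive_loss(img_z, rec_z) + \
       F.cross_entropy(model.decode_program(img_z, programs[:, :-1]).transpose(1, 2),
                       programs[:, 1:])

In this sketch the program decoder is trained with teacher forcing on shifted program tokens; the paper's graph-structured program representation and its GAN-based image manipulation are not reproduced here.
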
Keywords
Vision + language, Datasets and evaluation, Recognition: detection, categorization, retrieval