A Multimodal In-Context Tuning Approach for E-Commerce Product Description Generation
CoRR (2024)
Abstract
In this paper, we propose a new setting for generating product descriptions
from images, augmented by marketing keywords. It leverages the combined power
of visual and textual information to create descriptions that are more tailored
to the unique features of products. For this setting, previous methods utilize
visual and textual encoders to encode the image and keywords and employ a
language model-based decoder to generate the product description. However, the
generated description is often inaccurate and generic, since same-category
products share similar copywriting, and optimizing the overall framework on
large-scale samples leads models to concentrate on common words while ignoring
distinctive product features. To alleviate this issue, we present a simple and effective
Multimodal In-Context Tuning approach, named ModICT, which introduces a similar
product sample as the reference and utilizes the in-context learning capability
of language models to produce the description. During training, we keep the
visual encoder and language model frozen, focusing on optimizing the modules
responsible for creating multimodal in-context references and dynamic prompts.
This approach preserves the language generation prowess of large language
models (LLMs), facilitating a substantial increase in description diversity. To
assess the effectiveness of ModICT across various language model scales and
types, we collect data from three distinct product categories within the
E-commerce domain. Extensive experiments demonstrate that ModICT significantly
improves the accuracy (by up to 3.3% on D-5) of generated results compared to conventional methods. Our findings
underscore the potential of ModICT as a valuable tool for enhancing automatic
generation of product descriptions in a wide range of applications.
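The abstract describes retrieving a similar product as an in-context reference and assembling a multimodal prompt for a frozen language model. The sketch below illustrates that retrieval-and-assembly step in pure Python; the function names, prompt template, `<img>` placeholder, and catalog layout are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of ModICT's reference retrieval and in-context
# prompt assembly, under assumed data structures (not the paper's code).
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_reference(query_emb, catalog):
    """Pick the most visually similar same-category product as the
    in-context reference. `catalog` is a list of dicts with keys
    'emb' (image embedding), 'keywords', and 'description'."""
    return max(catalog, key=lambda p: cosine(query_emb, p["emb"]))

def build_prompt(query_keywords, query_emb, catalog):
    ref = retrieve_reference(query_emb, catalog)
    # The reference's keywords and human-written description serve as
    # the in-context demonstration; "<img>" marks where projected
    # visual tokens would be spliced in for the frozen language model.
    return (
        f"Keywords: {', '.join(ref['keywords'])} Image: <img> "
        f"Description: {ref['description']}\n"
        f"Keywords: {', '.join(query_keywords)} Image: <img> "
        f"Description:"
    )
```

In the full method, only the modules that produce the visual tokens and dynamic prompts would be trained, while the visual encoder and language model stay frozen.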