Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs
arXiv (2024)
Abstract
Recent advancements in multimodal large language models (MLLMs) have been
noteworthy, yet these general-domain MLLMs often fall short in their ability
to comprehend and interact effectively with user interface (UI) screens. In
this paper, we present Ferret-UI, a new MLLM tailored for enhanced
understanding of mobile UI screens, equipped with referring, grounding, and
reasoning capabilities. Given that UI screens typically exhibit a more
elongated aspect ratio and contain smaller objects of interest (e.g., icons,
texts) than natural images, we incorporate "any resolution" on top of Ferret to
magnify details and leverage enhanced visual features. Specifically, each
screen is divided into two sub-images based on the original aspect ratio (i.e.,
horizontal division for portrait screens and vertical division for landscape
screens). Both sub-images are encoded separately before being sent to LLMs. We
meticulously gather training samples from an extensive range of elementary UI
tasks, such as icon recognition, text finding, and widget listing. These samples
are formatted for instruction-following with region annotations to facilitate
precise referring and grounding. To augment the model's reasoning ability, we
further compile a dataset for advanced tasks, including detailed description,
perception/interaction conversations, and function inference. After training on
the curated datasets, Ferret-UI exhibits outstanding comprehension of UI
screens and the capability to execute open-ended instructions. For model
evaluation, we establish a comprehensive benchmark encompassing all the
aforementioned tasks. Ferret-UI not only outperforms most open-source UI MLLMs
but also surpasses GPT-4V on all the elementary UI tasks.
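To make the "any resolution" screen division concrete, the sketch below implements the aspect-ratio-based split described in the abstract: portrait screens are divided horizontally into top and bottom halves, landscape screens vertically into left and right halves. The function name and the use of PIL are assumptions for illustration; the paper's actual preprocessing pipeline may differ.

```python
from PIL import Image

def split_screen(img: Image.Image) -> list[Image.Image]:
    """Divide a UI screenshot into two sub-images along its longer axis.

    Portrait screens (height >= width) are divided horizontally into
    top and bottom halves; landscape screens are divided vertically
    into left and right halves, mirroring the scheme in the abstract.
    """
    w, h = img.size
    if h >= w:
        # Portrait: horizontal division -> top and bottom sub-images.
        return [img.crop((0, 0, w, h // 2)), img.crop((0, h // 2, w, h))]
    # Landscape: vertical division -> left and right sub-images.
    return [img.crop((0, 0, w // 2, h)), img.crop((w // 2, 0, w, h))]
```

Each sub-image is then encoded separately before its features are sent to the LLM, so small elements such as icons and text occupy more of the encoder's input resolution than they would in a single downscaled image.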
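The abstract does not specify the exact format of the region-annotated instruction data, so the following is a hypothetical training sample illustrating the general idea; the field names, the widget_listing task label, and the pixel-coordinate <box>[x1, y1, x2, y2]</box> convention are all invented for this sketch, not the paper's actual schema.

```python
# Hypothetical instruction-following sample with region annotations,
# in the spirit of the elementary UI tasks described above.
sample = {
    "image": "screen_0001.png",
    "task": "widget_listing",
    "instruction": "List the interactive widgets on this screen.",
    "response": (
        "There is a button <box>[120, 940, 360, 1010]</box> labeled 'Sign in' "
        "and a text field <box>[80, 820, 640, 900]</box> for the username."
    ),
}
```

Embedding boxes directly in the response text is one common way to let a single language-model output carry both the grounding (where) and the description (what) for each referred element.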