Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception
CoRR (2024)
Abstract
Mobile device agents based on Multimodal Large Language Models (MLLMs) are
becoming a popular application. In this paper, we introduce Mobile-Agent, an
autonomous multi-modal mobile device agent. Mobile-Agent first leverages visual
perception tools to accurately identify and locate both the visual and textual
elements within an app's front-end interface. Based on the perceived visual
context, it then autonomously plans and decomposes the complex operation task,
and navigates mobile apps step by step through operations. Unlike previous
solutions that rely on apps' XML files or mobile system metadata,
Mobile-Agent's vision-centric approach allows for greater adaptability across
diverse mobile operating environments, eliminating the need for
system-specific customizations. To assess the performance of Mobile-Agent, we
introduce Mobile-Eval, a benchmark for evaluating mobile device operations.
Based on Mobile-Eval, we conducted a comprehensive evaluation of Mobile-Agent.
The experimental results indicate that Mobile-Agent achieves remarkable
accuracy and completion rates. Even with challenging instructions, such as
multi-app operations, Mobile-Agent can still complete the requirements. Code
and model will be open-sourced at https://github.com/X-PLUG/MobileAgent.
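
The abstract describes an iterative perceive-plan-act loop: capture the current screen, let a multimodal model decide the next operation from the visual context, and execute that operation on the device. The sketch below is a minimal illustration of such a loop, not the authors' implementation; the function names (`capture_screenshot`, `plan_next_action`, `execute`, `run_agent`), the action dictionary format, and the use of `adb` input events as the execution backend are all assumptions, and `plan_next_action` is left as a stub where the MLLM call would go.

```python
import subprocess


def capture_screenshot(path: str = "screen.png") -> str:
    """Pull the current screen from a connected Android device via adb."""
    with open(path, "wb") as f:
        subprocess.run(["adb", "exec-out", "screencap", "-p"], stdout=f, check=True)
    return path


def plan_next_action(screenshot_path: str, instruction: str, history: list) -> dict:
    """Hypothetical stand-in for the MLLM planning step: given the screenshot,
    the user instruction, and the operation history, return the next operation,
    e.g. {"op": "tap", "x": 540, "y": 960} or {"op": "done"}."""
    raise NotImplementedError("Replace with a call to your multimodal model.")


def execute(action: dict) -> None:
    """Execute a single operation on the device through adb input events."""
    if action["op"] == "tap":
        subprocess.run(
            ["adb", "shell", "input", "tap", str(action["x"]), str(action["y"])],
            check=True,
        )
    elif action["op"] == "type":
        subprocess.run(["adb", "shell", "input", "text", action["text"]], check=True)


def run_agent(instruction: str, max_steps: int = 20) -> None:
    """Iterate perceive -> plan -> act until the model reports completion."""
    history: list = []
    for _ in range(max_steps):
        shot = capture_screenshot()
        action = plan_next_action(shot, instruction, history)
        if action["op"] == "done":
            break
        execute(action)
        history.append(action)
```

A usage call would simply be `run_agent("Search for today's weather in the browser")`, with the stub replaced by a real model call that grounds its tap coordinates in the detected visual and textual elements.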