CLIPSwarm: Generating Drone Shows from Text Prompts with Vision-Language Models
arXiv (2024)
Abstract
This paper introduces CLIPSwarm, a new algorithm designed to automate the
modeling of swarm drone formations based on natural language. The algorithm
begins by enriching a provided word to compose a text prompt, which serves as
input to an iterative search for the formation that best matches that word.
The algorithm iteratively refines formations of robots to align
with the textual description, employing different steps for "exploration" and
"exploitation". Our framework is currently evaluated on simple formation
targets, limited to contour shapes. A formation is visually represented through
its alpha-shape contour, and the most representative color for the input word is
found automatically. To measure the similarity between the description and the
visual representation of the formation, we use CLIP [1], encoding text and
images into vectors and assessing their similarity. Subsequently, the algorithm
rearranges the formation to visually represent the word more effectively,
within the given constraints of available drones. Control actions are then
assigned to the drones, ensuring robotic behavior and collision-free movement.
Experimental results demonstrate the system's efficacy in accurately modeling
robot formations from natural language descriptions. The algorithm's
versatility is showcased through the execution of drone shows in photorealistic
simulation with varying shapes. We refer the reader to the supplementary video
for a visual reference of the results.
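The core scoring step described above, comparing CLIP embeddings of the text prompt and of candidate formation renders, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding vectors are placeholders standing in for the outputs of CLIP's text and image encoders, and the comparison uses cosine similarity, the standard metric for CLIP embeddings.

```python
from math import sqrt

def cosine_similarity(u, v):
    # CLIP-style score: cosine of the angle between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def best_formation(text_emb, candidate_embs):
    # Pick the candidate render whose embedding best matches the prompt.
    # Returns (index of best candidate, list of all scores).
    scores = [cosine_similarity(text_emb, c) for c in candidate_embs]
    return max(range(len(scores)), key=scores.__getitem__), scores

# Toy example with hypothetical 2-D embeddings (real CLIP vectors are 512-D+):
text = [1.0, 0.0]
renders = [[0.0, 1.0], [1.0, 1.0], [2.0, 0.0]]
idx, scores = best_formation(text, renders)
```

In the paper's loop, the "exploration" and "exploitation" steps would propose new candidate formations, re-render them, and keep the arrangement with the highest such score.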