VisualCritic: Making LMMs Perceive Visual Quality Like Humans
arXiv (2024)
Abstract
At present, large multimodal models (LMMs) have exhibited impressive
generalization capabilities in understanding and generating visual signals.
However, they currently still lack sufficient capability to perceive low-level
visual quality akin to human perception. Can LMMs achieve this and show the
same degree of generalization in this regard? If so, not only could the
versatility of LMMs be further enhanced, but also the challenge of poor
cross-dataset performance in the field of visual quality assessment could be
addressed. In this paper, we explore this question and provide the answer
"Yes!". As a result of this initial exploration, we present VisualCritic, the
first LMM for broad-spectrum image subjective quality assessment. VisualCritic
can be used across diverse data right out of the box, without the
dataset-specific adaptation that conventional specialist models require. As
an instruction-following LMM, VisualCritic enables new capabilities of (1)
quantitatively measuring the perceptual quality of given images in terms of
their Mean Opinion Score (MOS), noisiness, colorfulness, sharpness, and other
numerical indicators, (2) qualitatively evaluating visual quality and providing
explainable descriptions, and (3) discerning whether a given image is AI-generated
or photographic. Extensive experiments demonstrate the efficacy of VisualCritic
by comparing it with other open-source LMMs and conventional specialist models
over both AI-generated and photographic images.
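The abstract states that VisualCritic predicts numerical quality indicators such as MOS. In image quality assessment, such predictions are conventionally scored against human ratings with Spearman rank correlation (SRCC). The following sketch illustrates that standard metric with made-up numbers; neither the data nor the function comes from the paper.

```python
# Illustrative sketch (not from the paper): scoring predicted MOS values
# against human ratings with Spearman rank correlation (SRCC), the common
# cross-dataset metric in image quality assessment.

def srcc(xs, ys):
    """Spearman rank correlation, assuming no tied values."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

human_mos = [3.2, 4.5, 1.8, 2.9, 4.1]   # hypothetical ground-truth MOS
predicted = [3.0, 4.4, 2.1, 3.1, 4.0]   # hypothetical model outputs
print(srcc(human_mos, predicted))        # → 0.9
```

An SRCC near 1 means the model ranks image quality the same way human raters do, which is what "cross-dataset performance" in this field typically measures.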