Towards Effective Automatic Evaluation of Generated Reflections for Motivational Interviewing

ICMI '23 Companion: Companion Publication of the 25th International Conference on Multimodal Interaction (2023)

Abstract
Reflection is an essential counselling skill in which the therapist communicates their understanding of the client's words back to the client. Recent studies have explored language-model-based reflection generation, but automatic quality evaluation of generated reflections remains under-explored. In this work, we investigate automatic evaluation of one fundamental quality aspect: coherence and context-consistency. We test a range of automatic evaluators/metrics and examine their correlations with expert judgement. We find that large language models (LLMs) used as zero-shot evaluators achieve the best performance, while other metrics correlate poorly with expert judgement. We also demonstrate that diverse LLM-as-evaluator configurations need to be explored to find the best setup.
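The zero-shot LLM-as-evaluator setup mentioned above can be sketched as a prompt-construction step: the client's utterance and the generated reflection are embedded in an instruction asking the model for a coherence rating. The prompt wording and the 1-5 scale below are illustrative assumptions, not the paper's actual configuration.

```python
def build_evaluation_prompt(client_utterance: str, reflection: str) -> str:
    """Build a zero-shot prompt asking an LLM to rate a generated
    reflection for coherence and context-consistency.

    The wording and the 1-5 scale are illustrative assumptions; the
    paper's actual prompt configurations may differ.
    """
    return (
        "You are evaluating a therapist's reflection in a motivational "
        "interviewing session.\n"
        f"Client said: {client_utterance}\n"
        f"Generated reflection: {reflection}\n"
        "On a scale of 1 (incoherent) to 5 (fully coherent and "
        "consistent with the client's words), rate the reflection. "
        "Respond with a single integer."
    )


# Example: the resulting string would be sent to an LLM, and the
# returned integer compared against expert ratings (e.g. by correlation).
prompt = build_evaluation_prompt(
    "I know I should cut back on drinking, but it's hard after work.",
    "It sounds like you want to drink less, and evenings are the "
    "hardest time for you.",
)
```

Varying this template, the scale, and the model corresponds to the "diverse LLM-as-evaluator configurations" the abstract says must be explored.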