The Counterfeit Conundrum: Can Code Language Models Grasp the Nuances of Their Incorrect Generations?
CoRR (2024)
Abstract
While language models are increasingly proficient at code generation,
they still frequently generate incorrect programs. Many of these programs are
obviously wrong, but others are more subtle and pass weaker correctness checks
such as being able to compile. In this work, we focus on these counterfeit
samples: programs sampled from a language model that 1) have a high enough
log-probability to be generated at a moderate temperature and 2) pass weak
correctness checks. Overall, we discover that most models have only a shallow
understanding of counterfeits, exhibited through three clear failure modes. First, models
mistakenly classify them as correct. Second, models are worse at reasoning
about the execution behaviour of counterfeits and often predict their execution
results as if they were correct. Third, when asking models to fix counterfeits,
the likelihood of a model successfully repairing a counterfeit is often even
lower than that of sampling a correct program from scratch. Counterfeits also
have very unexpected properties: first, counterfeit programs for problems that
are easier for a model to solve are not necessarily easier to detect and only
slightly easier to execute and repair. Second, counterfeits from a given model
are just as confusing to the model itself as they are to other models. Finally,
both strong and weak models are able to generate counterfeit samples that
equally challenge all models. In light of our findings, we recommend caution
when relying on models to understand their own samples, especially when no
external feedback is incorporated.
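
To make the abstract's operational definition concrete, here is a minimal
sketch (not the authors' released code) of how counterfeit samples might be
collected: sample programs at a moderate temperature, keep those that pass a
weak correctness check (here, compiling) yet still fail the real tests. The
names sample_program and passes_tests are hypothetical stand-ins for a
language-model sampler and a unit-test harness.

    def compiles(source: str) -> bool:
        """Weak correctness check: the candidate program at least compiles."""
        try:
            compile(source, "<candidate>", "exec")
            return True
        except SyntaxError:
            return False

    def collect_counterfeits(sample_program, passes_tests,
                             n_samples: int = 100, temperature: float = 0.6):
        """Keep samples that pass the weak check but are still incorrect."""
        counterfeits = []
        for _ in range(n_samples):
            # Hypothetical LM call sampled at a moderate temperature.
            program = sample_program(temperature=temperature)
            if compiles(program) and not passes_tests(program):
                counterfeits.append(program)
        return counterfeits
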