Evaluating Gender Bias in Pair Programming Conversations with an Agent

2022 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)

Abstract
While pair programming conversational agents have the potential to change the current landscape of programming, they require vast amounts of diverse data to train. However, due to gender gaps in the Computer Science field, it is difficult to obtain data involving women in pair programming scenarios, which may introduce bias into a future agent. Furthermore, previous research has highlighted differences between men and women in problem solving, communication, creativity, and leadership styles, all of which are critical for the success of pair collaboration. It is therefore crucial to understand how an agent's performance is affected by the gender composition of its training data. Using the transformer-based language model BERT, we created a natural language understanding (NLU) model for our future agent and tested its intent classification performance when alternately trained and tested on datasets composed entirely of conversations from either men or women. We found that the model's performance was significantly higher when trained and tested on the men's datasets, indicating the presence of gender bias within the NLU model of a future agent.
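The abstract describes fine-tuning BERT for intent classification and comparing performance when training and testing on gender-separated datasets. The following is a minimal sketch of such an evaluation loop using the Hugging Face transformers library, assuming the paper's general setup; the intent labels, dataset fields, and gender-split data are hypothetical placeholders, not the authors' actual code or data.

```python
# Sketch (not the authors' implementation): fine-tune BERT for intent
# classification on one gender's pair-programming utterances, then measure
# accuracy on each gender's held-out set. Labels below are hypothetical.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertForSequenceClassification, BertTokenizerFast

LABELS = ["suggest_code", "ask_question", "agree", "disagree"]  # placeholder intents

class IntentDataset(Dataset):
    """Tokenized utterances paired with integer intent labels."""
    def __init__(self, utterances, labels, tokenizer):
        self.enc = tokenizer(utterances, truncation=True, padding=True,
                             max_length=64, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

def train(model, loader, epochs=3, lr=2e-5):
    """Standard fine-tuning loop; the model computes loss from 'labels'."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            out = model(**batch)
            out.loss.backward()
            opt.step()
            opt.zero_grad()

def accuracy(model, loader):
    """Intent-classification accuracy on a held-out set."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for batch in loader:
            inputs = {k: v for k, v in batch.items() if k != "labels"}
            preds = model(**inputs).logits.argmax(-1)
            correct += (preds == batch["labels"]).sum().item()
            total += batch["labels"].size(0)
    return correct / total

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

# Hypothetical gender-split corpora: lists of utterances and label indices
# drawn from men-only and women-only pair-programming transcripts.
# train(model, DataLoader(IntentDataset(men_utts, men_lbls, tokenizer), batch_size=16))
# print(accuracy(model, DataLoader(IntentDataset(women_utts, women_lbls, tokenizer), batch_size=16)))
```

Repeating this with the train/test roles swapped across the two corpora would yield the cross-gender performance comparison the abstract reports.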
Keywords
Gender, Pair Programming, Biases