Generating Hypothetical Events For Abductive Inference

10th Conference on Lexical and Computational Semantics (*SEM 2021), 2021

Abstract
Abductive reasoning starts from some observations and aims at finding the most plausible explanation for these observations. To perform abduction, humans often make use of temporal and causal inferences, and of knowledge about how some hypothetical situation can result in different outcomes. This work offers the first study of how such knowledge impacts the Abductive NLI (αNLI) task, which consists in choosing the more likely explanation for given observations. We train a specialized language model, LMI, that is tasked with generating what could happen next in a hypothetical scenario that evolves from a given event. We then propose a multi-task model, MTL, to solve the αNLI task, which predicts a plausible explanation by a) considering different possible events emerging from candidate hypotheses (events generated by LMI) and b) selecting the one that is most similar to the observed outcome. We show that our MTL model improves over prior vanilla pre-trained LMs finetuned on αNLI. Our manual evaluation and analysis suggest that learning about possible next events from different hypothetical scenarios supports abductive inference.
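The selection idea described in the abstract (generate possible next events for each candidate hypothesis, then pick the hypothesis whose generated events best match the observed outcome) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the generator is a canned placeholder standing in for the specialized language model (LMI), the token-overlap score stands in for the learned similarity of the MTL model, and all function names and example sentences are hypothetical.

```python
# Illustrative sketch of hypothesis selection via generated next events.
# NOT the authors' implementation: the generator, the similarity measure,
# and all names below are placeholder assumptions.

def generate_next_events(observation_1: str, hypothesis: str) -> list[str]:
    """Placeholder for the specialized language model (LMI), which would
    generate plausible next events for the scenario formed by
    observation_1 + hypothesis. Returns a canned continuation here."""
    return [f"{hypothesis} So something happened afterwards."]

def token_overlap(a: str, b: str) -> float:
    """Simple Jaccard token overlap, standing in for the learned
    similarity used by the multi-task (MTL) model."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def choose_hypothesis(obs1: str, obs2: str, hypotheses: list[str]) -> str:
    """Pick the hypothesis whose generated next events are most similar
    to the observed outcome obs2 (the core selection step in the abstract)."""
    def score(h: str) -> float:
        events = generate_next_events(obs1, h)
        return max(token_overlap(e, obs2) for e in events)
    return max(hypotheses, key=score)

if __name__ == "__main__":
    o1 = "Dotty was being very grumpy."
    o2 = "Dotty felt much better afterwards."
    h1 = "Dotty ate something bad."
    h2 = "Dotty called her friends and felt consoled."
    print(choose_hypothesis(o1, o2, [h1, h2]))  # prefers h2 under this toy score
```

Under this toy similarity, the hypothesis whose hypothetical continuation shares more content with the second observation is selected, which mirrors the abstract's intuition even though the real model learns both the generation and the comparison.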
Key words
abductive inference, hypothetical events