Making Negation-word Entailment Judgment via Supplementing BERT with Aggregative Pattern
2020 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)
This paper proposes two novel approaches to supplement BERT for making negation-word (NW) entailment judgments on a Chinese Social Studies QA dataset. Recently, BERT has shown excellent performance (even outperforming humans) on several natural language inference tasks. However, BERT has been found to achieve these remarkable results mainly by exploiting surface features (such as lexical distribution bias) that go unnoticed by humans. In our tests, BERT’s performance degrades significantly when it is evaluated on an NW-involved dataset from which the influence of lexical distribution bias has been removed. Since a single unmatched NW can toggle the overall judgment, we propose a Negation-Word Aggregative Pattern (NWAP) to reflect the NW matching status and use it to supplement BERT. Our first approach supplements BERT with an NW toggling module, which decides whether BERT’s answer should be toggled according to the NWAP. The second approach concatenates the embedding of the above NWAP with BERT’s output vectors and then feeds them into a feedforward (FF) neural network (NN) to make the final judgment. Experiments show that our toggling module outperforms BERT by 59% accuracy on the lexical-bias-removed NW dataset. With adversarial training data added to both BERT and our FF NN, our model still outperforms BERT by 23% accuracy.
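The toggling idea in the first approach can be sketched as a simple post-hoc rule on top of BERT’s binary decision. The sketch below is hypothetical: the negation-word list, the token-level matching, and the parity-based mismatch rule are illustrative assumptions, not the paper’s actual NWAP definition.

```python
# Illustrative sketch only: a minimal NW toggling rule layered on a
# classifier's binary entailment output. The NW set and the parity-based
# mismatch criterion are assumptions for illustration, not the paper's NWAP.
NEGATION_WORDS = {"不", "沒", "非", "無", "未"}  # example Chinese negation words

def extract_nws(tokens):
    """Collect the negation words appearing in a token sequence."""
    return [t for t in tokens if t in NEGATION_WORDS]

def nwap_toggle(bert_entails, premise_tokens, hypothesis_tokens):
    """Flip the classifier's entailment label when the negation-word
    counts of premise and hypothesis differ in parity, reflecting the
    intuition that a single unmatched NW toggles the overall judgment."""
    diff = len(extract_nws(premise_tokens)) - len(extract_nws(hypothesis_tokens))
    mismatch = diff % 2 != 0
    return (not bert_entails) if mismatch else bert_entails
```

For example, if the classifier predicts entailment but the hypothesis drops a negation word present in the premise, the rule flips the prediction to non-entailment; when the negation words match up, the classifier’s answer is kept.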
Papers by Keh-Yih Su