1. Paper title

Negative Training for Neural Dialogue Response Generation

2. Link

https://www.aclweb.org/anthology/2020.acl-main.185.pdf

3. Abstract

Although deep learning models have brought tremendous advancements to the field of open-domain dialogue response generation, recent research results have revealed that the trained models have undesirable generation behaviors, such as malicious responses and generic (boring) responses. In this work, we propose a framework named “Negative Training” to minimize such behaviors. Given a trained model, the framework will first find generated samples that exhibit the undesirable behavior, and then use them to feed negative training signals for fine-tuning the model. Our experiments show that negative training can significantly reduce the hit rate of malicious responses, or discourage frequent responses and improve response diversity.

malicious: intending to do harm

4. What problem does it solve

Trained open-domain dialogue response generation models exhibit undesirable generation behaviors, namely malicious responses and generic (boring) responses.

5. Main contributions

Negative Training: a training framework for reducing the undesirable behaviors above (a rough code sketch follows the list):

  1. Find generated samples that exhibit the undesirable behavior.
  2. Feed these samples back as negative training signals to fine-tune the model.
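
The core update can be pictured as ordinary likelihood fine-tuning with the sign of the loss flipped on the flagged samples. The sketch below is only an illustration of that idea under assumptions of my own: it presumes a PyTorch seq2seq model with hypothetical `generate` and `log_prob` methods and uses a toy frequent-response detector; it is not the authors' implementation.

```python
# Minimal sketch of a negative-training update (illustrative, not the paper's code).
import torch

# Toy detector for one kind of undesirable behavior: over-frequent
# generic responses. The blacklist here is purely illustrative.
FREQUENT_RESPONSES = {"i don't know .", "i 'm not sure ."}

def is_undesirable(response: str) -> bool:
    return response.strip().lower() in FREQUENT_RESPONSES

def negative_training_step(model, optimizer, contexts, neg_weight=1.0):
    """Sample responses for a batch of contexts, keep the flagged ones,
    and push their likelihood down by flipping the sign of the usual
    negative log-likelihood loss."""
    # 1. Find generated samples that exhibit the undesirable behavior.
    model.eval()
    with torch.no_grad():
        samples = model.generate(contexts)  # hypothetical sampling API
    bad = [(c, s) for c, s in zip(contexts, samples) if is_undesirable(s)]
    if not bad:
        return 0.0

    # 2. Feed them back as a negative training signal.
    model.train()
    ctx_batch, resp_batch = zip(*bad)
    # log p(response | context), one value per flagged pair (hypothetical API).
    log_probs = model.log_prob(list(ctx_batch), list(resp_batch))
    # Minimizing +log p (instead of the usual -log p) lowers the
    # probability of the undesirable samples.
    loss = neg_weight * log_probs.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The paper covers two use cases, malicious responses and frequent (generic) responses; the toy detector above only imitates a frequency-style check for the latter.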

6. What results were obtained

The hit rate of malicious responses and the rate of frequent (generic) responses are reduced.
Response diversity is improved.

7. Keywords
