1. Paper title

Reverse Engineering Configurations of Neural Text Generation Models

2. Link

https://www.aclweb.org/anthology/2020.acl-main.25.pdf

3. Abstract

This paper seeks to develop a deeper understanding of the fundamental properties of neural text generation models. The study of artifacts that emerge in machine-generated text as a result of modeling choices is a nascent research area. Previously, the extent and degree to which these artifacts surface in generated text has not been well studied. In the spirit of better understanding generative text models and their artifacts, we propose the new task of distinguishing which of several variants of a given model generated a piece of text, and we conduct an extensive suite of diagnostic tests to observe whether modeling choices (e.g., sampling methods, top-k probabilities, model architectures, etc.) leave detectable artifacts in the text they generate. Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone. This suggests that neural text generators may be more sensitive to various modeling choices than previously thought.

nascent: newly emerging; still in an early stage of development

4. Problem addressed

How modeling choices (e.g., sampling method, top-k value, model architecture) influence the text a model generates, and whether those choices can be recovered from the generated text alone.

5. Main contributions

A new task:
Given a piece of generated text, determine which of several variants of a model produced it (see the sketch after this list).
An analysis of whether modeling choices leave detectable artifacts in the generated text.
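
As a quick illustration of what this task looks like in practice, the sketch below frames it as ordinary text classification: a detector is trained on texts labeled with the configuration that generated them, then asked to predict the configuration behind unseen text. The example texts, the configuration labels, and the TF-IDF + logistic-regression detector are placeholder assumptions for illustration only, not the paper's actual data or classifier.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# predict which generation configuration produced a piece of text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: texts generated under two (assumed) configurations.
texts = [
    "the cat sat on the mat and looked around quietly",
    "galaxies spiral endlessly beyond crimson horizons tonight",
    "the dog ran to the park and played with the ball",
    "whispers of forgotten machines echo through violet corridors",
]
labels = ["top-k=10", "pure-sampling", "top-k=10", "pure-sampling"]

# Bag-of-words features + logistic regression as a stand-in detector.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Infer the configuration behind an unseen generated sentence.
print(detector.predict(["the bird sat on the fence and sang"]))
```

In this framing, the paper's finding amounts to such detectors performing well above chance, i.e., the configuration leaves recoverable artifacts in the text.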

6. Results

The modeling choices that were used can be inferred from the generated text alone.
Generated text is more sensitive to modeling choices than previously thought.

7. Keywords

Influence of modeling choices on generated text
