1. Paper title

Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders

2. Link

https://www.aclweb.org/anthology/2020.acl-main.23.pdf

3. Abstract

Conditional Text Generation has drawn much attention as a topic of Natural Language Generation (NLG), which provides the possibility for humans to control the properties of generated contents. Current conditional generation models cannot handle emerging conditions due to their joint end-to-end learning fashion. When a new condition is added, these techniques require full retraining. In this paper, we present a new framework named Pre-train and Plug-in Variational Auto-Encoder (PPVAE) towards flexible conditional text generation. PPVAE decouples the text generation module from the condition representation module to allow “one-to-many” conditional generation. When a fresh condition emerges, only a lightweight network needs to be trained and works as a plug-in for PPVAE, which is efficient and desirable for real-world applications. Extensive experiments demonstrate the superiority of PPVAE against the existing alternatives with better conditionality and diversity but less training effort.

4. Problem addressed

Existing conditional text generation models are trained jointly end-to-end, so whenever a new condition emerges, the whole model must be retrained from scratch.

5. Main contributions

PPVAE: a pre-train-and-plug-in VAE framework for settings with flexible, evolving conditions.
It decouples the text generation module from the condition representation module.
When a new condition emerges, only a lightweight plug-in network needs to be trained while the pre-trained VAE stays frozen (see the sketch below).
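
To make the decoupling concrete, here is a minimal PyTorch sketch of the plug-in idea: the big text VAE is pre-trained once and frozen, and each new condition gets a tiny VAE that maps between the frozen global latent space and a small condition-specific latent space. The names, dimensions, and loss weighting here (`PluginVAE`, `train_plugin`, `generate`, `global_dim=128`, `cond_dim=20`, `beta`) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class PluginVAE(nn.Module):
    """Lightweight VAE mapping the frozen global latent space (assumed dim 128)
    to a small condition-specific latent space (assumed dim 20) and back."""
    def __init__(self, global_dim=128, cond_dim=20, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(global_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, cond_dim)
        self.logvar = nn.Linear(hidden, cond_dim)
        self.dec = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, global_dim),
        )

    def forward(self, z_global):
        h = self.enc(z_global)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample a condition-space latent.
        z_cond = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z_cond), mu, logvar

def train_plugin(plugin, z_global_batches, epochs=10, beta=1.0):
    """Fit the plug-in only on global latents of condition-specific text;
    the large pre-trained text VAE stays frozen throughout."""
    opt = torch.optim.Adam(plugin.parameters(), lr=1e-3)
    for _ in range(epochs):
        for z in z_global_batches:
            recon, mu, logvar = plugin(z)
            rec_loss = ((recon - z) ** 2).sum(dim=-1).mean()
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
            loss = rec_loss + beta * kl
            opt.zero_grad()
            loss.backward()
            opt.step()

@torch.no_grad()
def generate(plugin, pretrain_decoder, n=8, cond_dim=20):
    """Sample from the condition space, map back to the global space,
    then decode text with the frozen pre-trained decoder (a stand-in here)."""
    z_cond = torch.randn(n, cond_dim)
    z_global = plugin.dec(z_cond)
    return pretrain_decoder(z_global)
```

Since only the plug-in's small parameter set is updated, adding a condition costs a fraction of full retraining, which is the "one-to-many" flexibility the abstract claims.
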

6. Results

Better conditionality and diversity than existing alternatives.
Less training effort.

7. Keywords

Dynamic (emerging) conditions
