1. Paper title

Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning

2. Link

https://www.aclweb.org/anthology/2020.acl-main.76.pdf

3. Abstract

Even though BERT has achieved successful performance improvements in various supervised learning tasks, BERT is still limited by repetitive inferences on unsupervised tasks for the computation of contextual language representations. To resolve this limitation, we propose a novel deep bidirectional language model called a Transformer-based Text Autoencoder (T-TA). The T-TA computes contextual language representations without repetition and displays the benefits of a deep bidirectional architecture, such as that of BERT. In computation time experiments in a CPU environment, the proposed T-TA performs over six times faster than the BERT-like model on a reranking task and twelve times faster on a semantic similarity task. Furthermore, the T-TA shows competitive or even better accuracies than those of BERT on the above tasks. Code is available at https://github.com/joongbo/tta.

4. What problem does it address

BERT is limited by the repeated inference it requires on unsupervised tasks in order to compute contextual language representations.

5. Main contributions

Transformer-based Text Autoencoder (T-TA): a deep bidirectional language model that computes contextual language representations in a single pass, without repeated inference (see the sketch below).
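
To illustrate how a single forward pass can yield a representation for every token without letting a token see itself, here is a minimal, hypothetical PyTorch sketch of self-attention with the diagonal masked. It only illustrates the masking idea, not the authors' implementation (the released code at https://github.com/joongbo/tta is the reference); names such as `DiagonalMaskedSelfAttention` are invented here.

```python
# Hypothetical sketch: self-attention in which each position is blocked from
# attending to itself, so all tokens can be predicted in one forward pass.
import torch
import torch.nn as nn


def diagonal_mask(seq_len: int) -> torch.Tensor:
    # Additive attention mask: -inf on the diagonal blocks each position
    # from attending to its own token.
    mask = torch.zeros(seq_len, seq_len)
    mask.fill_diagonal_(float("-inf"))
    return mask


class DiagonalMaskedSelfAttention(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token (+ position) embeddings
        mask = diagonal_mask(x.size(1)).to(x.device)
        # Each output position aggregates context only from the *other*
        # positions, so the model cannot trivially copy its target token.
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out


if __name__ == "__main__":
    layer = DiagonalMaskedSelfAttention()
    dummy = torch.randn(2, 10, 256)   # batch of 2 sequences, length 10
    print(layer(dummy).shape)         # torch.Size([2, 10, 256])
```

The actual T-TA stacks Transformer layers and trains them to reconstruct every input token simultaneously (its language autoencoding objective); the sketch above shows only the masking mechanism that makes this possible without repeated inference.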

6. Results

Over six times faster than the BERT-like model on the reranking task (CPU environment)
Twelve times faster than the BERT-like model on the semantic similarity task
Accuracy competitive with, or better than, BERT on these tasks

7. Keywords

encoder
