1. Paper title

Why Overfitting Isn’t Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries

2. Link

https://www.aclweb.org/anthology/2020.acl-main.201.pdf

3. Abstract

Cross-lingual word embeddings (CLWE) are often evaluated on bilingual lexicon induction (BLI). Recent CLWE methods use linear projections, which underfit the training dictionary, to generalize on BLI. However, underfitting can hinder generalization to other downstream tasks that rely on words from the training dictionary. We address this limitation by retrofitting CLWE to the training dictionary, which pulls training translation pairs closer in the embedding space and overfits the training dictionary. This simple post-processing step often improves accuracy on two downstream tasks, despite lowering BLI test accuracy. We also retrofit to both the training dictionary and a synthetic dictionary induced from CLWE, which sometimes generalizes even better on downstream tasks. Our results confirm the importance of fully exploiting the training dictionary in downstream tasks and explain why BLI is a flawed CLWE evaluation.

Cross-lingual word embeddings (CLWE): word embeddings for two languages placed in one shared vector space
bilingual lexicon induction (BLI): inducing word translations from CLWE, typically by nearest-neighbor retrieval
hinder: to impede, hold back
retrofitting: post-hoc adjustment of trained embeddings to fit an external lexical resource
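
A minimal sketch of what the retrofitting step described in the abstract could look like, adapting Faruqui et al.'s retrofitting update to bilingual translation pairs. The function name, hyperparameters, and update schedule are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def retrofit_to_dictionary(X, Y, pairs, n_iters=10, alpha=1.0, beta=1.0):
    """Pull training translation pairs closer in a shared CLWE space.

    X: (n_src, d) source embeddings and Y: (n_tgt, d) target embeddings,
    already projected into one space; pairs: (src_idx, tgt_idx) entries
    from the training dictionary. alpha/beta trade off staying near the
    original vector vs. moving toward the translation (assumed values).
    """
    X_new, Y_new = X.copy(), Y.copy()
    for _ in range(n_iters):
        for i, j in pairs:
            # Faruqui-style coordinate update: average the original vector
            # with the current vector of its dictionary translation.
            X_new[i] = (alpha * X[i] + beta * Y_new[j]) / (alpha + beta)
            Y_new[j] = (alpha * Y[j] + beta * X_new[i]) / (alpha + beta)
    return X_new, Y_new
```

With beta much larger than alpha, each translation pair collapses toward a common point, which is exactly the "overfit the training dictionary" behavior the abstract describes.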

4. Problem addressed

Using a linear projection for CLWE underfits the training dictionary, which hurts generalization on downstream tasks that rely on words from that dictionary.
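
For context, the standard linear-projection approach fits a single orthogonal map over all dictionary pairs (orthogonal Procrustes). Because one shared rotation cannot match every pair exactly, it necessarily underfits the dictionary. A generic sketch, not the specific system evaluated in the paper:

```python
import numpy as np

def procrustes_map(X_dict, Y_dict):
    """Closed-form orthogonal Procrustes: find orthogonal W minimizing
    ||X_dict @ W - Y_dict||_F over the training dictionary pairs."""
    U, _, Vt = np.linalg.svd(X_dict.T @ Y_dict)
    return U @ Vt

# Usage sketch: one learned map W projects every source vector, so no
# individual pair can be fit exactly.
# W = procrustes_map(X[src_idx], Y[tgt_idx]); X_proj = X @ W
```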

5. Main contribution

Retrofit the CLWE to the training dictionary as a post-processing step, deliberately overfitting it by pulling training translation pairs closer together in the embedding space.
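
The abstract also retrofits to a synthetic dictionary induced from the CLWE itself. A plausible way to induce such a dictionary is nearest-neighbor translation under cosine similarity, the usual BLI decision rule; the paper may use a different retrieval criterion (e.g. CSLS), so treat this as an assumption:

```python
import numpy as np

def induce_synthetic_dictionary(X, Y, k=1):
    """Induce (source, target) pairs by cosine nearest-neighbor search."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sims = Xn @ Yn.T                        # (n_src, n_tgt) cosine similarities
    top = np.argsort(-sims, axis=1)[:, :k]  # top-k candidates per source word
    return [(i, int(j)) for i in range(X.shape[0]) for j in top[i]]
```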

6. Results

Accuracy on the BLI test set drops, but generalization on the two downstream tasks improves.

7. Keywords

cross-lingual word embeddings, bilingual lexicon induction, retrofitting, overfitting
