1. Paper title
Image-Chat: Engaging Grounded Conversations
2. Link
https://www.aclweb.org/anthology/2020.acl-main.219.pdf
3. Abstract
To achieve the long-term goal of machines being able to engage humans in conversation, our models should captivate the interest of their speaking partners. Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014). In this work we study large-scale architectures and datasets for this goal. We test a set of neural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019). Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits. Automatic metrics and human evaluations of engagingness show the efficacy of our approach; in particular, we obtain state-of-the-art performance on the existing IGC task, and our best performing model is almost on par with humans on the Image-Chat test set (preferred 47.7% of the time).
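The abstract describes scoring responses by fusing an image representation, a style trait, and the dialogue text. Below is a minimal sketch of that kind of fusion retrieval scorer, written by me for this note rather than taken from the paper: the class name, dimensions, EmbeddingBag stand-ins for the text encoders, and the simple sum fusion are all assumptions (the paper uses pretrained image encoders and Transformer text encoders and compares several fusion variants).

```python
import torch
import torch.nn as nn

class FusionRetrievalScorer(nn.Module):
    """Illustrative image + style + dialogue fusion retrieval model (not the authors' code)."""

    def __init__(self, img_dim=2048, vocab_size=30000, n_styles=215, dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, dim)           # project precomputed image features
        self.style_emb = nn.Embedding(n_styles, dim)      # one embedding per style trait
        self.text_enc = nn.EmbeddingBag(vocab_size, dim)  # stand-in for a Transformer context encoder
        self.cand_enc = nn.EmbeddingBag(vocab_size, dim)  # stand-in for a candidate response encoder

    def forward(self, img_feats, style_ids, context_tokens, cand_tokens):
        # Fuse image, style, and dialogue context by summation
        # (one of several possible combination schemes).
        ctx = self.img_proj(img_feats) + self.style_emb(style_ids) + self.text_enc(context_tokens)
        cands = self.cand_enc(cand_tokens)                # (n_cands, dim)
        # Retrieval setup: score every candidate response against the fused context.
        return ctx @ cands.t()                            # (batch, n_cands)

# Toy usage: one image, one style, a short context, and 3 candidate responses.
model = FusionRetrievalScorer()
scores = model(
    img_feats=torch.randn(1, 2048),
    style_ids=torch.tensor([42]),
    context_tokens=torch.randint(0, 30000, (1, 12)),
    cand_tokens=torch.randint(0, 30000, (3, 20)),
)
print(scores.shape)  # torch.Size([1, 3])
```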
4. What problem does it address?
Open-domain dialogue models should be engaging to their human conversation partners. The paper studies how grounding a conversation in a shared image, together with an assigned emotional mood or style trait, can make responses more captivating, and what architectures and data are needed for this at scale.
5. Main contributions
Built Image-Chat, a dataset of 202k image-grounded human-human dialogues over 202k images with 215 possible style traits, and tested a set of neural architectures that fuse state-of-the-art image and text representations in various ways.
6. Results
State-of-the-art performance on the existing IGC task; on the Image-Chat test set, the best model is almost on par with humans in engagingness evaluations (preferred 47.7% of the time).
[?] At first glance the paper seems not to design a new model, but the abstract states the authors test a set of neural architectures with various ways of fusing the image and text components, so the modeling contribution is a comparison of fusion architectures rather than a single new model.