A text-based visual context modulation neural model for multimodal machine translation
SCIE
SCOPUS
- Title
- A text-based visual context modulation neural model for multimodal machine translation
- Authors
- Kwon S.; Go B.-H.; Lee J.-H.
- Date Issued
- 2020-08
- Publisher
- ELSEVIER
- Abstract
- We introduce a novel multimodal machine translation model that integrates image features modulated by the image's caption. In general, images contain vastly more information than just their descriptions. Furthermore, in the multimodal machine translation task, feature maps are commonly extracted from a network pre-trained for object recognition, so it is not appropriate to use these feature maps directly. To extract the visual features associated with the text, we design a modulation network that combines textual information from the encoder with visual information from the pretrained CNN. However, because multimodal translation data is scarce, an overly complicated model could perform poorly; for simplicity, we apply a feature-wise multiplicative transformation. Our model is therefore a modular trainable network that can be embedded into existing multimodal translation architectures. We verified our model by conducting experiments on the Transformer model with the Multi30k dataset and evaluating translation quality using the BLEU and METEOR metrics. Overall, our model was an improvement over a text-based model and other existing models. (C) 2020 Elsevier B.V. All rights reserved.
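The feature-wise multiplicative transformation described in the abstract can be illustrated roughly as follows. This is a minimal numpy sketch under assumed toy dimensions, not the paper's implementation: all names (`text_summary`, `gamma`, the sigmoid gate, the random projection `W`) are illustrative, and the actual model learns these parameters inside the translation network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper).
d_text, channels, h, w = 16, 8, 4, 4

# A pooled text representation from the translation encoder, and
# visual feature maps from a pretrained CNN.
text_summary = rng.standard_normal(d_text)
visual_feats = rng.standard_normal((channels, h, w))

# A learned projection (random here) maps the text vector to one
# scaling factor per visual channel; a sigmoid keeps gates in (0, 1).
W = rng.standard_normal((channels, d_text))
gamma = 1.0 / (1.0 + np.exp(-(W @ text_summary)))

# Feature-wise multiplication: each channel of the visual features is
# scaled by its text-conditioned gate, suppressing channels that are
# irrelevant to the caption.
modulated = gamma[:, None, None] * visual_feats
print(modulated.shape)  # (8, 4, 4)
```

Keeping the interaction to a per-channel multiplication, rather than a deeper fusion network, matches the abstract's motivation: with scarce multimodal translation data, a simpler modulation is less likely to overfit.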
- URI
- https://oasis.postech.ac.kr/handle/2014.oak/107860
- DOI
- 10.1016/j.patrec.2020.06.010
- ISSN
- 0167-8655
- Article Type
- Article
- Citation
- PATTERN RECOGNITION LETTERS, vol. 136, pp. 212-218, 2020-08