Abstract
A pressing limitation of current Image Captioning models is their tendency to produce generic captions that omit the interesting details that make each image unique. To address this limitation, we propose an approach that enforces a stronger alignment between image regions and specific segments of text. The model architecture is composed of a visual region proposer, a region-order planner and a region-guided caption generator. The region-guided caption generator incorporates a novel information gate that allows visual and textual inputs of different frequencies and dimensionalities to be combined in a Recurrent Neural Network.
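As a rough illustration of the kind of mechanism the abstract describes, the PyTorch-style sketch below shows one way a learned gate could admit a region feature (which may be of a different dimensionality and change less often than the word input) into an RNN step. The class name, gating formula, GRU cell choice, and all dimensions are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class InformationGateRNNCell(nn.Module):
    """Minimal sketch: an RNN cell driven by a word embedding at every step,
    with a region feature of different dimensionality injected through a
    learned gate that decides how much visual information to let in."""

    def __init__(self, word_dim, region_dim, hidden_dim):
        super().__init__()
        self.rnn = nn.GRUCell(word_dim + hidden_dim, hidden_dim)
        self.region_proj = nn.Linear(region_dim, hidden_dim)        # match dimensionalities
        self.gate = nn.Linear(hidden_dim + hidden_dim, hidden_dim)  # "information gate"

    def forward(self, word_emb, region_feat, h):
        v = torch.tanh(self.region_proj(region_feat))                 # projected region feature
        g = torch.sigmoid(self.gate(torch.cat([h, v], dim=-1)))       # how much vision to admit
        gated_v = g * v                                                # gated visual signal
        return self.rnn(torch.cat([word_emb, gated_v], dim=-1), h)

# Usage: the region feature can stay fixed across several word steps
# (lower frequency) while the word embedding changes every step.
cell = InformationGateRNNCell(word_dim=300, region_dim=2048, hidden_dim=512)
h = torch.zeros(1, 512)
region = torch.randn(1, 2048)            # one proposed region, reused for a text segment
for word in torch.randn(4, 1, 300):      # four word embeddings for that segment
    h = cell(word, region, h)
```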
Original language | English
---|---
DOIs |
Publication status | Published - 2018
Event | ECCV 2018 Workshop on Shortcomings in Vision and Language (SiVL) - Munich, Germany; Duration: 8 Sep 2018 → 8 Sep 2018
Conference
Conference | ECCV 2018 Workshop on Shortcomings in Vision and Language (SiVL) |
---|---|
Country/Territory | Germany |
City | Munich |
Period | 8/09/18 → 8/09/18 |
Keywords
- Image Captioning
- visual region proposer
- region-order planner
- region-guided caption generator
- information gate
- Recurrent Neural Network