Bidirectional LSTM approach to image captioning with scene features

Davis Agughalam, Pramod Pathak, Paul Stynes

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Image captioning involves generating a sentence that describes an image. Recent work has been driven by encoder-decoder approaches, in which an encoder such as a convolutional neural network (CNN) extracts the visual features of an image. The extracted visual features are passed to a decoder such as a long short-term memory (LSTM) network to generate a sentence that describes the image. One major challenge with this approach is to precisely include the scene of an image in the generated sentences. To address this challenge, visual scene features have been used with unidirectional LSTM decoders. However, for long sentences, this limits the precision of the generated text. This research proposes a novel approach to generating sentences using visual scene information with a bidirectional LSTM decoder. The encoder is based on Inception v3 to extract the object features and Places365 to extract the scene features. The decoder uses a bidirectional LSTM to generate a sentence. The encoder-decoder model is trained on the Flickr8k dataset. Results show improved performance for generating longer sentences, with a 9% increase in BLEU-3 and a 12% increase in BLEU-4 scores compared to other encoder-decoder methods that are limited to using only global image features. Visually impaired people who use screen readers would benefit from this research, as they would receive an enhanced description of an image that includes the background scene, thereby creating a more complete picture in the mind of the reader.
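To illustrate the kind of architecture the abstract describes, the sketch below is a minimal, hypothetical Keras model (not the authors' code): it assumes the Inception v3 object features and Places365 scene features are extracted offline and concatenated into one vector, and it merges that vector with the output of a bidirectional LSTM over the partial caption to predict the next word. The dimensions, vocabulary size, and merge strategy are assumptions for illustration only.

```python
# Hypothetical sketch of an encoder-decoder captioner with scene features.
# Assumptions: pooled Inception v3 features (2048-d) concatenated with
# Places365 scene features (512-d); Flickr8k-sized vocabulary; next-word
# prediction from (image features, partial caption) pairs.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size = 8000      # assumed caption vocabulary size
max_len = 34           # assumed maximum caption length
feat_dim = 2048 + 512  # assumed object + scene feature dimensions

# Image branch: project concatenated object + scene features to the decoder size.
img_in = layers.Input(shape=(feat_dim,), name="image_features")
img_emb = layers.Dense(256, activation="relu")(layers.Dropout(0.5)(img_in))

# Text branch: embed the partial caption and run a bidirectional LSTM over it.
seq_in = layers.Input(shape=(max_len,), name="partial_caption")
seq_emb = layers.Embedding(vocab_size, 256, mask_zero=True)(seq_in)
seq_feat = layers.Bidirectional(layers.LSTM(256))(layers.Dropout(0.5)(seq_emb))
seq_proj = layers.Dense(256, activation="relu")(seq_feat)

# Merge both branches and predict the next word of the caption.
merged = layers.add([img_emb, seq_proj])
hidden = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(vocab_size, activation="softmax")(hidden)

model = Model(inputs=[img_in, seq_in], outputs=out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```

At inference time, captions would be generated word by word (e.g. by greedy or beam search), feeding each predicted word back into the partial caption until an end token or the maximum length is reached.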

Original language: English
Title of host publication: Thirteenth International Conference on Digital Image Processing, ICDIP 2021
Editors: Xudong Jiang, Hiroshi Fujita
Publisher: SPIE
ISBN (Electronic): 9781510646001
DOIs
Publication status: Published - 2021
Externally published: Yes
Event: 13th International Conference on Digital Image Processing, ICDIP 2021 - Singapore, Singapore
Duration: 20 May 2021 – 23 May 2021

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 11878
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 13th International Conference on Digital Image Processing, ICDIP 2021
Country/Territory: Singapore
City: Singapore
Period: 20/05/21 – 23/05/21

Keywords

  • Convolutional neural network
  • Image captioning
  • Long short-term memory network
