Combining Text and Image Knowledge with GANs for Zero-Shot Action Recognition in Videos

Research output: Contribution to journal › Conference article › peer-review

Abstract

The recognition of actions in videos is an active research area in machine learning, relevant to multiple domains such as health monitoring, security, and social media analysis. Zero-Shot Action Recognition (ZSAR) is a challenging problem in which models are trained to identify action classes that have not been seen during the training process. According to the literature, the most promising ZSAR approaches make use of Generative Adversarial Networks (GANs). GANs can synthesise visual embeddings for unseen classes conditioned on either textual information or images related to the class labels. In this paper, we propose a Dual-GAN approach based on the VAEGAN model to show that fusing visual and textual knowledge sources is an effective way to improve ZSAR performance. We conduct empirical ZSAR experiments with our approach on the UCF101 dataset, applying the following embedding fusion methods for combining text-driven and image-driven information: averaging, summation, maximum, and minimum. Our best Dual-GAN result is achieved with the maximum embedding fusion approach, which yields an average accuracy of 46.37%, an improvement of at least 5.37% over the leading approaches.
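To make the fusion step concrete, the sketch below implements the four element-wise fusion operators named in the abstract. It is a minimal illustration, not the authors' code: the function name `fuse_embeddings` and the 512-dimensional embeddings are assumptions, and the only premise taken from the paper is that the text-driven and image-driven GANs each synthesise visual embeddings of matching shape.

```python
import numpy as np

def fuse_embeddings(text_emb: np.ndarray,
                    image_emb: np.ndarray,
                    method: str = "maximum") -> np.ndarray:
    """Element-wise fusion of two synthesised visual embeddings.

    Assumes `text_emb` (from the text-conditioned GAN) and `image_emb`
    (from the image-conditioned GAN) have the same shape.
    """
    if method == "averaging":
        return (text_emb + image_emb) / 2.0
    if method == "summation":
        return text_emb + image_emb
    if method == "maximum":
        return np.maximum(text_emb, image_emb)
    if method == "minimum":
        return np.minimum(text_emb, image_emb)
    raise ValueError(f"Unknown fusion method: {method}")

# Example: fuse two hypothetical 512-dimensional embeddings for one
# unseen class (dimensionality chosen for illustration only).
rng = np.random.default_rng(0)
text_emb = rng.standard_normal(512)
image_emb = rng.standard_normal(512)
fused = fuse_embeddings(text_emb, image_emb, method="maximum")
```

The fused embedding would then stand in for the unseen class's visual features when training the downstream ZSAR classifier; the element-wise maximum reported as best in the paper keeps, per dimension, the stronger signal from either knowledge source.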

Keywords

  • Generative Adversarial Networks
  • Human Action Recognition
  • Semantic Knowledge Source
  • Zero-Shot Learning

