Learning and storing the parts of objects: IMF

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

A central concern for many learning algorithms is how to efficiently store what the algorithm has learned. An algorithm for the compression of Nonnegative Matrix Factorizations is presented. Compression is achieved by embedding the factorization in an encoding routine. Its performance is investigated on two standard test images, Peppers and Barbara. The compression ratio (18:1) achieved by the proposed matrix factorization improves the storability of Nonnegative Matrix Factorizations without significantly degrading accuracy (≈ 1–3 dB of degradation is introduced). We learn as before, but storage is cheaper.
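The abstract describes the scheme only at a high level. As a rough illustration of the kind of pipeline it outlines (factorize, then embed the factors in an encoding routine), the sketch below factorizes an image with scikit-learn's NMF, quantizes the factors to 8 bits, and entropy-codes them with zlib. The rank, the uniform quantization step, and the use of zlib are illustrative assumptions of mine, not the paper's actual encoding routine, and the ratio and PSNR it prints will differ from the paper's reported figures.

import zlib
import numpy as np
from sklearn.decomposition import NMF

def compress_nmf(image, rank=32):
    """Factorise a nonnegative 2-D array and store 8-bit quantised factors."""
    model = NMF(n_components=rank, init="nndsvda", max_iter=400)
    W = model.fit_transform(image)      # (rows x rank)
    H = model.components_               # (rank x cols)
    payload = []
    for F in (W, H):
        scale = F.max() / 255.0 if F.max() > 0 else 1.0
        Q = np.round(F / scale).astype(np.uint8)   # uniform 8-bit quantisation (assumption)
        payload.append((zlib.compress(Q.tobytes()), Q.shape, scale))
    return payload

def decompress_nmf(payload):
    """Rebuild the approximation from the two coded factors."""
    (bw, sw, cw), (bh, sh, ch) = payload
    W = np.frombuffer(zlib.decompress(bw), np.uint8).reshape(sw) * cw
    H = np.frombuffer(zlib.decompress(bh), np.uint8).reshape(sh) * ch
    return W @ H

# Toy usage; with the standard test images (Peppers, Barbara) one would
# report the achieved compression ratio and the PSNR drop in dB.
img = np.random.rand(256, 256)
payload = compress_nmf(img)
recon = decompress_nmf(payload)
mse = np.mean((img - recon) ** 2)
psnr = 10 * np.log10(img.max() ** 2 / mse)
raw_bytes = img.size * 8                            # original stored as float64
coded_bytes = sum(len(b) for b, _, _ in payload)
print(f"compression ratio {raw_bytes / coded_bytes:.1f}:1, PSNR {psnr:.1f} dB")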

Original language: English
Title of host publication: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
Editors: Mamadou Mboup, Tulay Adali, Eric Moreau, Jan Larsen
Publisher: IEEE Computer Society
ISBN (Electronic): 9781479936946
DOIs
Publication status: Published - 14 Nov 2014
Externally published: Yes
Event: 2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014 - Reims, France
Duration: 21 Sep 2014 – 24 Sep 2014

Publication series

Name: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
ISSN (Print): 2161-0363
ISSN (Electronic): 2161-0371

Conference

Conference: 2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014
Country/Territory: France
City: Reims
Period: 21/09/14 – 24/09/14

Keywords

  • compression
  • matrix factorization
