EVE: explainable vector based embedding technique using Wikipedia

M. Atif Qureshi, Derek Greene

Research output: Contribution to journal › Article › peer-review

Abstract

We present an unsupervised explainable vector embedding technique, called EVE, which is built upon the structure of Wikipedia. The proposed model defines the dimensions of a semantic vector representing a concept using human-readable labels, making it readily interpretable. Specifically, each vector is constructed from the Wikipedia category graph structure together with the Wikipedia article link structure. To test the effectiveness of the proposed model, we consider its usefulness in three fundamental tasks: 1) intruder detection, to evaluate its ability to identify a non-coherent vector from a list of coherent vectors; 2) ability to cluster, to evaluate its tendency to group related vectors together while keeping unrelated vectors in separate clusters; and 3) sorting relevant items first, to evaluate its ability to rank vectors (items) relevant to a query towards the top of the result list. For each task, we also propose a strategy to generate a task-specific, human-interpretable explanation from the model. Together, these tasks and explanations demonstrate the overall effectiveness of the explainable embeddings generated by EVE. Finally, we compare EVE with the Word2Vec, FastText, and GloVe embedding techniques across the three tasks, and report improvements over the state of the art.
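The dimension-labelling idea described in the abstract can be illustrated with a small sketch. The Python snippet below is not the authors' implementation: the toy article-to-category and article-to-link tables, the "category:"/"article:" dimension prefixes, the unit weighting, and the cosine comparison are all assumptions chosen only to show how an embedding whose dimensions are human-readable Wikipedia labels can be both compared and explained.

```python
# Minimal sketch (not the paper's method): explainable vectors whose dimensions
# are readable Wikipedia labels. Toy data and weighting are assumptions.
from collections import Counter

# Hypothetical toy data: article -> categories, and article -> outgoing links.
ARTICLE_CATEGORIES = {
    "Albert Einstein": ["Theoretical physicists", "Nobel laureates in Physics"],
    "Niels Bohr": ["Theoretical physicists", "Nobel laureates in Physics"],
    "Pablo Picasso": ["Spanish painters", "Cubist artists"],
}
ARTICLE_LINKS = {
    "Albert Einstein": ["Niels Bohr", "Theory of relativity"],
    "Niels Bohr": ["Albert Einstein", "Quantum mechanics"],
    "Pablo Picasso": ["Cubism"],
}

def labelled_vector(concept: str) -> Counter:
    """Build a sparse vector whose dimensions are human-readable labels:
    the concept's Wikipedia categories and the titles of linked articles."""
    vec = Counter()
    for category in ARTICLE_CATEGORIES.get(concept, []):
        vec[f"category:{category}"] += 1.0
    for linked in ARTICLE_LINKS.get(concept, []):
        vec[f"article:{linked}"] += 1.0
    return vec

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse labelled vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = (sum(x * x for x in u.values()) ** 0.5) * (sum(x * x for x in v.values()) ** 0.5)
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    einstein, bohr, picasso = map(labelled_vector, ARTICLE_CATEGORIES)
    print("Einstein vs Bohr:   ", round(cosine(einstein, bohr), 3))
    print("Einstein vs Picasso:", round(cosine(einstein, picasso), 3))
    # The overlapping labelled dimensions act as the explanation of similarity.
    print("Shared dimensions:", sorted(einstein.keys() & bohr.keys()))
```

Because every non-zero dimension carries a label, the shared dimensions printed at the end double as a human-readable explanation of why two concepts are judged similar, which is the property the three evaluation tasks exercise.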

Original language: English
Pages (from-to): 137-165
Number of pages: 29
Journal: Journal of Intelligent Information Systems
Volume: 53
Issue number: 1
DOIs
Publication status: Published - 15 Aug 2019

Keywords

  • Distributional semantics
  • Unsupervised learning
  • Wikipedia
