Abstract
With the increasing amount of textual data being collected online, automated text classification techniques are becoming increasingly important. However, much of this data takes the form of short text, with only a handful of terms per document (e.g. text messages, tweets or Facebook posts). Such data is generally too sparse and noisy for satisfactory classification. Two techniques that aim to alleviate this problem are Latent Dirichlet Allocation (LDA) and Formal Concept Analysis (FCA). Both have been shown to improve the performance of short-text classification by reducing the sparsity of the input data, but the relative performance of classifiers enhanced with each technique has not been directly compared. To address this, this work presents an experiment comparing the two using supervised models. It shows that FCA leads to a much higher degree of correlation among terms than LDA and initially gives lower classification accuracy. However, once a subset of features is selected for training, the FCA-based models can outperform those trained on LDA-expanded data.
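To illustrate the kind of pipeline the abstract describes, the sketch below shows LDA-based feature expansion followed by feature selection for short-text classification. This is a minimal illustration using scikit-learn, not the authors' code: the toy corpus, the number of topics, the chi-squared selector and the Naive Bayes classifier are all assumptions made for the example.

```python
# Minimal sketch: expand sparse short-text features with LDA topic
# proportions, select a feature subset, then train a supervised model.
# All data and parameter values below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from scipy.sparse import hstack, csr_matrix

# Toy short-text corpus with binary labels (placeholder data).
texts = ["cheap flights to dublin", "win a free phone now",
         "meeting moved to friday", "free tickets call now",
         "lunch at noon tomorrow", "flight delayed again"]
labels = [0, 1, 0, 1, 0, 0]

# Sparse bag-of-words representation of the short texts.
vectorizer = CountVectorizer()
X_terms = vectorizer.fit_transform(texts)

# Expand each document with its LDA topic distribution to reduce sparsity.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
X_topics = lda.fit_transform(X_terms)
X_expanded = hstack([X_terms, csr_matrix(X_topics)])

# Select a subset of features before training, as the abstract describes.
X_train, X_test, y_train, y_test = train_test_split(
    X_expanded, labels, test_size=0.33, random_state=0)
selector = SelectKBest(chi2, k=5)
X_train_sel = selector.fit_transform(X_train, y_train)
X_test_sel = selector.transform(X_test)

# Train and evaluate a simple supervised model on the selected features.
clf = MultinomialNB()
clf.fit(X_train_sel, y_train)
print("accuracy:", clf.score(X_test_sel, y_test))
```

An FCA-based variant would replace the topic features with concept-lattice attributes derived from the term-document incidence; the rest of the pipeline (feature selection, then a supervised classifier) would stay the same.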
| Original language | English |
|---|---|
| Pages (from-to) | 50-62 |
| Number of pages | 13 |
| Journal | CEUR Workshop Proceedings |
| Volume | 2086 |
| Publication status | Published - 2017 |
| Event | 25th Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2017), Dublin, Ireland, 7-8 Dec 2017 |
Keywords
- text classification
- short-text
- Latent Dirichlet Allocation
- Formal Concept Analysis
- sparsity
- supervised models
- classification accuracy