Abstract
Projects that set out to create a linguistic resource often employ a machine learning model that pre-annotates or filters the content before it reaches a human annotator and, ultimately, the final version of the resource. However, available budgets are often limited, and the amount of available data exceeds the amount of annotation that can be afforded. Thus, to maximize the benefit of the invested human work, we argue that the decision on which predictive model to employ should depend not only on generalized evaluation metrics, such as accuracy and F-score, but also on a gain metric. The rationale is that the model with the highest F-score may not have the best separation and sequencing of predicted classes, and may therefore lead to spending time and/or money on annotating false positives, yielding no improvement to the linguistic resource. We exemplify our point with a case study using real data from the task of building a verb-noun idiom dictionary. We show that in our scenario, given the choice of three systems with varying F-scores, the system with the highest F-score does not yield the highest profits. In other words, the cost-benefit trade-off can be more favorable if a system with a lower F-score is employed.
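The abstract does not spell out how gain is computed; as a rough illustration of the argument, consider a hypothetical gain function in which each true positive adds a fixed value to the dictionary while every candidate sent to the annotator consumes a fixed amount of budget. All names, rates, and confusion counts in the Python sketch below are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch of the abstract's argument, not the paper's actual
# gain metric: the system with the highest F-score need not yield the
# highest net gain once annotation cost is taken into account.

def f1(tp: int, fp: int, fn: int) -> float:
    """Standard F1 score from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def gain(tp: int, fp: int, value_per_entry: float = 1.0,
         cost_per_annotation: float = 0.6) -> float:
    """Net benefit: each true positive adds one dictionary entry of some
    value, while every candidate sent to the annotator (true or false
    positive) consumes annotation budget. Both rates are invented here."""
    return tp * value_per_entry - (tp + fp) * cost_per_annotation

# Two invented systems evaluated on the same pool of 100 true idioms.
systems = {
    "A (higher F-score)": dict(tp=70, fp=35, fn=30),  # balanced P/R
    "B (lower F-score)": dict(tp=45, fp=3, fn=55),    # high precision
}

for name, counts in systems.items():
    print(f"{name}: F1={f1(**counts):.3f}  "
          f"gain={gain(counts['tp'], counts['fp']):.1f}")
# System A scores F1≈0.683 but gain 7.0; system B scores F1≈0.608
# but gain 16.2, since far fewer false positives reach the annotator.
```

Under these assumed rates, the high-precision system wins on gain despite losing on F-score, which is the shape of the trade-off the case study reports.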
Original language | English |
---|---|
Publication status | Published - 2018 |
Event | Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, 7 May 2018 → 12 May 2018 |
Conference
Conference | Eleventh International Conference on Language Resources and Evaluation (LREC 2018) |
---|---|
Country/Territory | Japan |
City | Miyazaki |
Period | 7/05/18 → 12/05/18 |
Keywords
- linguistic resource
- machine learning model
- human annotator
- budget
- data annotation
- evaluation metrics
- accuracy
- F-score
- gain metric
- predictive model
- false positives
- verb-noun idiom dictionary
- cost-benefit trade-off