Abstract
Continuing advances in digital image capture and storage are producing a proliferation of imagery and the associated problem of information overload in image domains. In this work, we present a framework that supports image management using an interactive approach that captures and reuses task-based contextual information. Our framework models the relationship between images and the domain tasks they support by monitoring the interactive manipulation and annotation of task-relevant imagery. During image analysis, interactions are captured and a task context is dynamically constructed, so that human expertise, proficiency, and knowledge can be leveraged through case-based reasoning techniques to support other users carrying out similar domain tasks. In this article, we present our framework for capturing task context and describe how we have implemented it as two image retrieval applications in the geo-spatial and medical domains. We present an evaluation that tests the efficiency of our algorithms for retrieving image context information and the effectiveness of the framework for carrying out goal-directed image tasks.
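To make the idea of capturing interactions into a task context and reusing it via case-based reasoning more concrete, the sketch below shows one minimal way this could look. It is not the authors' implementation; all class and function names (`TaskContext`, `similarity`, `retrieve`) and the Jaccard overlap measure are illustrative assumptions standing in for the paper's actual context model and retrieval algorithms.

```python
# Hypothetical sketch: capture image interactions into a task context and
# retrieve the most similar past context with a simple CBR-style lookup.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class TaskContext:
    """A captured task context: the images touched and the annotations applied."""
    task_label: str
    images: set[str] = field(default_factory=set)
    annotations: set[str] = field(default_factory=set)

    def record(self, image_id: str, annotation: str) -> None:
        # Each monitored interaction adds to the evolving context.
        self.images.add(image_id)
        self.annotations.add(annotation)


def similarity(a: TaskContext, b: TaskContext) -> float:
    """Jaccard overlap of annotations; a stand-in for a richer CBR similarity measure."""
    if not a.annotations or not b.annotations:
        return 0.0
    return len(a.annotations & b.annotations) / len(a.annotations | b.annotations)


def retrieve(query: TaskContext, case_base: list[TaskContext]) -> TaskContext | None:
    """Return the stored case most similar to the current task context."""
    return max(case_base, key=lambda c: similarity(query, c), default=None)


if __name__ == "__main__":
    # Past cases captured while experts carried out earlier tasks.
    flood_case = TaskContext("flood-assessment")
    flood_case.record("sat_001.tif", "flooded-area")
    flood_case.record("sat_002.tif", "river-bank")

    lesion_case = TaskContext("lesion-review")
    lesion_case.record("mri_104.dcm", "lesion-boundary")

    # A new user's partially built context is matched against the case base,
    # so relevant imagery and annotations can be suggested for reuse.
    current = TaskContext("new-task")
    current.record("sat_050.tif", "flooded-area")
    best = retrieve(current, [flood_case, lesion_case])
    print(best.task_label if best else "no similar case")
```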
| Original language | English |
| --- | --- |
| Pages (from-to) | 473-497 |
| Number of pages | 25 |
| Journal | Multimedia Tools and Applications |
| Volume | 54 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Aug 2011 |
| Externally published | Yes |
Keywords
- Capturing and reusing user context
- Case-based reasoning
- Image manipulation
- Semantic annotation
- Task-based information retrieval