(Comment left on ________'s Blog)
Qing Luan, University of Science and Technology of China, Hefei, China
Steven M. Drucker, Microsoft Live Labs Research, Redmond, WA, USA
Johannes Kopf, University of Konstanz, Konstanz, Germany
Ying-Qing Xu, Microsoft Research Asia, Beijing, China
Michael F. Cohen, Microsoft Research, Redmond, WA, USA
Luan et al.'s system supports three kinds of annotations on gigapixel images:
1) Looping Sounds
2) Triggering Narrations
3) Visual Labels
The system also exhibits hysteresis: sounds persist briefly after the viewer moves away, and the strength of a sound increases as the viewer zooms closer.
Smaller annotations gradually appear as the user dwells on a particular part of the image.
The user can add an annotation and attach audio files to it. Each annotation marker's size is stored relative to the size of the original image, and that reference value determines both the volume of the associated audio and the displayed size of the annotation label at a given zoom level.
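The paper doesn't spell out its exact formulas, but the behavior described above can be sketched roughly like this: audio gain grows with how much of the current view an annotation covers, and two thresholds (both assumed values here, not from the paper) give the hysteresis, so a sound that has started keeps playing until the viewer moves clearly away.

```python
# Illustrative sketch only, not the authors' implementation.
class SoundAnnotation:
    ON_THRESHOLD = 0.25   # start playing once the annotation fills 25% of the view
    OFF_THRESHOLD = 0.10  # stop only after coverage drops below 10% (hysteresis)

    def __init__(self, size_fraction):
        # size_fraction: annotation size relative to the full-resolution image
        self.size_fraction = size_fraction
        self.playing = False

    def update(self, view_fraction):
        """view_fraction: fraction of the full image currently visible.

        Returns an audio gain in [0, 1] for this frame.
        """
        # Coverage: how much of the current view the annotation occupies.
        coverage = min(1.0, self.size_fraction / view_fraction)
        if not self.playing and coverage >= self.ON_THRESHOLD:
            self.playing = True
        elif self.playing and coverage < self.OFF_THRESHOLD:
            self.playing = False
        # Gain rises as the viewer zooms closer (coverage increases).
        return coverage if self.playing else 0.0
```

Zooming in past the on-threshold starts the sound; zooming back out a little leaves it playing at reduced gain, and only zooming well away silences it, which matches the persistence effect described above.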
This kind of annotation system has obvious applications for things like Google Earth, astronomical sky maps, and images of biological systems.
These annotations make exploring such systems more fun, engaging, and informative, and I could easily see this system being used for educational purposes.
The only feature I would add is a way to set your initial point of view to a particular place within the image, whether you start zoomed in or panned out.