(comment left on: Jacob Faire's blog)
Mark Micire University of Massachusetts Lowell, Lowell, MA, USA
Jill L. Drury The MITRE Corporation, Bedford, MA, USA
Brenden Keyes The MITRE Corporation, Bedford, MA, USA
Holly A. Yanco University of Massachusetts Lowell, Lowell, MA, USA
The researchers in this paper developed a multi-touch interface to control an urban search-and-rescue (USAR) robot.
Their primary objective was to observe how users would interact with the affordances provided in the control interface and what design insights could be drawn from those observations.
Their controller presented a touchscreen display showing:
1) A map generated by the robot as the user explored a space.
2) A front-view camera display.
3) A rear-view camera display.
4) A generated display of the area immediately surrounding the robot.
5) A control panel with four directional arrows, a speed-control slider, and a brake button.
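The arrow/slider/brake scheme in item 5 can be sketched as a small data structure. This is just my own illustration of how such a panel might map presses to drive commands; the names and the velocity mapping are my guesses, not anything from the paper:

```python
from dataclasses import dataclass

@dataclass
class ControlPanel:
    """Hypothetical model of the paper's control panel (my assumption)."""
    speed: float = 0.5   # slider position, 0.0 (stopped) to 1.0 (full speed)
    braked: bool = False # brake button state

    def command(self, arrow: str) -> tuple[float, float]:
        """Map an arrow press to (linear, angular) velocity fractions."""
        if self.braked:
            return (0.0, 0.0)  # brake overrides all movement input
        directions = {
            "up":    (self.speed, 0.0),   # drive forward
            "down":  (-self.speed, 0.0),  # reverse
            "left":  (0.0, self.speed),   # rotate left in place
            "right": (0.0, -self.speed),  # rotate right in place
        }
        return directions.get(arrow, (0.0, 0.0))
```

For example, `ControlPanel(speed=0.8).command("up")` would yield `(0.8, 0.0)`, and pressing the brake zeroes every command regardless of the arrows.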
With this controller, they had six users, each already trained to operate the robot with a joystick, operate the robot using the multi-touch interface instead.
The results showed that the interface produced a wide array of emergent behavior, and the authors concluded that they needed to provide clearer affordances in the controls and to separate the camera and movement controls.
I think the idea of using multi-touch for robots makes sense, but I feel their approach wasn't very ambitious or original. Most video game developers could have told them how to make an efficient controller for an entity separated from the user, and that separate camera and movement controls are essential.
For future work, I'd like to see them implement a controller for a robot with a more human-like form, where users could control its arms and legs with their own arms and legs.
That would be cool.
Providing clear affordances is important, and I think they needed to focus on that more.