(comment left on Kerry Barone's blog)
Jin Sun Ju Konkuk University, Seoul, South Korea
Yunhee Shin Konkuk University, Seoul, South Korea
Eun Yi Kim Konkuk University, Seoul, South Korea
Ju et al. developed an intelligent wheelchair (IW) system with four objectives:
1) Build a non-intrusive control system for the wheelchair that can be used by people paralyzed from the neck down.
2) Make the system usable at any time of day.
3) Make the system accurately discriminate between intentional and unintentional commands, to decrease user frustration and increase system correctness.
4) Make the system able to recognize and avoid obstacles.
To meet their first objective of a non-intrusive system, they had to avoid any kind of object that touched the face or head. Instead, they used a Logitech PC camera to monitor the user's face orientation, eye movements, and mouth shape. The user tilts their face and eyes left or right to steer in those directions, while the mouth shape controls forward motion: a "Go" shape signals the IW to move forward and an "uhm" shape signals it to stop.
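The control mapping they describe might look something like the sketch below. The function name, the tilt threshold, and the mouth-shape labels are my own assumptions for illustration, not the authors' actual code.

```python
# A minimal sketch of the face/mouth command mapping described above.
# The +/-15 degree dead zone and the label names are assumed, not from the paper.

def map_to_command(face_tilt_deg, mouth_shape):
    """Map an estimated face tilt and mouth shape to a wheelchair command.

    face_tilt_deg: estimated head tilt in degrees (negative = left, positive = right)
    mouth_shape:   one of "go", "uhm", or "closed" from the mouth classifier
    """
    if mouth_shape == "uhm":          # "uhm" shape stops the chair
        return "STOP"
    if face_tilt_deg <= -15:          # tilt left past the dead zone
        return "LEFT"
    if face_tilt_deg >= 15:           # tilt right past the dead zone
        return "RIGHT"
    if mouth_shape == "go":           # "Go" shape drives forward
        return "FORWARD"
    return "NONE"                     # no intentional command detected
```

The point is just that a single frame's face and mouth estimates collapse into one of four commands plus a "no command" state.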
Objective 3 was accomplished by having the system recognize whether the user was facing forward or looking in another direction. Commands were accepted only while the user faced forward; otherwise they were ignored as unintentional.
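A simple version of that intention gate is sketched below. The paper only describes the facing-forward check; the extra "hold for a few frames" debouncing is an assumption I added to make the idea concrete.

```python
from collections import deque

class IntentionFilter:
    """Accept a command only if the user faces forward and the same command
    persists for several consecutive frames. The facing-forward gate is from
    the paper; the frame-persistence rule is an assumed debouncing step."""

    def __init__(self, hold_frames=3):
        self.history = deque(maxlen=hold_frames)

    def update(self, facing_forward, command):
        if not facing_forward:
            self.history.clear()   # glancing away resets everything
            return "NONE"          # treated as unintentional
        self.history.append(command)
        # Emit the command once it has filled the window with identical values.
        if len(self.history) == self.history.maxlen and len(set(self.history)) == 1:
            return command
        return "NONE"
```

Feeding this one frame at a time means a stray glance to the side never produces a turn command, which is exactly the frustration the authors were trying to avoid.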
Their fourth objective was achieved by implementing 10 range sensors (2 ultrasonic and 8 infrared) that monitor the area around the IW. One fault of the system: the sensor coverage left a few blind spots, and the IW could bump into objects hidden in them.
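The obstacle-avoidance behavior can be thought of as a guard layered over the command stream, roughly as below. The 30 cm stop threshold and centimetre units are assumptions for the sketch, not values from the paper.

```python
# Sketch of a range-sensor guard: each of the 10 sensors is assumed to report
# a distance in centimetres, and the 30 cm threshold is my own assumption.

def obstacle_guard(distances_cm, command, stop_threshold_cm=30):
    """Override a FORWARD command when any sensor sees a nearby obstacle."""
    if command == "FORWARD" and any(d < stop_threshold_cm for d in distances_cm):
        return "STOP"
    return command    # turns and stops pass through unchanged
```

A guard like this also makes the blind-spot fault easy to see: if no sensor covers a direction, `distances_cm` simply never contains a small value for obstacles there, and the chair drives on.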
In the first study, they measured the accuracy of the facial-recognition interface by placing users in varying lighting and background conditions, and found that the average time to process a frame was 62 ms, or roughly 16 frames per second. They also measured the recall and precision of the four commands (left, right, stop, go): average recall across users was 96.5% and precision was an impressive 100%. Half of the users were able-bodied and the other half had disabilities.
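As a refresher on what those two numbers mean for a command interface, recall and precision come out of simple counts of detected versus missed commands; the counts below are illustrative only, not the study's raw data.

```python
# Recall: of the commands the user actually issued, how many were detected?
# Precision: of the commands the system reported, how many were real?

def recall_precision(true_positives, false_negatives, false_positives):
    recall = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return recall, precision

# Illustrative counts only: 193 of 200 issued commands detected, no spurious ones.
r, p = recall_precision(true_positives=193, false_negatives=7, false_positives=0)
```

With zero false positives, precision is exactly 100% regardless of how many commands were missed, which is why the two figures can differ the way they do in the study.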
In the second study, 10 able-bodied users (half male, half female) used three kinds of wheelchair control (joystick, headband, and the IW system) to navigate a course, and the time to complete the run was measured.
They found that the joystick was the quickest method both before and after training, and that before training the headband method was about 2 seconds faster than their system. Once users were trained, the IW system was slightly faster (by a few milliseconds) than the headband method.
I think this system gives the severely disabled a very reliable way to navigate their wheelchair, free from the annoyance of intrusive methods of control, which is good.
However, I think they need to consider people who can't control even their neck muscles (of which I know a few). But then again, you can't please everyone.
The fact that the system works as well as the headband method is encouraging, and it is interesting that they provide obstacle recognition.
They need to implement better sensing of the surrounding environment to have a truly intelligent chair, but I think that is a relatively minor problem. I am more worried about how much energy it takes to run all these sensors and to power the computer that processes all this data.
Also, they need to provide more complex controls to refine the IW's movement.