We’ve already featured numerous Microsoft Kinect for Xbox 360 hacks, from controlling Mac OS X to driving a Web browser. But this latest one is a huge jump.
MIT engineer Garratt Gallagher developed a way to use the Kinect sensor to distinguish both hands and individual fingers in a cloud of more than 60,000 points, at speeds up to 30 frames per second.
The system pairs the Kinect with the libfreenect driver, which lets the sensor talk to Linux. “The graphical interface and the hand detection software were written at MIT to interface with the open source robotics package ‘ROS,’ developed by Willow Garage (willowgarage.com),” according to MIT’s CSAIL page on YouTube.
“The hand detection software showcases the abilities of the Point Cloud Library (PCL), a part of ROS that MIT has been helping to optimize.”
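For a sense of where that “cloud of more than 60,000 points” comes from: the Kinect’s depth camera reports a distance for each pixel, and software like PCL back-projects those pixels into 3D using the pinhole camera model. Here’s a minimal sketch of that step. The intrinsics below are commonly cited approximations for the Kinect depth camera, not values from the MIT project, and the function is illustrative rather than taken from their code.

```python
# Sketch: back-projecting a Kinect depth map into a 3D point cloud.
# FX/FY/CX/CY are assumed, commonly cited Kinect depth-camera
# intrinsics -- placeholders, not values from the MIT project.

FX = FY = 594.21        # focal length in pixels (assumed)
CX, CY = 339.5, 242.7   # principal point (assumed)

def depth_to_points(depth, width=640, height=480):
    """Convert a row-major depth map (in metres) to (x, y, z) points.

    Pixels with depth 0 are invalid (the Kinect saw nothing there)
    and are skipped -- which is why a real 640x480 frame yields tens
    of thousands of valid points rather than the full 307,200.
    """
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0:                  # skip invalid pixels
                continue
            x = (u - CX) * z / FX       # pinhole back-projection
            y = (v - CY) * z / FY
            points.append((x, y, z))
    return points

# Hypothetical frame: a flat wall one metre away filling the view.
frame = [1.0] * (640 * 480)
cloud = depth_to_points(frame)
print(len(cloud))   # 307200 -- every pixel is valid in this toy frame
```

Doing this, plus hand and finger segmentation, at 30 frames per second is exactly the kind of workload PCL is built to optimize.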
What kills me about this is that no one is wearing head gear, gloves with diodes sticking out of them, or bulky “virtual reality” 3D glasses. It’s just a Microsoft Kinect that you stand in front of. You could put it in someone’s living room, or on their desk at the office, and get this interface anywhere.
Ultimately, this level of control could have far-reaching implications for user interface design. It’s obviously not fully baked yet, but that doesn’t matter. It appears that all the right pieces are there, and there’s nothing experimental about the hardware, since it’s already available on store shelves.
Enterprising developers can check out the source code. Here’s a two-minute video of the system in action.