'I should probably not be standing this close,' I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me.

We love Microsoft's Kinect 3D sensor, and not just because you can play games with it. At a mere $150, it's a dirt-cheap way to bring depth sensing and 3D vision to robots, and while open-source USB drivers already made that easy, a forthcoming Windows SDK from Microsoft promises to make it even easier.

Kinect, which is actually hardware made by an Israeli company called PrimeSense, works by projecting an infrared laser pattern onto nearby objects. A dedicated IR sensor picks up the pattern to determine a distance for each pixel, and that information is then mapped onto an image from a standard RGB camera. What you end up with is an RGBD image, where each pixel has both a color and a distance, which you can then use to map out body positions, gestures, and motion, or even generate 3D maps. Needless to say, this is an awesome capability to incorporate into a robot, and the cheap price makes it accessible to a huge audience.

We've chosen our top 10 favorite examples of how Kinect can be used to make awesome robots. Check it out:
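(First, a quick aside for the technically curious: the per-pixel depth described above is what makes Kinect so useful to roboticists, because it converts directly into a 3D point cloud via the standard pinhole camera model. Here's a minimal sketch in Python; the intrinsic values below are illustrative assumptions for a 640x480 sensor, not official Kinect calibration numbers.)

```python
import numpy as np

# Assumed pinhole intrinsics for a 640x480 depth image (illustrative only):
FX, FY = 525.0, 525.0   # focal lengths, in pixels
CX, CY = 319.5, 239.5   # principal point (image center)

def depth_to_pointcloud(depth_m):
    """Convert an HxW depth image (meters) to an HxWx3 array of 3D points.

    Pinhole model: X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.dstack((x, y, z))

# Pairing each 3D point with the color from the registered RGB camera gives
# the RGBD representation: one (x, y, z, r, g, b) tuple per pixel.
depth = np.full((480, 640), 2.0)   # pretend the robot faces a flat wall 2 m away
cloud = depth_to_pointcloud(depth)
```

Feeding clouds like this into mapping or gesture-recognition code is exactly what the hacks below are doing under the hood.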