For decades, science fiction has told us that the future will be filled with touch-based interfaces, and while that is true today in some cases, it is still limited to specialized devices. For that fantastic vision to be realized, projector, sensor, and computing technologies will have to be combined and intelligently designed to respond to a variety of inputs. Researchers at Carnegie Mellon University have brought that combination a little closer with the creation of WorldKit, a system that generates interfaces on the fly.
Creating a smart room essentially requires a projector and a depth sensor, such as the Microsoft Kinect; WorldKit is the software layer that ties this hardware together and lets users interact with it and with other devices. Instead of requiring an interface to be created beforehand, WorldKit allows a user to paint an area they wish to use as a controller and select what it controls from a menu. Using the data from the depth sensor, the software can also compensate for the curvature of objects in the room and pre-warp its projections so they appear flat on a surface. This also allows the system to work in a standard coordinate system.
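The article does not describe WorldKit's internals, but the core idea of warping a projection so a flat image lands undistorted on a surface can be illustrated with a plane-to-plane homography. The sketch below is a hypothetical, minimal example (not WorldKit's actual code): given the four corners of a painted region as seen by the sensor and the corners of the flat rectangle we want the user to perceive, it estimates the warp with the standard Direct Linear Transform and applies it to points.

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous coordinates),
    using the Direct Linear Transform on four point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def warp_point(H, pt):
    """Apply the homography to a 2-D point and dehomogenize."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Corners of a unit square (the interface as authored)...
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
# ...and the skewed quad where the painted region actually sits.
dst = [(0.0, 0.0), (2.0, 0.1), (2.1, 1.9), (-0.1, 2.0)]
H = compute_homography(src, dst)
```

A real system would go further, using the full depth map to correct for curved (not just tilted) surfaces, but the homography captures the basic "paint here, render there" mapping.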
Next, the researchers want to extend WorldKit so users can interact with interfaces floating in free space, not just on the surfaces of objects. They see many applications for this technology, especially as devices become advanced enough that we may one day see interactive light bulbs that combine all of the hardware into a single unit.
Source: Carnegie Mellon University