Like high definition before it, the new craze in media is three-dimensional video and pictures. Movies and video games are being made with the third spatial dimension in mind, while monitors and TVs can be bought to view the content. Creating an accurate depth map, though, can be computationally intensive and require hardware inaccessible to the average consumer. Traditional time-of-flight depth sensors are inefficient, as they can require measuring the distance to every pixel or depend on expensive arrays of light sensors. Now MIT has developed a new time-of-flight depth sensor so efficient and simple that it could be deployed in smartphones.
Like other time-of-flight sensors, the MIT method uses a light source to illuminate a scene and a camera to capture the reflected light. By measuring the time between the flash and the detection, the distance to an object can be determined. However, a few key things have been changed to improve the system. Instead of using a battery of light sensors, this technique uses a camera with a single pixel. Also, the light source does not always give out an omnidirectional burst of light, like a camera flash; instead, it emits a pattern that looks like an irregular checkerboard. Even though the camera has only one pixel, it can distinguish between the reflections of the different patterns.
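The core time-of-flight idea above reduces to one formula: a light pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch of that calculation (the function name and the example timing are illustrative, not taken from the MIT system):

```python
# Speed of light in a vacuum, in metres per second.
C = 299_792_458.0

def distance_from_round_trip(t_seconds: float) -> float:
    """Time-of-flight ranging: the pulse covers the distance twice
    (out and back), so halve the total path length."""
    return C * t_seconds / 2.0

# A pulse returning after 20 nanoseconds implies an object
# roughly 3 metres away.
d = distance_from_round_trip(20e-9)
```

The halving is the step most easily forgotten: the sensor clocks the full round trip, not the one-way travel time.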
Though the math involved in creating a depth map from the camera's signals isn't something most people could do by hand, it is simple enough for a smartphone's processor to work out. Compared to the Microsoft Kinect, which requires the computational power of a GPU, this is quite an accomplishment. The intended use of this technology is actually not so much to make 3D images with a phone as to enable gesture control, like that used in Kinect-enabled games.
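The reconstruction step alluded to above can be illustrated with a toy linear model. This is my own illustrative sketch, not MIT's published algorithm: each projected pattern yields one brightness reading from the single pixel (the sum of the scene weighted by the pattern), and with enough independent patterns the scene can be recovered by solving a linear system. The names `solve`, `SCENE`, and `PATTERNS` are hypothetical; a real system would use many more "pixels" and, per the article, cleverer math.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# A made-up 4-"pixel" scene (e.g. per-pixel depths or reflectances).
SCENE = [3.0, 1.0, 4.0, 2.0]

# Four independent binary illumination patterns ("checkerboards").
PATTERNS = [
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
]

# The single pixel records one number per pattern: the sum of the
# scene values lit by that pattern.
measurements = [sum(p * s for p, s in zip(row, SCENE)) for row in PATTERNS]

# With as many independent patterns as unknowns, the scene is
# recoverable as the solution of a linear system.
recovered = solve(PATTERNS, measurements)
```

The toy uses exactly as many patterns as unknowns; part of what makes the real single-pixel approach attractive is getting away with far fewer, carefully chosen measurements.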