Touchscreens, especially those with multitouch capabilities, open up interfaces to control methods that a mouse and keyboard cannot duplicate, such as pinch-to-zoom. Now researchers at Carnegie Mellon University are taking touchscreens to another level with TapSense, which gives devices the ability to distinguish between the different parts of the finger a user touches the screen with. By mapping commands to those parts of the finger, such as right-click to the knuckle and backspace to the nail, onscreen controls can be removed, leaving more room for the information of interest to the user. The researchers use a microphone to distinguish the parts of the finger by the sounds they make when striking the screen. Unfortunately, the microphones in current touchscreen phones are not adequate for TapSense, as they are designed for voice rather than the acoustic detail the software needs.
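To illustrate the idea of telling finger parts apart by their impact sound, here is a minimal, hypothetical sketch (not the researchers' actual method): a hard fingernail click concentrates energy at higher frequencies than a soft knuckle thud, so a simple spectral-centroid threshold can separate the two. The threshold value and synthetic signals below are illustrative assumptions.

```python
import numpy as np

def spectral_centroid(signal, sr):
    # Frequency "center of mass" of the tap's magnitude spectrum
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

def classify_tap(signal, sr, threshold_hz=1000.0):
    # Knuckle impacts concentrate energy at low frequencies;
    # hard fingernail clicks skew higher. Threshold is illustrative.
    return "nail" if spectral_centroid(signal, sr) > threshold_hz else "knuckle"

# Synthetic stand-ins for recorded tap impulses
sr = 44100
t = np.linspace(0, 0.02, int(sr * 0.02), endpoint=False)
decay = np.exp(-t * 200)
knuckle_tap = np.sin(2 * np.pi * 150 * t) * decay   # low-frequency thud
nail_tap = np.sin(2 * np.pi * 4000 * t) * decay     # high-frequency click

print(classify_tap(knuckle_tap, sr))  # → knuckle
print(classify_tap(nail_tap, sr))     # → nail
```

A real system would use recorded audio and a trained classifier over richer features, which is why ordinary voice-tuned phone microphones fall short.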