‘SixthSense’ is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.
We’ve evolved over millions of years to sense the world around us. When we encounter something, someone, or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information for making the right decision is not naturally perceivable with our five senses: the data, information, and knowledge that mankind has accumulated about everything, which is increasingly available online. Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information is traditionally confined to paper or to a screen. SixthSense bridges this gap, bringing intangible digital information out into the tangible world and allowing us to interact with it via natural hand gestures. By seamlessly integrating information with reality, ‘SixthSense’ frees it from its confines, making the entire world your computer.
The SixthSense prototype comprises a pocket projector, a mirror, and a camera worn as a pendant-like mobile device. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The system projects information onto the surfaces and physical objects around us, turning any surface into a digital interface, while the camera recognizes and tracks the user’s hand gestures and physical objects using computer-vision-based techniques. The software uses simple computer-vision techniques to process the video stream captured by the camera, following the locations of colored markers on the user’s fingertips and interpreting their movements as gestures for interacting with the projected application interfaces. The current SixthSense prototype supports several types of gesture-based interactions, demonstrating the usefulness, viability, and flexibility of the system.
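To make the marker-tracking idea concrete, here is a minimal sketch of color-based fingertip tracking: threshold a frame for pixels near a marker color, then take the centroid of the matching region. The frame format, color values, and tolerance are illustrative assumptions for this sketch, not the actual SixthSense implementation (a real system would work on live camera frames, typically in a more robust color space such as HSV).

```python
# Hypothetical sketch of colored-marker tracking: per-channel RGB
# thresholding followed by a centroid computation. Not the actual
# SixthSense code; frame layout and tolerance are assumptions.

def track_marker(frame, target, tol=40):
    """Return the (row, col) centroid of pixels near `target`, or None.

    frame: 2-D list of (r, g, b) tuples
    target: (r, g, b) marker color to look for
    tol: per-channel tolerance for a pixel to count as the marker
    """
    row_sum = col_sum = count = 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - target[0]) <= tol and
                    abs(g - target[1]) <= tol and
                    abs(b - target[2]) <= tol):
                row_sum += y
                col_sum += x
                count += 1
    if count == 0:
        return None  # marker not visible in this frame
    return (row_sum / count, col_sum / count)

# Tiny synthetic 4x4 "frame": black background with a red 2x2 marker.
BLACK, RED = (0, 0, 0), (255, 0, 0)
frame = [[BLACK] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (1, 2):
        frame[y][x] = RED

print(track_marker(frame, RED))  # centroid of the red patch: (1.5, 1.5)
```

Tracking the centroid of each marker across successive frames yields fingertip trajectories, which a gesture layer can then interpret (for example, two markers converging could be read as a pinch).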