Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been taken using cameras and computer vision algorithms to interpret sign language.
However, the identification and recognition of posture, gait, proxemics, and human behaviors are also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.
Gesture recognition enables humans to communicate with machines (human–machine interaction, HMI) and interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor moves accordingly. This could potentially make conventional input devices such as mice, keyboards, and even touch screens redundant. Although this technology is still in its infancy, applications are beginning to appear.
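To make the cursor-following idea concrete, here is a minimal Python sketch. It assumes a hypothetical tracker that reports the index fingertip as normalized (x, y) coordinates; the tracker itself is not shown, and the smoothing factor is just an illustrative choice to damp sensor jitter.

```python
# Minimal sketch: mapping a tracked fingertip to a screen cursor.
# The tracker feeding normalized (x, y) in [0, 1] is assumed, not shown.

SCREEN_W, SCREEN_H = 1920, 1080
SMOOTHING = 0.3  # exponential smoothing factor to damp sensor jitter

def fingertip_to_cursor(norm_x, norm_y, prev=None):
    """Convert a normalized fingertip position to pixel coordinates."""
    x = norm_x * SCREEN_W
    y = norm_y * SCREEN_H
    if prev is not None:
        # Blend with the previous cursor position so the cursor
        # does not jump on every noisy frame.
        x = prev[0] + SMOOTHING * (x - prev[0])
        y = prev[1] + SMOOTHING * (y - prev[1])
    return int(x), int(y)

cursor = None
for norm_x, norm_y in [(0.50, 0.50), (0.52, 0.49), (0.55, 0.47)]:  # stand-in frames
    cursor = fingertip_to_cursor(norm_x, norm_y, cursor)
    print(cursor)
```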
Bill Gates said that the gesture recognition technology used in Project Natal (the codename for what became Kinect) could be used "for media consumption as a whole, and even if they connect it up to Windows PCs for interacting in terms of meetings, and collaboration, and communication." Gates added: "I think the value is as great for if you're in the home, as you want to manage your movies, music, home system type stuff, it's very cool there. And I think there's incredible value as we use that in the office connected to a Windows PC. So Microsoft research and the product groups have a lot going on there, because you can use the cost reduction that will take place over the years to say, why shouldn't that be in most office environments."
SoftKinetic is a Belgian company which develops gesture recognition hardware and software for real-time range imaging (3D) cameras (such as time-of-flight cameras). It was founded in July 2007. SoftKinetic provides gesture recognition solutions based on its technology to the interactive digital entertainment, consumer electronics, health and fitness, and serious game industries.
SoftKinetic technology has been applied to interactive digital signage and advergaming, interactive television, and physical therapy.
SoftKinetic's gesture recognition software platform, named iisu, can recognize and distinguish or isolate different elements of a scene, can identify and track the body parts of a user, and can map the user's shape, posture, and movements onto an existing physical model, and vice versa. iisu is compatible with all major real-time range imaging cameras, and the middleware shields developers from the particulars of the hardware.
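The iisu API itself isn't reproduced here; as a rough illustration of how skeleton-tracking middleware of this kind is typically consumed, here is a toy Python sketch with invented names. The middleware hands the application per-frame joint positions in camera space, and the application classifies a posture from them.

```python
# Hypothetical sketch of consuming skeleton-tracking middleware output.
# Class and function names are invented for illustration and are NOT
# the real iisu API.

from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    x: float
    y: float
    z: float  # metres, in camera space

def classify_pose(joints: dict) -> str:
    """Toy pose classifier over middleware-provided joints: flags a
    'hands raised' posture when both hands are above the head."""
    head = joints["head"]
    if joints["left_hand"].y > head.y and joints["right_hand"].y > head.y:
        return "hands_raised"
    return "neutral"

frame = {  # stand-in for one frame of middleware output
    "head": Joint("head", 0.0, 1.6, 2.0),
    "left_hand": Joint("left_hand", -0.3, 1.8, 1.9),
    "right_hand": Joint("right_hand", 0.3, 1.9, 1.9),
}
print(classify_pose(frame))  # -> hands_raised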
Leap Motion, by contrast, is a startup developing advanced motion sensing technology for human–computer interaction. Originally inspired by frustration with 3D modeling using a mouse and keyboard, Leap Motion asserts that molding virtual clay should be as easy as molding clay in the real world. The Leap is a small USB peripheral designed to rest on a user's desk facing upward, thereby creating a 3D interaction space of roughly 8 cubic feet. Within this space, the Leap is advertised, through video footage, as tracking hands and fingers, as well as tools such as pens, pencils, and chopsticks, with very high accuracy. This differentiates the product from the Kinect, which is more suitable for whole-body tracking in a space the size of a living room.
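As an example of the kind of interaction such fingertip tracking enables, here is a small pinch-detector sketch in Python. The coordinates are assumed to be millimetres in the sensor's 3D interaction space, and the threshold is an illustrative guess; this is the geometry behind a pinch gesture, not the real Leap Motion SDK.

```python
# Sketch of a pinch detector of the kind fingertip tracking enables.
# Coordinates are assumed to be millimetres in sensor space; this does
# not use the real Leap Motion SDK.

import math

PINCH_THRESHOLD_MM = 25.0  # thumb and index tips closer than this = pinch

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_pinching(thumb_tip, index_tip):
    """True when the thumb and index fingertips nearly touch."""
    return distance(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

print(is_pinching((10.0, 200.0, 30.0), (18.0, 210.0, 35.0)))  # True
print(is_pinching((10.0, 200.0, 30.0), (80.0, 260.0, 90.0)))  # False
```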
Microsoft Research has been developing Digits, a wrist-mounted device that lets users control devices such as smartphones, tablets, radios, and video game systems using gestures in the air. In addition to creating a 3D representation of a user’s hand on a screen, Digits can control devices when users simulate certain gestures, such as turning a knob to crank volume up or down.
The prototype still looks very much like a prototype, but amazingly it's built from off-the-shelf parts, including an IR camera, an IR laser line generator, IR diffuse illumination, and an inertial measurement unit, and it's worth noting that wearers don't need any kind of data glove. A bare hand works perfectly. While companies like Leap Motion and SoftKinetic tie their software and hardware to a home or desktop system, a wrist-worn approach suggests that hand gesture and computer interaction may move even closer to the user.
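To illustrate the knob-turning gesture described above, here is a simulated Python sketch: a change in hand roll angle, of the sort a wrist-worn sensor like Digits might report, is mapped to volume steps. The 15° step size and the sensor readings are made up for the example; the real Digits pipeline is not reproduced here.

```python
# Sketch: mapping knob-style hand rotation to volume changes.
# Sensor input is simulated; step size is an assumption for illustration.

DEGREES_PER_STEP = 15.0  # hypothetical: one volume step per 15 degrees of twist

def update_volume(volume, prev_roll_deg, roll_deg):
    """Apply a knob-style volume change from the change in hand roll."""
    steps = int((roll_deg - prev_roll_deg) / DEGREES_PER_STEP)
    return max(0, min(100, volume + steps))

volume, roll = 50, 0.0
for new_roll in [20.0, 50.0, 35.0]:  # simulated roll readings in degrees
    volume = update_volume(volume, roll, new_roll)
    roll = new_roll
    print(volume)  # 51, 53, 52
```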
SixthSense, a wearable gestural interface developed at MIT Media Lab, pairs a pocket projector with a camera that tracks colored fiducial markers worn on the user's fingertips. The movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the projected application interfaces. SixthSense supports multi-touch and multi-user interaction.
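As an illustration of that fiducial-tracking step, here is a short OpenCV sketch in Python: colored markers are segmented by hue, and their blob centroids become per-finger positions. The HSV ranges are illustrative and not taken from the actual SixthSense implementation.

```python
# Sketch of color-fiducial tracking: segment markers by hue and return
# their centroids as fingertip positions. HSV ranges are illustrative.

import cv2
import numpy as np

def find_fiducials(frame_bgr, hsv_lower, hsv_upper):
    """Return centroids of colored markers within the given HSV range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lower, hsv_upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate blobs
            centroids.append((int(m["m10"] / m["m00"]),
                              int(m["m01"] / m["m00"])))
    return centroids

# Synthetic test frame: one red marker on a black background.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.circle(frame, (100, 120), 8, (0, 0, 255), -1)  # BGR red dot
red_lo = np.array([0, 120, 120], dtype=np.uint8)
red_hi = np.array([10, 255, 255], dtype=np.uint8)
print(find_fiducials(frame, red_lo, red_hi))  # ~[(100, 120)]
```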
The next generation of computer interaction may lie in the realm of wearable devices that let the user carry data from everyday life into a computer. The upside is a natural flow: whatever the user sees can be tagged or used on a computer. The downside is the constant need to interact with such devices, since Internet addiction and compulsive phone use have already been shown to shorten people's attention spans. The seamless recording of data outside the office, away from the computer, could make these devices a gateway drug to harder forms of addiction. The future of computer interaction may become much more natural, but it doesn't necessarily mean better...