In movies, the fastest way to convey a futuristic world is by giving characters the power to control their environment with a simple wave of the hand.
Gesture control isn’t ubiquitous yet, but the technology is progressing. In January, the Federal Communications Commission approved Google’s Project Soli, a sensing technology that uses miniature radar to detect touchless gestures.
“With the FCC approvals, Google can develop the gesture technology at a faster pace,” says Thadhani Jagdish, senior research analyst in the information and communications technology practice at Grand View Research, and eventually “transform an individual’s hand into a universal remote control.”
MORE FROM EDTECH: Check out what experts see as the future of classroom technology in K–12.
Microsoft Kinect Puts Content Control in Users’ Hands
Gesture technology is already being used in diverse applications, Jagdish notes. In South Africa’s O.R. Tambo International Airport, for example, a coffee company installed a machine that analyzes travelers’ facial gestures and dispenses a cup of coffee if it detects someone yawning.
Some Samsung TVs support motion control for changing channels, playing games and using the internet. An input device from Leap Motion, which develops hand-tracking software, lets a user control a desktop or laptop computer with hand movements.
In schools, these motion-sensing devices can be useful in spaces that rely on shared screens and collaboration platforms.
Rich Radke, a professor of electrical, computer and systems engineering at Rensselaer Polytechnic Institute, says staff there use Microsoft Kinect, which lets them point at a screen to change the content.
The number of feet from which the Microsoft Kinect system can detect gestures
Source: marketwatch.com, “Gesture Recognition Market Size, Key Players Analysis, Statistics, Emerging Technologies, Regional Trends, Future Prospects and Growth by Forecast 2023,” April 11, 2019
“This kind of technology enables the control of your pointers and your cursors on large displays where it would otherwise be very cumbersome to use your own mouse and keyboard,” he says.
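The idea Radke describes — steering a cursor on a wall-sized display by pointing — can be sketched in a few lines. The function below is an illustrative mapping, not code from any Kinect SDK: it takes a normalized hand position (as a skeletal tracker might report it) and scales a central “active zone” of the sensor’s view onto the full screen, so a user need not stretch to the edges of the camera’s field of view. The zone boundaries and screen resolution are assumed values.

```python
def hand_to_cursor(hand_x, hand_y, screen_w=3840, screen_h=2160,
                   active=(0.2, 0.8)):
    """Map a normalized hand position (0.0-1.0 in the sensor's frame)
    to pixel coordinates on a large display.

    The central active zone (here 20%-80% of the frame, an assumed
    range) is stretched to cover the whole screen.
    """
    lo, hi = active
    # Clamp the hand position into the active zone.
    nx = min(max(hand_x, lo), hi)
    ny = min(max(hand_y, lo), hi)
    # Rescale the zone to 0.0-1.0, then to pixel coordinates.
    nx = (nx - lo) / (hi - lo)
    ny = (ny - lo) / (hi - lo)
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))
```

A hand held at the center of the sensor’s view lands at the center of the display, and anything outside the active zone pins the cursor to the nearest screen edge.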
Doug A. Bowman, a professor of computer science and director of the Center for Human-Computer Interaction at Virginia Tech, describes the technology as “still somewhat of a novelty,” but agrees it has big potential, especially in augmented and virtual reality.
“You don’t necessarily want to be holding a specialized control to interact with virtual content,” he says, adding that Microsoft’s HoloLens AR headset can recognize hand gestures to allow users to click buttons, choose menu items and swipe from one screen to the next.
“In the K–12 setting, gestures could be a great fit for hands-on projects,” Bowman adds. “For example, in the science lab, using traditional input devices such as track pads and mice to browse supporting information may not be feasible because of the messy or dangerous materials students are working with.”
Researchers from Adam Mickiewicz University in Poland published a 2017 study in the British Journal of Educational Technology that looked at middle and high school students who used a virtual chemistry laboratory equipped with Microsoft Kinect gesture technology. The researchers found that students using the virtual lab had improved retention and were better at solving complex laboratory tasks.
MORE FROM EDTECH: See how K–12 schools are using virtual and augmented reality for assistive learning.
Educational Uses of Hand Gesture Tech Will Facilitate Collaboration
The market outlook for gesture technology is strong: Grand View Research estimates that the global gesture recognition market will be worth nearly $31 billion by 2025, up from $6.2 billion in 2017.
Gesture technology will “increase classroom interaction,” Bowman says, “and allow students to see, learn, understand and interact with the environment, thereby creating an interactive digital world around them.”
The number of core gestures recognized by Microsoft’s HoloLens
Source: Microsoft, “Gestures,” February 2019
One challenge, he notes, will be standardizing the types of gestures that devices recognize, just as taps and swipes are consistent across various touch screens.
“Researchers and designers need to ask, ‘What is the minimal set of gestures that will allow us to do most of the things we want to do?’” Bowman says.
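One way to picture Bowman’s “minimal set of gestures” is a classifier that sorts a short hand trajectory into just a handful of actions — a tap plus four swipes. The sketch below is purely illustrative: the gesture names, the tap threshold and the normalized coordinates are assumptions, not any vendor’s standard.

```python
def classify(points, tap_thresh=0.05):
    """Classify a short hand trajectory into a minimal gesture set:
    'tap' (little net movement) or a directional swipe.

    `points` is a sequence of (x, y) positions normalized to 0.0-1.0,
    with y increasing downward; the threshold is illustrative,
    not tuned to any real sensor.
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < tap_thresh and abs(dy) < tap_thresh:
        return "tap"
    # Pick the dominant axis of motion, then its direction.
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"
```

The appeal of such a small vocabulary is exactly the standardization Bowman describes: if every device agreed on the same five motions, a swipe would mean the same thing on a classroom display as it does on a headset.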