With Apple’s acquisition of PrimeSense there’s been a lot of excitement that motion interfaces are ready to move into prime time. But there are a lot of ways to do this kind of interaction--here's a primer for developers and designers who will have to learn this new design language soon enough.
Recently I was talking to a friend of mine in London who owns a product design firm. As a well-known product designer for hire, this friend (who asked to remain anonymous out of respect for his clients) has a range of clientele from the largest multinationals to small-time entrepreneurs who want him to bring their product ideas to market.
A few weeks ago a wealthy client--and self-proclaimed “Apple Geek” who likes throwing money at personal technology products he may or may not ever bring to market--walked into my friend’s office and said, “Apple just bought PrimeSense. 3-D motion is the future and my product can no longer just have an app remote. People need to be able to interact with it by waving their hands in the air or it won't be cool.”
The problem was that this client’s product was an Arduino-based device that lets users control the lighting in their house via an app that talks wirelessly to modules plugged into their lamps. Plugging in dozens of Arduino modules and controlling them with a smartphone is relatively easy for the user, as all modules can be viewed from one app. Redesigning the product so the user could interact with it remotely by gestures, however, would make it needlessly complicated--not to mention necessitate a complete reengineering of the product and drastically increase its cost--all for the sake of giving it a “cool factor.”
Luckily, my friend was able to talk the client out of his decision in about 20 minutes. And while most professional developers probably wouldn’t completely revamp an existing product solely because Apple might be doing something with motion control in the future, this story does bring up a good point: If you are thinking of getting into motion control, ask yourself if it’s really necessary--is it something that would truly make your hardware product or app better? Or does it offer nothing more than a brief “wow” factor?
And be honest with yourself, because even the experts in motion control know that the new human-computer input method of the future isn’t for everything.
“We strongly believe in using the right tool for the job,” Jon Altschuler, director of Creative Technology for Leap Motion, one of the most well-known leaders in 3-D motion products, tells me. “In some cases, typing or speech recognition is best, while in others it's the mouse or touch interaction. In other cases, it will be motion control, and we're continuing to explore new possibilities in this field. Ultimately, a combination of these tools may be the best approach--just like how the mouse joined the keyboard rather than replacing it.”
Chuck Gritton, CTO of Hillcrest Labs, one of the pioneers in the sector whose Freespace Motion Control Technology can be found in everything from Roku remotes to the Kopin Golden-i head-mounted displays used by firemen, agrees. “Motion interfaces make sense when they make interactions more efficient or more effective. For example, cameras [that sense motion] are useful for an immersive experience like action games. However, they are not practical for everyday use for navigation and control on TVs or PCs. Computers and smartphones have used pointing interfaces for years because they are the most efficient way to navigate a GUI, and this is not going to change.”
In other words, just because Tom Cruise looked cool using motion gestures to control his computer in Minority Report, and just because something similar is now possible in your product, don’t jump into it just for the coolness factor. Stop, breathe, and ask yourself whether adding motion control really helps your users by giving them an input method that lets them control the product in easier or more innovative ways. If motion control accomplishes neither of those things, don’t add it.
However, if motion control would benefit the user, the next step is to understand how your users move.
Most software developers come from a computer science background and haven’t usually had any anatomy or kinesiology training, so the jump into designing for motion interfaces requires them to familiarize themselves with the biological side of things for a change--starting with an understanding of the basic types of motion.
“Today, people are conflating terms such as ‘motion,’ ‘gestures,’ ‘3-D,’ and ‘pointing’--when in fact, each of those terms means something very specific from a design and UI perspective,” Gritton says. “It is very important to realize that the terms ‘3-D motion’ and ‘gesture’ are too limiting. Among human computer interaction designers the full suite of human motions is often lumped together with the term ‘gesture.’ But, the term gesture should not be used as a general term.”
Gritton says there are actually four main types of 3-D motions to consider:
- Natural motion. “Humans invoke natural motions continuously to perform our day-to-day activities. These motions are based on the structure of the human body. Eating, running, or hitting a ball all involve natural motions; to be useful, the system must transmit those motions to the application as closely as possible to their exact representation--otherwise, when you are playing video golf, one person’s hook will look like another’s slice.”
- Pointing. “Pointing is a specific motion we learn to identify at an early age to communicate preference or to make a selection. To implement an accurate pointing solution the design must eliminate impairments like human tremor or the motion that occurs when a button is pressed.”
- Movement around an axis. “A third type of motion input is the tracking of movement around the X, Y, and Z (roll, pitch, and yaw) axes that we call Virtual Controls. For more than 100 years electronic devices of all kinds used knobs and buttons for control. Given the design of our arm and wrist, these input technologies were excellent for making both coarse and fine adjustments. In today’s touch and motion controlled products, rotations of devices around the roll, pitch, and yaw axes can replicate the mechanical devices of old to efficiently provide the same level of precision as the mechanical variety. It’s the emulation of these manual controls that leads us to use the term 'Virtual Controls.'"
- Unique gestures. “Finally, we are back to gestures, which is the fourth type. While the nuances of natural motions must not be lost from one user to the next, gestures are generally quite different. Each of us might make the same gesture quite differently. For example, each person may have a slightly different wave, but the system must interpret each one the same way--as with the command, 'hello.' Gestures must be interpreted to be useful."
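To make Gritton’s fourth category concrete, here is a minimal sketch--not any vendor’s API; the function names and thresholds are hypothetical--of how a system might interpret a wave. The point is the one he makes above: two people’s waves produce very different raw motion, but counting direction reversals in the hand’s horizontal position maps both to the same “hello” command.

```python
def count_reversals(xs, min_delta=0.02):
    """Count direction reversals in a stream of hand x-positions
    (normalized 0..1). Movements smaller than min_delta are treated
    as sensor noise or hand tremor and ignored."""
    direction = 0          # -1 moving left, +1 moving right, 0 unknown
    reversals = 0
    last = xs[0]
    for x in xs[1:]:
        delta = x - last
        if abs(delta) < min_delta:
            continue       # too small to count as deliberate motion
        d = 1 if delta > 0 else -1
        if direction and d != direction:
            reversals += 1
        direction = d
        last = x
    return reversals

def interpret(xs):
    """Map raw motion to a command: two or more reversals read as a
    wave, regardless of how wide or fast each swing is."""
    return "hello" if count_reversals(xs) >= 2 else None

# A narrow, quick wave and a wide, slow one both read as "hello";
# a steady drift across the sensor does not.
narrow = [0.50, 0.55, 0.50, 0.55, 0.50]
wide   = [0.20, 0.45, 0.70, 0.40, 0.15, 0.50, 0.80]
drift  = [0.10, 0.20, 0.30, 0.40]
```

This is the interpretation step Gritton describes: the system discards the idiosyncrasies of each user’s motion and keeps only the feature that defines the gesture.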
It is only when a developer understands the four types of human motions that they can choose the most appropriate type of motion their user will need to perform to interact with their product. As Gritton says, “If 3-D motion has an advantage for a specific use case, the developer needs to understand how to distinguish that use case from the rest, and then be clear on what sorts of 3-D motion are useful.”
Which leads to the next step…
When most people think of motion control, they think of Microsoft’s Kinect, the first version of which was made by PrimeSense, the company Apple just bought for unknown reasons. The Kinect uses cameras to capture images of a user’s motion and then converts the placement of that user’s body parts (like arms and legs) into software commands so they can control a game character on screen.
But camera-based motion control is only one type of motion input available on the market. Leap’s type of motion control is entirely different. Their technology uses infrared LEDs and digital sensors to track a user’s hand and finger positions above the device. Leap’s technology is more refined on a micro level, tracking both hands of a user and all 10 fingers in real time.
Another popular type of motion control is the one pioneered by Hillcrest. Their technology relies on embedded sensors within a device like a remote or a heads-up display that incorporates accelerometers, gyroscopes, magnetometers, and other sensors and reads the orientation of the device in 3-D space to produce movement on screen.
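The sensor-based approach can be illustrated with a short sketch. This assumes only a 3-axis accelerometer measuring gravity--a real Freespace-style device also fuses gyroscope and magnetometer data, and the function names here are hypothetical--but it shows how a device’s roll and pitch fall out of basic trigonometry, and how roll can drive one of Gritton’s “Virtual Controls,” such as a volume knob.

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Estimate device roll and pitch (in degrees) from a 3-axis
    accelerometer at rest, using the direction of gravity. A real
    inertial remote would fuse this with gyroscope and magnetometer
    readings to track fast motion and yaw as well."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def knob_value(roll_deg, lo=0.0, hi=100.0, span=90.0):
    """Map roll in [-span/2, +span/2] degrees onto a virtual knob's
    range, clamping at the ends -- twisting the remote like a
    volume dial."""
    t = (roll_deg + span / 2) / span
    t = max(0.0, min(1.0, t))
    return lo + t * (hi - lo)

# Held flat, the remote reads the knob's midpoint; rolled 45 degrees
# clockwise, it reads the maximum.
flat_roll, _ = roll_pitch_from_accel(0.0, 0.0, 9.81)
```

This is why a twist of the wrist can stand in for a physical dial: the sensors recover the device’s orientation around an axis, and the software treats that angle as the control’s position.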
As you can see, the three types of motion input technology available are radically different from one another, and the right one to use depends on your unique product. Leap’s technology might be best where individual finger movement is key, such as an app that allows a doctor to manipulate a 3-D MRI scan. However, if you’re making a product such as a set-top box, a motion control remote based on Hillcrest’s tech may be ideal.
Making the right choice will depend not only on your understanding of the benefits and limitations of each type of motion control interface, but also on how well each one facilitates the most natural movement for the user.
For developers used to working in a 2-D world, Gritton says, the leap into the 3-D sphere will actually pose few problems. Instead, the real challenge will be one of design restraint and a new conceptual approach to problem-solving.
“Just because the toolbox includes the ability to detect 300 different 3-D gestures does not mean that the average Joe wants to memorize that new language just to watch a football game on his living room TV,” Gritton says. “Capability does not equal desirability.”
Gritton urges developers to use the motion capabilities in a way that is natural and simple for the user and enhances the overall user experience, not complicates it, noting that if something works extremely well--like inertial pointing for a TV screen--you keep it and build added functionality elsewhere.
Leap’s Altschuler agrees with Gritton that coding for motion interfaces, on average, isn't any more difficult than coding for the 2-D world. What he says developers need to be open to is changing the way they look at solving a problem via software.
“The challenge is less technical and more conceptual,” he says. “Motion control unlocks a great deal of power that forces developers to rethink the best way to interact with technology. It requires a lot of creativity to push beyond old ways of thinking to build something new, but we've seen lots of developers who are up to the challenge.”
And for developers, overcoming those challenges will be worth it. In the coming years there will be billions of motion control-enabled devices on the planet being used in a diverse array of industries, from consumer products to the fields of medicine, the arts, defense, sports, education, and more. And though motion interfaces won’t displace the touchscreen, just as the mouse did not replace the keyboard, they will mark an important point in the history of human-computer interaction, allowing us to get more from our technology and interact with it in ways that would have seemed like science fiction just five years ago.
As both Gritton and Altschuler told me, separately, “This is just the beginning.”