CES 2011: The Right Interface for the Right Task, As Picked By the User
LAS VEGAS – The Kinect lets you communicate with your Xbox through gestures, the iPad lets you control programs with your touch, and OnStar follows orders dictated by your voice. In the future, according to a panel of experts at this year's Consumer Electronics Show, all devices will give you the ability to pick the interface you like best, provided you are willing to pay for it.
At the forum titled "From Touch Screens to Mind Control: the Future of User Interface" at CES 2011, researchers from Microsoft, Synaptics, Hewlett-Packard and Sony predicted that digital devices soon will feature many natural interface mechanisms, such as voice control, motion control and multitouch, rather than just one or two. As users move from task to task on a single device, multiple input mechanisms will enable them to smoothly pick the best tool for each job – switching from, say, a keyboard for word processing to a touch interface for app use to motion control for gaming – without skipping a beat.
"Modality is the death of user interface," said Albert Shum, a mobile design partner at Microsoft. "We used to look at tasks as doing many things at once. Now we worry about seamlessly moving between tasks. To do that we need to break the plane -- break the third wall of the screen with different inputs."
This transition to multiple inputs happens to have started already in one of the oldest devices on the market: cars. Since the driver needs at least one hand on the wheel at all times, dashboard and GPS interfaces have begun to integrate simultaneous voice, motion and touch control, said Paolo Pastorino, a business development and marketing manager at Loquendo.
While users will feel the control of their computers in their fingertips, they will feel the drawback in their wallets. Each form of user interface requires adding more hardware to a device. Also, developers writing software for multiple input devices will need to put more work into covering all their bases, which will also raise the final cost of the device, said Richard Marks, a manager of special projects in research and development at Sony.
The movement to a ubiquitous computing ecosystem has accelerated, and complicated, this transformation. As users access the same information and services across different platforms, it's only natural that they will want to keep the same interface mechanism, Shum said. But at different scales – say, 10 feet away from the machine for home entertainment and gaming, 3 feet away for personal computing, and a single foot away for mobile device use – the same kind of interface, such as touch or gesture, functions differently.
According to Marks, those scale differences remain the biggest barrier, even more than price or hardware power, to producing devices that will let users have their interface any way they want it.