Future of Device Input Is Shadowy at Best
Two people stand in front of a projected Google Map, while a finger shadow moves to click on the map.
CREDIT: Courtesy of AT&T
NEW YORK – From the mouse and keyboard to touch screens, people have had to learn many new ways to manipulate images. Now an app being developed by two AT&T employees will require users to learn yet another set of gestures, though ones that may already be familiar from sleepovers and summer camp.
Researcher Kevin Li and summer intern Lisa Cowan made a prototype of a program that would let people make shadow figures, using their hands and arms, that would interact with the maps or photos projected from a smartphone.
The program is for any time "you have a bunch of people sitting around, each looking at their own device," Li told InnovationNewsDaily. Li demonstrated the application April 19 for reporters at the company's downtown building here, 11 months after he and Cowan presented the prototype at the Association for Computing Machinery's human-computer interaction conference.
The app recognizes the silhouette of fingers being pinched together, for example, as an instruction to zoom in on a particular object in the projected image.
Li says such technology may prove especially helpful when teachers or businesspeople are making interactive presentations, or when a bunch of friends are trying to find a nearby restaurant to visit.
Tiny, portable projectors that can beam a smartphone's screen onto a wall – a necessity for the new app – are themselves a fairly new technology. Right now, if a group of people are looking at a projection and want to change what they see, they have to bend over the phone's screen. Li and Cowan wanted a way to let people interact with the screen more comfortably.
In Li's demonstration here, using Google Maps, people clicked on locations on the screen by "touching" them with the tip of the shadow of their finger. They zoomed in by paddling their hands toward themselves in front of the projector, in a "come hither" gesture that used the whole hand and forearm. Zooming out required the opposite gesture – paddling their hands as if pushing the map away. Pinching shadow gestures also worked to zoom.
One strikingly rough edge to Li's prototype was the need to connect his iPhone to a laptop. The laptop's video camera pointed at the wall where the map was projected, to detect the shadows people cast.
Of course, no one will want to haul out a laptop in order to use an app, but Li explained that at the time he and Cowan made their program, Apple didn't let developers access data from their phones' video streams. Apple has since lifted that limitation. "There's no reason you couldn't build this prototype directly on an Apple" mobile device, Li said.
The major limitation of Li's idea now is that it still requires a device's projector and camera to point in the same direction, so that the camera can detect the shadows on the projection. No smartphones today are arranged that way, Li said. He couldn't say when a polished version of his program might be available for purchase.
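Li and Cowan's paper doesn't reproduce their detection code, but the basic step the camera enables – separating dark shadow pixels from the bright projection and treating the topmost one as a fingertip – can be sketched in a few lines. This is an illustrative assumption of how such detection might work, not the prototype's actual method; the threshold value, function name, and synthetic test frame are all invented for the example:

```python
import numpy as np

def find_shadow_tip(frame, dark_thresh=60):
    """Locate the tip of a shadow cast on a projected image.

    frame: 2-D array of grayscale pixel intensities, as captured by a
    camera pointed at the projection.
    Returns (row, col) of the topmost shadow pixel (a rough stand-in
    for a fingertip), or None if no shadow is present.
    """
    mask = frame < dark_thresh           # shadow pixels are dark
    rows, cols = np.nonzero(mask)        # coordinates of all shadow pixels
    if rows.size == 0:
        return None
    i = np.argmin(rows)                  # topmost shadow pixel
    return int(rows[i]), int(cols[i])

# Synthetic camera frame: a bright 100x100 "projection" with a dark,
# finger-shaped shadow reaching up to row 30 around column 45-54.
frame = np.full((100, 100), 200, dtype=np.uint8)
frame[30:100, 45:55] = 10
print(find_shadow_tip(frame))  # → (30, 45)
```

A real system would also need to track motion across frames to tell a paddling gesture from a pinch, and to ignore the map imagery itself, which this sketch sidesteps by assuming a uniformly bright projection.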
Cowan and Li's paper highlighted the details of figuring out which shadow gestures to use. When they asked 16 volunteers to come up with a shadow figure appropriate for zooming, for example, 12 pinched either their fingers or their entire arms together, like giant bird beaks.
The pinches came from people's understanding of how to zoom on touch screen devices. But when the AT&T pair showed videos of the gestures to another group of 16 volunteers, this second batch of testers found paddling gestures more intuitive. "Like bringing the map closer to you or pushing it away from you," one of the volunteers told the researchers.
Because the "shadow puppets" program is a new form of interaction, most people fell back on interactions they already knew, but new interfaces may need entirely new ways of interacting, Li and Cowan wrote.