Q&A: John Underkoffler Discusses the Future of Computer Interface
John Underkoffler using Oblong's G-speak system
CREDIT: Oblong, Inc.
Virtually everything about computers has changed since the first modern PCs hit the market more than 30 years ago; everything, that is, except the way humans interact with them. The mouse paradigm remains firmly entrenched as the middleman between people and their machines. With people today drowning in data, John Underkoffler thinks the mouse has outlived much of its usefulness.
Underkoffler isn't a household name, but his work is widely known. Building on ideas he developed as a doctoral candidate at MIT's Media Lab, Underkoffler served as a technical advisor on Steven Spielberg's 2002 film "Minority Report," designing the interface that Tom Cruise's character uses to quickly "motion" his way through multiple data sets with nothing but hand gestures. Now known as G-speak, the interface forms the core offering of Oblong Industries, the company Underkoffler co-founded to turn that bit of science fiction into everyday reality.
In an exclusive interview, Underkoffler tells InnovationNewsDaily how the next computing paradigm will unfold.
InnovationNewsDaily: Through devices like Microsoft Kinect, people are becoming more familiar with gesture-based, peripheral-free interfaces. But Kinect essentially accomplishes with gesture what you can also do with a mouse or an Xbox controller. How is G-speak more than just another way to move a cursor?
John Underkoffler: Fundamentally, G-speak is a complete ground-up rethinking of the entire human-machine interface experience. In terms of the capability of the input device, it's kind of a go-kart-to-car comparison. One is certainly fun, but if you have real travel needs, you're going to choose the car.
If we're talking about moving a cursor around a screen, we've taken several steps back to say "Hold on, is it really a cursor that we need to be talking about?" And the answer for the most part is "No." When you have human hands as the direct input device, and that's really the fundamental proposition of G-speak, then what happens on screen has to be rethought as well. So it's not a cursor-and-windows experience anymore. It's something far richer, in fact.
INO: Seen in action, the gesture interface looks intuitive but also fairly complex. How did you go about designing that gestural language?
JU: What we've attempted to do, and what I think we've really succeeded at, is striking exactly the right balance: a gestural vocabulary that is utterly simple and intuitive on the one hand, but just sophisticated enough to express anything you might want to do on the other. Eventually it came down to asking, "What is the single most important gesture? Let's start there and build everything else around it." For us, that's clearly pointing. We taught the machine to do what humans already know how to do: you can point at something far away, something that's too far to touch, and people watching you can do some weird vector math in their heads and know what you're pointing at. It's a pretty neat trick.
So we start with pointing, and we've layered a bunch of other gestures around that. But we've designed them to be really intuitive. Wherever possible we try to capture existing gestural meaning and recapitulate it inside the computer world.
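The "vector math" Underkoffler mentions boils down to casting a ray from the tracked hand and intersecting it with the plane of the screen. The sketch below is not Oblong's code; it is a minimal illustration of that geometry, assuming a sensor already reports the hand's position and pointing direction, and all function names and coordinate conventions are hypothetical.

```python
import numpy as np

def point_on_screen(hand_pos, point_dir, screen_origin, screen_normal,
                    screen_x_axis, screen_y_axis):
    """Return (u, v) offsets on the screen plane hit by a pointing ray.

    hand_pos      -- 3D position of the tracked hand
    point_dir     -- unit vector along the pointing direction
    screen_origin -- 3D position of the screen's lower-left corner
    screen_normal -- unit normal of the screen plane
    screen_x_axis, screen_y_axis -- unit vectors spanning the screen plane
    Returns None if the ray is parallel to the screen or points away from it.
    """
    denom = np.dot(point_dir, screen_normal)
    if abs(denom) < 1e-6:
        return None  # ray runs parallel to the screen plane
    t = np.dot(screen_origin - hand_pos, screen_normal) / denom
    if t < 0:
        return None  # intersection lies behind the hand
    hit = hand_pos + t * point_dir      # 3D intersection point
    rel = hit - screen_origin
    u = np.dot(rel, screen_x_axis)      # horizontal offset along the screen
    v = np.dot(rel, screen_y_axis)      # vertical offset along the screen
    return u, v

# Example: a hand half a meter in front of a wall screen, pointing straight at it.
print(point_on_screen(
    hand_pos=np.array([1.0, 1.5, 0.5]),
    point_dir=np.array([0.0, 0.0, -1.0]),
    screen_origin=np.array([0.0, 0.0, 0.0]),
    screen_normal=np.array([0.0, 0.0, 1.0]),
    screen_x_axis=np.array([1.0, 0.0, 0.0]),
    screen_y_axis=np.array([0.0, 1.0, 0.0]),
))  # -> (1.0, 1.5)
```

In a real system the returned offsets would then be mapped to pixels and smoothed over time, but the underlying operation is just this ray-plane intersection.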
INO: What's the killer app for this kind of interface?
JU: We ask ourselves that frequently, and the answer is: we're not quite sure, but in a good sense. We ourselves build all sorts of stuff on top of G-speak, and it's possible that one of those resulting systems is the killer app. But it's even more likely, and we want to make sure it's possible for this to be true, that someone else builds that killer app.
That ultimately means an ecosystem of developers: hobbyists and hackers and enthusiasts, the same people who like to take apart every new device that comes out and build unexpected stuff around it. So we're pushing as hard as we can to get there really, really quickly. We want the world playing with this stuff because we certainly can't build everything ourselves, and we'd love for the full imagination of the whole world of programmers and hackers to do amazing stuff with G-speak, stuff that we haven't even thought of.
INO: In a TED talk you said "we're not finished until all the computers in the world work like this." What does this future look like, and what comes after that?
JU: We're absolutely convinced that G-speak is the computing paradigm for the next thirty years. In fact, we're so convinced of it that we believe the world is going to be like this even if we stop; someone else would introduce remarkably similar ideas over time and the world would get there anyhow. When we're finished, you will know because you'll be able to walk up to any screen anywhere and point at it, and it will respond. You'll be able to point into that screen and through it to get to any of your data and any of your programs, any of your information, wherever that stuff is.
That's such a fundamental capability, such a fundamental modality, that it seems like it ought to be good for a long, long time. Consider what it replaces: if it's a computer we're talking about, you've got your mouse and your overlapping windows. But what's the interface for your TV? You've got fifteen major television manufacturers, each of them has a different style of remote, they're all hideous, and none of them are compatible with one another. What's the interface to your microwave oven? Something different yet again.
We're proposing to unify all of that through the language of gesture, through the language of space. We're going to make all of those digital objects work the same way that physical objects in the real world work. And we're going to do it in such a way that everyone is implicitly an expert at using them because everybody is already an expert at using the real world. It's where we live.