Classic AlpineInker Post #2: Pen + Touch Input in “Manual Deskterity”

Alright, here’s another blast from the not-so-distant past: our exploration of combined pen and touch input on the Microsoft Surface.

And this project was definitely a blast. A lot of fun, creative people got involved, and we tried tons and tons of ideas: many that were stupid, many that were intriguing but wrong, and many cool ones that didn’t even make our demo reel. As is clear from the demo reel, we took a design-oriented approach, meaning that we tried multiple possibilities without focusing too much on which was the “best” design. Or, said another way, I would not advocate putting together a system that has all of the gestures we explored here; but you can’t put together a map if you don’t explore the terrain, and this was most definitely a mapping expedition.

Since I did this original post, I’ve published a more definitive paper on the project called “Pen + Touch = New Tools” which appeared at the ACM UIST 2010 Symposium on User Interface Software and Technology. This is a paper I’m proud of; it really dissects this design space of pen + touch quite nicely. I’ll have to do another post about this work that gets into that next level of design nuances at some point.

I had a blast preparing the talk for this particular paper, and to be honest it was probably one of the most entertaining academic talks I’ve given in recent years. I have a very fun way of presenting this particular material, with help during the talk from a certain Mr. I.M.A. Bigbody:

Mr. Bigbody, a Corporate Denizen of Third Rate, Inc., is exactly the sort of arrogant, prove-it-to-me, you’re-just-wasting-my-time fellow that seems to inhabit every large organization.

Well, Mr. Bigbody surfaces from time to time throughout my talk to needle me about the shortcomings of the pen:

Why the pen? I can type faster than I can write.

Just tell me which is best, touch or pen.

Touch and pen are just new ways to control the mouse, so what’s the big deal?

And in the end, of course, because the good guys always win, Mr. Bigbody gets sacked and the world gets to see just how much potential there is in combined Pen + Touch input for the betterment of mankind.

One other comment about this work before I turn it over to the classic post. We originally did this work on the Microsoft Surface because, at the time, it was the only hardware platform available to us that offered full multi-touch input while also sensing a pen that we could distinguish as a unique type of contact. This is a critical point: if you can’t tell the pen apart from any other touch– as is currently a limitation of capacitive multi-touch digitizers such as those used on the iPad– it greatly limits the types of pen + touch interactions a system can support.

These days, though, a number of slates and laptops with pen + touch input are available. The Asus EP121 Windows 7 slate is a noteworthy example: it contains a Wacom active digitizer for high-quality pen input, and it also includes a second digitizer with two-touch multi-touch input. The really cool thing about it, from my perspective, is that you can also use Wacom’s multi-touch APIs to support simultaneous pen + touch input on the device. This normally isn’t possible under Windows 7, because Windows turns off touch when the pen comes in range. But it is possible if you use Wacom’s multi-touch API and handle all the touch events yourself, so you can do some cool stuff if you’re willing to work at it.
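
To give a flavor of what “handle all the touch events yourself” means in practice, here is a minimal sketch of the merge logic. It’s written in TypeScript purely for readability– the real thing is native code against Wacom’s SDK, and the vendor callback below is a hypothetical stand-in, not Wacom’s actual API. The point is simply that raw touch frames from the vendor callback and pen events from the normal ink path get funneled into one unified stream, so touch no longer vanishes whenever the pen comes into range.

```typescript
// Sketch: merge a vendor touch stream with pen input into one event stream.
// The vendor callback here is a hypothetical stand-in, NOT Wacom's actual API.

type PointerKind = "pen" | "touch";

interface UnifiedPointerEvent {
  kind: PointerKind;
  id: number;    // contact or stylus id
  x: number;
  y: number;
  down: boolean; // true while the contact or pen tip is active
}

type Listener = (e: UnifiedPointerEvent) => void;

class UnifiedInputStream {
  private listeners: Listener[] = [];

  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }

  // Wire this to the vendor SDK's per-frame touch callback, which keeps
  // reporting touches even while the pen is in range.
  onVendorTouchFrame(
    contacts: { id: number; x: number; y: number; active: boolean }[]
  ): void {
    for (const c of contacts) {
      this.emit({ kind: "touch", id: c.id, x: c.x, y: c.y, down: c.active });
    }
  }

  // Wire this to the platform's pen/ink events, which work normally.
  onPenEvent(id: number, x: number, y: number, tipDown: boolean): void {
    this.emit({ kind: "pen", id, x, y, down: tipDown });
  }

  private emit(e: UnifiedPointerEvent): void {
    for (const fn of this.listeners) fn(e);
  }
}

// The application sees a single stream and no longer cares that the OS
// would have suppressed touch while the pen is in range.
const stream = new UnifiedInputStream();
stream.subscribe((e) => {
  console.log(`${e.kind} #${e.id} at (${e.x}, ${e.y}) down=${e.down}`);
});
```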

Which gets us back to the Manual Deskterity demo on the Surface. To be honest, the whole theme in the video about the digital drafting table is a bit of a head fake. I was thinking slates the whole time I was working on the project; it just wasn’t possible to try the ideas in a slate form factor at the time. But that’s definitely where we intended to go with the research. And it’s where we still intend to go, using devices like the Asus EP121 to probe further ahead and see what other issues, techniques, or new possibilities arise.

Because I’m still totally convinced that combined pen and touch is the way of the future. It might not happen now, or two years from now, or even five years from now– but the devices of my dreams, the sleek devices that populate my vision of what we’ll be carrying around as the 21st century passes out of the sun-drenched days of its youth– well, they all have a fantastic user experience that incorporates both pen and touch, and everyone just expects things to work that way.

Even Mr. Bigbody.

Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input

With certain obvious multi-touch devices garnering a lot of attention these days, it’s easy to forget that touch does not necessarily make an interface “magically delicious” as it were. To paraphrase my collaborator Bill Buxton, we have to remember that:

Everything, including touch, is best for something and worst for something else.

Next week at the annual CHI 2010 Conference on Human Factors in Computing Systems, I’ll be presenting some new research that investigates the little-explored area of simultaneous pen and touch interaction.

Now, what does this really mean? Building on the message we have articulated in black and white above, we observe the following:

The future of direct interaction on displays is not about Touch.

Likewise, it is not about the Pen.

Nor is it about direct interaction on displays with Pen OR Touch.

It is about Pen AND Touch, simultaneously, designed such that one complements the other.


That is, we see pen and touch as complementary, not competitive, modalities of interaction. By leveraging people’s natural use of pen and paper, in the real world, we can design innovative new user experiences that exploit the combination of pen and multi-touch input to support non-physical yet natural and compelling interactions.

Examples of various behaviors observed during natural interaction with real-world pens and paper notebooks

Our research delves into the question of how one should use pen and touch in interface design. This really boils down to three questions: (1) What is the role of the pen? (2) What is the role of multi-touch? And (3) what is the role of simultaneous pen and touch? The perspective that we have arrived at in our research is the following: the pen writes, touch manipulates, and the combination of pen + touch yields new tools:

[Figure: Pen Writes, Touch Manipulates]

[Figure: Pen + Touch = New Tools]
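
A present-day aside: this division of labor is easy to express with input APIs that tag each contact with its type. Our implementation ran on the Microsoft Surface, but as a rough, minimal sketch– in TypeScript, using the standard DOM Pointer Events API, whose pointerType field distinguishes pen from touch, and with the begin… application hooks as hypothetical placeholders– the dispatch rule looks something like this:

```typescript
// Sketch: "the pen writes, touch manipulates, and pen + touch yields new
// tools," expressed with standard DOM Pointer Events (not the original
// Microsoft Surface implementation).

const canvas = document.getElementById("canvas") as HTMLElement;

// Touch contacts currently holding the canvas (or an object on it).
const heldByTouch = new Set<number>();

canvas.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType === "touch") {
    heldByTouch.add(e.pointerId);            // touch manipulates:
    beginManipulation(e.clientX, e.clientY); // drag, pan, pinch, hold...
  } else if (e.pointerType === "pen") {
    if (heldByTouch.size > 0) {
      // Pen stroke while a finger holds an object: a combined "new tool,"
      // e.g. holding a photo while stroking across it to cut it.
      beginCombinedTool(e.clientX, e.clientY);
    } else {
      beginInkStroke(e.clientX, e.clientY);  // the pen writes
    }
  }
});

canvas.addEventListener("pointerup", (e: PointerEvent) => {
  if (e.pointerType === "touch") heldByTouch.delete(e.pointerId);
});

// Hypothetical application hooks; a real app supplies its own behaviors.
function beginManipulation(x: number, y: number): void { /* move things */ }
function beginInkStroke(x: number, y: number): void { /* lay down ink */ }
function beginCombinedTool(x: number, y: number): void { /* e.g. cut */ }
```

The set of touch contacts currently holding something effectively acts as a mode for interpreting the pen, which mirrors the hold-plus-stroke structure of many of the techniques in the video.
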
I’ve now posted a video of the research on YouTube that shows a bunch of the techniques we explored. We implemented these on the Microsoft Surface, using a special IR-emitting pen that we constructed. However, you can imagine this technology coming to laptops, tablets, and slates in the near future– the N-Trig hardware on the Dell XT2, for example, already has this capability, although as a practical matter it is not currently possible to author applications that utilize simultaneous pen and touch; hence our exploration of the possibilities on the Microsoft Surface.

Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input

The name, of course, is a simple pun on “Manual Dexterity”– in our case, in the context of shuffling papers and content on a “digital desk.” Hence “manual deskterity” would be the metric of efficacy in paper-shuffling and other such activities of manual organization and arrangement of documents in your workspace. This name also has the virtue that it shot a blank on <name your favorite search engine>. Plus I have a weakness for unpronounceable neologisms.

Special thanks to my colleagues (co-authors on the paper) who contributed to the work, particularly Koji Yatani, who contributed many of the novel ideas and techniques and did all of the heavy lifting in terms of the technical implementation:

Koji Yatani (Microsoft Research Intern, Ph.D. from the University of Toronto, and as of 2011 a full-time employee at Microsoft Research’s Beijing lab)

Michel Pahud

Nicole Coddington

Jenny Rodenhouse

Andy Wilson

Hrvoje Benko

Bill Buxton


7 responses to “Classic AlpineInker Post #2: Pen + Touch Input in ‘Manual Deskterity’”

  1. Overly verbose. Could we just have the abbreviated version, for those who don’t want to read War and Peace?

    • Sure, here it is: Pen + Touch is pretty damned cool.

      Or go watch the YouTube video.

      This blog is for people who want to hear some of the stories behind the design of the technology and maybe learn something new beyond a superficial outline of the material.

      So if you don’t want that, this isn’t the blog for you.

      Sorry, but this is my blog so I get to write what I want.

  2. I am fascinated by your research. I have an ASUS EP121, and am a six-year Tablet PC user.

    Is it possible to implement any of these technologies in software on the ASUS EP121? It has a Wacom digitizer and a multi-touch Gorilla Glass screen.

    You said you will be speaking next week. Is that now this week? When will YouTube and the world be treated to more of your research?

    Thanks for the hard work… nice blog!

    Matt Chapman

  3. Absolutely yes on the Asus EP121. Wacom has its own multi-touch APIs that you can use to get the touch data. (Doing it through standard Windows events doesn’t work, because Windows 7 turns off touch as soon as the pen comes in range.)

    You have to get that custom API from Wacom though.

    Also, you have to disable the “respond to touch” checkbox in the touch control panel. One of these days I will have to do a post with detailed directions on how to do it all.

    The EP121 only senses two points of touch contact, which limits what you can do, but other than that it’s a great platform for experimenting with pen and touch.

    I don’t have a video of my talk. Sorry.

  4. Pingback: Classic Post: The Hidden Dimension of Touch | The Past and Present Future

  5. Really love this. Products like Autodesk AutoCAD could really profit from this interaction method.

    • Glad you liked the video. And yes, I think a broad swath of applications could benefit from multi-touch and pen interaction together, but it does require some creative thinking about how to design variants of those applications– as well as what users would want to do with such variants– to fully leverage pen and touch interaction.
