Category Archives: horizontal & tabletop computing

Project: Bimanual In-Place Commands

Here’s another interesting loose end, this one from 2012, which describes a user interface known as “In-Place Commands” that Michel Pahud, Bill Buxton, and I developed for a range of direct-touch form factors, everything from tablets and tabletops all the way up to electronic whiteboards à la the modern Microsoft Surface Hub devices of 2015.

[Figure: In-Place Commands screen shot]

The user can call up commands in place, directly where they’re working, by touching two fingers down and fanning out the available tool palettes. Many of the functions thus revealed act as click-through tools, where the user can select a tool and apply it in a single action, as the user is about to do for the line-drawing tool in the image above.
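For the programmers in the audience, here’s a minimal sketch of the kind of dispatch logic this implies. It is not our actual implementation; it’s an illustration in TypeScript using the web Pointer Events API, and the Palette class, its hit-testing, and the “line” tool are placeholder names invented for the example.

```typescript
// Sketch only: spawn an in-place palette when two fingers touch down close
// together, then treat the next contact on a palette item as a click-through
// tool (selecting it and applying it in the same gesture).

interface Contact { x: number; y: number; }

class Palette {
  constructor(public x: number, public y: number) { /* render fanned-out tools here */ }
  // Placeholder hit test: a real palette would hit-test its fanned-out items
  // and return the tool under (x, y), e.g. "line".
  hitTest(x: number, y: number): string | null {
    return Math.hypot(x - this.x, y - this.y) < 120 ? 'line' : null;
  }
  dismiss() { /* remove the palette visuals */ }
}

const touches = new Map<number, Contact>();
let palette: Palette | null = null;
const canvas = document.getElementById('canvas') as HTMLElement;

canvas.addEventListener('pointerdown', (e: PointerEvent) => {
  if (e.pointerType === 'touch') {
    touches.set(e.pointerId, { x: e.clientX, y: e.clientY });
    // Two fingers down near each other: fan out the palette right there.
    if (!palette && touches.size === 2) {
      const [a, b] = [...touches.values()];
      if (Math.hypot(a.x - b.x, a.y - b.y) < 150) {
        palette = new Palette((a.x + b.x) / 2, (a.y + b.y) / 2);
        return;
      }
    }
  }
  // Any further contact (finger or pen) over a palette item is a click-through:
  // the tool is selected and applied in one motion.
  if (palette) {
    const tool = palette.hitTest(e.clientX, e.clientY);
    if (tool) console.log(`apply ${tool} tool at (${e.clientX}, ${e.clientY})`);
  }
});

canvas.addEventListener('pointerup', (e: PointerEvent) => {
  touches.delete(e.pointerId);
  if (palette && touches.size === 0) {  // lifting the anchoring fingers dismisses it
    palette.dismiss();
    palette = null;
  }
});
```

The property to notice is that the palette is both summoned and operated in place: the two anchoring fingers position it, and the very next contact on a palette item selects that tool and applies it in the same motion.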

Microsoft is currently running a Request for Proposals for Surface Hub research, by the way, so check it out if that sort of thing is at all up your alley. If your proposal is selected you’ll get a spiffy new Surface Hub and $25,000 to go along with it.

We’ve never written up a formal paper on our In-Place Commands work, in part because there is still much to do and we intend to pursue it further when the time is right. But in the meantime the following post and video documenting the work may be of interest to aficionados of efficient interaction on such devices. This also relates closely to the Finger Shadow and Accordion Menu explored in our Pen + Touch work, documented here and here, which collectively form a class of such techniques.

While we wouldn’t claim that any one of these represents the ultimate approach to command and control for direct input, in sum they illustrate many of the underlying issues, the rich set of capabilities we strive to support, and possible directions for future embellishments as well.

Watch Bimanual In-Place Commands video on YouTube


Knies, R. In-Place: Interacting with Large Displays. Reporting on research by Pahud, M., Hinckley, K., and Buxton, B. TechNet Inside Microsoft Research Blog Post, Oct 4th, 2012. [Author’s cached copy of post as PDF] [Video mp4] [Watch on YouTube]

Book Chapter: Input Technologies and Techniques, 2012 Edition

Hinckley, K., Wigdor, D., Input Technologies and Techniques. Chapter 9 in The Human-Computer Interaction Handbook – Fundamentals, Evolving Technologies and Emerging Applications, Third Edition, ed. by Jacko, J., published by Taylor & Francis. To appear. [PDF of author’s manuscript – not final]

This is an extensive revision of the 2007 and 2002 editions of my book chapter, and with some heavy lifting from my new co-author Daniel Wigdor, it treats direct-touch input devices and techniques in much more depth. Lots of great new stuff. The book will be out in early 2012 or so from Taylor & Francis – keep an eye out for it!

Classic AlpineInker Post #2: Pen + Touch Input in “Manual Deskterity”

Alright, here’s another blast from the not-so-distant past: our exploration of combined pen and touch input on the Microsoft Surface.

And this project was definitely a blast. A lot of fun and creative people got involved with the project, and we just tried tons and tons of ideas, many that were stupid, many that were intriguing but wrong, and many cool ones that didn’t even make our demo reel. And as is clear from the demo reel, we definitely took a design-oriented approach in this work, meaning that we tried multiple possibilities without focusing too much on which was the “best” design. Or, said another way, I would not advocate putting together a system that has all of the gestures that we explored in this work; but you can’t put together a map if you don’t explore the terrain, and this was most definitely a mapping expedition.

Since I did this original post, I’ve published a more definitive paper on the project called “Pen + Touch = New Tools” which appeared at the ACM UIST 2010 Symposium on User Interface Software and Technology. This is a paper I’m proud of; it really dissects this design space of pen + touch quite nicely. I’ll have to do another post about this work that gets into that next level of design nuances at some point.

I had a blast preparing the talk for this particular paper, and to be honest it was probably one of the most entertaining academic talks I’ve done in recent years. I have a very fun way of presenting this particular material, with help during the talk from a certain Mr. I.M.A. Bigbody:

Mr. Bigbody, a Corporate Denizen of Third Rate, Inc., is exactly the arrogant, prove-it-to-me, you’re-just-wasting-my-time sort of fellow who seems to inhabit every large organization.

Well, Mr. Bigbody surfaces from time to time throughout my talk to needle me about the shortcomings of the pen:

Why the pen? I can type faster than I can write.

Just tell me which is best, touch or pen.

Touch and pen are just new ways to control the mouse, so what’s the big deal?

And in the end, of course, because the good guys always win, Mr. Bigbody gets sacked and the world gets to see just how much potential there is in combined Pen + Touch input for the betterment of mankind.

One other comment about this work before I turn it over to the classic post. We originally did this work on the Microsoft Surface, because at the time this was the only hardware platform available to us where we could have full multi-touch input while also sensing a pen that we could distinguish as a unique type of contact. This is a critical point. If you can’t tell the pen from any other touch (as is currently a limitation of capacitive multi-touch digitizers such as those used on the iPad), it greatly limits the types of pen + touch interactions that a system can support.

These days, though, a number of slates and laptops with pen + touch input are available. The Asus EP121 Windows 7 slate is a noteworthy example; this particular slate contains a Wacom active digitizer for high-quality pen input, and it also includes a second digitizer with two-touch multi-touch input. The really cool thing about it from my perspective is that you can also use Wacom’s multi-touch APIs to support simultaneous pen + touch input on the device. This normally isn’t possible under Windows 7 because Windows turns off touch when the pen comes in range. But it is possible if you use Wacom’s multi-touch API and handle all the touch events yourself, so you can do some cool stuff if you’re willing to work at it.
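To see why distinguishing the pen from touch matters in code, here’s a tiny sketch. It deliberately uses the web Pointer Events API as a modern stand-in rather than the Wacom multi-touch APIs discussed above, but the principle it illustrates is the same: each contact arrives tagged with its type, so pen and touch streams can be routed to different behaviors even when they arrive simultaneously.

```typescript
// Sketch only: Pointer Events as a stand-in for a digitizer stack that can
// report pen and touch at the same time and tell them apart.

const surface = document.getElementById('surface') as HTMLElement;

surface.addEventListener('pointerdown', (e: PointerEvent) => {
  if (e.pointerType === 'pen') {
    console.log(`pen down at (${e.clientX}, ${e.clientY}), pressure ${e.pressure}`);
  } else if (e.pointerType === 'touch') {
    console.log(`touch contact ${e.pointerId} down at (${e.clientX}, ${e.clientY})`);
  }
});

// On a purely capacitive digitizer that cannot tell a pen tip from a finger,
// every contact shows up here as 'touch', and pen-specific roles simply cannot
// be assigned; that is the limitation called out above.
```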

Which gets us back to the Manual Deskterity demo on the Surface. To be honest, the whole theme in the video about the digital drafting table is a bit of a head fake. I was thinking slates the whole time I was working on the project; it just wasn’t possible to try the ideas in a slate form factor at the time. But that’s definitely where we intended to go with the research. And it’s where we still intend to go, using devices like the Asus EP121 to probe further ahead and see what other issues, techniques, or new possibilities arise.

Because I’m still totally convinced that combined pen and touch is the way of the future. It might not happen now, or two years from now, or even five years from now. But the devices of my dreams, the sleek devices that populate my vision of what we’ll be carrying around as the 21st century passes out of the sun-drenched days of its youth, well, they all have a fantastic user experience that incorporates both pen and touch, and everyone just expects things to work that way.

Even Mr. Bigbody.

Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input

With certain obvious multi-touch devices garnering a lot of attention these days, it’s easy to forget that touch does not necessarily make an interface “magically delicious” as it were. To paraphrase my collaborator Bill Buxton, we have to remember that:

Everything, including touch, is best for something and worst for something else.

Next week at the annual CHI 2010 Conference on Human Factors in Computing Systems, I’ll be presenting some new research that investigates the little-explored area of simultaneous pen and touch interaction.

Now, what does this really mean? Building on the message we have articulated in black and white above, we observe the following:

The future of direct interaction on displays is not about Touch.

Likewise, it is not about the Pen.

Nor is it about direct interaction on displays with Pen OR Touch.

It is about Pen AND Touch, simultaneously, designed such that one complements the other.


That is, we see pen and touch as complementary, not competitive, modalities of interaction. By leveraging people’s natural use of pen and paper in the real world, we can design innovative new user experiences that exploit the combination of pen and multi-touch input to support non-physical yet natural and compelling interactions.

[Figure: Examples of behaviors observed during natural interaction with real-world pens and paper notebooks]

Our research delves into the question of how one should use pen and touch in interface design. This really boils down to three questions: (1) What is the role of the pen? (2) What is the role of multi-touch? And (3) what is the role of simultaneous pen and touch? The perspective that we have arrived at in our research is the following: the pen writes, touch manipulates, and the combination of pen + touch yields new tools:

[Figure: The pen writes, touch manipulates]

[Figure: Pen + touch = new tools]
I’ve now posted a video of the research on YouTube that shows a bunch of the techniques we explored. We have implemented these on the Microsoft Surface, using a special IR-emitting pen that we constructed. However, you can imagine this technology coming to laptops, tablets, and slates in the near future. The N-Trig hardware on the Dell XT2, for example, already has this capability, although as a practical matter it is not currently possible to author applications that utilize simultaneous pen and touch; hence our exploration of the possibilities on the Microsoft Surface.
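To make the division of labor concrete, here’s a minimal sketch of the routing policy, assuming a Pointer Events style API rather than the platform we actually built on; the scene-graph lookup and the contextual tools named in the comments are illustrative stand-ins, not the exact gesture set from the paper.

```typescript
// Sketch of the division-of-labor policy: pen alone writes, touch alone
// manipulates, and a pen stroke made while a finger holds an object is
// routed to a contextual "new tool".

interface SceneObject { id: string; }

const heldObjects = new Map<number, SceneObject>();  // pointerId -> object held under that finger

// Placeholder: a real implementation would look up the object under (x, y)
// in the application's scene graph.
function hitTest(x: number, y: number): SceneObject | null {
  return null;
}

function onPointerDown(e: PointerEvent) {
  if (e.pointerType === 'touch') {
    const obj = hitTest(e.clientX, e.clientY);
    if (obj) heldObjects.set(e.pointerId, obj);  // a finger on an object holds it
  }
}

function onPointerMove(e: PointerEvent) {
  if (e.pointerType === 'pen') {
    if (heldObjects.size > 0) {
      // Pen + touch = new tools: a pen stroke made while an object is held is
      // routed to a contextual tool (for instance, cutting along the stroke or
      // peeling off a copy; the examples are illustrative) rather than leaving ink.
      const held = [...heldObjects.values()][0];
      console.log(`contextual tool on object ${held.id}`);
    } else {
      // The pen writes: with no object held, pen input is plain ink.
      console.log(`ink at (${e.clientX}, ${e.clientY})`);
    }
  } else if (e.pointerType === 'touch' && !heldObjects.has(e.pointerId)) {
    // Touch manipulates: fingers not holding an object pan/zoom the workspace.
    console.log('pan/zoom the workspace');
  }
}

function onPointerUp(e: PointerEvent) {
  heldObjects.delete(e.pointerId);
}

const page = document.getElementById('page') as HTMLElement;
page.addEventListener('pointerdown', onPointerDown);
page.addEventListener('pointermove', onPointerMove);
page.addEventListener('pointerup', onPointerUp);
```

Everything beyond that routing rule is a question of which combinations of a held object plus a pen stroke deserve to be promoted to tools, which is the design space the video and the UIST paper explore.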

Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input

The name, of course, is a simple pun on “Manual Dexterity” – in our case, in the context of shuffling papers and content on a “digital desk.” Hence “manual deskterity” would be the metric of efficacy in paper-shuffling and other such activities of manual organization and arrangement of documents in your workspace. This name also has the virtue that it shot a blank on <name your favorite search engine>. Plus I have a weakness for unpronounceable neologisms.

Special thanks to my colleagues (co-authors on the paper) who contributed to the work, particularly Koji Yatani, who contributed many of the novel ideas and techniques and did all of the heavy lifting in terms of the technical implementation:

Koji Yatani (Microsoft Research Intern, Ph.D. from the University of Toronto, and as of 2011 a full-time employee at Microsoft Research’s Beijing lab)

Michel Pahud

Nicole Coddington

Jenny Rodenhouse

Andy Wilson

Hrvoje Benko

Bill Buxton

Paper: Pen + Touch = New Tools

Hinckley, K., Yatani, K., Pahud, M., Coddington, N., Rodenhouse, J., Wilson, A., Benko, H., Buxton, B., Pen + Touch = New Tools. In Proc. UIST 2010 Symposium on User Interface Software and Technology, New York, NY, pp. 27-36. [PDF] [video .WMV]

Watch Pen + Touch = New Tools on YouTube

Paper: Direct Display Interaction via Simultaneous Pen + Multi-touch Input

Hinckley, K., Pahud, M., Buxton, B., Direct Display Interaction via Simultaneous Pen + Multi-touch Input. In Society for Information Display (SID) Symposium Digest of Technical Papers, May 2010, Volume 41(1), Session 38, pp. 537-540. [PDF]

This paper had no accompanying video, but you can see the system in action in this YouTube video:

Watch Simultaneous Pen + Touch video on YouTube

Paper: Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input

Hinckley, K., Yatani, K., Pahud, M., Coddington, N., Rodenhouse, J., Wilson, A., Benko, H., Buxton, B., Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input. In CHI 2010 Extended Abstracts: Proc. CHI 2010 Conf. on Human Factors in Computing Systems, Atlanta, GA, pp. 2793-2802. [PDF] [video .WMV]

Watch Manual Deskterity video on YouTube

Journal Article: Synchronous Gestures in Multi-Display Environments

Ramos, G., Hinckley, K., Wilson, A., and Sarin, R., Synchronous Gestures in Multi-Display Environments. In Human–Computer Interaction, Special Issue: Ubiquitous Multi-Display Environments, Volume 24, Issue 1-2, 2009, pp. 117-169. [Author’s Manuscript PDF – not final proof]

Book Chapter: Input Technologies and Techniques (in Human-Computer Interaction Fundamentals)

Hinckley, K., Input Technologies and Techniques. Chapter 9 in Human-Computer Interaction Fundamentals (Human Factors and Ergonomics), ed. by Sears, A., and Jacko, J., CRC Press, Boca Raton, FL. Published March 2, 2009. Originally appeared as Chapter 9 in Human-Computer Interaction Handbook, 2nd Edition. [PDF of author’s manuscript – not final].

Paper: ShapeTouch: Leveraging Contact Shape on Interactive Surfaces

Cao, X., Wilson, A.D., Balakrishnan, R., Hinckley, K., Hudson, S.E., ShapeTouch: Leveraging Contact Shape on Interactive Surfaces. In TABLETOP 2008, 3rd IEEE International Workshop on Horizontal Interactive Human Computer Systems, Oct 1-3, 2008, Amsterdam, pp. 129-136. [PDF] [video .WMV]

Book Chapter: Input Technologies and Techniques, 2007 Edition

Hinckley, K., Input Technologies and Techniques. Chapter 9 in The Human-Computer Interaction Handbook, 2nd Edition, ed. by Sears, A., and Jacko, J., CRC Press, Boca Raton, FL. Written in 2006. Published Sept 19, 2007. Also reprinted as Chapter 9 in Human-Computer Interaction Fundamentals. [PDF of author’s manuscript – not final]. See also the 2012 and 2002 editions.