Category Archives: device form-factors

Paper: WritLarge: Ink Unleashed by Unified Scope, Action, & Zoom

Electronic whiteboards remain surprisingly difficult to use in the context of creativity support and design.

A key problem is that once a designer places strokes and reference images on a canvas, actually doing anything useful with key parts of that content involves numerous steps.

Hence, with digital ink, scope—that is, selection of content—is a central concern, yet current approaches often require encircling ink with a lengthy lasso, if not switching modes via round-trips to the far-off edges of the display.

Only then can the user take action, such as to copy, refine, or re-interpret their informal work-in-progress.

Such is the stilted nature of selection and action in the digital world.

But it need not be so.

By contrast, consider an everyday manual task such as sandpapering a piece of woodwork to smooth away its rough edges. Here, we use our hands to grasp and bring to the fore—that is, select—the portion of the work-object—the wood—that we want to refine.

And because we are working with a tool—the sandpaper—the hand employed for this ‘selection’ sub-task is typically the non-preferred one, which skillfully manipulates the frame-of-reference for the subsequent ‘action’ of sanding, a complementary sub-task articulated by the preferred hand.

Therefore, in contrast to the disjoint subtasks foisted on us by most interactions with computers, the above example shows how complementary manual activities lend a sense of flow that “chunks” selection and action into a continuous selection-action phrase. By manipulating the workspace, the off-hand shifts the context of the actions to be applied, while the preferred hand brings different tools to bear—such as sandpaper, file, or chisel—as necessary.

The main goal of the WritLarge project, then, is to demonstrate similar continuity of action for electronic whiteboards. This motivated free-flowing, close-at-hand techniques that unify selection and action via bimanual pen+touch interaction.


Accordingly, we designed WritLarge so that the user can simply gesture as follows:

With the thumb and forefinger of the non-preferred hand, just frame a portion of the canvas.
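To make the idea concrete, here is a rough sketch (entirely illustrative; the names and data structures are hypothetical, not the WritLarge implementation) of how two non-preferred-hand contact points might define the selection scope:

```python
from dataclasses import dataclass

@dataclass
class Touch:
    """A single touch contact on the canvas, in canvas coordinates."""
    x: float
    y: float

def framing_selection(thumb: Touch, forefinger: Touch):
    """Return the axis-aligned selection rectangle (left, top, right, bottom)
    framed by the thumb and forefinger of the non-preferred hand."""
    left, right = sorted((thumb.x, forefinger.x))
    top, bottom = sorted((thumb.y, forefinger.y))
    return (left, top, right, bottom)
```

For example, framing with the thumb at (120, 300) and the forefinger at (480, 90) would select the rectangle (120, 90, 480, 300), regardless of which finger lands first or where.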

And, unlike many other approaches to “handwriting recognition,” this approach to selecting key portions of an electronic whiteboard leaves the user in complete control of what gets recognized—as well as when recognition occurs—so as not to break the flow of creative work.

Indeed, building on this foundation, we designed ways to shift between flexible representations of freeform content by simply moving the pen along semantic, structural, and temporal axes of movement.

See our demo reel below for some jaw-dropping demonstrations of the possibilities for digital ink opened up by this approach.

Watch WritLarge: Ink Unleashed by Unified Scope, Action, and Zoom video on YouTube


Haijun Xia, Ken Hinckley, Michel Pahud, Xiao Tu, and Bill Buxton. 2017. WritLarge: Ink Unleashed by Unified Scope, Action, and Zoom. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3227-3240. Denver, Colorado, United States, May 6-11, 2017. Honorable Mention Award (top 5% of papers).
https://doi.org/10.1145/3025453.3025664

[PDF] [Watch 30 second preview on YouTube]

Paper: Thumb + Pen Interaction on Tablets

Modern tablets support simultaneous pen and touch input, but it remains unclear how to best leverage this capability for bimanual input when the nonpreferred hand holds the tablet.

We explore Thumb + Pen interactions that support simultaneous pen and touch interaction, with both hands, in such situations. Our approach engages the thumb of the device-holding hand, such that the thumb interacts with the touch screen in an indirect manner, thereby complementing the direct input provided by the preferred hand.

For instance, the thumb can determine how pen actions (articulated with the opposite hand) are interpreted.


Alternatively, the pen can point at an object, while the thumb manipulates one or more of its parameters through indirect touch.

Our techniques integrate, in a novel way, concepts that derive from radial menus (also known as marking menus) and spring-loaded modes maintained by muscular tension — as well as indirect input — in ways that leverage multi-touch conventions.
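As a loose sketch of what "spring-loaded" means here (a hypothetical illustration under my own naming, not the paper's code), the thumb's contact holds a mode active, and pen input is interpreted according to whatever mode is currently held:

```python
class ThumbPenController:
    """Minimal sketch of a spring-loaded mode: the thumb's contact with an
    on-screen control holds a mode active; lifting the thumb springs back
    to the default. Pen strokes are interpreted per the held mode."""

    def __init__(self):
        self.mode = "ink"          # default: the pen simply leaves ink

    def thumb_down(self, mode: str):
        self.mode = mode           # mode engaged only while the thumb is down

    def thumb_up(self):
        self.mode = "ink"          # muscular tension released: revert

    def pen_stroke(self, stroke: str) -> str:
        # The same pen gesture means different things under different modes.
        return f"{self.mode}:{stroke}"
```

So the identical pen stroke produces ink by default, but becomes (say) a highlight while the thumb holds that mode, with no round-trip to a toolbar.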

Our overall approach takes the form of a set of probes, each representing a meaningfully distinct class of application. They serve as an initial exploration of the design space, at a level that will help determine the feasibility of supporting bimanual interaction in such contexts, and the viability of the Thumb + Pen techniques in so doing.

Watch Thumb + Pen Interaction on Tablets video on YouTube


Ken Pfeuffer, Ken Hinckley, Michel Pahud, and Bill Buxton. 2017. Thumb + Pen Interaction on Tablets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3254-3266. Denver, Colorado, United States, May 6-11, 2017.
https://doi.org/10.1145/3025453.3025567

[PDF] [Watch 30 second preview on YouTube]

Paper: As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences

This work sheds light on gaps and discrepancies between the experiences afforded by analog pens and their digital counterparts.

Despite the long history (and recent renaissance) of digital pens, the literature still lacks a comprehensive survey of what types of marks people make and what motivates them to use ink—both analog and digital—in daily life.


To capture the diversity of inking behaviors and tease out the unique affordances of pen and ink, we conducted a diary study with 26 participants from diverse backgrounds.

From analysis of 493 diary entries we identified 8 analog pen-and-ink activities and 9 affordances of pens. We contextualized and contrasted these findings using a survey with 1,633 respondents and a follow-up diary study with 30 participants, this time focused on digital pens.

Our analysis revealed many gaps and research opportunities based on pen affordances not yet fully explored in the literature.


Yann Riche, Nathalie Henry Riche, Ken Hinckley, Sarah Fuelling, Sarah Williams, and Sheri Panabaker. 2017. As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3241-3253. Denver, Colorado, United States, May 6-11, 2017.
https://doi.org/10.1145/3025453.3025716

[PDF] [CHI 2017 Talk Slides (PowerPoint)]

Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they often have touch input in mind. And so people tend to take that for granted. What could be simpler than placing a finger — or with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones, tablets, drafting-tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic-whiteboard such as the 84″ Surface Hub, multiple people can work upon the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with your off-hand as you emphasize the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try and paper over the resulting ambiguities, especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can sense the impulse resulting from making finger-contact with a display through its embedded motion sensors (accelerometers and gyros). If the fitness band and the display exchange information and IDs, the touch-event generated can then be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can then be separated from one another as well, and used as a lightweight form of authentication.
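The matching step can be sketched roughly as follows (the timing window, function names, and data shapes here are illustrative assumptions of mine, not the actual system): when a touch-down occurs, find the wearable motion impulse closest in time within a small window, and attribute the touch to that wearable's user and hand:

```python
def attribute_touch(touch_time_ms, impulses, window_ms=50):
    """Attribute a touch-down event to a (user, hand) pair by matching it
    against recent motion impulses reported by wearables.

    impulses: list of (timestamp_ms, user, hand) tuples from fitness bands.
    Returns the (user, hand) of the closest impulse within the window,
    or None if no wearable impulse matches (an ordinary anonymous touch).
    """
    best = None
    for ts, user, hand in impulses:
        dt = abs(ts - touch_time_ms)
        if dt <= window_ms and (best is None or dt < best[0]):
            best = (dt, user, hand)
    return (best[1], best[2]) if best else None
```

An unmatched touch (no impulse within the window) simply remains an ordinary, anonymous touch event, so the technique degrades gracefully for users without wearables.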

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

Particularly, in a skilled manipulative task such as writing on a piece of paper, Yves pointed out (assuming a right-handed individual) three general principles:

  • Left hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-Left Spatial Reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out these three principles are very deep and general, and they can yield great insight into how to design interactions that fully take advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.

This paper is a long way from a complete solution to the paucity of modern touch-screens but hopefully by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can lead to more truly “natural” touch interaction — allowing for simultaneous interaction by multiple users, both of whom can make full and complementary use of their hard-won manual skill with both hands — in the near future.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16). ACM, New York, NY, USA, pp. 287-300. Tokyo, Japan, Oct. 16-19, 2016. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [Full video – MP4] [Watch 30 second preview on YouTube]

Award: CHI Academy, 2014 Inductee

I’ve been a bit remiss in posting this, but as of April 2014, I’m a member of the CHI Academy, which is an honorary group that recognizes leaders in the field of Human-Computer Interaction.

Among whom, apparently, I can now include myself, strange as that may seem.

I was completely surprised by this and can honestly say I never expected any special recognition. I’ve just been plugging away on my little devices and techniques, writing papers here and there, but I suppose over the decades it all adds up. I don’t know if this means that my work is especially good or that I’m just getting older, but either way I appreciate the gesture of recognition from my peers in the field.

I was in a bit of a ribald mood when I got the news, so when the award organizers asked me to reply with my bio I figured what the heck and decided to have some fun with it:

Ken Hinckley is a Principal Researcher at Microsoft Research, where he has spent the last 17 years investigating novel input devices, device form-factors, and modalities of interaction.

He feels fortunate to have had the opportunity to collaborate with many CHI Academy members while working there, including noted trouble-makers such as Bill Buxton, Patrick Baudisch, and Eric Horvitz—as well as George Robertson, whom he owes a debt of gratitude for hiring him fresh out of grad school.

Ken is perhaps best known for his work on sensing techniques, cross-device interaction, and pen computing. He has published over 75 academic papers and is a named inventor on upwards of 150 patents. Ken holds a Ph.D. in Computer Science from the University of Virginia, where he studied with Randy Pausch.

He has also published fiction in professional markets including Nature and Fiction River, and prides himself on still being able to hit 30-foot jump shots at age 44.

Not too shabby.

Now, in the spirit of full disclosure, there are no real perks associated with being a CHI Academy member as far as I’ve been able to figure. People do seem to ask me for reference letters just a tiny bit more frequently. And I definitely get more junk email from organizers of dubious-sounding conferences than before. No need for research heroics if you want a piece of that, just email me and I’d be happy to forward them along.

But the absolute most fun part of the whole deal was a small private celebration that noted futurist Bill Buxton organized at his ultra-modern home fronting Lake Ontario in Toronto, and where I was joined by my Microsoft Research colleagues Abigail Sellen, her husband Richard Harper, and John Tang. Abi is already a member (and an occasional collaborator whom I consider a friend), and Richard and John were inducted along with me into the Academy in 2014.

Bill Buxton needs no introduction among the avant garde of computing. And he’s well known in the design community as well, not to mention publishing on equestrianism and mountaineering, among other topics. In particular, his collection of interactive devices is arguably the most complete ever assembled. Only a tiny fraction of it is currently documented on-line. It contains everything from the world’s first radio and television remote controls, to the strangest keyboards ever conceived by mankind, and even the very first handcrafted wooden computer mice that started cropping up in the 1960s.

The taxi dropped me off, I rang the doorbell, and when a tall man with rock-star hair gone gray and thinned precipitously by the ravages of time answered the door, I inquired:

“Is this, by any chance, the Buxton Home for Wayward Input Devices?”

To which Bill replied in the affirmative.

I indeed had the right place, I would fit right in here, and he showed me in.

Much of Bill’s collection lives off the premises, but his below-ground sanctum sanctorum was still walled by shelves bursting with transparent tubs packed with handheld gadgets that had arrived far before their time, historical mice and trackballs, and hybrid bastard devices of every conceivable description. And what little space remained was packed with books on design, sketching, and the history of mountaineering and the fur trade.

Despite his home office being situated below grade, natural light poured down into it through the huge front windows facing the inland sea, owing to the home’s modern design. It was a totally awesome space that would have looked right at home on the front page of Architectural Digest.

Bill showed us his origami kayak on the back deck, treated us all to some hand-crafted martinis in the open-plan kitchen, and arranged for transportation to the awards dinner via a 10-person white stretch limousine. We even made a brief pit stop so Bill could dash out and pick up a bottle of champagne at a package store.

Great fun.

I’ve known Bill since 1994, when he visited Randy Pausch’s lab at the University of Virginia, and ever since people have often assumed that he was my advisor. He never was in any official capacity, but I read all of his papers in that period and in many ways I looked up to him as my research hero. And now that we’ve worked together as colleagues for nearly 10 years (!), and with Randy’s passing, I often do still see him as a mentor.

Or is that de-mentor?

Probably a little bit of each, in all honesty (grin).

Yeah, the award was pretty cool and all, but it was the red carpet thrown out by Bill that I’ll always remember.

Hinckley, K., CHI Academy. Inducted April 27th, 2014 at CHI 2014 in Toronto, Ontario, Canada, for career research accomplishments and service to the ACM SIGCHI community (Association of Computing Machinery’s Special Interest Group on Computer-Human Interaction). [Ken Hinckley CHI Academy Bio]

The CHI Academy is an honorary group of individuals who have made substantial contributions to the field of human-computer interaction. These are the principal leaders of the field, whose efforts have shaped the disciplines and/or industry, and led the research and/or innovation in human-computer interaction. The criteria for election to the CHI Academy are:

  • Cumulative contributions to the field.
  • Impact on the field through development of new research directions and/or innovations.
  • Influence on the work of others.
  • Reasonably active participant in the ACM SIGCHI community.

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]