Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they often have touch input in mind. And so people tend to take that for granted. What could be simpler than placing a finger — or with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones, tablets, drafting-tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic-whiteboard such as the 84″ Surface Hub, multiple people can work upon the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with your off-hand as you emphasize the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try and paper over the resulting ambiguities, especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can sense the impulse resulting from making finger-contact with a display through its embedded motion sensors (accelerometers and gyros). If the fitness band and the display exchange information and IDs, the resulting touch event can then be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can likewise be separated from one another, and even used as a lightweight form of authentication.
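To make the idea concrete, here is a minimal sketch of the kind of correlation involved, under the assumption of a shared clock between display and wearables: each touch-down is matched against recent impulse peaks in the motion streams of nearby wearables. The names and parameters (ImuSample, Wearable, attribute_touch, the 50 ms window) are hypothetical illustrations, not the implementation from the paper.

```python
# Hypothetical sketch: attribute a touch-down to a (user, hand) pair by
# correlating its timestamp with impulse peaks in wearable accelerometer data.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ImuSample:
    t: float          # seconds, on a clock synchronized with the display
    accel_mag: float  # magnitude of acceleration with gravity removed

@dataclass
class Wearable:
    user_id: str
    hand: str              # "left" or "right": which wrist the band is worn on
    samples: List[ImuSample]

@dataclass
class TouchEvent:
    t: float               # touch-down timestamp reported by the display
    x: float
    y: float

def attribute_touch(touch: TouchEvent,
                    wearables: List[Wearable],
                    window: float = 0.05,    # +/- 50 ms around touch-down
                    threshold: float = 2.0   # minimum impulse, in m/s^2
                    ) -> Optional[Tuple[str, str]]:
    """Return (user_id, hand) for the wearable whose motion impulse best
    coincides with this touch-down, or None if no wearable responds."""
    best, best_peak = None, threshold
    for w in wearables:
        peak = max((s.accel_mag for s in w.samples
                    if abs(s.t - touch.t) <= window), default=0.0)
        if peak > best_peak:
            best, best_peak = (w.user_id, w.hand), peak
    return best
```

In practice the window and threshold would have to be tuned against clock-synchronization error and incidental hand motion, but the basic move is the same: a touch that coincides with a sensed impulse on a known wearable inherits that wearable's identity.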

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

In particular, for a skilled manipulative task such as writing on a piece of paper, Yves pointed out three general principles (assuming a right-handed individual):

  • Left hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-Left Spatial Reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out these three principles are very deep and general, and they can yield great insight into how to design interactions that fully take advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.
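To give a flavor of how these principles might translate into interface logic, the sketch below routes hand-tagged touches (for example, tagged via the wearable sensing described above) so that the non-preferred hand sets the frame of reference within which preferred-hand input is interpreted. The HandTouch and GuiardCanvas names are purely illustrative assumptions, not the interaction techniques from the paper.

```python
# Illustrative sketch of Guiard's right-to-left spatial reference: the
# non-preferred hand sets a coarse frame; the preferred hand works within it.
from dataclasses import dataclass

@dataclass
class HandTouch:
    hand: str   # "preferred" or "non-preferred", e.g. from wearable attribution
    x: float
    y: float

class GuiardCanvas:
    def __init__(self) -> None:
        self.frame_origin = (0.0, 0.0)

    def on_touch(self, touch: HandTouch) -> None:
        if touch.hand == "non-preferred":
            # Coarse, infrequent action: (re)position the work frame.
            self.frame_origin = (touch.x, touch.y)
        else:
            # Fine, frequent action: interpret input relative to the frame.
            fx = touch.x - self.frame_origin[0]
            fy = touch.y - self.frame_origin[1]
            print(f"ink at frame-relative point ({fx:.1f}, {fy:.1f})")

canvas = GuiardCanvas()
canvas.on_touch(HandTouch("non-preferred", 100.0, 200.0))  # frame set first
canvas.on_touch(HandTouch("preferred", 130.0, 220.0))      # ink at (30.0, 20.0)
```

The specific logic matters far less than the enabling condition: once touch events carry which hand produced them, an interface can honor the precedence, scale, and frame-of-reference asymmetries that Guiard describes.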

This paper is a long way from a complete solution to the impoverished notion of touch on modern touch-screens. But hopefully, by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can lead in the near future to more truly “natural” touch interaction — allowing for simultaneous interaction by multiple users, each of whom can make full and complementary use of their hard-won manual skill with both hands.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16). ACM, New York, NY, USA, 287-300. Tokyo, Japan, Oct. 16-19, 2016. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [Full video – MP4] [Watch 30 second preview on YouTube]


Book Chapter: Inking Outside the Box — How Context Sensing Affords More Natural Pen (and Touch) Computing

“Pen” and “Touch” are terms that tend to be taken for granted these days in the context of interaction with mobiles, tablets, and electronic-whiteboards alike.

Yet, as I have discussed in many articles here, even in the simplest combination of these modalities — that of “Pen + Touch” — new opportunities for interaction design abound.

And from this perspective we can go much further still.

Take “touch,” for example.

What does this term really mean in the context of input to computers?

Is it just when the user intentionally moves a finger into contact with the screen?

What if the palm accidentally brushes the display instead — is that still “touch?”

Or how about the off-hand, which plays a critical but oft-unnoticed role in gripping and skillfully orienting the device for the action of the preferred hand? Isn’t that an important part of “touch” as well?

Well, there’s good reason to argue that from the human perspective, these are all “touch,” even though most existing devices only generate a touch-event at the moment when a finger comes into contact with the screen.

Clearly, this is a very limited view; with greater insight into the context surrounding a particular touch (or pen, or pen + touch) event, we could enhance the naturalness of working with computers considerably.
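As a concrete (and deliberately hypothetical) illustration of what that richer view might look like, a context-laden touch event could carry far more than an (x, y) contact point. None of the fields below correspond to an actual device API or to the chapter's implementation; they simply name the kinds of context discussed here.

```python
# Hypothetical, illustration-only record of a "touch" enriched with context.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RichTouchEvent:
    x: float
    y: float
    contact_kind: str                # "fingertip", "palm", "knuckle", ...
    intentional: bool                # an accidental palm brush would be False
    hand: Optional[str] = None       # "left" / "right", if known
    user_id: Optional[str] = None    # who is touching, if known
    gripping_device: bool = False    # off-hand gripping/orienting the device
```

Even a modest record like this would let applications distinguish deliberate finger input from palm contact, and credit the off-hand's gripping role rather than discarding it.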

This chapter, then, works through a series of examples and perspectives which demonstrate how much richness there is in such a re-conception of direct interaction with computers, and thereby suggests some directions for future innovations and richer, far more expressive interactions.


Hinckley, K., and Buxton, B. 2016. Inking Outside the Box: How Context Sensing Affords More Natural Pen (and Touch) Computing. Appears as Chapter 3 in Revolutionizing Education with Digital Ink: The Impact of Pen and Touch Technology on Education (Human-Computer Interaction Series), First Edition. Ed. by Tracy Hammond, Stephanie Valentine, & Aaron Adler. Springer, June 13, 2016. [PDF – Author’s Draft]

P.S.: I’ve linked to the draft of the chapter that I submitted to the publisher, rather than the final version, as the published copy-edit muddied the writing by a gross misapplication of the Chicago Manual of Style, and in so doing introduced many semantic errors as well. Despite my best efforts I was not able to convince the publisher to fully reverse these undesired and unfortunate “improvements.” As such, my draft may contain some typographical errors or other minor discrepancies from the published version, but it is the authoritative version as far as I am concerned.


 

Short Story: Six Names for the End


Time to say goodbye.

My latest confabulation is now available at Nature, in the award-winning Futures column. It was a fun piece of fiction to write — short, sharp, and packing a mighty wallop — and I hope that you enjoy reading it, too.

As well, you can find my post about the writing of this story on The Futures Conditional blog, also hosted by Nature.

Coming up shortly: my next short story is slated to appear in Interzone issue #265 in July. It’s a mighty strange one that steps on pretty much every third rail known to mankind, and even its title has the potential to raise a large number of eyebrows.

What can I say, I try to keep things interesting around here. (grin).

“Six Names for the End” by Ken Hinckley. In Nature, Vol. 534, No. 7607, p. 430. June 15, 2016. Futures column. [Available to read online for free]

Published by Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved. DOI: 10.1038/534430a

Paper: Pre-Touch Sensing for Mobile Interaction

I have to admit it: I feel as if I’m looking at the sunrise of what may be a whole new way of interacting with mobile devices.

When I think about it, the possibilities bathe my eyes in a golden glow, and the warmth drums against my skin.

And in particular, my latest research peers out across this vivid horizon, to where I see touch — and mobile interaction with touchscreens in particular — evolving in the near future.

My job as a seasoned researcher (which in reality is some strange admixture of interaction designer, innovator, and futurist) is not necessarily to predict the future, but rather to invent it via extrapolation from a sort of visionary present that occupies my waking dreams.

I see things not as they are, but as they could be, through the lens afforded by a (usually optimistic) extrapolation from extant technologies, or those I know are likely to soon become more widely available.

With regards to interaction with touchscreens in particular, it has been clear to me for some time that the ability to sense the fingers as they approach the device — well before contact with the screen itself — is destined to become commonplace on commodity devices.

This is interesting for a number of reasons.

And no, the ability to do goofy gestures above the screen, waving at it frantically (as if it were a fancy-pants towel dispenser in a public restroom) in some dim hope of receiving an affirmative response, is not one of them.

In terms of human capabilities, one obviously cannot touch the screen of a mobile device without approaching it first.

But what often goes unrecognized is that one also must hold the device, typically in the non-preferred hand, as a precursor to touch. Hence, how you hold the device — the pattern of your grip and which hand you hold it in — is additional context that is more-or-less wholly ignored by current mobile devices.

So in this new work, my colleagues and I collectively refer to these two precursors of touch — approach and the need to grip the device — as pre-touch.

And it is my staunch belief that the ability to sense such pre-touch information could radically transform the mobile ‘touch’ interfaces that we all have come to take for granted.
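As a thought experiment (and not the sensor API of our prototype), the sketch below shows how approach and grip might be treated as first-class inputs: interface chrome fades in as a finger nears the screen, and transient controls anchor near the thumb of the gripping hand. The PreTouchFrame fields and both function names are assumptions for illustration only.

```python
# Hypothetical sketch: adapt the UI from pre-touch context (approach + grip).
from dataclasses import dataclass

@dataclass
class PreTouchFrame:
    hover_distance_mm: float   # estimated finger height above the screen
    grip_hand: str             # "left", "right", or "unknown"

def chrome_opacity(frame: PreTouchFrame, max_range_mm: float = 40.0) -> float:
    """Keep controls hidden until a finger approaches, then fade them in."""
    if frame.hover_distance_mm >= max_range_mm:
        return 0.0
    return 1.0 - frame.hover_distance_mm / max_range_mm

def control_anchor(frame: PreTouchFrame) -> str:
    """Place transient controls within easy reach of the gripping thumb."""
    return {"left": "bottom-left", "right": "bottom-right"}.get(
        frame.grip_hand, "bottom-center")

frame = PreTouchFrame(hover_distance_mm=10.0, grip_hand="right")
print(chrome_opacity(frame))   # 0.75: controls mostly faded in
print(control_anchor(frame))   # bottom-right
```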

You can get a sense of these possibilities, all implemented on a fully functional mobile phone with pre-touch sensing capability, in our demo reel below:

The project received a lot of attention, and coverage from many of the major tech blogs and other media outlets, for example:

  • The Verge (“Microsoft’s hover gestures for Windows phones are magnificent”)
  • SlashGear (“Smartphones next big thing: ‘Pre-Touch’”)
  • Business Insider (“Apple should definitely copy Microsoft’s incredible finger-sensing smartphone technology”)
  • And Fast Company Design (and again in “8 Incredible Prototypes That Show The Future Of Human-Computer Interaction.”)

But I rather liked the take that Silicon Angle offered, which took my concluding statement from the video above:

Taken as a whole, our exploration of pre-touch hints that the evolution of mobile touch may still be in its infancy – with many possibilities, unbounded by the flatland of the touchscreen, yet to explore.

 And then responded as follows:

This is the moon-landing-esque conclusion Microsoft comes to after demonstrating its rather cool pre-touch mobile technology, i.e., a mobile phone that senses what your fingers are about to do.

While this evolution of touch has been coming in the research literature for at least a decade now, what exactly to do with above- and around-screen sensing (especially in a mobile setting) has been far from obvious. And that’s where I think our work on pre-touch sensing techniques for mobile interaction distinguishes itself, and in so doing identifies some very interesting use cases that have never been realized before.

The very best of these new techniques possess a quality that I love, namely that they have a certain surprising obviousness to them:

The techniques seem obvious — but only in retrospect.

And only after you’ve been surprised by the new idea or insight that lurks behind them.

If such an effort is indeed the first hint of a moonshot for touch, well, that’s a legacy for this project that I can live with.


UPDATE: The talk I gave at the CHI 2016 conference on this project is now available. Have a gander if you are so inclined.


 

Thumb sensed as it hovers over the pre-touch mobile phone.

Ken Hinckley, Seongkook Heo, Michel Pahud, Christian Holz, Hrvoje Benko, Abigail Sellen, Richard Banks, Kenton O’Hara, Gavin Smyth, William Buxton. 2016. Pre-Touch Sensing for Mobile Interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 2869-2881. San Jose, CA, May 7-12, 2016. http://dx.doi.org/10.1145/2858036.2858095

[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]

Watch Pre-Touch Sensing for Mobile Interaction video on YouTube

 

Olfactory Pen Creates Giant Stink, Fails to Make it out of Research Skunkworks

Microsoft has shown incredible stuff this week at //build around Pen and Ink experiences — including simultaneous Pen + Touch experiences — as showcased for example in the great video on “Inking at the Speed of Thought” that is now available on Channel 9.

But I’ve had a skunkworks project — so to speak — in the works as part of my research (in the course of a career spanning decades) for a long time now, and this particular vision of the future of pen computing has consumed my imagination for at least the last 37 seconds or so. I’ve put a lot of thought into it.

It’s long been recognized that the sense of smell is a powerful index into the human memory. The scent of decaying pulp instantly brings to mind a favorite book, for example — in my case a volume of the masterworks of Edgar Allan Poe that was bequeathed to me by my grandfather.

Or who can ever forget the dizzying scent of their first significant other?

So I thought: Why not a digital pen with olfactory output?

Just think of the possibilities for this remarkable technology:

Not only can you ink faster than the speed of thought, but now you can stink faster than the speed of thought!

And I’m here to tell you that this is entirely possible. I think. I’ve already conceived of an amazing confabulation called the Aromatic Recombinator (patent pending; filed April 1st, 2016 at 2:55 PM; summarily rejected by patent office, 2:57 PM; earnest appeal filed in hope of an affirmative response, 2:59 PM; earnest response received: TBA).

Nonetheless, I can understand the patent office’s reticence.

Because with this remarkable technology one can arouse almost any scent, from the headiest of perfumes all the way to the most cloying musk, simply by scribbling on the screen of your tablet as if it were an electronic scratch-n-sniff card. A conception on which I have another patent pending, by the way.

Admittedly, some details remain sketchy, but I remain highly optimistic that the obvious problems can be sniffed out in short order.

And if not, rest assured, I will raise one hell of a stink.

[Happy April Fools’ Day.]

Editorial: Welcome to a New Era for TOCHI

Wherein I tell the true story of how I became an Editor-in-Chief.


Constant change is a given in the world of high technology.

But still it can come as a rude awakening when it arrives in human terms, and we find that it also applies to our friends, our colleagues, and the people we care for.

Not to mention ourselves!

So it was that I found myself, with a tumbler full of fresh coffee steaming between my hands, looking in disbelief at an email nominating me to assume the editorial helm of the leading journal in my field, the ACM Transactions on Computer-Human Interaction (otherwise known as TOCHI).

Ultimately (through no fault of my own) the ACM Publications Board was apparently seized by an episode of temporary madness. Deeming my formal application to have the necessary qualifications (with a dozen years of TOCHI associate editorship under my belt, a membership in the CHI Academy for recognized leaders in the field, and a Lasting Impact Award for my early work on mobile sensing, not to mention hundreds of paper rejections that apparently did no lasting damage to my reputation), the Board forthwith approved me to take over as Editor-in-Chief from my friend and long-time colleague, Shumin Zhai.

I’ve known Shumin since 1994, way back when I delivered my very first talk at CHI in the same session as he presented his latest results on “the silk cursor.” I took an instant liking to him, but I only came to fully appreciate over the years that followed that Shumin’s work ethic is legendary. As my colleague Bill Buxton (who sat on Shumin’s thesis committee) once put it, “Shumin works harder than any two persons I have ever known.”

And of course that applied to Shumin’s work ethic with TOCHI as well.

A man who now represents an astoundingly large pair of shoes that I must fill.

To say that I respect Shumin enormously, and the incredible progress he brought to the operation and profile of the journal during his six-year tenure, would be a vast understatement.

But after I got over the sheer terror of taking on such an important role, I began to get excited.

And then I got ideas.

Lots of ideas.

A few of them might even be good ones:

Ways to advance the journal.

Ways to keep operating at peak efficiency in the face of an ever-expanding stream of submissions.

And most importantly, ways to deliver even more impact to our readers, and on behalf of our authors.

Those same authors whose contributions make it possible for us to proclaim:

TOCHI is the flagship journal of the Computer-Human Interaction community.

So in this, my introductory editorial as the head honcho, new sheriff in town, and supreme benevolent dictator otherwise known as the Editor-in-Chief, I would like to talk about how the transition is going, give a few updates on TOCHI’s standard operating procedure, and—with an eye towards growing the impact of the journal—announce the first of what I hope will be many exciting new initiatives.

And in case it is not already obvious, I intend to have some fun with this.

All while preserving the absolutely rigorous and top-notch reputation of the journal, and the constant push for excellence in all of the papers that we publish.

[Read the rest at: http://dx.doi.org/10.1145/2882897]


Be sure to also check out The Editor’s Spotlight, highlighting the many strong contributions in this issue. This, along with the full text of my introductory editorial, is available without an ACM Digital Library subscription via the links below.


Ken Hinckley. 2016. Editorial: Welcome to a New Era for TOCHI. ACM Trans. Comput.-Hum. Interact. 23, 1: Article 1e (February 2016), 6 pages. http://dx.doi.org/10.1145/2882897

Ken Hinckley. 2016. The Editor’s Spotlight: TOCHI Issue 23:1. ACM Trans. Comput.-Hum. Interact. 23, 1: Article 1 (February 2016), 4 pages. http://dx.doi.org/10.1145/2882899