
Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they often have touch input in mind. And so people tend to take that for granted. What could be simpler than placing a finger — or with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones, tablets, drafting-tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic-whiteboard such as the 84″ Surface Hub, multiple people can work upon the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with your off-hand as you emphasize the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try and paper over the resulting ambiguities, especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can use its embedded motion sensors (accelerometers and gyroscopes) to sense the impulse resulting from finger-contact with a display. If the fitness band and the display exchange information and IDs, the resulting touch event can then be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can likewise be separated from one another, and even used as a lightweight form of authentication.
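To make the idea concrete, here is a minimal sketch of how such an association might work, assuming synchronized clocks and a band that streams high-pass-filtered acceleration magnitudes to the display. All of the names and numbers here (WearableSample, attributeTouch, the 50 ms window, the impulse threshold) are illustrative assumptions, not the actual implementation from the paper:

```typescript
// Hypothetical sketch: attribute a touch-down event to a wearable-instrumented
// hand by correlating its timestamp with an accelerometer impulse from the band.

interface WearableSample {
  userId: string;          // id exchanged between band and display (assumed)
  hand: "left" | "right";  // which wrist the band is worn on
  timestamp: number;       // ms, synchronized clock assumed
  impulse: number;         // magnitude of high-pass-filtered acceleration
}

interface TouchDown {
  touchId: number;
  timestamp: number;       // ms, from the touch digitizer
}

const MAX_LATENCY_MS = 50;     // assumed window between impulse and touch event
const IMPULSE_THRESHOLD = 1.5; // assumed minimum spike (in g) for a contact

// Return the best-matching wearable impulse for a touch-down, if any.
function attributeTouch(
  touch: TouchDown,
  recentSamples: WearableSample[]
): WearableSample | undefined {
  let best: WearableSample | undefined;
  for (const s of recentSamples) {
    const dt = Math.abs(s.timestamp - touch.timestamp);
    if (dt <= MAX_LATENCY_MS && s.impulse >= IMPULSE_THRESHOLD) {
      if (!best || dt < Math.abs(best.timestamp - touch.timestamp)) {
        best = s; // closest-in-time strong impulse wins
      }
    }
  }
  return best; // undefined => touch from an uninstrumented hand
}
```

Any touch that fails to match a wearable impulse can simply fall back to ordinary, unattributed touch handling.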

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

In particular, for a skilled manipulative task such as writing on a piece of paper, Yves pointed out three general principles (assuming a right-handed individual):

  • Left hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-Left Spatial Reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out these three principles are very deep and general, and they can yield great insight into how to design interactions that fully take advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.
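For instance (and purely as a hypothetical sketch, not a technique from the paper itself), once touches are attributed to hands as in the earlier sketch, a Guiard-abiding interface might route non-preferred-hand touches to coarse, frame-setting actions such as positioning a tool palette, and preferred-hand touches to fine work within that frame. The handler and stub names below are all illustrative assumptions:

```typescript
// Hypothetical sketch: routing hand-attributed touches per Guiard's
// right-to-left spatial reference. The non-preferred hand anchors a frame
// (e.g., a tool palette); the preferred hand does fine work within it.

interface AttributedTouch {
  touchId: number;
  x: number;
  y: number;
  userId: string;
  hand: "left" | "right" | "unknown"; // "unknown" = no wearable matched
}

// Per-user frame of reference, set by that user's non-preferred hand.
const frames = new Map<string, { x: number; y: number }>();

function onTouchDown(t: AttributedTouch, preferred: "left" | "right" = "right") {
  if (t.hand !== "unknown" && t.hand !== preferred) {
    // Non-preferred hand: coarse, infrequent action that sets the frame.
    frames.set(t.userId, { x: t.x, y: t.y });
    showPaletteAt(t.userId, t.x, t.y);
  } else {
    // Preferred hand (or unattributed touch): precise action within the frame.
    const frame = frames.get(t.userId);
    if (frame) {
      applyToolAt(t.userId, t.x - frame.x, t.y - frame.y); // frame-relative
    } else {
      drawInkAt(t.userId, t.x, t.y); // no frame yet: default direct action
    }
  }
}

// Stubs standing in for application behavior.
function showPaletteAt(userId: string, x: number, y: number) { /* ... */ }
function applyToolAt(userId: string, x: number, y: number) { /* ... */ }
function drawInkAt(userId: string, x: number, y: number) { /* ... */ }
```

Note how the three principles fall out of the routing: the frame must be set before it can be used (precedence), it is set by a single coarse tap rather than continuous fine motion (differentiation in scale), and preferred-hand coordinates are interpreted relative to it (right-to-left spatial reference).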

This paper is a long way from a complete solution to the impoverished state of touch on modern touch-screens. But by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can hopefully lead, in the near future, to more truly “natural” touch interaction: simultaneous interaction by multiple users, each of whom can make full and complementary use of their hard-won manual skill with both hands.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16), Tokyo, Japan, Oct. 16-19, 2016. ACM, New York, NY, USA, 287-300. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [Full video – MP4] [Watch 30 second preview on YouTube]


Editorial: Welcome to a New Era for TOCHI

Wherein I tell the true story of how I became an Editor-in-Chief.


Constant change is a given in the world of high technology.

But still it can come as a rude awakening when it arrives in human terms, and we find that it also applies to our friends, our colleagues, and the people we care for.

Not to mention ourselves!

So it was that I found myself, with a tumbler full of fresh coffee steaming between my hands, looking in disbelief at an email nominating me to assume the editorial helm of the leading journal in my field, the ACM Transactions on Computer-Human Interaction (otherwise known as TOCHI).

Ultimately (through no fault of my own) the ACM Publications Board was apparently seized by an episode of temporary madness and, deeming my formal application to have the necessary qualifications (with a dozen years of TOCHI associate editorship under my belt, a membership in the CHI Academy for recognized leaders in the field, a Lasting Impact Award for my early work on mobile sensing, and, not least, hundreds of paper rejections that apparently did no lasting damage to my reputation), they forthwith approved me to take over as Editor-in-Chief from my friend and long-time colleague, Shumin Zhai.

I’ve known Shumin since 1994, way back when I delivered my very first talk at CHI in the same session where he presented his latest results on “the silk cursor.” I took an instant liking to him, but it was only over the years that followed that I came to fully appreciate just how legendary Shumin’s work ethic is. As my colleague Bill Buxton (who sat on Shumin’s thesis committee) once put it, “Shumin works harder than any two persons I have ever known.”

And of course that applied to Shumin’s work ethic with TOCHI as well.

A man who now leaves an astoundingly large pair of shoes for me to fill.

To say that I respect Shumin enormously, and the incredible progress he brought to the operation and profile of the journal during his six-year tenure, would be a vast understatement.

But after I got over the sheer terror of taking on such an important role, I began to get excited.

And then I got ideas.

Lots of ideas.

A few of them might even be good ones:

Ways to advance the journal.

Ways to keep operating at peak efficiency in the face of an ever-expanding stream of submissions.

And most importantly, ways to deliver even more impact to our readers, and on behalf of our authors.

Those same authors whose contributions make it possible for us to proclaim:

TOCHI is the flagship journal of the Computer-Human Interaction community.

So in this, my introductory editorial as the head honcho, new sheriff in town, and supreme benevolent dictator otherwise known as the Editor-in-Chief, I would like to talk about how the transition is going, give a few updates on TOCHI’s standard operating procedure, and—with an eye towards growing the impact of the journal—announce the first of what I hope will be many exciting new initiatives.

And in case it is not already obvious, I intend to have some fun with this.

All while preserving the absolutely rigorous and top-notch reputation of the journal, and the constant push for excellence in all of the papers that we publish.

[Read the rest at: http://dx.doi.org/10.1145/2882897]


Be sure to also check out The Editor’s Spotlight, highlighting the many strong contributions in this issue. This, along with the full text of my introductory editorial, is available without an ACM Digital Library subscription via the links below.


Ken Hinckley. 2016. Editorial: Welcome to a New Era for TOCHI. ACM Trans. Comput.-Hum. Interact. 23, 1: Article 1e (February 2016), 6 pages. http://dx.doi.org/10.1145/2882897

Ken Hinckley. 2016. The Editor’s Spotlight: TOCHI Issue 23:1. ACM Trans. Comput.-Hum. Interact. 23, 1: Article 1 (February 2016), 4 pages. http://dx.doi.org/10.1145/2882899