Category Archives: mobile devices

Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they often have touch input in mind. And so people tend to take that for granted. What could be simpler than placing a finger — or with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones, tablets, drafting-tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic-whiteboard such as the 84″ Surface Hub, multiple people can work upon the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with your off-hand as you emphasize the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try and paper over the resulting ambiguities, especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can sense through its embedded motion sensors (accelerometers and gyros) the impulse that results from making finger-contact with a display. If the fitness band and the display exchange information and IDs, the resulting touch event can be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can likewise be separated from one another, and even serve as a lightweight form of authentication.
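To make that concrete, here is a minimal sketch of how such an association might work, matching each touch-down event to whichever wearable felt a motion impulse at (nearly) the same instant. This is my own illustration, not the actual implementation from the paper, and the names and threshold values are hypothetical:

```python
from dataclasses import dataclass

IMPULSE_THRESHOLD = 1.5  # accel magnitude (in g) treated as a contact impulse; assumed value
MATCH_WINDOW_MS = 50     # maximum skew between touch-down and sensed impulse; assumed value

@dataclass
class ImuSample:
    timestamp_ms: int
    accel_magnitude: float  # magnitude of the 3-axis accelerometer vector

def attribute_touch(touch_time_ms, wearables):
    """Return (user_id, hand) for the wearable whose accelerometer spiked
    closest in time to the touch-down event, or None if nothing matched.

    wearables: list of (user_id, hand, [ImuSample, ...]) tuples, where the
    band and display are assumed to share a synchronized clock and to have
    already exchanged IDs.
    """
    best = None
    for user_id, hand, samples in wearables:
        for s in samples:
            if s.accel_magnitude < IMPULSE_THRESHOLD:
                continue  # this sample shows no impulse
            skew = abs(s.timestamp_ms - touch_time_ms)
            if skew <= MATCH_WINDOW_MS and (best is None or skew < best[0]):
                best = (skew, user_id, hand)
    return (best[1], best[2]) if best else None
```

A real system would also have to contend with clock synchronization, debouncing of spurious impulses, and near-simultaneous touches, all of which this sketch glosses over.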

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

In particular, for a skilled manipulative task such as writing on a piece of paper, Yves pointed out three general principles (assuming a right-handed individual):

  • Left hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-Left Spatial Reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out these three principles are very deep and general, and they can yield great insight into how to design interactions that fully take advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.

This paper is a long way from a complete solution to the impoverished state of touch on modern touch-screens. But hopefully, by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can lead to more truly “natural” touch interaction in the near future, allowing for simultaneous interaction by multiple users, each of whom can make full and complementary use of their hard-won manual skill with both hands.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16), Tokyo, Japan, Oct. 16-19, 2016. ACM, New York, NY, USA, 287-300. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [Full video – MP4] [Watch 30 second preview on YouTube]

Commentary: On Excellence in Reviews, Thoughts for the HCI Community

Peer review — and particularly the oft-sorry state it seems to sink to — is a frequent topic of conversation at the water-coolers and espresso machines of scientific institutions the world over.

Of course, every researcher freshly wounded by a rejection has strong opinions about reviews and reviewers.  These are often of the sort that are spectacularly unfit to print, but they are widely held nonetheless.

Yet these same wounded researchers typically serve as reviewers themselves, and write reviews which other authors receive.

And I can assure you that “other authors” all too frequently regard the remarks contained in the reviews of these erstwhile wounded researchers with the same low esteem.

So if we play out this vicious cycle to its logical conclusion, in a dystopian view peer review boils down to the following:

  • We trash one another’s work.
  • Everything gets rejected.
  • And we all decide to pack up our toys and go home.

That’s not much of a recipe for scientific progress.

But what fuels this vicious cycle and what can be done about it?

As reviewers, how can we produce Excellent Reviews that begin to unwind this dispiriting scientific discourse?

As authors, how should we interpret the comments of referees, or (ideally) write papers that will be better received in the first place?

When I pulled together the program committee for the annual MobileHCI conference last year, I found myself pondering all of these issues, and wondering what we could do to put the conference’s review process on a positive footing.

And particularly because MobileHCI is a smaller venue, with many of the program committee members still relatively early in their research careers, I really wanted to get them started with the advice that I wished someone had given me when I first started writing and reviewing scientific papers in graduate school.

So I penned an essay that surfaces all of these issues. It describes some of the factors that lead to this vicious cycle in reviews. It makes some very specific recommendations about what an excellent review is, and how to produce one. And if you read it as an author (perhaps smarting from a recent rejection) who wants to better understand where the heck these reviews come from anyway, and as a by-product actually write better papers, then reading between the lines will give you some ideas of how to go about that as well.

And I was pleased, if more than a bit surprised, to see that my little essay was well-received by the research community. I also received many private responses with a similar tenor.

So if you care at all about these issues I hope that you will take a look at what I had to say. And circle back here to leave comments or questions, if you like.

There’s also a companion presentation [Talk PPTX] [Talk PDF], which I used with the MobileHCI program committee to instill a positive and open-minded attitude as we embarked on our deliberations. I’ve included that here as well in the hope that it might be of some use to others hoping to gain a little insight into what goes on in such meetings, and how to run them.


Hinckley, K., So You’re a Program Committee Member Now: On Excellence in Reviews and Meta-Reviews and Championing Submitted Work That Has Merit. Published as “The MobileHCI Philosophy” on the MobileHCI 2015 Web Site, Feb 10th, 2015. [Official MobileHCI Repository PDF] [Author’s Mirror Site PDF] [Talk PPTX] [Talk PDF]

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]

Paper: LightRing: Always-Available 2D Input on Any Surface

In this modern world bristling with on-the-go-go-go mobile activity, the dream of an always-available pointing device has long been held as a sort of holy grail of ubiquitous computing.

Ubiquitous computing, as futurists use the term, refers to the once-farfetched vision where computing pervades everything, everywhere, in a sort of all-encompassing computational nirvana of socially-aware displays and sensors that can respond to our every whim and need.

From our shiny little phones.

To our dull beige desktop computers.

To the vast wall-spanning electronic whiteboards of a future largely yet to come.

How will we interact with all of these devices as we move about the daily routine of this rapidly approaching future? As we encounter computing in all its many forms, carried on our person as well as enmeshed in the digitally enhanced architecture of walls, desktops, and surfaces all around?

Enter LightRing, our early take on one possible future for ubiquitous interaction.

LightRing device on a supporting surface

By virtue of being a ring always worn on the finger, LightRing travels with us and is always present.

By virtue of some simple sensing and clever signal processing, LightRing can be realized in an extremely compact form-factor while providing a straightforward pointing modality for interacting with devices.

At present, we primarily consider LightRing as it would be configured to interact with a situated display, such as a desktop computer, or a presentation projected against a wall at some distance.

The user moves their index finger, angling left and right, or flexing up and down by bending at the knuckle. Simple stuff, I know.

But unlike a mouse, it’s not anchored to any particular computer.

It travels with you.

It’s a go-everywhere interaction modality.

Close-up of LightRing and hand angles inferred from sensors

Left: The degrees-of-freedom detected by the LightRing sensors. Right: Conceptual mapping of hand movement to the sensed degrees of freedom. LightRing then combines these to support 2D pointing at targets on a display, or other interactions.

LightRing can then sense these finger movements (using a one-dimensional gyroscope to capture the left-right movement, and an infrared sensor-emitter pair to capture the proximity of the flexing finger joint) to support a cursor-control mode that is similar to how you would hold and move a mouse on a desktop.

Except there’s no mouse at all.

And there needn’t even be a desktop, as you can see in the video embedded below.

LightRing just senses the movement of your finger.  You can make the pointing motions on a tabletop, sure, but you can just as easily do them on a wall. Or on your pocket. Or a handheld clipboard.

All the sensing is relative, so LightRing always knows how to interpret your motions to control a 2D cursor on a display. Once the LightRing has been paired with a situated device, this lets you point at targets even if the display itself is beyond your physical reach. You can also sketch or handwrite characters with your finger, another scenario we have explored in depth on smartphones and even watches.
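For the curious, here is a toy sketch of what that kind of sensor-to-cursor mapping could look like. This is my own illustration with made-up gain constants, not the code from the paper:

```python
GYRO_GAIN = 4.0    # cursor pixels per degree of yaw; assumed tuning constant
FLEX_GAIN = 300.0  # cursor pixels per unit of normalized proximity change; assumed

class RelativePointer:
    """Map LightRing-style sensor samples to relative 2D cursor deltas."""

    def __init__(self):
        self.last_proximity = None

    def update(self, gyro_yaw_dps, proximity, dt):
        """Convert one sensor sample into a (dx, dy) cursor delta.

        gyro_yaw_dps: left-right angular velocity from the 1-D gyro (deg/s).
        proximity:    normalized infrared reading of the flexing knuckle, in [0, 1].
        dt:           time since the previous sample (seconds).
        """
        dx = gyro_yaw_dps * dt * GYRO_GAIN  # integrate yaw rate into horizontal motion
        if self.last_proximity is None:
            dy = 0.0  # no baseline yet on the very first sample
        else:
            dy = (proximity - self.last_proximity) * FLEX_GAIN
        self.last_proximity = proximity
        return dx, dy
```

Samples would be fed in at the sensor’s native rate, with a finger touch on the ring itself acting as the clutch that engages cursor control.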

The trick to the LightRing is that it can automatically, and very naturally, calibrate itself to your finger’s range of motion if you just swirl your finger. From that circular motion, LightRing works backwards from the sensor values to recover how your finger is moving, assuming the motion is constrained to (roughly) a 2D plane. And that, combined with a button-press or finger touch on the ring itself, is enough to provide an effective input device.
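Here is a rough sketch of that calibration intuition, again my own simplification with hypothetical names; the actual model in the paper is more sophisticated:

```python
def calibrate_from_swirl(samples):
    """Build a normalizer from sensor readings recorded during a swirl.

    samples: list of (yaw_angle, ir_proximity) pairs captured while the
    finger traces a circle (yaw_angle being the integrated gyro signal).
    Returns a function mapping raw readings into a roughly planar
    [-1, 1] x [-1, 1] range.
    """
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)

    def normalize(x, y):
        # Rescale each raw channel by the extremes observed during the swirl.
        nx = 2.0 * (x - x_min) / max(x_max - x_min, 1e-6) - 1.0
        ny = 2.0 * (y - y_min) / max(y_max - y_min, 1e-6) - 1.0
        return nx, ny

    return normalize
```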

The LightRing, as we have prototyped it now, is just one early step in the process. There’s a lot more we could do with this device, and many more practical problems that would need to be resolved to make it a useful adjunct to everyday devices, and to tap its full potential.

But my co-author Wolf Kienzle and I are working on it.

And hopefully, before too much longer now, we’ll have further updates on even more clever and fanciful stuff that we can do through this one tiny keyhole into this field of dreams, the verdant golden country of ubiquitous computing.

_____________________________________________________

Kienzle, W., Hinckley, K., LightRing: Always-Available 2D Input on Any Surface. In the 27th ACM Symposium on User Interface Software and Technology (UIST 2014), Honolulu, Hawaii, Oct. 5-8, 2014, pp. 157-160. [PDF] [video.mp4 TBA] [Watch on YouTube]

Watch LightRing video on YouTube

Paper: Writing Handwritten Messages on a Small Touchscreen

Here’s the last of our three papers at the MobileHCI 2013 conference. This was a particularly fun project, spearheaded by my colleague Wolf Kienzle, looking at a clever way to do handwriting input on a touchscreen using just your finger.

In general I’m a fan of using an actual stylus for handwriting, but in the context of mobile there are many “micro” note-taking tasks, akin to scrawling a note to yourself on a post-it, that wouldn’t justify unsheathing a pen even if your device had one.

The very cool thing about this approach is that it allows you to enter overlapping multi-stroke characters using the whole screen, and without resorting to something like Palm’s old Graffiti writing or full-on handwriting recognition.

Handwritten messages and little sketches entered with the finger on a small touchscreen.

The interface also incorporates some nice fluid gestures for entering spaces between words, backspacing to delete previous strokes, or transitioning to a freeform drawing mode for inserting little sketches or smiley-faces into your instant messages, as seen above.
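If you are wondering how strokes written on top of one another can become a legible line of ink without any recognition, here is a toy sketch of one simple scheme. The paper’s actual segmentation is cleverer than a fixed pause threshold, so treat the names and constants below as hypothetical:

```python
CHAR_GAP_MS = 350     # pause that starts a new character; assumed value
CHAR_ADVANCE_PX = 60  # horizontal advance between laid-out characters; assumed value

def layout_overlapping_strokes(strokes):
    """Lay out overlapping strokes as a left-to-right line of handwriting.

    strokes: list of (start_ms, end_ms, points) tuples in screen coordinates,
    where points is a list of (x, y) pairs. Strokes separated by a short
    pause are grouped into one "character", and each successive group is
    shifted right so the ink reads as a message line.
    """
    laid_out = []
    group_index = -1
    last_end = None
    for start, end, points in sorted(strokes, key=lambda s: s[0]):
        if last_end is None or start - last_end > CHAR_GAP_MS:
            group_index += 1  # pause detected: begin the next character cell
        dx = group_index * CHAR_ADVANCE_PX
        laid_out.append([(x + dx, y) for x, y in points])
        last_end = end
    return laid_out
```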

This paper also had the distinction of receiving an Honorable Mention Award for best paper at MobileHCI 2013. We’re glad the review committee liked our paper and saw its contributions as noteworthy, as it were (pun definitely intended).

Kienzle, W., Hinckley, K., Writing Handwritten Messages on a Small Touchscreen. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 179-182. Honorable Mention Award (Awarded to top 5% of all papers). [PDF] [video MP4] [Watch on YouTube – coming soon.]

Paper: A Tap and Gesture Hybrid Method for Authenticating Smartphone Users

Arif, A., Pahud, M., Hinckley, K., Buxton, W., A Tap and Gesture Hybrid Method for Authenticating Smartphone Users (Poster). In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 486-491. [Paper PDF] [Poster Presentation PDF] [Video .WMV] [Video .MP4]

Paper: Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation

I have three papers coming out this week at MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which convenes this week in Munich. It’s one of the great small conferences that focuses exclusively on mobile interaction, which of course is a long-standing interest of mine.

This post focuses on the first of those papers, and right behind it will be short posts on the other two projects that my co-authors are presenting this week.

I’ve explored many directions for viewing and moving through information on small screens, often motivated by novel hardware sensors as well as basic insights about human motor and cognitive capabilities. And I also have a long history in three-dimensional (spatial) interaction, virtual environments, and the like. But despite doing this stuff for decades, every once in a while I still get surprised by experimental results.

That’s just part of what keeps this whole research gig fun and interesting. If all the answers were simple and obvious, there would be no point in doing the studies.

In this particular paper, my co-authors and I took a closer look at a long-standing spatial (or through-the-lens) metaphor for interaction, akin to navigating documents or other information spaces by looking through your mobile as if it were a camera viewfinder, and subjected it to experimental scrutiny.

While this basic idea of using your mobile as a viewport onto a larger virtual space has been around for a long time, it has not been carefully examined in the context of moving a mobile device’s small screen as a way to view virtually larger documents. Nor have the potential advantages of the approach been fully articulated and realized.

This style of navigation (panning and zooming control) on mobile devices has great promise because it allows you to offload the navigation task itself to your nonpreferred hand, leaving your preferred hand free to do other things, like carry bags of groceries, or to perform additional tasks such as annotation, selection, and tapping commands on top of the resulting views.

But, as our study also shows, the approach is not without its challenges: sensing the spatial position of the device, and devising an appropriate input mapping, are both difficult problems that will require more progress before this way of moving through information on a mobile device can reach its potential. For the time being, at least, the traditional touch gestures of pinch-to-zoom and drag-to-pan still appear to offer the most efficient solution for general-purpose navigation tasks.
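To give a flavor of what such an input mapping involves, here is a hedged sketch of a through-the-lens style mapping: the device’s sensed position pans the view of a larger virtual document, and moving the device closer to or farther from a reference plane zooms. This is my own illustration with assumed constants, not the mapping evaluated in the paper:

```python
PAN_GAIN = 1.0      # document pixels per millimeter of device motion; assumed value
ZOOM_REF_Z = 400.0  # device depth (mm) at which zoom == 1.0; assumed value

def viewport_from_device_pose(x_mm, y_mm, z_mm):
    """Map a sensed device position to (pan_x, pan_y, zoom) for the viewport."""
    # Moving the device closer than the reference depth zooms in; clamp the
    # zoom so sensing noise cannot fling the view to extreme magnifications.
    zoom = max(0.25, min(4.0, ZOOM_REF_Z / max(z_mm, 1.0)))
    # Dividing the pan gain by the zoom level keeps hand motion mapped to a
    # consistent distance in screen space, regardless of magnification.
    pan_x = x_mm * PAN_GAIN / zoom
    pan_y = y_mm * PAN_GAIN / zoom
    return pan_x, pan_y, zoom
```

Even a toy mapping like this hints at the hard parts the study ran into: reliably sensing the device’s position in the first place, and choosing gains and clamps that feel natural in the hand.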

Pahud, M., Hinckley, K., Iqbal, S., Sellen, A., and Buxton, B., Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 113-122. [PDF] [video – MP4]

Toward Compound Navigation on Mobiles via Spatial Manipulation on YouTube