Category Archives: device form-factors

Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they often have touch input in mind. And so people tend to take that for granted. What could be simpler than placing a finger — or with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones and tablets to drafting tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic-whiteboard such as the 84″ Surface Hub, multiple people can work upon the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with your off-hand as you emphasize the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try and paper over the resulting ambiguities, especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can sense the impulse resulting from making finger-contact with a display through its embedded motion sensors (accelerometers and gyros). If the fitness band and the display exchange information and IDs, the touch-event generated can then be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can then be separated from one another as well, and even used as a lightweight form of authentication.
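To give a flavor of how such attribution could work under the hood, here is a minimal sketch in Python. It is purely illustrative (the event names, fields, and thresholds are my own assumptions, not the actual implementation from the paper): when a touch-down arrives, look for a matching motion impulse from a paired wearable within a small time window, and credit the touch to that user and hand.

```python
# Hypothetical sketch: attribute touch-down events to a user and hand by
# correlating them with motion impulses reported by paired wearables.
# Names and thresholds are illustrative, not from the actual prototype.

from dataclasses import dataclass

@dataclass
class ImpulseEvent:
    timestamp: float   # seconds, on a clock shared with the display
    user_id: str       # established when the band pairs with the display
    hand: str          # "left" or "right", based on where the band is worn
    magnitude: float   # peak accelerometer magnitude of the impulse

def attribute_touch(touch_time, impulses, window=0.050, min_magnitude=1.5):
    """Return (user_id, hand) for the impulse closest in time to the
    touch-down, or None if no paired wearable felt a matching bump."""
    candidates = [
        imp for imp in impulses
        if abs(imp.timestamp - touch_time) <= window
        and imp.magnitude >= min_magnitude
    ]
    if not candidates:
        return None  # unattributed touch: fall back to plain touch handling
    best = min(candidates, key=lambda imp: abs(imp.timestamp - touch_time))
    return (best.user_id, best.hand)

# Example: a bump on Alice's left-hand band ~10 ms before the touch event.
impulses = [ImpulseEvent(timestamp=10.012, user_id="alice", hand="left", magnitude=2.3)]
print(attribute_touch(10.020, impulses))   # ('alice', 'left')
```

Of course, a real system also has to cope with clock synchronization, near-simultaneous touches, and touches from uninstrumented hands; that is where much of the interesting engineering lies.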

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

In particular, for a skilled manipulative task such as writing on a piece of paper, Yves pointed out (assuming a right-handed individual) three general principles:

  • Left hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-Left Spatial Reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out these three principles are very deep and general, and they can yield great insight into how to design interactions that fully take advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.
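To make the third principle a bit more concrete in interface terms, here is a toy sketch, my own illustration rather than code from the paper: the non-preferred hand anchors a coordinate frame, and input from the preferred hand is interpreted relative to that frame, much as pen strokes land on the paper wherever the other hand has positioned it.

```python
# Toy illustration of Guiard's right-to-left spatial reference:
# the non-preferred hand sets a frame (position + orientation), and
# preferred-hand input is expressed relative to that frame.
import math

def to_frame(point, frame_origin, frame_angle):
    """Express a display-space point in the coordinate frame anchored at the
    non-preferred hand (frame_origin, rotated by frame_angle radians)."""
    dx = point[0] - frame_origin[0]
    dy = point[1] - frame_origin[1]
    cos_a, sin_a = math.cos(-frame_angle), math.sin(-frame_angle)
    return (dx * cos_a - dy * sin_a, dx * sin_a + dy * cos_a)

# The non-preferred hand holds the "paper" at (400, 300), rotated 30 degrees;
# a pen stroke at (450, 330) lands at a stable spot on that paper even if
# the paper is later moved or re-oriented by the non-preferred hand.
print(to_frame((450, 330), (400, 300), math.radians(30)))
```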

This paper is a long way from a complete solution to the impoverished touch sensed by modern touch-screens. But hopefully, by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can lead in the near future to more truly “natural” touch interaction: simultaneous interaction by multiple users, all of whom can make full and complementary use of their hard-won manual skill with both hands.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.
Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16), Tokyo, Japan, Oct. 16-19, 2016. ACM, New York, NY, USA, 287-300. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [Full video – MP4] [Watch 30 second preview on YouTube]


Award: CHI Academy, 2014 Inductee

I’ve been a bit remiss in posting this, but as of April 2014, I’m a member of the CHI Academy, which is an honorary group that recognizes leaders in the field of Human-Computer Interaction.

Among whom, apparently, I can now include myself, strange as that may seem.

I was completely surprised by this and can honestly say I never expected any special recognition. I’ve just been plugging away on my little devices and techniques, writing papers here and there, but I suppose over the decades it all adds up. I don’t know if this means that my work is especially good or that I’m just getting older, but either way I appreciate the gesture of recognition from my peers in the field.

I was in a bit of a ribald mood when I got the news, so when the award organizers asked me to reply with my bio, I figured what the heck and decided to have some fun with it:

Ken Hinckley is a Principal Researcher at Microsoft Research, where he has spent the last 17 years investigating novel input devices, device form-factors, and modalities of interaction.

He feels fortunate to have had the opportunity to collaborate with many CHI Academy members while working there, including noted trouble-makers such as Bill Buxton, Patrick Baudisch, and Eric Horvitz—as well as George Robertson, whom he owes a debt of gratitude for hiring him fresh out of grad school.

Ken is perhaps best known for his work on sensing techniques, cross-device interaction, and pen computing. He has published over 75 academic papers and is a named inventor on upwards of 150 patents. Ken holds a Ph.D. in Computer Science from the University of Virginia, where he studied with Randy Pausch.

He has also published fiction in professional markets including Nature and Fiction River, and prides himself on still being able to hit 30-foot jump shots at age 44.

Not too shabby.

Now, in the spirit of full disclosure, there are no real perks associated with being a CHI Academy member as far as I’ve been able to figure. People do seem to ask me for reference letters just a tiny bit more frequently. And I definitely get more junk email from organizers of dubious-sounding conferences than before. No need for research heroics if you want a piece of that; just email me and I’d be happy to forward them along.

But the absolute most fun part of the whole deal was a small private celebration that noted futurist Bill Buxton organized at his ultra-modern home fronting Lake Ontario in Toronto, and where I was joined by my Microsoft Research colleagues Abigail Sellen, her husband Richard Harper, and John Tang. Abi is already a member (and an occasional collaborator whom I consider a friend), and Richard and John were inducted along with me into the Academy in 2014.

Bill Buxton needs no introduction among the avant garde of computing. And he’s well known in the design community as well, not to mention having published on equestrianism and mountaineering, among other topics. In particular, his collection of interactive devices is arguably the most complete ever assembled. Only a tiny fraction of it is currently documented on-line. It contains everything from the world’s first radio and television remote controls, to the strangest keyboards ever conceived by mankind, and even the very first handcrafted wooden computer mice that started cropping up in the 1960s.

The taxi dropped me off, I rang the doorbell, and when a tall man with rock-star hair gone gray and thinned precipitously by the ravages of time answered the door, I inquired:

“Is this, by any chance, the Buxton Home for Wayward Input Devices?”

To which Bill replied in the affirmative.

I indeed had the right place, I would fit right in here, and he showed me in.

Much of Bill’s collection lives off the premises, but his below-ground sanctum sanctorum was still walled by shelves bursting with transparent tubs packed with handheld gadgets that had arrived far before their time, historical mice and trackballs, and hybrid bastard devices of every conceivable description. And what little space remained was packed with books on design, sketching, and the history of mountaineering and the fur trade.

Despite his home office being situated below grade, natural light poured down into it through the huge front windows facing the inland sea, owing to the home’s modern design. Totally awesome space and would have looked right at home on the front page of Architectural Digest.

Bill showed us his origami kayak on the back deck, treated us all to some hand-crafted martinis in the open-plan kitchen, and arranged for transportation to the awards dinner via a 10-person white stretch limousine. We even made a brief pit stop so Bill could dash out and pick up a bottle of champagne at a package store.

Great fun.

I’ve known Bill since 1994, when he visited Randy Pausch’s lab at the University of Virginia, and ever since people have often assumed that he was my advisor. He never was in any official capacity, but I read all of his papers in that period and in many ways I looked up to him as my research hero. And now that we’ve worked together as colleagues for nearly 10 years (!), and with Randy’s passing, I often do still see him as a mentor.

Or is that de-mentor?

Probably a little bit of each, in all honesty (grin).

Yeah, the award was pretty cool and all, but it was the red carpet thrown out by Bill that I’ll always remember.

Hinckley, K., CHI Academy. Inducted April 27th, 2014 at CHI 2014 in Toronto, Ontario, Canada, for career research accomplishments and service to the ACM SIGCHI community (the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction). [Ken Hinckley CHI Academy Bio]

The CHI Academy is an honorary group of individuals who have made substantial contributions to the field of human-computer interaction. These are the principal leaders of the field, whose efforts have shaped the disciplines and/or industry, and led the research and/or innovation in human-computer interaction. The criteria for election to the CHI Academy are:

  • Cumulative contributions to the field.
  • Impact on the field through development of new research directions and/or innovations.
  • Influence on the work of others.
  • Reasonably active participant in the ACM SIGCHI community.

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]

Invited Talk: WIPTTE 2015 Presentation of Sensing Techniques for Tablets, Pen, and Touch

The organizers of WIPTTE 2015, the Workshop on the Impact of Pen and Touch Technology on Education, kindly invited me to speak about my recent work on sensing techniques for stylus + tablet interaction.

One of the key points that I emphasized:

To design technology to fully take advantage of human skills, it is critical to observe what people do with their hands when they are engaged in manual activities such as handwriting.

Notice my deliberate use of the plural, hands, as in both of ’em, in a division of labor that is a perfect example of cooperative bimanual action.

My six-year-old daughter demonstrates the power of crayon and touch technology.

And of course I had my usual array of stupid sensor tricks to illustrate the many ways that sensing systems of the future embedded in tablets and pens could take advantage of such observations. Some of these possible uses for sensors probably seem fanciful, in this antiquated era of circa 2015.

But in eerily similar fashion, some of the earliest work that I did on sensors embedded in handheld devices also felt completely out-of-step with the times when I published it back in the year 2000. A time so backwards it already belongs to the last millennium for goodness sakes!

Now aspects of that work are embedded in practically every mobile device on the planet.

It was a fun talk, with an engaged audience of educators who are eager to see pen and tablet technology advance to better serve the educational needs of students all over the world. I have three kids of school age now so this stuff matters to me. And I love speaking to this audience because they always get so excited to see the pen and touch interaction concepts I have explored over the years, as well as the new technologies emerging from the dim fog that surrounds the leading frontiers of research.

I am a strong believer in the dictum that the best way to predict the future is to invent it.

And the pen may be the single greatest tool ever invented to harness the immense creative power of the human mind, and thereby to scrawl out–perhaps even in the just-in-time fashion of the famous book Harold and the Purple Crayon–the uncertain path that leads us forward.

                    * * *

Update: I have also made the original technical paper and demonstration video available now.

If you are an educator seeing impacts of pen, tablet, and touch technology in the classroom, then I strongly encourage you to start organizing and writing up your observations for next year’s workshop. The 2016 edition of the series (now renamed CPTTE) will be held at Brown University in Providence, Rhode Island, and chaired by none other than the esteemed Andries van Dam, who is my academic grandfather (i.e., my Ph.D. advisor’s mentor) and of course widely respected in computing circles throughout the world.

Hinckley, K., WIPTTE 2015 Invited Talk: Sensing Techniques for Tablet + Stylus Interaction. Workshop on the Impact of Pen and Touch Technology on Education, Redmond, WA, April 28th, 2015. [Slides (.pptx)] [Slides PDF]

 

Project: Bimanual In-Place Commands

Here’s another interesting loose end, this one from 2012: a user interface known as “In-Place Commands” that Michel Pahud, Bill Buxton, and I developed for a range of direct-touch form factors, including everything from tablets and tabletops all the way up to electronic whiteboards a la the modern Microsoft Surface Hub devices of 2015.

Microsoft is currently running a Request for Proposals for Surface Hub research, by the way, so check it out if that sort of thing is at all up your alley. If your proposal is selected you’ll get a spiffy new Surface Hub and $25,000 to go along with it.

We’ve never written up a formal paper on our In-Place Commands work, in part because there is still much to do and we intend to pursue it further when the time is right. But in the meantime the following post and video documenting the work may be of interest to aficionados of efficient interaction on such devices. This also relates closely to the Finger Shadow and Accordion Menu explored in our Pen + Touch work, documented here and here, which collectively form a class of such techniques.

While we wouldn’t claim that any one of these represents the ultimate approach to command and control for direct input, in sum they illustrate many of the underlying issues, the rich set of capabilities we strive to support, and possible directions for future embellishments as well.

Knies, R. In-Place: Interacting with Large Displays. Reporting on research by Pahud, M., Hinckley, K., and Buxton, B. TechNet Inside Microsoft Research Blog Post, Oct 4th, 2012. [Author’s cached copy of post as PDF] [Video MP4] [Watch on YouTube]

In-Place Commands Screen Shot

The user can call up commands in-place, directly where he is working, by touching both fingers down and fanning out the available tool palettes. Many of the functions thus revealed act as click-through tools, where the user may simultaneously select and apply the selected tool — as the user is about to do for the line-drawing tool in the image above.
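For readers who like to see the logic spelled out, here is a rough sketch of the invocation behavior in Python. The names, layout, and thresholds are made up for illustration and are not taken from our prototype: two contacts that land close together fan out a small palette around their midpoint, and a touch on a palette item acts as a click-through selection that can be applied immediately.

```python
# Simplified sketch of an in-place, two-finger command invocation.
# All names, distances, and the palette layout are illustrative only.
import math

PALETTE_TOOLS = ["pen", "line", "eraser", "color"]

def palette_for(touch_a, touch_b, max_spread=120):
    """If two contacts land close enough together, fan out a palette of tools
    around their midpoint; return None otherwise."""
    if math.dist(touch_a, touch_b) > max_spread:
        return None
    cx = (touch_a[0] + touch_b[0]) / 2
    cy = (touch_a[1] + touch_b[1]) / 2
    radius = 80
    items = {}
    for i, tool in enumerate(PALETTE_TOOLS):
        angle = 2 * math.pi * i / len(PALETTE_TOOLS)
        items[tool] = (cx + radius * math.cos(angle), cy + radius * math.sin(angle))
    return items

def click_through(palette, touch, hit_radius=30):
    """A touch on a palette item selects the tool; the caller then applies it
    at the touch location in the same gesture (a click-through selection)."""
    for tool, pos in palette.items():
        if math.dist(touch, pos) <= hit_radius:
            return tool
    return None

palette = palette_for((500, 400), (540, 420))
print(click_through(palette, (580, 400)))   # the tool fanned out nearest that touch
```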

Watch Bimanual In-Place Commands video on YouTube

Symposium Abstract: Issues in bimanual coordination: The props-based interface for neurosurgical visualization

I have a small backlog of updates and new posts to clear out, which I’ll be undertaking in the next few days.

The first of these is the following small abstract that actually dates from way back in 1996, shortly before I graduated with my Ph.D. in Computer Science from the University of Virginia.

It was a really fun symposium organized by the esteemed Yves Guiard, famous for his kinematic chain model of human bimanual action, that included myself and Bill Buxton, among others. For me this was a small but timely recognition that came early in my career and made it possible for me to take the stage alongside two of my biggest research heroes.

Hinckley, K., 140.3: Issues in bimanual coordination: The props-based interface for neurosurgical visualization. Appeared in Symposium 140: Human bimanual specialization: New perspectives on basic research and application, convened by Yves Guiard, Montréal, Quebec, Canada, Aug. 17, 1996. Abstract published in International Journal of Psychology, Volume 31, Issue 3-4, Special Issue: Abstracts of the XXVI INTERNATIONAL CONGRESS OF PSYCHOLOGY, 1996. [PDF – Symposium 140 Abstracts]

Abstract

I will describe a three-dimensional human-computer interface for neurosurgical visualization based on the bimanual manipulation of real-world tools. The user’s nonpreferred hand holds a miniature head that can be “sliced open” or “pointed to” using a cross-sectioning plane or a stylus held in the preferred hand. The nonpreferred hand acts as a dynamic frame-of-reference relative to which the preferred hand articulates its motion. I will also discuss experiments that investigate the role of bimanual action in virtual manipulation and in the design of human-computer interfaces in general.

Paper: LightRing: Always-Available 2D Input on Any Surface

In this modern world bristling with on-the-go-go-go mobile activity, the dream of an always-available pointing device has long been held as a sort of holy grail of ubiquitous computing.

Ubiquitous computing, as futurists use the term, refers to the once-farfetched vision where computing pervades everything, everywhere, in a sort of all-encompassing computational nirvana of socially-aware displays and sensors that can respond to our every whim and need.

From our shiny little phones.

To our dull beige desktop computers.

To the vast wall-spanning electronic whiteboards of a future largely yet to come.

How will we interact with all of these devices as we move about the daily routine of this rapidly approaching future? As we encounter computing in all its many forms, carried on our person as well as enmeshed in the digitally enhanced architecture of walls, desktops, and surfaces all around?

Enter LightRing, our early take on one possible future for ubiquitous interaction.

LightRing device on a supporting surface

By virtue of being a ring always worn on the finger, LightRing travels with us and is always present.

By virtue of some simple sensing and clever signal processing, LightRing can be supported in an extremely compact form-factor while providing a straightforward pointing modality for interacting with devices.

At present, we primarily consider LightRing as it would be configured to interact with a situated display, such as a desktop computer, or a presentation projected against a wall at some distance.

The user moves their index finger, angling left and right, or flexing up and down by bending at the knuckle. Simple stuff, I know.

But unlike a mouse, it’s not anchored to any particular computer.

It travels with you.

It’s a go-everywhere interaction modality.

Close-up of LightRing and hand angles inferred from sensors

Left: The degrees-of-freedom detected by the LightRing sensors. Right: Conceptual mapping of hand movement to the sensed degrees of freedom. LightRing then combines these to support 2D pointing at targets on a display, or other interactions.

LightRing can then sense these finger movements–using a one-dimensional gyroscope to capture the left-right movement, and an infrared sensor-emitter pair to capture the proximity of the flexing finger joint–to support a cursor-control mode that is similar to how you would hold and move a mouse on a desktop.

Except there’s no mouse at all.

And there needn’t even be a desktop, as you can see in the video embedded below.

LightRing just senses the movement of your finger.  You can make the pointing motions on a tabletop, sure, but you can just as easily do them on a wall. Or on your pocket. Or a handheld clipboard.

All the sensing is relative so LightRing always knows how to interpret your motions to control a 2D cursor on a display. Once the LightRing has been paired with a situated device, this lets you point at targets, even if the display itself is beyond your physical reach. You can sketch or handwrite characters with your finger–another scenario we have explored in depth on smartphones and even watches.
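As a rough reconstruction of that mouse-like mapping (my own sketch; the variable names, signs, and gains are assumptions rather than the actual implementation), the gyro’s yaw rate drives horizontal cursor motion, while changes in the infrared proximity reading, which rises and falls as the knuckle flexes, drive vertical motion:

```python
# Rough sketch of a relative, mouse-like mapping from LightRing's two
# sensed degrees of freedom to 2D cursor deltas. Names, signs, and gains
# are illustrative assumptions, not the actual implementation.

def cursor_delta(yaw_rate, proximity, prev_proximity,
                 gain_x=8.0, gain_y=400.0, dt=0.01):
    """yaw_rate: gyro reading (rad/s) as the finger angles left or right.
    proximity: normalized infrared reading (0..1) that varies with knuckle flexion.
    Returns a relative (dx, dy) cursor delta, just like a mouse."""
    dx = gain_x * yaw_rate * dt                  # integrate the angular rate over this frame
    dy = gain_y * (proximity - prev_proximity)   # change in flexion moves the cursor vertically
    return dx, dy

# One sample frame: the finger angles slightly right while flexing down a little.
print(cursor_delta(yaw_rate=0.5, proximity=0.42, prev_proximity=0.40))
```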

The trick to the LightRing is that it can automatically, and very naturally, calibrate itself to your finger’s range of motion if you just swirl your finger. From that circular motion LightRing can work backwards from the sensor values to how your finger is moving, assuming it is constrained to (roughly) a 2D plane. And that, combined with a button-press or finger touch on the ring itself, is enough to provide an effective input device.
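And here is a hedged sketch of what that swirl-based self-calibration might look like, again my own simplification with made-up names: log the range each sensor traverses during the swirl, then normalize subsequent readings into that range so the finger’s comfortable motion maps onto the full workspace.

```python
# Hypothetical sketch of calibrating LightRing from a circular "swirl":
# record the min/max each sensor reaches during the swirl, then normalize
# later readings into the finger's own range of motion. Illustrative only.

def calibrate(swirl_samples):
    """swirl_samples: list of (yaw_angle, proximity) pairs logged while the
    user swirls a finger in a circle (yaw_angle integrated from the gyro).
    Returns the (min, max) range observed for each axis."""
    yaws = [s[0] for s in swirl_samples]
    proxs = [s[1] for s in swirl_samples]
    return (min(yaws), max(yaws)), (min(proxs), max(proxs))

def normalize(sample, yaw_range, prox_range):
    """Map a raw (yaw_angle, proximity) sample onto the unit square, i.e. the
    range of motion discovered during the swirl."""
    def lerp_inv(v, lo, hi):
        return 0.0 if hi == lo else max(0.0, min(1.0, (v - lo) / (hi - lo)))
    return (lerp_inv(sample[0], *yaw_range), lerp_inv(sample[1], *prox_range))

swirl = [(-0.3, 0.20), (0.0, 0.45), (0.3, 0.22), (0.0, 0.05)]  # a crude "circle"
yaw_range, prox_range = calibrate(swirl)
print(normalize((0.15, 0.30), yaw_range, prox_range))
```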

The LightRing, as we have prototyped it now, is just one early step in the process. There’s a lot more we could do with this device, and many more practical problems that would need to be resolved to make it a useful adjunct to everyday devices–and to tap its full potential.

But my co-author Wolf Kienzle and I are working on it.

And hopefully, before too much longer now, we’ll have further updates on even more clever and fanciful stuff that we can do through this one tiny keyhole into this field of dreams, the verdant golden country of ubiquitous computing.

_____________________________________________________

Kienzle, W., Hinckley, K., LightRing: Always-Available 2D Input on Any Surface. In the 27th ACM Symposium on User Interface Software and Technology (UIST 2014), Honolulu, Hawaii, Oct. 5-8, 2014, pp. 157-160. [PDF] [video.mp4 TBA] [Watch on YouTube]

Watch LightRing video on YouTube