Category Archives: mobile devices

Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they usually have touch input in mind. And so touch itself tends to get taken for granted. What could be simpler than placing a finger — or, with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones and tablets to drafting tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic whiteboard such as the 84″ Surface Hub, multiple people can work on the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with the off-hand while emphasizing the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try to paper over the resulting ambiguities — especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can sense the impulse of finger-contact with a display through its embedded motion sensors (accelerometers and gyroscopes). If the fitness band and the display exchange information and IDs, the resulting touch event can then be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can likewise be separated from one another, and even serve as a lightweight form of authentication.
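To make that concrete, here is a minimal sketch of the matching step (my own illustration, with invented names, thresholds, and a synchronized-clock assumption, rather than anything taken from the paper):

```python
# Illustrative sketch only: attribute a touch-down to whichever
# instrumented hand felt a matching "bump" at (nearly) the same moment.
# ImpulseSample, the 50 ms window, and the magnitude threshold are all
# assumptions made up for this example -- not the paper's implementation.
from dataclasses import dataclass

@dataclass
class ImpulseSample:
    user_id: str      # identity paired with the wearable
    hand: str         # "left" or "right", per where the band is worn
    timestamp: float  # seconds; synchronized clocks assumed
    magnitude: float  # strength of the sensed impulse (accel + gyro)

def attribute_touch(touch_time, impulses, window=0.050, min_mag=1.5):
    """Return (user_id, hand) for the strongest impulse coinciding
    with the touch-down, or None if no wearable felt a matching bump."""
    candidates = [s for s in impulses
                  if abs(s.timestamp - touch_time) <= window
                  and s.magnitude >= min_mag]
    if not candidates:
        return None  # e.g., a touch from a bare, non-instrumented hand
    best = max(candidates, key=lambda s: s.magnitude)
    return best.user_id, best.hand
```

Note that a touch with no matching impulse is itself informative: the system at least knows that contact came from a hand without a wearable.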

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

In particular, for a skilled manipulative task such as writing on a piece of paper, Yves pointed out three general principles (assuming a right-handed individual):

  • Left-hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-left spatial reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out, these three principles are deep and general, and they can yield great insight into how to design interactions that take full advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.
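And as a thought experiment, once each touch arrives tagged with hand identity, a “Guiard-abiding” division of labor almost writes itself. The sketch below is purely illustrative: it assumes a right-handed user, and the event fields and workspace methods are invented for the example, not taken from the paper.

```python
# Illustrative sketch: route hand-attributed touches per Guiard's roles.
# Assumes a right-handed user; the event fields and workspace methods
# are hypothetical, invented for this example.
def handle_touch(event, workspace):
    if event.hand == "left":
        # Non-preferred hand: coarse motion that sets the frame of
        # reference (positioning and orienting the "paper").
        workspace.drag_frame(event.x, event.y)
    elif event.hand == "right":
        # Preferred hand: fine-grained work performed relative to that
        # frame (the "pen").
        workspace.ink(event.x, event.y)
    else:
        # Unattributed touch: fall back to conventional behavior.
        workspace.default_touch(event)
```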

This paper is a long way from a complete solution to the impoverished touch events of modern touch-screens. But hopefully, by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can lead to more truly “natural” touch interaction in the near future — allowing simultaneous interaction by multiple users, each of whom can make full and complementary use of their hard-won manual skill with both hands.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16), Tokyo, Japan, Oct. 16-19, 2016. ACM, New York, NY, USA, pp. 287-300. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [Full video – MP4] [Watch 30 second preview on YouTube]


Commentary: On Excellence in Reviews, Thoughts for the HCI Community

Peer review — and particularly the oft-sorry state it seems to sink to — is a frequent topic of conversation at the water-coolers and espresso machines of scientific institutions the world over.

Of course, every researcher freshly wounded by a rejection has strong opinions about reviews and reviewers.  These are often of the sort that are spectacularly unfit to print, but they are widely held nonetheless.

Yet these same wounded researchers typically serve as reviewers themselves, and write reviews which other authors receive.

And I can assure you that “other authors” all too frequently regard the remarks contained in the reviews written by these same wounded researchers with equally low esteem.

So if we play out this vicious cycle to its logical conclusion, in a dystopian view peer review boils down to the following:

  • We trash one another’s work.
  • Everything gets rejected.
  • And we all decide to pack up our toys and go home.

That’s not much of a recipe for scientific progress.

But what fuels this vicious cycle and what can be done about it?

As reviewers, how can we produce Excellent Reviews that begin to unwind this vicious cycle and raise the level of scientific discourse?

As authors, how should we interpret the comments of referees, or (ideally) write papers that will be better received in the first place?

When I pulled together the program committee for the annual MobileHCI conference last year, I found myself pondering all of these issues, and wondering what we could do to put the conference’s review process on a positive footing.

And particularly because MobileHCI is a smaller venue, with many of the program committee members still relatively early in their research careers, I really wanted to get them started with the advice that I wished someone had given me when I first started writing and reviewing scientific papers in graduate school.

So I penned an essay that surfaces all of these issues. It describes some of the factors that feed this vicious cycle in reviews. It makes some very specific recommendations about what an excellent review is, and how to produce one. And if you read it as an author (perhaps smarting from a recent rejection) who wants to better understand where the heck these reviews come from anyway — and, as a by-product, how to actually write better papers — then reading between the lines will give you some ideas of how to go about that as well.

And I was pleased, if more than a bit surprised, to see that my little essay was well-received by the research community, and I received many private responses with a similar tenor.

So if you care at all about these issues I hope that you will take a look at what I had to say. And circle back here to leave comments or questions, if you like.

There’s also a companion presentation [Talk PPTX] [Talk PDF], which I used with the MobileHCI program committee to instill a positive and open-minded attitude as we embarked on our deliberations. I’ve included it here in the hope that it might be of some use to others seeking a little insight into what goes on in such meetings, and how to run them.


Hinckley, K., So You’re a Program Committee Member Now: On Excellence in Reviews and Meta-Reviews and Championing Submitted Work That Has Merit. Published as “The MobileHCI Philosophy” on the MobileHCI 2015 Web Site, Feb 10th, 2015. [Official MobileHCI Repository PDF] [Author’s Mirror Site PDF] [Talk PPTX] [Talk PDF].

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]

Contribute to MobileHCI 2015 and Help Advance the Frontiers of Mobility: Submissions Due Feb 6th, 2015.

Send us your work. If it makes us go “Wow!” we want it.

Along with Hans Gellersen of Lancaster University (UK), I’m proud to announce that I’m co-chairing the papers selection committee for the 2015 installment of the long-running MobileHCI conference (sponsored by the ACM and SIGCHI), to take place Aug 24th-Aug 27th, 2015, in wonderful and historic Copenhagen, Denmark.

MobileHCI is the premier venue to publish and learn about state-of-the-art innovations and insights for all aspects of human-computer interaction as it pertains to mobility — whether in terms of the devices we use, the services we engage with, or the new patterns of human behavior emerging from the wilderness of the modern-day digital ecology.

Submissions due Feb 6th, 2015.

Call for Papers

MobileHCI seeks contributions in the form of innovations, insights, or analyses related to human experiences with mobility.

Our interpretation of mobility is inclusive and broadly construed. Likewise, our view of contribution encompasses technology, experience, methodology, and theory—or any mix thereof, and beyond. We seek richness and diversity in topic as well as approach, method, and viewpoint. If you can make a convincing case that you have something important to say about mobility, in all its many forms, we want to see your work.

In no particular order, this includes contributions in the form of:

Systems & infrastructures. The design, architecture, deployment, and evaluation of systems and infrastructures that support development of or interaction with mobile devices and services.

Devices & techniques. The design, construction, usage, and evaluation of devices and techniques that create valuable new capabilities for mobile human-computer interaction.

Applications & experiences. Descriptions of the design or empirical study of interactive applications, or analyses of usage trends, that leverage mobile devices and systems.

Methodologies & tools. New methods and tools designed for or applied to studying or building mobile user interfaces, applications, and mobile users.

Theories & models. Critical analysis or organizing theory with clearly motivated relevance to the design or study of mobile human-computer interaction; taxonomies of design or devices; well-supported essays on emerging trends and practice in mobile human-computer interaction.

Visions & wildcards. Well-argued and well-supported visions of the future of mobile computing; non-traditional topics that bear on mobility; under-represented viewpoints and perspectives that convincingly bring something new to mobile research and practice. Surprise us with something new and compelling.

We seek contribution of ideas, as opposed to convention of form.

If you write a good paper—present clear, well-argued and well-cited ideas that are backed up with some form of compelling evidence (proof-of-concept implementations, system demonstrations, data analysis, user studies, or whatever methodology suits the contribution you are trying to make)—then we want to see your work, and if we agree it is good, we will accept it.

We are not particularly picky about page lengths or the structure of papers. Use the number of pages you need to convey a contribution, no more, no less.

Reviewers traditionally expect about 4 pages for shorter contributions, and about 10 pages for long-form contributions, but these are simply guideposts reflecting what authors most commonly submit.

If you have a great 10-page paper with an intriguing set of ideas and the references spill over onto page 12, we are happy with that.

If you can convey a solid idea in 8 pages, that is fine too.

Or a four-pager with a clearly articulated nugget of contribution is always welcome.

Finally, keep the “Wow!” test in mind.

We are always happy to consider thought-provoking work that might not be perfect, but that clearly injects new ideas into the discourse on mobile interaction — what it is now, and what it could be in the future.

We would rather have ten thought-provoking papers that break new ground in their own unique ways than one perfect paper that is dull and unassailable.

Send us your work. If it makes us go “Wow!” we want it. By the same token there is nothing wrong with solid work that advances the state of the art. We are excited to expand the many frontiers of mobility and we need your contributions to help us get there.

You can find full details in the online call for papers at the MobileHCI 2015 website.

And be sure to spread the word to your peers and collaborators so that we can have a rich conference programme with a great diversity of neat projects and results to showcase the cutting edge of mobility.

Paper: LightRing: Always-Available 2D Input on Any Surface

In this modern world bristling with on-the-go-go-go mobile activity, the dream of an always-available pointing device has long been held as a sort of holy grail of ubiquitous computing.

Ubiquitous computing, as futurists use the term, refers to the once-farfetched vision where computing pervades everything, everywhere, in a sort of all-encompassing computational nirvana of socially-aware displays and sensors that can respond to our every whim and need.

From our shiny little phones.

To our dull beige desktop computers.

To the vast wall-spanning electronic whiteboards of a future largely yet to come.

How will we interact with all of these devices as we move through the daily routine of this rapidly approaching future? As we encounter computing in all its many forms — carried on our person, as well as enmeshed in the digitally enhanced architecture of walls, desktops, and surfaces all around?

Enter LightRing, our early take on one possible future for ubiquitous interaction.

LightRing device on a supporting surface

By virtue of being a ring always worn on the finger, LightRing travels with us and is always present.

By virtue of some simple sensing and clever signal processing, LightRing fits into an extremely compact form-factor while providing a straightforward pointing modality for interacting with devices.

At present, we primarily consider LightRing as it would be configured to interact with a situated display, such as a desktop computer, or a presentation projected against a wall at some distance.

The user moves their index finger, angling left and right, or flexing up and down by bending at the knuckle. Simple stuff, I know.

But unlike a mouse, it’s not anchored to any particular computer.

It travels with you.

It’s a go-everywhere interaction modality.

Close-up of LightRing and hand angles inferred from sensors

Left: The degrees-of-freedom detected by the LightRing sensors. Right: Conceptual mapping of hand movement to the sensed degrees of freedom. LightRing then combines these to support 2D pointing at targets on a display, or other interactions.

LightRing senses these finger movements — using a one-dimensional gyroscope to capture the left-right movement, and an infrared emitter-sensor pair to capture the proximity of the flexing finger joint — and combines them to support a cursor-control mode similar to how you would hold and move a mouse on a desktop.

Except there’s no mouse at all.

And there needn’t even be a desktop, as you can see in the video embedded below.

LightRing just senses the movement of your finger.  You can make the pointing motions on a tabletop, sure, but you can just as easily do them on a wall. Or on your pocket. Or a handheld clipboard.

All the sensing is relative, so LightRing always knows how to interpret your motions to control a 2D cursor on a display. Once the LightRing has been paired with a situated device, this lets you point at targets even if the display itself is beyond your physical reach. You can also sketch or handwrite characters with your finger — another scenario we have explored in depth on smartphones and even watches.
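If you’re curious how that relative mapping might look in code, here’s a little sketch. It’s my own reconstruction for illustration, with made-up gains and signal names, not the actual prototype’s code:

```python
# Illustrative sketch of relative sensing mapped to 2D cursor motion.
# gyro_rate: angular velocity (deg/s) from the one-axis gyro (left-right).
# ir_proximity: reading from the infrared emitter/sensor pair, which
# varies as the finger flexes at the knuckle (up-down).
# The gain constants are invented for the example.
def update_cursor(cursor, gyro_rate, ir_proximity, prev_proximity, dt,
                  gain_x=8.0, gain_y=400.0):
    dx = gyro_rate * dt * gain_x                   # yaw -> horizontal
    dy = (ir_proximity - prev_proximity) * gain_y  # flexion -> vertical
    cursor.move_by(dx, dy)  # purely relative, just like a mouse
```

The key design point is that everything is relative, which is exactly what lets the same motions work against a tabletop, a wall, or a pocket.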

The trick to the LightRing is that it can automatically, and very naturally, calibrate itself to your finger’s range of motion if you just swirl your finger. From that circular motion LightRing can work backwards from the sensor values to how your finger is moving, assuming it is constrained to (roughly) a 2D plane. And that, combined with a button-press or finger touch on the ring itself, is enough to provide an effective input device.
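In the spirit of a sketch (and only a sketch; the real signal processing surely has more subtlety), the swirl calibration boils down to recording the sensor extremes traced out by that circular motion, then normalizing subsequent readings against them. The names and numbers here are mine:

```python
# Illustrative sketch of the swirl calibration: during the circular
# gesture, record (angle, proximity) pairs and fit each axis's range,
# so raw readings can later be normalized onto a rough 2D plane.
def calibrate_from_swirl(samples):
    """samples: list of (angle, proximity) pairs captured during a swirl."""
    angles = [a for a, _ in samples]
    proxs = [p for _, p in samples]
    return {"angle_lo": min(angles), "angle_hi": max(angles),
            "prox_lo": min(proxs), "prox_hi": max(proxs)}

def normalize(value, lo, hi):
    """Map a raw reading into [0, 1] given the calibrated extremes."""
    span = (hi - lo) or 1e-6  # guard against a degenerate swirl
    return min(max((value - lo) / span, 0.0), 1.0)
```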

The LightRing, as we have prototyped it now, is just one early step in the process. There’s a lot more we could do with this device, and many more practical problems that would need to be resolved to make it a useful adjunct to everyday devices — and to tap its full potential.

But my co-author Wolf Kienzle and I are working on it.

And hopefully, before too much longer now, we’ll have further updates on even more clever and fanciful stuff that we can do through this one tiny keyhole into this field of dreams, the verdant golden country of ubiquitous computing.

_____________________________________________________

Kienzle, W., Hinckley, K., LightRing: Always-Available 2D Input on Any Surface. In Proceedings of the 27th ACM Symposium on User Interface Software and Technology (UIST 2014), Honolulu, Hawaii, Oct. 5-8, 2014, pp. 157-160. [PDF] [video.mp4 TBA] [Watch on YouTube]

Watch LightRing video on YouTube

Project: The Analog Keyboard: Text Input for Small Devices

With the big meaty man-thumbs that I sport, touchscreen typing — even on a full-size tablet computer — can be challenging for me.

Take it down to a phone, and I have to spend more time checking for typographical errors and embarrassing auto-miscorrections than I do actually typing in the text.

But typing on a watch?!?

I suppose you could cram an entire QWERTY layout, all those keys, onto a tiny 1.6″ screen, but then typing would become an exercise in microsurgery, with the augmentation of a high-power microscope an absolute necessity.

But if you instead re-envision ‘typing’ in a much more direct, analog fashion, then it’s entirely possible. And in a highly natural and intuitive manner to boot.

Enter the Analog Keyboard Project.

Analog Watch Keyboard on Moto 360 (round screen)

Wolf Kienzle, a frequent collaborator of mine, just put out an exciting new build of our touchscreen handwriting technology optimized for watches running the Android Wear Platform, including the round Moto 360 device that everyone seems so excited about.

Get all the deets — and the download — from Wolf’s project page, available here.

This builds on the touchscreen writing prototype we first presented at the MobileHCI 2013 conference, where the work earned an Honorable Mention Award — here optimized in a number of ways to fit the tiny screen (and small memory footprint) of current watches.

All you have to do is scrawl the letters that you want to type — in a fully natural manner, not in some inscrutable secret computer graffiti-code like in those dark days of the late 1990s — and the prototype is smart enough to transcribe your finger-writing to text.

It even works for numbers and common punctuation symbols like @ and #, indispensable tools for the propagation of internet memes and goofy cat videos these days.

Writing numbers and punctuation symbols on the Analog Keyboard

However, to fit the resource-constrained environment of the watch, the prototype currently only supports lowercase letters.

Because we all know that when it comes to the internet, UPPERCASE IS JUST FOR TROLLZ anyway.

Best of all, if you have an Android Wear device you can try it out for yourself. Just side-load the Analog Keyboard app onto your watch and once again you can write the analog way, the way real men did in the frontier days. Before everyone realized how cool digital watches were, and all we had to express our innermost desires was a jar of octopus ink and a sharpened bald eagle feather. Or something like that.

Y’know, the things that made America great.

Only now with more electrons.

You can rest easy, though, if these newfangled round watches like the Moto 360 are just a little bit too fashionable for you. As shown below, it works just fine on the more chunky square-faced designs such as the Samsung Gear Live as well.

Analog Keyboard on Samsung Gear Live watch

Check out the video embedded below, and if you have a supported Android Wear device, download the prototype and give it a try. I know Wolf would love to get your feedback on what it feels like to use the Analog Keyboard for texting on your watch.

Bring your timepiece into the 21st century.

You’ll be the envy of every digital watch nerd for miles around.

Besides: it’s clearly an idea whose time has come.

Kienzle, W., Hinckley, K., The Analog Keyboard Project. Handwriting keyboard download for Android Wear. Released October 2014. [Project Details and Download] [Watch demo on YouTube]

 

Watch Analog Keyboard video on YouTube

Paper: Writing Handwritten Messages on a Small Touchscreen

Here’s the last of our three papers at the MobileHCI 2013 conference. This was a particularly fun project, spearheaded by my colleague Wolf Kienzle, looking at a clever way to do handwriting input on a touchscreen using just your finger.

In general I’m a fan of using an actual stylus for handwriting, but in the context of mobile there are many “micro” note-taking tasks, akin to scrawling a note to yourself on a post-it, that wouldn’t justify unsheathing a pen even if your device had one.

The very cool thing about this approach is that it allows you to enter overlapping multi-stroke characters using the whole screen, without resorting to something like Palm’s old Graffiti writing or full-on handwriting recognition.
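One way to appreciate why that’s tricky: when characters overlap in space, the usual spatial cues for segmenting them disappear, so strokes have to be grouped some other way. The toy sketch below illustrates the general flavor with a simple time-gap rule; it is my own invention, not necessarily the segmentation the paper uses.

```python
# Toy illustration: group time-ordered strokes into characters by pauses,
# since overlapping ink removes the usual spatial cues for segmentation.
# The 350 ms pause threshold is an invented example value.
def group_strokes(strokes, pause=0.35):
    """strokes: time-ordered list of (start_time, end_time, points)."""
    characters, current = [], []
    for stroke in strokes:
        if current and stroke[0] - current[-1][1] > pause:
            characters.append(current)  # long pause: character boundary
            current = []
        current.append(stroke)
    if current:
        characters.append(current)
    return characters  # each group then goes to the recognizer
```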

Writing handwritten messages on a small touchscreen.

The interface also incorporates some nice fluid gestures for entering spaces between words, backspacing to delete previous strokes, or transitioning to a freeform drawing mode for inserting little sketches or smiley-faces into your instant messages, as seen above.

This paper also had the distinction of receiving an Honorable Mention Award for best paper at MobileHCI 2013. We’re glad the review committee liked our paper and saw its contributions as noteworthy, as it were (pun definitely intended).

Kienzle, W., Hinckley, K., Writing Handwritten Messages on a Small Touchscreen. In Proceedings of the ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 179-182. Honorable Mention Award (awarded to top 5% of all papers). [PDF] [video MP4] [Watch on YouTube – coming soon.]