Category Archives: Published Papers

Paper: Writing Handwritten Messages on a Small Touchscreen

Here’s the last of our three papers at the MobileHCI 2013 conference. This was a particularly fun project, spearheaded by my colleague Wolf Kienzle, looking at a clever way to do handwriting input on a touchscreen using just your finger.

In general I’m a fan of using an actual stylus for handwriting, but in the context of mobile there are many “micro” note-taking tasks, akin to scrawling a note to yourself on a post-it, that wouldn’t justify unsheathing a pen even if your device had one.

The very cool thing about this approach is that it allows you to enter overlapping multi-stroke characters using the whole screen, and without resorting to something like Palm’s old Graffiti writing or full-on handwriting recognition.

[Figure: writing handwritten messages on a small touchscreen]

The interface also incorporates some nice fluid gestures for entering spaces between words, backspacing to delete previous strokes, or transitioning to a freeform drawing mode for inserting little sketches or smiley-faces into your instant messages, as seen above.

This paper also had the distinction of receiving an Honorable Mention Award for best paper at MobileHCI 2013. We’re glad the review committee liked our paper and saw its contributions as noteworthy, as it were (pun definitely intended).

Kienzle, W., Hinckley, K., Writing Handwritten Messages on a Small Touchscreen. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 179-182. Honorable Mention Award (Awarded to top 5% of all papers). [PDF] [video - MP4] [Watch on YouTube - coming soon.]

Paper: A Tap and Gesture Hybrid Method for Authenticating Smartphone Users

Arif, A., Pahud, M., Hinckley, K., Buxton, W., A Tap and Gesture Hybrid Method for Authenticating Smartphone Users (Poster). In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 486-491. [Paper PDF] [Poster Presentation PDF]

Paper: Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation

I have three papers coming out at MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which convenes this week in Munich. It’s one of the great small conferences that focuses exclusively on mobile interaction, which of course is a long-standing interest of mine.

This post focuses on the first of those papers, and right behind it will be short posts on the other two projects that my co-authors are presenting this week.

I’ve explored many directions for viewing and moving through information on small screens, often motivated by novel hardware sensors as well as basic insights about human motor and cognitive capabilities. And I also have a long history in three-dimensional (spatial) interaction, virtual environments, and the like. But despite doing this stuff for decades, every once in a while I still get surprised by experimental results.

That’s just part of what keeps this whole research gig fun and interesting. If all the answers were simple and obvious, there would be no point in doing the studies.

In this particular paper, my co-authors and I took a closer look at a long-standing spatial, or through-the-lens, metaphor for interaction, akin to navigating documents (or other information spaces) by looking through your mobile as if it were a camera viewfinder, and subjected it to experimental scrutiny.

While this basic idea of using your mobile as a viewport onto a larger virtual space has been around for a long time, the idea hasn’t been subjected to careful scrutiny in the context of moving a mobile device’s small screen as a way to view virtually larger documents. And the potential advantages of the approach have not been fully articulated and realized either.

This style of navigation (panning and zooming control) on mobile devices has great promise because it allows you to offload the navigation task itself to your nonpreferred hand, leaving your preferred hand free to do other things, like carrying bags of groceries, or to perform additional tasks such as annotation, selection, and tapping commands on top of the resulting views.

But, as our study also shows, the approach is not without its challenges: sensing the spatial position of the device and devising an appropriate input mapping are both difficult problems that need further progress before we can fully take advantage of this way of moving through information on a mobile device. For the time being, at least, the traditional touch gestures of pinch-to-zoom and drag-to-pan still appear to offer the most efficient solution for general-purpose navigation tasks.
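One of those challenges is devising an appropriate input mapping. To give a flavor of what such a mapping might look like, here is a minimal sketch of one possible position-to-view mapping. To be clear, this is not the mapping from our paper; the gains, the clamping, and the way the device position is sensed are all illustrative assumptions.

```python
# Minimal sketch of a "through-the-lens" position-to-view mapping.
# Assumption: the device position is sensed in meters relative to a calibrated
# origin. The gains and zoom limits are made-up values for illustration only.

PAN_GAIN = 2000.0   # pixels of document panning per meter of lateral motion
ZOOM_GAIN = 4.0     # zoom-factor change per meter of motion toward/away
MIN_ZOOM, MAX_ZOOM = 0.25, 8.0

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def view_from_position(x_m, y_m, z_m, origin=(0.0, 0.0, 0.0)):
    """Map a sensed device position (meters) to a (pan_x, pan_y, zoom) view.

    Moving the device left/right or up/down pans the document; moving it
    toward or away from the virtual document plane zooms in or out.
    """
    dx, dy, dz = x_m - origin[0], y_m - origin[1], z_m - origin[2]
    zoom = clamp(1.0 + dz * ZOOM_GAIN, MIN_ZOOM, MAX_ZOOM)
    pan_x = dx * PAN_GAIN / zoom   # keep panning speed consistent across zoom levels
    pan_y = dy * PAN_GAIN / zoom
    return pan_x, pan_y, zoom

# Example: device held 10 cm to the right of, and 5 cm closer than, the origin.
print(view_from_position(0.10, 0.0, 0.05))
```

Even in a toy form like this, small choices (gains, clamping, whether panning speed is tied to zoom level) change how controllable the navigation feels, which is exactly why the input mapping deserves careful study.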

Pahud, M., Hinckley, K., Iqbal, S., Sellen, A., and Buxton, B., Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 113-122. [PDF] [video - MP4]

Watch Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation on YouTube

Paper: Motion and Context Sensing Techniques for Pen Computing

I continue to believe that stylus input — annotations, sketches, mark-up, and gestures — will be an important aspect of interaction with slate computers in the future, particularly when used effectively and convincingly with multi-modal pen+touch input. It also seems that every couple of years I stumble across an interesting new use or set of techniques for motion sensors, and this year proved to be no exception.

Thus, it should come as no surprise that my latest project has continued to push in this direction, exploring the possibilities for pen interaction when the physical stylus itself is augmented with inertial sensors including three-axis accelerometers, gyros, and magnetometers.

[Figure 1: the sensor pen hardware]

In recent years such sensors have become integrated with all manner of gadgets, including smartphones and tablets, and it is increasingly common for microprocessors to include such sensors directly on the die. Hence, in my view of the world, we are just at the cusp of sensor-rich stylus devices becoming commercially feasible, so it is only natural to consider how such sensors afford new interactions, gestures, or context-sensing techniques when integrated directly with an active (powered) stylus on pen-operated devices.

In collaboration with Xiang ‘Anthony’ Chen and Hrvoje Benko I recently published a paper exploring motion-sensing capabilities for electronic styluses, which takes a first look at some techniques for such a device. With some timely help from Tom Blank’s brilliant devices team at Microsoft Research, we built a custom stylus — fully wireless and powered by an AAAA battery — that integrates these sensors.

The techniques we explore range from very simple but clever things, such as reminding users if they have left the pen behind (a common problem with pen-based devices), to fun new techniques that emulate physical media, such as the gesture of striking a loaded brush on one’s finger in water media.

[Figure: ink spatter]
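As a concrete, if simplified, illustration of the first of those ideas, the pen-left-behind reminder, here is a minimal sketch of how one might flag a stationary pen while its host tablet is carried away. This is not our actual detector; the thresholds, window size, and notification hook are all illustrative assumptions.

```python
# Minimal sketch of a "pen left behind" reminder, assuming the tablet can
# periodically poll its own motion and the wireless pen's motion.
# Thresholds, sample window, and the notify() hook are illustrative assumptions.

from collections import deque

WINDOW = 50                  # number of recent samples to consider (~a few seconds)
PEN_STILL_THRESHOLD = 0.02   # accelerometer variance below which the pen is "at rest"
TABLET_MOVE_THRESHOLD = 0.5  # accelerometer variance above which the tablet is "in motion"

pen_accel = deque(maxlen=WINDOW)
tablet_accel = deque(maxlen=WINDOW)

def variance(samples):
    if len(samples) < 2:
        return 0.0
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)

def on_sensor_sample(pen_accel_mag, tablet_accel_mag, notify):
    """Call once per sensor sample with accelerometer magnitudes (gravity removed)."""
    pen_accel.append(pen_accel_mag)
    tablet_accel.append(tablet_accel_mag)
    pen_is_still = variance(pen_accel) < PEN_STILL_THRESHOLD
    tablet_is_moving = variance(tablet_accel) > TABLET_MOVE_THRESHOLD
    if len(pen_accel) == WINDOW and pen_is_still and tablet_is_moving:
        notify("Don't forget your pen!")

# Example: the pen lies still while the tablet is being carried away.
for i in range(WINDOW):
    on_sensor_sample(0.0, 1.0 if i % 2 == 0 else -1.0, notify=print)
```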

Check out the video below for an overview of these and some of the other techniques we have come up with so far, or read more about it in the technical paper linked below.

We are continuing to work in this area, and have lots more ideas that go beyond what we were able to accomplish in this first stage of the project, so stay tuned for future developments along these lines.

Hinckley, K., Chen, X., and Benko, H., Motion and Context Sensing Techniques for Pen Computing. In Proc. Graphics Interface 2013 (GI '13), Regina, Saskatchewan, Canada, May 29-31, 2013. Canadian Information Processing Society, Toronto, Ont., Canada. [PDF] [video - MP4]

Watch Motion and Context Sensing Techniques for Pen Computing video on YouTube

Paper: Implicit Bookmarking: Improving Support for Revisitation in Within-Document Reading Tasks

The March 2013 issue of the International Journal of Human-Computer Studies features a clever new technique for automatically (implicitly) bookmarking recently-visited locations in documents, which (as our paper reveals) eliminates 66% of all long-distance scrolling actions for users in active reading scenarios.

The technique, devised by Chun Yu (Tsinghua University Department of Computer Science and Technology, Beijing, China) in collaboration with Ravin Balakrishnan, myself, Tomer Moscovich, and Yuanchun Shi, requires only minimal modification of existing scrolling behavior in document readers — in fact, our prototype works by implementing a simple layer on top of the standard Adobe PDF Reader.
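To give a rough sense of the general idea (the actual algorithm and its parameters are described in the paper), here is a minimal sketch of implicit bookmarking: whenever the reader makes a long-distance jump, the position they came from is saved so that a single action can take them back. The jump threshold and the page-based position model here are illustrative assumptions, not values from our study.

```python
# Rough sketch of the general idea behind implicit bookmarking: when the
# reader makes a long-distance jump, remember where they came from so a
# single action can take them back. The jump threshold, stack behavior, and
# page model are illustrative assumptions, not the parameters from the paper.

JUMP_THRESHOLD_PAGES = 2.0   # scroll distance (in pages) that counts as a "jump"

class ImplicitBookmarks:
    def __init__(self):
        self.current_pos = 0.0   # document position, in pages
        self.bookmarks = []      # stack of implicitly saved positions

    def on_scroll(self, new_pos):
        """Record an implicit bookmark whenever the reader jumps far away."""
        if abs(new_pos - self.current_pos) >= JUMP_THRESHOLD_PAGES:
            self.bookmarks.append(self.current_pos)
        self.current_pos = new_pos

    def go_back(self):
        """Return to the most recently bookmarked location, if any."""
        if self.bookmarks:
            self.current_pos = self.bookmarks.pop()
        return self.current_pos

# Example: the reader jumps from page 0.5 to a reference on page 40, then returns.
reader = ImplicitBookmarks()
reader.on_scroll(0.5)    # short move: no bookmark recorded
reader.on_scroll(40.0)   # long jump: position 0.5 is bookmarked implicitly
print(reader.go_back())  # -> 0.5
```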

The technique would be particularly valuable for students or information workers whose activities necessitate deep engagement with texts such as technical documentation, non-fiction books on e-readers, or, of course, my favorite pastime, scientific papers.

Yu, C., Balakrishnan, R., Hinckley, K., Moscovich, T., Shi, Y., Implicit bookmarking: Improving support for revisitation in within-document reading tasks. International Journal of Human-Computer Studies, Vol. 71, Issue 3, March 2013, pp. 303-320. [Definitive Version] [Author's draft PDF -- may contain discrepancies]

Paper: Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer

I collaborated on a nifty project with the fine folks from Saul Greenberg’s group at the University of Calgary exploring the emerging possibilities for devices to sense and respond to their digital ecology. When devices have fine-grained sensing of their spatial relationships to one another, as well as to the people in that space, it brings about new ways for users to interact with the resulting system of cooperating devices and displays.
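To make the awareness-to-reveal-to-transfer progression in the title concrete, here is a toy sketch that maps a sensed device-to-device distance onto engagement stages. The stage names follow the paper’s framing, but the distance thresholds and the example behaviors in the comments are illustrative assumptions rather than values from the actual system.

```python
# Toy sketch of proximity-driven gradual engagement between two devices.
# The three stages follow the paper's framing (awareness, progressive reveal,
# information transfer); the distance thresholds and example behaviors are
# illustrative assumptions.

AWARENESS_RANGE_M = 3.0   # devices merely indicate each other's presence
REVEAL_RANGE_M = 1.0      # devices progressively reveal available content
TRANSFER_RANGE_M = 0.3    # devices allow direct information transfer

def engagement_stage(distance_m):
    """Map a sensed device-to-device distance (meters) to an engagement stage."""
    if distance_m <= TRANSFER_RANGE_M:
        return "transfer"   # e.g., enable drag-and-drop between screens
    if distance_m <= REVEAL_RANGE_M:
        return "reveal"     # e.g., show thumbnails of what the other device offers
    if distance_m <= AWARENESS_RANGE_M:
        return "awareness"  # e.g., show a subtle indicator that a device is nearby
    return "none"

for d in (5.0, 2.0, 0.6, 0.2):
    print(f"{d:.1f} m -> {engagement_stage(d)}")
```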

This fine-grained sensing approach makes for an interesting contrast to what Nic Marquardt and I explored in GroupTogether, which intentionally took a more conservative approach towards the sensing infrastructure — with the idea in mind that sometimes, one can still do a lot with very little (sensing).

Taken together, the two papers nicely bracket some possibilities for the future of cross-device interactions and intelligent environments.

This work really underscores that we are still largely in the dark ages with regard to such possibilities for digital ecologies. As new sensors and sensing systems make this kind of rich awareness of the surround of devices and users possible, our devices, operating systems, and user experiences will grow to encompass the expanded horizons of these new possibilities as well.

The full citation and the link to our scientific paper are as follows:

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S. and Hinckley, K., Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer. In Proceedings of ACM Interactive Tabletops & Surfaces (ITS 2012), Boston, MA, USA, November 11-14, 2012. 10 pp. [PDF] [video - MP4]

Watch the Gradual Engagement via Proximity video on YouTube

Paper: Cross-Device Interaction via Micro-mobility and F-formations (“GroupTogether”)

Marquardt, N., Hinckley, K., and Greenberg, S., Cross-Device Interaction via Micro-mobility and F-formations. In ACM UIST 2012 Symposium on User Interface Software and Technology (UIST '12), Cambridge, MA, Oct. 7-10, 2012, pp. (TBA). ACM, New York, NY, USA. [PDF] [video - WMV]. Known as the GroupTogether system.

See also my post with some further perspective on the GroupTogether project.

Watch the GroupTogether video on YouTube

Paper: Informal Information Gathering Techniques for Active Reading

This is my latest project, which I will present tomorrow (May 9th) at the CHI 2012 Conference on Human Factors in Computing Systems.

I’ll have a longer post up about this project after I return from the conference, but for now, enjoy the video. I also link below to the PDF of our short paper, which has a nice discussion of the motivation and design rationale for this work.

Above all else, I hope this work makes clear that there is still tons of room for innovation in how we interact with the e-readers and tablet computers of the future, as well as in how we consume and manipulate content to produce new creative works.

Hinckley, K., Bi, X., Pahud, M., Buxton, B., Informal Information Gathering Techniques for Active Reading. 4pp Note. In Proc. CHI 2012 Conf. on Human Factors in Computing Systems, Austin, TX, May 5-10, 2012. [PDF]

[Watch Informal Information Gathering Techniques for Active Reading on YouTube]

Paper: CodeSpace: Touch + Air Gesture Hybrid Interactions for Supporting Developer Meetings

Bragdon, A., DeLine, R., Hinckley, K., and Morris, M. R., Code space: Touch + Air Gesture Hybrid Interactions for Supporting Developer Meetings. In Proc. ACM International Conference on Interactive Tabletops and Surfaces (ITS '11), Kobe, Japan, November 13-16, 2011, pp. 212-221. ACM, New York, NY, USA. [PDF] [video - WMV]. As featured on Engadget and many other online forums.

Watch CodeSpace video on YouTube

Paper: Enhancing Naturalness of Pen-and-Tablet Drawing through Context Sensing

Sun, M., Cao, X., Song, H., Izadi, S., Benko, H., Guimbretiere, F., Ren, X., and Hinckley, K., Enhancing Naturalness of Pen-and-Tablet Drawing through Context Sensing. In Proc. ACM International Conference on Interactive Tabletops and Surfaces (ITS '11), Kobe, Japan, November 13-16, 2011, pp. 212-221. ACM, New York, NY, USA. [PDF] [video - WMV]

Watch Enhancing Naturalness of Pen through Context Sensing video on YouTube