Paper: Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation

I have three papers coming out this week at MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which convenes in Munich. It’s one of the great small conferences that focuses exclusively on mobile interaction, which of course is a long-standing interest of mine.

This post focuses on the first of those papers, and right behind it will be short posts on the other two projects that my co-authors are presenting this week.

I’ve explored many directions for viewing and moving through information on small screens, often motivated by novel hardware sensors as well as basic insights about human motor and cognitive capabilities. And I also have a long history in three-dimensional (spatial) interaction, virtual environments, and the like. But despite doing this stuff for decades, every once in a while I still get surprised by experimental results.

That’s just part of what keeps this whole research gig fun and interesting. If all the answers were simple and obvious, there would be no point in doing the studies.

In this particular paper, my co-authors and I took a closer look at a long-standing spatial, or through-the-lens, metaphor for interaction, akin to navigating documents (or other information spaces) by looking through your mobile as if it were a camera viewfinder, and subjected it to experimental scrutiny.

While this basic idea of using your mobile as a viewport onto a larger virtual space has been around for a long time, it has not been carefully evaluated in the context of moving a mobile device’s small screen as a way to view virtually larger documents. Nor have the potential advantages of the approach been fully articulated and realized.

This style of navigation (panning and zooming control) on mobile devices has great promise because it allows you to offload the navigation task itself to your nonpreferred hand, leaving your preferred hand free to do other things, whether carrying a bag of groceries or performing additional tasks such as annotation, selection, and tapping commands on top of the resulting views.

But, as our study also shows, the approach is not without its challenges: sensing the spatial position of the device and devising an appropriate input mapping are both difficult problems, and both will need further progress before we can take full advantage of this way of moving through information on a mobile device. For the time being, at least, the traditional touch gestures of pinch-to-zoom and drag-to-pan still appear to offer the most efficient solution for general-purpose navigation tasks.
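To make the input-mapping challenge concrete, here is one hypothetical way a position-to-viewport mapping might look, sketched purely for illustration: device x/y displacement pans the view, and moving the device closer to or farther from the reference plane zooms it. The gain, reference distance, and pan/zoom coupling are all invented for this sketch, not taken from the paper.

```python
# Hypothetical through-the-lens input mapping: the sensed 3D position of
# the mobile device is translated into a pan/zoom viewport on a larger
# virtual document. All constants here are illustrative placeholders.
def viewport_from_device_pose(x_mm, y_mm, z_mm, ref_z_mm=300, gain=2.0):
    """Map a sensed device position (in mm) to a document viewport."""
    zoom = ref_z_mm / max(z_mm, 1)   # moving closer to the document zooms in
    pan_x = gain * x_mm * zoom       # pan scales with zoom so that physical
    pan_y = gain * y_mm * zoom       # motion feels consistent on screen
    return pan_x, pan_y, zoom
```

Even a toy mapping like this exposes the design tensions the study surfaced: how much gain to apply, and how to keep pan and zoom from fighting each other as the device moves.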

Pahud, M., Hinckley, K., Iqbal, S., Sellen, A., and Buxton, B., Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 113-122. [PDF] [video - MP4]

Watch the Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation video on YouTube

Short Story: The Totem of Curtained Minds

My latest short story appears today in the new issue of Fiction River: Time Streams, a collection of 15 great time travel stories by newcomers and established professional writers alike, edited by Dean Wesley Smith.

I’ve really enjoyed the first two volumes of Fiction River, so I hope you’ll check it out, and of course I hope that you enjoy my contribution, The Totem of Curtained Minds, as well. It’s really an honor to be included in this volume with so many other great writers, pulled together by a widely respected editor like Dean.

The Totem of Curtained Minds is a moving story with a strong theme that I wrote in a paroxysm of blind inspiration from nothing more than the title. I often write short stories this way, pulling ideas from thin air and just letting the story come to me as it must, which is great fun and a great way to arrive at some truly unique ideas.


“The Totem of Curtained Minds” by Ken Hinckley.

In Fiction River: Time Streams, Vol. 1, No. 3, August 20th, 2013.

Edited by Dean Wesley Smith (series editors: Dean Wesley Smith & Kristine Kathryn Rusch).

Now available in electronic and trade paper editions from your local bookseller, Amazon, B&N, and Smashwords.

Update: Time Streams, including my story, is now also available in audio from

Paper: Motion and Context Sensing Techniques for Pen Computing

I continue to believe that stylus input — annotations, sketches, mark-up, and gestures — will be an important aspect of interaction with slate computers in the future, particularly when used effectively and convincingly with multi-modal pen+touch input. It also seems that every couple of years I stumble across an interesting new use or set of techniques for motion sensors, and this year proved to be no exception.

Thus, it should come as no surprise that my latest project has continued to push in this direction, exploring the possibilities for pen interaction when the physical stylus itself is augmented with inertial sensors including three-axis accelerometers, gyros, and magnetometers.


In recent years such sensors have become integrated with all manner of gadgets, including smart phones and tablets, and it is increasingly common for microprocessors to include such sensors directly on the die. Hence in my view of the world, we are just at the cusp of sensor-rich stylus devices becoming commercially feasible, so it is only natural to consider how such sensors afford new interactions, gestures, or context-sensing techniques when integrated directly with an active (powered) stylus on pen-operated devices.

In collaboration with Xiang ‘Anthony’ Chen and Hrvoje Benko I recently published a paper exploring motion-sensing capabilities for electronic styluses, which takes a first look at some techniques for such a device. With some timely help from Tom Blank’s brilliant devices team at Microsoft Research, we built a custom stylus — fully wireless and powered by an AAAA battery — that integrates these sensors.

These techniques range from simple but clever ideas, such as reminding users if they have left the pen behind (a common problem with pen-based devices), to fun new techniques that emulate physical media, such as the gesture of striking a loaded brush on one’s finger in water media.
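The pen-left-behind reminder hints at how little logic such a sensor-augmented stylus needs for a useful feature. The paper’s actual detection method is not reproduced here; the sketch below is a hypothetical simplification that flags the case where the tablet is moving away while the pen has sat still, using made-up thresholds on accelerometer variance.

```python
# Hypothetical "pen left behind" check: if the tablet is in motion but
# the pen's accelerometer has barely varied for a while, remind the user.
# Thresholds are illustrative, not from the paper.
import statistics

STILL_THRESHOLD = 0.05   # accel-magnitude variance below which a device is "still"
STILL_SECONDS = 30       # how long the pen must be still to count as left behind

def is_still(accel_magnitudes, threshold=STILL_THRESHOLD):
    """A device is 'still' if its recent acceleration magnitudes barely vary."""
    return statistics.pvariance(accel_magnitudes) < threshold

def pen_left_behind(pen_accel, tablet_accel, pen_still_seconds):
    """Warn if the tablet is moving while the pen has been still for a while."""
    return (not is_still(tablet_accel)
            and is_still(pen_accel)
            and pen_still_seconds >= STILL_SECONDS)
```

In practice a real implementation would also need the wireless link (or its loss) to judge whether pen and tablet are actually separating, rather than motion alone.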


Check out the video below for an overview of these and some of the other techniques we have come up with so far, or read more about it in the technical paper linked below.

We are continuing to work in this area, and have lots more ideas that go beyond what we were able to accomplish in this first stage of the project, so stay tuned for future developments along these lines.

Hinckley, K., Chen, X., and Benko, H., Motion and Context Sensing Techniques for Pen Computing. In Proc. Graphics Interface 2013 (GI ’13), Regina, Saskatchewan, Canada, May 29-31, 2013. Canadian Information Processing Society, Toronto, Ont., Canada. [PDF] [video - MP4].

Watch Motion and Context Sensing Techniques for Pen Computing video on YouTube

Short Story: The Ostracons of Europa

"The Ostracons of Europa" in Nature


A measure of life.

The current issue of Nature features my short story The Ostracons of Europa, a nifty story-of-revelation set on (you guessed it) Jupiter’s mysterious moon Europa.

The story appears in Nature’s long-running (and award-winning) Futures column of short speculative fictions, edited by Colin Sullivan. I hope you enjoy it.

Update: The editors at Nature picked my story as their favorite of the month for July 2013 and featured it in their free podcast, read by Henry Gee! Also available as an MP3 Download.

I’ve also got a short story coming out next month (Aug. 20th, 2013) in Fiction River: Time Streams, edited by Dean Wesley Smith. For details check out my Fiction tab.

“The Ostracons of Europa” by Ken Hinckley. In Nature, Vol. 499, No. 7456, p. 120. July 3rd, 2013. Futures column. [Available to read online for free]

[Also available as a Nature Futures podcast and MP3 Download.]

Published by Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved. DOI: 10.1038/499120a.

Paper: Implicit Bookmarking: Improving Support for Revisitation in Within-Document Reading Tasks

The March 2013 issue of the International Journal of Human-Computer Studies features a clever new technique for automatically (implicitly) bookmarking recently-visited locations in documents, which (as our paper reveals) eliminates 66% of all long-distance scrolling actions for users in active reading scenarios.

The technique, devised by Chun Yu (Tsinghua University Department of Computer Science and Technology, Beijing, China) in collaboration with Ravin Balakrishnan, myself, Tomer Moscovich, and Yuanchun Shi, requires only minimal modification of existing scrolling behavior in document readers; in fact, our prototype works by implementing a simple layer on top of the standard Adobe PDF Reader.
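The core idea can be conveyed in a few lines. The sketch below is only a hypothetical simplification of the published technique: it assumes (for illustration) that any scroll jump longer than one screen’s worth of content marks a departure point worth remembering, and that a single “go back” command pops the most recent one.

```python
# Minimal sketch of implicit bookmarking: long scroll jumps silently
# record the position the reader left, so a "go back" command can
# return there without manual bookmarking. Threshold is illustrative.
JUMP_THRESHOLD = 1.0  # jumps longer than this many screen heights get bookmarked

class ImplicitBookmarks:
    def __init__(self, screen_height=1000):
        self.screen_height = screen_height
        self.position = 0
        self.bookmarks = []  # stack of positions left behind by long jumps

    def scroll_to(self, new_position):
        """Implicitly record the old position whenever the jump is long."""
        if abs(new_position - self.position) > JUMP_THRESHOLD * self.screen_height:
            self.bookmarks.append(self.position)
        self.position = new_position

    def go_back(self):
        """Return to the most recently recorded implicit bookmark."""
        if self.bookmarks:
            self.position = self.bookmarks.pop()
        return self.position
```

Layering exactly this kind of logic over an unmodified reader is what made the prototype on top of Adobe PDF Reader feasible.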

The technique should be particularly valuable for students and information workers whose activities demand deep engagement with texts such as technical documentation, non-fiction books on e-readers, or, my favorite pastime of course, scientific papers.

Yu, C., Balakrishnan, R., Hinckley, K., Moscovich, T., and Shi, Y., Implicit Bookmarking: Improving Support for Revisitation in Within-Document Reading Tasks. International Journal of Human-Computer Studies, Vol. 71, Issue 3, March 2013, pp. 303-320. [Definitive Version] [Author's draft PDF -- may contain discrepancies]

Paper: Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer

I collaborated on a nifty project with the fine folks from Saul Greenberg’s group at the University of Calgary exploring the emerging possibilities for devices to sense and respond to their digital ecology. When devices have fine-grained sensing of their spatial relationships to one another, as well as to the people in that space, it brings about new ways for users to interact with the resulting system of cooperating devices and displays.

This fine-grained sensing approach makes for an interesting contrast to what Nic Marquardt and I explored in GroupTogether, which intentionally took a more conservative approach towards the sensing infrastructure — with the idea in mind that sometimes, one can still do a lot with very little (sensing).

Taken together, the two papers nicely bracket some possibilities for the future of cross-device interactions and intelligent environments.

This work really underscores that we are still largely in the dark ages with regard to such possibilities for digital ecologies. As new sensors and sensing systems make this kind of rich awareness of the surrounding devices and users possible, our devices, operating systems, and user experiences will grow to encompass the expanded horizons of these new possibilities as well.

The full citation and the link to our scientific paper are as follows:

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S., and Hinckley, K., Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer. In Proceedings of ACM Interactive Tabletops & Surfaces (ITS 2012), Boston, MA, USA, November 11-14. 10 pp. [PDF] [video - MP4].

Watch the Gradual Engagement via Proximity video on YouTube

GroupTogether — Exploring the Future of a Society of Devices

My latest paper discussing the GroupTogether system just appeared at the 2012 ACM Symposium on User Interface Software & Technology in Cambridge, MA.

GroupTogether video available on YouTube

I’m excited about this work — it really looks hard at what some of the next steps in sensing systems might be, particularly when one starts considering how users can most effectively interact with one another in the context of the rapidly proliferating Society of Devices we are currently witnessing.

I think our paper on the GroupTogether system, in particular, does a really nice job of exploring this with strong theoretical foundations drawn from the sociological literature.

F-formations are the various types of small groups that people form when engaged in a joint activity.

GroupTogether starts by considering the natural small-group behaviors adopted by people who come together to accomplish some joint activity. These small groups can take a variety of distinctive forms, and are known collectively in the sociological literature as f-formations. Think of those distinctive circles of people that form spontaneously at parties: typically they are limited to a maximum of about 5 people, the orientation of the participants clearly defines an area inside the group that is distinct from the rest of the environment outside the group, and there are fairly well-established social protocols for people entering and leaving the group.

A small group of two users as sensed by GroupTogether’s overhead Kinect depth-cameras.

GroupTogether also senses the subtle orientation cues of how users handle and posture their tablet computers. These cues are known as micro-mobility, a communicative strategy that people often employ with physical paper documents, such as when a sales representative orients a document towards you to direct your attention and indicate that it is your turn to sign.

Our system, then, is the first to put small-group f-formations, sensed via overhead Kinect depth-camera tracking, in play simultaneously with the micro-mobility of slate computers, sensed via embedded accelerometers and gyros.

The GroupTogether prototype sensing environment and set-up

GroupTogether uses f-formations to give meaning to the micro-mobility of slate computers. It understands which users have come together in a small group, and which users have not. So you can just tilt your tablet towards a couple of friends standing near you to share content, whereas another person who may be nearby but facing the other way — and thus clearly outside of the social circle of the small group — would not be privy to the transaction. Thus, the techniques lower the barriers to sharing information in small-group settings.
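The core rule combining the two sensing channels is simple to state. The sketch below is illustrative only, not GroupTogether’s actual code: the sensed f-formations and the tilt angle stand in for the Kinect tracking and the tablets’ inertial sensing, and the threshold is invented for this example.

```python
# Illustrative rule: a tilt gesture shares content only with members of
# the sender's sensed f-formation; people outside the group, however
# physically close, receive nothing. Threshold is a placeholder value.
TILT_THRESHOLD_DEG = 25  # hypothetical tilt angle that counts as a share gesture

def share_targets(sender, tilt_angle_deg, f_formations):
    """Return the users who receive content when `sender` tilts a tablet."""
    if tilt_angle_deg < TILT_THRESHOLD_DEG:
        return []  # no tilt gesture detected
    for group in f_formations:
        if sender in group:
            return [user for user in group if user != sender]
    return []  # sender is not part of any sensed small group
```

The point of the design is visible even in this toy form: the f-formation, not raw proximity, defines who is "inside" the transaction.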

Check out the video to see what these techniques look like in action, as well as to see how the system also considers groupings of people close to situated displays such as electronic whiteboards.

The full text of our scientific paper on GroupTogether and the citation are also available.

My co-author Nic Marquardt was the first author and delivered the talk. Saul Greenberg of the University of Calgary also contributed many great insights to the paper.

Image credits: Nic Marquardt