Category Archives: Scrolling & Document Navigation

ACM SIGMOBILE 2017 Test of Time Award for “Sensing Techniques for Mobile Interaction”

Recently the SIGMOBILE community recognized my turn-of-the-century research on mobile sensing techniques with one of their 2017 Test of Time Awards.

This was the paper (“Sensing Techniques for Mobile Interaction”) that first introduced techniques such as automatic screen rotation and raise-to-wake to mobile computing — features now taken for granted on the iPhones and tablets of the world.

The award committee described the contribution as follows:

This paper showed how combinations of simple sensors could be used to create rich mobile interactions that are now commonplace in mobile devices today. It also opened up people’s imaginations about how we could interact with mobile devices in the future, inspiring a wide range of research on sensor-based interaction techniques.


And so as not to miss the opportunity to have fun with the occasion, in the following video I reflect (at times irreverently) on the work, including what I really thought about it at the time I was doing the research, and some of the things that still surprise me about it after all these years.

You can find the original paper here.

Hinckley, K., Pierce, J., Sinclair, M., Horvitz, E. ACM SIGMOBILE 2017 Test of Time Award. [SIGMOBILE Test of Time Awards archive]

Paper: Thumb + Pen Interaction on Tablets

Modern tablets support simultaneous pen and touch input, but it remains unclear how to best leverage this capability for bimanual input when the nonpreferred hand holds the tablet.

We explore Thumb + Pen interactions that support simultaneous pen and touch interaction, with both hands, in such situations. Our approach engages the thumb of the device-holding hand, such that the thumb interacts with the touch screen in an indirect manner, thereby complementing the direct input provided by the preferred hand.

For instance, the thumb can determine how pen actions (articulated with the opposite hand) are interpreted.


Alternatively, the pen can point at an object, while the thumb manipulates one or more of its parameters through indirect touch.

Our techniques integrate, in a novel way, concepts derived from radial menus (also known as marking menus), spring-loaded modes maintained by muscular tension, and indirect input, in ways that leverage multi-touch conventions.

Our overall approach takes the form of a set of probes, each representing a meaningfully distinct class of application. Together they serve as an initial exploration of the design space, at a level that helps determine the feasibility of supporting bimanual interaction in such contexts, and the viability of the Thumb + Pen techniques in doing so.
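To make the spring-loaded idea concrete, here is a minimal illustrative sketch (the class and method names are my own assumptions for illustration, not the paper's implementation): while the thumb of the device-holding hand touches a mode control, pen strokes from the other hand are reinterpreted; lifting the thumb immediately restores ordinary inking.

```python
# Illustrative sketch of a thumb-maintained, spring-loaded pen mode.
# All names here are hypothetical; the paper's probes are richer.

class ThumbPenCanvas:
    def __init__(self):
        self.thumb_mode = None   # active only while the thumb touches a control
        self.actions = []        # log of (interpretation, stroke) pairs

    def thumb_down(self, mode):
        # Spring-loaded: the mode is maintained by muscular tension,
        # so it cannot be forgotten "on" like a sticky toolbar mode.
        self.thumb_mode = mode

    def thumb_up(self):
        # Releasing the thumb ends the mode immediately.
        self.thumb_mode = None

    def pen_stroke(self, stroke):
        # The thumb determines how the pen action is interpreted.
        action = self.thumb_mode or "ink"
        self.actions.append((action, stroke))
        return action


canvas = ThumbPenCanvas()
canvas.pen_stroke("s1")          # no thumb contact: ordinary inking
canvas.thumb_down("highlight")   # thumb holds a (hypothetical) highlight control
canvas.pen_stroke("s2")          # the same pen gesture now highlights
canvas.thumb_up()
print(canvas.actions)  # [('ink', 's1'), ('highlight', 's2')]
```

The point of the sketch is the division of labor: the thumb supplies a cheap, easily reversible state, while the pen keeps doing the precise work.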

Watch Thumb + Pen Interaction on Tablets video on YouTube

Ken Pfeuffer, Ken Hinckley, Michel Pahud, and Bill Buxton. 2017. Thumb + Pen Interaction on Tablets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, pp. 3254-3266. Denver, Colorado, United States, May 6-11, 2017.

[PDF] [Watch 30 second preview on YouTube]

Paper: As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences

This work sheds light on gaps and discrepancies between the experiences afforded by analog pens and their digital counterparts.

Despite the long history (and recent renaissance) of digital pens, the literature still lacks a comprehensive survey of what types of marks people make and what motivates them to use ink—both analog and digital—in daily life.


To capture the diversity of inking behaviors and tease out the unique affordances of pen and ink, we conducted a diary study with 26 participants from diverse backgrounds.

From analysis of 493 diary entries we identified 8 analog pen-and-ink activities and 9 affordances of pens. We contextualized and contrasted these findings using a survey with 1,633 respondents and a follow-up diary study with 30 participants focused on digital pens.

Our analysis revealed many gaps and research opportunities based on pen affordances not yet fully explored in the literature.

Yann Riche, Nathalie Henry Riche, Ken Hinckley, Sarah Fuelling, Sarah Williams, and Sheri Panabaker. 2017. As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, pp. 3241-3253. Denver, Colorado, United States, May 6-11, 2017.

[PDF] [CHI 2017 Talk Slides (PowerPoint)]

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]

Paper: Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation

I have three papers coming out this week at MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which convenes in Munich. It’s one of the great small conferences that focuses exclusively on mobile interaction, which of course is a long-standing interest of mine.

This post focuses on the first of those papers, and right behind it will be short posts on the other two projects that my co-authors are presenting this week.

I’ve explored many directions for viewing and moving through information on small screens, often motivated by novel hardware sensors as well as basic insights about human motor and cognitive capabilities. And I also have a long history in three-dimensional (spatial) interaction, virtual environments, and the like. But despite doing this stuff for decades, every once in a while I still get surprised by experimental results.

That’s just part of what keeps this whole research gig fun and interesting. If all the answers were simple and obvious, there would be no point in doing the studies.

In this particular paper, my co-authors and I took a closer look at a long-standing spatial, or through-the-lens, metaphor for interaction – akin to navigating documents (or other information spaces) by looking through your mobile as if it were a camera viewfinder – and subjected it to experimental scrutiny.

While this basic idea of using your mobile as a viewport onto a larger virtual space has been around for a long time, the idea hasn’t been subjected to careful scrutiny in the context of moving a mobile device’s small screen as a way to view virtually larger documents. And the potential advantages of the approach have not been fully articulated and realized either.

This style of navigation (panning and zooming control) on mobile devices has great promise because it lets you offload the navigation task itself to your nonpreferred hand, leaving your preferred hand free to do other things, like carry bags of groceries, or perform additional tasks such as annotation, selection, and tapping commands on top of the resulting views.

But, as our study also shows, the approach is not without its challenges: sensing the spatial position of the device, and devising an appropriate input mapping, are both difficult problems that will need more progress before we can fully take advantage of this way of moving through information on a mobile device. For the time being, at least, the traditional touch gestures of pinch-to-zoom and drag-to-pan still appear to offer the most efficient solution for general-purpose navigation tasks.

Pahud, M., Hinckley, K., Iqbal, S., Sellen, A., and Buxton, B., Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 113-122. [PDF] [video – MP4]

Toward Compound Navigation on Mobiles via Spatial Manipulation on YouTube

Paper: Implicit Bookmarking: Improving Support for Revisitation in Within-Document Reading Tasks

The March 2013 issue of the International Journal of Human-Computer Studies features a clever new technique for automatically (implicitly) bookmarking recently-visited locations in documents, which (as our paper reveals) eliminates 66% of all long-distance scrolling actions for users in active reading scenarios.

The technique, devised by Chun Yu (Tsinghua University Department of Computer Science and Technology, Beijing, China) in collaboration with Ravin Balakrishnan, myself, Tomer Moscovich, and Yuanchun Shi, requires only minimal modification of existing scrolling behavior in document readers — in fact, our prototype works by implementing a simple layer on top of the standard Adobe PDF Reader.

The technique would be particularly valuable for students or information workers whose activities necessitate deep engagement with texts such as technical documentation, non-fiction books on e-readers, or – of course, my favorite pastime – scientific papers.
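The core idea can be sketched in a few lines. This is only a minimal illustration under my own assumptions (the class name, the page-based positions, and the 5-page "long jump" threshold are all hypothetical; the paper works atop real scroll events in a PDF reader): a scroll that jumps far from the current position implicitly bookmarks the place the reader left, and a single action snaps back to it.

```python
# Illustrative sketch of implicit bookmarking for document revisitation.
# Names and the threshold are assumptions for illustration only.

LONG_JUMP = 5  # pages; any larger scroll counts as a long-distance jump


class ImplicitBookmarks:
    def __init__(self):
        self.position = 0    # current page in the document
        self.bookmarks = []  # stack of implicitly saved positions

    def scroll_to(self, page):
        # A long jump (e.g. chasing a citation or figure) implicitly
        # bookmarks the departure point; short scrolls leave no trace.
        if abs(page - self.position) > LONG_JUMP:
            self.bookmarks.append(self.position)
        self.position = page

    def return_to_bookmark(self):
        # One action replaces a long manual scroll back.
        if self.bookmarks:
            self.position = self.bookmarks.pop()
        return self.position


reader = ImplicitBookmarks()
reader.scroll_to(2)    # short scroll: nothing saved
reader.scroll_to(40)   # long jump: page 2 is implicitly bookmarked
print(reader.return_to_bookmark())  # -> 2
```

Because the bookmarks are created as a side effect of scrolling the reader was going to do anyway, the technique asks for no extra effort up front, which is what makes eliminating so many long-distance return scrolls possible.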

Yu, C., Balakrishnan, R., Hinckley, K., Moscovich, T., Shi, Y., Implicit bookmarking: Improving support for revisitation in within-document reading tasks. International Journal of Human-Computer Studies, Vol. 71, Issue 3, March 2013, pp. 303-320. [Definitive Version] [Author’s draft PDF — may contain discrepancies]

Paper: Informal Information Gathering Techniques for Active Reading

This is my latest project, which I will present tomorrow (May 9th) at the CHI 2012 Conference on Human Factors in Computing Systems.

I’ll have a longer post up about this project after I return from the conference, but for now, enjoy the video. I also link below to the PDF of our short paper, which has a nice discussion of the motivation and design rationale for this work.

Above all else, I hope this work makes clear that there is still tons of room for innovation in how we interact with the e-readers and tablet computers of the future – as well as in terms of how we consume and manipulate content to produce new creative works.

Hinckley, K., Bi, X., Pahud, M., Buxton, B., Informal Information Gathering Techniques for Active Reading. 4pp Note. In Proc. CHI 2012 Conf. on Human Factors in Computing Systems, Austin, TX, May 5-10, 2012. [PDF]

[Watch Informal Information Gathering Techniques for Active Reading on YouTube]

Award: Lasting Impact Award

Lasting Impact Award, for Sensing Techniques for Mobile Interaction, UIST 2000. “Awarded for its scientific exploration of mobile interaction, investigating new interaction techniques for handheld mobile devices supported by hardware sensors, and laying the groundwork for new research and industrial applications.” Awarded to Ken Hinckley, Jeff Pierce, Mike Sinclair, and Eric Horvitz at the 24th ACM UIST, October 2011 (Sponsored by the ACM, SIGCHI, and SIGGRAPH). October 18, 2011. Check out the original paper or watch the video appended below.

UIST 2011 Lasting Impact Award for "Sensing techniques for mobile interaction"

Sensing Techniques for Mobile Interaction on YouTube

Book Chapter: Input Technologies and Techniques, 2012 Edition

Hinckley, K., Wigdor, D., Input Technologies and Techniques. Chapter 9 in The Human-Computer Interaction Handbook – Fundamentals, Evolving Technologies and Emerging Applications, Third Edition, ed. by Jacko, J., Published by Taylor & Francis. To appear. [PDF of author’s manuscript – not final]

This is an extensive revision of the 2007 and 2002 editions of my book chapter, and with some heavy lifting from my new co-author Daniel Wigdor, it treats direct-touch input devices and techniques in much more depth. Lots of great new stuff. The book will be out in early 2012 or so from Taylor & Francis – keep an eye out for it!

Book Chapter: Input Technologies and Techniques (in Human-Computer Interaction Fundamentals)

Hinckley, K., Input Technologies and Techniques. Chapter 9 in Human-Computer Interaction Fundamentals (Human Factors and Ergonomics), ed. by Sears, A., and Jacko, J., CRC Press, Boca Raton, FL. Published March 2, 2009. Originally appeared as Chapter 9 in Human-Computer Interaction Handbook, 2nd Edition. [PDF of author’s manuscript – not final].