I have three papers coming out this week at MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which convenes in Munich. It’s one of the great small conferences that focuses exclusively on mobile interaction, which of course is a long-standing interest of mine.
This post focuses on the first of those papers, and right behind it will be short posts on the other two projects that my co-authors are presenting this week.
I’ve explored many directions for viewing and moving through information on small screens, often motivated by novel hardware sensors as well as basic insights about human motor and cognitive capabilities. And I also have a long history in three-dimensional (spatial) interaction, virtual environments, and the like. But despite doing this stuff for decades, every once in a while I still get surprised by experimental results.
That’s just part of what keeps this whole research gig fun and interesting. If all the answers were simple and obvious, there would be no point in doing the studies.
In this particular paper, my co-authors and I took a closer look at a long-standing spatial (through-the-lens) metaphor for interaction, akin to navigating documents (or other information spaces) by looking through your mobile as if it were a camera viewfinder, and subjected it to experimental scrutiny.
While this basic idea of using your mobile as a viewport onto a larger virtual space has been around for a long time, it hasn’t been carefully evaluated in the context of moving a mobile device’s small screen as a way to view virtually larger documents. Nor have the potential advantages of the approach been fully articulated and realized.
This style of navigation (panning and zooming control) on mobile devices holds great promise because it lets you offload the navigation task itself to your nonpreferred hand, leaving your preferred hand free to do other things, whether that’s carrying a bag of groceries or performing additional tasks such as annotation, selection, and tapping commands on top of the resulting views.
But, as our study also shows, the approach is not without its challenges: sensing the spatial position of the device and devising an appropriate input mapping are both hard problems that need further progress before this way of moving through information on a mobile device can be fully realized. For the time being, at least, the traditional touch gestures of pinch-to-zoom and drag-to-pan still appear to offer the most efficient solution for general-purpose navigation tasks.
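To make the idea concrete, here is a minimal sketch of what one such input mapping might look like, in Python. Everything in it (the coordinate convention, the Viewport structure, and the gain constants) is an illustrative assumption on my part, not the mapping we actually studied in the paper: lateral device motion pans the view, and moving the device toward or away from a virtual document plane zooms it, the way a camera viewfinder would.

```python
# Illustrative sketch of a position-to-viewport mapping for spatial
# navigation. NOT the paper's implementation; all names, units, and
# gains here are hypothetical and would need tuning in practice.

from dataclasses import dataclass

@dataclass
class Viewport:
    cx: float    # viewport center, x, in document coordinates
    cy: float    # viewport center, y, in document coordinates
    zoom: float  # magnification factor (1.0 = 100%)

PAN_GAIN = 2000.0   # document units per meter of lateral device motion
ZOOM_REF = 0.40     # device-to-document-plane distance (m) giving zoom = 1.0
ZOOM_MIN, ZOOM_MAX = 0.25, 8.0

def update_viewport(x: float, y: float, z: float) -> Viewport:
    """Map a sensed device position (meters, relative to a calibrated
    origin) to a pan/zoom view. z is the distance from the device to
    the virtual document plane: moving closer zooms in, like a camera."""
    zoom = max(ZOOM_MIN, min(ZOOM_MAX, ZOOM_REF / max(z, 1e-3)))
    # Dividing the pan by zoom keeps hand motion feeling consistent
    # on screen regardless of the current magnification.
    return Viewport(cx=x * PAN_GAIN / zoom,
                    cy=y * PAN_GAIN / zoom,
                    zoom=zoom)

if __name__ == "__main__":
    # Device moved slightly right and up, and halfway toward the
    # document plane: the view pans a little and zooms to 2x.
    print(update_viewport(0.05, -0.02, 0.20))
```

Even in a toy like this, the design choices pile up quickly: the gains, the clamping range, and whether pan speed should scale with zoom all change how the technique feels, which is exactly why devising a good mapping is one of the hard problems noted above.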
Pahud, M., Hinckley, K., Iqbal, S., Sellen, A., and Buxton, B., Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 113-122. [PDF] [video – MP4]
Toward Compound Navigation on Mobiles via Spatial Manipulation on YouTube