
Paper: Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation

I have three papers coming out at MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which convenes this week in Munich. It’s one of the great small conferences that focuses exclusively on mobile interaction, which of course is a long-standing interest of mine.

This post focuses on the first of those papers, and right behind it will be short posts on the other two projects that my co-authors are presenting this week.

I’ve explored many directions for viewing and moving through information on small screens, often motivated by novel hardware sensors as well as basic insights about human motor and cognitive capabilities. And I also have a long history in three-dimensional (spatial) interaction, virtual environments, and the like. But despite doing this stuff for decades, every once in a while I still get surprised by experimental results.

That’s just part of what keeps this whole research gig fun and interesting. If all the answers were simple and obvious, there would be no point in doing the studies.

In this particular paper, my co-authors and I took a closer look at a long-standing spatial, or through-the-lens, metaphor for interaction– akin to navigating documents (or other information spaces) by looking through your mobile as if it were a camera viewfinder– and subjected it to experimental scrutiny.

While this basic idea of using your mobile as a viewport onto a larger virtual space has been around for a long time, it hasn’t been carefully evaluated as a way to move a mobile device’s small screen over documents that are virtually much larger than the display itself. Nor have the potential advantages of the approach been fully articulated and realized.

This style of navigation (panning and zooming control) on mobile devices has great promise because it allows you to offload the navigation task itself to your nonpreferred hand, leaving your preferred hand free to do other things like carry bags of groceries– or perform additional tasks such as annotation, selection, and tapping commands– on top of the resulting views.
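If you like to think of it in code, here’s the flavor of the mapping involved. This is a purely illustrative sketch, not the implementation from our paper: it assumes some external tracker hands you the phone’s frame-to-frame displacement in centimeters, and it turns lateral motion into panning and in-and-out motion into zooming.

```python
# Illustrative sketch only (not the paper's implementation): maps sensed
# device motion to a document viewport, assuming some tracker reports the
# phone's frame-to-frame displacement (dx, dy, dz) in centimeters.

from dataclasses import dataclass

@dataclass
class Viewport:
    cx: float = 0.0     # view center, in document coordinates
    cy: float = 0.0
    zoom: float = 1.0   # magnification factor

PAN_GAIN = 40.0         # document units panned per cm of lateral device motion
ZOOM_GAIN = 1.05        # zoom multiplier per cm of in/out motion

def update_viewport(view: Viewport, dx_cm: float, dy_cm: float, dz_cm: float) -> Viewport:
    """Treat the phone like a camera held over the document: moving it
    sideways pans the view, and pulling it closer zooms in."""
    view.cx += dx_cm * PAN_GAIN / view.zoom   # pan less when zoomed in
    view.cy += dy_cm * PAN_GAIN / view.zoom
    view.zoom *= ZOOM_GAIN ** (-dz_cm)        # moving toward the page zooms in
    view.zoom = max(0.25, min(view.zoom, 8.0))
    return view
```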

But, as our study also shows, the approach is not without its difficulties; sensing the spatial position of the device, and devising an appropriate input mapping, are both hard problems that will need more progress before we can take full advantage of this way of moving through information on a mobile device. For the time being, at least, the traditional touch gestures of pinch-to-zoom and drag-to-pan still appear to offer the most efficient solution for general-purpose navigation tasks.

Pahud, M., Hinckley, K., Iqbal, S., Sellen, A., and Buxton, B., Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation. In Proc. ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 113-122. [PDF] [video – MP4]

Toward Compound Navigation on Mobiles via Spatial Manipulation on YouTube

Classic Post: The Hidden Dimension of Touch

I’ve had a number of conversations with people recently about the new opportunities for mobile user interfaces afforded by the increasingly sophisticated sensors integrated with hand-held devices.

I’ve been doing research on sensors on and off for over twelve years now, and it’s a topic I keep coming back to every few years. The possibilities offered by these sensors have never been more promising. Increasingly they will be integrated right on the chip alongside the other specialized computational units, so they are only going to become more widespread, to the point that it will be practically impossible to buy a mobile gadget of any sort that doesn’t contain them. In practical terms there will be no incremental cost to include the sensors, and it’s just a matter of smart software to take advantage of them and enrich the user experience.

I continue to be excited about this line of work and think there’s a lot more that could be done to leverage these sensors. In particular, I believe the possibilities afforded by modern high-precision gyroscopes– and their combination with other sensors and input modalities– are not yet well-understood. And I believe the whole area of contextual sensing in general remains rich with untapped possibilities.

I posted about this on my old blog a while back, but I definitely wanted to make this post available here as well, so here it is. If you just want to cut to the chase, I’ve embedded the video demonstration at the bottom of the post.

The Hidden Dimension of Touch

What’s the gesture of one hand zooming?

This might seem like a silly question, but it’s not. The beloved multi-touch pinch gesture is ubiquitous, but it’s almost impossible to articulate with one hand. Need to zoom in on a map, or a web page? Are you using your phone while holding a bunch of shopping bags, or the hand of your toddler?

Well then, you’re a better man than I am if you can zoom in without dropping your darned phone on the pavement. You gotta hold it in one hand, and pinch with the other, and that ties up both hands.  Oh, sure, you can double-tap the thing, but that doesn’t give you much control, and you’ll probably just tap on some link by mistake anyway.

So what do you do? What’s the gesture of one hand zooming?

Well, I found that if you want an answer to that, first you have to break out of the categorical mindset that seems to pervade so much of mainstream thinking, the invisible cubicle walls that we place around our ideas and our creativity without even realizing it. And Exhibit A in the technology world is the touch-is-best-for-everything stance that seems to be the Great Unwritten Rule of Natural User Interfaces these days.

Here’s a hint: The gesture of one hand zooming isn’t a touch-screen gesture.

Well, that’s not completely true either. It’s more than that.

Got any ideas?

– # –

Every so often in my research career I stumble across something that reminds me that this whole research gig is way easier than it seems.

And way harder.

Because I’ve repeatedly found that some of my best ideas were hiding in plain sight. Obvious things. Things I should have thought of five years ago, or ten.

The problem is they’re only obvious in retrospect.

Of course touch is all the rage; every smartphone these days has to have a touchscreen.

But people forget that every smartphone has motion sensors too– accelerometers and gyroscopes and such– that let the device respond to physical movement, such as when you hold your phone in landscape and the display follows suit.

I first prototyped that little automatic screen rotation interaction, by the way, over twelve years ago, so if you don’t like it, you can blame it on me. Come on, admit it, you’ve cussed more than once when you lay down in bed with your smartphone and the darned screen flipped to landscape. It’s ok, let loose your volley of curses. You won’t be judged here.

Because the first step to a solution is admitting you have a problem.

I started thinking hard about all of this– touch and motion sensing, zooming with one hand, and automatic screen rotation gone wild– a while back and gradually realized that there’s an interesting new class of gestures for handhelds hiding in plain sight here. And it’s always been there. Any fool– like me, twelve years ago, for example– could have taken the inputs from a touchscreen and the signals from the sensors and started to build out a vocabulary of gestures based on that.

But well, um… nope. Never been explored in any kind of systematic way, as it turns out.

Call it the Hidden Dimension of Touch, if you like, an uncharted continent of gestures just lying there under the surface of your touchscreen, waiting to be discovered.

– # –

So now that we’re surveying this new landscape, let me show you the way to the first landmark, the Gesture of One Hand Zooming:

  • Hold your thumb on the screen, at the point you want to zoom.
  • Tip the device back and forth to zoom in or zoom out.
  • Lift your thumb to stop.

Yep, it’s that simple and that hard.

It’s a cross-modal gesture: that is, a gesture that combines both motion and touch. Touch: hold your thumb at a particular location on the screen. Motion sensing: your phone’s accelerometer senses the tilt of the device, and maps this to the rate of expansion for the zoom.
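For the programmers in the audience, here’s roughly what that rate-control mapping might look like. It’s a simplified sketch, not the code from our prototype, and the deadband and gain values are made-up placeholders.

```python
# Simplified sketch of touch-plus-tilt zooming (not our prototype's code).
# Called once per frame with the current touch state and the device tilt
# derived from the accelerometer; the constants below are illustrative.

import math

TILT_DEADBAND_DEG = 3.0    # ignore tiny tilts so the view doesn't creep
ZOOM_RATE_PER_DEG = 0.02   # zoom rate gain, per degree of tilt, per second

def zoom_step(zoom: float, thumb_down: bool, tilt_deg: float, dt: float) -> float:
    """Rate-controlled zoom: while the thumb holds the screen, tilting the
    device forward or back zooms in or out; lifting the thumb stops it."""
    if not thumb_down:
        return zoom                               # no touch, no zooming
    excess = abs(tilt_deg) - TILT_DEADBAND_DEG
    if excess <= 0:
        return zoom                               # within the deadband
    rate = math.copysign(excess * ZOOM_RATE_PER_DEG, tilt_deg)
    return max(0.5, min(zoom * math.exp(rate * dt), 10.0))
```

In a real implementation you’d also anchor the zoom at the thumb’s position on the screen, so the spot you care about stays put under your finger.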

It’s not any faster or more intuitive than pinch-to-zoom.

But, gosh darn it, you can do it with one hand.

One-Handed Zooming

One-Handed Zooming by holding the screen and subtly tilting the device back and forth.

– # –

All right then, what about this problem of your smartphone gone wild in your bed? Ahem. The problem with the automatic screen rotation, that is.

Well, just hold your finger on the screen as you lie down. Or as you pivot the screen to a new viewing orientation.

Call it Pivot-to-Lock, another monument on this new touch-plus-motion landscape: just hold the screen while rotating the device.

Screen Pivot Lock

Lock engaged. Just flip the screen to a new orientation to slip out of the lock. Simple, and fun to use.
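If you want to picture the logic, here’s one way it might be wired up. This is my own simplified reading of the behavior, sketched in a few lines; the rules our actual prototype used are described in the paper linked at the bottom of this post.

```python
# Sketch of pivot-to-lock under a simplified reading of the behavior above.
# Orientations are 0/90/180/270 degrees, as sensed from the accelerometer.

class PivotToLock:
    def __init__(self, orientation: int = 0):
        self.display = orientation    # orientation the screen is drawn at
        self.locked_at = None         # sensed orientation when the lock engaged

    def update(self, sensed: int, touch_down: bool) -> int:
        if touch_down and sensed != self.display and self.locked_at is None:
            self.locked_at = sensed   # pivoted while touching: engage the lock
        elif self.locked_at is not None and sensed != self.locked_at:
            self.locked_at = None     # flipped to a new orientation: release it
            self.display = sensed
        elif self.locked_at is None:
            self.display = sensed     # otherwise, normal auto-rotation
        return self.display
```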

– # –

Is that it? Is there more?

Sure, there’s a bunch more touch-and-motion gestures that we have experimented with. For example, here’s one more: you can collect bits of content that you encounter on your phone– say, crop out a piece of a picture that you like– just by framing it with your fingers and then flipping the phone back in a quick motion. Here, holding two fingers still plus the flipping motion defines the cross-modal gesture, as demonstrated in our prototype for Windows Phone 7.
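To give a rough sense of how such a frame-and-flip detector might be structured, here’s an illustrative sketch. It isn’t the Windows Phone 7 prototype code, and the thresholds are invented for the example.

```python
# Illustrative frame-and-flip detector: two fingers held still define a
# rectangle, and a sharp spike in angular velocity (from the gyroscope,
# in rad/s) triggers the capture. Thresholds are made up for the example.

FLIP_THRESHOLD_RAD_S = 6.0   # angular velocity that counts as a quick flip
HOLD_JITTER_PX = 10.0        # max finger drift still treated as "holding still"

def detect_frame_and_flip(touches, gyro_rad_s):
    """touches: list of (x, y, dx, dy) per touch point, where dx/dy is the
    movement since touch-down. Returns the framed rectangle, or None."""
    if len(touches) != 2:
        return None
    if any(abs(dx) > HOLD_JITTER_PX or abs(dy) > HOLD_JITTER_PX
           for _, _, dx, dy in touches):
        return None                  # fingers must be holding, not dragging
    if abs(gyro_rad_s) < FLIP_THRESHOLD_RAD_S:
        return None                  # no flip detected yet
    (x1, y1, _, _), (x2, y2, _, _) = touches
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
```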

Check out the video below to see all of these in action, and some other ideas that we’ve tried out so far.

But there’s something else.

Another perspective. Something completely different from all the examples above.

There are really two ways to look at interaction with motion sensors.

We can use them to support explicit new gestures– like giving your device a shake, for example– or the phone can use them in a more subtle way, by just sitting there in the background and seeing what the sensors have to say about how the device is being used.  Did the user just pick up the phone? Is the user walking around with the phone? Is the phone sitting flat and motionless on a desk? Yep, you can infer all these things with high confidence.
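To give a flavor of how simple such background inference can be, here’s a toy classifier over a one-second window of accelerometer readings. The features and thresholds are illustrative guesses; a real system would tune them per device and fold in additional signals.

```python
# Toy background-sensing classifier over ~1 second of accelerometer samples
# (in m/s^2). The thresholds are illustrative, not tuned values.

import statistics

GRAVITY = 9.81

def classify_context(samples):
    """samples: list of (ax, ay, az) tuples. Returns a coarse context label."""
    mags = [(ax * ax + ay * ay + az * az) ** 0.5 for ax, ay, az in samples]
    jitter = statistics.pstdev(mags)                   # overall motion energy
    mean_az = statistics.fmean(az for _, _, az in samples)
    if jitter < 0.05 and abs(mean_az - GRAVITY) < 0.5:
        return "flat on desk"                          # still and face-up
    if jitter > 2.0:
        return "walking"                               # rhythmic, high-energy motion
    return "in hand"                                   # moving, but gently
```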

And we can bring this perspective back to our thinking about combined touch and motion.

Imagine your touchscreen as the surface of a pond on a windless day. Perfectly flat. Smooth.

Motionless.

Now what happens when you set your finger to the surface of that pond?

Motion in Touch

Yep, ripples.

Touch the surface of the pond again, somewhere else. More ripples, expanding from a different spot.

Now take your finger and sweep it along the surface of the water. Another disturbance– a wake in the trail of your finger this time. That’s another pattern. A different pattern.

Touch and motion are inextricably linked. The sensors on these devices– particularly the new generation of low-cost gyroscopes that are making their way onto handhelds– are increasingly sensitive, even to rather subtle motions and vibrations.

When you touch the screen of your device, or place a finger anywhere on the case for that matter, we have a good sense of how you’re touching it, roughly where you’re touching it, and how you’re holding it.
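As a tiny illustration of the idea, here’s a sketch that guesses how firmly the screen was struck by looking at the accelerometer transient around a touch-down event. The window size and threshold are assumptions made for the example, not values from our paper.

```python
# Illustrative only: estimate the firmness of a tap from the accelerometer
# transient in a short window (~50 ms) around the touch-down timestamp.

def tap_strength(accel_mags, resting=9.81, hard_threshold=3.0):
    """accel_mags: accelerometer magnitudes (m/s^2) around touch-down.
    Returns 'hard' or 'soft' based on the size of the spike."""
    spike = max(abs(m - resting) for m in accel_mags)
    return "hard" if spike > hard_threshold else "soft"
```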

And all of this can be used to optimize how your device reacts, how it interprets your gestures, how accurately it can respond to you. And maybe some more stuff that nobody even realizes is possible yet.

Frankly, I’m not even sure myself. We’ve probably only just scratched the surface of the possibilities here.

Yeah, there’s a hidden dimension of touch all right, and to be honest I still feel like we’re a long way from surveying all the landmarks of this new world.

But I like what we see so far.

VIDEO

Here’s a video of our system in action:

YouTube Video of Touch and Motion Gestures for Mobiles.

PUBLICATION DETAILS

Our scientific paper on the work described in this post won a Best Paper Honorable Mention Award at CHI 2011. The paper appeared May 9th at the ACM CHI 2011 Conference on Human Factors in Computing Systems in Vancouver, British Columbia, Canada.

Check out the paper for a full and nuanced discussion of this design space, as well as references to a whole bunch of exciting work that has been conducted by other researchers in recent years.

Sensor Synaesthesia: Touch in Motion, and Motion in Touch, by Ken Hinckley and Hyunyoung Song. CHI 2011 Conf. on Human Factors in Computing Systems.

The paper was presented at the conference by my co-author Hyunyoung Song of the University of Maryland. Hyunyoung worked with me for her internship at Microsoft Research in the summer of 2010 and her contributions to this project were tremendous– very, very impressive work by a great young researcher.

Paper: Sensor Synaesthesia: Touch in Motion, and Motion in Touch

Hinckley, K., and Song, H., Sensor Synaesthesia: Touch in Motion, and Motion in Touch. In Proc. CHI 2011 Conf. on Human Factors in Computing Systems. CHI 2011 Honorable Mention Award. [PDF] [video .WMV]

Watch Sensor Synaesthesia video on YouTube

Paper: Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments

Bragdon, A., Nelson-Brown, E., Li, Y., and Hinckley, K., Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments. In Proc. CHI 2011 Conf. on Human Factors in Computing Systems. [PDF]