Category Archives: Awards & Recognitions

ACM SIGMOBILE 2017 Test of Time Award for “Sensing Techniques for Mobile Interaction”

Recently the SIGMOBILE community recognized my turn-of-the-century research on mobile sensing techniques with one of their 2017 Test of Time Awards.

This was the paper (“Sensing Techniques for Mobile Interaction”) that first introduced techniques such as automatic screen rotation and raise-to-wake to mobile computing — features now taken for granted on the iPhones and tablets of the world.

The award committee cited the work as follows:

This paper showed how combinations of simple sensors could be used to create rich mobile interactions that are now commonplace in mobile devices today. It also opened up people’s imaginations about how we could interact with mobile devices in the future, inspiring a wide range of research on sensor-based interaction techniques.

[Photo: SIGMOBILE Test of Time Award]

And so as not to miss the opportunity to have fun with the occasion, in the following video I reflected (at times irreverently) on the work — including what I really thought about it at the time I was doing the research.

And some of the things that still surprise me about it after all these years.

You can find the original paper here.


Hinckley, K., Pierce, J., Sinclair, M., Horvitz, E. ACM SIGMOBILE 2017 Test of Time Award. [SIGMOBILE Test of Time Awards archive]

Paper: WritLarge: Ink Unleashed by Unified Scope, Action, & Zoom

Electronic whiteboards remain surprisingly difficult to use in the context of creativity support and design.

A key problem is that once a designer places strokes and reference images on a canvas, actually doing anything useful with key parts of that content involves numerous steps.

Hence, with digital ink, scope—that is, selection of content—is a central concern, yet current approaches often require encircling ink with a lengthy lasso, if not switching modes via round-trips to the far-off edges of the display.

Only then can the user take action, such as to copy, refine, or re-interpret their informal work-in-progress.

Such is the stilted nature of selection and action in the digital world.

But it need not be so.

By contrast, consider an everyday manual task such as sandpapering a piece of woodwork to hew off its rough edges. Here, we use our hands to grasp and bring to the fore—that is, select—the portion of the work-object—the wood—that we want to refine.

And because we are working with a tool—the sandpaper—the hand employed for this ‘selection’ sub-task is typically the non-preferred one, which skillfully manipulates the frame-of-reference for the subsequent ‘action’ of sanding, a complementary sub-task articulated by the preferred hand.

Therefore, in contrast to the disjoint subtasks foisted on us by most interactions with computers, the above example shows how complementary manual activities lend a sense of flow that “chunks” selection and action into a continuous selection-action phrase. By manipulating the workspace, the off-hand shifts the context of the actions to be applied, while the preferred hand brings different tools to bear—such as sandpaper, file, or chisel—as necessary.

The main goal of the WritLarge project, then, is to demonstrate similar continuity of action for electronic whiteboards. This motivated free-flowing, close-at-hand techniques to afford unification of selection and action via bimanual pen+touch interaction.

[Figure: WritLarge hero image]

Accordingly, we designed WritLarge so that the user can simply gesture as follows:

With the thumb and forefinger of the non-preferred hand, just frame a portion of the canvas.
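To make the framing gesture concrete, here is a minimal sketch (purely illustrative; nothing below is lifted from the actual WritLarge code) of how two touch points from the non-preferred hand might define the selected region of the canvas:

```python
# Illustrative sketch only: turning a thumb + forefinger "frame" from the
# non-preferred hand into a selection rectangle on the canvas. The function
# name and the coordinate convention are assumptions, not WritLarge internals.
def frame_selection(thumb_xy, forefinger_xy):
    """thumb_xy, forefinger_xy: (x, y) touch points; returns (x, y, w, h)."""
    (x0, y0), (x1, y1) = thumb_xy, forefinger_xy
    return (min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))

# e.g. frame_selection((120, 300), (360, 480)) -> (120, 300, 240, 180);
# that rectangle then becomes the scope for whatever the pen does next.
```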

And, unlike many other approaches to “handwriting recognition,” this approach to selecting key portions of an electronic whiteboard leaves the user in complete control of what gets recognized—as well as when recognition occurs—so as not to break the flow of creative work.

Indeed, building on this foundation, we designed ways to shift between flexible representations of freeform content by simply moving the pen along semantic, structural, and temporal axes of movement.

See our demo reel below for some jaw-dropping demonstrations of the possibilities for digital ink opened up by this approach.

Watch WritLarge: Ink Unleashed by Unified Scope, Action, and Zoom video on YouTube


Haijun Xia, Ken Hinckley, Michel Pahud, Xiao Tu, and Bill Buxton. 2017. WritLarge: Ink Unleashed by Unified Scope, Action, and Zoom. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3227-3240. Denver, Colorado, United States, May 6-11, 2017. Honorable Mention Award (top 5% of papers).
https://doi.org/10.1145/3025453.3025664

[PDF] [30 second preview – mp4 | YouTube] [Full video – mp4]

Editor-in-Chief, ACM Transactions on Computer-Human Interaction (TOCHI)

The ACM Transactions on Computer-Human Interaction (TOCHI) has long been regarded as the flagship journal of the field. I’ve served on its editorial board since 2003, and thus have a long history with the endeavor.

So now that Shumin Zhai’s second term has come to a close, it is a great honor to report that I’ve assumed the helm as Editor-in-Chief. Shumin worked wonders in improving the efficiency and impact of the journal, diligent efforts that I am working hard to build upon. And I have many ideas and creative initiatives in the works that I hope can further advance the journal and help it to have even more impact.

The journal publishes original and significant research papers, and especially welcomes systems-focused, long-term, or integrative contributions to human-computer interaction. TOCHI also publishes individual studies, methodologies, and techniques when we deem the contributions substantial enough. On occasion we also publish impactful, well-argued, and well-supported essays on important or emerging issues in human-computer interaction.

TOCHI prides itself on rapid turn-around of manuscripts, with an average response time of about 50 days, and we often return manuscripts (particularly when there is not a good fit) much faster than that. We strive to make decisions within 90 days, and although that isn’t always possible, accepted papers also move into print very quickly: digital editions of articles publish to the ACM Digital Library as soon as they are accepted, copyedited, and typeset. TOCHI can often, therefore, move articles into publication as fast as or faster than many of the popular conference venues.

Accepted papers at TOCHI also have the opportunity to present at participating SIGCHI conferences, which currently include CHI, CSCW, UIST, and MobileHCI. Authors therefore get the benefits of a rigorous reviewing process with a full journal revision cycle, plus the prestige of the TOCHI brand when presenting new work to colleagues at a top HCI conference.

To keep track of all the latest developments, you can get alerts for new TOCHI articles as they hit the Digital Library — never miss a key new result.  Or subscribe to our feed — just click on the little RSS link on the far right of the TOCHI landing page.

 


Hinckley, K., Editor-in-Chief, ACM Transactions on CHI. Three-year term, commencing Sept. 1st, 2015. [TOCHI on the ACM Digital Library]

The flagship journal of CHI.

Paper: Sensing Techniques for Tablet+Stylus Interaction (Best Paper Award)

It’s been a busy year, so I’ve been more than a little remiss in posting my Best Paper Award recipient from last year’s User Interface Software & Technology (UIST) symposium.

UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.

And software that makes clever uses thereof.

Title slide - sensing techniques for stylus + tablet interaction

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.

The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Stylus prototype augmented with sensors

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).

And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Sensor components inside the pen

Components of the sensor pen, including inertial sensors, a AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.

But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.

It’s all about the hand.

With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind… [Spark radio interview with Ken Hinckley]

Or shall I say hands. How people coordinate their action. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense, how we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.

I’ve constantly been staring at hands for over 20 years.

And yet I’m still constantly surprised.

People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.

Nobody even notices them until some acute observer takes the trouble to point them out.

For example:

Take a behavior as simple as holding a pen in your hand.

You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Tuck and Palm Grips for temporarily stowing a pen

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.

This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less actively sensed by the tablets and pens that we use — or even leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.

If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

Core Pen Grips

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.

Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips, based on placing one (or more) fingers on the screen while holding the pen:

Single Finger Extension Grips for Touch Gestures with Pen-in-hand

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.

People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Multiple Finger Extension Grips for Touch Gestures with Pen-in-hand

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.

So, it began to dawn on us that there was all this untapped richness in terms of how people hold, manipulate, write on, and extend fingers when using pen and touch on tablets.

And that sensing this could enable some very interesting new possibilities for the user interfaces for stylus + tablet computing:

Watch Context Sensing Techniques for Tablet+Stylus Interaction video on YouTube

This is where our custom hardware came in.

On our pen, for example, we can sense subtle motions — using full 3D inertial sensors including accelerometer, gyroscope, and magnetometer — as well as sense how the user grips the pen — this time using a flexible capacitive substrate wrapped around the entire barrel of the pen.

These capabilities then give rise to sensor signals such as the following:

Grip and motion sensors on the stylus

Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.

This makes various pen grips and motions stand out quite distinctly, as states that we can identify using some simple gesture recognition techniques.
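To give a flavor of what “simple” means here, the following is a rough, hypothetical sketch (not our actual recognizer) of how a normalized capacitive grip image from the pen’s barrel might be mapped to writing, tuck, and palm grips. The grid layout and all thresholds are assumptions for illustration only:

```python
# Hypothetical grip classifier over the pen's capacitive grip image.
# Assumes a small (rows x cols) grid of capacitance values in [0, 1], with
# rows running along the barrel (row 0 nearest the tip) and columns wrapping
# around it. Thresholds are illustrative, not tuned values from the paper.
import numpy as np

def classify_grip(grip_image: np.ndarray) -> str:
    contact = grip_image > 0.3                       # which sensels are touched
    area = contact.mean()                            # fraction of barrel covered
    if area < 0.02:
        return "not_held"

    # Centroid of contact along the barrel: writing grips cluster near the tip,
    # while tuck and palm grips sit farther up the barrel.
    rows = np.nonzero(contact.any(axis=1))[0]
    centroid = rows.mean() / grip_image.shape[0]

    if centroid < 0.35 and area < 0.25:
        return "writing"                             # fingers pinched near the tip
    if area > 0.40:
        return "palm"                                # broad wrap-around contact
    return "tuck"                                    # pen laced between the fingers
```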

Armed with these capabilities, we explored presenting a number of context-appropriate tools.

As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense it’s your palm, we can go a long ways towards solving these problems with pen-and-touch interaction.
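Here is the kind of logic that falls out of that, again as an illustrative sketch rather than our actual heuristics: once the pen’s grip sensor says the user is poised to write, large or elongated touch contacts can be written off as the palm.

```python
# Hypothetical palm-rejection check. The grip label would come from something
# like classify_grip() above; the area and eccentricity thresholds are
# illustrative placeholders, not values from our system.
def should_ignore_touch(pen_grip: str, contact_area_mm2: float,
                        contact_eccentricity: float) -> bool:
    if pen_grip != "writing":
        return False                     # pen not poised to write: trust the touch
    if contact_area_mm2 > 400.0:
        return True                      # palm-sized blob on the digitizer
    if contact_eccentricity > 2.5:
        return True                      # long smeared contact: likely the heel of the hand
    return False
```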

To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:

Pen tools appear

Tools specific to the pen appear when the user taps on the screen with the pen stowed in hand.

But we can take this even further, such as to distinguish bare-handed touches — which support the standard panning and zooming behaviors — from a pinch articulated with the pen-in-hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen:

Pen Grip + Motion example: Full canvas zoom vs. Magnifier tool

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.
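One way such a distinction might be made, sketched here under the assumption that the pen tends to move in concert with the hand that holds it, is to correlate the pen’s recent motion with the motion of the touch points. The window length and threshold below are illustrative, not the values from our system:

```python
# Hypothetical check for a "pen-in-hand" touch: if the pen's motion over the
# last few frames tracks the motion of the touch, the touching hand is probably
# the one holding the pen, so the pinch summons the magnifier instead of
# panning and zooming the whole canvas.
import numpy as np

def pen_in_touching_hand(pen_speed: np.ndarray, touch_speed: np.ndarray,
                         threshold: float = 0.6) -> bool:
    """pen_speed, touch_speed: recent per-frame speed samples of equal length."""
    if len(pen_speed) < 4 or len(pen_speed) != len(touch_speed):
        return False
    corr = np.corrcoef(pen_speed, touch_speed)[0, 1]
    return bool(np.nan_to_num(corr) > threshold)

def pinch_action(pen_speed, touch_speed) -> str:
    return "magnifier" if pen_in_touching_hand(pen_speed, touch_speed) else "canvas_zoom"
```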

Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrush tool using the sensors

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
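Geometrically, the spray footprint is just the intersection of a cone, opening from the pen tip along the pen’s sensed 3D orientation, with the plane of the tablet. The sketch below computes that outline in a generic way; it is not lifted from our implementation, and the half-angle is an arbitrary illustrative value:

```python
# Hypothetical airbrush footprint: sample rays on the boundary of a spray cone
# anchored at the pen tip and intersect each with the tablet plane (z = 0).
# The resulting outline approximates the conic section that gets painted.
import numpy as np

def spray_footprint(tip, direction, half_angle_deg=15.0, samples=64):
    """tip: (x, y, z) pen tip position with z > 0 above the tablet.
    direction: vector the pen points along (should head toward the plane).
    Returns an (n, 2) array of points outlining the spray on the screen."""
    tip = np.asarray(tip, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper); u /= np.linalg.norm(u)   # two axes orthogonal
    v = np.cross(d, u)                                # to the cone's axis
    tan_a = np.tan(np.radians(half_angle_deg))
    outline = []
    for theta in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        ray = d + tan_a * (np.cos(theta) * u + np.sin(theta) * v)
        if ray[2] >= 0.0:
            continue                                  # this ray never reaches the tablet
        t = -tip[2] / ray[2]                          # solve tip_z + t * ray_z = 0
        outline.append((tip + t * ray)[:2])
    return np.array(outline)
```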

At any rate, it was a really fun project that garnered a best paper award, and a fair bit of press coverage (Gizmodo, Engadget, and a nod as FastCo Design’s #2 User Interface innovation of 2014 (paywalled), among other coverage). It’s pretty hard to top that.

Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.

Hmmm…

You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).


Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., Wilson, A., Sensing Techniques for Tablet+Stylus Interaction. In the 27th ACM Symposium on User Interface Software and Technology (UIST’14), Honolulu, Hawaii, Oct 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379

Award: CHI Academy, 2014 Inductee

I’ve been a bit remiss in posting this, but as of April 2014, I’m a member of the CHI Academy, which is an honorary group that recognizes leaders in the field of Human-Computer interaction.

Among whom, apparently, I can now include myself, strange as that may seem.

I was completely surprised by this and can honestly say I never expected any special recognition. I’ve just been plugging away on my little devices and techniques, writing papers here and there, but I suppose over the decades it all adds up. I don’t know if this means that my work is especially good or that I’m just getting older, but either way I appreciate the gesture of recognition from my peers in the field.

I was in a bit of a ribald mood when I got the news, so when the award organizers asked me to reply with my bio I figured what the heck and decided to have some fun with it:

Ken Hinckley is a Principal Researcher at Microsoft Research, where he has spent the last 17 years investigating novel input devices, device form-factors, and modalities of interaction.

He feels fortunate to have had the opportunity to collaborate with many CHI Academy members while working there, including noted trouble-makers such as Bill Buxton, Patrick Baudisch, and Eric Horvitz—as well as George Robertson, whom he owes a debt of gratitude for hiring him fresh out of grad school.

Ken is perhaps best known for his work on sensing techniques, cross-device interaction, and pen computing. He has published over 75 academic papers and is a named inventor on upwards of 150 patents. Ken holds a Ph.D. in Computer Science from the University of Virginia, where he studied with Randy Pausch.

He has also published fiction in professional markets including Nature and Fiction River, and prides himself on still being able to hit 30-foot jump shots at age 44.

Not too shabby.

Now, in the spirit of full disclosure, there are no real perks associated with being a CHI Academy member as far as I’ve been able to figure. People do seem to ask me for reference letters just a tiny bit more frequently. And I definitely get more junk email from organizers of dubious-sounding conferences than before. No need for research heroics if you want a piece of that, just email me and I’d be happy to forward them along.

But the absolute most fun part of the whole deal was a small private celebration that noted futurist Bill Buxton organized at his ultra-modern home fronting Lake Ontario in Toronto, where I was joined by my Microsoft Research colleagues Abigail Sellen, her husband Richard Harper, and John Tang. Abi is already a member (and an occasional collaborator whom I consider a friend), and Richard and John were inducted along with me into the Academy in 2014.

Bill Buxton needs no introduction among the avant garde of computing. And he’s well known in the design community as well, not to mention publishing on equestrianism and mountaineering, among other topics. In particular, his collection of interactive devices is arguably the most complete ever assembled. Only a tiny fraction of it is currently documented on-line. It contains everything from the world’s first radio and television remote controls, to the strangest keyboards ever conceived by mankind, and even the very first handcrafted wooden computer mice that started cropping up in the 1960’s.

The taxi dropped me off, I rang the doorbell, and when a tall man with rock-star hair gone gray and thinned precipitously by the ravages of time answered the door, I inquired:

“Is this, by any chance, the Buxton Home for Wayward Input Devices?”

To which Bill replied in the affirmative.

I indeed had the right place, I would fit right in here, and he showed me in.

Much of Bill’s collection lives off the premises, but his below-ground sanctum sanctorum was still walled by shelves bursting with transparent tubs packed with handheld gadgets that had arrived far before their time, historical mice and trackballs, and hybrid bastard devices of every conceivable description. And what little space remained was packed with books on design, sketching, and the history of mountaineering and the fur trade.

Despite his home office being situated below grade, natural light poured down into it through the huge front windows facing the inland sea, owing to the home’s modern design. A totally awesome space, and one that would have looked right at home on the front page of Architectural Digest.

Bill showed us his origami kayak on the back deck, treated us all to some hand-crafted martinis in the open-plan kitchen, and arranged for transportation to the awards dinner via a 10-person white stretch limousine. We even made a brief pit stop so Bill could dash out and pick up a bottle of champagne at a package store.

Great fun.

I’ve known Bill since 1994, when he visited Randy Pausch’s lab at the University of Virginia, and ever since people have often assumed that he was my advisor. He never was in any official capacity, but I read all of his papers in that period and in many ways I looked up to him as my research hero. And now that we’ve worked together as colleagues for nearly 10 years (!), and with Randy’s passing, I often do still see him as a mentor.

Or is that de-mentor?

Probably a little bit of each, in all honesty (grin).

Yeah, the award was pretty cool and all, but it was the red carpet thrown out by Bill that I’ll always remember.

Hinckley, K., CHI Academy. Inducted April 27th, 2014 at CHI 2014 in Toronto, Ontario, Canada, for career research accomplishments and service to the ACM SIGCHI community (Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction). [Ken Hinckley CHI Academy Bio]

The CHI Academy is an honorary group of individuals who have made substantial contributions to the field of human-computer interaction. These are the principal leaders of the field, whose efforts have shaped the disciplines and/or industry, and led the research and/or innovation in human-computer interaction. The criteria for election to the CHI Academy are:

  • Cumulative contributions to the field.
  • Impact on the field through development of new research directions and/or innovations.
  • Influence on the work of others.
  • Reasonably active participant in the ACM SIGCHI community.

Paper: Experimental Study of Stroke Shortcuts for a Touchscreen Keyboard with Gesture-Redundant Keys Removed

Text Entry on Touchscreen Keyboards: Less is More?

When we go from mechanical keyboards to touchscreens we inevitably lose something in the translation. Yet the proliferation of tablets has led to widespread use of graphical keyboards.

You can’t blame people for demanding more efficient text entry techniques. This is the 21st century, after all, and intuitively it seems like we should be able to do better.

While we can’t reproduce that distinctive smell of hot metal from mechanical keys clacking away at a typewriter ribbon, the presence of the touchscreen lets keyboard designers play lots of tricks in pursuit of faster typing performance. Since everything is just pixels on a display it’s easy to introduce non-standard key layouts. You can even slide your finger over the keys to shape-write entire words in a single swipe, as pioneered by Per Ola Kristensson and Shumin Zhai (their SHARK keyboard was the predecessor for Swype and related techniques).

While these types of tricks can yield substantial performance advantages, they also often demand a substantial investment in skill acquisition from the user before significant gains can be realized. In practice, this limits how many people will stick with a new technique long enough to realize such gains. The Dvorak keyboard offers a classic example of this: the balance of evidence suggests it’s slightly faster than QWERTY, but the high cost of switching to and learning the new layout just isn’t worth it.

In this work, we explored the performance impact of an alternative approach that builds on people’s existing touch-typing skills with the standard QWERTY layout.

And we do this in a manner that is so transparent, most people don’t even realize that anything is different at first glance.

Can you spot the difference?

Snap quiz time

[Figure: the research-prototype touchscreen keyboard with gesture-redundant keys removed]

What’s wrong with this keyboard?  Give it a quick once-over. It looks familiar, with the standard QWERTY layout, but do you notice anything unusual? Anything out of place?

Sure, the keys are arranged in a grid rather than the usual staggered key pattern, but that’s not the “key” difference (so to speak). That’s just an artifact of our quick ‘n’ dirty design of this research-prototype keyboard for touchscreen tablets.

Got it figured out?

All right. Pencils down.

Time to check your score. Give yourself:

  • One point if you noticed that there’s no space bar.
  • Two points if you noticed that there’s no Enter key, either.
  • Three points if the lack of a Backspace key gave you palpitations.
  • Four points and a feather in your cap if you caught the Shift key going AWOL as well.

Now, what if I also told you removing four essential keys from this keyboard–rather than harming performance–actually helps you type faster?

ONE TRICK TO WOO THEM ALL

All we ask of people coming to our touchscreen keyboard is to learn one new trick. After all, we have to make up for the summary removal of Space, Backspace, Shift, and Enter somehow. We accomplish this by augmenting the graphical touchscreen keyboard with stroke shortcuts, i.e. short straight-line finger swipes, as follows:

  • Swipe right, starting anywhere on the keyboard, to enter a Space.
  • Swipe left to Backspace.
  • Swipe upwards from any key to enter the corresponding shift-symbol. Swiping up on the a key, for example, enters an uppercase A; stroking up on the 1 key enters the ! symbol; and so on.
  • Swipe diagonally down and to the left for Enter.

[Figure: the marking-menu-style stroke overlay, shown with a finger on the keyboard]
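To make the mapping concrete, here is a rough sketch of how a single touch trace might be classified as a tap or one of the four stroke shortcuts. The tap radius and angular ranges are illustrative only; they are not the thresholds used in the study:

```python
# Hypothetical stroke classifier for the gesture-augmented keyboard. Screen
# coordinates are assumed, with y increasing downward. A short trace is a tap
# on the key under the starting point; longer traces map to the four shortcuts.
import math

def classify_stroke(x0, y0, x1, y1, tap_radius_px=20.0) -> str:
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < tap_radius_px:
        return "tap"                           # ordinary keypress at (x0, y0)
    angle = math.degrees(math.atan2(-dy, dx))  # 0 = right, 90 = up, +/-180 = left
    if -30 <= angle <= 30:
        return "space"                         # swipe right, anywhere on the keyboard
    if angle >= 150 or angle <= -150:
        return "backspace"                     # swipe left
    if 60 <= angle <= 120:
        return "shift"                         # swipe up: shifted symbol of the key at (x0, y0)
    if -165 < angle < -105:
        return "enter"                         # swipe diagonally down and to the left
    return "unrecognized"                      # ambiguous direction: do nothing
```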

DESIGN PROPERTIES OF A STROKE-AUGMENTED GRAPHICAL KEYBOARD

In addition to possible time-motion efficiencies of the stroke shortcuts themselves, the introduction of these four gestures–and the elimination of the corresponding keys made redundant by the gestures–yields a graphical keyboard with a number of interesting properties:

  • Allowing the user to input stroke gestures for Space, Backspace, and Enter anywhere on the keyboard eliminates fine targeting motions as well as any round-trips necessary for a finger to acquire the corresponding keys.
  • Instead of requiring two separate keystrokes—one to tap Shift and another to tap the key to be shifted—the Shift gesture combines these into a single action: the starting point selects a key, while the stroke direction selects the Shift function itself.
  • Removing these four keys frees an entire row on the keyboard.
  • Almost all of the numeric, punctuation, and special symbols typically relegated to the secondary and tertiary graphical keyboards can then be fit in a logical manner into the freed-up space.
  • Hence, the full set of characters can fit on one keyboard while holding the key size, number of keys, and footprint constant.
  • By having only a primary keyboard, this approach affords an economy of design that simplifies the interface, while offering further potential performance gains via the elimination of keyboard switching costs—and the extra key layouts to learn.
  • Although the strokes might reduce round-trip costs, we expect articulating the stroke gesture itself to take longer than a tap. Thus, we need to test these tradeoffs empirically.

RESULTS AND PRELIMINARY CONCLUSIONS

Our studies demonstrated that overall the removal of four keys—rather than coming at a cost—offers a net benefit.

Specifically, our experiments showed that a stroke keyboard with the gesture-redundant keys removed yielded a 16% performance advantage for input phrases containing mixed-case alphanumeric text and special symbols, without sacrificing error rate. We observed these performance advantages from the first block of trials onward.

Even in the case of entirely lowercase text—that is, in a context where we would not expect to observe a performance benefit because only the Space gesture offers any potential advantage—we found that our new design still performed as well as a standard graphical keyboard. Moreover, people learned the design with remarkable ease: 90% wanted to keep using the method, and 80% believed they typed faster than on their current touchscreen tablet keyboard.

Notably, our studies also revealed that it is necessary to remove the keys to achieve these benefits from the gestural stroke shortcuts. If both the stroke shortcuts and the keys remain in place, user hesitancy about which method to use undermines any potential benefit. Users, of course, also learn to use the gestural shortcuts much more quickly when they offer the only means of achieving a function.

Thus, in this context, less is definitely more in achieving faster performance for touchscreen QWERTY keyboard typing.

The full results are available in the technical paper linked below. The paper contributes a careful study of stroke-augmented keyboards, filling an important gap in the literature as well as demonstrating the efficacy of a specific design; shows that removing the gesture-redundant keys is a critical design choice; and demonstrates that stroke shortcuts can be effective in the context of multi-touch typing with both hands, even though previous studies with single-point stylus input had cast doubt on this approach.

Although our studies focus on the immediate end of the usability spectrum (as opposed to longitudinal studies over many input sessions), we believe the rapid returns demonstrated by our results illustrate the potential of this approach to improve touchscreen keyboard performance immediately, while also serving to complement other text-entry techniques such as shape-writing in the future.

Arif, A. S., Pahud, M., Hinckley, K., and Buxton, B., Experimental Study of Stroke Shortcuts for a Touchscreen Keyboard with Gesture-Redundant Keys Removed. In Proc. Graphics Interface 2014 (GI’14). Canadian Information Processing Society, Toronto, Ont., Canada. Montreal, Quebec, Canada, May 7-9, 2014. Received the Michael A. J. Sweeney Award for Best Student Paper. [PDF] [Talk Slides (.pptx)] [Video .MP4] [Video .WMV]

Watch A Touchscreen Keyboard with Gesture-Redundant Keys Removed video on YouTube

Paper: Writing Handwritten Messages on a Small Touchscreen

Here’s the last of our three papers at the MobileHCI 2013 conference. This was a particularly fun project, spearheaded by my colleague Wolf Kienzle, looking at a clever way to do handwriting input on a touchscreen using just your finger.

In general I’m a fan of using an actual stylus for handwriting, but in the context of mobile there are many “micro” note-taking tasks, akin to scrawling a note to yourself on a post-it, that wouldn’t justify unsheathing a pen even if your device had one.

The very cool thing about this approach is that it allows you to enter overlapping multi-stroke characters using the whole screen, and without resorting to something like Palm’s old Graffiti writing or full-on handwriting recognition.

[Figure: writing a handwritten message on a small touchscreen]

The interface also incorporates some nice fluid gestures for entering spaces between words, backspacing to delete previous strokes, or transitioning to a freeform drawing mode for inserting little sketches or smiley-faces into your instant messages, as seen above.

This paper also had the distinction of receiving an Honorable Mention Award for best paper at MobileHCI 2013. We’re glad the review committee liked our paper and saw its contributions as noteworthy, as it were (pun definitely intended).

Kienzle, W., Hinckley, K., Writing Handwritten Messages on a Small Touchscreen. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 179-182. Honorable Mention Award (Awarded to top 5% of all papers). [PDF] [video MP4] [Watch on YouTube – coming soon.]

Lasting Impact Award for “Sensing Techniques for Mobile Interaction”

Last week I received a significant award for some of my early work in mobile sensing.

It was not that long ago, really, that I would get strange glances from practical-minded people – those folks who would look at me with heads tilted downwards ever so slightly, eyebrows raised, and eyeballs askew – when I would mention how I was painting mobile devices with conductive epoxy and duct-taping accelerometers and infrared range-finders to them.

The dot-com bubble was still expanding, smartphones didn’t exist yet, and accelerometers were still far too expensive to reasonably consider on a device’s bill of materials. Many people still regarded the apex of handheld nirvana as the PalmPilot, although its luster was starting to fade.

And this Frankensteinian contraption of sensors, duct tape, and conductive epoxy was taking shape on my laboratory bench-top:

Sensing Pocket PC, circa 2000, with proximity range sensor, touch sensitivity, and tilt sensor

The Idea

I’d been dabbling in the area of sensor-enhanced mobile interaction for about a year, trying one idea here, another idea there, but the project had stubbornly refused to come together. For a long time I felt like it was basically a failure. But every so often my colleagues on the project – Jeff Pierce, Mike Sinclair, and Eric Horvitz – and I would come up with one new example, or another type of idea to try out, and slowly we populated a space of interesting new ways to use the sensors to make mobile devices smarter – or, to be more honest about it, just a little bit less stupid – in how they responded to the physical environment, how the user was handling the device, or the orientation of the screen.

The latter led to the idea of using the accelerometer to automatically re-orient the display based on how the user was holding the device. The accelerometer gave us a constant signal of this-way-up, and at some point we realized it would make a great way to switch between portrait and landscape display formats without any need for buttons or menus, or indeed without even explicitly having to think about the interaction at all. The handheld, by being perceptive about it, could offload the decision from the user– hey, I need to look at this table in landscape— to the background of the interaction, so that the user could simply move the device to the desired orientation, and our sensors and our software would automatically optimize the display accordingly.

There were also some interesting subtleties to it. Just using the raw angle of the display, relative to gravity, was not that satisfactory. We built in some hysteresis so the display wouldn’t chatter back and forth between different orientations. We added special handling when you put the handheld down flat on a desk, or picked it back up, so that the screen wouldn’t accidentally flip to a different orientation because of this brief, incidental motion. We noticed that flipping the screen upside-down, which we initially thought wouldn’t be useful, was an effective way to quickly show the contents of the screen to someone seated across the table from you. And we also added some layers of logic in there so that other uses of the accelerometer could co-exist with automatic screen rotation.
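For the curious, here is a small sketch of the kind of logic described above: pick an orientation from the accelerometer’s gravity vector, add hysteresis so the display doesn’t chatter near the diagonals, and hold the previous decision while the device lies flat on a desk. The axis convention and thresholds are illustrative, not the values from our prototype:

```python
# Hypothetical auto-rotation logic driven by the accelerometer's gravity vector.
import math

ORIENTATIONS = (0, 90, 180, 270)   # portrait, landscape, portrait-flipped, landscape

class AutoRotator:
    def __init__(self, hysteresis_deg=25.0, flat_g_threshold=0.35):
        self.current = 0
        self.hysteresis = hysteresis_deg
        self.flat_g = flat_g_threshold

    def update(self, ax, ay, az):
        """Feed one accelerometer sample (in g); returns the orientation to use.
        (az is unused in this simplified sketch.)"""
        if math.hypot(ax, ay) < self.flat_g:
            return self.current            # lying roughly flat: keep what we had
        tilt = math.degrees(math.atan2(ax, ay)) % 360.0
        circ = lambda a, b: abs((a - b + 180.0) % 360.0 - 180.0)
        candidate = min(ORIENTATIONS, key=lambda o: circ(tilt, o))
        # Switch only once we are clearly past the midpoint between orientations,
        # which keeps the screen from flickering back and forth near 45 degrees.
        if candidate != self.current and circ(tilt, self.current) > 45.0 + self.hysteresis:
            self.current = candidate
        return self.current
```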

Once we had this automatic screen rotation idea working well, I knew we had something. We worked furiously right up to the paper deadline, hammering out additional techniques, working out little kinks and details, figuring out how to convey the terrain we’d explored in the paper we were writing.

The reviewers all loved the paper, and it received a Best Paper Award at the conference. We had submitted it to the Association for Computing Machinery’s annual UIST Symposium – UIST 2000, the 13th Annual Symposium on User Interface Software and Technology, held in San Diego, California – because we knew the UIST community was ideally suited to evaluate this research. The paper had a novel combination of sensors. It was a systems paper – that is, it did not just propose a one-off technique but rather a suite of techniques that all used the sensors in a variety of creative ways that complemented one another. And UIST is a rigorously peer-reviewed single-track conference. It’s not the largest conference in the field of Human-Computer Interaction by a long shot – for many years it averaged about two hundred attendees – but as my Ph.D. advisor Randy Pausch (now known for “The Last Lecture“) would often say, “UIST is only 200 people, but it’s the right 200 people.”

This is the video, recorded back in the year 2000, that accompanied the paper. I think it’s stood the test of time pretty well– or at least a lot better than the hair on top of my head :-).

Sensing Techniques for Mobile Interaction on YouTube

The Award

Fast forward ten years, and the vast majority of handhelds and slates being produced today include accelerometers and other micro-electromechanical wonders. The cost of these sensors has dropped to essentially nothing. Increasingly, they’re included as a co-processor right on the die with other modules of mobile microprocessors. The day will soon come where it will be all but impossible to purchase a device without sensors directly integrated into the microscopic Manhattan of its silicon gates.

And our mobile screens all automatically rotate, like it or not 🙂

So, it was with great pleasure last week that I attended the 2011 24th annual ACM UIST Symposium, and received a Lasting Impact Award, presented to me by Stanford professor Dr. Scott Klemmer, for the contributions of our UIST 2000 paper “Sensing Techniques for Mobile Interaction.”

The inscription on the award reads:

Awarded for its scientific exploration of mobile interaction, investigating new interaction techniques for handheld mobile devices supported by hardware sensors, and laying the groundwork for new research and industrial applications.

UIST 2011 Lasting Impact Award

In the Meantime…

I remember demonstrating my prototype on-stage with Bill Gates at a media event here in Redmond, Washington in 2001. Gates spoke about the importance of sustained spending on R & D – in both the public and private sectors – and he used my demo as an example of some up-and-coming research, but what I most strongly recall is lingering in the green room backstage with him and some other folks. It wasn’t the first time that I’d met Gates, but it was the first occasion where I chit-chatted with him a bit in a casual, unstructured context. I don’t remember what we talked about but I do remember his foot twitching, always in motion, driving the pedal of a vast invisible loom, weaving a sweeping landscape surmounted by the towering summits of his electronic dreams.

I remember my palms sweating, nervous about the demo, hoping that the sensors I’d duct-taped to my transmogrified Cassiopeia E-105 Pocket PC wouldn’t break off or drain the battery or go crazy with some unforeseen nuance of the stage lighting (yes, infrared proximity sensors most definitely have stage fright).

And then less than a week later came the 9/11 attacks. Suddenly spiffy little sensors for mobile devices didn’t seem so important any more. Many product groups, including Windows Mobile at the time, got excited about my demonstration but then the realities of a thousand other crushing demands and priorities rained down on the fragile bubble of technological wonderland I’d been able to cobble together with my prototype. The years stretched by and sensors still hadn’t become mainstream like I had expected them to be.

Then some laptops started shipping with accelerometers to automatically park the hard-disk when you dropped the laptop. I remember seeing digital cameras that would sense the orientation you snapped a picture in, so that you could view it properly when you downloaded it. And when the iPhone shipped in 2007, one of the coolest features on it was the embedded accelerometer, which enabled automatic screen rotation and tilt-based games.

A View to the Future

It took about five years longer than I expected, but we have finally reached an age where clever uses of sensors– both for obvious things like games, as well as for subtle and not-so-obvious things like counting footfalls while you are walking around with the device– abound.

And my take on all this?

We ain’t seen nothin’ yet.

Since my initial paper on sensing techniques for mobile interaction, every couple of years another idea has struck me. How about answering your phone, or cuing a voice-recognition mode, just by holding your phone to your ear? How about bumping devices together as a way to connect them? What of dual-screen devices that can sense the posture of the screens, and thereby support a breadth of automatically sensed functions? What about new types of motion gestures that combine multi-touch interaction with the physical gestures, or vibratory signals, afforded by these sensors?

And I’m sure there are many more. My children will never know a world where their devices are not sensitive to motion and proximity, to orientation and elevation and all the headings of the compass.

The problem is, the future is not so obvious until you’ve struck upon the right idea, until you’ve found the one gold nugget in acres and acres of tailings from the mine of your technological ambitions.

A final word of advice: if your aim is to find these nuggets– whether in research or in creative endeavors– what you need to do is dig as fast as you possibly can. Burrow deeper. Dig side-tunnels where no-one has gone before. Risk collapse and explosion and yes, worst of all, complete failure and ignominious rejection of your diligently crafted masterpieces.

Above all else, fail faster.

Because sometimes those “failed” projects turn out to be the most rewarding of all.

***

This project would not have been possible without standing on the shoulders of many giants. Of course, there are my colleagues on the project– Jeff Pierce, who worked with me as a Microsoft Research Graduate Fellowship recipient at the time, and did most of the heavy lifting on the software infrastructure and contributed many of the ideas and nuances of the resulting techniques. Mike Sinclair, who first got me thinking about accelerometers and spent many, many hours helping me cobble together the sensing hardware. And Eric Horvitz, who helped to shape the broad strokes of the project and who was always an energetic sounding board for ideas.

With the passing of time that an award like this entails, one also reflects on how life has changed, and the people who are no longer there. I think of my advisor Randy Pausch, who in many ways has made my entire career possible, and his epic struggle with pancreatic cancer. I think of my first wife, Kerrie Exely, who died in 1997, and of her father, Bill, who also was claimed by cancer a couple of years ago.

Then there are the many scientists whose work I built upon in our exploration of sensing systems. Beverly Harrison’s explorations of embodied interactions. Albrecht Schmidt’s work on context sensing for mobile phones. Jun Rekimoto’s exploration of tilting user interfaces. Bill Buxton’s insights into background sensing. And many others cited in the original paper.

Award: Lasting Impact Award

Lasting Impact Award, for Sensing Techniques for Mobile Interaction, UIST 2000. “Awarded for its scientific exploration of mobile interaction, investigating new interaction techniques for handheld mobile devices supported by hardware sensors, and laying the groundwork for new research and industrial applications.” Awarded to Ken Hinckley, Jeff Pierce, Mike Sinclair, and Eric Horvitz at the 24th ACM UIST October 2011 (Sponsored by the ACM, SIGCHI, and SIGGRAPH). October 18, 2011. Check out the original paper or watch the video appended below.

UIST 2011 Lasting Impact Award for "Sensing techniques for mobile interaction"

Sensing Techniques for Mobile Interaction on YouTube

Paper: Sensor Synaesthesia: Touch in Motion, and Motion in Touch

Hinckley, K., and Song, H., Sensor Synaesthesia: Touch in Motion, and Motion in Touch. In Proc. CHI 2011 Conf. on Human Factors in Computing Systems. CHI 2011 Honorable Mention Award. [PDF] [video .WMV].

Watch Sensor Synaesthesia video on YouTube