Category Archives: Papers with YouTube Videos

Paper: Sensing Tablet Grasp + Micro-mobility for Active Reading

Lately I have been thinking about touch:

In the tablet-computer sense of the word.

To most people, this means the touchscreen. The intentional pokes and swipes and pinching gestures we would use to interact with a display.

But not to me.

Touch goes far beyond that.

Look at people’s natural behavior. When they refer to a book, or pass a document to a collaborator, there are two interesting behaviors that characterize the activity.

What I call the seen but unnoticed:

Simple habits and social cues, there all the time, but which fall below our conscious attention — if they are even noticed at all.

By way of example, let’s say we’re observing someone handle a magazine.

First, the person has to grasp the magazine. Seems obvious, but easy to overlook — and perhaps vital to understand. Although grasp typically doesn’t involve contact of the fingers with the touchscreen, this is a form of ‘touch’ nonetheless, even if it is one that traditionally hasn’t been sensed by computers.

Grasp reveals a lot about the intended use, whether the person might be preparing to pick up the magazine or pass it off, or perhaps settling down for a deep and immersive engagement with the material.

Second, as an inevitable consequence of grasping the magazine, it must move. Again, at first blush this seems obvious. But these movements may be overt, or they may be quite subtle. And to a keen eye — or an astute sensing system — they are a natural consequence of grasp, and indeed are what give grasp its meaning.

In this way, sensing grasp informs the detection of movements.

And, coming full circle, the movements thus detected enrich what we can glean from grasp as well.

Yet, this interplay of grasp and movement has rarely been recognized, much less actively sensed and used to enrich and inform interaction with tablet computers.

And this feeds back into a larger point that I have often found myself trying to make lately, namely that touch is about far more than interaction with the touch-screen alone.

If we want to really understand touch (as well as its future as a technology) then we need to deeply understand these other modalities — grasp and movement, and perhaps many more — and thereby draw out the full naturalness and expressivity of interaction with tablets (and mobile phones, and e-readers, and wearables, and many dreamed-of form-factors perhaps yet to come).

My latest publication looks into all of these questions, particularly as they pertain to reading electronic documents on tablets.

We constructed a tablet (albeit a green metallic beast of one at present) that can detect natural grips along its edges and on the entire back surface of the device. And with a full complement of inertial motion sensors, as well. This image shows the grip-sensing (back) side of our technological monstrosity:

Grip Sensing Tablet Hardware

But this set-up allowed us to explore ways of combining grip and subtle motion (what has sometimes been termed micro-mobility in the literature), resulting in the following techniques (among a number of others):

A Single User Engaging with a Single Device

Some of these techniques address the experience of an individual engaging with their own reading material.

For example, you can hold a bookmark with your thumb (much as you can keep your finger on a page in a physical book) and then tip the device. This flips back to the page that you’re holding:

Tip-to-Flip-x715

This ‘Tip-to-Flip’ technique involves both the grip and the movement of the device, and results in a fairly natural interaction that builds on a familiar habit from everyday experience with physical documents.
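For the programmers in the audience, here is a minimal sketch of how a grip event and an inertial “tip” could combine to trigger such a technique. It assumes a hypothetical API that reports whether the thumb is resting on the on-screen bookmark, plus a roll angle from the inertial sensors; it is an illustration, not the implementation from the paper.

```python
# Hypothetical sketch (not the paper's implementation): combine a grip event
# with an inertial "tip" motion to trigger the Tip-to-Flip page turn.

TIP_THRESHOLD_DEG = 25.0   # assumed tilt change needed to count as a "tip"

def detect_tip_to_flip(thumb_on_bookmark: bool,
                       roll_deg_start: float,
                       roll_deg_now: float) -> bool:
    """Return True when the user holds a bookmark with the thumb and tips
    the tablet far enough to flip back to the bookmarked page."""
    tipped = abs(roll_deg_now - roll_deg_start) > TIP_THRESHOLD_DEG
    return thumb_on_bookmark and tipped

# Example: thumb is resting on the bookmark and the tablet rolls about 30 degrees.
if detect_tip_to_flip(True, 2.0, 32.0):
    print("Flip back to bookmarked page")
```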

Another one we experimented with was a very subtle interaction that mimics holding a document and angling it up to inspect it more closely. When we sense this, the tablet zooms in slightly on the page, while removing all peripheral distractions such as menu-bars and icons:

Immersive Reading mode through grip sensing

This immerses the reader in the content, rather than the iconographic gewgaws which typically border the screen of an application as if to announce, “This is a computer!”

Multiple Users Collaborating around a Single Device

Another set of techniques we explored looked at how people pass devices to one another.

In everyday experience, passing a paper document to a collaborator is a very natural — and different — form of “sharing,” as compared to the oft-frustrating electronic equivalents we have at our disposal.

Likewise, computers should be able to sense and recognize such gestures in the real world, and use them to bring some of that socially and situationally appropriate sharing to the world of electronic documents.

We explored one such technique that automatically sets up a guest profile when you hand a tablet (displaying a specific document) to another user:

Face-to-Face-Handoff-x715

The other user can then read and mark-up that document, but he is not the beneficiary of a permanent electronic copy of it (as would be the case if you emailed him an attachment), nor is he permitted to navigate to other areas or look at other files on your tablet.

You’ve physically passed him the electronic document, and all he can do is look at it and mark it up with a pen.

Not unlike the semantics, long absent and sorely missed in computing, of a simple piece of paper.

A Single User Working With Multiple Devices

A final area we looked at considers what happens when people work across multiple tablets.

We already live in a world where people own and use multiple devices, often side-by-side, yet our devices typically have little or no awareness of one another.

But contrast this to the messy state of people’s physical desks, with documents strewn all over. People often place documents side-by-side as a lightweight and informal way of organizing them, and might dexterously pick one up or hold it at the ready for quick reference when engaged in an intellectually demanding task.

Again, missing from the world of the tablet computer.

But by sensing which tablets you hold, or pick up, our system allows people to quickly refer to and cross-reference content across federations of such devices.
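To make the bookkeeping concrete, here is a loose sketch under the assumption that each paired tablet reports its grip-sensed “held” state to a shared registry, so a cross-reference command can be routed to the device the user just picked up. The class and method names are hypothetical, not our system’s API.

```python
# Loose sketch (all names hypothetical): track which tablets in a paired
# federation are currently held, as reported by each device's grip sensor.
class Federation:
    def __init__(self):
        self.held = {}                      # device_id -> bool

    def update(self, device_id: str, is_held: bool):
        self.held[device_id] = is_held

    def active_device(self):
        """Return a device that is currently being held, or None."""
        return next((d for d, h in self.held.items() if h), None)

fed = Federation()
fed.update("tablet-A", False)
fed.update("tablet-B", True)
print(fed.active_device())   # -> "tablet-B": route the cross-reference there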

While the “Internet of Things” may be all the rage these days among the avant-garde of computing, such federations remain uncommon and in our view represent the future of a ‘Society of Devices’ that can recognize and interact with one another, all while respecting social mores, not the least of which are the subtle “seen but unnoticed” social cues afforded by grasping, moving, and orienting our devices.

Fine-Grained-Reference-x715

Closing Thoughts:

An Expanded Perspective of ‘Touch’

The examples above represent just a few simple steps. Much more can, and should, be done to fully explore and vet these directions.

But by viewing touch as far more than simple contact of the fingers with a grubby touchscreen — and expanding our view to consider grasp, movement of the device, and perhaps other qualities of the interaction that could be sensed in the future as well — our work hints at a far wider perspective.

A perspective teeming with the possibilities that would be raised by a society of mobile appliances with rich sensing capabilities, potentially leading us to far more natural, more expressive, and more creative ways of engaging in the knowledge work of the future.

 


 

Dongwook Yoon, Ken Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, Michel Pahud, and Marcel Gavriliu. 2015. Sensing Tablet Grasp + Micro-mobility for Active Reading. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). ACM, New York, NY, USA, 477-487. Charlotte, NC, Nov. 8-11, 2015. http://dx.doi.org/10.1145/2807442.2807510
[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]

Watch Sensing Tablet Grasp + Micro-mobility for Active Reading video on YouTube

Paper: Sensing Techniques for Tablet+Stylus Interaction (Best Paper Award)

It’s been a busy year, so I’ve been more than a little remiss in posting my Best Paper Award recipient from last year’s User Interface Software & Technology (UIST) symposium.

UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.

And software that makes clever uses thereof.

Title slide - sensing techniques for stylus + tablet interaction

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.

The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Stylus prototype augmented with sensors

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).

And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Sensor components inside the pen

Components of the sensor pen, including inertial sensors, a AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.

But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.

It’s all about the hand.

With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind.

Or shall I say hands. How people coordinate their action. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense, how we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.

I’ve constantly been staring at hands for over 20 years.

And yet I’m still constantly surprised.

People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.

Nobody even notices them until some acute observer takes the trouble to point them out.

For example:

Take a behavior as simple as holding a pen in your hand.

You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Tuck and Palm Grips for temporarily stowing a pen

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.

This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less actively sensed by the tablets and pens that we use — or even leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.

If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

Core Pen Grips

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.

Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips, based on placing one (or more) fingers on the screen while holding the pen:

Single Finger Extension Grips for Touch Gestures with Pen-in-hand

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.

People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Multiple Finger Extension Grips for Touch Gestures with Pen-in-hand

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.

So, it began to dawn on us that there was all this untapped richness in terms of how people hold, manipulate, write on, and extend fingers when using pen and touch on tablets.

And that sensing this could enable some very interesting new possibilities for the user interfaces of stylus + tablet computing.

This is where our custom hardware came in.

On our pen, for example, we can sense subtle motions — using full 3D inertial sensors including accelerometer, gyroscope, and magnetometer — as well as sense how the user grips the pen — this time using a flexible capacitive substrate wrapped around the entire barrel of the pen.

These capabilities then give rise to sensor signals such as the following:

Grip and motion sensors on the stylus
Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.

This makes various pen grips and motions stand out quite distinctly as states that we can identify using some simple gesture recognition techniques.
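As an illustration only, a nearest-centroid classifier over the flattened capacitive grip image is one simple way such states could be recognized. The feature layout, labels, and method here are assumptions for the sketch, not the recognizer we actually used.

```python
# Illustrative sketch only: a nearest-centroid classifier over the pen's
# capacitive grip image. Labels and feature layout are assumptions.
import numpy as np

GRIP_LABELS = ["writing", "tuck", "palm"]

def train_centroids(samples: np.ndarray, labels: list) -> dict:
    """samples: (n, k) flattened capacitance maps; labels: one grip label per row."""
    labels = np.asarray(labels)
    return {g: samples[labels == g].mean(axis=0) for g in GRIP_LABELS}

def classify_grip(grip_image: np.ndarray, centroids: dict) -> str:
    """Assign the current grip image to the closest trained centroid."""
    return min(centroids, key=lambda g: np.linalg.norm(grip_image - centroids[g]))
```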

Armed with these capabilities, we explored presenting a number of context-appropriate tools.

As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense it’s your palm, we can go a long ways towards solving these problems with pen-and-touch interaction.
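Here is a sketch of the gating logic, assuming the hypothetical grip recognizer sketched above plus a touch system that reports contact area. The threshold and names are made up for illustration.

```python
# Hedged sketch: gate touchscreen events on the sensed pen grip so a planted
# palm doesn't scribble stray ink or trigger accidental gestures.

def handle_touch(touch_event, pen_grip: str, touch_area_mm2: float):
    WRITING_GRIP = "writing"
    LARGE_CONTACT_MM2 = 400.0   # assumed size above which a contact looks like a palm
    if pen_grip == WRITING_GRIP and touch_area_mm2 > LARGE_CONTACT_MM2:
        return None               # ignore: almost certainly the resting palm
    return touch_event             # otherwise pass the touch through to the UI
```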

To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:

Pen tools appear

Tools specific to the pen appear when the user taps on the screen with the pen stowed in hand.

But we can take this even further, such as to distinguish bare-handed touches (which support the standard panning and zooming behaviors) from a pinch articulated with the pen-in-hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen:

Pen Grip + Motion example: Full canvas zoom vs. Magnifier tool

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.

Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrush tool using the sensors

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
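Here is a rough geometric sketch of that idea: deriving an elliptical spray footprint from the pen’s tilt, lean direction, and hover height. It is a simplified approximation for illustration, not the paper’s rendering code.

```python
# Rough approximation (not the paper's renderer): derive an elliptical spray
# footprint on the screen plane from the pen's 3D orientation and hover height.
import math

def spray_ellipse(tilt_deg: float, azimuth_deg: float,
                  hover_mm: float, cone_half_angle_deg: float = 15.0):
    """tilt_deg: angle between pen axis and screen normal.
    azimuth_deg: direction the pen leans, measured in the screen plane.
    Returns (center_offset_xy_mm, minor_radius_mm, major_radius_mm)."""
    tilt = math.radians(tilt_deg)
    half = math.radians(cone_half_angle_deg)
    minor = hover_mm * math.tan(half)             # radius if the pen were vertical
    major = minor / max(math.cos(tilt), 0.1)      # footprint stretches with tilt
    offset = hover_mm * math.tan(tilt)            # center shifts along the lean
    cx = offset * math.cos(math.radians(azimuth_deg))
    cy = offset * math.sin(math.radians(azimuth_deg))
    return (cx, cy), minor, major
```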

At any rate, it was a really fun project that garnered a Best Paper Award and a fair bit of press coverage (Gizmodo, Engadget, and FastCo Design, which named it the #2 User Interface innovation of 2014, among other coverage). It’s pretty hard to top that.

Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.

Hmmm…

You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).


Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., Wilson, A., Sensing Techniques for Tablet+Stylus Interaction. In the 27th ACM Symposium on User Interface Software and Technology (UIST ’14), Honolulu, Hawaii, Oct 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379

Watch Context Sensing Techniques for Tablet+Stylus Interaction video on YouTube

Project: Bimanual In-Place Commands

Here’s another interesting loose end, this one from 2012, which describes a user interface known as “In-Place Commands” that Michel Pahud, Bill Buxton, and I developed for a range of direct-touch form factors, including everything from tablets and tabletops all the way up to electronic whiteboards a la the modern Microsoft Surface Hub devices of 2015.

Microsoft is currently running a Request for Proposals for Surface Hub research, by the way, so check it out if that sort of thing is at all up your alley. If your proposal is selected you’ll get a spiffy new Surface Hub and $25,000 to go along with it.

We’ve never written up a formal paper on our In-Place Commands work, in part because there is still much to do and we intend to pursue it further when the time is right. But in the meantime the following post and video documenting the work may be of interest to aficionados of efficient interaction on such devices. This also relates closely to the Finger Shadow and Accordion Menu explored in our Pen + Touch work, documented here and here, which collectively form a class of such techniques.

While we wouldn’t claim that any one of these represent the ultimate approach to command and control for direct input, in sum they illustrate many of the underlying issues, the rich set of capabilities we strive to support, and possible directions for future embellishments as well.

Knies, R. In-Place: Interacting with Large Displays. Reporting on research by Pahud, M., Hinckley, K., and Buxton, B. TechNet Inside Microsoft Research Blog Post, Oct 4th, 2012. [Author’s cached copy of post as PDF] [Video MP4] [Watch on YouTube]

In-Place Commands Screen Shot

The user can call up commands in-place, directly where he is working, by touching two fingers down and fanning out the available tool palettes. Many of the functions thus revealed act as click-through tools, where the user may select a tool and apply it in a single action, as the user is about to do for the line-drawing tool in the image above.

Watch Bimanual In-Place Commands video on YouTube

Paper: LightRing: Always-Available 2D Input on Any Surface

In this modern world bristling with on-the-go-go-go mobile activity, the dream of an always-available pointing device has long been held as a sort of holy grail of ubiquitous computing.

Ubiquitous computing, as futurists use the term, refers to the once-farfetched vision where computing pervades everything, everywhere, in a sort of all-encompassing computational nirvana of socially-aware displays and sensors that can respond to our every whim and need.

From our shiny little phones.

To our dull beige desktop computers.

To the vast wall-spanning electronic whiteboards of a future largely yet to come.

How will we interact with all of these devices as we move about the daily routine of this rapidly approaching future? As we encounter computing in all its many forms, carried on our person as well as enmeshed in the digitally enhanced architecture of walls, desktops, and surfaces all around?

Enter LightRing, our early take on one possible future for ubiquitous interaction.

LightRing device on a supporting surface

By virtue of being a ring always worn on the finger, LightRing travels with us and is always present.

By virtue of some simple sensing and clever signal processing, LightRing can be supported in an extremely compact form-factor while providing a straightforward pointing modality for interacting with devices.

At present, we primarily consider LightRing as it would be configured to interact with a situated display, such as a desktop computer, or a presentation projected against a wall at some distance.

The user moves their index finger, angling left and right, or flexing up and down by bending at the knuckle. Simple stuff, I know.

But unlike a mouse, it’s not anchored to any particular computer.

It travels with you.

It’s a go-everywhere interaction modality.

Close-up of LightRing and hand angles inferred from sensors

Left: The degrees-of-freedom detected by the LightRing sensors. Right: Conceptual mapping of hand movement to the sensed degrees of freedom. LightRing then combines these to support 2D pointing at targets on a display, or other interactions.

LightRing can then sense these finger movements–using a one-dimensional gyroscope to capture the left-right movement, and an infrared sensor-emitter pair to capture the proximity of the flexing finger joint–to support a cursor-control mode that is similar to how you would hold and move a mouse on a desktop.
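In code, the core mapping might look something like the sketch below: one channel nudges the cursor horizontally, the other vertically, mouse-style. The gains and units are assumptions, and the published system’s signal processing is more involved than this.

```python
# Minimal sketch, under assumed units and gains (not the published LightRing
# pipeline): map the two sensed degrees of freedom to relative 2D cursor motion.

GAIN_X = 8.0    # pixels per degree of side-to-side finger rotation (assumed)
GAIN_Y = 300.0  # pixels per unit of normalized flexion change (assumed)

def cursor_delta(gyro_deg_per_s: float, dt_s: float,
                 flex_prev: float, flex_now: float):
    """gyro: 1-axis rate for left/right pointing; flex_*: IR proximity mapped to [0, 1]."""
    dx = gyro_deg_per_s * dt_s * GAIN_X      # integrate angular rate over the frame
    dy = (flex_now - flex_prev) * GAIN_Y     # knuckle flexion moves the cursor vertically
    return dx, dy
```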

Except there’s no mouse at all.

And there needn’t even be a desktop, as you can see in the video embedded below.

LightRing just senses the movement of your finger.  You can make the pointing motions on a tabletop, sure, but you can just as easily do them on a wall. Or on your pocket. Or a handheld clipboard.

All the sensing is relative so LightRing always knows how to interpret your motions to control a 2D cursor on a display. Once the LightRing has been paired with a situated device, this lets you point at targets, even if the display itself is beyond your physical reach. You can sketch or handwrite characters with your finger–another scenario we have explored in depth on smartphones and even watches.

The trick to the LightRing is that it can automatically, and very naturally, calibrate itself to your finger’s range of motion if you just swirl your finger. From that circular motion LightRing can work backwards from the sensor values to recover how your finger is moving, assuming it is constrained to (roughly) a 2D plane. And that, combined with a button-press or finger touch on the ring itself, is enough to provide an effective input device.
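A minimal sketch of that self-calibration idea follows, assuming we simply normalize each sensor channel by the range observed during the swirl; the real LightRing model works backwards from the circular motion more carefully than this.

```python
# Sketch of swirl-based self-calibration (simple range normalization; the
# paper's actual model is more principled).
def calibrate_from_swirl(samples):
    """samples: (gyro_integrated_angle, ir_flexion) pairs recorded while the
    user swirls a circle. Returns a function mapping raw readings to ~[-1, 1]^2."""
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    cx, cy = (max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2
    rx = max((max(xs) - min(xs)) / 2, 1e-6)
    ry = max((max(ys) - min(ys)) / 2, 1e-6)

    def to_unit(raw_x: float, raw_y: float):
        return (raw_x - cx) / rx, (raw_y - cy) / ry

    return to_unit
```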

The LightRing, as we have prototyped it now, is just one early step in the process. There’s a lot more we could do with this device, and many more practical problems that would need to be resolved to make it a useful adjunct to everyday devices–and to tap its full potential.

But my co-author Wolf Kienzle and I are working on it.

And hopefully, before too much longer now, we’ll have further updates on even more clever and fanciful stuff that we can do through this one tiny keyhole into this field of dreams, the verdant golden country of ubiquitous computing.

_____________________________________________________

Kienzle, W., Hinckley, K., LightRing: Always-Available 2D Input on Any Surface. In the 27th ACM Symposium on User Interface Software and Technology (UIST 2014), Honolulu, Hawaii, Oct. 5-8, 2014, pp. 157-160. [PDF] [video.mp4 TBA] [Watch on YouTube]

Watch LightRing video on YouTube

Project: The Analog Keyboard: Text Input for Small Devices

With the big meaty man-thumbs that I sport, touchscreen typing–even on a full-size tablet computer–can be challenging for me.

Take it down to a phone, and I have to spend more time checking for typographical errors and embarrassing auto-miscorrections than I do actually typing in the text.

But typing on a watch?!?

I suppose you could cram an entire QWERTY layout, all those keys, into a tiny 1.6″ screen, but then typing would become an exercise in microsurgery, the augmentation of a high-power microscope an absolute necessity.

But if you instead re-envision ‘typing’ in a much more direct, analog fashion, then it’s entirely possible. And in a highly natural and intuitive manner to boot.

Enter the Analog Keyboard Project.

Analog Watch Keyboard on Moto 360 (round screen)

Wolf Kienzle, a frequent collaborator of mine, just put out an exciting new build of our touchscreen handwriting technology optimized for watches running the Android Wear Platform, including the round Moto 360 device that everyone seems so excited about.

Get all the deets–and the download–from Wolf’s project page, available here.

This builds on the touchscreen writing prototype we first presented at the MobileHCI 2013 conference, where the work earned an Honorable Mention Award, but optimized in a number of ways to fit on the tiny screen (and small memory footprint) of current watches.

All you have to do is scrawl the letters that you want to type–in a fully natural manner, not in some inscrutable secret computer graffiti-code like in those dark days of the late 1990s–and the prototype is smart enough to transcribe your finger-writing to text.

It even works for numbers and common punctuation symbols like @ and #, indispensable tools for the propagation of internet memes and goofy cat videos these days.

Writing numbers and punctuation symbols on the Analog Keyboard

However, to fit the resource-constrained environment of the watch, the prototype currently only supports lowercase letters.

Because we all know that when it comes to the internet, UPPERCASE IS JUST FOR TROLLZ anyway.

Best of all, if you have an Android Wear device you can try it out for yourself. Just side-load the Analog Keyboard app onto your watch and once again you can write the analog way, the way real men did in the frontier days. Before everyone realized how cool digital watches were, and all we had to express our innermost desires was a jar of octopus ink and a sharpened bald eagle feather. Or something like that.

Y’know, the things that made America great.

Only now with more electrons.

You can rest easy, though, if these newfangled round watches like the Moto 360 are just a little bit too fashionable for you. As shown below, it works just fine on the more chunky square-faced designs such as the Samsung Gear Live as well.

Analog Keyboard on Samsung Gear Live watch

Check out the video embedded below, and if you have a supported Android Wear device, download the prototype and give it a try. I know Wolf would love to get your feedback on what it feels like to use the Analog Keyboard for texting on your watch.

Bring your timepiece into the 21st century.

You’ll be the envy of every digital watch nerd for miles around.

Besides: it’s clearly an idea whose time has come.

Kienzle, W., Hinckley, K., The Analog Keyboard Project. Handwriting keyboard download for Android Wear. Released October 2014. [Project Details and Download] [Watch demo on YouTube]

 

Watch Analog Keyboard video on YouTube

Paper: Experimental Study of Stroke Shortcuts for a Touchscreen Keyboard with Gesture-Redundant Keys Removed

Text Entry on Touchscreen Keyboards: Less is More?

When we go from mechanical keyboards to touchscreens we inevitably lose something in the translation. Yet the proliferation of tablets has led to widespread use of graphical keyboards.

You can’t blame people for demanding more efficient text entry techniques. This is the 21st century, after all, and intuitively it seems like we should be able to do better.

While we can’t reproduce that distinctive smell of hot metal from mechanical keys clacking away at a typewriter ribbon, the presence of the touchscreen lets keyboard designers play lots of tricks in pursuit of faster typing performance. Since everything is just pixels on a display it’s easy to introduce non-standard key layouts. You can even slide your finger over the keys to shape-write entire words in a single swipe, as pioneered by Per Ola Kristensson and Shumin Zhai (their SHARK keyboard was the predecessor for Swype and related techniques).

While these types of tricks can yield substantial performance advantages, they also often demand a substantial investment in skill acquisition from the user before significant gains can be realized. In practice, this limits how many people will stick with a new technique long enough to realize such gains. The Dvorak keyboard offers a classic example of this: the balance of evidence suggests it’s slightly faster than QWERTY, but the high cost of switching to and learning the new layout just isn’t worth it.

In this work, we explored the performance impact of an alternative approach that builds on people’s existing touch-typing skills with the standard QWERTY layout.

And we do this in a manner that is so transparent, most people don’t even realize that anything is different at first glance.

Can you spot the difference?

Snap quiz time

Stroke-Kbd-redundant-keys-removed-fullres

What’s wrong with this keyboard?  Give it a quick once-over. It looks familiar, with the standard QWERTY layout, but do you notice anything unusual? Anything out of place?

Sure, the keys are arranged in a grid rather than the usual staggered key pattern, but that’s not the “key” difference (so to speak). That’s just an artifact of our quick ‘n’ dirty design of this research-prototype keyboard for touchscreen tablets.

Got it figured out?

All right. Pencils down.

Time to check your score. Give yourself:

  • One point if you noticed that there’s no space bar.
  • Two points if you noticed that there’s no Enter key, either.
  • Three points if the lack of a Backspace key gave you palpitations.
  • Four points and a feather in your cap if you caught the Shift key going AWOL as well.

Now, what if I also told you removing four essential keys from this keyboard–rather than harming performance–actually helps you type faster?

One Trick to Woo Them All

All we ask of people coming to our touchscreen keyboard is to learn one new trick. After all, we have to make up for the summary removal of Space, Backspace, Shift, and Enter somehow. We accomplish this by augmenting the graphical touchscreen keyboard with stroke shortcuts, i.e., short straight-line finger swipes, as follows (a minimal recognizer is sketched below the figures):

marking-menu-overlay-5

  • Swipe right, starting anywhere on the keyboard, to enter a Space.
  • Swipe left to Backspace.
  • Swipe upwards from any key to enter the corresponding shift-symbol. Swiping up on the a key, for example, enters an uppercase A; stroking up on the 1 key enters the ! symbol; and so on.
  • Swipe diagonally down and to the left for Enter.

marking-menu-overlay-with-finger
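And here is the minimal recognizer sketch promised above: classify each touch trace as a key tap or one of the four stroke shortcuts, based on its length and dominant direction. The threshold and return format are assumptions for illustration, not the keyboard’s actual implementation (which, for instance, looks specifically for a down-and-left diagonal for Enter).

```python
# Hypothetical recognizer sketch (thresholds and mapping are assumptions, not
# the study's code): decide whether a touch trace is an ordinary key tap or
# one of the four stroke shortcuts.
import math

MIN_STROKE_PX = 40.0   # traces shorter than this count as taps on the start key

def classify_stroke(x0: float, y0: float, x1: float, y1: float, start_key: str):
    dx = x1 - x0
    dy = y0 - y1                      # positive when the finger moved up the screen
    if math.hypot(dx, dy) < MIN_STROKE_PX:
        return ("tap", start_key)
    if abs(dx) >= abs(dy):            # predominantly horizontal stroke
        return ("space", None) if dx > 0 else ("backspace", None)
    if dy > 0:
        return ("shift_symbol", start_key)   # upward: shifted symbol of the start key
    return ("enter", None)            # downward (down-and-left in the actual design)
```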

DESIGN PROPERTIES OF A STROKE-AUGMENTED GRAPHICAL KEYBOARD

In addition to possible time-motion efficiencies of the stroke shortcuts themselves, the introduction of these four gestures–and the elimination of the corresponding keys made redundant by the gestures–yields a graphical keyboard with a number of interesting properties:

  • Allowing the user to input stroke gestures for Space, Backspace, and Enter anywhere on the keyboard eliminates fine targeting motions as well as any round-trips necessary for a finger to acquire the corresponding keys.
  • Instead of requiring two separate keystrokes—one to tap Shift and another to tap the key to be shifted—the Shift gesture combines these into a single action: the starting point selects a key, while the stroke direction selects the Shift function itself.
  • Removing these four keys frees an entire row on the keyboard.
  • Almost all of the numeric, punctuation, and special symbols typically relegated to the secondary and tertiary graphical keyboards can then be fit in a logical manner into the freed-up space.
  • Hence, the full set of characters can fit on one keyboard while holding the key size, number of keys, and footprint constant.
  • By having only a primary keyboard, this approach affords an economy of design that simplifies the interface, while offering further potential performance gains via the elimination of keyboard switching costs—and the extra key layouts to learn.
  • Although the strokes might reduce round-trip costs, we expect articulating the stroke gesture itself to take longer than a tap. Thus, we need to test these tradeoffs empirically.

RESULTS AND PRELIMINARY CONCLUSIONS

Our studies demonstrated that overall the removal of four keys—rather than coming at a cost—offers a net benefit.

Specifically, our experiments showed that a stroke keyboard with the gesture-redundant keys removed yielded a 16% performance advantage for input phrases containing mixed-case alphanumeric text and special symbols, without sacrificing error rate. We observed these performance advantages from the first block of trials onward.

Even in the case of entirely lowercase text—that is, in a context where we would not expect to observe a performance benefit because only the Space gesture offers any potential advantage—we found that our new design still performed as well as a standard graphical keyboard. Moreover, people learned the design with remarkable ease: 90% wanted to keep using the method, and 80% believed they typed faster than on their current touchscreen tablet keyboard.

Notably, our studies also revealed that it is necessary to remove the keys to achieve these benefits from the gestural stroke shortcuts. If both the stroke shortcuts and the keys remain in place, user hesitancy about which method to use undermines any potential benefit. Users, of course, also learn to use the gestural shortcuts much more quickly when they offer the only means of achieving a function.

Thus, in this context, less is definitely more in achieving faster performance for touchscreen QWERTY keyboard typing.

The full results are available in the technical paper linked below. The paper contributes a careful study of stroke-augmented keyboards, filling an important gap in the literature as well as demonstrating the efficacy of a specific design; shows that removing the gesture-redundant keys is a critical design choice; and establishes that stroke shortcuts can be effective in the context of multi-touch typing with both hands, even though previous studies with single-point stylus input had cast doubt on this approach.

Although our studies focus on the immediate end of the usability spectrum (as opposed to longitudinal studies over many input sessions), we believe the rapid returns demonstrated by our results illustrate the potential of this approach to improve touchscreen keyboard performance immediately, while also serving to complement other text-entry techniques such as shape-writing in the future.

Arif, A. S., Pahud, M., Hinckley, K., and Buxton, B., Experimental Study of Stroke Shortcuts for a Touchscreen Keyboard with Gesture-Redundant Keys Removed. In Proc. Graphics Interface 2014 (GI’14). Canadian Information Processing Society, Toronto, Ont., Canada. Montreal, Quebec, Canada, May 7-9, 2014. Received the Michael A. J. Sweeney Award for Best Student Paper. [PDF] [Talk Slides (.pptx)] [Video .MP4] [Video .WMV]

Watch A Touchscreen Keyboard with Gesture-Redundant Keys Removed video on YouTube

Paper: Writing Handwritten Messages on a Small Touchscreen

Here’s the final of our three papers at the MobileHCI 2013 conference. This was a particularly fun project, spearheaded by my colleague Wolf Kienzle, looking at a clever way to do handwriting input on a touchscreen using just your finger.

In general I’m a fan of using an actual stylus for handwriting, but in the context of mobile there are many “micro” note-taking tasks, akin to scrawling a note to yourself on a post-it, that wouldn’t justify unsheathing a pen even if your device had one.

The very cool thing about this approach is that it allows you to enter overlapping multi-stroke characters using the whole screen, and without resorting to something like Palm’s old Graffiti writing or full-on handwriting recognition.

Touchscreen-Writing-fullres

The interface also incorporates some nice fluid gestures for entering spaces between words, backspacing to delete previous strokes, or transitioning to a freeform drawing mode for inserting little sketches or smiley-faces into your instant messages, as seen above.

This paper also had the distinction of receiving an Honorable Mention Award for best paper at MobileHCI 2013. We’re glad the review committee liked our paper and saw its contributions as noteworthy, as it were (pun definitely intended).

Kienzle, W., Hinckley, K., Writing Handwritten Messages on a Small Touchscreen. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 179-182. Honorable Mention Award (Awarded to top 5% of all papers). [PDF] [video MP4] [Watch on YouTube – coming soon.]