Category Archives: Awards & Recognitions

Editor-in-Chief, ACM Transactions on Computer-Human Interaction (TOCHI)

The ACM Transactions on Computer-Human Interaction (TOCHI) has long been regarded as the flagship journal of the field. I’ve served on its editorial board since 2003, and thus have a long history with the endeavor.

So now that Shumin Zhai’s second term has come to a close, it is a great honor to report that I’ve assumed the helm as Editor-in-Chief. Shumin worked wonders in improving the efficiency and impact of the journal, diligent efforts that I am working hard to build upon. And I have many ideas and creative initiatives in the works that I hope can further advance the journal and help it to have even more impact.

The journal publishes original and significant research papers, and especially likes to see systems-focused, long-term, or integrative contributions to human-computer interaction. TOCHI also publishes individual studies, methodologies, and techniques when we deem the contributions substantial enough. On occasion we also publish impactful, well-argued, and well-supported essays on important or emerging issues in human-computer interaction.

TOCHI prides itself on a rapid turn-around on manuscripts, with an average response time of about 50 days, and we often return manuscripts (particularly when there is not a good fit) much faster than that. We strive to make decisions within 90 days, and although that isn’t always possible, upon acceptance we do also feature very rapid publication. Digital editions of articles publish to the ACM Digital Library as soon as they are accepted, copyedited, and typeset. TOCHI can often, therefore, move articles into publication as fast as or faster than many of the popular conference venues.

Accepted papers at TOCHI also have the opportunity to be presented at participating SIGCHI conferences, which currently include CHI, CSCW, UIST, and MobileHCI. Authors therefore get the benefits of a rigorous reviewing process with a full journal revision cycle, plus the prestige of the TOCHI brand when they present new work to colleagues at a top HCI conference.

To keep track of all the latest developments, you can get alerts for new TOCHI articles as they hit the Digital Library — never miss a key new result.  Or subscribe to our feed — just click on the little RSS link on the far right of the TOCHI landing page.

 


Hinckley, K., Editor-in-Chief, ACM Transactions on CHI. Three-year term, commencing Sept. 1st, 2015. [TOCHI on the ACM Digital Library]

The flagship journal of CHI.


Award: CHI Academy, 2014 Inductee

I’ve been a bit remiss in posting this, but as of April 2014, I’m a member of the CHI Academy, an honorary group that recognizes leaders in the field of Human-Computer Interaction.

Among whom I can apparently now include myself, strange as that may seem.

I was completely surprised by this and can honestly say I never expected any special recognition. I’ve just been plugging away on my little devices and techniques, writing papers here and there, but I suppose over the decades it all adds up. I don’t know if this means that my work is especially good or that I’m just getting older, but either way I appreciate the gesture of recognition from my peers in the field.

I was in a bit of a ribald mood when I got the news, so when the award organizers asked me to reply with my bio, I figured what the heck and decided to have some fun with it:

Ken Hinckley is a Principal Researcher at Microsoft Research, where he has spent the last 17 years investigating novel input devices, device form-factors, and modalities of interaction.

He feels fortunate to have had the opportunity to collaborate with many CHI Academy members while working there, including noted trouble-makers such as Bill Buxton, Patrick Baudisch, and Eric Horvitz—as well as George Robertson, whom he owes a debt of gratitude for hiring him fresh out of grad school.

Ken is perhaps best known for his work on sensing techniques, cross-device interaction, and pen computing. He has published over 75 academic papers and is a named inventor on upwards of 150 patents. Ken holds a Ph.D. in Computer Science from the University of Virginia, where he studied with Randy Pausch.

He has also published fiction in professional markets including Nature and Fiction River, and prides himself on still being able to hit 30-foot jump shots at age 44.

Not too shabby.

Now, in the spirit of full disclosure, there are no real perks associated with being a CHI Academy member as far as I’ve been able to figure. People do seem to ask me for reference letters just a tiny bit more frequently. And I definitely get more junk email from organizers of dubious-sounding conferences than before. No need for research heroics if you want a piece of that, just email me and I’d be happy to forward them along.

But the absolute most fun part of the whole deal was a small private celebration that noted futurist Bill Buxton organized at his ultra-modern home fronting Lake Ontario in Toronto, where I was joined by my Microsoft Research colleagues Abigail Sellen, her husband Richard Harper, and John Tang. Abi is already a member (and an occasional collaborator whom I consider a friend), and Richard and John were inducted along with me into the Academy in 2014.

Bill Buxton needs no introduction among the avant-garde of computing. And he’s well known in the design community as well, having published on equestrianism and mountaineering, among other topics. In particular, his collection of interactive devices is arguably the most complete ever assembled. Only a tiny fraction of it is currently documented on-line. It contains everything from the world’s first radio and television remote controls, to the strangest keyboards ever conceived by mankind, and even the very first handcrafted wooden computer mice that started cropping up in the 1960s.

The taxi dropped me off, I rang the doorbell, and when a tall man with rock-star hair gone gray and thinned precipitously by the ravages of time answered the door, I inquired:

“Is this, by any chance, the Buxton Home for Wayward Input Devices?”

To which Bill replied in the affirmative.

I indeed had the right place, I would fit right in here, and he showed me in.

Much of Bill’s collection lives off the premises, but his below-ground sanctum sanctorum was still walled by shelves bursting with transparent tubs packed with handheld gadgets that had arrived far before their time, historical mice and trackballs, and hybrid bastard devices of every conceivable description. And what little space remained was packed with books on design, sketching, and the history of mountaineering and the fur trade.

Despite his home office being situated below grade, natural light poured down into it through the huge front windows facing the inland sea, owing to the home’s modern design. Totally awesome space and would have looked right at home on the front page of Architectural Digest.

Bill showed us his origami kayak on the back deck, treated us all to some hand-crafted martinis in the open-plan kitchen, and arranged for transportation to the awards dinner via a 10-person white stretch limousine. We even made a brief pit stop so Bill could dash out and pick up a bottle of champagne at a package store.

Great fun.

I’ve known Bill since 1994, when he visited Randy Pausch’s lab at the University of Virginia, and ever since people have often assumed that he was my advisor. He never was in any official capacity, but I read all of his papers in that period and in many ways I looked up to him as my research hero. And now that we’ve worked together as colleagues for nearly 10 years (!), and with Randy’s passing, I often do still see him as a mentor.

Or is that de-mentor?

Probably a little bit of each, in all honesty (grin).

Yeah, the award was pretty cool and all, but it was the red carpet thrown out by Bill that I’ll always remember.

Hinckley, K., CHI Academy. Inducted April 27th, 2014 at CHI 2014 in Toronto, Ontario, Canada, for career research accomplishments and service to the ACM SIGCHI community (the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction). [Ken Hinckley CHI Academy Bio]

The CHI Academy is an honorary group of individuals who have made substantial contributions to the field of human-computer interaction. These are the principal leaders of the field, whose efforts have shaped the disciplines and/or industry, and led the research and/or innovation in human-computer interaction. The criteria for election to the CHI Academy are:

  • Cumulative contributions to the field.
  • Impact on the field through development of new research directions and/or innovations.
  • Influence on the work of others.
  • Reasonably active participation in the ACM SIGCHI community.

Paper: Experimental Study of Stroke Shortcuts for a Touchscreen Keyboard with Gesture-Redundant Keys Removed

Text Entry on Touchscreen Keyboards: Less is More?

When we go from mechanical keyboards to touchscreens we inevitably lose something in the translation. Yet the proliferation of tablets has led to widespread use of graphical keyboards.

You can’t blame people for demanding more efficient text entry techniques. This is the 21st century, after all, and intuitively it seems like we should be able to do better.

While we can’t reproduce that distinctive smell of hot metal from mechanical keys clacking away at a typewriter ribbon, the presence of the touchscreen lets keyboard designers play lots of tricks in pursuit of faster typing performance. Since everything is just pixels on a display it’s easy to introduce non-standard key layouts. You can even slide your finger over the keys to shape-write entire words in a single swipe, as pioneered by Per Ola Kristensson and Shumin Zhai (their SHARK keyboard was the predecessor for Swype and related techniques).

While these types of tricks can yield substantial performance advantages, they often demand a substantial investment in skill acquisition from the user before significant gains can be realized. In practice, this limits how many people will stick with a new technique long enough to realize such gains. The Dvorak keyboard offers a classic example of this: the balance of evidence suggests it’s slightly faster than QWERTY, but the high cost of switching to and learning the new layout just isn’t worth it.

In this work, we explored the performance impact of an alternative approach that builds on people’s existing touch-typing skills with the standard QWERTY layout.

And we do this in a manner that is so transparent, most people don’t even realize that anything is different at first glance.

Can you spot the difference?

Snap quiz time

[Figure: our research-prototype touchscreen keyboard]

What’s wrong with this keyboard?  Give it a quick once-over. It looks familiar, with the standard QWERTY layout, but do you notice anything unusual? Anything out of place?

Sure, the keys are arranged in a grid rather than the usual staggered key pattern, but that’s not the “key” difference (so to speak). That’s just an artifact of our quick ‘n’ dirty design of this research-prototype keyboard for touchscreen tablets.

Got it figured out?

All right. Pencils down.

Time to check your score. Give yourself:

  • One point if you noticed that there’s no space bar.
  • Two points if you noticed that there’s no Enter key, either.
  • Three points if the lack of a Backspace key gave you palpitations.
  • Four points and a feather in your cap if you caught the Shift key going AWOL as well.

Now, what if I also told you that removing four essential keys from this keyboard, rather than harming performance, actually helps you type faster?

ONE TRICK TO WOO THEM ALL

All we ask of people coming to our touchscreen keyboard is to learn one new trick. After all, we have to make up for the summary removal of Space, Backspace, Shift, and Enter somehow. We accomplish this by augmenting the graphical touchscreen keyboard with stroke shortcuts, i.e. short straight-line finger swipes, as follows:

  • Swipe right, starting anywhere on the keyboard, to enter a Space.
  • Swipe left to Backspace.
  • Swipe upwards from any key to enter the corresponding shift-symbol. Swiping up on the a key, for example, enters an uppercase A; stroking up on the 1 key enters the ! symbol; and so on.
  • Swipe diagonally down and to the left for Enter.

[Figure: the stroke shortcuts, overlaid on the keyboard under a finger]
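To make the gesture dispatch concrete, here’s a minimal sketch of how a finger stroke might be reduced to its start-to-end displacement and then mapped to a key tap or one of the four shortcuts. The function name, the threshold, and the angle sectors are illustrative assumptions of my own, not code from the paper:

```python
import math

SWIPE_THRESHOLD = 40  # pixels; shorter strokes are treated as ordinary key taps

def classify_stroke(x0, y0, x1, y1):
    """Map a finger stroke to 'tap', 'space', 'backspace', 'shift', or 'enter'."""
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < SWIPE_THRESHOLD:
        return "tap"  # forwarded to the key under the starting point (x0, y0)
    # Angle of the stroke, with 0° = rightward; screen y grows downward, so negate dy.
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    if angle < 45 or angle >= 315:
        return "space"       # swipe right, anywhere on the keyboard
    if 135 <= angle < 225:
        return "backspace"   # swipe left
    if 45 <= angle < 135:
        return "shift"       # swipe up: enter the shift-symbol of the starting key
    return "enter"           # downward strokes, e.g. diagonally down and to the left
```

A real keyboard would of course also forward taps to the key under the starting point, and for the Shift gesture look up the shift-symbol of whichever key the stroke began on.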

DESIGN PROPERTIES OF A STROKE-AUGMENTED GRAPHICAL KEYBOARD

In addition to possible time-motion efficiencies of the stroke shortcuts themselves, the introduction of these four gestures, and the elimination of the corresponding keys made redundant by the gestures, yields a graphical keyboard with a number of interesting properties:

  • Allowing the user to input stroke gestures for Space, Backspace, and Enter anywhere on the keyboard eliminates fine targeting motions as well as any round-trips necessary for a finger to acquire the corresponding keys.
  • Instead of requiring two separate keystrokes—one to tap Shift and another to tap the key to be shifted—the Shift gesture combines these into a single action: the starting point selects a key, while the stroke direction selects the Shift function itself.
  • Removing these four keys frees an entire row on the keyboard.
  • Almost all of the numeric, punctuation, and special symbols typically relegated to the secondary and tertiary graphical keyboards can then fit, in a logical manner, into the freed-up space.
  • Hence, the full set of characters can fit on one keyboard while holding the key size, number of keys, and footprint constant.
  • By having only a primary keyboard, this approach affords an economy of design that simplifies the interface, while offering further potential performance gains via the elimination of keyboard switching costs—and the extra key layouts to learn.
  • Although the strokes might reduce round-trip costs, we expect articulating the stroke gesture itself to take longer than a tap. Thus, we need to test these tradeoffs empirically.

RESULTS AND PRELIMINARY CONCLUSIONS

Our studies demonstrated that overall the removal of four keys—rather than coming at a cost—offers a net benefit.

Specifically, our experiments showed that a stroke keyboard with the gesture-redundant keys removed yielded a 16% performance advantage for input phrases containing mixed-case alphanumeric text and special symbols, without sacrificing error rate. We observed these performance advantages from the first block of trials onward.

Even in the case of entirely lowercase text—that is, in a context where we would not expect to observe a performance benefit because only the Space gesture offers any potential advantage—we found that our new design still performed as well as a standard graphical keyboard. Moreover, people learned the design with remarkable ease: 90% wanted to keep using the method, and 80% believed they typed faster than on their current touchscreen tablet keyboard.

Notably, our studies also revealed that it is necessary to remove the keys to achieve these benefits from the gestural stroke shortcuts. If both the stroke shortcuts and the keys remain in place, user hesitancy about which method to use undermines any potential benefit. Users, of course, also learn to use the gestural shortcuts much more quickly when they offer the only means of achieving a function.

Thus, in this context, less is definitely more in achieving faster performance for touchscreen QWERTY keyboard typing.

The full results are available in the technical paper linked below. The paper contributes a careful study of stroke-augmented keyboards, filling an important gap in the literature as well as demonstrating the efficacy of a specific design; shows that removing the gesture-redundant keys is a critical design choice; and shows that stroke shortcuts can be effective in the context of multi-touch typing with both hands, even though previous studies with single-point stylus input had cast doubt on this approach.

Although our studies focus on the immediate end of the usability spectrum (as opposed to longitudinal studies over many input sessions), we believe the rapid returns demonstrated by our results illustrate the potential of this approach to improve touchscreen keyboard performance immediately, while also serving to complement other text-entry techniques such as shape-writing in the future.

Arif, A. S., Pahud, M., Hinckley, K., and Buxton, B., Experimental Study of Stroke Shortcuts for a Touchscreen Keyboard with Gesture-Redundant Keys Removed. In Proc. Graphics Interface 2014 (GI ’14), Montreal, Quebec, Canada, May 7-9, 2014. Canadian Information Processing Society, Toronto, Ont., Canada. Received the Michael A. J. Sweeney Award for Best Student Paper. [PDF] [Talk Slides (.pptx)] [Video .MP4] [Video .WMV]

Watch A Touchscreen Keyboard with Gesture-Redundant Keys Removed video on YouTube

Paper: Writing Handwritten Messages on a Small Touchscreen

Here’s the final of our three papers at the MobileHCI 2013 conference. This was a particularly fun project, spearheaded by my colleague Wolf Kienzle, looking at a clever way to do handwriting input on a touchscreen using just your finger.

In general I’m a fan of using an actual stylus for handwriting, but in the context of mobile there are many “micro” note-taking tasks, akin to scrawling a note to yourself on a post-it, that wouldn’t justify unsheathing a pen even if your device had one.

The very cool thing about this approach is that it allows you to enter overlapping multi-stroke characters using the whole screen, and without resorting to something like Palm’s old Graffiti writing or full-on handwriting recognition.

[Figure: writing overlapping multi-stroke characters with a finger on a small touchscreen]

The interface also incorporates some nice fluid gestures for entering spaces between words, backspacing to delete previous strokes, or transitioning to a freeform drawing mode for inserting little sketches or smiley-faces into your instant messages, as seen above.

This paper also had the distinction of receiving an Honorable Mention Award for best paper at MobileHCI 2013. We’re glad the review committee liked our paper and saw its contributions as noteworthy, as it were (pun definitely intended).

Kienzle, W., Hinckley, K., Writing Handwritten Messages on a Small Touchscreen. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 179-182. Honorable Mention Award (Awarded to top 5% of all papers). [PDF] [video MP4] [Watch on YouTube – coming soon.]

Lasting Impact Award for “Sensing Techniques for Mobile Interaction”

Last week I received a significant award for some of my early work in mobile sensing.

It was not that long ago, really, that I would get strange glances from practical-minded people – those folks who would look at me with heads tilted downwards ever so slightly, eyebrows raised, and eyeballs askew – when I would mention how I was painting mobile devices with conductive epoxy and duct-taping accelerometers and infrared range-finders to them.

The dot-com bubble was still expanding, smartphones didn’t exist yet, and accelerometers were still far too expensive to reasonably consider on a device’s bill of materials. Many people still regarded the apex of handheld nirvana as the PalmPilot, although its luster was starting to fade.

And this Frankensteinian contraption of sensors, duct tape, and conductive epoxy was taking shape on my laboratory bench-top.

The Idea

I’d been dabbling in the area of sensor-enhanced mobile interaction for about a year, trying one idea here, another idea there, but the project had stubbornly refused to come together. For a long time I felt like it was basically a failure. But every so often my colleagues on the project – Jeff Pierce, Mike Sinclair, and Eric Horvitz – and I would come up with one new example, or another type of idea to try out, and slowly we populated a space of interesting new ways to use the sensors to make mobile devices smarter – or, to be more honest about it, just a little bit less stupid – in how they responded to the physical environment, how the user was handling the device, or the orientation of the screen.

The latter led to the idea of using the accelerometer to automatically re-orient the display based on how the user was holding the device. The accelerometer gave us a constant signal of this-way-up, and at some point we realized it would make a great way to switch between portrait and landscape display formats without any need for buttons or menus, or indeed without even explicitly having to think about the interaction at all. The handheld, by being perceptive about it, could offload the decision from the user– hey, I need to look at this table in landscape— to the background of the interaction, so that the user could simply move the device to the desired orientation, and our sensors and our software would automatically optimize the display accordingly.

There were also some interesting subtleties to it. Just using the raw angle of the display, relative to gravity, was not that satisfactory. We built in some hysteresis so the display wouldn’t chatter back and forth between different orientations. We added special handling when you put the handheld down flat on a desk, or picked it back up, so that the screen wouldn’t accidentally flip to a different orientation because of this brief, incidental motion. We noticed that flipping the screen upside-down, which we initially thought wouldn’t be useful, was an effective way to quickly show the contents of the screen to someone seated across the table from you. And we also added some layers of logic in there so that other uses of the accelerometer could co-exist with automatic screen rotation.
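As a rough illustration of two of those subtleties – hysteresis between orientations, and holding steady when the device lies flat – here is a small Python sketch. The class, thresholds, axis conventions, and orientation names are my own assumptions for exposition, not the original implementation:

```python
import math

FLAT_THRESHOLD = 0.9   # fraction of gravity along z; above this, device is lying flat
HYSTERESIS_DEG = 15    # tilt must cross a sector boundary by this much before switching

# Assumed axis convention: x points right across the screen, y down the screen,
# z out of the screen, so holding the device upright gives roughly (0, 1, 0).
_SECTORS = {"portrait": 0, "landscape-left": 90,
            "portrait-flipped": 180, "landscape-right": -90}

def _angular_error(tilt, center):
    return abs((tilt - center + 180) % 360 - 180)

class OrientationFilter:
    def __init__(self):
        self.current = "portrait"

    def update(self, ax, ay, az):
        """Feed one accelerometer sample (in g); return the display orientation."""
        g = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
        if abs(az) / g > FLAT_THRESHOLD:
            return self.current  # flat on a desk: ignore incidental motion
        tilt = math.degrees(math.atan2(ax, ay))  # angle of gravity in the screen plane
        candidate = min(_SECTORS, key=lambda o: _angular_error(tilt, _SECTORS[o]))
        # Hysteresis: only switch once tilt is well inside the new 90-degree sector.
        if candidate != self.current:
            if _angular_error(tilt, _SECTORS[candidate]) <= 45 - HYSTERESIS_DEG:
                self.current = candidate
        return self.current
```

The dead band keeps the display from chattering when the device hovers near a 45-degree boundary, and the flat check simply freezes the last orientation, much as the post describes for the set-it-down-on-the-desk case.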

Once we had this automatic screen rotation idea working well, I knew we had something. We worked furiously right up to the paper deadline, hammering out additional techniques, working out little kinks and details, figuring out how to convey the terrain we’d explored in the paper we were writing.

The reviewers all loved the paper, and it received a Best Paper Award at the conference. We had submitted it to the Association for Computing Machinery’s annual UIST Symposium – the UIST 2000 13th Annual Symposium on User Interface Software and Technology, held in San Diego, California – because we knew the UIST community was ideally suited to evaluate this research. The paper had a novel combination of sensors. It was a systems paper – that is, it did not just propose a one-off technique but rather a suite of techniques that all used the sensors in a variety of creative ways that complemented one another. And UIST is a rigorously peer-reviewed single-track conference. It’s not the largest conference in the field of Human-Computer Interaction by a long shot – for many years it averaged about two hundred attendees – but as my Ph.D. advisor Randy Pausch (now known for “The Last Lecture”) would often say, “UIST is only 200 people, but it’s the right 200 people.”

This is the video, recorded back in the year 2000, that accompanied the paper. I think it’s stood the test of time pretty well– or at least a lot better than the hair on top of my head :-).

Sensing Techniques for Mobile Interaction on YouTube

The Award

Fast forward ten years, and the vast majority of handhelds and slates being produced today include accelerometers and other micro-electromechanical wonders. The cost of these sensors has dropped to essentially nothing. Increasingly, they’re included as a co-processor right on the die with other modules of mobile microprocessors. The day will soon come when it will be all but impossible to purchase a device without sensors directly integrated into the microscopic Manhattan of its silicon gates.

And our mobile screens all automatically rotate, like it or not 🙂

So, it was with great pleasure last week that I attended the 2011 24th annual ACM UIST Symposium, and received a Lasting Impact Award, presented to me by Stanford professor Dr. Scott Klemmer, for the contributions of our UIST 2000 paper “Sensing Techniques for Mobile Interaction.”

The inscription on the award reads:

Awarded for its scientific exploration of mobile interaction, investigating new interaction techniques for handheld mobile devices supported by hardware sensors, and laying the groundwork for new research and industrial applications.

UIST 2011 Lasting Impact Award

In the Meantime…

I remember demonstrating my prototype on-stage with Bill Gates at a media event here in Redmond, Washington in 2001. Gates spoke about the importance of sustaining spending on R&D – in both the public and private sectors – and he used my demo as an example of some up-and-coming research, but what I most strongly recall is lingering in the green room backstage with him and some other folks. It wasn’t the first time that I’d met Gates, but it was the first occasion where I chit-chatted with him a bit in a casual, unstructured context. I don’t remember what we talked about, but I do remember his foot twitching, always in motion, driving the pedal of a vast invisible loom, weaving a sweeping landscape surmounted by the towering summits of his electronic dreams.

I remember my palms sweating, nervous about the demo, hoping that the sensors I’d duct-taped to my transmogrified Cassiopeia E-105 Pocket PC wouldn’t break off or drain the battery or go crazy with some unforeseen nuance of the stage lighting (yes, infrared proximity sensors most definitely have stage fright).

And then less than a week later came the 9/11 attacks. Suddenly spiffy little sensors for mobile devices didn’t seem so important any more. Many product groups, including Windows Mobile at the time, got excited about my demonstration, but then the realities of a thousand other crushing demands and priorities rained down on the fragile bubble of technological wonderland I’d been able to cobble together with my prototype. The years stretched by and sensors still hadn’t become mainstream, as I had expected them to.

Then some laptops started shipping with accelerometers to automatically park the hard-disk when you dropped the laptop. I remember seeing digital cameras that would sense the orientation you snapped a picture in, so that you could view it properly when you downloaded it. And when the iPhone shipped in 2007, one of the coolest features on it was the embedded accelerometer, which enabled automatic screen rotation and tilt-based games.

A View to the Future

It took about five years longer than I expected, but we have finally reached an age where clever uses of sensors– both for obvious things like games, as well as for subtle and not-so-obvious things like counting footfalls while you are walking around with the device– abound.

And my take on all this?

We ain’t seen nothin’ yet.

Since my initial paper on sensing techniques for mobile interaction, every couple of years another idea has struck me. How about answering your phone, or cuing a voice-recognition mode, just by holding your phone to your ear? How about bumping devices together as a way to connect them? What of dual-screen devices that can sense the posture of the screens, and thereby support a breadth of automatically sensed functions? What about new types of motion gestures that combine multi-touch interaction with the physical gestures, or vibratory signals, afforded by these sensors?

And I’m sure there are many more. My children will never know a world where their devices are not sensitive to motion and proximity, to orientation and elevation and all the headings of the compass.

The problem is, the future is not so obvious until you’ve struck upon the right idea, until you’ve found the one gold nugget in acres and acres of tailings from the mine of your technological ambitions.

A final word of advice: if your aim is to find these nuggets– whether in research or in creative endeavors– what you need to do is dig as fast as you possibly can. Burrow deeper. Dig side-tunnels where no-one has gone before. Risk collapse and explosion and yes, worst of all, complete failure and ignominious rejection of your diligently crafted masterpieces.

Above all else, fail faster.

Because sometimes those “failed” projects turn out to be the most rewarding of all.

***

This project would not have been possible without standing on the shoulders of many giants. Of course, there are my colleagues on the project– Jeff Pierce, who worked with me as a Microsoft Research Graduate Fellowship recipient at the time, and did most of the heavy lifting on the software infrastructure and contributed many of the ideas and nuances of the resulting techniques. Mike Sinclair, who first got me thinking about accelerometers and spent many, many hours helping me cobble together the sensing hardware. And Eric Horvitz, who helped to shape the broad strokes of the project and who was always an energetic sounding board for ideas.

With the passing of time that an award like this entails, one also reflects on how life has changed, and the people who are no longer there. I think of my advisor Randy Pausch, who in many ways has made my entire career possible, and his epic struggle with pancreatic cancer. I think of my first wife, Kerrie Exely, who died in 1997, and of her father, Bill, who also was claimed by cancer a couple of years ago.

Then there are the many scientists whose work I built upon in our exploration of sensing systems. Beverly Harrison’s explorations of embodied interactions. Albrecht Schmidt’s work on context sensing for mobile phones. Jun Rekimoto’s exploration of tilting user interfaces. Bill Buxton’s insights into background sensing. And many others cited in the original paper.

Award: Lasting Impact Award

Lasting Impact Award, for Sensing Techniques for Mobile Interaction, UIST 2000. “Awarded for its scientific exploration of mobile interaction, investigating new interaction techniques for handheld mobile devices supported by hardware sensors, and laying the groundwork for new research and industrial applications.” Awarded to Ken Hinckley, Jeff Pierce, Mike Sinclair, and Eric Horvitz at the 24th ACM UIST, October 18, 2011 (Sponsored by the ACM, SIGCHI, and SIGGRAPH). Check out the original paper or watch the video appended below.

UIST 2011 Lasting Impact Award for "Sensing techniques for mobile interaction"

Sensing Techniques for Mobile Interaction on YouTube

Paper: Sensor Synaesthesia: Touch in Motion, and Motion in Touch

Hinckley, K., and Song, H., Sensor Synaesthesia: Touch in Motion, and Motion in Touch. In Proc. CHI 2011 Conf. on Human Factors in Computing Systems. CHI 2011 Honorable Mention Award. [PDF] [video .WMV].

Watch Sensor Synaesthesia video on YouTube