Category Archives: Published Papers

Paper: Pre-Touch Sensing for Mobile Interaction

I have to admit it: I feel as if I’m looking at the sunrise of what may be a whole new way of interacting with mobile devices.

When I think about it, the possibilities bathe my eyes in a golden glow, and the warmth drums against my skin.

And in particular, my latest research peers out across this vivid horizon, to where I see touch — and mobile interaction with touchscreens in particular — evolving in the near future.

As a seasoned researcher, my job (which in reality is some strange admixture of interaction designer, innovator, and futurist) is not necessarily to predict the future, but rather to invent it via extrapolation from a sort of visionary present which occupies my waking dreams.

I see things not as they are, but as they could be, through the lens afforded by a (usually optimistic) extrapolation from extant technologies, or those I know are likely to soon become more widely available.

With regard to interaction with touchscreens in particular, it has been clear to me for some time that the ability to sense the fingers as they approach the device — well before contact with the screen itself — is destined to become commonplace on commodity devices.

This is interesting for a number of reasons.

And no, the ability to do goofy gestures above the screen, waving at it frantically (as if it were a fancy-pants towel dispenser in a public restroom) in some dim hope of receiving an affirmative response, is not one of them.

In terms of human capabilities, one obviously cannot touch the screen of a mobile device without approaching it first.

But what often goes unrecognized is that one also must hold the device, typically in the non-preferred hand, as a precursor to touch. Hence, how you hold the device — the pattern of your grip and which hand you hold it in — is an additional detail of context that is more-or-less wholly ignored by current mobile devices.

So in this new work, my colleagues and I collectively refer to these two precursors of touch — approach and the need to grip the device — as pre-touch.

And it is my staunch belief that the ability to sense such pre-touch information could radically transform the mobile ‘touch’ interfaces that we all have come to take for granted.

You can get a sense of these possibilities, all implemented on a fully functional mobile phone with pre-touch sensing capability, in our demo reel below:

The project received a lot of attention, and coverage from many of the major tech blogs and other media outlets, for example:

  • The Verge (“Microsoft’s hover gestures for Windows phones are magnificent”)
  • SlashGear (“Smartphones next big thing: ‘Pre-Touch’”)
  • Business Insider (“Apple should definitely copy Microsoft’s incredible finger-sensing smartphone technology”)
  • And Fast Company Design (which featured it again in “8 Incredible Prototypes That Show The Future Of Human-Computer Interaction”)

But I rather liked the take that Silicon Angle offered, which quoted my concluding statement from the video above:

Taken as a whole, our exploration of pre-touch hints that the evolution of mobile touch may still be in its infancy – with many possibilities, unbounded by the flatland of the touchscreen, yet to explore.

And then responded as follows:

This is the moon-landing-esque conclusion Microsoft comes to after demonstrating its rather cool pre-touch mobile technology, i.e., a mobile phone that senses what your fingers are about to do.

While this evolution of touch has been coming in the research literature for at least a decade now, what exactly to do with above- and around-screen sensing (especially in a mobile setting) has been far from obvious. And that’s where I think our work on pre-touch sensing techniques for mobile interaction distinguishes itself, and in so doing identifies some very interesting use cases that have never been realized before.

The very best of these new techniques possess a quality that I love, namely that they have a certain surprising obviousness to them:

The techniques seem obvious — but only in retrospect.

And only after you’ve been surprised by the new idea or insight that lurks behind them.

If such an effort is indeed the first hint of a moonshot for touch, well, that’s a legacy for this project that I can live with.


UPDATE: The talk I gave at the CHI 2016 conference on this project is now available. Have a gander if you are so inclined.


 

Thumb sensed as it hovers over pre-touch mobile phone

Ken Hinckley, Seongkook Heo, Michel Pahud, Christian Holz, Hrvoje Benko, Abigail Sellen, Richard Banks, Kenton O’Hara, Gavin Smyth, William Buxton. 2016. Pre-Touch Sensing for Mobile Interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, pp. 2869-2881. San Jose, CA, May 7-12, 2016. http://dx.doi.org/10.1145/2858036.2858095

[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]

Watch Pre-Touch Sensing for Mobile Interaction video on YouTube

 

Editorial: Welcome to a New Era for TOCHI

Wherein I tell the true story of how I became an Editor-in-Chief.


Constant change is a given in the world of high technology.

But still it can come as a rude awakening when it arrives in human terms, and we find that it also applies to our friends, our colleagues, and the people we care for.

Not to mention ourselves!

So it was that I found myself, with a tumbler full of fresh coffee steaming between my hands, looking in disbelief at an email nominating me to assume the editorial helm of the leading journal in my field, the ACM Transactions on Computer-Human Interaction (otherwise known as TOCHI).

Ultimately (through no fault of my own) the ACM Publications Board was apparently seized by an episode of temporary madness and, deeming my formal application to have the necessary qualifications (with a dozen years of TOCHI associate editorship under my belt, a membership in the CHI Academy for recognized leaders in the field, and a Lasting Impact Award for my early work on mobile sensing—not to mention hundreds of paper rejections that apparently did no lasting damage to my reputation), they forthwith approved me to take over as Editor-in-Chief from my friend and long-time colleague, Shumin Zhai.

I’ve known Shumin since 1994, way back when I delivered my very first talk at CHI in the same session in which he presented his latest results on “the silk cursor.” I took an instant liking to him, but I only came to fully appreciate over the years that followed that Shumin’s work ethic is legendary. As my colleague Bill Buxton (who sat on Shumin’s thesis committee) once put it, “Shumin works harder than any two persons I have ever known.”

And of course that applied to Shumin’s work ethic with TOCHI as well.

A man who now leaves an astoundingly large pair of shoes for me to fill.

To say that I respect Shumin enormously, and the incredible progress he brought to the operation and profile of the journal during his six-year tenure, would be a vast understatement.

But after I got over the sheer terror of taking on such an important role, I began to get excited.

And then I got ideas.

Lots of ideas.

A few of them might even be good ones:

Ways to advance the journal.

Ways to keep operating at peak efficiency in the face of an ever-expanding stream of submissions.

And most importantly, ways to deliver even more impact to our readers, and on behalf of our authors.

Those same authors whose contributions make it possible for us to proclaim:

TOCHI is the flagship journal of the Computer-Human Interaction community.

So in this, my introductory editorial as the head honcho, new sheriff in town, and supreme benevolent dictator otherwise known as the Editor-in-Chief, I would like to talk about how the transition is going, give a few updates on TOCHI’s standard operating procedure, and—with an eye towards growing the impact of the journal—announce the first of what I hope will be many exciting new initiatives.

And in case it is not already obvious, I intend to have some fun with this.

All while preserving the absolutely rigorous and top-notch reputation of the journal, and the constant push for excellence in all of the papers that we publish.

[Read the rest at: http://dx.doi.org/10.1145/2882897]


Be sure to also check out The Editor’s Spotlight, highlighting the many strong contributions in this issue. This, along with the full text of my introductory editorial, is available without an ACM Digital Library subscription via the links below.


Ken Hinckley. 2016. Editorial: Welcome to a New Era for TOCHI. ACM Trans. Comput.-Hum. Interact. 23, 1: Article 1e (February 2016), 6 pages. http://dx.doi.org/10.1145/2882897

Ken Hinckley. 2016. The Editor’s Spotlight: TOCHI Issue 23:1. ACM Trans. Comput.-Hum. Interact. 23, 1: Article 1 (February 2016), 4 pages. http://dx.doi.org/10.1145/2882899

Paper: Sensing Tablet Grasp + Micro-mobility for Active Reading

Lately I have been thinking about touch:

In the tablet-computer sense of the word.

To most people, this means the touchscreen. The intentional pokes and swipes and pinching gestures we would use to interact with a display.

But not to me.

Touch goes far beyond that.

Look at people’s natural behavior. When they refer to a book, or pass a document to a collaborator, there are two interesting behaviors that characterize the activity.

What I call the seen but unnoticed:

Simple habits and social cues, there all the time, but which fall below our conscious attention — if they are even noticed at all.

By way of example, let’s say we’re observing someone handle a magazine.

First, the person has to grasp the magazine. Seems obvious, but easy to overlook — and perhaps vital to understand. Although grasp typically doesn’t involve contact of the fingers with the touchscreen, this is a form of ‘touch’ nonetheless, even if it is one that traditionally hasn’t been sensed by computers.

Grasp reveals a lot about the intended use, whether the person might be preparing to pick up the magazine or pass it off, or perhaps settling down for a deep and immersive engagement with the material.

Second, as an inevitable consequence of grasping the magazine, it must move. Again, at first blush this seems obvious. But these movements may be overt, or they may be quite subtle. And to a keen eye — or an astute sensing system — they are a natural consequence of grasp, and indeed are what give grasp its meaning.

In this way, sensing grasp informs the detection of movements.

And, coming full circle, the movements thus detected enrich what we can glean from grasp as well.

Yet, this interplay of grasp and movement has rarely been recognized, much less actively sensed and used to enrich and inform interaction with tablet computers.

And this feeds back into a larger point that I have often found myself trying to make lately, namely that touch is about far more than interaction with the touch-screen alone.

If we want to really understand touch (as well as its future as a technology) then we need to deeply understand these other modalities — grasp and movement, and perhaps many more — and thereby draw out the full naturalness and expressivity of interaction with tablets (and mobile phones, and e-readers, and wearables, and many dreamed-of form-factors perhaps yet to come).

My latest publication looks into all of these questions, particularly as they pertain to reading electronic documents on tablets.

We constructed a tablet (albeit a green metallic beast of one at present) that can detect natural grips along its edges and on the entire back surface of the device. And with a full complement of inertial motion sensors, as well. This image shows the grip-sensing (back) side of our technological monstrosity:

Grip Sensing Tablet Hardware

But this set-up allowed us to explore ways of combining grip and subtle motion (what has sometimes been termed micro-mobility in the literature), resulting in the following techniques (among a number of others):

A Single User Engaging with a Single Device

Some of these techniques address the experience of an individual engaging with their own reading material.

For example, you can hold a bookmark with your thumb (much as you can keep your finger on a page in a physical book) and then tip the device. This flips back to the page that you’re holding:

Tip-to-Flip-x715

This ‘Tip-to-Flip’ technique involves both the grip and the movement of the device, and results in a fairly natural interaction that builds on a familiar habit from everyday experience with physical documents.
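
To make the idea concrete, here is a minimal sketch of how such a gesture might be recognized. It is not the implementation from the paper; the thumb-bookmark signal, the pitch-rate reading, and the threshold are all hypothetical stand-ins for what the real grip and inertial sensors report.

    # A minimal sketch of a Tip-to-Flip recognizer. The thumb_on_bezel flag
    # (from grip sensing), the pitch_rate reading (deg/s, from the gyro), and
    # the threshold are illustrative assumptions, not the prototype's values.

    TIP_RATE_THRESHOLD = 120.0  # deg/s of rotation about the tablet's long axis

    def tip_to_flip(thumb_on_bezel: bool, pitch_rate: float,
                    current_page: int, bookmarked_page: int) -> int:
        """Return the page to display after this sensor frame."""
        # The gesture fires only while the thumb 'bookmark' grip is held
        # and a sufficiently crisp tipping motion is detected.
        if thumb_on_bezel and abs(pitch_rate) > TIP_RATE_THRESHOLD:
            return bookmarked_page
        return current_page

    print(tip_to_flip(True, 150.0, current_page=42, bookmarked_page=7))  # -> 7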

Another one we experimented with was a very subtle interaction that mimics holding a document and angling it up to inspect it more closely. When we sense this, the tablet zooms in slightly on the page, while removing all peripheral distractions such as menu-bars and icons:

Immersive Reading mode through grip sensing

This immerses the reader in the content, rather than in the iconographic gewgaws which typically border the screen of an application as if to announce, “This is a computer!”
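
For the curious, here is a rough sketch of how such a trigger might be gated on grip and tilt together; the two-handed-grip flag, the tilt estimate, and the thresholds are assumptions for illustration rather than the study’s actual logic.

    # A rough sketch of the "angle up to inspect" trigger, with hysteresis so
    # the view does not flicker near the threshold. All inputs and thresholds
    # here are hypothetical, not the values used in the prototype.

    def immersive_reading(both_edges_gripped: bool, tilt_toward_user_deg: float,
                          currently_immersed: bool) -> bool:
        """Decide whether to show the chrome-free, slightly zoomed reading view."""
        if both_edges_gripped and tilt_toward_user_deg > 30.0:
            return True                    # angled up toward the reader: immerse
        if tilt_toward_user_deg < 15.0:
            return False                   # relaxed back down: restore menus/icons
        return currently_immersed          # in between: keep the current state

    print(immersive_reading(True, 40.0, False))  # True  -> zoom in, hide chrome
    print(immersive_reading(True, 10.0, True))   # False -> back to the normal view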

Multiple Users Collaborating around a Single Device

Another set of techniques we explored looked at how people pass devices to one another.

In everyday experience, passing a paper document to a collaborator is a very natural — and different — form of “sharing,” as compared to the oft-frustrating electronic equivalents we have at our disposal.

Likewise, computers should be able to sense and recognize such gestures in the real world, and use them to bring some of the socially and situationally appropriate sharing that they afford into the world of electronic documents.

We explored one such technique that automatically sets up a guest profile when you hand a tablet (displaying a specific document) to another user:

Face-to-Face-Handoff-x715

The other user can then read and mark-up that document, but he is not the beneficiary of a permanent electronic copy of it (as would be the case if you emailed him an attachment), nor is he permitted to navigate to other areas or look at other files on your tablet.

You’ve physically passed him the electronic document, and all he can do is look at it and mark it up with a pen.

Not unlike the semantics — long absent and sorely missed in computing — of a simple piece of paper.
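
As a rough illustration of the underlying idea (not the study system itself), a sensed hand-off could gate a restricted guest session along these lines; the event hook and the permission names are hypothetical.

    # A sketch of gating a guest session on a sensed face-to-face hand-off.
    # The on_handoff_detected hook and the permission names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class GuestSession:
        document_id: str
        permissions: set = field(default_factory=lambda: {"view", "pen_annotate"})

        def can(self, action: str) -> bool:
            return action in self.permissions

    def on_handoff_detected(document_id: str) -> GuestSession:
        """Called when grip + motion sensing indicates the tablet was passed
        to another person while displaying document_id."""
        return GuestSession(document_id)

    session = on_handoff_detected("meeting-notes.pdf")
    print(session.can("pen_annotate"))  # True:  mark it up with the pen
    print(session.can("save_copy"))     # False: no permanent electronic copy
    print(session.can("browse_files"))  # False: no wandering off to other files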

A Single User Working With Multiple Devices

A final area we looked at considers what happens when people work across multiple tablets.

We already live in a world where people own and use multiple devices, often side-by-side, yet our devices typically have little or no awareness of one another.

But contrast this to the messy state of people’s physical desks, with documents strewn all over. People often place documents side-by-side as a lightweight and informal way of organization, and might dexterously pick one up or hold it at the ready for quick reference when engaged in an intellectually demanding task.

Again, missing from the world of the tablet computer.

But by sensing which tablets you hold, or pick up, our system allows people to quickly refer to and cross-reference content across federations of such devices.

While the “Internet of Things” may be all the rage these days among the avant-garde of computing, such federations remain uncommon and in our view represent the future of a ‘Society of Devices’ that can recognize and interact with one another, all while respecting social mores, not the least of which are the subtle “seen but unnoticed” social cues afforded by grasping, moving, and orienting our devices.

Fine-Grained-Reference-x715
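
Here is a toy sketch of that cross-referencing behavior, purely to illustrate the flow; the pickup event, the Tablet class, and the notion of an “active reference” are assumptions rather than the system’s actual architecture.

    # A toy sketch of cross-referencing across a small federation of tablets:
    # when a second tablet is picked up while another is already held, the
    # newly raised device shows the content currently being referenced.
    # The Tablet class and on_pickup event are hypothetical.

    class Tablet:
        def __init__(self, name: str):
            self.name = name
            self.held = False
            self.showing = None

        def show(self, content: str):
            self.showing = content
            print(f"{self.name}: showing {content}")

    def on_pickup(picked_up: Tablet, federation: list, active_reference: str):
        """Called when a tablet's grip sensors report that it was picked up."""
        picked_up.held = True
        others_held = [t for t in federation if t.held and t is not picked_up]
        if others_held and active_reference:
            picked_up.show(active_reference)  # use it as a quick-reference surface

    a, b = Tablet("tablet-A"), Tablet("tablet-B")
    a.held = True
    on_pickup(b, [a, b], active_reference="Figure 3 of draft.pdf")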

Closing Thoughts:

An Expanded Perspective of ‘Touch’

The examples above represent just a few simple steps. Much more can, and should, be done to fully explore and vet these directions.

But by viewing touch as far more than simple contact of the fingers with a grubby touchscreen — and expanding our view to consider grasp, movement of the device, and perhaps other qualities of the interaction that could be sensed in the future as well — our work hints at a far wider perspective.

A perspective teeming with the possibilities that would be raised by a society of mobile appliances with rich sensing capabilities, potentially leading us to far more natural, more expressive, and more creative ways of engaging in the knowledge work of the future.

 


 

Dongwook Yoon, Ken Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, Michel Pahud, and Marcel Gavriliu. 2015. Sensing Tablet Grasp + Micro-mobility for Active Reading. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). ACM, New York, NY, USA, pp. 477-487. Charlotte, NC, Nov. 8-11, 2015. http://dx.doi.org/10.1145/2807442.2807510
[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]

Watch Sensing Tablet Grasp + Micro-mobility for Active Reading video on YouTube

Paper: Sensing Techniques for Tablet+Stylus Interaction (Best Paper Award)

It’s been a busy year, so I’ve been more than a little remiss in posting my Best Paper Award recipient from last year’s User Interface Software & Technology (UIST) symposium.

UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.

And software that makes clever uses thereof.

Title slide - sensing techniques for stylus + tablet interaction

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.

The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Stylus prototype augmented with sensors

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).

And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Sensor components inside the pen

Components of the sensor pen, including inertial sensors, an AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.

But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.

It’s all about the hand.

With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind.

Or shall I say hands. How people coordinate their action. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense, how we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.

I’ve constantly been staring at hands for over 20 years.

And yet I’m still constantly surprised.

People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.

Nobody even notices them until some acute observer takes the trouble to point them out.

For example:

Take a behavior as simple as holding a pen in your hand.

You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Tuck and Palm Grips for temporarily stowing a pen

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.

This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less actively sensed by the tablets and pens that we use — or even leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.

If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

Core Pen Grips

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.

Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips, based on placing one (or more) fingers on the screen while holding the pen:

Single Finger Extension Grips for Touch Gestures with Pen-in-hand

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.

People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Multiple Finger Extension Grips for Touch Gestures with Pen-in-hand

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.

So, it began to dawn on us that there was all this untapped richness in terms of how people hold, manipulate, write on, and extend fingers when using pen and touch on tablets.

And that sensing this could enable some very interesting new possibilities for the user interfaces for stylus + tablet computing.

This is where our custom hardware came in.

On our pen, for example, we can sense subtle motions — using full 3D inertial sensors including accelerometer, gyroscope, and magnetometer — as well as sense how the user grips the pen — this time using a flexible capacitive substrate wrapped around the entire barrel of the pen.

These capabilities then give rise to sensor signals such as the following:

Grip and motion sensors on the stylus
Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.

This makes various pen grips and motions stand out quite distinctly as states that we can identify using some simple gesture recognition techniques.
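
To give a flavor of what “simple gesture recognition techniques” can mean in this setting, here is a minimal nearest-centroid classifier over the flattened capacitance image; the grid size, the training data, and the classifier itself are illustrative assumptions, not the recognizer described in the paper.

    import numpy as np

    # A minimal nearest-centroid grip classifier over the pen's capacitive
    # image. Grid size, data, and method are illustrative assumptions only.

    GRIPS = ["writing", "tuck", "palm"]

    def train_centroids(samples: dict) -> dict:
        """samples maps grip name -> (n_examples, rows*cols) capacitance images."""
        return {g: x.mean(axis=0) for g, x in samples.items()}

    def classify_grip(cap_image: np.ndarray, centroids: dict) -> str:
        """Assign the flattened capacitance image to the nearest grip centroid."""
        flat = cap_image.ravel()
        return min(centroids, key=lambda g: np.linalg.norm(flat - centroids[g]))

    # Example with synthetic data: a 10 x 30 grid wrapped around the barrel.
    rng = np.random.default_rng(0)
    fake = {g: rng.random((20, 300)) + i for i, g in enumerate(GRIPS)}
    centroids = train_centroids(fake)
    print(classify_grip(fake["tuck"][0].reshape(10, 30), centroids))  # -> "tuck"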

Armed with these capabilities, we explored presenting a number of context-appropriate tools.

As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense it’s your palm, we can go a long way towards solving these problems with pen-and-touch interaction.
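
A sketch of how that might look in code, assuming a grip label from a classifier like the one above and a contact-area estimate from the touchscreen (both assumptions on my part):

    # A sketch of using the sensed pen grip to gate touch input. The grip label
    # and the contact-area threshold are illustrative assumptions.

    PALM_AREA_THRESHOLD = 400.0  # mm^2: assumed cutoff for palm-sized contacts

    def accept_touch(grip: str, contact_area_mm2: float) -> bool:
        """Reject large contacts while the pen is held in a writing grip, since
        those are almost certainly the planted palm rather than a fingertip."""
        if grip == "writing" and contact_area_mm2 > PALM_AREA_THRESHOLD:
            return False  # ignore: no stray gestures, no "ink turds"
        return True

    print(accept_touch("writing", 900.0))  # False -> palm, ignored
    print(accept_touch("tuck", 80.0))      # True  -> fingertip, handled normally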

To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:

Pen tools appear

Tools specific to the pen appear when the user taps on the screen with the pen stowed in hand.

But we can take this even further, such as to distinguish bare-handed touches — to support the standard panning and zooming behaviors — versus a pinch articulated with the pen-in-hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen:

Pen Grip + Motion example: Full canvas zoom vs. Magnifier tool

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.
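
In code, the dispatch boils down to something like the sketch below. Attributing the pinch to the pen-holding hand is itself a sensing problem, so here it simply arrives as a boolean, and the Viewport and Magnifier stubs are hypothetical stand-ins for the real UI objects.

    # A sketch of routing a pinch differently depending on whether the pen is
    # stowed in the gesturing hand. Viewport and Magnifier are hypothetical.

    class Viewport:
        def zoom_by(self, factor: float):
            print(f"full-canvas zoom by {factor:.2f}x")

    class Magnifier:
        def show(self, zoom: float):
            print(f"magnifier at {zoom:.2f}x for fine pen work")

    def on_pinch(pen_in_gesturing_hand: bool, scale_delta: float,
                 viewport: Viewport, magnifier: Magnifier):
        if pen_in_gesturing_hand:
            magnifier.show(zoom=1.0 + scale_delta)  # pen-in-hand pinch
        else:
            viewport.zoom_by(1.0 + scale_delta)     # bare-handed pinch

    on_pinch(False, 0.25, Viewport(), Magnifier())  # pans/zooms the canvas
    on_pinch(True, 0.25, Viewport(), Magnifier())   # brings up the magnifier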

Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrush tool using the sensors

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
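
The geometry behind that conic section is easy to state: a spray cone from the pen tip cuts the drawing surface in an ellipse whose shape depends on the pen’s tilt. The sketch below computes the footprint’s semi-axes from that geometry alone; the cone half-angle and tip height are made-up inputs, and the paper’s actual rendering pipeline may work differently.

    import math

    # Semi-axes of the elliptical spray footprint for a right circular cone of
    # the given half-angle, with its apex at the pen tip. Pure geometry; the
    # height and half-angle values below are illustrative.

    def spray_ellipse(height: float, tilt_deg: float, half_angle_deg: float):
        """Return (semi_major, semi_minor) of the footprint for a pen tip at
        the given height, tilted tilt_deg from the surface normal, spraying a
        cone with the given half-angle (requires tilt + half-angle < 90)."""
        t, a = math.radians(tilt_deg), math.radians(half_angle_deg)
        semi_major = height * (math.tan(t + a) - math.tan(t - a)) / 2.0
        semi_minor = height * math.sin(a) / math.sqrt(
            math.cos(a) ** 2 - math.sin(t) ** 2)
        return semi_major, semi_minor

    print(spray_ellipse(30.0, 0.0, 15.0))   # pen upright: circular footprint
    print(spray_ellipse(30.0, 45.0, 15.0))  # pen tilted:  elongated ellipse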

At any rate, it was a really fun project that garnered a best paper award, and a fair bit of press coverage (Gizmodo, Engadget, and FastCo Design, which named it the #2 User Interface innovation of 2014, among other outlets). It’s pretty hard to top that.

Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.

Hmmm…

You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).


Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., Wilson, A., Sensing Techniques for Tablet+Stylus Interaction. In the 27th ACM Symposium on User Interface Software and Technology (UIST ’14), Honolulu, Hawaii, Oct 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379

Watch Context Sensing Techniques for Tablet+Stylus Interaction video on YouTube

Commentary: On Excellence in Reviews, Thoughts for the HCI Community

Peer review — and particularly the oft-sorry state it seems to sink to — is a frequent topic of conversation at the water-coolers and espresso machines of scientific institutions the world over.

Of course, every researcher freshly wounded by a rejection has strong opinions about reviews and reviewers.  These are often of the sort that are spectacularly unfit to print, but they are widely held nonetheless.

Yet these same wounded researchers typically serve as reviewers themselves, and write reviews which other authors receive.

And I can assure you that “other authors” all too frequently regard the remarks contained in the reviews of these erstwhile wounded researchers with the same low esteem.

So if we play out this vicious cycle to its logical conclusion, in a dystopian view peer review boils down to the following:

  • We trash one another’s work.
  • Everything gets rejected.
  • And we all decide to pack up our toys and go home.

That’s not much of a recipe for scientific progress.

But what fuels this vicious cycle and what can be done about it?

As reviewers, how can we produce Excellent Reviews that begin to unwind this dispiriting scientific discourse?

As authors, how should we interpret the comments of referees, or (ideally) write papers that will be better received in the first place?

When I pulled together the program committee for the annual MobileHCI conference last year, I found myself pondering all of these issues, and really wondering what we could do to put the conference’s review process on a positive footing.

And particularly because MobileHCI is a smaller venue, with many of the program committee members still relatively early in their research careers, I really wanted to get them started with the advice that I wished someone had given me when I first started writing and reviewing scientific papers in graduate school.

So I penned an essay that surfaces all of these issues. It describes some of the factors that lead to this vicious cycle in reviews. It makes some very specific recommendations about what an excellent review is, and how to produce one. And if you read it as an author (perhaps smarting from a recent rejection) who wants to better understand where the heck these reviews come from anyway, and as a by-product actually write better papers, then reading between the lines will give you some ideas of how to go about that as well.

And I was pleased, if also more than a bit surprised, to see that my little rant essay was well-received by the research community.

And I received many private responses with a similar tenor.

So if you care at all about these issues I hope that you will take a look at what I had to say. And circle back here to leave comments or questions, if you like.

There’s also a companion presentation [Talk PPTX] [Talk PDF], which I used with the MobileHCI program committee to instill a positive and open-minded attitude as we embarked on our deliberations. I’ve included that here as well in the hope that it might be of some use to others hoping to gain a little insight into what goes on in such meetings, and how to run them.


Hinckley, K., So You’re a Program Committee Member Now: On Excellence in Reviews and Meta-Reviews and Championing Submitted Work That Has Merit. Published as “The MobileHCI Philosophy” on the MobileHCI 2015 Web Site, Feb 10th, 2015. [Official MobileHCI Repository PDF] [Author’s Mirror Site PDF] [Talk PPTX] [Talk PDF]

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]

Symposium Abstract: Issues in bimanual coordination: The props-based interface for neurosurgical visualization

I have a small backlog of updates and new posts to clear out, which I’ll be undertaking in the next few days.

The first of these is the following small abstract that actually dates from way back in 1996, shortly before I graduated with my Ph.D. in Computer Science from the University of Virginia.

It was a really fun symposium organized by the esteemed Yves Guiard, famous for his kinematic chain model of human bimanual action, that included me and Bill Buxton, among others. For me this was a small but timely recognition that came early in my career and made it possible for me to take the stage alongside two of my biggest research heroes.

Hinckley, K., 140.3: Issues in bimanual coordination: The props-based interface for neurosurgical visualization. Appeared in Symposium 140: Human bimanual specialization: New perspectives on basic research and application, convened by Yves Guiard, Montréal, Quebec, Canada, Aug. 17, 1996. Abstract published in International Journal of Psychology, Volume 31, Issue 3-4, Special Issue: Abstracts of the XXVI INTERNATIONAL CONGRESS OF PSYCHOLOGY, 1996. [PDF – Symposium 140 Abstracts]

Abstract

I will describe a three-dimensional human-computer interface for neurosurgical visualization based on the bimanual manipulation of real-world tools. The user’s nonpreferred hand holds a miniature head that can be “sliced open” or “pointed to” using a cross-sectioning plane or a stylus held in the preferred hand. The nonpreferred hand acts as a dynamic frame-of-reference relative to which the preferred hand articulates its motion. I will also discuss experiments that investigate the role of bimanual action in virtual manipulation and in the design of human-computer interfaces in general.
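
The computational heart of that frame-of-reference idea is a single relative transform: the tool held in the preferred hand is interpreted in the coordinate frame of the prop held in the nonpreferred hand. Here is a minimal sketch, with hypothetical 4x4 tracker poses standing in for the real six-degree-of-freedom trackers.

    import numpy as np

    # Express the preferred-hand tool's pose in the coordinate frame of the
    # head prop held in the nonpreferred hand. The 4x4 world-frame poses are
    # hypothetical tracker outputs.

    def relative_pose(head_in_world: np.ndarray, tool_in_world: np.ndarray) -> np.ndarray:
        """Pose of the tool expressed in the head prop's coordinate frame."""
        return np.linalg.inv(head_in_world) @ tool_in_world

    def translation(tx: float, ty: float, tz: float) -> np.ndarray:
        m = np.eye(4)
        m[:3, 3] = [tx, ty, tz]
        return m

    head = translation(0.10, 0.00, 0.30)  # head prop: 10 cm right, 30 cm up
    tool = translation(0.15, 0.00, 0.30)  # stylus: 5 cm to the head's right

    # Moving both hands together leaves this result unchanged, which is what
    # makes the nonpreferred hand a dynamic frame of reference.
    print(relative_pose(head, tool)[:3, 3])  # -> [0.05 0.   0.  ]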