Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]

Invited Talk: WIPTTE 2015 Presentation of Sensing Techniques for Tablets, Pen, and Touch

The organizers of WIPTTE 2015, the Workshop on the Impact of Pen and Touch Technology on Education, kindly invited me to speak about my recent work on sensing techniques for stylus + tablet interaction.

One of the key points that I emphasized:

To design technology to fully take advantage of human skills, it is critical to observe what people do with their hands when they are engaged in manual activities such as handwriting.

Notice my deliberate use of the plural, hands, as in both of ’em, in a division of labor that is a perfect example of cooperative bimanual action.

The power of crayon and touch.

My six-year-old daughter demonstrates the power of crayon and touch technology.

And of course I had my usual array of stupid sensor tricks to illustrate the many ways that sensing systems of the future embedded in tablets and pens could take advantage of such observations. Some of these possible uses for sensors probably seem fanciful, in this antiquated era of circa 2015.

But in eerily similar fashion, some of the earliest work that I did on sensors embedded in handheld devices also felt completely out-of-step with the times when I published it back in the year 2000. A time so backwards it already belongs to the last millennium for goodness sakes!

Now aspects of that work are embedded in practically every mobile device on the planet.

It was a fun talk, with an engaged audience of educators who are eager to see pen and tablet technology advance to better serve the educational needs of students all over the world. I have three kids of school age now so this stuff matters to me. And I love speaking to this audience because they always get so excited to see the pen and touch interaction concepts I have explored over the years, as well as the new technologies emerging from the dim fog that surrounds the leading frontiers of research.

I am a strong believer in the dictum that the best way to predict the future is to invent it.

And the pen may be the single greatest tool ever invented to harness the immense creative power of the human mind, and thereby to scrawl out–perhaps even in the just-in-time fashion of the famous book Harold and the Purple Crayon–the uncertain path that leads us forward.

                    * * *

Update: I have also made the original technical paper and demonstration video available now.

If you are an educator seeing the impact of pen, tablet, and touch technology in the classroom, then I strongly encourage you to start organizing and writing up your observations for next year’s workshop. The 2016 edition of the series (now renamed CPTTE) will be held at Brown University in Providence, Rhode Island, and chaired by none other than the esteemed Andries van Dam, who is my academic grandfather (i.e., my Ph.D. advisor’s mentor) and of course widely respected in computing circles throughout the world.

Hinckley, K., WIPTTE 2015 Invited Talk: Sensing Techniques for Tablet + Stylus Interaction. Workshop on the Impact of Pen and Touch Technology on Education, Redmond, WA, April 28th, 2015. [Slides (.pptx)] [Slides PDF]


Project: Bimanual In-Place Commands

Here’s another interesting loose end, this one from 2012, which describes a user interface known as “In-Place Commands” that Michel Pahud, Bill Buxton, and I developed for a range of direct-touch form factors, including everything from tablets and tabletops all the way up to electronic whiteboards à la the modern Microsoft Surface Hub devices of 2015.

Microsoft is currently running a Request for Proposals for Surface Hub research, by the way, so check it out if that sort of thing is at all up your alley. If your proposal is selected you’ll get a spiffy new Surface Hub and $25,000 to go along with it.

We’ve never written up a formal paper on our In-Place Commands work, in part because there is still much to do and we intend to pursue it further when the time is right. But in the meantime the following post and video documenting the work may be of interest to aficionados of efficient interaction on such devices. This also relates closely to the Finger Shadow and Accordion Menu explored in our Pen + Touch work, documented here and here, which collectively form a class of such techniques.

While we wouldn’t claim that any one of these represents the ultimate approach to command and control for direct input, in sum they illustrate many of the underlying issues, the rich set of capabilities we strive to support, and possible directions for future embellishments as well.

Knies, R. In-Place: Interacting with Large Displays. Reporting on research by Pahud, M., Hinckley, K., and Buxton, B. TechNet Inside Microsoft Research Blog Post, Oct 4th, 2012. [Author’s cached copy of post as PDF] [Video MP4] [Watch on YouTube]

In-Place Commands Screen Shot

The user can call up commands in-place, directly where they are working, by touching both fingers down and fanning out the available tool palettes. Many of the functions thus revealed act as click-through tools, where the user may simultaneously select and apply the selected tool — as the user is about to do for the line-drawing tool in the image above.
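For the curious, the core of this interaction can be sketched in a few lines. This is purely a hypothetical reconstruction from the description above — not our actual implementation — and every class and method name here is invented for illustration:

```python
# Hypothetical sketch of the In-Place Commands interaction logic:
# two fingers down fan out a tool palette at that spot, and palette
# items act as click-through tools, selected and applied in one touch.

class InPlaceCommands:
    def __init__(self):
        self.touches = {}           # touch id -> (x, y)
        self.palette_origin = None  # where the palette is fanned out
        self.active_tool = None

    def touch_down(self, tid, x, y):
        self.touches[tid] = (x, y)
        if len(self.touches) == 2 and self.palette_origin is None:
            # Two fingers down: fan out the palette at their centroid.
            (x1, y1), (x2, y2) = self.touches.values()
            self.palette_origin = ((x1 + x2) / 2, (y1 + y2) / 2)

    def tap_palette_item(self, tool, x, y):
        # Click-through: selecting the tool also applies it at (x, y).
        self.active_tool = tool
        return f"{tool} applied at ({x}, {y})"

    def touch_up(self, tid):
        self.touches.pop(tid, None)
        if not self.touches:
            self.palette_origin = None  # dismiss when all fingers lift
```

The real system of course handles many more states (fanning animation, multiple palettes, pen vs. touch distinctions); the sketch only captures the two-finger invocation and the select-and-apply shortcut.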

Watch Bimanual In-Place Commands video on YouTube

Symposium Abstract: Issues in bimanual coordination: The props-based interface for neurosurgical visualization

I have a small backlog of updates and new posts to clear out, which I’ll be undertaking in the next few days.

The first of these is the following small abstract that actually dates from way back in 1996, shortly before I graduated with my Ph.D. in Computer Science from the University of Virginia.

It was a really fun symposium organized by the esteemed Yves Guiard, famous for his kinematic chain model of human bimanual action, that included Bill Buxton and me, among others. For me this was a small but timely recognition that came early in my career and made it possible for me to take the stage alongside two of my biggest research heroes.

Hinckley, K., 140.3: Issues in bimanual coordination: The props-based interface for neurosurgical visualization. Appeared in Symposium 140: Human bimanual specialization: New perspectives on basic research and application, convened by Yves Guiard, Montréal, Quebec, Canada, Aug. 17, 1996. Abstract published in International Journal of Psychology, Volume 31, Issue 3-4, Special Issue: Abstracts of the XXVI INTERNATIONAL CONGRESS OF PSYCHOLOGY, 1996. [PDF – Symposium 140 Abstracts]


I will describe a three-dimensional human-computer interface for neurosurgical visualization based on the bimanual manipulation of real-world tools. The user’s nonpreferred hand holds a miniature head that can be “sliced open” or “pointed to” using a cross-sectioning plane or a stylus held in the preferred hand. The nonpreferred hand acts as a dynamic frame-of-reference relative to which the preferred hand articulates its motion. I will also discuss experiments that investigate the role of bimanual action in virtual manipulation and in the design of human-computer interfaces in general.

Contribute to MobileHCI 2015 and Help Advance the Frontiers of Mobility: Submissions Due Feb 6th, 2015.

Send us your work. If it makes us go “Wow!” we want it.

Along with Hans Gellersen of Lancaster University (UK), I’m proud to announce that I’m co-chairing the papers selection committee for the 2015 installment of the long-running MobileHCI conference (sponsored by the ACM and SIGCHI), to take place Aug 24th-Aug 27th, 2015, in wonderful and historic Copenhagen, Denmark.

MobileHCI is the premier venue to publish and learn about state-of-the-art innovations and insights for all aspects of human-computer interaction as it pertains to mobility–whether in terms of the devices we use, the services we engage with, or the new patterns of human behavior emerging from the wilderness of the modern-day digital ecology.

Submissions due Feb 6th, 2015.

Call for Papers

MobileHCI seeks contributions in the form of innovations, insights, or analyses related to human experiences with mobility.

Our interpretation of mobility is inclusive and broadly construed. Likewise, our view of contribution encompasses technology, experience, methodology, and theory—or any mix thereof, and beyond. We seek richness and diversity in topic as well as approach, method, and viewpoint. If you can make a convincing case that you have something important to say about mobility, in all its many forms, we want to see your work.

In no particular order, this includes contributions in the form of:

Systems & infrastructures. The design, architecture, deployment, and evaluation of systems and infrastructures that support development of or interaction with mobile devices and services.

Devices & techniques. The design, construction, usage, and evaluation of devices and techniques that create valuable new capabilities for mobile human-computer interaction.

Applications & experiences. Descriptions of the design, empirical study of interactive applications, or analysis of usage trends that leverage mobile devices and systems.

Methodologies & tools. New methods and tools designed for or applied to studying or building mobile user interfaces, applications, and mobile users.

Theories & models. Critical analysis or organizing theory with clearly motivated relevance to the design or study of mobile human-computer interaction; taxonomies of design or devices; well-supported essays on emerging trends and practice in mobile human-computer interaction.

Visions & wildcards. Well-argued and well-supported visions of the future of mobile computing; non-traditional topics that bear on mobility; under-represented viewpoints and perspectives that convincingly bring something new to mobile research and practice. Surprise us with something new and compelling.

We seek contribution of ideas, as opposed to convention of form.

If you write a good paper—present clear, well-argued and well-cited ideas that are backed up with some form of compelling evidence (proof-of-concept implementations, system demonstrations, data analysis, user studies, or whatever methodology suits the contribution you are trying to make)—then we want to see your work, and if we agree it is good, we will accept it.

We are not particularly picky about page lengths or the structure of papers. Use the number of pages you need to convey a contribution, no more, no less.

Reviewers traditionally expect about 4pp for shorter contributions, and about 10pp for long-form contributions, but these are simply guideposts of what authors most commonly submit.

If you have a great 10 page paper with an intriguing set of ideas and the references spill over onto page 12, we are happy with that.

If you can convey a solid idea in 8 pages, that is fine too.

Or a four-pager with a clearly articulated nugget of contribution is always welcome.

Finally, keep the “Wow!” test in mind.

We are always happy to consider thought-provoking work that might not be perfect but clearly does inject new ideas into the discourse on mobile interaction, what it is now, what it could be in the future.

We would rather have 10 thought-provoking papers that break new ground in their own unique ways than one perfect paper that is dull and unassailable.

Send us your work. If it makes us go “Wow!” we want it. By the same token there is nothing wrong with solid work that advances the state of the art. We are excited to expand the many frontiers of mobility and we need your contributions to help us get there.

You can find full details in the online call for papers at the MobileHCI 2015 website.

And be sure to spread the word to your peers and collaborators so that we can have a rich conference programme with a great diversity of neat projects and results to showcase the cutting edge of mobility.

Interacting with the Undead: A Crash Course on the “Inhuman Factors” of Computing

I did a far-ranging interview last week with Nora Young, the host of CBC Radio’s national technology and trend-watching show called Spark.

But the most critical and timely topic we ventured into was the burning question on everyone’s mind as All Hallows’ Eve rapidly approaches:

Can zombies use touchscreens?

This question treads (or shall we say, shambles) into the widely neglected area of Inhuman Factors, a branch of Human-Computer Interaction that studies technological affordances for the most disenfranchised and unembodied users of them all–the undead.

Fortunately for Nora, however, I am the world’s foremost authority on the topic.

And I was only too happy to speak to this glaring oversight in how we design today’s technologies, one that I have long campaigned to redress.

Needless to say, Zombie-Computer Interaction (ZCI) is an area rife with dire usability problems.

You can listen to the podcast and see how Nora sparked the discussion here.

But to clear up some common myths and misconceptions of ZCI, let me articulate seven critical design observations to keep in mind when designing technology for the undead:

  1.  Yes, zombies can use touchscreens–with appropriate design.
  2. Thus, like everything else in design, the correct answer is:
    “It Depends.”
  3. The corpse has to be fresh. Humans are essentially giant bags of water; touchscreens are sensitive to the capacitance induced by the moisture in our bodies. So long as the undead creature has recently departed the realm of the living, then, the capacitive touchscreens commonplace in today’s technology should respond appropriately.
  4. Results also may be acceptable if the zombie has fed on a sufficient quantity of brains in the last 24-36 hours.
  5. MOAR BRAINS! are better.
  6. Nonetheless, the water content of a motive corpse can be a significant barrier in day-to-day (or, to speak more precisely, night-to-night) interactions of the undead with tablets, smartphones, bank kiosks, and the like. In particular, touchscreens often completely fail to respond to mummies, ghasts, vampires, and the rarely-studied windigo of Algonquian legend–all due to the extreme desiccation of the corporeal form.
  7. Fortunately for these dried-up souls, the graveyard of devices-past is replete with resistive touchscreen technology such as the once-revered Palm Pilot handheld computer, as documented in the frightening and deeply disturbing Buxton Collection of Input Devices and Technologies. These devices respond successfully to the finger-taps of the desiccated undead because they sense contact pressure, not capacitance.

So let me recap the lessons:
Zombies can definitely use touchscreens; brains are good, MOAR BRAINS are better; and if you see a zombie sporting a Palm Pilot run like hell, because that sucker is damned hungry.

But naturally, the ground-breaking discussion on Zombie-Computer Interaction sparked by Nora’s provocation has triggered a flurry of follow-on questions from concerned citizens to my inbox:

What about ghosts? Can a ghost use a touchscreen?

A ghost is an unholy manifestation of non-corporeal form. Lacking an embodied form, a ghost therefore cannot use a touchscreen–their hand passes right through it. But ghosts can be sensed by light, such as laser rangefinders, or the depth-sensing technology of the Kinect camera for the Xbox.

However, ghosts frequently can and do leave behind traces of ectoplasmic goo, which can cause touchscreens to respond in a strange and highly erratic manner.

If you have ever made a typo on a touchscreen keyboard, or triggered Angry Birds by accident when you could swear you were reaching for some other icon–chances are that “ghost contact” was triggered by a disembodied spirit trying to communicate with you from the beyond.

If this happens to you, I highly recommend that you immediately stop what you are doing and install every touchscreen Ouija board app you can find so that you can open a suitable communication channel with the realm of the dead.

What about Cthulhu–H. P. Lovecraft’s terrifying cosmic deity that is part man, part loathsome alien form, and part giant squid? Can Cthulhu use a touchscreen?

Studies are inconclusive. Scott’s great expedition to the Transantarctic mountains–where records of Cthulhu are rumored to be hidden–vanished in the icy wastes, never to be heard from again. R. Carter et al. studied the literature extensively and promptly went insane.

Other researchers, including myself, have been understandably dissuaded from examining the issue further.

My opinion, unsupported by data, is that as a pan-dimensional being Cthulhu can touch whatever the hell he wants–when the stars are right and the lost city of R’lyeh rises once again from the slimy eons-deep vaults of the black Pacific.

A lot of PEOPLE are WORRIED about Lawyers. Can lawyers use touchscreens as well?

Sadly, it is widely believed (and backed up by scientific studies) that most lawyers have no soul.

Therefore the majority of lawyers cannot use a touchscreen at all.

This is why summons and lawsuits always arrive in paper form from a beady-eyed courier.


Other noteworthy challenges to conventional INHUMAN FACTORS design wisdom

I’ve also fielded a variety of questions and strongly-held opinions from the far and dark corners of the Twittersphere.

Needless to say, these are clearly highly disturbed individuals, so I recommend that you interact with them at your own risk.

All right. I think I’ve put this topic to rest.

But keep the questions coming.

And be careful tonight.

Be sure to post in the comments below, or tweet me after midnight @ken_hinckley and I’ll do my best to give you a scientifically rigorous (if not rigor-mortis-ish) response.

Paper: LightRing: Always-Available 2D Input on Any Surface

In this modern world bristling with on-the-go-go-go mobile activity, the dream of an always-available pointing device has long been held as a sort of holy grail of ubiquitous computing.

Ubiquitous computing, as futurists use the term, refers to the once-farfetched vision where computing pervades everything, everywhere, in a sort of all-encompassing computational nirvana of socially-aware displays and sensors that can respond to our every whim and need.

From our shiny little phones.

To our dull beige desktop computers.

To the vast wall-spanning electronic whiteboards of a future largely yet to come.

How will we interact with all of these devices as we move about the daily routine of this rapidly approaching future? As we encounter computing in all its many forms, carried on our person as well as enmeshed in the digitally enhanced architecture of walls, desktops, and surfaces all around?

Enter LightRing, our early take on one possible future for ubiquitous interaction.

LightRing device on a supporting surface

By virtue of being a ring always worn on the finger, LightRing travels with us and is always present.

By virtue of some simple sensing and clever signal processing, LightRing can be supported in an extremely compact form-factor while providing a straightforward pointing modality for interacting with devices.

At present, we primarily consider LightRing as it would be configured to interact with a situated display, such as a desktop computer, or a presentation projected against a wall at some distance.

The user moves their index finger, angling left and right, or flexing up and down by bending at the knuckle. Simple stuff, I know.

But unlike a mouse, it’s not anchored to any particular computer.

It travels with you.

It’s a go-everywhere interaction modality.

Close-up of LightRing and hand angles inferred from sensors

Left: The degrees-of-freedom detected by the LightRing sensors. Right: Conceptual mapping of hand movement to the sensed degrees of freedom. LightRing then combines these to support 2D pointing at targets on a display, or other interactions.

LightRing can then sense these finger movements–using a one-dimensional gyroscope to capture the left-right movement, and an infrared sensor-emitter pair to capture the proximity of the flexing finger joint–to support a cursor-control mode that is similar to how you would hold and move a mouse on a desktop.

Except there’s no mouse at all.

And there needn’t even be a desktop, as you can see in the video embedded below.

LightRing just senses the movement of your finger.  You can make the pointing motions on a tabletop, sure, but you can just as easily do them on a wall. Or on your pocket. Or a handheld clipboard.

All the sensing is relative, so LightRing always knows how to interpret your motions to control a 2D cursor on a display. Once the LightRing has been paired with a situated device, this lets you point at targets, even if the display itself is beyond your physical reach. You can sketch or handwrite characters with your finger–another scenario we have explored in depth on smartphones and even watches.

The trick to the LightRing is that it can automatically, and very naturally, calibrate itself to your finger’s range of motion if you just swirl your finger. From that circular motion LightRing can work backwards from the sensor values to how your finger is moving, assuming it is constrained to (roughly) a 2D plane. And that, combined with a button-press or finger touch on the ring itself, is enough to provide an effective input device.
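To make the idea concrete, here is a rough sketch of how that sensing pipeline might be structured. This is not the LightRing firmware — all names, gains, and the simple min/max calibration scheme are my own simplifying assumptions for illustration; the yaw rate from the one-dimensional gyro drives horizontal motion, and changes in the normalized infrared proximity reading (the flexing knuckle) drive vertical motion:

```python
# Hypothetical LightRing-style relative 2D pointing. A 1D gyro yaw
# rate is integrated into horizontal cursor motion; an IR proximity
# reading of the flexing knuckle, normalized against a calibrated
# range, drives vertical motion. The "swirl" calibration just records
# the IR range seen while the finger traces a circle.

class LightRingPointer:
    def __init__(self, gyro_gain=1.0, y_gain=100.0):
        self.gyro_gain = gyro_gain  # scales yaw rate -> horizontal pixels
        self.y_gain = y_gain        # scales flexion change -> vertical pixels
        self.ir_min = None          # calibrated IR range (swirl gesture)
        self.ir_max = None
        self.last_flex = None       # previous normalized flexion sample

    def calibrate(self, ir_samples):
        # Swirl calibration: observe the IR extremes over one circle.
        self.ir_min = min(ir_samples)
        self.ir_max = max(ir_samples)
        self.last_flex = None

    def _flexion(self, ir_value):
        # Normalize the raw IR reading to [0, 1] via the calibrated range.
        span = self.ir_max - self.ir_min
        v = (ir_value - self.ir_min) / span
        return min(1.0, max(0.0, v))

    def update(self, gyro_yaw_rate, ir_value, dt):
        # Relative pointing: dx from integrated yaw rate, dy from the
        # change in normalized flexion since the previous sample.
        dx = self.gyro_gain * gyro_yaw_rate * dt
        flex = self._flexion(ir_value)
        dy = 0.0 if self.last_flex is None else self.y_gain * (flex - self.last_flex)
        self.last_flex = flex
        return dx, dy
```

Because both channels report relative motion, nothing here depends on where the finger taps — a tabletop, a wall, or a pant leg all produce the same deltas, which is exactly what makes the ring a go-everywhere pointing device.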

The LightRing, as we have prototyped it now, is just one early step in the process. There’s a lot more we could do with this device, and many more practical problems that would need to be resolved to make it a useful adjunct to everyday devices–and to tap its full potential.

But my co-author Wolf Kienzle and I are working on it.

And hopefully, before too much longer now, we’ll have further updates on even more clever and fanciful stuff that we can do through this one tiny keyhole into this field of dreams, the verdant golden country of ubiquitous computing.


Kienzle, W., Hinckley, K., LightRing: Always-Available 2D Input on Any Surface. In the 27th ACM Symposium on User Interface Software and Technology (UIST 2014), Honolulu, Hawaii, Oct. 5-8, 2014, pp. 157-160. [PDF] [video.mp4 TBA] [Watch on YouTube]

Watch LightRing video on YouTube