Award: CHI Academy, 2014 Inductee

I’ve been a bit remiss in posting this, but as of April 2014, I’m a member of the CHI Academy, which is an honorary group that recognizes leaders in the field of Human-Computer Interaction.

Among whom, apparently, I can now include myself, strange as that may seem.

I was completely surprised by this and can honestly say I never expected any special recognition. I’ve just been plugging away on my little devices and techniques, writing papers here and there, but I suppose over the decades it all adds up. I don’t know if this means that my work is especially good or that I’m just getting older, but either way I appreciate the gesture of recognition from my peers in the field.

I was in a bit of a ribald mood when I got the news, so when the award organizers asked me to reply with my bio I figured what the heck and decided to have some fun with it:

Ken Hinckley is a Principal Researcher at Microsoft Research, where he has spent the last 17 years investigating novel input devices, device form-factors, and modalities of interaction.

He feels fortunate to have had the opportunity to collaborate with many CHI Academy members while working there, including noted trouble-makers such as Bill Buxton, Patrick Baudisch, and Eric Horvitz—as well as George Robertson, whom he owes a debt of gratitude for hiring him fresh out of grad school.

Ken is perhaps best known for his work on sensing techniques, cross-device interaction, and pen computing. He has published over 75 academic papers and is a named inventor on upwards of 150 patents. Ken holds a Ph.D. in Computer Science from the University of Virginia, where he studied with Randy Pausch.

He has also published fiction in professional markets including Nature and Fiction River, and prides himself on still being able to hit 30-foot jump shots at age 44.

Not too shabby.

Now, in the spirit of full disclosure, there are no real perks associated with being a CHI Academy member as far as I’ve been able to figure. People do seem to ask me for reference letters just a tiny bit more frequently. And I definitely get more junk email from organizers of dubious-sounding conferences than before. No need for research heroics if you want a piece of that, just email me and I’d be happy to forward them along.

But the absolute most fun part of the whole deal was a small private celebration that noted futurist Bill Buxton organized at his ultra-modern home fronting Lake Ontario in Toronto, where I was joined by my Microsoft Research colleagues Abigail Sellen, her husband Richard Harper, and John Tang. Abi is already a member (and an occasional collaborator whom I consider a friend), and Richard and John were inducted along with me into the Academy in 2014.

Bill Buxton needs no introduction among the avant-garde of computing. And he’s well known in the design community as well, not to mention publishing on equestrianism and mountaineering, among other topics. In particular, his collection of interactive devices is arguably the most complete ever assembled. Only a tiny fraction of it is currently documented on-line. It contains everything from the world’s first radio and television remote controls, to the strangest keyboards ever conceived by mankind, and even the very first handcrafted wooden computer mice that started cropping up in the 1960s.

The taxi dropped me off, I rang the doorbell, and when a tall man with rock-star hair gone gray and thinned precipitously by the ravages of time answered the door, I inquired:

“Is this, by any chance, the Buxton Home for Wayward Input Devices?”

To which Bill replied in the affirmative.

I indeed had the right place, I would fit right in here, and he showed me in.

Much of Bill’s collection lives off the premises, but his below-ground sanctum sanctorum was still walled by shelves bursting with transparent tubs packed with handheld gadgets that had arrived far before their time, historical mice and trackballs, and hybrid bastard devices of every conceivable description. And what little space remained was packed with books on design, sketching, and the history of mountaineering and the fur trade.

Despite his home office being situated below grade, natural light poured down into it through the huge front windows facing the inland sea, owing to the home’s modern design. A totally awesome space, and one that would have looked right at home on the front page of Architectural Digest.

Bill showed us his origami kayak on the back deck, treated us all to some hand-crafted martinis in the open-plan kitchen, and arranged for transportation to the awards dinner via a 10-person white stretch limousine. We even made a brief pit stop so Bill could dash out and pick up a bottle of champagne at a package store.

Great fun.

I’ve known Bill since 1994, when he visited Randy Pausch’s lab at the University of Virginia, and ever since people have often assumed that he was my advisor. He never was in any official capacity, but I read all of his papers in that period and in many ways I looked up to him as my research hero. And now that we’ve worked together as colleagues for nearly 10 years (!), and with Randy’s passing, I often do still see him as a mentor.

Or is that de-mentor?

Probably a little bit of each, in all honesty (grin).

Yeah, the award was pretty cool and all, but it was the red carpet thrown out by Bill that I’ll always remember.

Hinckley, K., CHI Academy. Inducted April 27th, 2014 at CHI 2014 in Toronto, Ontario, Canada, for career research accomplishments and service to the ACM SIGCHI community (Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction). [Ken Hinckley CHI Academy Bio]

The CHI Academy is an honorary group of individuals who have made substantial contributions to the field of human-computer interaction. These are the principal leaders of the field, whose efforts have shaped the disciplines and/or industry, and led the research and/or innovation in human-computer interaction. The criteria for election to the CHI Academy are:

  • Cumulative contributions to the field.
  • Impact on the field through development of new research directions and/or innovations.
  • Influence on the work of others.
  • Reasonably active participation in the ACM SIGCHI community.

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Hinckley, K., Jacob, R., Ware, C., Wobbrock, J., and Wigdor, D., Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. [PDF – Author’s Draft – may contain discrepancies]

Invited Talk: WIPTTE 2015 Presentation of Sensing Techniques for Tablets, Pen, and Touch

The organizers of WIPTTE 2015, the Workshop on the Impact of Pen and Touch Technology on Education, kindly invited me to speak about my recent work on sensing techniques for stylus + tablet interaction.

One of the key points that I emphasized:

To design technology to fully take advantage of human skills, it is critical to observe what people do with their hands when they are engaged in manual activities such as handwriting.

Notice my deliberate use of the plural, hands, as in both of ’em, in a division of labor that is a perfect example of cooperative bimanual action.


My six-year-old daughter demonstrates the power of crayon and touch technology.

And of course I had my usual array of stupid sensor tricks to illustrate the many ways that sensing systems of the future embedded in tablets and pens could take advantage of such observations. Some of these possible uses for sensors probably seem fanciful, in this antiquated era of circa 2015.

But in eerily similar fashion, some of the earliest work that I did on sensors embedded in handheld devices also felt completely out-of-step with the times when I published it back in the year 2000. A time so backwards it already belongs to the last millennium for goodness sakes!

Now aspects of that work are embedded in practically every mobile device on the planet.

It was a fun talk, with an engaged audience of educators who are eager to see pen and tablet technology advance to better serve the educational needs of students all over the world. I have three kids of school age now so this stuff matters to me. And I love speaking to this audience because they always get so excited to see the pen and touch interaction concepts I have explored over the years, as well as the new technologies emerging from the dim fog that surrounds the leading frontiers of research.

I am a strong believer in the dictum that the best way to predict the future is to invent it.

And the pen may be the single greatest tool ever invented to harness the immense creative power of the human mind, and thereby to scrawl out–perhaps even in the just-in-time fashion of the famous book Harold and the Purple Crayon–the uncertain path that leads us forward.

If you are an educator seeing impacts of pen, tablet, and touch technology in the classroom, then I strongly encourage you to start organizing and writing up your observations for next year’s workshop. The 2016 edition of the series will be held at Brown University in Providence, Rhode Island, and chaired by none other than the esteemed Andries van Dam, who is my academic grandfather (i.e. my Ph.D. advisor’s mentor) and of course widely respected in computing circles throughout the world.

Hinckley, K., WIPTTE 2015 Invited Talk: Sensing Techniques for Tablet + Stylus Interaction. Workshop on the Impact of Pen and Touch Technology on Education, Redmond, WA, April 28th, 2015. [Slides (.pptx)] [Slides PDF]

Project: Bimanual In-Place Commands

Here’s another interesting loose end, this one from 2012, which describes a user interface known as “In-Place Commands” that Michel Pahud, Bill Buxton, and I developed for a range of direct-touch form factors, including everything from tablets and tabletops all the way up to electronic whiteboards a la the modern Microsoft Surface Hub devices of 2015.

Microsoft is currently running a Request for Proposals for Surface Hub research, by the way, so check it out if that sort of thing is at all up your alley. If your proposal is selected you’ll get a spiffy new Surface Hub and $25,000 to go along with it.

We’ve never written up a formal paper on our In-Place Commands work, in part because there is still much to do and we intend to pursue it further when the time is right. But in the meantime the following post and video documenting the work may be of interest to aficionados of efficient interaction on such devices. This also relates closely to the Finger Shadow and Accordion Menu explored in our Pen + Touch work, documented here and here, which collectively form a class of such techniques.

While we wouldn’t claim that any one of these represents the ultimate approach to command and control for direct input, in sum they illustrate many of the underlying issues, the rich set of capabilities we strive to support, and possible directions for future embellishments as well.

Knies, R. In-Place: Interacting with Large Displays. Reporting on research by Pahud, M., Hinckley, K., and Buxton, B. TechNet Inside Microsoft Research Blog Post, Oct 4th, 2012. [Author’s cached copy of post as PDF] [Video MP4] [Watch on YouTube]

In-Place Commands Screen Shot

The user can call up commands in-place, directly where he is working, by touching both fingers down and fanning out the available tool palettes. Many of the functions thus revealed act as click-through tools, where the user may simultaneously select and apply the selected tool — as the user is about to do for the line-drawing tool in the image above.

Watch Bimanual In-Place Commands video on YouTube

Symposium Abstract: Issues in bimanual coordination: The props-based interface for neurosurgical visualization

I have a small backlog of updates and new posts to clear out, which I’ll be undertaking in the next few days.

The first of these is the following small abstract that actually dates from way back in 1996, shortly before I graduated with my Ph.D. in Computer Science from the University of Virginia.

It was a really fun symposium organized by the esteemed Yves Guiard, famous for his kinematic chain model of human bimanual action, that included myself and Bill Buxton, among others. For me this was a small but timely recognition that came early in my career and made it possible for me to take the stage alongside two of my biggest research heroes.

Hinckley, K., 140.3: Issues in bimanual coordination: The props-based interface for neurosurgical visualization. Appeared in Symposium 140: Human bimanual specialization: New perspectives on basic research and application, convened by Yves Guiard, Montréal, Quebec, Canada, Aug. 17, 1996. Abstract published in International Journal of Psychology, Volume 31, Issue 3-4, Special Issue: Abstracts of the XXVI INTERNATIONAL CONGRESS OF PSYCHOLOGY, 1996. [PDF – Symposium 140 Abstracts]


I will describe a three-dimensional human-computer interface for neurosurgical visualization based on the bimanual manipulation of real-world tools. The user’s nonpreferred hand holds a miniature head that can be “sliced open” or “pointed to” using a cross-sectioning plane or a stylus held in the preferred hand. The nonpreferred hand acts as a dynamic frame-of-reference relative to which the preferred hand articulates its motion. I will also discuss experiments that investigate the role of bimanual action in virtual manipulation and in the design of human-computer interfaces in general.

Contribute to MobileHCI 2015 and Help Advance the Frontiers of Mobility: Submissions Due Feb 6th, 2015.

Send us your work. If it makes us go “Wow!” we want it.

Along with Hans Gellersen of Lancaster University (UK), I’m proud to announce that I’m co-chairing the papers selection committee for the 2015 installment of the long-running MobileHCI conference (sponsored by the ACM and SIGCHI), to take place Aug 24th-Aug 27th, 2015, in wonderful and historic Copenhagen, Denmark.

MobileHCI is the premier venue to publish and learn about state-of-the-art innovations and insights for all aspects of human-computer interaction as it pertains to mobility–whether in terms of the devices we use, the services we engage with, or the new patterns of human behavior emerging from the wilderness of the modern-day digital ecology.

Submissions due Feb 6th, 2015.

Call for Papers

MobileHCI seeks contributions in the form of innovations, insights, or analyses related to human experiences with mobility.

Our interpretation of mobility is inclusive and broadly construed. Likewise, our view of contribution encompasses technology, experience, methodology, and theory—or any mix thereof, and beyond. We seek richness and diversity in topic as well as approach, method, and viewpoint. If you can make a convincing case that you have something important to say about mobility, in all its many forms, we want to see your work.

In no particular order, this includes contributions in the form of:

Systems & infrastructures. The design, architecture, deployment, and evaluation of systems and infrastructures that support development of or interaction with mobile devices and services.

Devices & techniques. The design, construction, usage, and evaluation of devices and techniques that create valuable new capabilities for mobile human-computer interaction.

Applications & experiences. Descriptions of the design, empirical study of interactive applications, or analysis of usage trends that leverage mobile devices and systems.

Methodologies & tools. New methods and tools designed for or applied to studying or building mobile user interfaces, applications, and mobile users.

Theories & models. Critical analysis or organizing theory with clearly motivated relevance to the design or study of mobile human-computer interaction; taxonomies of design or devices; well-supported essays on emerging trends and practice in mobile human-computer interaction.

Visions & wildcards. Well-argued and well-supported visions of the future of mobile computing; non-traditional topics that bear on mobility; under-represented viewpoints and perspectives that convincingly bring something new to mobile research and practice. Surprise us with something new and compelling.

We seek contribution of ideas, as opposed to convention of form.

If you write a good paper—present clear, well-argued and well-cited ideas that are backed up with some form of compelling evidence (proof-of-concept implementations, system demonstrations, data analysis, user studies, or whatever methodology suits the contribution you are trying to make)—then we want to see your work, and if we agree it is good, we will accept it.

We are not particularly picky about page lengths or the structure of papers. Use the number of pages you need to convey a contribution, no more, no less.

Reviewers traditionally expect about 4pp for shorter contributions, and about 10pp for long-form contributions, but these are simply guideposts of what authors most commonly submit.

If you have a great 10-page paper with an intriguing set of ideas and the references spill over onto page 12, we are happy with that.

If you can convey a solid idea in 8 pages, that is fine too.

Or a four-pager with a clearly articulated nugget of contribution is always welcome.

Finally, keep the “Wow!” test in mind.

We are always happy to consider thought-provoking work that might not be perfect but clearly does inject new ideas into the discourse on mobile interaction, what it is now, what it could be in the future.

We would rather have 10 thought-provoking papers that break new ground in their own unique ways than one perfect paper that is dull and unassailable.

Send us your work. If it makes us go “Wow!” we want it. By the same token there is nothing wrong with solid work that advances the state of the art. We are excited to expand the many frontiers of mobility and we need your contributions to help us get there.

You can find full details in the online call for papers at the MobileHCI 2015 website.

And be sure to spread the word to your peers and collaborators so that we can have a rich conference programme with a great diversity of neat projects and results to showcase the cutting edge of mobility.

Interacting with the Undead: A Crash Course on the “Inhuman Factors” of Computing

I did a far-ranging interview last week with Nora Young, the host of CBC Radio’s national technology and trend-watching show called Spark.

But the most critical and timely topic we ventured into was the burning question on everyone’s mind as All Hallows’ Eve rapidly approaches:

Can zombies use touchscreens?

This question treads (or shall we say, shambles) into the widely neglected area of Inhuman Factors, a branch of Human-Computer Interaction that studies technological affordances for the most disenfranchised and unembodied users of them all–the undead.

Fortunately for Nora, however, I am the world’s foremost authority on the topic.

And I was only too happy to speak to this glaring oversight in how we design today’s technologies, one that I have long campaigned to redress.

Needless to say, Zombie-Computer Interaction (ZCI) is an area rife with dire usability problems.

You can listen to the podcast and see how Nora sparked the discussion here.

But to clear up some common myths and misconceptions of ZCI, let me articulate seven critical design observations to keep in mind when designing technology for the undead:

  1.  Yes, zombies can use touchscreens–with appropriate design.
  2. Thus, like everything else in design, the correct answer is:
    “It Depends.”
  3. The corpse has to be fresh. Humans are essentially giant bags of water; touchscreens are sensitive to the capacitance induced by the moisture in our bodies. So long as the undead creature has only recently departed the realm of the living, the capacitive touchscreens commonplace in today’s technology should respond appropriately.
  4. Results also may be acceptable if the zombie has fed on a sufficient quantity of brains in the last 24-36 hours.
  5. MOAR BRAINS! are better.
  6. Nonetheless, the water content of a motive corpse can be a significant barrier in day-to-day (or, to speak more precisely, night-to-night) interactions of the undead with tablets, smartphones, bank kiosks, and the like. In particular, touchscreens often completely fail to respond to mummies, ghasts, vampires, and the rarely-studied windigo of Algonquian legend–all due to the extreme desiccation of the corporeal form.
  7. Fortunately for these dried-up souls, the graveyard of devices-past is replete with resistive touchscreen technology such as the once-revered Palm Pilot handheld computer, as documented in the frightening and deeply disturbing Buxton Collection of Input Devices and Technologies. These devices respond successfully to the finger-taps of the desiccated undead because they sense contact pressure, not capacitance.

So let me recap the lessons:
Zombies can definitely use touchscreens; brains are good, MOAR BRAINS are better; and if you see a zombie sporting a Palm Pilot run like hell, because that sucker is damned hungry.

But naturally, the ground-breaking discussion on Zombie-Computer Interaction sparked by Nora’s provocation has triggered a flurry of follow-on questions from concerned citizens to my inbox:

What about ghosts? Can a ghost use a touchscreen?

A ghost is an unholy manifestation of non-corporeal form. Lacking an embodied form, a ghost therefore cannot use a touchscreen–their hand passes right through it. But ghosts can be sensed by light-based technologies, such as laser rangefinders or the depth-sensing Kinect camera for the Xbox.

However, ghosts frequently can and do leave behind traces of ectoplasmic goo, which can cause touchscreens to respond in a strange and highly erratic manner.

If you have ever made a typo on a touchscreen keyboard, or triggered Angry Birds by accident when you could swear you were reaching for some other icon–chances are that “ghost contact” was triggered by a disembodied spirit trying to communicate with you from the beyond.

If this happens to you, I highly recommend that you immediately stop what you are doing and install every touchscreen Ouija board app you can find so that you can open a suitable communication channel with the realm of the dead.

What about Cthulhu–H. P. Lovecraft’s terrifying cosmic deity that is part man, part loathsome alien form, and part giant squid? Can Cthulhu use a touchscreen?

Studies are inconclusive. Scott’s great expedition to the Transantarctic mountains–where records of Cthulhu are rumored to be hidden–vanished in the icy wastes, never to be heard from again. R. Carter et al. studied the literature extensively and promptly went insane.

Other researchers, including myself, have been understandably dissuaded from examining the issue further.

My opinion, unsupported by data, is that as a pan-dimensional being Cthulhu can touch whatever the hell he wants–when the stars are right and the lost city of R’lyeh rises once again from the slimy eons-deep vaults of the black Pacific.

A lot of PEOPLE are WORRIED about Lawyers. Can lawyers use touchscreens as well?

Sadly, it is widely believed (and backed up by scientific studies) that most lawyers have no soul.

Therefore the majority of lawyers cannot use a touchscreen at all.

This is why summons and lawsuits always arrive in paper form from a beady-eyed courier.


Other noteworthy challenges to conventional INHUMAN FACTORS design wisdom

I’ve also fielded a variety of questions and strongly-held opinions from the far and dark corners of the Twittersphere.

Needless to say, these are clearly highly disturbed individuals, so I recommend that you interact with them at your own risk.

All right. I think I’ve put this topic to rest.

But keep the questions coming.

And be careful tonight.

Be sure to post in the comments below, or tweet me after midnight @ken_hinckley and I’ll do my best to give you a scientifically rigorous (if not rigor-mortis-ish) response.