Short Story: Six Names for the End

SIX NAMES FOR THE END

Time to say goodbye.

My latest confabulation is now available at Nature, in the award-winning Futures column. It was a fun piece of fiction to write — short, sharp, and packing a mighty wallop — and I hope that you enjoy reading it, too.

You can also find my post about the writing of this story on The Futures Conditional blog, likewise hosted by Nature.

Coming up next, my short story is slated to appear in Interzone issue #265 in July. It’s a mighty strange one, which steps on pretty much every third rail known to mankind, and even its title has the potential to raise a large number of eyebrows.

What can I say, I try to keep things interesting around here. (grin).

“Six Names for the End” by Ken Hinckley. In Nature, Vol. 534, No. 7607, p. 430. June 15, 2016. Futures column. [Available to read online for free]

Published by Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved. DOI: 10.1038/534430a

Paper: Pre-Touch Sensing for Mobile Interaction

I have to admit it: I feel as if I’m looking at the sunrise of what may be a whole new way of interacting with mobile devices.

When I think about it, the possibilities bathe my eyes in a golden glow, and the warmth drums against my skin.

And my latest research peers out across this vivid horizon, to where I see touch — and mobile interaction with touchscreens in particular — evolving in the near future.

As a seasoned researcher, my job (which in reality is some strange admixture of interaction designer, innovator, and futurist) is not necessarily to predict the future, but rather to invent it via extrapolation from a sort of visionary present which occupies my waking dreams.

I see things not as they are, but as they could be, through the lens afforded by a (usually optimistic) extrapolation from extant technologies, or those I know are likely to soon become more widely available.

With regard to interaction with touchscreens in particular, it has been clear to me for some time that the ability to sense the fingers as they approach the device — well before contact with the screen itself — is destined to become commonplace on commodity devices.

This is interesting for a number of reasons.

And no, the ability to do goofy gestures above the screen, waving at it frantically (as if it were a fancy-pants towel dispenser in a public restroom) in some dim hope of receiving an affirmative response, is not one of them.

In terms of human capabilities, one obviously cannot touch the screen of a mobile device without approaching it first.

But what often goes unrecognized is that one also must hold the device, typically in the non-preferred hand, as a precursor to touch. Hence, how you hold the device — the pattern of your grip and which hand you hold it in — is an additional detail of context that current mobile devices more or less wholly ignore.

So in this new work, my colleagues and I collectively refer to these two precursors of touch — approach and the need to grip the device — as pre-touch.

And it is my staunch belief that the ability to sense such pre-touch information could radically transform the mobile ‘touch’ interfaces that we all have come to take for granted.
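
To make the idea a bit more concrete, here is a minimal sketch, in Python, of how an application might consume such pre-touch information. The PreTouchFrame structure, the grip labels, and the adaptation policy are all illustrative assumptions of my own for this post — they are not the sensing API or the techniques from the paper itself.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PreTouchFrame:
    hover_xy: Optional[Tuple[float, float]]  # normalized finger position above the screen, if sensed
    hover_height_mm: Optional[float]         # estimated height of the finger above the glass
    grip: str                                # hypothetical labels: "left-hand", "right-hand", "on-table"

def anticipate(frame: PreTouchFrame) -> str:
    """Decide how the interface might anticipate the touch that is about to happen."""
    if frame.hover_xy is None:
        return "idle: hide transient controls"
    if frame.grip == "on-table":
        # Device lying flat with a finger approaching: both hands are likely free,
        # so roomier controls near the approaching finger make sense.
        return "reveal the full toolbar near the approaching finger"
    # One-handed grip: keep controls within reach of the thumb on the gripping side.
    side = "left" if frame.grip == "left-hand" else "right"
    return f"fade in compact controls along the {side} edge, beneath the hovering thumb"

# Example: right-handed grip, thumb hovering about 12 mm above the lower-right corner.
print(anticipate(PreTouchFrame(hover_xy=(0.8, 0.9), hover_height_mm=12.0, grip="right-hand")))

The point of the sketch is simply that approach and grip, taken together, let the interface anticipate a touch rather than merely react to it.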

You can get a sense of these possibilities, all implemented on a fully functional mobile phone with pre-touch sensing capability, in our demo reel below:

The project received a lot of attention, and coverage from many of the major tech blogs and other media outlets, for example:

  • The Verge (“Microsoft’s hover gestures for Windows phones are magnificent”)
  • SlashGear (“Smartphones next big thing: ‘Pre-Touch’”)
  • Business Insider (“Apple should definitely copy Microsoft’s incredible finger-sensing smartphone technology”)
  • And Fast Company Design (and again in “8 Incredible Prototypes That Show The Future Of Human-Computer Interaction.”)

But I rather liked the take that Silicon Angle offered, which quoted my concluding statement from the video above:

Taken as a whole, our exploration of pre-touch hints that the evolution of mobile touch may still be in its infancy – with many possibilities, unbounded by the flatland of the touchscreen, yet to explore.

And then responded as follows:

This is the moon-landing-esque conclusion Microsoft comes to after demonstrating its rather cool pre-touch mobile technology, i.e., a mobile phone that senses what your fingers are about to do.

While this evolution of touch has been coming in the research literature for at least a decade now, what exactly to do with above- and around-screen sensing (especially in a mobile setting) has been far from obvious. And that’s where I think our work on pre-touch sensing techniques for mobile interaction distinguishes itself, and in so doing identifies some very interesting use cases that have never been realized before.

The very best of these new techniques possess a quality that I love, namely that they have a certain surprising obviousness to them:

The techniques seem obvious — but only in retrospect.

And only after you’ve been surprised by the new idea or insight that lurks behind them.

If such an effort is indeed the first hint of a moonshot for touch, well, that’s a legacy for this project that I can live with.


UPDATE: The talk I gave at the CHI 2016 conference on this project is now available. Have a gander if you are so inclined.


 

Thumb sensed as it hovers over a pre-touch mobile phone.

Ken Hinckley, Seongkook Heo, Michel Pahud, Christian Holz, Hrvoje Benko, Abigail Sellen, Richard Banks, Kenton O’Hara, Gavin Smyth, William Buxton. 2016. Pre-Touch Sensing for Mobile Interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, pp. 2869-2881. San Jose, CA, May 7-12, 2016. http://dx.doi.org/10.1145/2858036.2858095

[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]


 

Olfactory Pen Creates Giant Stink, Fails to Make it out of Research Skunkworks

Microsoft has shown incredible stuff this week at \\build around Pen and Ink experiences — including simultaneous Pen + Touch experiences — as showcased for example in the great video on “Inking at the Speed of Thought” that is now available on Channel 9.

But I’ve had a skunkworks project — so to speak — in the works as part of my research (in the course of a career spanning decades) for a long time now, and this particular vision of the future of pen computing has consumed my imagination for at least the last 37 seconds or so. I’ve put a lot of thought into it.

It’s long been recognized that the sense of smell is a powerful index into the human memory. The scent of decaying pulp instantly brings to mind a favorite book, for example — in my case a volume of the masterworks of Edgar Allan Poe that was bequeathed to me by my grandfather.

Or who can ever forget the dizzying scent of their first significant other?

So I thought: Why not a digital pen with olfactory output?

Just think of the possibilities for this remarkable technology:

Not only can you ink faster than the speed of thought, but now you can stink faster than the speed of thought!

And I’m here to tell you that this is entirely possible. I think. I’ve already conceived of an amazing confabulation called the Aromatic Recombinator (patent pending; filed April 1st, 2016 at 2:55 PM; summarily rejected by patent office, 2:57 PM; earnest appeal filed in hope of an affirmative response, 2:59 PM; earnest response received: TBA).

Nonetheless, I can understand the patent office’s reluctance.

Because with this remarkable technology one can arouse almost any scent, from the headiest of perfumes all the way to the most cloying musk, simply by scribbling on the screen of your tablet as if it were an electronic scratch-n-sniff card. A conception on which I have another patent pending, by the way.

Admittedly, some details remain sketchy, but I remain highly optimistic that the obvious problems can be sniffed out in short order.

And if not, rest assured, I will raise one hell of a stink.

[Happy April Fools’ Day.]

Editorial: Welcome to a New Era for TOCHI

Wherein I tell the true story of how I became an Editor-in-Chief.


Constant change is a given in the world of high technology.

But still it can come as a rude awakening when it arrives in human terms, and we find that it also applies to our friends, our colleagues, and the people we care for.

Not to mention ourselves!

So it was that I found myself, with a tumbler full of fresh coffee steaming between my hands, looking in disbelief at an email nominating me to assume the editorial helm of the leading journal in my field, the ACM Transactions on Computer-Human Interaction (otherwise known as TOCHI).

Ultimately (through no fault of my own) the ACM Publications Board was apparently seized by an episode of temporary madness and, deeming my formal application to have the necessary qualifications (with a dozen years of TOCHI associate editorship under my belt, a membership in the CHI Academy for recognized leaders in the field, and a Lasting Impact Award for my early work on mobile sensing—not to mention hundreds of paper rejections that apparently did no lasting damage to my reputation), forthwith approved me to take over as Editor-in-Chief from my friend and long-time colleague, Shumin Zhai.

I’ve known Shumin since 1994, way back when I delivered my very first talk at CHI in the same session in which he presented his latest results on “the silk cursor.” I took an instant liking to him, but only over the years that followed did I come to fully appreciate that Shumin’s work ethic is legendary. As my colleague Bill Buxton (who sat on Shumin’s thesis committee) once put it, “Shumin works harder than any two persons I have ever known.”

And of course that applied to Shumin’s work ethic with TOCHI as well.

A man who now leaves behind an astoundingly large pair of shoes that I must fill.

To say that I respect Shumin enormously, and the incredible progress he brought to the operation and profile of the journal during his six-year tenure, would be a vast understatement.

But after I got over the sheer terror of taking on such an important role, I began to get excited.

And then I got ideas.

Lots of ideas.

A few of them might even be good ones:

Ways to advance the journal.

Ways to keep operating at peak efficiency in the face of an ever-expanding stream of submissions.

And most importantly, ways to deliver even more impact to our readers, and on behalf of our authors.

Those same authors whose contributions make it possible for us to proclaim:

TOCHI is the flagship journal of the Computer-Human Interaction community.

So in this, my introductory editorial as the head honcho, new sheriff in town, and supreme benevolent dictator otherwise known as the Editor-in-Chief, I would like to talk about how the transition is going, give a few updates on TOCHI’s standard operating procedure, and—with an eye towards growing the impact of the journal—announce the first of what I hope will be many exciting new initiatives.

And in case it is not already obvious, I intend to have some fun with this.

All while preserving the absolutely rigorous and top-notch reputation of the journal, and the constant push for excellence in all of the papers that we publish.

[Read the rest at: http://dx.doi.org/10.1145/2882897]


Be sure to also check out The Editor’s Spotlight, highlighting the many strong contributions in this issue. This, along with the full text of my introductory editorial, is available without an ACM Digital Library subscription via the links below.


Ken Hinckley. 2016. Editorial: Welcome to a New Era for TOCHI. ACM Trans. Comput.-Hum. Interact. 23, 1, Article 1e (February 2016), 6 pages. http://dx.doi.org/10.1145/2882897

Ken Hinckley. 2016. The Editor’s Spotlight: TOCHI Issue 23:1. ACM Trans. Comput.-Hum. Interact. 23, 1, Article 1 (February 2016), 4 pages. http://dx.doi.org/10.1145/2882899

TOCHI Article Alerts: Auditory Reality and Super Bowl Angst

I wanted to offer some reflections on two final articles in the current issue (23:1) of the journal that I edit — the ACM Transactions on Computer-Human Interaction:

Auditory Display in Mobile Augmented Reality

The first article delves into augmented reality of a somewhat unusual sort, namely augmentation of mobile and situated interaction via spatialized auditory cues.

A carefully structured study, designed around enhancing interactive experiences for exhibits in an art gallery, teases apart some of the issues that confront realities augmented in this manner, and thereby offers a much deeper understanding of both the strengths and weaknesses of various ways of presenting spatialized auditory feedback.

As such this article contributes a great foundation for appropriate design of user experiences augmented by this oft-neglected modality.

(http://dx.doi.org/10.1145/2829944).

* * *

Mass Interaction in Social Television

The final paper of TOCHI Issue 23:1 presents the first large-scale study of real-world mass interaction in social TV, examining users’ key motives for participating in side-channel commentaries when viewing major sporting events online.

The large scale of the study (analysis of nearly six million chats, plus a survey of 1,123 users) allows the investigators to relate these motives to diverse usage patterns, leading to practical design suggestions that can be used to support user interactions and to enhance the identified motives of users—such as emotional release, cheering and jeering, and sharing thoughts, information, and feelings through commentary.

On a personal level, as a long-time resident of Seattle I certainly could have benefitted from these insights during last year’s Super Bowl—where yes, in the armchair-quarterback opinion of this Editor-in-Chief, the ill-fated Seahawks should indeed have handed the ball to Marshawn Lynch.

Alas. There is always next year.

(http://dx.doi.org/10.1145/2843941).

 

Two Papers on Brain-Computer Interaction in TOCHI Issue 23:1

There’s lots to please the eye, ear, and mind in the current issue of the Transactions that I edit, TOCHI Issue 23:1.

And I mean that not only figuratively—in terms of nourishing the intellect—but quite literally, in terms of those precious few cubic centimeters of private terrain residing inside our own skulls.

Because brain-computer interaction (BCI) forms a major theme of Issue 23:1. The possibility of sensing aspects of human perception, cognition, and physiological states has long fascinated me—indeed, the very term “brain-computer interaction” resonates with the strongest memes that science fiction visionaries can dish up—yet today the topic confronts us as a burgeoning scientific literature.

* * *

The first of these articles presents an empirical study of phasic brain wave changes as a direct indicator of programmer expertise.

It makes a strong case that EEG-based measures of cognitive load, as it relates to expertise, can be observed directly (rather than through subjective assessments) and accurately measured when specifically applied to program comprehension tasks.

By deepening our ability to understand and to quantify expertise, the paper makes significant inroads on this challenging problem.

(http://dx.doi.org/10.1145/2829945).

* * *

The second BCI article explores ways to increase user motivation through tangible manipulation of objects and implicit physiological interaction, in the context of sound generation and control.

The work takes an original tack on the topic by combining explicit gestural interaction, via the tangible aspects, with implicit sensing of biosignals, thus forging an intriguing hybrid of multiple modalities.

In my view such combinations may very well be a hallmark of future, more enlightened approaches to interaction design—as opposed to slapping a touchscreen with “natural” gestures on any sorry old device we decide to churn out, and calling it a day.

(http://dx.doi.org/10.1145/2838732).

TOCHI Editor’s Spotlight: Navigating Giga-pixel Images in Digital Pathology

In addition to the scientific research (and other tomfoolery) that I conduct here at Microsoft Research, in “my other life” I serve as the Editor-in-Chief of ACM’s Transactions on Computer-Human Interaction — more affectionately known as TOCHI to insiders, and the premier archival journal of the field.

From time to time I spotlight particularly intriguing contributions that appear in the journal’s pages, and so, to reward you, O devoted reader, I will be sharing those Editor’s Spotlights here as well.

Writing these up keeps me thoroughly acquainted with the contents of everything we publish in the journal, and also gives me the pleasure of some additional interaction with our contributors, one of whom characterized this Spotlight as:

“beautifully written. […] You’ve really captured the spirit of our work.”

He also reported that it put a smile on his face, but the truth is that it put an even bigger one on mine: I love sharing the most intriguing and provocative contributions that come across our pages.

Have a look, and I hope that you, too, will enjoy this glimpse of the wider world of human-computer interaction, a diverse and exciting field that often has profound implications for people’s everyday lives, shaped as they are by the emerging wonders of technology.



THE EDITOR’S SPOTLIGHT: TOCHI ISSUE 23:1

For the first article to highlight in the freshly conceived Editor’s Spotlight, I selected from TOCHI Issue 23:1 a piece of work that strongly reminded me of the setting of some of my own graduate research, which took place embedded in a neurosurgery department. In my case, our research team (consisting of both physicians and computer scientists) sought to improve the care of patients who were often referred to the university hospital with debilitating neurological conditions and extremely grave diagnoses.

When really strong human-computer interaction research collides with real-world problems like this, compelling clinical impact and rigorous research results are, in my experience, always hard-won, but in the end they are well worth the above-and-beyond efforts required to make such interdisciplinary collaborations fly.

And the following TOCHI Editor’s Spotlight paper, in my opinion, is an outstanding example of such a contribution.

IN THE SPOTLIGHT:

Navigating Giga-pixel Images in Digital Pathology

The diagnosis of cancer is serious business, yet in routine clinical practice pathologists still work at the microscope, with physical slides, because digital pathology runs up against many barriers—not the least of which are the navigational challenges raised by panning and zooming through huge (and I mean huge) image datasets on the order of multiple gigapixels. And that’s just for a single slide.
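
To give a sense of the scale involved, here is a quick back-of-the-envelope sketch in Python. The 100,000 dots-per-inch figure is the scan resolution cited in the article discussed below; the 15 mm by 15 mm scan area is my own assumption, purely for illustration, and not a number from the paper.

# Back-of-the-envelope: why a single digital pathology slide is "huge".
DPI = 100_000                       # pixels per inch, as cited in the article (~0.25 micrometres per pixel)
UM_PER_INCH = 25_400
um_per_pixel = UM_PER_INCH / DPI

scan_mm = 15                        # assumed square scan area, 15 mm on a side (illustrative only)
pixels_per_side = round(scan_mm * 1_000 / um_per_pixel)

total_gigapixels = pixels_per_side ** 2 / 1e9
uncompressed_gb = pixels_per_side ** 2 * 3 / 1e9   # 3 bytes per pixel (24-bit RGB)

print(f"{pixels_per_side:,} x {pixels_per_side:,} pixels "
      f"~ {total_gigapixels:.1f} gigapixels, "
      f"~ {uncompressed_gb:.0f} GB uncompressed for one slide")
# -> roughly 59,000 x 59,000 pixels, about 3.5 gigapixels and 10 GB per slide

Even under those modest assumptions, a single slide weighs in at several gigapixels and on the order of ten gigabytes uncompressed, which is exactly why naive panning and zooming breaks down so quickly.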

Few illustrations grace the article, but those that do—

They stop the reader cold.

Extract from a GI biopsy, showing malignant tissue at 400x magnification. (Fig. 3)

The ruddy and well-formed cells of healthy tissue from a GI biopsy slowly give way to an ill-defined frontier of pathology, an ever-expanding redoubt for the malignant tissue lurking deep within. One cannot help but be struck by the subtext that these images represent the lives of patients who face a dire health crisis.

Only by finding, comparing, and contrasting this tissue to other cross-sections and slides—scanned at 400x magnification and a startling 100,000 dots per inch—can the pathologist arrive at a correct and accurate diagnosis as to the type and extent of the malignancy.

This article stands out because it puts into practice—and challenges—accepted design principles for the navigation of such gigapixel images, against the backdrop of real work by medical experts.

These are not laboratory studies that strive for some artificial measure of “ecological validity”—no, here the analyses take place in the context of the real work of pathologists (using archival cases) and yet the experimental evaluations are still rigorous and insightful. There is absolutely no question of validity and the stakes are clearly very high.

While the article focuses on digital pathology, the insights and perspectives it raises (not to mention the interesting image navigation and comparison tasks motivated by clinical needs) should inform, direct, and inspire many other efforts to improve interfaces for navigation through large visualizations and scientific data-sets.


Roy Ruddle, Thomas Rhys, Rebecca Randell, Phil Quirke, and Darren Treanor. 2016. The Design and Evaluation of Interfaces for Navigating Gigapixel Images in Digital Pathology. ACM Trans. Comput.-Hum. Interact. 23, 1, Article 5 (February 2016), 29 pages. DOI: http://dx.doi.org/10.1145/2834117


Original Source: http://tochi.acm.org/the-editors-spotlight-navigating-giga-pixel-images-in-digital-pathology/

I will update this post with the reference to the Spotlight as published in the journal when my editorial remarks appear in the ACM Digital Library.