Olfactory Pen Creates Giant Stink, Fails to Make it out of Research Skunkworks

Microsoft has shown incredible stuff this week at Build around Pen and Ink experiences — including simultaneous Pen + Touch experiences — as showcased, for example, in the great video on “Inking at the Speed of Thought” that is now available on Channel 9.

But I’ve had a skunkworks project — so to speak — in the works as part of my research (in the course of a career spanning decades) for a long time now, and this particular vision of the future of pen computing has consumed my imagination for at least the last 37 seconds or so. I’ve put a lot of thought into it.

It’s long been recognized that the sense of smell is a powerful index into the human memory. The scent of decaying pulp instantly brings to mind a favorite book, for example — in my case a volume of the masterworks of Edgar Allan Poe that was bequeathed to me by my grandfather.

Or who can ever forget the dizzying scent of their first significant other?

So I thought: Why not a digital pen with olfactory output?

Just think of the possibilities for this remarkable technology:

Not only can you ink faster than the speed of thought, but now you can stink faster than the speed of thought!

And I’m here to tell you that this is entirely possible. I think. I’ve already conceived of an amazing confabulation called the Aromatic Recombinator (patent pending; filed April 1st, 2016 at 2:55 PM; summarily rejected by patent office, 2:57 PM; earnest appeal filed in hope of an affirmative response, 2:59 PM; earnest response received: TBA).

Nonetheless, I can understand the patent office’s reticence.

Because with this remarkable technology one can arouse almost any scent, from the headiest of perfumes all the way to the most cloying musk, simply by scribbling on the screen of your tablet as if it were an electronic scratch-n-sniff card. A conception on which I have another patent pending, by the way.

Admittedly, some details remain sketchy, but I remain highly optimistic that the obvious problems can be sniffed out in short order.

And if not, rest assured, I will raise one hell of a stink.

[Happy April Fools Day.]

Editorial: Welcome to a New Era for TOCHI

Wherein I tell the true story of how I became an Editor-in-Chief.


Constant change is a given in the world of high technology.

But still it can come as a rude awakening when it arrives in human terms, and we find that it also applies to our friends, our colleagues, and the people we care for.

Not to mention ourselves!

So it was that I found myself, with a tumbler full of fresh coffee steaming between my hands, looking in disbelief at an email nominating me to assume the editorial helm of the leading journal in my field, the ACM Transactions on Computer-Human Interaction (otherwise known as TOCHI).

Ultimately (through no fault of my own) the ACM Publications Board was apparently seized by an episode of temporary madness and, deeming my formal application to have the necessary qualifications (with a dozen years of TOCHI associate editorship under my belt, a membership in the CHI Academy for recognized leaders in the field, and a Lasting Impact Award for my early work on mobile sensing—not to mention hundreds of paper rejections that apparently did no lasting damage to my reputation), forthwith approved me to take over as Editor-in-Chief from my friend and long-time colleague, Shumin Zhai.

I’ve known Shumin since 1994, way back when I delivered my very first talk at CHI in the same session as he presented his latest results on “the silk cursor.” I took an instant liking to him, but I only came to fully appreciate over the years that followed that Shumin’s work ethic is legendary. As my colleague Bill Buxton (who sat on Shumin’s thesis committee) once put it, “Shumin works harder than any two persons I have ever known.”

And of course that applied to Shumin’s work ethic with TOCHI as well.

A man who now represents an astoundingly large pair of shoes that I must fill.

To say that I respect Shumin enormously, and the incredible progress he brought to the operation and profile of the journal during his six-year tenure, would be a vast understatement.

But after I got over the sheer terror of taking on such an important role, I began to get excited.

And then I got ideas.

Lots of ideas.

A few of them might even be good ones:

Ways to advance the journal.

Ways to keep operating at peak efficiency in the face of an ever-expanding stream of submissions.

And most importantly, ways to deliver even more impact to our readers, and on behalf of our authors.

Those same authors whose contributions make it possible for us to proclaim:

TOCHI is the flagship journal of the Computer-Human Interaction community.

So in this, my introductory editorial as the head honcho, new sheriff in town, and supreme benevolent dictator otherwise known as the Editor-in-Chief, I would like to talk about how the transition is going, give a few updates on TOCHI’s standard operating procedure, and—with an eye towards growing the impact of the journal—announce the first of what I hope will be many exciting new initiatives.

And in case it is not already obvious, I intend to have some fun with this.

All while preserving the absolutely rigorous and top-notch reputation of the journal, and the constant push for excellence in all of the papers that we publish.

[Read the rest at: http://dx.doi.org/10.1145/2882897]


Be sure to also check out The Editor’s Spotlight, highlighting the many strong contributions in this issue. This, along with the full text of my introductory editorial, is available without an ACM Digital Library subscription via the links below.


Ken Hinckley. 2016. Editorial: Welcome to a New Era for TOCHI. ACM Trans. Comput.-Hum. Interact. 23, 1: Article 1e (February 2016), 6 pages. http://dx.doi.org/10.1145/2882897

Ken Hinckley. 2016. The Editor’s Spotlight: TOCHI Issue 23:1. ACM Trans. Comput.-Hum. Interact. 23, 1: Article 1 (February 2016), 4 pages. http://dx.doi.org/10.1145/2882899

TOCHI Article Alerts: Auditory Reality and Super Bowl Angst

I wanted to offer some reflections on two final articles in the current issue (23:1) of the journal that I edit — the ACM Transactions on Computer-Human Interaction:

Auditory Display in Mobile Augmented Reality

The first article delves into augmented reality of a somewhat unusual sort, namely augmentation of mobile and situated interaction via spatialized auditory cues.

A carefully structured study, designed around enhancing interactive experiences for exhibits in an art gallery, teases apart some of the issues that confront realities augmented in this manner, and thereby offers a much deeper understanding of both the strengths and weaknesses of various ways of presenting spatialized auditory feedback.

As such this article contributes a great foundation for appropriate design of user experiences augmented by this oft-neglected modality.

(http://dx.doi.org/10.1145/2829944).

* * *

Mass Interaction in Social Television

The final paper of TOCHI Issue 23:1 presents the first large-scale study of real-world mass interactions in social TV, by studying the key motives of users for participating in side-channel commentaries when viewing major sporting events online.

The large scale of the study (analysis of nearly six million chats, plus a survey of 1,123 users) allows the investigators to relate these motives to diverse usage patterns, leading to practical design suggestions that can be used to support user interactions and to enhance the identified motives of users—such as emotional release, cheering and jeering, and sharing thoughts, information, and feelings through commentary.

On a personal level, as a long-time resident of Seattle I certainly could have benefitted from these insights during last year’s Super Bowl—where yes, in the armchair-quarterback opinion of this Editor-in-Chief, the ill-fated Seahawks should indeed have handed the ball to Marshawn Lynch.

Alas. There is always next year.

(http://dx.doi.org/10.1145/2843941).

 

Two Papers on Brain-Computer Interaction in TOCHI Issue 23:1

There’s lots to please the eye, ear, and mind in the current issue of the Transactions that I edit, TOCHI Issue 23:1.

And I mean that not only figuratively—in terms of nourishing the intellect—but quite literally, in terms of those precious few cubic centimeters of private terrain residing inside our own skulls.

Because brain-computer interaction (BCI) forms a major theme of Issue 23:1. The possibility of sensing aspects of human perception, cognition, and physiological states has long fascinated me—indeed, the very term “brain-computer interaction” resonates with the strongest memes that science fiction visionaries can dish up—yet this topic confronts us with a burgeoning scientific literature.

* * *

The first of these articles presents an empirical study of phasic brain wave changes as a direct indicator of programmer expertise.

It makes a strong case that EEG-based measures of cognitive load, as it relates to expertise, can be observed directly (rather than through subjective assessments) and accurately measured when specifically applied to program comprehension tasks.

By deepening our ability to understand and to quantify expertise, the paper makes significant inroads on this challenging problem.

(http://dx.doi.org/10.1145/2829945).

* * *

The second BCI article explores ways to increase user motivation through tangible manipulation of objects and implicit physiological interaction, in the context of sound generation and control.

The work takes an original tack on the topic by combining explicit gestural interaction, via the tangible aspects, with implicit sensing of biosignals, thus forging an intriguing hybrid of multiple modalities.

In my view such combinations may very well be a hallmark of future, more enlightened approaches to interaction design—as opposed to slapping a touchscreen with “natural” gestures on any sorry old device we decide to churn out, and calling it a day.

(http://dx.doi.org/10.1145/2838732).

TOCHI Editor’s Spotlight: Navigating Giga-pixel Images in Digital Pathology

In addition to the scientific research (and other tomfoolery) that I conduct here at Microsoft Research, in “my other life” I serve as the Editor-in-Chief of ACM’s Transactions on Computer-Human Interaction — more affectionately known as TOCHI to insiders, which is the premier archival journal of the field.

From time to time I spotlight particularly intriguing contributions that appear in the journal’s pages, so to reward you, O devoted reader, I will be sharing those Editor’s Spotlights here as well.

Writing these up keeps me thoroughly acquainted with the contents of everything we publish in the journal, and also gives me the pleasure of some additional interaction with our contributors, one of whom characterized this Spotlight as:

“beautifully written. […] You’ve really captured the spirit of our work.”

He also reported that it put a smile on his face, but the truth is that it put an even bigger one on mine: I love sharing the most intriguing and provocative contributions that come across our pages.

Have a look, and I hope that you, too, will enjoy this glimpse of the wider world of human-computer interaction, a diverse and exciting field that often has profound implications for people’s everyday lives, shaped as they are by the emerging wonders of technology.



THE EDITOR’S SPOTLIGHT: TOCHI ISSUE 23:1

For the first article to highlight in the freshly conceived Editor’s Spotlight, I selected from TOCHI Issue 23:1 a piece of work that strongly reminded me of the context of some of my own graduate research, which took place embedded in a neurosurgery department. In my case, our research team (consisting of both physicians and computer scientists) sought to improve the care of patients who were often referred to the university hospital with debilitating neurological conditions and extremely grave diagnoses.

When really strong human-computer interaction research collides with real-world problems like this, in my experience compelling clinical impact and rigorous research results are always hard-won, but in the end they are well worth the above-and-beyond efforts required to make such interdisciplinary collaborations fly.

And the following TOCHI Editor’s Spotlight paper, in my opinion, is an outstanding example of such a contribution.

IN THE SPOTLIGHT:

Navigating Giga-pixel Images in Digital Pathology

The diagnosis of cancer is serious business, yet in routine clinical practice pathologists still work at microscopes, with physical slides, because digital pathology runs up against many barriers—not the least of which are the navigational challenges raised by panning and zooming through huge (and I mean huge) image datasets on the order of multiple gigapixels. And that’s just for a single slide.

Few illustrations grace the article, but those that do—

They stop the reader cold.

Extract from a GI biopsy, showing malignant tissue at 400x magnification. (Fig. 3)

The ruddy and well-formed cells of healthy tissue from a GI biopsy slowly give way to an ill-defined frontier of pathology, an ever-expanding redoubt for the malignant tissue lurking deep within. One cannot help but be struck by the subtext that these images represent the lives of patients who face a dire health crisis.

Only by finding, comparing, and contrasting this tissue to other cross-sections and slides—scanned at 400x magnification and a startling 100,000 dots per inch—can the pathologist arrive at a correct and accurate diagnosis as to the type and extent of the malignancy.
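To get a rough sense of what “multiple gigapixels” means in practice, here is a little back-of-the-envelope Python. The 100,000 dots-per-inch scan resolution comes from the article, but the 20 mm x 15 mm tissue area is a hypothetical figure of my own, chosen purely for illustration:

```python
# Back-of-the-envelope scale of a single digital pathology slide.
# The 100,000 dpi figure comes from the article; the 20 mm x 15 mm tissue
# area is a hypothetical example of my own, purely for illustration.

DPI = 100_000
MM_PER_INCH = 25.4

def pixels(mm: float) -> int:
    """Convert a physical length in millimetres to pixels at the scan resolution."""
    return round(mm / MM_PER_INCH * DPI)

width_px, height_px = pixels(20.0), pixels(15.0)   # ~78,740 x ~59,055 pixels
print(f"{width_px:,} x {height_px:,} = {width_px * height_px / 1e9:.1f} gigapixels")
# -> roughly 4.6 gigapixels for a single slide, before any zoom pyramid is built
```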

This article stands out because it puts into practice—and challenges—accepted design principles for the navigation of such gigapixel images, against the backdrop of real work by medical experts.

These are not laboratory studies that strive for some artificial measure of “ecological validity”—no, here the analyses take place in the context of the real work of pathologists (using archival cases) and yet the experimental evaluations are still rigorous and insightful. There is absolutely no question of validity and the stakes are clearly very high.

While the article focuses on digital pathology, the insights and perspectives it raises (not to mention the interesting image navigation and comparison tasks motivated by clinical needs) should inform, direct, and inspire many other efforts to improve interfaces for navigation through large visualizations and scientific data-sets.


Roy Ruddle, Rhys Thomas, Rebecca Randell, Phil Quirke, and Darren Treanor. 2016. The Design and Evaluation of Interfaces for Navigating Gigapixel Images in Digital Pathology. ACM Trans. Comput.-Hum. Interact. 23, 1, Article 5 (February 2016), 29 pages. http://dx.doi.org/10.1145/2834117


Original Source: http://tochi.acm.org/the-editors-spotlight-navigating-giga-pixel-images-in-digital-pathology/

I will update this post with the reference to the Spotlight as published in the journal when my editorial remarks appear in the ACM Digital Library.

Field Notes from an Expedition into the Paleohistory of Personal Computing

After a time-travel excursion consisting of thirty years in the dusty hothouse of fiberglass insulation that is my parents’ attic, I’ll be durned if my trusty old TI-99/4A computer didn’t turn up on my doorstep looking no worse for its exotic journey.

Something I certainly wish I could say about myself.

So I pried my fossil from the Jurassic age of personal computing out of the battered suitcase my Dad had shipped it in, and — with the help of just the right connector conjured through the magic of eBay — I was able to connect this ancient microprocessor to my thoroughly modern television, resulting in a wonderful non sequitur of old and new:

TI-99/4A on my large-screen TV

Yep, that’s the iconic home screen from a computer that originally came with a 13″ color monitor — which seemed like an extravagant luxury at the time — but now projected onto the 53″ larger-than-life television in my secret basement redoubt of knotty pine.

This is the computer that got me started in programming, so I suppose I owe my putative status as a visionary (and occasional gadfly) of human-computer interaction to this 16-bit wonder. Its sixteen-color graphics and delightful symphonic sound generators were way ahead of their time.

Of course, when I sat down with my kids and turned it on, Exhibit A of What Daddy’s Old Computer Can Do had to be a reprise of the classic game Alpiner, which requires you to spur your doughty 16-bit mountaineer to the top of increasingly treacherous mountains.

In my mind, even after the passage of three decades, I could hear Alpiner’s catchy soundtrack  — which takes excellent advantage of the 99’s sound generators — before I even plugged the cartridge in.

Here’s my seven-year-old daughter taking up the challenge:

Alpiner on the TI-99/4A

Alpiner redux after the passage of three decades — and in the hands of a new generation. Unfortunately for our erstwhile mountaineer, he has dodged the rattlesnake only to be clobbered by a rockfall which (if you look closely) can be seen, captured in mid-plummet, exactly one character-row above his ill-fated digital noggin.

Next we moved on to some simple programs in the highly accessible TI-Basic that came with the computer, and (modifying one of the examples in the manual) we ginned up a JACKPOT!!! game.

And yes, the triple exclamation points do make it way, way better.

Here’s one of my 8-year-old twins showing off the first mega-jackpot ever struck, with a stunning payoff of 6,495 imaginary dollars, which my daughter informs me she will spend on rainbow ponies.

Powerball ain’t got nothin’ on that.

Jackpot

My daughter awaits verification from the pit boss while I capture photographic evidence of the first ever mega-jackpot payout for striking five consecutive multipliers with a sixth $ kicker redoubling the bonus.
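For anyone who wants a taste of that manual-hacking exercise without excavating a TI-99/4A of their own, here is a loose Python re-imagining of our JACKPOT!!! game. The symbols, payouts, and mega-jackpot rule below are my own guesses, reconstructed from the photos above rather than transcribed from the original TI-Basic listing:

```python
import random

# A loose Python re-imagining of our TI-Basic JACKPOT!!! game. The symbol set,
# payouts, and mega-jackpot rule are guesses of my own, not the original listing.

SYMBOLS = ["cherry", "bar", "bell", "*", "$"]          # "*" = multiplier, "$" = kicker
BASE_PAYOUT = {"cherry": 5, "bar": 10, "bell": 25, "*": 50, "$": 100}

def spin(n_reels: int = 6) -> list:
    """Spin n_reels independent reels and return the symbols showing."""
    return [random.choice(SYMBOLS) for _ in range(n_reels)]

def payout(reels: list) -> int:
    """Sum the base payouts, then apply the (hypothetical) jackpot bonuses."""
    total = sum(BASE_PAYOUT[s] for s in reels)
    if all(s == "*" for s in reels[:5]):               # five consecutive multipliers...
        total *= 10                                    # ...trigger the mega-jackpot
        if len(reels) > 5 and reels[5] == "$":         # ...and a sixth $ kicker
            total *= 2                                 # redoubles the bonus
    return total

if __name__ == "__main__":
    reels = spin()
    print(" ".join(reels), "->", payout(reels), "imaginary dollars")
```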

I’m not quite sure what will come next for our paleontological expedition into this shale of exquisitely preserved microprocessors. My other twin daughter has informed me in no uncertain terms that we must add a unicorn to the jackpot symbols — a project for which extensive research is already underway, despite a chronic lack of funding — and which will presumably make even more dramatic payoffs possible in the near future.

And if I can get the TI’s “Program Recorder” working again — and if enough of the program DNA remains intact on my old cassette tapes — then in Jurassic-Park fashion I also hope to resuscitate some classics that a primeval version of myself coded up, including smash hits such as Skyhop, Rocket-Launch, and Karate Fest!

But with only one exclamation point to tout the excellence of the latter title,  I wouldn’t get your hopes up too much for the gameplay in that one (grin).

Paper: Sensing Tablet Grasp + Micro-mobility for Active Reading

Lately I have been thinking about touch:

In the tablet-computer sense of the word.

To most people, this means the touchscreen. The intentional pokes and swipes and pinching gestures we would use to interact with a display.

But not to me.

Touch goes far beyond that.

Look at people’s natural behavior. When they refer to a book, or pass a document to a collaborator, there are two interesting behaviors that characterize the activity.

What I call the seen but unnoticed:

Simple habits and social cues, there all the time, but which fall below our conscious attention — if they are even noticed at all.

By way of example, let’s say we’re observing someone handle a magazine.

First, the person has to grasp the magazine. Seems obvious, but easy to overlook — and perhaps vital to understand. Although grasp typically doesn’t involve contact of the fingers with the touchscreen, this is a form of ‘touch’ nonetheless, even if it is one that traditionally hasn’t been sensed by computers.

Grasp reveals a lot about the intended use, whether the person might be preparing to pick up the magazine or pass it off, or perhaps settling down for a deep and immersive engagement with the material.

Second, as an inevitable consequence of grasping the magazine, it must move. Again, at first blush this seems obvious. But these movements may be overt, or they may be quite subtle. And to a keen eye — or an astute sensing system — they are a natural consequence of grasp, and indeed are what give grasp its meaning.

In this way, sensing grasp informs the detection of movements.

And, coming full circle, the movements thus detected enrich what we can glean from grasp as well.

Yet, this interplay of grasp and movement has rarely been recognized, much less actively sensed and used to enrich and inform interaction with tablet computers.

And this feeds back into a larger point that I have often found myself trying to make lately, namely that touch is about far more than interaction with the touch-screen alone.

If we want to really understand touch (as well as its future as a technology) then we need to deeply understand these other modalities — grasp and movement, and perhaps many more — and thereby draw out the full naturalness and expressivity of interaction with tablets (and mobile phones, and e-readers, and wearables, and many dreamed-of form-factors perhaps yet to come).

My latest publication looks into all of these questions, particularly as they pertain to reading electronic documents on tablets.

We constructed a tablet (albeit a green metallic beast of one at present) that can detect natural grips along its edges and on the entire back surface of the device. And with a full complement of inertial motion sensors, as well. This image shows the grip-sensing (back) side of our technological monstrosity:

Grip Sensing Tablet Hardware
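To make the combination of these two sensing channels a little more concrete, here is a minimal sketch of how a grip image and inertial readings might be fused into higher-level events. The sensor stubs, grid dimensions, and thresholds are stand-ins of my own invention, not the firmware interfaces or tuned parameters of our actual prototype:

```python
import numpy as np

# Minimal sketch: fuse a capacitive grip image (edges + back of the tablet)
# with inertial motion data to produce higher-level "grasp + movement" events.
# The stubs below return simulated data; they are not the prototype's firmware.

_rng = np.random.default_rng(0)   # deterministic stand-in data for the demo

def read_grip_image() -> np.ndarray:
    """Stand-in for the capacitive grid covering the back and edges of the tablet.
    Here it simulates a grasp concentrated on the right-hand side."""
    grid = _rng.random((16, 24)) * 0.05
    grid[:, 16:] += 0.5
    return grid

def read_imu() -> tuple:
    """Stand-in for the inertial sensors: (accelerometer in g, gyro in deg/s)."""
    return np.array([0.0, 0.1, 0.98]), np.array([2.0, 20.0, 1.0])

def classify_grasp(grip: np.ndarray) -> str:
    """Crude grasp classification based on where the contact 'mass' sits."""
    left = grip[:, : grip.shape[1] // 2].sum()
    right = grip[:, grip.shape[1] // 2 :].sum()
    if left + right < 1.0:
        return "not-held"                        # e.g., lying flat on the desk
    if min(left, right) / max(left, right) > 0.6:
        return "two-handed"
    return "left-hand" if left > right else "right-hand"

def sense_step() -> dict:
    """One tick of the sensing loop: the grasp gives meaning to the motion."""
    grasp = classify_grasp(read_grip_image())
    _accel, gyro = read_imu()
    moving = float(np.linalg.norm(gyro)) > 15.0  # deg/s; illustrative threshold
    return {"grasp": grasp, "micro_mobility": grasp != "not-held" and moving}

print(sense_step())   # -> {'grasp': 'right-hand', 'micro_mobility': True}
```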

But this set-up allowed us to explore ways of combining grip and subtle motion (what has sometimes been termed micro-mobility in the literature), resulting in the following techniques (among a number of others):

A Single User Engaging with a Single Device

Some of these techniques address the experience of an individual engaging with their own reading material.

For example, you can hold a bookmark with your thumb (much as you can keep your finger on a page in a physical book) and then tip the device. This flips back to the page that you’re holding:

Tip-to-Flip

This ‘Tip-to-Flip’ gesture involves both the grip and the movement of the device, and results in a fairly natural interaction that builds on a familiar habit from everyday experience with physical documents.
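As a thought experiment (and emphatically not the code from the paper), the recognition logic for something like Tip-to-Flip might be sketched as follows, assuming a grip test for a thumb resting on the bookmark and a pitch angle derived from the inertial sensors:

```python
from typing import Optional

# Hypothetical sketch of Tip-to-Flip recognition; not the paper's implementation.
# Assumes a grip test for a thumb resting on the bookmark region and a pitch
# angle (in degrees) derived from the inertial sensors.

TIP_THRESHOLD_DEG = 25.0   # how far the device must be tipped; illustrative value

class ReadingSession:
    def __init__(self, current_page: int):
        self.current_page = current_page
        self.held_page: Optional[int] = None

    def update(self, thumb_on_bookmark: bool, pitch_deg: float) -> None:
        # Holding the bookmark 'pins' a page, like a finger kept in a paper book.
        if thumb_on_bookmark and self.held_page is None:
            self.held_page = self.current_page
        elif not thumb_on_bookmark:
            self.held_page = None
        # Tipping the device while the bookmark is held flips back to that page.
        if self.held_page is not None and pitch_deg > TIP_THRESHOLD_DEG:
            self.current_page = self.held_page

# Example: pin page 12, browse ahead to page 30, then tip the tablet to flip back.
session = ReadingSession(current_page=12)
session.update(thumb_on_bookmark=True, pitch_deg=0.0)    # thumb pins page 12
session.current_page = 30                                # reader browses ahead
session.update(thumb_on_bookmark=True, pitch_deg=40.0)   # tip past the threshold
print(session.current_page)                              # -> 12
```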

Another one we experimented with was a very subtle interaction that mimics holding a document and angling it up to inspect it more closely. When we sense this, the tablet zooms in slightly on the page, while removing all peripheral distractions such as menu-bars and icons:

Immersive Reading mode through grip sensing

This immerses the reader in the content, rather than the iconographic gewgaws which typically border the screen of an application as if to announce, “This is a computer!”
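Here is a similarly hedged sketch of that behavior: when the sensed grip and posture suggest the reader has angled the tablet up toward themselves, the view hides its chrome and zooms in slightly. The heuristic, thresholds, and zoom factor are illustrative stand-ins, not the parameters of our prototype:

```python
# Hypothetical sketch of the 'angle it up to inspect it' immersive reading mode.
# The posture heuristic, thresholds, and zoom factor are illustrative stand-ins.

def reading_view(two_handed_grip: bool, pitch_deg: float, pitch_rate_deg_s: float) -> dict:
    """Map the sensed grip and posture to a simple view configuration."""
    angled_up = 35.0 < pitch_deg < 75.0           # tablet raised toward the face
    settled = abs(pitch_rate_deg_s) < 5.0         # the motion has come to rest
    immersive = two_handed_grip and angled_up and settled
    return {
        "zoom": 1.15 if immersive else 1.0,       # zoom in slightly on the page
        "show_chrome": not immersive,             # hide menu bars, icons, etc.
    }

print(reading_view(two_handed_grip=True, pitch_deg=50.0, pitch_rate_deg_s=1.0))
# -> {'zoom': 1.15, 'show_chrome': False}
```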

Multiple Users Collaborating around a Single Device

Another set of techniques we explored looked at how people pass devices to one another.

In everyday experience, passing a paper document to a collaborator is a very natural — and different — form of “sharing,” as compared to the oft-frustrating electronic equivalents we have at our disposal.

Likewise, computers should be able to sense and recognize such gestures in the real world, and use them to bring some of the socially and situationally appropriate sharing that they afford to the world of electronic documents.

We explored one such technique that automatically sets up a guest profile when you hand a tablet (displaying a specific document) to another user:

Face-to-Face Handoff

The other user can then read and mark-up that document, but he is not the beneficiary of a permanent electronic copy of it (as would be the case if you emailed him an attachment), nor is he permitted to navigate to other areas or look at other files on your tablet.

You’ve physically passed him the electronic document, and all he can do is look at it and mark it up with a pen.

Not unlike the semantics — long absent and sorely missed in computing — of a simple piece of paper.
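In code, the spirit of that guest-profile policy might be sketched roughly as follows. This assumes the grip and orientation sensing has already detected a face-to-face handoff; the class names and document name are made up for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the guest-profile policy (not the actual system code).
# It assumes the sensing layer has already detected a face-to-face handoff; the
# guest can view and ink on the handed document, but gets no copy and cannot
# wander elsewhere on the owner's tablet.

@dataclass
class GuestSession:
    document_id: str
    ink_annotations: list = field(default_factory=list)

    def annotate(self, ink_stroke: str) -> None:
        self.ink_annotations.append(ink_stroke)   # pen markup is allowed

    def open_document(self, other_document_id: str) -> None:
        raise PermissionError("Guests may only view the document handed to them.")

    def export_copy(self) -> None:
        raise PermissionError("No permanent copy travels with the physical handoff.")

def on_handoff_detected(current_document_id: str) -> GuestSession:
    """Called when sensing indicates the tablet was passed to another person."""
    return GuestSession(document_id=current_document_id)

guest = on_handoff_detected("quarterly-report.pdf")   # made-up document name
guest.annotate("circled paragraph 3")
```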

A Single User Working With Multiple Devices

A final area we looked at considers what happens when people work across multiple tablets.

We already live in a world where people own and use multiple devices, often side-by-side, yet our devices typically have little or no awareness of one another.

But contrast this with the messy state of people’s physical desks, with documents strewn all over. People often place documents side-by-side as a lightweight and informal way of organizing, and might dexterously pick one up or hold it at the ready for quick reference when engaged in an intellectually demanding task.

Again, missing from the world of the tablet computer.

But by sensing which tablets you hold, or pick up, our system allows people to quickly refer to and cross-reference content across federations of such devices.

While the “Internet of Things” may be all the rage these days among the avant-garde of computing, such federations remain uncommon and in our view represent the future of a ‘Society of Devices’ that can recognize and interact with one another, all while respecting social mores, not the least of which are the subtle “seen but unnoticed” social cues afforded by grasping, moving, and orienting our devices.

Fine-Grained Reference
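A toy sketch of such a federation, with made-up device names and a trivial notion of “pickup,” might look like this (the actual system is described in the paper referenced below):

```python
from dataclasses import dataclass

# Hypothetical sketch of a tiny 'society of devices' federation. The device
# names, messaging, and pickup logic are illustrative inventions, not the
# architecture of the actual system described in the paper.

@dataclass
class Tablet:
    name: str
    held: bool = False
    content: str = ""

    def show(self, what: str) -> None:
        self.content = what
        print(f"[{self.name}] now showing: {what}")

class Federation:
    def __init__(self, tablets: list):
        self.tablets = tablets

    def on_pickup(self, picked: "Tablet", primary: "Tablet") -> None:
        """When a second tablet is picked up, surface material related to the
        document on the tablet the reader is already holding."""
        picked.held = True
        if primary.held and picked is not primary:
            picked.show(f"references linked to '{primary.content}'")

# Example with made-up device names and content:
desk_tablet = Tablet("desk-tablet", held=True, content="Active Reading draft")
hand_tablet = Tablet("hand-tablet")
Federation([desk_tablet, hand_tablet]).on_pickup(hand_tablet, primary=desk_tablet)
# -> [hand-tablet] now showing: references linked to 'Active Reading draft'
```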

Closing Thoughts:

An Expanded Perspective of ‘Touch’

The examples above represent just a few simple steps. Much more can, and should, be done to fully explore and vet these directions.

But by viewing touch as far more than simple contact of the fingers with a grubby touchscreen — and expanding our view to consider grasp, movement of the device, and perhaps other qualities of the interaction that could be sensed in the future as well — our work hints at a far wider perspective.

A perspective teeming with the possibilities that would be raised by a society of mobile appliances with rich sensing capabilities, potentially leading us to far more natural, more expressive, and more creative ways of engaging in the knowledge work of the future.


Dongwook Yoon, Ken Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, Michel Pahud, and Marcel Gavriliu. 2015. Sensing Tablet Grasp + Micro-mobility for Active Reading. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). ACM, New York, NY, USA, 477-487. Charlotte, NC, Nov. 8-11, 2015. http://dx.doi.org/10.1145/2807442.2807510
[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]

Watch Sensing Tablet Grasp + Micro-mobility for Active Reading video on YouTube