TOCHI Article Alerts: Auditory Reality and Super Bowl Angst

I wanted to offer some reflections on two final articles in the current issue (23:1) of the journal that I edit — the ACM Transactions on Computer-Human Interaction:

Auditory Display in Mobile Augmented Reality

The first article delves into augmented reality of a somewhat unusual sort, namely augmentation of mobile and situated interaction via spatialized auditory cues.

A carefully structured study, designed around enhancing interactive experiences for exhibits in an art gallery, teases apart some of the issues that confront realities augmented in this manner, and thereby offers a much deeper understanding of both the strengths and weaknesses of various ways of presenting spatialized auditory feedback.

As such, this article lays a strong foundation for the appropriate design of user experiences augmented by this oft-neglected modality.
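(For readers new to the modality: at its very simplest, spatializing an auditory cue can mean nothing more than weighting it between the left and right channels according to where its source sits relative to the listener. The little Python sketch below is purely my own illustration of that baseline idea, a constant-power pan law, and not the authors’ rendering technique, which goes well beyond it.)

```python
import math

def stereo_gains(listener_heading_deg: float, exhibit_bearing_deg: float) -> tuple:
    """Constant-power stereo pan: weight an auditory cue between the left and
    right channels according to the exhibit's bearing relative to the listener."""
    # Signed angle of the exhibit relative to where the listener is facing.
    rel = math.radians((exhibit_bearing_deg - listener_heading_deg + 180) % 360 - 180)
    # Clamp to the frontal hemisphere and map [-90, +90] degrees onto a pan position in [0, 1].
    rel = max(-math.pi / 2, min(math.pi / 2, rel))
    pan = (rel + math.pi / 2) / math.pi
    # Constant-power pan law keeps perceived loudness roughly steady as the cue moves.
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)

# An exhibit 45 degrees to the listener's right is weighted toward the right channel.
print(stereo_gains(listener_heading_deg=0, exhibit_bearing_deg=45))
```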

(http://dx.doi.org/10.1145/2829944).

* * *

Mass Interaction in Social Television

The final paper of TOCHI Issue 23:1 presents the first large-scale study of real-world mass interaction in social TV, examining users’ key motives for participating in side-channel commentary while viewing major sporting events online.

The large scale of the study (analysis of nearly six million chats, plus a survey of 1,123 users) allows the investigators to relate these motives to diverse usage patterns, leading to practical design suggestions that can be used to support user interactions and to enhance the identified motives of users—such as emotional release, cheering and jeering, and sharing thoughts, information, and feelings through commentary.

On a personal level, as a long-time resident of Seattle I certainly could have benefitted from these insights during last year’s Super Bowl—where yes, in the armchair-quarterback opinion of this Editor-in-Chief, the ill-fated Seahawks should indeed have handed the ball to Marshawn Lynch.

Alas. There is always next year.

(http://dx.doi.org/10.1145/2843941).

 

Two Papers on Brain-Computer Interaction in TOCHI Issue 23:1

There’s lots to please the eye, ear, and mind in the current issue of the Transactions that I edit, TOCHI Issue 23:1.

And I mean that not only figuratively—in terms of nourishing the intellect—but quite literally, in terms of those precious few cubic centimeters of private terrain residing inside our own skulls.

Because brain-computer interaction (BCI) forms a major theme of Issue 23:1. The possibility of sensing aspects of human perception, cognition, and physiological states has long fascinated me—indeed, the very term “brain-computer interaction” resonates with the strongest memes that science fiction visionaries can dish up—yet this topic confronts us with a burgeoning scientific literature.

* * *

The first of these articles presents an empirical study of phasic brain wave changes as a direct indicator of programmer expertise.

It makes a strong case that EEG-based measures of cognitive load, as it relates to expertise, can be observed directly (rather than through subjective assessments) and accurately measured when specifically applied to program comprehension tasks.

By deepening our ability to understand and to quantify expertise, the paper makes significant inroads on this challenging problem.
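For readers who want a concrete feel for what an EEG-derived workload measure can look like, a proxy often cited in the broader literature is the ratio of frontal theta power to parietal alpha power. The sketch below computes that generic index in Python; it is purely my own illustration, not the measure used in the article.

```python
import numpy as np
from scipy.signal import welch

def band_power(channel: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Average power of one EEG channel within a frequency band (Welch periodogram)."""
    freqs, psd = welch(channel, fs=fs, nperseg=int(fs * 2))
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

def workload_index(frontal: np.ndarray, parietal: np.ndarray, fs: float = 256.0) -> float:
    """Generic workload proxy from the wider literature: frontal theta power divided
    by parietal alpha power. Higher values are commonly read as higher cognitive load."""
    theta = band_power(frontal, fs, 4.0, 8.0)    # frontal theta band
    alpha = band_power(parietal, fs, 8.0, 13.0)  # parietal alpha band
    return theta / alpha

# Example with synthetic data standing in for two EEG channels (10 s at 256 Hz).
rng = np.random.default_rng(0)
print(workload_index(rng.standard_normal(2560), rng.standard_normal(2560)))
```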

(http://dx.doi.org/10.1145/2829945).

* * *

The second BCI article explores ways to increase user motivation through tangible manipulation of objects and implicit physiological interaction, in the context of sound generation and control.

The work takes an original tack on the topic by combining explicit gestural interaction, via the tangible aspects, with implicit sensing of biosignals, thus forging an intriguing hybrid of multiple modalities.
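To make the hybrid concrete, picture an explicit gesture on a tangible object driving one parameter of the sound while an implicitly sensed biosignal quietly modulates another. The toy mapping below is purely my own illustration of that division of labor, not the authors’ system.

```python
def synth_parameters(tilt_deg: float, heart_rate_bpm: float) -> dict:
    """Toy mapping: an explicit tangible gesture drives pitch, while an implicitly
    sensed biosignal nudges the loudness of the generated sound."""
    # Explicit channel: map a 0-90 degree tilt of the object onto a two-octave pitch range.
    pitch_hz = 220.0 * 2 ** (max(0.0, min(90.0, tilt_deg)) / 45.0)
    # Implicit channel: scale loudness with arousal, inferred crudely from heart rate.
    arousal = max(0.0, min(1.0, (heart_rate_bpm - 60.0) / 60.0))
    gain = 0.3 + 0.7 * arousal
    return {"pitch_hz": pitch_hz, "gain": gain}

print(synth_parameters(tilt_deg=45.0, heart_rate_bpm=90.0))
```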

In my view such combinations may very well be a hallmark of future, more enlightened approaches to interaction design—as opposed to slapping a touchscreen with “natural” gestures on any sorry old device we decide to churn out, and calling it a day.

(http://dx.doi.org/10.1145/2838732).

TOCHI Editor’s Spotlight: Navigating Giga-pixel Images in Digital Pathology

In addition to the scientific research (and other tomfoolery) that I conduct here at Microsoft Research, in “my other life” I serve as the Editor-in-Chief of ACM’s Transactions on Computer-Human Interaction — known more affectionately as TOCHI to insiders, and the premier archival journal of the field.

From time to time I spotlight particularly intriguing contributions that appear in the journal’s pages, and to reward you, O devoted reader, I will be sharing those Editor’s Spotlights here as well.

Writing these up keeps me thoroughly acquainted with the contents of everything we publish in the journal, and also gives me the pleasure of some additional interaction with our contributors, one of whom characterized this Spotlight as:

“beautifully written. […] You’ve really captured the spirit of our work.”

He also reported that it put a smile on his face, but the truth is that it put an even bigger one on mine: I love sharing the most intriguing and provocative contributions that come across our pages.

Have a look, and I hope that you, too, will enjoy this glimpse of the wider world of human-computer interaction, a diverse and exciting field that often has profound implications for people’s everyday lives, shaped as they are by the emerging wonders of technology.



THE EDITOR’S SPOTLIGHT: TOCHI ISSUE 23:1

For the first article to highlight in the freshly conceived Editor’s Spotlight, I selected from TOCHI Issue 23:1 a piece of work that strongly reminded me of the setting of some of my own graduate research, which took place embedded in a neurosurgery department. In my case, our research team (consisting of both physicians and computer scientists) sought to improve the care of patients who were often referred to the university hospital with debilitating neurological conditions and extremely grave diagnoses.

When really strong human-computer interaction research collides with real-world problems like this, compelling clinical impact and rigorous research results are, in my experience, always hard-won. But in the end they are well worth the above-and-beyond efforts required to make such interdisciplinary collaborations fly.

And the following TOCHI Editor’s Spotlight paper, in my opinion, is an outstanding example of such a contribution.

IN THE SPOTLIGHT:

Navigating Giga-pixel Images in Digital Pathology

The diagnosis of cancer is serious business, yet in routine clinical practice pathologists still work on microscopes, with physical slides, because digital pathology runs up against many barriers—not the least of which are the navigational challenges raised by panning and zooming through huge (and I mean huge) image datasets on the order of multiple gigapixels. And that’s just for a single slide.

Few illustrations grace the article, but those that do—

They stop the reader cold.

Extract from a GI biopsy, showing malignant tissue at 400x magnification. (Fig. 3)

The ruddy and well-formed cells of healthy tissue from a GI biopsy slowly give way to an ill-defined frontier of pathology, an ever-expanding redoubt for the malignant tissue lurking deep within. One cannot help but be struck by the subtext that these images represent the lives of patients who face a dire health crisis.

Only by finding, comparing, and contrasting this tissue to other cross-sections and slides—scanned at 400x magnification and a startling 100,000 dots per inch—can the pathologist arrive at a correct and accurate diagnosis as to the type and extent of the malignancy.
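Viewers for images at this scale typically lean on a multi-resolution tile pyramid, the same idea behind deep-zoom map interfaces, fetching only the tiles that intersect the current viewport at the current zoom level. The sketch below illustrates that generic bookkeeping; it is my own illustration, not the interfaces evaluated in the article.

```python
import math

TILE = 256  # tile edge length in pixels

def visible_tiles(view_x: float, view_y: float, view_w: int, view_h: int,
                  zoom_level: int, full_w: int, full_h: int):
    """Which tiles of a multi-resolution pyramid intersect the current viewport.
    Level 0 is full resolution; each level above halves the image dimensions.
    The viewport is given in the coordinates of the requested level."""
    scale = 2 ** zoom_level
    level_w = math.ceil(full_w / scale)
    level_h = math.ceil(full_h / scale)
    # Convert the viewport rectangle into an inclusive range of tile indices.
    first_col = max(0, int(view_x) // TILE)
    first_row = max(0, int(view_y) // TILE)
    last_col = min((level_w - 1) // TILE, int(view_x + view_w) // TILE)
    last_row = min((level_h - 1) // TILE, int(view_y + view_h) // TILE)
    return [(zoom_level, row, col)
            for row in range(first_row, last_row + 1)
            for col in range(first_col, last_col + 1)]

# Example: a 1920x1080 viewport panning over a 200,000 x 100,000 pixel slide at level 4.
tiles = visible_tiles(5_000, 2_000, 1920, 1080, zoom_level=4,
                      full_w=200_000, full_h=100_000)
print(len(tiles), "tiles needed for this view")
```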

This article stands out because it puts into practice—and challenges—accepted design principles for the navigation of such gigapixel images, against the backdrop of real work by medical experts.

These are not laboratory studies that strive for some artificial measure of “ecological validity”—no, here the analyses take place in the context of the real work of pathologists (using archival cases), and yet the experimental evaluations are still rigorous and insightful. The validity is beyond question, and the stakes are clearly very high.

While the article focuses on digital pathology, the insights and perspectives it raises (not to mention the interesting image navigation and comparison tasks motivated by clinical needs) should inform, direct, and inspire many other efforts to improve interfaces for navigation through large visualizations and scientific data-sets.


Roy Ruddle, Thomas Rhys, Rebecca Randell, Phil Quirke, and Darren Treanor. 2016. The Design and Evaluation of Interfaces for Navigating Gigapixel Images in Digital Pathology. ACM Trans. Comput.-Hum. Interact. 23, 1, Article 5 (February 2016), 29 pages. DOI: http://dx.doi.org/10.1145/2834117


Original Source: http://tochi.acm.org/the-editors-spotlight-navigating-giga-pixel-images-in-digital-pathology/

I will update this post with the reference to the Spotlight as published in the journal when my editorial remarks appear in the ACM Digital Library.

Field Notes from an Expedition into the Paleohistory of Personal Computing

After a time-travel excursion consisting of thirty years in the dusty hothouse of fiberglass insulation that is my parents’ attic, I’ll be durned if my trusty old TI-99/4A computer didn’t turn up on my doorstep looking no worse for its exotic journey.

Something I certainly wish I could say about myself.

So I pried my fossil from the Jurassic age of personal computing out of the battered suitcase my Dad had shipped it in, and — with the help of just the right connector conjured through the magic of eBay — I was able to connect this ancient microprocessor to my thoroughly modern television, resulting in a wonderful non sequitur of old and new:

TI-99/4A on my large-screen TV

Yep, that’s the iconic home screen from a computer that originally came with a 13″ color monitor — which seemed like an extravagant luxury at the time — but now projected onto the 53″ larger-than-life television in my secret basement redoubt of knotty pine.

This is the computer that got me started in programming, so I suppose I owe my putative status as a visionary (and occasional gadfly) of human-computer interaction to this 16-bit wonder, whose sixteen-color graphics and delightful symphonic sound generators were way ahead of their time.

Of course, when I sat down with my kids and turned it on, Exhibit A of What Daddy’s Old Computer Can Do had to be a reprise of the classic game Alpiner, which requires you to spur your doughty 16-bit mountaineer to the top of increasingly treacherous mountains.

In my mind, even after the passage of three decades, I could hear Alpiner’s catchy soundtrack — which takes excellent advantage of the 99’s sound generators — before I even plugged the cartridge in.

Here’s my seven-year-old daughter taking up the challenge:

Alpiner on the TI-99/4A

Alpiner redux after the passage of three decades — and in the hands of a new generation. Unfortunately for our erstwhile mountaineer, he has dodged the rattlesnake only to be clobbered by a rockfall which (if you look closely) can be seen, captured in mid-plummet, exactly one character-row above his ill-fated digital noggin.

Next we moved on to some simple programs in the highly accessible TI-Basic that came with the computer, and (modifying one of the examples in the manual) we ginned up a JACKPOT!!! game.
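The original was, of course, a few dozen lines of TI BASIC cribbed from the manual. Purely for flavor, here is a rough Python re-imagining of the idea; the symbols and payoffs below are invented for illustration and are not those of our actual program.

```python
import random

# A rough Python re-imagining of the flavor of our TI BASIC jackpot game;
# the symbols and payoffs here are made up, not the original program's.
SYMBOLS = ["*", "$", "#", "@", "%", "&"]

def spin(bet: int = 5) -> int:
    reels = [random.choice(SYMBOLS) for _ in range(3)]
    print(" ".join(reels))
    if reels[0] == reels[1] == reels[2]:
        print("JACKPOT!!!")
        return bet * 100
    if len(set(reels)) == 2:      # any pair pays back a little
        return bet * 2
    return 0

winnings = sum(spin() for _ in range(10))
print(f"Total payout: {winnings} imaginary dollars")
```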

And yes, the triple exclamation points do make it way, way better.

Here’s one of my eight-year-old twins showing off the first mega-jackpot ever struck, with a stunning payoff of 6,495 imaginary dollars, which my daughter informs me she will spend on rainbow ponies.

Powerball ain’t got nothin’ on that.

Jackpot

My daughter awaits verification from the pit boss while I capture photographic evidence of the first ever mega-jackpot payout for striking five consecutive multipliers with a sixth $ kicker redoubling the bonus.

I’m not quite sure what will come next for our paleontological expedition into this shale of exquisitely preserved microprocessors. My other twin daughter has informed me in no uncertain terms that we must add a unicorn to the jackpot symbols — a project for which extensive research is already underway, despite a chronic lack of funding — and which will presumably make even more dramatic payoffs possible in the near future.

And if I can get the TI’s “Program Recorder” working again — and if enough of the program DNA remains intact on my old cassette tapes — then in Jurassic-Park fashion I also hope to resuscitate some classics that a primeval version of myself coded up, including smash hits such as Skyhop, Rocket-Launch, and Karate Fest!

But with only one exclamation point to tout the excellence of the latter title, I wouldn’t get your hopes up too much for the gameplay in that one (grin).

Paper: Sensing Tablet Grasp + Micro-mobility for Active Reading

Lately I have been thinking about touch:

In the tablet-computer sense of the word.

To most people, this means the touchscreen. The intentional pokes and swipes and pinching gestures we would use to interact with a display.

But not to me.

Touch goes far beyond that.

Look at people’s natural behavior. When they refer to a book, or pass a document to a collaborator, there are two interesting behaviors that characterize the activity.

What I call the seen but unnoticed:

Simple habits and social cues, there all the time, but which fall below our conscious attention — if they are even noticed at all.

By way of example, let’s say we’re observing someone handle a magazine.

First, the person has to grasp the magazine. Seems obvious, but easy to overlook — and perhaps vital to understand. Although grasp typically doesn’t involve contact of the fingers with the touchscreen, this is a form of ‘touch’ nonetheless, even if it is one that traditionally hasn’t been sensed by computers.

Grasp reveals a lot about the intended use, whether the person might be preparing to pick up the magazine or pass it off, or perhaps settling down for a deep and immersive engagement with the material.

Second, as an inevitable consequence of grasping the magazine, it must move. Again, at first blush this seems obvious. But these movements may be overt, or they may be quite subtle. And to a keen eye — or an astute sensing system — they are a natural consequence of grasp, and indeed are what give grasp its meaning.

In this way, sensing grasp informs the detection of movements.

And, coming full circle, the movements thus detected enrich what we can glean from grasp as well.

Yet, this interplay of grasp and movement has rarely been recognized, much less actively sensed and used to enrich and inform interaction with tablet computers.

And this feeds back into a larger point that I have often found myself trying to make lately, namely that touch is about far more than interaction with the touch-screen alone.

If we want to really understand touch (as well as its future as a technology) then we need to deeply understand these other modalities — grasp and movement, and perhaps many more — and thereby draw out the full naturalness and expressivity of interaction with tablets (and mobile phones, and e-readers, and wearables, and many dreamed-of form-factors perhaps yet to come).

My latest publication looks into all of these questions, particularly as they pertain to reading electronic documents on tablets.

We constructed a tablet (albeit a green metallic beast of one at present) that can detect natural grips along its edges and on the entire back surface of the device. And with a full complement of inertial motion sensors, as well. This image shows the grip-sensing (back) side of our technological monstrosity:

Grip Sensing Tablet Hardware
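Purely by way of illustration (a sketch, not our actual implementation), the raw data that such a grip-plus-inertial tablet surfaces on every frame might be organized along these lines:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TabletFrame:
    """One frame of raw sensor data from a grip-sensing tablet (hypothetical layout)."""
    grip: np.ndarray        # 2D capacitance map covering the back and edges of the device
    accel: np.ndarray       # 3-axis accelerometer sample (m/s^2)
    gyro: np.ndarray        # 3-axis gyroscope sample (rad/s)
    timestamp: float        # seconds

def thumb_on_left_bezel(frame: TabletFrame, threshold: float = 0.5) -> bool:
    """Crude grip feature: is there sustained contact along the left edge of the grip map?"""
    left_edge = frame.grip[:, 0]
    return bool(np.max(left_edge) > threshold)
```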

But this set-up allowed us to explore ways of combining grip and subtle motion (what has sometimes been termed micro-mobility in the literature), resulting in the following techniques (among a number of others):

A Single User Engaging with a Single Device

Some of these techniques address the experience of an individual engaging with their own reading material.

For example, you can hold a bookmark with your thumb (much as you can keep your finger on a page in a physical book) and then tip the device. This flips back to the page that you’re holding:

Tip-to-Flip-x715

This ‘Tip-to-Flip’ technique involves both the grip and the movement of the device, and results in a fairly natural interaction that builds on a familiar habit from everyday experience with physical documents.
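As a rough sketch of the sensing logic (the thresholds below are invented for illustration, not our actual values), the trigger combines the bookmark grip with a decisive tilt, so that ordinary handling of the device does not flip pages by accident:

```python
def tip_to_flip(bookmark_held: bool, roll_deg: float, roll_rate_dps: float,
                tilt_threshold_deg: float = 25.0, rate_threshold_dps: float = 60.0) -> bool:
    """Illustrative trigger: flip back to the bookmarked page only when the bookmark
    grip is held AND the device is tipped decisively (both angle and angular rate)."""
    return (bookmark_held
            and abs(roll_deg) > tilt_threshold_deg
            and abs(roll_rate_dps) > rate_threshold_dps)

# Example: thumb holding a bookmark, device tipped 30 degrees with a brisk flick.
print(tip_to_flip(bookmark_held=True, roll_deg=30.0, roll_rate_dps=95.0))  # True
```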

Another one we experimented with was a very subtle interaction that mimics holding a document and angling it up to inspect it more closely. When we sense this, the tablet zooms in slightly on the page, while removing all peripheral distractions such as menu-bars and icons:

Immersive Reading mode through grip sensing

This immerses the reader in the content, rather than the iconographic gewgaws which typically border the screen of an application as if to announce, “This is a computer!”

Multiple Users Collaborating around a Single Device

Another set of techniques we explored looked at how people pass devices to one another.

In everyday experience, passing a paper document to a collaborator is a very natural — and different — form of “sharing,” as compared to the oft-frustrating electronic equivalents we have at our disposal.

Likewise, computers should be able to sense and recognize such gestures in the real world, and use them to bring some of that socially and situationally appropriate sharing into the world of electronic documents.

We explored one such technique that automatically sets up a guest profile when you hand a tablet (displaying a specific document) to another user:

Face-to-Face-Handoff-x715

The other user can then read and mark-up that document, but he is not the beneficiary of a permanent electronic copy of it (as would be the case if you emailed him an attachment), nor is he permitted to navigate to other areas or look at other files on your tablet.

You’ve physically passed him the electronic document, and all he can do is look at it and mark it up with a pen.

Not unlike the semantics — long absent and sorely missed in computing — of a simple piece of paper.
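In policy terms, the technique amounts to scoping a temporary session to the one document being handed over. Here is a minimal sketch of that idea, with hypothetical names rather than our actual code:

```python
from dataclasses import dataclass, field

@dataclass
class GuestSession:
    """Restricted session created when the tablet senses a hand-off (illustrative policy)."""
    document_id: str
    can_annotate: bool = True        # the guest may mark up the page with the pen
    can_navigate_files: bool = False # ...but cannot browse to other documents
    can_export: bool = False         # ...and never receives a permanent copy
    annotations: list = field(default_factory=list)

def on_handoff_detected(current_document_id: str) -> GuestSession:
    # Grip + motion sensing says the owner has passed the device to someone else:
    # swap the UI into a guest profile scoped to the document being shown.
    return GuestSession(document_id=current_document_id)
```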

A Single User Working With Multiple Devices

A final area we looked at considers what happens when people work across multiple tablets.

We already live in a world where people own and use multiple devices, often side-by-side, yet our devices typically have little or no awareness of one another.

But contrast this to the messy state of people’s physical desks, with documents strewn all over. People often place documents side-by-side as a lightweight and informal form of organization, and might dexterously pick one up or hold it at the ready for quick reference when engaged in an intellectually demanding task.

Again, missing from the world of the tablet computer.

But by sensing which tablets you hold, or pick up, our system allows people to quickly refer to and cross-reference content across federations of such devices.

While the “Internet of Things” may be all the rage these days among the avant-garde of computing, such federations remain uncommon. In our view they represent the future of a ‘Society of Devices’ that can recognize and interact with one another, all while respecting social mores, not the least of which are the subtle “seen but unnoticed” social cues afforded by grasping, moving, and orienting our devices.

Fine-Grained-Reference-x715

Closing Thoughts:

An Expanded Perspective of ‘Touch’

The examples above represent just a few simple steps. Much more can, and should, be done to fully explore and vet these directions.

But by viewing touch as far more than simple contact of the fingers with a grubby touchscreen — and expanding our view to consider grasp, movement of the device, and perhaps other qualities of the interaction that could be sensed in the future as well — our work hints at a far wider perspective.

A perspective teeming with the possibilities that would be raised by a society of mobile appliances with rich sensing capabilities, potentially leading us to far more natural, more expressive, and more creative ways of engaging in the knowledge work of the future.

 


 

Dongwook Yoon, Ken Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, Michel Pahud, and Marcel Gavriliu. 2015. Sensing Tablet Grasp + Micro-mobility for Active Reading. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15), Charlotte, NC, Nov. 8-11, 2015. ACM, New York, NY, USA, 477-487. http://dx.doi.org/10.1145/2807442.2807510
[PDF] [Talk slides PPTX] [video – MP4] [30 second preview – MP4] [Watch on YouTube]

Watch Sensing Tablet Grasp + Micro-mobility for Active Reading video on YouTube

Editor-in-Chief, ACM Transactions on Computer-Human Interaction (TOCHI)

The ACM Transactions on Computer-Human Interaction (TOCHI) has long been regarded as the flagship journal of the field. I’ve served on its editorial board since 2003, and thus have a long history with the endeavor.

So now that Shumin Zhai’s second term has come to a close, it is a great honor to report that I’ve assumed the helm as Editor-in-Chief. Shumin worked wonders in improving the efficiency and impact of the journal, diligent efforts that I am working hard to build upon. And I have many ideas and creative initiatives in the works that I hope can further advance the journal and help it to have even more impact.

The journal publishes original and significant research papers, and especially likes to see more systems-focused, long-term, or integrative contributions to human-computer interaction. TOCHI also publishes individual studies, methodologies, and techniques if we deem the contributions to be substantial enough. On occasion we also publish impactful, well-argued, and well-supported essays on important or emerging issues in human-computer interaction.

TOCHI prides itself on rapid turnaround of manuscripts, with an average response time of about 50 days, and we often return manuscripts (particularly when there is not a good fit) much faster than that. We strive to make decisions within 90 days, and although that isn’t always possible, we do feature very rapid publication upon acceptance: digital editions of articles appear in the ACM Digital Library as soon as they are accepted, copyedited, and typeset. TOCHI can therefore often move articles into publication as fast as, or faster than, many of the popular conference venues.

Accepted papers at TOCHI also have the opportunity to present at participating SIGCHI conferences, which currently include CHI, CSCW, UIST, and MobileHCI. Authors therefore get the benefits of a rigorous reviewing process with a full journal revision cycle, plus the prestige of the TOCHI brand when they present new work to their colleagues at a top HCI conference.

To keep track of all the latest developments, you can get alerts for new TOCHI articles as they hit the Digital Library — never miss a key new result. Or subscribe to our feed — just click on the little RSS link on the far right of the TOCHI landing page.

 


Hinckley, K., Editor-in-Chief, ACM Transactions on CHI. Three-year term, commencing Sept. 1st, 2015. [TOCHI on the ACM Digital Library]

The flagship journal of CHI.

Paper: Sensing Techniques for Tablet+Stylus Interaction (Best Paper Award)

It’s been a busy year, so I’ve been more than a little remiss in posting my Best Paper Award recipient from last year’s User Interface Software & Technology (UIST) symposium.

UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.

And software that makes clever uses thereof.

Title slide - sensing techniques for stylus + tablet interaction

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.

The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Stylus prototype augmented with sensors

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).

And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Sensor components inside the pen

Components of the sensor pen, including inertial sensors, a AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.

But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.

It’s all about the hand.

With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind.

Or shall I say hands. How people coordinate their action. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense, how we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.

I’ve constantly been staring at hands for over 20 years.

And yet I’m still constantly surprised.

People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.

Nobody even notices them until some acute observer takes the trouble to point them out.

For example:

Take a behavior as simple as holding a pen in your hand.

You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Tuck and Palm Grips for temporarily stowing a pen

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.

This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less been actively sensed by the tablets and pens that we use — or leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.

If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

Core Pen Grips

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.

Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips, based on placing one (or more) fingers on the screen while holding the pen:

Single Finger Extension Grips for Touch Gestures with Pen-in-hand

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.

People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Multiple Finger Extension Grips for Touch Gestures with Pen-in-hand

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.

So, it began to dawn on us that there was all this untapped richness in how people hold and manipulate the pen, write on the tablet, and extend fingers to the touchscreen when using pen and touch.

And that sensing this could enable some very interesting new possibilities for the user interfaces for stylus + tablet computing.

This is where our custom hardware came in.

On our pen, for example, we can sense subtle motions — using full 3D inertial sensors including accelerometer, gyroscope, and magnetometer — as well as sense how the user grips the pen — this time using a flexible capacitive substrate wrapped around the entire barrel of the pen.

These capabilities then give rise to sensor signals such as the following:

Grip and motion sensors on the stylus
Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.

This makes various pen grips and motions stand out quite distinctly as states that we can identify using some simple gesture recognition techniques.
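By way of illustration only (this is not our actual recognizer), one lightweight approach is to boil the grip image and motion signals down to a handful of features and hand them to a small off-the-shelf classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def grip_features(grip_map: np.ndarray, accel: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    """A few coarse features from the pen's capacitive grip image and inertial signals."""
    contact = grip_map > 0.3                      # thresholded contact mask
    return np.array([
        contact.mean(),                           # how much of the barrel is covered
        grip_map.sum(axis=1).argmax(),            # row (along the barrel) of peak contact
        grip_map.sum(axis=0).argmax(),            # column (around the barrel) of peak contact
        np.linalg.norm(accel),                    # overall acceleration magnitude
        np.linalg.norm(gyro),                     # overall rotation rate
    ])

# Train on labeled examples of writing / tuck / palm grips (data collection not shown).
clf = RandomForestClassifier(n_estimators=50, random_state=0)
# clf.fit(np.stack([grip_features(g, a, w) for g, a, w in training_frames]), labels)
# grip = clf.predict([grip_features(current_grip_map, current_accel, current_gyro)])
```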

Armed with these capabilities, we explored presenting a number of context-appropriate tools.

As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense it’s your palm, we can go a long way towards solving these problems with pen-and-touch interaction.
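Here is a sketch of the kind of gating this enables; the thresholds are invented for illustration and are not our actual values:

```python
def is_palm_contact(writing_grip: bool, contact_area_mm2: float, touch_major_mm: float) -> bool:
    """Illustrative palm-rejection gate: while the pen is held in a writing grip,
    large blob-like contacts on the touchscreen are treated as the resting palm
    and ignored by the UI."""
    big_blob = contact_area_mm2 > 400.0 or touch_major_mm > 25.0
    return writing_grip and big_blob

# A broad contact while the pen is poised to write gets swallowed instead of
# panning the canvas or leaving a stray "ink turd" behind.
print(is_palm_contact(writing_grip=True, contact_area_mm2=650.0, touch_major_mm=32.0))  # True
```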

To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:

Pen tools appear

Tools specific to the pen appear when the user taps on the screen with the pen stowed in hand.

But we can take this even further, such as distinguishing bare-handed touches — which support the standard panning and zooming behaviors — from a pinch articulated with the pen-in-hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen:

Pen Grip + Motion example: Full canvas zoom vs. Magnifier tool

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.

Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrush tool using the sensors

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
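The geometry behind that claim is simply a cone, with its apex at the pen tip and its axis along the pen’s sensed orientation, intersected with the screen plane. The sketch below traces that footprint numerically; it is my own illustration here, not our rendering code:

```python
import numpy as np

def spray_footprint(tip: np.ndarray, axis: np.ndarray, half_angle_deg: float,
                    n: int = 64) -> np.ndarray:
    """Approximate the airbrush 'spray' footprint on the screen plane z = 0 by casting
    rays along the boundary of a cone (apex at the pen tip, axis = pen orientation)
    and intersecting each ray with the plane. The footprint traces out a conic
    section that elongates as the pen tilts, much like a physical airbrush."""
    axis = axis / np.linalg.norm(axis)
    # Two unit vectors spanning the plane perpendicular to the cone axis.
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    half = np.radians(half_angle_deg)
    points = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        ray = np.cos(half) * axis + np.sin(half) * (np.cos(theta) * u + np.sin(theta) * v)
        if ray[2] < -1e-6:                      # only rays heading toward the screen plane
            t = -tip[2] / ray[2]
            points.append((tip + t * ray)[:2])  # (x, y) where the ray meets the screen
    return np.array(points)

# Pen hovering 30 mm above the screen, tilted 40 degrees from vertical, 15-degree spray cone.
tilt = np.radians(40.0)
footprint = spray_footprint(tip=np.array([0.0, 0.0, 30.0]),
                            axis=np.array([np.sin(tilt), 0.0, -np.cos(tilt)]),
                            half_angle_deg=15.0)
print(footprint.min(axis=0), footprint.max(axis=0))  # bounding box of the elongated spray
```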

At any rate, it was a really fun project that garnered a best paper award, and a fair bit of press coverage (Gizmodo, Engadget, and FastCo Design, which named it the #2 User Interface innovation of 2014, among other coverage). It’s pretty hard to top that.

Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.

Hmmm…

You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).


Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., Wilson, A., Sensing Techniques for Tablet+Stylus Interaction. In Proceedings of the 27th ACM Symposium on User Interface Software and Technology (UIST ’14), Honolulu, Hawaii, Oct 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379

Watch Context Sensing Techniques for Tablet+Stylus Interaction video on YouTube