Category Archives: pen + touch

Paper: The “Seen but Unnoticed” Vocabulary of Natural Touch: Revolutionizing Direct Interaction with Our Devices and One Another

In this Vision (for the UIST 2021 Symposium on User Interface Software & Technology), I argue that “touch” input and interaction remains in its infancy when viewed in the context of the seen but unnoticed vocabulary of natural human behaviors, activities, and environments that surround direct interaction with displays.

Unlike status-quo touch interaction — a shadowplay of fingers on a single screen — I argue that our perspective of direct interaction should encompass the full rich context of individual use (whether via touch, sensors, or in combination with other modalities), as well as collaborative activity where people are engaged in local (co-located), remote (tele-present), and hybrid work.

We can further view touch through the lens of the “Society of Devices,” where each person’s activities span many complementary, oft-distinct devices that offer the right task affordance (input modality, screen size, aspect ratio, or simply a distinct surface with dedicated purpose) at the right place and time.

While many hints of this vision already exist in the literature, I speculate that a comprehensive program of research to systematically inventory, sense, and design interactions around such human behaviors and activities—and that fully embrace touch as a multi-modal, multi-sensor, multi-user, and multi-device construct—could revolutionize both individual and collaborative interaction with technology.


For the remote presentation, instead of a normal academic talk, I recruited my friend and colleague Nicolai Marquardt to have a 15-minute conversation with me about the vision and some of its implications:

Watch the “Seen but Unnoticed” UIST 2021 Vision presentation video on YouTube


Several aspects of this vision paper relate to a larger Microsoft Research project known as SurfaceFleet that explores the distributed systems and user experience implications of a “Society of Devices” in the New Future of Work.


Ken Hinckley. The “Seen but Unnoticed” Vocabulary of Natural Touch: Revolutionizing Direct Interaction with Our Devices and One Another. UIST Vision presented at The 34th Annual ACM Symposium on User Interface Software and Technology (UIST ’21). Non-archival publication, 5 pages. Virtual Event, USA, Oct 10-14, 2021.
https://arxiv.org/abs/2310.03958

[PDF] [captioned presentation video – mp4]

Paper: WritLarge: Ink Unleashed by Unified Scope, Action, & Zoom

Electronic whiteboards remain surprisingly difficult to use in the context of creativity support and design.

A key problem is that once a designer places strokes and reference images on a canvas, actually doing anything useful with key parts of that content involves numerous steps.

Hence, with digital ink, scope—that is, selection of content—is a central concern, yet current approaches often require encircling ink with a lengthy lasso, if not switching modes via round-trips to the far-off edges of the display.

Only then can the user take action, such as to copy, refine, or re-interpret their informal work-in-progress.

Such is the stilted nature of selection and action in the digital world.

But it need not be so.

By contrast, consider an everyday manual task such as sandpapering a piece of woodwork to hew off its rough edges. Here, we use our hands to grasp and bring to the fore—that is, select—the portion of the work-object—the wood—that we want to refine.

And because we are working with a tool—the sandpaper—the hand employed for this ‘selection’ sub-task is typically the non-preferred one, which skillfully manipulates the frame-of-reference for the subsequent ‘action’ of sanding, a complementary sub-task articulated by the preferred hand.

Therefore, in contrast to the disjoint subtasks foisted on us by most interactions with computers, the above example shows how complementary manual activities lend a sense of flow that “chunks” selection and action into a continuous selection-action phrase. By manipulating the workspace, the off-hand shifts the context of the actions to be applied, while the preferred hand brings different tools to bear—such as sandpaper, file, or chisel—as necessary.

The main goal of the WritLarge project, then, is to demonstrate similar continuity of action for electronic whiteboards. This motivated free-flowing, close-at-hand techniques to afford unification of selection and action via bimanual pen+touch interaction.

WriteLarge-hero-figure

Accordingly, we designed WritLarge so that the user can simply gesture as follows:

With the thumb and forefinger of the non-preferred hand, just frame a portion of the canvas.

And, unlike many other approaches to “handwriting recognition,” this approach to selecting key portions of an electronic whiteboard leaves the user in complete control of what gets recognized—as well as when recognition occurs—so as not to break the flow of creative work.

Indeed, building on this foundation, we designed ways to shift between flexible representations of freeform content by simply moving the pen along semantic, structural, and temporal axes of movement.
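
To make the selection half of this concrete, here is a minimal Python sketch of how a thumb-and-forefinger frame from the non-preferred hand might be turned into a scope over inked strokes. It is an illustration of the idea only, not WritLarge’s actual implementation; the stroke representation and the bounding-box overlap test are assumptions made purely for the sake of a runnable example.

    from dataclasses import dataclass
    from typing import List, Tuple

    Point = Tuple[float, float]

    @dataclass
    class Stroke:
        points: List[Point]

        def bounds(self) -> Tuple[float, float, float, float]:
            xs = [p[0] for p in self.points]
            ys = [p[1] for p in self.points]
            return min(xs), min(ys), max(xs), max(ys)

    def frame_selection(thumb: Point, forefinger: Point,
                        strokes: List[Stroke]) -> List[Stroke]:
        """Treat the thumb/forefinger contacts of the non-preferred hand as two
        corners of a selection frame, and return the strokes it encloses."""
        x0, x1 = sorted((thumb[0], forefinger[0]))
        y0, y1 = sorted((thumb[1], forefinger[1]))
        selected = []
        for s in strokes:
            sx0, sy0, sx1, sy1 = s.bounds()
            # keep any stroke whose bounding box overlaps the framed region
            if sx0 <= x1 and sx1 >= x0 and sy0 <= y1 and sy1 >= y0:
                selected.append(s)
        return selected

Once a scope has been framed this way, the pen in the preferred hand can immediately act on the selected strokes, whether to copy them, recognize them, or slide along one of the semantic, structural, or temporal axes, without a lasso or a round trip to a distant toolbar.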

See our demo reel below for some jaw-dropping demonstrations of the possibilities for digital ink opened up by this approach.

Watch WritLarge: Ink Unleashed by Unified Scope, Action, and Zoom video on YouTube


Haijun Xia, Ken Hinckley, Michel Pahud, Xiao Tu, and Bill Buxton. 2017. WritLarge: Ink Unleashed by Unified Scope, Action, and Zoom. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3227-3240. Denver, Colorado, United States, May 6-11, 2017. Honorable Mention Award (top 5% of papers).
https://doi.org/10.1145/3025453.3025664

[PDF] [30 second preview – mp4 | YouTube] [Full video – mp4]

Paper: Thumb + Pen Interaction on Tablets

Modern tablets support simultaneous pen and touch input, but it remains unclear how to best leverage this capability for bimanual input when the nonpreferred hand holds the tablet.

We explore Thumb + Pen interactions that support simultaneous pen and touch interaction, with both hands, in such situations. Our approach engages the thumb of the device-holding hand, such that the thumb interacts with the touch screen in an indirect manner, thereby complementing the direct input provided by the preferred hand.

For instance, the thumb can determine how pen actions (articulated with the opposite hand) are interpreted.

thumb-pen-fullsize

Alternatively, the pen can point at an object, while the thumb manipulates one or more of its parameters through indirect touch.

Our techniques integrate, in a novel way, concepts derived from radial menus (also known as marking menus), spring-loaded modes maintained by muscular tension, and indirect input, in ways that also leverage familiar multi-touch conventions.
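
As a concrete illustration of the spring-loaded aspect, here is a minimal Python sketch of how a thumb contact on a screen-edge control might temporarily reinterpret pen strokes, with the mode held only as long as the thumb stays down. The class, method names, and mode labels are hypothetical placeholders for illustration, not the paper’s actual code or API.

    class ThumbPenCanvas:
        """Sketch of a spring-loaded, thumb-maintained pen mode."""

        def __init__(self):
            self.thumb_mode = None          # e.g. "highlight", "select", or None

        def on_thumb_down(self, mode: str):
            self.thumb_mode = mode          # mode held only by muscular tension

        def on_thumb_up(self):
            self.thumb_mode = None          # spring back to plain inking

        def on_pen_stroke(self, stroke):
            if self.thumb_mode == "highlight":
                return ("highlight", stroke)
            if self.thumb_mode == "select":
                return ("lasso_select", stroke)
            return ("ink", stroke)

    canvas = ThumbPenCanvas()
    canvas.on_thumb_down("highlight")
    print(canvas.on_pen_stroke([(0, 0), (10, 0)]))   # -> ('highlight', ...)
    canvas.on_thumb_up()
    print(canvas.on_pen_stroke([(0, 5), (10, 5)]))   # -> ('ink', ...)

Because the mode evaporates the moment the thumb lifts, errors are cheap to recover from, and the muscular tension itself reminds the user that a non-default mode is currently active.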

Our overall approach takes the form of a set of probes, each representing a meaningfully distinct class of application. They serve as an initial exploration of the design space, at a level that helps determine the feasibility of supporting bimanual interaction in such contexts, and the viability of the Thumb + Pen techniques in doing so.

Watch Thumb + Pen Interaction on Tablets video on YouTube


Ken Pfeuffer, Ken Hinckley, Michel Pahud, and Bill Buxton. 2017. Thumb + Pen Interaction on Tablets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3254-3266. Denver, Colorado, United States, May 6-11, 2017.
https://doi.org/10.1145/3025453.3025567

[PDF] [30s preview – mp4 | YouTube] [Full video – mp4]

Paper: As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences

This work sheds light on gaps and discrepancies between the experiences afforded by analog pens and their digital counterparts.

Despite the long history (and recent renaissance) of digital pens, the literature still lacks a comprehensive survey of what types of marks people make and what motivates them to use ink—both analog and digital—in daily life.

As-We-May-Ink-fullsize

To capture the diversity of inking behaviors and tease out the unique affordances of pen and ink, we conducted a diary study with 26 participants from diverse backgrounds.

From an analysis of 493 diary entries, we identified 8 analog pen-and-ink activities and 9 affordances of pens. We contextualized and contrasted these findings using a survey with 1,633 respondents and a follow-up diary study with 30 participants focused on digital pens.

Our analysis revealed many gaps and research opportunities based on pen affordances not yet fully explored in the literature.


Yann Riche, Nathalie Henry Riche, Ken Hinckley, Sarah Fuelling, Sarah Williams, and Sheri Panabaker. 2017. As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, USA, pp. 3241-3253. Denver, Colorado, United States, May 6-11, 2017.
https://doi.org/10.1145/3025453.3025716

[PDF] [CHI 2017 Talk Slides (PowerPoint)]

Paper: Wearables as Context for Guiard-abiding Bimanual Touch

This particular paper has a rather academic-sounding title, but at its heart it makes a very simple and interesting observation regarding touch that any user of touch-screen technology can perhaps appreciate.

The irony is this: when interaction designers talk about “natural” interaction, they often have touch input in mind. And so people tend to take that for granted. What could be simpler than placing a finger — or with the modern miracle of multi-touch, multiple fingers — on a display?

And indeed, an entire industry of devices and form-factors — everything from phones, tablets, drafting-tables, all the way up to large wall displays — has arisen from this assumption.

Yet, if we unpack “touch” as it’s currently realized on most touchscreens, we can see that it remains very much a poor man’s version of natural human touch.

For example, on a large electronic-whiteboard such as the 84″ Surface Hub, multiple people can work upon the display at the same time. And it feels natural to employ both hands — as one often does in a wide assortment of everyday manual activities, such as indicating a point on a whiteboard with your off-hand as you emphasize the same point with the marker (or electronic pen).

Yet much of this richness — obvious to anyone observing a colleague at a whiteboard — represents context that is completely lost with “touch” as manifest in the vast majority of existing touch-screen devices.

For example:

  • Who is touching the display?
  • Are they touching the display with one hand, or two?
  • And if two hands, which of the multiple touch-events generated come from the right hand, and which come from the left?

Well, when dealing with input to computers, the all-too-common answer from the interaction designer is a shrug, a mumbled “who the heck knows,” and a litany of assumptions built into the user interface to try to paper over the resulting ambiguities, especially when the two factors (which user, and which hand) compound one another.

The result is that such issues tend to get swept under the rug, and hardly anybody ever mentions them.

But the first step towards a solution is recognizing that we have a problem.

This paper explores the implications of one particular solution that we have prototyped, namely leveraging wearable devices on the user’s body as sensors that can augment the richness of touch events.

A fitness band worn on the non-preferred hand, for example, can sense the impulse of the finger contacting the display through its embedded motion sensors (accelerometers and gyroscopes). If the fitness band and the display exchange information and IDs, the resulting touch event can then be associated with the left hand of a particular user. The inputs of multiple users instrumented in this manner can be separated from one another as well, and even used as a lightweight form of authentication.
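
In code, the core matching step can be sketched quite simply. The Python sketch below is only an illustration under stated assumptions (a 50 ms window, invented data structures and function names, and perfectly synchronized clocks); the real prototype must also cope with clock offset between devices and with touches from uninstrumented hands.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TouchDown:
        t: float                  # touchscreen contact time (seconds)
        x: float
        y: float

    @dataclass
    class BandImpulse:
        t: float                  # time of the accelerometer impulse peak (seconds)
        user: str                 # identity bound to the wearable
        hand: str                 # which hand the band or ring is worn on

    def attribute_touch(touch: TouchDown,
                        impulses: List[BandImpulse],
                        window: float = 0.050) -> Optional[BandImpulse]:
        """Attribute a touch-down to a user and hand by finding the wearable
        impulse closest in time, within a small synchronization window."""
        candidates = [b for b in impulses if abs(b.t - touch.t) <= window]
        if not candidates:
            return None           # no matching impulse: an uninstrumented hand
        return min(candidates, key=lambda b: abs(b.t - touch.t))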

That then explains the “wearable” part of “Wearables as Context for Guiard-abiding Bimanual Touch,” the title of my most recent paper, but what the heck does “Guiard-abiding” mean?

Well, this is a reference to classic work by a research colleague, Yves Guiard, who is famous for a 1987 paper in which he made a number of key observations regarding how people use their hands — both of them — in everyday manual tasks.

Particularly, in a skilled manipulative task such as writing on a piece of paper, Yves pointed out (assuming a right-handed individual) three general principles:

  • Left hand precedence: The action of the left hand precedes the action of the right; the non-preferred hand first positions and orients the piece of paper, and only then does the pen (held in the preferred hand, of course) begin to write.
  • Differentiation in scale: The action of the left hand tends to occur at a larger temporal and spatial scale of motion; the positioning (and re-positioning) of the paper tends to be infrequent and relatively coarse compared to the high-frequency, precise motions of the pen in the preferred hand.
  • Right-to-Left Spatial Reference: The left hand sets a frame of reference for the action of the right; the left hand defines the position and orientation of the work-space into which the preferred hand inserts its contributions, in this example via the manipulation of a hand-held implement — the pen.

Well, as it turns out these three principles are very deep and general, and they can yield great insight into how to design interactions that fully take advantage of people’s everyday skills for two-handed (“bimanual”) manipulation — another aspect of “touch” that interaction designers have yet to fully leverage for natural interaction with computers.

This paper is a long way from a complete solution to the impoverished nature of touch on modern touch-screens. But hopefully, by pointing out the problem and illustrating some consequences of augmenting touch with additional context (whether provided through wearables or other means), this work can lead in the near future to more truly “natural” touch interaction, allowing simultaneous interaction by multiple users, each of whom can make full and complementary use of their hard-won manual skill with both hands.


Wearables (fitness band and ring) provide missing context (who touches, and with what hand) for direct-touch bimanual interactions.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST ’16). ACM, New York, NY, USA, pp. 287-300. Tokyo, Japan, Oct. 16-19, 2016. https://doi.org/10.1145/2984511.2984564
[PDF] [Talk slides PDF] [30 second preview on YouTube] [Full video – MP4]

Book Chapter: Inking Outside the Box — How Context Sensing Affords More Natural Pen (and Touch) Computing

“Pen” and “Touch” are terms that tend to be taken for granted these days in the context of interaction with mobiles, tablets, and electronic-whiteboards alike.

Yet, as I have discussed in many articles here, even in the simplest combination of these modalities — that of “Pen + Touch” — new opportunities for interaction design abound.

And from this perspective we can go much further still.

Take “touch,” for example.

What does this term really mean in the context of input to computers?

Is it just when the user intentionally moves a finger into contact with the screen?

What if the palm accidentally brushes the display instead — is that still “touch?”

Or how about the off-hand, which plays a critical but oft-unnoticed role in gripping and skillfully orienting the device for the action of the preferred hand? Isn’t that an important part of “touch” as well?

Well, there’s good reason to argue that from the human perspective, these are all “touch,” even though most existing devices only generate a touch-event at the moment when a finger comes into contact with the screen.

Clearly, this is a very limited view; with greater insight into the context surrounding a particular touch (or pen, or pen + touch) event, we could enhance the naturalness of working with computers considerably.

This chapter, then, works through a series of examples and perspectives which demonstrate how much richness there is in such a re-conception of direct interaction with computers, and thereby suggests some directions for future innovations and richer, far more expressive interactions.


Ken Hinckley and Bill Buxton. Inking Outside the Box: How Context Sensing Affords More Natural Pen (and Touch) Computing. Appears as Chapter 3 in Revolutionizing Education with Digital Ink: The Impact of Pen and Touch Technology on Education (Human-Computer Interaction Series), First Edition (2016). Ed. by Tracy Hammond, Stephanie Valentine, & Aaron Adler. Published by Springer, Cham. June 13, 2016. https://doi.org/10.1007/978-3-319-31193-7_3

[PDF – Author’s draft]

P.S.: I’ve linked to the draft of the chapter that I submitted to the publisher, rather than the final version, as the published copy-edit muddied the writing by a gross misapplication of the Chicago Manual of Style, and in so doing introduced many semantic errors as well. Despite my best efforts I was not able to convince the publisher to fully reverse these undesired and unfortunate “improvements.” As such, my draft may contain some typographical errors or other minor discrepancies from the published version, but it is the authoritative version as far as I am concerned.

Olfactory Pen Creates Giant Stink, Fails to Make it out of Research Skunkworks

Microsoft has shown incredible stuff this week at //build around Pen and Ink experiences — including simultaneous Pen + Touch experiences — as showcased for example in the great video on “Inking at the Speed of Thought” that is now available on Channel 9.

But I’ve had a skunkworks project — so to speak — in the works as part of my research (in the course of a career spanning decades) for a long time now, and this particular vision of the future of pen computing has consumed my imagination for at least the last 37 seconds or so. I’ve put a lot of thought into it.

It’s long been recognized that the sense of smell is a powerful index into the human memory. The scent of decaying pulp instantly brings to mind a favorite book, for example — in my case a volume of the masterworks of Edgar Allan Poe that was bequeathed to me by my grandfather.

Or who can ever forget the dizzying scent of their first significant other?

So I thought: Why not a digital pen with olfactory output?

Just think of the possibilities for this remarkable technology:

Not only can you ink faster than the speed of thought, but now you can stink faster than the speed of thought!

And I’m here to tell you that this is entirely possible. I think. I’ve already conceived of an amazing confabulation called the Aromatic Recombinator (patent pending; filed April 1st, 2016 at 2:55 PM; summarily rejected by patent office, 2:57 PM; earnest appeal filed in hope of an affirmative response, 2:59 PM; earnest response received: TBA).

Nonetheless, I can understand the patent office’s reticence.

Because with this remarkable technology one can arouse almost any scent, from the headiest of perfumes all the way to the most cloying musk, simply by scribbling on the screen of your tablet as if it were an electronic scratch-n-sniff card. A conception on which I have another patent pending, by the way.

Admittedly, some details remain sketchy, but I remain highly optimistic that the obvious problems can be sniffed out in short order.

And if not, rest assured, I will raise one hell of a stink.

[Happy April Fools Day.]

Paper: Sensing Techniques for Tablet+Stylus Interaction (Best Paper Award)

It’s been a busy year, so I’ve been more than a little remiss in posting my Best Paper Award recipient from last year’s User Interface Software & Technology (UIST) symposium.

UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.

And software that makes clever uses thereof.

Title slide - sensing techniques for stylus + tablet interaction

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.

The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Stylus prototype augmented with sensors

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).

And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Sensor components inside the pen

Components of the sensor pen, including inertial sensors, a AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.

But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.

It’s all about the hand.

With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind… [Spark radio interview with Ken Hinckley]

Or shall I say hands. How people coordinate their action. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense, how we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.

I’ve constantly been staring at hands for over 20 years.

And yet I’m still constantly surprised.

People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.

Nobody even notices them until some acute observer takes the trouble to point them out.

For example:

Take a behavior as simple as holding a pen in your hand.

You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Tuck and Palm Grips for temporarily stowing a pen

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.

This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less actively sensed by the tablets and pens that we use — or even leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.

If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

Core Pen Grips

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.

Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips, based on placing one (or more) fingers on the screen while holding the pen:

Single Finger Extension Grips for Touch Gestures with Pen-in-hand

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.

People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Multiple Finger Extension Grips for Touch Gestures with Pen-in-hand

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.

So, it began to dawn on us that there was all this untapped richness in terms of how people hold, manipulate, write on, and extend fingers when using pen and touch on tablets.

And that sensing this could enable some very interesting new possibilities for the user interfaces for stylus + tablet computing:

Watch Context Sensing Techniques for Tablet+Stylus Interaction video on YouTube

This is where our custom hardware came in.

On our pen, for example, we can sense subtle motions — using full 3D inertial sensors including accelerometer, gyroscope, and magnetometer — as well as sense how the user grips the pen — this time using a flexible capacitive substrate wrapped around the entire barrel of the pen.

These capabilities then give rise to sensor signals such as the following:

Grip and motion sensors on the stylus

Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.

These signals make various pen grips and motions stand out quite distinctly, as states that we can identify using some simple gesture-recognition techniques.
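
For illustration, a grip classifier over these signals can be quite simple. The Python sketch below uses an assumed feature set and a nearest-centroid rule, not the actual recognizer from the paper: it reduces the capacitive grip image and a window of gyro samples to a handful of features and then picks the closest labeled grip.

    import numpy as np
    from typing import Dict

    def grip_features(cap_map: np.ndarray, gyro: np.ndarray) -> np.ndarray:
        """Reduce a capacitive grip image (sensor rows x columns around the pen
        barrel) and a short window of gyro samples (N x 3, rad/s) to a small
        feature vector. Illustrative features only, not the paper's feature set."""
        contact = cap_map > 0.2                      # assumed contact threshold
        return np.array([
            contact.mean(),                          # overall fraction of the barrel gripped
            contact.any(axis=1).mean(),              # grip extent along the barrel (rows assumed axial)
            cap_map.max(),                           # peak capacitive coupling
            np.linalg.norm(gyro, axis=1).mean(),     # average angular speed of the pen
        ])

    def classify_grip(features: np.ndarray,
                      centroids: Dict[str, np.ndarray]) -> str:
        """Nearest-centroid rule over labeled grips such as 'writing', 'tuck',
        and 'palm', with centroids averaged from labeled training examples."""
        return min(centroids,
                   key=lambda g: float(np.linalg.norm(features - centroids[g])))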

Armed with these capabilities, we explored presenting a number of context-appropriate tools.

As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense it’s your palm, we can go a long way towards solving these problems with pen-and-touch interaction.
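
As the simplest possible illustration (with a made-up area threshold and rule, not the actual policy from the paper), palm rejection given a sensed writing grip might reduce to something like the following sketch:

    def accept_touch(contact_area_mm2: float,
                     pen_grip: str,
                     pen_hovering: bool) -> bool:
        """Illustrative palm-rejection rule: when the pen is held in a writing
        grip and hovering near the screen, large blob-like contacts are treated
        as the resting palm and ignored."""
        PALM_AREA_THRESHOLD = 400.0     # mm^2; assumed value for illustration
        if pen_grip == "writing" and pen_hovering and contact_area_mm2 > PALM_AREA_THRESHOLD:
            return False                # reject: almost certainly the palm
        return True                     # otherwise treat as an intentional touch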

To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:

Pen tools appear

Tools specific to the pen appear when the user taps on the screen with the pen stowed in hand.

But we can take this even further, such as to distinguish bare-handed touches — to support the standard panning and zooming behaviors — versus a pinch articulated with the pen-in-hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen:

Pen Grip + Motion example: Full canvas zoom vs. Magnifier tool

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.
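
Putting the pen-tools tap and the two kinds of pinch together, the dispatch logic is essentially a context switch on the sensed grip. The sketch below uses placeholder state names and action strings to show the shape of that logic; it is not a real API.

    def route_touch_gesture(gesture: str, pen_state: str, touch_hand: str) -> str:
        """Context-dependent touch dispatch. pen_state comes from the grip
        sensor ('writing', 'tuck', 'palm', or 'away'); touch_hand is 'pen_hand'
        or 'other_hand' as inferred from the pen and tablet sensors."""
        pen_in_hand = pen_state in ("tuck", "palm")
        if gesture == "tap" and pen_in_hand and touch_hand == "pen_hand":
            return "show_pen_tools"          # tap with the pen stowed in hand
        if gesture == "pinch" and pen_in_hand and touch_hand == "pen_hand":
            return "show_magnifier"          # detail loupe for fine pen work
        if gesture == "pinch":
            return "pan_zoom_canvas"         # ordinary bare-handed pinch
        return "default_touch"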

Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrush tool using the sensors

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
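
Geometrically, the spray footprint is the intersection of a cone (apex at the nib, axis along the pen’s sensed 3D orientation) with the display plane, which is why it elongates from a circle into an ellipse as the pen tilts, just as with a physical airbrush. The Python sketch below samples that cone directly, taking the screen as the plane z = 0; the cone angle and sampling scheme are illustrative assumptions, not the paper’s implementation.

    import numpy as np

    def airbrush_spray(nib, direction, half_angle_deg=15.0, n=200, seed=0):
        """Sample points where rays in a spray cone, emitted from the pen nib
        along its sensed 3D direction, hit the screen plane z = 0."""
        rng = np.random.default_rng(seed)
        d = np.asarray(direction, float)
        d /= np.linalg.norm(d)
        # orthonormal basis (u, v) spanning the plane perpendicular to the axis d
        tmp = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(d, tmp); u /= np.linalg.norm(u)
        v = np.cross(d, u)
        theta = np.radians(half_angle_deg) * np.sqrt(rng.random(n))  # spread within the cone
        phi = 2 * np.pi * rng.random(n)
        rays = (np.cos(theta)[:, None] * d +
                np.sin(theta)[:, None] * (np.cos(phi)[:, None] * u +
                                          np.sin(phi)[:, None] * v))
        nib = np.asarray(nib, float)
        hits = []
        for r in rays:
            if r[2] < -1e-6:                   # only rays heading down toward the screen
                t = -nib[2] / r[2]             # solve nib_z + t * r_z = 0
                hits.append((nib + t * r)[:2]) # 2D spray point on the display
        return np.array(hits)

    # e.g. a pen nib held 40 mm above the screen, tilted about 45 degrees:
    spray = airbrush_spray(nib=[0, 0, 40], direction=[0.7, 0, -0.7])

Rendering the sampled points as soft dots (or fitting the conic outline analytically) then yields the tilt-dependent spray shown in the figure above.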

At any rate, it was a really fun project that garnered a Best Paper Award and a fair bit of press coverage, including Gizmodo, Engadget, and FastCo Design, which named it the #2 User Interface innovation of 2014 (paywalled), among other outlets. It’s pretty hard to top that.

Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.

Hmmm…

You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).


Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., and Wilson, A. Sensing Techniques for Tablet+Stylus Interaction. In the 27th ACM Symposium on User Interface Software and Technology (UIST’14), Honolulu, Hawaii, Oct 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379

Book Chapter: Input/Output Devices and Interaction Techniques, Third Edition

Ken Hinckley, Robert J.K. Jacob, Colin Ware, Jacob O. Wobbrock, and Daniel Wigdor. Input/Output Devices and Interaction Techniques. Appears as Chapter 21 in The Computing Handbook, Third Edition: Two-Volume Set, ed. by Tucker, A., Gonzalez, T., Topi, H., and Diaz-Herrera, J. Published by Chapman and Hall/CRC (Taylor & Francis), May 13, 2014. ISBN 9781439898444. [PDF – Author’s Draft – may contain discrepancies]

Invited Talk: WIPTTE 2015 Presentation of Sensing Techniques for Tablets, Pen, and Touch

The organizers of WIPTTE 2015, the Workshop on the Impact of Pen and Touch Technology on Education, kindly invited me to speak about my recent work on sensing techniques for stylus + tablet interaction.

One of the key points that I emphasized:

To design technology to fully take advantage of human skills, it is critical to observe what people do with their hands when they are engaged in manual activities such as handwriting.

Notice my deliberate use of the plural, hands, as in both of ’em, in a division of labor that is a perfect example of cooperative bimanual action.

The power of crayon and touch.

My six-year-old daughter demonstrates the power of crayon and touch technology.

And of course I had my usual array of stupid sensor tricks to illustrate the many ways that sensing systems of the future embedded in tablets and pens could take advantage of such observations. Some of these possible uses for sensors probably seem fanciful in this antiquated era of circa 2015.

But in eerily similar fashion, some of the earliest work that I did on sensors embedded in handheld devices also felt completely out-of-step with the times when I published it back in the year 2000. A time so backwards it already belongs to the last millennium for goodness sakes!

Now aspects of that work are embedded in practically every mobile device on the planet.

It was a fun talk, with an engaged audience of educators who are eager to see pen and tablet technology advance to better serve the educational needs of students all over the world. I have three kids of school age now so this stuff matters to me. And I love speaking to this audience because they always get so excited to see the pen and touch interaction concepts I have explored over the years, as well as the new technologies emerging from the dim fog that surrounds the leading frontiers of research.

Harold and the Purple Crayon book cover

I am a strong believer in the dictum that the best way to predict the future is to invent it.

And the pen may be the single greatest tool ever invented to harness the immense creative power of the human mind, and thereby to scrawl out (perhaps even in the just-in-time fashion of the famous book Harold and the Purple Crayon) the uncertain path that leads us forward.

                    * * *

Update: I have also made the original technical paper and demonstration video available now.

If you are an educator seeing impacts of pen, tablet, and touch technology in the classroom, then I strongly encourage you to start organizing and writing up your observations for next year’s workshop. The 2016 edition of the series (now renamed CPTTE) will be held at Brown University in Providence, Rhode Island, and chaired by none other than the esteemed Andries van Dam, who is my academic grandfather (i.e., my Ph.D. advisor’s mentor) and of course widely respected in computing circles throughout the world.


Ken Hinckley. WIPTTE 2015 Invited Talk: Sensing Techniques for Tablet + Stylus Interaction. Workshop on the Impact of Pen and Touch Technology on Education, Redmond, WA, April 28th, 2015. [Slides (PowerPoint)] [Slides PDF]