Tag Archives: tilting user interfaces

Paper: Sensing Techniques for Tablet+Stylus Interaction (Best Paper Award)

It’s been a busy year, so I’ve been more than a little remiss in posting about the paper that received a Best Paper Award at last year’s User Interface Software & Technology (UIST) symposium.

UIST is a great venue, particularly renowned for publishing cutting-edge innovations in devices, sensors, and hardware.

And software that makes clever uses thereof.

Title slide - sensing techniques for stylus + tablet interaction

Title slide from my talk on this project. We had a lot of help, fortunately. The picture illustrates a typical scenario in pen & tablet interaction — where the user interacts with touch, but the pen is still at the ready, in this case palmed in the user’s fist.

The paper takes two long-standing research themes for me — pen (plus touch) interaction, and interesting new ways to use sensors — and smashes them together to produce the ultimate Frankenstein child of tablet computing:

Stylus prototype augmented with sensors

Microsoft Research’s sensor pen. It’s covered in groovy orange shrink-wrap, too. What could be better than that? (The shrink wrap proved necessary to protect some delicate connections between our grip sensor and the embedded circuitry).

And if you were to unpack this orange-gauntleted beast, here’s what you’d find:

Sensor components inside the pen

Components of the sensor pen, including inertial sensors, a AAAA battery, a Wacom mini pen, and a flexible capacitive substrate that wraps around the barrel of the pen.

But although the end-goal of the project is to explore the new possibilities afforded by sensor technology, in many ways, this paper kneads a well-worn old worry bead for me.

It’s all about the hand.

With little risk of exaggeration you could say that I’ve spent decades studying nothing but the hand. And how the hand is the window to your mind.

Or shall I say hands. How people coordinate their action. How people manipulate objects. How people hold things. How we engage with the world through the haptic sense, how we learn to articulate astoundingly skilled motions through our fingers without even being consciously aware that we’re doing anything at all.

I’ve constantly been staring at hands for over 20 years.

And yet I’m still constantly surprised.

People exhibit all sorts of manual behaviors, tics, and mannerisms, hiding in plain sight, that seemingly inhabit a strange shadow-world — the realm of the seen but unnoticed — because these behaviors are completely obvious yet somehow they still lurk just beneath conscious perception.

Nobody even notices them until some acute observer takes the trouble to point them out.

For example:

Take a behavior as simple as holding a pen in your hand.

You hold the pen to write, of course, but most people also tuck the pen between their fingers to momentarily stow it for later use. Other people do this in a different way, and instead palm the pen, in more of a power grip reminiscent of how you would grab a suitcase handle. Some people even interleave the two behaviors, based on what they are currently doing and whether or not they expect to use the pen again soon:

Tuck and Palm Grips for temporarily stowing a pen

Illustration of tuck grip (left) vs. palm grip (right) methods of stowing the pen when it is temporarily not in use.

This seems very simple and obvious, at least in retrospect. But such behaviors have gone almost completely unnoticed in the literature, much less actively sensed by the tablets and pens that we use — or even leveraged to produce more natural user interfaces that can adapt to exactly how the user is currently handling and using their devices.

If we look deeper into these writing and tucking behaviors alone, a whole set of grips and postures of the hand emerge:

Core Pen Grips

A simple design space of common pen grips and poses (postures of the hand) in pen and touch computing with tablets.

Looking even more deeply, once we have tablets that support a pen as well as full multi-touch, users naturally want to use their bare fingers on the screen in combination with the pen, so we see another range of manual behaviors that we call extension grips based on placing one (or more) fingers on the screen while holding the pen:

Single Finger Extension Grips for Touch Gestures with Pen-in-hand

Much richness in “extension” grips, where touch is used while the pen is still being held, can also be observed. Here we see various single-finger extension grips for the tuck vs. the palm style of stowing the pen.

People also exhibited more ways of using multiple fingers on the touchscreen than I expected:

Multiple Finger Extension Grips for Touch Gestures with Pen-in-hand

Likewise, people extend multiple fingers while holding the pen to pinch or otherwise interact with the touchscreen.

So, it began to dawn on us that there was all this untapped richness in how people hold the pen, manipulate it, write with it, and extend their fingers when using pen and touch on tablets.

And that sensing this could enable some very interesting new possibilities for the user interfaces for stylus + tablet computing.

This is where our custom hardware came in.

On our pen, for example, we can sense subtle motions — using full 3D inertial sensors including accelerometer, gyroscope, and magnetometer — as well as sense how the user grips the pen — this time using a flexible capacitive substrate wrapped around the entire barrel of the pen.

These capabilities then give rise to sensor signals such as the following:

Grip and motion sensors on the stylus
Sensor signals for the pen’s capacitive grip sensor with the writing grip (left) vs. the tuck grip (middle). Exemplar motion signals are shown on the right.

This makes various pen grips and motions stand out quite distinctly as states that we can identify using some simple gesture recognition techniques.
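
If you’re curious what that recognition boils down to, here’s a minimal Python sketch of the general flavor: label the grip from a capacitive image of the barrel plus a window of gyro samples. The features, thresholds, and labels here are purely illustrative stand-ins, not the recognizer we actually built.

```python
import numpy as np

def classify_grip(cap_image, gyro_window):
    """Label the current pen grip (illustrative sketch only).

    cap_image   : 2D array of capacitance values wrapped around the barrel
                  (rows run along the pen's length, columns around it).
    gyro_window : (N, 3) array of recent gyroscope samples in deg/s.
    """
    if cap_image.max() <= 0:
        return "no grip"

    contact = cap_image > 0.3 * cap_image.max()          # crude touch mask
    contact_area = contact.mean()                         # fraction of barrel covered
    rows_touched = np.flatnonzero(contact.any(axis=1))    # which lengthwise bands are held
    extent = (np.ptp(rows_touched) + 1) / cap_image.shape[0] if rows_touched.size else 0.0
    motion = np.linalg.norm(gyro_window, axis=1).mean()   # how much the pen is moving

    if contact_area < 0.05:
        return "no grip"          # pen set down or barely held
    if extent < 0.4 and motion < 50:
        return "writing grip"     # fingers clustered near the tip, pen fairly steady
    if contact_area > 0.35:
        return "palm grip"        # fist wrapped around most of the barrel
    return "tuck grip"            # stowed between the fingers
```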

Armed with these capabilities, we explored presenting a number of context-appropriate tools.

As the very simplest example, we can detect when you’re holding the pen in a grip (and posture) that indicates that you’re about to write. Why does this matter? Well, if the touchscreen responds when you plant your meaty palm on it, it causes no end of mischief in a touch-driven user interface. You’ll hit things by accident. Fire off gestures by mistake. Leave little “ink turds” (as we affectionately call them) on the screen if the application responds to touch by leaving an ink trace. But once we can sense it’s your palm, we can go a long ways towards solving these problems with pen-and-touch interaction.
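
As a toy illustration of that idea (and only an illustration; the fields and the dispatch_touch_gesture helper below are hypothetical stand-ins, not our real API), the gating can be as simple as:

```python
def handle_touch(contact, pen_state):
    """Suppress palm contact while the pen is held in a writing grip.

    `contact` and `pen_state` are hypothetical objects standing in for the
    real system's touch and pen-sensing state; this is not the actual code.
    """
    writing = (pen_state.grip == "writing grip") and pen_state.near_screen
    if writing and contact.area_mm2 > 300:     # large blob while writing: treat it as the palm
        return None                            # swallow it: no ink turds, no stray gestures
    return dispatch_touch_gesture(contact)     # otherwise handle as an intentional touch
```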

To pull the next little rabbit out of my hat, if you tap the screen with the pen in hand, the pen tools (what else?) pop up:

Pen tools appear

Tools specific to the pen appear when the user taps on the screen with the pen stowed in hand.

But we can take this even further, for example to distinguish bare-handed touches — which support the standard panning and zooming behaviors — from a pinch articulated with the pen-in-hand, which in this example brings up a magnifying glass particularly suited to detail work using the pen:

Pen Grip + Motion example: Full canvas zoom vs. Magnifier tool

A pinch multi-touch gesture with the left hand pans and zooms. But a pinch articulated with the pen-in-hand brings up a magnifier tool for doing fine editing work.
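
The dispatch rule behind this is easy to sketch. The following is a hypothetical illustration, with made-up fields and helper functions rather than our actual implementation:

```python
def on_pinch(gesture, pen_state):
    """Route a two-finger pinch based on whether the pen is in the gesturing hand.

    The gesture/pen_state fields and the helper calls are hypothetical; this
    only sketches the dispatch rule, not our implementation.
    """
    pen_in_gesturing_hand = (
        pen_state.grip in ("tuck grip", "palm grip")     # pen stowed, not writing
        and pen_state.gripped_by_hand == gesture.hand    # touch comes from the pen hand
    )
    if pen_in_gesturing_hand:
        show_magnifier(center=gesture.centroid, scale=gesture.scale)          # detail work
    else:
        pan_zoom_canvas(translate=gesture.translation, scale=gesture.scale)   # standard navigation
```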

Another really fun way to use the sensors — since we can sense the 3D orientation of the pen even when it is away from the screen — is to turn it into a digital airbrush:

Airbrush tool using the sensors

Airbrushing with a pen. Note that the conic section of the resulting “spray” depends on the 3D orientation of the pen — just as it would with a real airbrush.
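
For the geometrically inclined, here is a simplified sketch of that conic-section idea, under assumptions of my own: a fixed spray-cone half-angle and a flat screen at z = 0. It approximates the ellipse rather than deriving the exact conic, and it is not the rendering code from the paper.

```python
import math

def spray_footprint(tip_xy, hover_height, direction, half_angle_deg=15.0):
    """Approximate the elliptical 'spray' footprint of the airbrush.

    tip_xy       : (x, y) of the pen tip over the screen plane z = 0.
    hover_height : height of the tip above the screen.
    direction    : unit vector (dx, dy, dz) of the pen axis, with dz < 0
                   so the pen points toward the screen.
    """
    dx, dy, dz = direction
    tilt = math.acos(-dz)                  # angle between pen axis and screen normal
    dist = hover_height / -dz              # tip-to-screen distance along the pen axis
    center = (tip_xy[0] + dist * dx, tip_xy[1] + dist * dy)

    r = dist * math.tan(math.radians(half_angle_deg))
    minor = r                              # footprint width across the tilt direction
    major = r / max(math.cos(tilt), 1e-3)  # footprint stretches as the pen tilts over
    heading = math.atan2(dy, dx)           # major axis lies along the tilt direction
    return center, major, minor, heading
```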

At any rate, it was a really fun project that garnered a Best Paper Award, and a fair bit of press coverage (Gizmodo, Engadget, and FastCo Design, which named it the #2 User Interface innovation of 2014, among other coverage). It’s pretty hard to top that.

Unless maybe we do a lot more with all kinds of cool sensors on the tablet as well.

Hmmm…

You might just want to stay tuned here. There’s all kinds of great stuff in the works, as always (grin).


Hinckley, K., Pahud, M., Benko, H., Irani, P., Guimbretiere, F., Gavriliu, M., Chen, X., Matulic, F., Buxton, B., Wilson, A., Sensing Techniques for Tablet+Stylus Interaction. In the 27th ACM Symposium on User Interface Software and Technology (UIST’14), Honolulu, Hawaii, Oct. 5-8, 2014, pp. 605-614. http://dx.doi.org/10.1145/2642918.2647379

Watch Context Sensing Techniques for Tablet+Stylus Interaction video on YouTube


Paper: Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation

I have three papers coming out this week at MobileHCI 2013, the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, which convenes in Munich. It’s one of the great small conferences that focuses exclusively on mobile interaction, which of course is a long-standing interest of mine.

This post focuses on the first of those papers, and right behind it will be short posts on the other two projects that my co-authors are presenting this week.

I’ve explored many directions for viewing and moving through information on small screens, often motivated by novel hardware sensors as well as basic insights about human motor and cognitive capabilities. And I also have a long history in three-dimensional (spatial) interaction, virtual environments, and the like. But despite doing this stuff for decades, every once in a while I still get surprised by experimental results.

That’s just part of what keeps this whole research gig fun and interesting. If all the answers were simple and obvious, there would be no point in doing the studies.

In this particular paper, my co-authors and I took a closer look at a long-standing spatial, or through-the-lens, metaphor for interaction, in which you navigate documents (or other information spaces) by looking through your mobile as if it were a camera viewfinder, and subjected it to experimental scrutiny.

While this basic idea of using your mobile as a viewport onto a larger virtual space has been around for a long time, it hasn’t been carefully evaluated as a way of physically moving a device’s small screen to view virtually larger documents. And the potential advantages of the approach have not been fully articulated and realized either.

This style of navigation (panning and zooming control) on mobile devices has great promise because it allows you to offload the navigation task itself to your nonpreferred hand, leaving your preferred hand free to do other things like carry bags of groceries — or perform additional tasks such as annotation, selection, and tapping commands — on top of the resulting views.

But, as our study also shows, it is an approach not without its challenges; sensing the spatial position of the device, and devising an appropriate input mapping, are both difficult problems that will need more progress before we can take full advantage of this way of moving through information on a mobile device. For the time being, at least, the traditional touch gestures of pinch-to-zoom and drag-to-pan still appear to offer the most efficient solution for general-purpose navigation tasks.
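
To give a flavor of what “an appropriate input mapping” means here, below is one hedged sketch of a position-to-view mapping. The tracker coordinates, gains, and reference values are assumptions of mine, and whether moving the device closer should zoom in or out is itself one of the open design questions; this is not the mapping we evaluated.

```python
def update_view(device_pos, ref_pos, ref_zoom, pan_gain=1.0, ref_depth=0.4):
    """One possible through-the-lens mapping (illustrative only).

    device_pos, ref_pos : (x, y, z) device positions in meters from an
                          external tracker; z is distance from the user.
    ref_pos, ref_zoom   : values captured when spatial navigation engaged.
    """
    # In this variant, bringing the device closer zooms in, like leaning a
    # viewfinder toward the scene; lateral motion pans the document.
    zoom = ref_zoom * ref_depth / max(device_pos[2], 0.05)
    pan_x = pan_gain * (device_pos[0] - ref_pos[0]) / zoom   # pan in document units
    pan_y = pan_gain * (device_pos[1] - ref_pos[1]) / zoom
    return (pan_x, pan_y), zoom
```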

Pahud, M., Hinckley, K., Iqbal, S., Sellen, A., and Buxton, B., Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation. In ACM 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, Aug. 27-30, 2013, pp. 113-122. [PDF] [video – MP4]

Toward Compound Navigation on Mobiles via Spatial Manipulation on YouTube

Paper: Motion and Context Sensing Techniques for Pen Computing

I continue to believe that stylus input — annotations, sketches, mark-up, and gestures — will be an important aspect of interaction with slate computers in the future, particularly when used effectively and convincingly with multi-modal pen+touch input. It also seems that every couple of years I stumble across an interesting new use or set of techniques for motion sensors, and this year proved to be no exception.

Thus, it should come as no surprise that my latest project has continued to push in this direction, exploring the possibilities for pen interaction when the physical stylus itself is augmented with inertial sensors including three-axis accelerometers, gyros, and magnetometers.

Figure 1: Sensor pen hardware.

In recent years such sensors have become integrated with all manner of gadgets, including smart phones and tablets, and it is increasingly common for microprocessors to include such sensors directly on the die. Hence in my view of the world, we are just at the cusp of sensor-rich stylus devices becoming commercially feasible, so it is only natural to consider how such sensors afford new interactions, gestures, or context-sensing techniques when integrated directly with an active (powered) stylus on pen-operated devices.

In collaboration with Xiang ‘Anthony’ Chen and Hrvoje Benko I recently published a paper exploring motion-sensing capabilities for electronic styluses, which takes a first look at some techniques for such a device. With some timely help from Tom Blank’s brilliant devices team at Microsoft Research, we built a custom stylus — fully wireless and powered by an AAAA battery — that integrates these sensors.

The techniques we explored range from very simple but clever things, such as reminding the user if they have left the pen behind — a common problem that users encounter with pen-based devices — to fun new techniques that emulate physical media, such as the gesture of striking a loaded brush on one’s finger in water media.

Figure: ink spatter effect.
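
As a taste of how simple some of these techniques can be, here is a toy sketch of the left-behind-pen check. The motion signals and thresholds are stand-ins I made up for illustration, not the shipped logic.

```python
def pen_left_behind(tablet_motion, pen_motion, pen_gripped,
                    moving_thresh=0.15, idle_thresh=0.02):
    """Illustrative check: the tablet is on the move while the pen lies
    still and ungripped, so remind the user before they walk away.

    tablet_motion / pen_motion could be, e.g., accelerometer variance over
    the last few seconds; the threshold values here are invented.
    """
    tablet_on_the_move = tablet_motion > moving_thresh
    pen_idle = (pen_motion < idle_thresh) and not pen_gripped
    return tablet_on_the_move and pen_idle     # True -> pop up a reminder
```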

Check out the video below for an overview of these and some of the other techniques we have come up with so far, or read more about it in the technical paper linked below.

We are continuing to work in this area, and have lots more ideas that go beyond what we were able to accomplish in this first stage of the project, so stay tuned for future developments along these lines.

Hinckley, K., Chen, X., and Benko, H., Motion and Context Sensing Techniques for Pen Computing. In Proc. Graphics Interface 2013 (GI’13), Regina, Saskatchewan, Canada, May 29-31, 2013. Canadian Information Processing Society, Toronto, Ont., Canada. [PDF] [video – MP4]

Watch Motion and Context Sensing Techniques for Pen Computing video on YouTube

Paper: Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer

I collaborated on a nifty project with the fine folks from Saul Greenberg’s group at the University of Calgary exploring the emerging possibilities for devices to sense and respond to their digital ecology. When devices have fine-grained sensing of their spatial relationships to one another, as well as to the people in that space, it brings about new ways for users to interact with the resulting system of cooperating devices and displays.

This fine-grained sensing approach makes for an interesting contrast to what Nic Marquardt and I explored in GroupTogether, which intentionally took a more conservative approach towards the sensing infrastructure — with the idea in mind that sometimes, one can still do a lot with very little (sensing).

Taken together, the two papers nicely bracket some possibilities for the future of cross-device interactions and intelligent environments.

This work really underscores that we are still largely in the dark ages with regard to such possibilities for digital ecologies. As new sensors and sensing systems make this kind of rich awareness of the surround of devices and users possible, our devices, operating systems, and user experiences will grow to encompass the expanded horizons of these new possibilities as well.

The full citation and the link to our scientific paper are as follows:

Marquardt, N., Ballendat, T., Boring, S., Greenberg, S. and Hinckley, K., Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer. In Proceedings of ACM Interactive Tabletops & Surfaces (ITS 2012), Boston, MA, USA, November 11-14, 2012. 10 pp. [PDF] [video – MP4]

Watch the Gradual Engagement via Proximity video on YouTube

GroupTogether — Exploring the Future of a Society of Devices

My latest paper discussing the GroupTogether system just appeared at the 2012 ACM Symposium on User Interface Software & Technology in Cambridge, MA.

GroupTogether video available on YouTube

I’m excited about this work — it really looks hard at what some of the next steps in sensing systems might be, particularly when one starts considering how users can most effectively interact with one another in the context of the rapidly proliferating Society of Devices we are currently witnessing.

I think our paper on the GroupTogether system, in particular, does a really nice job of exploring this with strong theoretical foundations drawn from the sociological literature.

F-formations are small groups of people engaged in a joint activity.

F-formations are the various types of small groups that people form when engaged in a joint activity.

GroupTogether starts by considering the natural small-group behaviors adopted by people who come together to accomplish some joint activity. These small groups can take a variety of distinctive forms, and are known collectively in the sociological literature as f-formations. Think of those distinctive circles of people that form spontaneously at parties: typically they are limited to a maximum of about 5 people, the orientation of the participants clearly defines an area inside the group that is distinct from the rest of the environment outside the group, and there are fairly well established social protocols for people entering and leaving the group.

A small group of two users as sensed by GroupTogether's overhead Kinect depth-cameras

A small group of two users as sensed via GroupTogether’s overhead Kinect depth-cameras.

GroupTogether also senses the subtle orientation cues of how users handle and posture their tablet computers. These cues are known as micro-mobility, a communicative strategy that people often employ with physical paper documents, such as when a sales representative orients a document towards you to direct your attention and indicate that it is your turn to sign.

Our system, then, is the first to put small-group f-formations, sensed via overhead Kinect depth-camera tracking, in play simultaneously with the micro-mobility of slate computers, sensed via embedded accelerometers and gyros.

The GroupTogether prototype sensing environment and set-up

GroupTogether uses f-formations to give meaning to the micro-mobility of slate computers. It understands which users have come together in a small group, and which users have not. So you can just tilt your tablet towards a couple of friends standing near you to share content, whereas another person who may be nearby but facing the other way — and thus clearly outside of the social circle of the small group — would not be privy to the transaction. Thus, the techniques lower the barriers to sharing information in small-group settings.
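
The core sharing rule can be sketched in a few lines. The group and tilt objects below are hypothetical stand-ins, not GroupTogether’s actual API; they simply capture the idea that a deliberate tilt only reaches people in the sender’s sensed f-formation, and only those the tablet is tilted toward.

```python
def tilt_to_share(sender, tilt_direction, tilt_angle_deg, group, min_tilt_deg=20.0):
    """Illustrative sharing rule: gate micro-mobility (the tilt) by the
    sensed f-formation. `group` stands in for the overhead tracking result."""
    if tilt_angle_deg < min_tilt_deg or sender not in group.members:
        return []                                  # ignore incidental motion and outsiders
    return [p for p in group.members
            if p is not sender and group.is_facing(sender, tilt_direction, p)]
    # The devices of the returned users would then receive the shared content.
```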

Check out the video to see what these techniques look like in action, as well as to see how the system also considers groupings of people close to situated displays such as electronic whiteboards.

The full text of our scientific paper on GroupTogether and the citation are also available.

My co-author Nic Marquardt was the first author and delivered the talk. Saul Greenberg of the University of Calgary also contributed many great insights to the paper.

Image credits: Nic Marquardt

Paper: Cross-Device Interaction via Micro-mobility and F-formations (“GroupTogether”)

Marquardt, N., Hinckley, K., and Greenberg, S., Cross-Device Interaction via Micro-mobility and F-formations. In ACM UIST 2012 Symposium on User Interface Software and Technology (UIST ’12), Cambridge, MA, Oct. 7-10, 2012, pp. (TBA). ACM, New York, NY, USA. [PDF] [video – WMV]. Known as the GroupTogether system.

See also my post with some further perspective on the GroupTogether project.

Watch the GroupTogether video on YouTube

Lasting Impact Award for “Sensing Techniques for Mobile Interaction”

Last week I received a significant award for some of my early work in mobile sensing.

It was not that long ago, really, that I would get strange glances from practical-minded people– those folks who would look at me with heads tilted downwards ever so slightly, eyebrows raised, and eyeballs askew– when I would mention how I was painting mobile devices with conductive epoxy and duct-taping accelerometers and infrared range-finders to them.

The dot-com bubble was still expanding, smartphones didn’t exist yet, and accelerometers were still far too expensive to reasonably consider on a device’s bill of materials. Many people still regarded the apex of handheld nirvana as the PalmPilot, although its luster was starting to fade.

And this Frankensteinian contraption of sensors, duct tape, and conductive epoxy was taking shape on my laboratory bench-top:

The Idea

I’d been dabbling in the area of sensor-enhanced mobile interaction for about a year, trying one idea here, another idea there, but the project had stubbornly refused to come together. For a long time I felt like it was basically a failure. But every so often my colleagues on the project– Jeff Pierce, Mike Sinclair, and Eric Horvitz– and I would come up with one new example, or another type of idea to try out, and slowly we populated a space of interesting new ways to use the sensors to make mobile devices smarter– or to be more honest about it, just a little bit less stupid– in how they responded to the physical environment, how the user was handling the device, or the orientation of the screen.

The latter led to the idea of using the accelerometer to automatically re-orient the display based on how the user was holding the device. The accelerometer gave us a constant signal of this-way-up, and at some point we realized it would make a great way to switch between portrait and landscape display formats without any need for buttons or menus, or indeed without even explicitly having to think about the interaction at all. The handheld, by being perceptive about it, could offload the decision from the user– hey, I need to look at this table in landscape— to the background of the interaction, so that the user could simply move the device to the desired orientation, and our sensors and our software would automatically optimize the display accordingly.

There were also some interesting subtleties to it. Just using the raw angle of the display, relative to gravity, was not that satisfactory. We built in some hysteresis so the display wouldn’t chatter back and forth between different orientations. We added special handling when you put the handheld down flat on a desk, or picked it back up, so that the screen wouldn’t accidentally flip to a different orientation because of this brief, incidental motion. We noticed that flipping the screen upside-down, which we initially thought wouldn’t be useful, was an effective way to quickly show the contents of the screen to someone seated across the table from you. And we also added some layers of logic in there so that other uses of the accelerometer could co-exist with automatic screen rotation.
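
For concreteness, here is a minimal sketch of that kind of orientation logic, with hysteresis and a flat-on-the-desk guard. The axis conventions and thresholds are assumptions for illustration, not the code we wrote back then.

```python
import math

def update_orientation(ax, ay, az, current, hysteresis_deg=15.0, flat_ratio=0.8):
    """Illustrative auto-rotation in the spirit described above.

    ax, ay, az : accelerometer reading (gravity vector in device coordinates,
                 with z out of the screen); `current` is the active orientation.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    # If gravity points mostly out of the screen, the device is lying flat:
    # keep the previous orientation rather than flipping on incidental motion.
    if g == 0 or abs(az) > flat_ratio * g:
        return current

    angle = math.degrees(math.atan2(-ax, ay)) % 360       # roll within the screen plane
    targets = {"portrait": 0.0, "landscape-left": 90.0,
               "portrait-flipped": 180.0, "landscape-right": 270.0}

    def angular_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)

    best = min(targets, key=lambda o: angular_dist(angle, targets[o]))
    # Hysteresis: only switch once we are well inside the new orientation's
    # 90-degree zone, so the display does not chatter at the boundaries.
    if best != current and angular_dist(angle, targets[best]) < 45.0 - hysteresis_deg:
        return best
    return current
```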

Once we had this automatic screen rotation idea working well, I knew we had something. We worked furiously right up to the paper deadline, hammering out additional techniques, working out little kinks and details, figuring out how to convey the terrain we’d explored in the paper we were writing.

The reviewers all loved the paper, and it received a Best Paper Award at the conference. We had submitted it to the Association for Computing Machinery’s annual UIST Symposium– the UIST 2000 13th Annual Symposium on User Interface Software and Technology, held in San Diego, California– because we knew the UIST community was ideally suited to evaluate this research. The paper had a novel combination of sensors. It was a systems paper– that is, it did not just propose a one-off technique but rather a suite of techniques that all used the sensors in a variety of creative ways that complemented one another. And UIST is a rigorously peer-reviewed single-track conference. It’s not the largest conference in the field of Human-Computer Interaction by a long shot– for many years it averaged about two hundred attendees– but as my Ph.D. advisor Randy Pausch (now known for “The Last Lecture”) would often say, “UIST is only 200 people, but it’s the right 200 people.”

This is the video, recorded back in the year 2000, that accompanied the paper. I think it’s stood the test of time pretty well– or at least a lot better than the hair on top of my head :-).

Sensing Techniques for Mobile Interaction on YouTube

The Award

Fast forward ten years, and the vast majority of handhelds and slates being produced today include accelerometers and other micro-electromechanical wonders. The cost of these sensors has dropped to essentially nothing. Increasingly, they’re included as a co-processor right on the die with other modules of mobile microprocessors. The day will soon come where it will be all but impossible to purchase a device without sensors directly integrated into the microscopic Manhattan of its silicon gates.

And our mobile screens all automatically rotate, like it or not 🙂

So, it was with great pleasure last week that I attended the 2011 24th annual ACM UIST Symposium, and received a Lasting Impact Award, presented to me by Stanford professor Dr. Scott Klemmer, for the contributions of our UIST 2000 paper “Sensing Techniques for Mobile Interaction.”

The inscription on the award reads:

Awarded for its scientific exploration of mobile interaction, investigating new interaction techniques for handheld mobile devices supported by hardware sensors, and laying the groundwork for new research and industrial applications.

UIST 2011 Lasting Impact Award

In the Meantime…

I remember demonstrating my prototype on-stage with Bill Gates at a media event here in Redmond, Washington in 2001. Gates spoke about the importance of sustained spending on R & D– both in the public and private sectors– and he used my demo as an example of some up-and-coming research, but what I most strongly recall is lingering in the green room backstage with him and some other folks. It wasn’t the first time that I’d met Gates, but it was the first occasion where I chit-chatted with him a bit in a casual, unstructured context. I don’t remember what we talked about but I do remember his foot twitching, always in motion, driving the pedal of a vast invisible loom, weaving a sweeping landscape surmounted by the towering summits of his electronic dreams.

I remember my palms sweating, nervous about the demo, hoping that the sensors I’d duct-taped to my transmogrified Cassiopeia E-105 Pocket PC wouldn’t break off or drain the battery or go crazy with some unforeseen nuance of the stage lighting (yes, infrared proximity sensors most definitely have stage fright).

And then less than a week later came the 9/11 attacks. Suddenly spiffy little sensors for mobile devices didn’t seem so important any more. Many product groups, including Windows Mobile at the time, got excited about my demonstration but then the realities of a thousand other crushing demands and priorities rained down on the fragile bubble of technological wonderland I’d been able to cobble together with my prototype. The years stretched by and sensors still hadn’t become mainstream like I had expected them to be.

Then some laptops started shipping with accelerometers to automatically park the hard-disk when you dropped the laptop. I remember seeing digital cameras that would sense the orientation you snapped a picture in, so that you could view it properly when you downloaded it. And when the iPhone shipped in 2007, one of the coolest features on it was the embedded accelerometer, which enabled automatic screen rotation and tilt-based games.

A View to the Future

It took about five years longer than I expected, but we have finally reached an age where clever uses of sensors– both for obvious things like games, as well as for subtle and not-so-obvious things like counting footfalls while you are walking around with the device– abound.

And my take on all this?

We ain’t seen nothin’ yet.

Since my initial paper on sensing techniques for mobile interaction, every couple of years another idea has struck me. How about answering your phone, or cuing a voice-recognition mode, just by holding your phone to your ear? How about bumping devices together as a way to connect them? What of dual-screen devices that can sense the posture of the screens, and thereby support a breadth of automatically sensed functions? What about new types of motion gestures that combine multi-touch interaction with the physical gestures, or vibratory signals, afforded by these sensors?

And I’m sure there are many more. My children will never know a world where their devices are not sensitive to motion and proximity, to orientation and elevation and all the headings of the compass.

The problem is, the future is not so obvious until you’ve struck upon the right idea, until you’ve found the one gold nugget in acres and acres of tailings from the mine of your technological ambitions.

A final word of advice: if your aim is to find these nuggets– whether in research or in creative endeavors– what you need to do is dig as fast as you possibly can. Burrow deeper. Dig side-tunnels where no-one has gone before. Risk collapse and explosion and yes, worst of all, complete failure and ignominious rejection of your diligently crafted masterpieces.

Above all else, fail faster.

Because sometimes those “failed” projects turn out to be the most rewarding of all.

***

This project would not have been possible without standing on the shoulders of many giants. Of course, there are my colleagues on the project– Jeff Pierce, who worked with me as a Microsoft Research Graduate Fellowship recipient at the time, and did most of the heavy lifting on the software infrastructure and contributed many of the ideas and nuances of the resulting techniques. Mike Sinclair, who first got me thinking about accelerometers and spent many, many hours helping me cobble together the sensing hardware. And Eric Horvitz, who helped to shape the broad strokes of the project and who was always an energetic sounding board for ideas.

With the passing of time that an award like this entails, one also reflects on how life has changed, and the people who are no longer there. I think of my advisor Randy Pausch, who in many ways has made my entire career possible, and his epic struggle with pancreatic cancer. I think of my first wife, Kerrie Exely, who died in 1997, and of her father, Bill, who also was claimed by cancer a couple of years ago.

Then there are the many scientists whose work I built upon in our exploration of sensing systems. Beverly Harrison’s explorations of embodied interactions. Albrecht Schmidt’s work on context sensing for mobile phones. Jun Rekimoto’s exploration of tilting user interfaces. Bill Buxton’s insights into background sensing. And many others cited in the original paper.