Bootstrapper: Recognizing Tabletop Users by Their Shoes

Stephan R. Richter, Christian Holz, Patrick Baudisch. CHI '12

Bootstrapper recognizes users interacting with the table by observing their shoes with a depth camera and an RGB camera. We use a Kinect to extract users' shoes from the depth image, retrieve their textures from the color image, and match them against samples in a database to identify each user.
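
The following sketch illustrates this pipeline in Python. It assumes aligned depth and color frames and a depth image already converted to per-pixel height above the floor using the camera's known pose; the 15 cm shoe band, the coarse color-histogram descriptor, and the histogram-intersection matcher are illustrative assumptions rather than the descriptors used in the actual system.

    import numpy as np

    SHOE_HEIGHT_MM = 150   # assumed: shoes occupy roughly the 15 cm band above the floor
    HIST_BINS = (8, 8, 8)  # assumed: coarse RGB histogram as the shoe descriptor

    def segment_shoes(height_mm):
        """Mask pixels lying in the shoe band just above the floor.

        height_mm: per-pixel height above the ground plane in millimeters,
        assumed to be precomputed from the depth image and the camera pose.
        """
        return (height_mm > 5) & (height_mm < SHOE_HEIGHT_MM)

    def shoe_descriptor(color, mask):
        """Normalized RGB histogram over the masked (shoe) pixels."""
        pixels = color[mask].astype(float)   # N x 3 array of RGB values
        hist, _ = np.histogramdd(pixels, bins=HIST_BINS, range=[(0, 256)] * 3)
        return hist.ravel() / max(hist.sum(), 1.0)

    def identify(color, height_mm, database):
        """Match the observed shoe texture against enrolled samples.

        database: dict mapping user id -> descriptor of that user's enrolled shoe.
        Returns the user whose sample is most similar (histogram intersection).
        """
        desc = shoe_descriptor(color, segment_shoes(height_mm))
        scores = {user: np.minimum(desc, d).sum() for user, d in database.items()}
        return max(scores, key=scores.get)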

Abstract

To enable personalized functionality, such as logging tabletop activity by user, tabletop systems need to recognize users. DiamondTouch does so reliably, but requires users to stay in assigned seats and cannot recognize users across sessions. We propose a different approach based on distinguishing users' shoes. While users are interacting with the table, our system Bootstrapper observes their shoes using one or more depth cameras mounted at the edge of the table. It then identifies users by matching camera images against a database of known shoe images. When multiple users interact, Bootstrapper associates touches with shoes based on hand orientation. The approach can be implemented using consumer depth cameras because (1) shoes offer large, distinct features such as color, and (2) shoes naturally align with the ground, giving the system a well-defined perspective and thus reducing ambiguity. We report two simple studies in which Bootstrapper recognized participants from a database of 18 users with 95.8% accuracy.
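
As a rough illustration of the touch-to-shoe association described above, the sketch below extends the hand orientation at each touch backwards until it leaves the table top and assigns the touch to the user whose tracked shoe position lies closest to that exit point. The table dimensions, the stepping scheme, and the shoe_positions input are hypothetical; the paper's actual association procedure may differ.

    import numpy as np

    TABLE_W, TABLE_H = 1.2, 0.9   # assumed table-top dimensions in meters

    def edge_exit_point(touch_xy, hand_dir, step=0.01):
        """Walk backwards along the hand direction until leaving the table top.

        touch_xy: (x, y) touch location on the table surface.
        hand_dir: unit vector pointing from the wrist toward the fingertip.
        Returns the point where the backwards ray crosses the table edge.
        """
        p = np.asarray(touch_xy, dtype=float)
        back = -np.asarray(hand_dir, dtype=float)
        while 0.0 <= p[0] <= TABLE_W and 0.0 <= p[1] <= TABLE_H:
            p = p + step * back
        return p

    def associate_touch(touch_xy, hand_dir, shoe_positions):
        """Assign the touch to the user whose shoes are nearest the exit point.

        shoe_positions: dict mapping user id -> (x, y) of that user's shoes
        around the table, as tracked by the depth cameras at the table edge.
        """
        exit_pt = edge_exit_point(touch_xy, hand_dir)
        return min(shoe_positions,
                   key=lambda u: np.linalg.norm(np.asarray(shoe_positions[u]) - exit_pt))

Walking backwards along the finger direction approximates where the touching user stands, since an outstretched arm roughly points from the body toward the touch point.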
