Kinect Chapter 5.
Viewing Users in 3D
This chapter was previously labelled as "NUI Chapter 15.5".
This chapter revisits skeletal tracking but this time renders the
users in 3D, as shown in the screenshot at the top of the page.
The OpenNI aspects of this application (called UsersTracker3D) are much
the same as in the last chapter --
the pose detection and skeletal tracking
capabilities of a UserGenerator node are utilized to obtain the users'
joint coordinates. In fact, the OpenNI part of the code is simpler than
previously because I've no need to collect and render depth map information.
The main new features are:
- the joints and limbs are Java 3D shapes (spheres and cylinders
respectively), which move and rotate to match the movements of the users;
- OpenNI observers (listeners) deal with a user temporarily moving out
of range of the Kinect sensor and returning (the user's skeleton disappears
during that time);
- joints are positioned using averaged coordinate values, collected
over several sensor updates. This reduces the 'shuddering' of joints, at
the cost of reducing the responsiveness of a skeleton to sudden user movement;
- each joint and limb can be made invisible independently of the rest of
the skeleton. This lets the application deal with joints going out of
sensor range by hiding only those parts of the skeleton;
- the 3D scene is essentially unchanged from the Java 3D code used in
chapter 3 for the point cloud
(i.e. a checkerboard floor, blue sky,
lighting, and a moveable camera). The screenshot on the right
shows a user standing facing
the Kinect, but with the Java 3D camera rotated to the left.
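To illustrate the first point: a Java 3D Cylinder is modelled along the y-axis, so a limb cylinder must be rotated to lie along the vector between its two joints. The following is a minimal sketch of that calculation (an axis-angle rotation from the y-axis onto the joint-to-joint direction); the class and method names are my own, not identifiers from the chapter's code.

```java
// Sketch: compute the axis-angle rotation that turns the y-axis
// (a Java 3D Cylinder's default orientation) onto the direction
// from joint a to joint b. Names here are illustrative only.
public class LimbOrientation {
    // returns {axisX, axisY, axisZ, angleRadians}; the axis is the
    // cross product of the y-axis and the normalized direction, and
    // the angle comes from their dot product
    public static double[] axisAngle(double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
        double len = Math.sqrt(dx*dx + dy*dy + dz*dz);
        dx /= len;  dy /= len;  dz /= len;
        // cross((0,1,0), (dx,dy,dz)) = (dz, 0, -dx)
        // dot((0,1,0), (dx,dy,dz))  = dy
        // note: the axis degenerates to zero when the limb is
        // already parallel to the y-axis (angle 0 or pi)
        return new double[] { dz, 0, -dx, Math.acos(dy) };
    }
}
```

In the real application the resulting axis-angle would be loaded into a Transform3D and applied to the cylinder's TransformGroup on each skeleton update.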
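The out-of-range handling can be sketched as a small piece of bookkeeping driven by the sensor's user events: hide the skeleton when a user exits, show it again on re-entry, and discard it when the user is lost. The listener methods below stand in for the OpenNI observer callbacks; their names are my own choices, not the wrapper's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-user skeleton visibility, updated by sensor events.
// In the real code these methods would be invoked by OpenNI observers
// attached to the UserGenerator node.
public class UserVisibilityTracker {
    private final Map<Integer, Boolean> visible = new HashMap<>();

    public void onNewUser(int userId)     { visible.put(userId, true);  }
    public void onUserExit(int userId)    { visible.put(userId, false); } // hide skeleton
    public void onUserReEnter(int userId) { visible.put(userId, true);  } // show it again
    public void onLostUser(int userId)    { visible.remove(userId);      } // discard skeleton

    // the renderer would consult this before drawing a user's skeleton
    public boolean isVisible(int userId) {
        return visible.getOrDefault(userId, false);
    }
}
```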
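The joint-averaging idea is a simple moving average: keep each joint's last N (x, y, z) readings and position the sphere at their mean, trading a little responsiveness for less shudder. A minimal sketch (class and method names are mine, not the chapter's):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: smooth a joint's position by averaging its coordinates
// over the last maxSize sensor updates.
public class JointSmoother {
    private final int maxSize;                     // updates to average over
    private final Deque<float[]> history = new ArrayDeque<>();

    public JointSmoother(int maxSize) { this.maxSize = maxSize; }

    // record the latest (x, y, z) reading for this joint
    public void update(float x, float y, float z) {
        if (history.size() == maxSize)
            history.removeFirst();                 // drop the oldest reading
        history.addLast(new float[] { x, y, z });
    }

    // averaged position over the stored readings
    public float[] average() {
        float[] avg = new float[3];
        for (float[] p : history)
            for (int i = 0; i < 3; i++)
                avg[i] += p[i];
        for (int i = 0; i < 3; i++)
            avg[i] /= history.size();
        return avg;
    }
}
```

A larger maxSize gives a steadier skeleton but makes it lag further behind sudden user movement, which is the trade-off noted above.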
- The PDF file for the draft chapter (452 KB).
Last updated: 22nd December 2011.
Fixed Figure 7.
- Zipped code (20 KB).
Last updated: 12th October 2011.
IMPORTANT: the installation details for the OpenNI and NITE software have changed
since the book was published; please read the details on the
main KOPS page.
Dr. Andrew Davison
Back to my home page