[NyARToolkit PIC]

Chapter VBI-15.   Augmented Reality with NyARToolkit


[ This chapter does not appear in the book. ]

[Note: this draft differs quite significantly from the previous version. I've removed the reliance on JMF, replacing it with JavaCV's FrameGrabber. Since NyARToolkit on the PC utilizes JMF for image capturing, this has meant some large changes to my code. I've not changed any of the NyARToolkit API, but bypass most of its optional utility classes.]


Augmented Reality (AR) enhances a user's view of the real world with computer-generated imagery, rendered quickly enough so that the added content can be changed/updated as the physical view changes.

AR started its rise with the development of Head Mounted Displays (HMDs), which superimpose images over the user's field of vision. Tracking sensors allow these graphics to be modified in response to the user's head movement. But AR received its biggest boost with the appearance of mobile devices containing cameras, GPS, accelerometers, wireless internet connections, and more. Applications are starting to appear that let you simply point a camera phone at something (e.g. a shop window, a theatre) and have the on-screen display augmented with information (e.g. sales offers, discounted tickets), customized to your interests, at that time and place. Two popular examples are Layar and Wikitude.

ARToolkit is probably the most widely used AR library: it identifies predefined physical markers in a supplied video stream, and overlays those markers with 3D models. One of its many 'children' is NyARToolkit, an OOP port aimed at Java 3D, Processing, Android, C#, and C++; I'll be using the Java 3D version in this chapter.

The image at the top of the page illustrates how the toolkit can be utilized.

[MultiNyAR 1 PIC] A camera streams video into the NyARToolkit application (MultiNyAR.java in this chapter), which searches for markers in each frame. The markers (squares with thick black borders) are identified, and their orientation relative to the image frame is calculated. 3D models associated with the markers are added to the frame, after being transformed so they appear to be standing on top of their markers. The screenshot on the right shows the MultiNyAR GUI in more detail.
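The crucial step in this pipeline is applying the 4x4 pose matrix recovered from a marker to a model's local coordinates, so the model appears to stand on the marker. The following is a minimal sketch of that step using plain arrays (the class and method names are mine, not NyARToolkit's, whose real detector and matrix classes are described later):

```java
// Sketch of the pose-transform step of the AR pipeline (hypothetical names;
// NyARToolkit's own detector and matrix classes are introduced later).
// A 4x4 homogeneous pose matrix, recovered from a marker, maps points in
// the model's local space into camera space.
public class PoseSketch {
    // Apply a 4x4 homogeneous transform (row-major) to a 3D point.
    public static double[] transformPoint(double[][] m, double[] p) {
        double[] q = new double[3];
        for (int i = 0; i < 3; i++)
            q[i] = m[i][0]*p[0] + m[i][1]*p[1] + m[i][2]*p[2] + m[i][3];
        return q;
    }

    public static void main(String[] args) {
        // Suppose the detector reported this marker pose: no rotation,
        // translated to (8.2, -7.9, 41.8) in camera coordinates.
        double[][] pose = {
            {1, 0, 0,  8.2},
            {0, 1, 0, -7.9},
            {0, 0, 1, 41.8},
            {0, 0, 0,  1.0}
        };
        // A model vertex 1 unit above the marker's origin.
        double[] v = transformPoint(pose, new double[] {0, 1, 0});
        System.out.printf("vertex in camera space: (%.1f, %.1f, %.1f)%n",
                          v[0], v[1], v[2]);   // (8.2, -6.9, 41.8)
    }
}
```

In the real application this transform is recomputed every frame, which is what keeps the model glued to its marker as the marker moves.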

The application consists of a panel showing the augmented video stream, and a text area giving extra details about the markers and models. For example, the robot model in the screenshot above is positioned at (-1.9, -1.8, 51.0) and the cow at (8.2, -7.9, 41.8). The positive z-axis points into the scene (as explained later), which means that the robot is 'behind' the cow.
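The depth comparison above follows directly from the coordinate convention: since the positive z-axis points into the scene, a larger z value means an object is further from the camera. A tiny illustration (the positions are the ones reported in the screenshot; the class is mine):

```java
// Illustrates the chapter's depth convention: +z points into the scene,
// so a smaller z coordinate means closer to the camera (in front).
public class DepthOrder {
    // Returns true if position a is in front of position b.
    public static boolean inFrontOf(double[] a, double[] b) {
        return a[2] < b[2];
    }

    public static void main(String[] args) {
        double[] robot = {-1.9, -1.8, 51.0};   // from the first screenshot
        double[] cow   = { 8.2, -7.9, 41.8};
        System.out.println("cow in front of robot? " + inFrontOf(cow, robot));
        // prints: cow in front of robot? true
    }
}
```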

The screenshot at the bottom of the page shows the same scene after the markers have been moved around.

The models' orientations and positions have changed to correspond to the new locations of the markers. For instance, the robot's z-axis position in the second screenshot is now 41.2, while the cow's is 50.5, indicating that the robot is in the foreground.

To summarize: the MultiNyAR.java program described in this chapter shows how to capture a video stream, identify markers in each frame, calculate their positions and orientations, and overlay them with suitably transformed 3D models.

My code also replaces JMF with JavaCV's FrameGrabber. This only affects the capturing parts of the code; the core elements of NyARToolkit are untouched.
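The switch to JavaCV only changes where frames come from; the capture loop keeps the same shape (start the grabber, grab frames until none remain, stop). Below is a sketch of that loop. The Grabber interface is a stand-in I've defined for JavaCV's FrameGrabber, and StubGrabber fakes camera frames so the sketch runs without hardware:

```java
// Sketch of the capture loop after replacing JMF. The Grabber interface
// is a stand-in for JavaCV's FrameGrabber; StubGrabber supplies dummy
// frames so the loop's structure can be shown without a real camera.
public class CaptureLoopSketch {
    interface Grabber {
        void start() throws Exception;
        int[] grab() throws Exception;   // one frame's pixel data, or null
        void stop() throws Exception;
    }

    // Fake grabber: returns a fixed number of dummy frames, then null.
    static class StubGrabber implements Grabber {
        private int framesLeft = 3;
        public void start() {}
        public int[] grab() { return (framesLeft-- > 0) ? new int[640 * 480] : null; }
        public void stop() {}
    }

    // Grab and process frames until the grabber runs dry;
    // returns the number of frames handled.
    public static int run(Grabber g) throws Exception {
        g.start();
        int count = 0;
        int[] frame;
        while ((frame = g.grab()) != null) {
            // ...marker detection and model rendering would go here...
            count++;
        }
        g.stop();
        return count;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("frames processed: " + run(new StubGrabber()));
        // prints: frames processed: 3
    }
}
```

In MultiNyAR itself, the stand-in is replaced by a real JavaCV grabber, and the body of the loop hands each frame to NyARToolkit's marker detector.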




Dr. Andrew Davison
E-mail: ad@fivedots.coe.psu.ac.th