4.4 Rendering

Alias|Wavefront's Maya three-dimensional animation software and Pixar's PhotoRealistic RenderMan renderer were used in the rendering phase: Maya for geometry and camera control, and RenderMan for rendering with the registration shader. Controlling the motion of the camera (i.e., the current view) is very important. Initially, we used a key-framed motion spline created in Maya, but this motion had some visual inconsistencies. We discovered that for the velocity of the zoom out to appear constant and smooth over many orders of magnitude, the apparent angular velocity of the pixels at the edges of the frame must remain constant. This implies vertical motion that is exponential over time, as in the trajectory function:
H(t) = h0 (h1 / h0)^t

where H(t) is the height of the camera above the surface of the Earth at time t, h0 is the starting height of the camera at time t = 0, and h1 is the ending height of the camera at time t = 1. For many of our animations, h0 is 0.00015 Earth radii and h1 is 5.75 Earth radii. We also damped the beginning and end of the trajectory function for an ease-in/ease-out effect.

Maya provides the tools to programmatically control the camera motion via expressions and the Maya Embedded Language (MEL) [11]. Using MEL, we were able to program this exponential trajectory function for the camera directly, yielding very smooth camera motion.

The rendering phase often uncovered problems with color matching or georegistration that needed to be addressed; we returned to the appropriate phase in the pipeline to fix them. Our iteration cycle was slow, in part due to the need to preprocess the images with the txmake application, which generates optimized MIP-mapped versions of the textures for rapid access by the registration shader. This program could take over an hour to process a single image file. Linux-based 1.4 GHz processors were used to render the zooms. We had only three RenderMan licenses at the time, but the renderer is very efficient, and on three processors we were able to render one high-quality NTSC-video-resolution zoom in about 8 hours.

5 CONCLUSION

We have presented the techniques used to create dramatic visualizations highlighting multiple resolutions of remote sensing data. Our initial efforts led to the development of a procedural registration shader. By employing this shader in a production pipeline, we have been able to create a series of highly successful visualizations in a reasonable time. To date, we have produced 26 zoom visualizations into the following locations:
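The exponential camera trajectory described in Section 4.4 can be sketched as follows. This is a minimal Python sketch, not the authors' MEL expression; the smoothstep easing is an assumption, since the paper states only that the trajectory was damped at both ends for an ease-in/ease-out effect.

```python
def smoothstep(t: float) -> float:
    # Hypothetical ease-in/ease-out damping; the exact easing used
    # in the original MEL expressions is not specified in the paper.
    return t * t * (3.0 - 2.0 * t)

def camera_height(t: float, h0: float = 0.00015, h1: float = 5.75) -> float:
    """Camera height above the Earth's surface, in Earth radii, at
    normalized time t in [0, 1].

    Exponential interpolation H(t) = h0 * (h1 / h0)**t keeps the
    apparent angular velocity of edge pixels constant; reparameterizing
    t through smoothstep damps the start and end of the move.
    """
    s = smoothstep(t)
    return h0 * (h1 / h0) ** s

# Endpoints match the definition: H(0) = h0 and H(1) = h1.
```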
All of these were rendered in NTSC-video resolution, and
These visualizations received significant national and
Acknowledgements

Many other people were involved in creating these visualizations.
We would also like to thank Space Imaging Corporation, the USGS,
References

[1] http://svs.gsfc.nasa.gov/stories/zooms/
[2] http://svs.gsfc.nasa.gov/stories/nasm/
[3] http://modis.gsfc.nasa.gov/
[5] http://www.spaceimaging.com/
[6] http://svs.gsfc.nasa.gov/vis/a000000/a001300/a001324/
[7] http://www.gsfc.nasa.gov/topstory/20010419landsatimaging.html
[8] A. Apodaca and L. Gritz, Advanced RenderMan, Part III, Morgan
[9] P. S. Chavez, S. C. Sides, and J. A. Anderson, Comparison of three different methods to merge multiresolution and multispectral data:
[10] D. Margulis, Professional Photoshop 6, Wiley Computer Publishing,
[11] P. Anderson, et al., Using Maya Expressions, Alias|Wavefront Inc.,