4.4 Rendering

Alias|Wavefront's Maya three-dimensional animation software and Pixar's PhotoRealistic RenderMan renderer were used in the rendering phase. Maya was used for geometry and camera control, and RenderMan was used for rendering with the registration shader.

Controlling the motion of the camera (i.e., the current view) is very important. Initially, we used a key-framed motion spline created in Maya, but this motion had some visual inconsistencies. We discovered that for the velocity of the zoom out to appear constant and smooth over many orders of magnitude, the apparent angular velocity of the pixels at the edges of the frame must remain constant. This implies vertical motion that is exponential over time, as in the trajectory function:

    H(t) = h0 (h1/h0)^t

where H(t) is the height of the camera above the surface of the earth at time t, h0 is the starting height of the camera at time t = 0, and h1 is the ending height of the camera at time t = 1. For many of our animations h0 is 0.00015 Earth radii and h1 is 5.75 Earth radii. We also damped the beginning and end of the trajectory function for an ease-in/ease-out effect.

Maya provides the tools to programmatically control the camera motion via expressions and the Maya Embedded Language (MEL) [11]. Using MEL, we were able to program this exponential trajectory function for the camera directly to get very smooth camera motion.
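The trajectory described above can be sketched as follows (in Python rather than MEL, for illustration only; the smoothstep remapping here is one common way to get an ease-in/ease-out effect and stands in for whatever damping was actually used):

```python
def camera_height(t, h0=0.00015, h1=5.75):
    """Exponential trajectory: height in Earth radii, h0 at t=0, h1 at t=1.

    Because H(t+dt)/H(t) is constant for fixed dt, the apparent angular
    velocity of pixels at the frame edges stays constant during the zoom.
    """
    return h0 * (h1 / h0) ** t

def smoothstep(t):
    """Cubic ease-in/ease-out ramp: 0 at t=0, 1 at t=1, zero slope at both ends."""
    return t * t * (3.0 - 2.0 * t)

def damped_height(t, h0=0.00015, h1=5.75):
    """Trajectory with damped endpoints: remap time before the exponential."""
    return camera_height(smoothstep(t), h0, h1)
```

In Maya, the equivalent expression would drive the camera's translate attribute from the current frame number normalized to [0, 1].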

The rendering phase often uncovered problems with color matching or georegistration, which we addressed by returning to the appropriate phase in the pipeline. Our iteration cycle was slow in part due to the need to preprocess the images using the txmake application, which generates optimized MIP-mapped versions of the textures for rapid access by the registration shader. This program could take over an hour to process an image file.

The zooms were rendered on Linux-based 1.4 GHz processors. We had only three RenderMan licenses at the time, but the renderer is very efficient, and on three processors we were able to render one high-quality NTSC-video-resolution zoom in about 8 hours.


5. Conclusions

We have presented the techniques used to create dramatic visualizations highlighting multiple resolutions of remote sensing data. Our initial efforts led to the development of a procedural registration shader. By employing this shader in a production pipeline, we have been able to create a series of highly successful visualizations in a reasonable time.

To date, we have produced 26 zoom visualizations into the following locations:

  • Washington DC: The US Capitol Building, NASA HQ
  • New York, NY: The World Trade Center
  • Baltimore, MD: The Inner Harbor
  • Boston, MA: The Bunker Hill Monument
  • Atlanta, GA: The Georgia State Capitol
  • Chicago, IL: The Sears Tower
  • Orlando, FL: Epcot Center
  • Los Angeles, CA: The Hollywood Sign
  • Long Beach, CA: The Queen Mary
  • Tucson, AZ: The University of Arizona
  • Seattle, WA: The Space Needle
  • San Francisco, CA: Fisherman's Wharf
  • Greenbelt, MD: NASA GSFC buildings 8, 28, & 33
  • Greenbelt, MD: Eleanor Roosevelt High School
  • Park City, UT: Olympic snowboarding venues
  • Salt Lake City, UT: Olympic Stadium, The Delta Center
  • Snowbasin, UT: Olympic downhill skiing venue
  • Beltsville, MD: EOS Land Validation Site
  • Skukuza, South Africa: EOS Land Validation Site
  • Mongu, Zambia: EOS Land Validation Site
  • New Orleans, LA: The Louisiana Superdome

All of these were rendered in NTSC-video resolution, and
some were also rendered in HDTV resolution.

These visualizations received significant national and
international television coverage during Earth Day 2001, Super
Bowl XXXVI and the 2002 Winter Olympics. Millions of
viewers have seen these visualizations and have, we hope, come
to a better understanding of the role remote sensing imagery can
play in their day-to-day lives.


Acknowledgments

Many other people were involved in creating these visualizations.
They include: Marte Newcombe, Michael Mangos, Eric
Sokolowsky, Alex Kekesi, Jim Williams, John McGinnis, Kevin
Mahoney, Joycelyn Ingram, Stuart Snodgrass, Lori Perkins, Wade
Sisler, Michael Starobin, Jarrett Cohen, Laura Rocchio, Darrel
Williams, Jacques Descloitres, David Herring, and Brian

We would also like to thank Space Imaging Corporation, the USGS,
and the Data Buy folks at NASA's Stennis Space Center.


References

[1]  http://svs.gsfc.nasa.gov/stories/zooms/

[2]  http://svs.gsfc.nasa.gov/stories/nasm/

[3]  http://modis.gsfc.nasa.gov/

[4]  http://landsat7.usgs.gov/

[5]  http://www.spaceimaging.com/

[6]  http://svs.gsfc.nasa.gov/vis/a000000/a001300/a001324/

[7]  http://www.gsfc.nasa.gov/topstory/20010419landsatimaging.html

[8]  A. Apodaca and L. Gritz, Advanced RenderMan, Part III, Morgan
Kaufmann, San Francisco, California, 2000.

[9]  P. S. Chavez, S. C. Sides, and J. A. Anderson, Comparison of three
different methods to merge multiresolution and multispectral data:
Landsat TM and SPOT panchromatic, Photogrammetric Engineering and
Remote Sensing, 57(3), pp. 295-303, 1991.

[10]  D. Margulis, Professional Photoshop 6, Wiley Computer Publishing,
New York, New York, 2001.

[11]  P. Anderson, et al., Using Maya Expressions, Alias|Wavefront Inc.,
Toronto, Canada, 1999.
