Global and local lighting methods, just like the procedures described so far for modeling plants, are based on the standard rendering pipeline: a three-dimensional model of the data to be represented is generated and mapped onto the computer screen by a mathematical projection. The color of the surfaces or pixels is then determined by local or global lighting procedures.
A great disadvantage of this method is that often very complex surface models must be generated only to be converted into a comparatively small number of pixels. The scene in Fig. 8.18 consists of approximately 16.5 million triangles. If it is projected onto a computer screen of 1024 × 768 pixels, then even with a triangle size of one pixel, on average more than 20 triangles must be drawn per pixel, and for each triangle information on position, texture, and lighting has to be stored.
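The density quoted above is a simple back-of-the-envelope calculation, which can be checked directly from the numbers given in the text:

```python
# Triangle density for the scene in Fig. 8.18 (numbers from the text):
# ~16.5 million triangles projected onto a 1024 x 768 screen.
triangles = 16_500_000
pixels = 1024 * 768          # 786,432 pixels in total

per_pixel = triangles / pixels
print(f"{per_pixel:.1f} triangles per pixel")  # roughly 21
```

Even under the optimistic assumption that every triangle covers only a single pixel, each pixel is thus overdrawn many times, which is what makes triangle-based rendering of such scenes so expensive.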
There are a number of approaches that attempt to escape this dilemma. A very promising one is to use points instead of triangles as the basic primitives of image synthesis. This substantially lowers the amount of data to be stored per element; however, the two-dimensional connectivity of the surface representation is lost, which is particularly disturbing in close-ups, where the object dissolves into a point cloud. This approach was already used in 1985 by Reeves and Blau (see Sect. 4.5), who modeled whole forests and meadows with particle systems whose basic elements were likewise points. A variety of point-based methods have been proposed in recent years, some of which we will introduce in Sect. 10.3.
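The core idea can be sketched in a few lines. The following is a minimal illustration, not the method of any particular paper: each surface sample is a 3D point with a color, and rendering reduces to projecting every point onto the screen and keeping the nearest one per pixel (a point z-buffer). The pinhole-camera parameters and all names are illustrative assumptions.

```python
# Minimal point-based rendering sketch: project colored 3D points with a
# pinhole camera and keep the nearest point per pixel (a point z-buffer).
WIDTH, HEIGHT, FOCAL = 64, 48, 40.0   # toy screen size and focal length

def render_points(points):
    """points: iterable of (x, y, z, color) in camera space, z along the view axis."""
    depth = {}   # (px, py) -> (z, color)
    for x, y, z, color in points:
        if z <= 0:
            continue                          # behind the camera
        px = int(WIDTH / 2 + FOCAL * x / z)   # perspective projection
        py = int(HEIGHT / 2 + FOCAL * y / z)
        if 0 <= px < WIDTH and 0 <= py < HEIGHT:
            if (px, py) not in depth or z < depth[(px, py)][0]:
                depth[(px, py)] = (z, color)  # nearest point wins
    return {p: c for p, (z, c) in depth.items()}

# Two points landing on the same pixel: the nearer (green) one survives.
image = render_points([(0.0, 0.0, 2.0, "green"), (0.0, 0.0, 4.0, "red")])
print(image[(32, 24)])
```

The close-up problem mentioned above is visible in this sketch as well: when the camera moves close, neighboring points project many pixels apart, and the resulting image contains holes where no point landed.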
Often a virtual landscape must be synthesized by combining real and synthetic objects. Landscape planning is again an example: in most cases an existing landscape is to be modified. Consequently, attempts have been made to insert photographs of real objects into synthetic scenes or, vice versa, to integrate synthetic objects into photographs. In both cases problems with spatial coherence arise: a photograph is taken from a single position but is then shown in an animation from varying directions. Since the parallax that would reveal the object's true three-dimensional shape is missing, the object appears unnaturally flat.
Using the methods of image-based rendering, researchers therefore try to reconstruct the three-dimensional shape of a real object from one or more pictures and to incorporate the obtained 3D data into the rendering – at least if the real object is to be combined with a synthetic one. We will explain some of these techniques and their application to plant rendering in Sect. 10 of the next chapter.