Having introduced the lighting model for leaves, we now turn to lighting methods for rendering larger scenes. Here, especially, the lighting computation must be carried out efficiently, since a much larger quantity of data is involved. A number of techniques are available to keep this data processing practical; in the following, we introduce some of them.
We saw in Sect. 4.5 that the first commercial computer film with realistic-looking plants was produced by Lucasfilm [172] under the scientific leadership of Reeves and Blau. Using particle systems and a simple lighting procedure, they achieved good visual approximations of single trees, entire forests, and meadows. Each tree was represented by up to one million particles. Since the quantity of information would have exceeded the computer's capacity during rendering, the trees were rendered one after another and later superimposed.
This method offered the possibility of dividing the computational work. Another option is to partition the image itself into subareas and to compute these on different computers. In this case, however, all data must be transferred to each computer, and each computer must not only hold that quantity of data but also convert it into an image. Even if only a small part of the data is visible in a subimage, all data must be included in order to compute global effects such as shadowing.
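To make this concrete, the following C++ sketch shows one way the image could be divided into rectangular tiles, one per computer; the partitioning itself is cheap, whereas, as noted above, every computer still needs the complete scene data. The names Tile and splitImage are chosen for illustration and do not stem from a particular system.

```cpp
#include <vector>

// Pixel rectangle [x0, x1) x [y0, y1) assigned to one computer.
struct Tile { int x0, y0, x1, y1; };

// Splits a width x height image into an nx-by-ny grid of tiles.
std::vector<Tile> splitImage(int width, int height, int nx, int ny)
{
    std::vector<Tile> tiles;
    for (int j = 0; j < ny; ++j)
        for (int i = 0; i < nx; ++i)
            tiles.push_back({ width * i / nx,       height * j / ny,
                              width * (i + 1) / nx, height * (j + 1) / ny });
    return tiles;
}
```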
A general scheme for composing images of complex content from partial scenes was introduced in 1985 by Duff [51]. Here the scene is likewise partitioned, but the global interaction is implemented through several buffers in the form of special image data. Each partial scene is processed individually, and its results are then passed to the other partial scenes in the form of these buffers.
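As a rough illustration of the buffers involved, the following sketch collects the image data described in the next paragraphs into one structure per partial scene. The layout and the name SubsceneBuffers are assumptions made for this example; Duff's paper defines its own formats.

```cpp
#include <vector>

// Per-subscene "special image data" exchanged between partial scenes.
struct SubsceneBuffers {
    int width = 0, height = 0;
    std::vector<float> shadowDepth;   // depth seen from the light, in [0, 1]
    std::vector<float> color;         // RGB image of the subscene (3 floats/pixel)
    std::vector<float> transparency;  // per-pixel transparency value
    std::vector<float> cameraDepth;   // depth in the camera coordinate system
};
```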
In the first step, a shadow buffer in the form of a special image is produced for each subscene. Seen from the position of the light source, for each pixel of this image the depth is recorded at which the objects of the subscene may cast a shadow onto other objects. The depth values are usually encoded as real numbers in the interval [0, 1], where the value zero corresponds to the minimal distance d0 of the objects to the light source, and the value one to the maximal distance d. Details of this interpolation are discussed in another context in Sect. 11.3.
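The encoding itself is a linear mapping between the two distances. A minimal sketch, assuming d > d0 and with the function name encodeDepth chosen for illustration:

```cpp
// Maps a distance to the light source linearly into [0, 1]:
// 0 corresponds to the minimal distance d0, 1 to the maximal distance d.
float encodeDepth(float dist, float d0, float d)
{
    return (dist - d0) / (d - d0);
}
```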
When the lighting of a subscene is computed, the shadow test proceeds as follows: the distance from an object point to the light source is determined, and all shadow buffers are checked for an object in another subscene that lies between the point and the light source and could therefore cast a shadow onto it.
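A minimal sketch of this test, assuming all shadow buffers share the same light view so that the shaded point projects to the same pixel (x, y) in each of them; ShadowBuffer, inShadow, and the small bias against self-shadowing are illustrative assumptions, not part of Duff's description:

```cpp
#include <vector>

// One shadow buffer per subscene: encoded depths in [0, 1] together with
// the distance range [d0, d] used for the encoding (see above).
struct ShadowBuffer {
    int width = 0, height = 0;
    std::vector<float> depth;
    float d0 = 0.f, d = 1.f;
};

// True if any other subscene records an occluder closer to the light
// than the point being shaded at distance distToLight.
bool inShadow(int x, int y, float distToLight,
              const std::vector<ShadowBuffer>& others)
{
    const float bias = 1e-3f;  // tolerance against self-shadowing
    for (const ShadowBuffer& sb : others) {
        if (x < 0 || x >= sb.width || y < 0 || y >= sb.height) continue;
        // Decode the stored value back into a distance and compare.
        float stored = sb.d0 + sb.depth[y * sb.width + x] * (sb.d - sb.d0);
        if (stored + bias < distToLight) return true;
    }
    return false;
}
```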
Together with the image data of the subscene produced in this way, a depth value is stored for each pixel, this time, however, in the coordinate system of the virtual camera. Additionally, a transparency value is assigned to the color values of the image: all pixels at which no object of the subscene is visible receive the alpha value one; pixels covered by a partially transparent object receive an alpha value between zero and one; pixels covered by an opaque object receive the value zero. In the combination stage, the depth values of the different subscenes are compared per pixel and then combined with the transparency values into the final pixel value.
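Under the transparency convention just described (one for empty pixels, zero for opaque ones), the per-pixel combination can be sketched as follows; this is the familiar "over" operator expressed with transparency instead of opacity, and Layer, over, and compositePixel are names chosen for this example:

```cpp
// One subscene's contribution at a single pixel.
struct Layer {
    float r, g, b;  // color of the subscene at this pixel
    float t;        // transparency: 1 = no object, 0 = opaque
    float z;        // depth in the camera coordinate system
};

// Places the nearer layer over the farther one.
Layer over(const Layer& front, const Layer& back)
{
    float a = 1.f - front.t;  // opacity of the nearer layer
    return { a * front.r + front.t * back.r,
             a * front.g + front.t * back.g,
             a * front.b + front.t * back.b,
             front.t * back.t,                     // combined transparency
             front.z < back.z ? front.z : back.z }; // nearest depth survives
}

// The camera-space depth values decide the layer order per pixel.
Layer compositePixel(const Layer& p, const Layer& q)
{
    return (p.z <= q.z) ? over(p, q) : over(q, p);
}
```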
The advantage of this procedure is that the object data can be divided into subscenes without the complete geometric data having to be transferred among them. Unfortunately, the temporary image data together with the additional buffers still require a large storage capacity and a high bandwidth when transferred within a computer network.