The Volumetric Billboards representation extends the classic billboard representation used
to render complex 3D objects in real time. Instead of the 2D images of classic billboards,
it uses volumetric images of an object stored in 3D textures.
Combined with a dedicated real-time rendering algorithm based on the GPU geometry shader,
volumetric billboards provide a full parallax effect from any viewing direction
without popping artifacts, along with improved anti-aliasing of distant objects.
The objects can be transparent and arbitrarily distributed in a 3D scene; the algorithm
correctly handles transparency between the multiple, possibly overlapping objects.
Furthermore, volumetric billboards can be easily integrated into common rasterization-based
renderers, which allows for their concurrent use with polygonal models and standard rendering
techniques such as shadow-mapping.

Volumetric billboards may be of interest in many cases, including level-of-detail
representation for multiple intersecting complex objects, volumetric textures,
animated complex objects, and the construction of high-resolution objects by assembling
instances of low-resolution volumetric billboards.

More details and comparisons with other techniques can be found in
the paper.

Images, details and results

Representation

Our representation is composed of a set of triangular prism-shaped
3D cells arbitrarily positioned in a 3D scene, with
3D texture coordinates at their vertices, and a set of volume
data stored in 3D textures that are mapped into the cells.
In typical uses, the cells are likely to overlap.

Here, the Eiffel Tower volume has a resolution of 256×128×128
(4.8 MB of compressed texture memory) and each tree is a 128³ volume
(2.4 MB of compressed texture memory for colors).

Volume data generation

Volume data representing an object is created offline and stored in a MIP-mapped 3D texture.
The building process (figure below) ensures that view-dependent information
is preserved as long as possible.

Voxelization of an object is performed by rendering 6 stacks of axis-aligned slices of the object.
Each stack of slices is obtained by fitting an orthographic camera to the object along
one of the 6 directions (+X, -X, +Y, -Y, +Z, -Z) and then, for each slice image,
setting the camera's near and far clip planes to match the slice's position and thickness.
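The near/far clip plane placement above amounts to cutting the object's bounding box into equal slabs along the chosen axis. A minimal sketch (the function name and signature are illustrative, not from the paper):

```python
def slice_clip_planes(axis_min, axis_max, n_slices, i):
    """Near/far clip plane depths for slice i of n_slices, cutting the
    object's bounding-box extent [axis_min, axis_max] along one axis."""
    thickness = (axis_max - axis_min) / n_slices
    near = axis_min + i * thickness
    return near, near + thickness
```

With 4 slices over a unit extent, slice 0 covers [0, 0.25] and slice 3 covers [0.75, 1.0].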

From these 6 stacks of slices we obtain 6 3D volumes, each MIP-mapped independently.
At each MIP-map level, the 6 corresponding volumes are then combined into one volume.
These combined volumes finally form the object's MIP-mapped 3D texture.

Volume MIP-map construction from 6 view directions (α: alpha blending operator along the view direction, f: user-defined voxel combination function).
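The two operators named in the caption can be sketched as follows, assuming premultiplied-alpha voxels stored as (color, alpha) pairs; `over` is the standard alpha compositing operator applied along the view direction when halving a volume's resolution, while `f` is left as a user-defined hook (this layout is an assumption for illustration, not the paper's exact data format):

```python
def over(front, back):
    """Composite two premultiplied (color, alpha) voxels along the view axis."""
    cf, af = front
    cb, ab = back
    return cf + (1.0 - af) * cb, af + (1.0 - af) * ab

def mip_reduce_column(column):
    """Halve a column of voxels along the view direction by compositing
    consecutive front/back pairs with `over` (the alpha operator in the figure)."""
    return [over(column[i], column[i + 1]) for i in range(0, len(column) - 1, 2)]
```

A fully opaque front voxel hides the voxel behind it, while a fully transparent one lets it through unchanged, which is why this reduction preserves view-dependent information for the matching view direction.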

Rendering algorithm

At runtime, volumetric billboards are rendered with a dedicated slice-based volume
rendering algorithm that renders all the volumetric billboards at once and
correctly handles transparency without requiring any sorting.

For this purpose, the algorithm generates slice planes parallel to the screen,
from back to front with respect to the camera. For each slice it computes all
the polygons corresponding to the intersection of that slice with all
the cells, draws these polygons, and then moves to the next slice.
The polygons are thus drawn in back-to-front order with respect to the camera
and need no additional sorting for correct transparency compositing.
Slicing: the intersection of all cells with slice 1 is drawn first,
then with slice 2, and so on.
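The back-to-front loop above can be sketched as follows, reducing each cell to its depth range along the view axis for brevity (the real algorithm draws the actual prism/plane intersection polygons at each slice):

```python
def slice_back_to_front(cells, near, far, spacing):
    """cells: list of (z_min, z_max) view-space depth ranges, one per cell.
    Yields, for each slice plane from far to near, the indices of the
    cells that the plane intersects (i.e. whose polygons must be drawn)."""
    depth = far
    while depth >= near:
        hit = [i for i, (z_min, z_max) in enumerate(cells)
               if z_min <= depth <= z_max]
        yield depth, hit
        depth -= spacing
```

Because every slice is fully drawn before the next (nearer) one, overlapping cells are composited correctly without any per-cell sorting.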

Slab partitioning: the algorithm avoids slicing regions containing no cells,
and avoids testing every cell for intersection, by partitioning space with planes
orthogonal to the view axis (slabs) before drawing the cells.
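Using the same simplified depth-range representation as above, slab construction can be sketched like this (a hypothetical helper, not the paper's exact procedure): the view axis is split at every cell boundary, and each slab records the cells overlapping it, so empty slabs are skipped and only a slab's own cells are tested per slice.

```python
def build_slabs(cells):
    """cells: list of (z_min, z_max) view-space depth ranges.
    Returns (lo, hi, active_cell_indices) for each non-empty slab."""
    bounds = sorted({z for z_min, z_max in cells for z in (z_min, z_max)})
    slabs = []
    for lo, hi in zip(bounds, bounds[1:]):
        active = [i for i, (z_min, z_max) in enumerate(cells)
                  if z_min < hi and z_max > lo]
        if active:  # empty slabs are never sliced
            slabs.append((lo, hi, active))
    return slabs
```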

Adaptive slicing: the distance between two successive slices is adapted to
render distant volumetric billboards more efficiently while ensuring accurate
anti-aliasing and MIP-mapping. This also avoids over- or under-sampling artifacts
during the slice-based volume reconstruction.
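One plausible formulation of this adaptation (an assumption for illustration, not the paper's exact rule) doubles the slice spacing with each MIP level, so the number of slices per voxel stays roughly constant as a billboard recedes and coarser MIP levels are sampled:

```python
import math

def slice_spacing(distance, base_spacing, lod_distance):
    """base_spacing: spacing at full resolution (MIP level 0);
    lod_distance: distance at which MIP level 0 gives way to level 1.
    The spacing doubles with each additional MIP level."""
    level = max(0.0, math.log2(distance / lod_distance))
    return base_spacing * 2.0 ** level
```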

Prism/plane intersection

The performance of the slicing algorithm relies on the computation
of the prism/plane intersections. This step is performed on the GPU by
a geometry shader.

The intersection of a prism with a plane may be empty, or may be a
polygon with 3, 4, or 5 edges, depending on the plane/prism
configuration. The shader algorithm is depicted in the figure and
pseudo-code below.
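The geometry-shader logic can be sketched on the CPU as follows: a prism has 6 vertices and 9 edges (3 per triangular face plus 3 lateral edges); testing each edge against a screen-parallel plane z = depth yields 0 or 3–5 intersection points, which are then ordered into a convex polygon (this Python version is illustrative; the shader emits the polygon as a triangle strip):

```python
import math

# 9 prism edges: bottom triangle, top triangle, then the 3 lateral edges.
PRISM_EDGES = [(0, 1), (1, 2), (2, 0),
               (3, 4), (4, 5), (5, 3),
               (0, 3), (1, 4), (2, 5)]

def slice_prism(verts, depth):
    """Intersect a prism (6 xyz vertices) with the plane z = depth.
    Returns [] or the 3-5 polygon vertices in convex order."""
    pts = []
    for a, b in PRISM_EDGES:
        za, zb = verts[a][2], verts[b][2]
        if (za - depth) * (zb - depth) < 0:  # edge strictly crosses the plane
            t = (depth - za) / (zb - za)
            pts.append(tuple(verts[a][k] + t * (verts[b][k] - verts[a][k])
                             for k in range(3)))
    if len(pts) < 3:
        return []
    # Order intersection points around their centroid to form a convex polygon.
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    pts.sort(key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    return pts
```

An axis-aligned prism cut mid-height yields a triangle (the 3 lateral edges cross the plane), while a sheared prism can yield a quadrilateral or pentagon.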

Some results

The following images are rendered in real time.
Framerates and memory consumption are detailed in the
paper.