
Computers & Graphics 24 (2000) 583-601
Technical Section
Combined 3D visualization of volume data and polygonal
models using a Shear-Warp algorithm
Ana Elisa F. Schmidt *, Marcelo Gattass , Paulo Cezar P. Carvalho
Tecgraf - Departamento de Informática/PUC-Rio, Rua Marquês de São Vicente 225, 22453-900, Rio de Janeiro, RJ, Brazil
IMPA - Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, 22460-320, Rio de Janeiro, RJ, Brazil
Abstract
Shear-Warp techniques provide an efficient way to perform the projection and blending stages of the volume-rendering process. They treat volume-data slices in a coherent order and can be naturally integrated with the Z-Buffer algorithm. The present paper presents a study of the integration between Shear-Warp and Z-Buffer, and extends the Shear-Warp algorithm to handle scenes composed of both volume and polygonal data. We present methods to handle opaque/translucent volumes combined with opaque/translucent polygonal models. As volume data usually has a resolution different from that of the final image, in which Z-Buffer renders the polygonal data, several variants of this integration are analyzed. Results are shown to support some conclusions on the quality-versus-time trade-off that can be expected. (c) 2000 Elsevier Science Ltd. All rights reserved.
Keywords: Shear-Warp; Z-Buffer; Polygonal models; Hybrid volume rendering
1. Introduction
In some medical applications, it is necessary to display
scenes composed of both volume data and polygonal
models. Two examples of such applications are the combined display of radiation treatment beams [1] or of bone prostheses [2,3] over the patient's anatomy. Furthermore, the rendering process must be fast enough to
achieve interactive display rates. Such rates are necessary
for an immediate feedback when the user changes the
viewing parameters. This is essential for exploring data
and for aiding new surgery procedures augmented with
3D imaging.
To improve the efficiency of volume-rendering algorithms, Lacroute and Levoy [4] proposed an interesting method, called Shear-Warp, which geometrically transforms the data volume, thus simplifying the projection
* Corresponding author. Tel.: +55-21-512-5984; fax: +55-21-259-2232.
E-mail addresses: [email protected] (M. Gattass),
[email protected] (A.E.F. Schmidt), [email protected]
(P.C.P. Carvalho).
and composition stages. A parallel version of this algorithm was capable of producing, from a 256^3 dataset, a frame rate of 15 Hz on a Silicon Graphics Challenge with 32 processors [5].
More recently, the quest for interactive response time
in volume rendering has brought up new approaches,
using mainly specialized hardware and texture mapping.
When the hardware supports only 2D textures, each volume slice can be processed as a single 2D texture, which is composited into the final image using the "over" operator. This method, however, produces noticeable sampling errors when the viewing direction is not aligned with one of the data-volume axes [6]. Wilson et al. [7] presented a method that uses 3D textures and can be implemented on top of an OpenGL 1.2 graphics system.
In this method, interactive times can be achieved on high-performance graphics systems that implement 3D textures in hardware, such as the SGI InfiniteReality Engine. In 1998, Silicon Graphics developed the SGI Volumizer [8], which uses 3D-texture hardware to achieve interactive times when rendering translucent/opaque volumes.
Osborne and others [9] presented EM-Cube, a scheme
for a specialized PC card that implements the Cube-4
proposal [10]. VolumePro, presented at SIGGRAPH '99 [11], is the first single-chip real-time rendering system developed for PC users; it can render volumes of up to 256^3 voxels at 30 frames/s. VolumePro is based on the Cube-4 volume-rendering architecture [10] and includes the enhanced-memory optimizations present in the EM-Cube architecture [9]. This new hardware solution allows real-time changes of the color and opacity transfer functions. It can also visualize time-varying volumes in real time, among other features.
Several strategies have been proposed to combine volume and polygonal data using well-known volume-rendering techniques, such as Ray-Casting and Splatting, combined with polygon techniques like Ray-Tracing, Z-Buffer, and Scan-Conversion [1,12-15]. These systems have the advantage of breaking the volume-size limitation that is present in texture-hardware implementations. However, they have lower frame rates than the systems that use hardware features.
Currently, almost every graphics board can efficiently render polygon-based scenes with the Z-Buffer algorithm. The major variation among these boards is the number of triangles per frame they can render, and the size and quality of the texture maps. Most of the currently available hardware supports 2D texture maps, and only a few boards support 3D textures. In any case, the size of the texture memory is always a problem. As computer technology improves, the size of the datasets we must handle increases, and the best methods are those that efficiently manage the computer resources.
An OpenGL solution to mix polygons and volume data was proposed in the HP Voxelator [16]. However, this proposal works only with opaque volumes and polygons, using separate pipelines to render each one and merging the final images using Z-depth for occlusion. The SGI Volumizer [8] uses 3D-texture hardware combined with the polygon pipeline to mix translucent/opaque volumes only with opaque polygons.
Kreeger and Kaufman present two methods for hybrid volume rendering using current polygon-rendering hardware accelerators and the Cube-5 hardware [2]. One method shares a common DRAM buffer with Cube-5, removing the SRAM compositing buffer from inside the Cube-5 pipeline and replacing it with an external DRAM frame buffer. The other run-length encodes images of thin slabs of polygonal data using special embedded DRAM hardware and then combines them into the Cube compositing buffer. This work can mix translucent/opaque volumes and polygons at interactive frame rates.
Another recent work by Kreeger and Kaufman uses general-purpose OpenGL 3D graphics-system features, such as texture mapping and the polygon pipeline, to handle translucent/opaque volumes and polygons [3]. It achieves interactive times using hardware acceleration for the 3D texture mapping.
These two proposals by Kreeger and Kaufman correctly mix volumes and polygons with or without transparency. However, they require specialized hardware to compose the final image in interactive time. Shear-Warp algorithms can deal with volume data as a sequence of 2D textures without the unwanted sampling errors that occur when these textures are directly projected onto the screen. In addition, attractive frame rates can be achieved without the use of specialized hardware.
On the other hand, Shear-Warp lacks generality compared to the hybrid methods mentioned above. One of the most important capabilities missing in the method is the handling of combined volume data and polygonal models. Zakaria [17] presents a hybrid Shear-Warp renderer that uses a Zlist-Buffer to create images mixing volumes with translucent/opaque polygons. His approach is similar to the one we discuss in this paper, especially in the use of a Zlist-Buffer to implement translucent polygons.
The present paper presents a study on the integration
between the Shear-Warp and Z-Bu!er algorithms to
create images composed of both volume and polygonal
data. By extending the Shear-Warp algorithm we can
mix opaque/translucent volume data with opaque/translucent polygonal models. We limit attention to the case
in which visualization is done through orthographic
projection, although, as shown by Lacroute [18],
Shear-Warp can be extended to work with perspective
projections.
Algorithms that combine volume and polygonal models must deal with two problems. Aliasing is introduced
during the polygon-sampling process if the volume resolution is low and polygons are rendered at that same
resolution. In addition, the shading process at pixels
where projections of voxels and polygons mix may present problems as discussed by Levoy [1] in the context of
the Ray-Casting algorithm.
In our work, we discuss strategies to minimize the aliasing effect introduced during the polygon resampling, and we show examples of mixtures of translucent/opaque volumes and translucent/opaque polygons. A strategy to mix translucent polygons with volumes is presented; another strategy, using the Zlist-Buffer described by Zakaria [17], is also discussed.
The next section presents a brief review of the Shear-Warp method and discusses how to combine volume and polygonal data using the Shear-Warp and Z-Buffer algorithms. Three strategies are discussed to minimize aliasing during the volume composition process.
2. Shear-Warp factorization
The Shear-Warp algorithm works by reducing the
problem of rendering a given volume, through a general
parallel projection, to the special case of a projection along one of the axes. In this special case, one can obtain the final projected image through a two-step procedure. First, the slices are composed front to back using the fact that voxels in the same position (i, j) lie on the same projecting ray. The result of this step is an intermediate 2D image having the same dimensions as each one of the slices. In the second step, the intermediate image is appropriately warped to produce the final image (in this case, the warping step simply scales the intermediate image to the desired final resolution).
In order to reduce the case of a general parallel projection to the above situation, the volume to be rendered must be appropriately deformed by a shear transformation. The shear transforms slices that are orthogonal to one of the coordinate axes (called the principal view axis) from the original object space (x, y, z) into the sheared object space (i, j, k). In the latter, k corresponds to the principal view axis. After the shear transformation, slices are projected in the direction of axis k to generate an intermediate image. Each sampled slice is composed in front-to-back order, using the over operator, to form the distorted intermediate image. Finally, the warp step transforms the distorted intermediate image into the final image.
Fig. 1 shows a generic scheme of the Shear-Warp
algorithm for parallel projections. Horizontal lines represent the volume slices.
In matrix terms, this is equivalent to factoring the viewing transformation matrix M_view as

M_view = M_warp M_shear.   (1)

The shear matrix M_shear is of the form

M_shear = | 1  0  s_i  0 |
          | 0  1  s_j  0 |  [P],
          | 0  0   1   0 |
          | 0  0   0   1 |
where [P] is a permutation matrix that maps the principal view axis (that is, the one which is closest to the viewing direction) onto the z-axis. The coefficients s_i and s_j are computed by imposing the condition that M_shear map the viewing direction (vx, vy, vz) into a vector parallel to (0, 0, 1). Full detail on the Shear-Warp factorization, including mathematical derivations, is presented by Lacroute [18].
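To make the factorization concrete, the following C++ sketch (ours, not the paper's code; all names and values are illustrative) selects the principal view axis and derives the shear coefficients from a viewing direction:

// Sketch (not from the paper): pick the principal view axis and
// compute the Shear-Warp coefficients s_i and s_j.
#include <cmath>
#include <cstdio>

struct Shear { int principalAxis; double si, sj; };

// view = parallel viewing direction (vx, vy, vz) in object space.
Shear factorShear(const double view[3]) {
    int k = 0;                                // axis with largest |component|
    for (int a = 1; a < 3; ++a)
        if (std::fabs(view[a]) > std::fabs(view[k])) k = a;
    int i = (k + 1) % 3, j = (k + 2) % 3;     // remaining axes, cyclic order
    // Imposing M_shear * view parallel to (0, 0, 1) yields the coefficients.
    return Shear{ k, -view[i] / view[k], -view[j] / view[k] };
}

int main() {
    double view[3] = { 0.2, 0.3, 1.0 };       // assumed viewing direction
    Shear s = factorShear(view);
    std::printf("axis=%d s_i=%.3f s_j=%.3f\n", s.principalAxis, s.si, s.sj);
}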
In the sheared object space, the viewing direction is perpendicular to the volume slices. Moreover, since the shear only translates the slices, without any rotation about the k-axis, each scanline of the volume remains aligned with the scanlines of the intermediate image. This property is fundamental for simplifying the rendering algorithm, thus allowing run-length encoding and coherent image-composition algorithms.
The intermediate image dimensions, in pixels, are calculated as follows:

w_Low = w_Slice + d_Slice s_i,
h_Low = h_Slice + d_Slice s_j,   (2)

where w_Slice and h_Slice are, respectively, the width and height of each slice, d_Slice is the number of slices, and s_i and s_j are the shear coefficients computed from the permuted viewing matrix [18].
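The corresponding computation is direct; a minimal C++ sketch with assumed slice dimensions and shear coefficients:

// Sketch of Eq. (2): size of the low-resolution intermediate image.
#include <cmath>
#include <cstdio>

int main() {
    int wSlice = 128, hSlice = 128, dSlice = 64;  // assumed volume size
    double si = 0.35, sj = -0.20;                 // assumed shear coefficients
    // Each slice k is translated by (k*si, k*sj); the intermediate image
    // must cover the union of all translated slices, hence the magnitudes.
    int wLow = wSlice + (int)std::ceil(dSlice * std::fabs(si));
    int hLow = hSlice + (int)std::ceil(dSlice * std::fabs(sj));
    std::printf("low-resolution intermediate image: %d x %d\n", wLow, hLow);
}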
It is important to note that, in the standard Shear-Warp algorithm, the intermediate image resolution depends on the volume resolution and on the projection direction; it does not depend on the final image resolution. This yields an important property of the Shear-Warp composition process: its complexity is proportional to the volume resolution, but not to the final image resolution. When the volume resolution is smaller than the final image resolution, this property results in a fast algorithm. The warp transformation not only corrects the distorted intermediate image but also re-samples it to the final image resolution.
Fig. 1. Overview of Shear-Warp factorization.
Fig. 2. Slice composition in Lacroute's Shear-Warp algorithm.
Another important speed-up factor is obtained from the scanline alignment in this algorithm. Fig. 2 illustrates the use of this alignment to avoid unnecessary computations during the process of blending a slice into the intermediate image. When a bilinear reconstruction kernel is used, a voxel contributes to four pixels in the intermediate image and, conversely, a pixel usually receives contributions from four voxels. Moreover, neighboring pixels in an image scanline receive neighboring voxels in a slice scanline. Furthermore, if the slices are blended in front-to-back order, as the intermediate image pixels become opaque (α = 1) they cease to receive voxel contributions. Conversely, transparent voxels (α = 0) can also be ignored, since they do not contribute. In Fig. 2, only the two pixels labeled modified pixels need to receive contributions from the slice.
Lacroute's implementation of Shear-Warp [18] uses clever encoding techniques for the intermediate image and volume slices. For the intermediate image, he uses an auxiliary field to indicate, for every pixel, how many pixels can be skipped until the method finds a non-opaque pixel. Volume slices are run-length encoded as transparent and contributing voxels, also allowing the algorithm to skip the former.
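The following C++ sketch (structure and field names are our assumptions, not the paper's data layout) illustrates the two encodings and how the skip field is consumed during composition:

// Sketch: transparent-voxel runs in the slices, and opaque-pixel skip
// links in the intermediate image.
#include <cstdint>
#include <vector>

struct VoxelRun {
    uint16_t transparentCount;    // voxels to skip over
    uint16_t contributingCount;   // voxels stored in 'voxels' below
};
struct EncodedScanline {
    std::vector<VoxelRun> runs;
    std::vector<uint32_t> voxels; // packed color/opacity of contributing voxels
};
struct IntermediateImage {
    int width, height;
    std::vector<float>    rgba;   // 4 floats per pixel
    std::vector<uint16_t> skip;   // pixels to jump to reach a non-opaque pixel
};

int main() {
    IntermediateImage img{8, 1, std::vector<float>(32, 0.0f),
                          std::vector<uint16_t>(8, 0)};
    img.skip[2] = 3; img.skip[3] = 2; img.skip[4] = 1; // pixels 2..4 opaque
    for (int u = 0; u < img.width; ) {
        if (img.skip[u]) { u += img.skip[u]; continue; } // skip opaque run
        /* blend voxel contributions into pixel u here */
        ++u;
    }
}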
3. Combining volume and polygons
The Z-Buffer algorithm [19] can be used to render polygonal models, yielding color and depth (Z) matrices. The depth matrix is used as a mask to determine which voxels are mixed with the polygons. The color matrix is blended with the transparent volume color lying in front of it. Fig. 3 shows a generic scheme used to combine volume and polygonal model information using the Shear-Warp and Z-Buffer algorithms.
The blending process that generates the intermediate image must take polygon contributions into account. For this reason, the polygons must be sampled at the same resolution as the intermediate image. If this resolution is low compared to the final image resolution, then artifacts and aliasing effects are introduced at this stage.
Fig. 3. Basic combination scheme.
3.1. Resolution of the intermediate image
To avoid low-resolution problems, the polygons must be rendered into an intermediate image whose resolution is comparable to the final image resolution. To find this resolution, let us consider the warp matrix. As discussed above, if the volume resolution is low, the warp matrix not only corrects the distortions, but also resamples the intermediate image to the final image resolution.
In order to avoid aliasing, the intermediate image resolution must be redefined so that the new warp matrix maps intermediate image vectors into vectors of about the same length in the final image. That is, ideally the dimensions of a pixel in the intermediate image should not change significantly as the pixel is warped to the final image. Nevertheless, that is not the general case in normal applications of the Shear-Warp algorithm. When the volume resolution is smaller than the final image resolution, a pixel of the intermediate image is warped into M×N pixels of the final image.
Factors M and N can be computed from the warp matrix M_warp = [w_ij], as illustrated in Fig. 4. A unit step in the u and v directions in the intermediate image leads to increments of (w_11, w_21) and (w_12, w_22), respectively, in the final image's coordinate system. Thus, M and N can be determined as

M = ceil(max(|w_11|, |w_12|, 1)),
N = ceil(max(|w_21|, |w_22|, 1)).   (3)
If the dimensions of the low-resolution intermediate
image, given by Eq. (2), are increased by factors M and N,
Fig. 4. Mapping intermediate space into final image space.
Fig. 5. Example of image subdivision.
then a unit increment in the final image coordinates corresponds to approximately a unit increment in the new high-resolution intermediate image, whose size (w_High, h_High) is given by

w_High = M (w_Low − 1) + 1,
h_High = N (h_Low − 1) + 1.   (4)
This corresponds to dividing each pixel interval in the low-resolution intermediate image into M×N subpixels, as illustrated by Fig. 5. This subdivision increases the resolution while preserving the pixel positions of the low-resolution image. This is an important property for the dual-resolution images generated by the dual-resolution composition algorithms to be presented below.
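A short C++ sketch of Eqs. (3) and (4), with an assumed warp matrix and low-resolution size, ties the two computations together:

// Sketch of Eqs. (3) and (4): supersampling factors from the warp
// matrix, then the high-resolution intermediate-image size.
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    // w[r][c]: final-image displacement caused by a unit step along
    // u (column 0) and v (column 1) of the intermediate image (Fig. 4).
    double w[2][2] = { {2.3, 0.4}, {-0.3, 2.1} };   // assumed values
    int M = (int)std::ceil(std::max({ std::fabs(w[0][0]), std::fabs(w[0][1]), 1.0 }));
    int N = (int)std::ceil(std::max({ std::fabs(w[1][0]), std::fabs(w[1][1]), 1.0 }));
    int wLow = 160, hLow = 150;          // assumed Eq. (2) result
    int wHigh = M * (wLow - 1) + 1;      // Eq. (4)
    int hHigh = N * (hLow - 1) + 1;
    std::printf("M=%d N=%d  high resolution: %d x %d\n", M, N, wHigh, hHigh);
}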
3.2. Shear-Warp with Z-Buffer

The basic idea for combining the Shear-Warp and Z-Buffer algorithms is to replace the polygonal model by its polygon fragments represented in the color and depth buffers. With these buffers, the composition step of the Shear-Warp algorithm can be modified to include the polygonal model contribution, thus yielding a hybrid algorithm. This algorithm blends the polygon fragments that lie between two slices into the intermediate image as the slice composition progresses. The warp step remains unchanged.

If the color and depth buffers have a resolution that is different from the one corresponding to the volume slices, then the composition step becomes more complex. This happens when the polygonal model is sampled to an intermediate image in high resolution. A simple way to avoid this complexity is to resample the slices after the classification step. Nevertheless, completely resampling the classified volume slices is inefficient due to the number of slices to be processed. As previously noted, one of the key factors for the speed of the Shear-Warp algorithm is the use of low resolution in the costly volume-composition operation. Resampling the slices completely by factors M and N reduces this advantage significantly.

An alternative strategy consists in working with two intermediate images: one in low and another in high resolution. In areas where the aliasing problem appears, the high-resolution image is used in the composition process; in other areas, the low-resolution image is used. The algorithms proposed here are classified in three groups:

• Low-resolution composition: intermediate and polygon images in low resolution.
• High-resolution composition: colors and opacities of all volume slices interpolated during the composition process; intermediate and polygon images in high resolution (naive method).
• Dual-resolution composition: intermediate image in both low and high resolution; polygons rendered in high resolution.

Algorithms in all three groups use the Z-Buffer algorithm to render the polygonal model into color and depth buffers whose resolution is defined by the intermediate image resolution. They use the depth information to blend into the intermediate image the contribution of the fragments of the polygons that lie between two volume slices.
The main difference in final image quality for the three methods lies in the way they deal with the aliasing that occurs when the polygonal model is rendered. To illustrate how the proposed methods handle the aliasing problem, a synthetic volumetric model with dimensions 64^3, combined with a polygonal cone of about 300 triangles, both shown in Fig. 6, is used as an example. The opacity transfer function in this example maps volume values greater than 30 as opaque; the polygonal model is opaque as well.

Fig. 6. Synthetic model example and polygonal cone used to detect aliasing.
3.3. Z-Buffer matrix

The projection matrix used by the Z-Buffer algorithm maps the polygonal model from the object space into the intermediate image space. The polygons must be described in the same object space (x, y, z) where the volume is described. Note, however, that the projection discussed here is not the one that projects objects into the final image. Rather, it is the one that projects objects into the intermediate image, prior to the warping stage which builds the final image. Thus, it is based on the intermediate image resolution and on the shear parameters, as shown below.

The matrix for performing this transformation is

M_ShearSurf = R C M_shear.   (5)

In (5), matrix M_shear is given by Eq. (1). Matrix C, which determines the slice composition order, is

C = | 1  0  0       0 |
    | 0  1  0       0 |   (5a)
    | 0  0  k_incr  0 |
    | 0  0  0       1 |

where coefficient k_incr may be +1 or −1, according to whether the shear transformation maps the viewing vector into the positive or the negative k-axis.

Finally, the resolution matrix R maps the polygons to the intermediate image resolution. If the polygons are mapped to a high-resolution intermediate image, R is given by

R = | (w_High − 1)/(w_Low − 1)   0                          0  0 |
    | 0                          (h_High − 1)/(h_Low − 1)   0  0 |   (5b)
    | 0                          0                          1  0 |
    | 0                          0                          0  1 |

When the intermediate image has low resolution, R is just the identity matrix.
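The following C++ sketch (ours; the 4×4 type and all numeric values are assumptions, with the R diagonal taken from Eq. (5b) as reconstructed above) assembles Eq. (5) from the three factors:

// Sketch of Eq. (5): the matrix that projects polygons into the
// intermediate image, M_ShearSurf = R C M_shear.
#include <cstdio>

struct Mat4 { double m[4][4]; };

Mat4 identity() {
    Mat4 I{};
    for (int i = 0; i < 4; ++i) I.m[i][i] = 1.0;
    return I;
}

Mat4 mul(const Mat4& A, const Mat4& B) {
    Mat4 R{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                R.m[i][j] += A.m[i][k] * B.m[k][j];
    return R;
}

int main() {
    Mat4 Mshear = identity();   // stand-in for Eq. (1)'s shear (after [P])
    Mshear.m[0][2] = 0.35;      // s_i
    Mshear.m[1][2] = -0.20;     // s_j

    Mat4 C = identity();        // composition-order matrix, Eq. (5a)
    C.m[2][2] = -1.0;           // k_incr = -1: slices composed along -k

    Mat4 R = identity();        // resolution matrix, Eq. (5b)
    R.m[0][0] = 3.0;            // (w_High - 1)/(w_Low - 1), i.e. factor M
    R.m[1][1] = 2.0;            // (h_High - 1)/(h_Low - 1), i.e. factor N

    Mat4 MshearSurf = mul(R, mul(C, Mshear));
    std::printf("M_ShearSurf[0][2] = %.3f\n", MshearSurf.m[0][2]);
}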
3.4. Low-resolution composition algorithm
Low-resolution composition is the simplest method proposed here. The intermediate image resolution is the one given by the standard Shear-Warp algorithm. The basic steps of this procedure are given below:
Compute the Z-Buffer projection matrix (Eq. (5));
Render polygons into color and depth buffers;
For each volume slice k
    Compute the offsets (slice_u, slice_v) of the slice into the intermediate image;
    Compute weight factors for each set of 4 voxels that contribute to one pixel (Fig. 7);
    For each slice scanline j
        Compute the corresponding scanline v in the low-resolution intermediate image;
        While there are still unprocessed voxels in this scanline j
            Search for the first non-opaque pixel (u, v);
            Find the 4 voxels (TL, TR, BL and BR) that contribute to the (u, v) pixel;
            If one of the current 4 voxels is non-transparent:
                Interpolate the contributions of the 4 voxels;
                Add their contributions to the (u, v) pixel;
            else
                Skip transparent voxels in both scanlines j and j+1, updating pixel position u;
    Blend into the intermediate image the contribution of the portion of the polygons that lies between slices k and k+1;
Warp the intermediate image into the final image.
To speed up the voxel-blending step, the RLE structures used by Lacroute [18] are employed to store non-transparent voxel values and gradients. That is, only those voxels that map to opacity greater than zero are considered. The stored values allow significant space savings without restricting the illumination model. Lights can be attached to the observer, and this effect can be perceived as he or she navigates around the volume.
The encoding of the intermediate image stores the number of fully opaque pixels that can be skipped in a scanline during the composition process. Since the blending operator works in front-to-back order, the slices that are closer to the observer are blended first. As pixels become opaque, they do not accept further contributions; thus, the encoding seeks to efficiently skip these pixels.
An important aspect of the blending process is illustrated in Fig. 7. Since both the intermediate image and the slice have the same resolution, only four voxels can contribute to a given pixel p.
The contribution of voxels TL, TR, BL, and BR to pixel p is given by the bilinear interpolation

C_v = (1 − a)(1 − b) C_BR + a (1 − b) C_BL + (1 − a) b C_TR + a b C_TL,
α_v = (1 − a)(1 − b) α_BR + a (1 − b) α_BL + (1 − a) b α_TR + a b α_TL,   (6)

where α is the opacity and C is the opacity-weighted color channel (αR, αG, αB) of each one of the four voxels. These voxel colors and opacities are computed from user-provided transfer functions modulated by the illumination model.
This interpolated color is then blended into pixel p by

C_p,new = C_p,old + C_v (1 − α_p,old),
α_p,new = α_p,old + α_v (1 − α_p,old).   (7)
As the opacity gets closer to 1, the contribution of the
voxels becomes less important.
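In code, Eqs. (6) and (7) reduce to a weighted sum followed by an "over" update; a C++ sketch (ours; the voxel naming and offset convention follow Figs. 7 and 9 as we read them):

// Sketch of Eqs. (6) and (7): bilinear interpolation of four voxels'
// opacity-weighted colors, then front-to-back "over" blending.
#include <cstdio>

struct Rgba { double r, g, b, a; };  // r, g, b already multiplied by a

// Eq. (6): a and b are the slice offsets along u and v.
Rgba bilerp(const Rgba& BR, const Rgba& BL,
            const Rgba& TR, const Rgba& TL, double a, double b) {
    double wBR = (1 - a) * (1 - b), wBL = a * (1 - b);
    double wTR = (1 - a) * b,       wTL = a * b;
    return Rgba{
        wBR * BR.r + wBL * BL.r + wTR * TR.r + wTL * TL.r,
        wBR * BR.g + wBL * BL.g + wTR * TR.g + wTL * TL.g,
        wBR * BR.b + wBL * BL.b + wTR * TR.b + wTL * TL.b,
        wBR * BR.a + wBL * BL.a + wTR * TR.a + wTL * TL.a };
}

// Eq. (7): blend sample v behind the already-accumulated pixel.
void blendOver(Rgba& pixel, const Rgba& v) {
    double t = 1.0 - pixel.a;        // remaining transparency
    pixel.r += v.r * t; pixel.g += v.g * t;
    pixel.b += v.b * t; pixel.a += v.a * t;
}

int main() {
    Rgba pixel{0, 0, 0, 0};
    Rgba voxel{0.25, 0.20, 0.15, 0.5};   // assumed opacity-weighted sample
    blendOver(pixel, bilerp(voxel, voxel, voxel, voxel, 0.25, 0.75));
    std::printf("pixel opacity after one slice: %.2f\n", pixel.a);
}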
To accelerate the polygon-contribution step, an auxiliary list is produced with the pixels of the polygon image ordered by their depth. Therefore, in order to find the polygon contributions between each slice k and k+1, the algorithm does not have to search all elements of the depth matrix. Eq. (7) is also used to blend the color and opacity of the polygonal model into the corresponding pixels.
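A minimal sketch of this auxiliary list (structure and values are assumptions): because the fragments are sorted once, the fragments lying between consecutive slices form a contiguous range and a cursor replaces a full scan of the depth matrix.

// Sketch (ours) of the depth-ordered list of polygon pixels.
#include <algorithm>
#include <cstdio>
#include <vector>

struct PolyPixel { int u, v; float depth; /* plus color and opacity */ };

int main() {
    std::vector<PolyPixel> list = {                  // assumed fragments
        {3, 1, 0.40f}, {5, 2, 0.10f}, {4, 7, 0.25f} };
    std::sort(list.begin(), list.end(),
              [](const PolyPixel& x, const PolyPixel& y) {
                  return x.depth < y.depth; });
    size_t cursor = 0;
    float depthK = 0.05f, depthK1 = 0.30f;           // slices k and k+1
    while (cursor < list.size() && list[cursor].depth < depthK1) {
        if (list[cursor].depth >= depthK)            // fragment between slices
            std::printf("blend fragment at (%d, %d)\n",
                        list[cursor].u, list[cursor].v);
        ++cursor;
    }
}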
Fig. 8 shows a "nal image of the synthetic volume
model combined with the polygonal cone, both presented
in Fig. 6, using the low-resolution method.
The detail in Fig. 8 illustrates the aliasing problem
introduced by the sampling of the polygonal model to
intermediate-image resolution in the low-resolution
method. This problem is more pronounced at the borders
of the polygonal model and can be more or less severe
according to the point of view, the shape of the model
and, more importantly, the resolution of the intermediate
image. To reduce the aliasing e!ect we investigate techniques to increase the resolution of the intermediate
Fig. 7. Standard Shear-Warp slice composition.
590
A.E.F. Schmidt et al. / Computers & Graphics 24 (2000) 583}601
Fig. 8. Image created with low-resolution method, showing cone border detail.
Fig. 9. Blending low-resolution slices into a high-resolution intermediate image.
image. This change in resolution, however, a!ects the
blending process, as shown in the following sections.
3.5. High-resolution composition algorithm
In this algorithm, the intermediate-image resolution and the Z-Buffer color and depth buffers are set to high resolution (see Eq. (4)). The volume slices are composed into the high-resolution intermediate image with the colors and opacities interpolated as illustrated by Fig. 9. In this figure, each pixel in high resolution is considered as a sub-division of a low-resolution pixel.
The strategy used here seeks to reuse the RLE structure of the low-resolution intermediate image and volume slices. The correspondence between the current pixel (in low resolution) and the current voxel that exists in Fig. 7 continues to exist in Fig. 9.
As Fig. 9 shows, pixel p is sub-divided into M×N subpixels. Following the same interpolation scheme shown in the last section, for each subpixel of p only four voxels contribute to determine its color and opacity. However, due to the misalignment between the slice and the intermediate image, indicated by distances a and b in Fig. 9, these four voxels may vary for different subpixels. The four voxels that contribute to each subpixel are chosen from a neighborhood of nine voxels, labeled as follows:

• C: current voxel (i, j),
• W, E: voxels i−1 and i+1 in scanline j,
• NW, N, NE: voxels i−1, i and i+1 in scanline j−1,
• SW, S, SE: voxels i−1, i and i+1 in scanline j+1.
Fig. 10 shows the subset of four voxels that are chosen for each group of subpixels of p. Note in this figure that the sectors marked by the symbols (hexagon, diamond, etc.) are the ones shown in Fig. 9, and that voxels TL, TR, BL, and BR play the same role as the ones shown in Fig. 7. In Fig. 10, for each subpixel, the distances a and b play the same role in defining the voxel weighting factors as in Eq. (6).
Considering the subpixels indexed by m ranging from 0 to M−1 in the u direction, and by n from 0 to N−1 in the v direction, each of the four sectors is defined as shown in Table 1.
As noted before in this section, the composition step uses the same run-length information used in the low-resolution composition algorithm. The main differences are the number of active scanlines and the opacity criteria for intermediate image pixels. In the low-resolution method, it is necessary to access two active scanlines of the RLE structure to obtain the four voxels that may contribute to the current pixel of the intermediate image. Here, the high-resolution algorithm uses three active scanlines in the run-length encoding of the current slice to obtain the nine neighbors that may contribute to the current pixel of the intermediate image. Regarding the opacity criteria, only when all subpixels are opaque can the corresponding pixel in low resolution be considered opaque. In this case, the composition algorithm can skip this pixel.
Finally, the warping step must be adjusted to transform this high-resolution intermediate image into the final image. The warp process is similar to the one applied to low-resolution intermediate images. The difference consists in multiplying the first two rows of the warping matrix by factors M and N, respectively.
Fig. 11 shows a "nal image of the synthetic volume
model combined with the polygonal cone using the
high-resolution method.
As the "gure above shows, the high-resolution algorithm smoothes the cone's edges, producing good-quality
images. This method, however, is very slow and may be
improved upon by dual-resolution methods, as shown in
Fig. 10. Voxels that contribute to a pixel and their weighting
factors.
591
the following section. A complete comparison between
this and the other methods is presented in Section 4.
3.6. Dual-resolution composition algorithms
The basic idea in this class of algorithms is to increase the resolution of the intermediate image only in the regions that are influenced by the polygons. To identify those regions two criteria are proposed here: (a) polygonal model footprint, and (b) high-frequency regions. These criteria yield two dual-resolution algorithms. In the first one, all pixels of the low-resolution intermediate image that correspond to projections of the polygons are marked as requiring high resolution. In the second case, the algorithm first applies the 3×3 Laplacian filter, given below, to the polygon color map, produced in high resolution:
L = |  0  −1   0 |
    | −1   4  −1 |
    |  0  −1   0 |
This convolution produces a new image in which the high-frequency regions correspond to high values in each RGB channel. These high values correspond to borders and to irregularly lit regions of the polygonal model. Therefore, only the pixels of the low-resolution intermediate image that correspond to projections of the high-frequency regions are marked as requiring high resolution.
Determining which values are considered high-frequency depends on a frequency threshold provided by the user. The lower the threshold, the greater the number of pixels to be supersampled. Fig. 12 shows the cone model sampled at intermediate-image resolution and its corresponding filtered image.
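A small C++ sketch of this marking step, with an assumed image and the threshold used in the tests (the paper filters each RGB channel; one gray channel is used here for brevity):

// Sketch of the high-frequency criterion: apply the 3x3 Laplacian L
// and mark pixels whose response exceeds the user threshold.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int W = 8, H = 8;                    // assumed image size
    std::vector<float> gray(W * H, 0.0f);
    gray[3 * W + 4] = 255.0f;                  // a bright polygon border
    const float threshold = 120.0f;            // value used in the tests
    std::vector<bool> needsHighRes(W * H, false);
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            float lap = 4 * gray[y * W + x]
                      - gray[y * W + x - 1] - gray[y * W + x + 1]
                      - gray[(y - 1) * W + x] - gray[(y + 1) * W + x];
            if (std::fabs(lap) > threshold)
                needsHighRes[y * W + x] = true;  // mark this region
        }
    int marked = 0;
    for (bool b : needsHighRes) marked += b ? 1 : 0;
    std::printf("%d pixels marked for high resolution\n", marked);
}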
The footprint dual-resolution algorithm requires two auxiliary intermediate images: one in high resolution and one in low resolution. The high-resolution auxiliary image is used to blend the voxels and the polygons into pixels identified by the footprint criterion. The low-resolution auxiliary image is used to compose only volume data.
Table 1
Sector limits

Sector    m_min              m_max            n_min              n_max
00        0                  floor(a·M)       0                  floor(b·N)
01        floor(a·M) + 1     M − 1            0                  floor(b·N)
10        0                  floor(a·M)       floor(b·N) + 1     N − 1
11        floor(a·M) + 1     M − 1            floor(b·N) + 1     N − 1
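A small C++ sketch of the sector classification of Table 1, with assumed factors and offsets (names are ours):

// Sketch of Table 1: classify each subpixel (m, n) of a pixel into one
// of the four sectors, which selects its four contributing voxels.
#include <cmath>
#include <cstdio>

int main() {
    const int M = 4, N = 3;          // assumed supersampling factors
    const double a = 0.6, b = 0.3;   // assumed slice misalignment (Fig. 9)
    const int mSplit = (int)std::floor(a * M);   // last m of sectors 00/10
    const int nSplit = (int)std::floor(b * N);   // last n of sectors 00/01
    for (int n = 0; n < N; ++n)
        for (int m = 0; m < M; ++m) {
            int sn = (n <= nSplit) ? 0 : 1;      // first digit of the sector
            int sm = (m <= mSplit) ? 0 : 1;      // second digit of the sector
            std::printf("subpixel (%d,%d) -> sector %d%d\n", m, n, sn, sm);
        }
}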
Fig. 11. Final image of a hybrid sample using high-resolution algorithm.
Fig. 12. Polygonal cone and its corresponding high-frequency filtered image.
The high-frequency algorithm uses the same two auxiliary images as the footprint version, plus an additional low-resolution auxiliary intermediate image to compose volume and polygonal data. Fig. 13 illustrates the need for two low-resolution images in this method. In this figure only the pixels marked with a diamond are computed in high resolution; the others (squares and hexagons) are computed by resampling the low-resolution intermediate images. To correctly compute the hexagons in the high-resolution intermediate image, one must take into account the low-resolution image that contains both volume and polygonal data. If this same low-resolution image is used to compute the square pixels, however, the polygonal model's border color is blurred, as illustrated in Fig. 13. To correctly interpolate pixels that do not receive polygon contributions (square pixels), the algorithm uses the low-resolution auxiliary intermediate image in which only voxel contributions are computed. To identify which pixels lie outside the polygons' projections, the Z-Buffer depth matrix is used.
In both dual-resolution algorithms, the high- and low-resolution versions of the intermediate image are assembled into a single high-resolution image before the warping transformation. The warp process is similar to the one applied in the high-resolution algorithm.
Figs. 14 and 15 show final images of the synthetic volume model combined with the polygonal cone using the footprint and high-frequency methods, respectively. The high-frequency image was created with a frequency threshold value of 120.
These images present similar results in terms of image quality and anti-aliasing effects, but their processing times vary due to the differences in the number of pixels that are processed in high resolution. Fig. 16(a) shows the portions of the final image that are composed in high resolution using the footprint algorithm, and Fig. 16(b) shows the high-resolution portions created with the high-frequency algorithm, using 120 as the frequency threshold.
3.7. Polygonal models with transparency
All algorithms presented here can be extended to combine volume data with translucent polygonal models. A simple change in the proposed composition methods consists in introducing just one level of transparency for the polygonal models. We use the Z-Buffer algorithm to create not only color and depth buffers as before, but also an opacity buffer that contains the opacity value corresponding to the closest polygon fragment. Volume slices that are behind the polygon fragments are also considered in the composition process. Fig. 17 shows the opaque synthetic volume combined with the cone polygonal model rendered with an opacity value of 0.5, using the high-frequency dual-resolution method with 120 as the frequency threshold.

Fig. 13. Blur produced by interpolation at the low-resolution polygon borders.

Fig. 14. Final image of a hybrid sample using the footprint algorithm.

Fig. 15. Final image of a hybrid sample using the high-frequency algorithm.
Fig. 16. (a) High-resolution portions created using the footprint algorithm and (b) using the high-frequency algorithm.

Fig. 17. Opaque synthetic model with one-level transparent polygonal cone.

This simple process does not account for polygon contributions that are behind the first polygon fragment. To correctly deal with more than one level of transparent polygons, one may construct a Zlist-Buffer data structure as proposed by Zakaria [17]. The main idea of Zakaria's work is to render polygons into a Zlist-Buffer, where each pixel may store more than one z-value, yielding a list of translucent polygon fragments. During the composition process, for each new slice, the algorithms proposed here can check this list and accumulate the color and opacity values of the polygon fragments. There are, however, many applications in which the simple one-level polygon transparency scheme described above may yield good results.
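The one-level scheme reduces to blending the stored fragment exactly once, at its depth, during the front-to-back traversal; a C++ sketch with assumed values (ours, not the paper's implementation):

// Sketch of the one-level transparency scheme: the Z-Buffer pass also
// yields an opacity buffer with the closest fragment's alpha; slices
// behind the fragment keep contributing.
#include <cstdio>

void blendOver(float px[4], float r, float g, float b, float a) {
    float t = 1.0f - px[3];                    // Eq. (7)
    px[0] += r * a * t; px[1] += g * a * t;
    px[2] += b * a * t; px[3] += a * t;
}

int main() {
    float pixel[4] = {0, 0, 0, 0};
    float fragDepth = 0.25f, fragA = 0.5f;     // closest polygon fragment
    bool fragDone = false;
    float sliceDepth[3] = {0.1f, 0.3f, 0.5f};  // slices, front to back
    for (float d : sliceDepth) {
        if (!fragDone && fragDepth < d) {      // fragment lies before slice
            blendOver(pixel, 1.0f, 0.2f, 0.2f, fragA);
            fragDone = true;                   // one transparency level only
        }
        blendOver(pixel, 0.8f, 0.8f, 0.8f, 0.4f);  // assumed volume sample
    }
    std::printf("final opacity %.2f\n", pixel[3]);
}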
4. Test results and discussion
The ideas proposed in this paper were implemented in an in-house volume-rendering program. The algorithms use orthographic projection and implement RLE encoding of the volume and intermediate image to accelerate the composition process. To minimize the amount of memory used by the RLE volume structures, the program maps the gradient vectors into a pre-computed table with a fixed number of uniformly distributed normal vectors. In the tests described in this section, we use a table with 256 entries, thus reducing the gradient representation to one byte per voxel. Quantizing the gradient vector may introduce a reduction in image quality. Figs. 18(a) and (b) show two images of a skull, one created using the original gradient vectors and another using gradient quantization to 256 normal vectors. In the areas of the image corresponding to surfaces having slowly varying curvatures, the differences are noticeable; in other areas, however, there is very little difference. The other two pairs of images in Figs. 18(c)-(f) illustrate situations where gradient quantization does not significantly decrease the image quality.
Better results, of course, can be obtained by quantizing the gradient into a larger set. Lacroute [18], for instance, proposed a quantization method that represents the gradients in 13 bits, mapping the gradients to a space of 8192 normal vectors.
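A C++ sketch of such a quantization step (the spiral-point table construction is our assumption, not the paper's exact distribution of 256 vectors):

// Sketch of gradient quantization: each gradient is replaced by the
// index of the nearest entry in a fixed table of unit normals.
#include <cmath>
#include <cstdint>
#include <cstdio>

static const int TABLE_SIZE = 256;
static double table_[TABLE_SIZE][3];

void buildTable() {                     // roughly uniform points on a sphere
    for (int i = 0; i < TABLE_SIZE; ++i) {
        double z = 1.0 - 2.0 * (i + 0.5) / TABLE_SIZE;
        double r = std::sqrt(1.0 - z * z);
        double phi = i * 2.399963229728653;    // golden angle in radians
        table_[i][0] = r * std::cos(phi);
        table_[i][1] = r * std::sin(phi);
        table_[i][2] = z;
    }
}

uint8_t quantize(const double n[3]) {   // nearest entry by dot product
    int best = 0;
    double bestDot = -2.0;
    for (int i = 0; i < TABLE_SIZE; ++i) {
        double d = n[0] * table_[i][0] + n[1] * table_[i][1] + n[2] * table_[i][2];
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return (uint8_t)best;
}

int main() {
    buildTable();
    double n[3] = {0.577, 0.577, 0.577};       // assumed unit gradient
    std::printf("gradient stored as one byte: index %d\n", quantize(n));
}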
We should note that, since both the RLE volume representation and the gradient quantization are done in a pre-processing step of the algorithm, the number of entries in the table of gradients has little influence on the time spent generating the final images (pre-processing times are not included in the time results presented in this section).
The system used in the tests was an Intergraph Realizm II TDZ 2000, with dual Pentium II 300 MHz processors and 512 MB RAM, running Windows NT. In order to better control the numerical experiments presented here, we made no use of the graphics hardware. That is, our software implements all steps of all algorithms, including the Z-Buffer. To keep the tests simple, the parallel capability offered by the dual Pentium architecture was not used either.
We used two commonly available volumetric datasets from the Visible Human Project [20] to show two aspects of the proposed algorithms: efficiency and image quality.
Fig. 18. (a) Skull rendered with the original gradients and (b) with quantization. (c) Foot rendered with transparency with the original
gradients and (d) with quantization. (e) Engine rendered with the original gradients and (f) with quantization.
4.1. Efficiency results
Five versions of the hybrid Shear-Warp algorithm were considered to generate the time results presented in Table 3: (1) standard (without polygonal model); (2) low resolution; (3) high resolution; (4) footprint in high resolution; and (5) high frequency in high resolution. The dataset used here is the CT exam of the Visible Woman's head [20], shown in Fig. 19(a). The head dataset was assembled from slices 1001-1209 of the whole woman's body. Each slice has a resolution of 512×512, and the basic dataset has 512×512×209 voxels. To study the effect of the volume size on the algorithms, we have filtered this dataset to produce a smaller one with 127×127×51 voxels. We also uniformly converted the original 12-bit density values into a one-byte representation, that is, with density values ranging from 0 to 255.
Fig. 19. Visible Woman's head data set and cones polygonal model.

The speed of a direct volume-rendering algorithm is also very dependent on the transfer functions. To visualize the Visible Woman's skull, shown in Fig. 19(b), the opacity transfer function α, which gives opacity values, and the color transfer function C, which gives RGB values, used in the tests were:

α = 0.0 if v ∈ [0, 100],
    1.0 if v ∈ [101, 255],

C = (0 0 0)        if v ∈ [0, 49],
    (215 150 115)  if v ∈ [50, 66],
    (245 100 100)  if v ∈ [67, 99],
    (255 255 255)  if v ∈ [100, 255],

where v is the voxel density value.
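For reference, these functions translate directly into code; a C++ sketch (ours):

// Sketch of the test transfer functions: a step opacity function and a
// piecewise-constant RGB color function over the 8-bit density v.
#include <cstdio>

double opacity(int v) { return v <= 100 ? 0.0 : 1.0; }

void color(int v, int rgb[3]) {
    static const struct { int hi, r, g, b; } ramp[] = {
        { 49,   0,   0,   0}, { 66, 215, 150, 115},
        { 99, 245, 100, 100}, {255, 255, 255, 255} };
    for (const auto& seg : ramp)
        if (v <= seg.hi) { rgb[0] = seg.r; rgb[1] = seg.g; rgb[2] = seg.b; return; }
}

int main() {
    int rgb[3];
    color(60, rgb);
    std::printf("v=60 -> alpha=%.1f, C=(%d %d %d)\n",
                opacity(60), rgb[0], rgb[1], rgb[2]);
}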
The choice of the opacity transfer function implies that both the volume and the polygonal model are considered opaque. To check the time overhead introduced by the pixel subdivision implemented in the high-resolution methods' algorithms, we have chosen to work with no transparency. Later in this section, we will show examples that use transparent transfer functions.
The polygonal model is composed of six cones that are parallel to the X, Y, and Z object-space axes, as illustrated by Fig. 19(c). About 300 quadrilateral polygons are used to model each cone. All cones are completely opaque (α = 1.0). Fig. 19(d) shows a final image where volume data and polygonal model are combined.
Other important rendering parameters used in this test are shown in Table 2. The minimum opacity value indicates the lowest opacity value considered non-transparent. If a voxel has an opacity value lower than the minimum, the voxel is considered transparent. The maximum opacity value indicates the highest opacity value considered non-opaque. During the composition process, pixels can receive voxel contributions until they turn opaque, i.e., until their opacity value becomes greater than the maximum opacity value. The ambient light sets the percentage of light that comes from the ambient instead of from the light sources. The noise threshold indicates the minimum density value considered valid material. The frequency threshold is the user-defined value that sets the high-frequency lower bound for the polygonal model's filtered image.
Table 3 shows the CPU time, in seconds, for the main steps of the five implemented algorithms, using the two versions of the volume data: 127×127×51, labeled small, and 512×512×209, labeled large. Since the speed of the algorithms is also dependent on the projection direction, the times presented in this table were computed as an average of 50 randomly distributed projection directions.
Table 2
Parameters used in the tests

Minimum opacity value     0.05      Noise threshold           10
Maximum opacity value     0.95      Frequency threshold       100
Ambient light             10%       Final image resolution    500
Table 3
Visible Woman's skull dataset: times in seconds (500×500 final image)

Hybrid composition step            Volume size    No surface    Low     High    Footprint    High frequency

Polygons Z-Buffer rendering        small          0.00          0.18    0.52    0.51         0.51
                                   large          0.00          0.85    0.93    0.91         0.93

High-frequency determination       small          0.00          0.00    0.00    0.00         0.02
                                   large          0.00          0.00    0.00    0.00         0.05

Marking high-resolution portions   small          0.00          0.00    0.03    0.03         0.02
into intermediate image            large          0.00          0.00    0.06    0.04         0.03

Creating the intermediate image    small          0.09          0.08    0.94    0.49         0.51
                                   large          1.32          1.25    3.82    3.11         2.41

Composing polygon contributions    small          0.00          0.00    0.03    0.03         0.03
between slices                     large          0.00          0.02    0.07    0.07         0.07

Composing slice contributions      small          0.09          0.08    0.91    0.34         0.29
                                   large          1.32          1.24    3.75    2.79         1.95

Assembling intermediate images     small          0.00          0.00    0.00    0.12         0.18
                                   large          0.00          0.00    0.00    0.25         0.39

Warping                            small          0.17          0.17    0.18    0.18         0.18
                                   large          0.20          0.20    0.20    0.20         0.22

Exhibition                         small          0.04          0.08    0.09    0.09         0.09
                                   large          0.09          0.08    0.08    0.08         0.07

Total time (excluding exhibition)  small          0.30          0.51    1.76    1.30         1.33
                                   large          1.61          2.37    5.09    4.34         3.71
From Table 3, a series of observations can be made. Without hardware assistance, the time spent on the Z-Buffer polygon-rendering step is considerable compared to that of the other steps, and it is proportional to the intermediate image resolution. This is a strong indication that a hardware implementation of the Shear-Warp steps would yield interactive times.
The time for composing volume slices and polygons into the low-resolution intermediate image is smaller than the time for composing only slices. This happens because polygons cause the intermediate-image pixels to become opaque earlier. Thus, as we are using the RLE volume and image optimizations, the volume and polygon composition requires fewer computations than composing the slices alone in this case.
In all algorithms and in most cases, the time taken to create the intermediate image is greater than the time spent in each of the other steps. Only in the low-resolution composition algorithm with small volume data did this not happen. As the number of pixels in the intermediate image increases, so does the time spent in the composition step. Note, however, that the number of pixels to be processed depends on the algorithm, on the geometry of the polygonal model, and on the projection direction.
Considering the creation of the intermediate image, the most time-consuming sub-step is slice composition, which includes voxel shading, color and opacity interpolation, and accumulation into the intermediate image. The results shown in Table 3 indicate that the algorithms can be ranked as follows, from the most to the least efficient in this processing step: low, high frequency, footprint, and high.
The step for assembling the intermediate image applies only to the dual-resolution algorithms. The footprint algorithm presents better times for this step than the high-frequency algorithm. The former creates only one low-resolution intermediate image, which accumulates only volume contributions, while the latter creates two
low-resolution images that accumulate contributions of volume and of volume plus polygons, respectively. Thus, the footprint method needs to scale only one low-resolution image. Moreover, it has to assemble fewer pixels than the high-frequency algorithm. The data in Table 3 support this analysis.
Concerning the time spent in the warping step, one can note that it depends on the intermediate- and final-image resolutions. In the low-resolution composition algorithm, warping time is equivalent to that of the standard Shear-Warp algorithm. In the algorithms that create a high-resolution intermediate image, warping time increases as the resolution of the intermediate image increases.
When the volume dimensions that are parallel to the projection plane are similar to the final-image resolution, the M and N factors are equal to one. In this case, the low-resolution composition algorithm creates an intermediate image with the same resolution as the one created by the high- and dual-resolution algorithms; thus, the final images created by all these methods have the same quality. In this case, it is better to use the faster low-resolution composition algorithm.

4.2. Image quality results

Fig. 20. Images of a pelvis combined with a polygonal prosthesis: (a) prosthesis is opaque; (b) prosthesis has one level of transparency.
Fig. 20 shows two "nal images of a pelvis extracted
from a CT exam of Visible Male's pelvis dataset [20].
The pelvis dataset was assembled from slices 1843}2258
of the whole man's body. Each slice has a resolution of
512;512, and the basic dataset has 512;512;100
voxels. This volume is combined with a polygonal prosthesis model, with 3758 triangles, which stays inside the
femur bone. The images shown in Fig. 20 have
1024;1024 as resolution and were created using the
low-resolution algorithm, RLE structures and gradient
quantization.
In Fig. 20(a), skin, muscle and bones are transparent
and the prosthesis is opaque. In Fig. 20(b) the volume
stays transparent and the prosthesis is rendered with an
opacity value of 0.3 with one level of transparency. The
"nal time spent to create Figs. 20(a) and (b) was 13.65 and
13.85 s, respectively.
As we can see in Fig. 20(a), the image created with the opaque prosthesis seems "wrong", as if the prosthesis were outside the femur bone. On the other hand, Fig. 20(b), created with one level of transparency for the prosthesis, gives the correct impression that the prosthesis is inside the femur bone. These results emphasize the need to deal with transparency, since the opaque model does not produce an adequate image. Of course, more accurate images are obtained when a Zlist-Buffer is used.
To compare the behavior of the algorithms, we have generated four versions of the image in Fig. 20(b). Table 4 shows the processing time spent to create the final images using each algorithm, and Fig. 21 shows zoomed areas where we can see the differences in image quality and anti-aliasing results.
Fig. 21(a) shows the image created with the low-resolution algorithm. As we can see, the image is blurred and aliased, but this method presents the best processing time, as shown in Table 4.
Fig. 21(b) shows the high-resolution result. It is the best image, with no blur and a lower level of aliasing, but it takes the longest time to process.
Fig. 21(c) shows the result of the footprint algorithm. Here we also have a very good image, with no blur in the areas where the prosthesis is present, and the processing time is the lowest among the methods that work at the subpixel level.
Finally, Fig. 21(d) presents the image created with the high-frequency algorithm, using a frequency threshold of 120. As we can see, some regions inside the prosthesis are
Fig. 21. Zoom of the images created using the (a) low-resolution, (b) high-resolution, (c) footprint, and (d) high-frequency methods.
Table 4
Processing time to create images of the pelvis with transparent prosthesis

Method            Time (s)
Low resolution    13.85
High resolution   228.29
Footprint         39.76
High frequency    46.96
blurred, because they are composed in low resolution, but the borders are well defined, with a low level of aliasing.
The final observation in this example is that the difference between the footprint and high-frequency images occurs in the pixels within the polygonal model's projection and in those adjacent to them.
5. Conclusions
This work presented four Shear-Warp algorithms for combining volume data and opaque/translucent polygonal models using the Z-Buffer: low resolution, high resolution, footprint, and high frequency. These algorithms were implemented, and test results have shown that they correctly mix volume data and polygonal models into the final image. The differences between them concern efficiency and final image quality.
The low-resolution algorithm presents the best time for mixing volume and polygonal data; however, its final image can present aliasing artifacts on the borders of the polygonal model representation.
The high-resolution algorithm presents the best final-image quality, but its time is the highest. This happens despite the use of RLE encoding of the slices and of the low-resolution intermediate image. Without this optimization, its running time would be totally unacceptable.
The proposed dual-resolution algorithms keep the RLE encoding, generating, in acceptable times, final images of quality comparable to that of the high-resolution algorithm. The examples studied in this research slightly favor the use of the high-frequency algorithm in cases where there is no transparency, and the footprint version of the dual-resolution algorithms for the transparent cases.
Finally, there is much to be gained if these algorithms can be implemented in hardware. In the absence of specialized hardware, the algorithms presented here could be significantly improved if two general functions were supported by general graphics libraries such as OpenGL and DirectX: efficient Z-Buffer control and image composition with RLE encoding. By efficient Z-Buffer control we mean the ability to retrieve actual depth values and to efficiently access these buffers from a program in main memory. OpenGL provides this access but, in all implementations we tested, it is a very slow operation. DirectX addresses this problem and presents a partial solution: it provides efficient access, but does not provide a standard for encoding depth values. By image composition with RLE we mean the ability to compose the final image using the run-length-encoded representation of the slices. It should take advantage of the opacity of the destination image in an over operation such as the ones shown here. Furthermore, it would be very useful if this blending operation could also be controlled by a depth buffer.
Acknowledgements
This work was developed in Tecgraf/PUC-Rio, and
was partially funded by CNPq, by means of fellowships.
Tecgraf is a Laboratory mainly funded by PETROBRAS.
References

[1] Levoy M. A hybrid ray-tracer for rendering polygon and volume data. IEEE Computer Graphics and Applications 1990;10(3):33-40.
[2] Kreeger K, Kaufman A. Hybrid volume and polygon rendering with cube hardware. Proceedings of the 1999 SIGGRAPH/Eurographics Workshop on Graphics Hardware, 1999. p. 14-24.
[3] Kreeger K, Kaufman A. Mixing translucent polygons with volumes. Proceedings of IEEE Visualization '99, San Francisco, 1999.
[4] Lacroute P, Levoy M. Fast volume rendering using a shear-warp factorization of the viewing transformation. Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '94), Orlando, 1994. p. 451-8.
[5] Lacroute P. Analysis of a parallel volume rendering system based on the shear-warp factorization. IEEE Transactions on Visualization and Computer Graphics 1996;2(3):218-31.
[6] McReynolds T, Blythe D, Fowle C, Grantham B, Hui S, Womack P. Programming with OpenGL: advanced rendering. SIGGRAPH '97 Lecture Notes, Course 11, 1997. p. 144-53.
[7] Wilson O, Gelder AV, Wilhelms J. Direct volume rendering via 3D textures. Technical Report UCSC-CRL-94-19, Baskin Center for Computer Engineering and Information Sciences, University of California, Santa Cruz, USA, 1994.
[8] Eckel G, Grzeszczuk R. OpenGL Volumizer programmer's guide. Mountain View, CA: Silicon Graphics, 1998.
[9] Osborne R, Pfister H, Lauer H, McKenzie H, Gibson S, Hiatt W, Ohkami T. EM-Cube: an architecture for low-cost real-time volume rendering. Proceedings of the 1997 SIGGRAPH/Eurographics Workshop on Graphics Hardware, 1997. p. 131-8.
[10] Pfister H, Kaufman A. Cube-4 - a scalable architecture for real-time volume rendering. ACM/IEEE Symposium on Volume Visualization, 1996. p. 47-54.
[11] Pfister H, et al. The VolumePro real-time ray-casting system. Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH '99), Los Angeles, 1999. p. 251-60.
[12] Kaufman A, Yagel R, Cohen D. Intermixing surface and volume rendering. In: Hoehne KH, Fuchs H, Pizer SM, editors. 3D imaging in medicine: algorithms, systems and applications. Berlin: Springer, 1990. p. 217-28.
[13] Miyazawa T, Koyamada K. A high-speed integrated renderer for interpreting multiple 3D volume data. The Journal of Visualization and Computer Animation 1992;3:65-83.
[14] Tost D, Puig A, Navazo I. Visualization of mixed scenes based on volumes and surfaces. Proceedings of the Fourth Eurographics Workshop on Rendering, Paris, 1993. p. 281-92.
[15] van Walsum T, Hin AJS, Versloot J, Post FH. Efficient hybrid rendering of volume data and polygons. In: Post FH, Hin AJS, editors. Advances in scientific visualization. Berlin: Springer, 1992. p. 83-96.
[16] Lichtenbelt B. Design of a high performance volume visualization system. Proceedings of the 1997 SIGGRAPH/Eurographics Workshop on Graphics Hardware, 1997. p. 111-20.
[17] Nordin Zakaria M, Md Yazid Md Saman. Hybrid shear-warp rendering. Proceedings of the ICVC, Goa, India, 1999.
[18] Lacroute P. Fast volume rendering using a shear-warp factorization of the viewing transformation. Technical Report CSL-TR-95-678, Computer Systems Laboratory, Departments of Electrical Engineering and Computer Science, Stanford University, 1995.
[19] Foley JD, van Dam A, Feiner SK, Hughes JF. Computer graphics: principles and practice, 2nd ed. New York: Addison-Wesley, 1990. p. 92-100.
[20] Visible Human Project. www.nlm.nih.gov/research/visible/visible_human.htm.