
IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, VOL. 11, NO. 2, MARCH 2007

Real-Time Volume Rendering Visualization of Dual-Modality PET/CT Images With Interactive Fuzzy Thresholding Segmentation
Jinman Kim, Member, IEEE, Weidong Cai, Member, IEEE, Stefan Eberl, Member, IEEE,
and Dagan Feng, Fellow, IEEE

Abstract—Three-dimensional (3-D) visualization has become an essential part of imaging applications, including image-guided surgery, radiotherapy planning, and computer-aided diagnosis. In the visualization of dual-modality positron emission tomography and computed tomography (PET/CT), 3-D volume rendering is often limited to rendering of a single image volume and by high computational demand. Furthermore, incorporation of segmentation in volume rendering is usually restricted to visualizing the presegmented volumes of interest. In this paper, we investigated the integration of interactive segmentation into real-time volume rendering of dual-modality PET/CT images. We present and validate a fuzzy thresholding segmentation technique based on fuzzy cluster analysis, which allows interactive and real-time optimization of the segmentation results. This technique is then incorporated into a real-time multi-volume rendering of PET/CT images. Our method allows a real-time fusion and interchangeability of the segmentation volume with PET or CT volumes, as well as the usual fusion of PET/CT volumes. Volume manipulations such as window level adjustments and lookup table can be applied to individual volumes, which are then fused together in real time as adjustments are made. We demonstrate the benefit of our method in integrating segmentation with volume rendering in its application to PET/CT images. Responsive frame rates are achieved by utilizing a texture-based volume rendering algorithm and the rapid transfer capability of the high-memory bandwidth available in low-cost graphic hardware.

Index Terms—Dual-modality positron emission tomography and computed tomography (PET/CT), fuzzy C-means cluster analysis, interactive three-dimensional (3-D) segmentation, multi-volume rendering, real-time volume rendering.

Manuscript received August 22, 2005; revised December 24, 2005. This work was supported in part by the ARC and RGC grants.
J. Kim and W. Cai are with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia (e-mail: jinman@it.usyd.edu.au).
S. Eberl is with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia, and also with the Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney, NSW 2050, Australia.
D. Feng is with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia, and also with the Center for Multimedia Signal Processing, Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Kowloon, Hong Kong.
Digital Object Identifier 10.1109/TITB.2006.875669
1089-7771/$25.00 © 2007 IEEE

I. INTRODUCTION

ADVANCES in digital medical images are resulting in increased image volumes from the acquisition of four-dimensional (4-D) imaging modalities, such as dynamic positron emission tomography (PET), and dual-modality PET and computed tomography (PET/CT). These images have introduced significant challenges for efficient visualization [1]–[3]. In line with the advances in image acquisition, three-dimensional (3-D) visualization algorithms have been developed that enable real-time visualization of multidimensional volumes using low-cost hardware instead of restricting it to high-end expensive workstations [4]–[6]. 3-D visualization has become an attractive method for imaging applications, including image-guided surgery and radiotherapy, and computer-aided diagnosis [3], [4], [7]–[12]. In these applications, segmentation is often employed, which enables visual separation and selection of specific volumes of interest (VOIs) [6], [12]–[18]. Segmentation of the image volume can be performed manually by a physician. However, such delineation is subjective, and hence may not be reproducible, and it is time consuming. Fully automated methods can only be applied successfully within precisely defined bounds, and they cannot guarantee accurate delineation under all circumstances, thus requiring some kind of operator intervention, such as in interactive segmentation.

Studies involving interactive segmentation in 3-D visualization have often been limited to rendering the preprocessed segmentation results [16]–[18]. However, these methods render only the segmented VOIs, without placing them in the context of surrounding structures. In [18], a method of correcting segmentation errors from volume-rendered VOIs by adjusting the radius of the viewable volume to reveal the surrounding image was presented. Although this method allows a physician the ability to correct for segmentation errors in volume rendering, it was limited to only rendering the surrounding voxels within the radius of the VOIs and did not take into consideration that the surrounding voxels may have no relation to the VOI.

These interactive segmentation methods were all based on visualization of a single volume of images. In dual-modality PET/CT images, which consist of co-registered functional and anatomical image volumes, the ability to visualize the segmentation result with both image volumes can be of considerable benefit. For instance, segmentation of tumor structures from low-resolution, functional PET image data can benefit from overlaying it on the CT to provide an anatomical frame of reference and precise localization.

In this paper, we investigated and validated the incorporation of interactive segmentation into real-time 3-D visualization of PET/CT images. We present a fuzzy thresholding segmentation method for PET images in real-time volume rendering. In the segmentation of functional PET images, cluster analysis


based on kinetic behavior has previously been found effective in classifying kinetic patterns [19]–[21], including segmentation of regions of interest [19] and the generation of parametric images from huge data sets [20]. In these approaches, PET images were partitioned into a predefined number of cluster groups based on "crisp" clustering, where one voxel was assigned to a single cluster group. The fuzzy extension to crisp clustering, such as the fuzzy C-means (FCM) cluster analysis [22], presents the advantage of assigning probabilities of each voxel belonging to a particular cluster. This attribute is utilized in this paper to control the segmentation by simple and computationally efficient thresholding of the cluster probabilities. We describe and evaluate a fuzzy thresholding technique and demonstrate its integration into an interactive multi-volume viewer (IMV2). The IMV2 allows fusion of the segmentation with PET or CT images as well as the usual fusion of PET and CT images. Volume manipulation tools designed for PET/CT visualization, which allow manipulation of individual volumes (e.g., thresholding the CT and adjusting window levels of PET images), are incorporated. The resultant manipulated volumes are then fused together in real time as the adjustments are made.
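As an illustration of this per-volume manipulation followed by fusion, the following is a CPU-side numpy sketch. The linear blend and the simplified window/level mapping are our own stand-ins for the hardware-based fusion the paper builds on; the function names and parameters are hypothetical, not the authors' API.

```python
import numpy as np

def window_level(volume, level, width):
    """Map raw intensities to [0, 1] display values for a given
    window level/width (a simplified model of the viewer's control)."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((volume - lo) / (hi - lo), 0.0, 1.0)

def fuse(vol_a, vol_b, ratio):
    """Per-voxel linear blend of two display-ready volumes.
    ratio = 1.0 shows only vol_a, 0.0 shows only vol_b."""
    return ratio * vol_a + (1.0 - ratio) * vol_b

# Toy example: an 8x8x8 "CT" and "PET" volume with independent
# window-level settings, fused with an equal fusion ratio.
rng = np.random.default_rng(0)
ct = window_level(rng.uniform(-1000, 1000, (8, 8, 8)), level=40, width=400)
pet = window_level(rng.uniform(0, 1e4, (8, 8, 8)), level=5e3, width=1e4)
fused = fuse(ct, pet, ratio=0.5)
```

Because each volume is windowed independently before blending, adjusting either control only re-runs its own mapping; the blend itself is a single pass, which is what makes real-time re-fusion cheap.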

II. METHOD

The IMV2 consists of four major steps, as shown in the flowchart in Fig. 1: 1) segmentation of PET images using FCM cluster analysis into cluster groups based on functional similarity; 2) volume rendering of PET, CT, and segment data using a texture-based rendering technique; 3) interactive fuzzy thresholding of PET data with real-time volume rendering of dual-modality PET/CT; and 4) volume manipulation tools such as window level adjustments and lookup table (LUT) applied to PET/CT volume rendering.

Fig. 1. Flowchart of the proposed interactive multi-volume visualization. After segmenting the PET image using FCM cluster analysis (step 1), the segmentation map and fuzzy logic layer are constructed. The segmentation map, PET, and CT image volumes are rendered using texture-based volume rendering (step 2). The fuzzy logic layer is then used to interactively adjust the rendered segmentation volume by fuzzy thresholding (step 3). These volumes can be fused and interchanged in real time with volume manipulation tools (step 4) included in the IMV2.

A. Automated 4-D FCM Cluster Analysis of Dynamic/Static PET Images
Prior to segmentation, the image data are preprocessed as follows: low-count background areas in the PET images are removed (set to zero) by thresholding. Isolated voxels and gaps are then removed and/or filled by a 3 × 3 × 3 morphological opening filter followed by a closing filter. For dynamic PET data, tissue time activity curves (TTACs) are extracted for each nonzero voxel to form the kinetic feature vector f(t) of time interval t (t = 1, 2, . . . , T), where T is the total number of time points. For static images, a single frame is acquired at t = T.

The FCM cluster analysis based on [22] is applied to assign each of the N feature vectors to one of a set number C of distinct cluster groups. For each cluster, centroids are assigned as the feature vectors of distinct, randomly selected voxels. The value of each centroid voxel is replaced with the average of the 3 × 3 × 3 surrounding voxels to avoid false selection of a noisy outlier that may result in a cluster with a single member. FCM cluster analysis minimizes the objective function J, according to

J = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{P} \, D^{2}\!\left(f_{i}(t), \bar{f}_{c_{j}}(t)\right) \qquad (1)

where P (1 ≤ P ≤ ∞) is a weighting exponent on each fuzzy membership, which determines the amount of fuzziness of the resulting classification, and u_{ij} is the membership degree of the ith feature vector in the cluster j. The similarity measure between the ith feature vector f_i(t) and the cluster centroid \bar{f}_{c_j}(t) of the jth cluster group c_j was calculated using the Euclidean distance D_{ij} given by

D\!\left(f_{i}, \bar{f}_{c_{j}}\right) = \left[ \sum_{t=1}^{T} s(t)\left(f_{i}(t) - \bar{f}_{c_{j}}(t)\right)^{2} \right]^{1/2} \qquad (2)

where s(t) is a scale factor of time point t (t = 1, 2, . . . , T) equal to the duration of the tth frame divided by the total dynamic acquisition time. The scale factor s(t) gives more weight to the longer frames, which contain more reliable data. The minimization of J is achieved by iteratively updating the u_{ij} and the cluster centroids \bar{f}_{c_j}(t) with

u_{ij} = \frac{1}{\sum_{k=1}^{C} \left[ D\!\left(f_{i}(t), \bar{f}_{c_{j}}(t)\right) \big/ D\!\left(f_{i}(t), \bar{f}_{c_{k}}(t)\right) \right]^{2/(P-1)}} \qquad (3)


\bar{f}_{c_{j}}(t) = \frac{\sum_{i=1}^{N} u_{ij}^{P} f_{i}(t)}{\sum_{i=1}^{N} u_{ij}^{P}}. \qquad (4)

Thus, a probabilistic fuzzy membership degree is assigned to every voxel i, such that \sum_{j=1}^{C} u_{ij} = 1.0. The procedure is terminated when the convergence criterion ε in the range of [0, 1] is satisfied, i.e.,

\max_{ij} \left| u_{ij}^{m+1} - u_{ij}^{m} \right| < \varepsilon \qquad (5)

where m is the iteration step. Upon termination, the segmentation map image is constructed by assigning each voxel to the cluster for which it has the highest membership degree. The optimal number of clusters is determined using the fuzzy validation measure [23] S given by

S = \frac{\sum_{j=1}^{C} \sum_{i=1}^{N} u_{ij}^{P} \left\| \bar{f}_{c_{j}}(t) - f_{i}(t) \right\|^{2}}{N \min_{i \neq j} \left\| \bar{f}_{c_{j}}(t) - \bar{f}_{c_{i}}(t) \right\|^{2}} \qquad (6)

and is evaluated for integer values of C in the range (L − 3 < C < L + 4), where L is the number of tissue types expected to be present. A smaller value of S indicates a cluster scheme with more compact and more separate clusters. Values of the parameters C, P, and ε are empirically determined (see Section III).

Fig. 2. (1) Result from applying FCM segmentation to a clinical brain PET study: (a) Original image. (b) FCM segmentation map. The clusters from the automated segmentation map are assigned to a shade of gray corresponding to the cluster average. (c) Fuzzy logic layer of a selected cluster #1. The fuzzy membership layer is also assigned to a shade of gray in an ascending order of membership degree. (2) Histogram distribution of the fuzzy membership degree of voxels in (c). The first 5% of the fuzzy membership histogram, consisting of a large number of voxels that have minor relationships to the cluster, is removed for presentation purposes.

B. Fuzzy Membership Degree Layer and Fuzzy Thresholding

From each segmented cluster c_j (j = 1, 2, . . . , C), a fuzzy membership layer l_j is constructed consisting of the membership degrees u_{ij} for all N voxels to c_j, as shown in Fig. 2(1). The u_{ij} are scaled to 0%–100% for the membership layer. For each membership layer, a fuzzy histogram can be plotted as in Fig. 2(2), which represents the membership of the voxels to a cluster. The fuzzy thresholding works by controlling the fuzzy membership threshold of a selected cluster. By lowering the fuzzy threshold from the automated FCM segmentation threshold, additional voxels with weaker membership to the cluster centroid can be assigned to the cluster. On the other hand, by increasing the fuzzy threshold, fewer voxels, but with higher membership, are clustered. With dynamic PET images, the fuzzy thresholding affects the voxels that are similar in kinetic behavior to the cluster centroid. For static PET images, the fuzzy thresholding is similar to intensity-based thresholding.
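To make the procedure concrete, the following is a minimal numpy sketch of the clustering and thresholding described above: the weighted distance (2), the update equations (3) and (4), the convergence test (5), and the interactive membership threshold of Section II-B. It is an illustration under our own variable names (`F`, `s`, `U`), not the authors' implementation, and it omits the preprocessing, centroid-smoothing, and validity-measure steps.

```python
import numpy as np

def fcm(F, s, C, P=2.0, eps=0.1, max_iter=100, seed=0):
    """Minimal FCM sketch: F is an (N, T) array of feature vectors
    (TTACs), s is a (T,) array of frame-duration scale factors."""
    rng = np.random.default_rng(seed)
    N, T = F.shape
    # Centroids seeded from distinct, randomly selected feature vectors.
    centroids = F[rng.choice(N, size=C, replace=False)]        # (C, T)
    U = np.full((N, C), 1.0 / C)
    for _ in range(max_iter):
        # Weighted Euclidean distance, eq. (2); epsilon avoids division by 0.
        diff = F[:, None, :] - centroids[None, :, :]           # (N, C, T)
        D = np.sqrt((s * diff ** 2).sum(axis=2)) + 1e-12       # (N, C)
        # Membership update, eq. (3): rows of U sum to 1.
        ratio = (D[:, :, None] / D[:, None, :]) ** (2.0 / (P - 1.0))
        U_new = 1.0 / ratio.sum(axis=2)
        # Centroid update, eq. (4): membership-weighted mean of the data.
        W = U_new ** P
        centroids = (W.T @ F) / W.sum(axis=0)[:, None]
        # Convergence criterion, eq. (5).
        converged = np.abs(U_new - U).max() < eps
        U = U_new
        if converged:
            break
    return U, centroids

def fuzzy_threshold(U, cluster, thresh):
    """Section II-B: keep visible only the voxels whose membership to
    `cluster` exceeds the interactively chosen threshold (0-1)."""
    return U[:, cluster] > thresh
```

The segmentation map of Section II-A is simply `U.argmax(axis=1)`; for a static study, `F` reduces to a single column and the membership threshold then behaves much like an intensity threshold, as noted above.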

C. Real-Time Multi-Volume Rendering Overview

Fig. 3. Overview of the main features in the proposed interactive multi-volume visualization. Individual volumes can be rendered (a) and (b) or can be combined with another volume, such as PET/CT (c). Combination of volumes and the ability to interchange the volumes in real time allows the segmented VOI of a tumor (d) and (e) to be fused and visualized with PET (g) and CT (h). The VOI is displayed using the white LUT for presentation purposes. Volume manipulations can be applied to individual volumes with the volumes being fused and the volume rendered in real time as shown in (i), where the CT has been thresholded to reveal the lung boundary surrounding the tumor.

In IMV2, visualization tools were designed to provide physicians with efficient ways to interpret and navigate through the dual-modality PET/CT images. The main features of IMV2 are illustrated in Fig. 3 with the use of a whole-body PET/CT image. From the PET, CT, and segmentation volumes, any volume can be rendered individually, or two volumes can be selected and fused together, with the ability to interchange the volumes and the fusion ratio of the volumes in real time. The segmentation volume allows for interactive selection of different clusters and fuzzy thresholding. The rendered volume(s) can be interactively navigated using conventional volume navigation tools including rotation, scaling, and translation. To remove a portion


of the volume that may obscure the user's view, a "volume clipping" tool can be applied by using a plane that cuts through the volume perpendicular to the viewing window. Alternatively, the user can position a "clipping box" that encapsulates the volume such that only the volume residing inside the box is visible. The ability to clip the volume, together with volume navigation, allows the user to explore the large multi-volume data by interactively selecting the viewing window, which isolates the section of interest from the whole volume. Another tool is the "adaptive sampling rate," which allows the volume to be sampled at a lower rate to increase the responsiveness to movement; this is necessary when dealing with large multidimensional volumes. When the movement is completed, the sampling rate is raised back to the default setting of fully sampled data. The sampling rate is the number of parallel planes used in texture-based volume rendering to render the volume (see Section II-D). Other tools include "window-level adjustment," "transfer function," "LUT," and "intensity-based thresholding." CT images, which occupy a greater dynamic range than is possible to display simultaneously, can be interactively adjusted using the window level. The transfer function can be used to control the opacity of the volume, such that particular voxels (selected by the voxel's intensity) become more prominent. In addition, intensity-based thresholding can be applied to potentially segment out the tissue structures in CT [24], which are acquired in high resolution and contain well-defined separation of tissue structures. As the PET and CT volumes are rendered independently, manipulations can be applied to individual volumes, e.g., thresholding the CT and adjusting window levels of PET data, and the resultant manipulated data sets are then fused together in real time as the adjustments are made.

D. Interactive IMV2 Implementation

The IMV2 has been developed using the OpenGL [25] and SGI Volumizer 2.7 application programming interface (API) [26] to render the multi-volume images using texture-based volume rendering [5], [6], [27], [28]. The texture-based volume rendering creates parallel planes through the columns of the volume data, in the principal direction most perpendicular to the user's line of sight, which are then drawn back-to-front with appropriate 3-D texture coordinates [5]. IMV2 utilizes the high-capacity memory bandwidth of low-cost graphic hardware to perform a rapid transfer of the 3-D textures from the system memory into the graphic memory. By utilizing the large bandwidth, the volume interchange method replaces an old volume in the graphic memory with a new volume. The two volumes are fused using the hardware-based per-voxel fusion method [29] of compositing the two voxels from respective volumes to create a new voxel in real time. This method does not require any preprocessing of the volume data and thus allows real-time adjustment of the fusion ratio of PET to CT. The segmentation volume is interactively adjusted in real time by assigning transparency values to the voxels based on the fuzzy threshold. In this approach, voxels that have greater fuzzy membership than the defined threshold are assigned a visible transparency level and other voxels are set to fully transparent.

Fig. 4. (a) Normal noise level simulation (last temporal frame). In (b)–(e), the top row is the GM and bottom row is the WM. (b) GT. (c) Fuzzy threshold of 28% for GM and 30% for WM. (d) Automated FCM segmentation with threshold of 48% for GM and 50% for WM. (e) Fuzzy threshold of 68% for GM and 70% for WM.

III. EVALUATION AND EXPERIMENTAL RESULTS

A. Validation of FCM Segmentation—Computer Simulations

Computer simulations were performed to evaluate the performance and reliability of the fuzzy thresholding segmentation. The anatomical Zubal brain phantom [30] was reduced to white matter (WM) and gray matter (GM) and 20 cross-sectional slices. A five-parameter 2-[18F]fluoro-2-deoxy-D-glucose (18F-FDG) model [31] was used to simulate realistic TTAC values to construct 22 temporal frame sequences. Each slice was smoothed by applying a Gaussian filter with a full-width-half-maximum (FWHM) of 8 mm prior to forward projection. Projections were scaled to three different count levels by applying a scale factor that sets the maximum pixel count in the last frame to 100 (high noise), 500 (normal noise), and 800 (low noise) counts, where 500 was the maximum observed in comparable clinical studies. Poisson noise was then added to the scaled projection data, which were reconstructed using filtered backprojection with a Hann filter. Noise-free ground truth (GT) images were constructed by smoothing the phantom data with Gaussian blur as in the noisy simulations. Voxels mixed at the boundary in the smoothed images were reclassified to the tissue with the highest contribution. For boundaries separating tissue from zero count regions such as ventricles, 40% of the tissue counts were defined.

The fuzzy thresholding results of GM (first row) and WM (second row) are illustrated in Fig. 4. From visual inspection, the GM segmentation result resembles the GT in Fig. 4(b) most closely with the increased threshold shown in Fig. 4(e). The reverse was evident for WM, where the fuzzy thresholding result in Fig. 4(c) is visually the most similar to its GT. This illustrates the ability of the fuzzy thresholding to interactively optimize the segmentation result for particular structures of interest. The ring artifact around the periphery of the WM segmentation is attributed to the mixed tissue contribution between the GM and the background. In the FCM cluster analysis, the background was removed by using the boundaries of the tissue structures defined in the Zubal phantom, and two clusters were applied corresponding to the WM and GM. Empirically derived cluster analysis parameter values of P = 2.0 and ε = 0.1 gave acceptable results. The segmentation results were found to be quite


Fig. 5. Results of quantitative evaluation measures applied to the simulations at the three noise levels (low, normal, and high). Automated FCM segmentation thresholds are represented by the enlarged data points. The thresholds for the normal noise level were between 18% and 78% for WM, and 20% and 80% for GM (±30% from automated FCM) in increments of 10%. The low noise level had the same thresholds, and the high noise level differed only in WM, which was 19%–79%.

insensitive to these two parameters. Quantitative evaluation of the application of fuzzy thresholding was performed with the Dice similarity coefficient (DSC) [32] given as

\mathrm{DSC} = \frac{2\left|A_{\mathrm{Est}} \cap A_{\mathrm{True}}\right|}{\left|A_{\mathrm{Est}}\right| + \left|A_{\mathrm{True}}\right|} \qquad (7)

which measures the spatial overlap between the estimated and true segmented areas. The DSC is in the range of [0, 1], where 1 represents two overlapping areas of identical size and location.
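Equation (7) is straightforward to compute on boolean voxel masks. A small sketch follows; the convention for two empty masks is our own assumption, as the text does not specify it.

```python
import numpy as np

def dice(est, true):
    """Dice similarity coefficient, eq. (7), for boolean masks."""
    est, true = np.asarray(est, bool), np.asarray(true, bool)
    denom = est.sum() + true.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect overlap (assumption)
    return 2.0 * np.logical_and(est, true).sum() / denom

# Identical masks give 1.0; disjoint masks give 0.0.
a = np.zeros((4, 4), bool); a[:2] = True
b = np.zeros((4, 4), bool); b[:2] = True
```

In the evaluation above, `est` would be the fuzzy-thresholded cluster mask and `true` the GT tissue mask at each threshold setting.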
Fig. 5 shows the evaluation measures of the simulation study for the three noise levels as a function of fuzzy thresholding levels. Based on these results, the DSC improved with increasing fuzzy threshold for the GM in both the low and normal noise levels. For the high noise level, the automated threshold was optimal for GM segmentation. In WM, decreasing the threshold showed a marked improvement in DSC among all noise levels. This suggests that the automated segmentation has over-segmented the GM and under-segmented the WM, which was in accordance with the visual findings in Fig. 4. The results indicate that the proposed fuzzy thresholding was most effective at high noise levels and that the method was robust in terms of noise.

Fig. 6. Segmentation applied to a clinical dynamic PET patient study and parametric images. (1) Transverse slice 21 (a) with FCM segmentation result (b) and the selection of a tumor cluster (c). (2) Fuzzy thresholding applied to the dynamic PET image. (3) Fuzzy thresholding and (4) conventional thresholding of the Patlak parametric image. (5) Volume rendering of results in row (2).

B. Application of FCM to a Clinical 4-D Dynamic Brain PET Study

The fuzzy thresholding was applied to dynamic clinical 18F-FDG brain PET studies. The dynamic images were decay corrected to the time of injection and attenuation corrected, and then reconstructed using filtered backprojection with a Shepp and Logan filter. Segmentation of a patient study with a tumor is shown in Fig. 6(1). The FCM cluster analysis automatically separated the prominent tissue structures with a clear indication of the tumor (left center of the image). The application of fuzzy thresholding to a selected tumor structure shown in Fig. 6(2) demonstrates the addition or removal of only the voxels based on the kinetic similarity of the TTACs to the cluster centroid, allowing interactive optimization of the segmentation results and tumor volume definition. The standard deviation of the Euclidean distance measures given in (2) between the voxels' TTAC feature vectors and their thresholded cluster centroid was treated as an indicator of the homogeneity of the structure. Based on the result in Fig. 6(2), as the threshold was increased, a tighter volume around the tumor was selected and the standard deviation was lowered, which demonstrates that raising the threshold resulted in the clustering of voxels that were more homogeneous.

To demonstrate that the technique could also be applied to parametric images, parametric images were generated with voxel-by-voxel Patlak graphical analysis [33] and the measured arterial plasma concentration of the tracer. Due to the high noise in the Patlak plot images, a Gaussian filter with an FWHM of 7.5 mm was applied. The Patlak parametric images were then segmented with the FCM technique [Fig. 6(3)] as well as intensity-based thresholding [Fig. 6(4)]. For each


of the techniques, parameters were adjusted to give tumor segment volumes of 150, 200, and 250 voxels [Fig. 6(e)–(g), respectively]. As expected, for static images, such as the Patlak parametric images, FCM segmentation provides results analogous to simple intensity threshold-based segmentation. Interestingly, as indicated by the higher standard deviations compared to the FCM segmentation of the dynamic data [Fig. 6(2)], the Patlak analysis does not group the TTAC curves that were most similar as defined by the Euclidean distance, and hence results in a different definition of tumor volume compared to FCM applied directly to the dynamic data. This is likely due to the Patlak analysis reflecting different characteristics of the dynamic curve and effectively giving different weighting to different parts of the TTAC than that used for estimating the Euclidean distance between the TTACs. The most appropriate data to use (parametric images, dynamic data, or selected static images) will depend on the application and the characteristics that are required to be featured in the segmented volume. The volume rendering of the segmented cluster and the thresholded results are presented in Fig. 6(5). The PET image and the segmentation results were individually volume rendered and then fused together with equal fusion ratios.

In the FCM segmentation of dynamic PET and Patlak images, the background threshold was set at 15% of the maximum counts in each slice of the summed temporal frames. This threshold value was found not to be critical, and moderate changes (±5%) had little effect on the results. As in the simulations, the FCM cluster analysis parameters P and ε were set to 2.0 and 0.1, respectively. The number of clusters C was determined from the fuzzy validation measure S given in (6). For the dynamic PET, as C was increased from 5 to 12, the value of S increased gradually up to C = 8 (maximum increase of 28% from the previous S), followed by a rapid increase at C = 9 (increase of 113%) with gradual increases in S thereafter (maximum increase of 26%). Although a small value of S indicates a cluster scheme with well-defined clusters, to maximize the partition of the individual tissue types, the optimal number of clusters was taken to be the value of C corresponding to the value of S prior to the rapid increase. Similarly, C = 7 was found optimal for the Patlak image.

Fig. 7. (a) Volume rendering of PET/CT image. (b) Automated FCM segmentation result with segmented tumor structures fused with PET. The fuzzy threshold of the selected cluster was at 56%. (c) Reduced segmentation of the tumor resulting from an increase in the fuzzy threshold to 91%. (d) Segmentation result of (b) fused with CT. All volumes have been fused with equal fusion ratios.

C. Clinical Whole-Body PET/CT Study

Fig. 7 demonstrates a potential application of IMV2 in interactively visualizing and segmenting out tumor volumes from a whole-body PET/CT image. The reconstructed PET/CT images were 128 × 128 with voxel dimensions of 5.148 × 5.148 × 3.375 mm for PET, and 512 × 512 with voxel dimensions of 0.977 × 0.977 × 3.4 mm for CT. PET and CT images were cropped and rescaled to 256 × 256 with voxel dimensions of 1.953 × 1.953 × 3.4 mm. The automated segmentation of the tumors fused with the PET volume is shown in Fig. 7(b). The FCM cluster analysis has identified the two regions with tumors when compared to the PET/CT counterpart in Fig. 7(a). The segmentation of the tumors can be interactively optimized by increasing the fuzzy threshold as shown in Fig. 7(c), which resulted in the reduction in the size of the segmented tumors. The reduced size was attributed to only selecting voxels that have the highest membership to the cluster containing the tumor. This ability provides the physician with control over the definition of the viable tumor volume, for example, for radiotherapy treatment planning, while avoiding the tedium and time associated with manually defining a 3-D VOI. The real-time rendering of the segmentation with either PET or CT image data provides quick and effective feedback on the accuracy of the segmented tumor volume as the fuzzy threshold is adjusted. The fusion of the segmented tumors on the CT is shown in Fig. 7(d). This permits improved visualization of the anatomical frame of reference and localization of the segmented tumors when compared to the fusion of PET/CT in Fig. 7(a). The transparency level and the LUT of the segmented volumes can be adjusted to reduce obscuration of underlying structures relevant for the interpretation of the images and segmentation result. Other segmented clusters representing different functional structures, such as the lung, can be interactively selected and thresholded. For this example, FCM cluster analysis was applied to a subsection of the whole-body PET images (30 slices), the background threshold was set at 10%, and the number of clusters C = 4 was determined to be optimal.

Fig. 8 illustrates some of the tools available in IMV2. In Fig. 8(a), an example of intensity-based thresholding of CT structures is illustrated. The thresholded CT results were volume rendered together with PET, which revealed the boundary of the anatomical structure fused with PET. Fig. 8(b) illustrates an example of window-level-adjusted PET (brighter) fused with the CT, which has been intensity thresholded as in Fig. 8(a). In Fig. 8(c), window-level-adjusted CT (showing lung vasculature) fused with PET is shown. Finally, Fig. 8(d) shows the clipped PET/CT image using the clipping box, revealing internal structures of the fused PET/CT image.

A major goal of the IMV2 is to provide real-time volume rendering to allow real-time manipulations. Table I shows measured times for various manipulations in IMV2, running on ATI


A major goal of the IMV2 is to provide real-time volume rendering that supports real-time manipulation. Table I shows measured frame rates for various manipulations in IMV2, running on ATI Radeon 9600 graphics hardware with 64 MB of memory, applied to subsections of the whole-body PET/CT images with dimensions scaled to 256 × 256 × 40 in 16 bit, with the corresponding segmentation volume in 8 bit. A sampling rate of 1.8 × 1.8 × 3.2 and a window size of 500 × 500 were applied. The rendered volumes can be animated with good response times of 4–15 frames per second (FPS). The times taken to change the displayed cluster in the segmentation results and to interchange volumes were both measured at less than 0.5 s. With adaptive sampling, where the sampling rate was lowered to 0.6 × 0.6 × 0.6, the FPS for all of the manipulations improved by an average factor of 5.1. Doubling the window size to 1000 × 1000 pixels decreased the FPS by a factor of 1.8. Manipulations applied to a whole-body PET/CT ran at 1–3 FPS; nonetheless, with adaptive sampling rates, the whole-body PET/CT could still be visualized interactively.

Fig. 8. Various manipulations applied to PET/CT volume renditions. (a) Intensity thresholding of CT to show the lung boundary, fused with PET. (b) Window level applied to PET and intensity thresholding applied to CT images. (c) Window-level-adjusted CT (showing lung vasculature) fused with PET. (d) Clipped PET/CT (clipping box) with window-level adjustments.

TABLE I
AVERAGE FRAME RATES MEASURED IN FRAMES PER SECOND (FPS) FOR VARIOUS PET/CT MANIPULATIONS IN IMV2

IV. DISCUSSION

This paper described a new visualization method for multi-volume images with the integration of interactive fuzzy thresholding segmentation. Through manual intervention in 3-D visualization, segmentation parameters could be optimized to emphasize VOIs and to adjust for inter-patient differences. In the segmentation of functional images, partial volume effects (PVEs) caused by limited spatial resolution, together with low counting statistics, strongly influence segmentation errors. Our fuzzy thresholding can potentially overcome some of these limitations: voxels that are incorrectly segmented often exhibit low fuzzy membership in all clusters, and are therefore the most likely to be detected and corrected by a change to the fuzzy threshold. Furthermore, segmentation errors arising from a suboptimal choice of the number of clusters may also be corrected with fuzzy thresholding; for example, a structure that is split into two or more cluster groups by an excessive number of clusters can be manually recombined into a single cluster by decreasing the fuzzy threshold of the cluster most representative of the structure.

Determining the optimum number of clusters for FCM cluster analysis of whole-body PET images was difficult due to the low spatial resolution and the high variation of tracer uptake inside the organs. The cluster validity measure in (6) was found to yield an excessive number of clusters, splitting a particular structure into two or more cluster groups rather than representing it with a single cluster. However, using subsections of the whole-body images (<40 cross-sectional slices) dramatically improved the determination of the optimum number of clusters without splitting structures. Thus, segmentation of VOIs in IMV2 can be applied to subsections of the whole-body image to improve segmentation performance.

FCM segmentation with thresholding was selected as the segmentation method of choice for the following reasons. Once cluster membership probabilities have been assigned to each voxel, FCM thresholding shares the computational efficiency of simple intensity-based thresholding, which is essential for real-time interactive segmentation. Unlike simple intensity-based thresholding, however, it can be applied directly to dynamic data. For static data, it provides segmentation analogous to threshold-based segmentation for regions with the highest uptake, such as the tumor examples shown, yet it is potentially better suited to segmenting structures that do not have the highest activity uptake, which are more challenging for intensity threshold-based methods. Moreover, the proposed multi-volume visualization method is not limited to FCM segmentation: it can take as input any segmentation map containing labeled voxels that correspond to the segmented VOIs and use it to construct the segmentation volume for visualization. As updates to the segmentation parameter usually involve only the addition and deletion of voxels in the segmentation volume, interactive segmentation can easily be accommodated in the IMV2.
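As a sketch of that last point, a move of the fuzzy threshold can be applied to the 8-bit segmentation volume as a mask difference, touching only the voxels that enter or leave the VOI. The function name and label convention below are illustrative assumptions, modeled here with NumPy rather than the paper's renderer code.

```python
import numpy as np

def update_segmentation_volume(seg_volume, old_mask, new_mask, label):
    """Apply only the difference between two VOI masks.

    seg_volume: 8-bit label volume fused with the PET/CT rendition.
    Voxels entering the VOI are written with `label`; voxels leaving
    it are cleared. Returns the number of voxels actually touched,
    i.e., the work done for one interactive threshold change.
    """
    added = new_mask & ~old_mask
    removed = old_mask & ~new_mask
    seg_volume[added] = label
    seg_volume[removed] = 0
    return int(added.sum() + removed.sum())

# Raising the fuzzy threshold removes one voxel from a 2-voxel VOI.
seg = np.zeros(4, dtype=np.uint8)
old = np.array([False, False, False, False])
new = np.array([True, True, False, False])
update_segmentation_volume(seg, old, new, label=1)               # 2 voxels added
shrunk = np.array([True, False, False, False])
touched = update_segmentation_volume(seg, new, shrunk, label=1)  # 1 voxel cleared
```

Since the update cost scales with the number of changed voxels rather than the volume size, this is what keeps threshold dragging responsive during rendering.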
V. CONCLUSION
We have demonstrated a multi-volume visualization of dual-modality PET/CT images with integrated fuzzy thresholding segmentation. Our method has the advantage that


the segmentation volume can be fused with PET or CT, as well as with the fused PET/CT, in real-time volume rendering. The ability to visualize and interactively optimize segmentation of PET images overlaid on CT images in real-time volume rendering can potentially facilitate VOI generation for applications such as radiotherapy or image-guided surgery. Unlike fully automated techniques, the interactive fuzzy thresholding technique allows the physician to control segmentation while navigating through the rendered PET/CT volumes, without incurring the time penalty associated with manual VOI definition. Overall, the IMV2 performed well on low-cost graphics hardware prior to software optimization, and we intend to further refine the technique and its potential clinical applications in future work.

REFERENCES

[1] M. N. Wernick and J. N. Aarsvold, Emission Tomography—Fundamentals of PET and SPECT. London, U.K.: Elsevier, 2004.
[2] R. A. Robb, “Visualization in biomedical computing,” Parallel Comput., vol. 25, pp. 2067–2110, 1999.
[3] O. Ratib, “PET/CT image navigation and communication,” J. Nucl. Med., vol. 45, pp. 46S–55S, 2004.
[4] A. Rosset, L. Spadola, and O. Ratib, “OsiriX: An open-source software for navigating in multidimensional DICOM images,” J. Digital Imag., vol. 17, pp. 205–216, 2004.
[5] P. Bhaniramka and Y. Demange, “OpenGL Volumizer: A toolkit for high quality volume rendering of large data sets,” in Proc. Symp. IEEE/ACM SIGGRAPH Volume Vis. Graph., Oct. 28–29, 2002, pp. 45–53.
[6] M. Hadwiger, C. Berger, and H. Hauser, “High-quality two-level volume rendering of segmented data sets on consumer graphics hardware,” in Proc. IEEE Vis., Oct. 19–24, 2003, pp. 301–308.
[7] R. Shahidi, R. Tombropoulos, and R. P. Grzeszczuk, “Clinical applications of three-dimensional rendering of medical data sets,” Proc. IEEE, vol. 86, no. 3, pp. 555–568, Mar. 1998.
[8] F. Beltrame, G. De Leo, M. Fato, F. Masulli, and A. Schenone, “A three-dimensional visualization and navigation tool for diagnostic and surgical planning applications,” in Proc. SPIE Vis., Display Image-Guided Procedures, vol. 4319, 2001, pp. 507–514.
[9] I. F. Ciernik, E. Dizendorf, B. G. Baumert, B. Reiner, C. Burger, J. B. Davis, U. M. Lütolf, H. C. Steinert, and G. K. Von Schulthess, “Radiation treatment planning with an integrated positron emission and computer tomography (PET/CT): A feasibility study,” Int. J. Radiat. Oncol. Biol. Phys., vol. 57, pp. 853–863, 2003.
[10] A. B. Jani, J.-S. Irick, and C. Pelizzari, “Opacity transfer function optimization for volume-rendered computed tomography images of the prostate,” Acad. Radiol., vol. 12, pp. 761–770, 2005.
[11] Y. C. Loh, M. Y. Teo, W. S. Ng, C. Sim, Q. S. Zou, T. T. Yeo, and Y. Y. Sitoh, “Surgical planning system with real-time volume rendering,” in Proc. IEEE Int. Workshop Med. Imag. Augmented Reality, Jun. 10–12, 2001, pp. 259–261.
[12] D. T. Gering, A. Nabavi, R. Kikinis, W. E. L. Grimson, N. Hata, P. Everett, F. Jolesz, and W. M. Wells, “An integrated visualization system for surgical planning and guidance using image fusion and interventional imaging,” in Proc. Med. Image Comput. Comput. Assisted Intervention, 1999, pp. 809–819.
[13] H. Hauser, L. Mroz, G. I. Bischi, and M. E. Gröller, “Two-level volume rendering,” IEEE Trans. Vis. Comput. Graph., vol. 7, no. 3, pp. 242–252, Jul.–Sep. 2001.
[14] A. Wenger, D. F. Keefe, S. Zhang, and D. H. Laidlaw, “Interactive volume rendering of thin thread structures within multivalued scientific data sets,” IEEE Trans. Vis. Comput. Graph., vol. 10, no. 6, pp. 664–672, Nov.–Dec. 2004.
[15] M. Harders and G. Székely, “Enhancing human-computer interaction in medical segmentation,” Proc. IEEE, vol. 91, no. 9, pp. 1430–1442, Sep. 2003.
[16] L. Vosilla, G. De Leo, M. Fato, A. Schenone, and F. Beltrame, “An interactive tool for the segmentation of multimodal medical images,” in Proc. IEEE Inf. Technol. Appl. Biomed., Nov. 9–10, 2000, pp. 203–209.
[17] S.-C. Yoo, C.-U. Lee, B. G. Choi, and P. Saiviroonporn, “Interactive 3-dimensional segmentation of MRI data in personal computer environment,” J. Neurosci. Methods, vol. 112, pp. 75–82, 2001.
[18] E. Bullitt and S. R. Aylward, “Volume rendering of segmented image objects,” IEEE Trans. Med. Imag., vol. 21, no. 8, pp. 998–1002, Aug. 2002.
[19] K.-P. Wong, D. Feng, S. R. Meikle, and M. J. Fulham, “Segmentation of dynamic PET images using cluster analysis,” IEEE Trans. Nucl. Sci., vol. 49, no. 1, pt. 1, pp. 200–207, Feb. 2002.
[20] H. Guo, R. Renaut, K. Chen, and E. Rieman, “Clustering huge data sets for parametric PET imaging,” Biosystems, vol. 71, pp. 81–92, 2003.
[21] J. G. Brankov, N. P. Galatsanos, Y. Yang, and M. N. Wernick, “Segmentation of dynamic PET or fMRI images based on a similarity metric,” IEEE Trans. Nucl. Sci., vol. 50, no. 5, pt. 2, pp. 1410–1414, Oct. 2003.
[22] J. Bezdek, Pattern Recognition With Fuzzy Objective Function Algorithms. Norwell, MA: Kluwer, 1981.
[23] X. L. Xie and G. Beni, “A validity measure for fuzzy clustering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 8, pp. 841–847, Aug. 1991.
[24] S. Hu, E. A. Hoffman, and J. M. Reinhardt, “Automatic lung segmentation for accurate quantification of volumetric X-ray CT images,” IEEE Trans. Med. Imag., vol. 20, no. 6, pp. 490–498, Jun. 2001.
[25] D. Shreiner, M. Woo, J. Neider, and T. Davis, OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 1.4, 4th ed. Boston, MA: Addison-Wesley, 2003.
[26] K. Jones and J. McGee, SGI OpenGL Volumizer 2 Programmer’s Guide. Mountain View, CA: Silicon Graphics, Inc., 2004.
[27] J. Kniss, P. McCormick, A. McPherson, J. Ahrens, J. Painter, A. Keahey, and C. Hansen, “Interactive texture-based volume rendering for large data sets,” IEEE Comput. Graph. Appl., vol. 21, no. 4, pp. 52–61, Jul.–Aug. 2001.
[28] R. Westermann and B. Sevenich, “Accelerated volume ray-casting using texture mapping,” in Proc. IEEE Vis., Oct. 21–26, 2001, pp. 271–278.
[29] “ARB fragment program specification,” ATI Research, Marlborough, MA, 2002.
[30] I. G. Zubal, C. R. Harrell, E. O. Smith, Z. Rattner, G. Gindi, and P. B. Hoffer, “Computerized three-dimensional segmented human anatomy,” Med. Phys., vol. 21, pp. 299–302, 1994.
[31] R. A. Hawkins, M. E. Phelps, and S. C. Huang, “Effects of temporal sampling, glucose metabolic rates, and disruptions of the blood-brain barrier on the FDG model with and without a vascular compartment: Studies in human brain tumors with PET,” J. Cereb. Blood Flow Metab., vol. 6, pp. 170–183, 1986.
[32] A. P. Zijdenbos, B. M. Dawant, R. A. Margolin, and A. C. Palmer, “Morphometric analysis of white matter lesions in MR images: Methods and validation,” IEEE Trans. Med. Imag., vol. 13, no. 4, pp. 716–724, Dec. 1994.
[33] C. S. Patlak and R. G. Blasberg, “Graphical evaluation of blood-to-brain transfer constants from multiple-time uptake data. Generalizations,” J. Cereb. Blood Flow Metab., vol. 5, pp. 584–590, 1985.

Jinman Kim (S’01–M’06) received the B.S. (honors) degree in computer science and technology in 2001 from the University of Sydney, Sydney, Australia, where he is currently working toward the Ph.D. degree in information technologies.
His research interests include the development of multidimensional image segmentation, image enhancement, information visualization, content-based image retrieval, and computer-aided diagnosis.

Weidong Cai (S’99–M’01) received the B.S. degree in computer science from HuaQiao University, Quanzhou, China, in 1989, and the Ph.D. degree from the University of Sydney, Sydney, Australia, in 2001, both in computer science.
Prior to his doctoral study, he worked in industry for five years. After graduation, he was a Postdoctoral Research Associate at the Centre for Multimedia Signal Processing (CMSP), Hong Kong Polytechnic University. In 2001, he became a Lecturer and is currently a Senior Lecturer in the School of Information Technologies, University of Sydney. His research interests include computer graphics, image processing and analysis, data compression and retrieval, and multimedia databases and computer modelling with biomedical applications.


Stefan Eberl (M’97) received the B.E. (honors) degree in electrical engineering from the New South Wales Institute of Technology, Sydney, Australia, in 1982, and the M.Sc. degree in physics and the Ph.D. degree in biomedical engineering from the University of New South Wales, Sydney, in 1997 and 2001, respectively.
He is currently a Principal Hospital Scientist in the Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney, and an Adjunct Associate Professor in the School of Information Technologies, University of Sydney, Sydney. His research interests include physiological parameter estimation from functional imaging, image registration, and optimizing the combined use of functional/anatomic data.

Dagan Feng (S’88–M’88–SM’94–F’03) received the M.E. degree in electrical engineering and computing science from Shanghai JiaoTong University, Shanghai, China, in 1982, and the M.Sc. degree in biocybernetics and the Ph.D. degree in computer science from the University of California, Los Angeles, in 1985 and 1988, respectively.
After briefly working as an Assistant Professor at the University of California, Riverside, he joined the University of Sydney, Sydney, Australia, at the end of 1988, where he has been a Lecturer, Senior Lecturer, Reader, Professor, Head of the Department of Computer Science, and Head of the School of Information Technologies, and is currently Associate Dean of the Faculty of Science. He is also an Honorary Research Consultant, Royal Prince Alfred Hospital, Sydney; Chair-Professor of Information Technology, Hong Kong Polytechnic University, Hong Kong; Advisory Professor, Shanghai JiaoTong University; and a Guest Professor with Northwestern Polytechnic University, Xian, China, Northeastern University, Shenyang, China, and Tsinghua University, Beijing, China. He is the Founder and Director of the Biomedical and Multimedia Information Technology Research Group. He has published over 400 scholarly research papers, pioneered several new research directions, and made a number of landmark contributions in his field with significant scientific impact and social benefit. His research area is biomedical and multimedia information technology.
Dr. Feng is a Fellow of the Australian Computer Society, the Australian Academy of Technological Sciences and Engineering, the Hong Kong Institution of Engineers, and the Institution of Electrical Engineers, U.K. He is also the Special Area Editor of the IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE and the current Chairman of IFAC-TC-BIOMED. He is a recipient of the Crump Prize for Excellence in Medical Engineering (USA).
