Real-Time Volume Rendering Visualization of Dual-Modality PET/CT Images
Abstract—Three-dimensional (3-D) visualization has become an essential part of imaging applications, including image-guided surgery, radiotherapy planning, and computer-aided diagnosis. In the visualization of dual-modality positron emission tomography and computed tomography (PET/CT), 3-D volume rendering is often limited to rendering of a single image volume and by high computational demand. Furthermore, incorporation of segmentation in volume rendering is usually restricted to visualizing the presegmented volumes of interest. In this paper, we investigated the integration of interactive segmentation into real-time volume rendering of dual-modality PET/CT images. We present and validate a fuzzy thresholding segmentation technique based on fuzzy cluster analysis, which allows interactive and real-time optimization of the segmentation results. This technique is then incorporated into a real-time multi-volume rendering of PET/CT images. Our method allows real-time fusion and interchangeability of the segmentation volume with the PET or CT volumes, as well as the usual fusion of PET/CT volumes. Volume manipulations such as window level adjustments and lookup tables can be applied to individual volumes, which are then fused together in real time as adjustments are made. We demonstrate the benefit of our method in integrating segmentation with volume rendering in its application to PET/CT images. Responsive frame rates are achieved by utilizing a texture-based volume rendering algorithm and the rapid transfer capability of the high memory bandwidth available in low-cost graphics hardware.

Index Terms—Dual-modality positron emission tomography and computed tomography (PET/CT), fuzzy C-means cluster analysis, interactive three-dimensional (3-D) segmentation, multi-volume rendering, real-time volume rendering.

Manuscript received August 22, 2005; revised December 24, 2005. This work was supported in part by the ARC and RGC grants. J. Kim and W. Cai are with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia (e-mail: jinman@it.usyd.edu.au). S. Eberl is with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia, and also with the Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney, NSW 2050, Australia. D. Feng is with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia, and also with the Center for Multimedia Signal Processing, Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Kowloon, Hong Kong. Digital Object Identifier 10.1109/TITB.2006.875669

I. INTRODUCTION

ADVANCES in digital medical images are resulting in increased image volumes from the acquisition of four-dimensional (4-D) imaging modalities, such as dynamic positron emission tomography (PET) and dual-modality PET and computed tomography (PET/CT). These images have introduced significant challenges for efficient visualization [1]–[3]. In line with the advances in image acquisition, three-dimensional (3-D) visualization algorithms have been developed that enable real-time visualization of multidimensional volumes using low-cost hardware instead of restricting it to high-end, expensive workstations [4]–[6]. 3-D visualization has become an attractive method for imaging applications, including image-guided surgery, radiotherapy, and computer-aided diagnosis [3], [4], [7]–[12]. In these applications, segmentation is often employed, as it enables visual separation and selection of specific volumes of interest (VOIs) [6], [12]–[18]. Segmentation of the image volume can be performed manually by a physician. However, such delineation is subjective, hence may not be reproducible, and is time consuming. Fully automated methods can only be applied successfully within precisely defined bounds and cannot guarantee accurate delineation under all circumstances, thus requiring some form of operator intervention, as in interactive segmentation.

Studies involving interactive segmentation in 3-D visualization have often been limited to rendering the preprocessed segmentation results [16]–[18]. However, these methods render only the segmented VOIs, without placing them in the context of surrounding structures. In [18], a method was presented for correcting segmentation errors in volume-rendered VOIs by adjusting the radius of the viewable volume to reveal the surrounding image. Although this method gives a physician the ability to correct segmentation errors in volume rendering, it renders only the surrounding voxels within the radius of the VOIs and does not take into consideration that the surrounding voxels may have no relation to the VOI.

These interactive segmentation methods were all based on visualization of a single volume of images. In dual-modality PET/CT images, which consist of co-registered functional and anatomical image volumes, the ability to visualize the segmentation result with both image volumes can be of considerable benefit. For instance, segmentation of tumor structures from low-resolution, functional PET image data can benefit from overlaying it on the CT to provide an anatomical frame of reference and precise localization.

In this paper, we investigated and validated the incorporation of interactive segmentation into real-time 3-D visualization of PET/CT images. We present a fuzzy thresholding segmentation method for PET images in real-time volume rendering. In the segmentation of functional PET images, cluster analysis
162 IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, VOL. 11, NO. 2, MARCH 2007
II. METHOD

The IMV² consists of four major steps, as shown in the flowchart in Fig. 1: 1) segmentation of PET images using FCM cluster analysis into cluster groups based on functional similarity; 2) volume rendering of the PET, CT, and segmentation data using a texture-based rendering technique; 3) interactive fuzzy thresholding of the PET data with real-time volume rendering of dual-modality PET/CT; and 4) volume manipulation tools, such as window level adjustments and a lookup table (LUT), applied to the PET/CT volume rendering.

Fig. 1. Flowchart of the proposed interactive multi-volume visualization. After segmenting the PET image using FCM cluster analysis (step 1), the segmentation map and fuzzy logic layer are constructed. The segmentation map, PET, and CT image volumes are rendered using texture-based volume rendering (step 2). The fuzzy logic layer is then used to interactively adjust the rendered segmentation volume by fuzzy thresholding (step 3). These volumes can be fused and interchanged in real time with volume manipulation tools (step 4) included in the IMV².

A. Automated 4-D FCM Cluster Analysis of Dynamic/Static PET Images

Prior to segmentation, the image data are preprocessed as follows: low-count background areas in the PET images are removed (set to zero) by thresholding. Isolated voxels and gaps are then removed and/or filled by a 3 × 3 × 3 morphological opening filter followed by a closing filter. For dynamic PET data, tissue time activity curves (TTACs) are extracted for each nonzero voxel to form the kinetic feature vector f(t) of time interval t (t = 1, 2, ..., T), where T is the total number of time points. For static images, a single frame is acquired at t = T.

The FCM cluster analysis based on [22] is applied to assign each of the N feature vectors to one of a set number C of distinct cluster groups. For each cluster, the centroid is initialized to the feature vector of a distinct, randomly selected voxel. The value of each centroid voxel is replaced with the average of the 3 × 3 × 3 surrounding voxels to avoid false selection of a noisy outlier that may result in a cluster with a single member. FCM cluster analysis minimizes the objective function J according to

$$J = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{P} \, D^{2}\!\left(f_i(t), \bar{f}_{c_j}(t)\right) \tag{1}$$

where P (1 ≤ P ≤ ∞) is a weighting exponent on each fuzzy membership, which determines the amount of fuzziness of the resulting classification, and u_ij is the membership degree of the ith feature vector in the jth cluster. The similarity measure between the ith feature vector f_i(t) and the cluster centroid f̄_cj(t) of the jth cluster group c_j was calculated using the Euclidean distance D_ij given by

$$D\!\left(f_i, \bar{f}_{c_j}\right) = \left[\sum_{t=1}^{T} s(t) \left(f_i(t) - \bar{f}_{c_j}(t)\right)^{2}\right]^{1/2} \tag{2}$$

where s(t) is a scale factor of time point t (t = 1, 2, ..., T) equal to the duration of the tth frame divided by the total dynamic acquisition time. The scale factor s(t) gives more weight to the longer frames, which contain more reliable data. The minimization of J is achieved by iteratively updating the u_ij and the cluster centroids f̄_cj(t) with

$$u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \dfrac{D\!\left(f_i(t), \bar{f}_{c_j}(t)\right)}{D\!\left(f_i(t), \bar{f}_{c_k}(t)\right)} \right)^{2/(P-1)}} \tag{3}$$
KIM et al.: REAL-TIME VOLUME RENDERING VISUALIZATION OF DUAL-MODALITY PET/CT IMAGES 163
$$\bar{f}_{c_j}(t) = \frac{\sum_{i=1}^{N} u_{ij}^{P} \, f_i(t)}{\sum_{i=1}^{N} u_{ij}^{P}}. \tag{4}$$
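The iterative loop of (1)–(4) can be sketched in NumPy as follows. This is an illustrative implementation under our own assumptions (array shapes, a convergence test on the change in J, and the option of passing explicit initial centroids), not the authors' code; the paper additionally smooths each randomly chosen centroid voxel with its 3 × 3 × 3 neighborhood, which is omitted here.

```python
# Sketch of weighted fuzzy C-means per Eqs. (1)-(4):
# features: (N, T) kinetic feature vectors (TTACs), durations: (T,) frame durations.
import numpy as np

def fcm(features, durations, C, P=2.0, eps=0.1, max_iter=100, init=None, seed=0):
    rng = np.random.default_rng(seed)
    N, T = features.shape
    # Eq. (2): scale factor s(t) = frame duration / total acquisition time
    s = durations / durations.sum()
    if init is None:
        # centroids initialized from distinct, randomly selected voxels
        centroids = features[rng.choice(N, size=C, replace=False)].astype(float)
    else:
        centroids = np.asarray(init, dtype=float)
    J_prev = np.inf
    for _ in range(max_iter):
        # Eq. (2): duration-weighted Euclidean distance of each voxel to each centroid
        diff = features[:, None, :] - centroids[None, :, :]      # (N, C, T)
        D = np.sqrt((s * diff ** 2).sum(axis=2)) + 1e-12         # (N, C), guard /0
        # Eq. (3): membership update u_ij = 1 / sum_k (D_ij / D_ik)^(2/(P-1))
        ratio = (D[:, :, None] / D[:, None, :]) ** (2.0 / (P - 1.0))
        u = 1.0 / ratio.sum(axis=2)                              # (N, C)
        # Eq. (4): centroid update weighted by u_ij^P
        uP = u ** P
        centroids = (uP.T @ features) / uP.sum(axis=0)[:, None]
        # Eq. (1): objective J; stop when its change falls below eps
        J = (uP * D ** 2).sum()
        if abs(J_prev - J) < eps:
            break
        J_prev = J
    return u, centroids
```

Because each row of `u` is a normalized set of memberships, it sums to one; the fuzzy threshold described later operates directly on these membership values.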
of the volume that may obscure the user's view, a "volume clipping" tool can be applied by using a plane that cuts through the volume perpendicular to the viewing window. Alternatively, the user can position a "clipping box" that encapsulates the volume such that only the volume residing inside the box is visible. The ability to clip the volume, together with volume navigation, allows the user to explore the large multi-volume data by interactively selecting the viewing window, which isolates the section of interest from the whole volume. Another tool is the "adaptive sampling rate", which allows the volume to be sampled at a lower rate to increase the responsiveness to movement; this is necessary when dealing with large multidimensional volumes. When the movement is completed, the sampling is raised back to the default setting of fully sampled data. The sampling rate is the number of parallel planes used in texture-based volume rendering to render the volume (see Section II-D). Other tools include "window-level adjustment", "transfer function", "LUT", and "intensity-based thresholding". CT images, which occupy a greater dynamic range than is possible to display simultaneously, can be interactively adjusted using the window level. The transfer function can be used to control the opacity of the volume, such that particular voxels (selected by the voxel's intensity) become more prominent. In addition, intensity-based thresholding can be applied to potentially segment out tissue structures in CT [24], which are acquired in high resolution and contain well-defined separation of tissue structures. As the PET and CT volumes are rendered independently, manipulations can be applied to individual volumes, e.g., thresholding the CT and adjusting window levels of the PET data, and the resultant manipulated data sets are then fused together in real time as the adjustments are made.

D. Interactive IMV² Implementation

The IMV² has been developed using the OpenGL [25] and SGI Volumizer 2.7 application programming interfaces (APIs) [26] to render the multi-volume images using texture-based volume rendering [5], [6], [27], [28]. Texture-based volume rendering creates parallel planes through the columns of the volume data, in the principal direction most perpendicular to the user's line of sight, which are then drawn back-to-front with appropriate 3-D texture coordinates [5]. The IMV² utilizes the high-capacity memory bandwidth of low-cost graphics hardware to perform a rapid transfer of the 3-D textures from the system memory into the graphics memory. By utilizing this large bandwidth, the volume interchange method replaces an old volume in the graphics memory with a new volume. The two volumes are fused using the hardware-based per-voxel fusion method [29] of compositing the two voxels from the respective volumes to create a new voxel in real time. This method does not require any preprocessing of the volume data and thus allows real-time adjustment of the fusion ratio of PET to CT. The segmentation volume is interactively adjusted in real time by assigning transparency values to the voxels based on the fuzzy threshold. In this approach, voxels that have a fuzzy membership greater than the defined threshold are assigned a visible transparency level and all other voxels are set to fully transparent.

III. EVALUATION AND EXPERIMENTAL RESULTS

A. Validation of FCM Segmentation—Computer Simulations

Computer simulations were performed to evaluate the performance and reliability of the fuzzy thresholding segmentation. The anatomical Zubal brain phantom [30] was reduced to white matter (WM), gray matter (GM), and 20 cross-sectional slices. A five-parameter 2-[18F]fluoro-2-deoxy-D-glucose (18F-FDG) model [31] was used to simulate realistic TTAC values to construct 22 temporal frame sequences. Each slice was smoothed by applying a Gaussian filter with a full-width-half-maximum (FWHM) of 8 mm prior to forward projection. Projections were scaled to three different count levels by applying a scale factor that sets the maximum pixel count in the last frame to 100 (high noise), 500 (normal noise), and 800 (low noise) counts, where 500 was the maximum observed in comparable clinical studies. Poisson noise was then added to the scaled projection data, which were reconstructed using filtered backprojection with a Hann filter. Noise-free ground truth (GT) images were constructed by smoothing the phantom data with the same Gaussian blur as in the noisy simulations. Voxels mixed at the boundary in the smoothed images were reclassified to the tissue with the highest contribution. For boundaries separating tissue from zero-count regions such as ventricles, 40% of the tissue counts were defined.

Fig. 4. (a) Normal noise level simulation (last temporal frame). In (b)–(e), the top row is the GM and the bottom row is the WM. (b) GT. (c) Fuzzy threshold of 28% for GM and 30% for WM. (d) Automated FCM segmentation with threshold of 48% for GM and 50% for WM. (e) Fuzzy threshold of 68% for GM and 70% for WM.

The fuzzy thresholding results of GM (first row) and WM (second row) are illustrated in Fig. 4. From visual inspection, the GM segmentation result resembles the GT in Fig. 4(b) most closely with the increased threshold shown in Fig. 4(e). The reverse was evident for WM, where the fuzzy thresholding result in Fig. 4(c) is visually the most similar to its GT. This illustrates the ability of the fuzzy thresholding to interactively optimize the segmentation result for particular structures of interest. The ring artifact around the periphery of the WM segmentation is attributed to the mixed tissue contribution between the GM and the background. In the FCM cluster analysis, the background was removed by using the boundaries of the tissue structures defined in the Zubal phantom, and two clusters were applied, corresponding to the WM and GM. Empirically derived cluster analysis parameter values of P = 2.0 and ε = 0.1 gave acceptable results. The segmentation results were found to be quite
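The per-voxel manipulations described above can be sketched in NumPy. The paper performs these operations on graphics hardware via fragment programs, so the function names, the visible alpha value of 0.8, and the linear window-level mapping below are illustrative assumptions rather than the authors' API.

```python
# Sketches of the fuzzy-threshold transparency assignment, the per-voxel
# PET/CT fusion, and CT window-level adjustment (all names are ours).
import numpy as np

def threshold_alpha(membership, threshold, visible_alpha=0.8):
    """Voxels whose fuzzy membership exceeds the threshold receive a
    visible transparency level; all others become fully transparent."""
    return np.where(membership > threshold, visible_alpha, 0.0)

def fuse(pet, ct, ratio):
    """Per-voxel compositing of two co-registered volumes; `ratio` is the
    adjustable PET-to-CT fusion ratio in [0, 1]."""
    return ratio * pet + (1.0 - ratio) * ct

def window_level(volume, window, level):
    """Clamp and rescale intensities to [0, 1] around `level` with width
    `window`, e.g. to display a wide-dynamic-range CT volume."""
    lo = level - window / 2.0
    hi = level + window / 2.0
    return np.clip((volume - lo) / (hi - lo), 0.0, 1.0)
```

Because each function is a pure per-voxel mapping, the threshold, fusion ratio, and window settings can be re-applied on every frame, which is what makes the interactive, real-time adjustment practical once the same arithmetic runs per fragment on the GPU.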
the segmentation volume can be fused with PET or CT, as well as the fusion of PET/CT in real-time volume rendering. The ability to visualize and interactively optimize segmentation of PET images that is overlaid on CT images in real-time volume rendering can potentially facilitate VOI generation for applications such as radiotherapy or image-guided surgery. Unlike fully automated techniques, the interactive fuzzy thresholding technique allows the physician to control segmentation while navigating through the rendered PET/CT volumes, without incurring the time penalty associated with manual VOI definition. Overall, the IMV² performed well on low-cost graphics hardware prior to software optimization, and we intend to further refine this technique and its potential clinical applications in future work.

REFERENCES

[1] M. N. Wernick and J. N. Aarsvold, Emission Tomography—Fundamentals of PET and SPECT. London, U.K.: Elsevier, 2004.
[2] R. A. Robb, "Visualization in biomedical computing," Parallel Comput., vol. 25, pp. 2067–2110, 1999.
[3] O. Ratib, "PET/CT image navigation and communication," J. Nucl. Med., vol. 45, pp. 46S–55S, 2004.
[4] A. Rosset, L. Spadola, and O. Ratib, "OsiriX: An open-source software for navigating in multidimensional DICOM images," J. Digital Imag., vol. 17, pp. 205–216, 2004.
[5] P. Bhaniramka and Y. Demange, "OpenGL Volumizer: A toolkit for high quality volume rendering of large data sets," in Proc. Symp. IEEE/ACM SIGGRAPH Volume Vis. Graph., Oct. 28–29, 2002, pp. 45–53.
[6] M. Hadwiger, C. Berger, and H. Hauser, "High-quality two-level volume rendering of segmented data sets on consumer graphics hardware," in Proc. IEEE Vis., Oct. 19–24, 2003, pp. 301–308.
[7] R. Shahidi, R. Tombropoulos, and R. P. Grzeszczuk, "Clinical applications of three-dimensional rendering of medical data sets," Proc. IEEE, vol. 86, no. 3, pp. 555–568, Mar. 1998.
[8] F. Beltrame, G. De Leo, M. Fato, F. Masulli, and A. Schenone, "A three-dimensional visualization and navigation tool for diagnostic and surgical planning applications," in Proc. SPIE Vis., Display Image-Guided Procedures, vol. 4319, 2001, pp. 507–514.
[9] I. F. Ciernik, E. Dizendorf, B. G. Baumert, B. Reiner, C. Burger, J. B. Davis, U. M. Lütolf, H. C. Steinert, and G. K. Von Schulthess, "Radiation treatment planning with an integrated positron emission and computer tomography (PET/CT): A feasibility study," Int. J. Radiat. Oncol. Biol. Phys., vol. 57, pp. 853–863, 2003.
[10] A. B. Jani, J.-S. Irick, and C. Pelizzari, "Opacity transfer function optimization for volume-rendered computed tomography images of the prostate," Acad. Radiol., vol. 12, pp. 761–770, 2005.
[11] Y. C. Loh, M. Y. Teo, W. S. Ng, C. Sim, Q. S. Zou, T. T. Yeo, and Y. Y. Sitoh, "Surgical planning system with real-time volume rendering," in Proc. IEEE Int. Workshop Med. Imag. Augmented Reality, Jun. 10–12, 2001, pp. 259–261.
[12] D. T. Gering, A. Nabavi, R. Kikinis, W. E. L. Grimson, N. Hata, P. Everett, F. Jolesz, and W. M. Wells, "An integrated visualization system for surgical planning and guidance using image fusion and interventional imaging," in Proc. Med. Image Comput. Comput. Assisted Intervention, 1999, pp. 809–819.
[13] H. Hauser, L. Mroz, G. I. Bischi, and M. E. Gröller, "Two-level volume rendering," IEEE Trans. Vis. Comput. Graph., vol. 7, no. 3, pp. 242–252, Jul.–Sep. 2001.
[14] A. Wenger, D. F. Keefe, S. Zhang, and D. H. Laidlaw, "Interactive volume rendering of thin thread structures within multivalued scientific data sets," IEEE Trans. Vis. Comput. Graph., vol. 10, no. 6, pp. 664–672, Nov.–Dec. 2004.
[15] M. Harders and G. Székely, "Enhancing human-computer interaction in medical segmentation," Proc. IEEE, vol. 91, no. 9, pp. 1430–1442, Sep. 2003.
[16] L. Vosilla, G. De Leo, M. Fato, A. Schenone, and F. Beltrame, "An interactive tool for the segmentation of multimodal medical images," in Proc. IEEE Inf. Technol. Appl. Biomed., Nov. 9–10, 2000, pp. 203–209.
[17] S.-C. Yoo, C.-U. Lee, B. G. Choi, and P. Saiviroonporn, "Interactive 3-dimensional segmentation of MRI data in personal computer environment," J. Neurosci. Methods, vol. 112, pp. 75–82, 2001.
[18] E. Bullitt and S. R. Aylward, "Volume rendering of segmented image objects," IEEE Trans. Med. Imag., vol. 21, no. 8, pp. 998–1002, Aug. 2002.
[19] K.-P. Wong, D. Feng, S. R. Meikle, and M. J. Fulham, "Segmentation of dynamic PET images using cluster analysis," IEEE Trans. Nucl. Sci., vol. 49, no. 1, pt. 1, pp. 200–207, Feb. 2002.
[20] H. Guo, R. Renaut, K. Chen, and E. Rieman, "Clustering huge data sets for parametric PET imaging," Biosystems, vol. 71, pp. 81–92, 2003.
[21] J. G. Brankov, N. P. Galatsanos, Y. Yang, and M. N. Wernick, "Segmentation of dynamic PET or fMRI images based on a similarity metric," IEEE Trans. Nucl. Sci., vol. 50, no. 5, pt. 2, pp. 1410–1414, Oct. 2003.
[22] J. Bezdek, Pattern Recognition With Fuzzy Objective Function Algorithm. Norwell, MA: Kluwer, 1981.
[23] X. L. Xie and G. Beni, "A validity measure for fuzzy clustering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 8, pp. 841–847, Aug. 1991.
[24] S. Hu, E. A. Hoffman, and J. M. Reinhardt, "Automatic lung segmentation for accurate quantification of volumetric X-ray CT images," IEEE Trans. Med. Imag., vol. 20, no. 6, pp. 490–498, Jun. 2001.
[25] D. Shreiner, M. Woo, J. Neider, and T. Davis, OpenGL Programming Guide: The Official Guide to Learning OpenGL Version 1.4, 4th ed. Boston, MA: Addison-Wesley, 2003.
[26] K. Jones and J. McGee, SGI OpenGL Volumizer 2 Programmer's Guide. Mountain View, CA: Silicon Graphics, Inc., 2004.
[27] J. Kniss, P. McCormick, A. McPherson, J. Ahrens, J. Painter, A. Keahey, and C. Hansen, "Interactive texture-based volume rendering for large data sets," IEEE Comput. Graph. Appl., vol. 21, no. 4, pp. 52–61, Jul.–Aug. 2001.
[28] R. Westermann and B. Sevenich, "Accelerated volume ray-casting using texture mapping," in Proc. IEEE Vis., Oct. 21–26, 2001, pp. 271–278.
[29] "ARB fragment program specification," ATI Research, Marlborough, MA, 2002.
[30] I. G. Zubal, C. R. Harrell, E. O. Smith, Z. Rattner, G. Gindi, and P. B. Hoffer, "Computerized three-dimensional segmented human anatomy," Med. Phys., vol. 21, pp. 299–302, 1994.
[31] R. A. Hawkins, M. E. Phelps, and S. C. Huang, "Effects of temporal sampling, glucose metabolic rates, and disruptions of the blood-brain barrier on the FDG model with and without a vascular compartment: Studies in human brain tumors with PET," J. Cereb. Blood Flow Metab., vol. 6, pp. 170–183, 1986.
[32] A. P. Zijdenbos, B. M. Dawant, R. A. Margolin, and A. C. Palmer, "Morphometric analysis of white matter lesions in MR images: Methods and validation," IEEE Trans. Med. Imag., vol. 13, no. 4, pp. 716–724, Dec. 1994.
[33] C. S. Patlak and R. G. Blasberg, "Graphical evaluation of blood-to-brain transfer constants from multiple-time uptake data. Generalizations," J. Cereb. Blood Flow Metab., vol. 5, pp. 584–590, 1985.

Jinman Kim (S'01–M'06) received the B.S. (honors) degree in computer science and technology in 2001 from the University of Sydney, Sydney, Australia, where he is currently working toward the Ph.D. degree in information technologies. His research interests include the development of multidimensional image segmentation, image enhancement, information visualization, content-based image retrieval, and computer-aided diagnosis.

Weidong Cai (S'99–M'01) received the B.S. degree in computer science from HuaQiao University, Quanzhou, China, in 1989, and the Ph.D. degree from the University of Sydney, Sydney, Australia, in 2001, both in computer science. Prior to his doctoral study, he worked in industry for five years. After graduation, he was a Postdoctoral Research Associate at the Centre for Multimedia Signal Processing (CMSP), Hong Kong Polytechnic University. In 2001, he was a Lecturer and is currently a Senior Lecturer in the School of Information Technologies, University of Sydney. His research interests include computer graphics, image processing and analysis, data compression and retrieval, and multimedia database and computer modelling with biomedical applications.
Stefan Eberl (M'97) received the B.E. (honors) degree in electrical engineering from New South Wales Institute of Technology, Sydney, Australia, in 1982, and the M.Sc. degree in physics and the Ph.D. degree in biomedical engineering from the University of New South Wales, Sydney, in 1997 and 2001, respectively. He is currently a Principal Hospital Scientist in the Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney, and is an Adjunct Associate Professor in the School of Information Technologies, University of Sydney, Sydney. His research interests include physiological parameter estimation from functional imaging, image registration, and optimizing use of the combination of functional/anatomic data.

Polytechnic University, Hong Kong; the Advisory Professor, Shanghai JiaoTong University; and a Guest Professor with Northwestern Polytechnic University, Xian, China, with Northeastern University, Shenyang, China, and with Tsinghua University, Beijing, China. He is the Founder and Director of the Biomedical and Multimedia Information Technology Research Group. He has published over 400 scholarly research papers, pioneered several new research directions, and made a number of landmark contributions in his field with significant scientific impact and social benefit. His research area is biomedical and multimedia information technology. Dr. Feng is a Fellow of the Australia Computer Society, the Australian Academy of Technological Sciences and Engineering, Hong Kong Institution of Engineers, and the Institution of Electrical Engineers, U.K. He is also the special Area Editor of the IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE and is the current Chairman of IFAC-TC-BIOMED. He is the recipient of the Crump Prize for Excellence in Medical Engineering (USA).