VISUALIZATION AND REGISTRATION OF MULTI-MODAL MEDICAL DATA-SETS

Matthias Wloka 1, John Coleman, Ph.D. 1, Silke Kuball 1, Hari Krishnan, Ph.D. 1, Lynne Johnson, M.D. 2, Peter Bono, Ph.D. 1

1) Fraunhofer Center for Research in Computer Graphics, Providence, RI
2) Division of Cardiology, Rhode Island Hospital, Providence, RI

Contact: Matthias M. Wloka
321 South Main St., Suite 2
Providence, RI 02903
Phone: (401) 453 6363 x102
Fax: (401) 453 0444
E-mail: mwloka@crcg.edu

INTRODUCTION/BACKGROUND

Physicians today have at their disposal many different imaging modalities, including X-ray images, Doppler ultrasound, CAT scans, and nuclear scans, with output ranging from static to dynamic and from 1D to 3D data sets. While single-modality views sometimes suffice to establish a diagnosis, many conditions benefit from multiple modalities. For example, diagnosing the extent and functional significance of a coronary artery lesion requires both coronary angiographic and nuclear imaging modalities.

The burden of synthesizing these different modalities (i.e., performing the data fusion) lies with the physician. The physician must therefore possess advanced mental visualization capabilities to correctly diagnose conditions and plan treatments. Mental three-dimensional visualization, data fusion, and data registration, however, are extremely difficult, time-consuming, and error-prone tasks.

PURPOSE OF WORK

To ease the physician's burden of mental visualization, we provide a tool for simultaneous interactive visualization and manipulation of multi-modal volume data sets. Our tool allows physicians to simultaneously display multiple volume data sets in 3D, to manually register data sets to one another as well as to atlases, and to interactively slice through these combined data sets. While this paper concentrates on the applicability to heart disease, i.e., the integration of angiogram and perfusion SPECT data, other modalities are just as easily integrated and visualized.

Because the functionality described below (see Section METHODS) has proven useful in visualizing and understanding single-modality data, we feel confident that applying the same advanced visualization techniques to multi-modal data sets provides equal benefits. In particular, combining multiple data sets simplifies correlative imaging to a one-step process for the interpreting physician.

METHODS

Our software constitutes the visualization component of a larger project: enabling and maintaining large multimedia medical databases. This larger project gives users easy access, through a familiar web-browser interface, to an object-oriented database (Versant) that integrates PACS and medical data in standard formats (e.g., HL-7 and DICOM 3.0). The visualization component is launched when the user queries and selects a particular three-dimensional data set.

The visualization software presents the user with a three-dimensional view of the data. The user may then select other, related data to display in the same view. Related data are different modalities of the same patient, atlases of the same anatomic region, or same-modality, same-anatomy data of other patients. Automatic behaviors within the multimedia database select these related data; other such behaviors perform the 2D-to-3D reconstruction of a series of angiogram images into an angiogram volume and (potentially) the automatic registration of multiple volumes to one another.

The interface to the 3D view allows arbitrary translation, rotation, and zoom of the view using direct manipulation techniques such as virtual sphere rotation and zoom-boxes (i.e., no additional interface components such as virtual buttons or menus are required). The same interface applies to a selected volume or group of volumes (a user selects one or more volumes by simply clicking on the displayed volume data), thus providing arbitrary translation, rotation, and scaling of individual volume data sets.
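
The mathematics of the virtual-sphere interaction is not spelled out above; the following Python sketch (the function names and the exact screen-to-sphere mapping are our own assumptions, not the implemented code) illustrates the usual approach of mapping 2D mouse positions onto a sphere and rotating by the arc between successive hits:

    import numpy as np

    def screen_to_sphere(x, y, radius=1.0):
        """Map a 2D point in [-1, 1] window coordinates onto a virtual sphere.

        Points outside the sphere's footprint land on a hyperbolic sheet near
        the rim, which keeps the rotation well defined for any cursor position.
        """
        d2 = x * x + y * y
        r2 = radius * radius
        if d2 <= r2 / 2.0:
            z = np.sqrt(r2 - d2)          # hit the sphere itself
        else:
            z = r2 / (2.0 * np.sqrt(d2))  # hit the hyperbolic sheet
        v = np.array([x, y, z])
        return v / np.linalg.norm(v)

    def virtual_sphere_rotation(p_prev, p_curr):
        """Axis and angle of the rotation carrying the previous hit point to the current one."""
        a = screen_to_sphere(*p_prev)
        b = screen_to_sphere(*p_curr)
        axis = np.cross(a, b)
        angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
        return axis, angle

Each mouse drag then yields an incremental axis/angle rotation that is applied either to the view or to the currently selected volume; a dragged zoom-box is mapped to a scale factor in the same spirit.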

A user may augment this 3D view of the data with an arbitrary number of cutting-plane visualizations. Each cutting plane is shown in a separate window, and each cutting-plane window uses the same interface as the 3D view to manipulate the position, orientation, and scale of the cutting plane. While these techniques work satisfactorily with a two-button mouse, for ease of use we also provide the ability to connect the 3D view, the object selection, or any of the oblique cutting planes to a 6-DOF input device to control position and orientation simultaneously.
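
How a cutting-plane window is filled is not detailed above; as a minimal sketch (the nearest-neighbour sampling and all names are assumptions for illustration, not the implemented method), an oblique slice can be extracted by sampling the volume along the plane's two in-plane axes:

    import numpy as np

    def extract_oblique_slice(volume, origin, u_axis, v_axis, size=(256, 256), spacing=1.0):
        """Sample an oblique cutting plane from a 3D volume (nearest-neighbour lookup)."""
        origin = np.asarray(origin, dtype=float)
        u_axis = np.asarray(u_axis, dtype=float)   # orthonormal in-plane directions
        v_axis = np.asarray(v_axis, dtype=float)
        h, w = size

        # One sample position per output pixel, laid out on the plane
        # (here the plane is specified directly in the volume's index space).
        us = (np.arange(w) - w / 2.0) * spacing
        vs = (np.arange(h) - h / 2.0) * spacing
        uu, vv = np.meshgrid(us, vs)
        pts = origin + uu[..., None] * u_axis + vv[..., None] * v_axis

        idx = np.rint(pts).astype(int)
        inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=-1)

        slice_img = np.zeros((h, w), dtype=volume.dtype)
        ix, iy, iz = idx[inside].T
        slice_img[inside] = volume[ix, iy, iz]
        return slice_img

Manipulating a cutting-plane window then amounts to changing origin, u_axis, and v_axis and re-sampling the displayed slice.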

The ability to spatially manipulate both the various views and the individual volume data sets enables the use of this visualization software as a manual registration tool. Physicians may visualize and manipulate multiple volumes, adjusting the position, orientation, and scale of the various data sets until they register.
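
The form of the registration transform is not stated above; assuming standard 4x4 homogeneous matrices, each interactive adjustment can be accumulated into a single per-volume affine transform (the helper below is purely illustrative):

    import numpy as np

    def affine(translation=(0.0, 0.0, 0.0), rotation=np.eye(3), scale=(1.0, 1.0, 1.0)):
        """4x4 homogeneous matrix built from a translation, a 3x3 rotation, and per-axis scale."""
        m = np.eye(4)
        m[:3, :3] = np.asarray(rotation) @ np.diag(scale)
        m[:3, 3] = translation
        return m

    # Each drag, rotate, or scale gesture is folded into one transform per volume;
    # the voxel data itself is never rewritten.
    volume_to_world = np.eye(4)
    adjustment = affine(translation=(2.0, -1.5, 0.0), scale=(1.05, 1.05, 1.05))
    volume_to_world = adjustment @ volume_to_world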

We implement the integrated display of multiple volumes via the concept of mapping volumes. To avoid re-sampling the volume data sets and the associated loss of accuracy, no volume is rendered directly. Instead, every volume is associated with at least one mapping volume that indicates where in space a particular voxel is located. Thus, whenever users manipulate volumes, only the associated mapping volumes change. While the current software only implements affine transformations, mapping volumes generalize to non-affine transformations. Mapping volumes add little overhead, thus preserving interactive manipulation of the data.
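
The mapping-volume mechanism is described here only conceptually; the sketch below of the affine special case (the class and method names are our own) keeps the original samples untouched and resolves spatial queries through the stored transform, so that manipulating a volume rewrites only its matrix:

    import numpy as np

    class MappingVolume:
        """Associates a voxel grid with a voxel-to-world transform instead of resampling it."""

        def __init__(self, voxels, voxel_to_world=np.eye(4)):
            self.voxels = voxels                          # original samples, never rewritten
            self.voxel_to_world = np.array(voxel_to_world, dtype=float)

        def manipulate(self, transform):
            """Translate/rotate/scale the volume by updating only the mapping."""
            self.voxel_to_world = transform @ self.voxel_to_world

        def sample_world(self, point):
            """Value at a world-space point, found through the inverse mapping (nearest neighbour)."""
            p = np.append(point, 1.0)
            i, j, k = np.rint(np.linalg.solve(self.voxel_to_world, p)[:3]).astype(int)
            if all(0 <= n < s for n, s in zip((i, j, k), self.voxels.shape)):
                return self.voxels[i, j, k]
            return 0  # outside the volume

In the general case a mapping volume indicates a location for every voxel; the single matrix above is the affine special case of that field, which is why the mechanism generalizes to non-affine transformations.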

RESULTS

We provide physicians with a tool for interactive visualization and registration of multi-modal data sets. In particular, our software is used to view myocardial perfusion SPECT and coronary angiographic volume data (the angiographic volume data is a 2D-to-3D reconstruction of a series of 2D angiograms). Seeing a combined view of these two modalities should enable a better understanding of myocardial perfusion and coronary artery stenoses. It should help convey the extent of coronary heart disease, its exact location, and its accessibility to interventional procedures. Accordingly, earlier and better diagnoses should become possible. In addition, it should allow better decisions regarding treatment choices and treatment planning.

CONCLUSIONS/DISCUSSION

We implemented a prototype of the described tool. The next step is to let physicians use and critique the work so that we can further improve the user interface and add useful features. For example, automating the registration task seems highly desirable to reduce unnecessary tedium. To ensure correctness, any such feature would at first be only semi-automatic.