Figure: Block diagram of the logical components of the system.
Today, physicians have at their disposal many different machines, each creating a view of a different modality of the patient. These modalities include x-ray images, Doppler ultrasound, CAT and PET scans, etc., and thus range from static to dynamic and from 1D to 3D datasets. While single-modality views are sometimes sufficient to diagnose a patient (e.g., using x-rays to diagnose fractures), many conditions require multi-modal views to enable a physician to confidently diagnose a condition (e.g., using angiography and SPECT to diagnose the extent, exact location, and reachability of heart disease).
To date, the burden of synthesizing all this information into a single, consistent view (i.e., data fusion) is placed on the mental visualization capabilities of the physician. Aided only by reference books and, perhaps, light boxes for viewing x-ray or MRI film series, the physician is required to make a diagnosis and plan a treatment. While it is likely that, for the foreseeable future, the physician will continue to shoulder the ultimate responsibility, it is the goal of this research to develop computer-assisted tools that ease the mental burden, increase the likelihood of a correct diagnosis, reduce the risk of a catastrophic misdiagnosis, and lead to a finely tuned, individualized, effective treatment strategy, all in a way that reduces stress, uses the physician's time more effectively, and increases communication with the patient.
CRCG proposes to lay the foundation for a next generation medical multimedia (multi-modal) database. Such a database will have flexible retrieval options and will facilitate multiple modes of interaction, browsing, and manipulation of the data.
Next generation databases will incorporate behaviors that facilitate associating, linking, and establishing relationships among the various information primitives that populate the database. Such behaviors are context sensitive and imply a kind of database for which the incorporation of area-specific knowledge is fundamental to its design. The concept of data as passive declarative facts is replaced with the concept of information objects that can be instantiated, queried, and associated in various ways. Behaviors can be quite diverse and include content-based retrieval, automatic volume registration and segmentation, and physically-based simulations.

The incorporation and representation of knowledge, in this context, implies the use of models that capture essential features of relationships. Models can be geometric, statistical, heuristic, and/or syntactical, among others. When diverse representations (i.e., models) must be logically related, the emerging structure may be conveniently referred to as an atlas. The atlas in effect provides a mapping among the diverse representations that comprise it.

Next generation medical databases will be populated with a hierarchy of logically and structurally related atlases that will provide the basic knowledge foundation on which appropriate behaviors are built. Achieving this goal requires a long-term vision, drawing from a diverse set of core competencies.
To address these challenging problems, a key issue is the development of a knowledge-based database that models and stores these multi-modal and multimedia medical datasets and their relationships. The initial focus of this effort is the design of a conceptual model and architecture for such a multimedia medical database (MMMDB) that supports the following three key components.
1. Behaviors -- spatial registration and integration of various logically related datasets, in particular of anatomical and functional data.
2. Atlases -- knowledge representation schemata that support storage of multimedia medical data, including static or time-varying images or volumes (i.e., 2D or 3D), in an integrated way so that they can be retrieved efficiently and effectively.
3. Interaction -- user-friendly retrieval and visualization of semantically enriched data in the MMMDB so as to be of optimal assistance to a physician.
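The registration behavior in component 1 can be illustrated with a minimal landmark-based sketch: given corresponding fiducial points identified in two modalities, a least-squares rigid transform aligning them follows in closed form. This is a hypothetical 2D example for illustration (the function names and the assumption of pre-matched landmarks are ours), not the system's actual registration algorithm:

```python
import math

def rigid_register_2d(src, dst):
    """Estimate the rotation angle and translation that best map the
    src points onto the dst points in the least-squares sense.
    Assumes src[i] corresponds to dst[i] (pre-matched landmarks)."""
    n = len(src)
    # Centroids of both point sets.
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centered point sets.
    sxx = sxy = syx = syy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x -= csx; y -= csy; u -= cdx; v -= cdy
        sxx += x * u; sxy += x * v; syx += y * u; syy += y * v
    # Optimal rotation angle in closed form (2D case of the Kabsch solution).
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that maps the rotated src centroid onto the dst centroid.
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

def apply_transform(theta, t, p):
    """Apply the estimated rigid transform to a single point p."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

In practice, multi-modal registration must also handle unmatched features, intensity-based similarity measures, and 3D (often non-rigid) deformations; the closed-form landmark case above is only the simplest instance.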
This effort draws on the following core competencies:
1. Object-Oriented Data Modeling and Database Design. To represent and store the medical data and the semantics for facilitating data retrieval and visualization.
2. Medical Visualization and Human-Computer Interaction. Coherent display and integration of radiological and other biometric data along with natural modes of interaction.
3. Knowledge Acquisition, Representation, and Data Mining. Data structures that convey associations among logically related components must be designed, along with methods for accessing and modifying these associations. Data mining is the process of uncovering trends and relationships in heterogeneous data.
4. Information Refinement. This is the process whereby low-order information primitives are transformed into high-order information symbols that are easily comprehended by human agents or their representatives.
5. Pattern Recognition and Data Fusion. Whether statistical, syntactical, or neural in nature, pattern recognition investigates algorithms for recognizing and labeling parts and is an important part of the information refinement process. Data fusion investigates various methodologies whereby heterogeneous data can be combined to produce a unified description or interpretation.
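To make the fusion step in item 5 concrete, the sketch below combines two scalar images that have already been registered into a common coordinate frame, using per-pixel weighted averaging. The function name and the simple averaging scheme are illustrative assumptions, not the methodology proposed here:

```python
def fuse(a, b, alpha=0.5):
    """Per-pixel weighted average of two registered, same-size images,
    stored as nested lists of intensities in [0, 1].  alpha weights the
    first image (e.g., anatomical) against the second (e.g., functional)."""
    return [[alpha * pa + (1.0 - alpha) * pb
             for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]
```

Richer fusion strategies keep the modalities distinguishable, for example by mapping each modality to a separate color channel, as in the surface renderings shown below (MRA in gray, CT in red and green).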
This work is supported in part by a grant from NASA's Commercial Space Center for Medical Informatics and Technology Applications (CSC/MITA) at Yale.
Visualization and Registration of Multi-Modal Medical Data-Sets
Matthias Wloka,1 John Coleman, Ph.D.,1 Silke Kuball,1 Hari Krishnan, Ph.D.,1 Lynne Johnson, M.D.,2 Peter Bono, Ph.D.1
Abstract of talk presented at MMVR6, January 1998, San Diego, CA.
Screen Shots.
1. Maximum Intensity Projection of an MRI Scan and CT Scan
2. Surface Rendering of an MRA Scan (gray) and a CT Scan (red & green)
3. Surface Rendering of an MRA Scan and CT Scan
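The maximum intensity projection (MIP) in the first screen shot projects a volume onto an image by keeping, for each projection ray, only the brightest voxel encountered along that ray. A minimal sketch, assuming an axis-aligned projection through a volume stored as nested lists indexed volume[z][y][x] (real renderers cast rays at arbitrary orientations):

```python
def mip(volume):
    """Maximum intensity projection of a volume (a list of 2D slices,
    indexed volume[z][y][x]) along the depth (z) axis."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(width)]
            for y in range(height)]
```

MIP is well suited to angiographic data such as the MRA scans shown here, since contrast-filled vessels are typically the brightest structures along each ray.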