Document Type
Conference Proceeding
Publication Title
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Abstract
Automatic radiology reporting has great clinical potential to relieve radiologists of heavy workloads and improve diagnostic interpretation. Recently, researchers have enhanced data-driven neural networks with medical knowledge graphs to eliminate the severe visual and textual bias in this task. The structures of such graphs are built from the clinical dependencies among disease topic tags drawn from general knowledge, and they are usually not updated during training. Consequently, such fixed graphs cannot guarantee the most appropriate scope of knowledge, which limits their effectiveness. To address this limitation, we propose a knowledge graph with Dynamic structure and nodes to facilitate chest X-ray report generation with Contrastive Learning, named DCL. In detail, the fundamental structure of our graph is pre-constructed from general knowledge. We then exploit specific knowledge extracted from retrieved reports to add nodes or redefine their relations in a bottom-up manner. Each image feature is integrated with its own updated graph before being fed into the decoder module for report generation. Finally, this paper introduces Image-Report Contrastive and Image-Report Matching losses to better align visual features with textual information. Evaluated on the IU-Xray and MIMIC-CXR datasets, our DCL outperforms previous state-of-the-art models on both benchmarks.
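For orientation, the Image-Report Contrastive loss named in the abstract follows the symmetric InfoNCE-style formulation common in vision-language learning: matched image-report pairs are pulled together and mismatched pairs pushed apart in a shared embedding space. The sketch below is a minimal illustration of that generic formulation, not the authors' released code; the function name, embedding shapes, and temperature value are assumptions.

```python
# Minimal sketch of a symmetric image-report contrastive (InfoNCE-style)
# loss. All names and the temperature value are illustrative assumptions,
# not taken from the DCL implementation.
import torch
import torch.nn.functional as F

def image_report_contrastive_loss(image_emb, report_emb, temperature=0.07):
    """image_emb, report_emb: (batch, dim) pooled features from the image
    and report encoders; matched pairs share the same row index."""
    # L2-normalize so dot products become cosine similarities
    image_emb = F.normalize(image_emb, dim=-1)
    report_emb = F.normalize(report_emb, dim=-1)
    # (batch, batch) similarity matrix; the diagonal holds positive pairs
    logits = image_emb @ report_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: image-to-report and report-to-image
    loss_i2r = F.cross_entropy(logits, targets)
    loss_r2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2r + loss_r2i) / 2
```

The companion Image-Report Matching loss is typically realized as a binary classifier over paired versus mismatched image-report inputs; it is omitted here for brevity.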
First Page
3334
Last Page
3343
DOI
10.1109/CVPR52729.2023.00325
Publication Date
August 22, 2023
Keywords
Cell microscopy, Medical and biological vision, Training, Visualization, Neural networks, MIMICs, Knowledge graphs, Radiology, Pattern recognition
Recommended Citation
M. Li, B. Lin, Z. Chen, H. Lin, X. Liang and X. Chang, "Dynamic Graph Enhanced Contrastive Learning for Chest X-Ray Report Generation," 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 2023, pp. 3334-3343, doi: 10.1109/CVPR52729.2023.00325.
Additional Links
https://doi.org/10.1109/CVPR52729.2023.00325
Comments
Open access, archived thanks to CVPR
Uploaded 13th June 2024