Commit 6bf991d (parent 7707b79), "Update README": 4 files changed, +11 / -1 lines. README.md after the change:
The automatic generation of radiology reports has the potential to assist radiologists in the time-consuming task of report writing. Existing methods generate the full report from image-level features, failing to explicitly focus on anatomical regions in the image. We propose a simple yet effective region-guided report generation model that detects anatomical regions and then describes individual, salient regions to form the final report. While previous methods generate reports without the possibility of human intervention and with limited explainability, our method opens up novel clinical use cases through additional human-in-the-loop capabilities and introduces a high degree of transparency and explainability. Comprehensive experiments demonstrate the effectiveness of our method in both the report generation task and the human-in-the-loop capabilities, outperforming previous state-of-the-art models.

## Method
![image info](./figures_repo/method.png) *Figure 1. **R**egion-**G**uided Radiology **R**eport **G**eneration (RGRG): the object detector extracts visual features for 29 unique anatomical regions in the chest. Two subsequent binary classifiers select salient region features for the final report and encode strong abnormal information in the features, respectively. The language model generates sentences for each of the selected regions (in this example 4), forming the final report. For conciseness, residual connections and layer normalizations in the language model are not depicted.*
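The flow in Figure 1 can be sketched in a few lines of heavily simplified Python. This is an illustrative toy, not the actual RGRG implementation: the function names, the hard-coded regions, and the salience/abnormality scores are all placeholders standing in for the object detector, the two binary classifiers, and the language model.

```python
def detect_regions(image):
    """Stand-in for the object detector: (region name, salience score,
    abnormality score) triples with made-up values for a few of the
    29 chest regions."""
    return [
        ("right lung", 0.9, 0.1),
        ("left lung", 0.8, 0.2),
        ("heart", 0.7, 0.9),
        ("spine", 0.1, 0.1),
    ]

def generate_report(image, salience_threshold=0.5):
    """Select salient regions, then emit one sentence per selected region."""
    sentences = []
    for name, salience, abnormal in detect_regions(image):
        # First binary classifier: keep only salient regions for the report.
        if salience < salience_threshold:
            continue
        # Second binary classifier informs the per-region sentence
        # (the real model conditions a language model on region features).
        finding = "shows an abnormality" if abnormal >= 0.5 else "is unremarkable"
        sentences.append(f"The {name} {finding}.")
    # The final report is the concatenation of the region sentences.
    return " ".join(sentences)

print(generate_report(None))
# → The right lung is unremarkable. The left lung is unremarkable. The heart shows an abnormality.
```

In this toy run the spine falls below the salience threshold and contributes no sentence, mirroring how the model describes only selected regions rather than the whole image.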
## Quantitative Results

![image info](./figures_repo/nlg_metrics_table.png) *Table 1. Natural language generation (NLG) metrics for the full report generation task. Our model is competitive with or outperforms previous state-of-the-art models on a variety of metrics.*

![image info](./figures_repo/clinical_efficacy_metrics_table.png) *Table 2. Clinical efficacy (CE) metrics micro-averaged over 5 observations (denoted by mic-5) and example-based averaged over 14 observations (denoted by ex-14). RL represents reinforcement learning. Our model outperforms all non-RL models by large margins and is competitive with the two RL-based models directly optimized on CE metrics. Dashed lines highlight the scores of the best non-RL baseline.*
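For readers unfamiliar with the two averaging schemes in Table 2, the sketch below contrasts them on hypothetical binary labels (not data from the paper): micro-averaging pools true/false positives and false negatives over all examples before computing one F1, while example-based averaging computes an F1 per example and then averages. The edge case of an example with no positive labels is ignored here; the exact handling used for the reported metrics is not shown in this README.

```python
def _counts(pred, ref):
    """TP, FP, FN for one example's binary label vectors (0/1 ints)."""
    tp = sum(p * r for p, r in zip(pred, ref))
    fp = sum(p * (1 - r) for p, r in zip(pred, ref))
    fn = sum((1 - p) * r for p, r in zip(pred, ref))
    return tp, fp, fn

def micro_f1(preds, refs):
    """Pool counts over all examples, then compute a single F1."""
    tp = fp = fn = 0
    for pred, ref in zip(preds, refs):
        t, p, n = _counts(pred, ref)
        tp, fp, fn = tp + t, fp + p, fn + n
    return 2 * tp / (2 * tp + fp + fn)

def example_based_f1(preds, refs):
    """Compute F1 per example, then average over examples."""
    scores = []
    for pred, ref in zip(preds, refs):
        tp, fp, fn = _counts(pred, ref)
        scores.append(2 * tp / (2 * tp + fp + fn))
    return sum(scores) / len(scores)

preds = [[1, 1, 0], [0, 0, 1]]  # hypothetical predictions, 2 examples x 3 labels
refs  = [[1, 0, 0], [1, 1, 1]]  # hypothetical reference labels
print(micro_f1(preds, refs))          # 4/7 ≈ 0.571
print(example_based_f1(preds, refs))  # (2/3 + 1/2) / 2 ≈ 0.583
```

The two schemes diverge as soon as examples differ in label counts, which is why Table 2 reports them separately (mic-5 over 5 observations, ex-14 over 14).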

## Qualitative Results
![image info](./figures_repo/full_report_generation.png) *Figure 2. Full report generation for a test set image. Detected anatomical regions (solid boxes), corresponding generated sentences, and semantically matching reference sentences are colored the same. The generated report mostly captures the information contained in the reference report, as reflected by the matching colors.*
![image info](./figures_repo/anatomy_based_sentence_generation.png) *Figure 3. Interactive capability of anatomy-based sentence generation allows for the generation of descriptions of explicitly chosen anatomical regions. We show predicted (dashed boxes) and ground-truth (solid boxes) anatomical regions and color sentences accordingly. We observe that the model generates pertinent, anatomy-related sentences.*
## Setup

1. Create the conda environment with "**conda env create -f environment.yml**"