Interpretable Risk Assessment Methods for Medical Image Processing via Dynamic Dilated Convolution and a Knowledge Base on Location Relations
DOI:
https://doi.org/10.31577/cai_2024_2_438

Keywords:
High risk areas, quantification of uncertainty, deep learning, dilated convolution, image segmentation, credibility learning

Abstract
Existing approaches to image risk assessment focus on the uncertainty of the model while ignoring the uncertainty inherent in the data itself. Moreover, even when model decisions can be assessed for credibility, they still lack interpretability. This paper proposes a risk assessment model that unifies the model, the sample, and an external knowledge base, and includes: 1. Data uncertainty is constructed by masking different decision-related parts of the image with random masks applied at varying probabilities. 2. A dynamically distributed dilated convolution method based on random directional field perturbations is proposed to construct model uncertainty; it evaluates how different components within a local region influence the decision by locally perturbing the attention region of the dilated convolution. 3. A relatively interpretable triadic external knowledge base is presented to reason about and validate the model's decisions. Experiments on a dataset of stomach CT images show that the proposed method outperforms current state-of-the-art methods.
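The random-masking idea in point 1 can be sketched as a Monte-Carlo estimate: mask image patches with some probability, score each masked copy with the model, and take the variance of the scores as a data-uncertainty signal. The patch size, masking probability, and the mean-intensity stand-in for the model below are illustrative assumptions, not the paper's actual segmentation network.

```python
import random
import statistics

def masked_variants(image, patch, p, trials, rng):
    """Generate copies of `image` (a 2-D list of floats) in which each
    patch-by-patch block is zeroed out independently with probability p."""
    h, w = len(image), len(image[0])
    variants = []
    for _ in range(trials):
        out = [row[:] for row in image]
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                if rng.random() < p:  # mask this decision-related block
                    for di in range(i, min(i + patch, h)):
                        for dj in range(j, min(j + patch, w)):
                            out[di][dj] = 0.0
        variants.append(out)
    return variants

def data_uncertainty(image, model, patch=2, p=0.3, trials=20, seed=0):
    """Variance of model scores over randomly masked copies of the image;
    a higher variance suggests the decision depends on fragile regions."""
    rng = random.Random(seed)
    scores = [model(v) for v in masked_variants(image, patch, p, trials, rng)]
    return statistics.pvariance(scores)
```

With `p=0.0` no patch is ever masked, so all variants are identical and the estimated uncertainty collapses to zero; raising `p` probes how strongly the score depends on the masked regions.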