Interpretable Risk Assessment Methods for Medical Image Processing via Dynamic Dilated Convolution and a Knowledge Base on Location Relations

Authors

  • Yunan Shi School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013, China
  • Junxian Bao School of Medicine, Jiangsu University, Zhenjiang, 212001, China & Zhenjiang Traditional Chinese Medicine Hospital, Zhenjiang, 212003, China
  • Keyang Cheng School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013, China
  • Weijie Shen School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013, China
  • Jingfeng Tang Jiangsu Provincial Key Laboratory of Industrial Network Security Technology, Jiangsu University, Zhenjiang, 212013, China
  • Yongzhao Zhan Jiangsu Provincial Key Laboratory of Industrial Network Security Technology, Jiangsu University, Zhenjiang, 212013, China

DOI:

https://doi.org/10.31577/cai_2024_2_438

Keywords:

High-risk areas, quantification of uncertainty, deep learning, dilated convolution, image segmentation, credibility learning

Abstract

Existing approaches to image risk assessment start from the uncertainty of the model, yet ignore the uncertainty that exists in the data itself. In addition, model decisions still lack interpretability, even when the credibility of those decisions can be assessed. This paper proposes a risk assessment model that unites the model, the sample, and an external knowledge base: 1. Data uncertainty is constructed by masking the different decision-related parts of the image with probability-based random masks. 2. A dynamically distributed dilated convolution method based on random directional field perturbations is proposed to construct model uncertainty; it evaluates the impact of different components on the decision within a local region by locally perturbing the attention region of the dilated convolution. 3. A triadic external knowledge base with relative interpretability is presented to reason about and validate the model's decisions. Experiments on a dataset of gastric CT images show that the proposed method outperforms current state-of-the-art methods.
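
To make the first two steps concrete, a minimal PyTorch sketch is given below. It is an illustration under stated assumptions, not the authors' implementation: the function names, the drop_prob and sigma parameters, and the grid-resampling approximation of the perturbed dilated convolution are all assumptions introduced here for clarity.

```python
# Minimal sketch (not the paper's code) of the two uncertainty-construction
# steps summarised in the abstract. All names and parameter values are
# illustrative assumptions.
import torch
import torch.nn.functional as F


def random_region_mask(image, region_masks, drop_prob=0.3):
    """Data uncertainty: mask each decision-related region of the image
    independently with probability `drop_prob`.

    image:        (B, C, H, W) input batch
    region_masks: (B, R, H, W) binary masks of R decision-related regions
    """
    keep = (torch.rand(region_masks.shape[:2], device=image.device) > drop_prob)
    keep = keep[..., None, None].float()                               # (B, R, 1, 1)
    # A pixel is dropped if it lies in any region selected for masking.
    dropped = ((1.0 - keep) * region_masks).amax(dim=1, keepdim=True)  # (B, 1, H, W)
    return image * (1.0 - dropped)


def perturbed_dilated_conv(x, weight, dilation=2, sigma=0.5):
    """Model uncertainty: a dilated convolution whose attention region is
    locally jittered by a random directional field, approximated here by
    resampling the input on a perturbed grid before the convolution.
    `weight` is assumed to be a (C_out, C_in, 3, 3) kernel so that
    padding=dilation preserves the spatial size."""
    b, c, h, w = x.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=x.device),
                            torch.linspace(-1, 1, w, device=x.device),
                            indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    # Random directional perturbation of roughly `sigma` pixels.
    grid = grid + sigma * (2.0 / max(h, w)) * torch.randn_like(grid)
    x_jittered = F.grid_sample(x, grid, align_corners=True)
    return F.conv2d(x_jittered, weight, padding=dilation, dilation=dilation)
```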

Published

2024-05-30

How to Cite

Shi, Y., Bao, J., Cheng, K., Shen, W., Tang, J., & Zhan, Y. (2024). Interpretable Risk Assessment Methods for Medical Image Processing via Dynamic Dilated Convolution and a Knowledge Base on Location Relations. Computing and Informatics, 43(2), 438–457. https://doi.org/10.31577/cai_2024_2_438

Issue

Vol. 43 No. 2 (2024)

Section

Special Section Articles