Occlusion-Based Approach For Interpretable Semantic Segmentation
In this paper, we investigate the application of an occlusion-based approach to interpreting semantic segmentation results. With the increasing deployment of deep learning systems in critical domains, interpretability plays a key role in providing information about the model beyond the evaluation metric score. An extended modification of occlusion sensitivity allows the generation of saliency maps based on the effect of occlusions on the evaluation metric. Such a perturbation-based post-hoc interpretability method can be used to visualize the image regions to which the selected segmentation class is most sensitive. We observe that, compared to the classification case, the evaluation metric scores for segmentation remain close to one another even after occlusion. To spread the resulting scores over a wider range of color intensities in the saliency map, we apply normalization and standardization techniques. We also evaluate the results quantitatively using deletion curves.
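The abstract does not include code, but the procedure it describes can be illustrated with a minimal sketch of occlusion-based saliency for a single segmentation class. The model interface (`model` returning a per-pixel class map), the patch size, the occlusion fill value, and the use of per-class IoU against the unoccluded prediction as the evaluation metric are illustrative assumptions, not necessarily the paper's exact setup.

```python
# Minimal sketch (not the authors' code) of occlusion sensitivity for
# semantic segmentation. Assumption: `model` maps an H x W x C image to
# an H x W array of predicted class indices.
import numpy as np

def class_iou(pred, ref, cls):
    """IoU of one class between a prediction and a reference mask."""
    p, r = (pred == cls), (ref == cls)
    union = np.logical_or(p, r).sum()
    return np.logical_and(p, r).sum() / union if union > 0 else 1.0

def occlusion_saliency(model, image, target_cls, patch=32, stride=16, fill=0.0):
    h, w = image.shape[:2]
    baseline = model(image)                      # unoccluded prediction
    heat = np.zeros((h, w), dtype=np.float64)
    count = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill    # occlusion patch
            iou = class_iou(model(occluded), baseline, target_cls)
            # A large metric drop marks the region as important for the class.
            heat[y:y + patch, x:x + patch] += 1.0 - iou
            count[y:y + patch, x:x + patch] += 1
    heat = np.divide(heat, count, out=np.zeros_like(heat), where=count > 0)
    # Min-max normalization spreads the (often narrow) range of metric drops
    # over the full color scale of the saliency map.
    if heat.max() > heat.min():
        heat = (heat - heat.min()) / (heat.max() - heat.min())
    return heat
```

The final normalization step reflects the observation in the abstract that segmentation metric scores stay close to one another under occlusion, so the raw drops would otherwise yield a nearly uniform map.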