ECINN: Efficient Counterfactuals from Invertible Neural Networks
Published at The British Machine Vision Conference (BMVC) 2021,
Published at arXiv,
Published at NeurIPS 2020 (Spotlight),
Published at arXiv,
Published at arXiv,
Published at ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models,
Master Course at SDC, Neuroscience & Neuroimaging, Beijing, China
Master Course at AU, Dept. of Computer Science, Aarhus, Denmark
Bachelor Course at AU, Dept. of Computer Science, Aarhus, Denmark
Bachelor Course at AU, Dept. of Computer Science, Aarhus, Denmark
Bachelor Course at AU, Dept. of Computer Science, Aarhus, Denmark
Bachelor Course at AU, Dept. of Computer Science, Aarhus, Denmark
Bachelor Course at AU, Dept. of Computer Science, Aarhus, Denmark
Bachelor Course at AU, Dept. of Computer Science, Aarhus, Denmark
Article Summary at Handout meeting, Aarhus University, Denmark
Introductory at U-Days, Aarhus University, Denmark
Article Summary at Handout meeting, Aarhus University, Denmark
Article Summary at Journal Club, Aarhus University, Denmark
Workshop at Major League Hacking - AUHack, Aarhus University, Denmark
Many local post-hoc explainability techniques, such as DeConvNet, Guided Backprop, Layer-wise Relevance Propagation, and Integrated Gradients, rely on “gradient-like” computations in which explanations are propagated backwards through a neural network, one layer at a time. This backward computation can be altered to incorporate attention, guiding these techniques towards better explanations.
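
For a concrete picture, here is a minimal sketch, assuming PyTorch, of one such altered backward computation: a Guided Backprop-style hook that clamps negative gradients at every ReLU during the backward pass. The architecture and input shape are illustrative only, and an attention-weighted variant (multiplying the propagated gradients by attention maps) would follow the same hook mechanism.

import torch
import torch.nn as nn

def guided_backprop_hook(module, grad_input, grad_output):
    # Guided Backprop: let only positive gradients flow
    # backward through the ReLU.
    return (torch.clamp(grad_input[0], min=0.0),)

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)

# Alter the backward computation at every ReLU.
for layer in model.modules():
    if isinstance(layer, nn.ReLU):
        layer.register_full_backward_hook(guided_backprop_hook)

x = torch.randn(1, 3, 32, 32, requires_grad=True)
score = model(x)[0].max()   # score of the most activated class
score.backward()
saliency = x.grad           # gradient-like explanation at the input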