University of Glasgow - Explainable deep learning models for healthcare - CDSS 3
- Offered by Coursera
Explainable deep learning models for healthcare - CDSS 3 at Coursera Overview
- Duration: 39 hours
- Start from: Start Now
- Total fee: Free
- Mode of learning: Online
- Official Website: Explore Free Course
- Credential: Certificate
Explainable deep learning models for healthcare - CDSS 3 at Coursera Highlights
- Earn a Certificate upon completion from University of Glasgow
Explainable deep learning models for healthcare - CDSS 3 at Coursera Course details
- This course will introduce the concepts of interpretability and explainability in machine learning applications
- The learner will understand the difference between global, local, model-agnostic and model-specific explanations
- State-of-the-art explainability methods such as Permutation Feature Importance (PFI), Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are explained and applied to time-series classification; a rough code sketch of one such method is given below
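To give a concrete flavour of the model-agnostic methods listed above, here is a minimal, hedged sketch using the `shap` package's KernelExplainer on a generic scikit-learn classifier. The synthetic data, model choice and feature count are placeholders for illustration, not the course's ECG material.

```python
# Minimal SHAP sketch (assumption: the `shap` and scikit-learn packages are available;
# the synthetic data stands in for the course's time-series features).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = np.random.rand(200, 8), np.random.randint(0, 2, 200)   # placeholder data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# KernelExplainer is model-agnostic: it only needs a prediction function
# and a background sample to estimate Shapley values for each feature.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X_train, 50))
shap_values = explainer.shap_values(X_test[:5])    # local explanations for 5 instances
print(np.array(shap_values).shape)
```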
Explainable deep learning models for healthcare - CDSS 3 at Coursera Curriculum
Interpretable vs Explainable Machine Learning Models in Healthcare
Welcome video - Explainable Deep Learning Models for Healthcare
Interpretability vs Explainability
'Explainability' in Healthcare Applications
Taxonomy of Explainability Methods
Model Agnostic Explainability Methods
Permutation Feature Importance in Time Series Data
The importance of explainable prediction models in healthcare
Explainable Artificial Intelligence - Taxonomy
Model Agnostic Explainability
Permutation Feature Importance
Practical Exercise: Interpretability of the MLP model using Permutation Feature Importance
Practical Exercise: Interpretability of the CNN model using Permutation Feature Importance
Practical Exercise: Interpretability of the LSTM model using Permutation Feature Importance
Explainability models in ECG
End of week 1 quiz
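Week 1 applies Permutation Feature Importance to MLP, CNN and LSTM heartbeat classifiers. As an illustrative sketch only (not the course notebook), the function below permutes one input channel at a time and reports the resulting drop in accuracy; the (samples, timesteps, channels) layout and the softmax-output assumption are placeholders.

```python
# Hand-rolled Permutation Feature Importance for a time-series classifier.
# Assumptions: `model` is any fitted classifier exposing .predict() that returns
# class probabilities, and X has shape (samples, timesteps, channels).
import numpy as np

def permutation_channel_importance(model, X, y, n_repeats=5, rng=None):
    """Drop in accuracy when each input channel is shuffled across samples."""
    rng = np.random.default_rng(rng)
    baseline = np.mean(model.predict(X).argmax(axis=-1) == y)
    importances = np.zeros(X.shape[-1])
    for ch in range(X.shape[-1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffle this channel across samples to break its link with the labels.
            X_perm[..., ch] = X_perm[rng.permutation(len(X)), :, ch]
            permuted = np.mean(model.predict(X_perm).argmax(axis=-1) == y)
            drops.append(baseline - permuted)
        importances[ch] = np.mean(drops)
    return importances
```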
Local Explainability Methods for Deep Learning Models
Local Interpretable Model Agnostic Explanations (LIME)
LIME in Time-Series Classification
Shapley Additive Explanations
Model-Specific Explanations: Visualisation Methods
CAM in Time-Series Classification
Why Should I Trust You?
Practical Exercise: Interpretability of heartbeat classification using LIME and an MLP model
Practical Exercise: Interpretability of heartbeat classification using LIME and a CNN model
Practical Exercise: Interpretability of heartbeat classification using LIME and an LSTM model
A Unified Approach to Interpreting Model Predictions
Practical Exercise: Interpretability of CNN models using Class Activation Maps
Class Activation Mapping
End of week 2 quiz
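Week 2's practical exercises explain individual heartbeat classifications with LIME. The sketch below shows the general pattern on synthetic data, with a scikit-learn MLP standing in for the course's deep models; the data, class labels and feature names are placeholders, not the course material.

```python
# LIME sketch for heartbeat-style time-series classification.
# Assumptions: the `lime` and scikit-learn packages; synthetic data stands in for ECG beats.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_timesteps = 60
X = rng.normal(size=(300, n_timesteps))          # each row: one flattened heartbeat
y = (X[:, 20:30].mean(axis=1) > 0).astype(int)   # toy label tied to a mid-beat segment

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"t{i}" for i in range(n_timesteps)],
    class_names=["normal", "abnormal"],          # placeholder class labels
    mode="classification",
)
# Local explanation: which timesteps pushed this one beat towards its predicted class?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=10)
print(explanation.as_list())
```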
Gradient-weighted Class Activation Mapping and Integrated Gradients
Gradient Weighted Class Activation Maps
Grad-CAM in Time-Series Classification
Integrated Gradients
Integrated Gradients in Time Series Classification
Grad-CAM: Gradient-weighted Class Activation Mapping
Practical Exercise: Interpretability of the CNN model using Gradient-weighted Class Activation Mapping
Practical Exercise: Interpretability of the LSTM model using Gradient-weighted Class Activation Mapping
Axiomatic Attribution for Deep Networks
Practical Exercise: Interpretability of the CNN model using Integrated Gradients
Practical Exercise: Interpretability of the LSTM model using Integrated Gradients
End of week 3 quiz
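Week 3 covers Gradient-weighted Class Activation Mapping and Integrated Gradients. A minimal Integrated Gradients sketch for a 1D-CNN time-series classifier, assuming TensorFlow/Keras and using an untrained placeholder model rather than the course's networks, might look like this.

```python
# Integrated Gradients sketch for a 1D-CNN time-series classifier.
# Assumptions: TensorFlow/Keras; the tiny model and random series are placeholders.
import numpy as np
import tensorflow as tf

n_timesteps = 187                                  # e.g. one resampled heartbeat
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_timesteps, 1)),
    tf.keras.layers.Conv1D(8, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    """Average gradients along a straight path from the baseline to the input."""
    baseline = tf.zeros_like(x) if baseline is None else baseline
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps), (steps, 1, 1))
    path = baseline + alphas * (x - baseline)      # (steps, timesteps, 1)
    with tf.GradientTape() as tape:
        tape.watch(path)
        probs = model(path)[:, target_class]
    grads = tape.gradient(probs, path)
    avg_grads = tf.reduce_mean(grads, axis=0)      # approximate the path integral
    return (x - baseline) * avg_grads              # attribution per timestep

x = tf.convert_to_tensor(np.random.randn(n_timesteps, 1), dtype=tf.float32)
attributions = integrated_gradients(model, x, target_class=1)
print(attributions.shape)                          # (timesteps, 1)
```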
Attention mechanisms in Deep Learning
Attention in Deep Learning
Taxonomy of Attention
Attention and Explainability
Survey on Attention Mechanisms
Practical Exercise: Classification of heartbeats using an LSTM with attention mechanism
Practical Exercise: Interpretability of the LSTM model with attention mechanism
End of week 4 quiz
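Week 4 builds an LSTM heartbeat classifier with an attention mechanism and inspects the attention weights for interpretability. A minimal Keras sketch of that kind of architecture, with illustrative shapes and layer sizes rather than the course's settings, is given below.

```python
# LSTM-with-attention sketch (assumption: TensorFlow/Keras; shapes are illustrative).
# The softmax attention weights over timesteps are what gets inspected for interpretability.
import tensorflow as tf

n_timesteps, n_classes = 187, 5

inputs = tf.keras.Input(shape=(n_timesteps, 1))
hidden = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)  # one vector per timestep

# Score each timestep, normalise with softmax, and pool the sequence by those weights.
scores = tf.keras.layers.Dense(1)(hidden)                         # (batch, timesteps, 1)
weights = tf.keras.layers.Softmax(axis=1)(scores)
context = tf.keras.layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([hidden, weights])

outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(context)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# A second model exposing the attention weights lets you plot which timesteps
# the classifier attended to for a given heartbeat.
attention_model = tf.keras.Model(inputs, weights)
model.summary()
```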
End of course summative quiz