Matrix Methods in Data Analysis, Signal Processing, and Machine Learning
- Offered by MIT Professional Education
Matrix Methods in Data Analysis, Signal Processing, and Machine Learning at MIT Professional Education Overview
| Detail | Value |
| --- | --- |
| Duration | 12 hours |
| Total fee | Free |
| Mode of learning | Online |
| Schedule type | Self-paced |
| Difficulty level | Intermediate |
| Official Website | Explore Free Course |
| Credential | Certificate |
Matrix Methods in Data Analysis, Signal Processing, and Machine Learning at MIT Professional Education Highlights
- Earn a Certificate of completion from MIT on successful course completion
- Instructor: Prof. Gilbert Strang
- This course explores linear algebra with applications to probability, statistics, and optimization, together with a complete explanation of deep learning
Matrix Methods in Data Analysis, Signal Processing, and Machine Learning at MIT Professional Education Course details
- This course is designed for those who want to learn the basics of linear algebra, probability and statistics, optimization, and deep learning, and how these subjects relate to one another.
- Linear algebra concepts are key to understanding and creating machine learning algorithms, especially as applied to deep learning and neural networks; the short sketch below illustrates this matrix view. This course reviews linear algebra with applications to probability, statistics, and optimization, and, above all, gives a full explanation of deep learning.
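To make the connection between linear algebra and neural networks concrete, here is a minimal sketch (not course material; all sizes and values are illustrative) of a one-hidden-layer network's forward pass written as the matrix-vector products this course studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 inputs, 8 hidden units, 3 outputs.
W1 = rng.standard_normal((8, 4))   # first weight matrix
b1 = np.zeros(8)
W2 = rng.standard_normal((3, 8))   # second weight matrix
b2 = np.zeros(3)

def forward(x):
    """Forward pass: two matrix multiplications with a ReLU between them."""
    hidden = np.maximum(0.0, W1 @ x + b1)  # ReLU(W1 x + b1)
    return W2 @ hidden + b2

print(forward(rng.standard_normal(4)))
```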
Matrix Methods in Data Analysis, Signal Processing, and Machine Learning at MIT Professional Education Curriculum
Lecture 1: The Column Space of A Contains All Vectors Ax
Lecture 2: Multiplying and Factoring Matrices
Lecture 3: Orthonormal Columns in Q Give QᵀQ = I
Lecture 4: Eigenvalues and Eigenvectors
Lecture 5: Positive Definite and Semidefinite Matrices
Lecture 6: Singular Value Decomposition (SVD)
Lecture 7: Eckart-Young: The Closest Rank k Matrix to A (see the SVD sketch after this list)
Lecture 8: Norms of Vectors and Matrices
Lecture 9: Four Ways to Solve Least Squares Problems (see the least-squares sketch after this list)
Lecture 10: Survey of Difficulties with Ax = b
Lecture 11: Minimizing ‖x‖ Subject to Ax = b
Lecture 12: Computing Eigenvalues and Singular Values
Lecture 13: Randomized Matrix Multiplication
Lecture 14: Low Rank Changes in A and Its Inverse
Lecture 15: Matrices A(t) Depending on t, Derivative = dA/dt
Lecture 16: Derivatives of Inverse and Singular Values
Lecture 17: Rapidly Decreasing Singular Values
Lecture 18: Counting Parameters in SVD, LU, QR, Saddle Points
Lecture 19: Saddle Points Continued, Maxmin Principle
Lecture 20: Definitions and Inequalities
Lecture 21: Minimizing a Function Step by Step
Lecture 22: Gradient Descent: Downhill to a Minimum
Lecture 23: Accelerating Gradient Descent (Use Momentum; see the momentum sketch after this list)
Lecture 24: Linear Programming and Two-Person Games
Lecture 25: Stochastic Gradient Descent
Lecture 26: Structure of Neural Nets for Deep Learning
Lecture 27: Backpropagation: Find Partial Derivatives
Lecture 30: Completing a Rank-One Matrix, Circulants!
Lecture 31: Eigenvectors of Circulant Matrices: Fourier Matrix
Lecture 32: ImageNet is a Convolutional Neural Network (CNN), The Convolution Rule
Lecture 33: Neural Nets and the Learning Function
Lecture 34: Distance Matrices, Procrustes Problem
Lecture 35: Finding Clusters in Graphs
Lecture 36: Alan Edelman and Julia Language
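As a hedged illustration of Lectures 6 and 7 (the SVD and the Eckart-Young theorem), the numpy sketch below truncates the SVD to build the closest rank-k matrix to A; the matrix A and the rank k are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))   # arbitrary example matrix
k = 2                             # target rank

# Thin SVD: A = U @ diag(s) @ Vt, singular values sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Eckart-Young: keeping the k largest singular values gives the closest
# rank-k matrix to A in both the 2-norm and the Frobenius norm.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print("rank of A_k:", np.linalg.matrix_rank(A_k))
print("Frobenius error:", np.linalg.norm(A - A_k, "fro"))
print("predicted error:", np.sqrt(np.sum(s[k:] ** 2)))  # matches the line above
```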
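Lecture 9 surveys four ways to solve least squares problems; this sketch compares two of them (the normal equations and the SVD-based pseudoinverse) on a made-up overdetermined system:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 3))  # tall matrix: more equations than unknowns
b = rng.standard_normal(20)

# Way 1: normal equations, solve A^T A x = A^T b (A needs full column rank).
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Way 2: pseudoinverse computed from the SVD, x = A^+ b.
x_pinv = np.linalg.pinv(A) @ b

print(np.allclose(x_normal, x_pinv))  # True: both give the least-squares solution
```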
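Finally, for Lectures 22 and 23 (gradient descent and its acceleration with momentum), here is a small sketch minimizing a convex quadratic f(x) = ½ xᵀSx; the matrix S, step size, and momentum coefficient are illustrative choices, not values from the lectures:

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T S x for a positive definite S.
S = np.array([[2.0, 0.0],
              [0.0, 10.0]])        # illustrative, mildly ill-conditioned quadratic
grad = lambda x: S @ x             # gradient of f

x = np.array([1.0, 1.0])
v = np.zeros(2)                    # momentum ("heavy ball") velocity
step, beta = 0.05, 0.9             # illustrative step size and momentum factor

for _ in range(100):
    v = beta * v - step * grad(x)  # accumulate a decaying average of gradients
    x = x + v                      # move along the velocity, not the raw gradient

print(x)  # approaches the minimizer at the origin
```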