
Commit b12974f

README file for Course2
1 parent eda4be8 commit b12974f

1 file changed: Course 2/README.md (+63 −0 lines)
# Course 2 - Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

**Info:** This course will teach you the "magic" of getting deep learning to work well. Rather than the deep learning process being a black box, you will understand what drives performance and be able to get good results more systematically. You will also learn TensorFlow.

After 3 weeks, you will:

- Understand industry best practices for building deep learning applications.
- Be able to effectively use common neural network "tricks", including initialization, L2 and dropout regularization, batch normalization, and gradient checking.
- Be able to implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, momentum, RMSprop, and Adam, and check their convergence.
- Understand new best practices for the deep learning era: how to set up train/dev/test sets and analyze bias/variance (a short worked sketch follows this intro).
- Be able to implement a neural network in TensorFlow.

This is the second course of the Deep Learning Specialization.
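
As a quick, hedged illustration of the train/dev error analysis mentioned above, here is a minimal sketch; the error numbers and the `diagnose` helper are hypothetical, not taken from the course materials:

```python
def diagnose(train_error, dev_error, bayes_error=0.0):
    """Rough bias/variance read-out from train/dev errors given as fractions (0.01 = 1%)."""
    avoidable_bias = train_error - bayes_error  # gap between training error and the best achievable error
    variance = dev_error - train_error          # gap between dev error and training error
    return avoidable_bias, variance

# Hypothetical numbers: 1% train error, 11% dev error -> mostly a variance (overfitting) problem
print(diagnose(0.01, 0.11))

# Hypothetical numbers: 15% train error, 16% dev error -> mostly a bias (underfitting) problem
print(diagnose(0.15, 0.16))
```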
## Week 1 - Practical aspects of Deep Learning

- Video: Train / Dev / Test sets
- Video: Bias / Variance
- Video: Basic Recipe for Machine Learning
- Video: Regularization
- Video: Why regularization reduces overfitting?
- Video: Dropout Regularization (a minimal inverted-dropout sketch follows this list)
- Video: Understanding Dropout
- Video: Other regularization methods
- Video: Normalizing inputs
- Video: Vanishing / Exploding gradients
- Video: Weight Initialization for Deep Networks
- Video: Numerical approximation of gradients
- Video: Gradient checking
- Video: Gradient Checking Implementation Notes
- Notebook: Initialization
- Notebook: Regularization
- Notebook: Gradient Checking
- Video: Yoshua Bengio interview
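
As a quick illustration of the inverted-dropout technique covered in this week's videos and the Regularization notebook, here is a minimal NumPy sketch; the activation shape and `keep_prob` value are illustrative assumptions, not the notebook's actual code:

```python
import numpy as np

def dropout_forward(A, keep_prob=0.8, training=True):
    """Apply inverted dropout to an activation matrix A (units x examples)."""
    if not training:
        return A                                   # no dropout at test time
    mask = np.random.rand(*A.shape) < keep_prob    # keep each unit with probability keep_prob
    return (A * mask) / keep_prob                  # zero dropped units, rescale to keep the expected value

# Illustrative usage on random activations for a layer with 4 units and 5 examples
A = np.random.randn(4, 5)
A_drop = dropout_forward(A, keep_prob=0.8)
```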
## Week 2 - Optimization algorithms

- Video: Mini-batch gradient descent
- Video: Understanding mini-batch gradient descent
- Video: Exponentially weighted averages
- Video: Understanding exponentially weighted averages
- Video: Bias correction in exponentially weighted averages
- Video: Gradient descent with momentum
- Video: RMSprop
- Video: Adam optimization algorithm (a minimal update-rule sketch follows this list)
- Video: Learning rate decay
- Video: The problem of local optima
- Notebook: Optimization
- Video: Yuanqing Lin interview
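
As a quick illustration of the Adam update rule covered this week (momentum and RMSprop moving averages plus bias correction), here is a minimal NumPy sketch for a single parameter array; the hyperparameter defaults and the toy gradient are assumptions for illustration only:

```python
import numpy as np

def adam_step(w, dw, v, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters w with gradient dw; v and s are the running moment estimates."""
    v = beta1 * v + (1 - beta1) * dw             # momentum-style moving average of the gradient
    s = beta2 * s + (1 - beta2) * (dw ** 2)      # RMSprop-style moving average of the squared gradient
    v_hat = v / (1 - beta1 ** t)                 # bias correction; t is the step count starting at 1
    s_hat = s / (1 - beta2 ** t)
    w = w - lr * v_hat / (np.sqrt(s_hat) + eps)  # scaled parameter update
    return w, v, s

# Illustrative usage with a random stand-in gradient
w, v, s = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 4):
    dw = np.random.randn(3)                      # would be a mini-batch gradient in practice
    w, v, s = adam_step(w, dw, v, s, t)
```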
## Week 3 - Hyperparameter tuning, Batch Normalization and Programming Frameworks

- Video: Tuning process
- Video: Using an appropriate scale to pick hyperparameters
- Video: Hyperparameters tuning in practice: Pandas vs. Caviar
- Video: Normalizing activations in a network
- Video: Fitting Batch Norm into a neural network
- Video: Why does Batch Norm work?
- Video: Batch Norm at test time
- Video: Softmax Regression
- Video: Training a softmax classifier
- Video: Deep learning frameworks
- Video: TensorFlow (a minimal tf.keras sketch follows this list)
- Notebook: TensorFlow
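
As a quick illustration of the kind of TensorFlow model the Week 3 material builds, here is a minimal tf.keras sketch with a softmax output; the layer sizes, toy data, and training settings are assumptions for illustration, not the notebook's actual architecture:

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for the course dataset: 100 examples, 12 features, 3 classes
X = np.random.randn(100, 12).astype("float32")
y = np.random.randint(0, 3, size=(100,))

# Small fully connected network with a softmax output layer
model = tf.keras.Sequential([
    tf.keras.Input(shape=(12,)),
    tf.keras.layers.Dense(25, activation="relu"),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Adam optimizer and cross-entropy loss, as covered in Weeks 2 and 3
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, batch_size=32, epochs=5, verbose=0)
```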
