Neural Network
- Neural Network
  - Perceptron
  - Backpropagation
- Loss Functions
  - L2 & L1 Loss
  - Regression Loss Functions
  - Binary Classification Loss Functions
  - Multi-Class Classification Loss Functions
  - CNN Loss
  - Detection
  - Face
  - Divergence loss (on probability)
- Optimizer
  - Gradient Descent
  - Momentum
  - AdaGrad
  - AdaDelta
  - RMSprop
  - Adam
  - AdamW
  - Cyclical Learning Rates
  - SGDR
- Activation Functions (see the sketch after this list)
  - Step
  - Signum
  - Sigmoid
  - Tanh
  - ReLU
  - Softmax
- Normalization
  - Local Response Normalization
  - Batch Normalization
  - Layer Normalization
  - Weight Normalization
  - Instance Normalization
  - SELU (NIPS 2017)
  - Group Normalization
  - Conditional BatchNorm
  - Conditional Instance Normalization
  - AdaIN
  - Batch Renormalization
  - SPADE
  - Summary and use cases
- Weight Initialization
  - Unsupervised pre-training
  - Xavier Initialization
  - Kaiming Initialization
- Tricks
  - Data
  - Dropout
  - Regularization
  - Normalization
  - Skip Connection
  - Residual scaling
  - Visualizing gradient vanishing
  - Batch size
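For quick reference, here is a minimal NumPy sketch of the activation functions listed above (Sigmoid, Tanh, ReLU, Softmax), using their standard definitions; the function names and the max-subtraction stability trick in softmax are choices made for this sketch, not something fixed by the notes:

```python
import numpy as np

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + exp(-x)); squashes inputs to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # tanh squashes inputs to (-1, 1) and is zero-centered
    return np.tanh(x)

def relu(x):
    # ReLU(x) = max(0, x); cheap to compute and non-saturating for positive inputs
    return np.maximum(0.0, x)

def softmax(x):
    # subtract the max before exponentiating for numerical stability;
    # the outputs are positive and sum to 1
    z = np.exp(x - np.max(x))
    return z / np.sum(z)
```

For example, `softmax(np.array([1.0, 2.0, 3.0]))` returns a probability vector whose entries sum to 1, with the largest mass on the last element.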