(multilayer) perceptron, backpropagation, fully connected neural networks
loss functions and optimization strategies
convolutional neural networks (CNNs)
activation functions
regularization strategies
common practices for training and evaluating neural networks
visualization of networks and results
common architectures, such as LeNet, AlexNet, VGG, GoogLeNet
recurrent neural networks (RNN, TBPTT, LSTM, GRU)
deep reinforcement learning
unsupervised learning (autoencoder, RBM, DBM, VAE)
generative adversarial networks (GANs)
weakly supervised learning
applications of deep learning (segmentation, object detection, speech recognition, ...)
The accompanying exercises will provide a deeper understanding of the inner workings and architectures of neural networks.
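As an illustration of the first topics above (multilayer perceptron, backpropagation, loss functions, gradient-based optimization), the following is a minimal sketch of a two-layer fully connected network trained with backpropagation. It assumes Python with NumPy; the toy XOR data, layer sizes, and learning rate are illustrative choices, not part of the course material or exercises.

import numpy as np

rng = np.random.default_rng(0)

# Toy XOR dataset: 4 samples, 2 features, binary targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-16-1 multilayer perceptron.
W1 = rng.normal(0.0, 1.0, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # predicted probabilities
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy loss

    # Backward pass: chain rule, gradients averaged over the batch.
    dlogits2 = (p - y) / len(X)        # gradient w.r.t. output pre-activation
    dW2 = h.T @ dlogits2
    db2 = dlogits2.sum(axis=0)
    dh = dlogits2 @ W2.T
    dlogits1 = dh * (1.0 - h ** 2)     # tanh derivative
    dW1 = X.T @ dlogits1
    db1 = dlogits1.sum(axis=0)

    # Plain gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", float(loss), "predictions:", p.ravel().round(2))

Running the script drives the cross-entropy loss toward zero and the predictions toward the XOR targets; the same forward/backward structure underlies the larger architectures listed above, with the manual gradient computation replaced by automatic differentiation in practice.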