- Prerequisites / Organisational information
- Registration via StudOn: https://www.studon.fau.de/crs3330572.html
- Contents
- Meta-learning refers to algorithms that aim to learn some aspect of a learning algorithm from data.
Examples of meta-learning methods include algorithms that design neural network architectures based on data, optimize the performance of a learning algorithm, or exploit commonalities between tasks to enable learning from few samples on unseen tasks.
These methods promise to automate machine learning one step further than learning good representations from data: they learn the algorithms that in turn learn even better representations. The seminar will cover the most important works that provide the cornerstone knowledge needed to understand cutting-edge research in the field of meta-learning. Applications will include the following topics (a minimal code sketch of the inner/outer-loop structure shared by many of these methods follows the list):
- Learning from few samples
- Automatically tuning neural network architectures
- Determining appropriate equivariances
- Disentangling causal mechanisms
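Many of the methods covered, e.g. MAML (Finn et al.) and its first-order variants (Nichol et al.), share a two-loop structure: an inner loop adapts to a single task from a few samples, and an outer loop updates the meta-parameters so that this adaptation becomes faster. The sketch below illustrates this structure with a first-order (Reptile-style) update on a toy distribution of linear regression tasks; the toy tasks, model, and hyperparameters are illustrative assumptions and not part of the seminar material.

```python
# Minimal sketch of first-order gradient-based meta-learning (Reptile-style).
# Toy setup: each task is a random linear function; the meta-learner learns an
# initialization that adapts quickly from few samples. All names and numbers
# here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each task is a random linear function y = a*x + b with task-specific (a, b).
    a = rng.uniform(0.5, 2.0)
    b = rng.uniform(-1.0, 1.0)
    return a, b

def sample_data(task, n=5):
    # Draw a small support set ("few samples") from the task.
    a, b = task
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, a * x + b

def mse_grad(theta, x, y):
    # Gradient of the mean squared error of the model y_hat = theta[0]*x + theta[1].
    err = theta[0] * x + theta[1] - y
    return np.array([2.0 * np.mean(err * x), 2.0 * np.mean(err)])

theta = np.zeros(2)                       # meta-parameters: a shared initialization
inner_lr, meta_lr, inner_steps = 0.1, 0.05, 5

for _ in range(2000):
    x, y = sample_data(sample_task())     # sample a new task and its support set
    phi = theta.copy()
    for _ in range(inner_steps):          # inner loop: adapt to the task with a few SGD steps
        phi -= inner_lr * mse_grad(phi, x, y)
    theta += meta_lr * (phi - theta)      # outer loop: move the initialization toward the adapted weights

print("learned initialization (slope, intercept):", theta)
```

Full MAML differentiates through the inner loop (second-order terms); the first-order update above drops those terms, which is the simplification discussed by Nichol et al. in the recommended literature.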
- Recommended literature
- Finn et al., "Model-agnostic meta-learning for fast adaptation of deep networks", ICML 2017
- Zhou et al., "Meta-learning symmetries by reparameterization", arXiv
- Snell et al., "Prototypical networks for few-shot learning", NeurIPS 2017
- Triantafillou et al., "Meta-dataset: A dataset of datasets for learning to learn from few examples", ICLR 2020
- Vinyals et al., "Matching networks for one shot learning", NeurIPS 2016
- Zoph et al., "Neural Architecture Search with Reinforcement Learning", ICLR 2017
- Bengio et al., "A meta-transfer objective for learning to disentangle causal mechanisms", ICLR 2020
- Santoro et al., "Meta-Learning with Memory-Augmented Neural Networks", ICML 2016
- Ravi et al., "Optimization as a model for few-shot learning", ICLR 2017
- Munkhdalai et al., "Meta Networks", ICML 2017
- Sung et al., "Learning to Compare: Relation Network for Few-Shot Learning", CVPR 2018
- Nichol et al., "On First-Order Meta-Learning Algorithms", arXiv