4 - Lipschitz Learning using Graphs and Neural Networks (L. Bungert, FAU, Germany) [ID:32332]

So welcome everyone. Today we have Leon Bungert here at the FAU and he will be speaking about

Lipschitz learning using graphs and neural networks. Please, Dr. Bungert, you have the floor.

Thank you very much Marius for the introduction and also thank you very much for inviting me to

give a talk in this nice seminar series, which I've been following closely over the last few months already.

So I'm really honored to also give a talk here. So as you already said my talk is going to be

about Lipschitz learning using graphs and neural networks, and in a nutshell I will speak about two

different learning paradigms namely semi-supervised learning and supervised learning and I will speak

about challenges and difficulties which arise there when it comes to stability and also

some approaches which we have developed to sort of regularize the problems and get more stable

learning algorithms. Most of the things that I will be talking about are joint work with Tim

Roith and Daniel Tenbrinck, who are also from the Chair of Applied Mathematics on the fourth floor.

Good so this is the structure of today's talk. I will start with the motivation on the kind of

learning paradigms that I will look at and what the difficulties there are and then in the first

main part of the talk I will speak about semi-supervised learning methodology called

graph-based Lipschitz learning, and there I will explain how you can derive a continuum limit of

this learning problem as the data set becomes denser and denser and converges to a continuum

let's say approximating a measure and the techniques that I will use there are mainly based

on Gamma-convergence, and then in the second part of the talk I will sort of switch topics and speak

about a supervised learning problem using neural networks, and there we found a new algorithm

for regularizing the Lipschitz constants of neural networks during the training process,

and I will present this algorithm and show some preliminary analysis and also some results.

So the second part is brand new, sort of, whereas the first part was mainly built on a master's thesis

and also a preprint that Tim Roith and I did together a few months ago, and in the end I will

conclude with some open questions and some future work that we will do in this direction.
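
As a rough preview of that second part, the sketch below shows, in PyTorch, one simple way such a Lipschitz regularization during training could look, assuming a difference-quotient estimate of the Lipschitz constant on randomly perturbed inputs. The estimator, the perturbation scale eps, and the penalty weight lam are illustrative assumptions, not the exact algorithm presented in the talk.

```python
# A minimal sketch of penalizing a network's Lipschitz constant during
# training (PyTorch). The empirical estimate uses difference quotients on
# randomly perturbed inputs; eps and lam are illustrative assumptions,
# not the algorithm from the talk.
import torch

def lipschitz_penalty(model, x, eps=0.1):
    # Estimate sup |f(x) - f(y)| / |x - y| over sampled pairs (x, y).
    y = x + eps * torch.randn_like(x)
    num = (model(x) - model(y)).flatten(1).norm(dim=1)
    den = (x - y).flatten(1).norm(dim=1).clamp_min(1e-12)
    return (num / den).max()

def training_step(model, optimizer, loss_fn, x, targets, lam=0.1):
    # Standard empirical risk plus the Lipschitz penalty term.
    optimizer.zero_grad()
    loss = loss_fn(model(x), targets) + lam * lipschitz_penalty(model, x)
    loss.backward()
    optimizer.step()
    return loss.item()
```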

Okay then let's get started with having a look at the motivation and the kind of problems that I will

speak about. So first of all nowadays everybody is speaking about learning which is a fancy word

and then there are also different paradigms appearing there: there's unsupervised, semi-supervised,

and supervised learning; however, sometimes it's not quite clear what the differences are

between these different paradigms and so I just want to provide you with some toy examples to

make this a bit clearer and yeah we start with unsupervised learning. In unsupervised learning

you're just given some data set denoted by these gray points here and that's all you have so you

don't have any information about the data points you just have the collection of data points

and what you would like to infer from this collection of data points is, most of the time,

a clustering or a separation of the data into meaningful subgroups, let's say, and you

could for instance then get something like this separating this data set into two different classes

one being blue and the other one being red and yeah so what are the techniques to achieve

such clusterings, and the techniques there are basically kind of traditional and have

existed for decades already, and you can use for instance the k-means clustering algorithm

or a stochastic version of that which is called expectation maximization based on Gaussian

mixtures or you could also use a spectral clustering approach which is based on computing

eigenvectors of the graph Laplacian matrix which you can define over such a data set.
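
For concreteness, here is a minimal sketch of the last of these approaches, spectral clustering, assuming a fully connected Gaussian similarity graph built from the point cloud; the parameter sigma and the toy data are illustrative choices, not from the talk.

```python
# A minimal sketch of spectral clustering, assuming a fully connected
# Gaussian similarity graph; sigma and the toy data are illustrative.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(X, n_clusters=2, sigma=1.0):
    # Weight matrix from pairwise Gaussian similarities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W
    # Eigenvectors of the smallest eigenvalues encode the cluster structure.
    _, vecs = eigh(L, subset_by_index=[0, n_clusters - 1])
    # k-means on the spectral embedding yields the final clustering.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)

# Toy example: two Gaussian blobs, as in the illustration on the slide.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
labels = spectral_clustering(X)
```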

All right, so this is sort of the classical data science and machine learning task that you can

start with, unsupervised learning, and this is basically all that I want to say about unsupervised

learning; in the rest of the talk I won't speak about unsupervised learning anymore.

So then there's also semi-supervised learning, where you're also given, let's say, the same data set,

but you have some additional information you might for instance know a few labels of these

data points so you might know okay these three data points they should belong to my blue class

whereas these three data points should belong to the red class, and then you want

to solve the same problem, namely a clustering problem, however taking into account the labeling

that you already got, right, and you can also do this, and you see that you might get a different clustering than in the purely unsupervised case.
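
To make this concrete, here is a minimal sketch of one classical graph-based method for such semi-supervised problems: a Lipschitz (infinity-harmonic) extension of the given labels over a k-nearest-neighbor graph, computed with the midrange iteration. The graph construction, k, and the tolerances are assumptions for illustration and not necessarily the exact formulation used later in the talk.

```python
# A minimal sketch of graph-based Lipschitz learning: extend a few given
# labels to all nodes of a kNN graph via the midrange iteration
#     u_i <- ( max_{j~i} u_j + min_{j~i} u_j ) / 2,
# whose fixed point is the infinity-harmonic (Lipschitz) extension.
# Graph construction, k, and tolerances are illustrative assumptions.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def lipschitz_learning(X, labeled_idx, labeled_vals, k=8, tol=1e-6, max_iter=10_000):
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    A = (A + A.T) > 0                        # symmetrized kNN adjacency
    nbrs = [A[i].nonzero()[1] for i in range(X.shape[0])]
    u = np.zeros(X.shape[0])
    u[labeled_idx] = labeled_vals            # labels stay fixed (constraints)
    free = np.setdiff1d(np.arange(X.shape[0]), labeled_idx)
    for _ in range(max_iter):
        u_old = u.copy()
        for i in free:
            vals = u[nbrs[i]]
            u[i] = 0.5 * (vals.max() + vals.min())   # midrange update
        if np.abs(u - u_old).max() < tol:
            break
    return u

# Toy example: two blobs, three labels per class (+1 = blue, -1 = red).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
u = lipschitz_learning(X, labeled_idx=[0, 1, 2, 50, 51, 52],
                       labeled_vals=[1, 1, 1, -1, -1, -1])
pred = np.sign(u)   # threshold the extension to recover the two classes
```

Thresholding the extension at zero assigns each unlabeled point to the class of the nearer labels, which reproduces the kind of label-aware clustering shown on the slide.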

Part of a video series
Accessible via: Open access
Duration: 01:03:25 min
Recording date: 2021-05-05
Uploaded: 2021-05-05 14:57:00
Language: en-US
