Lecture 10: Machine learning for MR image reconstruction

Okay, so let's get started.

What we're going to talk about today brings together the concepts of everything we have done so far in this lecture. We talked about the reconstruction problem in detail.

We began with linear reconstruction, moved on to nonlinear reconstruction, and covered iterative methods. We talked a little bit about the basics of machine learning last week, and today I'm going to show you something that is a little more cutting edge, more research.

This is less the concept of a textbook lecture, so you can probably just sit back and enjoy, and I'll show you what research is going on at the moment: how to combine and use machine learning methods for the image reconstruction task.

And this is a recap, just to set the stage. This is the problem that you all know really well: we are dealing with a version of this matrix operator here that has the following structure.

We have the image that we want to reconstruct. This is our measured k-space data. This is the term that describes the encoding by our gradient fields, and these are the receive coil sensitivities.

And we can discretize this and write it in matrix form, arriving at this formulation that is well known to you, where we invert this operator that maps from image space to k-space, and you can use all kinds of methods that you already know,

like gradient descent.
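The setup above can be sketched in a few lines. What follows is a minimal NumPy sketch of the discretized forward operator (coil sensitivities, Fourier encoding, sampling mask) and plain gradient descent on the data term; the function names `forward`, `adjoint`, and `gradient_descent` and all shapes are my own illustrative choices, not code from the lecture.

```python
import numpy as np

def forward(x, smaps, mask):
    """A x: weight the image by each coil sensitivity, Fourier transform,
    and keep only the sampled k-space locations."""
    return mask * np.fft.fft2(smaps * x, norm="ortho")

def adjoint(y, smaps, mask):
    """A^H y: zero-fill the missing samples, inverse Fourier transform,
    and combine the coils with the conjugate sensitivities."""
    return np.sum(np.conj(smaps) * np.fft.ifft2(mask * y, norm="ortho"), axis=0)

def gradient_descent(y, smaps, mask, n_iter=50, step=1.0):
    """Minimize ||A x - y||_2^2 by plain gradient descent."""
    x = adjoint(y, smaps, mask)                 # A^H y as the initial guess
    for _ in range(n_iter):
        residual = forward(x, smaps, mask) - y  # mismatch to the measured data
        x = x - step * adjoint(residual, smaps, mask)
    return x
```

For a fully sampled acquisition with a uniform single coil the operator is unitary and the very first adjoint already recovers the image; the interesting case is an undersampled `mask`, where the iterations can only fit the measured lines.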

The conjugate gradient method, the non-uniform FFT, and compressed sensing can all come into this problem.

And we already know that in the case of a fully sampled acquisition this is really an easy problem: it's just an inverse FFT, and everything is well conditioned. But if we undersample, the problem becomes ill-conditioned and underdetermined, we end up with aliasing artifacts, and so we need to provide some kind of a priori knowledge.

Historically, what we started out with, invented in the 90s and 2000s, was parallel imaging, using multiple receive channels to provide this extra information so that we can solve this problem. Compressed sensing, then, was what we covered in the last lecture before the holiday break.

And in the late 2000s came compressed sensing, which specifically uses the condition of sparsity of the image. Illustrated here is just a brain image and its wavelet representation; you can see that it has a substantially smaller number of non-zero coefficients than the original image. So it is compressible, and you can use this a priori information in the reconstruction process.

Added to this is the concept of incoherence, which we achieve by using a sampling trajectory with a pseudo-random distribution, so that our undersampling artifacts have a noise-like appearance. We use those two together in this nonlinear reconstruction where we have two terms: the data term here, which ensures that our current solution is consistent with the measured data that we have acquired,

and the regularizer term, or sparsity term, however you might call it, which enforces that among all the solutions we could obtain, we select the one that minimizes the l1 norm of the transform coefficients, i.e., maximizes the sparsity of our image in our transform domain.
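Written out, this nonlinear reconstruction is min_x ||A x - y||_2^2 + lambda ||Psi x||_1, with the data term and the sparsity regularizer. A standard way to solve it is iterative soft-thresholding (ISTA). Below is a minimal single-coil NumPy sketch in which, purely for brevity, the sparsifying transform Psi is taken as the identity rather than a wavelet; all names and parameter values are illustrative assumptions, not the lecture's code.

```python
import numpy as np

def A(x, mask):
    """Forward operator: Fourier transform, then keep only sampled lines."""
    return mask * np.fft.fft2(x, norm="ortho")

def AH(y, mask):
    """Adjoint: zero-fill the missing lines, then inverse Fourier transform."""
    return np.fft.ifft2(mask * y, norm="ortho")

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1, valid for complex input."""
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def ista(y, mask, lam=0.01, n_iter=50, step=1.0):
    """Minimize ||A x - y||^2 + lam * ||x||_1 by iterative soft-thresholding.
    With a wavelet transform Psi, the threshold would act on Psi x instead."""
    x = AH(y, mask)
    for _ in range(n_iter):
        x = x - step * AH(A(x, mask) - y, mask)  # gradient step on the data term
        x = soft_threshold(x, step * lam)        # proximal step on the l1 term
    return x
```

The gradient step pulls the solution toward data consistency, while the thresholding step shrinks small coefficients to zero, which is exactly the "select the sparsest consistent solution" behavior described above.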

So, are there any questions about this before we move on? This was kind of a three-minute recap of everything we have done so far in the lecture.

This was the state of the art up until 2016, and in 2016 machine learning methods entered the picture for this type of problem.

So, in the MR methodology research world, there is a conference, the annual meeting of the ISMRM, the International Society for Magnetic Resonance in Medicine.

It's a pretty big meeting, five to six thousand people.

Before 2016, nobody was talking about machine learning at all in this context; it was all about compressed sensing, sparsity transforms, low-rank models, these types of things.

And then in 2016, out of all these 6,000 or 7,000 presentations, there were three that hinted at or talked about how you can use machine learning for image reconstruction, and they were really under the radar.

One was from a group in Korea, one was from a group in Shenzhen, China; those were posters, so they were not even oral presentations. And the third one was ours.

And this one got a little bit of an award, and it was an oral presentation, but it was at eight o'clock in the morning on Friday, Friday being the last day of the conference, and on Thursday evening there is the closing party.

So just imagine how many people attended this talk. The session was literally completely empty; the only people in the session were the ten people who gave talks, presenting to each other.

So what this means is that nobody cared about machine learning for this type of image reconstruction problem before 2016.

And then, after this ISMRM, it has really picked up. In 2017, the situation was that suddenly there were several hundred submissions on this particular topic, so they needed to create a dedicated review track to get these papers reviewed, and they had to make two entire sessions dedicated to just this topic.

And then in 2018, the ISMRM put on not just one workshop dedicated to this particular topic but two: one in March at Asilomar in California, and the second one in Washington, DC in October.

And then MICCAI started to have a special workshop dedicated to just this topic as well.

Just three weeks ago there was a conference in Arizona that I attended, and again, half of the topics there were machine learning.

So it's really taking over the field, and I hope that after this lecture you will agree with me on why this is such a powerful method and why it works so well.

Fast forward to 2022: all major manufacturers, GE and Siemens among them, now have something in their product portfolio that resembles machine learning for image reconstruction, even though sometimes what they are doing is image processing and they call it image reconstruction.

I'll show you what many of them actually do.

One of them had real reconstruction in the product from the start, while the other branded post-processing as image reconstruction, but both now have it, and I know there is more planned.

And this was interesting to see: it has really sparked the startup scene. Usually, in the area of MR reconstruction, startups are relatively rare.

And the simple reason is that you have to have a good relationship with the manufacturers, because if you want to really sell a product, you have to run it on the machines; otherwise, it will not gain traction. The big three manufacturers, GE, Philips, and Siemens, have a monopoly between the three of them; they are very protective, and they make it really hard for startups to enter this market.

But this particular combination, machine learning plus MRI equals better images, has a certain attraction for VC capital, and there are really a lot of startups in this area that have been very successful at fundraising.

So, these are just a couple that I know: AIRS Medical is from Korea, and Subtle Medical is a Stanford spin-off.

There are several others; HeartVista, I can show you one of theirs, is a really cool one, and it's interesting to know that they are doing a very specialized application.

So this has really shaken up our field substantially.

Now, I just want to show you a couple of flavors of how you can approach this particular problem. Given what we have covered so far, parallel imaging in image space and in k-space, and compressed sensing, you can probably already appreciate that there is more than one way to approach it.

The same happened with the machine learning techniques: people came at it from different angles, and all of these approaches have their advantages and disadvantages.

So the first thing you can do is start out with the undersampled k-space data.

You do an inverse FFT of this and get an image that has artifacts and low resolution. Again, as we said, we acquire more lines at the center of k-space for coil sensitivity calibration.

This also means that we have an over-proportionate amount of low-frequency information in there.

And then you use a neural network purely for an image processing task: it tries to clean up this image and turn it into an image of higher quality.

So this is a task where you can completely ignore everything about MR physics.
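This first flavor, undersampled k-space in, zero-filled reconstruction, network cleans up the image, can be sketched as follows. The NumPy sketch below only builds the network's input, the artifact-laden zero-filled image; the phantom, the mask, and the names are my own illustrative assumptions, and the CNN itself is omitted.

```python
import numpy as np

def zero_filled_recon(kspace, mask):
    """Inverse FFT of the zero-filled undersampled k-space: this artifact-laden
    image is what the image-space network would take as its input."""
    return np.fft.ifft2(mask * kspace, norm="ortho")

# Toy example: 2x regular undersampling plus a fully sampled center band,
# mimicking the extra low-frequency (calibration) lines mentioned above.
N = 64
img = np.zeros((N, N))
img[24:40, 24:40] = 1.0                    # simple square phantom
kspace = np.fft.fft2(img, norm="ortho")
mask = np.zeros((N, N))
mask[::2, :] = 1.0                         # every other phase-encode line
mask[N // 2 - 4 : N // 2 + 4, :] = 1.0     # densely sampled k-space center
zf = np.abs(zero_filled_recon(kspace, mask))
# zf now shows aliasing; a CNN would be trained to map zf to the clean image,
# treating this purely as image processing.
```

Because the network only ever sees images, this approach needs no knowledge of coils, trajectories, or the forward model, which is both its simplicity and its limitation.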

Access: Open access
Duration: 01:35:05
Recording date: 2023-01-24
Uploaded: 2023-01-24 18:46:03
Language: en-US
Tags: Medical Imaging, MRI, Inverse Problems, Numerical Optimization, Machine Learning, Deep Learning