
Fully convolutional networks for 2D segmentation #186


Closed
wants to merge 100 commits into from

Conversation

StephanieLarocque
Contributor

Only look at doc/fcn_2D_segm.txt for now (other tutorials still under construction).

I know fcn_2D_segm.txt is still missing:

  • Image references
  • Complete references
  • Cleaned-up code from the code/fcn_2D_segm directory (fewer files, remove absolute paths)

*downsampling path* to refer to the network up to *pool5* layer and we will use the term
*upsampling path* to refer to the network composed of all layers after *pool5*. It is worth
noting that the 3 FCN architectures share the same downsampling path, but differ in their
respective upsampling paths.
Member

"upsampling"

Contributor Author

Fixed.



1. **FCN-32** : Directly produces the segmentation map from *pool5*, by using a
transposed convolution layer with stride 32.
Member

What about conv6-7? Are they the transposed convolution you mention? But if there are two of them, which one has stride 32?
Or is it a transposed convolution after conv7?

Contributor Author

Explanations added.
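As an editorial aside (a sketch, not code from the tutorial): the stride-32 figure can be sanity-checked with simple size arithmetic, assuming five 2x2 poolings with stride 2 in the downsampling path; the 64-wide kernel and padding of 16 used below are one plausible choice for the final transposed convolution.

```python
# Spatial-size bookkeeping for FCN-32, using the standard formulas
#   conv/pool:       out = (n + 2*pad - k) // stride + 1
#   transposed conv: out = stride * (n - 1) + k - 2*pad
def conv_out(n, k, stride, pad):
    return (n + 2 * pad - k) // stride + 1

def transposed_conv_out(n, k, stride, pad):
    return stride * (n - 1) + k - 2 * pad

n = 224
for _ in range(5):              # pool1 .. pool5: 2x2 pooling, stride 2
    n = conv_out(n, 2, 2, 0)
print(n)                        # 7: pool5 grid is 32x smaller than the input

# A single stride-32 transposed convolution maps pool5 back to input resolution.
print(transposed_conv_out(n, 64, 32, 16))   # 224
```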

1. **FCN-32** : Directly produces the segmentation map from *pool5*, by using a
transposed convolution layer with stride 32.

2. **FCN-16** : Sums the 2x upsampled prediction from *pool5* with *pool4* and then
Member

How is that 2x upsampling done? Simple duplication of pixels, or another strided transposed convolution? Or something else?

Contributor Author

Explanations added.
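For reference (an illustrative sketch, not the tutorial's actual code): in the FCN paper the 2x upsampling is a learned stride-2 transposed convolution, typically initialized to bilinear interpolation; plain pixel duplication is just the special case of a constant kernel. A naive 1-D numpy version makes the relationship concrete:

```python
import numpy as np

def transposed_conv1d(x, kernel, stride):
    """Naive 1-D transposed convolution: scatter each input value,
    scaled by the kernel, into a zero-initialized output."""
    kernel = np.asarray(kernel)
    out = np.zeros(stride * (len(x) - 1) + len(kernel))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(kernel)] += v * kernel
    return out

x = np.array([1.0, 2.0, 3.0])

# Kernel [1, 1] reduces to plain pixel duplication (nearest neighbour).
print(transposed_conv1d(x, [1.0, 1.0], 2))        # [1. 1. 2. 2. 3. 3.]

# A bilinear kernel [0.5, 1, 0.5] linearly interpolates between pixels.
print(transposed_conv1d(x, [0.5, 1.0, 0.5], 2))
```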

*.pyc
dataset_loaders/datasets
.cache
config.ini
Member

I'm not sure we want to add the whole dataset_loaders repo into this one.

Contributor Author

I removed it.

**Fully Convolutional Networks** (FCNs) owe their name to their architecture, which is
built only from locally connected layers, such as convolution, pooling and upsampling.
Note that no dense layer is used in this kind of architecture. This reduces the number
of parameters and computation time. To obtain a segmentation map (output), segmentation
Member

Also, the network can work regardless of the original image size, since all connections are local, there does not need to be any fixed number of units at any stage.
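To illustrate that point (a toy sketch, not from the tutorial): because every operation is local, the same kernel applies to any input size, and only the output's spatial extent changes.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution; works for any input size."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

k = np.random.rand(3, 3)        # one fixed 3x3 kernel

# The same kernel handles different input sizes; only the output size varies.
print(conv2d_valid(np.random.rand(32, 32), k).shape)    # (30, 30)
print(conv2d_valid(np.random.rand(100, 64), k).shape)   # (98, 62)
```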

Contributor Author

Comment added.

@StephanieLarocque
Contributor Author

StephanieLarocque commented May 1, 2017 via email

@@ -0,0 +1,231 @@
import os
Member

Do we still need that file?

Contributor Author

No, I left the two data_loader.py files from unet and fcn_2D_segm just in case, but they can be removed.

Member

@lamblin lamblin left a comment

Also:

  • remove code/cnn_1D_segm/data_loader/__init__.pyc
  • remove code/cnn_1D_segm/data_loader/cortical_layers.pyc
  • remove code/cnn_1D_segm/data_loader/parallel_loader_1D.pyc
  • you still need to explain that Lasagne and dataset_loaders are requirements, and should be installed (and link to their doc or repo there, add version number).
  • Can you add links to your tutorial in the home page (index.txt), maybe as a list of 3 tutorials on segmentation for medical imagery?
  • Also, I just realized there was something to cite for dataset_loaders: Francesco Visin, Adriana Romero - Dataset loaders: a python library to load and preprocess datasets (BibTeX)

if load_weights:
    if pascal:
        path_weights = '/data/lisatmp4/erraqabi/data/att-segm/' + \
                       'pre_trained_weights/pascal-fcn8s-tvg-dag.mat'
Member

It looks like it is still there.

@@ -0,0 +1,231 @@
import os
Member

Is that file still needed as well?

Contributor Author

No, you can remove it if the test works

Contributor Author

Or I can do it.

of polyps for each pixel. The other subdirectories (/masks3 and /masks4) are,
respectively, for a segmentation task with 3 and 4 classes, but will not be
presented here.

Member

It would be good to have a sentence explaining how to set up the configuration file for dataset_loaders to point to the right thing after the data is downloaded.
For instance, data for the other tutorials will be downloaded in DeepLearningTutorials/data/, so maybe use that as an example, and tell the reader to change the config.ini file to have:

[general]
datasets_local_path = /path/to/DeepLearningTutorials/data
[polyps192]
shared_path = /path/to/DeepLearningTutorials/data/polyps_split7

doc/unet.txt Outdated
++++

The data is from ISBI challenge and can be found `here <http://brainiac2.mit.edu/isbi_challenge/home>`_.

Member

I think there should be instructions as well about what to put / add in dataset_loaders's config file.
I guess the section should be [isbi_em_stacks].

Member

Also, there seems to be data augmentation by default during training.

  • it would be nice to mention that, maybe a sentence or two about what that does and why it matters
  • also, it uses SimpleITK; we should indicate it is an additional dependency.

batch_size=batch_size[0],
seq_per_subset=0,
seq_length=0,
data_augm_kwargs=train_data_augm_kwargs,
Member

I guess it should be defined to {}, as that was the default in data_loader.py.

Member

This would confirm that data_loader.py is not loaded.

Contributor Author

Done.

Contributor Author

@StephanieLarocque StephanieLarocque May 5, 2017

In fact we must keep data_augm_kwargs = data_augmentation here in order to crop all the images to (224, 224) and be able to use a batch size > 1.
data_augmentation is a default argument of the parser at the bottom of the page; it is set with the right data augmentation to use.
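To make the cropping rationale concrete (an illustrative sketch; the real cropping happens inside dataset_loaders via data_augm_kwargs, and the helper below is hypothetical): differently sized images cannot be stacked into one minibatch, but random crops to a fixed size such as (224, 224) can.

```python
import numpy as np

def random_crop(img, size=(224, 224), rng=np.random):
    """Randomly crop an (H, W, C) image to a fixed size so that
    differently sized images can be stacked into one batch."""
    h, w = img.shape[:2]
    ch, cw = size
    top = rng.randint(0, h - ch + 1)
    left = rng.randint(0, w - cw + 1)
    return img[top:top + ch, left:left + cw]

# Two images with different shapes become one (2, 224, 224, 3) batch.
imgs = [np.zeros((300, 400, 3)), np.zeros((256, 224, 3))]
batch = np.stack([random_crop(im) for im in imgs])
print(batch.shape)   # (2, 224, 224, 3)
```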

Member

OK, thanks. I'll run the full thing, then.

return_01c=False,
overlap=0,
use_threads=False,
shuffle_at_each_epoch=shuffle_train,
Member

And shuffle_at_each_epoch=True here (but keep False below).

Contributor Author

Done

We are interested in the outer part of the brain, the cortex.
More precisely, we are interested in segmenting the 6 different layers of the cortex in 3D.
Creating an expertly labelled training dataset for each 2D section (shown in Figure 1) is infeasible. Instead of giving as input a 2D image of one section of the brain, we give as input 1D vectors with information from across the cortex, extracted from smaller portions of manually labelled cortex,
as shown in Figure 2. The actual dataset can be found `here TODO: link the dataset`.
Member

OK. We should probably mention the name of the section for datasets_loader as well (cortical_layers I guess).

The FCN implementation can be found in the following file:

* `fcn1D.py <../code/cnn_1D_segm/fcn1D.py>`_ : Main script. Defines the model.
* `train_fcn1D.py <../code/cnn_1D_segm/train_fcn1D.py>`_ : Training loop
Member

This one seems to be starting fine.
I did not complete the training on a GPU yet, and the small one I had for visualization did not have enough RAM, but it should be OK.

use_threads=True,
shuffle_at_each_epoch=shuffle_train,
return_list=True,
return_0_255=return_0_255)
Member

Here again, I guess one_hot -> False, shuffle_train -> True, return_0_255 -> False.

Contributor Author

Done. (Thanks)

doc/unet.txt Outdated

* `Unet.py <https://github.com/Lasagne/Recipes/blob/master/modelzoo/Unet.py>`_ : Main script. Defines the model

* `train_unet.py <../code/unet/train_unet.py>`_ : Training loop.
Member

This one starts correctly as well (needs more memory than I have available now).

@lamblin lamblin changed the title Fully convoutional networks for 2D segmentation Fully convolutional networks for 2D segmentation Jul 3, 2017
@lamblin
Member

lamblin commented Jun 15, 2018

Finally finished in #204!

@lamblin lamblin closed this Jun 15, 2018