
Commit 2194fda

Author: Erik Ziegler
Commit message: HTML cleanup
1 parent: a5e9928

File tree

6 files changed: +62 additions, -62 deletions


examples/dmri_connectivity_advanced.py
Lines changed: 2 additions & 2 deletions

@@ -74,7 +74,7 @@
 This needs to point to the freesurfer subjects directory (Recon-all must have been run on subj1 from the FSL course data)
 Alternatively, the reconstructed subject data can be downloaded from:
 
-    http://dl.dropbox.com/u/315714/subj1.zip
+    * http://dl.dropbox.com/u/315714/subj1.zip
 
 """
 

@@ -85,7 +85,7 @@
 """
 This needs to point to the fdt folder you can find after extracting
 
-    http://www.fmrib.ox.ac.uk/fslcourse/fsl_course_data2.tar.gz
+    * http://www.fmrib.ox.ac.uk/fslcourse/fsl_course_data2.tar.gz
 
 """
 

examples/dmri_group_connectivity_camino.py
Lines changed: 18 additions & 18 deletions

@@ -40,8 +40,8 @@
 Output data can be visualized in ConnectomeViewer, TrackVis,
 and anything that can view Nifti files.
 
-    ConnectomeViewer: https://github.com/LTS5/connectomeviewer
-    TrackVis: http://trackvis.org/
+    * ConnectomeViewer: https://github.com/LTS5/connectomeviewer
+    * TrackVis: http://trackvis.org/
 
 The fiber data is available in Numpy arrays, and the connectivity matrix
 is also produced as a MATLAB matrix.

@@ -107,10 +107,10 @@ (whitespace-only re-indentation; indentation below is reconstructed)
 
 """
 
 .. warning::
 
     The 'info' dictionary below is used to define the input files. In this case, the diffusion weighted image contains the string 'dwi'.
     The same applies to the b-values and b-vector files, and this must be changed to fit your naming scheme.
 
 """

@@ -119,20 +119,20 @@ (whitespace-only re-indentation)
 bvals=[['subject_id', 'bvals']])
 
 """
 This line creates the processing workflow given the information input about the groups and subjects.
 
 .. seealso::
 
     * nipype/workflows/dmri/mrtrix/group_connectivity.py
     * nipype/workflows/dmri/camino/connectivity_mapping.py
     * :ref:`dmri_connectivity`
 
 """
 
 l1pipeline = create_group_connectivity_pipeline(group_list, group_id, data_dir, subjects_dir, output_dir, info)
 
 """
 Define the parcellation scheme to use.
 """
 
 parcellation_scheme = 'NativeFreesurfer'

@@ -141,21 +141,21 @@ (whitespace-only re-indentation)
 l1pipeline.inputs.connectivity.inputnode.resolution_network_file = cmp_config._get_lausanne_parcellation(parcellation_scheme)['freesurferaparc']['node_information_graphml']
 
 """
 The first level pipeline we have tweaked here is run within the for loop.
 """
 
 l1pipeline.run()
 l1pipeline.write_graph(format='eps', graph2use='flat')
 
 """
 Next we create and run the second-level pipeline. The purpose of this workflow is simple:
 It is used to merge each subject's CFF file into one, so that there is a single file containing
 all of the networks for each group. This can be useful for performing Network Brain Statistics
 using the NBS plugin in ConnectomeViewer.
 
 .. seealso::
 
     http://www.connectomeviewer.org/documentation/users/tutorials/tut_nbs.html
 
 """
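Most of the hunks above only re-indent reST directives inside the example docstrings. For context, Sphinx/docutils only recognizes a directive's body when it is indented relative to the `..` marker and separated from surrounding text by blank lines. A minimal, correctly formed sketch (content paraphrased from the docstrings above; the exact indentation in the commit is not preserved by this capture):

```rst
.. warning::

    The 'info' dictionary below is used to define the input files;
    adjust the substring patterns to fit your naming scheme.

.. seealso::

    * nipype/workflows/dmri/camino/connectivity_mapping.py
    * :ref:`dmri_connectivity`
```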

examples/dmri_group_connectivity_mrtrix.py
Lines changed: 24 additions & 24 deletions

@@ -105,9 +105,9 @@ (whitespace-only re-indentation; indentation below is reconstructed)
 
 """
 
 .. warning::
     The 'info' dictionary below is used to define the input files. In this case, the diffusion weighted image contains the string 'dwi'.
     The same applies to the b-values and b-vector files, and this must be changed to fit your naming scheme.
 
 
 """

@@ -116,23 +116,23 @@ (whitespace-only re-indentation)
 bvals=[['subject_id', 'bvals']])
 
 """
 This line creates the processing workflow given the information input about the groups and subjects.
 
 .. seealso::
 
     * nipype/workflows/dmri/mrtrix/group_connectivity.py
     * nipype/workflows/dmri/mrtrix/connectivity_mapping.py
     * :ref:`dmri_connectivity_advanced
 
 """
 
 l1pipeline = create_group_connectivity_pipeline(group_list, group_id, data_dir, subjects_dir, output_dir, info)
 
 """
 This is used to demonstrate the ease through which different parameters can be set for each group.
 These values relate to the absolute threshold used on the fractional anisotropy map. This is done
 in order to identify single-fiber voxels. In brains with more damage, however, it may be necessary
 to reduce the threshold, since their brains are have lower average fractional anisotropy values.
 """
 
 if group_id == 'parkinsons':

@@ -141,16 +141,16 @@ (whitespace-only re-indentation)
 l1pipeline.inputs.connectivity.mapping.threshold_FA.absolute_threshold_value = 0.7
 
 """
 These lines relate to inverting the b-vectors in the encoding file, and setting the
 maximum harmonic order of the pre-tractography spherical deconvolution step. This is
 done to show how to set inputs that will affect both groups.
 """
 
 l1pipeline.inputs.connectivity.mapping.fsl2mrtrix.invert_y = True
 l1pipeline.inputs.connectivity.mapping.csdeconv.maximum_harmonic_order = 6
 
 """
 Define the parcellation scheme to use.
 """
 
 parcellation_name = 'scale500'

@@ -160,27 +160,27 @@ (whitespace-only re-indentation)
 l1pipeline.inputs.connectivity.mapping.inputnode_within.resolution_network_file = cmp_config._get_lausanne_parcellation('Lausanne2008')[parcellation_name]['node_information_graphml']
 
 """
 Set the maximum number of tracks to obtain
 """
 
 l1pipeline.inputs.connectivity.mapping.probCSDstreamtrack.desired_number_of_tracks = 100000
 
 """
 The first level pipeline we have tweaked here is run within the for loop.
 """
 
 l1pipeline.run()
 l1pipeline.write_graph(format='eps', graph2use='flat')
 
 """
 Next we create and run the second-level pipeline. The purpose of this workflow is simple:
 It is used to merge each subject's CFF file into one, so that there is a single file containing
 all of the networks for each group. This can be useful for performing Network Brain Statistics
 using the NBS plugin in ConnectomeViewer.
 
 .. seealso::
 
     http://www.connectomeviewer.org/documentation/users/tutorials/tut_nbs.html
 
 """
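The docstring in this file describes setting a different FA threshold per group (the script branches on `if group_id == 'parkinsons'` before assigning `absolute_threshold_value`). The same pattern can be factored as a plain lookup table. A nipype-free sketch of that idea — the group names and threshold values here are illustrative, not taken verbatim from the script:

```python
# Sketch: choose a pipeline parameter per subject group, mirroring the
# example script's if/else on group_id. The 0.5/0.7 values are hypothetical.
GROUP_FA_THRESHOLD = {
    "parkinsons": 0.5,  # damaged brains tend to have lower FA, so use a lower cutoff
    "controls": 0.7,
}

def fa_threshold(group_id, default=0.7):
    """Return the absolute FA threshold to use for a given group."""
    return GROUP_FA_THRESHOLD.get(group_id, default)

for group_id in ("controls", "parkinsons"):
    # In the real script this value would be assigned to
    # l1pipeline.inputs.connectivity.mapping.threshold_FA.absolute_threshold_value
    print(group_id, fa_threshold(group_id))
```

The lookup-table form scales better than chained `if` branches once more than two groups are involved.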

examples/dmri_mrtrix_dti.py
Lines changed: 16 additions & 16 deletions

@@ -7,14 +7,14 @@
 Introduction
 ============
 
-This script, mrtrix_dti_tutorial.py, demonstrates the ability to perform advanced diffusion analysis
+This script, dmri_mrtrix_dti.py, demonstrates the ability to perform advanced diffusion analysis
 in a Nipype pipeline.
 
 python dmri_mrtrix_dti.py
 
 We perform this analysis using the FSL course data, which can be acquired from here:
 
-    http://www.fmrib.ox.ac.uk/fslcourse/fsl_course_data2.tar.gz
+    * http://www.fmrib.ox.ac.uk/fslcourse/fsl_course_data2.tar.gz
 
 Import necessary modules from nipype.
 """

@@ -30,7 +30,7 @@
 """
 This needs to point to the fdt folder you can find after extracting
 
-    http://www.fmrib.ox.ac.uk/fslcourse/fsl_course_data2.tar.gz
+    * http://www.fmrib.ox.ac.uk/fslcourse/fsl_course_data2.tar.gz
 
 """

@@ -70,18 +70,18 @@ (whitespace-only re-indentation; indentation below is reconstructed)
 inputnode = pe.Node(interface=util.IdentityInterface(fields=["dwi", "bvecs", "bvals"]), name="inputnode")
 
 """
 Diffusion processing nodes
 --------------------------
 
 .. seealso::
 
     dmri_connectivity_advanced.py
         Tutorial with further detail on using MRtrix tractography for connectivity analysis
 
     http://www.brain.org.au/software/mrtrix/index.html
         MRtrix's online documentation
 
 b-values and b-vectors stored in FSL's format are converted into a single encoding file for MRTrix.
 """
 
 fsl2mrtrix = pe.Node(interface=mrtrix.FSL2MRTrix(),name='fsl2mrtrix')

@@ -136,12 +136,12 @@ (whitespace-only re-indentation)
 threshold_wmmask.inputs.absolute_threshold_value = 0.4
 
 """
 The spherical deconvolution step depends on the estimate of the response function
 in the highly anisotropic voxels we obtained above.
 
 .. warning::
 
     For damaged or pathological brains one should take care to lower the maximum harmonic order of these steps.
 
 """

@@ -242,7 +242,7 @@
 """
 
 dwiproc = pe.Workflow(name="dwiproc")
-dwiproc.base_dir = os.path.abspath('mrtrix_dti_tutorial')
+dwiproc.base_dir = os.path.abspath('dmri_mrtrix_dti')
 dwiproc.connect([
     (infosource,datasource,[('subject_id', 'subject_id')]),
     (datasource,tractography,[('dwi','inputnode.dwi'),
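The docstring in this file notes that b-values and b-vectors stored in FSL's format are converted into a single encoding file for MRtrix (the `fsl2mrtrix` node wrapping nipype's `FSL2MRTrix` interface). FSL keeps gradient directions as three rows (x, y, z over all volumes) and b-values as one row; MRtrix expects one `x y z b` row per volume. A stdlib-only sketch of that conversion — this is an illustration of the file formats, not nipype's implementation, and the toy two-volume strings are made up:

```python
def fsl_to_mrtrix_encoding(bvecs_text, bvals_text, invert_y=False):
    """Merge FSL-style bvecs (3 rows x N columns) and bvals (1 row x N columns)
    into MRtrix's N-row 'x y z b' gradient encoding text."""
    rows = [line.split() for line in bvecs_text.strip().splitlines()]
    xs, ys, zs = ([float(v) for v in row] for row in rows)
    bvals = [float(v) for v in bvals_text.split()]
    if invert_y:
        # Mirrors the invert_y flag the group-connectivity example sets on fsl2mrtrix.
        ys = [-y for y in ys]
    return "\n".join(f"{x} {y} {z} {b}" for x, y, z, b in zip(xs, ys, zs, bvals))

# Toy example: one b=0 volume followed by one diffusion-weighted volume along x.
bvecs = "0 1\n0 0\n0 0"
bvals = "0 1000"
print(fsl_to_mrtrix_encoding(bvecs, bvals))
```

Whether the y-axis (or another axis) must be inverted depends on the scanner's gradient convention, which is why the example scripts expose it as an input rather than hard-coding it.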

nipype/workflows/dmri/camino/diffusion.py
Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@
 
 def create_camino_dti_pipeline(name="dtiproc"):
     """Creates a pipeline that does the same diffusion processing as in the
-    camino_dti_tutorial example script. Given a diffusion-weighted image,
+    :ref:`dmri_camino_dti` example script. Given a diffusion-weighted image,
     b-values, and b-vectors, the workflow will return the tractography
     computed from diffusion tensors and from PICo probabilistic tractography.
 

nipype/workflows/dmri/mrtrix/diffusion.py
Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@
 
 def create_mrtrix_dti_pipeline(name="dtiproc", tractography_type = 'probabilistic'):
     """Creates a pipeline that does the same diffusion processing as in the
-    :ref:`example_mrtrix_dti` example script. Given a diffusion-weighted image,
+    :ref:`dmri_mrtrix_dti` example script. Given a diffusion-weighted image,
     b-values, and b-vectors, the workflow will return the tractography
     computed from spherical deconvolution and probabilistic streamline tractography
 
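The last two files replace hard-coded script names in workflow docstrings with Sphinx `:ref:` cross-references. For a `:ref:` role like `:ref:`dmri_mrtrix_dti`` to resolve, the documentation page it points at must carry an explicit label before its title. A minimal reST sketch (the label name comes from this commit; the title text is illustrative, since the actual page is generated by the nipype doc build):

```rst
.. _dmri_mrtrix_dti:

dmri_mrtrix_dti example
=======================

Elsewhere in the docs: see :ref:`dmri_mrtrix_dti` for the full example.
```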
