Commit 8bb5456

ENH: add model specification notebook
1 parent dd064a9 commit 8bb5456

File tree

1 file changed: +214 −0 lines

@@ -0,0 +1,214 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Model Specification for 1st-Level fMRI Analysis\n",
    "\n",
    "Nipype also provides interfaces to create a first-level model for an fMRI analysis. Such a model is needed to specify the study-specific information, such as the **conditions**, their **onsets** and their **durations**. For more information, make sure to check out [Model Specification](http://nipype.readthedocs.io/en/latest/users/model_specification.html) and [nipype.algorithms.modelgen](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Simple Example\n",
    "\n",
    "Let's consider a simple experiment with three different conditions: ``'faces'``, ``'houses'`` and ``'scrambled pix'``. Each of those three conditions has different stimulus onsets, but all stimuli are presented for a duration of 3 seconds.\n",
    "\n",
    "So to summarize:\n",
    "\n",
    "    conditions = ['faces', 'houses', 'scrambled pix']\n",
    "    onsets = [[0, 30, 60, 90],\n",
    "              [10, 40, 70, 100],\n",
    "              [20, 50, 80, 110]]\n",
    "    durations = [[3], [3], [3]]\n",
    "\n",
    "Creating this model with Nipype is almost as simple as that. The only missing step is to put all of this into a ``Bunch`` object. This can be done as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from nipype.interfaces.base import Bunch\n",
    "\n",
    "conditions = ['faces', 'houses', 'scrambled pix']\n",
    "onsets = [[0, 30, 60, 90],\n",
    "          [10, 40, 70, 100],\n",
    "          [20, 50, 80, 110]]\n",
    "durations = [[3], [3], [3]]\n",
    "\n",
    "subject_info = Bunch(conditions=conditions,\n",
    "                     onsets=onsets,\n",
    "                     durations=durations)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It's also possible to specify additional regressors. For this you need to additionally specify:\n",
    "\n",
    "- **``regressors``**: list of regressors that you want to include in the model (each regressor must have one value per volume of the functional run)\n",
    "- **``regressor_names``**: names of the regressors that you want to include"
   ]
  },
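  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch, this is how such regressors could be passed to the ``Bunch`` object. Note that the regressor names and values below are made up for illustration only, and assume a (hypothetical) functional run with just 5 volumes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from nipype.interfaces.base import Bunch\n",
    "\n",
    "# Hypothetical nuisance regressors, one value per volume (5 volumes assumed)\n",
    "subject_info = Bunch(conditions=['faces', 'houses', 'scrambled pix'],\n",
    "                     onsets=[[0, 30, 60, 90],\n",
    "                             [10, 40, 70, 100],\n",
    "                             [20, 50, 80, 110]],\n",
    "                     durations=[[3], [3], [3]],\n",
    "                     regressor_names=['linear_drift', 'motion_x'],\n",
    "                     regressors=[[0.0, 0.25, 0.5, 0.75, 1.0],\n",
    "                                 [0.1, -0.2, 0.0, 0.3, -0.1]])"
   ]
  },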
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example based on dataset\n",
    "\n",
    "Now for a more realistic example, let's look at a TSV events file from our tutorial dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "deletable": true,
    "editable": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "onset\tduration\ttrial_type\tresponse_time\tcorrectness\tStimVar\tRsponse\tStimulus\tcond\r\n",
      "0.0\t2.0\tincongruent_correct\t1.095\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "10.0\t2.0\tincongruent_correct\t0.988\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "20.0\t2.0\tcongruent_correct\t0.591\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "30.0\t2.0\tcongruent_correct\t0.499\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "40.0\t2.0\tincongruent_correct\t0.719\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "52.0\t2.0\tcongruent_correct\t0.544\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "64.0\t2.0\tcongruent_correct\t0.436\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "76.0\t2.0\tincongruent_correct\t0.47\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "88.0\t2.0\tcongruent_correct\t0.409\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "102.0\t2.0\tincongruent_correct\t0.563\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "116.0\t2.0\tcongruent_correct\t0.493\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "130.0\t2.0\tcongruent_correct\t0.398\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "140.0\t2.0\tcongruent_correct\t0.466\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "150.0\t2.0\tincongruent_correct\t0.518\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "164.0\t2.0\tincongruent_correct\t0.56\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "174.0\t2.0\tincongruent_correct\t0.533\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "184.0\t2.0\tcongruent_correct\t0.439\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "196.0\t2.0\tcongruent_correct\t0.458\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "208.0\t2.0\tincongruent_correct\t0.734\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "220.0\t2.0\tincongruent_correct\t0.479\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "232.0\t2.0\tincongruent_correct\t0.538\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "246.0\t2.0\tcongruent_correct\t0.54\tcorrect\t1\t1\tcongruent\tcond001\r\n",
      "260.0\t2.0\tincongruent_correct\t0.622\tcorrect\t2\t1\tincongruent\tcond003\r\n",
      "274.0\t2.0\tcongruent_correct\t0.488\tcorrect\t1\t1\tcongruent\tcond001\r\n"
     ]
    }
   ],
   "source": [
    "!cat /data/ds102/sub-01/func/sub-01_task-flanker_run-1_events.tsv"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, the only things that we need to specify our model are the onsets and the stimulus types, i.e. **column 0** and **column 5 or 7**. Those we can get with the following command:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "filename = '/data/ds102/sub-01/func/sub-01_task-flanker_run-1_events.tsv'\n",
    "trailinfo = np.genfromtxt(filename, delimiter='\\t', dtype=None, skip_header=1)\n",
    "trailinfo = [[t[0], t[7]] for t in trailinfo]\n",
    "trailinfo"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we can use the onsets, we first need to split them into the two conditions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "onset1 = []\n",
    "onset2 = []\n",
    "\n",
    "for t in trailinfo:\n",
    "    if 'incongruent' in t[1]:\n",
    "        onset2.append(t[0])\n",
    "    else:\n",
    "        onset1.append(t[0])\n",
    "\n",
    "print(onset1)\n",
    "print(onset2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The last thing we now need to do is to put this into a ``Bunch`` object and we're done:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from nipype.interfaces.base import Bunch\n",
    "\n",
    "conditions = ['congruent', 'incongruent']\n",
    "onsets = [onset1, onset2]\n",
    "durations = [[2], [2]]\n",
    "\n",
    "subject_info = Bunch(conditions=conditions,\n",
    "                     onsets=onsets,\n",
    "                     durations=durations)"
   ]
  }
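,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of where ``subject_info`` goes next: it is typically passed to ``SpecifyModel`` from [nipype.algorithms.modelgen](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html). The TR and high-pass cutoff values below are assumptions for illustration, not values taken from this dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from nipype.algorithms.modelgen import SpecifyModel\n",
    "\n",
    "modelspec = SpecifyModel(subject_info=subject_info,\n",
    "                         input_units='secs',\n",
    "                         time_repetition=2.0,  # assumed TR in seconds\n",
    "                         high_pass_filter_cutoff=128.)  # assumed cutoff"
   ]
  }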
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
