@@ -101,7 +101,7 @@ Raw input encoding
A token corresponds to a word. Each token in the ATIS vocabulary is associated with an index. Each sentence is an
array of indexes (``int32``). Then, each set (train, valid, test) is a list of arrays of indexes. A Python
- dictionnary is defined for mapping the space of indexes to the space of words.
+ dictionary is defined for mapping the space of indexes to the space of words.
>>> sentence
array([383, 189, 13, 193, 208, 307, 195, 502, 260, 539,
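The index-to-word decoding described above can be sketched as follows. This is a toy illustration: `idx2word` and the words chosen are made up for the example, not the tutorial's actual dictionary or the real ATIS vocabulary.

```python
# Hypothetical sketch: decoding an index-encoded sentence back to words.
# `idx2word` is an illustrative stand-in for the mapping dictionary the
# tutorial loads; the real ATIS vocabulary is much larger.
import numpy as np

idx2word = {383: "please", 189: "show", 13: "me", 193: "flights"}  # toy mapping

sentence = np.array([383, 189, 13, 193], dtype="int32")
words = [idx2word[i] for i in sentence]
print(" ".join(words))  # -> please show me flights
```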
@@ -224,7 +224,7 @@ The **parameters** of the E-RNN to be learned are:
* the word embeddings (real-valued matrix)
* the initial hidden state (real-valued vector)
* two matrices for the linear projection of the input at time ``t`` and of the previous hidden state at time ``t-1``
- * (optionnal ) bias. `Recommendation <http://en.wikipedia.org/wiki/Occam's_razor>`_: don't use it.
+ * (optional) bias. `Recommendation <http://en.wikipedia.org/wiki/Occam's_razor>`_: don't use it.
* softmax classification layer on top
The **hyperparameters** define the whole architecture:
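The parameter list above can be sketched in NumPy as below. The names (``emb``, ``wx``, ``wh``, ``w``, ``h0``), the sizes, and the sigmoid step are illustrative assumptions, not the tutorial's actual Theano variables.

```python
# Illustrative E-RNN parameter shapes; all names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
nv, de, nh, nc = 1000, 50, 100, 127   # vocab size, embedding dim, hidden dim, #labels

emb = 0.2 * rng.uniform(-1.0, 1.0, (nv, de))  # word embeddings (real-valued matrix)
h0  = np.zeros(nh)                            # initial hidden state (real-valued vector)
wx  = 0.2 * rng.uniform(-1.0, 1.0, (de, nh))  # projects the input at time t
wh  = 0.2 * rng.uniform(-1.0, 1.0, (nh, nh))  # projects the hidden state at time t-1
w   = 0.2 * rng.uniform(-1.0, 1.0, (nh, nc))  # softmax classification layer on top
# bias left out, per the recommendation above

def step(x_t, h_prev):
    """One recurrent step: h_t = sigmoid(x_t @ wx + h_prev @ wh), then softmax."""
    h_t = 1.0 / (1.0 + np.exp(-(x_t @ wx + h_prev @ wh)))
    s_t = np.exp(h_t @ w)
    return h_t, s_t / s_t.sum()               # hidden state, label probabilities

h, p = step(emb[383], h0)                      # one step on one embedded token
```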
@@ -282,7 +282,7 @@ the true labels and compute some metrics. In this `repo
<http://www.cnts.ua.ac.be/conll2000/chunking/conlleval.txt>`_ Perl script.
It's not trivial to compute those metrics due to the `Inside Outside Beginning
(IOB) <http://en.wikipedia.org/wiki/Inside_Outside_Beginning>`_ representation
- i.e. a prediction is considered correct if the word-beginnin **and** the
+ i.e. a prediction is considered correct if the word-beginning **and** the
word-inside **and** the word-outside predictions are **all** correct.
Note that the extension is `txt` and you will have to change it to `pl`.
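The chunk-level comparison that makes these metrics non-trivial can be sketched as below: a chunk counts as correct only when all of its IOB tags match. The function and the label names are illustrative, not `conlleval`'s actual code.

```python
# Sketch of IOB chunk-level scoring: a predicted chunk is correct only if
# its begin and inside tags all agree with the gold chunk's span and label.

def chunks(tags):
    """Extract (label, start, end) spans from an IOB tag sequence."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last chunk
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != label):
            if start is not None:
                spans.append((label, start, i))
            start, label = (i, tag[2:]) if tag != "O" else (None, None)
    return set(spans)

gold = ["B-fromloc", "I-fromloc", "O", "B-toloc"]
pred = ["B-fromloc", "O",         "O", "B-toloc"]
correct = chunks(gold) & chunks(pred)
# only the toloc chunk matches: the fromloc chunk fails because its
# word-inside tag differs, even though its word-beginning tag is right
```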