
Commit ca7d215

dallascard committed: fixed a few more typos
1 parent: 9e278c1

File tree

3 files changed: +9 -9 lines changed

code/logistic_sgd.py

Lines changed: 3 additions & 3 deletions
@@ -94,11 +94,11 @@ def __init__(self, input, n_in, n_out):
         # symbolic expression for computing the matrix of class-membership
         # probabilities
         # Where:
-        # W is a matrix where column-k represent the separation hyper plain for
+        # W is a matrix where column-k represent the separation hyperplane for
         # class-k
         # x is a matrix where row-j represents input training sample-j
-        # b is a vector where element-k represent the free parameter of hyper
-        # plain-k
+        # b is a vector where element-k represent the free parameter of
+        # hyperplane-k
         self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

         # symbolic description of how to compute prediction as class whose
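For readers skimming the diff, a rough NumPy sketch (illustration only, with made-up shapes; the tutorial itself builds Theano symbolic expressions) of the computation these comments describe, where column k of W is the separating hyperplane for class k and element k of b is its free parameter:

import numpy as np

# Hypothetical shapes, purely for illustration (not taken from the tutorial's data).
n_in, n_out = 4, 3
x = np.random.randn(5, n_in)           # 5 input samples, one per row
W = np.random.randn(n_in, n_out)       # column k: separating hyperplane for class k
b = np.zeros(n_out)                    # element k: free parameter of hyperplane k

scores = x.dot(W) + b                  # unnormalized class scores
exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))
p_y_given_x = exp_scores / exp_scores.sum(axis=1, keepdims=True)   # row-wise softmax
y_pred = p_y_given_x.argmax(axis=1)    # prediction: class with highest probability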

code/mlp.py

Lines changed: 3 additions & 3 deletions
@@ -296,9 +296,9 @@ def test_mlp(learning_rate=0.01, L1_reg=0.00, L2_reg=0.0001, n_epochs=1000,
     # specify how to update the parameters of the model as a list of
     # (variable, update expression) pairs

-    # given two list the zip A = [a1, a2, a3, a4] and B = [b1, b2, b3, b4] of
-    # same length, zip generates a list C of same size, where each element
-    # is a pair formed from the two lists :
+    # given two lists of the same length, A = [a1, a2, a3, a4] and
+    # B = [b1, b2, b3, b4], zip generates a list C of same size, where each
+    # element is a pair formed from the two lists :
     # C = [(a1, b1), (a2, b2), (a3, b3), (a4, b4)]
     updates = [
         (param, param - learning_rate * gparam)
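A minimal plain-Python illustration (hypothetical parameter values, not the model's actual shared variables) of the (variable, update expression) pairs that the rewritten comment describes zip producing:

# Made-up parameters and gradients, just to show the pairing zip produces.
learning_rate = 0.01
params = [1.0, -2.0, 0.5]
gparams = [0.3, -0.1, 0.2]

updates = [
    (param, param - learning_rate * gparam)
    for param, gparam in zip(params, gparams)
]
# updates == [(1.0, 0.997), (-2.0, -1.999), (0.5, 0.498)]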

doc/rnnslu.txt

Lines changed: 3 additions & 3 deletions
@@ -101,7 +101,7 @@ Raw input encoding

 A token corresponds to a word. Each token in the ATIS vocabulary is associated to an index. Each sentence is a
 array of indexes (``int32``). Then, each set (train, valid, test) is a list of arrays of indexes. A python
-dictionnary is defined for mapping the space of indexes to the space of words.
+dictionary is defined for mapping the space of indexes to the space of words.

 >>> sentence
 array([383, 189, 13, 193, 208, 307, 195, 502, 260, 539,
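A tiny illustrative snippet of the mapping described in this hunk, using a hypothetical index2word dictionary with made-up words (the real ATIS mapping differs):

import numpy as np

# Hypothetical index-to-word mapping, purely for illustration.
index2word = {383: 'please', 189: 'show', 13: 'me', 193: 'flights'}
sentence = np.array([383, 189, 13, 193], dtype='int32')   # one sentence = array of indexes

words = [index2word[int(idx)] for idx in sentence]
print(words)   # ['please', 'show', 'me', 'flights']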
@@ -224,7 +224,7 @@ The **parameters** of the E-RNN to be learned are:
 * the word embeddings (real-valued matrix)
 * the initial hidden state (real-value vector)
 * two matrices for the linear projection of the input ``t`` and the previous hidden layer state ``t-1``
-* (optionnal) bias. `Recommendation <http://en.wikipedia.org/wiki/Occam's_razor>`_: don't use it.
+* (optional) bias. `Recommendation <http://en.wikipedia.org/wiki/Occam's_razor>`_: don't use it.
 * softmax classification layer on top

 The **hyperparameters** define the whole architecture:
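As a sketch of the parameter list in this hunk, a hypothetical NumPy initialization with made-up sizes (not the tutorial's actual code), one array per listed parameter:

import numpy as np

# Made-up sizes, for illustration only.
vocab_size, emb_dim, hidden_dim, n_classes = 500, 50, 100, 20
rng = np.random.RandomState(0)

embeddings = 0.2 * rng.uniform(-1.0, 1.0, (vocab_size, emb_dim))   # word embeddings
h0 = np.zeros(hidden_dim)                                          # initial hidden state
Wx = 0.2 * rng.uniform(-1.0, 1.0, (emb_dim, hidden_dim))           # projects the input at time t
Wh = 0.2 * rng.uniform(-1.0, 1.0, (hidden_dim, hidden_dim))        # projects the hidden state at t-1
W_out = 0.2 * rng.uniform(-1.0, 1.0, (hidden_dim, n_classes))      # softmax classification layer
# (optional) recurrence bias left out, following the recommendation above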
@@ -282,7 +282,7 @@ the true labels and compute some metrics. In this `repo
 <http://www.cnts.ua.ac.be/conll2000/chunking/conlleval.txt>`_ PERL script.
 It's not trivial to compute those metrics due to the `Inside Outside Beginning
 (IOB) <http://en.wikipedia.org/wiki/Inside_Outside_Beginning>`_ representation
-i.e. a prediction is considered correct if the word-beginnin **and** the
+i.e. a prediction is considered correct if the word-beginning **and** the
 word-inside **and** the word-outside predictions are **all** correct.
 Note that the extension is `txt` and you will have to change it to `pl`.

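Not the conlleval PERL script, but a simplified Python illustration of the chunk-level idea this hunk describes: a chunk counts as correct only when every one of its IOB tags is predicted correctly.

def iob_chunks(tags):
    """Extract (start, end, type) spans from IOB tags such as 'B-LOC', 'I-LOC', 'O' (simplified)."""
    chunks = []
    start = ctype = None
    for i, tag in enumerate(tags + ['O']):           # trailing 'O' closes any open chunk
        begins = tag.startswith('B-') or (tag.startswith('I-') and tag[2:] != ctype)
        if start is not None and (tag == 'O' or begins):
            chunks.append((start, i, ctype))
            start = ctype = None
        if begins:
            start, ctype = i, tag[2:]
    return set(chunks)

# Made-up tags: the second prediction is wrong, so the whole LOC chunk is counted as wrong.
true = ['B-LOC', 'I-LOC', 'O', 'B-DATE']
pred = ['B-LOC', 'O',     'O', 'B-DATE']

correct = iob_chunks(true) & iob_chunks(pred)
precision = len(correct) / len(iob_chunks(pred))     # 1 of 2 predicted chunks is correct
recall = len(correct) / len(iob_chunks(true))        # 1 of 2 true chunks is recovered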