Commit a16ed9e

Merge pull request lisa-lab#102 from dallascard/minor_suggestions

Minor suggestions

2 parents c1a1155 + ca7d215

4 files changed: +12 -12 lines changed


code/logistic_sgd.py
3 additions & 3 deletions

@@ -94,11 +94,11 @@ def __init__(self, input, n_in, n_out):
         # symbolic expression for computing the matrix of class-membership
         # probabilities
         # Where:
-        # W is a matrix where column-k represent the separation hyper plain for
+        # W is a matrix where column-k represent the separation hyperplane for
         # class-k
         # x is a matrix where row-j represents input training sample-j
-        # b is a vector where element-k represent the free parameter of hyper
-        # plain-k
+        # b is a vector where element-k represent the free parameter of
+        # hyperplane-k
         self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

         # symbolic description of how to compute prediction as class whose
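The expression touched by this hunk, ``self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)``, can be sketched without Theano. The NumPy version below is only an illustration of how the columns of ``W`` and the elements of ``b`` turn inputs into class-membership probabilities; all names and sizes here are invented for the example.

```python
import numpy as np

def softmax_rows(z):
    # subtract the row maximum for numerical stability, then normalize
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_in, n_out = 4, 3                      # hypothetical layer sizes
x = rng.normal(size=(5, n_in))          # row-j is input training sample-j
W = rng.normal(size=(n_in, n_out))      # column-k is the hyperplane for class-k
b = np.zeros(n_out)                     # element-k is the free parameter of hyperplane-k

p_y_given_x = softmax_rows(x @ W + b)   # one probability row per sample
y_pred = p_y_given_x.argmax(axis=1)     # prediction: class with highest probability
```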

code/mlp.py
3 additions & 3 deletions

@@ -296,9 +296,9 @@ def test_mlp(learning_rate=0.01, L1_reg=0.00, L2_reg=0.0001, n_epochs=1000,
     # specify how to update the parameters of the model as a list of
     # (variable, update expression) pairs

-    # given two list the zip A = [a1, a2, a3, a4] and B = [b1, b2, b3, b4] of
-    # same length, zip generates a list C of same size, where each element
-    # is a pair formed from the two lists :
+    # given two lists of the same length, A = [a1, a2, a3, a4] and
+    # B = [b1, b2, b3, b4], zip generates a list C of same size, where each
+    # element is a pair formed from the two lists :
     # C = [(a1, b1), (a2, b2), (a3, b3), (a4, b4)]
     updates = [
         (param, param - learning_rate * gparam)
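The ``zip`` behaviour described in the rewritten comment can be checked directly, and the same pairing builds the ``(variable, update expression)`` list shown in the diff. The parameter and gradient values below are made up for the example.

```python
A = ['a1', 'a2', 'a3', 'a4']
B = ['b1', 'b2', 'b3', 'b4']
C = list(zip(A, B))
# C == [('a1', 'b1'), ('a2', 'b2'), ('a3', 'b3'), ('a4', 'b4')]

# the same pattern yields the SGD update pairs:
learning_rate = 0.01
params = [1.0, 2.0]       # hypothetical parameter values
gparams = [0.5, -0.25]    # hypothetical gradients
updates = [(param, param - learning_rate * gparam)
           for param, gparam in zip(params, gparams)]
# each pair is (param, param - learning_rate * gparam)
```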

doc/gettingstarted.txt
3 additions & 3 deletions

@@ -104,7 +104,7 @@ MNIST Dataset
 Since now the data is in one variable, and a minibatch is defined as a
 slice of that variable, it comes more natural to define a minibatch by
 indicating its index and its size. In our setup the batch size stays constant
-through out the execution of the code, therefore a function will actually
+throughout the execution of the code, therefore a function will actually
 require only the index to identify on which datapoints to work.
 The code below shows how to store your data and how to
 access a minibatch:
@@ -141,8 +141,8 @@ MNIST Dataset

     # accessing the third minibatch of the training set

-    data = train_set_x[2 * 500: 3 * 500]
-    label = train_set_y[2 * 500: 3 * 500]
+    data = train_set_x[2 * batch_size: 3 * batch_size]
+    label = train_set_y[2 * batch_size: 3 * batch_size]


 The data has to be stored as floats on the GPU ( the right
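The corrected slice generalizes the hard-coded 500 to any ``batch_size``. A minimal NumPy sketch of the index-only minibatch access, with dataset contents invented for the example:

```python
import numpy as np

train_set_x = np.arange(2000, dtype=np.float32).reshape(2000, 1)  # hypothetical data
train_set_y = np.arange(2000, dtype=np.int32)                     # hypothetical labels
batch_size = 500                                                  # constant for the whole run

def get_minibatch(index):
    # a minibatch is just a slice of the full dataset, so since
    # batch_size is fixed, only the index is needed
    data = train_set_x[index * batch_size: (index + 1) * batch_size]
    label = train_set_y[index * batch_size: (index + 1) * batch_size]
    return data, label

data, label = get_minibatch(2)   # the third minibatch
```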

doc/rnnslu.txt
3 additions & 3 deletions

@@ -101,7 +101,7 @@ Raw input encoding

 A token corresponds to a word. Each token in the ATIS vocabulary is associated to an index. Each sentence is a
 array of indexes (``int32``). Then, each set (train, valid, test) is a list of arrays of indexes. A python
-dictionnary is defined for mapping the space of indexes to the space of words.
+dictionary is defined for mapping the space of indexes to the space of words.

 >>> sentence
 array([383, 189, 13, 193, 208, 307, 195, 502, 260, 539,
@@ -224,7 +224,7 @@ The **parameters** of the E-RNN to be learned are:
 * the word embeddings (real-valued matrix)
 * the initial hidden state (real-value vector)
 * two matrices for the linear projection of the input ``t`` and the previous hidden layer state ``t-1``
-* (optionnal) bias. `Recommendation <http://en.wikipedia.org/wiki/Occam's_razor>`_: don't use it.
+* (optional) bias. `Recommendation <http://en.wikipedia.org/wiki/Occam's_razor>`_: don't use it.
 * softmax classification layer on top

 The **hyperparameters** define the whole architecture:
@@ -282,7 +282,7 @@ the true labels and compute some metrics. In this `repo
 <http://www.cnts.ua.ac.be/conll2000/chunking/conlleval.txt>`_ PERL script.
 It's not trivial to compute those metrics due to the `Inside Outside Beginning
 (IOB) <http://en.wikipedia.org/wiki/Inside_Outside_Beginning>`_ representation
-i.e. a prediction is considered correct if the word-beginnin **and** the
+i.e. a prediction is considered correct if the word-beginning **and** the
 word-inside **and** the word-outside predictions are **all** correct.
 Note that the extension is `txt` and you will have to change it to `pl`.
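The index-to-word dictionary the first hunk refers to can be sketched as follows. The vocabulary entries here are invented for the example; the real ATIS mapping ships with the dataset.

```python
import numpy as np

# hypothetical fragment of an index -> word mapping
index2word = {383: 'please', 189: 'find', 13: 'a', 193: 'flight'}

# a sentence is an array of indexes (int32)
sentence = np.array([383, 189, 13, 193], dtype='int32')

# map from the space of indexes back to the space of words
words = [index2word[int(i)] for i in sentence]
```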
288288
