Mini-Batch and Gradient Descent

Mini-Batch

1. Make Batches and Store

def make_batches(batch_size):
    # Yield successive (X_batch, y_batch) slices of the training data.
    # X_train and y_train are assumed to be defined in the enclosing scope.
    iterator = 0
    training_size = X_train.shape[0]
    while iterator < training_size:
        X_batch = X_train[iterator:iterator + batch_size]
        y_batch = y_train[iterator:iterator + batch_size]
        yield X_batch, y_batch
        iterator += batch_size

batches = list(make_batches(batch_size=.....))
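As a quick sanity check, the generator can be exercised on a small toy dataset; the arrays and shapes below are illustrative only:

import numpy as np

# Illustrative toy data: 10 samples, 3 features
X_train = np.arange(30).reshape(10, 3)
y_train = np.arange(10)

for X_batch, y_batch in make_batches(batch_size=4):
    print(X_batch.shape, y_batch.shape)
# (4, 3) (4,)
# (4, 3) (4,)
# (2, 3) (2,)   <- the last batch can be smaller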

2. Training

from sklearn import linear_model

model = linear_model.SGDRegressor(max_iter=...., tol=...., eta0=...., average=batch_size)

for loop_counter in range(max_iterations):
    # One pass over all stored batches = one epoch of mini-batch updates
    for X_batch, y_batch in batches:
        model.partial_fit(X_batch, y_batch)

NOTE
1. Use a batch_size of 50 or 100, according to convenience.
2. max_iter, tol and eta0 are hyperparameters that need to be tuned.
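Putting both steps together, here is a minimal end-to-end sketch; the synthetic dataset and all parameter values (batch size, epochs, eta0, tol) are illustrative choices, not tuned recommendations:

import numpy as np
from sklearn import linear_model
from sklearn.datasets import make_regression

# Synthetic regression data, used here purely for illustration
X_train, y_train = make_regression(n_samples=1000, n_features=5,
                                   noise=10.0, random_state=0)

batch_size = 50
batches = list(make_batches(batch_size=batch_size))

model = linear_model.SGDRegressor(max_iter=1000, tol=1e-3, eta0=0.01,
                                  average=batch_size)

max_iterations = 20  # number of passes (epochs) over the batches
for loop_counter in range(max_iterations):
    for X_batch, y_batch in batches:
        model.partial_fit(X_batch, y_batch)

print(model.coef_, model.intercept_)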

Gradient Descent
As already discussed, there is no direct API/class in scikit-learn that implements plain (full-batch) gradient descent, so we reuse the mini-batch approach with batch_size set to the size of the entire dataset, i.e., X_train.shape[0].
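A minimal sketch of that idea, reusing make_batches and the SGDRegressor setup from above (assumed to be in scope); with one batch spanning the whole training set, each epoch makes a single partial_fit call over all samples:

# Full-batch variant: one batch covering the entire training set
batch_size = X_train.shape[0]
batches = list(make_batches(batch_size=batch_size))  # exactly one batch

model = linear_model.SGDRegressor(max_iter=1000, tol=1e-3, eta0=0.01,
                                  average=batch_size)

for loop_counter in range(max_iterations):
    for X_batch, y_batch in batches:   # loops once per epoch
        model.partial_fit(X_batch, y_batch)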
