Closed
Description
I have been searching for hours on this problem and can replicate it consistently:
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# tuned_parameters and N_folds_validation are defined elsewhere in my script
clf = GridSearchCV(
    LogisticRegression(),
    tuned_parameters,
    cv=N_folds_validation,
    pre_dispatch='6*n_jobs',
    n_jobs=4,
    verbose=1,
    scoring=metrics.make_scorer(metrics.scorer.f1_score, average="macro"),
)
This snippet crashes because of the scoring=metrics.make_scorer(metrics.scorer.f1_score, average="macro") argument, where metrics refers to the sklearn.metrics module. If I comment out the scoring=... line, the parallel execution works. If I want to use the F1 score as the evaluation metric, I have to give up parallel execution by setting n_jobs=1.
Is there a way to define another scoring method without losing the ability to run in parallel?
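
One thing I have considered (I have not verified whether it avoids the crash) is passing one of scikit-learn's predefined scorer strings instead of a make_scorer object; "f1_macro" is a built-in scorer name equivalent to f1_score with average="macro", so no custom scorer object would need to be sent to the worker processes:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# "f1_macro" is a predefined scorer string in scikit-learn,
# avoiding the hand-built make_scorer object entirely.
clf = GridSearchCV(
    LogisticRegression(),
    tuned_parameters,
    cv=N_folds_validation,
    n_jobs=4,
    scoring="f1_macro",
)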
Thanks