[MRG+1] ENH Add working_memory global config for chunked operations #10280


Merged: 105 commits, May 25, 2018

Conversation

jnothman
Member

@jnothman jnothman commented Dec 10, 2017

We often get issues related to memory consumption and don't deal with them particularly well. Indeed, Scikit-learn should be at home on commodity hardware like developer/researcher laptops.

Some operations can be performed in chunks, so that the result is computed in constant (or O(n)) memory where the current implementation uses O(n) (or O(n^2)). Examples include: getting the argmin and min of all pairwise distances (currently done with an ad-hoc parameter to pairwise_distances_argmin_min), calculating silhouette score (#1976), finding nearest neighbors with brute force (#7287), and calculating the standard deviation of every feature (#5651).

It's not very helpful to provide this "how much constant memory" parameter in each function (because they're often called within nested code), so this PR instead makes it a global config parameter. The optimisation is then transparent to the user, but still configurable.
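As a minimal sketch of the API this PR proposes (the 128/64 MiB values below are arbitrary, chosen only for illustration):

```python
# Sketch of the working_memory global config this PR adds. Values are in MiB.
from sklearn import set_config, config_context

# Process-wide limit on temporary memory used by chunked operations.
set_config(working_memory=128)

# Or limit it only within a block:
with config_context(working_memory=64):
    # chunked operations here aim to stay within ~64 MiB of temporary memory
    pass
```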

At @rth's request, this PR has been cut back. The proposed changes to silhouette and neighbors can be seen here.

This PR (building upon my work with @dalmia) will therefore:

  • add set_config(working_memory=n_mib)
  • add pairwise_distances_chunked
  • make use of the latter in nearest neighbors, silhouette and pairwise_distances_argmin_min
  • deprecate batch_size in pairwise_distances_argmin_min

and thus:

TODO:

  • attract other core devs' attention, which I feel has been lacking despite the repeated popular interest in these issues!
  • review my comments to @dalmia at [MRG] ENH: Added block_size parameter for lesser memory consumption #7979
  • add tests for get_chunk_n_rows
  • add tests for _check_chunk_size and any others for pairwise_distances_chunked
  • add config documentation
  • fix, perhaps, Is it possible to reduce StandardScaler.fit() memory consumption? #5651 with a chunked standard deviation implementation (can be a separate PR)
  • perhaps review uses of gen_batches to see if they can use this parameter to be more principled in the choice of batch size: in most cases the batch size affects the model, so we can't just change it. And in other cases, it may make little difference. We could change mean_shift's use from fixed batch_size=500 to something sensitive to working_memory if we wish.
  • add what's new entry
  • add example to pairwise_distances_chunked that uses start and a tuple return
  • benchmarking
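To make the TODO item about `start` and a tuple return concrete, here is a sketch of what such an example might look like: the reduce_func uses `start` to mask each row's self-distance, then returns a tuple of (per-row argmin, per-row min), so the generator yields tuples whose peak memory is bounded by the working_memory config rather than O(n^2):

```python
# Sketch for the TODO: a reduce_func that uses `start` and returns a tuple,
# so pairwise_distances_chunked yields one tuple of reductions per chunk.
import numpy as np
from sklearn import config_context
from sklearn.metrics import pairwise_distances_chunked

X = np.random.RandomState(0).rand(500, 3)

def reduce_func(D_chunk, start):
    # D_chunk holds rows [start:start + len(D_chunk)] of the distance matrix.
    # Use `start` to mask each row's self-distance before taking the min.
    D_chunk = D_chunk.copy()
    rows = np.arange(len(D_chunk))
    D_chunk[rows, start + rows] = np.inf
    neigh = D_chunk.argmin(axis=1)
    return neigh, D_chunk[rows, neigh]

with config_context(working_memory=1):  # 1 MiB forces the 500x500 matrix to be split
    chunks = pairwise_distances_chunked(X, reduce_func=reduce_func)
    argmins, dists = zip(*chunks)
argmins = np.concatenate(argmins)
dists = np.concatenate(dists)
```

Each yielded item is the tuple returned by reduce_func for one chunk, so `zip(*chunks)` regroups them into per-quantity sequences that can be concatenated.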

Sentient07 and others added 30 commits January 27, 2016 19:26
reverted the comment

Resolved merge conflicts
Also use threading for parallelism
@rth
Member

rth commented Mar 8, 2018

Regarding the use of assert_allclose [...]

I see. It's fine, it was just a side comment not very critical to this PR...

There is a minor conflict in the what's new.

LGTM.

Now this would need a second review... maybe @lesteve or @qinhanmin2014 ? :)

@rth rth changed the title [MRG] ENH Add working_memory global config for chunked operations [MRG+1] ENH Add working_memory global config for chunked operations Mar 8, 2018
Member

@TomDLT TomDLT left a comment

LGTM


- A new configuration parameter, ``working_memory``, was added to control memory
consumption limits in chunked operations, such as the new
:func:`metrics.pairwise_distances_chunked`. See :ref:`working_memory`.
Member

You seem to have forgotten the glossary entry.

Member Author

I think a reference to the User Guide is most relevant in what's new.

I can add a glossary entry, though I'm not sure how it will help beyond the user guide and the config_context docstring.

Member

Fair enough, I just wonder what is the goal of the syntax :ref:, which does not render a link.

Did you mean :func:set_config? Or maybe you need a label in doc/modules/computational_performance.rst?

Member Author

The latter. A glossary reference would be :term:, not :ref: which references sections.
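For reference, the fix being discussed amounts to adding a label above the relevant section of doc/modules/computational_performance.rst, roughly as follows (the section title here is illustrative, not the actual heading used):

```rst
.. _working_memory:

Limiting working memory
-----------------------
```

With such a label in place, ``:ref:`working_memory``` renders as a link to that section.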

``reduce_func``.

Examples
-------
Member

You need one more dash to have proper rendering.

assert isinstance(S_chunks, GeneratorType)
S_chunks = list(S_chunks)
assert len(S_chunks) > 1
# atol is for diagonal where S is explcitly zeroed on the diagonal
Member

*explicitly

min_block_mib = np.array(X).shape[0] * 8 * 2 ** -20

for block in blockwise_distances:
memory_used = len(block) * 8
Member

You should use memory_used = block.size * 8 to have the correct memory used in the block.

Member Author

Hmm... indeed!
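The distinction is easy to verify: for a 2-D array, `len()` counts only the rows, while `.size` counts all elements (the 10×500 shape below is arbitrary):

```python
import numpy as np

# A 10-row chunk of a hypothetical 500-column distance matrix of float64s.
block = np.zeros((10, 500))

bytes_via_len = len(block) * 8    # counts only the 10 rows
bytes_via_size = block.size * 8   # counts all 5000 elements
```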


for block in blockwise_distances:
memory_used = len(block) * 8
assert memory_used <= min(working_memory, min_block_mib) * 2 ** 20
Member

And the min should be a max, shouldn't it?

metric='euclidean')
# Test small amounts of memory
for power in range(-16, 0):
check_pairwise_distances_chunked(X, None, working_memory=2 ** power,
Member

This line raises a lot of warnings:
UserWarning: Could not adhere to working_memory config. Currently 0MiB, 1MiB required.
We should silence them as they are expected.
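One way to silence them is a sketch like the following, using the stdlib warnings module (the test suite could equally use pytest.warns to assert the warning is raised; `noisy()` below is just a stand-in for the chunked call):

```python
import warnings

def noisy():
    # Stand-in for a call that emits the expected warning.
    warnings.warn("Could not adhere to working_memory config. "
                  "Currently 0MiB, 1MiB required.", UserWarning)
    return "result"

with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    result = noisy()  # the expected UserWarning is suppressed here
```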

@rth
Member

rth commented May 22, 2018

@TomDLT approved these changes

Great that this is happening!

@jnothman
Member Author

Thanks @TomDLT for the review and the approval! I've addressed all your comments except for the glossary one, which I'm not sure is necessary at this point.

@jnothman
Member Author

I suppose we could include a new Global Configuration section of the glossary. But I'm not sure how that goes beyond the config_context API reference.

@amueller
Member

Is there a plan to also use this in pairwise_distances in the future, automatically dispatching to pairwise_distances_chunked if the parameter is set? (Or maybe I'm overlooking something.)
Also, @jnothman feel free to point me to stuff you want me to look at ;)

@jnothman
Member Author

jnothman commented May 24, 2018 via email

@amueller
Member

py3.6 fails ;)
But you're right, I wasn't thinking it through...

@jnothman
Member Author

Merging to enable downstream PRs. Let me know if there are further quibbles! Thanks for the reviews, Roman and Tom!

@rth
Member

rth commented May 27, 2018

Thanks for this @jnothman !

@TomDLT @amueller FYI there is a follow-up PR in #11136 that applies this mechanism to brute force nearest neighbors.

@rth
Member

rth commented May 27, 2018

.. and also #11135 for chunking silhouette_score calculations
