
Conversation

@pentschev

No description provided.

try:
    _, _, weight = self.host_buffer.fast.evict()
    return weight
except Exception:  # We catch all `Exception`s, just like zict.LRU

I would strongly advise logging it; otherwise your users will have no clue why they're running out of memory.
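To illustrate the suggestion, here is a minimal sketch of logging the swallowed exception instead of discarding it silently. `HostBuffer` and `safe_evict` are toy stand-ins invented for this example (the real object in the snippet above is a zict.LRU), and the `-1` fallback return value is an assumption for illustration only.

```python
import logging

logger = logging.getLogger("spill")

class HostBuffer:
    """Toy stand-in for the `host_buffer.fast` object in the snippet."""
    def __init__(self, items):
        self.items = items

    def evict(self):
        if not self.items:
            raise RuntimeError("nothing to evict")
        return self.items.pop(0)

def safe_evict(buf):
    """Evict one entry, logging (rather than swallowing) any failure."""
    try:
        _, _, weight = buf.evict()
        return weight
    except Exception:  # catch-all, mirroring zict.LRU
        # logger.exception records the full traceback at ERROR level,
        # so users can see *why* eviction (and thus spilling) failed.
        logger.exception("eviction from host buffer failed")
        return -1

buf = HostBuffer([("key", b"value", 5)])
print(safe_evict(buf))  # 5
print(safe_evict(buf))  # -1, with a traceback logged to stderr
```

The key point is that the `except` clause stays broad (matching zict.LRU's behavior) but no longer hides the failure from the user.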

@pentschev (Author)
Currently we can't log this appropriately; we may consider it in the future. I'm not sure we even intended to log it this way. Could you point me to where Dask currently logs this?

@pentschev (Author)

Thanks @crusaderky for the pointers. For now I opened rapidsai#873 so we can consider doing the same; we'll also have to think about how to add proper logging in Dask-CUDA without just hacking into Distributed's.
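One way to keep Dask-CUDA's logging separate from Distributed's is a dedicated logger namespace that users can configure independently. This is only a sketch of that idea; the logger name `dask_cuda.spilling` and the helper `report_eviction_error` are hypothetical, not part of any actual Dask-CUDA API.

```python
import logging

# Hypothetical dedicated namespace: records emitted here can be filtered
# or formatted independently of anything under "distributed.*".
logger = logging.getLogger("dask_cuda.spilling")

def report_eviction_error(exc):
    """Emit through Dask-CUDA's own logger rather than Distributed's."""
    logger.error("Spilling failed: %s", exc)

# A user could then raise verbosity for Dask-CUDA alone, without
# touching Distributed's logging configuration:
logging.getLogger("dask_cuda").setLevel(logging.INFO)
```

Because child loggers propagate to their parents, a single `dask_cuda` root namespace would give one knob for all Dask-CUDA subsystems.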

@shwina (Owner)

shwina commented Mar 15, 2022

Based on an offline chat, I'm going to go ahead and merge this here and we'll address the error logging elsewhere. Thank you, @pentschev and @crusaderky!

@shwina shwina merged commit ed2ebb8 into shwina:worker-memory-manager-fixes Mar 15, 2022