Conversation


@smessmer smessmer commented Aug 1, 2019

Stack from ghstack:

Differential Revision: D16603130

@smessmer smessmer requested a review from dzhulgakov August 6, 2019 21:15
smessmer added a commit that referenced this pull request Aug 6, 2019
@smessmer smessmer changed the title [wip] c10 dispatcher stores autograd kernels c10 dispatcher stores autograd kernels Aug 6, 2019
@smessmer smessmer requested a review from li-roy August 6, 2019 21:31

@dzhulgakov dzhulgakov left a comment


This seems a bit over-engineered for the purpose, and it makes the code much less understandable.

// before older registrations and the list head is the autograd kernel
// which is currently used.
std::list<void*> unboxedAutogradKernels_;
std::mutex unboxedAutogradKernelsMutex_;
Collaborator:


Again, I think it's fine to assume that library loads don't happen concurrently. Even if they did, having a top-level mutex is a much easier solution, and it also doesn't blow up the size of OperatorEntry.

By the way, since you already have kernelsMutex_ above, why not just use that (even though we don't really need a mutex)?

Contributor Author:


The mutex costs us even less: the complexity it adds is negligible, and in return we get parallel, race-free library loading for free. I don't see a reason not to do it.

The default pattern for mutexes is to bind them directly to the data they protect and to keep those data units small. OperatorEntry size doesn't matter here; this is not an entry in the dispatch table.

@smessmer smessmer requested a review from dzhulgakov August 7, 2019 20:38
@smessmer smessmer requested a review from dzhulgakov August 8, 2019 18:41

@dzhulgakov dzhulgakov left a comment


Thanks for the comments!

@zou3519 zou3519 deleted the gh/smessmer/14/head branch August 9, 2019 22:13
@facebook-github-bot (Contributor)

This pull request has been merged in 211bafc.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Aug 9, 2019
Summary:
Pull Request resolved: pytorch/pytorch#23666

ghstack-source-id: 88050430

Differential Revision: D16603130

fbshipit-source-id: bc77c218a4664ad3b57d6918043c93c9df3b42ca
yf225 pushed a commit to yf225/pytorch that referenced this pull request Aug 11, 2019

Labels

Merged; module: internals (Related to internal abstractions in c10 and ATen)
