Add TorchScript fork/join tutorial #1021
Conversation
Deploy preview for pytorch-tutorials-preview ready! Built with commit 353174f: https://deploy-preview-1021--pytorch-tutorials-preview.netlify.app
Force-pushed from 0c75b1b to f1bd4a2
Looks great! A few small comments.
return torch.sum(torch.stack(results))

print(example(torch.ones([])))
maybe put a non-empty tensor input
this is passing in a scalar 1.0 tensor
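For readers following along, the distinction under discussion can be checked directly: `torch.ones([])` produces a zero-dimensional (scalar) tensor holding 1.0, not an empty tensor. A quick sketch:

```python
import torch

# torch.ones([]) creates a 0-dimensional tensor, i.e. a scalar 1.0,
# not an empty tensor.
scalar = torch.ones([])
print(scalar.dim())    # 0
print(scalar.item())   # 1.0

# A 1-D input, by contrast, carries an explicit shape:
vector = torch.ones([3])
print(vector.shape)    # torch.Size([3])
```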
# For a head-to-head comparison to what we're going to do with fork/wait, let's
# instantiate the model and compile it with TorchScript
ens = torch.jit.script(LSTMEnsemble(n_models=4))
what is ens supposed to be?
"ensemble"
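The shape of the code under review can be illustrated with a minimal stand-in. The `Ensemble` class below is hypothetical (the tutorial's actual `LSTMEnsemble` wraps LSTMs and is defined earlier in the file); it only shows the pattern of scripting an ensemble module whose forward pass sums its members' outputs:

```python
import torch
import torch.nn as nn

class Ensemble(nn.Module):
    """Illustrative stand-in for the tutorial's LSTMEnsemble."""
    def __init__(self, n_models: int):
        super().__init__()
        # Linear layers stand in for the real LSTM members.
        self.models = nn.ModuleList([nn.Linear(4, 4) for _ in range(n_models)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        results = []
        for model in self.models:
            results.append(model(x))
        # Sum the stacked member outputs into a single scalar.
        return torch.sum(torch.stack(results))

# "ens" is short for "ensemble": the TorchScript-compiled module.
ens = torch.jit.script(Ensemble(n_models=4))
out = ens(torch.rand(2, 4))
```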
Aside: Visualizing Parallelism
------------------------------

We're not done optimizing our model, but it's worth introducing the tooling we
I would move this to after the section where you improve parallelism further.
I wanted to put this here because it allows us to see a delta of what happens when we add fork and wait, rather than just seeing one example
Force-pushed from 9625da5 to 9a7608a
No description provided.