
Add TorchScript fork/join tutorial #1021


Merged: 1 commit, Jun 18, 2020

Conversation

jamesr66a

@jamesr66a jamesr66a commented Jun 11, 2020

No description provided.

@netlify

netlify bot commented Jun 11, 2020

Deploy preview for pytorch-tutorials-preview ready!

Built with commit 353174f

https://deploy-preview-1021--pytorch-tutorials-preview.netlify.app

@jamesr66a jamesr66a force-pushed the fork_join branch 5 times, most recently from 0c75b1b to f1bd4a2 Compare June 12, 2020 18:35
@jamesr66a jamesr66a changed the title [WIP] Add TorchScript fork/join tutorial Add TorchScript fork/join tutorial Jun 12, 2020

@eellison eellison left a comment


Looks great! A few small comments.


```python
    return torch.sum(torch.stack(results))

print(example(torch.ones([])))
```


maybe put a non-empty tensor input

jamesr66a (Author) replied:


this is passing in a scalar 1.0 tensor
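To make the author's point concrete, here is a minimal check (hypothetical snippet, not from the tutorial) showing that `torch.ones([])` produces a 0-dimensional scalar tensor holding `1.0`, not an empty tensor:

```python
import torch

# torch.ones([]) creates a 0-dimensional (scalar) tensor with value 1.0;
# the empty shape list means "no dimensions", not "no elements".
scalar = torch.ones([])
print(scalar.dim())   # 0
print(scalar.item())  # 1.0

# Compare with a 1-D tensor of three ones:
vector = torch.ones([3])
print(vector.dim())   # 1
```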


```python
# For a head-to-head comparison to what we're going to do with fork/wait, let's
# instantiate the model and compile it with TorchScript
ens = torch.jit.script(LSTMEnsemble(n_models=4))
```


what is `ens` supposed to be?

jamesr66a (Author) replied:


"ensemble"

Aside: Visualizing Parallelism
------------------------------

We're not done optimizing our model but it's worth introducing the tooling we


I would move this to after the section where you improve parallelism further.

jamesr66a (Author) replied:


I wanted to put this here because it allows us to see a delta of what happens when we add fork and wait, rather than just seeing one example
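The tooling being discussed can be sketched as follows. This is a hypothetical snippet (the `model` and input here are placeholders, not the tutorial's ensemble) showing how `torch.autograd.profiler` exports a Chrome trace that visualizes which ops overlapped in time:

```python
import torch

# Placeholder model and input; in the tutorial this would be the
# scripted ensemble and its batch of inputs.
model = torch.nn.Linear(4, 4)
x = torch.ones(2, 4)

# Record op-level timings, then export a trace viewable at
# chrome://tracing to see whether forked work actually ran in parallel.
with torch.autograd.profiler.profile() as prof:
    model(x)
prof.export_chrome_trace("trace.json")
```

Profiling once before adding fork/wait and once after makes the delta the author mentions visible as overlapping spans in the trace.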

@jamesr66a jamesr66a force-pushed the fork_join branch 2 times, most recently from 9625da5 to 9a7608a Compare June 16, 2020 03:29
@jlin27 jlin27 changed the base branch from master to release/1.6 June 18, 2020 00:17
@jlin27 jlin27 merged commit 28f044e into pytorch:release/1.6 Jun 18, 2020
3 participants