Update autograd_tutorial.py #3382
base: main
Conversation
Updated link to momentum article on TDS
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3382
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 44a0102 with merge base 8476a99.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @asrjy! Thank you for your pull request and welcome to our community.

Action Required
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
@@ -67,7 +67,7 @@
loss.backward() # backward pass

############################################################
# Next, we load an optimizer, in this case SGD with a learning rate of 0.01 and `momentum <https://towardsdatascience.com/stochastic-gradient-descent-with-momentum-a84097641a5d>`__ of 0.9.
this link still works for me
The link works but redirects to a different article for some reason. The Medium piece is the right one.
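For context, here is a minimal sketch of the step the changed comment line describes. The linear model and random tensors below are placeholders, not the tutorial's actual example:

import torch
import torch.nn as nn

# Placeholder model and data, standing in for the tutorial's network
model = nn.Linear(10, 1)
data = torch.randn(4, 10)
target = torch.randn(4, 1)

loss = ((model(data) - target) ** 2).mean()
loss.backward()  # backward pass: populates .grad on each parameter

# SGD with a learning rate of 0.01 and momentum of 0.9, as in the tutorial text
optim = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optim.step()  # updates parameters using the stored gradients

PyTorch's SGD momentum keeps a per-parameter velocity buffer, roughly v ← momentum·v + grad followed by p ← p − lr·v, which is the update the linked article walks through.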
Description
The link to the momentum article now points to a different article. Updated it with the Medium article containing the same content.
Checklist
cc @svekars @sekyondaMeta @AlannaBurke @albanD @jbschlosser