
guozixu2001 (Contributor)

Motivation and Context

This change addresses a performance issue in the convolution_backward path: gradients that the caller never requested were still being computed, wasting time in the backward pass. By building the output_mask dynamically from the requested outputs, this update ensures that only the required gradients are computed, which reduces backward-pass time.

Description

  • Introduced a std::array<bool, 3> output_mask that selects which gradients the convolution_backward function actually computes.
  • Set output_mask based on whether grad_input and grad_weight are requested; if either is not needed, the corresponding gradient computation is skipped.
  • Updated the call to at::native::convolution_backward to pass the mask (see the sketch below).
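
For reference, a minimal sketch of the idea (not the PR's exact code): the function name, the stride/padding/dilation/groups values, and the need_*_grad flags are illustrative assumptions, and the exact header providing at::native::convolution_backward may differ across PyTorch versions. The point is that the mask is derived from which gradients the caller requested, so the others are never computed.

```cpp
#include <array>
#include <tuple>
#include <ATen/ATen.h>  // the at::native declaration may need an extra header depending on the PyTorch version

// Sketch: compute only the convolution gradients the caller asked for.
// Mask order is {grad_input, grad_weight, grad_bias}.
std::tuple<at::Tensor, at::Tensor, at::Tensor> conv_backward_selective(
    const at::Tensor& grad_output, const at::Tensor& input,
    const at::Tensor& weight, bool need_input_grad, bool need_weight_grad) {
  // Entries left false are skipped entirely by the backward implementation;
  // the bias gradient is disabled here purely for illustration.
  std::array<bool, 3> output_mask = {need_input_grad, need_weight_grad, false};

  // Illustrative hyper-parameters; a real call would forward the values
  // used in the corresponding forward convolution.
  return at::native::convolution_backward(
      grad_output, input, weight,
      /*bias_sizes=*/c10::nullopt,
      /*stride=*/{1, 1}, /*padding=*/{0, 0}, /*dilation=*/{1, 1},
      /*transposed=*/false, /*output_padding=*/{0, 0}, /*groups=*/1,
      output_mask);
}
```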

Use cases (Optional)

BC-breaking (Optional)

Checklist

Before PR:

  • I have read and followed the workflow indicated in Contributors.md to create this PR.
  • Pre-commit or linting tools indicated in Contributors.md are used to fix potential lint issues.
  • Bug fixes are covered by unit tests; the case that triggers the bug is added to the unit tests.
  • New functionality is covered by complete unit tests; if not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, including docstring or example tutorials.

After PR:

  • CLA has been signed and all committers have signed the CLA in this PR.
