docs(ai): disclose AI assistance #3229
Conversation
Pull Request Overview
This PR adds a new section to the contributing guidelines requiring disclosure of AI assistance in pull requests. The section emphasizes transparency about AI usage and provides examples of proper disclosure.
- Introduces mandatory AI assistance disclosure requirement for contributors
- Provides clear examples and guidelines for proper disclosure format
- Emphasizes respect for maintainers and the need for appropriate code scrutiny
AI-assisted coding has changed the nature of contributions. Previously, after much effort, developers would submit imperfect changes that created manageable review work and good teaching moments. Now, AI tools can generate large volumes of low-quality code quickly, increasing the burden on reviewers and reducing opportunities for meaningful feedback.
Asking contributors to disclose AI use doesn’t solve this—it can feel punitive and doesn’t reflect how responsibly the tool was used.
Instead, we should focus on best practices. For example, contributors could attest to:
- "I’ve reviewed and understood all code before submitting."
- "I’ve tested the changes and ensured they address only the issue at hand."
- "I’ve asked an AI system to review this code."
- "I’ve refined any AI-generated output to meet our standards."
This shifts the focus from whether AI was used to how it was used—and helps promote thoughtful, responsible contributions.
(i totally used AI to make this)
fyi @mattf we had a discussion about this here as well #3062 Avoiding punitive language is probably good and promoting responsible contributions is probably better. I also do agree that AI slop is a legitimate problem and the burden is increasingly left to the maintainers/reviewers now. I personally don't really care whether code is created via AI (to me that's an intermediate problem), what I really care about is "does the code make sense?" and "was the code tested?" and the first round of that should be done by the person submitting the PR.
@mattf updated this in response to your feedback.
> [!IMPORTANT]
> We would like to move the emphasis from whether AI was used to _how_ it was used and what human oversight was applied.
Is there any action item for the author to add to the PR summary? Or do we just ensure authors follow this?
@slekkala1 yeah just hope they do!
let's put the following in .github/PULL_REQUEST_TEMPLATE.md
## Developer Attestation
Please confirm the following:
- [ ] I've reviewed and understood all code before submitting.
- [ ] I've tested the changes and ensured they address only the issue at hand.
- [ ] I've asked an AI system to review this code.
- [ ] I've refined any AI-generated output to meet our standards.
- [ ] I've looked at generated tests and ensured they are meaningful (e.g., avoiding fragile mocks).
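On the "just hope they do" point above: if this checklist lands in the template, a small CI step could flag PRs that leave it unticked. Here is a minimal sketch in Python; the script, the way the PR body reaches it (piped via stdin here), and the failure message are assumptions for illustration, not part of this PR.

```python
import re
import sys

# Sketch of a check that could run in CI: read the PR description and fail
# if the Developer Attestation section is missing or has unticked boxes.
# How the body is fetched (here: piped via stdin) is an assumption.
ATTESTATION_HEADER = "## Developer Attestation"
UNCHECKED_BOX = re.compile(r"^\s*- \[ \] ", re.MULTILINE)


def attestation_complete(pr_body: str) -> bool:
    """True if the attestation section is present and every box is ticked."""
    if ATTESTATION_HEADER not in pr_body:
        return False
    # Simplification: inspect everything after the header, which assumes the
    # attestation is the last section of the PR description.
    section = pr_body.split(ATTESTATION_HEADER, 1)[1]
    return UNCHECKED_BOX.search(section) is None


if __name__ == "__main__":
    if attestation_complete(sys.stdin.read()):
        print("Developer Attestation complete.")
    else:
        print("Please complete the Developer Attestation checklist in the PR description.")
        sys.exit(1)
```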
What is the goal of this PR?
If what we actually care about is the quality of PRs, can we figure out how to articulate quality rubrics (like https://secure.phabricator.com/book/phabflavor/article/writing_reviewable_code/) and use AI to automatically review all PRs against those rubrics? That would help reviewers of purely human-authored PRs as well.
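Sketching what that could look like: the rubric questions below paraphrase the spirit of the linked guidance rather than quote it, and the model call is left as a plain callable so no particular inference API is implied. An illustration of the idea, not an implementation proposal.

```python
from dataclasses import dataclass
from typing import Callable

# Rubric items are paraphrased examples in the spirit of the linked
# "writing reviewable code" guidance; adjust to whatever the project adopts.
RUBRIC = [
    "Does the change do one thing, or does it mix unrelated concerns?",
    "Does the summary explain why the change is needed, not just what it does?",
    "Are the tests meaningful, or do they lean on fragile mocks?",
    "Is the code self-explanatory, or commented where it is not?",
]


@dataclass
class RubricFinding:
    question: str
    assessment: str


def review_diff(diff: str, ask_model: Callable[[str], str]) -> list[RubricFinding]:
    """Ask a model to assess the diff against each rubric item.

    `ask_model` is any prompt -> completion callable; which backend provides
    it is intentionally left open.
    """
    findings = []
    for question in RUBRIC:
        prompt = (
            "You are reviewing a pull request diff against one rubric item.\n"
            f"Rubric item: {question}\n\n"
            f"Diff:\n{diff}\n\n"
            "Answer briefly: does the diff satisfy this item, and why?"
        )
        findings.append(RubricFinding(question=question, assessment=ask_model(prompt)))
    return findings
```

Run on every PR, the same pass would give human-authored and AI-assisted changes the same first line of scrutiny, which is the point above.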
Closing this for now
From my perspective, we shouldn't care whether the code was AI-generated; it's the norm now, not the exception, so disclosing usage makes very little sense. What we want is for contributors to understand what they are contributing. Our contribution guidelines can be amended with more details, and our template could have a tick box like "yes, I've read the contributing guidelines of the project". That feels sufficient, and it should only be needed for new contributors: once you have more than one PR merged you shouldn't need to tick that box every time. It's almost like a DCO or a CLA that you sign off on once. My 2 cents.
This is becoming the need of the hour. We need something we can easily point folks to.
This is almost verbatim copied from: https://github.com/ghostty-org/ghostty/pull/8289/files