---
layout: single
title: "Navigating LLMs in Open Source: pyOpenSci's New Peer Review Policy"
excerpt: "Generative AI products are reducing the effort and skill necessary to generate large amounts of code. In some cases, this strains volunteer peer review programs like ours. Learn about pyOpenSci's approach to developing a Generative AI policy for our software peer review program."
author: "pyopensci"
permalink: /blog/generative-ai-peer-review-policy.html
header:
  overlay_image: images/headers/pyopensci-floral.png
categories:
  - blog-post
  - community
classes: wide
toc: true
comments: true
last_modified: 2025-09-16
---

**Authors:** Leah Wasser, Jed Brown, Carter Rhea, Ellie Abrahams

## Generative AI meets scientific open source

Some developers believe that using AI products increases efficiency. However, in scientific open source, speed isn't everything: transparency, quality, and community trust matter just as much, as does understanding the environmental impacts of using large language models in our everyday work. The ethical questions these tools raise are also a concern, since the same tools that benefit some communities can hurt others.

## Why we need guidelines

At pyOpenSci, we've drafted a new policy for our peer review process to set clear expectations for disclosing the use of LLMs in scientific open-source software.

This is not about banning AI tools; we recognize their value to some people. Instead, our goal is transparency. We want maintainers to **disclose when and how they've used LLMs** so editors and reviewers can fairly and efficiently evaluate submissions. Further, we want to avoid burdening our volunteer editorial and reviewer team with being the first to review generated code.

## A complex topic: Benefits and concerns

LLMs are perceived as helping developers:

* Explain complex codebases
* Generate unit tests and docstrings
* In some cases, lower language barriers for participants in open source around the world
* Speed up everyday workflows

Some contributors also perceive these products as making open source more accessible. However, LLMs also present unprecedented social and environmental challenges.

### Incorrectness of LLMs and misleading time benefits

Although it is commonly claimed that LLMs improve the productivity of experienced developers, recent scientific explorations of this hypothesis [indicate the contrary](https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/). What's more, LLM responses to complex coding tasks [tend to be incorrect](https://arxiv.org/html/2407.06153v1), overly verbose, or inefficient. If you use an LLM to help produce code, it is crucial that you independently evaluate that code's correctness and efficiency.

### Environmental impacts

Training and running LLMs [requires massive energy consumption](https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about), raising sustainability concerns that sit uncomfortably alongside much of the global-scale scientific research that our community supports.

### Impact on learning

Heavy reliance on LLMs risks producing developers who can prompt, but not debug or maintain, code. That undermines long-term project sustainability and growth. Over the long run, it also makes it [harder for young developers to learn how to code and troubleshoot independently](https://knowledge.wharton.upenn.edu/article/without-guardrails-generative-ai-can-harm-education/).

> We're really worried that if humans don't learn, if they start using these tools as a crutch and rely on it, then they won't actually build those fundamental skills to be able to use these tools effectively in the future.
>
> *Hamsa Bastani*

### Ethics and inclusion

LLM outputs can reflect and amplify bias in training data. In documentation and tutorials, that bias can harm the very communities we want to support.

## Our approach: Transparency and disclosure

We acknowledge that social and ethical norms, as well as concerns about environmental and societal impacts, vary widely across the community. We are not here to judge anyone who uses or doesn't use LLMs. Our focus is on supporting informed decision-making and consent regarding LLM use in the pyOpenSci software submission, review, and editorial process.

Our community's expectation is simple: **be open about and disclose any Generative AI use in your package** when you submit it to our open software review process.

* Disclose LLM use in your README and at the top of relevant modules (see the sketch after this list).
* Describe how the Generative AI tools were used in your package's development.
* Be clear about what human review you performed on Generative AI outputs before submitting the package to our open peer review process.
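
As a concrete illustration, here is one shape such a disclosure might take at the top of a module. This is a hypothetical sketch: the package name, tool description, and wording are ours, not required policy text.

```python
"""Spectral binning utilities for the (hypothetical) spectra-tools package.

Generative AI disclosure
------------------------
The docstrings and the first draft of ``rebin_spectrum`` in this module
were produced with an LLM coding assistant. All generated code was
reviewed, edited, and tested by the maintainers before inclusion; the
README describes what was reviewed and how.
"""
```

A matching note in your README, naming the tools you used and the human review you performed, gives editors and reviewers that context up front.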

Transparency helps reviewers understand context, trace decisions, and focus their time where it matters most.

### Human oversight

LLM-assisted code must be **reviewed, edited, and tested by humans** before submission.

* Run your tests and confirm the correctness of the code that you submit (see the sketch after this list).
* Check for security and quality issues.
* Ensure style, readability, and concise docstrings.
* Explain your review process in your software submission to pyOpenSci.
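
To make the first bullet concrete, here is a minimal sketch of what independent verification can look like. `moving_average` stands in for a hypothetical LLM-drafted function; the pytest-style test checks it against a plainly written reference implementation rather than trusting the generated code:

```python
import numpy as np


def moving_average(values, window):
    """Return the simple moving average of ``values`` over ``window`` points."""
    values = np.asarray(values, dtype=float)
    return np.convolve(values, np.ones(window) / window, mode="valid")


def test_moving_average_matches_reference():
    rng = np.random.default_rng(42)
    values = rng.normal(size=100)
    window = 5
    # Reference: compute each window's mean explicitly, no convolution tricks.
    expected = np.array(
        [values[i : i + window].mean() for i in range(len(values) - window + 1)]
    )
    np.testing.assert_allclose(moving_average(values, window), expected)
```

The point is not this particular test, but that a human wrote an independent check and understands why it should pass.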

Please **don't offload vetting of generative AI content to volunteer reviewers**. Arrive with human-reviewed code that you understand, have tested, and can maintain.

### Watch out for licensing issues

LLMs are trained on large amounts of open source code; most of that code has licenses that require attribution.
The problem? LLMs sometimes emit near-exact copies of that training data, without any attribution or copyright notices.

Why this matters:

* Using LLM output verbatim could violate the original code's license.
* You might accidentally commit plagiarism or copyright infringement by pasting that output into your code.
* Due diligence is nearly impossible, since you can't trace what the LLM "learned from" (most LLMs are black boxes).

When licenses clash, it gets messy. Say your package uses an MIT license (common in scientific Python), but an LLM reproduces GPL-licensed code: the GPL's copyleft terms aren't compatible with distributing your project under MIT, and you can't just add attribution to fix it. Even permissive licenses like Apache-2.0 carry notice and patent conditions beyond simple attribution. Technically, you'd have to remove the affected code and rewrite it from scratch to comply with the licensing requirements.

While this is all tricky, here's what you can do now:

*Prefer human-edited, transformative outputs you fully understand.*

* Be aware that when you directly use content developed by an LLM, there may be inherent license conflicts.
* Be aware that LLM products can return copyrighted code verbatim. **Don't paste LLM outputs directly into your code**. Instead, review, edit, and transform anything an LLM gives you. Consider using [clean-room techniques](https://en.wikipedia.org/wiki/Clean-room_design) to achieve this.
* **Make sure you fully understand the code before using it.** This is in your best interest: you can learn a lot about programming by asking an LLM questions and reviewing its output critically.

You can't control what's in training data, but you can be thoughtful about how you use these tools.

<div class="notice" markdown="1">
Examples of how these licensing issues are stressing our legal systems:

* [GitHub Copilot litigation](https://githubcopilotlitigation.com/case-updates.html)
* [Litigation around text from LLMs](https://arxiv.org/abs/2505.12546)
* [Incompatible licenses](https://dwheeler.com/essays/floss-license-slide.html)
</div>

### Review for bias

Inclusion is part of quality. Treat AI-generated text with the same care as code.
Given the known biases that can manifest in Generative AI-derived text:

* Review AI-generated text for stereotypes or exclusionary language.
* Prefer plain, inclusive language.
* Invite feedback and review from diverse contributors.

## Things to consider in your development workflows

If you are a maintainer or a contributor, some of the above can apply to your development and contribution process, too.
Just as peer review systems are being taxed, rapid AI-assisted pull requests and issues can overwhelm maintainers. To combat this:

* Open an issue before submitting a pull request, to ensure the change is welcome and needed.
* Keep your pull requests small, with clear scopes.
* If you use LLMs, test and edit all of the output before you submit a pull request or issue.
* Flag AI-assisted sections of any contribution so maintainers know where to look closely (see the sketch after this list).
* Be responsive to feedback from maintainers, especially when submitting code that is AI-generated.
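
There is no single convention for flagging AI-assisted code. As one hypothetical example, a short comment near the affected section, paired with a note in the pull request description, is often enough to direct a maintainer's attention:

```python
# NOTE (AI-assisted): the parsing logic below was drafted with an LLM,
# then rewritten and tested by @your-handle. Reviewers may want to look
# here most closely.
def parse_header(line: str) -> dict[str, str]:
    """Split a ``key: value`` header line into a one-entry dict."""
    key, _, value = line.partition(":")
    return {key.strip(): value.strip()}
```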

## Where we go from here

A lot of thought and consideration has gone into developing pyOpenSci's Generative AI policies.
We will continue to suggest best practices for embracing modern technologies while critically evaluating their realities and the impacts they have on our ecosystem. These guidelines help us maintain the quality and integrity of packages in our peer review process while protecting the volunteer community that makes open peer review possible. As AI tools evolve, so will our approach, but transparency, human oversight, and community trust will always remain at the center of our work.

## Join the conversation

This policy is just the beginning. As AI continues to evolve, so will our practices. We invite you to:

👉 [Read the full draft policy and discussion](https://github.com/pyOpenSci/software-peer-review/pull/344)
👉 Share your feedback and help us shape how the scientific Python community approaches Generative AI in open source.

The conversation is only starting, and your voice matters.