Conversation

@ajiwo ajiwo commented Aug 25, 2025

TLDR

Add a new readAfterEdit configuration option (default: true) that controls whether file content is automatically read and provided to the LLM after edit operations. Including the updated content in the tool response keeps the AI aware of the changes it has just made.

Dive Deeper

This PR introduces a new configuration setting that addresses one of the most common frustrations with the edit tool: the "Failed to edit, 0 occurrences found for old_string" error. This error typically occurs when the AI attempts consecutive edits or follow-up modifications because it has lost track of what it just changed.

Previously, after performing file edits, the AI would only receive a success message but wouldn't see the actual updated content. This meant that in multi-step editing workflows, the AI would often try to edit based on its memory of the original file content rather than the current state, leading to failed edits and frustrated users.

Current workaround: Users have to manually tell the AI to re-read the file before each subsequent edit (e.g., "First read the file, then make this change"), which is tedious, interrupts workflow, and isn't obvious to new users.

Key Changes:

  1. New Configuration Setting: Added readAfterEdit boolean setting to the settings schema with default value true
  2. Enhanced Edit Tool: Modified the edit.ts tool to conditionally include the updated file content in LLM responses after successful edits (see the sketch after this list)
  3. Configurable Behavior: Users can disable this feature by setting "readAfterEdit": false in their settings.json
  4. Testing: Added test coverage for both enabled and disabled states.
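
A minimal sketch of how this conditional inclusion might look in edit.ts. The accessor name (`getReadAfterEdit()`), the helper function, and the result shape are assumptions made for illustration, not the actual implementation:

```typescript
// Sketch only: illustrates the readAfterEdit behavior described above.
// `EditConfig` and `buildEditLlmContent` are hypothetical names.
interface EditConfig {
  getReadAfterEdit(): boolean; // defaults to true per the settings schema change
}

function buildEditLlmContent(
  config: EditConfig,
  filePath: string,
  newContent: string,
): string {
  let llmContent = `Successfully modified file: ${filePath}`;

  // When readAfterEdit is enabled (the default), append the updated file
  // content so the model sees the current state without a follow-up read_file call.
  if (config.getReadAfterEdit()) {
    llmContent += `\n\nUpdated content of ${filePath}:\n${newContent}`;
  }

  return llmContent;
}
```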

Benefits:

  • Addresses the Most Common Error: Reduces "Failed to edit, 0 occurrences found" errors by keeping the AI aware of the current file state
  • Eliminates Manual Workarounds: No more need to tell the AI "read the file first" before each edit
  • Enables Consecutive Edits: The AI can now perform multi-step editing workflows more reliably without losing track of changes
  • Improved AI Context: The AI can see the results of its edits immediately, leading to better follow-up suggestions
  • Better User Experience: More coherent conversations where the AI acknowledges and builds upon the changes it made
  • Configurable: Users who prefer the previous behavior can disable the feature

Reviewer Test Plan

To validate this change:

  1. Test with feature enabled (default):

    • Create a test file and ask the AI to edit it
    • Ask follow-up questions about the edited content to confirm the AI is aware of changes
    • The AI should demonstrate knowledge of what it just changed (this tests that the file content was included in llmContent)
  2. Test with feature disabled:

    • Set "readAfterEdit": false in your settings.json
    • Perform the same edit operation
    • Ask follow-up questions about the edited content
    • The AI should be less aware of the specific changes made (since file content is not included in llmContent)
  3. Test edge cases:

    • Test file creation (empty old_string)
    • Test multiple replacements
    • Test error scenarios (file not found, multiple matches) - these should behave the same regardless of the setting
    • Test with large files to ensure performance is acceptable
  4. Example prompts to test:

    • "Create a new file called test.js with a simple function"
    • "Replace the function name in test.js from 'hello' to 'greet'" (simplified example, the effect is more apparent with multi-step editing and large files)
    • "Now explain what you just changed" (this should show different behavior based on the setting)
    • "Add error handling to the function you just created" (AI should be more context-aware with feature enabled)

Testing Matrix

|          | 🍏  | 🪟  | 🐧  |
| -------- | --- | --- | --- |
| npm run  |     |     |     |
| npx      |     |     |     |
| Docker   |     |     |     |
| Podman   | -   | -   |     |
| Seatbelt |     | -   | -   |

Linked issues / bugs

@ajiwo ajiwo force-pushed the feat/read-after-edit branch from ef9fc1e to d29967e on August 25, 2025 at 13:35
Introduces a new configuration setting, `readAfterEdit`, which is enabled by
default.

When this setting is active, the `edit` tool will automatically append the full
content of a file to its response message (`llmContent`) after a successful
modification or creation.

This provides the AI with immediate context of the changes, improving its
awareness of the file's current state and reducing the need for a subsequent
`read_file` call.

Co-authored-by: Qwen-Coder <[email protected]>
@ajiwo ajiwo force-pushed the feat/read-after-edit branch from d29967e to d2d1e74 on August 25, 2025 at 13:56
koalazf99 and others added 16 commits on August 26, 2025 at 13:18
use sub-command to switch between project and global memory ops
* fix: add explicit background param for shell tool

* fix: explicit param schema

* docs(shelltool): update `is_background` description
* fix: sync token among multiple qwen sessions

* fix: adjust cleanup function