Conversation

@KarlK90 KarlK90 commented Sep 19, 2025

PR Description

The existing git log logic fetched the first 300 commits of a repo and displayed them in the local and sub-commit views. Once a user selected a commit beyond a threshold of 200 commits, the whole repository was loaded. This is problematic with large repos, e.g. the linux kernel with currently ~138k commits, as lazygit slows down substantially with such a large number of commits in memory.

This commit replaces the current all-or-nothing logic (load either only the first 300 commits or the whole repository) with an incremental fetching approach:

  1. The first 400 commits of the repo are loaded by default.
  2. If the user selects a commit beyond a threshold (the current limit minus 100), the git log limit is increased by 400, provided more commits are available. Whether more commits are available is currently checked by comparing the previous log limit with the actual commit count in the model; if the commit count is less than the limit, it is assumed that we have reached the end of the commit log. Ideally it would be better to call `git rev-list --count xyz` in the right places and compare against that result, but this requires more changes.

Adding a "paginated implementation" by utilizing `git log --skip=x --max-count=y` and appending commits to the model instead of replacing the whole collection would be nice, but this requires deeper changes to keep everything consistent.

Please check if the PR fulfills these requirements

  • Cheatsheets are up-to-date (run go generate ./...)
  • Code has been formatted (see here)
  • Tests have been added/updated (see here for the integration test guide)
  • Text is internationalised (see here)
  • If a new UserConfig entry was added, make sure it can be hot-reloaded (see here)
  • Docs have been updated if necessary
  • You've read through your own file changes for silly mistakes etc

@stefanhaller stefanhaller force-pushed the fix/commit-chunk-fetches branch from f313b40 to 99a367b Compare September 21, 2025 14:09
@stefanhaller
Collaborator

I like this change, although I do have to say that it doesn't address my own problem with large repos. There are two situations in which we turn off the 300 limit: scrolling to the end, or searching. For me, it's almost always searching (in which case we should remove the limit entirely, rather than add another batch of commits, and your branch does that, which is good), so I don't gain anything from the incremental limit.

But I guess the change is still good to have for those who do scroll to the end of the list sometimes. Which makes me wonder: if people want to force loading all commits for some reason, there is no longer an obvious way to do that (previously they could press > to do that), short of pressing /, esc. I wonder if this could be a problem (maybe not).

I rebased the branch so that it includes the recent change to support running tests with go 1.25, and force-pushed (hope you don't mind). I also pushed a fixup commit that addresses a crash: `ArgIf` doesn't do short-circuiting, i.e. it evaluates both arguments before it calls the function, so this always crashes when `opts.LogLimit` is nil. This makes me wonder if it would be better to model the limit as a plain int, with -1 meaning there's no limit. Yeah I know, magic numbers, but it's not our fault that go doesn't have a proper optional type (like std::optional in C++), so this is simple and pragmatic.

Last question: what is the reason to change our previous limit of 300 to 400? I worry that this might be a (probably very slight, but still) performance regression at startup, and fast startup times are really crucial.

@stefanhaller
Collaborator

Whether more commits are available is currently checked by comparing the previous log limit with the actual commit count in the model; if the commit count is less than the limit, it is assumed that we reached the end of the commit log.

I forgot to respond to this. If I understand this right, the only problem with this is that when you have a repo whose number of commits happens to be an exact multiple of 400, then when you get to the end of the list there will be one more, unnecessary refresh; is that right? This wouldn't bother me too much, given how unlikely that is, and I don't think it's worth calling git rev-list --count to improve this.

KarlK90 and others added 2 commits September 22, 2025 09:49
The existing git log logic fetched the first 300 commits of a repo and
displayed them in the local and sub-commit views. Once a user selected a
commit beyond a threshold of 200 commits, the whole repository was
loaded. This is problematic with large repos, e.g. the linux kernel
with currently ~138k commits, as lazygit slows down substantially with
such a large number of commits in memory.

This commit replaces the current all-or-nothing logic (load either only
the first 300 commits or the whole repository) with an incremental
fetching approach:

1. The first 400 commits of the repo are loaded by default.
2. If the user selects a commit beyond a threshold (the current limit
   minus 100), the git log limit is increased by 400, provided more
   commits are available. Whether more commits are available is
   currently checked by comparing the previous log limit with the
   actual commit count in the model; if the commit count is less than
   the limit, it is assumed that we have reached the end of the commit
   log. Ideally it would be better to call `git rev-list --count xyz`
   in the right places and compare against that result, but this
   requires more changes.

Adding a "paginated implementation" by utilizing `git log --skip=x
--max-count=y` and appending commits to the model instead of replacing
the whole collection would be nice, but this requires deeper changes to
keep everything consistent.

Signed-off-by: Stefan Kerkmann <[email protected]>
@KarlK90 KarlK90 force-pushed the fix/commit-chunk-fetches branch from 99a367b to 43f31ea Compare September 22, 2025 07:49