Conversation

@Yunnglin
Collaborator

Support lazy initialization of bert_scorer.

Refer to #909

@gemini-code-assist
Contributor

Summary of Changes

Hello @Yunnglin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces lazy initialization for the BERTScorer within the Drivelology Narrative Writing benchmark. The primary goal is to optimize resource usage by delaying the loading of the BERTScorer model until it's explicitly required. The changes involve restructuring the BERTScore computation logic into the adapter class and adding a dedicated test to validate the functionality.

Highlights

  • Lazy Initialization for BERTScorer: The BERTScorer model within the DrivelologyNarrativeWriting benchmark now uses lazy initialization, meaning the model is only loaded when it's first needed, improving resource management.
  • Refactored BERTScore Computation: The compute_bertscore_one_sample function has been refactored from a standalone utility into a method of the DrivelologyNarrativeWritingAdapter class, enhancing encapsulation.
  • Thread-Safe Initialization: A new _init_bert_score method, decorated with @thread_safe, ensures that the BERTScorer instance is initialized safely in multi-threaded environments (a minimal sketch of this pattern follows these highlights).
  • New Test Case: A new test test_drivel_writing has been added to tests/benchmark/test_eval.py to cover the Drivelology Narrative Writing benchmark.
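
For readers skimming the highlights, here is a minimal sketch of the lazy, thread-safe initialization pattern described above. It assumes the bert_score package and uses a plain threading.Lock in place of the PR's @thread_safe decorator; the class and method names are illustrative, not the adapter's actual code.

    import threading

    from bert_score import BERTScorer  # pip install bert-score


    class LazyBertScoreAdapterSketch:
        """Illustrative only: defers loading the BERT model until first use."""

        _lock = threading.Lock()  # stands in for the @thread_safe decorator used in the PR

        def __init__(self):
            self._bert_scorer = None  # nothing heavy is loaded at construction time

        def _init_bert_score(self):
            # Double-checked locking so concurrent review threads share one scorer
            if self._bert_scorer is None:
                with self._lock:
                    if self._bert_scorer is None:
                        self._bert_scorer = BERTScorer(lang='en')
            return self._bert_scorer

        def compute_bertscore_one_sample(self, candidate: str, reference: str) -> float:
            scorer = self._init_bert_score()
            _, _, f1 = scorer.score([candidate], [reference])
            return f1.item()

The payoff is that constructing the adapter (and importing the benchmark) stays cheap; the BERT model is downloaded and loaded only when a sample actually needs scoring.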

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request effectively refactors the BERTScore calculation to use lazy initialization, which is a great performance improvement. The implementation is clean and thread-safe. I've added one review comment with a suggestion to further simplify the new compute_bertscore_one_sample method by adjusting its return value to better match its usage, which would improve code clarity and maintainability.

@Yunnglin
Collaborator Author

Yunnglin commented Nov 3, 2025

/gemini review

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces batch scoring capabilities, primarily for BERTScore, to enhance performance. The changes to the core evaluation loop and data adapter interfaces are well-structured. However, I've identified a critical performance issue in the DrivelologyNarrativeWritingAdapter where the BERTScore model is inefficiently reloaded for every batch. Additionally, there's a critical bug in the DefaultEvaluator that could lead to a crash when handling failed reviews during batch scoring. I have provided suggestions to address these critical issues, along with some refactoring recommendations to improve code quality and maintainability.
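
To make the flagged performance issue concrete, here is a hedged sketch contrasting per-batch reloading with a cached, lazily initialized scorer. It assumes the bert_score package; the function names are illustrative and do not correspond to the repository's actual code.

    from bert_score import BERTScorer

    # Anti-pattern flagged by the review: a fresh scorer (and model load) per batch.
    def score_batch_reloading(candidates, references):
        scorer = BERTScorer(lang='en')  # model weights are reloaded on every call
        _, _, f1 = scorer.score(candidates, references)
        return f1.tolist()

    # Pattern the PR aims for: build the scorer once and reuse it across batches.
    _SCORER = None

    def score_batch_cached(candidates, references):
        global _SCORER
        if _SCORER is None:  # lazy, one-time initialization
            _SCORER = BERTScorer(lang='en')
        _, _, f1 = _SCORER.score(candidates, references)
        return f1.tolist()

Loading the model once amortizes its startup cost across all batches instead of paying it on every call.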

Comment on lines +274 to +286
        # Batch calculate metrics if supported by the benchmark
        if self.benchmark.use_batch_scoring:
            reviewed_scores = self.benchmark.batch_calculate_metrics(
                task_states=task_states, sample_scores=reviewed_scores
            )

        # Save review results to cache
        for sample_score, task_state in zip(reviewed_scores, task_states):
            if sample_score is not None:
                self._save_review_result(task_state, sample_score, subset)

        logger.info(f'Finished reviewing subset: {subset}. Total reviewed: {len(cached_score_list)}')
        return cached_score_list + reviewed_scores
Contributor

critical

There's a critical bug in this block. run_in_threads_with_progress can return a list reviewed_scores containing None for failed tasks. This list is then passed to batch_calculate_metrics, which does not expect None values and will crash. Furthermore, the final return statement attempts to concatenate cached_score_list (List[SampleScore]) with reviewed_scores (List[Optional[SampleScore]]), which would result in a list of the wrong type and violate the function's return signature.

The logic needs to be adjusted to correctly handle failed reviews by filtering them out before batch processing and before returning the final list of scores.

        # Batch calculate metrics if supported by the benchmark
        if self.benchmark.use_batch_scoring:
            # Filter out failed reviews (where score is None) before batch processing
            valid_indices = [i for i, score in enumerate(reviewed_scores) if score is not None]
            if valid_indices:
                task_states_to_batch = [task_states[i] for i in valid_indices]
                scores_to_batch = [reviewed_scores[i] for i in valid_indices]

                updated_scores = self.benchmark.batch_calculate_metrics(
                    task_states=task_states_to_batch, sample_scores=scores_to_batch
                )

                # Place the updated scores back into the original list
                for i, updated_score in enumerate(updated_scores):
                    reviewed_scores[valid_indices[i]] = updated_score

        # Save review results to cache and filter out failures for the final list
        final_reviewed_scores = []
        for i, sample_score in enumerate(reviewed_scores):
            if sample_score is not None:
                self._save_review_result(task_states[i], sample_score, subset)
                final_reviewed_scores.append(sample_score)

        logger.info(f'Finished reviewing subset: {subset}. Total reviewed: {len(cached_score_list) + len(final_reviewed_scores)}')
        return cached_score_list + final_reviewed_scores
