[Feature] Update Bertscore for DrivelologyNarrativeWriting #935
base: main
Conversation
Summary of Changes
Hello @Yunnglin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces lazy initialization for the BERTScorer within the Drivelology Narrative Writing benchmark. The primary goal is to optimize resource usage by delaying the loading of the BERTScorer model until it's explicitly required. The changes involve restructuring the BERTScore computation logic into the adapter class and adding a dedicated test to validate the functionality.
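For readers unfamiliar with the pattern, here is a minimal sketch of lazy, thread-safe scorer initialization as described in this summary. The names (`_bert_scorer`, `_scorer_lock`, `_get_bert_scorer`) are illustrative assumptions, not the adapter's actual attributes:

```python
import threading

from bert_score import BERTScorer


class DrivelologyNarrativeWritingAdapter:
    """Sketch: defer loading the BERTScorer model until first use."""

    def __init__(self):
        self._bert_scorer = None  # model is not loaded at construction time
        self._scorer_lock = threading.Lock()  # guard against concurrent first use

    def _get_bert_scorer(self) -> BERTScorer:
        # Double-checked locking: the fast path skips the lock once initialized
        if self._bert_scorer is None:
            with self._scorer_lock:
                if self._bert_scorer is None:
                    # Model weights are downloaded/loaded only on the first call
                    self._bert_scorer = BERTScorer(lang='en')
        return self._bert_scorer
```

The double-checked locking keeps the common path lock-free once the model is loaded, which matters when many review threads share one adapter instance.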
Code Review
This pull request effectively refactors the BERTScore calculation to use lazy initialization, which is a great performance improvement. The implementation is clean and thread-safe. I've added one review comment with a suggestion to further simplify the new `compute_bertscore_one_sample` method by adjusting its return value to better match its usage, which would improve code clarity and maintainability.
evalscope/benchmarks/drivelology/drivelology_writing_adapter.py (review comment outdated; resolved)
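The resolved suggestion itself is not preserved above, but based on the review text, a hypothetical version of the simplification could look like this: have the single-sample helper return a plain `float` F1 rather than the one-element sequences that `bert_score` produces. The signature below is an assumption for illustration only:

```python
def compute_bertscore_one_sample(self, prediction: str, reference: str) -> float:
    """Score one prediction/reference pair; return a scalar F1.

    Returning a scalar (instead of one-element tensors) matches how a
    single-sample helper is typically consumed at the call site.
    """
    scorer = self._get_bert_scorer()  # lazily initialized, see sketch above
    # BERTScorer.score returns (P, R, F1) tensors, one entry per pair
    _, _, f1 = scorer.score([prediction], [reference])
    return f1[0].item()
```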
/gemini review
Code Review
This pull request introduces batch scoring capabilities, primarily for BERTScore, to enhance performance. The changes to the core evaluation loop and data adapter interfaces are well-structured. However, I've identified a critical performance issue in the `DrivelologyNarrativeWritingAdapter` where the BERTScore model is inefficiently reloaded for every batch. Additionally, there's a critical bug in the `DefaultEvaluator` that could lead to a crash when handling failed reviews during batch scoring. I have provided suggestions to address these critical issues, along with some refactoring recommendations to improve code quality and maintainability.
```python
# Batch calculate metrics if supported by the benchmark
if self.benchmark.use_batch_scoring:
    reviewed_scores = self.benchmark.batch_calculate_metrics(
        task_states=task_states, sample_scores=reviewed_scores
    )

# Save review results to cache
for sample_score, task_state in zip(reviewed_scores, task_states):
    if sample_score is not None:
        self._save_review_result(task_state, sample_score, subset)

logger.info(f'Finished reviewing subset: {subset}. Total reviewed: {len(cached_score_list)}')
return cached_score_list + reviewed_scores
```
There's a critical bug in this block. `run_in_threads_with_progress` can return a list `reviewed_scores` containing `None` for failed tasks. This list is then passed to `batch_calculate_metrics`, which does not expect `None` values and will crash. Furthermore, the final return statement attempts to concatenate `cached_score_list` (`List[SampleScore]`) with `reviewed_scores` (`List[Optional[SampleScore]]`), which would result in a list of the wrong type and violate the function's return signature.
The logic needs to be adjusted to correctly handle failed reviews by filtering them out before batch processing and before returning the final list of scores.
```python
# Batch calculate metrics if supported by the benchmark
if self.benchmark.use_batch_scoring:
    # Filter out failed reviews (where score is None) before batch processing
    valid_indices = [i for i, score in enumerate(reviewed_scores) if score is not None]
    if valid_indices:
        task_states_to_batch = [task_states[i] for i in valid_indices]
        scores_to_batch = [reviewed_scores[i] for i in valid_indices]
        updated_scores = self.benchmark.batch_calculate_metrics(
            task_states=task_states_to_batch, sample_scores=scores_to_batch
        )
        # Place the updated scores back into the original list
        for i, updated_score in enumerate(updated_scores):
            reviewed_scores[valid_indices[i]] = updated_score

# Save review results to cache and filter out failures for the final list
final_reviewed_scores = []
for i, sample_score in enumerate(reviewed_scores):
    if sample_score is not None:
        self._save_review_result(task_states[i], sample_score, subset)
        final_reviewed_scores.append(sample_score)

logger.info(f'Finished reviewing subset: {subset}. Total reviewed: {len(cached_score_list) + len(final_reviewed_scores)}')
return cached_score_list + final_reviewed_scores
```
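The other critical point raised in this review, reloading the BERTScore model on every batch, is what the lazy-initialization pattern avoids: the scorer is created once and reused across calls to `batch_calculate_metrics`. A hypothetical helper (method name and signature are assumptions) sketching the batched scoring path:

```python
from typing import List

def _batch_bertscore(self, predictions: List[str], references: List[str]) -> List[float]:
    """Score all prediction/reference pairs in one batched call."""
    scorer = self._get_bert_scorer()  # cached after the first batch; no reload
    # One batched forward pass instead of a per-sample call for each pair
    _, _, f1 = scorer.score(predictions, references)
    return [score.item() for score in f1]
```

Batching both amortizes the one-time model load and lets BERTScore exploit parallelism across samples within a batch.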
Support `bert_scorer` lazy initialization.
Refer to #909.