
Conversation

@Jintao-Huang
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a bug in the Megatron framework's LoRA integration, specifically for sequence classification tasks. The fix ensures that when model weights are saved in PEFT format, the configuration correctly reflects the 'SEQ_CLS' task type, which is essential for accurate model loading and subsequent operations.
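As a rough illustration of why the stored task type matters (a minimal sketch using plain peft, not the swift/Megatron code path; the output directory and LoRA hyperparameters are placeholders), the task_type written into adapter_config.json is what tells downstream loaders that the adapter belongs to a sequence-classification model:

# Minimal sketch, assuming only that the `peft` package is installed.
# Directory name and LoRA hyperparameters are illustrative placeholders.
from peft import LoraConfig, TaskType

config = LoraConfig(
    task_type=TaskType.SEQ_CLS,           # what the bugfix ensures for seq_cls models
    r=8,
    lora_alpha=16,
    target_modules=['q_proj', 'v_proj'],  # placeholder module names
)
config.save_pretrained('adapter_out')     # writes task_type into adapter_config.json

reloaded = LoraConfig.from_pretrained('adapter_out')
print(reloaded.task_type)                 # SEQ_CLS round-trips through the saved config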

Highlights

  • Bugfix for LoRA Sequence Classification: Corrected an issue where the PEFT configuration's task type was not explicitly set for sequence classification models when saving weights in PEFT format. This ensures proper identification of the task type as 'SEQ_CLS'.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request addresses a bug in saving LoRA models for sequence classification tasks within the Megatron bridge. The fix correctly sets the task_type in the PEFT configuration, which is essential for the model to be loaded and used correctly for sequence classification. The change is correct and well-targeted. I have one suggestion to improve code maintainability by using an enum from the peft library instead of a hardcoded string.

from swift.llm import get_multimodal_target_regex
peft_config = copy(mg_models[0].peft_config[self._adapter_name])
if args.task_type == 'seq_cls':
    peft_config.task_type = 'SEQ_CLS'
Contributor

medium

For better maintainability and to avoid potential issues with string literals, it's recommended to use the TaskType enum from the peft library directly. This makes the code more robust to potential changes in the library.

To implement this, you would add from peft import TaskType with the other local imports and then change this line to:

peft_config.task_type = TaskType.SEQ_CLS
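
As a side note, TaskType is a str-based enum, so switching from the literal does not change what gets serialized while still catching typos at attribute-lookup time. The sketch below is illustrative only; resolve_task_type is a hypothetical helper, not code from this repository:

# Illustrative sketch only; `resolve_task_type` is a hypothetical helper, not swift code.
from peft import TaskType

assert TaskType.SEQ_CLS == 'SEQ_CLS'  # str-based enum: the serialized value is unchanged

def resolve_task_type(task_type):
    # Hypothetical mapping from swift-style lowercase task types to peft's enum.
    mapping = {'seq_cls': TaskType.SEQ_CLS, 'causal_lm': TaskType.CAUSAL_LM}
    return mapping.get(task_type)

print(resolve_task_type('seq_cls'))  # TaskType.SEQ_CLS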

@Jintao-Huang
Collaborator Author

/gemini review

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a bugfix for the Megatron sequence classification LoRA bridge by correctly setting the task_type in the PEFT config during export. It also includes several improvements to the model conversion process, such as copying the args.json file for better portability and skipping precision tests when LoRA adapters are loaded to prevent errors.

My review identifies a potential race condition in swift/megatron/convert.py where a file operation is not guarded in a distributed context, which could lead to issues. I've also suggested a minor improvement in swift/megatron/export/export.py to make logging cleaner in a distributed environment. Overall, the changes are beneficial and address the intended issue.

Comment on lines 338 to 341
if os.path.exists(args_path):
    shutil.copy(args_path, os.path.join(args.output_dir, 'args.json'))
else:
    args.save_args(args.output_dir)
Contributor

high

In a distributed environment, all ranks will execute this block. shutil.copy is not safe to be called by multiple processes on the same file, which can lead to a race condition. args.save_args has an internal check for the master rank, but shutil.copy does not. You should guard this file operation with a rank check (e.g., if dist.get_rank() == 0:) to ensure it's only executed by a single process.

Suggested change
-if os.path.exists(args_path):
-    shutil.copy(args_path, os.path.join(args.output_dir, 'args.json'))
-else:
-    args.save_args(args.output_dir)
+if dist.get_rank() == 0:
+    if os.path.exists(args_path):
+        shutil.copy(args_path, os.path.join(args.output_dir, 'args.json'))
+    else:
+        args.save_args(args.output_dir)
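
A standalone sketch of the rank-0 guard pattern described above. It assumes torch.distributed is already initialized during export; the function name and the trailing barrier are illustrative additions, not swift's actual code:

# Sketch of the rank-0 guard pattern; not swift's implementation.
import os
import shutil

import torch.distributed as dist

def copy_args_on_master(args_path, output_dir):
    # Only one process performs the copy, avoiding the race condition.
    if not dist.is_initialized() or dist.get_rank() == 0:
        if os.path.exists(args_path):
            shutil.copy(args_path, os.path.join(output_dir, 'args.json'))
    # Other ranks wait here so the file is guaranteed to exist afterwards.
    if dist.is_initialized():
        dist.barrier()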

    shutil.copy(args_path, os.path.join(args.save, 'args.json'))
else:
    args.save_args(args.save)
logger.info(f'Successfully saved HF model weights in `{args.save}`.')
Contributor

medium

This log message will be printed by all ranks in a distributed environment, which can lead to cluttered logs. It's better to guard this with a rank check, similar to the log message on line 135 (logger.info_if(..., cond=is_last_rank())), to ensure the message is printed only once.

Suggested change
-logger.info(f'Successfully saved HF model weights in `{args.save}`.')
+logger.info_if(f'Successfully saved HF model weights in `{args.save}`.', cond=is_last_rank())
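
For reference, here is a minimal sketch of what such a conditional-logging helper could look like. swift's real logger.info_if and is_last_rank may be implemented differently; this version assumes only a standard logging logger and, optionally, an initialized torch.distributed process group:

# Not swift's implementation: a sketch of the conditional-logging idea.
import logging

import torch.distributed as dist

logger = logging.getLogger(__name__)

def is_last_rank():
    # True on the highest-numbered rank, or when running without distributed.
    if not dist.is_initialized():
        return True
    return dist.get_rank() == dist.get_world_size() - 1

def info_if(msg, cond=True):
    # Log only when the condition holds, so the message appears once per job.
    if cond:
        logger.info(msg)

# Usage mirroring the suggestion above:
# info_if(f'Successfully saved HF model weights in `{args.save}`.', cond=is_last_rank())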

@Jintao-Huang merged commit a05f7cd into modelscope:main on Dec 16, 2025
2 of 3 checks passed