Add conditional_generation example xpu support #2684
Conversation
Signed-off-by: Liu, Kaixuan <[email protected]>
Check out this pull request on ReviewNB: see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB
Signed-off-by: Liu, Kaixuan <[email protected]>
Thanks for updating the examples. I found a handful of issues, otherwise LGTM.
@@ -179,7 +180,979 @@
   "metadata": {
   "tags": []
   },
- "outputs": [],
+ "outputs": [
Let's clear the cells here. Normally, I prefer to see the outputs but this notebook already has cleared cells and having the given output in this case is not very helpful.
Done
  lora_dropout=0.1,
  task_type=TaskType.SEQ_2_SEQ_LM,
  inference_mode=False,
  total_step=len(dataset['train']) * num_epochs,
Thanks for fixing!
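The fix in the hunk above derives the configuration's step budget from the data instead of hard-coding it. A minimal sketch of that arithmetic, with hypothetical sizes standing in for the notebook's real dataset and epoch count:

```python
# Hypothetical sizes; the real notebook takes these from the loaded
# dataset and its training arguments.
num_epochs = 3
train_split = list(range(2000))  # stand-in for dataset["train"]

# total_step must cover the whole run so any step-based schedule in
# the config spans all epochs rather than stopping after the first.
total_step = len(train_split) * num_epochs
print(total_step)  # 6000
```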
- "outputs": [],
+ "outputs": [
+   {
+     "ename": "ValueError",
It looks like this checkpoint has been deleted and I don't even know which model was used or how it was trained. I'd say, let's delete this notebook, as it is non-functional. The same is true for examples/causal_language_modeling/peft_lora_clm_accelerate_big_model_inference.ipynb.
deleted
  "\n",
  "set_seed(42)\n",
  "\n",
+ "device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",
How about we use this? `device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"`
Done
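The pattern suggested above can be sketched as follows. Note that the `try`/`except`, the `is_available()` check, and the CPU fallback are my additions for robustness on machines without torch or without any accelerator; they are not part of the PR, and `torch.accelerator` only exists in recent PyTorch releases:

```python
# Accelerator-agnostic device selection, per the review suggestion.
# The import guard and CPU fallback are assumptions added here so the
# snippet degrades gracefully; the PR itself assumes torch is present.
try:
    import torch

    if hasattr(torch, "accelerator") and torch.accelerator.is_available():
        # current_accelerator() returns a torch.device; .type is the
        # backend name string, e.g. "cuda" or "xpu".
        device = torch.accelerator.current_accelerator().type
    else:
        device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"

print(device)
```

This covers CUDA, XPU, and any future backend with one line, instead of stacking per-backend `is_available()` checks.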
  os.environ["TOKENIZERS_PARALLELISM"] = "false"

- device = "cuda"
+ device = "xpu" if torch.xpu.is_available() else "cuda"
as above
Done
  "from datasets import load_dataset\n",
  "\n",
- "device = \"cuda\"\n",
+ "device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",
as above
Done
  "from datasets import load_dataset\n",
  "\n",
- "device = \"cuda\"\n",
+ "device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",
as above
Done
  "os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
  "\n",
- "device = \"cuda\"\n",
+ "device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",
as above
Done
  "os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
  "\n",
- "device = \"cuda\"\n",
+ "device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",
as above
Done
  "from datasets import load_dataset\n",
  "\n",
- "device = \"cuda\"\n",
+ "device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",
as above
Done
Signed-off-by: Liu, Kaixuan <[email protected]>
@kaixuanliu Could you please run
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@BenjaminBossan Oops, my fault; I've fixed the formatting issue.
Signed-off-by: Liu, Kaixuan <[email protected]>
Thanks, but there still seem to be issues with formatting. Maybe let's check that you have a matching ruff version (0.9.10) and that the settings from the
Thanks for making the examples XPU-compatible and also updating them where necessary. The PR LGTM. @yao-matrix anything else from your side?
Looks good to me, thx @BenjaminBossan
Signed-off-by: Liu, Kaixuan <[email protected]>
No description provided.