
Conversation

@kaixuanliu (Contributor)

No description provided.

Signed-off-by: Liu, Kaixuan <[email protected]>

Signed-off-by: Liu, Kaixuan <[email protected]>
@BenjaminBossan (Member) left a comment

Thanks for updating the examples. I found a handful of issues, otherwise LGTM.

@@ -179,7 +180,979 @@
"metadata": {
"tags": []
},
"outputs": [],
"outputs": [

Member:

Let's clear the cells here. Normally, I prefer to see the outputs, but this notebook already has cleared cells, and having the given output in this case is not very helpful.

Contributor Author:

Done
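
For anyone reproducing this cleanup, here is one way to clear notebook outputs programmatically; a minimal sketch using nbformat, with a hypothetical notebook path:

import nbformat

# Hypothetical path, standing in for whichever example notebook is being cleaned.
path = "examples/conditional_generation/peft_lora_seq2seq.ipynb"

# Read the notebook, drop outputs and execution counts from code cells,
# and write it back in place.
nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell["outputs"] = []
        cell["execution_count"] = None
nbformat.write(nb, path)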

lora_dropout=0.1,
task_type=TaskType.SEQ_2_SEQ_LM,
inference_mode=False,
total_step=len(dataset['train']) * num_epochs,

Member:

Thanks for fixing!
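
For context, these kwargs look like peft's AdaLoraConfig, which needs total_step to span the whole training run (an assumption on my part; the surrounding values below are illustrative placeholders):

from peft import AdaLoraConfig, TaskType

# Stand-ins for objects defined earlier in the notebook; only the length of
# the training split matters for this snippet.
dataset = {"train": list(range(2000))}  # hypothetical placeholder for the DatasetDict
num_epochs = 8  # illustrative value

peft_config = AdaLoraConfig(
    lora_dropout=0.1,
    task_type=TaskType.SEQ_2_SEQ_LM,
    inference_mode=False,
    # the fix: size the AdaLoRA rank-allocation schedule from the actual training length
    total_step=len(dataset["train"]) * num_epochs,
)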

"outputs": [],
"outputs": [
{
"ename": "ValueError",

Member:

It looks like this checkpoint has been deleted and I don't even know which model was used or how it was trained. I'd say, let's delete this notebook, as it is non-functional. The same is true for examples/causal_language_modeling/peft_lora_clm_accelerate_big_model_inference.ipynb.

Contributor Author:

Deleted

"\n",
"set_seed(42)\n",
"\n",
"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

Contributor:

How about we use this? device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"

Contributor Author:

Done
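
For reference, the accepted suggestion resolves the device through PyTorch's generic accelerator API; a minimal sketch (torch.accelerator only exists in recent PyTorch releases, hence the hasattr guard, and the one-liner assumes some accelerator is actually present):

import torch

# Prefer PyTorch's generic accelerator API (covers cuda, xpu, mps, ...);
# on older builds without torch.accelerator, fall back to "cuda".
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
print(device)  # e.g. "cuda" on NVIDIA GPUs, "xpu" on Intel GPUs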

os.environ["TOKENIZERS_PARALLELISM"] = "false"

device = "cuda"
device = "xpu" if torch.xpu.is_available() else "cuda"

Contributor:

as above

Contributor Author:

Done

"from datasets import load_dataset\n",
"\n",
"device = \"cuda\"\n",
"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

Contributor:

as above

Contributor Author:

Done

"from datasets import load_dataset\n",
"\n",
"device = \"cuda\"\n",
"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

Contributor:

as above

Contributor Author:

Done

"os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
"\n",
"device = \"cuda\"\n",
"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

Contributor:

as above

Contributor Author:

Done

"os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
"\n",
"device = \"cuda\"\n",
"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

Contributor:

as above

Contributor Author:

Done

"from datasets import load_dataset\n",
"\n",
"device = \"cuda\"\n",
"device = \"xpu\" if torch.xpu.is_available() else \"cuda\"\n",

Contributor:

as above

Contributor Author:

Done

Signed-off-by: Liu, Kaixuan <[email protected]>
Signed-off-by: Liu, Kaixuan <[email protected]>
Signed-off-by: Liu, Kaixuan <[email protected]>

@BenjaminBossan (Member)

@kaixuanliu Could you please run make style?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@kaixuanliu (Contributor, Author)

@BenjaminBossan Oops, my fault, I have fixed the formatting issue.

Signed-off-by: Liu, Kaixuan <[email protected]>

@BenjaminBossan (Member)

Thanks, but there still seem to be issues with formatting. Maybe let's check that you have a matching ruff version (0.9.10) and that the settings from the pyproject.toml are being used.

@BenjaminBossan (Member) left a comment

Thanks for making the examples XPU-compatible and also updating them where necessary. The PR LGTM. @yao-matrix anything else from your side?

@yao-matrix (Contributor)

Looks good to me, thx @BenjaminBossan

@BenjaminBossan BenjaminBossan merged commit e3d8fc9 into huggingface:main Aug 6, 2025
2 of 14 checks passed
@kaixuanliu kaixuanliu deleted the conditional_generation_xpu branch August 11, 2025 01:36