
Commit 26ee1bc

Add Function Calling Fine-tuning LLMs on xLAM Dataset notebook
This notebook demonstrates how to fine-tune language models for function calling using the Salesforce xLAM dataset and the QLoRA technique. Key features:
- Universal model support (Llama, Qwen, Mistral, Gemma, Phi, etc.)
- Memory-efficient QLoRA training on consumer GPUs (16-24 GB)
- Automatic model configuration and token detection
- Production-ready code with comprehensive documentation
- Complete pipeline from training to deployment on the Hugging Face Hub

✅ Contribution task completed
1 parent aafb3cc commit 26ee1bc
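
For orientation, the following is a minimal sketch of the QLoRA recipe the commit message describes, built on transformers, peft, bitsandbytes, and datasets. The model id, dataset split handling, and every hyperparameter below are illustrative assumptions, not values taken from the notebook itself.

# Minimal QLoRA setup sketch; all names and values are placeholder assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # assumption: any supported causal LM can be substituted

# xLAM function-calling data from Salesforce (Hub access to the dataset may need to be requested)
dataset = load_dataset("Salesforce/xlam-function-calling-60k", split="train")

# 4-bit NF4 quantization keeps the frozen base weights small enough for 16-24 GB GPUs
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Only the LoRA adapters are trained; rank, alpha, and target modules are placeholder values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Supervised fine-tuning (e.g. with trl's SFTTrainer) and pushing the adapters
# to the Hugging Face Hub would follow here, as outlined in the commit message.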

File tree

3 files changed: +14343, -1 lines changed


notebooks/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -84,6 +84,8 @@
     title: Documentation Chatbot with Meta Synthetic Data Kit
   - local: optuna_hpo_with_transformers
     title: Hyperparameter Optimization with Optuna and Transformers
+  - local: function_calling_fine_tuning_llms_on_xlam
+    title: Function Calling Fine-tuning LLMs on xLAM Dataset
0 commit comments
