This project demonstrates how to use open-source models from Hugging Face to generate text from different prompts. The models used are Llama, GPT-2, and DeepSeek.
To install the necessary dependencies, run the following command:

```bash
!pip install transformers bitsandbytes
```

First, log in to Hugging Face using your token:

```bash
!huggingface-cli login
```

To use the Llama model for text generation:
```python
from transformers import pipeline

# Load the 1B-parameter Llama 3.2 model (gated: requires access approval on Hugging Face)
pipe = pipeline("text-generation", model="meta-llama/Llama-3.2-1B")
prompt = "Once upon a time:"
output = pipe(prompt, max_length=300, num_return_sequences=1)
print(output[0]["generated_text"])
```

GPT-2 can be used in exactly the same way.
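In both the Llama call above and the GPT-2 call below, the pipeline returns a list with one dict per returned sequence, and `generated_text` includes the original prompt. A small mock-data sketch of keeping only the continuation (the generated string here is illustrative, not real model output):

```python
# Mock output with the same shape the text-generation pipeline returns:
# a list with one dict per sequence (num_return_sequences=1 here).
prompt = "Once upon a time:"
output = [{"generated_text": "Once upon a time: there was a kingdom."}]

# generated_text echoes the prompt, so slice it off to keep the continuation.
continuation = output[0]["generated_text"][len(prompt):].strip()
print(continuation)  # -> there was a kingdom.
```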
```python
from transformers import pipeline

# GPT-2 is small and openly available, so no access request is needed
generator = pipeline("text-generation", model="gpt2")
prompt = "Once upon a time:"
output = generator(prompt, max_length=300, num_return_sequences=1)
print(output[0]["generated_text"])
```

The DeepSeek distilled model is chat-tuned, so its pipeline takes a list of messages rather than a plain string.
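Chat-style pipelines like the DeepSeek one below consume a list of role/content message dicts and return the whole conversation, with the model's reply appended after the user turn. A mock-data sketch of that shape (the reply text is made up for illustration):

```python
messages = [{"role": "user", "content": "Once upon a time:"}]

# Mock output shaped like pipe(messages): generated_text holds the full
# conversation, so the model's reply is the last message in the list.
output = [{
    "generated_text": messages + [
        {"role": "assistant", "content": "there lived a curious fox."},
    ]
}]

reply = output[0]["generated_text"][-1]["content"]
print(reply)  # -> there lived a curious fox.
```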
```python
from transformers import pipeline

# Limit generation length at pipeline construction time
pipe = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    max_length=150,
    num_return_sequences=1,
)
messages = [{"role": "user", "content": "Once upon a time:"}]
output = pipe(messages)
# The pipeline returns the whole conversation; index 1 is the model's reply
print(output[0]["generated_text"][1]["content"])
```

Next steps:

- Try some new models and some new prompts
- Explore the Llama, GPT, and DeepSeek open-source models
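When trying new models and prompts, sampling settings matter: the text-generation pipeline accepts `do_sample=True` together with a `temperature` (both standard generation parameters). The effect of temperature on the token distribution can be sketched in plain Python:

```python
import math

def softmax(logits, temperature=1.0):
    # Dividing logits by the temperature before the softmax:
    # temperature < 1 sharpens the distribution, > 1 flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax(logits, temperature=0.5)  # more deterministic
flat = softmax(logits, temperature=2.0)   # more diverse / random
print(round(max(sharp), 3), round(max(flat), 3))
```

Low temperatures make the most likely token dominate (repeatable stories); high temperatures spread probability across tokens (more surprising stories).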
This project is licensed under the MIT License.