Closed
Labels
Feature request, Good Second Issue, bug
Description
System Info
- transformers version: 4.54.1
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.12.10
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.1.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?:
- Using GPU in script?:
- GPU type: NVIDIA GeForce RTX 4090
Who can help?
@ArthurZucker and @itazap
@SunMarc and @MekkCyber
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
import torch
from diffusers import (
    AutoencoderKLWan,
    GGUFQuantizationConfig,
    UniPCMultistepScheduler,
    WanVACEPipeline,
    WanVACETransformer3DModel,
)
from transformers import UMT5EncoderModel

# --- VAE ---
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-VACE-14B-diffusers",
    subfolder="vae",
    torch_dtype=torch.float32,
)

# --- Text encoder (GGUF checkpoint; this load triggers the warning below) ---
text_encoder = UMT5EncoderModel.from_pretrained(
    "city96/umt5-xxl-encoder-gguf",
    gguf_file="umt5-xxl-encoder-Q8_0.gguf",
    torch_dtype=torch.float16,
)

# --- Transformer ---
transformer = WanVACETransformer3DModel.from_single_file(
    "https://huggingface.co/QuantStack/Wan2.1_T2V_14B_FusionX_VACE-GGUF/blob/main/Wan2.1_T2V_14B_FusionX_VACE-Q6_K.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.float16),
    torch_dtype=torch.float16,
)

# --- Pipeline assembly ---
pipe = WanVACEPipeline.from_pretrained(
    "Wan-AI/Wan2.1-VACE-14B-diffusers",
    vae=vae,
    text_encoder=text_encoder,
    transformer=transformer,
    torch_dtype=torch.float16,
)

# --- Scheduler ---
flow_shift = 3.0  # example value; e.g. 3.0 for 480p output, 5.0 for 720p
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
Expected behavior
[WARNING|configuration_utils.py:622] 2025-08-10 11:46:45,092 >> You are using a model of type t5 to instantiate a model of type umt5. This is not supported for all configurations of models and can yield errors.
I expect the GGUF checkpoint to be instantiated as UMT5, not as plain T5.