This repository was archived by the owner on Oct 6, 2025. It is now read-only.

Error while pre-processing #219

@cubedmeatgoeshere

Description

```
$ python3 -m piper_train.preprocess --language en-us --input-dir "/home/patrick/voicedata/wav/" --output-dir "/home/patrick/voicedata/model" --dataset-format ljspeech --single-speaker --sample-rate 44100
INFO:preprocess:Single speaker dataset
INFO:preprocess:Wrote dataset config
INFO:preprocess:Processing 100 utterance(s) with 12 worker(s)
ERROR:preprocess:phonemize_batch_espeak
Traceback (most recent call last):
  File "/home/patrick/piper/src/python/piper_train/preprocess.py", line 289, in phonemize_batch_espeak
    silence_detector = make_silence_detector()
  File "/home/patrick/piper/src/python/piper_train/norm_audio/__init__.py", line 18, in make_silence_detector
    return SileroVoiceActivityDetector(silence_model)
  File "/home/patrick/piper/src/python/piper_train/norm_audio/vad.py", line 17, in __init__
    self.session = onnxruntime.InferenceSession(onnx_path)
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 432, in __init__
    raise e
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 451, in _create_inference_session
    raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], ...)
```

(The same `phonemize_batch_espeak` traceback is printed once per worker; only one copy is shown above.)
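The ValueError itself points at the fix: since onnxruntime 1.9, `InferenceSession` must be given an explicit `providers` list, but `vad.py` line 17 passes only the model path. A minimal sketch of a local workaround in `piper_train/norm_audio/vad.py` (choosing `CPUExecutionProvider` is my assumption here, not something the traceback dictates; the small Silero VAD model should run fine on CPU):

```python
import onnxruntime

class SileroVoiceActivityDetector:
    def __init__(self, onnx_path):
        # Since ORT 1.9, the providers argument is required; naming
        # CPUExecutionProvider explicitly avoids the ValueError above.
        # (CPU-only is an assumption; CUDAExecutionProvider could be
        # listed first instead to use the GPU build's CUDA provider.)
        self.session = onnxruntime.InferenceSession(
            str(onnx_path),
            providers=["CPUExecutionProvider"],
        )
```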

Sorry for the formatting; the WSL shell doesn't copy well.
