Hi, I am trying to install ista_daslab_optimizers on a Google Cloud Compute Engine VM, but the build fails with the error below (full log included; my tentative reading of the failing kernel code is sketched after the log). The CUDA version I am using (according to nvcc -V) is:
release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
Building wheel for ista_daslab_optimizers (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for ista_daslab_optimizers (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [152 lines of output]
/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
running bdist_wheel
running build
running build_py
creating build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers
copying ista_daslab_optimizers/__init__.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers
copying ista_daslab_optimizers/tools.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers
creating build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/sparse_mfac
copying ista_daslab_optimizers/sparse_mfac/__init__.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/sparse_mfac
copying ista_daslab_optimizers/sparse_mfac/sparse_mfac.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/sparse_mfac
copying ista_daslab_optimizers/sparse_mfac/sparse_core_mfac_w_ef.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/sparse_mfac
creating build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/micro_adam
copying ista_daslab_optimizers/micro_adam/micro_adam.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/micro_adam
copying ista_daslab_optimizers/micro_adam/__init__.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/micro_adam
creating build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/dense_mfac
copying ista_daslab_optimizers/dense_mfac/__init__.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/dense_mfac
copying ista_daslab_optimizers/dense_mfac/dense_core_mfac.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/dense_mfac
copying ista_daslab_optimizers/dense_mfac/dense_mfac.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/dense_mfac
creating build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/acdc
copying ista_daslab_optimizers/acdc/__init__.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/acdc
copying ista_daslab_optimizers/acdc/wd_scheduler.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/acdc
copying ista_daslab_optimizers/acdc/acdc.py -> build/lib.linux-x86_64-cpython-39/ista_daslab_optimizers/acdc
running egg_info
writing ista_daslab_optimizers.egg-info/PKG-INFO
writing dependency_links to ista_daslab_optimizers.egg-info/dependency_links.txt
writing requirements to ista_daslab_optimizers.egg-info/requires.txt
writing top-level names to ista_daslab_optimizers.egg-info/top_level.txt
reading manifest file 'ista_daslab_optimizers.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'ista_daslab_optimizers.egg-info/SOURCES.txt'
running build_ext
/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py:458: UserWarning: There are no g++ version bounds defined for CUDA version 12.4
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
building 'ista_daslab_tools' extension
creating /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/tools
/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py:2059: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
Emitting ninja build file /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/tools/tools.o.d -pthread -B /opt/conda/envs/ista/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/ista/include -I/opt/conda/envs/ista/include -fPIC -O2 -isystem /opt/conda/envs/ista/include -fPIC -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/ista/include/python3.9 -c -c /workspace/ISTA-DASLab-Optimizers/kernels/tools/tools.cpp -o /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/tools/tools.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=ista_daslab_tools -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++17
[2/2] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/tools/tools_kernel.o.d -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/ista/include/python3.9 -c -c /workspace/ISTA-DASLab-Optimizers/kernels/tools/tools_kernel.cu -o /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/tools/tools_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=ista_daslab_tools -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 -std=c++17
/workspace/ISTA-DASLab-Optimizers/kernels/tools/../utils.h(76): warning #940-D: missing return statement at end of non-void function "log_threads"
}
^
Remark: The warnings can be suppressed with "-diag-suppress <warning-number>"
g++ -pthread -B /opt/conda/envs/ista/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/ista/include -I/opt/conda/envs/ista/include -fPIC -O2 -isystem /opt/conda/envs/ista/include -pthread -B /opt/conda/envs/ista/compiler_compat -shared /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/tools/tools.o /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/tools/tools_kernel.o -L/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/lib -L/usr/local/cuda/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-cpython-39/ista_daslab_tools.cpython-39-x86_64-linux-gnu.so
building 'ista_daslab_dense_mfac' extension
creating /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/dense_mfac
/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py:2059: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
Emitting ninja build file /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/dense_mfac/dense_mfac_kernel.o.d -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/ista/include/python3.9 -c -c /workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu -o /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/dense_mfac/dense_mfac_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=ista_daslab_dense_mfac -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 -std=c++17
FAILED: /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/dense_mfac/dense_mfac_kernel.o
/usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/dense_mfac/dense_mfac_kernel.o.d -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/ista/include/python3.9 -c -c /workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu -o /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/dense_mfac/dense_mfac_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=ista_daslab_dense_mfac -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 -std=c++17
/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu(48): error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists
[&] { const auto& the_type = tmp.type(); constexpr const char* at_dispatch_name = "hinv_setup_cuda"; at::ScalarType _st = ::detail::scalar_type(the_type); ; switch (_st) { case at::ScalarType::Double: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Double)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(48), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Double), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Double>; return ([&] { HinvCoefKernelDiag<scalar_t><<<m / SIZE, threads>>>( m, tmp.data<scalar_t>(), coef.data<scalar_t>() ); })(); } case at::ScalarType::Float: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Float)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(48), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Float), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Float>; return ([&] { HinvCoefKernelDiag<scalar_t><<<m / SIZE, threads>>>( m, tmp.data<scalar_t>(), coef.data<scalar_t>() ); })(); } default: if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(48), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", '"', at_dispatch_name, "\" not implemented for '", toString(_st), "'"))); }; } }()
^
/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu(55): error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists
[&] { const auto& the_type = tmp.type(); constexpr const char* at_dispatch_name = "hinv_setup_cuda"; at::ScalarType _st = ::detail::scalar_type(the_type); ; switch (_st) { case at::ScalarType::Double: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Double)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(55), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Double), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Double>; return ([&] { HinvCoefKernelMain<scalar_t><<<blocks, threads>>>( m, tmp.data<scalar_t>(), coef.data<scalar_t>(), i ); })(); } case at::ScalarType::Float: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Float)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(55), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Float), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Float>; return ([&] { HinvCoefKernelMain<scalar_t><<<blocks, threads>>>( m, tmp.data<scalar_t>(), coef.data<scalar_t>(), i ); })(); } default: if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(55), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", '"', at_dispatch_name, "\" not implemented for '", toString(_st), "'"))); }; } }()
^
/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu(181): error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists
[&] { const auto& the_type = giHig.type(); constexpr const char* at_dispatch_name = "hinv_mul_cuda"; at::ScalarType _st = ::detail::scalar_type(the_type); ; switch (_st) { case at::ScalarType::Double: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Double)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(181), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Double), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Double>; return ([&] { HinvMulKernel<scalar_t><<<1, m>>>( rows, m, giHig.data<scalar_t>(), giHix.data<scalar_t>() ); })(); } case at::ScalarType::Float: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Float)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(181), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Float), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Float>; return ([&] { HinvMulKernel<scalar_t><<<1, m>>>( rows, m, giHig.data<scalar_t>(), giHix.data<scalar_t>() ); })(); } default: if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu", static_cast<uint32_t>(181), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", '"', at_dispatch_name, "\" not implemented for '", toString(_st), "'"))); }; } }()
^
3 errors detected in the compilation of "/workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac_kernel.cu".
[2/2] c++ -MMD -MF /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/dense_mfac/dense_mfac.o.d -pthread -B /opt/conda/envs/ista/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/ista/include -I/opt/conda/envs/ista/include -fPIC -O2 -isystem /opt/conda/envs/ista/include -fPIC -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/ista/include/python3.9 -c -c /workspace/ISTA-DASLab-Optimizers/kernels/dense_mfac/dense_mfac.cpp -o /workspace/ISTA-DASLab-Optimizers/build/temp.linux-x86_64-cpython-39/kernels/dense_mfac/dense_mfac.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=ista_daslab_dense_mfac -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++17
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2209, in _run_ninja_build
subprocess.run(
File "/opt/conda/envs/ista/lib/python3.9/subprocess.py", line 528, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/ista/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
main()
File "/opt/conda/envs/ista/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 373, in main
json_out["return_val"] = hook(**hook_input["kwargs"])
File "/opt/conda/envs/ista/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 280, in build_wheel
return _build_backend().build_wheel(
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 438, in build_wheel
return _build(['bdist_wheel', '--dist-info-dir', str(metadata_directory)])
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 426, in _build
return self._build_with_temp_dir(
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 407, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 320, in run_setup
exec(code, locals())
File "<string>", line 19, in <module>
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 117, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 186, in setup
return run_commands(dist)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 202, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 983, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 1002, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/command/bdist_wheel.py", line 379, in run
self.run_command("build")
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 339, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 1002, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 136, in run
self.run_command(cmd_name)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 339, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 1002, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 99, in run
_build_ext.run(self)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 365, in run
self.build_extensions()
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 900, in build_extensions
build_ext.build_extensions(self)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 481, in build_extensions
self._build_extensions_serial()
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 507, in _build_extensions_serial
self.build_extension(ext)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 264, in build_extension
_build_ext.build_extension(self, ext)
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 562, in build_extension
objects = self.compiler.compile(
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 713, in unix_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1869, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "/tmp/pip-build-env-6z82954b/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2225, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for ista_daslab_optimizers
Building wheel for gpustat (pyproject.toml) ... done
Created wheel for gpustat: filename=gpustat-1.1.1-py3-none-any.whl size=26608 sha256=12f01e33aeda5a146a588d8b2cdd9ed7a79eee45aac8d793d4f545522db94cdf
Stored in directory: /root/.cache/pip/wheels/12/7d/d7/444dca5ad3c5ea8c4e00ff211673505e0214fa9b0303dbd3b6
Successfully built gpustat
Failed to build ista_daslab_optimizers
ERROR: Failed to build installable wheels for some pyproject.toml based projects (ista_daslab_optimizers)
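From the expanded macros in the log, all three compile errors (lines 48, 55, and 181 of dense_mfac_kernel.cu) seem to come from AT_DISPATCH_FLOATING_TYPES being given the result of tensor.type() (an at::DeprecatedTypeProperties), which the torch headers pulled into the build environment no longer convert to a c10::ScalarType; the expansion also shows the removed data<scalar_t>() accessor. The snippet below is only my guess at what a compiling version would look like, not the project's actual code: the kernel name, body, and launch configuration are placeholders I made up, and I have not tested it.

#include <torch/extension.h>

// Stand-in for HinvCoefKernelDiag -- the real kernel lives in
// kernels/dense_mfac/dense_mfac_kernel.cu and has a different body.
template <typename scalar_t>
__global__ void toy_diag_kernel(int m, const scalar_t* tmp, scalar_t* coef) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < m) coef[i] = tmp[i];
}

// Failing pattern, reconstructed from the macro expansion in the log:
//   AT_DISPATCH_FLOATING_TYPES(tmp.type(), "hinv_setup_cuda", ...)
//   ... tmp.data<scalar_t>(), coef.data<scalar_t>() ...
// What I would expect recent torch headers to accept instead: pass
// tensor.scalar_type() to the dispatch macro and use data_ptr<T>().
void toy_setup_cuda(torch::Tensor tmp, torch::Tensor coef) {
    const int m = static_cast<int>(tmp.numel());
    const int threads = 256;
    AT_DISPATCH_FLOATING_TYPES(tmp.scalar_type(), "toy_setup_cuda", ([&] {
        toy_diag_kernel<scalar_t><<<(m + threads - 1) / threads, threads>>>(
            m, tmp.data_ptr<scalar_t>(), coef.data_ptr<scalar_t>());
    }));
}

If that diagnosis is right, the same change would apply to the other two dispatch sites (hinv_setup_cuda at line 55 and hinv_mul_cuda at line 181); alternatively, pinning an older torch in the build environment might sidestep the API change, but I have not verified either workaround.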
Could you help with this? Thanks!