Add support for PaddlePaddle Tensors for nb::ndarray
#1099
base: master
Conversation
…ensor. Some of the self-made tests won't pass; contiguity processing is broken and needs fixing.
```python
@needs_paddle
def test60_check_paddle():
    assert t.check(paddle.zeros((1)).cpu())
```
`.cpu()` can actually be removed. Other parts are fine.

Found a problem, though; I don't know whether it is common to PyTorch and NumPy as well (I think it is): for these DL/math frameworks, the tensors/ndarrays are managed by the framework, so a naive conversion might fail. For example (taking `paddle::Tensor` as an example):

```cpp
paddle::Tensor some_tensor(...);
auto shapes = some_tensor.dims();
auto strides = some_tensor.strides();
// Only viable once the paddle-supporting PR is merged
return nb::ndarray<nb::paddle>(some_tensor.data(), ndim, shapes.data(),
                               {}, strides.data(), dtype,
                               device_type, device_id);
```

The above code will fail, since `some_tensor` is a local object that is destroyed when the function returns, leaving the returned ndarray with a dangling data pointer. My current workaround moves the tensor into an object whose lifetime is tied to the ndarray's owner capsule:

```cpp
struct LifeTimeManager {
    // Steal the tensor so its storage stays alive as long as this object does.
    explicit LifeTimeManager(paddle::Tensor&& obj) : raii_obj(std::move(obj)) {}
    // A magic way: we never actually use this call; the tensor is released
    // only when this object itself is destroyed.
    void operator()(void*) const {}
    paddle::Tensor raii_obj;
};

struct CapsuleDeleter {
    std::function<void(void*)> func;
    explicit CapsuleDeleter(LifeTimeManager&& ltm)
        : func([ltm = std::move(ltm)](void* p) mutable { ltm(p); }) {}
    static void cleanup(void* ptr) noexcept {
        auto* self = static_cast<CapsuleDeleter*>(ptr);
        self->func(self);
        delete self;  // destroys the lambda and, with it, the captured tensor
    }
};

// Upon returning:
void* data = some_tensor.data();  // grab the pointer before moving the tensor away
auto* deleter_obj = new CapsuleDeleter(LifeTimeManager(std::move(some_tensor)));
nb::capsule deleter(deleter_obj, "tensor_deleter", &CapsuleDeleter::cleanup);
return nb::ndarray<nb::paddle>(data, ndim, shapes.data(),
                               deleter, strides.data(), dtype,
                               device_type, device_id);
```

Maybe I am overcomplicating things here? Is there any simpler way to do this? I think this is not a framework-specific problem.
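One simpler pattern may be worth considering here (a sketch, not part of this PR): nanobind's `nb::capsule` also accepts a raw pointer plus a cleanup callback, so the two helper structs could potentially be collapsed into a single heap-allocated tensor that the capsule owns and deletes. Assuming `paddle::Tensor` is movable and that `shapes`/`strides` were copied out of the tensor beforehand:

```cpp
// Move the tensor onto the heap; the capsule becomes its sole owner.
auto* keeper = new paddle::Tensor(std::move(some_tensor));

nb::capsule owner(keeper, [](void* p) noexcept {
    delete static_cast<paddle::Tensor*>(p);  // releases the framework-managed storage
});

return nb::ndarray<nb::paddle>(keeper->data(), ndim, shapes.data(),
                               owner, strides.data(), dtype,
                               device_type, device_id);
```

This mirrors the owner-capsule idiom shown in the nanobind ndarray documentation, just with a moved tensor in place of a `new[]` buffer.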
PR Type
Enhancement
Related topics
`nb::ndarray`
Intro
Hi! I’m a researcher from the PaddlePaddle team. After our conversation at SIGGRAPH Asia 2024 (where you recommended migrating from pybind11 to nanobind), I gave nanobind a try and was impressed by its performance. As part of this effort, I’ve extended `nb::ndarray` to support `paddle.Tensor`, allowing PaddlePaddle tensors to be returned directly from C++ via `nb::paddle`.

This PR includes:

- `nb::paddle`: mirrors `nb::pytorch`’s behavior, with modifications in `nb_ndarray.cpp` for consistent tensor handling.
- Tests in `test_ndarray.py` (mirroring `@needs_torch`) to validate the PaddlePaddle integration.
- Documentation for `nb::paddle` (preview welcome; I’m unsure how it renders on the website).

This change maintains backward compatibility while enabling PaddlePaddle users to leverage nanobind’s efficiency. Looking forward to feedback!
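For reference, here is a minimal sketch of the intended usage, modeled on the `nb::pytorch` example from the nanobind ndarray documentation (the module and function names are made up for illustration, and it assumes the `nb::paddle` tag from this PR behaves like `nb::pytorch`):

```cpp
#include <nanobind/nanobind.h>
#include <nanobind/ndarray.h>

namespace nb = nanobind;

NB_MODULE(my_ext, m) {
    // Returns a 2x4 float32 paddle.Tensor backed by C++-allocated memory.
    m.def("ret_paddle", []() {
        float* data = new float[8] { 1, 2, 3, 4, 5, 6, 7, 8 };

        // The owner capsule frees the buffer once the tensor is garbage-collected.
        nb::capsule owner(data, [](void* p) noexcept {
            delete[] static_cast<float*>(p);
        });

        return nb::ndarray<nb::paddle, float>(data, { 2, 4 }, owner);
    });
}
```

Calling `my_ext.ret_paddle()` from Python would then yield a `paddle.Tensor`, just as the `nb::pytorch` variant yields a `torch.Tensor`.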