
Conversation


@wjddn279 wjddn279 commented May 20, 2024

Hello, I'm making good use of the open source you provide.
While using it, I found an issue and fixed it, and I'd like to contribute the fix to your project.

The issue I found:

I want to use set_shared_memory_region_from_dlpack for GPU-to-GPU shared memory. I converted a PyTorch tensor to DLPack using to_dlpack, but the second call below raises an error:

# this works
img_tensor = img_tensor_raw.resize_(target_height, target_width, 3)
cudashm.set_shared_memory_region_from_dlpack(cuda_shm_ip_handle, [img_tensor])

# this raises an error
img_tensor = img_tensor_raw.resize_(1, target_height, target_width, 3)
cudashm.set_shared_memory_region_from_dlpack(cuda_shm_ip_handle, [img_tensor])

Even though I make the PyTorch tensor contiguous, its strides change when it is converted to DLPack.
This is a known PyTorch behavior, and the maintainers say it is not a bug but intentional logic to prevent another issue: to_dlpack exports a stride of 1 for every dimension whose size is 1.

So I fixed the is_contiguous logic to skip the stride check for dimensions of size 1, and it works well in my code.
If this project has a contribution process, I will follow it.
Thanks
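For context, here is a minimal sketch of the kind of contiguity check the fix describes: a row-major stride check that ignores dimensions of size 1. This is an illustration, not the project's actual code; the function name and signature are hypothetical.

```python
def is_contiguous(shape, strides):
    """Return True if (shape, strides) describes a row-major contiguous
    tensor, ignoring the stride of any size-1 dimension.

    Hypothetical sketch: PyTorch's to_dlpack exports stride 1 for size-1
    dims, so their stride carries no layout information and is skipped."""
    expected = 1
    for dim, stride in zip(reversed(shape), reversed(strides)):
        # A size-1 dimension cannot affect memory layout, so any stride
        # value is acceptable for it.
        if dim != 1 and stride != expected:
            return False
        expected *= dim
    return True

# Strides as to_dlpack would export for shape (1, 224, 224, 3):
# the leading size-1 dim gets stride 1 instead of 224*224*3.
print(is_contiguous((1, 224, 224, 3), (1, 672, 3, 1)))  # True
print(is_contiguous((224, 224, 3), (672, 3, 1)))        # True
print(is_contiguous((224, 224, 3), (3, 672, 1)))        # False
```

A strict comparison against the full row-major strides would reject the first case, which is exactly the failure reported in this PR.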


wjddn279 commented May 20, 2024

The problem I've had is the same as #536.
This PR can resolve the issue without downgrading PyTorch below 1.12.


631068264 commented Apr 20, 2025

I get the same error with my torch tensor:

is_contiguous: True, device: cuda:0, dtype: torch.float32, shape: torch.Size([1, 3, 224, 224]), stride: (150528, 50176, 224, 1)

Can this PR fix it?

@wjddn279

@dyastremsky
Hello, could you check this PR please?
I think this PR is needed in your project.

@wjddn279

@indrajit96
