
opencl: add set_rows for f16 and f32 #14547


Merged: 2 commits, Jul 10, 2025

Conversation

@lhez (Collaborator) commented Jul 6, 2025

Following a70c8a0, this PR adds set_rows for f16 and f32.


@github-actions bot added labels ggml (changes relating to the ggml tensor library for machine learning) and OpenCL (issues specific to the OpenCL backend) on Jul 6, 2025
Comment on lines 3482 to 3486:

```cpp
// one 256-thread workgroup per row: ne01 rows along dim 0,
// ne02 and ne03 batched along dims 1 and 2
int nth0 = 256;
size_t global_work_size[] = {(size_t)ne01*nth0, (size_t)ne02, (size_t)ne03};
size_t local_work_size[] = {(size_t)nth0, 1, 1};
```
Member commented:
Note that implementing it like this won't be very efficient. This dedicates 256 threads for each row of data. So for small rows with less than 256 elements, there will be wasted resources. For example, when FA is disabled, ggml_set_rows() is used with rows of 1 element (due to the V cache being transposed), so 255 out of the 256 local threads will be idle.

That's why in the Metal implementation I did "threadgroup batching" so that the local threads can work on multiple rows. Might want to consider implementing it here too for improved performance.

Collaborator (Author) replied:

Thanks a lot for the suggestion, that's a good point. Looking into this.

@lhez lhez marked this pull request as ready for review July 7, 2025 06:22
@CISC (Collaborator) commented Jul 9, 2025

@ggerganov Any chance of getting an OpenCL runner on ggml-ci?

@ggerganov (Member)

The easiest way is if there are suitable machines in Azure cloud because we have a grant for these. The other option is someone to donate a dedicated machine (like how the SYCL node is donated by Menlo AI).

@max-krasnyansky (Collaborator)

> @ggerganov Any chance of getting an OpenCL runner on ggml-ci?

Yeah, I was thinking of adding some X-Elite-based runners but didn't get a chance to look into it.
The standard GitHub-hosted runners do not have a GPU -- https://docs.github.com/en/actions/reference/github-hosted-runners-reference#supported-runners-and-hardware-resources
We could probably enable some CPU-based OpenCL testing, but it wouldn't really exercise all the features of the backend.

@max-krasnyansky (Collaborator)

> The easiest way is if there are suitable machines in Azure cloud because we have a grant for these. The other option is someone to donate a dedicated machine (like how the SYCL node is donated by Menlo AI).

The N-series might work: https://learn.microsoft.com/en-us/azure-stack/user/gpu-vms-about?view=azs-2501
but it would still be quite limited due to the lack of OpenCL FP16 support.

@CISC (Collaborator) commented Jul 9, 2025

> The easiest way is if there are suitable machines in Azure cloud because we have a grant for these. The other option is someone to donate a dedicated machine (like how the SYCL node is donated by Menlo AI).
>
> The N-series might work: https://learn.microsoft.com/en-us/azure-stack/user/gpu-vms-about?view=azs-2501 but would still be quite limited due to lack of OpenCL FP16 support.

Even the MI25?

@max-krasnyansky (Collaborator)

> The easiest way is if there are suitable machines in Azure cloud because we have a grant for these. The other option is someone to donate a dedicated machine (like how the SYCL node is donated by Menlo AI).
>
> The N-series might work: https://learn.microsoft.com/en-us/azure-stack/user/gpu-vms-about?view=azs-2501 but would still be quite limited due to lack of OpenCL FP16 support.
>
> Even the MI25?

Oh, I didn't notice the MI25 in there, only saw the NV flavors somehow :).
In the past only the Intel and Qualcomm GPU drivers had a decent OpenCL feature set (for GGML/LLM needs, that is).
Not a bad idea to check on the latest stuff.

@ggerganov if you could get one of those VMs hooked up as a runner with some new tag, we could see which tests we can run on it. It would be useful for the HIP and maybe Vulkan backends as well.

@lhez (Collaborator, Author) commented Jul 9, 2025

> The easiest way is if there are suitable machines in Azure cloud because we have a grant for these. The other option is someone to donate a dedicated machine (like how the SYCL node is donated by Menlo AI).
>
> The N-series might work: https://learn.microsoft.com/en-us/azure-stack/user/gpu-vms-about?view=azs-2501 but would still be quite limited due to lack of OpenCL FP16 support.
>
> Even the MI25?
>
> Oh, I didn't notice MI25 in there, only saw the NV flavors somehow :). In the past only the Intel and Qualcomm GPU drivers had decent OpenCL feature-set (for GGML / LLM needs that is). Not a bad idea to check on the latest stuff.
>
> @ggerganov if you could get one of those VMs hooked up as runner with some new tag we could see which tests we could run on it. It will be useful for the HIP and maybe Vulkan backends as well.

Currently it won't run on AMD. I did try to enable it on AMD but never really finished; I will take a look at it again. Intel should just work, if integrated Intel GPUs can be used. Nvidia's OpenCL implementation never got OpenCL 2.0 features like subgroups (unfortunately, subgroups are not mandatory for OpenCL 3.0, so a driver can claim OpenCL 3.0 without supporting subgroups).

@ggerganov (Member)

Ok, if you confirm support with any of the Azure hosts, let me know and I'll add a node.

@max-krasnyansky (Collaborator)

I'm going to merge it now. We can iterate further if needed.

@max-krasnyansky max-krasnyansky merged commit 0b88557 into ggml-org:master Jul 10, 2025
48 checks passed