Commit 9db83f3 (1 parent: a6ea2fa)

krunkit: move install-vulkan-gpu.sh to docs

The script seems too immature to be included in the driver package. It did not work for me on my MacBook Pro 2024 with Apple M4 Max:

```
cc1: sorry, unimplemented: no support for ‘sme’ without ‘sve2’
```

Signed-off-by: Akihiro Suda <[email protected]>

File tree

3 files changed: +55 −75 lines

pkg/driver/krunkit/hack/install-vulkan-gpu.sh

Lines changed: 0 additions & 52 deletions
This file was deleted.

pkg/driver/krunkit/krunkit_driver_darwin_arm64.go

Lines changed: 0 additions & 22 deletions
```diff
@@ -42,9 +42,6 @@ type LimaKrunkitDriver struct {
 var (
 	_ driver.Driver = (*LimaKrunkitDriver)(nil)
 	vmType limatype.VMType = "krunkit"
-
-	//go:embed hack/install-vulkan-gpu.sh
-	gpuProvisionScript string
 )

 func New() *LimaKrunkitDriver {
@@ -207,25 +204,6 @@ func (l *LimaKrunkitDriver) FillConfig(_ context.Context, cfg *limatype.LimaYAML
 	cfg.VMType = ptr.Of(vmType)

-	if isFedoraConfigured(cfg) {
-		gpuInstallScript := limatype.Provision{
-			Mode:   limatype.ProvisionModeData,
-			Script: ptr.Of(gpuProvisionScript),
-			ProvisionData: limatype.ProvisionData{
-				Content:     ptr.Of(gpuProvisionScript),
-				Path:        ptr.Of("/usr/local/bin/install-vulkan-gpu.sh"),
-				Permissions: ptr.Of("0755"),
-				Overwrite:   ptr.Of(false),
-				Owner:       cfg.User.Name,
-			},
-		}
-
-		cfg.Provision = append(cfg.Provision, gpuInstallScript)
-		cfg.Message = `To enable GPU support (Vulkan) for Krunkit to use AI models without containers, run the following command inside the VM:
-` + "\x1b[32m" + `sudo install-vulkan-gpu.sh` + "\x1b[0m" + `
-` + "\x1b[31m" + `Ignore this if already done` + "\x1b[0m" + "\n"
-	}
-
 	return validateConfig(cfg)
 }
```

website/content/en/docs/config/vmtype/krunkit.md

Lines changed: 55 additions & 1 deletion
````diff
@@ -101,10 +101,64 @@ mountType: virtiofs
 
 Once inside the VM, install GPU/Vulkan support:
 
+<details><summary>Click to expand script</summary>
+
 ```bash
-sudo install-vulkan-gpu.sh
+#!/bin/bash
+# SPDX-FileCopyrightText: Copyright The Lima Authors
+# SPDX-License-Identifier: Apache-2.0
+
+set -eu -o pipefail
+
+# Install required packages
+dnf install -y dnf-plugins-core dnf-plugin-versionlock llvm18-libs
+
+# Install Vulkan and Mesa base packages
+dnf install -y \
+	mesa-vulkan-drivers \
+	vulkan-loader-devel \
+	vulkan-headers \
+	vulkan-tools \
+	vulkan-loader \
+	glslc
+
+# Enable COPR repo with patched Mesa for Venus support
+dnf copr enable -y slp/mesa-krunkit fedora-40-aarch64
+
+# Downgrade to patched Mesa version from COPR
+dnf downgrade -y mesa-vulkan-drivers.aarch64 \
+	--repo=copr:copr.fedorainfracloud.org:slp:mesa-krunkit
+
+# Lock Mesa version to prevent automatic upgrades
+dnf versionlock add mesa-vulkan-drivers
+
+# Clean up
+dnf clean all
+
+echo "Installing llama.cpp with Vulkan support..."
+# Build and install llama.cpp with Vulkan support
+dnf install -y git cmake clang curl-devel glslc vulkan-devel virglrenderer
+(
+	cd ~
+	git clone https://github.com/ggml-org/llama.cpp
+	(
+		cd llama.cpp
+		git reset --hard 97340b4c9924be86704dbf155e97c8319849ee19
+		cmake -B build -DGGML_VULKAN=ON -DGGML_CCACHE=OFF -DCMAKE_INSTALL_PREFIX=/usr
+		# FIXME: the build seems to fail on Apple M4 Max (and probably on other processors too).
+		# Error:
+		#   cc1: sorry, unimplemented: no support for ‘sme’ without ‘sve2’
+		cmake --build build --config Release -j8
+		cmake --install build
+	)
+	rm -fr llama.cpp
+)
+
+echo "Successfully installed llama.cpp with Vulkan support. Use 'llama-cli' app with .gguf models."
 ```
 
+</details>
+
 The script will prompt to build and install `llama.cpp` with Venus support from source.
 
 After installation, run:
````
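An aside on the script's error-handling preamble, since it matters for an installer that downgrades and version-locks packages: `set -eu -o pipefail` makes any failing command, unset variable, or failing pipeline stage abort the run instead of continuing with a half-configured VM. A minimal stand-alone sketch (not part of this commit) of the pipeline behavior:

```shell
# Without pipefail, `false | true` would succeed (only the last stage's
# exit status counts). With `set -eu -o pipefail`, the failing first
# stage makes the whole pipeline fail, and `set -e` aborts the script,
# so "reached" is never printed.
if bash -c 'set -eu -o pipefail; false | true; echo reached'; then
  echo "pipeline failure ignored"
else
  echo "aborted: pipeline failure stopped the script"
fi
```

This is why a failed `dnf downgrade` or `git clone` stops the install early rather than leaving a partially patched Mesa in place.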
