
Commit 618d1ec

krunkit: move install-vulkan-gpu.sh to docs
The script seems too immature to be included in the driver package. The script didn't work for me on my MacBook Pro 2024 with Apple M4 Max:

```
cc1: sorry, unimplemented: no support for ‘sme’ without ‘sve2’
```

Signed-off-by: Akihiro Suda <[email protected]>
1 parent a6ea2fa commit 618d1ec
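The `‘sme’ without ‘sve2’` error comes from GCC probing the host CPU (the default `-mcpu=native` tuning) and detecting SME on Apple M4 cores. A possible workaround, *not part of this commit and not verified on M4 hardware*, is to configure llama.cpp with native CPU tuning disabled via its `GGML_NATIVE` CMake option; this is a hedged sketch of the configure flags only:

```shell
# Hypothetical workaround sketch (assumption, not from this commit):
# GGML_NATIVE=OFF disables -mcpu=native tuning in llama.cpp's build,
# so GCC should not emit SME instructions. Whether this fixes the
# M4 Max build under krunkit has not been verified here.
CMAKE_ARGS="-DGGML_VULKAN=ON -DGGML_CCACHE=OFF -DGGML_NATIVE=OFF -DCMAKE_INSTALL_PREFIX=/usr"
# Print the configure command instead of running it (dry run for illustration)
echo cmake -B build $CMAKE_ARGS
```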

File tree

3 files changed: +58 −75 lines

pkg/driver/krunkit/hack/install-vulkan-gpu.sh

Lines changed: 0 additions & 52 deletions
This file was deleted.

pkg/driver/krunkit/krunkit_driver_darwin_arm64.go

Lines changed: 0 additions & 22 deletions
```diff
@@ -42,9 +42,6 @@ type LimaKrunkitDriver struct {
 var (
 	_ driver.Driver = (*LimaKrunkitDriver)(nil)
 	vmType limatype.VMType = "krunkit"
-
-	//go:embed hack/install-vulkan-gpu.sh
-	gpuProvisionScript string
 )
 
 func New() *LimaKrunkitDriver {
@@ -207,25 +204,6 @@ func (l *LimaKrunkitDriver) FillConfig(_ context.Context, cfg *limatype.LimaYAML
 
 	cfg.VMType = ptr.Of(vmType)
 
-	if isFedoraConfigured(cfg) {
-		gpuInstallScript := limatype.Provision{
-			Mode:   limatype.ProvisionModeData,
-			Script: ptr.Of(gpuProvisionScript),
-			ProvisionData: limatype.ProvisionData{
-				Content:     ptr.Of(gpuProvisionScript),
-				Path:        ptr.Of("/usr/local/bin/install-vulkan-gpu.sh"),
-				Permissions: ptr.Of("0755"),
-				Overwrite:   ptr.Of(false),
-				Owner:       cfg.User.Name,
-			},
-		}
-
-		cfg.Provision = append(cfg.Provision, gpuInstallScript)
-		cfg.Message = `To enable GPU support (Vulkan) for Krunkit to use AI models without containers, run the following command inside the VM:
-` + "\x1b[32m" + `sudo install-vulkan-gpu.sh` + "\x1b[0m" + `
-` + "\x1b[31m" + `Ignore this if already done` + "\x1b[0m" + "\n"
-	}
-
 	return validateConfig(cfg)
 }
```

website/content/en/docs/config/vmtype/krunkit.md

Lines changed: 58 additions & 1 deletion
````diff
@@ -101,10 +101,67 @@ mountType: virtiofs
 
 Once inside the VM, install GPU/Vulkan support:
 
+<p>
+<details>
+<summary>Click to expand script</summary>
+
 ```bash
-sudo install-vulkan-gpu.sh
+#!/bin/bash
+# SPDX-FileCopyrightText: Copyright The Lima Authors
+# SPDX-License-Identifier: Apache-2.0
+
+set -eu -o pipefail
+
+# Install required packages
+dnf install -y dnf-plugins-core dnf-plugin-versionlock llvm18-libs
+
+# Install Vulkan and Mesa base packages
+dnf install -y \
+  mesa-vulkan-drivers \
+  vulkan-loader-devel \
+  vulkan-headers \
+  vulkan-tools \
+  vulkan-loader \
+  glslc
+
+# Enable COPR repo with patched Mesa for Venus support
+dnf copr enable -y slp/mesa-krunkit fedora-40-aarch64
+
+# Downgrade to patched Mesa version from COPR
+dnf downgrade -y mesa-vulkan-drivers.aarch64 \
+  --repo=copr:copr.fedorainfracloud.org:slp:mesa-krunkit
+
+# Lock Mesa version to prevent automatic upgrades
+dnf versionlock add mesa-vulkan-drivers
+
+# Clean up
+dnf clean all
+
+echo "Installing llama.cpp with Vulkan support..."
+# Build and install llama.cpp with Vulkan support
+dnf install -y git cmake clang curl-devel glslc vulkan-devel virglrenderer
+(
+  cd ~
+  git clone https://github.com/ggml-org/llama.cpp
+  (
+    cd llama.cpp
+    git reset --hard 97340b4c9924be86704dbf155e97c8319849ee19
+    cmake -B build -DGGML_VULKAN=ON -DGGML_CCACHE=OFF -DCMAKE_INSTALL_PREFIX=/usr
+    # FIXME: the build seems to fail on Apple M4 Max (and probably on other processors too).
+    # Error:
+    # cc1: sorry, unimplemented: no support for ‘sme’ without ‘sve2’
+    cmake --build build --config Release -j8
+    cmake --install build
+  )
+  rm -fr llama.cpp
+)
+
+echo "Successfully installed llama.cpp with Vulkan support. Use 'llama-cli' app with .gguf models."
 ```
 
+</details>
+</p>
+
 The script will prompt to build and install `llama.cpp` with Venus support from source.
 
 After installation, run:
````
