Autocomplete in VSCode not working #7738
Replies: 3 comments · 4 replies
Continue: 1.2.2
Model: qwen2.5-coder:3b

I've been trying for the last couple of weeks to get autocomplete working in VSCode, to no avail. When I do get a completion out of it, it reads more like the model has been asked to describe what it's looking at than like an autocomplete.

I've been over the docs numerous times, but whatever I'm missing hasn't been apparent yet. I'm running it via OpenWebUI -> llama.cpp, but when I tested going to llama.cpp directly the behavior was identical, so that seems to rule it out as a cause. Chat features work great using the gpt-oss:20b model this way as well. Config is as follows:
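The actual config block didn't survive the page capture. As a stand-in only, a Continue `config.yaml` that routes autocomplete through an OpenAI-compatible server such as OpenWebUI generally looks something like the sketch below; the `apiBase`, `apiKey`, and model names here are placeholders, not the author's real values.

```yaml
# Illustrative sketch only -- the author's actual config was lost in the capture.
# All values below are assumptions for an OpenWebUI (OpenAI-compatible) setup.
models:
  - name: qwen2.5-coder:3b
    provider: openai                      # generic OpenAI-compatible provider
    model: qwen2.5-coder:3b
    apiBase: http://localhost:3000/api    # assumed OpenWebUI address
    apiKey: sk-placeholder
    roles:
      - autocomplete
```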
@Aroni525 The autocompletions are a little better if I use their in-house "Instinct" model, but ONLY in NextEdit mode, and only if I turn the temperature up (add `temperature: 0.6` under `defaultCompletionOptions`). If I look at the Continue console, it looks like it isn't sending a prompt at all to other models, just the context of the edit. I'll leave it on Instinct for the time being because it at least kinda works, but hopefully the bug in Continue can get fixed; I'd prefer to use qwen2.5-coder, if only for speed.
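For reference, this is where that option sits in Continue's `config.yaml`. A minimal sketch: the provider and model lines are placeholders, not the commenter's exact setup.

```yaml
# Minimal sketch of the temperature bump described above.
# Provider/model values are placeholders, not the commenter's actual config.
models:
  - name: autocomplete
    provider: ollama            # placeholder provider
    model: qwen2.5-coder:3b     # placeholder model
    defaultCompletionOptions:
      temperature: 0.6          # raise sampling temperature for completions
    roles:
      - autocomplete
```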
@StreamOfRon a) Glad to hear that Instinct worked well for you! b) I'd very much like to help you get to your preferred setup with qwen. I tested out a few scenarios, and it looks like the issue is that OpenWebUI doesn't support a raw completions endpoint, only /chat/completions (looks like you've caught this as well). The easiest alternative would be Ollama, where you only need to add a small model block to your config.
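The reply's actual snippet didn't survive the capture either. Reading between the lines, autocomplete relies on raw FIM-style completion prompts rather than chat messages, which would explain the "describe what you're looking at" output when everything is forced through /chat/completions. Below is a minimal sketch of the kind of Ollama block being suggested, assuming a local Ollama instance serving qwen2.5-coder; it is a reconstruction, not the maintainer's exact snippet.

```yaml
# Sketch of an Ollama-backed autocomplete model in Continue's config.yaml.
# Assumes Ollama is running locally with qwen2.5-coder:3b pulled;
# this is a reconstruction, not the maintainer's exact snippet.
models:
  - name: qwen2.5-coder:3b
    provider: ollama
    model: qwen2.5-coder:3b
    roles:
      - autocomplete
```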