
Documentation

William Antônio Siqueira edited this page Jul 7, 2025 · 4 revisions

Welcome to the LLM FX wiki!

About

LLM FX is a graphical (GUI) client for OpenAI API-compatible LLM servers, which means it can run against local servers (Ollama, LM Studio, RamaLama) or against an LLM provider (DeepSeek, Alibaba, ChatGPT, and so on). It has basic features such as:

  • History support: Keep all your chats saved in a file to consult later, or reload the same prompt against a different model
  • Easily switch between models during the conversation
  • Export the conversation to different formats

Additionally, it provides:

  • A couple of tools available out of the box
  • MCP support: configure any MCP server and it should be ready to use with your favorite LLM
  • Visual tools that let the LLM draw shapes, render 3D scenes and HTML, create animations, and more
  • Good logging and easy configuration (it is based on Quarkus and LangChain4j)

Building

You will need:

  • Java 21
  • Maven

Then run mvn clean install in the project root and find the resulting JAR in the target folder.

Running

You can use the command line to run the generated JAR:

java -jar llm-fx-{version}-runner.jar

Configuration

All configuration can be done in the application.properties file, which lives in the same directory as the JAR, or through environment variables:

LLM Server

  • llm.url: The API base URL. Defaults to the Ollama endpoint
  • llm.key: The API key, when required
  • llm.system-message: A system message to be used in all LLM chats
  • llm.model: The default model selected when the app opens
  • llm.timeout: The timeout for connecting to the server
  • llm.log-requests: If true, all requests made by LLM FX will be logged
  • llm.log-responses: If true, all responses from the LLM server will be logged
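For example, a minimal application.properties pointing LLM FX at a local server (the URL, model name, and timeout value below are illustrative; the timeout is assumed to follow the usual Quarkus duration syntax):

```properties
# Illustrative LLM server settings for a local OpenAI-compatible endpoint
llm.url=http://localhost:11434/v1
llm.model=llama3.2
llm.timeout=60s
llm.log-requests=true
llm.log-responses=true
```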

App

  • app.history-file: The path to a file where the history is saved. If omitted, history is not stored
  • app.always-on-top: A boolean property; when true, the app window stays on top and does not minimize when it loses focus
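A sketch of the app settings (the history file path is illustrative; any writable path should work):

```properties
# Illustrative app settings
app.history-file=/home/user/.llm-fx/history
app.always-on-top=true
```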

MCP

  • mcp.servers.{MCP Server Name}.commands: The comma-separated command used to run the MCP server
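For instance, registering an MCP server named filesystem started via npx could look like this (the server name, package, and directory argument are illustrative):

```properties
# Illustrative MCP server registration: name "filesystem",
# command split into comma-separated parts
mcp.servers.filesystem.commands=npx,-y,@modelcontextprotocol/server-filesystem,/tmp
```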

Using Environment Variables

You can also use environment variables. Before starting the JAR, set the corresponding variable by making the property name all upper case and replacing . (dots) with underscores. For example:

export LLM_URL=http://localhost:1234/v1 && java -jar llm-fx-{version}-runner.jar
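The property-to-variable mapping described above can be sketched in shell (upper-case the key and swap dots for underscores):

```shell
# Derive the environment-variable name for a property key,
# e.g. "llm.url" -> "LLM_URL"
prop="llm.url"
env_name=$(printf '%s' "$prop" | tr '.' '_' | tr '[:lower:]' '[:upper:]')
echo "$env_name"   # LLM_URL
```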

For more information about configuration, please check the Quarkus Configuration Guide.

Logging

LLM FX uses Quarkus logging. You can set up logging to see everything that is happening under the hood; changing the langchain4j logging category in particular can be very useful:

quarkus.log.category."dev.langchain4j".level=TRACE
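Since logging goes through Quarkus, the standard Quarkus logging properties also apply; for example, to additionally persist logs to a file (the file path is illustrative):

```properties
# Trace LangChain4j activity and write logs to a file
quarkus.log.category."dev.langchain4j".level=TRACE
quarkus.log.file.enable=true
quarkus.log.file.path=llm-fx.log
```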

Tools

As mentioned, multiple tools are available, organized into categories. Once a tool is enabled, the application acquires capabilities that go beyond chatting, so be mindful of what you enable during a chat! These are the tool categories:

  • Date and Time: Gives the current date and time information to the LLM
  • Files: Allows the LLM to read, search, and write files
  • Execute: Allows the LLM to execute ANY COMMAND on your computer or run Python scripts
  • Web: Tools to search the web and/or open web pages to read information
  • Graphics: Makes the LLM able to interact with graphic elements inside LLM FX

Graphics Tools

TBD
