Interact with Ollama LLMs using the LXMFy bot framework.
Download the example environment file:
curl -o .env https://raw.githubusercontent.com/lxmfy/ollama-bot/main/.env-example
Then edit .env with your Ollama API URL, model, and LXMF address.
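For reference, a filled-in .env might look like the sketch below. The key names here are illustrative guesses; the authoritative names are whatever .env-example in the repository defines.

```shell
# Hypothetical .env contents -- key names are assumptions, check
# .env-example in the repo for the real ones.
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=llama3
LXMF_ADDRESS=your_lxmf_address_hash_here
```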
Install and run with pipx:
pipx install git+https://github.com/lxmfy/ollama-bot.git
lxmfy-ollama-bot
Or, for development, install and run with Poetry:
poetry install
poetry run lxmfy-ollama-bot
First, pull the latest image:
docker pull ghcr.io/lxmfy/ollama-bot:latest
Then, run the bot, mounting your .env file:
docker run -d \
--name ollama-bot \
--restart unless-stopped \
--network host \
-v $(pwd)/.env:/app/.env \
ghcr.io/lxmfy/ollama-bot:latest

Command prefix: /
/help - show help message
/about - show bot information
Send any message without the / prefix to chat with the AI model.
The bot will automatically respond using the configured Ollama model.
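Under the hood, replies like this come from Ollama's REST API. The sketch below builds the kind of JSON payload a request to Ollama's /api/generate endpoint takes; the model and prompt values are placeholders, and the exact request this bot constructs may differ.

```shell
# Build a JSON payload for Ollama's /api/generate endpoint.
# MODEL and PROMPT are placeholder values, not the bot's real config.
MODEL="llama3"
PROMPT="Hello, bot"
PAYLOAD=$(printf '{"model": "%s", "prompt": "%s", "stream": false}' "$MODEL" "$PROMPT")
echo "$PAYLOAD"

# With an Ollama server running (e.g. at the URL from .env), the payload
# could be sent manually for testing:
#   curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
```

The model, prompt, and stream fields are part of Ollama's documented generate API; setting stream to false returns the full response in a single JSON object.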
