
fix: Make sure AI docs are up-to-date and do some cleanup #13418

Merged
merged 2 commits into from Jul 17, 2025
10 changes: 4 additions & 6 deletions admin_manual/ai/app_assistant.rst
@@ -74,11 +74,8 @@ These apps currently implement the following Assistant Tasks:
* *Summarize* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
* *Generate headline* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
* *Extract topics* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)

Additionally, *integration_openai* also implements the following Assistant Tasks:

Both are now available in llm2:

* *Context write* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
* *Reformulate text* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)

These tasks may work with other models, but we can give no guarantees.

@@ -98,7 +95,7 @@ In order to make use of our special Context Chat feature, offering in-context in

* :ref:`context_chat + context_chat_backend<ai-app-context_chat>` - (Customer support available upon request)

You will also need a text processing provider as specified above (i.e. llm2, integration_openai, or integration_watsonx).

Context Agent
~~~~~~~~~~~~~
@@ -117,6 +114,7 @@ Text-To-Speech
In order to make use of Text-To-Speech, you will need an app that provides a Text-To-Speech backend:

* *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
* *text2speech_kokoro* - Runs a local model

Configuration
-------------
70 changes: 64 additions & 6 deletions admin_manual/ai/app_context_agent.rst
@@ -22,6 +22,14 @@ Currently implemented tools:

* Example prompt: *"List the latest messages in my conversation with Andrew"*

* Send a message to a talk conversation

* Example prompt: *"Can you send a joke to Andrew in talk?"*

* Create a public talk conversation

* Example prompt: *"Can you create a new public talk conversation titled 'Press conference'?"*

* Find a person in the user's contacts

* Example prompt: *"What is Andrew's email address?"*
@@ -34,21 +42,39 @@ Currently implemented tools:

* Example prompt: *"What is the company's sick leave process?"*

* Transcribe a media file

* Example prompt: *"Can you transcribe the following file? https://mycloud.com/f/9825679"* (Can be selected via smart picker.)

* Generate documents

* Example prompt: *"Can you generate me a slide deck for my presentation about cats?"*
* Example prompt: *"Can you generate me a spreadsheet with some plausible numbers for countries and their population count?"*
* Example prompt: *"Can you generate me a pdf with an outline about what to see in Berlin?"*

* Generate images

* Example prompt: *"Can you generate me an image of a cartoon drawing of a roman soldier typing something on a laptop?"*

* Get coordinates for an address from OpenStreetMap Nominatim

* Example prompt: *"What are the coordinates for Berlin, Germany?"*

* Get the URL for a map of a location using OpenStreetMap

* Example prompt: *"Can you show me a map of New York, please"*

* Get the current weather at a location

* Example prompt: *"How is the weather in Berlin?"*

* Schedule an event in the user's calendar

* Example prompt: *"Schedule an event with Andrew tomorrow at noon."*

* Find free times in users' calendars

* Example prompt: *"Find a free 1-hour slot for a meeting with me and Marco next week."*

* Create a Deck card

@@ -66,10 +92,42 @@ Currently implemented tools:

* Example prompt: *"Show me the YouTube video of the Nextcloud Hub 10 launch."*

* Search DuckDuckGo

* Example prompt: *"Show me search results for quick pasta recipes, please."*

* Send an email via Nextcloud Mail

* Example prompt: *"Send a test email from my [email protected] account to [email protected]"*

* Get contents of a file

* Example prompt: *"Can you summarize the following file in my documents? Design/Planning.md"*

* Generate a public share link for a file

* Example prompt: *"Can you create a share link for the following file in my documents? Design/Planning.md"*

* Get the folder tree of the user's files

* Example prompt: *"Can you show me the folder tree of my files?"*

* Determine public transport routes

* Example prompt: *"How can I get from Würzburg Hauptbahnhof to Berlin Hauptbahnhof?"*

* List all projects in OpenProject

* Example prompt: *"List all my projects in OpenProject, please"*

* List all available assignees of a project in OpenProject

* Example prompt: *"List all available assignees for the 'Product launch' project in OpenProject"*

* Create a new work package in a given project in OpenProject

* Example prompt: *"Create a work package called 'Publish release video' in the 'Product launch' project in OpenProject"*


These tools can also be combined by the agent to fulfil tasks like the following:

71 changes: 6 additions & 65 deletions admin_manual/ai/app_context_chat.rst
@@ -9,7 +9,7 @@ Context Chat is an :ref:`assistant<ai-app-assistant>` feature that is implemented
* the *context_chat* app, written purely in PHP
* the *context_chat_backend* ExternalApp written in Python

Together they provide the ContextChat *text processing* and *search* tasks accessible via the :ref:`Nextcloud Assistant app<ai-app-assistant>`.

The *context_chat* and *context_chat_backend* apps use the configured Free text-to-text task processing providers, such as OpenAI integration or llm2; such a provider is required on a fresh install. They can be configured to run open source models entirely on-premises. Nextcloud can provide customer support upon request; please talk to your account manager about the possibilities.

@@ -37,7 +37,7 @@ Requirements
* At least 12GB of system RAM
* 2 GB + additional 500MB for each request made to the backend if the Free text-to-text provider is not on the same machine
* 8 GB is recommended in the above case for the default settings
* This app makes use of the configured free text-to-text task processing provider instead of running its own language model by default; you will thus need 4+ cores for the embedding model only

* A dedicated machine is recommended

@@ -139,8 +139,8 @@ The options for each command can be found like this, using scan as example: ``co
| These file and ownership changes are synced with the backend through this actions queue.


Configuration Options
---------------------

* ``auto_indexing`` boolean (default: true)
To allow/disallow the IndexerJob from running in the background
@@ -149,64 +149,11 @@ Configuration Options (OCC)

occ config:app:set context_chat auto_indexing --value=true --type=boolean
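
The current value can be read back with the standard ``config:app:get`` command; a quick sketch (on a default install where the key was never set, the command prints nothing):

```shell
# Read the current auto_indexing setting back
# (prints nothing if the key was never explicitly set)
occ config:app:get context_chat auto_indexing
```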

* ``indexing_batch_size`` integer (default: 5000)
The number of files to index per run of the indexer background job (this is limited by ``indexing_max_time``)

.. code-block::

occ config:app:set context_chat indexing_batch_size --value=100 --type=integer

* ``indexing_job_interval`` integer (default: 1800)
The interval at which the indexer jobs run in seconds

.. code-block::

occ config:app:set context_chat indexing_job_interval --value=1800 --type=integer

* ``indexing_max_time`` integer (default: 1800)
The number of seconds to index files for per run, regardless of batch size

.. code-block::

occ config:app:set context_chat indexing_max_time --value=1800 --type=integer

* ``request_timeout`` integer (default: 3000)
Request timeout in seconds for all requests made to the Context chat backend (the external app in AppAPI).
If a docker socket proxy is used, the ``TIMEOUT_SERVER`` environment variable should be set to a value higher than ``request_timeout``.

.. code-block::

occ config:app:set context_chat request_timeout --value=3 --type=integer


Configuration Options (Backend)
-------------------------------

Refer to `the Configuration head <https://github.com/nextcloud/context_chat_backend?tab=readme-ov-file#configuration>`_ in the backend's readme.


Logs
----

Logs for the ``context_chat`` PHP app can be found in the Nextcloud log file, which is usually located in the Nextcloud data directory. The log file is named ``nextcloud.log``.
Diagnostic logs can be found in the Nextcloud data directory in ``context_chat.log`` file.

| For the backend, warning and error logs can be found in the docker container logs ``docker logs -f -n 200 nc_app_context_chat_backend``, and the complete logs can be found in ``logs/`` directory in the persistent storage of the docker container.
| That will be ``/nc_app_context_chat_backend/logs/`` in the docker container.

This command can be used to view the detailed logs in real-time:

.. code-block::

docker exec nc_app_context_chat_backend tail -f /nc_app_context_chat_backend/logs/ccb.log

Same for the embedding server:

.. code-block::

docker exec nc_app_context_chat_backend tail -f /nc_app_context_chat_backend/logs/embedding_server_*.log

See `the Logs head <https://github.com/nextcloud/context_chat_backend?tab=readme-ov-file#logs>`_ in the backend's readme for more information.
Logs for both the ``context_chat`` PHP app and the ``context_chat_backend`` ExApp can be found in the admin settings of your Nextcloud GUI as well as in the Context Chat log file, which is usually located in the Nextcloud data directory. The log file is named ``context_chat.log``.
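For command-line access, the diagnostic log can be inspected directly; a minimal sketch, assuming a default data directory location (the ``/var/www/nextcloud/data`` path is an assumption — adjust ``NEXTCLOUD_DATA`` for your setup):

```shell
# Path assumption: adjust NEXTCLOUD_DATA to your instance's data directory
NEXTCLOUD_DATA="${NEXTCLOUD_DATA:-/var/www/nextcloud/data}"
LOG_FILE="$NEXTCLOUD_DATA/context_chat.log"

# Print the most recent entries; use `tail -f` instead to follow live
if [ -r "$LOG_FILE" ]; then
  tail -n 50 "$LOG_FILE"
else
  echo "log not found: $LOG_FILE"
fi
```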

Troubleshooting
---------------
@@ -215,12 +162,6 @@ Troubleshooting
2. Look for issues in the diagnostic logs, the server logs and the docker container ``nc_app_context_chat_backend`` logs. If unsure, open an issue in either of the repositories.
3. Check "Admin settings -> Context Chat" for statistics and information about the indexing process.

Possibility of Data Leak
------------------------

| It is possible that some users who had access to certain files/folders (and have later been denied this access) still have access to the content of those files/folders through the Context Chat app. We're working on a solution for this.
| The users who never had access to a particular file/folder will NOT be able to see those contents in any way.

File access control rules not supported
---------------------------------------

@@ -236,5 +177,5 @@ Known Limitations

* Language models are likely to generate false information and should thus only be used in situations that are not critical. It's recommended to only use AI at the beginning of a creation process and not at the end, so that AI output serves as a draft, for example, and not as a final product. Always check the output of language models before using it, and make sure it meets your use-case's quality requirements.
* Customer support is available upon request, however we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited only to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI).
* Large files are not supported in "Selective context" in the Assistant UI if they have not been indexed before. Use ``occ context_chat:scan <user_id> -d <directory_path>`` to index the desired directory synchronously and then use the Selective context option. What counts as a "large file" differs between setups: it depends on the amount of text inside the documents in question and the hardware on which the indexer is running. Generally, 20 MB should be considered large for a CPU-backed setup and 100 MB for a GPU-backed system.
* Files larger than 100MB are not supported
* Password-protected PDFs and other password-protected files are not supported. There will be error logs mentioning cryptography and AES in the docker container when such files are encountered, but this is nothing to worry about; they are simply ignored and the system continues to function normally.
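The synchronous pre-indexing mentioned in the limitation above is a plain ``occ`` call; a sketch where the user ID and path are placeholders:

```shell
# Index one user's directory synchronously so "Selective context"
# can use it ("admin" and "Documents/Design" are placeholders)
occ context_chat:scan admin -d Documents/Design
```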
1 change: 1 addition & 0 deletions admin_manual/ai/app_llm2.rst
@@ -109,6 +109,7 @@ Scaling
-------

It is currently not possible to scale this app; we are working on this. Based on our calculations an instance has a rough capacity of 1000 user requests per hour. However, this number is based on theory and we do appreciate real-world feedback on this.
If you would like to scale up your language model usage, we recommend using an :ref:`AI as a Service provider<ai-ai_as_a_service>` or hosting an OpenAI-API-compatible service yourself that can be scaled up, and connecting Nextcloud to it via the `integration_openai app <https://apps.nextcloud.com/apps/integration_openai>`_.
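As a sketch of the self-hosted route: any OpenAI-API-compatible server (for example llama.cpp or vLLM) can be targeted by pointing *integration_openai* at its base URL. The ``url`` app-config key below is an assumption; verify it against the app's admin settings before relying on it:

```shell
# Point integration_openai at a self-hosted OpenAI-compatible endpoint
# (the `url` key and the endpoint address are assumptions)
occ config:app:set integration_openai url --value="http://llm.example.internal:8000/v1"
```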

App store
---------
1 change: 0 additions & 1 deletion admin_manual/ai/app_stt_whisper2.rst
@@ -77,4 +77,3 @@ Known Limitations
* Make sure to test whether the language model you are using meets the use-case's quality requirements
* Language models notoriously have a high energy consumption; if you want to reduce load on your server you can choose smaller models or quantized models in exchange for lower accuracy
* Customer support is available upon request, however we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited only to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)
* Due to technical limitations that we are in the process of mitigating, each task currently incurs a time cost of between 0 and 5 minutes in addition to the actual processing time
69 changes: 69 additions & 0 deletions admin_manual/ai/app_text2speech_kokoro.rst
@@ -0,0 +1,69 @@
==============================================
App: Local Text-To-Speech (text2speech_kokoro)
==============================================

.. _ai-app-text2speech_kokoro:

The *text2speech_kokoro* app is one of the apps that provide Text-To-Speech functionality in Nextcloud and act as a speech generation backend for the :ref:`Nextcloud Assistant app<ai-app-assistant>` and :ref:`other apps making use of the core Text-To-Speech task type<t2s-consumer-apps>`. The *text2speech_kokoro* app specifically runs only open source models and does so entirely on-premises. Nextcloud can provide customer support upon request; please talk to your account manager about the possibilities.

This app uses `Kokoro <https://github.com/hexgrad/kokoro>`_ under the hood.

The model used supports the following languages:

* American English
* British English
* Spanish
* French
* Italian
* Hindi
* Portuguese
* Japanese
* Mandarin

Requirements
------------

* Minimal Nextcloud version: 31
* This app is built as an External App and thus depends on AppAPI v2.3.0
* Nextcloud AIO is supported
* We currently support x86_64 CPUs
* We do not support GPUs

* CPU Sizing

* The more cores you have and the more powerful the CPU, the better; we recommend around 10 cores
* The app will hog all cores by default, so it is usually better to run it on a separate machine
* 800MB RAM

Installation
------------

0. Make sure the :ref:`Nextcloud Assistant app<ai-app-assistant>` is installed
1. :ref:`Install AppAPI and setup a Deploy Demon<ai-app_api>`
2. Install the *text2speech_kokoro* "Local Text-To-Speech" ExApp via the "Apps" page in the Nextcloud web admin user interface
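Step 2 can also be done from the command line via AppAPI; a sketch, where the Deploy Daemon name ``docker_install`` and the exact flag are assumptions (check ``occ app_api:app:register --help`` on your instance):

```shell
# Register and deploy the ExApp through AppAPI
# (daemon name "docker_install" is an assumption)
occ app_api:app:register text2speech_kokoro --daemon docker_install
```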


Scaling
-------

It is currently not possible to scale this app; we are working on this. Based on our calculations an instance has a rough capacity of 4h of generated speech per minute (measured with 8 CPU threads on an Intel(R) Xeon(R) Gold 6226R). It is unclear how close to real-world usage this number is, so we do appreciate real-world feedback on this.

App store
---------

You can also find this app in our app store, where you can write a review: `<https://apps.nextcloud.com/apps/text2speech_kokoro>`_

Repository
----------

You can find the app's code repository on GitHub where you can report bugs and contribute fixes and features: `<https://github.com/nextcloud/text2speech_kokoro>`_

Nextcloud customers should file bugs directly with our customer support.

Known Limitations
-----------------

* We currently only support languages supported by the underlying Kokoro model
* The Kokoro models perform unevenly across languages and may show lower accuracy on low-resource or low-discoverability languages, or languages where less training data was available.
* Make sure to test whether the model you are using meets the use-case's quality requirements
* Customer support is available upon request, however we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited only to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)
1 change: 1 addition & 0 deletions admin_manual/ai/index.rst
@@ -15,4 +15,5 @@ Artificial Intelligence
app_context_chat
app_context_agent
app_summary_bot
app_text2speech_kokoro
ai_as_a_service