admin_manual/ai/app_assistant.rst (4 additions, 6 deletions)

@@ -74,11 +74,8 @@ These apps currently implement the following Assistant Tasks:
 * *Summarize* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
 * *Generate headline* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
 * *Extract topics* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
-
-Additionally, *integration_openai* also implements the following Assistant Tasks:
-
-* *Context write* (Tested with OpenAI GPT-3.5)
-* *Reformulate text* (Tested with OpenAI GPT-3.5)
+* *Context write* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
+* *Reformulate text* (Tested with OpenAI GPT-3.5 and Llama 3.1 8B)
 
 These tasks may work with other models, but we can give no guarantees.
 
@@ -98,7 +95,7 @@ In order to make use of our special Context Chat feature, offering in-context in
 
 * :ref:`context_chat + context_chat_backend<ai-app-context_chat>` - (Customer support available upon request)
 
-You will also need a text processing provider as specified above (ie. llm2or integration_openai).
+You will also need a text processing provider as specified above (ie. llm2, integration_openai or integration_watsonx).
 
 Context Agent
 ~~~~~~~~~~~~~

@@ -117,6 +114,7 @@ Text-To-Speech
 
 In order to make use of Text-To-Speech, you will need an app that provides a Text-To-Speech backend:
 
 * *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
admin_manual/ai/app_context_chat.rst (6 additions, 65 deletions)

@@ -9,7 +9,7 @@ Context Chat is an :ref:`assistant<ai-app-assistant>` feature that is implemente
 * the *context_chat* app, written purely in PHP
 * the *context_chat_backend* ExternalApp written in Python
 
-Together they provide the ContextChat text processing tasks accessible via the :ref:`Nextcloud Assistant app<ai-app-assistant>`.
+Together they provide the ContextChat *text processing* and *search* tasks accessible via the :ref:`Nextcloud Assistant app<ai-app-assistant>`.
 
 The *context_chat* and *context_chat_backend* apps will use the Free text-to-text task processing providers like OpenAI integration, LLM2, etc. and such a provider is required on a fresh install, or it can be configured to run open source models entirely on-premises. Nextcloud can provide customer support upon request, please talk to your account manager for the possibilities.

@@ -37,7 +37,7 @@ Requirements
 * At least 12GB of system RAM
 * 2 GB + additional 500MB for each request made to the backend if the Free text-to-text provider is not on the same machine
 * 8 GB is recommended in the above case for the default settings
-* This app makes use of the configured free text-to-text task processing provider instead of running its own language model by default, you will thus need 4+ cores for the embedding model only (backed configuration needs changes to make use of the extra cores, refer to `Configuration Options (Backend)`_)
+* This app makes use of the configured free text-to-text task processing provider instead of running its own language model by default, you will thus need 4+ cores for the embedding model only
 
 * A dedicated machine is recommended
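The RAM figures in the Requirements hunk above imply a simple sizing rule when the free text-to-text provider is not on the same machine: roughly 2 GB base plus 500 MB per concurrent backend request. A minimal back-of-the-envelope sketch (the concurrency value is an illustrative example, not a documented default):

```python
# Rough RAM sizing for context_chat_backend, per the figures above:
# ~2 GB base + ~500 MB for each concurrent request when the free
# text-to-text provider is NOT on the same machine.
BASE_GB = 2.0
PER_REQUEST_GB = 0.5

def backend_ram_gb(concurrent_requests: int) -> float:
    """Estimated backend RAM in GB for a given request concurrency."""
    return BASE_GB + PER_REQUEST_GB * concurrent_requests

# Example: 10 concurrent requests -> 7.0 GB; plan headroom on top of this.
print(backend_ram_gb(10))
```

This is only a planning aid; the 12 GB system-RAM minimum and the dedicated-machine recommendation from the list above still apply.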
@@ -139,8 +139,8 @@ The options for each command can be found like this, using scan as example: ``co
 These file and ownership changes are synced with the backed through this actions queue.
 
 
-Configuration Options (OCC)
----------------------------
+Configuration Options
+---------------------
 
 * ``auto_indexing`` boolean (default: true)
     To allow/disallow the IndexerJob from running in the background

@@ … @@ (hunk header and collapsed lines not captured in this view)
+Refer to `the Configuration head <https://github.com/nextcloud/context_chat_backend?tab=readme-ov-file#configuration>`_ in the backend's readme.
-
 
 Logs
 ----
 
-Logs for the ``context_chat`` PHP app can be found in the Nextcloud log file, which is usually located in the Nextcloud data directory. The log file is named ``nextcloud.log``.
-Diagnostic logs can be found in the Nextcloud data directory in ``context_chat.log`` file.
-
-For the backend, warning and error logs can be found in the docker container logs ``docker logs -f -n 200 nc_app_context_chat_backend``, and the complete logs can be found in the ``logs/`` directory in the persistent storage of the docker container.
-That will be ``/nc_app_context_chat_backend/logs/`` in the docker container.
-
-This command can be used to view the detailed logs in real-time:
+See `the Logs head <https://github.com/nextcloud/context_chat_backend?tab=readme-ov-file#logs>`_ in the backend's readme for more information.
+Logs for both the ``context_chat`` PHP app and the ``context_chat_backend`` ExApp can be found in the admin settings of your Nextcloud GUI as well as in the Context Chat log file, which is usually located in the Nextcloud data directory. The log file is named ``context_chat.log``.
@@ -215,12 +162,6 @@ Troubleshooting
 2. Look for issues in the diagnostic logs, the server logs and the docker container ``nc_app_context_chat_container`` logs. If unsure, open an issue in either of the repositories.
 3. Check "Admin settings -> Context Chat" for statistics and information about the indexing process.
 
-Possibility of Data Leak
-------------------------
-
-It is possible that some users who had access to certain files/folders (and have later have been denied this access) still have access to the content of those files/folders through the Context Chat app. We're working on a solution for this.
-The users who never had access to a particular file/folder will NOT be able to see those contents in any way.
-
 File access control rules not supported
 ---------------------------------------

@@ -236,5 +177,5 @@ Known Limitations
 * Language models are likely to generate false information and should thus only be used in situations that are not critical. It's recommended to only use AI at the beginning of a creation process and not at the end, so that outputs of AI serve as a draft for example and not as final product. Always check the output of language models before using it and make sure whether it meets your use-case's quality requirements.
 * Customer support is available upon request, however we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited only to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI).
-* Large files are not supported in "Selective context" in the Assistant UI if they have not been indexed before. Use ``occ context_chat:scan <user_id> -d <directory_path>`` to index the desired directory synchronously and then use the Selective context option. "Large files" could mean differently for different users. It depends on the amount of text inside the documents in question and the hardware on which the indexer is running. Generally 20 MB should be large for a CPU-backed setup and 100 MB for a GPU-backed system.
+* Files larger than 100MB are not supported
 * Password protected PDFs or any other files are not supported. There will be error logs mentioning cryptography and AES in the docker container when such files are encountered but it is nothing to worry about, they will be simply ignored and the system will continue to function normally.
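Given the new 100MB limit above, an admin may want to know which files will be skipped before indexing. A hedged sketch using standard POSIX/GNU tools (the helper name is ours; point it at your actual Nextcloud data directory, e.g. ``/var/www/nextcloud/data``):

```shell
# List files larger than 100MB (the Context Chat limit) under a directory.
# oversized_files is a hypothetical helper, not part of context_chat.
# Usage: oversized_files /var/www/nextcloud/data
oversized_files() {
    # -size +100M matches files strictly larger than 100 MiB
    find "$1" -type f -size +100M -exec ls -lh {} \;
}
```

Running this against the relevant user directories before an indexing run makes it easy to spot content that will be silently ignored.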
admin_manual/ai/app_llm2.rst (1 addition, 0 deletions)

@@ -109,6 +109,7 @@ Scaling
 -------
 
 It is currently not possible to scale this app, we are working on this. Based on our calculations an instance has a rough capacity of 1000 user requests per hour. However, this number is based on theory and we do appreciate real-world feedback on this.
+If you would like to scale up your language model usage, we recommend using an :ref:`AI as a Service provider<ai-ai_as_a_service>` or hosting a service compatible with the OpenAI API yourself that can be scaled up and connecting nextcloud to it via *integration_openai*.
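The 1000-requests-per-hour figure above gives a rough way to estimate required capacity. A sketch under those assumptions (the traffic numbers are illustrative, and the per-instance capacity is itself theoretical):

```python
import math

# Theoretical llm2 capacity quoted above; validate against real-world load.
REQUESTS_PER_INSTANCE_PER_HOUR = 1000

def capacity_units_needed(expected_requests_per_hour: int) -> int:
    """Ceil-divide expected hourly traffic by one instance's capacity."""
    return math.ceil(expected_requests_per_hour / REQUESTS_PER_INSTANCE_PER_HOUR)

# Example: 150 active users issuing ~20 assistant requests per hour each.
print(capacity_units_needed(150 * 20))  # -> 3
```

Since llm2 itself cannot yet be scaled, a result above one unit is a signal to move to an OpenAI-compatible service behind *integration_openai*, as the added line suggests.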
admin_manual/ai/app_stt_whisper2.rst (0 additions, 1 deletion)

@@ -77,4 +77,3 @@ Known Limitations
 * Make sure to test the language model you are using it for whether it meets the use-case's quality requirements
 * Language models notoriously have a high energy consumption, if you want to reduce load on your server you can choose smaller models or quantized models in exchange for lower accuracy
 * Customer support is available upon request, however we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited only to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)
-* Due to technical limitations that we are in the process of mitigating, each task currently incurs a time cost of between 0 and 5 minutes in addition to the actual processing time
(new file; file name and the first lines are not captured in this view)

+The *text2speech_kokoro* app is one of the apps that provide Text-To-Speech functionality in Nextcloud and act as a speech generation backend for the :ref:`Nextcloud Assistant app<ai-app-assistant>` and :ref:`other apps making use of the core Text-To-Speech Task type<t2s-consumer-apps>`. The *text2speech_kokoro* app specifically runs only open source models and does so entirely on-premises. Nextcloud can provide customer support upon request, please talk to your account manager for the possibilities.
+
+This app uses `Kokoro <https://github.com/hexgrad/kokoro>`_ under the hood.
+
+The used model supports the following languages:
+
+* American English
+* British English
+* Spanish
+* French
+* Italian
+* Hindi
+* Portuguese
+* Japanese
+* Mandarin
+
+Requirements
+------------
+
+* Minimal Nextcloud version: 31
+* This app is built as an External App and thus depends on AppAPI v2.3.0
+* Nextcloud AIO is supported
+* We currently support x86_64 CPUs
+* We do not support GPUs
+
+* CPU Sizing
+
+  * The more cores you have and the more powerful the CPU the better, we recommend around 10 cores
+  * The app will hog all cores by default, so it is usually better to run it on a separate machine
+  * 800MB RAM
+
+Installation
+------------
+
+0. Make sure the :ref:`Nextcloud Assistant app<ai-app-assistant>` is installed
+1. :ref:`Install AppAPI and setup a Deploy Daemon<ai-app_api>`
+2. Install the *text2speech_kokoro* "Local Text-To-Speech" ExApp via the "Apps" page in the Nextcloud web admin user interface
+
+Scaling
+-------
+
+It is currently not possible to scale this app, we are working on this. Based on our calculations an instance has a rough capacity of 4h of speech generation throughput per minute (measured with 8 CPU threads on an Intel(R) Xeon(R) Gold 6226R). It is unclear how close to real-world usage this number is, so we do appreciate real-world feedback on this.
+
+App store
+---------
+
+You can also find this app in our app store, where you can write a review: `<https://apps.nextcloud.com/apps/text2speech_kokoro>`_
+
+Repository
+----------
+
+You can find the app's code repository on GitHub where you can report bugs and contribute fixes and features: `<https://github.com/nextcloud/text2speech_kokoro>`_
+
+Nextcloud customers should file bugs directly with our customer support.
+
+Known Limitations
+-----------------
+
+* We currently only support languages supported by the underlying Kokoro model
+* The Kokoro models perform unevenly across languages, and may show lower accuracy on low-resource and/or low-discoverability languages or languages where there was less training data available.
+* Make sure to test the model you are using for whether it meets the use-case's quality requirements
+* Customer support is available upon request, however we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited only to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)