
Commit dc3cedc

Merge pull request #13163 from nextcloud/fix/mora-ai-updates
enh(context_chat): Warn about files_accesscontrol
2 parents 009934f + 133e4da

2 files changed: +11 -1 lines changed

admin_manual/ai/app_context_chat.rst

Lines changed: 10 additions & 1 deletion
@@ -200,11 +200,20 @@ Possibility of Data Leak
 | It is possible that some users who had access to certain files/folders (and have later been denied this access) still have access to the content of those files/folders through the Context Chat app. We're working on a solution for this.
 | The users who never had access to a particular file/folder will NOT be able to see those contents in any way.
 
+File access control rules not supported
+---------------------------------------
+
+In Nextcloud you can set up file access control rules using the `files_accesscontrol <https://apps.nextcloud.com/apps/files_accesscontrol>`_ app to restrict access to certain files.
+
+| Context Chat does **not** follow these rules.
+
+It is thus possible for users who have been denied access to a document via the files_accesscontrol app to still gain access to it via Context Chat
+if the document is visible in the Files app for the user in question.
+
 Known Limitations
 -----------------
 
 * Language models are likely to generate false information and should thus only be used in situations that are not critical. It's recommended to only use AI at the beginning of a creation process and not at the end, so that AI outputs serve as a draft, for example, and not as a final product. Always check the output of language models before using it and make sure it meets your use case's quality requirements.
-* Context Chat is not integrated into the Chat UI of assistant app, at the moment, but has it's own interface in the assistant modal
 * Customer support is available upon request; however, we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI).
 * Large files are not supported in "Selective context" in the Assistant UI if they have not been indexed before. Use ``occ context_chat:scan <user_id> -d <directory_path>`` to index the desired directory synchronously and then use the Selective context option. "Large files" can mean different things for different users; it depends on the amount of text inside the documents in question and the hardware on which the indexer is running. Generally, 20 MB should be considered large for a CPU-backed setup and 100 MB for a GPU-backed system.
 * Password-protected PDFs (or any other password-protected files) are not supported. There will be error logs mentioning cryptography and AES in the Docker container when such files are encountered, but this is nothing to worry about: they will simply be ignored and the system will continue to function normally.
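
For reference, the ``occ context_chat:scan`` command mentioned in the diff above could be run as in the following minimal sketch. It assumes a typical setup where ``occ`` is invoked as the web server user; the user ID ``alice`` and the directory path are placeholders, not values taken from this commit::

    # Index alice's "Documents/Reports" folder synchronously, so the files
    # become usable with "Selective context" in the Assistant UI right away
    # instead of waiting for background indexing.
    # Adjust the web server user and PHP invocation to your environment.
    sudo -u www-data php occ context_chat:scan alice -d Documents/Reports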

admin_manual/ai/app_llm2.rst

Lines changed: 1 addition & 0 deletions
@@ -127,6 +127,7 @@ Known Limitations
 
 * We currently only support languages that the underlying model supports; correctness of language use in languages other than English may be poor, depending on the language's coverage in the model's training data (we recommend Llama 3 or other models explicitly trained on multiple languages)
 * Language models can be bad at reasoning tasks
+* Language models can be bad at math
 * Language models are likely to generate false information and should thus only be used in situations that are not critical. It's recommended to only use AI at the beginning of a creation process and not at the end, so that AI outputs serve as a draft, for example, and not as a final product. Always check the output of language models before using it.
 * Make sure to test whether the language model you are using meets your use case's quality requirements
 * Language models notoriously have high energy consumption; if you want to reduce the load on your server, you can choose smaller or quantized models in exchange for lower accuracy
