fix: Make sure AI docs are up-to-date and do some cleanup #13418
Conversation
Force-pushed from 76d1b0b to 5e463c4
Signed-off-by: Marcel Klehr <[email protected]>
Force-pushed from 5e463c4 to 45de402
Additionally, *integration_openai* also implements the following Assistant Tasks:

* *Context write* (Tested with OpenAI GPT-3.5)
* *Reformulate text* (Tested with OpenAI GPT-3.5)
Both are now available in llm2
@@ -37,7 +37,7 @@ Requirements
 * At least 12GB of system RAM
 * 2 GB + additional 500MB for each request made to the backend if the Free text-to-text provider is not on the same machine
 * 8 GB is recommended in the above case for the default settings
-* This app makes use of the configured free text-to-text task processing provider instead of running its own language model by default, you will thus need 4+ cores for the embedding model only (backed configuration needs changes to make use of the extra cores, refer to `Configuration Options (Backend)`_)
+* This app makes use of the configured free text-to-text task processing provider instead of running its own language model by default, you will thus need 4+ cores for the embedding model only
Removed reference to backend config
Why, though? The config needs to be touched if maximum performance for the hardware is needed.
see below
.. code-block::

   occ config:app:set context_chat request_timeout --value=3 --type=integer
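For admins who do still want to tune such options after they are removed from the docs, the general occ app-config pattern applies. A minimal sketch, assuming the `context_chat` app ID and `request_timeout` key from the diff above; `config:app:get` and `config:app:delete` are the standard occ counterparts for reading a value back and reverting to the app's built-in default:

```shell
# Override the option (value taken from the example in the diff above)
occ config:app:set context_chat request_timeout --value=3 --type=integer

# Read the current value back to verify the override took effect
occ config:app:get context_chat request_timeout

# Delete the override to fall back to the app's built-in default
occ config:app:delete context_chat request_timeout
```

Keeping the override deletable this way supports the point made in this thread: defaults should be good out of the box, and manual overrides should be easy to undo.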
Removed docs for config options whose defaults we should make sure are good out of the box. If we document them publicly, we risk that people change them without knowing what they're doing, and we get support requests for them.
Requested changes are optional.
Co-authored-by: Julien Veyssier <[email protected]>
Signed-off-by: Marcel Klehr <[email protected]>
No description provided.