README.md: 58 additions & 25 deletions
@@ -28,7 +28,11 @@ Also, we aimed the lib to be self-contained with the fewest dependencies possible.
 
 ---
 
-In addition to the OpenAI API, this library also supports API-compatible providers such as:
+👉 **No time to read a lengthy tutorial? Sure, we hear you! Check out the [examples](./openai-examples/src/main/scala/io/cequence/openaiscala/examples) to see how to use the lib in practice.**
+
+---
+
+In addition to the OpenAI API, this library also supports API-compatible providers (see [examples](./openai-examples/src/main/scala/io/cequence/openaiscala/examples/nonopenai)) such as:
 - [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) - cloud-based, utilizes OpenAI models but with lower latency
 - [Azure AI](https://azure.microsoft.com/en-us/products/ai-studio) - cloud-based, offers a vast selection of open-source models
 - [Anthropic](https://www.anthropic.com/api) - cloud-based, a major competitor to OpenAI, features proprietary/closed-source models such as Claude 3 (Haiku, Sonnet, and Opus)
@@ -42,8 +46,6 @@ In addition to the OpenAI API, this library also supports API-compatible providers…
 - [Ollama](https://ollama.com/) - runs locally, serves as an umbrella for open-source LLMs including LLaMA3, dbrx, and Command-R
 - [FastChat](https://github.com/lm-sys/FastChat) - runs locally, serves as an umbrella for open-source LLMs such as Vicuna, Alpaca, and FastChat-T5
 
-See [examples](./openai-examples/src/main/scala/io/cequence/openaiscala/examples/nonopenai) for more details.
-
 ---
 
 👉 For background information read an article about the lib/client on [Medium](https://medium.com/@0xbnd/openai-scala-client-is-out-d7577de934ad).
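
To make the provider support listed above concrete, here is a brief sketch of connecting to one of these OpenAI-compatible endpoints (a local Ollama server). This is illustrative only: the `OpenAIChatCompletionServiceFactory` entry point, its `coreUrl` parameter, the Ollama URL, and the model id are assumptions drawn from the linked examples, not lines in this diff.

```scala
import akka.actor.ActorSystem
import akka.stream.Materializer
import io.cequence.openaiscala.domain._
import io.cequence.openaiscala.domain.settings.CreateChatCompletionSettings
import io.cequence.openaiscala.service.OpenAIChatCompletionServiceFactory

import scala.concurrent.ExecutionContext.Implicits.global

object OllamaChatSketch extends App {
  // the underlying WS client needs an actor system and a materializer
  implicit val system: ActorSystem = ActorSystem()
  implicit val materializer: Materializer = Materializer(system)

  // point the standard chat-completion service at an OpenAI-compatible
  // endpoint; here a local Ollama server (URL and factory are assumptions)
  val service = OpenAIChatCompletionServiceFactory(
    coreUrl = "http://localhost:11434/v1/"
  )

  service
    .createChatCompletion(
      messages = Seq(UserMessage("What is the capital of Ghana?")),
      settings = CreateChatCompletionSettings(model = "llama3")
    )
    .map(response => println(response.choices.head.message.content))
}
```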
@@ -153,54 +155,42 @@ Then you can obtain a service in one of the following ways.
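
The concrete ways are collapsed in this hunk. As a minimal sketch (assuming the `OpenAIServiceFactory` entry point and the `OPENAI_SCALA_CLIENT_API_KEY` environment variable referenced elsewhere in the README), obtaining a service might look like:

```scala
import akka.actor.ActorSystem
import akka.stream.Materializer
import io.cequence.openaiscala.service.OpenAIServiceFactory

import scala.concurrent.ExecutionContext.Implicits.global

object ServiceCreationSketch extends App {
  implicit val system: ActorSystem = ActorSystem()
  implicit val materializer: Materializer = Materializer(system)

  // reads OPENAI_SCALA_CLIENT_API_KEY (and optionally OPENAI_SCALA_CLIENT_ORG_ID)
  // from the environment/config; an assumption based on the README's setup notes
  val service = OpenAIServiceFactory()

  // or pass the credentials explicitly
  val explicitService = OpenAIServiceFactory(apiKey = "sk-...")
}
```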
@@ -413,7 +403,51 @@ For this to work you need to use `OpenAIServiceStreamedFactory` from `openai-sca…`
 }
 ```
 
-- 🔥 **New**: Count expected used tokens before calling `createChatCompletions` or `createChatFunCompletions`; this helps you select a proper model, e.g. `gpt-3.5-turbo` or `gpt-3.5-turbo-16k`, and reduce costs. This is an experimental feature and it may not work for all models. Requires the `openai-scala-count-tokens` lib.
+- Create chat completion with JSON output (🔥 **New**)
+
+```scala
+val messages = Seq(
+  SystemMessage("Give me the most populous capital cities in JSON format."),
+  UserMessage("List only african countries")
+)
+
+val capitalsSchema = JsonSchema.Object(
+  properties = Map(
+    "countries" -> JsonSchema.Array(
+      items = JsonSchema.Object(
+        properties = Map(
+          "country" -> JsonSchema.String(
+            description = Some("The name of the country")
+          ),
+          "capital" -> JsonSchema.String(
+            description = Some("The capital city of the country")
+          )
[… added lines 424-450, the rest of this example, are collapsed in this view; an illustrative sketch follows …]
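
The collapsed remainder presumably closes the schema and issues the request. As a hedged sketch only (not lines from this PR): the `JsonSchemaDef` wrapper, the `response_format_type` and `jsonSchema` settings fields, and `ChatCompletionResponseFormatType.json_schema` below are assumptions about the lib's structured-output API; `capitalsSchema`, `messages`, and `service` come from the snippet above.

```scala
// Illustrative continuation of the example above; names below are assumptions.
val jsonSchemaDef = JsonSchemaDef(
  name = "capitals_response",
  strict = true,
  structure = capitalsSchema // the schema built above, with its parens closed
)

service
  .createChatCompletion(
    messages = messages,
    settings = CreateChatCompletionSettings(
      model = ModelId.gpt_4o,
      response_format_type = Some(ChatCompletionResponseFormatType.json_schema),
      jsonSchema = Some(jsonSchemaDef)
    )
  )
  .map { response =>
    // the assistant's reply should now be a JSON document matching the schema
    println(response.choices.head.message.content)
  }
```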
 - Count expected used tokens before calling `createChatCompletions` or `createChatFunCompletions`; this helps you select a proper model and reduce costs. This is an experimental feature and it may not work for all models. Requires the `openai-scala-count-tokens` lib.
 An example of how to count message tokens:
 ```scala
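// NOTE: the body of this example is collapsed in the diff view above; what
// follows is an illustrative sketch, not the PR's content. The
// `OpenAICountTokensHelper` trait and the `countMessageTokens` signature are
// assumptions based on the `openai-scala-count-tokens` module.
import io.cequence.openaiscala.domain.{BaseMessage, ModelId, SystemMessage, UserMessage}
import io.cequence.openaiscala.service.OpenAICountTokensHelper

object CountTokensSketch extends OpenAICountTokensHelper {
  val messages: Seq[BaseMessage] = Seq(
    SystemMessage("You are a helpful assistant."),
    UserMessage("Give me the most populous capital cities in JSON format.")
  )

  // estimate prompt size up front to pick a model with a large enough context
  val expectedTokens: Int = countMessageTokens(ModelId.gpt_3_5_turbo, messages)
}
```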
@@ -567,7 +601,6 @@ class MyCompletionService @Inject() (