
Commit 48c0970

README updated

1 parent 9babfa6 commit 48c0970

File tree: 2 files changed, +59 -26 lines

README.md

Lines changed: 58 additions & 25 deletions
@@ -28,7 +28,11 @@ Also, we aimed the lib to be self-contained with the fewest dependencies possibl
 
 ---
 
-In addition to the OpenAI API, this library also supports API-compatible providers such as:
+👉 **No time to read a lengthy tutorial? Sure, we hear you! Check out the [examples](./openai-examples/src/main/scala/io/cequence/openaiscala/examples) to see how to use the lib in practice.**
+
+---
+
+In addition to the OpenAI API, this library also supports API-compatible providers (see [examples](./openai-examples/src/main/scala/io/cequence/openaiscala/examples/nonopenai)) such as:
 - [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) - cloud-based, utilizes OpenAI models but with lower latency
 - [Azure AI](https://azure.microsoft.com/en-us/products/ai-studio) - cloud-based, offers a vast selection of open-source models
 - [Anthropic](https://www.anthropic.com/api) - cloud-based, a major competitor to OpenAI, features proprietary/closed-source models such as Claude3 - Haiku, Sonnet, and Opus
@@ -42,8 +46,6 @@ In addition to the OpenAI API, this library also supports API-compatible provide
 - [Ollama](https://ollama.com/) - runs locally, serves as an umbrella for open-source LLMs including LLaMA3, dbrx, and Command-R
 - [FastChat](https://github.com/lm-sys/FastChat) - runs locally, serves as an umbrella for open-source LLMs such as Vicuna, Alpaca, and FastChat-T5
 
-See [examples](./openai-examples/src/main/scala/io/cequence/openaiscala/examples/nonopenai) for more details.
-
 ---
 
 👉 For background information read an article about the lib/client on [Medium](https://medium.com/@0xbnd/openai-scala-client-is-out-d7577de934ad).
@@ -153,54 +155,42 @@ Then you can obtain a service in one of the following ways.
 4. [Groq](https://wow.groq.com/) - requires `GROQ_API_KEY`
 ```scala
 val service = OpenAIChatCompletionServiceFactory(ChatProviderSettings.groq)
-```
-or with streaming
-```scala
+// or with streaming
 val service = OpenAIChatCompletionServiceFactory.withStreaming(ChatProviderSettings.groq)
 ```
 
 5. [Fireworks AI](https://fireworks.ai/) - requires `FIREWORKS_API_KEY`
 ```scala
 val service = OpenAIChatCompletionServiceFactory(ChatProviderSettings.fireworks)
-```
-or with streaming
-```scala
+// or with streaming
 val service = OpenAIChatCompletionServiceFactory.withStreaming(ChatProviderSettings.fireworks)
 ```
 
 6. [Octo AI](https://octo.ai/) - requires `OCTOAI_TOKEN`
 ```scala
 val service = OpenAIChatCompletionServiceFactory(ChatProviderSettings.octoML)
-```
-or with streaming
-```scala
+// or with streaming
 val service = OpenAIChatCompletionServiceFactory.withStreaming(ChatProviderSettings.octoML)
 ```
 
 7. [TogetherAI](https://www.together.ai/) - requires `TOGETHERAI_API_KEY`
 ```scala
 val service = OpenAIChatCompletionServiceFactory(ChatProviderSettings.togetherAI)
-```
-or with streaming
-```scala
+// or with streaming
 val service = OpenAIChatCompletionServiceFactory.withStreaming(ChatProviderSettings.togetherAI)
 ```
 
 8. [Cerebras](https://cerebras.ai/) - requires `CEREBRAS_API_KEY`
 ```scala
 val service = OpenAIChatCompletionServiceFactory(ChatProviderSettings.cerebras)
-```
-or with streaming
-```scala
+// or with streaming
 val service = OpenAIChatCompletionServiceFactory.withStreaming(ChatProviderSettings.cerebras)
 ```
 
 9. [Mistral](https://mistral.ai/) - requires `MISTRAL_API_KEY`
 ```scala
 val service = OpenAIChatCompletionServiceFactory(ChatProviderSettings.mistral)
-```
-or with streaming
-```scala
+// or with streaming
 val service = OpenAIChatCompletionServiceFactory.withStreaming(ChatProviderSettings.mistral)
 ```
 
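All of the factories above yield the same chat-completion interface, so the call site does not change per provider. The sketch below shows how an obtained service might be used; the import paths, the Groq model id string, and the streamed method name `createChatCompletionStreamed` are assumptions on my part, not taken from this diff:

```scala
import scala.concurrent.ExecutionContext.Implicits.global

// Assumed package paths for the factory and domain classes
import io.cequence.openaiscala.service.{ChatProviderSettings, OpenAIChatCompletionServiceFactory}
import io.cequence.openaiscala.domain._

import akka.stream.scaladsl.Sink

// Non-streamed: returns a Future with the full response
val service = OpenAIChatCompletionServiceFactory(ChatProviderSettings.groq)

service
  .createChatCompletion(
    messages = Seq(UserMessage("What is the capital of France?")),
    settings = CreateChatCompletionSettings(model = "llama-3.1-8b-instant") // assumed Groq model id
  )
  .map(response => println(response.choices.head.message.content))

// Streamed: the withStreaming factory variant exposes partial chunks as an
// Akka Streams Source (an implicit Materializer must be in scope to run it)
val streamedService = OpenAIChatCompletionServiceFactory.withStreaming(ChatProviderSettings.groq)

streamedService
  .createChatCompletionStreamed(
    messages = Seq(UserMessage("Write a haiku about Scala.")),
    settings = CreateChatCompletionSettings(model = "llama-3.1-8b-instant")
  )
  .runWith(Sink.foreach { chunk =>
    // each chunk carries a delta with a partial piece of the answer (field names assumed)
    print(chunk.choices.headOption.flatMap(_.delta.content).getOrElse(""))
  })
```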
@@ -305,7 +295,7 @@ Full documentation of each call with its respective inputs and settings is provi
 service.createCompletion(
   text,
   settings = CreateCompletionSettings(
-    model = ModelId.gpt_3_5_turbo_16k,
+    model = ModelId.gpt_4o,
     max_tokens = Some(1500),
     temperature = Some(0.9),
     presence_penalty = Some(0.2),
@@ -340,7 +330,7 @@ For this to work you need to use `OpenAIServiceStreamedFactory` from `openai-sca
 
 ```scala
 val createChatCompletionSettings = CreateChatCompletionSettings(
-  model = ModelId.gpt_3_5_turbo
+  model = ModelId.gpt_4o
 )
 
 val messages = Seq(
@@ -413,7 +403,51 @@ For this to work you need to use `OpenAIServiceStreamedFactory` from `openai-sca
 }
 ```
 
-- 🔥 **New**: Count expected used tokens before calling `createChatCompletions` or `createChatFunCompletions`, this helps you select proper model ex. `gpt-3.5-turbo` or `gpt-3.5-turbo-16k` and reduce costs. This is an experimental feature and it may not work for all models. Requires `openai-scala-count-tokens` lib.
+- Create chat completion with JSON output (🔥 **New**)
+
+```scala
+val messages = Seq(
+  SystemMessage("Give me the most populous capital cities in JSON format."),
+  UserMessage("List only African countries")
+)
+
+val capitalsSchema = JsonSchema.Object(
+  properties = Map(
+    "countries" -> JsonSchema.Array(
+      items = JsonSchema.Object(
+        properties = Map(
+          "country" -> JsonSchema.String(
+            description = Some("The name of the country")
+          ),
+          "capital" -> JsonSchema.String(
+            description = Some("The capital city of the country")
+          )
+        ),
+        required = Seq("country", "capital")
+      )
+    )
+  ),
+  required = Seq("countries")
+)
+
+val jsonSchemaDef = JsonSchemaDef(
+  name = "capitals_response",
+  strict = true,
+  structure = capitalsSchema
+)
+
+service
+  .createChatCompletion(
+    messages = messages,
+    settings = DefaultSettings.createJsonChatCompletion(jsonSchemaDef)
+  )
+  .map { response =>
+    val json = Json.parse(messageContent(response))
+    println(Json.prettyPrint(json))
+  }
+```
+
+- Count the expected number of used tokens before calling `createChatCompletions` or `createChatFunCompletions`; this helps you select a proper model and reduce costs. This is an experimental feature and may not work for all models. Requires the `openai-scala-count-tokens` lib.
 
 An example of how to count message tokens:
 ```scala
@@ -567,7 +601,6 @@ class MyCompletionService @Inject() (
   authHeaders = Seq(("Authorization", s"Bearer ${sys.env("OCTOAI_TOKEN")}"))
 )
 
-
 // Anthropic
 val anthropicService = AnthropicServiceFactory.asOpenAI()
 
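Because `AnthropicServiceFactory.asOpenAI()` adapts Anthropic's API to the OpenAI-style chat-completion interface, the adapted service can be called the same way as the OpenAI one. A minimal sketch, assuming `ANTHROPIC_API_KEY` is read from the environment and that the Claude model id string below is accepted (both are assumptions, not taken from this diff):

```scala
import scala.concurrent.ExecutionContext.Implicits.global

// Anthropic exposed through the OpenAI-compatible interface
val anthropicService = AnthropicServiceFactory.asOpenAI() // assumed to read ANTHROPIC_API_KEY

anthropicService
  .createChatCompletion(
    messages = Seq(UserMessage("Summarize the plot of Hamlet in one sentence.")),
    settings = CreateChatCompletionSettings(model = "claude-3-haiku-20240307") // assumed model id
  )
  .map(response => println(response.choices.head.message.content))
```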
openai-examples/src/main/scala/io/cequence/openaiscala/examples/CreateChatCompletionJson.scala

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ import scala.concurrent.Future
 
 object CreateChatCompletionJson extends Example with TestFixtures with OpenAIServiceConsts {
 
-  val messages = Seq(
+  private val messages: Seq[BaseMessage] = Seq(
    SystemMessage(capitalsPrompt),
    UserMessage("List only African countries")
  )
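The added type ascription is worth a note: `SystemMessage` and `UserMessage` are distinct case classes, so without an explicit `Seq[BaseMessage]` the compiler infers some narrower least upper bound, which then leaks into the API. A self-contained sketch with simplified stand-ins for the library's message types (the real hierarchy lives elsewhere in the library; these classes are illustrative only):

```scala
// Simplified stand-ins for the library's message hierarchy (not the real classes)
sealed trait BaseMessage { def content: String }
final case class SystemMessage(content: String) extends BaseMessage
final case class UserMessage(content: String) extends BaseMessage

// The explicit Seq[BaseMessage] ascription keeps the element type wide,
// so mixing in further message kinds later does not change the inferred type.
val messages: Seq[BaseMessage] = Seq(
  SystemMessage("Give me the most populous capital cities in JSON format."),
  UserMessage("List only African countries")
)

messages.foreach(m => println(s"${m.getClass.getSimpleName}: ${m.content}"))
```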
