Commit fbe3072: SDK regeneration (#61)

Co-authored-by: fern-api <115122769+fern-api[bot]@users.noreply.github.com>
Parent commit: 7c91cbe

19 files changed: +693 additions, −341 deletions

build.gradle

Lines changed: 2 additions & 2 deletions

```diff
@@ -46,7 +46,7 @@ java {
 
 group = 'com.cohere'
 
-version = '1.8.1'
+version = '1.8.0'
 
 jar {
     dependsOn(":generatePomFileForMavenPublication")
@@ -77,7 +77,7 @@ publishing {
     maven(MavenPublication) {
         groupId = 'com.cohere'
         artifactId = 'cohere-java'
-        version = '1.8.1'
+        version = '1.8.0'
         from components.java
         pom {
             name = 'cohere'
```

reference.md

Lines changed: 52 additions & 61 deletions
```diff
@@ -57,19 +57,6 @@ client.chatStream(
 <dl>
 <dd>
 
-**rawPrompting:** `Optional<Boolean>`
-
-When enabled, the user's prompt will be sent to the model without
-any pre-processing.
-
-Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
-</dd>
-</dl>
-
-<dl>
-<dd>
-
 **message:** `String`
 
 Text input for the model to respond to.
@@ -371,6 +358,19 @@ Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private D
 <dl>
 <dd>
 
+**rawPrompting:** `Optional<Boolean>`
+
+When enabled, the user's prompt will be sent to the model without
+any pre-processing.
+
+Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
 **tools:** `Optional<List<Tool>>`
 
 A list of available tools (functions) that the model may suggest invoking before producing a text response.
```
```diff
@@ -541,19 +541,6 @@ client.chatStream(
 <dl>
 <dd>
 
-**rawPrompting:** `Optional<Boolean>`
-
-When enabled, the user's prompt will be sent to the model without
-any pre-processing.
-
-Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
-</dd>
-</dl>
-
-<dl>
-<dd>
-
 **message:** `String`
 
 Text input for the model to respond to.
@@ -855,6 +842,19 @@ Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private D
 <dl>
 <dd>
 
+**rawPrompting:** `Optional<Boolean>`
+
+When enabled, the user's prompt will be sent to the model without
+any pre-processing.
+
+Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
 **tools:** `Optional<List<Tool>>`
 
 A list of available tools (functions) that the model may suggest invoking before producing a text response.
```
```diff
@@ -2291,19 +2291,6 @@ When set to `true`, tool calls in the Assistant message will be forced to follow
 <dl>
 <dd>
 
-**rawPrompting:** `Optional<Boolean>`
-
-When enabled, the user's prompt will be sent to the model without
-any pre-processing.
-
-Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
-</dd>
-</dl>
-
-<dl>
-<dd>
-
 **responseFormat:** `Optional<ResponseFormatV2>`
 
 </dd>
@@ -2331,9 +2318,11 @@ Safety modes are not yet configurable in combination with `tools` and `documents
 
 **maxTokens:** `Optional<Integer>`
 
-The maximum number of tokens the model will generate as part of the response.
+The maximum number of output tokens the model will generate in the response. If not set, `max_tokens` defaults to the model's maximum output token limit. You can find the maximum output token limits for each model in the [model documentation](https://docs.cohere.com/docs/models).
+
+**Note**: Setting a low value may result in incomplete generations. In such cases, the `finish_reason` field in the response will be set to `"MAX_TOKENS"`.
 
-**Note**: Setting a low value may result in incomplete generations.
+**Note**: If `max_tokens` is set higher than the model's maximum output token limit, the generation will be capped at that model-specific maximum limit.
 
 </dd>
 </dl>
```
```diff
@@ -2435,8 +2424,14 @@ When `NONE` is specified, the model will be forced **not** to use one of the spe
 If tool_choice isn't specified, then the model is free to choose whether to use the specified tools or not.
 
 **Note**: This parameter is only compatible with models [Command-r7b](https://docs.cohere.com/v2/docs/command-r7b) and newer.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
 
-**Note**: The same functionality can be achieved in `/v1/chat` using the `force_single_step` parameter. If `force_single_step=true`, this is equivalent to specifying `REQUIRED`. While if `force_single_step=true` and `tool_results` are passed, this is equivalent to specifying `NONE`.
+**thinking:** `Optional<Thinking>`
 
 </dd>
 </dl>
```
```diff
@@ -2582,19 +2577,6 @@ When set to `true`, tool calls in the Assistant message will be forced to follow
 <dl>
 <dd>
 
-**rawPrompting:** `Optional<Boolean>`
-
-When enabled, the user's prompt will be sent to the model without
-any pre-processing.
-
-Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
-
-</dd>
-</dl>
-
-<dl>
-<dd>
-
 **responseFormat:** `Optional<ResponseFormatV2>`
 
 </dd>
@@ -2622,9 +2604,11 @@ Safety modes are not yet configurable in combination with `tools` and `documents
 
 **maxTokens:** `Optional<Integer>`
 
-The maximum number of tokens the model will generate as part of the response.
+The maximum number of output tokens the model will generate in the response. If not set, `max_tokens` defaults to the model's maximum output token limit. You can find the maximum output token limits for each model in the [model documentation](https://docs.cohere.com/docs/models).
+
+**Note**: Setting a low value may result in incomplete generations. In such cases, the `finish_reason` field in the response will be set to `"MAX_TOKENS"`.
 
-**Note**: Setting a low value may result in incomplete generations.
+**Note**: If `max_tokens` is set higher than the model's maximum output token limit, the generation will be capped at that model-specific maximum limit.
 
 </dd>
 </dl>
```
```diff
@@ -2726,8 +2710,14 @@ When `NONE` is specified, the model will be forced **not** to use one of the spe
 If tool_choice isn't specified, then the model is free to choose whether to use the specified tools or not.
 
 **Note**: This parameter is only compatible with models [Command-r7b](https://docs.cohere.com/v2/docs/command-r7b) and newer.
+
+</dd>
+</dl>
 
-**Note**: The same functionality can be achieved in `/v1/chat` using the `force_single_step` parameter. If `force_single_step=true`, this is equivalent to specifying `REQUIRED`. While if `force_single_step=true` and `tool_results` are passed, this is equivalent to specifying `NONE`.
+<dl>
+<dd>
+
+**thinking:** `Optional<Thinking>`
 
 </dd>
 </dl>
```
```diff
@@ -2875,6 +2865,7 @@ Specifies the types of embeddings you want to get back. Can be one or more of th
 * `"uint8"`: Use this when you want to get back unsigned int8 embeddings. Supported with Embed v3.0 and newer Embed models.
 * `"binary"`: Use this when you want to get back signed binary embeddings. Supported with Embed v3.0 and newer Embed models.
 * `"ubinary"`: Use this when you want to get back unsigned binary embeddings. Supported with Embed v3.0 and newer Embed models.
+* `"base64"`: Use this when you want to get back base64 embeddings. Supported with Embed v3.0 and newer Embed models.
 
 </dd>
 </dl>
```
```diff
@@ -4365,17 +4356,17 @@ Creates a new fine-tuned model. The model will be trained on the dataset specifi
 client.finetuning().createFinetunedModel(
     FinetunedModel
         .builder()
-        .name("api-test")
+        .name("name")
         .settings(
             Settings
                 .builder()
                 .baseModel(
                     BaseModel
                         .builder()
-                        .baseType(BaseType.BASE_TYPE_CHAT)
+                        .baseType(BaseType.BASE_TYPE_UNSPECIFIED)
                         .build()
                 )
-                .datasetId("my-dataset-id")
+                .datasetId("dataset_id")
                 .build()
         )
         .build()
```
src/main/java/com/cohere/api/AsyncRawCohere.java

Lines changed: 6 additions & 6 deletions

```diff
@@ -87,9 +87,6 @@ public CompletableFuture<CohereHttpResponse<Iterable<StreamedChatResponse>>> cha
         .addPathSegments("v1/chat")
         .build();
     Map<String, Object> properties = new HashMap<>();
-    if (request.getRawPrompting().isPresent()) {
-        properties.put("raw_prompting", request.getRawPrompting());
-    }
     properties.put("message", request.getMessage());
     if (request.getModel().isPresent()) {
         properties.put("model", request.getModel());
@@ -146,6 +143,9 @@ public CompletableFuture<CohereHttpResponse<Iterable<StreamedChatResponse>>> cha
     if (request.getPresencePenalty().isPresent()) {
         properties.put("presence_penalty", request.getPresencePenalty());
     }
+    if (request.getRawPrompting().isPresent()) {
+        properties.put("raw_prompting", request.getRawPrompting());
+    }
     if (request.getTools().isPresent()) {
         properties.put("tools", request.getTools());
     }
@@ -299,9 +299,6 @@ public CompletableFuture<CohereHttpResponse<NonStreamedChatResponse>> chat(
         .addPathSegments("v1/chat")
         .build();
     Map<String, Object> properties = new HashMap<>();
-    if (request.getRawPrompting().isPresent()) {
-        properties.put("raw_prompting", request.getRawPrompting());
-    }
     properties.put("message", request.getMessage());
     if (request.getModel().isPresent()) {
         properties.put("model", request.getModel());
@@ -358,6 +355,9 @@ public CompletableFuture<CohereHttpResponse<NonStreamedChatResponse>> chat(
     if (request.getPresencePenalty().isPresent()) {
         properties.put("presence_penalty", request.getPresencePenalty());
     }
+    if (request.getRawPrompting().isPresent()) {
+        properties.put("raw_prompting", request.getRawPrompting());
+    }
     if (request.getTools().isPresent()) {
         properties.put("tools", request.getTools());
     }
```

src/main/java/com/cohere/api/RawCohere.java

Lines changed: 6 additions & 6 deletions

```diff
@@ -83,9 +83,6 @@ public CohereHttpResponse<Iterable<StreamedChatResponse>> chatStream(
         .addPathSegments("v1/chat")
         .build();
     Map<String, Object> properties = new HashMap<>();
-    if (request.getRawPrompting().isPresent()) {
-        properties.put("raw_prompting", request.getRawPrompting());
-    }
     properties.put("message", request.getMessage());
     if (request.getModel().isPresent()) {
         properties.put("model", request.getModel());
@@ -142,6 +139,9 @@ public CohereHttpResponse<Iterable<StreamedChatResponse>> chatStream(
     if (request.getPresencePenalty().isPresent()) {
         properties.put("presence_penalty", request.getPresencePenalty());
     }
+    if (request.getRawPrompting().isPresent()) {
+        properties.put("raw_prompting", request.getRawPrompting());
+    }
     if (request.getTools().isPresent()) {
         properties.put("tools", request.getTools());
     }
@@ -256,9 +256,6 @@ public CohereHttpResponse<NonStreamedChatResponse> chat(ChatRequest request, Req
         .addPathSegments("v1/chat")
         .build();
     Map<String, Object> properties = new HashMap<>();
-    if (request.getRawPrompting().isPresent()) {
-        properties.put("raw_prompting", request.getRawPrompting());
-    }
     properties.put("message", request.getMessage());
     if (request.getModel().isPresent()) {
         properties.put("model", request.getModel());
@@ -315,6 +312,9 @@ public CohereHttpResponse<NonStreamedChatResponse> chat(ChatRequest request, Req
     if (request.getPresencePenalty().isPresent()) {
         properties.put("presence_penalty", request.getPresencePenalty());
     }
+    if (request.getRawPrompting().isPresent()) {
+        properties.put("raw_prompting", request.getRawPrompting());
+    }
     if (request.getTools().isPresent()) {
         properties.put("tools", request.getTools());
     }
```
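Because `properties` is a `HashMap`, moving the `raw_prompting` guard later in the method is purely cosmetic codegen reordering; the serialized request body is unchanged either way. A self-contained sketch of the pattern the generated code uses (names here are illustrative, not the SDK's API): optional fields are only added to the map when present, so absent optionals never appear in the JSON payload.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalProps {
    // Mirror of the generated serialization pattern: required fields are
    // always put into the map, optional fields only when present.
    static Map<String, Object> toProperties(String message, Optional<Boolean> rawPrompting) {
        Map<String, Object> properties = new HashMap<>();
        properties.put("message", message);
        if (rawPrompting.isPresent()) {
            properties.put("raw_prompting", rawPrompting.get());
        }
        return properties;
    }

    public static void main(String[] args) {
        System.out.println(toProperties("hi", Optional.empty()));   // no raw_prompting key
        System.out.println(toProperties("hi", Optional.of(true)));  // raw_prompting included
    }
}
```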

src/main/java/com/cohere/api/core/ClientOptions.java

Lines changed: 2 additions & 2 deletions

```diff
@@ -32,10 +32,10 @@ private ClientOptions(
     this.headers.putAll(headers);
     this.headers.putAll(new HashMap<String, String>() {
         {
-            put("User-Agent", "com.cohere:cohere-java/1.8.1");
+            put("User-Agent", "com.cohere:cohere-java/1.8.0");
             put("X-Fern-Language", "JAVA");
             put("X-Fern-SDK-Name", "com.cohere.fern:api-sdk");
-            put("X-Fern-SDK-Version", "1.8.1");
+            put("X-Fern-SDK-Version", "1.8.0");
         }
     });
     this.headerSuppliers = headerSuppliers;
```
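Note the ordering in this constructor: caller-supplied headers are copied first and the SDK's defaults second, and `putAll` overwrites existing keys, so the SDK's `User-Agent` and `X-Fern-*` values win wherever the keys collide. A stripped-down sketch of that merge behaviour (the method name is illustrative, not part of the SDK):

```java
import java.util.HashMap;
import java.util.Map;

public class HeaderPrecedence {
    // Mirrors the ClientOptions constructor: user headers first, then the
    // SDK defaults, so the defaults override on key collisions.
    static Map<String, String> mergeHeaders(Map<String, String> userSupplied) {
        Map<String, String> headers = new HashMap<>();
        headers.putAll(userSupplied);
        Map<String, String> sdkDefaults = new HashMap<>();
        sdkDefaults.put("User-Agent", "com.cohere:cohere-java/1.8.0");
        sdkDefaults.put("X-Fern-Language", "JAVA");
        headers.putAll(sdkDefaults); // later putAll wins on duplicate keys
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> user = new HashMap<>();
        user.put("User-Agent", "my-app/0.1"); // overridden by the SDK default
        user.put("X-Custom", "1");            // survives: no collision
        System.out.println(mergeHeaders(user));
    }
}
```

Custom headers with non-colliding names pass through untouched; only the reserved SDK header names are forced.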
