
Commit 3e18224

Upstream spec sync (64bit#220)
* add project id in OpenAIConfig
* update audio translations: include json and verbose json functions
* add fine tuning list checkpoints endpoint
* update moderations comment
* assistants=v2 header
* add delete thread message api
* add vector stores api: create, retrieve, list, update, delete
* VectorStores API group in client
* vector stores and vector store files
* vector store file batches API
* expose VectorStoreFileBatches
* Batches API
* update ApiError type to match spec
* updated from spec up to line 9601
* assistant api update; up to 10055
* update RunObject; up to 10327
* updates to Run and Thread objects and requests
* MessageObject updated
* message delta object
* run step delta object and related
* update step objects
* BatchRequestInput and BatchRequestOutput types
* example fix
* updated spec from upstream
* update README
* fix completions example
* fix completions streaming example
* fix models test
* fix AssistantsApiResponseFormatOption
* fix MessageObject
* have errors log in assistants example
* fix metadata type in batch.rs types
* fix metadata type in vector_store.rs type
1 parent f70ed12 commit 3e18224

34 files changed (+9937, −4158 lines)

async-openai/README.md

Lines changed: 4 additions & 2 deletions
```diff
@@ -23,8 +23,10 @@
 
 - It's based on [OpenAI OpenAPI spec](https://github.com/openai/openai-openapi)
 - Current features:
-  - [x] Assistants (Beta)
+  - [x] Assistants v2
+  - [ ] Assistants v2 streaming
   - [x] Audio
+  - [x] Batch
   - [x] Chat
   - [x] Completions (Legacy)
   - [x] Embeddings
@@ -112,7 +114,7 @@ A good starting point would be to look at existing [open issues](https://github.
 
 To maintain quality of the project, a minimum of the following is a must for code contribution:
 - **Names & Documentation**: All struct names, field names and doc comments are from OpenAPI spec. Nested objects in spec without names leaves room for making appropriate name.
-- **Tested**: Examples are primary means of testing and should continue to work. For new features supporting example is required.
+- **Tested**: For changes supporting test(s) and/or example is required. Existing examples, doc tests, unit tests, and integration tests should be made to work with the changes if applicable.
 - **Scope**: Keep scope limited to APIs available in official documents such as [API Reference](https://platform.openai.com/docs/api-reference) or [OpenAPI spec](https://github.com/openai/openai-openapi/). Other LLMs or AI Providers offer OpenAI-compatible APIs, yet they may not always have full parity. In such cases, the OpenAI spec takes precedence.
 - **Consistency**: Keep code style consistent across all the "APIs" that library exposes; it creates a great developer experience.
```

async-openai/src/audio.rs

Lines changed: 12 additions & 3 deletions
```diff
@@ -6,7 +6,8 @@ use crate::{
     types::{
         CreateSpeechRequest, CreateSpeechResponse, CreateTranscriptionRequest,
         CreateTranscriptionResponseJson, CreateTranscriptionResponseVerboseJson,
-        CreateTranslationRequest, CreateTranslationResponse,
+        CreateTranslationRequest, CreateTranslationResponseJson,
+        CreateTranslationResponseVerboseJson,
     },
     Client,
 };
@@ -52,11 +53,19 @@ impl<'c, C: Config> Audio<'c, C> {
             .await
     }
 
-    /// Translates audio into into English.
+    /// Translates audio into English.
     pub async fn translate(
         &self,
         request: CreateTranslationRequest,
-    ) -> Result<CreateTranslationResponse, OpenAIError> {
+    ) -> Result<CreateTranslationResponseJson, OpenAIError> {
+        self.client.post_form("/audio/translations", request).await
+    }
+
+    /// Translates audio into English.
+    pub async fn translate_verbose_json(
+        &self,
+        request: CreateTranslationRequest,
+    ) -> Result<CreateTranslationResponseVerboseJson, OpenAIError> {
         self.client.post_form("/audio/translations", request).await
     }
```
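Splitting `CreateTranslationResponse` into JSON and verbose-JSON variants means callers now pick the response shape by method. A minimal usage sketch, assuming the crate's generated `CreateTranslationRequestArgs` builder and a hypothetical local `audio.mp3` file (not verified against this commit):

```rust
use async_openai::{types::CreateTranslationRequestArgs, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // Plain JSON response: just the translated text.
    let request = CreateTranslationRequestArgs::default()
        .model("whisper-1")
        .file("audio.mp3") // hypothetical input file
        .build()?;
    let json = client.audio().translate(request).await?;
    println!("{}", json.text);

    // Verbose JSON also carries language, duration, and segments.
    let request = CreateTranslationRequestArgs::default()
        .model("whisper-1")
        .file("audio.mp3")
        .build()?;
    let verbose = client.audio().translate_verbose_json(request).await?;
    println!("{} ({}s)", verbose.text, verbose.duration);
    Ok(())
}
```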

async-openai/src/batches.rs

Lines changed: 49 additions & 0 deletions
```diff
@@ -0,0 +1,49 @@
+use serde::Serialize;
+
+use crate::{
+    config::Config,
+    error::OpenAIError,
+    types::{Batch, BatchRequest, ListBatchesResponse},
+    Client,
+};
+
+/// Create large batches of API requests for asynchronous processing. The Batch API returns completions within 24 hours for a 50% discount.
+///
+/// Related guide: [Batch](https://platform.openai.com/docs/guides/batch)
+pub struct Batches<'c, C: Config> {
+    client: &'c Client<C>,
+}
+
+impl<'c, C: Config> Batches<'c, C> {
+    pub fn new(client: &'c Client<C>) -> Self {
+        Self { client }
+    }
+
+    /// Creates and executes a batch from an uploaded file of requests
+    pub async fn create(&self, request: BatchRequest) -> Result<Batch, OpenAIError> {
+        self.client.post("/batches", request).await
+    }
+
+    /// List your organization's batches.
+    pub async fn list<Q>(&self, query: &Q) -> Result<ListBatchesResponse, OpenAIError>
+    where
+        Q: Serialize + ?Sized,
+    {
+        self.client.get_with_query("/batches", query).await
+    }
+
+    /// Retrieves a batch.
+    pub async fn retrieve(&self, batch_id: &str) -> Result<Batch, OpenAIError> {
+        self.client.get(&format!("/batches/{batch_id}")).await
+    }
+
+    /// Cancels an in-progress batch.
+    pub async fn cancel(&self, batch_id: &str) -> Result<Batch, OpenAIError> {
+        self.client
+            .post(
+                &format!("/batches/{batch_id}/cancel"),
+                serde_json::json!({}),
+            )
+            .await
+    }
+}
```
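For context, a sketch of how the new group might be driven end to end. The batch id, the `limit` query value, and the assumption that `ListBatchesResponse` exposes a `data` vector are all unverified against this commit; `create` is skipped to avoid guessing `BatchRequest`'s fields:

```rust
use async_openai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let batches = client.batches();

    // `list` accepts anything serializable as a query string.
    let page = batches.list(&[("limit", "5")]).await?;
    println!("fetched {} batches", page.data.len());

    // Placeholder id; a real id comes from `create` or `list`.
    let batch = batches.retrieve("batch_abc123").await?;
    println!("status: {:?}", batch.status);

    // In-progress batches can be cancelled.
    let cancelled = batches.cancel("batch_abc123").await?;
    println!("status after cancel: {:?}", cancelled.status);
    Ok(())
}
```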

async-openai/src/client.rs

Lines changed: 12 additions & 1 deletion
```diff
@@ -11,7 +11,8 @@ use crate::{
     file::Files,
     image::Images,
     moderation::Moderations,
-    Assistants, Audio, Chat, Completions, Embeddings, FineTuning, Models, Threads,
+    Assistants, Audio, Batches, Chat, Completions, Embeddings, FineTuning, Models, Threads,
+    VectorStores,
 };
 
 #[derive(Debug, Clone)]
@@ -128,6 +129,16 @@
         Threads::new(self)
     }
 
+    /// To call [VectorStores] group related APIs using this client.
+    pub fn vector_stores(&self) -> VectorStores<C> {
+        VectorStores::new(self)
+    }
+
+    /// To call [Batches] group related APIs using this client.
+    pub fn batches(&self) -> Batches<C> {
+        Batches::new(self)
+    }
+
     pub fn config(&self) -> &C {
         &self.config
    }
```
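The accessor pattern keeps each API group borrowing the one client. A sketch of the two new accessors, assuming `VectorStores::list` takes a serializable query like the other groups (signature not confirmed from this diff):

```rust
use async_openai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // Each accessor borrows the client, so groups can be created on demand.
    let stores = client.vector_stores().list(&[("limit", "10")]).await?;
    println!("{} vector stores", stores.data.len());

    let batches = client.batches().list(&[("limit", "10")]).await?;
    println!("{} batches", batches.data.len());
    Ok(())
}
```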

async-openai/src/config.rs

Lines changed: 19 additions & 2 deletions
```diff
@@ -5,8 +5,10 @@ use serde::Deserialize;
 
 /// Default v1 API base url
 pub const OPENAI_API_BASE: &str = "https://api.openai.com/v1";
-/// Name for organization header
+/// Organization header
 pub const OPENAI_ORGANIZATION_HEADER: &str = "OpenAI-Organization";
+/// Project header
+pub const OPENAI_PROJECT_HEADER: &str = "OpenAI-Project";
 
 /// Calls to the Assistants API require that you pass a Beta header
 pub const OPENAI_BETA_HEADER: &str = "OpenAI-Beta";
@@ -30,6 +32,7 @@ pub struct OpenAIConfig {
     api_base: String,
     api_key: Secret<String>,
     org_id: String,
+    project_id: String,
 }
 
 impl Default for OpenAIConfig {
@@ -40,6 +43,7 @@
                 .unwrap_or_else(|_| "".to_string())
                 .into(),
             org_id: Default::default(),
+            project_id: Default::default(),
         }
     }
 }
@@ -56,6 +60,12 @@
         self
     }
 
+    /// Non default project id
+    pub fn with_project_id<S: Into<String>>(mut self, project_id: S) -> Self {
+        self.project_id = project_id.into();
+        self
+    }
+
     /// To use a different API key different from default OPENAI_API_KEY env var
     pub fn with_api_key<S: Into<String>>(mut self, api_key: S) -> Self {
         self.api_key = Secret::from(api_key.into());
@@ -83,6 +93,13 @@ impl Config for OpenAIConfig {
             );
         }
 
+        if !self.project_id.is_empty() {
+            headers.insert(
+                OPENAI_PROJECT_HEADER,
+                self.project_id.as_str().parse().unwrap(),
+            );
+        }
+
         headers.insert(
             AUTHORIZATION,
             format!("Bearer {}", self.api_key.expose_secret())
@@ -93,7 +110,7 @@
 
         // hack for Assistants APIs
         // Calls to the Assistants API require that you pass a Beta header
-        headers.insert(OPENAI_BETA_HEADER, "assistants=v1".parse().unwrap());
+        headers.insert(OPENAI_BETA_HEADER, "assistants=v2".parse().unwrap());
 
         headers
     }
```
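The new builder slots into the existing config chain; the `OpenAI-Project` header, like `OpenAI-Organization`, is only sent when the value is non-empty. A sketch (key, org, and project values are placeholders):

```rust
use async_openai::{config::OpenAIConfig, Client};

fn main() {
    // Project-scoped configuration; each header is emitted only
    // when its value is non-empty.
    let config = OpenAIConfig::new()
        .with_api_key("sk-placeholder")       // Authorization: Bearer ...
        .with_org_id("org-placeholder")       // OpenAI-Organization
        .with_project_id("proj-placeholder"); // OpenAI-Project

    let _client = Client::with_config(config);
}
```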

async-openai/src/error.rs

Lines changed: 2 additions & 2 deletions
```diff
@@ -32,8 +32,8 @@ pub enum OpenAIError {
 pub struct ApiError {
     pub message: String,
     pub r#type: Option<String>,
-    pub param: Option<serde_json::Value>,
-    pub code: Option<serde_json::Value>,
+    pub param: Option<String>,
+    pub code: Option<String>,
 }
 
 /// Wrapper to deserialize the error object nested in "error" JSON key
```
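With `param` and `code` narrowed to `Option<String>`, callers no longer unpack `serde_json::Value`. A hedged sketch of handling the variant (the model id is a deliberate placeholder chosen to trigger an API error):

```rust
use async_openai::{error::OpenAIError, Client};

#[tokio::main]
async fn main() {
    let client = Client::new();
    match client.models().retrieve("no-such-model").await {
        Ok(model) => println!("found {}", model.id),
        // `param` and `code` are plain strings now, so they can be
        // compared or formatted directly.
        Err(OpenAIError::ApiError(e)) => eprintln!(
            "API error: {} (param={:?}, code={:?})",
            e.message, e.param, e.code
        ),
        Err(other) => eprintln!("other error: {other}"),
    }
}
```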

async-openai/src/fine_tuning.rs

Lines changed: 18 additions & 2 deletions
```diff
@@ -4,8 +4,8 @@ use crate::{
     config::Config,
     error::OpenAIError,
     types::{
-        CreateFineTuningJobRequest, FineTuningJob, ListFineTuningJobEventsResponse,
-        ListPaginatedFineTuningJobsResponse,
+        CreateFineTuningJobRequest, FineTuningJob, ListFineTuningJobCheckpointsResponse,
+        ListFineTuningJobEventsResponse, ListPaginatedFineTuningJobsResponse,
     },
     Client,
 };
@@ -80,4 +80,20 @@
             )
             .await
     }
+
+    pub async fn list_checkpoints<Q>(
+        &self,
+        fine_tuning_job_id: &str,
+        query: &Q,
+    ) -> Result<ListFineTuningJobCheckpointsResponse, OpenAIError>
+    where
+        Q: Serialize + ?Sized,
+    {
+        self.client
+            .get_with_query(
+                format!("/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints").as_str(),
+                query,
+            )
+            .await
+    }
 }
```
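A sketch of calling the new checkpoints endpoint. The job id is a placeholder, and the checkpoint fields printed (`id`, `step_number`) follow the upstream spec rather than anything confirmed in this diff:

```rust
use async_openai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    // `limit` mirrors the endpoint's pagination query parameter.
    let checkpoints = client
        .fine_tuning()
        .list_checkpoints("ftjob-abc123", &[("limit", "10")])
        .await?;
    for checkpoint in checkpoints.data {
        println!("{} at step {}", checkpoint.id, checkpoint.step_number);
    }
    Ok(())
}
```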

async-openai/src/lib.rs

Lines changed: 8 additions & 0 deletions
```diff
@@ -79,6 +79,7 @@
 mod assistant_files;
 mod assistants;
 mod audio;
+mod batches;
 mod chat;
 mod client;
 mod completion;
@@ -98,10 +99,14 @@ mod steps;
 mod threads;
 pub mod types;
 mod util;
+mod vector_store_file_batches;
+mod vector_store_files;
+mod vector_stores;
 
 pub use assistant_files::AssistantFiles;
 pub use assistants::Assistants;
 pub use audio::Audio;
+pub use batches::Batches;
 pub use chat::Chat;
 pub use client::Client;
 pub use completion::Completions;
@@ -116,3 +121,6 @@ pub use moderation::Moderations;
 pub use runs::Runs;
 pub use steps::Steps;
 pub use threads::Threads;
+pub use vector_store_file_batches::VectorStoreFileBatches;
+pub use vector_store_files::VectorStoreFiles;
+pub use vector_stores::VectorStores;
```

async-openai/src/messages.rs

Lines changed: 13 additions & 1 deletion
```diff
@@ -3,7 +3,10 @@ use serde::Serialize;
 use crate::{
     config::Config,
     error::OpenAIError,
-    types::{CreateMessageRequest, ListMessagesResponse, MessageObject, ModifyMessageRequest},
+    types::{
+        CreateMessageRequest, DeleteMessageResponse, ListMessagesResponse, MessageObject,
+        ModifyMessageRequest,
+    },
     Client, MessageFiles,
 };
 
@@ -70,4 +73,13 @@ impl<'c, C: Config> Messages<'c, C> {
             .get_with_query(&format!("/threads/{}/messages", self.thread_id), query)
             .await
     }
+
+    pub async fn delete(&self, message_id: &str) -> Result<DeleteMessageResponse, OpenAIError> {
+        self.client
+            .delete(&format!(
+                "/threads/{}/messages/{message_id}",
+                self.thread_id
+            ))
+            .await
+    }
 }
```
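Since `Messages` is scoped to a thread, only the message id is passed to `delete`. A sketch with placeholder ids, assuming `DeleteMessageResponse` carries a `deleted` flag as in the other delete responses:

```rust
use async_openai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    // Thread and message ids are placeholders.
    let response = client
        .threads()
        .messages("thread_abc123")
        .delete("msg_abc123")
        .await?;
    println!("deleted: {}", response.deleted);
    Ok(())
}
```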

async-openai/src/moderation.rs

Lines changed: 2 additions & 2 deletions
```diff
@@ -5,7 +5,7 @@ use crate::{
     Client,
 };
 
-/// Given a input text, outputs if the model classifies it as violating OpenAI's content policy.
+/// Given some input text, outputs if the model classifies it as potentially harmful across several categories.
 ///
 /// Related guide: [Moderations](https://platform.openai.com/docs/guides/moderation)
 pub struct Moderations<'c, C: Config> {
@@ -17,7 +17,7 @@ impl<'c, C: Config> Moderations<'c, C> {
         Self { client }
     }
 
-    /// Classifies if text violates OpenAI's Content Policy
+    /// Classifies if text is potentially harmful.
     pub async fn create(
         &self,
         request: CreateModerationRequest,
```
