Commit f87ecd4

Merge pull request #1865 from redis/DOC-5490
LangCache: Add multitabbed code examples

2 parents 01cca3a + 95fd241

1 file changed: +314 −12 lines

content/develop/ai/langcache/api-examples.md
This example uses `cURL` and Linux shell scripts to demonstrate the API; you can use any standard REST client or library.

{{% /info %}}

If your app is written in Python or Javascript, you can also use the LangCache Software Development Kits (SDKs) to access the API:

- [LangCache SDK for Python](https://pypi.org/project/langcache/)
- [LangCache SDK for Javascript](https://www.npmjs.com/package/@redis-ai/langcache)

## Examples

### Search LangCache for similar responses

Use [`POST /v1/caches/{cacheId}/entries/search`]({{< relref "/develop/ai/langcache/api-reference#tag/Cache-Entries/operation/search" >}}) to search the cache for matching responses to a user prompt.

{{< multitabs id="search-basic"
tab1="REST API"
tab2="Python"
tab3="Javascript" >}}

```sh
POST https://[host]/v1/caches/{cacheId}/entries/search
{
  "prompt": "User prompt text"
}
```
-tab-sep-

```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:
    res = lang_cache.search(
        prompt="User prompt text",
        similarity_threshold=0.9,
    )
    print(res)
```
-tab-sep-

```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.search({
    prompt: "User prompt text",
    similarityThreshold: 0.9,
  });

  console.log(result);
}

run();
```

{{< /multitabs >}}

Place this call in your client app right before you call your LLM's REST API. If LangCache returns a response, you can send that response back to the user instead of calling the LLM.

If LangCache does not return a response, you should call your LLM's REST API to generate a new response. After you get a response from the LLM, you can [store it in LangCache](#store-a-new-response-in-langcache) for future use.
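
Putting the two calls together, the read path is a standard cache-aside loop. The sketch below is illustrative only: the dictionary and the `call_llm` helper are hypothetical stand-ins for LangCache and your LLM client (an exact-match dict replaces LangCache's semantic search), so you can see where the search and store calls belong.

```python
# Hypothetical cache-aside sketch: a dict stands in for LangCache,
# and call_llm() stands in for your LLM's REST API.
cache = {}

def call_llm(prompt):
    # Placeholder for your real LLM call.
    return f"LLM response for: {prompt}"

def answer(prompt):
    cached = cache.get(prompt)       # ~ POST .../entries/search
    if cached is not None:
        return cached                # cache hit: skip the LLM
    response = call_llm(prompt)      # cache miss: generate a response
    cache[prompt] = response         # ~ POST .../entries (store)
    return response

first = answer("What is semantic caching?")   # miss: calls the LLM, stores result
second = answer("What is semantic caching?")  # hit: served from the cache
print(first == second)  # prints True
```

In the real flow, the dict lookup and assignment become the search and store requests shown in the tabs above and below.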

You can also scope the responses returned from LangCache by adding an `attributes` object to the request. LangCache will only return responses that match the attributes you specify.

{{< multitabs id="search-attributes"
tab1="REST API"
tab2="Python"
tab3="Javascript" >}}

```sh
POST https://[host]/v1/caches/{cacheId}/entries/search
{
  "prompt": "User prompt text",
  "attributes": {
    "customAttributeName": "customAttributeValue"
  }
}
```
-tab-sep-

```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:
    res = lang_cache.search(
        prompt="User prompt text",
        attributes={"customAttributeName": "customAttributeValue"},
        similarity_threshold=0.9,
    )
    print(res)
```
-tab-sep-

```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.search({
    prompt: "User prompt text",
    similarityThreshold: 0.9,
    attributes: {
      "customAttributeName": "customAttributeValue",
    },
  });

  console.log(result);
}

run();
```

{{< /multitabs >}}

### Store a new response in LangCache

Use [`POST /v1/caches/{cacheId}/entries`]({{< relref "/develop/ai/langcache/api-reference#tag/Cache-Entries/operation/set" >}}) to store a new response in the cache.

{{< multitabs id="store-basic"
tab1="REST API"
tab2="Python"
tab3="Javascript" >}}

```sh
POST https://[host]/v1/caches/{cacheId}/entries
{
  "prompt": "User prompt text",
  "response": "LLM response text"
}
```

-tab-sep-

```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:
    res = lang_cache.set(
        prompt="User prompt text",
        response="LLM response text",
    )
    print(res)
```

-tab-sep-

```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.set({
    prompt: "User prompt text",
    response: "LLM response text",
  });

  console.log(result);
}

run();
```

{{< /multitabs >}}

Place this call in your client app after you get a response from the LLM. This will store the response in the cache for future use.

You can also store the responses with custom attributes by adding an `attributes` object to the request.

{{< multitabs id="store-attributes"
tab1="REST API"
tab2="Python"
tab3="Javascript" >}}

```sh
POST https://[host]/v1/caches/{cacheId}/entries
{
  "prompt": "User prompt text",
  "response": "LLM response text",
  "attributes": {
    "customAttributeName": "customAttributeValue"
  }
}
```

-tab-sep-

```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:
    res = lang_cache.set(
        prompt="User prompt text",
        response="LLM response text",
        attributes={"customAttributeName": "customAttributeValue"},
    )
    print(res)
```

-tab-sep-

```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.set({
    prompt: "User prompt text",
    response: "LLM response text",
    attributes: {
      "customAttributeName": "customAttributeValue",
    },
  });

  console.log(result);
}

run();
```

{{< /multitabs >}}

### Delete cached responses

Use [`DELETE /v1/caches/{cacheId}/entries/{entryId}`]({{< relref "/develop/ai/langcache/api-reference#tag/Cache-Entries/operation/delete" >}}) to delete a cached response from the cache.

{{< multitabs id="delete-entry"
tab1="REST API"
tab2="Python"
tab3="Javascript" >}}

```sh
DELETE https://[host]/v1/caches/{cacheId}/entries/{entryId}
```
-tab-sep-

```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:
    res = lang_cache.delete_by_id(entry_id="<entryId>")
    print(res)
```

-tab-sep-

```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.deleteById({
    entryId: "<entryId>",
  });

  console.log(result);
}

run();
```

{{< /multitabs >}}

You can also use [`DELETE /v1/caches/{cacheId}/entries`]({{< relref "/develop/ai/langcache/api-reference#tag/Cache-Entries/operation/deleteQuery" >}}) to delete multiple cached responses based on the `attributes` you specify. If you specify multiple `attributes`, LangCache will delete entries that contain all given attributes.
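
The multi-attribute match works as an AND: an entry is deleted only if it contains every attribute you pass. A hypothetical sketch of that matching rule, with plain dicts standing in for cache entries:

```python
# Hypothetical sketch of the all-attributes match rule behind
# DELETE /v1/caches/{cacheId}/entries; entries here are plain dicts.
def matches_all(entry_attrs, query_attrs):
    # True only if every query attribute is present with the same value.
    return all(entry_attrs.get(k) == v for k, v in query_attrs.items())

entries = [
    {"id": 1, "attrs": {"tenant": "a", "lang": "en"}},
    {"id": 2, "attrs": {"tenant": "a"}},
    {"id": 3, "attrs": {"tenant": "b", "lang": "en"}},
]
query = {"tenant": "a", "lang": "en"}

deleted = [e["id"] for e in entries if matches_all(e["attrs"], query)]
print(deleted)  # [1] -- only the entry carrying both attributes matches
```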

{{< warning >}}
If you do not specify any `attributes`, all responses in the cache will be deleted. This cannot be undone.
{{< /warning >}}

<br/>

{{< multitabs id="delete-attributes"
tab1="REST API"
tab2="Python"
tab3="Javascript" >}}

```sh
DELETE https://[host]/v1/caches/{cacheId}/entries
{
  "attributes": {
    "customAttributeName": "customAttributeValue"
  }
}
```

-tab-sep-

```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:
    res = lang_cache.delete_query(
        attributes={"customAttributeName": "customAttributeValue"},
    )
    print(res)
```

-tab-sep-

```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.deleteQuery({
    attributes: {
      "customAttributeName": "customAttributeValue",
    },
  });

  console.log(result);
}

run();
```

{{< /multitabs >}}