Commit c199c41

Merge pull request #1838 from redis/revert-1703-DOC-5172
Revert "RC: LangCache public preview"
2 parents: 5d6e4b0 + 318d977


45 files changed: +104, -711 lines

content/develop/ai/langcache.md

Lines changed: 96 additions & 0 deletions
@@ -0,0 +1,96 @@
---
Title: Redis LangCache
alwaysopen: false
categories:
- docs
- develop
- ai
description: Redis LangCache provides semantic caching-as-a-service to reduce LLM costs and improve response times for AI applications.
linkTitle: LangCache
weight: 30
---

Redis LangCache is a fully managed semantic caching service that reduces large language model (LLM) costs and improves response times for AI applications.

## How LangCache works

LangCache uses semantic caching to store and reuse previous LLM responses for similar queries. Instead of calling the LLM for every request, LangCache:

- **Checks for similar cached responses** when a new query arrives
- **Returns cached results instantly** if a semantically similar response exists
- **Stores new responses** for future reuse when no cache match is found
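
To make the flow concrete, the following minimal Python sketch shows the check, return, and store pattern against a generic semantic-cache REST endpoint. The base URL, paths, payload fields, and environment variables are illustrative assumptions, not the documented LangCache API.

```python
import os
import requests


def cached_llm_response(prompt: str, call_llm) -> str:
    """Serve a response from the semantic cache when possible, else call the LLM."""
    # Illustrative configuration; real values come from your LangCache setup.
    base_url = os.environ.get("LANGCACHE_URL", "https://langcache.example.com")
    cache_id = os.environ.get("LANGCACHE_CACHE_ID", "my-cache")
    headers = {"Authorization": f"Bearer {os.environ.get('LANGCACHE_API_KEY', '')}"}

    # 1. Check for a semantically similar cached response.
    search = requests.post(
        f"{base_url}/caches/{cache_id}/search",  # hypothetical endpoint
        headers=headers,
        json={"prompt": prompt},
        timeout=10,
    )
    search.raise_for_status()
    matches = search.json().get("matches", [])

    # 2. Return the cached result instantly if a similar entry exists.
    if matches:
        return matches[0]["response"]

    # 3. Otherwise call the LLM and store the new response for future reuse.
    answer = call_llm(prompt)
    requests.post(
        f"{base_url}/caches/{cache_id}/entries",  # hypothetical endpoint
        headers=headers,
        json={"prompt": prompt, "response": answer},
        timeout=10,
    ).raise_for_status()
    return answer
```

The three numbered steps mirror the bullets above; a production client would also handle timeouts, retries, and similarity configuration.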

## Key benefits

### Cost reduction
LangCache significantly reduces LLM costs by eliminating redundant API calls. Since up to 90% of LLM requests are repetitive, caching frequently requested responses provides substantial cost savings.
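
As a back-of-the-envelope illustration of how the saving scales with cache hit rate (the workload and per-call cost below are made-up numbers; only the up-to-90% repetition figure comes from the paragraph above):

```python
# Hypothetical workload, for illustration only.
requests_per_month = 1_000_000
cost_per_llm_call = 0.002   # assumed dollars per LLM API call
hit_rate = 0.90             # upper bound suggested above; real hit rates vary

without_cache = requests_per_month * cost_per_llm_call
with_cache = requests_per_month * (1 - hit_rate) * cost_per_llm_call  # cache lookups assumed negligible in cost

print(f"LLM spend without caching: ${without_cache:,.0f}")  # $2,000
print(f"LLM spend with caching:    ${with_cache:,.0f}")     # $200, roughly 90% less
```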

### Improved performance
Cached responses are retrieved from memory, providing response times up to 15 times faster than LLM API calls. This improvement is particularly beneficial for retrieval-augmented generation (RAG) applications.

### Simple deployment
LangCache is available as a managed service through a REST API. The service includes:

- Automated embedding generation
- Configurable cache controls
- Simple billing structure
- No database management required

### Advanced cache management
The service provides comprehensive cache management features:

- Data access and privacy controls
- Configurable eviction protocols
- Usage monitoring and analytics
- Cache hit rate tracking

## Use cases

### AI assistants and chatbots
Optimize conversational AI applications by caching common responses and reducing latency for frequently asked questions.

### RAG applications
Improve retrieval-augmented generation performance by caching responses to similar queries, reducing both cost and response time.

### AI agents
Enhance multi-step reasoning chains and agent workflows by caching intermediate results and common reasoning patterns.

### AI gateways
Integrate LangCache into centralized AI gateway services to manage and control LLM costs across multiple applications.

## Getting started

LangCache is currently available through a private preview program. The service is accessible via REST API and supports any programming language.

### Prerequisites

To use LangCache, you need:

- An AI application that makes LLM API calls
- A use case involving repetitive or similar queries
- Willingness to provide feedback during the preview phase

### Access

LangCache is offered as a fully managed cloud service. During the private preview:

- Participation is free
- Usage limits may apply
- Dedicated support is provided
- Regular feedback sessions are conducted

## Data security and privacy

LangCache stores your data on your Redis servers. Redis does not access your data or use it to train AI models. The service maintains enterprise-grade security and privacy standards.

## Support

Private preview participants receive:

- Dedicated onboarding resources
- Documentation and tutorials
- Email and chat support
- Regular check-ins with the product team
- Exclusive roadmap updates

For more information about joining the private preview, visit the [Redis LangCache website](https://redis.io/langcache/).

content/develop/ai/langcache/_index.md

Lines changed: 0 additions & 111 deletions
This file was deleted.

content/develop/ai/langcache/api-reference.md

Lines changed: 0 additions & 129 deletions
This file was deleted.

content/embeds/langcache-cost-reduction.md

Lines changed: 0 additions & 21 deletions
This file was deleted.

content/embeds/rc-langcache-get-started.md

Lines changed: 0 additions & 7 deletions
This file was deleted.

content/operate/rc/changelog/2023/august-2023.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ If you'd like to use triggers and functions with a [Flexible subscription]({{< r
For more information about triggers and functions, see the [triggers and functions documentation]({{< relref "/operate/oss_and_stack/stack-with-enterprise/deprecated-features/triggers-and-functions/" >}}).

{{< note >}}
- Triggers and functions is discontinued as of [May 2024]({{< relref "/operate/rc/changelog/2024/may-2024" >}}).
+ Triggers and functions is discontinued as of [May 2024]({{< relref "/operate/rc/changelog/may-2024" >}}).
{{< /note >}}

### Maintenance windows
