Commit 0cbbf04

Fix spelling errors (#351)
- Fixes some spelling errors
- Adds codespell to the pre-commit hooks

Co-authored-by: Tyler Hutcherson <[email protected]>
Parent: 7c7d4f2

19 files changed (+55, -43 lines)

.pre-commit-config.yaml (9 additions, 0 deletions)

@@ -6,3 +6,12 @@ repos:
         entry: bash -c 'make format && make check-sort-imports && make check-types'
         language: system
         pass_filenames: false
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.2.6
+    hooks:
+      - id: codespell
+        name: Check spelling
+        args:
+          - --write-changes
+          - --skip=*.pyc,*.pyo,*.lock,*.git,*.mypy_cache,__pycache__,*.egg-info,.pytest_cache,docs/_build,env,venv,.venv
+          - --ignore-words-list=enginee
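The `--ignore-words-list=enginee` flag exists because the docs deliberately use the wildcard prefix `enginee*` in a query example, which codespell would otherwise "correct". The effect of an ignore list can be sketched with a toy checker; `TYPOS` and `flag_typos` below are illustrative stand-ins, not codespell's real dictionary or API:

```python
# Toy spell checker sketching how codespell's --ignore-words-list works.
# TYPOS and flag_typos are hypothetical stand-ins for illustration only.
TYPOS = {"captial": "capital", "threshhold": "threshold", "enginee": "engineer"}

def flag_typos(text: str, ignore_words: set) -> dict:
    """Return {misspelling: suggestion}, skipping words on the ignore list."""
    found = {}
    for token in text.lower().split():
        word = token.strip(".,*\"'()")  # drop punctuation and wildcard glob chars
        if word in TYPOS and word not in ignore_words:
            found[word] = TYPOS[word]
    return found

print(flag_typos("the captial city", set()))      # {'captial': 'capital'}
print(flag_typos("job % enginee*", {"enginee"}))  # {} -- ignored on purpose
```

The alternative used in `05_hash_vs_json.ipynb` below is an inline `# codespell:ignore enginee` comment, which suppresses the warning at a single site instead of globally.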

docs/user_guide/02_hybrid_queries.ipynb (2 additions, 2 deletions)

@@ -1090,7 +1090,7 @@
    "source": [
     "## Non-vector Queries\n",
     "\n",
-    "In some cases, you may not want to run a vector query, but just use a ``FilterExpression`` similar to a SQL query. The ``FilterQuery`` class enable this functionality. It is similar to the ``VectorQuery`` class but soley takes a ``FilterExpression``."
+    "In some cases, you may not want to run a vector query, but just use a ``FilterExpression`` similar to a SQL query. The ``FilterQuery`` class enable this functionality. It is similar to the ``VectorQuery`` class but solely takes a ``FilterExpression``."
    ]
   },
   {
@@ -1448,7 +1448,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.13.2"
+  "version": "3.12.8"
  },
  "orig_nbformat": 4
 },

docs/user_guide/03_llmcache.ipynb (6 additions, 17 deletions)

@@ -1,16 +1,5 @@
 {
  "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "# Semantic Caching for LLMs\n",
-    "\n",
-    "RedisVL provides a ``SemanticCache`` interface to utilize Redis' built-in caching capabilities AND vector search in order to store responses from previously-answered questions. This reduces the number of requests and tokens sent to the Large Language Models (LLM) service, decreasing costs and enhancing application throughput (by reducing the time taken to generate responses).\n",
-    "\n",
-    "This notebook will go over how to use Redis as a Semantic Cache for your applications"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -110,7 +99,7 @@
    "    name=\"llmcache\",  # underlying search index name\n",
    "    redis_url=\"redis://localhost:6379\",  # redis connection url string\n",
    "    distance_threshold=0.1,  # semantic cache distance threshold\n",
-   "    vectorizer=HFTextVectorizer(\"redis/langcache-embed-v1\"),  # embdding model\n",
+   "    vectorizer=HFTextVectorizer(\"redis/langcache-embed-v1\"),  # embedding model\n",
    ")"
   ]
  },
@@ -315,12 +304,12 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Customize the Distance Threshhold\n",
+   "## Customize the Distance Threshold\n",
    "\n",
-   "For most use cases, the right semantic similarity threshhold is not a fixed quantity. Depending on the choice of embedding model,\n",
-   "the properties of the input query, and even business use case -- the threshhold might need to change. \n",
+   "For most use cases, the right semantic similarity threshold is not a fixed quantity. Depending on the choice of embedding model,\n",
+   "the properties of the input query, and even business use case -- the threshold might need to change. \n",
    "\n",
-   "Fortunately, you can seamlessly adjust the threshhold at any point like below:"
+   "Fortunately, you can seamlessly adjust the threshold at any point like below:"
   ]
  },
  {
@@ -930,7 +919,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.13.2"
+  "version": "3.12.8"
  },
  "orig_nbformat": 4
 },
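The distance-threshold behavior the corrected prose above describes can be illustrated with a minimal self-contained sketch. Plain cosine distance over Python lists stands in for the vector index, and `cache_hit` is a hypothetical helper for illustration, not the actual `SemanticCache` API:

```python
import math

def cosine_distance(a, b):
    """Cosine distance: 0.0 for identical directions, up to 2.0 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def cache_hit(query_vec, cached_vec, distance_threshold):
    # A cached response is only served when the semantic distance between
    # the new prompt and the cached prompt falls under the threshold.
    return cosine_distance(query_vec, cached_vec) <= distance_threshold

query, cached = [1.0, 0.0], [0.9, 0.1]
print(cache_hit(query, cached, 0.1))    # True: loose threshold accepts near matches
print(cache_hit(query, cached, 0.001))  # False: tight threshold wants near-exact match
```

This is why the right threshold is not a fixed quantity: the same pair of vectors flips between hit and miss purely on the threshold chosen.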

docs/user_guide/04_vectorizers.ipynb (2 additions, 2 deletions)

@@ -175,7 +175,7 @@
   }
  ],
  "source": [
-  "# openai also supports asyncronous requests, which we can use to speed up the vectorization process.\n",
+  "# openai also supports asynchronous requests, which we can use to speed up the vectorization process.\n",
   "embeddings = await oai.aembed_many(sentences)\n",
   "print(\"Number of Embeddings:\", len(embeddings))\n"
  ]
@@ -495,7 +495,7 @@
   "\n",
   "mistral = MistralAITextVectorizer()\n",
   "\n",
-  "# embed a sentence using their asyncronous method\n",
+  "# embed a sentence using their asynchronous method\n",
   "test = await mistral.aembed(\"This is a test sentence.\")\n",
   "print(\"Vector dimensions: \", len(test))\n",
   "print(test[:10])"
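The speed-up the notebook comments describe comes from issuing embedding requests concurrently rather than one at a time. A minimal sketch with a stand-in `embed` coroutine (not the actual OpenAI or Mistral vectorizer; the sleep simulates network latency):

```python
import asyncio

async def embed(sentence: str) -> list:
    # Stand-in for a real vectorizer call (e.g. an HTTP request to an
    # embeddings API); the sleep simulates network latency.
    await asyncio.sleep(0.01)
    return [float(len(sentence))]

async def embed_many(sentences: list) -> list:
    # Fire all requests concurrently; total wall time is roughly one
    # round-trip instead of one round-trip per sentence.
    return await asyncio.gather(*(embed(s) for s in sentences))

vectors = asyncio.run(embed_many(["a", "bb", "ccc"]))
print(vectors)  # [[1.0], [2.0], [3.0]]
```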

docs/user_guide/05_hash_vs_json.ipynb (1 addition, 1 deletion)

@@ -282,7 +282,7 @@
   "from redisvl.query import VectorQuery\n",
   "from redisvl.query.filter import Tag, Text, Num\n",
   "\n",
-  "t = (Tag(\"credit_score\") == \"high\") & (Text(\"job\") % \"enginee*\") & (Num(\"age\") > 17)\n",
+  "t = (Tag(\"credit_score\") == \"high\") & (Text(\"job\") % \"enginee*\") & (Num(\"age\") > 17)  # codespell:ignore enginee\n",
   "\n",
   "v = VectorQuery(\n",
   "    vector=[0.1, 0.1, 0.5],\n",

docs/user_guide/07_message_history.ipynb (2 additions, 2 deletions)

@@ -11,7 +11,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Large Language Models are inherently stateless and have no knowledge of previous interactions with a user, or even of previous parts of the current conversation. While this may not be noticable when asking simple questions, it becomes a hinderance when engaging in long running conversations that rely on conversational context.\n",
+   "Large Language Models are inherently stateless and have no knowledge of previous interactions with a user, or even of previous parts of the current conversation. While this may not be noticeable when asking simple questions, it becomes a hindrance when engaging in long running conversations that rely on conversational context.\n",
    "\n",
    "The solution to this problem is to append the previous conversation history to each subsequent call to the LLM.\n",
    "\n",
@@ -276,7 +276,7 @@
   "source": [
    "You can adjust the degree of semantic similarity needed to be included in your context.\n",
    "\n",
-   "Setting a distance threshold close to 0.0 will require an exact semantic match, while a distance threshold of 1.0 will include everthing."
+   "Setting a distance threshold close to 0.0 will require an exact semantic match, while a distance threshold of 1.0 will include everything."
   ]
  },
  {

pyproject.toml (3 additions, 0 deletions)

@@ -75,6 +75,7 @@ dev = [
     "types-pyopenssl",
     "testcontainers>=4.3.1,<5",
     "cryptography>=44.0.1 ; python_version > '3.9.1'",
+    "codespell>=2.4.1,<3",
 ]
 docs = [
     "sphinx>=4.4.0",
@@ -118,3 +119,5 @@ asyncio_mode = "auto"
 [tool.mypy]
 warn_unused_configs = true
 ignore_missing_imports = true
+exclude = ["env", "venv", ".venv"]
+

redisvl/extensions/cache/llm/semantic.py (4 additions, 4 deletions)

@@ -385,7 +385,7 @@ def check(
         .. code-block:: python

             response = cache.check(
-                prompt="What is the captial city of France?"
+                prompt="What is the capital city of France?"
             )
         """
         if not any([prompt, vector]):
@@ -476,7 +476,7 @@ async def acheck(
         .. code-block:: python

             response = await cache.acheck(
-                prompt="What is the captial city of France?"
+                prompt="What is the capital city of France?"
             )
         """
         aindex = await self._get_async_index()
@@ -588,7 +588,7 @@ def store(
         .. code-block:: python

             key = cache.store(
-                prompt="What is the captial city of France?",
+                prompt="What is the capital city of France?",
                 response="Paris",
                 metadata={"city": "Paris", "country": "France"}
             )
@@ -656,7 +656,7 @@ async def astore(
         .. code-block:: python

             key = await cache.astore(
-                prompt="What is the captial city of France?",
+                prompt="What is the capital city of France?",
                 response="Paris",
                 metadata={"city": "Paris", "country": "France"}
             )

redisvl/extensions/constants.py (1 addition, 1 deletion)

@@ -1,7 +1,7 @@
 """
 Constants used within the extension classes SemanticCache, BaseMessageHistory,
 MessageHistory, SemanticMessageHistory and SemanticRouter.
-These constants are also used within theses classes corresponding schema.
+These constants are also used within these classes' corresponding schemas.
 """

 # BaseMessageHistory

redisvl/extensions/message_history/base_history.py (1 addition, 1 deletion)

@@ -60,7 +60,7 @@ def get_recent(
         raw: bool = False,
         session_tag: Optional[str] = None,
     ) -> Union[List[str], List[Dict[str, str]]]:
-        """Retreive the recent conversation history in sequential order.
+        """Retrieve the recent conversation history in sequential order.

         Args:
             top_k (int): The number of previous exchanges to return. Default is 5.
