
Conversation

@bai-uipath (Collaborator) commented Oct 17, 2025

This PR adds a file-based caching system for both LLM and input mocker responses used in evals. The hash of the prompt and model parameters is used as the cache key, the LLM response is the cached value, and entries are stored in a hierarchical folder structure under .uipath/eval_cache.
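For context, a rough sketch of the scheme the description implies; the names below (CACHE_ROOT, compute_cache_key, cache_path) and the exact folder layout are illustrative assumptions, not the PR's actual code:

```python
import hashlib
import json
from pathlib import Path
from typing import Any, Dict

CACHE_ROOT = Path.cwd() / ".uipath" / "eval_cache"


def compute_cache_key(prompt: str, model_params: Dict[str, Any]) -> str:
    # Hash the prompt together with the model parameters so that any change
    # to either one produces a different cache entry.
    payload = json.dumps(
        {"prompt": prompt, "model_params": model_params},
        sort_keys=True,
        default=str,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def cache_path(eval_set_id: str, eval_item_id: str, cache_key: str) -> Path:
    # Hierarchical layout, e.g. .uipath/eval_cache/<eval_set_id>/<eval_item_id>/<cache_key>.json
    return CACHE_ROOT / eval_set_id / eval_item_id / f"{cache_key}.json"
```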

@github-actions bot added the test:uipath-langchain (triggers tests in the uipath-langchain-python repository) and test:uipath-llamaindex (triggers tests in the uipath-llamaindex-python repository) labels Oct 17, 2025
@bai-uipath force-pushed the bai/caching-for-mocks branch from 2d3c169 to abcebb4 on October 17, 2025 20:46
@bai-uipath marked this pull request as ready for review October 17, 2025 20:46
@bai-uipath requested a review from akshaylive October 17, 2025 20:49
@bai-uipath force-pushed the bai/caching-for-mocks branch from abcebb4 to 2a843b4 on October 17, 2025 21:00
```python
_CACHE_DIR = Path.cwd() / ".uipath" / "eval_cache"

@staticmethod
def _compute_cache_key(cache_key_data: Dict[str, Any]) -> str:
```
Collaborator commented:
Let's make all of these non-static. The instance method should maintain self.cache_dir
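A minimal sketch of the suggested shape, assuming a wrapper class (the class name and constructor signature here are hypothetical):

```python
import hashlib
import json
from pathlib import Path
from typing import Any, Dict, Optional


class EvalResponseCache:
    def __init__(self, cache_dir: Optional[Path] = None):
        # Instance state replaces the class-level _CACHE_DIR constant.
        self.cache_dir = cache_dir or Path.cwd() / ".uipath" / "eval_cache"

    def _compute_cache_key(self, cache_key_data: Dict[str, Any]) -> str:
        # No longer a @staticmethod, so helpers on the same instance can
        # combine the resulting key with self.cache_dir when building paths.
        serialized = json.dumps(cache_key_data, sort_keys=True, default=str)
        return hashlib.sha256(serialized.encode("utf-8")).hexdigest()
```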

```python
eval_set_id: str,
eval_item_id: str,
cache_key: str,
function_name: Optional[str] = None,
```
Collaborator commented:
I'd suggest making this not optional. In input mocking, we can use function_name="get_mock_input" or something
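For illustration, a hedged sketch of the non-optional version; the helper name and exact path layout are assumptions, since the full signature is not visible in the hunk above:

```python
from pathlib import Path


def get_cache_file_path(
    cache_dir: Path,
    eval_set_id: str,
    eval_item_id: str,
    cache_key: str,
    function_name: str,  # required, rather than Optional[str] = None
) -> Path:
    # e.g. .uipath/eval_cache/<eval_set_id>/<eval_item_id>/<function_name>/<cache_key>.json
    return cache_dir / eval_set_id / eval_item_id / function_name / f"{cache_key}.json"


# Input mocking would then pass an explicit name, e.g.:
# path = get_cache_file_path(cache_dir, eval_set_id, eval_item_id, key, function_name="get_mock_input")
```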

```python
) -> str:
    """Generate the LLM input mocking prompt."""
    current_datetime = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
    current_date = datetime.utcnow().strftime("%Y-%m-%d")
```
Collaborator commented:
Would suggest avoiding this change. Instead, I'd suggest adding:

```python
prompt_generation_args = {
    "input_schema": json.dumps(input_schema, indent=2),
    ...
}
prompt = get_input_mocking_prompt(**prompt_generation_args)
cache_key_data = {
    "response_format": response_format,
    "completion_kwargs": completion_kwargs,
    "prompt_generation_args": prompt_generation_args,
}
```
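Presumably the point is that every input used to build the prompt also feeds the cache key, so a cached response is only reused when the prompt it was generated for would be identical.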
