base: codebase-analysis-report
Add expression type caching #9
Conversation
Co-authored-by: Copilot <[email protected]>
Pull Request Overview
This PR implements expression type caching to improve performance in the code analysis system. The changes focus on optimizing type inference by avoiding redundant calculations for the same expression nodes.
- Adds an expression type caching mechanism to both CodeAnalyzer classes (see the sketch after this list)
- Modifies CodeGenerator to handle dataclass results in addition to dictionaries
- Refactors type inference methods to use cached results and structured control flow
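As a rough illustration of the caching idea, here is a minimal sketch, not the PR's exact code: `_expr_type_cache` and `_infer_expression_type` appear in the diff, while `_infer_uncached` is a hypothetical helper standing in for the real inference logic.

```python
import ast

class CodeAnalyzer:
    def __init__(self):
        # Maps id(ast_node) -> inferred type string, e.g. "int" or "std::map<int, int>"
        self._expr_type_cache = {}

    def _infer_expression_type(self, node):
        # Return the cached result if this exact AST node was seen before.
        cached = self._expr_type_cache.get(id(node))
        if cached is not None:
            return cached
        result = self._infer_uncached(node)
        self._expr_type_cache[id(node)] = result
        return result

    def _infer_uncached(self, node):
        # Placeholder inference: constants only; the real code handles many node kinds.
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return "int"
        return "auto"
```

One caveat of keying on `id(node)`: it is only safe while the analyzer holds a reference to the AST, since ids can be reused once nodes are garbage-collected.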
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| src/analyzer/code_analyzer.py | Implements expression type caching and refactors `_infer_expression_type` with improved control flow |
| src/analyzer/code_analyzer_fixed.py | Adds a similar caching mechanism with more comprehensive type inference logic |
| src/converter/code_generator.py | Adds dataclass compatibility by converting objects to dictionaries when needed (see the sketch below) |
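The code_generator.py change presumably normalizes analysis results before use. A minimal sketch of such a compatibility shim, assuming a helper name of our own choosing (`_as_mapping` is hypothetical):

```python
from dataclasses import asdict, is_dataclass

def _as_mapping(result):
    # Accept both the legacy dict results and the new dataclass results.
    if is_dataclass(result) and not isinstance(result, type):
        return asdict(result)
    if isinstance(result, dict):
        return result
    raise TypeError(f"unsupported analysis result: {type(result).__name__}")
```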
```python
            self._expr_type_cache[id(node)] = result
            return result
        elif type_info.startswith('std::map<'):
            # Return the value type from std::map<K, V>
            parts = type_info[9:-1].split(', ')
            if len(parts) > 1:
                result = parts[1]
                self._expr_type_cache[id(node)] = result
                return result
        elif type_info.startswith('std::tuple<'):
            # For tuples, we would need to know which index is being accessed;
            # default to the first element type for now
            parts = type_info[11:-1].split(', ')
            if parts:
                result = parts[0]
                self._expr_type_cache[id(node)] = result
                return result
```
Caching logic is inconsistent within the ast.Subscript handling. Some branches manually cache and return early, while others rely on the general caching at the end of the method. This creates unnecessary code duplication and potential maintenance issues.
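One way to address this, sketched below rather than taken from the PR, is to compute `result` in each branch and cache at a single exit point, so no branch caches manually:

```python
# Inside the ast.Subscript handling; `result` stays None when nothing matches.
result = None
if type_info.startswith('std::map<'):
    # Value type from std::map<K, V>
    parts = type_info[9:-1].split(', ')
    if len(parts) > 1:
        result = parts[1]
elif type_info.startswith('std::tuple<'):
    # Default to the tuple's first element type for now
    parts = type_info[11:-1].split(', ')
    if parts:
        result = parts[0]
if result is not None:
    self._expr_type_cache[id(node)] = result
    return result
```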
…g, performance optimizations
Co-authored-by: CrazyDubya <[email protected]>
…slve them all... test code at end, make sure functional (#12)
* Initial plan
* Implement PRs #4, #6, #9, #10: Math functions, comprehensions, caching, performance optimizations
Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: CrazyDubya <[email protected]>
Summary
- Add expression type caching in `CodeAnalyzer` to avoid repeated work
- Update `CodeGenerator` to accept dataclass results

Testing
- `pytest -q`
https://chatgpt.com/codex/tasks/task_e_684a55e5d3248332bb4cf092accce2c3