
Add expression type caching #9


Open · wants to merge 7 commits into base: codebase-analysis-report

Conversation

CrazyDubya (Owner)

Summary

  • Cache inferred expression types in CodeAnalyzer to avoid repeated work (see the sketch below)
  • Clear the caches at the start of each analysis run
  • Allow CodeGenerator to accept dataclass results
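
The caching approach keys the cache on the AST node's identity and resets it before each run, so stale entries cannot leak between analyses. The following is a minimal sketch under assumptions: `_expr_type_cache` is the attribute name visible in the diff further down, while `analyze` and `_infer_uncached` are hypothetical names standing in for the real entry point and inference body.

```python
import ast

class CodeAnalyzer:
    def __init__(self) -> None:
        # Maps id(node) -> inferred type string (e.g. "std::map<int, int>").
        self._expr_type_cache: dict[int, str] = {}

    def analyze(self, tree: ast.AST) -> None:
        # Clear at the start of analysis: id() values can be reused once old
        # AST nodes are garbage-collected, so stale entries are unsafe.
        self._expr_type_cache.clear()
        for node in ast.walk(tree):
            if isinstance(node, ast.expr):
                self._infer_expression_type(node)

    def _infer_expression_type(self, node: ast.expr) -> str:
        cached = self._expr_type_cache.get(id(node))
        if cached is not None:
            return cached  # repeat visits cost a single dict lookup
        result = self._infer_uncached(node)
        self._expr_type_cache[id(node)] = result
        return result

    def _infer_uncached(self, node: ast.expr) -> str:
        # Hypothetical stand-in for the real inference logic.
        return type(node).__name__
```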

Testing

  • pytest -q

https://chatgpt.com/codex/tasks/task_e_684a55e5d3248332bb4cf092accce2c3

Copilot: this comment was marked as outdated.

CrazyDubya requested a review from Copilot on July 23, 2025 at 15:27

Copilot AI left a comment


Pull Request Overview

This PR implements expression type caching to improve performance in the code analysis system. The changes focus on optimizing type inference by avoiding redundant calculations for the same expression nodes.

  • Adds expression type caching mechanism to both CodeAnalyzer classes
  • Modifies CodeGenerator to handle dataclass results in addition to dictionaries
  • Refactors type inference methods to use cached results and structured control flow

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

Reviewed files:

  • src/analyzer/code_analyzer.py: implements expression type caching and refactors _infer_expression_type with improved control flow
  • src/analyzer/code_analyzer_fixed.py: adds a similar caching mechanism with more comprehensive type inference logic
  • src/converter/code_generator.py: adds dataclass compatibility by converting objects to dictionaries when needed (see the sketch below)
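
The dataclass change in src/converter/code_generator.py can be pictured as a normalization step at the generator's boundary. The sketch below is an assumption about the shape of that change, not code from the PR: the method names `generate` and `_emit` are illustrative, and the use of `dataclasses.asdict` is one plausible way to do the conversion.

```python
from dataclasses import asdict, is_dataclass
from typing import Any

class CodeGenerator:
    def generate(self, analysis_result: Any) -> str:
        # Accept either the legacy dict payload or a dataclass instance.
        # asdict() recursively converts nested dataclasses, so the rest of
        # the generator can keep using plain dict access.
        if is_dataclass(analysis_result) and not isinstance(analysis_result, type):
            analysis_result = asdict(analysis_result)
        return self._emit(analysis_result)

    def _emit(self, result: dict) -> str:
        # Hypothetical stand-in for the existing dict-based generation logic.
        return "\n".join(f"// {key}" for key in result)
```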

Comment on lines +596 to +609
```diff
                 self._expr_type_cache[id(node)] = result
                 return result
             elif type_info.startswith('std::map<'):
                 # Return value type from std::map<K, V>
                 parts = type_info[9:-1].split(', ')
                 if len(parts) > 1:
-                    return parts[1]
+                    result = parts[1]
+                    self._expr_type_cache[id(node)] = result
+                    return result
             elif type_info.startswith('std::tuple<'):
                 # For tuples, would need to know which index is being accessed
                 # Default to first type for now
                 parts = type_info[11:-1].split(', ')
                 if parts:
-                    return parts[0]
+                    result = parts[0]
+                    self._expr_type_cache[id(node)] = result
+                    return result
```

Copilot AI Jul 23, 2025


Caching logic is inconsistent within the ast.Subscript handling. Some branches manually cache and return early, while others rely on the general caching at the end of the method. This creates unnecessary code duplication and potential maintenance issues.
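
A common way to resolve the inconsistency flagged here is to split inference into a thin cached wrapper around an uncached body, so caching happens in exactly one place. This is a sketch of one possible refactor, not the PR's actual fix; `_infer_expression_type_uncached` is a hypothetical name for the split-out body.

```python
class CodeAnalyzer:
    # ... __init__ sets self._expr_type_cache = {} as in the PR ...

    def _infer_expression_type(self, node):
        cached = self._expr_type_cache.get(id(node))
        if cached is not None:
            return cached
        # Every branch of the uncached body simply returns its result;
        # caching happens exactly once here, so no branch can forget it
        # or duplicate the bookkeeping.
        result = self._infer_expression_type_uncached(node)
        self._expr_type_cache[id(node)] = result
        return result

    def _infer_expression_type_uncached(self, node):
        # Hypothetical split-out body holding the existing branch logic
        # (the ast.Subscript handling for std::map, std::tuple, and so on).
        ...
```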


Copilot AI added a commit that referenced this pull request on Jul 23, 2025:
Implement PRs #4, #6, #9, #10: Math functions, comprehensions, caching, performance optimizations

Co-authored-by: CrazyDubya <[email protected]>
CrazyDubya added a commit that referenced this pull request Jul 23, 2025
…slve them all... test code at end make sure fucntional (#12)

* Initial plan

* Implement PRs #4, #6, #9, #10: Math functions, comprehensions, caching, performance optimizations

Co-authored-by: CrazyDubya <[email protected]>

---------

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: CrazyDubya <[email protected]>