codeflash-ai bot commented Nov 7, 2025

📄 30% (0.30x) speedup for `_is_reverse_scale` in `optuna/visualization/_utils.py`

⏱️ Runtime: 1.70 milliseconds → 1.31 milliseconds (best of 165 runs)

📝 Explanation and details

The optimization achieves a **29% speedup** by making two key changes:

1. **Pre-caching the enum value**: `_MINIMIZE = StudyDirection.MINIMIZE` is computed once at import time, eliminating repeated attribute lookups to `StudyDirection.MINIMIZE` on every function call.

2. **Identity comparison instead of equality**: Changed `study.direction == StudyDirection.MINIMIZE` to `study.direction is _MINIMIZE`. The `is` operator performs a fast pointer/identity check rather than invoking the `==` method, which can involve more overhead for enum comparisons.

**Why this works**: Python's `is` operator is one of the fastest comparison operations since it only checks whether two variables reference the same object in memory. Because enum members are singletons, pre-caching the enum value guarantees the identity check behaves exactly like the equality check while avoiding both attribute access overhead and equality method dispatch.
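
For reference, here is a minimal sketch of the optimized pattern. This is a reconstruction rather than the actual diff: the signature and the or-expression are inferred from the generated tests below.

from optuna.study._study_direction import StudyDirection

_MINIMIZE = StudyDirection.MINIMIZE  # cached once at import time


def _is_reverse_scale(study, target):
    # Reconstructed sketch; the real diff is not reproduced in this comment.
    # `is` compares object identity against the cached enum member, skipping
    # the per-call attribute lookup and __eq__ dispatch. Enum members are
    # singletons, so identity agrees with equality for genuine
    # StudyDirection values.
    return target is not None or study.direction is _MINIMIZE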

**Performance impact by test case**: The optimization shows particularly strong gains (40-77% faster) when the `study.direction` check must actually be evaluated, especially for non-MINIMIZE or unusual direction values. When the `or` short-circuits (`target` is not None), gains are smaller (0-10%) because the direction comparison is never reached.
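
To see the per-call gap in isolation, consider a hypothetical micro-benchmark (independent of optuna and not part of this PR) contrasting equality against a cached identity check on an enum member:

import timeit
from enum import Enum


class Direction(Enum):
    # Stand-in enum; any Enum subclass shows the same effect.
    MINIMIZE = 0
    MAXIMIZE = 1


_MIN = Direction.MINIMIZE  # pre-cached, mirroring the optimization
d = Direction.MAXIMIZE

print(timeit.timeit(lambda: d == Direction.MINIMIZE))  # attribute lookup + __eq__
print(timeit.timeit(lambda: d is _MIN))                # identity check only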

**Scale benefits**: The optimization shines in high-frequency scenarios, as shown by the large-scale tests where 1000 iterations see 28-46% improvements. Since this utility function is likely called frequently during optimization runs, these micro-optimizations compound significantly.

Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 10028 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 100.0% |

🌀 Generated Regression Tests and Runtime
from __future__ import annotations

from collections.abc import Callable

# imports
import pytest
from optuna.study import Study
from optuna.study._study_direction import StudyDirection
from optuna.trial import FrozenTrial
from optuna.visualization._utils import _is_reverse_scale


# --- MOCKS for testing ---
class DummyStudy:
    """Minimal mock of optuna.study.Study for testing _is_reverse_scale."""
    def __init__(self, direction):
        self.direction = direction

# --- UNIT TESTS ---

# -------- BASIC TEST CASES --------

def test_minimize_direction_and_none_target_returns_true():
    # Study direction is MINIMIZE, target is None
    study = DummyStudy(StudyDirection.MINIMIZE)
    target = None
    codeflash_output = _is_reverse_scale(study, target) # 631ns -> 516ns (22.3% faster)

def test_maximize_direction_and_none_target_returns_false():
    # Study direction is MAXIMIZE, target is None
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = None
    codeflash_output = _is_reverse_scale(study, target) # 525ns -> 369ns (42.3% faster)

def test_minimize_direction_and_non_none_target_returns_true():
    # Study direction is MINIMIZE, target is not None
    study = DummyStudy(StudyDirection.MINIMIZE)
    target = lambda trial: 42.0
    codeflash_output = _is_reverse_scale(study, target) # 343ns -> 314ns (9.24% faster)

def test_maximize_direction_and_non_none_target_returns_true():
    # Study direction is MAXIMIZE, target is not None
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = lambda trial: 42.0
    codeflash_output = _is_reverse_scale(study, target) # 283ns -> 306ns (7.52% slower)

# -------- EDGE TEST CASES --------

def test_target_is_explicitly_false_returns_true():
    # target is a function, even if function always returns False
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = lambda trial: False
    codeflash_output = _is_reverse_scale(study, target) # 295ns -> 271ns (8.86% faster)

def test_target_is_explicitly_zero_returns_true():
    # target is a function, even if function always returns 0
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = lambda trial: 0
    codeflash_output = _is_reverse_scale(study, target) # 284ns -> 276ns (2.90% faster)

def test_target_is_explicitly_none_function_returns_true():
    # target is a function returning None, but not None itself
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = lambda trial: None
    codeflash_output = _is_reverse_scale(study, target) # 274ns -> 307ns (10.7% slower)

def test_study_direction_is_unusual_value():
    # Study direction is not MINIMIZE or MAXIMIZE (simulate unexpected value)
    class WeirdDirection:
        pass
    study = DummyStudy(WeirdDirection())
    target = None
    # Should only return True if target is not None
    codeflash_output = _is_reverse_scale(study, target) # 695ns -> 454ns (53.1% faster)
    target = lambda trial: 1
    codeflash_output = _is_reverse_scale(study, target) # 208ns -> 211ns (1.42% slower)

def test_study_direction_is_string_minimize():
    # Study direction is string "MINIMIZE" (simulate typo/misuse)
    study = DummyStudy("MINIMIZE")
    target = None
    # Should only return True if target is not None, since direction is not StudyDirection.MINIMIZE
    codeflash_output = _is_reverse_scale(study, target) # 665ns -> 440ns (51.1% faster)
    target = lambda trial: 1
    codeflash_output = _is_reverse_scale(study, target) # 221ns -> 185ns (19.5% faster)

def test_target_is_object_with_call_dunder_returns_true():
    # target is an object with __call__ (not a plain function)
    class CallableObj:
        def __call__(self, trial):
            return 123
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = CallableObj()
    codeflash_output = _is_reverse_scale(study, target) # 307ns -> 306ns (0.327% faster)

def test_target_is_empty_lambda_returns_true():
    # target is a lambda that ignores its input
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = lambda _: 0
    codeflash_output = _is_reverse_scale(study, target) # 288ns -> 300ns (4.00% slower)

def test_target_is_classmethod_returns_true():
    # target is a classmethod
    class TargetClass:
        @classmethod
        def target(cls, trial):
            return 1
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = TargetClass.target
    codeflash_output = _is_reverse_scale(study, target) # 278ns -> 310ns (10.3% slower)

# -------- LARGE SCALE TEST CASES --------

def test_large_number_of_calls_performance_maximize_none_target():
    # Test performance when called many times with MAXIMIZE and None target
    study = DummyStudy(StudyDirection.MAXIMIZE)
    target = None
    for _ in range(1000):
        codeflash_output = _is_reverse_scale(study, target) # 189μs -> 133μs (42.3% faster)

def test_large_number_of_calls_performance_minimize_none_target():
    # Test performance when called many times with MINIMIZE and None target
    study = DummyStudy(StudyDirection.MINIMIZE)
    target = None
    for _ in range(1000):
        codeflash_output = _is_reverse_scale(study, target) # 188μs -> 134μs (40.0% faster)

def test_large_number_of_unique_target_functions():
    # Test with many unique target functions
    study = DummyStudy(StudyDirection.MAXIMIZE)
    for i in range(1000):
        target = lambda trial, val=i: val
        codeflash_output = _is_reverse_scale(study, target) # 116μs -> 114μs (1.04% faster)

def test_large_number_of_studies_with_varied_directions():
    # Test with many studies with alternating directions
    for i in range(1000):
        direction = StudyDirection.MINIMIZE if i % 2 == 0 else StudyDirection.MAXIMIZE
        study = DummyStudy(direction)
        target = None
        expected = direction == StudyDirection.MINIMIZE
        codeflash_output = _is_reverse_scale(study, target) # 196μs -> 145μs (34.5% faster)

def test_large_number_of_studies_and_targets():
    # Test with many studies and targets
    for i in range(1000):
        direction = StudyDirection.MINIMIZE if i % 3 == 0 else StudyDirection.MAXIMIZE
        study = DummyStudy(direction)
        target = (lambda trial: i) if i % 5 == 0 else None
        expected = target is not None or direction == StudyDirection.MINIMIZE
        codeflash_output = _is_reverse_scale(study, target) # 183μs -> 142μs (28.7% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
from __future__ import annotations

from collections.abc import Callable
from types import SimpleNamespace

# imports
import pytest  # used for our unit tests
from optuna.visualization._utils import _is_reverse_scale


# Mock StudyDirection enum for testing
class StudyDirection:
    MINIMIZE = "minimize"
    MAXIMIZE = "maximize"

# Mock FrozenTrial for testing (not used in _is_reverse_scale, but required for signature)
class FrozenTrial:
    pass

# Mock Study for testing
class Study:
    def __init__(self, direction):
        self.direction = direction

# unit tests

# ---------------------------
# BASIC TEST CASES
# ---------------------------

def test_minimize_direction_and_no_target_returns_true():
    # Study direction is MINIMIZE, target is None
    study = Study(StudyDirection.MINIMIZE)
    codeflash_output = _is_reverse_scale(study, None) # 839ns -> 517ns (62.3% faster)

def test_maximize_direction_and_no_target_returns_false():
    # Study direction is MAXIMIZE, target is None
    study = Study(StudyDirection.MAXIMIZE)
    codeflash_output = _is_reverse_scale(study, None) # 619ns -> 421ns (47.0% faster)

def test_minimize_direction_and_target_returns_true():
    # Study direction is MINIMIZE, target is not None
    study = Study(StudyDirection.MINIMIZE)
    target = lambda trial: 42.0
    codeflash_output = _is_reverse_scale(study, target) # 314ns -> 328ns (4.27% slower)

def test_maximize_direction_and_target_returns_true():
    # Study direction is MAXIMIZE, target is not None
    study = Study(StudyDirection.MAXIMIZE)
    target = lambda trial: 42.0
    codeflash_output = _is_reverse_scale(study, target) # 318ns -> 305ns (4.26% faster)

def test_target_is_callable_object_returns_true():
    # Target is a callable object (not a lambda)
    study = Study(StudyDirection.MAXIMIZE)
    class TargetCallable:
        def __call__(self, trial):
            return 1.23
    target = TargetCallable()
    codeflash_output = _is_reverse_scale(study, target) # 339ns -> 312ns (8.65% faster)

# ---------------------------
# EDGE TEST CASES
# ---------------------------

def test_study_direction_is_unusual_string_returns_false():
    # Study direction is an unexpected string, target is None
    study = Study("unexpected_direction")
    codeflash_output = _is_reverse_scale(study, None) # 683ns -> 385ns (77.4% faster)

def test_study_direction_is_none_returns_false():
    # Study direction is None, target is None
    study = Study(None)
    codeflash_output = _is_reverse_scale(study, None) # 630ns -> 424ns (48.6% faster)

def test_study_direction_is_minimize_case_insensitive_returns_false():
    # Study direction is 'MINIMIZE' (uppercase), target is None
    study = Study("MINIMIZE")
    codeflash_output = _is_reverse_scale(study, None) # 615ns -> 370ns (66.2% faster)

def test_target_is_non_callable_object_returns_true():
    # Target is not None, but not callable (should still return True)
    study = Study(StudyDirection.MAXIMIZE)
    target = 12345  # Not a callable, but not None
    codeflash_output = _is_reverse_scale(study, target) # 314ns -> 318ns (1.26% slower)

def test_target_is_empty_list_returns_true():
    # Target is not None, but is an empty list (should still return True)
    study = Study(StudyDirection.MAXIMIZE)
    target = []
    codeflash_output = _is_reverse_scale(study, target) # 264ns -> 307ns (14.0% slower)

def test_study_is_object_with_direction_attribute_returns_true():
    # Study is not a Study instance, but has a direction attribute
    study = SimpleNamespace(direction=StudyDirection.MINIMIZE)
    codeflash_output = _is_reverse_scale(study, None) # 714ns -> 468ns (52.6% faster)

def test_study_is_object_without_direction_attribute_raises_attribute_error():
    # Study does not have a direction attribute, should raise AttributeError
    study = SimpleNamespace()
    with pytest.raises(AttributeError):
        _is_reverse_scale(study, None) # 1.15μs -> 1.11μs (3.05% faster)

def test_target_is_false_returns_true():
    # Target is False (not None), should return True
    study = Study(StudyDirection.MAXIMIZE)
    target = False
    codeflash_output = _is_reverse_scale(study, target) # 350ns -> 331ns (5.74% faster)

def test_target_is_zero_returns_true():
    # Target is 0 (not None), should return True
    study = Study(StudyDirection.MAXIMIZE)
    target = 0
    codeflash_output = _is_reverse_scale(study, target) # 318ns -> 327ns (2.75% slower)

# ---------------------------
# LARGE SCALE TEST CASES
# ---------------------------

def test_large_number_of_studies_minimize_direction_returns_true():
    # Test with many studies, all MINIMIZE direction, target is None
    for _ in range(1000):
        study = Study(StudyDirection.MINIMIZE)
        codeflash_output = _is_reverse_scale(study, None) # 195μs -> 134μs (44.8% faster)

def test_large_number_of_studies_maximize_direction_returns_false():
    # Test with many studies, all MAXIMIZE direction, target is None
    for _ in range(1000):
        study = Study(StudyDirection.MAXIMIZE)
        codeflash_output = _is_reverse_scale(study, None) # 195μs -> 134μs (45.9% faster)

def test_large_number_of_studies_with_target_returns_true():
    # Test with many studies, all directions, target is not None
    for i in range(1000):
        direction = StudyDirection.MINIMIZE if i % 2 == 0 else StudyDirection.MAXIMIZE
        study = Study(direction)
        target = lambda trial: i
        codeflash_output = _is_reverse_scale(study, target) # 115μs -> 114μs (0.470% faster)

def test_large_number_of_varied_targets_returns_true():
    # Test with many studies, target is varied but always not None
    for i in range(1000):
        study = Study(StudyDirection.MAXIMIZE)
        # target alternates between different types, but never None
        target = (lambda trial: i) if i % 3 == 0 else i if i % 3 == 1 else [i]
        codeflash_output = _is_reverse_scale(study, target) # 115μs -> 113μs (0.923% faster)

def test_large_number_of_studies_with_unusual_direction_returns_false():
    # Test with many studies, direction is an unexpected value, target is None
    for i in range(1000):
        study = Study("direction_" + str(i))
        codeflash_output = _is_reverse_scale(study, None) # 195μs -> 134μs (45.2% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, run `git checkout codeflash/optimize-_is_reverse_scale-mho8igqi` and push.


codeflash-ai bot requested a review from mashraf-222 on November 7, 2025 02:26
codeflash-ai bot added the labels ⚡️ codeflash (Optimization PR opened by Codeflash AI) and 🎯 Quality: High (Optimization Quality according to Codeflash) on Nov 7, 2025