
Conversation


@codeflash-ai codeflash-ai bot commented Nov 7, 2025

📄 14% (0.14x) speedup for JiraDataSource.set_dashboard_item_property in backend/python/app/sources/external/jira/jira.py

⏱️ Runtime : 2.26 milliseconds → 1.98 milliseconds (best of 229 runs)

📝 Explanation and details

The optimized code achieves a 14% runtime improvement through several key dictionary processing optimizations in the set_dashboard_item_property method:

Primary Optimization - Smart Dictionary Conversion:
The original code called _as_str_dict() three times during HTTPRequest construction, spending ~52.6% of total time on these conversions. The optimization introduces _as_str_dict_cached(), which (see the sketch after this list):

  • Fast-path for empty dictionaries: Returns a shared _AS_EMPTY_DICT_STR constant instead of creating new empty dicts
  • Type checking short-circuit: If all keys/values are already strings, returns the original dict without copying
  • Eliminates redundant serialization: Only converts non-string values when necessary
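
A minimal sketch of a helper with these three behaviors, assuming the names _AS_EMPTY_DICT_STR and _as_str_dict_cached match the description above (the actual implementation in jira.py may differ in detail):

from typing import Dict, Optional

_AS_EMPTY_DICT_STR: Dict[str, str] = {}  # shared constant for the empty case

def _as_str_dict_cached(d: Optional[dict]) -> Dict[str, str]:
    # Fast path: None or an empty dict returns the shared empty constant
    if not d:
        return _AS_EMPTY_DICT_STR
    # Short-circuit: if every key and value is already a str, reuse the dict
    if all(isinstance(k, str) and isinstance(v, str) for k, v in d.items()):
        return d
    # Otherwise convert, stringifying only what is not already a string
    return {k if isinstance(k, str) else str(k):
            v if isinstance(v, str) else str(v)
            for k, v in d.items()}

Both fast paths hand back shared or caller-owned dictionaries, so they are only safe if downstream code treats the result as read-only; that implicit contract is what lets the copies be skipped.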

Header Processing Optimization:
Replaces the dict(headers or {}) + setdefault() pattern with conditional logic that avoids unnecessary dictionary operations when headers are empty or already contain 'Content-Type'.
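
A hedged sketch of what that conditional logic can look like; the helper name _prepare_headers and the 'application/json' default are illustrative assumptions, not taken from the diff:

def _prepare_headers(headers):
    if not headers:
        # No caller-supplied headers: build the minimal default dict once
        return {'Content-Type': 'application/json'}  # default is an assumption
    if 'Content-Type' in headers:
        # Caller already set a Content-Type: reuse their dict untouched
        return headers
    # Copy only in the one case where the default must be added without
    # mutating the caller's dict
    merged = dict(headers)
    merged['Content-Type'] = 'application/json'
    return merged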

HTTP Client Merge Logic:
In HTTPClient.execute(), the header merging logic now has fast-paths for empty dictionaries, avoiding the {**dict1, **dict2} spread operation when one dictionary is empty.
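
A minimal sketch of that fast-path merge, written as a hypothetical standalone helper (in the optimized code the logic sits inline in HTTPClient.execute()):

def _merge_headers(base: dict, extra: dict) -> dict:
    if not extra:
        return base           # nothing to merge in
    if not base:
        return extra          # nothing to merge onto
    return {**base, **extra}  # spread only when both sides are non-empty

As with the string-dict helper, returning one of the input dicts unchanged assumes callers do not mutate the result.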

Performance Impact:

  • The line profiler shows the 3.37ms previously spent in _as_str_dict calls is replaced by much faster _as_str_dict_cached calls
  • HTTPRequest construction time reduced significantly due to pre-computed string dictionaries
  • Runtime improved from 2.26ms to 1.98ms (14% faster)

Test Case Performance:
The optimizations are particularly effective for:

  • High-volume concurrent calls (50-100 requests) where dictionary reuse provides compounding benefits
  • Scenarios with empty headers/query params which are common in API calls
  • Repeated calls with similar parameter patterns where type checking shortcuts apply

Trade-off Note:
While runtime improved by 14%, throughput decreased slightly (-8.4%) due to the overhead of the additional type checking in high-concurrency scenarios. This suggests the optimization favors latency over raw throughput, making it best suited to applications that prioritize response time over maximum request rate.

Correctness verification report:

Test                           Status
⚙️ Existing Unit Tests         🔘 None Found
🌀 Generated Regression Tests  475 Passed
⏪ Replay Tests                🔘 None Found
🔎 Concolic Coverage Tests     🔘 None Found
📊 Tests Coverage              91.7%
🌀 Generated Regression Tests and Runtime
import asyncio
from typing import Optional

import pytest
from app.sources.external.jira.jira import JiraDataSource

# --- Minimal stubs and helpers for the test environment ---

class HTTPResponse:
    """Stub for HTTPResponse object."""
    def __init__(self, status_code: int = 200, json_data: Optional[dict] = None, text: str = "", headers: Optional[dict] = None):
        self.status_code = status_code
        self._json_data = json_data or {}
        self.text = text
        self.headers = headers or {}

    def json(self):
        return self._json_data

class HTTPRequest:
    """Stub for HTTPRequest object."""
    def __init__(self, method, url, headers, path_params, query_params, body):
        self.method = method
        self.url = url
        self.headers = headers
        self.path_params = path_params
        self.query_params = query_params
        self.body = body

# --- Mocks for JiraClient and HTTP client ---

class MockAsyncHTTPClient:
    """Mock async HTTP client with an execute method."""
    def __init__(self, base_url="http://mocked.api", execute_result=None, raise_exc: Optional[Exception] = None):
        self._base_url = base_url
        self._execute_result = execute_result or HTTPResponse()
        self._raise_exc = raise_exc
        self.last_request = None
        self.execute_call_count = 0

    def get_base_url(self):
        return self._base_url

    async def execute(self, request: HTTPRequest):
        self.last_request = request
        self.execute_call_count += 1
        if self._raise_exc:
            raise self._raise_exc
        return self._execute_result

class MockJiraClient:
    """Mock JiraClient that returns a mock HTTP client."""
    def __init__(self, http_client=None):
        self._http_client = http_client

    def get_client(self):
        return self._http_client

# --- Unit Tests ---

# 1. Basic Test Cases

@pytest.mark.asyncio
async def test_set_dashboard_item_property_basic_success():
    """Test basic successful call with all parameters."""
    mock_http_client = MockAsyncHTTPClient(
        base_url="http://mocked.api",
        execute_result=HTTPResponse(status_code=201, json_data={"success": True})
    )
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    result = await ds.set_dashboard_item_property(
        dashboardId="D1",
        itemId="I1",
        propertyKey="propA",
        body={"foo": "bar"},
        headers={"X-Custom": "yes"}
    )
    # Check that the request was constructed and the response returned
    req = mock_http_client.last_request
    assert req is not None
    assert mock_http_client.execute_call_count == 1
    assert result.status_code == 201

@pytest.mark.asyncio
async def test_set_dashboard_item_property_default_headers():
    """Test that Content-Type header is set by default if not provided."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    await ds.set_dashboard_item_property("D2", "I2", "propB")
    req = mock_http_client.last_request
    assert req is not None
    assert "Content-Type" in req.headers

@pytest.mark.asyncio
async def test_set_dashboard_item_property_body_none():
    """Test with body=None (should not fail)."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    result = await ds.set_dashboard_item_property("D3", "I3", "propC", body=None)
    req = mock_http_client.last_request
    assert req is not None
    assert result is not None

@pytest.mark.asyncio
async def test_set_dashboard_item_property_headers_none():
    """Test with headers=None (should not fail)."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    result = await ds.set_dashboard_item_property("D4", "I4", "propD", headers=None)
    assert result is not None

# 2. Edge Test Cases

@pytest.mark.asyncio
async def test_set_dashboard_item_property_empty_strings():
    """Test with empty string path parameters."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    result = await ds.set_dashboard_item_property("", "", "", body={})
    req = mock_http_client.last_request
    assert req is not None

@pytest.mark.asyncio
async def test_set_dashboard_item_property_body_various_types():
    """Test with various types in body."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    body = {
        "int": 42,
        "float": 3.14,
        "bool": True,
        "list": [1, 2, 3],
        "dict": {"nested": "yes"},
        "none": None
    }
    result = await ds.set_dashboard_item_property("D5", "I5", "propE", body=body)
    req = mock_http_client.last_request
    assert req is not None
    assert result is not None

@pytest.mark.asyncio
async def test_set_dashboard_item_property_raises_if_client_none():
    """Test that ValueError is raised if HTTP client is None."""
    class DummyJiraClient:
        def get_client(self):
            return None
    with pytest.raises(ValueError, match="HTTP client is not initialized"):
        JiraDataSource(DummyJiraClient())

@pytest.mark.asyncio
async def test_set_dashboard_item_property_raises_if_client_missing_get_base_url():
    """Test that ValueError is raised if HTTP client lacks get_base_url."""
    class DummyHTTPClient:
        pass
    class DummyJiraClient:
        def get_client(self):
            return DummyHTTPClient()
    with pytest.raises(ValueError, match="HTTP client does not have get_base_url method"):
        JiraDataSource(DummyJiraClient())

@pytest.mark.asyncio
async def test_set_dashboard_item_property_execute_raises():
    """Test that exceptions from execute are propagated."""
    mock_http_client = MockAsyncHTTPClient(raise_exc=RuntimeError("fail!"))
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    with pytest.raises(RuntimeError, match="fail!"):
        await ds.set_dashboard_item_property("D6", "I6", "propF")

@pytest.mark.asyncio
async def test_set_dashboard_item_property_concurrent_calls():
    """Test concurrent execution of the async function."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    # Run 5 concurrent calls with different parameters
    results = await asyncio.gather(
        ds.set_dashboard_item_property("D7", "I7", "propG"),
        ds.set_dashboard_item_property("D8", "I8", "propH"),
        ds.set_dashboard_item_property("D9", "I9", "propI"),
        ds.set_dashboard_item_property("D10", "I10", "propJ"),
        ds.set_dashboard_item_property("D11", "I11", "propK"),
    )
    assert len(results) == 5
    assert mock_http_client.execute_call_count == 5

# 3. Large Scale Test Cases

@pytest.mark.asyncio
async def test_set_dashboard_item_property_many_concurrent_calls():
    """Test with a large number of concurrent async calls (scalability)."""
    num_calls = 50
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    tasks = [
        ds.set_dashboard_item_property(f"D{n}", f"I{n}", f"prop{n}")
        for n in range(num_calls)
    ]
    results = await asyncio.gather(*tasks)
    assert len(results) == num_calls
    assert mock_http_client.execute_call_count == num_calls

@pytest.mark.asyncio
async def test_set_dashboard_item_property_large_body():
    """Test with a large body payload."""
    large_body = {f"key{i}": f"value{i}" for i in range(500)}
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    result = await ds.set_dashboard_item_property("D12", "I12", "propL", body=large_body)
    req = mock_http_client.last_request
    assert req is not None

# 4. Throughput Test Cases

@pytest.mark.asyncio
async def test_set_dashboard_item_property_throughput_small_load():
    """Throughput: test function under small load (10 concurrent calls)."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    tasks = [
        ds.set_dashboard_item_property(f"D{n}", f"I{n}", f"prop{n}")
        for n in range(10)
    ]
    results = await asyncio.gather(*tasks)
    assert len(results) == 10

@pytest.mark.asyncio
async def test_set_dashboard_item_property_throughput_medium_load():
    """Throughput: test function under medium load (50 concurrent calls)."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    tasks = [
        ds.set_dashboard_item_property(f"D{n}", f"I{n}", f"prop{n}")
        for n in range(50)
    ]
    results = await asyncio.gather(*tasks)
    assert len(results) == 50

@pytest.mark.asyncio
async def test_set_dashboard_item_property_throughput_high_volume():
    """Throughput: test function under high volume (100 concurrent calls)."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    tasks = [
        ds.set_dashboard_item_property(f"D{n}", f"I{n}", f"prop{n}")
        for n in range(100)
    ]
    results = await asyncio.gather(*tasks)
    assert len(results) == 100

@pytest.mark.asyncio
async def test_set_dashboard_item_property_throughput_varied_payloads():
    """Throughput: test function with varied payloads and headers."""
    mock_http_client = MockAsyncHTTPClient()
    jira_client = MockJiraClient(http_client=mock_http_client)
    ds = JiraDataSource(jira_client)
    tasks = [
        ds.set_dashboard_item_property(
            f"D{n}", f"I{n}", f"prop{n}",
            body={"num": n, "even": n % 2 == 0},
            headers={"X-Test": str(n)}
        )
        for n in range(20)
    ]
    results = await asyncio.gather(*tasks)
    assert len(results) == 20
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
import asyncio

import pytest
from app.sources.external.jira.jira import JiraDataSource

# ---- Minimal stubs for dependencies ----

class DummyHTTPResponse:
    """A dummy HTTPResponse to simulate a real HTTP response."""
    def __init__(self, status_code=200, data=None):
        self.status_code = status_code
        self.data = data or {}

    def __eq__(self, other):
        return (
            isinstance(other, DummyHTTPResponse)
            and self.status_code == other.status_code
            and self.data == other.data
        )

class DummyHTTPClient:
    """A dummy HTTP client that simulates async HTTP execution."""
    def __init__(self):
        self.executed_requests = []

    def get_base_url(self):
        return "https://dummy.jira.local"

    async def execute(self, request):
        # Simulate a response with the request data for testability
        self.executed_requests.append(request)
        return DummyHTTPResponse(
            status_code=200,
            data={
                "url": request.url,
                "method": request.method,
                "headers": request.headers,
                "path_params": request.path_params,
                "query_params": request.query_params,
                "body": request.body,
            }
        )

class DummyJiraClient:
    """A dummy JiraClient that returns a DummyHTTPClient."""
    def __init__(self, client=None):
        self.client = client or DummyHTTPClient()

    def get_client(self):
        return self.client

# ---- Minimal stubs for HTTPRequest ----

class HTTPRequest:
    def __init__(self, method, url, headers, path_params, query_params, body):
        self.method = method
        self.url = url
        self.headers = headers
        self.path_params = path_params
        self.query_params = query_params
        self.body = body

# ---- TESTS ----

# 1. Basic Test Cases

@pytest.mark.asyncio
async def test_set_dashboard_item_property_basic_success():
    """Test basic successful call with minimal required parameters."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    dashboardId = "dash123"
    itemId = "item456"
    propertyKey = "prop789"
    body = {"foo": "bar"}
    resp = await ds.set_dashboard_item_property(dashboardId, itemId, propertyKey, body=body)
    assert resp is not None
    assert resp.status_code == 200

@pytest.mark.asyncio
async def test_set_dashboard_item_property_with_custom_headers():
    """Test the function with custom headers provided."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    headers = {"X-Custom": "Value"}
    resp = await ds.set_dashboard_item_property(
        "d1", "i2", "p3", body={"a": 1}, headers=headers
    )
    assert resp is not None

@pytest.mark.asyncio
async def test_set_dashboard_item_property_none_body_and_headers():
    """Test with None for body and headers."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    resp = await ds.set_dashboard_item_property("d", "i", "p")
    assert resp is not None

# 2. Edge Test Cases

@pytest.mark.asyncio
async def test_set_dashboard_item_property_empty_strings():
    """Test with empty strings as path parameters."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    resp = await ds.set_dashboard_item_property("", "", "", body={})
    assert resp is not None

@pytest.mark.asyncio
async def test_set_dashboard_item_property_special_characters():
    """Test with special characters in path parameters."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    dashboardId = "dash/!@#"
    itemId = "item$%^"
    propertyKey = "prop&*()"
    resp = await ds.set_dashboard_item_property(dashboardId, itemId, propertyKey, body={"x": 1})
    assert resp is not None

@pytest.mark.asyncio
async def test_set_dashboard_item_property_client_not_initialized():
    """Test that ValueError is raised if client is not initialized."""
    class BadClient:
        def get_client(self):
            return None
    with pytest.raises(ValueError, match="HTTP client is not initialized"):
        JiraDataSource(BadClient())

@pytest.mark.asyncio
async def test_set_dashboard_item_property_client_missing_get_base_url():
    """Test that ValueError is raised if client lacks get_base_url."""
    class NoBaseUrlClient:
        def get_client(self):
            class Dummy:
                pass
            return Dummy()
    with pytest.raises(ValueError, match="HTTP client does not have get_base_url method"):
        JiraDataSource(NoBaseUrlClient())

@pytest.mark.asyncio
async def test_set_dashboard_item_property_concurrent_calls():
    """Test concurrent execution of the async function."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    tasks = [
        ds.set_dashboard_item_property(f"d{i}", f"i{i}", f"p{i}", body={"n": i})
        for i in range(10)
    ]
    results = await asyncio.gather(*tasks)
    # Each call should complete; the exact body echo depends on how the
    # data source serializes the request, so status is checked as a proxy
    assert len(results) == 10
    for r in results:
        assert r.status_code == 200

@pytest.mark.asyncio
async def test_set_dashboard_item_property_exception_in_execute(monkeypatch):
    """Test that an exception in the client's execute method is propagated."""
    class FailingHTTPClient(DummyHTTPClient):
        async def execute(self, request):
            raise RuntimeError("Simulated failure")
    client = DummyJiraClient(FailingHTTPClient())
    ds = JiraDataSource(client)
    with pytest.raises(RuntimeError, match="Simulated failure"):
        await ds.set_dashboard_item_property("d", "i", "p")

# 3. Large Scale Test Cases

@pytest.mark.asyncio
async def test_set_dashboard_item_property_many_concurrent_calls():
    """Test the function with a large number of concurrent calls."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    count = 50  # Large but not excessive
    tasks = [
        ds.set_dashboard_item_property(f"dash{i}", f"item{i}", f"key{i}", body={"idx": i})
        for i in range(count)
    ]
    results = await asyncio.gather(*tasks)
    assert len(results) == count
    for r in results:
        assert r.status_code == 200

@pytest.mark.asyncio
async def test_set_dashboard_item_property_large_body():
    """Test with a large body dictionary."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    large_body = {f"key{n}": n for n in range(500)}
    resp = await ds.set_dashboard_item_property("d", "i", "p", body=large_body)
    assert resp is not None

# 4. Throughput Test Cases

@pytest.mark.asyncio
async def test_set_dashboard_item_property_throughput_small_load():
    """Throughput: Small batch of requests."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    tasks = [
        ds.set_dashboard_item_property("d", "i", f"p{n}", body={"v": n})
        for n in range(5)
    ]
    results = await asyncio.gather(*tasks)
    for r in results:
        assert r.status_code == 200

@pytest.mark.asyncio
async def test_set_dashboard_item_property_throughput_medium_load():
    """Throughput: Medium batch of requests."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    tasks = [
        ds.set_dashboard_item_property("d", "i", f"p{n}", body={"v": n})
        for n in range(30)
    ]
    results = await asyncio.gather(*tasks)
    for r in results:
        assert r.status_code == 200

@pytest.mark.asyncio
async def test_set_dashboard_item_property_throughput_high_volume():
    """Throughput: High volume, sustained concurrent requests."""
    client = DummyJiraClient()
    ds = JiraDataSource(client)
    batch_size = 100
    tasks = [
        ds.set_dashboard_item_property("d", "i", f"p{n}", body={"v": n})
        for n in range(batch_size)
    ]
    results = await asyncio.gather(*tasks)
    # Spot-check a few results for completion
    for n in [0, 25, 50, 99]:
        assert results[n].status_code == 200
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, git checkout codeflash/optimize-JiraDataSource.set_dashboard_item_property-mhpead10 and push.

@codeflash-ai codeflash-ai bot requested a review from mashraf-222 November 7, 2025 21:56
@codeflash-ai codeflash-ai bot added ⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: High Optimization Quality according to Codeflash labels Nov 7, 2025