
Conversation

@strahinjamijajlovic
Contributor

This PR introduces a refactor and improvements to the LLM response parsing system. I've replaced the previous monolithic patching logic with a modular, provider-aware parser structure.
This lets us easily add new LLM providers and handle different response types more robustly, while keeping the logic for each provider self-contained.
Another benefit is that we rely on the assembly to resolve the correct parser, so we no longer have to convert the whole response object to a dictionary.
This would also allow us to have version-specific parsers that deal with quirks of different versions of the same provider.
Note that it does depend on being able to resolve the correct parser from the assembly, though.
To achieve this I've added several new models and helpers. As a result, the parsing logic has been moved and the patching has been simplified.

Key changes include:

LLM Response Parsing Refactor:

  • Introduced a new parser resolver (LLMResponseParserResolver) that delegates result parsing to provider-specific parsers.
  • The resolver supports parsing both synchronous and asynchronous responses; the handling of the latter was the source of the bug that triggered this refactor.
  • This replaces the previous manual reflection-based extraction logic.
  • Added a parser interface (ILLMResponseParser) and initial implementations for OpenAI, RystemOpenAI, and a generic fallback, enabling easy future extension for new providers.
  • Added a new strongly-typed model (ParsedLLMResponseModel with nested TokenUsage) for parsed LLM responses, for type safety and clarity (see the sketch after this list).
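For illustration, here is a minimal sketch of how these pieces could fit together. Only ILLMResponseParser, LLMResponseParserResolver, ParsedLLMResponseModel and TokenUsage are names from this PR; the members, signatures and resolution strategy are assumptions, not the actual implementation.

using System.Collections.Generic;
using System.Linq;

internal interface ILLMResponseParser
{
    // Decides whether this parser can handle a result produced by the given assembly.
    bool CanParse(object result, string assembly);

    // Extracts the relevant fields into the strongly-typed model.
    ParsedLLMResponseModel Parse(object result);
}

internal sealed class ParsedLLMResponseModel
{
    public string Model { get; set; } = string.Empty;
    public TokenUsage Usage { get; set; } = new TokenUsage();

    public sealed class TokenUsage
    {
        public long InputTokens { get; set; }
        public long OutputTokens { get; set; }
    }
}

internal sealed class LLMResponseParserResolver
{
    private readonly IReadOnlyList<ILLMResponseParser> _parsers;

    public LLMResponseParserResolver(IReadOnlyList<ILLMResponseParser> parsers)
        => _parsers = parsers;

    public ParsedLLMResponseModel Parse(object result, string assembly)
    {
        // First parser that claims the result wins; the generic fallback is registered last.
        var parser = _parsers.FirstOrDefault(p => p.CanParse(result, assembly));
        return parser?.Parse(result);
    }
}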

LLM Provider and Sink Configuration:

  • Introduced new models for representing LLM providers, methods, and sinks (LLMProviderEnum, LLMMethod, LLMSink, and LLMSinks).
  • These structures define which methods on which assemblies should be patched and tracked, improving maintainability and clarity (a rough sketch follows below).
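A rough idea of what the sink definitions might look like. The type names (LLMProviderEnum, LLMMethod, LLMSink, LLMSinks) come from this PR; the properties and the registry entry are placeholders for illustration only.

using System.Collections.Generic;

internal enum LLMProviderEnum
{
    Generic,
    OpenAI,
    RystemOpenAI,
}

internal sealed class LLMMethod
{
    // Type and method that should be patched (placeholder property names).
    public string TypeName { get; set; } = string.Empty;
    public string MethodName { get; set; } = string.Empty;
}

internal sealed class LLMSink
{
    public LLMProviderEnum Provider { get; set; }

    // Assembly that hosts the provider's client; also used to pick the matching parser.
    public string Assembly { get; set; } = string.Empty;

    public List<LLMMethod> Methods { get; set; } = new List<LLMMethod>();
}

internal static class LLMSinks
{
    // Central registry of which methods on which assemblies get patched and tracked.
    public static readonly IReadOnlyList<LLMSink> Sinks = new List<LLMSink>
    {
        new LLMSink
        {
            Provider = LLMProviderEnum.RystemOpenAI,
            Assembly = "Rystem.OpenAi", // placeholder value
            Methods = new List<LLMMethod>
            {
                new LLMMethod { TypeName = "SomeChatClient", MethodName = "ExecuteAsync" }, // placeholder
            },
        },
    };
}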

Patcher Logic Improvements:

  • Moved and refactored the LLMPatcher.
  • The new patcher delegates response parsing to the resolver instead of relying on if-else statements over a dictionary representation of the response (see the sketch after this list).
  • Removed the generic object-to-dictionary conversion helper from ReflectionHelper, as it is no longer needed with the new parsing approach.
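To illustrate the flow, a hedged sketch of the delegation, reusing the types from the earlier sketch; the method name and reporting details are assumptions and the actual patcher code in this PR will differ.

internal static class LLMPatcher
{
    private static readonly LLMResponseParserResolver Resolver =
        new LLMResponseParserResolver(new ILLMResponseParser[]
        {
            // provider-specific parsers here, generic fallback last
        });

    // Called after a patched LLM method returns (with the sync result or the awaited task result).
    internal static void OnLLMResponse(object result, string assembly)
    {
        // No object-to-dictionary conversion and no provider if/else chain:
        // the resolver picks whichever parser matches the assembly.
        var parsed = Resolver.Parse(result, assembly);
        if (parsed == null)
            return;

        // Report model name, token usage, etc. to the agent (details omitted).
    }
}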

@strahinjamijajlovic strahinjamijajlovic marked this pull request as ready for review October 22, 2025 20:40
internal sealed class RystemOpenAIResponseParser : BaseResponseParser
{
    public override bool CanParse(object result, string assembly) =>
        assembly.Contains(LLMSinks.Sinks.First(s => s.Provider == LLMProviderEnum.RystemOpenAI).Assembly);

@aikido-pr-checks bot Oct 22, 2025

Complex one-liner combines LINQ lookup and string operation; split into named intermediate steps for readability.
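For reference, the suggested split could look like this (same behavior as the existing one-liner, just with a named intermediate step):

public override bool CanParse(object result, string assembly)
{
    // Resolve the Rystem OpenAI sink once, then compare against its assembly name.
    var rystemSink = LLMSinks.Sinks.First(s => s.Provider == LLMProviderEnum.RystemOpenAI);
    return assembly.Contains(rystemSink.Assembly);
}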

@strahinjamijajlovic (Contributor, Author)

I don't think it's too complex?
But if others agree with this comment, I can break it up.

