Llm patching refactor #181
base: main
Conversation
Aikido.Zen.Core/Patches/LLMs/LLMResultParsers/RystemOpenAIResponseParser.cs
internal sealed class RystemOpenAIResponseParser : BaseResponseParser
{
    public override bool CanParse(object result, string assembly) => assembly.Contains(LLMSinks.Sinks.First(s => s.Provider == LLMProviderEnum.RystemOpenAI).Assembly);
Complex one-liner combines a LINQ lookup and a string operation; split it into named intermediate steps for readability.
I don't think it's too complex?
But if others agree with this comment, I can break it up.
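For illustration, here is roughly what the split version the comment asks for could look like. This is a sketch only: the stand-in sink list, provider enum, and assembly strings below are simplified placeholders so the example compiles on its own, not the PR's actual definitions.

```csharp
using System.Linq;

// Placeholder stand-ins for the PR's real types.
enum LLMProviderEnum { OpenAI, RystemOpenAI }

sealed record LLMSink(LLMProviderEnum Provider, string Assembly);

static class LLMSinks
{
    public static readonly LLMSink[] Sinks =
    {
        new LLMSink(LLMProviderEnum.OpenAI, "OpenAI"),          // placeholder name
        new LLMSink(LLMProviderEnum.RystemOpenAI, "Rystem.OpenAi"), // placeholder name
    };
}

sealed class RystemOpenAIResponseParser
{
    public bool CanParse(object result, string assembly)
    {
        // Named intermediate steps instead of one chained expression.
        var rystemSink = LLMSinks.Sinks.First(s => s.Provider == LLMProviderEnum.RystemOpenAI);
        var rystemAssembly = rystemSink.Assembly;
        return assembly.Contains(rystemAssembly);
    }
}
```

Whether this is worth it is a judgment call; the split mainly buys a place to name the sink and makes a null/empty check easy to add later.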
This PR introduces a refactor and improvements to the LLM response parsing system. I've replaced the previous monolithic patching logic with a modular, provider-aware parser structure.
This allows us to easily add additional LLM providers and gives us more robust handling of different response types, while keeping the logic for each provider self-contained.
Another benefit is that we rely on the assembly name to resolve the correct parser and don't have to convert the whole response object to a dictionary.
This would also allow us to have version specific parsers that deal with quirks of different versions of the same provider.
This does rely on resolving the correct parser from the assembly, though.
To achieve this, I've added several new models and helpers. As a result, the parsing logic has been moved out of the patcher and the patching has been simplified.
Key changes include:
LLM Response Parsing Refactor:
- Added a resolver (LLMResponseParserResolver) that delegates result parsing to provider-specific parsers.
- Added a parser interface (ILLMResponseParser) and initial implementations for OpenAI, RystemOpenAI, and a generic fallback, enabling easy future extension for new providers.
- Added ParsedLLMResponseModel (with nested TokenUsage) for parsed LLM responses, for a bit of type safety and clarity.

LLM Provider and Sink Configuration:
- Added LLMProviderEnum, LLMMethod, LLMSink, and LLMSinks.

Patcher Logic Improvements:
- Simplified LLMPatcher.
- Removed ReflectionHelper, as it is no longer needed with the new parsing approach.
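The resolver-plus-parsers structure described above can be sketched roughly as follows. Only the type names (ILLMResponseParser, LLMResponseParserResolver, ParsedLLMResponseModel, TokenUsage) come from the PR description; the signatures, properties, and placeholder bodies are assumptions for illustration, not the actual implementation.

```csharp
using System.Collections.Generic;
using System.Linq;

// Assumed shape of the parser contract: each parser says whether it
// recognizes a result (by assembly name) and can turn it into a model.
interface ILLMResponseParser
{
    bool CanParse(object result, string assembly);
    ParsedLLMResponseModel Parse(object result);
}

// Typed result model with nested token usage, per the PR description.
sealed class ParsedLLMResponseModel
{
    public TokenUsage Usage { get; init; } = new();

    public sealed class TokenUsage
    {
        public int InputTokens { get; init; }
        public int OutputTokens { get; init; }
    }
}

// Generic fallback: accepts anything and returns an empty model.
sealed class GenericResponseParser : ILLMResponseParser
{
    public bool CanParse(object result, string assembly) => true;
    public ParsedLLMResponseModel Parse(object result) => new();
}

// Resolver that delegates to the first parser claiming the result.
static class LLMResponseParserResolver
{
    static readonly List<ILLMResponseParser> Parsers = new()
    {
        // Provider-specific parsers would be registered before the
        // generic fallback, which always matches.
        new GenericResponseParser(),
    };

    public static ILLMResponseParser Resolve(object result, string assembly) =>
        Parsers.First(p => p.CanParse(result, assembly));
}
```

The main design property is that adding a provider means adding one parser class and registering it ahead of the fallback; the patcher itself never changes.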