Conversation

@manototh (Collaborator) commented Nov 17, 2025:

Based on #465

@manototh self-assigned this Nov 17, 2025
@manototh changed the title from "Mano/evals" to "Add evals to AI engineering" Nov 17, 2025
@c-ehrlich (Contributor) left a comment:
Some big picture thoughts:

  • I appreciate why we have the "Create/Measure/Observe/Iterate" workflow, but it feels strange that there is no page called "Evals", IMO.
  • Obviously not part of this PR, but I would LOVE a video at the top of the page


The `Eval` function provides a simple, declarative way to define a test suite for your capability directly in your codebase.

The key parameters of the `Eval` function are shown in the snippet below:
A contributor commented:

note to self: need to better document configFlags

// Define the evaluation
Eval('spam-classification', {
  // Specify which flags this eval uses
  configFlags: pickFlags('ticketClassification'),
  // ... (snippet truncated in the diff)
});
A contributor commented: This is only defined / explained further down the page. I understand why, and don't really have a better solution, but it still feels weird.
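For orientation, here is a hypothetical fuller sketch of an Eval definition. Only Eval, configFlags, and pickFlags appear in the diff above; the data, task, and scorers parameters, and the classifyTicket and exactMatch helpers, are assumptions modeled on common eval-framework shapes, not the documented API:

// Hypothetical sketch: only Eval, configFlags, and pickFlags come from the
// diff above; everything else is an assumed, eval-framework-style shape.
Eval('spam-classification', {
  // Pin the feature flags this eval runs against (from the snippet above)
  configFlags: pickFlags('ticketClassification'),
  // Assumed: test cases pairing an input with its expected label
  data: () => [
    { input: 'WIN A FREE IPHONE!!! CLICK NOW', expected: 'spam' },
    { input: 'Hi team, attached is the Q3 report.', expected: 'not_spam' },
  ],
  // Assumed: the capability under test, mapping an input to a classification
  task: async (input) => classifyTicket(input),
  // Assumed: scorers comparing the task output against the expected label
  scorers: [exactMatch],
});

If the shape holds, this is what the quoted docs text means by "declarative": the suite lives in the codebase next to the capability it tests.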

@c-ehrlich (Contributor) commented:

Closing in favor of #473, feel free to re-open if that's wrong.

@c-ehrlich closed this Nov 21, 2025