sjmonson
Collaborator

TODO

  • Docs
  • CSV arg string support
  • More validation

Summary

This PR adds control over token prefix cache hit rates in the synthetic data generator. First, it adds an auto-incrementing single-token prefix to ensure we never repeat the same prefix across prompts. Second, it adds controls for sharing one or more fixed prefixes between samples.

Details

1. Ensure every prompt is unique

When generating a prompt, the first token is now taken from an iterator over the tokenizer vocab.
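
As a rough sketch of the idea (not the code from this PR; the function names and vocab handling here are illustrative assumptions), the generator can walk an iterator over the vocabulary and prepend one token per prompt:

from itertools import cycle
from typing import Iterator, List

def unique_prefix_ids(vocab_size: int) -> Iterator[int]:
    # Cycle over all token ids so consecutive prompts never start with the
    # same token (repeats only occur once the vocabulary wraps around).
    return cycle(range(vocab_size))

def build_prompt(prefix_iter: Iterator[int], body_ids: List[int]) -> List[int]:
    # Prepend the next vocab token to the generated prompt body.
    return [next(prefix_iter)] + body_ids

prefix_iter = unique_prefix_ids(vocab_size=32_000)
first = build_prompt(prefix_iter, [101, 102])
second = build_prompt(prefix_iter, [101, 102])
assert first[0] != second[0]  # identical bodies, distinct first tokens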

2. Add configurable prefixes to simulate system prompts or other common token prefixes

Adds a prefix_buckets argument to the SyntheticDatasetConfig. Each bucket consists of a prefix count, a token count, and a bucket weight: the prefix count sets the number of unique prefixes to generate for the bucket, the token count is the length of each prefix in the bucket, and the bucket weight determines the proportion of requests the bucket applies to, relative to the sum of all bucket weights. Here are a few examples:

Here we have one bucket of 32 prefixes, each 2048 tokens long. Since there are 1024 total samples, each prefix will apply to 32 samples. If there is only one bucket, the weight can be omitted, as the bucket applies to 100% of samples.

data:
  prefix_buckets:
    - prefix_tokens: 2048
      prefix_count: 32
  prompt_tokens: 256
  output_tokens: 256
  samples: 1024
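
If the config is built in Python rather than YAML, the first example might look roughly like the sketch below. The keyword arguments mirror the YAML keys shown in this PR; the import path and the exact type expected for a bucket (a plain dict here) are assumptions, so check the actual SyntheticDatasetConfig signature.

# Hypothetical programmatic equivalent of the YAML above; the import path
# and the dict-based bucket representation are assumptions.
from guidellm.dataset import SyntheticDatasetConfig

config = SyntheticDatasetConfig(
    prefix_buckets=[
        {"prefix_tokens": 2048, "prefix_count": 32},
    ],
    prompt_tokens=256,
    output_tokens=256,
    samples=1024,
)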

In this modified version of the first example, 16 of the prefixes are 2048 tokens long while the other 16 are 1024 tokens long; each bucket covers half of the samples.

data:
  prefix_buckets:
    - prefix_tokens: 2048
      prefix_count: 16
      bucket_weight: 50
    - prefix_tokens: 1024
      prefix_count: 16
      bucket_weight: 50
  prompt_tokens: 256
  output_tokens: 256
  samples: 1024

A bucket's prefix_tokens can also be 0 to disable prefixes for those samples. Here is an example where 40% of the samples have a prefix of 2048 tokens while the other 60% have no prefix.

data:
  prefix_buckets:
    - prefix_tokens: 2048
      bucket_weight: 40
    - prefix_tokens: 0
      bucket_weight: 60
  prompt_tokens: 256
  output_tokens: 256
  samples: 1000
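
To make the weight arithmetic concrete, here is a small standalone sketch (not the generator's actual code) of how bucket weights could resolve into per-sample prefix lengths; the bucket fields and the proportional split follow the description above.

import random
from typing import List, Tuple

# Each bucket as (prefix_tokens, prefix_count, bucket_weight); prefix_count is
# unused here because this sketch only tracks prefix lengths, not identities.
Bucket = Tuple[int, int, float]

def assign_prefix_lengths(buckets: List[Bucket], samples: int) -> List[int]:
    # Give each bucket a share of samples proportional to its weight; a bucket
    # with prefix_tokens == 0 produces samples with no prefix.
    total_weight = sum(weight for _, _, weight in buckets)
    lengths: List[int] = []
    for prefix_tokens, _prefix_count, weight in buckets:
        count = round(samples * weight / total_weight)
        lengths.extend([prefix_tokens] * count)
    random.shuffle(lengths)  # interleave buckets across the dataset
    return lengths[:samples]

# Third example above: 40% of 1000 samples get a 2048-token prefix, 60% get none.
lengths = assign_prefix_lengths([(2048, 1, 40.0), (0, 1, 60.0)], samples=1000)
assert lengths.count(2048) == 400 and lengths.count(0) == 600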

Test Plan

  • PR includes unit tests for all synthetic dataset changes (pytest tests/unit/dataset)
  • Scenarios in the Details section can be run against a model server with prefix caching enabled, and the cache hit rate can be confirmed by inspecting the console output.

Related Issues

  • [Feature Request] Consider having groups of queries with multiple system prompts

  • "I certify that all code in this PR is my own, except as noted below."

Use of AI

  • Includes AI-assisted code completion
  • Includes code generated by an AI application
  • Includes AI-generated tests (NOTE: AI written tests should have a docstring that includes ## WRITTEN BY AI ##)
