Conversation

@jkrauss82

Batch processing speeds things up somewhat when working with larger batch sizes, because we no longer create a new ONNX inference session for each image and we read the CSV tags only once per batch.
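
Roughly the idea, as a minimal sketch (not the actual PR code): the session and the tag list are created once and reused for every image in the batch. The model path, CSV path, and `tag_batch` helper are made up for illustration, assuming the standard onnxruntime API and a WD14-style tags CSV with a header row and the tag name in the second column:

```python
import csv

import numpy as np
import onnxruntime as ort

# Hypothetical paths; the real filenames depend on the tagger setup.
MODEL_PATH = "wd-v1-4-moat-tagger-v2.onnx"
TAGS_PATH = "selected_tags.csv"

# Create the inference session once per batch, not once per image.
session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Read the tag list once per batch as well.
with open(TAGS_PATH, newline="") as f:
    tags = [row[1] for row in csv.reader(f)][1:]  # skip the header row

def tag_batch(images):
    """Run the tagger over a list of preprocessed float32 image arrays."""
    results = []
    for img in images:
        # The session is reused; only the per-image forward pass remains.
        probs = session.run(None, {input_name: img[np.newaxis, ...]})[0][0]
        results.append({t: float(p) for t, p in zip(tags, probs)})
    return results
```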

The model itself does not use batched inference: the available ONNX models appear to have been exported with a fixed batch size of 1, or I have not figured out how to pass the tensors to the input correctly. I looked at the code in the kohya scripts, but retrieving the supported batch size the same way has not worked inside the ComfyUI tagger (though that might be down to how I pass the tensors).
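
For what it's worth, one way to check whether a given model was exported with a fixed batch size is to inspect the batch dimension of its input: onnxruntime reports symbolic (dynamic) dims as strings and fixed dims as literal integers. A small sketch (the model filename is made up):

```python
import onnxruntime as ort

session = ort.InferenceSession("wd-v1-4-moat-tagger-v2.onnx",
                               providers=["CPUExecutionProvider"])
batch_dim = session.get_inputs()[0].shape[0]

# Symbolic dims come back as strings (e.g. "batch_size" or "N");
# a fixed export reports the literal integer 1.
if isinstance(batch_dim, str):
    print(f"dynamic batch dimension: {batch_dim!r}")
else:
    print(f"fixed batch size: {batch_dim}")
```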

Even without batched inference in the model itself, we still see a speedup of about 18-20% in my tests with a batch size of 6.
