ED batch processing #102

Description

@mickvanhulst

ED mini-batches currently consist of a single document each. As a consequence, GPU utilization is limited, since the number of mentions per document varies and is often small. A potential line of improvement is to:

  1. Introduce an additional dimension for the number of documents processed per mini-batch. A mini-batch would then have the dimensions (n_documents, n_mentions, n_features).
  2. This requires the dimensions to align across documents, meaning that n_mentions must be padded to the same length within a batch. We need to investigate roughly how much padding this would require and what the variance in the number of mentions per document is.
  3. During training, it is essential that batches remain randomized, so grouping documents by their number of mentions is assumed to be suboptimal there. During inference, however, this is no longer an issue. As such, if our goal is to improve inference (which I believe it is), we can group documents by their number of mentions to reduce the amount of padding required (see the sketch below).
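A minimal PyTorch sketch of what points 1–3 could look like. The names `pad_documents`, `padding_overhead`, and `bucket_by_mentions` are hypothetical, and each document is assumed to already be a `(n_mentions, n_features)` float tensor:

```python
import torch

def pad_documents(docs):
    """Stack per-document mention tensors of shape (n_mentions, n_features)
    into a padded batch of shape (n_documents, max_mentions, n_features),
    plus a boolean mask marking the real (non-padded) mentions."""
    max_mentions = max(d.size(0) for d in docs)
    n_features = docs[0].size(1)
    batch = docs[0].new_zeros(len(docs), max_mentions, n_features)
    mask = torch.zeros(len(docs), max_mentions, dtype=torch.bool)
    for i, d in enumerate(docs):
        batch[i, : d.size(0)] = d
        mask[i, : d.size(0)] = True
    return batch, mask

def padding_overhead(mention_counts):
    """Fraction of slots in a padded batch that would be padding, given
    the per-document mention counts (the quantity to measure in point 2)."""
    return 1.0 - sum(mention_counts) / (len(mention_counts) * max(mention_counts))

def bucket_by_mentions(docs, batch_size):
    """Inference-only batching (point 3): sort documents by mention count
    so each mini-batch holds documents of similar length, minimizing
    padding. Yields (original_indices, padded_batch, mask) per batch."""
    order = sorted(range(len(docs)), key=lambda i: docs[i].size(0))
    for start in range(0, len(order), batch_size):
        idx = order[start : start + batch_size]
        batch, mask = pad_documents([docs[i] for i in idx])
        yield idx, batch, mask
```

During training, documents would still be shuffled before batching (accepting the extra padding); the bucketed variant would only be used at inference time, with predictions scattered back to their original order via the returned indices.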

Related to #90

Labels

enhancement (New feature or request)