@In48semenov (Collaborator)

Implemented the ranking metric PFound@K:

$$pFound@K = \sum_{i=1}^{K} pLook[i]\ pRel[i]$$

$$pLook[1] = 1$$

$$pLook[i] = pLook[i-1]\ (1 - pRel[i-1])\ (1 - pBreak)$$

$$pBreak = 0.15$$
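For reference, a minimal sketch of the per-user computation implied by these formulas (the function name and the array-based input are illustrative only, not this PR's actual API; how `pRel` is derived from the `Score` column is left to the metric implementation):

```python
import numpy as np

P_BREAK = 0.15

def p_found_at_k(p_rel, k):
    """pFound@K for one user; p_rel is ordered by rank (p_rel[0] is rank 1)."""
    p_rel = np.asarray(p_rel, dtype=float)[:k]
    p_look = 1.0  # pLook[1] = 1
    total = 0.0
    for rel in p_rel:
        total += p_look * rel                    # pLook[i] * pRel[i]
        p_look *= (1.0 - rel) * (1.0 - P_BREAK)  # pLook[i+1]
    return total

# e.g. p_found_at_k([0.9, 0.0, 0.5], k=3) == 0.9 + 0.085 * 0.0 + 0.07225 * 0.5
```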

  1. Implemented the calculation of the metric, both per user and aggregated across all users.
  2. Calculating this metric requires the `Score` column, so I added a check for the presence of this column to the `merge_reco` function.
  3. Each term of this metric depends on the previous rank, so I added a static method that fills missing ranks with zero scores. For example, if a user has recommendations only at ranks 1 and 5, calculating PFound@6 requires scores at positions 2, 3, and 4 so that pLook[5] is computed correctly (see the sketch after this list).
  4. Wrote tests for the PFound metric.
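A hedged sketch of the gap-filling idea from item 3 (the function name and column labels are illustrative; in the PR this is a static method of the metric class, and the actual RecTools column constants may differ):

```python
import pandas as pd

def fill_missing_ranks(user_reco: pd.DataFrame, k: int) -> pd.DataFrame:
    """Add rows with zero Score for ranks 1..k that are absent,
    so pLook can be computed over a contiguous rank range."""
    full = pd.DataFrame({"Rank": range(1, k + 1)})
    filled = full.merge(user_reco, on="Rank", how="left")
    filled["Score"] = filled["Score"].fillna(0.0)
    return filled

# Example from item 3: only ranks 1 and 5 are present and K = 6,
# so ranks 2, 3, 4 (and 6) get Score 0 and pLook[5] comes out correctly.
print(fill_missing_ranks(pd.DataFrame({"Rank": [1, 5], "Score": [0.9, 0.4]}), k=6))
```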

…dition for column `Score`, and write test for new metric.
In48semenov marked this pull request as draft May 8, 2023 19:32
In48semenov marked this pull request as ready for review May 8, 2023 19:33