From 2b773eb4c85a074bcef9f0103844f6d94be407e7 Mon Sep 17 00:00:00 2001
From: Olivier RISSER-MAROIX
Date: Mon, 17 Jul 2023 11:53:43 +0200
Subject: [PATCH] Adding paper on Prompt Performance Prediction

Adding new paper on the novel task of **Prompt Performance Prediction** by
Bizzozzero, et al. (2023/06)
---
 README.md | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c779249..1f6c463 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 # PromptPapers
 
-![](https://img.shields.io/github/last-commit/thunlp/PromptPapers?color=green) ![](https://img.shields.io/badge/PaperNumber-65-brightgreen) ![](https://img.shields.io/badge/PRs-Welcome-red)
+![](https://img.shields.io/github/last-commit/thunlp/PromptPapers?color=green) ![](https://img.shields.io/badge/PaperNumber-87-brightgreen) ![](https://img.shields.io/badge/PRs-Welcome-red)
 
 We have released an open-source prompt-learning toolkit, check out **[OpenPrompt](https://github.com/thunlp/OpenPrompt)!**
 
@@ -281,6 +281,12 @@ What Makes In-Context Learning Work?.** Arxiv 2022. ![](https://img.shields.io/b
 
     *Fábio Perez, Ian Ribeiro* [[pdf](https://arxiv.org/abs/2211.09527)] [[project](https://github.com/agencyenterprise/PromptInject)], 2022.11
 
+17. **Prompt Performance Prediction for Generative IR.** Preprint. ![](https://img.shields.io/badge/PromptPerformancePrediction-blue)
+
+    *Nicolas Bizzozzero, Ihab Bendidi, Olivier Risser-Maroix* [[pdf](https://arxiv.org/abs/2306.08915)], 2023.6
+
+
+
 ### Improvements
 This section contains the improvement of the basic prompt tuning methods, include but not limited to using additional resources to improving the performances, making up the shortcomings of previous work or conducting prompt tuning in unsual ways.
 1. **Calibrate Before Use: Improving Few-Shot Performance of Language Models.** Preprint. ![](https://img.shields.io/badge/Calibration-green)