diff --git a/examples/token_classification.ipynb b/examples/token_classification.ipynb
index 3fb038c3..d6ca8b64 100644
--- a/examples/token_classification.ipynb
+++ b/examples/token_classification.ipynb
@@ -22,7 +22,7 @@
   },
   "outputs": [],
   "source": [
-    "#! pip install datasets transformers seqeval"
+    "#! pip install datasets transformers seqeval evaluate"
   ]
  },
  {
@@ -171,7 +171,7 @@
    "id": "W7QYTpxXIrIl"
   },
   "source": [
-    "We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`. "
+    "We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data. This can be easily done with the `load_dataset` function. To get the metric we need to use for evaluation (to compare our model to the benchmark), we will use the [🤗 Evaluate](https://github.com/huggingface/evaluate) library."
   ]
  },
  {
@@ -182,7 +182,8 @@
   },
   "outputs": [],
   "source": [
-    "from datasets import load_dataset, load_metric"
+    "from datasets import load_dataset\n",
+    "import evaluate"
   ]
  },
  {
@@ -1096,7 +1097,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "The last thing to define for our `Trainer` is how to compute the metrics from the predictions. Here we will load the [`seqeval`](https://github.com/chakki-works/seqeval) metric (which is commonly used to evaluate results on the CONLL dataset) via the Datasets library."
+    "The last thing to define for our `Trainer` is how to compute the metrics from the predictions. Here we will load the [`seqeval`](https://github.com/chakki-works/seqeval) metric (which is commonly used to evaluate results on the CONLL dataset) via the Evaluate library."
   ]
  },
  {
@@ -1105,7 +1106,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "metric = load_metric(\"seqeval\")"
+    "metric = evaluate.load(\"seqeval\")"
   ]
 },
 {
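As context for the `evaluate.load("seqeval")` change above, here is a minimal pure-Python sketch of the kind of metric `seqeval` computes: entity-level precision, recall, and F1 from BIO-tagged sequences, scored by exact span match. This is an illustration only, not the `seqeval` library's implementation, and it simplifies some edge cases (e.g. stray `I-` tags are dropped rather than starting a new entity).

```python
# Illustrative sketch of entity-level metrics like those seqeval reports.
# NOT the seqeval implementation: it extracts entity spans from BIO tags
# and computes micro-averaged precision/recall/F1 by exact span match.

def extract_entities(tags):
    """Return a set of (entity_type, start, end) spans from BIO tags."""
    entities = set()
    start, etype = None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last span
        boundary = (
            tag.startswith("B-")
            or tag == "O"
            or (tag.startswith("I-") and etype != tag[2:])
        )
        if boundary:
            if etype is not None:
                entities.add((etype, start, i))
            # A B- tag opens a new span; anything else closes without opening.
            start, etype = (i, tag[2:]) if tag.startswith("B-") else (None, None)
    return entities

def span_f1(true_seqs, pred_seqs):
    """Micro-averaged precision, recall, and F1 over exact entity spans."""
    tp = fp = fn = 0
    for true_tags, pred_tags in zip(true_seqs, pred_seqs):
        gold = extract_entities(true_tags)
        pred = extract_entities(pred_tags)
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

In the notebook itself, the real metric is simply obtained with `metric = evaluate.load("seqeval")` and called via `metric.compute(predictions=..., references=...)`.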