In Chapter 8 we used "from datasets import load_metric", but in the latest version of the datasets library, load_metric has been removed and you now have to use the evaluate library instead.
So replace the mentioned line with:
import evaluate
accuracy_score = evaluate.load('accuracy')
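For reference, here is a minimal sketch of how the loaded metric plugs into a Trainer. The compute_metrics signature is the standard 🤗 Trainer one; the argmax over logits assumes a classification head, which is an assumption about the notebook's setup:

```python
import numpy as np
import evaluate

accuracy_score = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred bundles the model's raw logits and the true labels.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy_score.compute(predictions=predictions, references=labels)
```

Passing this function as `compute_metrics=compute_metrics` to the Trainer keeps the rest of the chapter's code unchanged.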
Hello. This is unrelated to your issue, but I've got a problem with a section of Chapter 4. Here's the link to the Kaggle notebook: https://www.kaggle.com/code/immanuelibekwe/fork-of-nlp-chapter-4
trainer.train() runs without the GPU or CPU showing any indication of usage.
Also, when I run the code below, the CPU usage indicator shows activity instead of the GPU. What could be the cause?
import torch
@Emmanuel-Ibekwe Are you sure you have a GPU available?
If so, check the following: the environment may not be set up correctly. Perhaps CUDA isn't properly installed, or the installed PyTorch is the CPU-only build. Check whether torch.cuda.is_available() returns True; if it doesn't, you need to install the CUDA-compatible PyTorch build. A quick diagnostic is sketched below.
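A minimal diagnostic sketch, assuming a standard PyTorch install. Note that on Kaggle you also have to enable a GPU accelerator in the notebook settings, otherwise no GPU will be visible at all:

```python
import torch

# Confirm PyTorch can see a CUDA device at all.
print(torch.cuda.is_available())   # should print True on a GPU runtime
print(torch.cuda.device_count())   # number of visible GPUs

if torch.cuda.is_available():
    # Name of the first GPU, e.g. a P100 or T4 on Kaggle.
    print(torch.cuda.get_device_name(0))
```

If this prints True, the Trainer picks up the GPU automatically; you can check which device it selected with trainer.args.device.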