
Commit 87aa857

roberta-base-1B-1-finetuned-squadv1 model card (huggingface#5515)
1 parent c7d96b6 commit 87aa857

File tree

1 file changed: +88 −0
  • model_cards/mrm8488/roberta-base-1B-1-finetuned-squadv1

---
language: english
---

# RoBERTa-base (1B-1) + SQuAD v1 ❓

[roberta-base-1B-1](https://huggingface.co/nyu-mll/roberta-base-1B-1) fine-tuned on the [SQuAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for the **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

RoBERTa Pretrained on Smaller Datasets

[NYU Machine Learning for Language](https://huggingface.co/nyu-mll) pretrained RoBERTa on smaller datasets (1M, 10M, 100M and 1B tokens). They released the 3 models with the lowest perplexity for each pretraining data size, out of 25 runs (10 runs in the case of 1B tokens). The pretraining data reproduces that of BERT: English Wikipedia combined with a reproduction of BookCorpus built from Smashwords texts, in a ratio of approximately 3:1.

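Not part of the original card, but for context, here is a minimal sketch (assuming a recent `transformers` install) of probing the base checkpoint this model starts from, via the fill-mask pipeline:

```python
# Minimal sketch: query the 1B-token pretrained base model directly from the Hub.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-base-1B-1")

# RoBERTa-style models use <mask> as the mask token.
for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```
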
## Details of the downstream task (Q&A) - Dataset 📚

**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text (a span) from the corresponding reading passage, or the question might be unanswerable.

SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.

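As a quick illustration (not from the original card), the same data can be pulled with the Hugging Face `datasets` library; the `"squad"` dataset on the Hub corresponds to SQuAD v1.1:

```python
# Minimal sketch (assumes the `datasets` package is installed).
from datasets import load_dataset

squad = load_dataset("squad")   # SQuAD v1.1: ~87.6k train / 10,570 validation examples
example = squad["validation"][0]

print(example["question"])
print(example["context"][:100])
print(example["answers"])       # {'text': [...], 'answer_start': [...]}
```
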
## Model training 🏋️‍

The model was trained on a Tesla P100 GPU with 25 GB of RAM using the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type roberta \
  --model_name_or_path 'nyu-mll/roberta-base-1B-1' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file /content/dataset/train-v1.1.json \
  --predict_file /content/dataset/dev-v1.1.json \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 10 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content/output \
  --overwrite_output_dir \
  --save_steps 1000
```

## Test set Results 🧾

| Metric | Value     |
| ------ | --------- |
| **EM** | **72.62** |
| **F1** | **82.19** |

```json
{
  "exact": 72.62062440870388,
  "f1": 82.19430877136834,
  "total": 10570,
  "HasAns_exact": 72.62062440870388,
  "HasAns_f1": 82.19430877136834,
  "HasAns_total": 10570,
  "best_exact": 72.62062440870388,
  "best_exact_thresh": 0.0,
  "best_f1": 82.19430877136834,
  "best_f1_thresh": 0.0
}
```
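
The snippet below is not part of the original card; it is a minimal sketch of what **EM** and **F1** measure, recomputing them for a toy prediction with the Hugging Face `evaluate` library (the example id and texts are made up):

```python
# Minimal sketch (assumes the `evaluate` package is installed).
import evaluate

squad_metric = evaluate.load("squad")  # SQuAD v1.1 metric: exact match (EM) + token-level F1

predictions = [{"id": "toy-0", "prediction_text": "A new strain of flu"}]
references = [{"id": "toy-0",
               "answers": {"text": ["a new strain of flu"], "answer_start": [0]}}]

# Both metrics normalise casing and punctuation, so this toy pair scores 100/100.
print(squad_metric.compute(predictions=predictions, references=references))
# e.g. {'exact_match': 100.0, 'f1': 100.0}
```
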
### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/roberta-base-1B-1-finetuned-squadv1')

QnA_pipeline({
    'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
    'question': 'What has been discovered by scientists from China?'
})

# Output:
# {'answer': 'A new strain of flu', 'end': 19, 'score': 0.04702283976040074, 'start': 0}
```
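
For completeness, here is a minimal sketch (not part of the original card; assumes a reasonably recent `transformers` and `torch`) of the same inference without the pipeline helper, using `AutoModelForQuestionAnswering` directly:

```python
# Minimal sketch: extractive QA by picking the most likely start/end tokens.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "mrm8488/roberta-base-1B-1-finetuned-squadv1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What has been discovered by scientists from China?"
context = ("A new strain of flu that has the potential to become a pandemic "
           "has been identified in China by scientists.")

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the span between the highest-scoring start and end positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1]).strip()
print(answer)  # expected to resemble "A new strain of flu"
```
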
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
