Commit 0197746

fix: typo spelling grammar (huggingface#13212)
* fix: typo spelling grammar
* fix: make fixup
1 parent ef83dc4 commit 0197746

24 files changed (+32 / -32 lines changed)

docs/source/main_classes/trainer.rst

Lines changed: 1 addition & 1 deletion
@@ -197,7 +197,7 @@ which should make the "stop and resume" style of training as close as possible t
 However, due to various default non-deterministic pytorch settings this might not fully work. If you want full
 determinism please refer to `Controlling sources of randomness
 <https://pytorch.org/docs/stable/notes/randomness.html>`__. As explained in the document, that some of those settings
-that make things determinstic (.e.g., ``torch.backends.cudnn.deterministic``) may slow things down, therefore this
+that make things deterministic (.e.g., ``torch.backends.cudnn.deterministic``) may slow things down, therefore this
 can't be done by default, but you can enable those yourself if needed.
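
Note: the opt-in settings this passage refers to look roughly like the following in user code (a minimal sketch, not part of the commit; `torch.use_deterministic_algorithms` is an additional knob beyond the flag named above):

```python
# Opt-in reproducibility settings for PyTorch; expect some slowdown.
import torch

torch.manual_seed(42)                      # seed the CPU and CUDA RNGs
torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable non-deterministic autotuning
torch.use_deterministic_algorithms(True)   # raise an error on non-deterministic ops
```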
docs/source/model_doc/deberta_v2.rst

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ New in v2:
   transformer layer to better learn the local dependency of input tokens.
 - **Sharing position projection matrix with content projection matrix in attention layer** Based on previous
   experiments, this can save parameters without affecting the performance.
-- **Apply bucket to encode relative postions** The DeBERTa-v2 model uses log bucket to encode relative positions
+- **Apply bucket to encode relative positions** The DeBERTa-v2 model uses log bucket to encode relative positions
   similar to T5.
 - **900M model & 1.5B model** Two additional model sizes are available: 900M and 1.5B, which significantly improves the
   performance of downstream tasks.

docs/source/model_doc/speech_to_text.rst

Lines changed: 2 additions & 2 deletions
@@ -42,8 +42,8 @@ features. The :class:`~transformers.Speech2TextProcessor` wraps :class:`~transfo
 predicted token ids.

 The feature extractor depends on :obj:`torchaudio` and the tokenizer depends on :obj:`sentencepiece` so be sure to
-install those packages before running the examples. You could either install those as extra speech dependancies with
-``pip install transformers"[speech, sentencepiece]"`` or install the packages seperatly with ``pip install torchaudio
+install those packages before running the examples. You could either install those as extra speech dependencies with
+``pip install transformers"[speech, sentencepiece]"`` or install the packages seperately with ``pip install torchaudio
 sentencepiece``. Also ``torchaudio`` requires the development version of the `libsndfile
 <http://www.mega-nerd.com/libsndfile/>`__ package which can be installed via a system package manager. On Ubuntu it can
 be installed as follows: ``apt install libsndfile1-dev``

docs/source/training.rst

Lines changed: 1 addition & 1 deletion
@@ -281,7 +281,7 @@ Fine-tuning in native PyTorch
     frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;
     picture-in-picture" allowfullscreen></iframe>

-You might need to restart your notebook at this stage to free some memory, or excute the following code:
+You might need to restart your notebook at this stage to free some memory, or execute the following code:

 .. code-block:: python
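
Note: the body of that code block falls outside the diff context. In notebooks it is typically something along these lines (an illustrative sketch only, not the file's actual contents; the variable names are assumptions):

```python
# Free notebook memory before continuing with the native PyTorch loop.
import gc

import torch

for name in ("model", "trainer"):  # assumed notebook variables; adjust as needed
    if name in globals():
        del globals()[name]        # drop the reference so it can be collected
gc.collect()                       # reclaim the dropped objects
if torch.cuda.is_available():
    torch.cuda.empty_cache()       # return cached GPU memory to the driver
```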
src/transformers/deepspeed.py

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ def __init__(self, config_file_or_dict):

         if isinstance(config_file_or_dict, dict):
             # Don't modify user's data should they want to reuse it (e.g. in tests), because once we
-            # modified it, it will not be accepted here again, since `auto` values would have been overriden
+            # modified it, it will not be accepted here again, since `auto` values would have been overridden
             config = deepcopy(config_file_or_dict)
         elif isinstance(config_file_or_dict, str):
             with io.open(config_file_or_dict, "r", encoding="utf-8") as f:

src/transformers/modelcard.py

Lines changed: 1 addition & 1 deletion
@@ -468,7 +468,7 @@ def to_model_card(self):
         model_card += f"This model is a fine-tuned version of [{self.finetuned_from}](https://huggingface.co/{self.finetuned_from}) on "

         if self.dataset is None:
-            model_card += "an unkown dataset."
+            model_card += "an unknown dataset."
         else:
             if isinstance(self.dataset, str):
                 model_card += f"the {self.dataset} dataset."

src/transformers/modeling_flax_utils.py

Lines changed: 2 additions & 2 deletions
@@ -177,14 +177,14 @@ def from_pretrained(
                 - A path or url to a `pt index checkpoint file` (e.g, ``./tf_model/model.ckpt.index``). In this
                   case, ``from_pt`` should be set to :obj:`True`.
             model_args (sequence of positional arguments, `optional`):
-                All remaning positional arguments will be passed to the underlying model's ``__init__`` method.
+                All remaining positional arguments will be passed to the underlying model's ``__init__`` method.
             config (:obj:`Union[PretrainedConfig, str, os.PathLike]`, `optional`):
                 Can be either:

                     - an instance of a class derived from :class:`~transformers.PretrainedConfig`,
                     - a string or path valid as input to :func:`~transformers.PretrainedConfig.from_pretrained`.

-                Configuration for the model to use instead of an automatically loaded configuation. Configuration can
+                Configuration for the model to use instead of an automatically loaded configuration. Configuration can
                 be automatically loaded when:

                     - The model is a model provided by the library (loaded with the `model id` string of a pretrained

src/transformers/modeling_tf_utils.py

Lines changed: 2 additions & 2 deletions
@@ -1120,14 +1120,14 @@ def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
                 - :obj:`None` if you are both providing the configuration and state dictionary (resp. with keyword
                   arguments ``config`` and ``state_dict``).
             model_args (sequence of positional arguments, `optional`):
-                All remaning positional arguments will be passed to the underlying model's ``__init__`` method.
+                All remaining positional arguments will be passed to the underlying model's ``__init__`` method.
             config (:obj:`Union[PretrainedConfig, str]`, `optional`):
                 Can be either:

                     - an instance of a class derived from :class:`~transformers.PretrainedConfig`,
                     - a string valid as input to :func:`~transformers.PretrainedConfig.from_pretrained`.

-                Configuration for the model to use instead of an automatically loaded configuation. Configuration can
+                Configuration for the model to use instead of an automatically loaded configuration. Configuration can
                 be automatically loaded when:

                     - The model is a model provided by the library (loaded with the `model id` string of a pretrained

src/transformers/modeling_utils.py

Lines changed: 2 additions & 2 deletions
@@ -1038,14 +1038,14 @@ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.P
                 - :obj:`None` if you are both providing the configuration and state dictionary (resp. with keyword
                   arguments ``config`` and ``state_dict``).
             model_args (sequence of positional arguments, `optional`):
-                All remaning positional arguments will be passed to the underlying model's ``__init__`` method.
+                All remaining positional arguments will be passed to the underlying model's ``__init__`` method.
             config (:obj:`Union[PretrainedConfig, str, os.PathLike]`, `optional`):
                 Can be either:

                     - an instance of a class derived from :class:`~transformers.PretrainedConfig`,
                     - a string or path valid as input to :func:`~transformers.PretrainedConfig.from_pretrained`.

-                Configuration for the model to use instead of an automatically loaded configuation. Configuration can
+                Configuration for the model to use instead of an automatically loaded configuration. Configuration can
                 be automatically loaded when:

                     - The model is a model provided by the library (loaded with the `model id` string of a pretrained
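
Note: the three `from_pretrained` docstrings above describe the same pattern, so a single usage sketch covers them (the checkpoint name and config tweak are illustrative, not taken from this commit):

```python
# Load a config explicitly and pass it to from_pretrained instead of relying
# on the automatically loaded configuration.
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("bert-base-uncased")  # illustrative model id
config.output_hidden_states = True                        # tweak before loading the weights

model = AutoModel.from_pretrained("bert-base-uncased", config=config)
```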

src/transformers/models/big_bird/modeling_big_bird.py

Lines changed: 1 addition & 1 deletion
@@ -1138,7 +1138,7 @@ def _bigbird_block_rand_mask_with_head(
         from_block_size: int. size of block in from sequence.
         to_block_size: int. size of block in to sequence.
         num_heads: int. total number of heads.
-        plan_from_length: list. plan from length where num_random_blocks are choosen from.
+        plan_from_length: list. plan from length where num_random_blocks are chosen from.
         plan_num_rand_blocks: list. number of rand blocks within the plan.
         window_block_left: int. number of blocks of window to left of a block.
         window_block_right: int. number of blocks of window to right of a block.

src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py

Lines changed: 1 addition & 1 deletion
@@ -952,7 +952,7 @@ def _bigbird_block_rand_mask_with_head(
         from_block_size: int. size of block in from sequence.
         to_block_size: int. size of block in to sequence.
         num_heads: int. total number of heads.
-        plan_from_length: list. plan from length where num_random_blocks are choosen from.
+        plan_from_length: list. plan from length where num_random_blocks are chosen from.
         plan_num_rand_blocks: list. number of rand blocks within the plan.
         window_block_left: int. number of blocks of window to left of a block.
         window_block_right: int. number of blocks of window to right of a block.

src/transformers/models/clip/tokenization_clip.py

Lines changed: 2 additions & 2 deletions
@@ -60,7 +60,7 @@ def bytes_to_unicode():

     The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab
     if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for
-    decent coverage. This is a signficant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup
+    decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup
     tables between utf-8 bytes and unicode strings.
     """
     bs = (

@@ -317,7 +317,7 @@ def _tokenize(self, text):
         for token in re.findall(self.pat, text):
             token = "".join(
                 self.byte_encoder[b] for b in token.encode("utf-8")
-            ) # Maps all our bytes to unicode strings, avoiding controle tokens of the BPE (spaces in our case)
+            ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
             bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" "))
         return bpe_tokens
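
Note: for readers unfamiliar with the lookup table this docstring describes, the standard GPT-2-style construction looks roughly like this (a sketch of the general technique, not the exact code in `tokenization_clip.py`):

```python
def bytes_to_unicode_sketch() -> dict:
    """Map every byte value (0-255) to a printable unicode character.

    Printable bytes map to themselves; control and whitespace bytes are shifted
    into unused code points so the BPE never has to handle them directly.
    """
    bs = (
        list(range(ord("!"), ord("~") + 1))
        + list(range(ord("¡"), ord("¬") + 1))
        + list(range(ord("®"), ord("ÿ") + 1))
    )
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)  # park the byte at an unused code point above 255
            n += 1
    return dict(zip(bs, map(chr, cs)))
```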

src/transformers/models/detr/modeling_detr.py

Lines changed: 3 additions & 3 deletions
@@ -151,7 +151,7 @@ class DetrObjectDetectionOutput(ModelOutput):
             unnormalized bounding boxes.
         auxiliary_outputs (:obj:`list[Dict]`, `optional`):
             Optional, only returned when auxilary losses are activated (i.e. :obj:`config.auxiliary_loss` is set to
-            `True`) and labels are provided. It is a list of dictionnaries containing the two above keys (:obj:`logits`
+            `True`) and labels are provided. It is a list of dictionaries containing the two above keys (:obj:`logits`
             and :obj:`pred_boxes`) for each decoder layer.
         last_hidden_state (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
             Sequence of hidden-states at the output of the last layer of the decoder of the model.

@@ -218,8 +218,8 @@ class DetrSegmentationOutput(ModelOutput):
             :meth:`~transformers.DetrFeatureExtractor.post_process_panoptic` to evaluate instance and panoptic
             segmentation masks respectively.
         auxiliary_outputs (:obj:`list[Dict]`, `optional`):
-            Optional, only returned when auxilary losses are activated (i.e. :obj:`config.auxiliary_loss` is set to
-            `True`) and labels are provided. It is a list of dictionnaries containing the two above keys (:obj:`logits`
+            Optional, only returned when auxiliary losses are activated (i.e. :obj:`config.auxiliary_loss` is set to
+            `True`) and labels are provided. It is a list of dictionaries containing the two above keys (:obj:`logits`
             and :obj:`pred_boxes`) for each decoder layer.
         last_hidden_state (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
             Sequence of hidden-states at the output of the last layer of the decoder of the model.

src/transformers/models/encoder_decoder/modeling_encoder_decoder.py

Lines changed: 1 addition & 1 deletion
@@ -272,7 +272,7 @@ def from_encoder_decoder_pretrained(
                       a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

             model_args (remaining positional arguments, `optional`):
-                All remaning positional arguments will be passed to the underlying model's ``__init__`` method.
+                All remaining positional arguments will be passed to the underlying model's ``__init__`` method.

             kwargs (remaining dictionary of keyword arguments, `optional`):
                 Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
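
Note: a short usage sketch of the method documented above (checkpoint names are illustrative and not part of this commit):

```python
# Warm-start an encoder-decoder model from two independent pretrained checkpoints.
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased",  # encoder checkpoint (illustrative)
    "bert-base-uncased",  # decoder checkpoint (illustrative)
)
```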

src/transformers/models/gpt_neo/configuration_gpt_neo.py

Lines changed: 1 addition & 1 deletion
@@ -205,7 +205,7 @@ def custom_unfold(input, dimension, size, step):
 def custom_get_block_length_and_num_blocks(seq_length, window_size):
     """
     Custom implementation for GPTNeoAttentionMixin._get_block_length_and_num_blocks to enable the export to ONNX as
-    original implmentation uses Python variables and control flow.
+    original implementation uses Python variables and control flow.
     """
     import torch

src/transformers/models/hubert/modeling_hubert.py

Lines changed: 1 addition & 1 deletion
@@ -237,7 +237,7 @@ def forward(self, hidden_states):


 # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2FeatureExtractor with Wav2Vec2->Hubert
 class HubertFeatureExtractor(nn.Module):
-    """Construct the featurs from raw audio waveform"""
+    """Construct the features from raw audio waveform"""

     def __init__(self, config):
         super().__init__()

src/transformers/models/rag/modeling_rag.py

Lines changed: 1 addition & 1 deletion
@@ -283,7 +283,7 @@ def from_pretrained_question_encoder_generator(
                       a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

             model_args (remaining positional arguments, `optional`):
-                All remaning positional arguments will be passed to the underlying model's ``__init__`` method.
+                All remaining positional arguments will be passed to the underlying model's ``__init__`` method.
             retriever (:class:`~transformers.RagRetriever`, `optional`):
                 The retriever to use.
             kwwargs (remaining dictionary of keyword arguments, `optional`):

src/transformers/models/rag/modeling_tf_rag.py

Lines changed: 1 addition & 1 deletion
@@ -258,7 +258,7 @@ def from_pretrained_question_encoder_generator(
                       ``generator_from_pt`` should be set to :obj:`True`.

             model_args (remaining positional arguments, `optional`):
-                All remaning positional arguments will be passed to the underlying model's ``__init__`` method.
+                All remaining positional arguments will be passed to the underlying model's ``__init__`` method.
             retriever (:class:`~transformers.RagRetriever`, `optional`):
                 The retriever to use.
             kwargs (remaining dictionary of keyword arguments, `optional`):

src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py

Lines changed: 1 addition & 1 deletion
@@ -385,7 +385,7 @@ def __call__(self, hidden_states):


 class FlaxWav2Vec2FeatureExtractor(nn.Module):
-    """Construct the featurs from raw audio waveform"""
+    """Construct the features from raw audio waveform"""

     config: Wav2Vec2Config
     dtype: jnp.dtype = jnp.float32

src/transformers/models/wav2vec2/modeling_wav2vec2.py

Lines changed: 1 addition & 1 deletion
@@ -308,7 +308,7 @@ def forward(self, hidden_states):


 class Wav2Vec2FeatureExtractor(nn.Module):
-    """Construct the featurs from raw audio waveform"""
+    """Construct the features from raw audio waveform"""

     def __init__(self, config):
         super().__init__()

src/transformers/onnx/convert.py

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@ def validate_model_outputs(

     # We flatten potential collection of outputs (i.e. past_keys) to a flat structure
     for name, value in ref_outputs.items():
-        # Overwriting the output name as "present" since it is the name used for the ONNX ouputs
+        # Overwriting the output name as "present" since it is the name used for the ONNX outputs
         # ("past_key_values" being taken for the ONNX inputs)
         if name == "past_key_values":
             name = "present"

src/transformers/onnx/features.py

Lines changed: 1 addition & 1 deletion
@@ -114,7 +114,7 @@ def check_supported_model_or_raise(model: PreTrainedModel, feature: str = "defau

     Args:
         model: The model to export
-        feature: The name of the feature to check if it is avaiable
+        feature: The name of the feature to check if it is available

     Returns:
         (str) The type of the model (OnnxConfig) The OnnxConfig instance holding the model export properties

src/transformers/tokenization_utils_base.py

Lines changed: 2 additions & 2 deletions
@@ -1375,7 +1375,7 @@ def all_special_ids(self) -> List[int]:
       high-level keys being the ``__init__`` keyword name of each vocabulary file required by the model, the
       low-level being the :obj:`short-cut-names` of the pretrained models with, as associated values, the
       :obj:`url` to the associated pretrained vocabulary file.
-    - **max_model_input_sizes** (:obj:`Dict[str, Optinal[int]]`) -- A dictionary with, as keys, the
+    - **max_model_input_sizes** (:obj:`Dict[str, Optional[int]]`) -- A dictionary with, as keys, the
       :obj:`short-cut-names` of the pretrained models, and as associated values, the maximum length of the sequence
       inputs of this model, or :obj:`None` if the model has no maximum input size.
     - **pretrained_init_configuration** (:obj:`Dict[str, Dict[str, Any]]`) -- A dictionary with, as keys, the

@@ -1785,7 +1785,7 @@ def _from_pretrained(
                 config = AutoConfig.from_pretrained(pretrained_model_name_or_path)
                 config_tokenizer_class = config.tokenizer_class
             except (OSError, ValueError, KeyError):
-                # skip if an error occured.
+                # skip if an error occurred.
                 config = None
             if config_tokenizer_class is None:
                 # Third attempt. If we have not yet found the original type of the tokenizer,

src/transformers/tokenization_utils_fast.py

Lines changed: 1 addition & 1 deletion
@@ -707,7 +707,7 @@ def train_new_from_iterator(

             special_token_full = getattr(self, f"_{token}")
             if isinstance(special_token_full, AddedToken):
-                # Create an added token with the same paramters except the content
+                # Create an added token with the same parameters except the content
                 kwargs[token] = AddedToken(
                     special_token,
                     single_word=special_token_full.single_word,
