@@ -28,17 +28,18 @@ assignees: ''
 Models:
 
 - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
-- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
-- Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
+- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
+- Blenderbot, MBART: @patil-suraj
+- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
 - FSMT: @stas00
 - Funnel: @sgugger
 - GPT-2, GPT: @patrickvonplaten, @LysandreJik
 - RAG, DPR: @patrickvonplaten, @lhoestq
 - TensorFlow: @Rocketknight1
-- JAX/Flax: @patil-suraj @patrickvonplaten
+- JAX/Flax: @patil-suraj
 - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
 - GPT-Neo, GPT-J, CLIP: @patil-suraj
-- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
+- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
 
 If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
 
@@ -47,7 +48,7 @@ Library:
 - Benchmarks: @patrickvonplaten
 - Deepspeed: @stas00
 - Ray/raytune: @richardliaw, @amogkam
-- Text generation: @patrickvonplaten
+- Text generation: @patrickvonplaten @narsil
 - Tokenizers: @LysandreJik
 - Trainer: @sgugger
 - Pipelines: @Narsil