Tags: Translation, Transformers, PyTorch, TensorFlow, JAX, Rust, ONNX, Safetensors, t5, text2text-generation, summarization, text-generation-inference
Instructions to use google-t5/t5-small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google-t5/t5-small with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "translation" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("translation", model="google-t5/t5-small")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
```

(A short generation sketch follows the links below.)

- Inference
- Notebooks
- Google Colab
- Kaggle
Discussion #5: It seems `T5WithLMHead` is outdated
opened by Narsil
Leaving it in for potential older versions of the library, but adding the new name so that `infer_framework_load_model` from the pipeline can load the model directly without needing to refer to the pipeline task.
BTW, on the Hub side we always use the first element of `architectures`, i.e. `architectures[0]` (to populate the "Use in Transformers" modal).
So you need to swap those two if you want the newer kind of AutoModel to appear.
I dug into the old Transformers code, since the rename was done a looooong time ago.
At that time, `modeling_auto` did not look at the `architectures` field of the config at all, so you can just replace the name and it will still work with older versions of Transformers.