T5Gemma-2-270m — Text Encoder Only (Bidirectional)
Text encoder extracted from google/t5gemma-2-270m-270m and saved as a standard Gemma2Model with bidirectional attention (is_decoder=False).
Gemma is provided under and subject to the Gemma Terms of Use found at https://ai.google.dev/gemma/terms
Architecture
- 18 layers, hidden_size=640, heads=4
- Sliding-window attention (window size 512), with full attention every 6 layers
- Bidirectional (no causal mask)
- Parameters: 268M
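To illustrate the attention layout described above, the interleaving of sliding-window and full-attention layers can be sketched as follows. This is a minimal sketch; the exact indexing convention (full attention on every 6th layer, 1-indexed) is an assumption for illustration, not read from the model config.

```python
# Hedged sketch: label each of the 18 layers as full attention or
# 512-token sliding-window attention, assuming every 6th layer
# (1-indexed) uses full attention.
NUM_LAYERS = 18
FULL_ATTN_EVERY = 6  # assumed interleaving period
SLIDING_WINDOW = 512

layer_types = [
    "full_attention" if (i + 1) % FULL_ATTN_EVERY == 0 else "sliding_attention"
    for i in range(NUM_LAYERS)
]
print(layer_types.count("full_attention"))  # 3 full-attention layers out of 18
```

Under this assumption, layers 6, 12, and 18 attend globally while the rest are restricted to a 512-token window.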
Usage
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("knowledgator/t5gemma-2-text-encoder-270m")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/t5gemma-2-text-encoder-270m")
inputs = tokenizer("Your text here", return_tensors="pt", padding=True, truncation=True)
outputs = model(**inputs)
token_embeddings = outputs.last_hidden_state # (batch, seq_len, 640)
pooled = outputs.last_hidden_state.mean(1) # mean pooling -> (batch, 640)
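Note that the plain `.mean(1)` above averages over padding positions as well. When batching texts of different lengths, a mask-aware mean is usually preferable. The sketch below shows the idea on small hand-built tensors; `masked_mean_pool` is a helper name introduced here for illustration, not part of the model's API.

```python
import torch

def masked_mean_pool(hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # hidden: (batch, seq_len, dim); mask: (batch, seq_len), 1 = real token.
    mask = mask.unsqueeze(-1).float()          # (batch, seq_len, 1)
    summed = (hidden * mask).sum(dim=1)        # sum only over real tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)   # avoid division by zero
    return summed / counts

# Toy batch: one sequence with two real tokens and one padding position.
hidden = torch.tensor([[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]])
mask = torch.tensor([[1, 1, 0]])
pooled = masked_mean_pool(hidden, mask)  # → tensor([[2., 3.]])
```

With real model outputs, this would be called as `masked_mean_pool(outputs.last_hidden_state, inputs["attention_mask"])`, yielding a `(batch, 640)` embedding that ignores padding.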