How to use aehrm/moderngbert-droc-tagger with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="aehrm/moderngbert-droc-tagger")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("aehrm/moderngbert-droc-tagger")
model = AutoModelForTokenClassification.from_pretrained("aehrm/moderngbert-droc-tagger")
```
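Because the tagger was trained and evaluated on 1024-token windows, longer literary documents should be split into windows before inference. A minimal sketch of an overlapping window splitter over a sequence of token IDs (the function name and the window/stride values are illustrative assumptions, not part of the model card):

```python
def split_into_windows(token_ids, window=1024, stride=512):
    """Split a token-ID sequence into overlapping fixed-size windows.

    The overlap (window - stride) ensures most tokens appear in at
    least one window where they sit away from a boundary; predictions
    for overlapping regions can then be merged, e.g. by preferring the
    window in which the token is more central.
    """
    if len(token_ids) <= window:
        return [token_ids]
    windows = []
    start = 0
    while start < len(token_ids):
        windows.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break
        start += stride
    return windows
```

Each window can then be passed to the model separately, with the per-token predictions stitched back together afterwards.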
ModernGBERT DROC Mention Tagger
This model is a sequence tagger, a fine-tuned version of LSX-UniWue/ModernGBERT_134M on the DROC coreference dataset, that annotates mentions of characters in literary text. Training and testing were performed on 1024-token windows. It achieves the following results on the evaluation document Die Wahlverwandtschaften, annotated in the context of the GerFuN corpus:
- Loss: 0.0334
- Precision: 0.9577
- Recall: 0.9665
- F1: 0.9620
- Accuracy: 0.9907
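As a sanity check, the reported F1 is the harmonic mean of precision and recall; recomputing it from the rounded figures above agrees with the reported value to within rounding:

```python
precision, recall = 0.9577, 0.9665

# F1 = harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)

# Agrees with the reported 0.9620 up to rounding of the inputs.
assert abs(f1 - 0.9620) < 1e-3
```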
Model tree for aehrm/moderngbert-droc-tagger
Base model
LSX-UniWue/ModernGBERT_134M