sentiment-polish-gpt2-small
This model is a fine-tuned version of sdadas/polish-gpt2-small, trained on the polemo2-official dataset. It achieves the following results on the evaluation set:
- Loss: 0.4659
- Accuracy: 0.9627
Model description
Fine-tuned from sdadas/polish-gpt2-small.
Intended uses & limitations
Sentiment analysis of Polish text with four labels: neutral, negative, positive, and ambiguous.
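The class ids behind these labels can be read directly from the checkpoint's config; the mapping shown in the comment below matches the label ids used in the Evaluation section further down:

from transformers import AutoConfig

# Inspect the id -> label mapping stored with the checkpoint
config = AutoConfig.from_pretrained("nie3e/sentiment-polish-gpt2-small")
print(config.id2label)
# {0: 'NEUTRAL', 1: 'NEGATIVE', 2: 'POSITIVE', 3: 'AMBIGUOUS'}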
How to use
Transformers AutoModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = "nie3e/sentiment-polish-gpt2-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint
).to(device)
text = "Jak na cenÄ™ - super . Åšniadanie nie do przejedzenia ."
inputs = tokenizer(text, return_tensors="pt").to(device)
logits = model(**inputs)["logits"].to("cpu")
percent = torch.sigmoid(logits).squeeze(dim=0)
id2class = model.config.id2label
print({id2class[i]: f"{(p*100):.2f}%" for i, p in enumerate(percent.tolist())})
{'NEUTRAL': '2.95%', 'NEGATIVE': '0.21%', 'POSITIVE': '100.00%', 'AMBIGUOUS': '20.56%'}
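Note that torch.sigmoid scores each class independently, which is why the percentages above do not sum to 100%. If you prefer a single probability distribution over the four classes, apply softmax to the same logits instead:

probs = torch.softmax(logits, dim=-1).squeeze(dim=0)
print({id2class[i]: f"{(p*100):.2f}%" for i, p in enumerate(probs.tolist())})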
Transformers Pipeline
from transformers import pipeline
pipe = pipeline(
    "sentiment-analysis",
    "nie3e/sentiment-polish-gpt2-small"
)
result = pipe("Jak na cenÄ™ - super . Åšniadanie nie do przejedzenia .")
print(result)
[{'label': 'POSITIVE', 'score': 0.9999890327453613}]
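By default the pipeline returns only the top label. Passing top_k=None makes it return the score of every class:

pipe_all = pipeline(
    "sentiment-analysis",
    "nie3e/sentiment-polish-gpt2-small",
    top_k=None
)
print(pipe_all("Jak na cenę - super . Śniadanie nie do przejedzenia ."))
# a list of {'label': ..., 'score': ...} entries, one per class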
vLLM >= 0.9.2
from vllm import LLM
llm = LLM(
    "nie3e/sentiment-polish-gpt2-small",
    task="classify",
    enforce_eager=True
)
text = ["Jak na cenÄ™ - super . Åšniadanie nie do przejedzenia ."]
outputs = llm.classify(text)
for output in outputs:
    print(output.outputs.probs)
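output.outputs.probs is a bare list of four probabilities in class-id order; a small sketch pairing it with the label names, assuming the order from the model config (NEUTRAL, NEGATIVE, POSITIVE, AMBIGUOUS, as also used in the Evaluation section below):

labels = ["NEUTRAL", "NEGATIVE", "POSITIVE", "AMBIGUOUS"]  # class-id order per model config
for output in outputs:
    scores = dict(zip(labels, output.outputs.probs))
    print(max(scores, key=scores.get), scores)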
vLLM OpenAI serving (recommended)
docker run --gpus 1 --ipc=host -p 8000:8000 vllm/vllm-openai:v0.9.2 --model nie3e/sentiment-polish-gpt2-small
Using curl:
curl -X 'POST' \
  'http://127.0.0.1:8000/classify' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "nie3e/sentiment-polish-gpt2-small",
    "input": ["Przestronny hotel , jasny , z dużymi oknami .", "Położony całkiem blisko centrum ."]
  }'
Result:
{
  "id": "classify-6619cecdb01a4bf8900df136a9b33b15",
  "object": "list",
  "created": 1749994841,
  "model": "nie3e/sentiment-polish-gpt2-small",
  "data": [
    {
      "index": 0,
      "label": "POSITIVE",
      "probs": [
        0.000006198883056640625,
        1.7881393432617188e-7,
        1.0,
        0.000007569789886474609
      ],
      "num_classes": 4
    },
    {
      "index": 1,
      "label": "AMBIGUOUS",
      "probs": [
        0.00013005733489990234,
        0.004421234130859375,
        0.005367279052734375,
        0.990234375
      ],
      "num_classes": 4
    }
  ],
  "usage": {
    "prompt_tokens": 17,
    "total_tokens": 17,
    "completion_tokens": 0,
    "prompt_tokens_details": null
  }
}
Using Python:
import requests
response = requests.post(
    "http://127.0.0.1:8000/classify",
    headers={"Content-Type": "application/json"},
    json={
        "model": "nie3e/sentiment-polish-gpt2-small",
        "input": [
            "Przestronny hotel , jasny , z dużymi oknami .",
            "Położony całkiem blisko centrum ."
        ]
    }
)
print(response.json())
{'id': 'classify-ac86189c5d0e41908584c3e88d356316',
 'object': 'list',
 'created': 1753209126,
 'model': 'nie3e/sentiment-polish-gpt2-small',
 'data': [{'index': 0,
           'label': 'POSITIVE',
           'probs': [6.277556167333387e-06,
                     1.9292743047572003e-07,
                     0.9999858140945435,
                     7.631567314092536e-06],
           'num_classes': 4},
          {'index': 1,
           'label': 'AMBIGUOUS',
           'probs': [0.0001290593936573714,
                     0.004407374653965235,
                     0.005339722614735365,
                     0.9901238083839417],
           'num_classes': 4}],
 'usage': {'prompt_tokens': 17,
           'total_tokens': 17,
           'completion_tokens': 0,
           'prompt_tokens_details': None}}
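A small post-processing sketch for this JSON response, pulling out each input's predicted label and its probability (field names as in the response above; the reported label corresponds to the highest entry in "probs"):

for item in response.json()["data"]:
    confidence = max(item["probs"])  # probability of the predicted label
    print(f'{item["index"]}: {item["label"]} ({confidence * 100:.2f}%)')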
Training and evaluation data
All rows of the polemo2-official dataset were merged into a single set.
Train/test split: 80%/20% (a hedged preparation sketch follows).
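A minimal sketch of reproducing this preparation, assuming the Hugging Face dataset id clarin-pl/polemo2-official and its standard splits (the original training script and split seed are not part of this card):

from datasets import load_dataset, concatenate_datasets

# Merge every split into one dataset, then re-split 80%/20%
# (seed chosen for illustration only)
ds = load_dataset("clarin-pl/polemo2-official")
merged = concatenate_datasets([ds[s] for s in ds.keys()])
split = merged.train_test_split(test_size=0.2, seed=42)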
Data collator:
from transformers import DataCollatorWithPadding
data_collator = DataCollatorWithPadding(
    tokenizer=tokenizer,
    padding="longest",
    max_length=128,
    pad_to_multiple_of=8
)
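A quick way to see the collator's effect on a toy batch (a sketch; it assumes the checkpoint's tokenizer defines a pad token, which DataCollatorWithPadding requires, and the sentences are purely illustrative):

features = [
    tokenizer("Świetny hotel ."),                            # short example
    tokenizer("Bardzo smaczne śniadanie i miła obsługa .")   # longer example
]
batch = data_collator(features)
# padded to the longest sequence in the batch, rounded up to a multiple of 8
print(batch["input_ids"].shape)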
Training procedure
GPU: RTX 3090
Training time: 2:53:05
Training hyperparameters
The following hyperparameters were used during training (a minimal TrainingArguments sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
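The sketch below mirrors the settings above as TrainingArguments; it is an illustration, not the author's original training script, and output_dir is a placeholder:

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="sentiment-polish-gpt2-small",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,  # 4 * 8 = total train batch size 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # native AMP mixed precision
)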
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.4049 | 1.0 | 3284 | 0.3351 | 0.8792 |
| 0.1885 | 2.0 | 6568 | 0.2625 | 0.9218 |
| 0.1182 | 3.0 | 9852 | 0.2583 | 0.9419 |
| 0.0825 | 4.0 | 13136 | 0.2886 | 0.9482 |
| 0.0586 | 5.0 | 16420 | 0.3343 | 0.9538 |
| 0.034 | 6.0 | 19704 | 0.3734 | 0.9595 |
| 0.0288 | 7.0 | 22988 | 0.4125 | 0.9599 |
| 0.0185 | 8.0 | 26273 | 0.4262 | 0.9626 |
| 0.0069 | 9.0 | 29557 | 0.4529 | 0.9622 |
| 0.0059 | 10.0 | 32840 | 0.4659 | 0.9627 |
Evaluation
Evaluated on the test split of the allegro/klej-polemo2-out dataset.
from datasets import load_dataset
from evaluate import evaluator
data = load_dataset("allegro/klej-polemo2-out", split="test").shuffle(seed=42)
task_evaluator = evaluator("text-classification")
# map PolEmo2 string labels to the model's integer class ids
label2id = {
    "__label__meta_zero": 0,
    "__label__meta_minus_m": 1,
    "__label__meta_plus_m": 2,
    "__label__meta_amb": 3
}

def fix_labels(examples):
    examples["target"] = label2id[examples["target"]]
    return examples
data = data.map(fix_labels)
eval_results = task_evaluator.compute(
    model_or_pipeline="nie3e/sentiment-polish-gpt2-small",
    data=data,
    label_mapping={"NEUTRAL": 0, "NEGATIVE": 1, "POSITIVE": 2, "AMBIGUOUS": 3},
    input_column="sentence",
    label_column="target"
)
print(eval_results)
{
  "accuracy": 0.9838056680161943,
  "total_time_in_seconds": 5.2441766999982065,
  "samples_per_second": 94.1997244296076,
  "latency_in_seconds": 0.010615742307688678
}
Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
Model tree for nie3e/sentiment-polish-gpt2-small
- Base model: sdadas/polish-gpt2-small
- Dataset used to train: polemo2-official
Evaluation results
- Accuracy on klej-polemo2-out (self-reported): 98.38%