# Fruit Detector - dfine_xlarge_coco
This model is a fine-tuned version of [`ustc-community/dfine-xlarge-coco`](https://huggingface.co/ustc-community/dfine-xlarge-coco) for fruit and vegetable detection.
## Model Details

- Base Model: ustc-community/dfine-xlarge-coco
- Architecture: D-FINE (xlarge variant, COCO-pretrained)
- Task: Object Detection
- mAP@0.5:0.95: 0.6631
- Input Size: 640x640
## Classes

The model detects the following 12 fruit/vegetable classes:
| ID | Class |
|---|---|
| 0 | Apple |
| 1 | Cherry |
| 2 | Figs |
| 3 | Olive |
| 4 | Pomegranate |
| 5 | Orange |
| 6 | Rockmelon |
| 7 | Strawberry |
| 8 | Potato |
| 9 | Tomato |
| 10 | Watermelon |
| 11 | Bell-pepper |
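For reference, the table above corresponds to the following `id2label`/`label2id` mapping (a sketch for anyone re-creating the config; the published checkpoint already ships this mapping in `model.config.id2label`):

```python
# Class names in index order, matching the table above.
CLASSES = [
    "Apple", "Cherry", "Figs", "Olive", "Pomegranate", "Orange",
    "Rockmelon", "Strawberry", "Potato", "Tomato", "Watermelon", "Bell-pepper",
]

# Forward and reverse lookups, as expected by transformers configs.
id2label = {i: name for i, name in enumerate(CLASSES)}
label2id = {name: i for i, name in id2label.items()}
```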
## Usage

```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection
from PIL import Image
import torch

# Load model and processor
processor = AutoImageProcessor.from_pretrained("MohamedKhayat/fruit-detector-dfine-xlarge")
model = AutoModelForObjectDetection.from_pretrained("MohamedKhayat/fruit-detector-dfine-xlarge")

# Load and process image (convert to RGB in case of grayscale/RGBA inputs)
image = Image.open("fruit_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Run inference
with torch.no_grad():
    outputs = model(**inputs)

# Post-process results back to the original image size
target_sizes = torch.tensor([[image.height, image.width]])
results = processor.post_process_object_detection(
    outputs,
    threshold=0.5,
    target_sizes=target_sizes,
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = box.tolist()
    print(f"Detected {model.config.id2label[label.item()]} with confidence {score:.2f} at {box}")
```
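As a quick sanity check, the detections can be drawn onto the image with Pillow. This is a minimal sketch (the helper name `draw_detections` is illustrative, not part of the model's API); it takes the `results` dict from the post-processing step above together with `model.config.id2label`:

```python
from PIL import Image, ImageDraw

def draw_detections(image, results, id2label):
    """Draw each predicted box and its label onto a copy of the image."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
        x0, y0, x1, y1 = box.tolist()
        # Box outline plus a small "ClassName score" caption above it.
        draw.rectangle([x0, y0, x1, y1], outline="red", width=3)
        draw.text((x0, max(y0 - 12, 0)), f"{id2label[label.item()]} {score:.2f}", fill="red")
    return annotated
```

Call it as `draw_detections(image, results, model.config.id2label)` and save or display the returned image.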
## Training

This model was trained on a custom fruit detection dataset.

Training Repository: `transformers-for-fruit-object-detection-internship`
## License

Apache 2.0