This is an ONNX version of [Qwen/Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B). It was converted using the `main_export` function from `optimum.exporters.onnx`.