Depth-Anything-3

This version of Depth-Anything-3 has been converted to run on the Axera NPU using w8a16 quantization.
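Here, w8a16 means the weights are quantized to 8-bit integers while activations are kept at 16 bits. As a rough illustration of the weight side, below is a numpy sketch of symmetric per-tensor quantization; this is only illustrative and is not Pulsar2's actual calibration algorithm:

```python
import numpy as np

def quantize_w8(x):
    """Symmetric per-tensor 8-bit quantization: floats -> int8 plus a scale."""
    scale = np.abs(x).max() / 127.0          # one float scale per tensor
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q8, s8 = quantize_w8(w)                      # the "w8" half of w8a16
w_hat = dequantize(q8, s8)
err = np.abs(w - w_hat).max()
print(err <= s8 / 2 + 1e-6)                  # rounding error is at most half a step
```

Activations would be handled analogously with 16-bit integers, which is why the activation precision loss is much smaller than for weights.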


Compatible with Pulsar2 version: 5.0-patch1

Convert tool links:

For those interested in model conversion, you can try exporting the axmodel through the conversion tools.

Support Platform

Chips  Models            Time
AX650  da3-base          67.341 ms
AX650  da3-small         22.768 ms
AX650  da3metric-large   217.577 ms
AX650  da3mono-large     217.615 ms
AX637  da3-base          174.100 ms
AX637  da3-small         75.802 ms
AX637  da3metric-large   698.765 ms
AX637  da3mono-large     697.891 ms
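The times above are single-frame inference latencies. Dividing them into 1000 ms gives an approximate upper bound on frame rate, assuming one frame in flight and ignoring pre/post-processing:

```python
# Convert the measured per-frame latency (ms) into an approximate FPS ceiling.
latency_ms = {
    ("AX650", "da3-base"): 67.341,
    ("AX650", "da3-small"): 22.768,
    ("AX650", "da3metric-large"): 217.577,
    ("AX650", "da3mono-large"): 217.615,
    ("AX637", "da3-base"): 174.100,
    ("AX637", "da3-small"): 75.802,
    ("AX637", "da3metric-large"): 698.765,
    ("AX637", "da3mono-large"): 697.891,
}

for (chip, model), ms in latency_ms.items():
    fps = 1000.0 / ms
    print(f"{chip:6s} {model:18s} {fps:6.2f} fps")
```

For example, da3-small on AX650 works out to roughly 44 fps, while the large variants are in the 1-5 fps range.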

How to use

Download all files from this repository to the device

Python environment requirements

pyaxengine

https://github.com/AXERA-TECH/pyaxengine

wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc2/axengine-0.1.3-py3-none-any.whl
pip install axengine-0.1.3-py3-none-any.whl
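pyaxengine exposes an onnxruntime-like InferenceSession API. Input preprocessing depends on how the axmodel was exported; the sketch below assumes a 518x518 RGB input in NCHW layout with ImageNet mean/std normalization — these values and the `preprocess` helper are assumptions for illustration, not taken from this repo's `python/infer.py`:

```python
import numpy as np

# Assumed preprocessing: 518x518 RGB, ImageNet mean/std, NCHW layout.
# Verify against the model's actual export settings before relying on this.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img_hwc_uint8, size=518):
    """uint8 HxWx3 image -> float32 1x3xSxS tensor (nearest-neighbor resize)."""
    h, w, _ = img_hwc_uint8.shape
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = img_hwc_uint8[ys][:, xs].astype(np.float32) / 255.0
    normed = (resized - MEAN) / STD
    return normed.transpose(2, 0, 1)[None]   # HWC -> 1CHW

# On-device usage (requires the axengine wheel installed above):
# import axengine
# session = axengine.InferenceSession("models/da3-small.axmodel")
# outputs = session.run(None, {session.get_inputs()[0].name: preprocess(img)})

img = np.zeros((480, 640, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape)  # (1, 3, 518, 518)
```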

Others

None required.

Inference on an AX650 host, such as the M4N-Dock (爱芯派Pro)

Input image:

root@ax650:~/AXERA-TECH/Depth-Anything-3# python3 python/infer.py --model models/da3metric-large.axmodel --img examples/demo01.jpg
[INFO] Available providers:  ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.12.0s
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 3.3 ae03a08f
root@ax650:~/AXERA-TECH/Depth-Anything-3# ls

Output image:
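The output image for a depth model is typically the predicted depth map normalized to an 8-bit range for visualization. A minimal numpy sketch of that normalization (illustrative only; the repo's script may apply a colormap instead of grayscale):

```python
import numpy as np

def depth_to_u8(depth):
    """Normalize a float depth map to uint8 [0, 255] for visualization."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8)
    return (d * 255.0).round().astype(np.uint8)

# Synthetic depth map standing in for the model's output tensor.
depth = np.linspace(0.5, 10.0, 518 * 518, dtype=np.float32).reshape(518, 518)
vis = depth_to_u8(depth)
print(vis.min(), vis.max())  # 0 255
```

The resulting array can be saved with any image library; note that for the metric variants the raw (un-normalized) output is in real depth units, so keep the original tensor if you need distances.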
