arXiv:2509.26469

DiVeQ: Differentiable Vector Quantization Using the Reparameterization Trick

AI-generated summary

DiVeQ and SF-DiVeQ enable end-to-end training of vector quantization by letting gradients flow while maintaining hard assignments, improving reconstruction quality in image compression, image generation, and speech coding.

Abstract

Vector quantization is common in deep models, yet its hard assignments block gradients and hinder end-to-end training. We propose DiVeQ, which treats quantization as adding an error vector that mimics the quantization distortion, keeping the forward pass hard while letting gradients flow. We also present a space-filling variant (SF-DiVeQ) that assigns to a curve constructed from the lines connecting codewords, resulting in lower quantization error and full codebook usage. Both methods train end to end without requiring auxiliary losses or temperature schedules. On VQ-VAE image compression, VQGAN image generation, and DAC speech coding tasks across various datasets, our proposed methods improve reconstruction and sample quality over alternative quantization approaches.
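The core idea, keeping the forward pass hard while writing quantization as the addition of an error vector so that gradients can flow, can be illustrated with a minimal sketch. The PyTorch code below implements the generic straight-through version of this construction; the `ReparamVQ` class, its parameters, and the detached-error trick are illustrative assumptions, not the paper's exact DiVeQ formulation (which reparameterizes the error vector itself).

```python
import torch
import torch.nn as nn

class ReparamVQ(nn.Module):
    """Minimal VQ layer with a hard forward pass and a surrogate gradient.

    The quantized output is written as z_q = z + e with error vector
    e = q - z. Detaching e keeps the forward value exactly equal to the
    nearest codeword q while the backward pass treats quantization as
    the identity in z. This is the generic straight-through construction,
    used here only as a sketch of the "add an error vector" view.
    """

    def __init__(self, num_codewords: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(num_codewords, dim)
        nn.init.uniform_(self.codebook.weight, -1.0, 1.0)

    def forward(self, z: torch.Tensor):
        # z: (batch, dim). Hard assignment to the nearest codeword
        # under Euclidean distance.
        dists = torch.cdist(z, self.codebook.weight)  # (batch, K)
        idx = dists.argmin(dim=1)                     # hard, non-differentiable
        q = self.codebook(idx)                        # (batch, dim)

        # Quantization as "z plus an error vector": forward value is q,
        # but e carries no gradient, so gradients flow straight to z.
        e = (q - z).detach()
        z_q = z + e
        return z_q, idx

# Usage sketch: gradients reach the encoder output despite the hard argmin.
# Note that with this plain detached error the codebook itself receives no
# gradient; DiVeQ reportedly avoids the auxiliary losses that straight-through
# VQ typically needs for codebook training.
vq = ReparamVQ(num_codewords=512, dim=64)
z = torch.randn(8, 64, requires_grad=True)
z_q, idx = vq(z)
z_q.sum().backward()
print(z.grad.shape, idx.shape)
```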
