☆ Yσɠƚԋσʂ ☆

  • 11.6K Posts
  • 12.1K Comments
Joined 6 years ago
Cake day: January 18th, 2020

  • Binary quantization and 1-bit vectors have been floating around the space for years. The big difference here isn't just better raw precision; it's that they eliminate the hidden memory tax that usually comes with extreme compression. Normally, when you crush a 32-bit float down to a single bit, you destroy a massive amount of scale and range information. To keep the model usable after that, traditional methods have to store extra full-precision numbers alongside the compressed blocks to act as scaling factors or zero points, so your theoretical 1-bit compression actually costs something like 2 or 3 bits per parameter in practice.
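    To make that memory tax concrete, here's a back-of-the-envelope sketch. The block size and metadata widths are illustrative assumptions (a common layout, not TurboQuant's numbers): sign bits plus one fp32 scale and one fp32 zero point per block.

    ```python
    # Hypothetical accounting for "1-bit" quantization with per-block metadata.
    # Block size and metadata widths are illustrative assumptions.

    def effective_bits_per_param(block_size, payload_bits=1,
                                 scale_bits=32, zero_point_bits=32):
        """Bits actually spent per parameter once the per-block
        full-precision scale and zero point are counted."""
        total_bits = block_size * payload_bits + scale_bits + zero_point_bits
        return total_bits / block_size

    # A 64-value block: 64 sign bits + 32-bit scale + 32-bit zero point.
    print(effective_bits_per_param(64))   # 2.0 bits per parameter, not 1
    # Finer-grained scaling (smaller blocks) makes the tax worse:
    print(effective_bits_per_param(32))   # 3.0 bits per parameter
    ```

    The overhead is fixed per block, so the only way to amortize it is bigger blocks, which in turn makes the shared scale a worse fit for the values inside.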

    TurboQuant gets around this with the Quantized Johnson-Lindenstrauss (QJL) transform, which comes with a mathematical guarantee that relative distances between data points are preserved even when the data is aggressively shrunk. By applying the transform and then keeping only a positive-or-negative sign bit per coordinate, they remove the need to store full-precision scaling factors, so the cache carries zero memory overhead. To make sure the attention mechanism still works, they use an estimator that runs a high-precision query against that low-precision 1-bit cache in a way that mathematically eliminates bias.

    You also have to look at where it sits in the pipeline. They don't take the raw 32-bit vector and smash it down to 1 bit right out of the gate. The PolarQuant step runs first, mapping everything to polar coordinates to capture the vector's main structure and strength; the 1-bit QJL algorithm is only deployed at the very end as a targeted cleanup for the residual error left over from that first step.
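    To illustrate the coarse-then-cleanup shape of that pipeline, here's a toy two-stage scheme of my own construction (not PolarQuant or QJL themselves): stage one keeps a 2-D vector's magnitude plus a coarsely bucketed angle, stage two encodes whatever error is left with one sign bit per coordinate at a fixed global step, so the cleanup stage adds no per-vector scale metadata.

    ```python
    import math

    def polar_coarse(v, angle_levels=16):
        """Stage 1 (toy): keep the vector's magnitude and a coarsely
        quantized angle -- the main structure and strength."""
        x, y = v
        r = math.hypot(x, y)
        theta_step = 2 * math.pi / angle_levels
        theta_q = round(math.atan2(y, x) / theta_step) * theta_step
        return (r * math.cos(theta_q), r * math.sin(theta_q))

    def sign_residual(v, approx):
        """Stage 2 (toy): 1 sign bit per coordinate for the leftover error."""
        return [1 if (a - b) >= 0 else -1 for a, b in zip(v, approx)]

    def reconstruct(approx, bits, step=0.05):
        # 'step' is a single global constant, not per-vector metadata.
        return [a + s * step for a, s in zip(approx, bits)]

    v = (0.8, 0.6)
    stage1 = polar_coarse(v)                  # coarse polar approximation
    bits = sign_residual(v, stage1)           # 1-bit residual cleanup
    v_hat = reconstruct(stage1, bits)

    err_stage1 = math.dist(v, stage1)
    err_final = math.dist(v, v_hat)
    print(err_stage1, err_final)   # the residual pass tightens stage-1 error
    ```

    The division of labor is the same as described above: the first stage does the heavy lifting on structure and magnitude, so the final 1-bit pass only has to nudge a small, roughly centered residual.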