
Description
Hi,
The code to compute 8-bit non-sRGB output doesn't seem to follow the method outlined in the ASTC spec:
```cpp
const int c = (c0 * (64 - weight) + c1 * weight + 32) / 64;
// TODO(google): Handle conversion to sRGB or FP16 per C.2.19.
const int quantized = ((c * 255) + 32767) / 65536;
assert(quantized < 256);
```
The spec (Section 23.19) says "If sRGB conversion is not enabled and the decoding mode is decode_unorm8, then the top 8 bits of the interpolation result for the R, G, B and A channels are used as the final result."
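My reading of that sentence is that the non-sRGB decode_unorm8 path should simply keep the high byte of the 16-bit interpolation result. As a minimal sketch (my own hypothetical helper, not code from this repository):

```cpp
#include <cstdint>

// Hypothetical helper illustrating my reading of the spec's decode_unorm8
// rule: the final 8-bit value is just the top byte of the 16-bit
// interpolation result.
inline uint8_t DecodeUnorm8PerSpec(uint16_t interpolation_result) {
  return static_cast<uint8_t>(interpolation_result >> 8);
}
```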
The difference will be very slight, and I'm not sure whether it actually causes any visible issues, since the two scale factors are so close. I'm pointing it out here because the code already carries a TODO comment about this conversion, and because the deviation from the spec could break bit-exact decoding.
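To make the divergence concrete: a quick standalone check (my own sketch, not repository code) can count how often the two formulas disagree over all possible 16-bit interpolation results. For example, c = 129 gives 0 with a plain right shift but 1 with the formula quoted above.

```cpp
#include <cstdio>

int main() {
  // Compare the repository's quantization with the spec's "top 8 bits"
  // rule over every possible 16-bit interpolation result.
  int mismatches = 0;
  for (int c = 0; c <= 65535; ++c) {
    const int repo = ((c * 255) + 32767) / 65536;  // formula quoted above
    const int spec = c >> 8;                       // top 8 bits per the spec
    if (repo != spec) {
      ++mismatches;
    }
  }
  std::printf("inputs where the two methods disagree: %d\n", mismatches);
  return 0;
}
```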