The data fed into the model is 8-bit quantized (256 levels), taking integer values from 0 to 255. Why is the input not normalized to [0, 1] or [-1, 1]?
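
For clarity, by normalization I mean something like the following (a minimal NumPy sketch; the array shape and dtype are just illustrative assumptions, not taken from the model in question):

```python
import numpy as np

# Raw 8-bit inputs: integers in [0, 255]
x = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

# Scale to [0, 1]
x_unit = x.astype(np.float32) / 255.0

# Scale to [-1, 1]
x_signed = x.astype(np.float32) / 127.5 - 1.0
```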