This page summarizes related work that might be useful for Ristretto users. If you know of research that could further improve Ristretto, or of other frameworks or methods that are more advanced than Ristretto, feel free to send me an email (pmgysel at ucdavis.edu). Please do not contact me about your own work.
- TensorFlow Quantization (P. Warden): 8-bit fixed-point neural networks. It supports many different quantized layer types, more than Ristretto currently provides. A sketch of the underlying fixed-point arithmetic follows this list.
- Layer-wise fine-tuning (D. Lin): This paper shows improved results when networks are quantized layer by layer, with fine-tuning after each quantization step. In contrast, Ristretto quantizes all layers together; the second sketch below contrasts the two strategies.
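The snippet below is a minimal sketch of 8-bit fixed-point quantization in general, not TensorFlow's or Ristretto's actual implementation. The function names, the example weights, and the choice of 6 fractional bits are illustrative assumptions; real frameworks choose the scaling per layer from the observed value range.

```python
import numpy as np

def quantize_to_int8(x, frac_bits):
    """Map floats to 8-bit fixed point with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    # Round to the nearest representable value and saturate to the int8 range.
    return np.clip(np.round(x * scale), -128, 127).astype(np.int8)

def dequantize(q, frac_bits):
    """Recover approximate float values from the 8-bit codes."""
    return q.astype(np.float32) / (2.0 ** frac_bits)

weights = np.array([0.75, -0.031, 1.2, -0.6], dtype=np.float32)
q = quantize_to_int8(weights, frac_bits=6)   # 2 integer bits, 6 fractional bits
print(q)                                     # int8 codes: [ 48  -2  77 -38]
print(dequantize(q, frac_bits=6))            # ~[0.75, -0.03125, 1.203125, -0.59375]
```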
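The second sketch contrasts the two quantization strategies in schematic form. This is not Ristretto or Caffe code; the network object and the `quantize_layer` and `fine_tune` callbacks are hypothetical placeholders supplied by the caller.

```python
from typing import Callable, Iterable

def layerwise_quantization(net,
                           layers: Iterable[str],
                           quantize_layer: Callable,
                           fine_tune: Callable):
    """Quantize one layer at a time, fine-tuning after each step (D. Lin)."""
    for layer in layers:
        quantize_layer(net, layer)   # convert this layer to fixed point
        fine_tune(net)               # recover accuracy before the next layer
    return net

def whole_network_quantization(net,
                               layers: Iterable[str],
                               quantize_layer: Callable,
                               fine_tune: Callable):
    """Quantize every layer at once, then fine-tune once (Ristretto's strategy)."""
    for layer in layers:
        quantize_layer(net, layer)
    fine_tune(net)
    return net
```

The layer-wise variant runs fine-tuning once per layer rather than once per network, so it costs more training time in exchange for the chance to recover accuracy after every individual quantization step.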