Commit fc109ba: On June 26, we updated our code on GitHub. This new version introduces several significant changes which might affect older versions of Ristretto:
- Ristretto network description files:
  - precision field: the quantization modes are DYNAMIC_FIXED_POINT, MINIFLOAT and INTEGER_POWER_OF_2_WEIGHTS. (An example network description is sketched after this list.)
- Ristretto layers:
  - New DeconvolutionRistrettoLayer for applications like semantic segmentation.
  - Quantize layer inputs and layer outputs (previously: only quantize layer outputs).
  - Remove DataRistrettoLayer, since images can be quantized in the first convolutional layer.
- Layers without multiplications:
  - For precision = INTEGER_POWER_OF_2_WEIGHTS, the layer activations are now in dynamic fixed point.
- Quantization of layer activations:
  - In older Ristretto versions, only the layer outputs were quantized. With this new version, layer inputs and layer outputs are in reduced word width format.
  - For dynamic fixed point approximation, we introduce two new protobuffer fields (a small numeric sketch of dynamic fixed point follows this list):
    - bw_layer_in for the word width of layer inputs
    - fl_layer_in for the fractional length of layer inputs
- Ristretto Demo on SqueezeNet:
  - We renamed the models/SqueezeNet/demo folder (new: models/SqueezeNet/RistrettoDemo).
  - We renamed the demo shell scripts in examples/ristretto/.
- Ristretto Documentation:
  - We removed the docs/ristretto/ folder. All documentation is available on this web page.
- Refactoring in code base:
  - We improved the naming of class functions and variables for clarity:
    - “fixed point” -> “dynamic fixed point”
    - “mini floating point” -> “minifloat”
    - “power-2-weights” -> “integer power of 2 weights”
  - Updated documentation.
- Merge with upstream: This Ristretto version is merged with official Caffe commit f28f5ae.
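
As referenced in the network description notes above, a quantized layer in a Ristretto network description file might look roughly like the sketch below. Only the precision field, the DYNAMIC_FIXED_POINT mode, and the new bw_layer_in / fl_layer_in fields come from this changelog; the layer type string, the quantization_param message name, the output-side fields, and all numeric values are assumptions added for illustration.

```
layer {
  name: "conv1"                      # hypothetical layer name
  type: "ConvolutionRistretto"       # assumed type string for a quantized convolution
  bottom: "data"
  top: "conv1"
  quantization_param {               # assumed message name
    precision: DYNAMIC_FIXED_POINT   # one of the three quantization modes
    bw_layer_in: 8                   # word width of the layer inputs (new field)
    fl_layer_in: 4                   # fractional length of the layer inputs (new field)
    # output-side counterparts (e.g. bw_layer_out / fl_layer_out) are assumed here
  }
}
```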
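
To make the word width / fractional length terminology concrete, here is a small, self-contained C++ sketch of dynamic fixed point quantization of a single value. It illustrates the arithmetic only, with made-up parameter values; it is not the code path Ristretto uses internally.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Sketch only: quantize x to dynamic fixed point with word width `bw` bits
// and fractional length `fl` (the quantities set by bw_layer_in / fl_layer_in
// for layer inputs). Not Ristretto's actual implementation.
double quantize_dynamic_fixed_point(double x, int bw, int fl) {
  const double step = std::pow(2.0, -fl);                    // value of one least significant bit
  const double max_val = (std::pow(2.0, bw - 1) - 1) * step; // largest representable value
  const double min_val = -std::pow(2.0, bw - 1) * step;      // smallest representable value
  const double rounded = std::round(x / step) * step;        // round to the nearest code
  return std::min(std::max(rounded, min_val), max_val);      // saturate to the representable range
}

int main() {
  // 8-bit word with 5 fractional bits: step 0.03125, range [-4.0, 3.96875].
  std::printf("%.5f\n", quantize_dynamic_fixed_point(1.2345, 8, 5)); // 1.25000
  std::printf("%.5f\n", quantize_dynamic_fixed_point(10.0, 8, 5));   // 3.96875 (saturated)
  return 0;
}
```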