Ristretto: SqueezeNet Example

Brewing an 8-bit Dynamic Fixed Point SqueezeNet

SqueezeNet [1] by Iandola et al. matches the accuracy of AlexNet [2] with over 50X fewer network parameters. This guide explains how to quantize SqueezeNet to dynamic fixed point, fine-tune the condensed network, and finally benchmark the net on the ImageNet validation data set.

In order to reproduce the following results, you first need to do these steps:

  • Download the SqueezeNet V1.0 parameters from the DeepScale repository and put them into the models/SqueezeNet/ folder. These are the pre-trained 32-bit floating point weights provided by DeepScale.
  • We have already fine-tuned an 8-bit dynamic fixed point SqueezeNet for you. Download it from the link provided in models/SqueezeNet/RistrettoDemo/ristrettomodel-url and put it into that folder.
  • Make two modifications to the SqueezeNet prototxt file (models/SqueezeNet/train_val.prototxt): adjust the path to your local ImageNet data in both source fields, as sketched right after this list.
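
The edit in train_val.prototxt looks roughly like this (abbreviated; the LMDB path is a placeholder for your local ImageNet copy):

layer {
  name: "data"
  type: "Data"
  include { phase: TRAIN }
  data_param {
    source: "/your/path/to/ilsvrc12_train_lmdb"  # point at your ImageNet training LMDB
    ...
  }
}

The TEST-phase data layer gets the same change, with its source pointing at your validation LMDB.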

[1] Iandola, Forrest N., et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360 (2016).

[2] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (2012).

Quantization to Dynamic Fixed Point

This guide assumes you previously installed Ristretto (make all) and that you run all commands from Caffe root.

As a first step, we condense the 32-bit floating point network to dynamic fixed point. SqueezeNet performs well with 32-bit and 16-bit dynamic fixed point; however, we can reduce the bit-width further. There is a trade-off between parameter compression and network accuracy. The Ristretto tool can automatically find the appropriate bit-width for each part of the network:

./examples/ristretto/00_quantize_squeezenet.sh

This script will quantize the SqueezeNet model. You will see messages flying by as Ristretto tests the quantized model with different word widths. The final summary will look like this:

I0626 16:56:25.035650 14319 quantization.cpp:260] Network accuracy analysis for
I0626 16:56:25.035667 14319 quantization.cpp:261] Convolutional (CONV) and fully
I0626 16:56:25.035681 14319 quantization.cpp:262] connected (FC) layers.
I0626 16:56:25.035693 14319 quantization.cpp:263] Baseline 32bit float: 0.5768
I0626 16:56:25.035715 14319 quantization.cpp:264] Dynamic fixed point CONV
I0626 16:56:25.035728 14319 quantization.cpp:265] weights: 
I0626 16:56:25.035740 14319 quantization.cpp:267] 16bit: 0.557159
I0626 16:56:25.035761 14319 quantization.cpp:267] 8bit:  0.555959
I0626 16:56:25.035781 14319 quantization.cpp:267] 4bit:  0.00568
I0626 16:56:25.035802 14319 quantization.cpp:270] Dynamic fixed point FC
I0626 16:56:25.035815 14319 quantization.cpp:271] weights: 
I0626 16:56:25.035828 14319 quantization.cpp:273] 16bit: 0.5768
I0626 16:56:25.035848 14319 quantization.cpp:273] 8bit:  0.5768
I0626 16:56:25.035868 14319 quantization.cpp:273] 4bit:  0.5768
I0626 16:56:25.035888 14319 quantization.cpp:273] 2bit:  0.5768
I0626 16:56:25.035909 14319 quantization.cpp:273] 1bit:  0.5768
I0626 16:56:25.035938 14319 quantization.cpp:275] Dynamic fixed point layer
I0626 16:56:25.035959 14319 quantization.cpp:276] activations:
I0626 16:56:25.035979 14319 quantization.cpp:278] 16bit: 0.57578
I0626 16:56:25.036012 14319 quantization.cpp:278] 8bit:  0.57058
I0626 16:56:25.036051 14319 quantization.cpp:278] 4bit:  0.0405805
I0626 16:56:25.036073 14319 quantization.cpp:281] Dynamic fixed point net:
I0626 16:56:25.036087 14319 quantization.cpp:282] 8bit CONV weights,
I0626 16:56:25.036100 14319 quantization.cpp:283] 1bit FC weights,
I0626 16:56:25.036113 14319 quantization.cpp:284] 8bit layer activations:
I0626 16:56:25.036126 14319 quantization.cpp:285] Accuracy: 0.5516
I0626 16:56:25.036141 14319 quantization.cpp:286] Please fine-tune.

The analysis shows that both the activations and the parameters of convolutional layers can be reduced to 8 bits with a top-1 accuracy drop of less than 3%. Since SqueezeNet contains no fully connected layers, the quantization results for this layer type can be ignored. Finally, the tool quantizes all considered network parts simultaneously. The results indicate that an 8-bit SqueezeNet achieves a top-1 accuracy of 55.16% (compared to the 57.68% baseline). To improve on this, we will fine-tune the network in the next step.
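
To make these results concrete, here is a minimal Python sketch of the dynamic fixed point idea: each network part gets its own fixed point format, with enough integer bits to cover the largest magnitude in that part and the remaining bits holding the fraction. This illustrates the concept only; the function and toy values below are ours, not Ristretto's code.

import numpy as np

def quantize_dfp(x, bit_width=8):
    # Dynamic fixed point: pick an integer length (il) large enough to
    # cover the biggest magnitude in this blob; the remaining bits (fl)
    # form the fractional part, giving a grid of spacing 2**-fl.
    il = int(np.ceil(np.log2(np.max(np.abs(x))) + 1))
    fl = bit_width - il
    step = 2.0 ** (-fl)
    # Round to the nearest grid point and saturate to the representable range.
    q = np.clip(np.round(x / step), -2 ** (bit_width - 1), 2 ** (bit_width - 1) - 1)
    return q * step

w = np.array([0.42, -0.07, 0.003, -0.5])   # toy "conv weights"
print(quantize_dfp(w, 8))   # stays close to the originals
print(quantize_dfp(w, 4))   # coarse grid: 0.003 collapses to 0.0

The 4-bit case shows where the accuracy cliff in the log above comes from: the quantization grid becomes too coarse to keep small weights apart from zero.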

Fine-tune Dynamic Fixed Point Parameters

The previous step quantized the 32-bit floating point SqueezeNet to 8-bit fixed point and generated the appropriate network description file (models/SqueezeNet/RistrettoDemo/quantized.prototxt). We can now fine-tune the condensed network to regain as much of its original accuracy as possible.

During fine-tuning, Ristretto will keep a set of high-precision weights. For each training batch, these 32-bit floating point weights are stochastically rounded to 8-bit fixed point. The 8-bit parameters are then used for the forward and backward propagation, and finally the weight update is applied to the high precision weights.
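
The stochastic rounding step can be written down in a few lines; the sketch below is illustrative Python, not Ristretto's implementation (a real implementation also saturates to the representable range, which is omitted here):

import numpy as np

def stochastic_round(x, fl):
    # Fixed point grid with spacing 2**-fl. Values are rounded up with a
    # probability equal to their fractional remainder, so the rounding is
    # unbiased: E[stochastic_round(x)] == x.
    scaled = x * 2.0 ** fl
    floor = np.floor(scaled)
    up = np.random.random_sample(np.shape(x)) < (scaled - floor)
    return (floor + up) / 2.0 ** fl

# 0.30 on a grid of spacing 0.25 becomes 0.25 (p=0.8) or 0.5 (p=0.2),
# which averages back to exactly 0.30.
w = np.full(100000, 0.30)
print(stochastic_round(w, 2).mean())   # ~0.30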

Fine-tuning can be done with the traditional caffe-tool. Just start the following script:

./examples/ristretto/01_finetune_squeezenet.sh
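
Under the hood, this is a standard Caffe training run. The script essentially wraps a call of the following shape; the solver and weights paths here are assumptions, so check the script for the actual ones:

./build/tools/caffe train \
    --solver=models/SqueezeNet/RistrettoDemo/solver_finetune.prototxt \
    --weights=models/SqueezeNet/squeezenet_v1.0.caffemodel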

After 1,200 fine-tuning iterations (roughly 5 hours on a Tesla K40 GPU) with batch size 32*32, our condensed SqueezeNet reaches a top-1 validation accuracy of around 57%. The fine-tuned network parameters are located at models/SqueezeNet/RistrettoDemo/squeezenet_iter_1200.caffemodel. All in all, you have successfully trimmed SqueezeNet to 8-bit dynamic fixed point, with an accuracy loss below 1%.

Note that you could achieve a slightly better final result by improving the number format (i.e., choice of integer and fractional length for different network parts).

Benchmark Dynamic Fixed Point SqueezeNet

In this step, you will benchmark an existing dynamic fixed point SqueezeNet which we fine-tuned for you. You can run this scoring step even if you skipped the previous fine-tuning step. The model can be benchmarked with the traditional caffe-tool; all the tool needs is a network description file and the network parameters.

./examples/ristretto/02_benchmark_fixedpoint_squeezenet.sh
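
Again, this is a thin wrapper around the standard caffe scoring mode. It roughly amounts to the call below, where the weights file name and iteration count are assumptions (the script contains the actual values):

./build/tools/caffe test \
    --model=models/SqueezeNet/RistrettoDemo/quantized.prototxt \
    --weights=models/SqueezeNet/RistrettoDemo/squeezenet_finetuned.caffemodel \
    --gpu=0 --iterations=2000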

You should get a top-1 accuracy of 56.95%.