Author: Philipp Gysel

On Resource-Efficient Inference using Trained Convolutional Neural Networks

We are pleased to release new results on the approximation of deep Convolutional Neural Networks (CNNs). We used Ristretto to approximate trained 32-bit floating point CNNs. The key results can be summarized as follows: 8-bit dynamic fixed point suffices to approximate three ImageNet networks, and 32-bit multiplications can be replaced by bit-shifts for small networks. Traditionally, CNNs are trained in 32-bit or 64-bit floating point. Deep CNNs are resource-intensive both in terms of computati...
Read More

Hardware-oriented Approximation of Convolutional Neural Networks

This extended abstract describes our Ristretto framework for CNN compression. Last week Philipp presented the Ristretto poster at ICLR'16 in San Juan. Abstract: High computational complexity hinders the widespread use of Convolutional Neural Networks (CNNs), especially on mobile devices. Hardware accelerators are arguably the most promising approach for reducing both execution time and power consumption. One of the most important steps in accelerator development is hardware-oriented model app...
Read More

AlexNet Forward Path Implementation

In this post I'll talk in detail about the forward path implementation of the famous AlexNet. This Convolutional Neural Network (CNN) by Krizhevsky, Sutskever, and Hinton won the ILSVRC 2012 competition by a remarkable margin. If you are starting to work with CNNs or Deep Learning in general, this post will give you a head start. You can find a straightforward implementation of the CNN's forward path on our GitHub site. Feel free to download it and classify arbitrary images. When I was looking for ...
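The heart of any CNN forward path is the convolution layer. As a taste of what the full implementation involves, here is a deliberately naive single-channel 2D convolution in plain Python; the function name and the choice of valid (no) padding are illustrative assumptions, and the real AlexNet layers additionally handle multiple channels, padding, and bias terms.

```python
def conv2d_forward(image, kernel, stride=1):
    """Naive single-channel 2D convolution with valid padding (sketch).

    image and kernel are lists of lists (rows of numbers). Slides the
    kernel over the image and accumulates elementwise products.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh = (ih - kh) // stride + 1   # output height
    ow = (iw - kw) // stride + 1   # output width
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y * stride + ky][x * stride + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# A 2x2 all-ones kernel over a 3x3 all-ones image sums 4 pixels per position:
feature_map = conv2d_forward([[1, 1, 1]] * 3, [[1, 1], [1, 1]])
```

Production implementations replace these nested loops with matrix multiplications (im2col + GEMM) for speed, but the arithmetic is exactly what this sketch shows.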
Read More