Data Parser in Caffe

This document briefly explains how to use the parser in the Caffe code base to read and parse the specification of a neural network. Caffe's text file format for specifying models uses the Google Protocol Buffer text format. The code that actually reads a model can be found in src/caffe/util/io.cpp:

bool ReadProtoFromTextFile(const char* filename, Message* proto) {
  int fd = open(filename, O_RDONLY);
  CHECK_NE(fd, -1) << "File not found: " << filename;
  FileInputStream* input = new FileInputStream(fd);
  bool success = google::protobuf::TextFormat::Parse(input, proto);
  delete input;
  close(fd);
  return success;
}

Therefore, in order to read and parse the data format used by Caffe, it is required to install and configure Google Protocol Buffers. The installation process on Ubuntu is explained below.

1. Download Google Protocol Buffer from here.

2. Before compiling and installing protobuf, install autoconf:

sudo apt-get install autoconf

3. Install build-essential and libtool as well:

sudo apt-get install build-essential libtool

4. Then go to the directory where you downloaded Google Protocol Buffer and execute the following:

$ ./configure
$ make
$ make check
$ sudo make install

By default, the package will be installed to /usr/local. However, on many platforms, /usr/local/lib is not part of LD_LIBRARY_PATH. You can add it (and run sudo ldconfig to refresh the linker cache), but it may be easier to just install to /usr instead. To do this, invoke configure as follows:

./configure --prefix=/usr

If you already built the package with a different prefix, make sure to run “make clean” before building again.

5. Now the installation is finished. Note that in order to work with Google Protocol Buffers, a .proto file is required. The definitions in a .proto file are simple: you add a message for each data structure you want to serialize, then specify a name and a type for each field in the message. For Caffe, this .proto file is already prepared; it can be found at src/caffe/proto/caffe.proto. The .proto file starts with a package declaration (package caffe;), which helps to prevent naming conflicts between different projects; in C++, the generated classes are placed in a namespace matching the package name. As you can see, different messages are defined in this file, each used for loading a different group of parameters. In what follows, the NetParameter message is explained in detail.

message NetParameter {
  optional string name = 1; // consider giving the network a name
  // The input blobs to the network.
  repeated string input = 3;
  // The shape of the input blobs.
  repeated BlobShape input_shape = 8;

  // 4D input dimensions -- deprecated.  Use "shape" instead.
  // If specified, for each input blob there should be four
  // values specifying the num, channels, height and width of the input blob.
  // Thus, there should be a total of (4 * #input) numbers.
  repeated int32 input_dim = 4;

  // Whether the network will force every layer to carry out backward operation.
  // If set False, then whether to carry out backward is determined
  // automatically according to the net structure and learning rates.
  optional bool force_backward = 5 [default = false];
  // The current "state" of the network, including the phase, level, and stage.
  // Some layers may be included/excluded depending on this state and the states
  // specified in the layers' include and exclude fields.
  optional NetState state = 6;

  // Print debugging information about results while running Net::Forward,
  // Net::Backward, and Net::Update.
  optional bool debug_info = 7 [default = false];

  // The layers that make up the net.  Each of their configurations, including
  // connectivity and behavior, is specified as a LayerParameter.
  repeated LayerParameter layer = 100;  // ID 100 so layers are printed last.

  // DEPRECATED: use 'layer' instead.
  repeated V1LayerParameter layers = 2;
}

A message is just an aggregate containing a set of typed fields. Many standard simple data types are available as field types, including bool, int32, float, double, and string. You can also add further structure to your messages by using other message types as field types; you can even define message types nested inside other messages, and you can define enum types if you want one of your fields to take one of a predefined list of values.

The "= 1", "= 2" markers on each field identify the unique "tag" that field uses in the binary encoding. Tag numbers 1-15 require one less byte to encode than higher numbers, so as an optimization you can use those tags for commonly used or repeated elements, leaving tags 16 and higher for less-commonly used optional elements. Each element in a repeated field requires re-encoding the tag number, so repeated fields are particularly good candidates for this optimization.

Each field must be annotated with one of the following modifiers:

    • required: a value for the field must be provided; otherwise the message will be considered "uninitialized". If libprotobuf is compiled in debug mode, serializing an uninitialized message will cause an assertion failure. In optimized builds, the check is skipped and the message will be written anyway. However, parsing an uninitialized message will always fail (by returning false from the parse method). Other than this, a required field behaves exactly like an optional field.
    • optional: the field may or may not be set. If an optional field value isn't set, a default value is used.
    • repeated: the field may be repeated any number of times (including zero). The order of the repeated values will be preserved in the protocol buffer. Think of repeated fields as dynamically sized arrays.

6. Once the .proto file is in place, the next step is to generate the classes required for serialized reads and writes. We have already installed the compiler, so it is enough to compile the .proto file. The general form of the invocation is:

protoc -I=$SRC_DIR --cpp_out=$DST_DIR $SRC_DIR/addressbook.proto

For caffe.proto, with the source and destination directories both set to the current directory:

~/Desktop/test$ protoc -I=./ --cpp_out=./ caffe.proto

After compilation, the following files are generated in your destination directory:

  • caffe.pb.h, the header which declares your generated classes.
  •, which contains the implementation of your classes.

7. Now that the generated files are ready, we can go ahead and write the parser. Below is a C++ program I have written that receives a neural network model, then reads, parses, and prints its parameters.

#include <fcntl.h>
#include <stdint.h>

#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/text_format.h>

#include <algorithm>
#include <fstream>  // NOLINT(readability/streams)
#include <iostream>
#include <string>
#include <vector>

#include "caffe.pb.h"

using namespace std;

using google::protobuf::io::FileInputStream;
using google::protobuf::io::FileOutputStream;
using google::protobuf::io::ZeroCopyInputStream;
using google::protobuf::io::CodedInputStream;
using google::protobuf::io::ZeroCopyOutputStream;
using google::protobuf::io::CodedOutputStream;
using google::protobuf::Message;
int main() {
  caffe::NetParameter param;
  caffe::LayerParameter lparam;
  const char* filename = "deploy.prototxt";
  int fd = open(filename, O_RDONLY);
  if (fd == -1) {
    cout << "File not found: " << filename << endl;
    return 1;
  }
  FileInputStream* input = new FileInputStream(fd);
  bool success = google::protobuf::TextFormat::Parse(input, &param);
  if (!success) {
    cout << "Failed to parse: " << filename << endl;
    return 1;
  }
  // Network-level parameters.
  cout << "Network Name: " << << endl;
  cout << "Input: " << param.input(0) << endl;
  for (int j = 0; j < param.input_dim_size(); j++) {
    cout << "Input Dim " << j << ": " << param.input_dim(j) << endl;
  }
  cout << "Number of Layers (in implementation): " << param.layer_size() << endl << endl;
  // Per-layer parameters.
  for (int nlayers = 0; nlayers < param.layer_size(); nlayers++) {
    lparam = param.layer(nlayers);
    cout << endl << "Parameters for Layer " << nlayers + 1 << ":" << endl;
    cout << "Name: " << << endl;
    cout << "Type: " << lparam.type() << endl;
    for (int i = 0; i < lparam.bottom_size(); i++) {
      cout << "Bottom: " << lparam.bottom(i) << endl;
    }
    for (int i = 0; i < lparam.top_size(); i++) {
      cout << "Top: " << << endl;
    }
    for (int i = 0; i < lparam.param_size(); i++) {
      cout << "LR_MULT: " << lparam.param(i).lr_mult() << endl;
      cout << "decay_MULT: " << lparam.param(i).decay_mult() << endl;
    }
    // Layer-type-specific parameters, guarded by the generated has_*() checks.
    if (lparam.has_convolution_param()) {
      cout << "Number of Outputs: " << lparam.convolution_param().num_output() << endl;
      cout << "Pad: " << lparam.convolution_param().pad() << endl;
      cout << "Kernel Size: " << lparam.convolution_param().kernel_size() << endl;
      cout << "Stride: " << lparam.convolution_param().stride() << endl;
      cout << "Group: " << lparam.convolution_param().group() << endl;
    }
    if (lparam.has_lrn_param()) {
      cout << "Local Size: " << lparam.lrn_param().local_size() << endl;
      cout << "Alpha: " << lparam.lrn_param().alpha() << endl;
      cout << "Beta: " << lparam.lrn_param().beta() << endl;
    }
    if (lparam.has_pooling_param()) {
      cout << "Pool: " << lparam.pooling_param().pool() << endl;
      cout << "Kernel Size: " << lparam.pooling_param().kernel_size() << endl;
      cout << "Stride: " << lparam.pooling_param().stride() << endl;
    }
    if (lparam.has_inner_product_param()) {
      cout << "Number of Outputs: " << lparam.inner_product_param().num_output() << endl;
    }
    if (lparam.has_dropout_param()) {
      cout << "Dropout Ratio: " << lparam.dropout_param().dropout_ratio() << endl;
    }
  }
  delete input;
  return 0;
}

8. Finally, compile the code. Note that the generated must be compiled together with main.cpp, and the program must be linked against libprotobuf. A part of the output for AlexNet is shown here.

~/Desktop/test$ g++ -I /usr/include -L /usr/lib main.cpp -lprotobuf -pthread
~/Desktop/test$ ./a.out
Parameters for Layer 1:
Name: conv1
Type: Convolution
Bottom: data
Top: conv1
decay_MULT: 1
decay_MULT: 0
Number of Outputs: 96
Pad: 0
Kernel Size: 11
Stride: 4
Group: 1

Parameters for Layer 2:
Name: relu1
Type: ReLU
Bottom: conv1
Top: conv1

Parameters for Layer 3:
Name: norm1
Type: LRN
Bottom: conv1
Top: norm1
Local Size: 5
Alpha: 0.0001
Beta: 0.75