caffe

Error trying to run custom caffenet with new data

我的梦境 submitted on 2019-12-11 23:57:40
Question: I just tried to train the provided CaffeNet network with my own LMDB file. I changed the fully connected layers to convolutional layers of depth 4096 with a custom frame_size. Here is the code: weight_param = dict(lr_mult=1, decay_mult=1) bias_param = dict(lr_mult=2, decay_mult=0) learned_param = [weight_param, bias_param] batch_size = 256 # 0 means non-updating parameters frozen_param = [dict(lr_mult=0)] * 2 def conv_relu(bottom, ks, nout, stride=1, pad=0, group=1, param=learned_param, weight_filler
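A minimal sketch of how the truncated conv_relu helper is usually completed in pycaffe (the filler settings below are assumptions, not the author's values):

    from caffe import layers as L

    weight_param = dict(lr_mult=1, decay_mult=1)
    bias_param = dict(lr_mult=2, decay_mult=0)
    learned_param = [weight_param, bias_param]
    frozen_param = [dict(lr_mult=0)] * 2  # lr_mult=0 leaves the weights untouched

    def conv_relu(bottom, ks, nout, stride=1, pad=0, group=1,
                  param=learned_param,
                  weight_filler=dict(type='gaussian', std=0.01),
                  bias_filler=dict(type='constant', value=0.1)):
        # Convolution followed by an in-place ReLU, as used when turning
        # the fully connected layers into convolutional ones.
        conv = L.Convolution(bottom, kernel_size=ks, num_output=nout,
                             stride=stride, pad=pad, group=group, param=param,
                             weight_filler=weight_filler, bias_filler=bias_filler)
        return conv, L.ReLU(conv, in_place=True)

A fully convolutional replacement for fc6 would then be built with something like conv_relu(n.pool5, 6, 4096), with the kernel size matching the spatial extent of pool5.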

How do I reduce 4096-dimensional feature vector to 1024-dimensional vector in CNN Caffemodel?

余生长醉 submitted on 2019-12-11 22:07:18
Question: I used the 16-layer VGGNet to extract features from an image. It outputs a 4096-dimensional feature vector. However, I need a 1024-dimensional vector. How do I further reduce this 4096-dimensional vector to a 1024-dimensional one? Do I need to add a new layer on top of fc7? Answer 1: Yes, you need to add another layer on top of fc7. This is how your last few layers should look: layers { bottom: "fc7" top: "fc7" name: "relu7" type: RELU } layers { bottom: "fc7" top: "fc7" name: "drop7" type: DROPOUT dropout_param {
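For the Python interface, a rough NetSpec sketch of the same idea (the layer name fc8_reduce and the placeholder body up to fc7 are assumptions; in practice n.fc7 comes from the existing VGG-16 definition):

    import caffe
    from caffe import layers as L

    n = caffe.NetSpec()
    # Placeholder stand-in for the real VGG-16 body up to fc7.
    n.data = L.Input(shape=dict(dim=[1, 3, 224, 224]))
    n.fc7 = L.InnerProduct(n.data, num_output=4096)
    n.relu7 = L.ReLU(n.fc7, in_place=True)
    n.drop7 = L.Dropout(n.fc7, dropout_ratio=0.5, in_place=True)
    # New dimensionality-reduction layer on top of fc7: 4096 -> 1024.
    n.fc8_reduce = L.InnerProduct(n.fc7, num_output=1024,
                                  weight_filler=dict(type='xavier'))
    print(n.to_proto())

The new layer starts from random weights, so it has to be trained (or at least fine-tuned) before its 1024-dimensional output is meaningful.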

How to evaluate whether the result is good or not in Caffe?

旧时模样 submitted on 2019-12-11 19:15:35
Question: I train my data set using Caffe. I set (in solver.prototxt): test_iter: 1000 test_interval: 1000 max_iter: 450000 base_lr: 0.0001 lr_policy: "step" stepsize: 100000 The test accuracy is around 0.02 and the test loss is around 1.6 at the first test. Then the test accuracy increases and the test loss decreases at every test. At iteration 32000 the test accuracy is 1 and the test loss is 0.45. Then the accuracy decreases and the loss increases. I think the loss is too large when the accuracy is 1. How do I know
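As a quick sanity check (assuming the reported loss is the usual averaged SoftmaxWithLoss), the loss value can be translated into the average probability the net assigns to the correct class, which shows that accuracy 1 with loss 0.45 simply means "correct but not very confident":

    import math

    # Cross-entropy loss per sample is -log(p_correct), so the average
    # probability on the true class is roughly exp(-loss).
    loss = 0.45
    print(math.exp(-loss))      # ~0.64: correct predictions, modest confidence

    # A confidently correct prediction contributes a much smaller loss:
    print(-math.log(0.99))      # ~0.01

When the test accuracy later drops while the test loss rises, that is the usual sign of overfitting, and the snapshot saved around the best test iteration is the one worth keeping.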

Caffe - Doing forward pass with multiple input blobs

╄→гoц情女王★ submitted on 2019-12-11 18:37:56
Question: The following are the input layers of my fine-tuned model: layer { type: "HDF5Data" name: "data" top: "Meta" hdf5_data_param { source: "/path/to/train.txt" batch_size: 50 } include { phase: TRAIN } } layer { name: "data" type: "ImageData" top: "X" top: "Labels" include { phase: TRAIN } transform_param { mirror: true crop_size: 227 mean_file: "data/ilsvrc12/imagenet_mean.binaryproto" } image_data_param { source: "/path/to/train.txt" batch_size: 50 new_height: 256 new_width: 256 } } layer { type:
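At test time, a deploy net with matching input blobs can be driven from pycaffe by passing one keyword argument per input to forward(). A rough sketch, assuming a deploy file that declares "X" and "Meta" as its inputs (the file names and the metadata shape below are made up):

    import numpy as np
    import caffe

    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    image_batch = np.zeros((1, 3, 227, 227), dtype=np.float32)  # preprocessed image
    meta_batch = np.zeros((1, 1, 1, 10), dtype=np.float32)      # assumed metadata shape

    # forward() fills each named input blob and runs the whole net once.
    out = net.forward(X=image_batch, Meta=meta_batch)

The arrays must match the shapes declared for those input blobs, and every input blob of the net has to be supplied in the same call.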

Error with Caffe C++ example with different deploy.prototxt file

被刻印的时光 ゝ submitted on 2019-12-11 14:38:32
Question: I trained a model using the MNIST example architecture (but on my own set of 3 image classes) and have been trying to integrate it into the C++ example. I modified the MNIST architecture file to make it similar to the deploy.prototxt file for the C++ example (replacing the train and test layers with the input layer). Unfortunately, when I run the C++ program it gives me the following error: F0827 14:57:28.427697 25511 insert_splits.cpp:35] Unknown bottom blob 'label' (layer 'accuracy', bottom
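The check fails because the deploy file only provides the "data" blob, while the accuracy (and loss) layers still ask for a "label" bottom. One way to strip those layers programmatically, sketched with the protobuf text format (file names are assumptions):

    from caffe.proto import caffe_pb2
    from google.protobuf import text_format

    net = caffe_pb2.NetParameter()
    with open('my_deploy.prototxt') as f:
        text_format.Merge(f.read(), net)

    # Keep only layers that do not consume the "label" blob
    # (drops the accuracy and loss layers that a deploy net cannot feed).
    deploy = caffe_pb2.NetParameter()
    deploy.CopyFrom(net)
    del deploy.layer[:]
    for layer in net.layer:
        if 'label' not in layer.bottom:
            deploy.layer.add().CopyFrom(layer)

    with open('my_deploy_fixed.prototxt', 'w') as f:
        f.write(text_format.MessageToString(deploy))

Removing the accuracy and loss layers by hand in the prototxt works just as well; the classification output then comes from the last remaining layer, typically a Softmax "prob" layer as in the MNIST deploy file.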

Check failed: how to use an HDF5 data layer in a deep network?

天涯浪子 submitted on 2019-12-11 13:24:17
Question: I have the training and label data in data.mat (200 training samples with 6000 features each; the labels are (-1, +1) and are saved in data.mat). I am trying to convert my data (train and test) to HDF5 and run Caffe using: load input.mat hdf5write('my_data.h5', '/new_train_x', single( permute(reshape(new_train_x,[200, 6000, 1, 1]),[4:-1:1] ) )); hdf5write('my_data.h5', '/label_train', single( permute(reshape(label_train,[200, 1, 1, 1]), [4:-1:1] ) ) , 'WriteMode', 'append' ); hdf5write('my
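For reference, the same conversion done from Python with h5py looks roughly like this (variable and dataset names follow the question; data.mat is assumed to contain new_train_x and label_train, and labels intended for SoftmaxWithLoss would also need remapping from -1/+1 to 0/1):

    import h5py
    import numpy as np
    import scipy.io

    mat = scipy.io.loadmat('data.mat')
    X = mat['new_train_x'].astype(np.float32).reshape(200, 6000, 1, 1)  # N x C x H x W
    y = mat['label_train'].astype(np.float32).reshape(200, 1)

    with h5py.File('my_data.h5', 'w') as f:
        f['new_train_x'] = X          # dataset names must match the HDF5Data tops
        f['label_train'] = y

    # The HDF5Data layer's "source" is a text file listing the .h5 files:
    with open('train_h5_list.txt', 'w') as f:
        f.write('my_data.h5\n')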

Detecting pedestrians using a CaffeNet model at a moderate framerate

吃可爱长大的小学妹 submitted on 2019-12-11 12:53:41
Question: I trained the CaffeNet model (more precisely, the CIFAR-10 model for two-class classification). Now the model is ready for detection. For model testing with a single image, I use test_predict_imagenet.cpp. I haven't tested how fast the code can run on a 640 x 480 image. My target of 5~10 frames/sec would be just right for offline detection. I understand that I need to implement multi-size detection (i.e. something like we do in face detection, where the original image is re-sized for
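A rough pycaffe sketch of the multi-size idea, purely for illustration (the file names, the 32 x 32 window matching the CIFAR-10 input, the stride, the "prob" output name and the threshold are all assumptions, not the author's code):

    import numpy as np
    import caffe
    import cv2

    net = caffe.Net('cifar10_deploy.prototxt', 'pedestrian.caffemodel', caffe.TEST)
    net.blobs['data'].reshape(1, 3, 32, 32)

    frame = cv2.imread('frame.png').astype(np.float32)
    win, stride = 32, 16
    detections = []
    for scale in (1.0, 0.75, 0.5):
        img = cv2.resize(frame, None, fx=scale, fy=scale)
        for y in range(0, img.shape[0] - win, stride):
            for x in range(0, img.shape[1] - win, stride):
                patch = img[y:y + win, x:x + win].transpose(2, 0, 1)  # HWC -> CHW
                net.blobs['data'].data[0] = patch
                prob = net.forward()['prob'][0]
                if prob[1] > 0.9:  # class 1 assumed to be "pedestrian"
                    detections.append((x / scale, y / scale, win / scale, prob[1]))

Running one forward pass per window is far too slow for 5~10 frames/sec; batching many windows per forward pass, or converting the classifier to a fully convolutional net so that one pass scores the whole image, is what usually makes that framerate reachable.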

How can I get two output values (for each of the two classes) for a binary classifier in Caffe?

眉间皱痕 submitted on 2019-12-11 12:38:58
Question: I'm experimenting with the LeNet network as a binary classifier (yes, no). The first and the last few layers in the configuration file for testing are the following: layer { name: "data" type: "ImageData" top: "data" top: "label" include { phase: TEST } transform_param { scale: 0.00390625 } image_data_param { source: "examples/my_example/test_images_labels.txt" batch_size: 1 new_height: 128 new_width: 128 } } ... layer { name: "ip2" type: "InnerProduct" bottom: "ip1" top: "ip2" param { lr_mult: 1
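The usual way to get one value per class is to give the last InnerProduct layer num_output: 2 and read the result through a Softmax layer. A minimal NetSpec sketch of that tail (the 500-unit ip1 and the Input shape are assumptions standing in for the rest of LeNet):

    import caffe
    from caffe import layers as L

    n = caffe.NetSpec()
    n.data = L.Input(shape=dict(dim=[1, 1, 128, 128]))
    n.ip1 = L.InnerProduct(n.data, num_output=500, weight_filler=dict(type='xavier'))
    n.ip2 = L.InnerProduct(n.ip1, num_output=2,   # one output per class (no / yes)
                           weight_filler=dict(type='xavier'))
    n.prob = L.Softmax(n.ip2)                     # two probabilities that sum to 1
    print(n.to_proto())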

Creating Caffe networks with Python

旧城冷巷雨未停 submitted on 2019-12-11 12:14:52
Question: We all know this Python code can create Caffe networks: n = caffe.NetSpec() n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb, transform_param=dict(scale=1. / 255), ntop=2) n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier')) n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX) The layer's name is whatever appears on the right of "n.": for example, for "n.data" the layer's name is "data". Write simple code. If I want to
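Since the name comes from the attribute assignment on the NetSpec, names can also be generated at runtime with setattr(). A short sketch (the loop, filter counts and layer names below are made up for illustration):

    import caffe
    from caffe import layers as L

    n = caffe.NetSpec()
    n.data = L.Input(shape=dict(dim=[1, 3, 32, 32]))

    bottom = n.data
    for i in range(1, 4):
        conv = L.Convolution(bottom, kernel_size=3, num_output=16 * i,
                             weight_filler=dict(type='xavier'))
        setattr(n, 'conv%d' % i, conv)  # registers layers named conv1, conv2, conv3
        bottom = conv

    print(n.to_proto())

to_proto() then emits the network definition text that can be written out to a .prototxt file.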