pycaffe

How can I generate a data layer (HDF5) for training and testing in the same prototxt?

Question: I have a data layer of HDF5 type. It contains both a TRAIN and a TEST phase, as expected:

name: "LogisticRegressionNet"
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  hdf5_data_param {
    source: "examples/hdf5_classification/data/train.txt"
    batch_size: 10
  }
}
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  hdf5_data_param {
    source: "examples/hdf5_classification/data/test.txt"
    batch_size: 10
  }
}

I want to use python to
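The question is cut off above, but a common way to emit such a prototxt from Python is caffe.NetSpec. A minimal sketch, assuming the goal is to generate the two HDF5Data layers shown (the paths and batch sizes come from the prototxt above; the helper name and output file are assumptions):

import caffe
from caffe import layers as L

def hdf5_data(source, batch_size, phase):
    # One HDF5Data layer with two tops and an include { phase: ... } rule.
    n = caffe.NetSpec()
    n.data, n.label = L.HDF5Data(source=source, batch_size=batch_size,
                                 ntop=2, include=dict(phase=phase))
    return n

train = hdf5_data('examples/hdf5_classification/data/train.txt', 10, caffe.TRAIN)
test  = hdf5_data('examples/hdf5_classification/data/test.txt', 10, caffe.TEST)

with open('auto_net.prototxt', 'w') as f:
    f.write('name: "LogisticRegressionNet"\n')
    f.write(str(train.to_proto()))   # TRAIN data layer
    f.write(str(test.to_proto()))    # TEST data layer

The remaining (non-data) layers can be appended to either NetSpec before writing the file.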

How to store volumetric patch into HDF5?

Question: I have volumetric data of size 256x128x256. Due to limited memory, I cannot feed the whole volume to CAFFE directly. Hence, I will randomly choose n_sample patches of 50x50x50 extracted from the volumetric data and store them in HDF5. I was able to randomly extract the patches and their labels from the raw data with the extract_patch_from_volumetric_data function. Now I want to store these patches in the HDF5 file. The code below performs the task. Could you take a look and help me verify my
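The posted code is cut off, but writing such patches with h5py typically looks like the sketch below, assuming voxel-wise label patches and the (N, C, D, H, W) layout Caffe's HDF5Data layer expects; extract_patch_from_volumetric_data is the poster's function, and its signature here is a guess:

import h5py
import numpy as np

n_sample = 1000  # illustrative
patches = np.zeros((n_sample, 1, 50, 50, 50), dtype=np.float32)
labels  = np.zeros((n_sample, 1, 50, 50, 50), dtype=np.float32)

for i in range(n_sample):
    # Assumed signature: returns one 50x50x50 data patch and its label patch.
    p, l = extract_patch_from_volumetric_data(volume, label_volume, size=50)
    patches[i, 0] = p
    labels[i, 0] = l

with h5py.File('train_patches.h5', 'w') as f:
    f.create_dataset('data', data=patches)    # dataset names must match the
    f.create_dataset('label', data=labels)    # tops of the HDF5Data layer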

Caffe, setting custom weights in layer

Question: I have a network. In one place I want to use concat, as in this picture. Unfortunately, the network doesn't train. To understand why, I want to change the weights around the concat, so that all values coming from FC4096 get weight 1 and all values coming from FC16000 get weight 0 at the beginning. I know that FC4096 alone gets me 57% accuracy, so with a learning rate of 10^-6 I should be able to see why the layers after the concatenation didn't learn. The question is: how can I set all values from FC4096 to 1 and all values from FC16000
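Concat itself has no learnable parameters, so the usual trick is to edit the weights of the layer that consumes the concatenated blob through net.params. A minimal sketch, assuming an InnerProduct layer named fc_after_concat whose 4096 + 16000 input columns come from FC4096 first and FC16000 second (all file and layer names are assumptions):

import caffe

net = caffe.Net('train_val.prototxt', caffe.TEST)
W = net.params['fc_after_concat'][0].data   # shape: (num_output, 4096 + 16000)
W[:, :4096] = 1.0    # contributions from FC4096 start at weight 1
W[:, 4096:] = 0.0    # contributions from FC16000 start at weight 0
net.save('custom_init.caffemodel')          # then train from this snapshot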

Caffe Iteration loss versus Train Net loss

Question: I'm using caffe to train a CNN with a Euclidean loss layer at the bottom, and my solver.prototxt file is configured to display every 100 iterations. I see something like this:

Iteration 4400, loss = 0
I0805 11:10:16.976716 1936085760 solver.cpp:229] Train net output #0: loss = 2.92436 (* 1 = 2.92436 loss)

I'm confused as to what the difference between the Iteration loss and the Train net loss is. Usually the iteration loss is very small (around 0) and the Train net output loss is a bit larger. Can
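In stock Caffe, the "Iteration N, loss = ..." line is the net's total loss (each loss output multiplied by its loss_weight), smoothed over the last average_loss iterations from solver.prototxt, while each "Train net output #k" line reports one loss blob for the current batch alone. A sketch of that relationship (the numbers and names are illustrative, not from the post):

from collections import deque

average_loss = 10                  # solver.prototxt: average_loss (default 1)
recent = deque(maxlen=average_loss)

def displayed_iteration_loss(loss_outputs, loss_weights):
    # Each "Train net output #k: loss = L (* w = w*L loss)" line shows one
    # loss blob; the iteration loss is their weighted sum...
    batch_loss = sum(w * l for l, w in zip(loss_outputs, loss_weights))
    recent.append(batch_loss)
    # ...averaged over the last average_loss iterations.
    return sum(recent) / len(recent)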

How to apply a model pre-trained on 3-channel images to single-channel images?

Question: I tried to use a pre-trained model that was trained on three-channel color images; however, I am getting an error because of the shape difference. Could someone let me know how I can tackle this issue? One user suggested using a Tile layer, but I could not find any relevant documentation for using this layer, or any other solution. I really appreciate your help.

Answer 1: According to caffe.proto, there is not much information about the Tile layer. If you look at the code, it just copies the data tiles times
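An alternative to tiling the input is to adapt the first convolution layer's weights instead, averaging its filters over the channel axis. A minimal sketch, assuming the first conv layer is called conv1 and that the single-channel prototxt renames it (e.g. conv1_gray) so the weight copy skips it instead of failing on the shape mismatch; all file and layer names are assumptions:

import caffe

rgb_net = caffe.Net('rgb_deploy.prototxt', 'rgb.caffemodel', caffe.TEST)
# gray_deploy.prototxt: identical net, but 1 input channel and the first
# conv layer renamed to conv1_gray so it is skipped during weight loading.
gray_net = caffe.Net('gray_deploy.prototxt', 'rgb.caffemodel', caffe.TEST)

w = rgb_net.params['conv1'][0].data                  # (num_out, 3, kh, kw)
gray_net.params['conv1_gray'][0].data[...] = w.mean(axis=1, keepdims=True)
gray_net.params['conv1_gray'][1].data[...] = rgb_net.params['conv1'][1].data
gray_net.save('gray.caffemodel')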

Multiple pretrained networks in Caffe

Question: Is there a simple way (e.g. without modifying the caffe code) to load weights from multiple pretrained networks into one network? The network contains some layers with the same dimensions and names as both pretrained networks. I am trying to achieve this using NVIDIA DIGITS and Caffe.

EDIT: I thought it wouldn't be possible to do it directly from DIGITS, as confirmed by the answers. Can anyone suggest a simple way to modify the DIGITS code to be able to select multiple pretrained networks? I checked the
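Outside DIGITS, plain pycaffe can layer weights from several caffemodels onto one net by calling copy_from() repeatedly; each call copies only the layers whose names and shapes match. A minimal sketch (file names are assumptions):

import caffe

net = caffe.Net('combined_train_val.prototxt', caffe.TEST)
net.copy_from('pretrained_a.caffemodel')  # fills layers shared with net A
net.copy_from('pretrained_b.caffemodel')  # fills layers shared with net B
                                          # (overwrites any overlap)
net.save('combined_init.caffemodel')      # use as the single pretrained model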

Caffe accuracy bigger than 100%

I'm building a net, and when I use the custom train function provided in the LeNet example with a batch size bigger than 110, my accuracy gets bigger than 1 (100%). If I use batch size 32, I get 30 percent accuracy. With batch size 64 my net accuracy is 64, and with batch size 128 the accuracy is 1.2. My images are 32x32.

Train dataset: 56 images of Neutral faces, 60 images of Surprise faces.
Test dataset: 15 images of Neutral faces, 15 images of Surprise faces.

This is my code:

def train(solver):
    niter = 200
    test_interval = 25
    train_loss = zeros(niter)
    test_acc = zeros(int(np.ceil(niter
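The code is cut off above, but with only 30 test images, any test batch bigger than the test set wraps around and counts duplicated images, which can push a naive correct-count ratio past 100%. A sketch of an accuracy computation that stays in [0, 1], assuming a sequential (non-shuffled) test set and the usual 'score' and 'label' blob names (both assumptions):

import numpy as np

def test_accuracy(solver, n_test_images=30, batch_size=10):
    correct, seen = 0, 0
    n_batches = int(np.ceil(n_test_images / float(batch_size)))
    for _ in range(n_batches):
        solver.test_nets[0].forward()
        preds = solver.test_nets[0].blobs['score'].data.argmax(axis=1)
        labels = solver.test_nets[0].blobs['label'].data
        take = min(batch_size, n_test_images - seen)  # drop wrapped samples
        correct += np.sum(preds[:take] == labels[:take])
        seen += take
    return correct / float(seen)   # divide by images actually scored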

Caffe fully convolutional CNN - how to use the crop parameters

Question: I am trying to train a fully convolutional network for my problem. I am using the implementation at https://github.com/shelhamer/fcn.berkeleyvision.org. I have different image sizes, and I am not sure how to set the 'offset' param in the 'Crop' layer. What are the default values of the 'offset' param? How do I use this param to crop the images around the center?

Answer 1: According to the Crop layer documentation, it takes two bottom blobs and outputs one top blob. Let's call the bottom blobs A and B
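The answer is cut off, but the Crop semantics it starts to describe are: the top blob takes A's data cropped to B's shape from crop_param.axis onward, starting at crop_param.offset in each cropped dimension (offset defaults to 0). For a center crop, the offset along each axis is (A_dim - B_dim) / 2, which must be hard-coded because prototxt offsets are static. A NetSpec sketch under those assumptions (blob names and sizes are illustrative):

from caffe import layers as L

def center_crop(a, b, a_hw=(100, 100), b_hw=(80, 80)):
    # Crop the spatial dims (axis=2 for N,C,H,W blobs); one offset per
    # cropped axis, chosen so the crop window is centered in A.
    offsets = [(a_hw[0] - b_hw[0]) // 2, (a_hw[1] - b_hw[1]) // 2]
    return L.Crop(a, b, crop_param=dict(axis=2, offset=offsets))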

batch size does not work for caffe with deploy.prototxt

I'm trying to make my classification process a bit faster. I thought of increasing the first input_dim in my deploy.prototxt, but that does not seem to work. It's even a little bit slower than classifying each image one by one.

deploy.prototxt

input: "data"
input_dim: 128
input_dim: 1
input_dim: 120
input_dim: 160
... net description ...

python net initialization

net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
net.blobs['data'].reshape(128, 1, 120, 160)
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
# transformer settings

python classification

images=
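The classification code is cut off, but the batch only pays off if all 128 preprocessed images are written into the 'data' blob first and net.forward() runs once per batch rather than once per image. A sketch continuing from the initialization above (image_files and the 'prob' output name are assumptions):

import numpy as np
import caffe

batch = np.zeros(net.blobs['data'].data.shape, dtype=np.float32)  # (128, 1, 120, 160)
for i, fname in enumerate(image_files[:128]):
    img = caffe.io.load_image(fname, color=False)        # single-channel input
    batch[i] = transformer.preprocess('data', img)

net.blobs['data'].data[...] = batch
out = net.forward()                    # one forward pass for the whole batch
preds = out['prob'].argmax(axis=1)     # per-image class predictions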