caffe

Caffe hangs after printing data -> label

て烟熏妆下的殇ゞ submitted on 2019-12-11 04:25:48
Question: I'm trying to train a LeNet on my own data (37-by-37 grayscale images in 1024 categories). I created the lmdb files and changed the size of the output layer to 1024. When I ran caffe train with my solver file, the program got stuck after printing:

...
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "score" bottom: "label" top: "loss" }
I0713 17:11:13.334890 9595 layer_factory.hpp:77] Creating layer data
I0713 17:11:13.334939 9595 net.cpp:91] Creating Layer data
I0713 17:11:13.334950
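A hang right after "Creating Layer data" often means the data layer is blocked opening or reading the lmdb. Before debugging further, a quick sanity check (stdlib only; the directory name `train_lmdb` is an assumed placeholder) that the lmdb directory actually exists and contains a non-empty data.mdb can rule out an empty or mis-pathed database:

```python
import os

def check_lmdb_dir(path):
    """Return True if `path` looks like a populated LMDB directory."""
    data_file = os.path.join(path, "data.mdb")
    if not os.path.isdir(path):
        print(f"{path}: not a directory")
        return False
    if not os.path.isfile(data_file):
        print(f"{path}: no data.mdb inside")
        return False
    size = os.path.getsize(data_file)
    print(f"{path}: data.mdb is {size} bytes")
    return size > 0

check_lmdb_dir("train_lmdb")  # path is a placeholder; use the source in your data layer
```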

Image mean subtraction vs BatchNormalization - Caffe

China☆狼群 submitted on 2019-12-11 04:18:32
Question: I have a question regarding image preprocessing in Caffe. When I use the BatchNormalization layer in my caffemodel, do I still need the preprocessing step "image mean subtraction" on all my training images before the training phase starts? Or is this done in the BatchNormalization layer? Thank you very much =)

Answer 1: Image mean subtraction does something different than BatchNormalization and is used for a different purpose. BatchNormalization normalizes a batch and not every single image and is
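To make the answer's distinction concrete, here is a small NumPy sketch (shapes and names are illustrative): mean subtraction removes one fixed, dataset-wide mean image from every input, while batch normalization standardizes each channel using statistics of the current batch — they compute different things:

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.normal(loc=5.0, scale=2.0, size=(8, 3, 4, 4))  # N x C x H x W

# Mean subtraction: one fixed mean image (here computed over this batch
# as a stand-in for the dataset mean); a per-pixel shift, variance untouched.
mean_image = batch.mean(axis=0)
centered = batch - mean_image

# Batch normalization: per-channel statistics of the CURRENT batch;
# zero mean AND unit variance per channel.
mu = batch.mean(axis=(0, 2, 3), keepdims=True)
var = batch.var(axis=(0, 2, 3), keepdims=True)
bn = (batch - mu) / np.sqrt(var + 1e-5)
```

Whether mean subtraction is still worthwhile in front of a BatchNorm network is the author's open question; the sketch only shows that one does not compute the other.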

Caffe's second "top" of the "Accuracy" layer

こ雲淡風輕ζ submitted on 2019-12-11 03:08:16
Question: Looking at the code of the "Accuracy" layer, I see there is an option for a second output/"top" for this layer. What does this second output produce?

Answer 1: Looking at accuracy_layer.hpp, where the number of outputs for the layer is defined, there's this comment:

// If there are two top blobs, then the second blob will contain
// accuracies per class.

So the second "top" of the "Accuracy" layer simply reports per-class accuracies. Just as a side note for layer Accuracy, the reported Accuracy is
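A hedged prototxt sketch of how such a layer might be declared with two tops (the blob names "score" and "label" are assumptions, not from the question):

```
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "score"
  bottom: "label"
  top: "accuracy"            # first top: overall accuracy
  top: "per_class_accuracy"  # second top: one accuracy value per class
  include { phase: TEST }
}
```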

How to use new pretraining model with different dataset in DIGITS (different labels)?

走远了吗. submitted on 2019-12-11 02:39:13
Question: I want to use VGG_ILSVRC_19_layers as a pretrained model in DIGITS, but with a different dataset. Do I need different label files? How can I upload this model and use it for my dataset? For the VGG 16 layers I got:

ERROR: Cannot copy param 0 weights from layer 'fc6'; shape mismatch. Source param shape is 1 1 4096 25088 (102760448); target param shape is 4096 32768 (134217728). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.

how can modify
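The error message itself states the standard fix: rename the mismatched layer so its weights are re-initialized instead of copied from the snapshot. A hedged prototxt sketch (the new name, the bottom blob "pool5", and num_output are placeholders; any downstream layer that consumed "fc6" must also be pointed at the new top name):

```
layer {
  name: "fc6_new"            # renamed: no weights copied, learned from scratch
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6_new"
  inner_product_param {
    num_output: 4096
  }
}
```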

Error: H5LTfind_dataset(file_id, dataset_name_) Failed to find HDF5 dataset label

你说的曾经没有我的故事 submitted on 2019-12-11 02:25:18
Question: I want to use an HDF5 file to input my data and labels into my CNN. I created the hdf5 file with MATLAB. Here is my code:

h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train/image',[522 775 3 numFrames]);
h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train/anno',[522 775 3 numFrames]);
h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/label',[1 numFrames
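Caffe's HDF5 data layer looks up datasets by the layer's top blob names — typically "data" and "label" at the root of the file — which is why the long nested dataset paths in the h5create calls above lead to "Failed to find HDF5 dataset label". A hedged h5py sketch of the expected layout (dimensions are shrunk placeholders in N x C x H x W order; the filename is taken from the question):

```python
import h5py
import numpy as np

# Datasets must be named after the data layer's top blobs, at the file root.
data = np.random.rand(4, 3, 52, 77).astype(np.float32)          # N x C x H x W
label = np.random.randint(0, 2, size=(4, 1)).astype(np.float32)

with h5py.File("uNetDataSet.h5", "w") as f:
    f.create_dataset("data", data=data)    # matches top: "data"
    f.create_dataset("label", data=label)  # matches top: "label"
```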

When to stop training in caffe?

假装没事ソ submitted on 2019-12-11 02:18:05
Question: I am using bvlc_reference_caffenet for training. I am doing both training and testing. Below is an example log of my trained network:

I0430 11:49:08.408740 23343 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:49:21.221074 23343 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:49:34.038710 23343 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:49:46.816813 23343 data_layer.cpp:73] Restarting data prefetching from start.
I0430 11:49:56
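As background for reading such logs: "Restarting data prefetching from start" only means the data layer has consumed one full pass over its database and wrapped around; it says nothing about convergence. The usual stopping signal is the TEST-phase loss/accuracy leveling off or degrading. A hedged solver sketch (all values are placeholders) that makes validation visible often enough to judge this, with snapshots so an earlier, better iteration can be kept:

```
# solver.prototxt (sketch; values are placeholders)
net: "train_val.prototxt"
test_iter: 100          # validation batches per test pass
test_interval: 1000     # run validation every 1000 training iterations
max_iter: 450000
snapshot: 10000         # periodic snapshots let you roll back to the best one
snapshot_prefix: "snapshots/caffenet"
```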

Directory structure and labeling in Caffe

二次信任 submitted on 2019-12-11 02:05:38
Question: I would like to check whether my understanding of organizing my folders and labeling is correct with regard to Caffe's way of doing it. My train directory structure looks like below:

~/Documents/software_dev/caffe/data/smalloffice/images/train
    a_person
    not_a_person
    train.txt

where both a_person and not_a_person are directories. My train.txt file looks like below:

train.txt:
----------
not_a_person/1_rotated.jpg 0
not_a_person/2_rotated.jpg 0
not_a_person/3_rotated.jpg 0
not_a_person/4_rotated.jpg 0
not
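The layout above matches what Caffe's convert_imageset tool expects: each train.txt line is a path relative to a root folder followed by an integer label. A hedged stdlib sketch that regenerates such a file, one label per class directory (directory names taken from the question; ordering of class_dirs decides which class gets label 0):

```python
import os

def write_train_txt(root, class_dirs, out_name="train.txt"):
    """Write '<relative/path.jpg> <label>' lines, one class per directory."""
    lines = []
    for label, d in enumerate(class_dirs):
        for fname in sorted(os.listdir(os.path.join(root, d))):
            lines.append(f"{d}/{fname} {label}")
    with open(os.path.join(root, out_name), "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

# e.g. write_train_txt("images/train", ["not_a_person", "a_person"])
# gives not_a_person/* label 0 and a_person/* label 1, as in the question.
```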

Caffe reshape / upsample fully connected layer

懵懂的女人 submitted on 2019-12-11 00:16:12
Question: Assuming we have a layer like this:

layer {
  name: "fully-connected"
  type: "InnerProduct"
  bottom: "bottom"
  top: "top"
  inner_product_param {
    num_output: 1
  }
}

The output is batch_size x 1. In several papers (for example link1, page 3, picture at the top, or link2, page 4 at the top) I have seen such a layer used at the end to come up with a 2D image for pixel-wise prediction. How is it possible to transform this into a 2D image? I was thinking of reshape or deconvolution, but I cannot figure
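One common route (a hedged sketch; all dimensions are placeholders, and a Deconvolution layer is the other option the question mentions): give the InnerProduct one output per pixel and follow it with a Reshape layer that folds the vector back into an image:

```
layer {
  name: "fully-connected"
  type: "InnerProduct"
  bottom: "bottom"
  top: "fc"
  inner_product_param { num_output: 1024 }   # 32 * 32: one value per pixel
}
layer {
  name: "reshape-to-image"
  type: "Reshape"
  bottom: "fc"
  top: "score_map"
  # dim: 0 copies the batch dimension; result is N x 1 x 32 x 32
  reshape_param { shape { dim: 0 dim: 1 dim: 32 dim: 32 } }
}
```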

How to store volumetric patch into HDF5?

跟風遠走 submitted on 2019-12-11 00:06:17
Question: I have volumetric data of size 256x128x256. Due to limited memory, I cannot feed the whole data to CAFFE directly. Hence, I will randomly choose n_sample patches of 50x50x50 extracted from the volumetric data and store them in HDF5. I successfully extracted random patches from the raw data and its label with my extract_patch_from_volumetric_data function. I want to store these patches in the HDF5 data. The code below performs the task. Could you look at and verify help me my
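Since the question's extract_patch_from_volumetric_data is not shown, here is a hedged NumPy stand-in for the random-patch step, using the sizes stated in the question (the resulting n_sample x 50 x 50 x 50 array is what would then be written to HDF5):

```python
import numpy as np

def extract_random_patches(volume, patch_size=50, n_samples=8, seed=0):
    """Randomly crop n_samples cubes of side patch_size from a 3-D volume."""
    rng = np.random.default_rng(seed)
    D, H, W = volume.shape
    patches = np.empty((n_samples, patch_size, patch_size, patch_size),
                       dtype=volume.dtype)
    for i in range(n_samples):
        # valid start indices keep the whole cube inside the volume
        z = rng.integers(0, D - patch_size + 1)
        y = rng.integers(0, H - patch_size + 1)
        x = rng.integers(0, W - patch_size + 1)
        patches[i] = volume[z:z+patch_size, y:y+patch_size, x:x+patch_size]
    return patches

vol = np.zeros((256, 128, 256), dtype=np.float32)  # size from the question
patches = extract_random_patches(vol, patch_size=50, n_samples=8)
```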

Caffe, setting custom weights in layer

混江龙づ霸主 submitted on 2019-12-10 23:18:09
Question: I have a network in which, at one place, I want to use concat, as in this picture. Unfortunately, the network doesn't train. To understand why, I want to change the weights in the concat, meaning that all values from FC4096 get 1 and all values from FC16000 get 0 at the beginning. I know that FC4096 alone gets me 57% accuracy, so with learning rate 10^-6 I will understand why the layers after the concatenation didn't learn. The question is, how can I set all values from FC4096 to 1 and all values from FC16000
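A Concat layer itself has no learnable weights, so one hedged way to get the intended 1/0 weighting (blob names fc4096 and fc16000 assumed from the question) is to insert a Scale layer with a constant scalar multiplier in front of the branch to be silenced, leaving the other branch untouched (implicitly scaled by 1):

```
layer {
  name: "scale_fc16000"
  type: "Scale"
  bottom: "fc16000"
  top: "fc16000_scaled"
  param { lr_mult: 0 }               # keep the multiplier fixed at 0
  scale_param {
    axis: 0
    num_axes: 0                      # a single scalar multiplier
    filler { type: "constant" value: 0 }
  }
}
layer {
  name: "concat"
  type: "Concat"
  bottom: "fc4096"
  bottom: "fc16000_scaled"
  top: "concat"
}
```

Setting lr_mult to a nonzero value would instead let the network learn how much of FC16000 to admit over time.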