
Changing the input data layer during training in Caffe

让人想犯罪 __ Posted on 2019-12-06 13:50:05
Is it possible to change the input source of an ImageData layer or a MemoryData layer on the fly? I am trying to shuffle the data every epoch, but I have both images and some non-image features that I want to concatenate at a later stage in the network. I could not find a reliable way to shuffle the images and the other data while preserving the alignment between the two. So I am thinking of regenerating imagelist.txt as well as the non-image data (in memory) every epoch, attaching the new file to the ImageData layer, and initializing the MemoryData layer with the new data. How can I make sure…
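The alignment problem in the question can be handled with a single shared permutation: shuffle the image-list lines and the feature rows with the same index order, then rewrite imagelist.txt before each epoch. A minimal sketch with NumPy (the "path label" list format and the per-epoch seed are assumptions; the pycaffe calls are left as comments):

```python
import numpy as np

def shuffle_epoch(image_lines, features, seed):
    """Shuffle image-list entries and their non-image features with one
    shared permutation, so row i still matches line i afterwards."""
    assert len(image_lines) == len(features)
    rng = np.random.RandomState(seed)
    perm = rng.permutation(len(image_lines))
    shuffled_lines = [image_lines[i] for i in perm]
    return shuffled_lines, features[perm]

# Before each epoch: rewrite the list file the ImageData layer reads, then
# hand the reordered feature array to the MemoryData layer, e.g. with
# net.set_input_arrays(new_feats, labels) in pycaffe.
lines = ["img_0.jpg 0", "img_1.jpg 1", "img_2.jpg 0"]
feats = np.array([[0.1], [0.2], [0.3]])
new_lines, new_feats = shuffle_epoch(lines, feats, seed=0)
```

Because both containers are reordered by the same permutation, row j of the feature array always belongs to line j of the rewritten list file.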

Caffe accuracy bigger than 100%

纵饮孤独 Posted on 2019-12-06 11:52:19
I'm building a net, and when I use the custom train function provided in the LeNet example with a batch size bigger than 110, my accuracy gets bigger than 1 (100%). If I use batch size 32 I get 30 percent accuracy; with batch size 64 my net accuracy is 64; and with batch size 128 the accuracy is 1.2. My images are 32x32. Train dataset: 56 images of neutral faces, 60 images of surprise faces. Test dataset: 15 images of neutral faces, 15 images of surprise faces. This is my code:

    def train(solver):
        niter = 200
        test_interval = 25
        train_loss = zeros(niter)
        test_acc = zeros(int(np.ceil(niter…
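A test set of 30 images with a batch size of 128 wraps the data layer around, so the same images are counted more than once; if the running sum of correct predictions is then divided by a hard-coded denominator, accuracy scales with batch size exactly as described. A hedged sketch of the per-batch computation with the denominator taken from the predictions actually made (array shapes are assumptions):

```python
import numpy as np

def batch_accuracy(probs, labels):
    """Fraction of correct top-1 predictions in one batch: divide by the
    number of predictions made, never by a fixed constant."""
    return float(np.mean(probs.argmax(axis=1) == labels))

# Three predictions, two correct: accuracy is 2/3, and it cannot exceed 1
# no matter how large the batch is.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
acc = batch_accuracy(probs, labels)
```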

Merge two LMDB databases for feeding to the network (caffe)

时光怂恿深爱的人放手 Posted on 2019-12-06 11:06:23
Here are two LMDB databases. Is there any way to merge these two databases and feed them to a network using Caffe? Simply write a script using the Python lmdb interface. Something like:

    import lmdb

    env = lmdb.open("path/to/lmdbFile", max_dbs=3)
    db1 = env.open_db(b"db1Name")
    db2 = env.open_db(b"db2Name")
    new_db = env.open_db(b"newDBName")

    with env.begin(write=True) as txn:
        for key, value in txn.cursor(db=db1):
            txn.put(key, value, db=new_db)
        for key, value in txn.cursor(db=db2):
            txn.put(key, value, db=new_db)

or you could just as simply add one to the other by:

    for key, value in…

How can I have multiple losses in a network in Caffe?

前提是你 Posted on 2019-12-06 09:40:41
If I define multiple loss layers in a network, will there be multiple back-propagations happening from those ends to the beginning of the network? I mean, do they even work that way? Suppose I have something like this:

    Layer1 { }
    Layer2 { }
    ...
    Layer_n { }
    Layer_cls1 { bottom: layer_n  top: cls1 }
    Layer_cls_loss1 { type: some_loss  bottom: cls1  top: loss1 }
    Layer_n1 { bottom: layer_n ... }
    Layer_n2 { }
    ...
    layer_n3 { }
    Layer_cls2 { bottom: layer_n3  top: cls2 }
    Layer_cls_loss2 { type: some_loss  bottom: cls2  top: loss2 }
    layer_n4 { bottom: layer_n3 ... }
    ...
    layer_cls3End { top: cls_end  bottom: ... }
    loss { bottom: cls_end  top…
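There is still only one backward pass: Caffe accumulates the gradients arriving from every loss layer at the blobs where the paths merge, and each loss's contribution can be scaled with loss_weight. A hedged prototxt sketch of two such loss layers (the blob names follow the question; the shared label blob and the weights are assumptions):

```
layer {
  name: "loss1"
  type: "SoftmaxWithLoss"
  bottom: "cls1"
  bottom: "label"
  top: "loss1"
  loss_weight: 1.0
}
layer {
  name: "loss2"
  type: "SoftmaxWithLoss"
  bottom: "cls2"
  bottom: "label"
  top: "loss2"
  loss_weight: 0.5   # hypothetical: down-weight the auxiliary loss
}
```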

Pycaffe installation in Windows

≯℡__Kan透↙ Posted on 2019-12-06 09:02:08
Question: I had to work with Caffe in Python, so I tried to build Caffe for Windows using OpenCV and VS2013 in CPU-only mode. The process was successful and the build completed with a few warnings and no errors. After that I copied the build output into Lib/site-packages in my Anaconda installation so that I would be able to use it, but when I then tried to import caffe it shows an error:

    ImportError                               Traceback (most recent call last)
    <ipython-input-1-1cca3aa1f8c5> in <module>()
    ----> 1 import caffe
    F:\python…
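Rather than copying the build output into site-packages, a common alternative is to leave the build tree in place and point sys.path at Caffe's python folder before importing. A minimal sketch (the F:\caffe\python location is an assumption; use wherever your build put the caffe package and _caffe extension):

```python
import os
import sys

def add_caffe_path(build_python_dir):
    """Prepend Caffe's python folder to sys.path if it exists, so that
    'import caffe' resolves against the build tree."""
    if os.path.isdir(build_python_dir) and build_python_dir not in sys.path:
        sys.path.insert(0, build_python_dir)
        return True
    return False

# Hypothetical Windows build location; adjust to your tree.
add_caffe_path(r"F:\caffe\python")
# import caffe  # the compiled _caffe module must sit in that folder
```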

What is the order of mean values in Caffe's train.prototxt?

谁说我不能喝 Posted on 2019-12-06 06:39:27
Question: In my Caffe train.prototxt I'm doing some input data transformation, like this:

    transform_param {
      mirror: true
      crop_size: 321
      mean_value: 104  # Red ?
      mean_value: 116  # Blue ?
      mean_value: 122  # Green ?
    }

Now I want to store a modified version of my input images such that certain image regions are set to those mean values. The rationale is that those regions are then set to 0 during mean subtraction. However, I don't know what order of channels Caffe expects in such a prototxt file…
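Caffe loads images through OpenCV, so channels are ordered B, G, R, and the mean_value entries are applied in that same order (104 is the blue mean, 122 the red mean in the common ImageNet triple). A sketch of painting a region with the means so that mean subtraction later zeroes it out (the region coordinates are hypothetical):

```python
import numpy as np

# Channel order is B, G, R -- matching the order of the mean_value
# entries in transform_param.
MEAN_BGR = np.array([104, 116, 122], dtype=np.uint8)

img = np.zeros((321, 321, 3), dtype=np.uint8)  # H x W x C, BGR

# Fill a (hypothetical) region with the per-channel means; after Caffe's
# mean subtraction this region becomes exactly 0.
y0, y1, x0, x1 = 10, 50, 20, 60
img[y0:y1, x0:x1] = MEAN_BGR
```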

Caffe network getting very low loss but very bad accuracy in testing

限于喜欢 Posted on 2019-12-06 06:26:41
I'm somewhat new to Caffe, and I'm getting some strange behavior. I'm trying to fine-tune bvlc_reference_caffenet for an OCR task. I took the pretrained net, changed the last FC layer to the number of output classes that I have, and retrained. After a few thousand iterations I'm getting a loss of ~0.001 and an accuracy over 90 percent when the network tests. That said, when I run the network on my own data, I get awful results, not exceeding 7 or 8 percent. The code I'm using to run the net is:

    [imports]
    net = caffe.Classifier('bvlc_reference_caffenet…
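A large train/test accuracy gap like this usually means the Python-side preprocessing does not match the transform_param used during training (mean subtraction, RGB-to-BGR channel swap, and the 0-255 scale). A NumPy sketch of the caffenet-style transform, to compare against what caffe.Classifier is actually doing (the mean triple and input range are assumptions):

```python
import numpy as np

def preprocess(img_rgb_01, mean_bgr):
    """Mirror the usual caffenet input transform: scale a float RGB image
    in [0, 1] to [0, 255], reorder channels to BGR, subtract the
    per-channel mean, and move channels first (C x H x W)."""
    img = img_rgb_01 * 255.0          # raw_scale
    img = img[:, :, ::-1]             # RGB -> BGR (channel_swap)
    img = img - mean_bgr              # mean subtraction
    return img.transpose(2, 0, 1)     # H x W x C -> C x H x W

mean_bgr = np.array([104.0, 116.0, 122.0])
x = preprocess(np.ones((8, 8, 3)) * 0.5, mean_bgr)
```

If any of these steps was active during training but missing (or doubled) at deployment, test-time predictions degrade exactly as described.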

Caffe fully convolutional CNN - how to use the crop parameters

两盒软妹~` Posted on 2019-12-06 05:03:11
Question: I am trying to train a fully convolutional network for my problem, using the implementation at https://github.com/shelhamer/fcn.berkeleyvision.org . I have different image sizes, and I am not sure how to set the 'offset' param in the 'Crop' layer. What are the default values for the 'offset' param? How can I use this param to crop the images around the center?

Answer 1: According to the Crop layer documentation, it takes two bottom blobs and outputs one top blob. Let's call the bottom blobs A and B…
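For a center crop, the offset along each cropped dimension is half the size difference between the two bottom blobs. A small sketch of that arithmetic (Crop's default axis of 2, i.e. the spatial dimensions of an N x C x H x W blob, is assumed; helper name is hypothetical):

```python
def center_offsets(shape_a, shape_b, axis=2):
    """Offsets that crop blob A down to blob B's size around the center,
    for every dimension from `axis` onward."""
    offsets = []
    for d in range(axis, len(shape_a)):
        assert shape_a[d] >= shape_b[d], "A must be at least as large as B"
        offsets.append((shape_a[d] - shape_b[d]) // 2)
    return offsets

# A is 1 x 3 x 100 x 120, B is 1 x 3 x 80 x 80:
# start 10 rows down and 20 columns in.
offs = center_offsets((1, 3, 100, 120), (1, 3, 80, 80))
```

These values would then go into the crop_param offset entries of the Crop layer, one per cropped dimension.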

Caffe compilation fails due to unsupported gcc compiler version

落爺英雄遲暮 Posted on 2019-12-06 04:54:36
Question: I am struggling with Caffe compilation; unfortunately, I failed to compile it. Steps I followed:

    git clone https://github.com/BVLC/caffe.git
    cd caffe
    mkdir build
    cd build
    cmake ..
    make all

Running make all fails with the following error message:

    [  2%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile.dir/util/cuda_compile_generated_im2col.cu.o
    In file included from /usr/include/cuda_runtime.h:59:0,
                     from <command-line>:0:
    /usr/include/host_config.h:82:2: error: #error -- unsupported…
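This host_config.h error means the system gcc is newer than the installed CUDA release supports. The usual workaround is to point nvcc at an older host compiler instead of editing host_config.h. A sketch of the build commands (the gcc-4.8 path is an assumption; use whichever version your CUDA toolkit accepts):

```shell
cd caffe/build
# CUDA_HOST_COMPILER is the FindCUDA variable that nvcc uses as -ccbin.
cmake -DCUDA_HOST_COMPILER=/usr/bin/gcc-4.8 ..
make all
```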

Batch size does not work for Caffe with deploy.prototxt

做~自己de王妃 Posted on 2019-12-06 04:14:08
I'm trying to make my classification process a bit faster. I thought of increasing the first input_dim in my deploy.prototxt, but that does not seem to work; it's even a little slower than classifying each image one by one.

deploy.prototxt:

    input: "data"
    input_dim: 128
    input_dim: 1
    input_dim: 120
    input_dim: 160
    ... net description ...

Python net initialization:

    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
    net.blobs['data'].reshape(128, 1, 120, 160)
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    # transformer settings

Python classification:

    images =…
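With input_dim set to 128, batching only pays off if all 128 images are packed into the data blob before a single net.forward(); looping forward() per image keeps the per-call cost. A NumPy sketch of filling one batch blob (the shapes match the deploy.prototxt above; the pycaffe calls are left as comments):

```python
import numpy as np

BATCH, C, H, W = 128, 1, 120, 160

def fill_batch(images):
    """Pack up to BATCH preprocessed images into one input blob; one
    forward pass then classifies all of them together."""
    blob = np.zeros((BATCH, C, H, W), dtype=np.float32)
    n = min(len(images), BATCH)
    for i in range(n):
        blob[i] = images[i]
    return blob, n

# Hypothetical use with pycaffe:
#   net.blobs['data'].data[...] = blob
#   out = net.forward()          # probabilities for all n images at once
imgs = [np.ones((C, H, W), dtype=np.float32)] * 3
blob, n = fill_batch(imgs)
```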