caffe

caffe training loss does not converge

佐手、 submitted on 2019-12-12 04:22:58
Question: I'm getting the problem of non-converging training loss (batch size: 16, average loss: 10). I have tried the following:
+ Varying the learning rate lr (the initial lr = 0.002 causes a very high loss, around e+10; with lr = e-6 the loss seems small but still does not converge).
+ Adding initialization for the bias.
+ Adding regularization for the bias and weights.
This is the network structure and the training loss log. Hope to hear from you. Best regards.
Source: https://stackoverflow.com/questions/41234297/caffe
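Since the question is really about whether the loss converges under different learning rates, a minimal pycaffe sketch of stepping the solver manually and logging the loss can make that visible per iteration. The solver file name and the 'loss' blob name below are assumptions taken from a typical setup, not from the asker's actual files:

import caffe

# Step the solver one iteration at a time and print the training loss,
# so the effect of a given base_lr can be watched directly.
solver = caffe.SGDSolver('solver.prototxt')  # placeholder solver file

for it in range(1000):
    solver.step(1)  # one forward/backward/update pass
    loss = float(solver.net.blobs['loss'].data)
    if it % 20 == 0:
        print('iter %d  training loss %.4f' % (it, loss))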

How to generate Concat layer prototxt using python

筅森魡賤 submitted on 2019-12-12 04:22:21
Question: I have a prototxt as follows:
layer {
  name: "data"
  type: "HDF5Data"
  top: "data1"
  top: "data2"
  top: "label"
  include { phase: TRAIN }
  hdf5_data_param {
    source: "./source_list.txt"
    batch_size: 2
    shuffle: true
  }
}
layer {
  name: "concat"
  type: "Concat"
  bottom: "data1"
  bottom: "data2"
  top: "data"
  concat_param { concat_dim: 1 }
}
I want to generate the above prototxt using caffe NetSpec in Python, but my attempt was wrong. This is my code; please help me fix it. Thanks.
from caffe import layers as L ... n
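A minimal NetSpec sketch that should emit an equivalent prototxt. Note that NetSpec derives layer names automatically from the top names, so they may differ slightly from the hand-written version, and the output file name train.prototxt is an assumption:

import caffe
from caffe import layers as L

n = caffe.NetSpec()
# HDF5Data layer with three tops; include restricts it to the TRAIN phase.
n.data1, n.data2, n.label = L.HDF5Data(
    ntop=3,
    hdf5_data_param=dict(source='./source_list.txt', batch_size=2, shuffle=True),
    include=dict(phase=caffe.TRAIN))
# Concatenate data1 and data2 along the channel dimension.
n.data = L.Concat(n.data1, n.data2, concat_param=dict(concat_dim=1))

with open('train.prototxt', 'w') as f:
    f.write(str(n.to_proto()))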

How to run Linux libraries on Docker on Windows?

点点圈 submitted on 2019-12-12 04:22:08
Question: I am working on Windows and I need to use libraries which are available only on Linux (TensorFlow, Caffe). I would like to run the software with Docker. I do not clearly understand how Docker works, so I am completely lost when it comes to my problem. What should I do, and how should it work? Answer 1: Edit: About Windows Docker hosting capabilities (container on a Windows host): Windows 10 offers Docker host capabilities, but only based on Hyper-V, i.e. by means of Linux-like VMs. Windows 2016

Caffe install getting ImportError: DLL load failed: The specified module could not be found

≡放荡痞女 submitted on 2019-12-12 04:06:00
Question: I am trying to compile and run the snippets posted here, which basically let me visualize the network internals (feature maps). I have successfully compiled caffe and pycaffe using the caffe-windows branch, and I have copied the caffe folder into the T:\Anaconda\Lib\site-packages folder. Yet still, when I try to run this snippet of code in a Jupyter notebook:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make sure that caffe is on the python path:
caffe_root =
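A common alternative to copying the caffe folder into site-packages is to point Python at the pycaffe folder of the build tree, so that _caffe and its dependent DLLs are found together. This is only a sketch; the caffe_root path below is a placeholder, not the asker's actual path:

import sys

# Placeholder path to the compiled caffe-windows checkout; adjust it so that
# caffe/python and the dependent DLLs from the build sit next to each other.
caffe_root = 'T:/caffe-windows/'
sys.path.insert(0, caffe_root + 'python')

import caffe  # should now import without "DLL load failed"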

When I run my own images on caffe, it stops at Iteration 0, Testing net (#0)

半世苍凉 submitted on 2019-12-12 03:36:35
Question: I ran caffe and got this output: can anyone tell me what the problem is? I would really appreciate it! Answer 1: It seems like one (or more) of your label values are invalid; see this PR for information: If you have an invalid ground truth label, "SoftmaxWithLoss" will silently access invalid memory [...] The old check only worked in DEBUG mode and also only worked for CPU. Make sure your prediction vector length matches the number of labels you try to predict. From your comments, it seems like you have
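A quick sanity check along the lines of the answer can be run on the training list before launching caffe. This sketch assumes the usual "image_path label" listing file used with convert_imageset or an ImageData layer; the file name and num_classes are placeholders:

# Verify every ground-truth label is a valid class index in [0, num_classes).
num_classes = 1000  # placeholder: must match num_output of the last layer
with open('train_list.txt') as f:
    labels = [int(line.strip().split()[-1]) for line in f if line.strip()]

assert min(labels) >= 0, 'negative label found'
assert max(labels) < num_classes, (
    'label %d is out of range for num_output=%d' % (max(labels), num_classes))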

Test accuracy cannot improve when learning ZFNet on ILSVRC12

岁酱吖の submitted on 2019-12-12 03:35:02
Question: I've implemented a home-brewed ZFNet (prototxt) for my research. After 20k iterations with this definition, the test accuracy stays at ~0.001 (i.e., 1/1000), the test loss at ~6.9, and the training loss at ~6.9, which suggests the net just keeps guessing among the 1k classes. I've thoroughly checked the whole definition and tried changing some of the hyper-parameters to start a new training, but to no avail; the same results are shown on the screen. Could anyone shed some light on this? Thanks

How to train Caffe with only G and B channels

核能气质少年 submitted on 2019-12-12 02:51:26
Question: Is there any way to use only the G and B channels for training Caffe with an "ImageData" input layer? Answer 1: You can add a convolution layer on top of your input that will select G and B:
layer {
  name: "select_B_G"
  type: "Convolution"
  bottom: "data"
  top: "select_B_G"
  convolution_param {
    kernel_size: 1
    num_output: 2
    bias_term: false
  }
  param { lr_mult: 0 } # do not learn parameters for this layer
}
You'll need to do some net surgery prior to training to set the weights for this layer to be net.params[
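A sketch of what that net surgery step might look like, assuming caffe's usual BGR channel order (so channel 0 = B and channel 1 = G); the prototxt and output file names are placeholders:

import numpy as np
import caffe

net = caffe.Net('train_val.prototxt', caffe.TEST)  # placeholder net definition

# 1x1 convolution weights of shape (num_output, channels, kH, kW):
# output 0 copies the B channel, output 1 copies the G channel.
w = np.zeros((2, 3, 1, 1), dtype=np.float32)
w[0, 0, 0, 0] = 1.0
w[1, 1, 0, 0] = 1.0
net.params['select_B_G'][0].data[...] = w

# Save and use this as the initial weights when starting training.
net.save('bg_selector.caffemodel')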

Caffe - Image augmentation by cropping

↘锁芯ラ submitted on 2019-12-12 02:17:28
Question: The cropping strategy of caffe is to apply a random crop for training and a center crop for testing. From experiments, I observed that recognition accuracy improves if I provide two cropped versions (random and center) of the same image during training. These experimental data (size 100x100) are generated offline (not using caffe) by applying random and center cropping to a 115x115 sized image. I would like to know how to perform this task in caffe. Note: I was thinking of using 2 data layers
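One way the "2 data layers" idea could be wired up, assuming the random and center crops have already been generated offline into two separate list files, is to read each list with its own ImageData layer and concatenate the two streams along the batch axis. This is only a sketch under those assumptions; list file names, batch sizes, and layer variable names are placeholders:

import caffe
from caffe import layers as L

n = caffe.NetSpec()
# One ImageData layer per offline-generated crop list.
n.data_rand, n.label_rand = L.ImageData(
    ntop=2, image_data_param=dict(source='train_random_crops.txt', batch_size=8))
n.data_cent, n.label_cent = L.ImageData(
    ntop=2, image_data_param=dict(source='train_center_crops.txt', batch_size=8))

# Merge the two streams into one batch along the batch axis (axis 0).
n.data = L.Concat(n.data_rand, n.data_cent, concat_param=dict(axis=0))
n.label = L.Concat(n.label_rand, n.label_cent, concat_param=dict(axis=0))

print(n.to_proto())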

Convolutional Neural Networks with Caffe and NEGATIVE IMAGES

一个人想着一个人 submitted on 2019-12-12 00:26:43
Question: When I train a set of classes (let's say the number of classes is N) with Caffe (or any CNN framework) and then query the caffemodel, I get a probability for how well the image matches each class. So, if I take a picture similar to Class 1, I get the result: 1.- 90%, 2.- 10%, rest... 0%. The problem is: when I take a random picture (for example of my environment), I keep getting the same kind of result, where one of the classes is predominant (>90% probability) even though the image doesn't belong

ImportError: dlopen(…) library not open

落花浮王杯 submitted on 2019-12-12 00:18:06
Question: I am trying to run Google Research's DeepDream code on a Mac running OS X 10.9.5. There are a few dependencies that I had to install. I am using the Anaconda distribution of Python and I made sure that I have all the required packages. The hardest part was installing Caffe. I have ATLAS installed via Fink. Then I compiled caffe and pycaffe. When I ran 'make runtest' all tests passed. I also ran 'make distribute'. When I run the notebook released by Google, I get the following error: