caffe

Changing the input data layer during training in Caffe

Submitted by 我只是一个虾纸丫 on 2019-12-10 10:41:08

Question: Is it possible to change the input source of an ImageData layer or a MemoryData layer on the fly? I am trying to shuffle the data every epoch, but I have both images and some other non-image features that I want to concatenate at a later stage in the network. I could not find a reliable way to shuffle both the images and my other data while preserving the alignment between the two. So I am thinking of re-generating imagelist.txt as well as the non-image data (in memory) every epoch and attaching the
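One way to keep the two modalities aligned, sketched below under the assumption that the images are listed in an imagelist.txt consumed by an ImageData layer: shuffle a single index list once per epoch and apply the same permutation to both the image paths and the non-image feature rows. All file names and values here are hypothetical.

```python
import random

# Hypothetical paired inputs: image paths (for imagelist.txt) and the
# non-image feature rows that must stay aligned with them.
images = ["img0.jpg", "img1.jpg", "img2.jpg", "img3.jpg"]
features = [[0.0], [1.0], [2.0], [3.0]]

def shuffled_epoch(images, features, seed):
    """Shuffle one index list and apply it to both sequences,
    so feature row i still belongs to image i afterwards."""
    idx = list(range(len(images)))
    random.Random(seed).shuffle(idx)
    return [images[i] for i in idx], [features[i] for i in idx]

imgs, feats = shuffled_epoch(images, features, seed=0)

# Regenerate the listing for the ImageData layer each epoch
# (the trailing label column is a dummy value here).
with open("imagelist.txt", "w") as f:
    for path in imgs:
        f.write(f"{path} 0\n")
```

The in-memory features can then be fed in the same order, e.g. through a MemoryData layer, since both sequences were permuted by the same index list.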

Caffe classification labels in HDF5

Submitted by 北城余情 on 2019-12-10 10:12:17

Question: I am fine-tuning a network. In one case I want to use it for regression, which works. In another case, I want to use it for classification. For both cases I have an HDF5 file with a label. For regression, this is just a 1-by-1 numpy array containing a float. I thought I could use the same label for classification after changing my EuclideanLoss layer to SoftmaxLoss. However, I then get a negative loss, like so:

Iteration 19200, loss = -118232
Train net output #0: loss = 39.3188 (* 1
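A likely cause: SoftmaxWithLoss expects each label to be an integer class index in [0, num_classes - 1], not an arbitrary float; labels outside that range commonly produce nonsense values such as a negative loss. A minimal sketch of sanitizing the labels before writing the classification HDF5 file (the label values and class count here are made up):

```python
# Regression labels: one float per sample. For SoftmaxWithLoss, Caffe
# instead needs an integer class index in [0, num_classes - 1].
regression_labels = [0.0, 2.0, 1.0, 2.0]
num_classes = 3

classification_labels = [int(l) for l in regression_labels]

# Labels outside [0, num_classes - 1] (or non-integral floats) are a
# common cause of garbage loss values such as the negative loss above.
assert all(0 <= l < num_classes for l in classification_labels)
```

The sanitized integer list is what would then be written to the HDF5 label dataset in place of the regression floats.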

Installing Caffe on Ubuntu

Submitted by 被刻印的时光 ゝ on 2019-12-10 02:07:21

Spent two hours troubleshooting. Caffe installed, and the test cases ran through successfully. I was going to write up my notes... until I found this: https://github.com/zeakey/caffe-auto-install :cold_sweat: it's a one-click install. Source: oschina. Link: https://my.oschina.net/u/811979/blog/663517

Notes on installing Caffe

Submitted by 不问归期 on 2019-12-10 01:51:34

Most of my deep-learning software is installed on my desktop machine, and today I'm installing the Caffe framework. My operating system is Ubuntu 14.04.

First, install the dependencies:

sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev  # Ubuntu-specific packages

Next, OpenCV, since I also work with video and image processing. There is an install script on GitHub: https://github.com/jayrambhia/Install-OpenCV . Download and extract it, enter the directory, pick your operating system (mine is Ubuntu), and run:

$ cd Ubuntu
$ chmod +x *
$ ./opencv_latest.sh  # this installs the latest release, 3.1.0

This step takes a while, roughly 30 minutes.

ATLAS (Automatically Tuned Linear Algebra Software) is an optimized implementation of the BLAS linear algebra library. Installation steps:

Batch processing mode in Caffe

Submitted by 安稳与你 on 2019-12-09 23:16:23

Question: I'd like to use the Caffe library to extract image features, but I'm having performance issues. I can only use CPU mode. I was told Caffe supports a batch processing mode, in which the average time needed to process one image is much lower. I'm calling the following method:

const vector<Blob<Dtype>*>& Net::Forward(const vector<Blob<Dtype>* > & bottom, Dtype* loss = NULL);

and I'm putting in a vector of size 1, containing a single blob of the following dimensions - (num: 10, channels: 3,
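The key idea behind batch mode is that a Caffe input blob is 4-D, (num, channels, height, width), and raising `num` amortizes per-forward overhead across many images instead of calling Forward once per image. A hedged sketch of assembling such a batch (the image size is hypothetical, and the commented pycaffe lines assume a loaded `net`):

```python
import numpy as np

H, W = 4, 4  # tiny hypothetical image size
images = [np.zeros((3, H, W), dtype=np.float32) for _ in range(10)]

# Stack 10 per-image (channels, H, W) arrays into one blob with num=10,
# so a single forward pass processes the whole batch.
batch = np.stack(images)
assert batch.shape == (10, 3, H, W)

# Sketch of feeding it via pycaffe (assumes `net` is a loaded caffe.Net):
# net.blobs["data"].reshape(*batch.shape)
# net.blobs["data"].data[...] = batch
# out = net.forward()
```

A size-1 bottom vector as in the question is fine; what matters for throughput is the `num` dimension of the blob inside it.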

Multi-labels using two different LMDB

Submitted by 自古美人都是妖i on 2019-12-09 18:27:31

Question: I am new to the Caffe framework and would like to use Caffe to implement training with multiple labels. I use two LMDBs to store the data and the labels, respectively. The data LMDB has dimension Nx1xHxW while the label LMDB has dimension Nx1x1x3. The labels are float data. The text file is as follows:

5911 3
train/train_data/4224.bmp 13 0 12
train/train_data/3625.bmp 11 3 7
... ...

I use C++ to create the LMDBs. My main.cpp:

#include <algorithm>
#include <fstream>  // NOLINT(readability/streams)
#include

Sigaction and porting Linux code to Windows

Submitted by 。_饼干妹妹 on 2019-12-09 16:18:05

Question: I am trying to port the Caffe source code (developed for Linux) to a Windows environment. The problem is the sigaction structure in signal_handler.cpp and signal_handler.h. The source code is shown below. My question is which library or code replacement can make this sigaction work on Windows.

///Header file
#ifndef INCLUDE_CAFFE_UTIL_SIGNAL_HANDLER_H_
#define INCLUDE_CAFFE_UTIL_SIGNAL_HANDLER_H_
#include "caffe/proto/caffe.pb.h"
#include "caffe/solver.hpp"
namespace caffe { class

Caffe Iteration loss versus Train Net loss

Submitted by 戏子无情 on 2019-12-09 15:55:47

Question: I'm using Caffe to train a CNN with a Euclidean loss layer at the bottom, and my solver.prototxt file is configured to display output every 100 iterations. I see something like this:

Iteration 4400, loss = 0
I0805 11:10:16.976716 1936085760 solver.cpp:229] Train net output #0: loss = 2.92436 (* 1 = 2.92436 loss)

I'm confused as to what the difference between the iteration loss and the train net loss is. Usually the iteration loss is very small (around 0) and the train net output loss is a bit larger. Can
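As a rough sketch of why the two numbers can differ: the "Iteration N, loss" line is the solver's smoothed loss, a running average over the last `average_loss` iterations as set in solver.prototxt, while "Train net output #0: loss" is the current batch's raw loss (scaled by its loss_weight). The averaging below is an illustrative model of that smoothing, not Caffe's actual code:

```python
from collections import deque

def smoothed_losses(per_batch_losses, average_loss=5):
    """Running average over a sliding window of the last
    `average_loss` per-batch losses, mimicking the solver's
    smoothed display value."""
    window = deque(maxlen=average_loss)
    out = []
    for loss in per_batch_losses:
        window.append(loss)
        out.append(sum(window) / len(window))
    return out

raw = [4.0, 3.0, 2.0, 1.0, 0.0]
print(smoothed_losses(raw, average_loss=5))  # → [4.0, 3.5, 3.0, 2.5, 2.0]
```

With `average_loss: 1` (the default) the two displayed values should coincide, so a large gap like the one in the question suggests a window larger than one iteration.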

How to use caffe convnet library to detect facial expressions?

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-09 07:18:14

Question: How can I use a Caffe convnet to detect facial expressions? I have an image dataset, Cohn-Kanade, and I want to train a Caffe convnet on it. Caffe has a documentation site, but it doesn't explain how to train on your own data, only how to use pre-trained models. Can someone teach me how to do it?

Answer 1: Caffe supports multiple formats for the input data (HDF5/lmdb/leveldb). It's just a matter of picking the one you feel most comfortable with. Here are a couple of options: caffe/build/tools/convert
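For the lmdb route, the conversion tool expects a plain-text listing with one "relative/path label" line per image. A hedged sketch of generating that listing, assuming (hypothetically) the Cohn-Kanade images have been sorted into one sub-directory per expression class:

```python
import os

def write_listing(root, classes, out_path):
    """Write a 'relative/path label' line per image, assigning label i
    to the i-th class directory under `root`."""
    with open(out_path, "w") as out:
        for label, cls in enumerate(classes):
            cls_dir = os.path.join(root, cls)
            for name in sorted(os.listdir(cls_dir)):
                out.write(f"{cls}/{name} {label}\n")

# Hypothetical usage, with an assumed directory layout:
#   write_listing("ck_faces", ["anger", "happy", "neutral"], "train.txt")
# and then feed train.txt to Caffe's image-conversion tool to build the lmdb.
```

The resulting lmdb (plus a matching validation listing) is what a standard train_val.prototxt would point its data layers at.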

How to write comments in prototxt files?

Submitted by 旧城冷巷雨未停 on 2019-12-08 17:35:58

Question: I can't find how to write comments in prototxt files. Is there any way to have comments in a prototxt file, and how? Thanks

Answer 1: You can comment by adding the # character: everything on the line after it is a comment:

layer {
  name: "aLayerWithComments"  # I picked this cool name by myself
  type: "ReLU"
  bottom: "someData"  # this is the output of the layer below
  top: "someData"  # same name means this is an "in-place" layer
}
# and now you can comment the entire line...

Source: https://stackoverflow.com