caffe

How do I get ILSVRC12 data in image format, or how do I create ilsvrc12_val_lmdb?

泄露秘密 · Submitted on 2019-12-04 19:05:06
I am trying to run the ImageNet example in Caffe. On this page ( https://github.com/BVLC/caffe/tree/master/examples/imagenet ) they say: We assume that you already have downloaded the ImageNet training data and validation data, and they are stored on your disk like:

/path/to/imagenet/train/n01440764/n01440764_10026.JPEG
/path/to/imagenet/val/ILSVRC2012_val_00000001.JPEG

Where do I find this data?

It's a bit of a process.
1. Go to ImageNet's download page and select "Download Image URLs".
2. Download the image URL list from the links at the bottom of the page, e.g., fall 2011's list.
3. Download
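The excerpt breaks off at step 3. As a rough illustration of where those steps lead, here is a minimal Python sketch that walks such a URL list and saves each image into the per-synset directory layout the Caffe example expects. The file name `fall11_urls.txt` and all paths are assumptions rather than details from the post, and many of the listed URLs are dead, so treat this only as a starting point.

```python
# Hypothetical sketch: download images from an ImageNet URL-list file,
# where each line looks like "<wnid>_<id> <url>".
import os
import urllib.request

url_list = "fall11_urls.txt"            # assumed name of the downloaded URL list
out_dir = "/path/to/imagenet/train"     # matches the layout in the Caffe example

with open(url_list, encoding="utf-8", errors="ignore") as f:
    for line in f:
        try:
            image_id, url = line.strip().split(maxsplit=1)
        except ValueError:
            continue                     # skip malformed lines
        wnid = image_id.split("_")[0]    # e.g. n01440764
        class_dir = os.path.join(out_dir, wnid)
        os.makedirs(class_dir, exist_ok=True)
        target = os.path.join(class_dir, image_id + ".JPEG")
        if os.path.exists(target):
            continue                     # already downloaded
        try:
            urllib.request.urlretrieve(url, target)
        except Exception:
            pass                         # many URLs are dead; just skip failures
```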

Generating LMDB for Caffe

≡放荡痞女 · Submitted on 2019-12-04 18:50:38
Question: I am trying to build a deep learning model for saliency analysis using Caffe (I am using the Python wrapper), but I am unable to understand how to generate the LMDB data structure for this purpose. I have gone through the ImageNet and MNIST examples and I understand that I should generate labels in the format

my_test_dir/picture-foo.jpg 0

But in my case I will be labeling each pixel with 0 or 1, indicating whether that pixel is salient or not. That won't be a single label per image. How to
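One common way to handle per-pixel labels (not stated in the truncated post, just a sketch under that assumption) is to write the label maps into a second LMDB read by a separate Data layer, so each 1xHxW mask plays the role that the single integer label plays in the MNIST example. Dummy masks and the database name below are placeholders.

```python
# Sketch: store binary saliency masks in their own LMDB, one Datum per image.
import lmdb
import numpy as np
import caffe

# Dummy data for illustration: four 256x256 binary masks.
masks = [np.random.randint(0, 2, (256, 256), dtype=np.uint8) for _ in range(4)]

env = lmdb.open("saliency_label_lmdb", map_size=1 << 40)
with env.begin(write=True) as txn:
    for i, mask in enumerate(masks):
        # array_to_datum expects a CxHxW array; use a single "channel" of labels.
        datum = caffe.io.array_to_datum(mask[np.newaxis, :, :])
        txn.put("{:08d}".format(i).encode("ascii"), datum.SerializeToString())
env.close()
```

The keys must be written in the same order as in the image LMDB so the two Data layers stay synchronized batch by batch.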

Batch processing mode in Caffe

不问归期 · Submitted on 2019-12-04 18:21:47
I'd like to use the Caffe library to extract image features, but I'm having performance issues. I can only use CPU mode. I was told Caffe supports a batch processing mode in which the average time required to process one image is much lower. I'm calling the following method:

const vector<Blob<Dtype>*>& Net::Forward(const vector<Blob<Dtype>*>& bottom, Dtype* loss = NULL);

and I'm passing in a vector of size 1, containing a single blob of the following dimensions: (num: 10, channels: 3, width: 227, height: 227). It represents a single image oversampled in the same way as in the official
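The excerpt cuts off before the answer. For reference, the same batching idea expressed through the Python wrapper looks roughly like the sketch below: the data blob is reshaped to hold a real batch of independent images rather than one 10-crop oversampled image. The prototxt, weights file and output layer name are placeholders, not taken from the post.

```python
# Sketch: run a whole batch of preprocessed images through one forward pass.
import numpy as np
import caffe

caffe.set_mode_cpu()
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)  # placeholder files

batch = np.random.rand(32, 3, 227, 227).astype(np.float32)  # 32 preprocessed images

net.blobs["data"].reshape(*batch.shape)   # num=32 instead of 10 oversampled crops
net.reshape()                              # propagate the new batch size
net.blobs["data"].data[...] = batch
net.forward()
features = net.blobs["fc7"].data.copy()    # "fc7" is an assumption about the feature layer
```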

How do I update cuDNN to a newer version?

孤街浪徒 · Submitted on 2019-12-04 14:29:08
Question: The cuDNN installation manual says:

ALL PLATFORMS
Extract the cuDNN archive to a directory of your choice, referred to below as <installpath>. Then follow the platform-specific instructions as follows.

LINUX
cd <installpath>
export LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH
Add <installpath> to your build and link process by adding -I<installpath> to your compile line and -L<installpath> -lcudnn to your link line.

It seems that it simply adds pwd to LD_LIBRARY_PATH, so I guess just replacing the files in pwd will do the update. But it seems not that simple as
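As a hedged sanity check (not part of the quoted manual), you can verify which cuDNN build a process actually loads after swapping the files by calling cudnnGetVersion() through ctypes; the library is resolved via the same LD_LIBRARY_PATH the manual tells you to export.

```python
# cudnnGetVersion() is part of the public cuDNN API and returns e.g. 7005 for 7.0.5.
import ctypes

# The exact soname may differ on your system (e.g. libcudnn.so.7).
libcudnn = ctypes.CDLL("libcudnn.so")
libcudnn.cudnnGetVersion.restype = ctypes.c_size_t
print("loaded cuDNN version:", libcudnn.cudnnGetVersion())
```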

nvcc fatal : Unsupported gpu architecture 'compute_20' with CUDA 9.1 + Caffe + OpenCV 3.4.0 installed

旧巷老猫 · Submitted on 2019-12-04 11:35:40
Question: I have installed CUDA 9.1 + cudnn-9.1 + OpenCV 3.4.0 + Caffe. When I tried to run make all -j8 in the caffe directory, this error occurred:

nvcc fatal : Unsupported gpu architecture 'compute_20'

I have tried to run "cmake -D CMAKE_BUILD_TYPE=RELEASE -D CUDA_GENERATION=Kepler ..", but it didn't work.

Answer 1: Try manually editing Makefile.config to remove the compute_2* architectures from these lines (the comments explain why):

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50

What is the order of mean values in Caffe's train.prototxt?

亡梦爱人 · Submitted on 2019-12-04 10:53:28
In my Caffe 'train.prototxt' I'm doing some input data transformation, like this:

transform_param {
  mirror: true
  crop_size: 321
  mean_value: 104  # Red ?
  mean_value: 116  # Blue ?
  mean_value: 122  # Green ?
}

Now I want to store a modified version of my input images such that certain image regions are set to those mean values. The rationale is that those regions are then set to 0 during mean subtraction. However, I don't know what channel order Caffe expects in such a prototxt file, and I couldn't find it in the Caffe code either. Does someone know whether the 3 values given above
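For what it's worth: LMDBs created with Caffe's convert_imageset tool are decoded with OpenCV and therefore stored in BGR channel order, so the three mean_value entries above would be read as B, G, R rather than the R, B, G guessed in the comments. A small numpy sketch of the resulting subtraction, under that assumption:

```python
# numpy sketch of the per-channel mean subtraction, assuming the LMDB was built
# with convert_imageset (OpenCV decoding, hence BGR channel order).
import numpy as np

mean_bgr = np.array([104.0, 116.0, 122.0], dtype=np.float32).reshape(3, 1, 1)  # B, G, R

def subtract_mean(img_chw):
    """img_chw: float32 array of shape 3xHxW, channels in BGR order."""
    return img_chw - mean_bgr

# A region pre-filled with the mean values becomes exactly zero after subtraction,
# which is the effect the question is after.
img = (np.random.rand(3, 321, 321) * 255).astype(np.float32)
img[:, 100:150, 100:150] = mean_bgr
out = subtract_mean(img)
assert np.allclose(out[:, 100:150, 100:150], 0)
```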

Caffe compilation fails due to unsupported gcc compiler version

倖福魔咒の · Submitted on 2019-12-04 09:50:39
I'm struggling with Caffe compilation and so far have failed to compile it. Steps I followed:

git clone https://github.com/BVLC/caffe.git
cd caffe
mkdir build
cd build
cmake ..
make all

Running make all fails with the following error message:

[ 2%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile.dir/util/cuda_compile_generated_im2col.cu.o
In file included from /usr/include/cuda_runtime.h:59:0,
                 from <command-line>:0:
/usr/include/host_config.h:82:2: error: #error -- unsupported GNU version! gcc 4.9 and up are not supported!
 #error -- unsupported GNU version! gcc 4.9 and up are not

Caffe: variable input-image size

孤街浪徒 · Submitted on 2019-12-04 09:45:23
I am trying out Google's deepdream code, which makes use of Caffe. It uses the GoogLeNet model pre-trained on ImageNet, as provided by the Model Zoo, which means the network was trained on images cropped to 224x224 pixels. From train_val.prototxt:

layer {
  name: "data"
  type: "Data"
  ...
  transform_param {
    mirror: true
    crop_size: 224
    ...

The deploy.prototxt used for processing also defines an input layer of size 224x224x3x10 (RGB images of size 224x224, batch size 10):

name: "GoogleNet"
input: "data"
input_shape {
  dim: 10
  dim: 3
  dim: 224
  dim: 224
}

However, I can use this net
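The excerpt is truncated, but the short version of why deepdream can feed other sizes is that it reshapes the data blob before every forward pass and only runs the network up to an intermediate layer, so the fixed-size fully connected classifier at the end is never reached. A hedged pycaffe sketch (model file names are placeholders):

```python
# Sketch: feed an input of arbitrary size by reshaping the data blob and running
# a partial forward pass; each layer reshapes itself as the forward pass reaches it.
import numpy as np
import caffe

net = caffe.Net("deploy.prototxt", "bvlc_googlenet.caffemodel", caffe.TEST)

img = np.random.rand(3, 360, 480).astype(np.float32)   # arbitrary size, not 224x224

src = net.blobs["data"]
src.reshape(1, 3, img.shape[1], img.shape[2])
src.data[0] = img
net.forward(end="inception_4c/output")                  # stop at an intermediate layer
activations = net.blobs["inception_4c/output"].data
```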

Multi-labels using two different LMDB

人盡茶涼 · Submitted on 2019-12-04 08:36:24
I am new to the Caffe framework and I would like to use Caffe to implement training with multiple labels. I use two LMDBs to store the data and the labels, respectively. The data LMDB has dimensions Nx1xHxW while the label LMDB has dimensions Nx1x1x3. The labels are float data. The text file is as follows:

5911 3
train/train_data/4224.bmp 13 0 12
train/train_data/3625.bmp 11 3 7
...
...

I use C++ to create the LMDBs. My main.cpp:

#include <algorithm>
#include <fstream>  // NOLINT(readability/streams)
#include <string>
#include <utility>
#include <vector>
#include <QImage>
#include "boost/scoped_ptr.hpp"
#include
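The C++ listing is cut off. As a sketch of the same idea in Python (an alternative, not the poster's code), each record of the label LMDB can be written as a 1x1x3 Datum whose three float labels land in float_data, matching the Nx1x1x3 shape described above. The database name and key format are assumptions.

```python
# Sketch: build the float-label LMDB from the (path, label1, label2, label3) entries.
import lmdb
import numpy as np
import caffe

entries = [
    ("train/train_data/4224.bmp", [13, 0, 12]),
    ("train/train_data/3625.bmp", [11, 3, 7]),
]

env = lmdb.open("train_label_lmdb", map_size=1 << 38)
with env.begin(write=True) as txn:
    for i, (path, labels) in enumerate(entries):
        arr = np.asarray(labels, dtype=np.float64).reshape(1, 1, 3)
        datum = caffe.io.array_to_datum(arr)   # non-uint8 arrays are stored as float_data
        # Keys must sort in the same order as in the image LMDB so the two
        # Data layers deliver matching batches.
        txn.put("{:08d}".format(i).encode("ascii"), datum.SerializeToString())
env.close()
```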

FCN in TensorFlow missing crop layer

人走茶凉 · Submitted on 2019-12-04 08:21:45
Question: I am currently trying to implement FCN for semantic segmentation in TensorFlow, as was previously done in Caffe here. Unfortunately I'm struggling with the following 3 things:

1) How do I map the "Deconvolution" layer from Caffe to TensorFlow? Is tf.nn.conv2d_transpose the correct equivalent?
2) How do I map the "Crop" layer from Caffe to TensorFlow? Unfortunately I can't see any alternative in TensorFlow. Is there an equivalent for this in TensorFlow?
3) Does Caffe's SoftmaxWithLoss correspond to TensorFlow softmax
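A hedged sketch of the three mappings asked about, with illustrative shapes and class counts (21 classes, as in PASCAL VOC, is an assumption, not from the post):

```python
import tensorflow as tf

# 1) Caffe "Deconvolution" -> tf.nn.conv2d_transpose (learned or fixed bilinear kernel).
x = tf.random.normal([1, 16, 16, 21])        # coarse NHWC score map, 21 classes
kernel = tf.random.normal([4, 4, 21, 21])    # [height, width, out_channels, in_channels]
upsampled = tf.nn.conv2d_transpose(
    x, kernel, output_shape=[1, 32, 32, 21], strides=[1, 2, 2, 1], padding="SAME")

# 2) Caffe "Crop" -> plain tensor slicing with an offset (a dummy offset here;
#    in FCN the offset compensates for padding added by the first convolution).
target_h, target_w, off = 30, 30, 1
cropped = upsampled[:, off:off + target_h, off:off + target_w, :]

# 3) Caffe SoftmaxWithLoss -> softmax cross-entropy with integer class labels.
labels = tf.zeros([1, target_h, target_w], dtype=tf.int32)
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=cropped))
```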