caffe

Caffe HDF5 not learning

谁都会走 submitted on 2019-12-13 18:05:38

Question: I'm fine-tuning the GoogleNet network with Caffe on my own dataset. If I use IMAGE_DATA layers as input, learning takes place. However, I need to switch to an HDF5 layer for further extensions that I require. When I use HDF5 layers, no learning takes place. I am using the exact same input images, and the labels match as well. I have also checked that the data in the .h5 files can be loaded correctly. It does, and Caffe is also able to find the number of examples I feed it as well as the
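A frequent cause of this symptom is that Caffe's HDF5 data layer does not apply transform_param, so any mean subtraction or scaling the IMAGE_DATA layer performed during training must be baked into the .h5 data itself. A minimal numpy sketch of that pre-baking step (the BGR mean values are the usual ImageNet ones, assumed here purely for illustration):

```python
import numpy as np

def prebake_for_hdf5(batch_uint8, mean_bgr=(104.0, 117.0, 123.0)):
    """Reproduce IMAGE_DATA-style mean subtraction before the batch is
    written to an .h5 file, since the HDF5 layer ignores transform_param.
    batch_uint8: (N, 3, H, W) array, BGR channel order assumed."""
    batch = batch_uint8.astype(np.float32)
    batch -= np.asarray(mean_bgr, dtype=np.float32).reshape(1, 3, 1, 1)
    return batch

# The result, plus an (N,)-shaped float label array, is what would be
# written into the 'data' and 'label' datasets with h5py.
batch = np.full((2, 3, 4, 4), 120, dtype=np.uint8)
out = prebake_for_hdf5(batch)
print(out.dtype, out.shape)
```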

Build caffe with Python ( cannot find -lboost_python3 )

扶醉桌前 submitted on 2019-12-13 16:25:22

Question: I'm trying to build Caffe with Python, but it keeps failing with:

    CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
    /usr/bin/ld: cannot find -lboost_python3
    collect2: error: ld returned 1 exit status
    make: *** [python/caffe/_caffe.so] Error 1

This is what I get when I try to locate boost_python:

    $ sudo locate boost_python
    /usr/lib/x86_64-linux-gnu/libboost_python-py27.a
    /usr/lib/x86_64-linux-gnu/libboost_python-py27.so
    /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0
    /usr/lib/x86_64

Caffe predicts same class regardless of image

ⅰ亾dé卋堺 submitted on 2019-12-13 13:18:31

Question: I modified the MNIST example, and when I train it with my 3 image classes it reports an accuracy of 91%. However, when I modify the C++ example with a deploy prototxt file and a labels file and test it on some images, it predicts the second class (1 circle) with a probability of 1.0 no matter what image I give it, even for images that were used in the training set. I've tried a dozen images and it consistently predicts that one class. To clarify things, in the C++
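A common culprit for "always the same class with probability 1.0" is a deploy-time preprocessing mismatch: the C++ classification code must reproduce the exact channel order, mean subtraction, and scaling used during training. A hedged numpy sketch of the kind of transform to mirror (the mean values and BGR assumption are illustrative, not taken from the question):

```python
import numpy as np

def deploy_preprocess(img_rgb, mean_bgr=(104.0, 117.0, 123.0), scale=1.0):
    """Mirror the training-time transform at inference time.
    img_rgb: (H, W, 3) uint8 RGB image; assumes training used BGR
    input, per-channel mean subtraction, and this scale factor."""
    img = img_rgb[:, :, ::-1].astype(np.float32)   # RGB -> BGR
    img -= np.asarray(mean_bgr, dtype=np.float32)  # per-channel mean
    img *= scale
    return img.transpose(2, 0, 1)                  # HWC -> CHW for Caffe

out = deploy_preprocess(np.zeros((8, 8, 3), dtype=np.uint8))
print(out.shape)
```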

Video classification using HDF5 in CAFFE?

天大地大妈咪最大 submitted on 2019-12-13 10:26:08

Question: I am using an HDF5 layer for video classification (C3D). This is my code to generate the HDF5 file:

    import h5py
    import numpy as np
    import skvideo.datasets
    import skvideo.io

    videodata = skvideo.io.vread('./v_ApplyEyeMakeup_g01_c01.avi')
    videodata = videodata.transpose(3, 0, 1, 2)  # to channel x depth x h x w
    videodata = videodata[None, :, :, :]
    with h5py.File('./data.h5', 'w') as f:
        f['data'] = videodata
        f['label'] = 1

Now the path to data.h5 is saved in the file video.list. I perform the classification based on the prototxt
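One detail worth flagging in the snippet above: f['label'] = 1 stores a bare scalar, while Caffe's HDF5 layer expects one label per sample along the first axis, matching the 'data' dataset. A small shape check with synthetic data standing in for the real video (the clip dimensions are illustrative assumptions):

```python
import numpy as np

# Stand-in for one clip: (num_clips, channels, depth, height, width).
videodata = np.zeros((1, 3, 16, 128, 171), dtype=np.float32)

# One float label per clip, not a bare scalar.
labels = np.array([1], dtype=np.float32)

# The first axis of 'data' and 'label' must agree before writing
# both arrays into data.h5 with h5py.
assert videodata.shape[0] == labels.shape[0]
print(labels.shape)  # → (1,)
```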

How to compute the sum of the values of elements in a vector using cblas functions?

巧了我就是萌 submitted on 2019-12-13 07:19:42

Question: I need to sum all the elements of a matrix in Caffe. As I noticed, the Caffe wrapper around the cblas functions (math_functions.hpp & math_functions.cpp) exposes cblas_sasum as caffe_cpu_asum, which computes the sum of the absolute values of the elements of a vector. Since I'm a newbie with cblas, I tried to find a suitable function without the absolute value, but it seems there is no such function in cblas. Any suggestions?

Answer 1: There is a way to do so using cblas
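The workaround the answer points toward can be stated as an identity: the plain (signed) sum equals a dot product with a vector of ones, and cblas does provide cblas_sdot, which Caffe wraps as caffe_cpu_dot. A numpy sketch of the identity:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, -4.0], dtype=np.float32)
ones = np.ones_like(x)

# cblas_sasum would return 10.0 here (sum of absolute values),
# while the dot product with ones gives the true signed sum, -2.0.
signed_sum = np.dot(x, ones)
print(signed_sum)  # → -2.0
```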

image pre-processing for image classification and semantic segmentation

落爺英雄遲暮 submitted on 2019-12-13 07:15:01

Question: When training deep learning models for different image-related tasks, such as image classification and semantic segmentation, what kind of pre-processing needs to be performed? For instance, if I want to train a network for semantic segmentation, do I need to scale the image values (normally represented as an nd-array) to the [0, 1] range, or keep them in the [0, 255] range? Thanks.

Answer 1: There are a few things that are commonly done, but really there is no fixed set of pre-processing that is
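As a concrete example of the point above: either value range works, as long as the same convention is applied at training and inference. A minimal sketch of two common normalizations (the mean/std values are made-up illustrative numbers, not prescribed):

```python
import numpy as np

def to_unit_range(img_uint8):
    """[0, 255] -> [0, 1]; a common choice, not a requirement."""
    return img_uint8.astype(np.float32) / 255.0

def standardize(img_uint8, mean=0.5, std=0.25):
    """Zero-center and rescale after unit-range conversion
    (mean/std here are illustrative placeholders)."""
    return (to_unit_range(img_uint8) - mean) / std

img = np.full((2, 2, 3), 255, dtype=np.uint8)
print(to_unit_range(img).max(), standardize(img)[0, 0, 0])
```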

How can I set a global weight filler in Caffe?

╄→гoц情女王★ submitted on 2019-12-13 06:37:58

Question: Right now I'm writing the weight filler layer by layer, like:

    layer {
      name: "Convolution1"
      type: "Convolution"
      bottom: "data"
      top: "Convolution1"
      convolution_param {
        num_output: 20
        kernel_size: 5
        weight_filler { type: "xavier" }
      }
    }

How can I set a global weight filler type? Thanks.

Answer 1: It seems there is currently no other way of doing it. In the caffe.proto file, NetParameter is defined as follows, and there is no option such as default_weight_filler:

    message NetParameter {
      optional
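Since NetParameter offers no default-filler field, one practical workaround is to write the prototxt without per-layer fillers and inject the same weight_filler into every convolution_param with a script. A hypothetical sketch (the helper name and the regex-based approach are my own, and it assumes no convolution_param already contains a filler):

```python
import re

def inject_weight_filler(prototxt, filler_type="xavier"):
    """Insert the same weight_filler right after every
    'convolution_param {' in a prototxt string."""
    return re.sub(
        r"(convolution_param\s*\{)",
        r'\1 weight_filler { type: "' + filler_type + '" }',
        prototxt,
    )

net = 'layer { convolution_param { num_output: 20 kernel_size: 5 } }'
print(inject_weight_filler(net))
```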

Got confused after I extracted weights from Trained caffenet

无人久伴 submitted on 2019-12-13 03:39:46

Question: These are the dimensions of the weights from the trained CaffeNet:

    conv1: (96, 3, 11, 11)
    conv2: (256, 48, 5, 5)
    conv3: (384, 256, 3, 3)
    conv4: (384, 192, 3, 3)
    conv5: (256, 192, 3, 3)

Although conv1 produces 96 output channels, why does conv2 only consider 48 during convolution? Am I missing something?

Answer 1: Yes, you missed the parameter group. The convolution_param defined in the conv2 layer is given below. You can see that group is set to 2, grouping the
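The shape arithmetic behind the answer: with group: g, the input channels are split into g groups and each filter convolves over only in_channels / g of them. A quick check against the weight shapes listed above:

```python
def channels_per_filter(in_channels, group):
    """With grouped convolution, each filter sees only
    in_channels // group input channels (assumes divisibility)."""
    assert in_channels % group == 0
    return in_channels // group

# conv1 outputs 96 channels; conv2 has group: 2, so each conv2
# filter spans 96 / 2 = 48 channels -> the (256, 48, 5, 5) shape.
print(channels_per_filter(96, 2))   # → 48
# Likewise conv4 and conv5 take 384-channel inputs with group: 2,
# so their filters span 192 channels each.
print(channels_per_filter(384, 2))  # → 192
```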

Error in make mattest in caffe

强颜欢笑 submitted on 2019-12-13 03:32:23

Question: I am able to compile matcaffe, but I am unable to run make mattest. My system configuration: Ubuntu 16.04, OpenCV 2.4.9, gcc-5, g++-5, MATLAB R2017b. Here's the crash report from MATLAB:

    Segmentation violation detected at Fri Mar 2 12:37:16 2018
    Configuration:
      Crash Decoding          : Disabled - No sandbox or build area path
      Crash Mode              : continue (default)
      Current Graphics Driver : Unknown software
      Current Visual          : None
      Default Encoding

Extracting Features from VGG

放肆的年华 submitted on 2019-12-13 01:15:09

Question: I want to extract features from images in the MS COCO dataset using a fine-tuned VGG-19 network. However, it takes about 6-7 seconds per image, roughly 2 hours per 1k images (and even longer for other fine-tuned models). There are 120k images in MS COCO, so it will take at least 10 days. Is there any way to speed up the feature extraction process?

Answer 1: Well, this is not just a command. First you must check whether your GPU is powerful enough to wrestle with deep CNNs. Knowing your GPU
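The question's estimate is easy to verify, and the arithmetic also shows why GPU batching matters: at typical batch sizes the per-image cost drops well below one second. A back-of-envelope sketch (the 6.5 s figure is the midpoint of the quoted range):

```python
def extraction_days(num_images, seconds_per_image):
    """Total wall-clock time in days for sequential extraction."""
    return num_images * seconds_per_image / 86400.0

# 120k images at ~6.5 s each is about nine days of compute,
# consistent with the "at least 10 days" estimate above.
print(round(extraction_days(120_000, 6.5), 1))  # → 9.0
```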