caffe

Net surgery: How to reshape a convolution layer of a caffemodel file in caffe?

只愿长相守 submitted 2019-12-06 02:56:09
I'm trying to reshape a convolution layer of a caffemodel (this is a follow-up to this question). Although there is a tutorial on how to do net surgery, it only shows how to copy weight parameters from one caffemodel to another of the same size. Instead, I need to add a new channel (all 0) to my convolution filter, changing its size from the current (64 x 3 x 3 x 3) to (64 x 4 x 3 x 3). Say the convolution layer is called 'conv1'. This is what I tried so far: # Load the original network and extract the fully connected layers' parameters. net = caffe.Net('..
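The zero-channel padding itself is just an array operation. A minimal numpy sketch of that step (the layer name 'conv1' and the surrounding caffe.Net calls are taken from the question; only the array math is shown here, the copy back into a caffemodel is assumed):

```python
import numpy as np

# Original conv1 weights: 64 filters, 3 input channels, 3x3 kernels.
old_weights = np.random.randn(64, 3, 3, 3).astype(np.float32)

# Append one all-zero input channel along axis 1 -> shape (64, 4, 3, 3).
zero_channel = np.zeros((64, 1, 3, 3), dtype=np.float32)
new_weights = np.concatenate([old_weights, zero_channel], axis=1)

print(new_weights.shape)  # (64, 4, 3, 3)
# In caffe this array would then be copied into a net whose prototxt
# already declares 4 input channels for conv1, e.g. (hypothetical names):
# new_net.params['conv1'][0].data[...] = new_weights
```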

caffe ssd + cuda9.0

纵然是瞬间 submitted 2019-12-06 02:28:13
Install dependencies:
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
sudo apt-get install libopenblas-dev liblapack-dev libatlas-base-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
sudo apt-get install git cmake build-essential
Then modify the configuration files. Source: https://www.cnblogs.com/gris/p/11957016.html

How to check GPU version and usage information on Ubuntu?

為{幸葍}努か submitted 2019-12-06 01:48:11
nvidia-smi shows the GPU version and usage information. Run: nvidia-smi
Column 1, Fan: fan speed, ranging from 0 to 100%. N/A means no reading. This is the speed the machine expects; if the fan is physically stalled it may not reach the displayed value. Some devices report no speed at all because they rely on other cooling rather than a fan (for example, our lab server sits in an air-conditioned room year-round).
Column 2, Temp: temperature in degrees Celsius.
Column 3, Perf: performance state, from P0 to P12, where P0 is maximum performance and P12 is minimum performance.
Column 4, Pwr (lower): power draw; Persistence-M (upper): persistence mode status. Persistence mode consumes more power but lets new GPU applications start faster; here it shows off.
Column 5, Bus-Id: the GPU's bus address, in the form domain:bus:device.function.
Column 6, Disp.A: Display Active, i.e. whether the GPU's display output is initialized.
Below columns 5 and 6, Memory-Usage: memory usage.
Column 7: volatile GPU utilization.
Column 8, upper: ECC status; lower, Compute M.: the compute mode.
The table below that lists the memory usage of each process. Memory usage and GPU utilization are two different things: a graphics card consists of a GPU plus video memory, and their relationship is roughly like that of CPU and RAM. When I run caffe code, memory usage is low but GPU utilization is high.
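The memory-vs-utilization distinction is easiest to see with nvidia-smi's machine-readable query mode, e.g. `nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits`. A small sketch that parses one such output line (the sample values are made up for illustration, since the real numbers depend on your machine):

```python
# One fabricated output line from the nvidia-smi query above.
sample = "87, 1024, 11178"

util, mem_used, mem_total = (int(field) for field in sample.split(","))
print(f"GPU utilization: {util}%")
print(f"Memory: {mem_used} MiB / {mem_total} MiB ({100 * mem_used / mem_total:.1f}%)")
# A caffe run can easily show high utilization with low memory use, or vice versa.
```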

Setting input layer in CAFFE with C++

一曲冷凌霜 submitted 2019-12-05 21:45:33
I'm writing C++ code using CAFFE to predict a single (for now) image. The image has already been preprocessed and is in .png format. I have created a Net object and read in the trained model. Now I need to use the .png image as an input layer and call net.Forward() - but can someone help me figure out how to set the input layer? I found a few examples on the web, but none of them work, and almost all of them use deprecated functionality. According to Berkeley's Net API, using "ForwardPrefilled" is deprecated, and using "Forward(vector, float*)" is deprecated. The API indicates that one should
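Whatever API variant is used, the decoded image first has to match the input blob's N x C x H x W layout. A numpy sketch of that preprocessing step (the mean values and 224x224 size are placeholders, not from the question; the actual copy into the caffe blob is omitted):

```python
import numpy as np

# Stand-in for the decoded .png: H x W x C, uint8.
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

data = img.astype(np.float32)
data -= np.array([104.0, 117.0, 123.0], dtype=np.float32)  # per-channel mean (placeholder)
data = data.transpose(2, 0, 1)   # HWC -> CHW
data = data[np.newaxis, ...]     # add batch dimension -> (1, 3, 224, 224)

print(data.shape)
# In C++, values in this layout would be copied into
# net->input_blobs()[0]->mutable_cpu_data() before calling net->Forward().
```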

Caffe classification labels in HDF5

让人想犯罪 __ submitted 2019-12-05 21:29:56
I am finetuning a network. In a specific case I want to use it for regression, which works. In another case, I want to use it for classification. For both cases I have an HDF5 file with a label. With regression, this is just a 1-by-1 numpy array that contains a float. I thought I could use the same label for classification, after changing my EuclideanLoss layer to SoftmaxLoss. However, I then get a negative loss, like so: Iteration 19200, loss = -118232 Train net output #0: loss = 39.3188 (* 1 = 39.3188 loss) Can you explain whether, and if so what, goes wrong? I do see that the training loss is about
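A negative loss like this is typical when the labels are not what a softmax loss expects: an integer class index per sample rather than an arbitrary float target. A hedged numpy sketch of the two label formats (the h5py writing is omitted, and the shapes follow caffe's HDF5 data layer conventions as I understand them):

```python
import numpy as np

# Regression: one float target per sample is fine for EuclideanLoss.
regression_labels = np.array([[0.37], [1.82], [-0.5], [2.1]], dtype=np.float32)

# Classification: SoftmaxWithLoss expects integer class indices in [0, K-1],
# stored as floats for the HDF5 layer, one index per sample.
classification_labels = np.array([0, 2, 1, 2], dtype=np.float32)

# Sanity checks a classification label set should pass:
assert classification_labels.min() >= 0
assert np.all(classification_labels == np.floor(classification_labels))
print(regression_labels.shape, classification_labels.shape)
```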

Does the dropout layer need to be defined in deploy.prototxt in caffe?

筅森魡賤 submitted 2019-12-05 16:58:36
In the AlexNet implementation in caffe, I saw the following layer in the deploy.prototxt file: layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } } Now the key idea of dropout is to randomly drop units (along with their connections) from the neural network during training. Does this mean that I can simply delete this layer from deploy.prototxt, as this file is meant to be used during testing only? Yes. Dropout is not required during testing. Even if you include a dropout layer, nothing special happens during testing. See the source code of
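The answer can be illustrated numerically: with the usual (inverted) dropout formulation, the test-time pass is the identity, which is why the layer is harmless, and removable, in deploy.prototxt. A small numpy sketch (this mirrors the standard inverted-dropout scheme, not caffe's exact source):

```python
import numpy as np

def dropout(x, ratio=0.5, train=True):
    if not train:
        return x  # test time: pure pass-through, no units dropped
    # Training: zero out units with probability `ratio`, scale the rest
    # by 1/(1-ratio) so the expected activation stays the same.
    mask = (np.random.rand(*x.shape) >= ratio) / (1.0 - ratio)
    return x * mask

x = np.ones((2, 3), dtype=np.float32)
print(dropout(x, train=False))  # identical to x
print(dropout(x, train=True))   # random entries zeroed, survivors scaled to 2.0
```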

Convolutional Neural Networks: How many pixels will be covered by each of the filters?

半城伤御伤魂 submitted 2019-12-05 16:40:41
How can I calculate the area (in the original image) covered by each of the filters in my network? E.g. let's say the size of the image is WxW pixels and I am using the following network: layer 1: conv 5x5; layer 2: pool 3x3; layer 3: conv 5x5; ...; layer N: conv 5x5. I want to calculate how much area in the original image is covered by each filter, e.g. the filter in layer 1 will cover 5x5 pixels of the original image. A similar problem would be: how many pixels are covered by each activation? Which is essentially the same as: how large does an input image have to be in order to
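The receptive field grows layer by layer as r_new = r + (k - 1) * j, where k is the kernel size and j is the cumulative stride ("jump") of the layers below. The question does not give strides, so the sketch below assumes stride 1 everywhere:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, from input to output."""
    r, j = 1, 1  # receptive field of one pixel; cumulative stride ("jump")
    for k, s in layers:
        r += (k - 1) * j  # each new layer widens the field by (k-1) jumps
        j *= s            # then the jump compounds with this layer's stride
    return r

# conv 5x5 -> pool 3x3 -> conv 5x5, all stride 1 (assumed)
print(receptive_field([(5, 1)]))                  # layer 1: 5
print(receptive_field([(5, 1), (3, 1)]))          # after the pool: 7
print(receptive_field([(5, 1), (3, 1), (5, 1)]))  # layer 3: 11
```

With non-unit strides the field grows much faster, since every later kernel step then spans j input pixels.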

How to train new fast-rcnn imageset

北战南征 submitted 2019-12-05 15:35:17
I am using fast-rcnn and trying to train the system on a new class (label). I followed this: https://github.com/EdisonResearch/fast-rcnn/tree/master/help/train Placed the images; placed the annotations; prepared the ImageSet with all the image name prefixes; prepared the selective search output: train.mat. Running train_net.py then fails with the following error: ./tools/train_net.py --gpu 0 --solver models/VGG_1024_pascal2007/solver.prototxt --imdb voc_2007_train_top_5000 Called with args: Namespace(cfg_file=None, gpu_id=0, imdb_name='voc_2007_train_top_5000', max_iters=40000, pretrained_model=None,

Caffe/pyCaffe: set all GPUs

蹲街弑〆低调 submitted 2019-12-05 15:06:41
Is it possible to set all GPUs for Caffe (especially pyCaffe)? Something like: caffe train -solver examples/mnist/lenet_solver.prototxt -gpu all AFAIK Caffe does not support multi-GPU training at the moment; it is planned for a future release. See a discussion here. It seems like NVIDIA's branch of caffe has this functionality; see the issue here. Both forks have supported multi-GPU for a while now: BVLC/caffe got support for multi-GPU on 08/13/2015 (see commit, issue), and NVIDIA/caffe got support for multi-GPU on 06/19/2015 (see release note). You may be interested to know that there is a

caffe installation : opencv libpng16.so.16 linkage issues

馋奶兔 submitted 2019-12-05 12:29:05
I am trying to compile caffe with the python interface on an Ubuntu 14.04 machine. I have installed Anaconda, and opencv with conda install opencv. I have also installed all the requirements stipulated for caffe and changed the commented blocks in Makefile.config so that PYTHON_LIB and PYTHON_INCLUDE point to the Anaconda distribution. When I call make all, the following command is issued: g++ .build_release/tools/caffe.o -o .build_release/tools/caffe.bin -pthread -fPIC -DNDEBUG -O2 -DWITH_PYTHON_LAYER -I/home/andrei/anaconda/include -I/home/andrei/anaconda/include/python2.7 -I/home
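A common cause of libpng16.so.16 linkage errors in this setup is a mismatch between Anaconda's libpng16 and the system libpng that other libraries were built against. A hedged configuration sketch, assuming that mismatch is the culprit; the exact paths depend on your installation:

```shell
# Makefile.config (excerpt) -- library search order matters; try listing
# Anaconda's lib directory explicitly so the linker resolves a single libpng:
# LIBRARY_DIRS := $(ANACONDA_HOME)/lib /usr/local/lib /usr/lib

# Alternatively, at runtime, point the dynamic loader at Anaconda's libpng16:
# export LD_LIBRARY_PATH="$HOME/anaconda/lib:$LD_LIBRARY_PATH"
```

Whichever direction you choose, the goal is that compile-time and run-time resolve the same libpng, not one from each tree.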