caffe

Using GPU despite setting CPU_Only, yielding unexpected keyword argument

主宰稳场 submitted on 2019-12-20 09:47:22
Question: I'm installing Caffe on an Ubuntu 14.04 virtual server with CUDA installed (without the driver), using https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-VirtualBox-VM as inspiration. During the installation process I edited the Makefile to include "CPU_ONLY := 1" before building it. However, it seems that Caffe is still trying to make use of the GPU. When I try to run a test example with

python python/classify.py examples/images/cat.jpg foo

I get the following error:

Traceback (most recent call last):
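In a standard Caffe checkout the flag belongs in Makefile.config (copied from Makefile.config.example), not the top-level Makefile, and the tree must be rebuilt afterwards; a stale GPU build is a common cause of the symptom above. A minimal sketch of the intended configuration:

```
# Makefile.config (copy of Makefile.config.example) -- uncomment this line,
# then rebuild so the flag takes effect:
#   make clean && make all && make pycaffe
CPU_ONLY := 1
```

At runtime, pycaffe can additionally be pinned to the CPU with caffe.set_mode_cpu(), but that call does not help if the binary itself was built expecting CUDA and no driver is present.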

Crop size Error in caffe Model

我与影子孤独终老i submitted on 2019-12-20 04:54:24
Question: I'm trying to train a Caffe model and I get this error:

I0806 09:41:02.010442 2992 sgd_solver.cpp:105] Iteration 360, lr = 9.76e-05
F0806 09:41:20.544955 2998 data_transformer.cpp:168] Check failed: height <= datum_height (224 vs. 199)
*** Check failure stack trace: ***
    @ 0x7f82b051edaa (unknown)
    @ 0x7f82b051ece4 (unknown)
    @ 0x7f82b051e6e6 (unknown)
    @ 0x7f82b0521687 (unknown)
    @ 0x7f82b0b8e9e0 caffe::DataTransformer<>::Transform()
    @ 0x7f82b0c09a2f caffe::DataLayer<>::load_batch()
    @ 0x7f82b0c9aa5
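The check fails because the data layer's crop_size (224) is larger than the height of at least one image in the database (199). One fix, assuming the net tolerates the smaller input, is to lower crop_size in transform_param; the alternative is to rebuild the LMDB with every image resized to at least 224x224. A hedged sketch of the prototxt change (layer names are illustrative):

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    # must not exceed the smallest image height/width in the database
    crop_size: 199
  }
  # data_param { ... } unchanged from the original net
}
```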

How to modify batch normalization layers (DeconvNet) to be able to run with caffe?

江枫思渺然 submitted on 2019-12-20 04:52:13
Question: I wanted to run DeconvNet on my data, but it seems it was written for another version of Caffe. Does anyone know how to change batch_params? The layer definition in DeconvNet:

layers {
  bottom: 'conv1_1'
  top: 'conv1_1'
  name: 'bn1_1'
  type: BN
  bn_param {
    scale_filler { type: 'constant' value: 1 }
    shift_filler { type: 'constant' value: 0.001 }
    bn_mode: INFERENCE
  }
}

And the one that Caffe provides in the cifar10 example:

layer { name: "bn1" type: "BatchNorm" bottom: "pool1" top: "bn1" batch
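In mainline Caffe the old third-party BN layer is usually expressed as a BatchNorm layer (normalization only) followed by a Scale layer with bias_term: true, which supplies the learned scale and shift. A hedged translation of the bn1_1 layer above, assuming that mapping holds for this net:

```
layer {
  name: "bn1_1"
  type: "BatchNorm"
  bottom: "conv1_1"
  top: "conv1_1"
  batch_norm_param { use_global_stats: true }  # rough analogue of bn_mode: INFERENCE
}
layer {
  name: "scale1_1"  # hypothetical name; not in the original net
  type: "Scale"
  bottom: "conv1_1"
  top: "conv1_1"
  scale_param {
    bias_term: true                            # plays the role of shift_filler
    filler { type: "constant" value: 1 }
    bias_filler { type: "constant" value: 0.001 }
  }
}
```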

ImportError: No module named 'google'

烈酒焚心 submitted on 2019-12-20 03:39:25
Failed to include caffe_pb2, things might go wrong!

Traceback (most recent call last):
  File "test_model.py", line 5, in <module>
    import caffe
  File "/opt/share0/guowuwei/caffe_root/caffe/python/caffe/__init__.py", line 4, in <module>
    from .proto.caffe_pb2 import TRAIN, TEST
  File "/opt/share0/guowuwei/caffe_root/caffe/python/caffe/proto/caffe_pb2.py", line 7, in <module>
    from google.protobuf.internal import enum_type_wrapper
ImportError: No module named 'google'

Fix: sudo pip install protobuf (or: conda install protobuf)

Source: CSDN. Author: allan0808. Link: https://blog.csdn.net/weixin_44138807/article/details

Python interface of Caffe: Error in “import caffe”

大憨熊 submitted on 2019-12-20 03:06:06
Question: I'm trying to run Caffe through its Python interface. I've already run make pycaffe in the caffe directory and it worked fine. Now, when I run import caffe in the Python environment in the terminal (Ubuntu 14.04), I get the following error:

>>> import caffe
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pras/caffe/python/caffe/__init__.py", line 1, in <module>
    from .pycaffe import Net, SGDSolver
  File "/home/pras/caffe/python/caffe
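The traceback shows Python finding the caffe package but failing inside it; a common cause is that the caffe/python directory is not on the module search path (or that Python dependencies from python/requirements.txt are missing). A minimal sketch of the path fix, with the directory taken from the traceback above:

```python
import sys

# Path taken from the traceback; adjust to your own checkout.
CAFFE_PYTHON = "/home/pras/caffe/python"

if CAFFE_PYTHON not in sys.path:
    sys.path.insert(0, CAFFE_PYTHON)

# import caffe  # uncomment once pycaffe is built and its deps are installed
print(sys.path[0])
```

An equivalent, more permanent form is export PYTHONPATH=/home/pras/caffe/python:$PYTHONPATH in the shell.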

How do I prevent backward computation in specific layers in Caffe?

时光毁灭记忆、已成空白 submitted on 2019-12-19 09:17:25
Question: I want to disable the backward computation in certain convolution layers in Caffe; how do I do this? I have used the propagate_down setting, but found that it works for fc layers and not convolution layers. Please help.

First update: I set propagate_down: false in the test/pool_proj layer. I don't want it to run backward (but the other layers should). However, the log file says that the layer still needs backward.

Second update: Let's denote a deep learning model; there are two paths from the input layer to
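One point worth noting: propagate_down only controls whether gradients flow into a layer's bottom blobs, and Caffe will still report that a layer "needs backward" whenever any layer below it requires gradients. If the goal is merely to freeze a convolution layer's own weights, setting its learning-rate multipliers to zero is the usual approach. A sketch (layer and blob names are hypothetical):

```
layer {
  name: "conv_frozen"
  type: "Convolution"
  bottom: "data"
  top: "conv_frozen"
  param { lr_mult: 0 }  # weights: never updated
  param { lr_mult: 0 }  # bias: never updated
  convolution_param { num_output: 64 kernel_size: 3 }
}
```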

Where can I find the label map between a trained model's output (e.g. GoogleNet) and the real class labels?

老子叫甜甜 submitted on 2019-12-19 08:54:13
Question: Everyone, I am new to Caffe. Currently, I am trying to use the trained GoogleNet downloaded from the model zoo to classify some images. However, the network's output seems to be a vector rather than a real label (like dog or cat). Where can I find the label map between a trained model's output (like GoogleNet's) and the real class labels? Thanks.

Answer 1: If you got Caffe from git, you should find in the data/ilsvrc12 folder a shell script, get_ilsvrc_aux.sh. This script should download several files used for
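Once get_ilsvrc_aux.sh has fetched its files, synset_words.txt maps each output index to a readable label, so indexing it with the argmax of the probability vector gives the class name. A self-contained sketch using a tiny stand-in label file (the real file has 1000 lines):

```python
# Build a 3-line stand-in for data/ilsvrc12/synset_words.txt so the
# sketch runs anywhere; with the real file, all 1000 entries apply.
with open("synset_words_demo.txt", "w") as f:
    f.write("n01503061 bird\nn02121620 cat\nn02084071 dog\n")

# Each line is "<synset id> <label>"; keep only the label part.
labels = [line.strip().split(" ", 1)[1]
          for line in open("synset_words_demo.txt")]

# Stand-in for the real output, e.g. net.blobs['prob'].data[0].
prob = [0.1, 0.7, 0.2]
top_class = labels[prob.index(max(prob))]
print(top_class)  # -> cat
```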

'utf-8' codec can't decode byte 0x80

喜欢而已 submitted on 2019-12-19 05:59:49
Question: I'm trying to download the BVLC-trained model and I'm stuck with this error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 110: invalid start byte

I think it's because of the following function (complete code):

# Closure-d function for checking SHA1.
def model_checks_out(filename=model_filename, sha1=frontmatter['sha1']):
    with open(filename, 'r') as f:
        return hashlib.sha1(f.read()).hexdigest() == sha1

Any idea how to fix this?

Answer 1: You are opening a file that is not UTF-8
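The checker opens the file in text mode ('r'), so Python 3 tries to decode the caffemodel's binary bytes as UTF-8 and hits byte 0x80. Opening in binary mode ('rb') hashes the raw bytes instead. A sketch of the fix, demonstrated on a tiny synthetic file (the default arguments from the original are dropped to keep it self-contained):

```python
import hashlib

def model_checks_out(filename, sha1):
    # 'rb', not 'r': hash raw bytes, never decode them as text.
    with open(filename, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest() == sha1

# Demo file starting with a non-UTF-8 byte (0x80), like a caffemodel blob.
with open("demo.bin", "wb") as f:
    f.write(b"\x80caffemodel")

expected = hashlib.sha1(b"\x80caffemodel").hexdigest()
print(model_checks_out("demo.bin", expected))  # -> True
```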

How to input multiple N-D arrays to a net in caffe?

蓝咒 submitted on 2019-12-19 05:05:05
Question: I want to create a custom loss layer for semantic segmentation in Caffe that requires multiple inputs. I want this loss function to take an additional input factor in order to penalize missed detections of small objects. To do that, I have created a ground-truth image that contains a weight for each pixel: if the pixel belongs to a small object, the weight is high. I am a newbie in Caffe and I do not know how to feed my net three 2-D signals at the same time (image, gt-mask and the per-pixel weights
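One common way to feed several aligned N-D arrays per sample is an HDF5Data layer: each top blob corresponds to a dataset of the same name in the .h5 files, and all datasets must share the same first (batch) dimension. A hedged sketch, with dataset and file names that are assumptions, not part of the original question:

```
layer {
  name: "data"
  type: "HDF5Data"
  top: "image"          # each top matches a dataset name inside the .h5 files
  top: "gt_mask"
  top: "pixel_weights"
  hdf5_data_param {
    source: "train_h5_list.txt"  # text file listing the .h5 file paths
    batch_size: 8
  }
}
```

The custom loss layer can then declare the three blobs as its bottoms.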

How to use multiple CPU cores to train NNs with Caffe and OpenBLAS

独自空忆成欢 submitted on 2019-12-18 13:35:06
Question: I have been learning deep learning recently and my friend recommended Caffe. After installing it with OpenBLAS, I followed the tutorial for the MNIST task in the docs. But I later found it was super slow and only one CPU core was working. The problem is that the servers in my lab don't have GPUs, so I have to use CPUs instead. I googled this and found some pages about it. I tried export OPENBLAS_NUM_THREADS=8 and export OMP_NUM_THREADS=8, but Caffe still used only one core. How can I make Caffe use multiple CPUs?
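A hedged sketch of the usual remedy: those environment variables only take effect if the linked OpenBLAS was built with threading support, and stock packages are often single-threaded. Rebuilding OpenBLAS with OpenMP and relinking Caffe against it lets the thread counts apply (the commented build commands are assumptions about a standard OpenBLAS source tree):

```shell
# In the OpenBLAS source tree, rebuild with OpenMP, then rebuild Caffe
# with BLAS := open in Makefile.config:
#   make USE_OPENMP=1 && sudo make install
#   (then: make clean && make all in the Caffe tree)

# With an OpenMP-enabled OpenBLAS, these now control the core count:
export OPENBLAS_NUM_THREADS=8
export OMP_NUM_THREADS=8
echo "$OMP_NUM_THREADS"
```

Note that only the BLAS-heavy parts of training parallelize this way; data loading and non-BLAS layers remain largely single-threaded in stock Caffe.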