caffe

Caffe to Tensorflow (Kaffe by Ethereon) : TypeError: Descriptors should not be created directly, but only retrieved from their parent

我怕爱的太早我们不能终老 posted on 2020-01-15 05:49:08
Question: I wanted to use the wonderful caffe-tensorflow package by ethereon and I ran into the same problem described in this closed issue: when I run the example or try to import caffepb, I get the following error message:

>>> import caffepb
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "caffepb.py", line 28, in <module>
    type=None),
  File "/home/me/anaconda/python2.7/site-packages/google/protobuf/descriptor.py", line 652, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
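A common workaround (my suggestion, not part of the original post) is either to regenerate caffepb.py with a protoc that matches the installed protobuf runtime, or to force the pure-Python protobuf implementation before anything imports google.protobuf. A minimal sketch of the second option, assuming caffepb.py is importable from the working directory:

# Hedged workaround sketch: force the pure-Python protobuf implementation
# before google.protobuf is first imported; the C++-backed implementation is
# what raises _CheckCalledFromGeneratedFile() for out-of-date generated code.
import os
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'

import caffepb   # the module generated by caffe-tensorflow
print('caffepb imported OK')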

Multiple pathways for data through a layer in Caffe

前提是你 posted on 2020-01-15 04:51:46
Question: I would like to construct a network in Caffe in which the incoming data is split up initially, passes separately through the same set of layers, and is finally recombined using an Eltwise layer. After this, all the parts will move as a single blob. The layer configuration of the part of the network through which the data moves in parallel will be identical, except for the learned parameters. Is there a way to define this network in Caffe without redefining the layers through which the different
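One way to avoid writing the parallel branches out by hand (my suggestion, not from the original post) is to generate the prototxt with pycaffe's NetSpec: define the shared layer configuration once as a Python function and instantiate it per branch, so each branch still gets its own learned parameters. A minimal sketch; shapes, layer choices and names are illustrative assumptions:

# Sketch: define the branch architecture once, instantiate it per data path,
# then recombine the paths with an Eltwise layer.
import caffe
from caffe import layers as L, params as P

def branch(bottom):
    # The shared branch configuration, written once in Python. Every call
    # creates a fresh set of layers with its own learned parameters.
    return L.Convolution(bottom, kernel_size=3, num_output=16, pad=1,
                         weight_filler=dict(type='xavier'))

n = caffe.NetSpec()
n.data_a = L.DummyData(shape=dict(dim=[1, 3, 32, 32]))
n.data_b = L.DummyData(shape=dict(dim=[1, 3, 32, 32]))

n.branch_a = branch(n.data_a)
n.branch_b = branch(n.data_b)

# Recombine the parallel paths into a single blob.
n.merged = L.Eltwise(n.branch_a, n.branch_b, operation=P.Eltwise.SUM)

print(n.to_proto())   # prototxt text that can be written to a file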

Caffe runtest fails

Deadly posted on 2020-01-14 14:35:30
Question: After successfully building Caffe, I run make runtest and it fails in ImageDataLayer, DBTest, DataTransformTest, HDF5OutputLayerTest and some solvers. Is there a missing step in the building/linking to specific paths? Here is the part of the test cases that fail (some at the end are removed to keep the question body short):

[ FAILED ] 349 tests, listed below:
[ FAILED ] NetUpgradeTest.TestUpgradeV1LayerType
[ FAILED ] NetTest/0.TestAllInOneNetTrain, where TypeParam = caffe::CPUDevice<float>
[

VGG 16/19 Slow Runtimes

偶尔善良 posted on 2020-01-14 05:58:09
Question: When I try to get an output from the pre-trained VGG 16/19 models using Caffe with Python (both 2.7 and 3.5), the net.forward() step takes over 15 seconds (on my laptop's CPU). I was wondering if anyone might advise me as to why this could be; with many other models (e.g. ResNet, AlexNet) I get an output in a split second, and this is the only model I've found so far that performs this poorly. The code I'm using is as follows:

img = cv2.imread(path + img_name + '.jpg')
img =
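For what it's worth, a common first check (my suggestion, not from the original post) is whether the 15 seconds includes the one-time model loading and whether Caffe is running in the intended mode. A small timing sketch; the deploy and weight file paths are placeholders:

# Timing sketch: separate one-time model loading from the per-image forward
# pass, and compare repeated calls. File paths below are placeholders.
import time
import numpy as np
import caffe

caffe.set_mode_cpu()            # or: caffe.set_device(0); caffe.set_mode_gpu()
net = caffe.Net('VGG_ILSVRC_16_layers_deploy.prototxt',   # placeholder path
                'VGG_ILSVRC_16_layers.caffemodel',         # placeholder path
                caffe.TEST)

net.blobs['data'].reshape(1, 3, 224, 224)
net.blobs['data'].data[...] = np.random.rand(1, 3, 224, 224)

t0 = time.time()
net.forward()                   # the first call can be slower (lazy allocation)
print('first forward:  %.2f s' % (time.time() - t0))

t0 = time.time()
net.forward()
print('second forward: %.2f s' % (time.time() - t0))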

How to reuse same network twice within a new network in CAFFE

泪湿孤枕 posted on 2020-01-13 13:10:14
Question: I have a pretrained network (let's call it N) that I would like to use twice within a new network. Does anybody know how to duplicate it? I would then like to assign a different learning rate to each copy. For example (N1 is the first copy of N, N2 is the second copy of N), the new network might look like:

N1 --> [joint ip
N2 -->  layer]

I know how to reuse N with a single copy; however, since N1 and N2 will have different (finetune) learning rates, I don't know how I can make two copies of N and assign
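One possible approach (a sketch under assumed names and paths, not a verified recipe) is to duplicate N's layers in the new prototxt under suffixed names (e.g. conv1_a and conv1_b), set a different lr_mult in each copy's param blocks, and then initialize both copies from the pretrained weights in Python:

# Sketch: copy the pretrained weights of network N into two renamed copies
# ('_a' and '_b' suffixes) inside the new network. File paths, layer names
# and the suffix convention are assumptions.
import caffe

pretrained = caffe.Net('N_deploy.prototxt', 'N.caffemodel', caffe.TEST)  # placeholders
new_net    = caffe.Net('new_net.prototxt', caffe.TEST)                   # contains conv1_a, conv1_b, ...

for layer_name, params in pretrained.params.items():
    for suffix in ('_a', '_b'):
        target = layer_name + suffix
        if target in new_net.params:
            for i, blob in enumerate(params):
                new_net.params[target][i].data[...] = blob.data

new_net.save('new_net_init.caffemodel')
# The different learning rates per copy live in new_net.prototxt, via the
# lr_mult field of each layer's param { } blocks (e.g. 0.1 for the '_a'
# copy and 1.0 for the '_b' copy).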

Caffe SigmoidCrossEntropyLoss Layer Loss Function

余生长醉 posted on 2020-01-13 09:40:07
Question: I was looking through the code of Caffe's SigmoidCrossEntropyLoss layer and the docs, and I'm a bit confused. The docs list the loss function as the logit loss (I'd replicate it here, but without LaTeX the formula would be difficult to read; check the docs link, it's at the very top). However, the code itself (Forward_cpu(...)) shows a different formula:

Dtype loss = 0;
for (int i = 0; i < count; ++i) {
  loss -= input_data[i] * (target[i] - (input_data[i] >= 0)) - log(1 + exp(input_data[i
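The two expressions are in fact the same loss: the Forward_cpu version is a numerically stable rewrite of the logit loss that avoids overflowing exp() for large-magnitude inputs. A small check of my own (plain numpy, not Caffe code) comparing the naive cross-entropy on sigmoid outputs with the stable per-element formula:

# Numerical check that the "logit loss" from the docs and the stable formula
# in Forward_cpu agree. x is the pre-sigmoid input, t the binary target.
import numpy as np

def naive_loss(x, t):
    p = 1.0 / (1.0 + np.exp(-x))
    return -(t * np.log(p) + (1 - t) * np.log(1 - p))

def stable_loss(x, t):
    s = (x >= 0).astype(x.dtype)
    # per-element form of: loss -= x * (t - s) - log(1 + exp(x - 2 * x * s))
    return -(x * (t - s) - np.log(1 + np.exp(x - 2 * x * s)))

x = np.random.randn(1000) * 5
t = np.random.randint(0, 2, size=1000).astype(float)
print(np.allclose(naive_loss(x, t), stable_loss(x, t)))  # True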

slice/split a layer in keras as in caffe

妖精的绣舞 posted on 2020-01-13 02:13:15
Question: I have used this converter to convert a Caffe model to Keras. But one of my layers is of type slice and it needs to be converted as well; the converter currently does not support this and raises an exception. Is there any workaround for it? Here is my layer:

layer {
  name: "slice_label"
  type: SLICE
  bottom: "label"
  top: "label_wpqr"
  top: "label_xyz"
  slice_param {
    slice_dim: 1
    slice_point: 4
  }
}

Answer 1: It seems that you want to use a Lambda layer. In this case you may do the following: sliced
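Along the lines of that answer, the Caffe Slice (slice_dim: 1, slice_point: 4) can be reproduced with two Lambda layers in Keras. A sketch of mine; the input shape (total width of 7) and tensor names are assumptions:

# Sketch: split a (batch, 7) "label" tensor into the first 4 and the last 3
# columns, mirroring Caffe's slice_dim: 1 / slice_point: 4.
from keras.layers import Input, Lambda
from keras.models import Model

label = Input(shape=(7,), name='label')
label_wpqr = Lambda(lambda x: x[:, :4], name='label_wpqr')(label)
label_xyz  = Lambda(lambda x: x[:, 4:], name='label_xyz')(label)

model = Model(inputs=label, outputs=[label_wpqr, label_xyz])
model.summary()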

Importing caffe results in ImportError: “No module named google.protobuf.internal” (import enum_type_wrapper)

|▌冷眼眸甩不掉的悲伤 posted on 2020-01-12 01:45:09
Question: I installed Anaconda Python on my machine. When I start the Python interpreter and type "import caffe" in the Python shell, I get the following error:

ImportError: No module named google.protobuf.internal

I have the following files: wire_format_lite_inl.h, wire_format_lite.h, wire_format.h, unknown_field_set.h, text_format.h, service.h, repeated_field.h, reflection_ops.h, message_lite.h, message.h, generated_message_util.h, extension_set.h, descriptor.proto, descriptor.h, generated_message_reflection.h
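Note that these are the protobuf C++ headers; the error is about the protobuf Python package missing from the Anaconda environment. A quick diagnostic of my own (not from the post) is to check, from the same interpreter, whether and where protobuf is importable; if it fails, installing the protobuf Python package into that environment (e.g. pip install protobuf) is the usual fix:

# Diagnostic sketch: check whether the running interpreter can see the
# protobuf Python package and where it is loaded from.
import sys
print(sys.executable)                      # which Python is actually running

try:
    import google.protobuf
    from google.protobuf.internal import enum_type_wrapper
    print('protobuf', google.protobuf.__version__, 'at', google.protobuf.__file__)
except ImportError as e:
    print('protobuf is missing or incomplete in this environment:', e)
    # typical fix: pip install protobuf (or conda install protobuf)
    # inside the same Anaconda environment that runs "import caffe"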

Deploy caffe regression model

≡放荡痞女 posted on 2020-01-10 05:26:13
Question: I have trained a regression network with Caffe. I use a "EuclideanLoss" layer in both the train and test phase. I have plotted these and the results look promising. Now I want to deploy the model and use it. I know that if SoftmaxLoss is used, the final layer must be Softmax in the deploy file. What should this be in the case of Euclidean loss?

Answer 1: For deploy you only need to discard the loss layer, in your case the "EuclideanLoss" layer. The output of your net is the "bottom" you fed the
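Concretely, a sketch under the assumption that the layer feeding "EuclideanLoss" during training was an InnerProduct layer named "fc_out": the deploy prototxt simply ends at that layer, and the prediction is read from its blob after a forward pass (layer name and file paths below are placeholders):

# Deploy sketch for a Euclidean-loss regression net: the deploy prototxt ends
# at the layer that fed the loss; no extra output layer is needed.
import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'regression.caffemodel', caffe.TEST)  # placeholders

net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
net.forward()

prediction = net.blobs['fc_out'].data   # the regression output (the former loss "bottom")
print(prediction)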