caffe

How to generate a sentence from feature vector or words?

Submitted by 筅森魡賤 on 2019-12-22 06:38:42
Question: I used the VGG 16-layer Caffe model for image captioning, and I have several captions per image. Now I want to generate a sentence from those captions (words). I read in a paper on LSTMs that I should remove the SoftMax layer from the training network and feed the 4096-dimensional feature vector from the fc7 layer directly to the LSTM. I am new to LSTMs and RNNs. Where should I begin? Is there any tutorial showing how to generate a sentence by sequence labeling? Answer 1: AFAIK the master branch of BVLC/caffe does not
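A minimal pycaffe sketch of the feature-extraction step this sets up, assuming a VGG-16 deploy/weights pair (file and image names below are placeholders): run a forward pass that stops at fc7 and keep the 4096-d activation as the input to whatever LSTM decoder is trained separately.

```python
import numpy as np
import caffe

# Placeholder file names for a VGG-16 deploy/weights pair.
net = caffe.Net('VGG_ILSVRC_16_layers_deploy.prototxt',
                'VGG_ILSVRC_16_layers.caffemodel', caffe.TEST)

# Preprocessing matching the usual VGG recipe: BGR, mean-subtracted, C x H x W.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_mean('data', np.array([103.939, 116.779, 123.68]))
transformer.set_raw_scale('data', 255)        # caffe.io.load_image returns [0, 1]
transformer.set_channel_swap('data', (2, 1, 0))

img = caffe.io.load_image('example.jpg')      # placeholder image path
net.blobs['data'].data[0] = transformer.preprocess('data', img)
net.forward(end='fc7')                        # stop before fc8 / SoftMax
fc7 = net.blobs['fc7'].data[0].copy()         # 4096-d vector to feed the LSTM
```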

Caffe net.predict() outputs random results (GoogleNet)

Submitted by 心不动则不痛 on 2019-12-22 06:08:16
Question: I used the pretrained GoogleNet from https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet and fine-tuned it with my own data (~100k images, 101 classes). After one day of training I reached 62% top-1 and 85% top-5 classification accuracy, and I tried to use this network to predict several images. I just followed the example from https://github.com/BVLC/caffe/blob/master/examples/classification.ipynb. Here is my Python code: import caffe import numpy as np caffe_root = './caffe' MODEL_FILE = 'caffe
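Random-looking predictions at test time most often come from preprocessing that does not match training (mean subtraction, channel order, input scale). Below is a hedged sketch of a deploy-time setup with placeholder file names, assuming the deploy net's input blob is named 'data', its output blob is 'prob', and ImageNet-style BGR channel means.

```python
import numpy as np
import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', 'finetuned_googlenet.caffemodel', caffe.TEST)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))                   # H x W x C -> C x H x W
transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))  # BGR means used for ImageNet
transformer.set_raw_scale('data', 255)                         # load_image returns [0, 1]
transformer.set_channel_swap('data', (2, 1, 0))                # RGB -> BGR

img = caffe.io.load_image('test.jpg')                          # placeholder image path
net.blobs['data'].data[0] = transformer.preprocess('data', img)
out = net.forward()
print(out['prob'][0].argmax())                                 # top-1 class index
```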

What does 'Attempting to upgrade input file specified using deprecated transformation parameters' mean?

Submitted by ≡放荡痞女 on 2019-12-22 05:05:38
Question: I am currently trying to train my first net with Caffe. I get the following output: caffe train --solver=first_net_solver.prototxt I0515 09:01:06.577710 15331 caffe.cpp:117] Use CPU. I0515 09:01:06.578014 15331 caffe.cpp:121] Starting Optimization I0515 09:01:06.578097 15331 solver.cpp:32] Initializing solver from parameters: test_iter: 1 test_interval: 1 base_lr: 0.01 display: 1 max_iter: 2 lr_policy: "inv" gamma: 0.0001 power: 0.75 momentum: 0.9 weight_decay: 0 snapshot: 1 snapshot_prefix:
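The message means the net prototxt referenced by the solver still carries data-transformation fields (scale, mean_file, crop_size, mirror) in their old location inside data_param, and Caffe upgrades them on the fly; in the current schema they belong in the layer's transform_param. A small sketch of moving one such field yourself, assuming the Python protobuf bindings, a placeholder file name, and new-style layer { } blocks in the definition:

```python
from caffe.proto import caffe_pb2
from google.protobuf import text_format

net = caffe_pb2.NetParameter()
with open('first_net.prototxt') as f:               # placeholder net definition
    text_format.Merge(f.read(), net)

for layer in net.layer:
    if layer.type == 'Data' and layer.data_param.HasField('scale'):
        # Move the old-style field into transform_param, where it now belongs.
        layer.transform_param.scale = layer.data_param.scale
        layer.data_param.ClearField('scale')

with open('first_net_upgraded.prototxt', 'w') as f:
    f.write(text_format.MessageToString(net))
```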

Google Inceptionism: obtain images by class

Submitted by 懵懂的女人 on 2019-12-21 17:09:54
Question: In the famous Google Inceptionism article, http://googleresearch.blogspot.jp/2015/06/inceptionism-going-deeper-into-neural.html, they show images obtained for each class, such as banana or ant. I want to do the same for other datasets. The article does describe how the images were obtained, but I feel the explanation is insufficient. There is related code at https://github.com/google/deepdream/blob/master/dream.ipynb, but what it does is produce a random dreamy image rather than specifying a
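The per-class images in the post come from gradient ascent on a class score starting from noise, not from the layer-amplification loop in dream.ipynb. A rough pycaffe sketch of the core idea, with placeholder model files and a hypothetical class index, assuming the final SoftMax layer and its top blob are both named 'prob'; the article additionally relies on regularization (jitter, blur, L2 decay) to get clean images, which is omitted here.

```python
import numpy as np
import caffe

# Placeholder files; the deploy prototxt needs "force_backward: true"
# so that gradients propagate all the way back to the input blob.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

shape = net.blobs['data'].data.shape           # expected (1, 3, H, W)
img = np.random.normal(128, 8, shape[1:]).astype(np.float32)
target = 123                                   # hypothetical class index

for step in range(200):
    net.blobs['data'].data[0] = img
    net.forward(end='prob')
    grad = np.zeros_like(net.blobs['prob'].data)
    grad[0, target] = 1.0                      # push up this class's score
    net.backward(start='prob', **{'prob': grad})
    g = net.blobs['data'].diff[0]
    img += (1.5 / (np.abs(g).mean() + 1e-8)) * g   # normalized ascent step
```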

semantic segmentation for large images

Submitted by ぃ、小莉子 on 2019-12-21 16:56:43
Question: I am working on a limited number of large images, each of which can be 3072*3072 pixels. To train a semantic segmentation model using FCN or U-net, I construct a large training set in which each training image is 128*128. In the prediction stage, I cut a large image into small pieces of the same size as the training images, 128*128, feed these small pieces into the trained model, and get the predicted masks. Afterwards, I just stitch these small patches together to get the mask
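A refinement worth considering over non-overlapping tiles is to predict overlapping patches and average them, which reduces seams at patch borders. A small numpy sketch of that stitching for a single-class probability map, where predict_patch stands in for a forward pass through the trained FCN/U-net and the image sides are assumed to be compatible with the stride (as 3072 is):

```python
import numpy as np

def stitch_predictions(image, predict_patch, patch=128, stride=64):
    # Average overlapping per-pixel probabilities from 128x128 patches.
    H, W = image.shape[:2]
    probs = np.zeros((H, W), dtype=np.float32)
    counts = np.zeros((H, W), dtype=np.float32)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            probs[y:y + patch, x:x + patch] += predict_patch(image[y:y + patch, x:x + patch])
            counts[y:y + patch, x:x + patch] += 1
    # Full-size probability map; threshold it to obtain the final mask.
    return probs / np.maximum(counts, 1)
```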

Extracting weights from .caffemodel without caffe installed in Python

Submitted by £可爱£侵袭症+ on 2019-12-21 05:46:07
Question: Is there a relatively simple way to extract weights in Python from one of the many pretrained models in the Caffe Model Zoo WITHOUT Caffe (nor pycaffe), i.e. to parse a .caffemodel into HDF5/numpy or whatever format can be read by Python? All the answers I found use C++ code with Caffe classes or pycaffe. I have looked at pycaffe's code, and it looks like you really need Caffe to make sense of the binary. Is that the only solution? Answer 1: I had to resolve that exact issue just now. Assuming you have a
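One route that needs only the protobuf package: compile caffe.proto once with protoc (protoc --python_out=. caffe.proto) to get a caffe_pb2 module, then parse the .caffemodel with it; no Caffe or pycaffe installation is required. A sketch under those assumptions (file names are placeholders; very old models keep their layers in .layers as V1LayerParameter rather than .layer):

```python
import numpy as np
import caffe_pb2                     # generated by protoc from caffe.proto

model = caffe_pb2.NetParameter()
with open('some_model.caffemodel', 'rb') as f:   # placeholder file name
    model.ParseFromString(f.read())

weights = {}
for layer in model.layer:            # use model.layers for old V1-format models
    arrays = []
    for blob in layer.blobs:
        if blob.shape.dim:                       # new-style shape message
            shape = tuple(blob.shape.dim)
        else:                                    # legacy 4-D shape fields
            shape = (blob.num, blob.channels, blob.height, blob.width)
        arrays.append(np.array(blob.data, dtype=np.float32).reshape(shape))
    if arrays:
        weights[layer.name] = arrays             # typically [W, b]
```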

How reconstruct the caffe net by using pycaffe

Submitted by 可紊 on 2019-12-21 05:05:00
Question: What I want is this: after loading a net, I will decompose certain layers and save the new net. For example, original net: data -> conv1 -> conv2 -> fc1 -> fc2 -> softmax; new net: data -> conv1_1 -> conv1_2 -> conv2_1 -> conv2_2 -> fc1 -> fc2 -> softmax. During this process, I am stuck on the following: 1. How do I create a new layer with specified layer parameters in pycaffe? 2. How do I copy the layer parameters from existing layers (such as fc1 and fc2 above)? I know by using
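For the parameter-copying half, a minimal pycaffe sketch, assuming the decomposed architecture has already been written out as a new prototxt (file names are placeholders): load both nets, copy the blobs of layers whose names are preserved (fc1, fc2 here), and save the result. The decomposed conv1_*/conv2_* weights would still have to be filled in from your own factorization of the original conv1/conv2 blobs.

```python
import caffe

old_net = caffe.Net('original_deploy.prototxt', 'original.caffemodel', caffe.TEST)
new_net = caffe.Net('decomposed_deploy.prototxt', caffe.TEST)   # new architecture, random init

# Copy weights and biases for every layer name that exists in both nets.
for name in new_net.params:
    if name in old_net.params:
        for i, blob in enumerate(old_net.params[name]):
            new_net.params[name][i].data[...] = blob.data

new_net.save('decomposed.caffemodel')
```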

caffe: What does the **group** param mean?

Submitted by 心已入冬 on 2019-12-20 10:47:16
Question: I have read the documentation about the group param: group (g) [default 1]: If g > 1, we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the i-th output group channels will only be connected to the i-th input group channels. But first of all, I do not understand exactly what this means, and secondly, why would I use it? Could anyone explain it a bit better? As far as I have understood it, it
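A tiny illustration of what the grouping does to connectivity, using made-up channel counts: with group = g, output channel i only sees the input channels of its own group, so the convolution's weight blob shrinks from num_output x channels x kH x kW to num_output x (channels / g) x kH x kW (the extreme case g = channels = num_output is a depthwise convolution).

```python
def grouped_connectivity(in_channels, out_channels, groups):
    # Print which input channels each output channel is connected to.
    in_per_group = in_channels // groups
    out_per_group = out_channels // groups
    for o in range(out_channels):
        g = o // out_per_group
        ins = list(range(g * in_per_group, (g + 1) * in_per_group))
        print('output channel %d <- input channels %s (group %d)' % (o, ins, g))

grouped_connectivity(in_channels=4, out_channels=4, groups=2)
# output channel 0 <- input channels [0, 1] (group 0)
# output channel 1 <- input channels [0, 1] (group 0)
# output channel 2 <- input channels [2, 3] (group 1)
# output channel 3 <- input channels [2, 3] (group 1)
```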

compiling caffe on Yosemite

Submitted by 删除回忆录丶 on 2019-12-20 10:46:06
Question: I'm trying to install caffe on Yosemite, and my C is not the strongest. Here is my error: Alis-MacBook-Pro:caffe ali$ make all NVCC src/caffe/layers/absval_layer.cu /usr/local/include/boost/smart_ptr/detail/sp_counted_base_clang.hpp(27): error: expected a ";" /usr/local/include/boost/smart_ptr/detail/sp_counted_base_clang.hpp(29): error: inline specifier allowed on function declarations only /usr/local/include/boost/smart_ptr/detail/sp_counted_base_clang.hpp(29): error: incomplete type is not

How to train a caffe model?

Submitted by 自作多情 on 2019-12-20 10:37:54
Question: Has anyone successfully trained a caffe model? I have a training-ready image set that I would like to use to create a caffe model for use with Google's Deep Dream. The only resources I've been able to find on how to train a model are these: the ImageNet Tutorial. EDIT: Here's another, but it doesn't create a deploy.prototxt file; when I try to use one from another model it "works" but isn't correct: caffe-oxford102. Can anyone point me in the right direction for training my own model? Answer 1: I have
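Besides the command-line route from the ImageNet tutorial (caffe train --solver=solver.prototxt), training can also be driven from pycaffe once a solver.prototxt and the train/val net it points to exist; file names below are placeholders. Note that deploy.prototxt is not produced by training: it is written by hand from the train/val definition, with the data and loss layers replaced by an input specification.

```python
import caffe

caffe.set_mode_cpu()                              # or caffe.set_mode_gpu(); caffe.set_device(0)
solver = caffe.SGDSolver('solver.prototxt')       # placeholder solver definition
# solver.net.copy_from('pretrained.caffemodel')   # optional: fine-tune from existing weights
solver.solve()                                    # runs until max_iter from the solver file
solver.net.save('my_model.caffemodel')            # weights usable with a hand-written deploy.prototxt
```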