caffe

semantic segmentation for large images

[亡魂溺海] submitted on 2019-12-04 08:09:43
I am working on a limited number of large images, each of which can have 3072*3072 pixels. To train a semantic segmentation model using FCN or U-net, I construct a large set of training patches, each 128*128 . In the prediction stage, I cut a large image into small pieces of the same size as the training patches, 128*128 , feed these small pieces into the trained model, and get the predicted masks. Afterwards, I just stitch these small patches together to get the mask for the whole image. Is this the right mechanism to perform the semantic segmentation against the large
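A minimal sketch of the tile-and-stitch prediction loop described above, assuming a trained pycaffe deploy net whose input blob is named 'data' and whose output blob is named 'score' (file and blob names are placeholders, and the image height/width are assumed to be multiples of the tile size):

import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

def predict_large(image, tile=128):
    """Predict a full-size mask for a (C, H, W) image by tiling it into tile x tile patches."""
    c, h, w = image.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[:, y:y+tile, x:x+tile]
            net.blobs['data'].reshape(1, c, patch.shape[1], patch.shape[2])
            net.blobs['data'].data[0] = patch
            out = net.forward()['score']              # shape (1, num_classes, th, tw)
            mask[y:y+tile, x:x+tile] = out[0].argmax(axis=0)
    return mask

In practice, predicting on overlapping tiles and keeping only the central region of each prediction reduces seam artifacts at patch borders.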

Google Inceptionism: obtain images by class

孤人 submitted on 2019-12-04 08:03:47
In the famous Google Inceptionism article, http://googleresearch.blogspot.jp/2015/06/inceptionism-going-deeper-into-neural.html they show images obtained for each class, such as banana or ant. I want to do the same for other datasets. The article does describe how the images were obtained, but I feel that the explanation is insufficient. There is related code https://github.com/google/deepdream/blob/master/dream.ipynb but what it does is produce a random dreamy image, rather than specifying a class and learning what it looks like to the network, as shown in the article above. Could anyone give a more
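A minimal sketch of the underlying idea (gradient ascent on the input image to maximize one class score), assuming a CaffeNet-style deploy net where the final InnerProduct layer and its top blob are both named 'fc8'; all file, blob, and layer names are placeholders, and regularization (L2 decay, blurring) is omitted:

import numpy as np
import caffe

# Note: the deploy prototxt may need 'force_backward: true' for gradients
# to reach the data blob.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

def class_visualization(class_idx, steps=200, lr=1.0):
    """Gradient ascent on the input image to maximize the score of one class."""
    shape = net.blobs['data'].data.shape             # (1, 3, H, W)
    img = np.random.randn(*shape) * 10               # start from noise
    for _ in range(steps):
        net.blobs['data'].data[...] = img
        net.forward(end='fc8')                        # forward up to the class scores
        net.blobs['fc8'].diff[...] = 0
        net.blobs['fc8'].diff[0, class_idx] = 1.0     # objective: d(score_c)/d(image)
        net.backward(start='fc8')
        img += lr * net.blobs['data'].diff            # ascend the gradient
    return img

The images in the article additionally use priors (jitter, smoothing, multi-scale) to keep the result natural-looking; the loop above only illustrates the core optimization.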

How to split a Blob along channels in Caffe

瘦欲@ submitted on 2019-12-04 07:21:34
I would like to split a Blob's channels in Caffe, so that I can split one Blob of shape (N, c, w, h) into two output Blobs of shape (N, c/2, w, h) . What I have described above is very general; what I actually want to do is separate a two-channel input image into two different images. One goes to a convolutional layer and the other goes to a pooling layer, and finally I concatenate the outputs. So I am wondering whether a Caffe layer that allows the user to do such a thing exists, and how to define it in the prototxt file. Yes, the Slice layer is for that purpose. From the Layer Catalogue : The Slice layer
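A small sketch of this topology using pycaffe's NetSpec, which prints the corresponding prototxt; the layer names and input size here are illustrative only:

import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 2, 64, 64]))      # hypothetical two-channel input
# Slice along the channel axis (axis=1); slice_point=1 gives channels [0:1) and [1:2)
n.ch0, n.ch1 = L.Slice(n.data, ntop=2, slice_param=dict(axis=1, slice_point=[1]))
n.conv = L.Convolution(n.ch0, kernel_size=3, num_output=8, pad=1)
n.pool = L.Pooling(n.ch1, kernel_size=3, stride=1, pad=1, pool=P.Pooling.MAX)
# Concat requires matching spatial sizes, hence stride=1/pad=1 on the pooling branch.
n.concat = L.Concat(n.conv, n.pool, axis=1)
print(n.to_proto())                                    # emits the prototxt definition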

Testing a regression network in caffe

对着背影说爱祢 submitted on 2019-12-04 05:37:31
Question: I am trying to count objects in an image using Alexnet. I currently have images containing 1, 2, 3 or 4 objects each. As an initial check, I have 10 images per class. For example, the training set looks like: image label image1 1 image2 1 image3 1 ... image39 4 image40 4 I used the imagenet create script to build an lmdb file for this dataset, which successfully converted my set of images to lmdb. Alexnet, as an example, is converted to a regression model for learning the number of objects in the
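One way to express the regression head is to end the network with a single-output InnerProduct layer trained with EuclideanLoss against the integer label. A minimal NetSpec sketch, with illustrative names and only a stand-in for the AlexNet feature layers:

import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
n.data, n.label = L.Data(source='train_lmdb', backend=P.Data.LMDB,
                         batch_size=32, ntop=2,
                         transform_param=dict(scale=1.0/255))
# ... the AlexNet convolutional feature layers would go here ...
n.feat = L.InnerProduct(n.data, num_output=256)
n.relu = L.ReLU(n.feat, in_place=True)
n.count = L.InnerProduct(n.relu, num_output=1)        # single regressed value
n.loss = L.EuclideanLoss(n.count, n.label)            # L2 loss treats the label as a continuous target
print(n.to_proto())

The integer class labels stored in the lmdb (1..4) are simply used as continuous regression targets by EuclideanLoss.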

Sigaction and porting Linux code to Windows

跟風遠走 submitted on 2019-12-04 04:35:39
I am trying to port the caffe source code (developed for Linux) to a Windows environment. The problem is the sigaction structure used in signal_handler.cpp and signal_handler.h . The source code is shown below. My question is which library or code replacement can make this sigaction work on Windows. ///Header file #ifndef INCLUDE_CAFFE_UTIL_SIGNAL_HANDLER_H_ #define INCLUDE_CAFFE_UTIL_SIGNAL_HANDLER_H_ #include "caffe/proto/caffe.pb.h" #include "caffe/solver.hpp" namespace caffe { class SignalHandler { public: // Constructor. Specify what action to take when a signal is received. SignalHandler

Is it possible in Caffe to calculate the number of operations taking place in an architecture? [duplicate]

耗尽温柔 submitted on 2019-12-04 03:38:56
Question: This question already has answers here: how to calculate a net's FLOPs in CNN (3 answers) Closed 2 years ago. I have seen that in the TensorFlow tutorials they provide some interesting statistics about different architectures, such as the number of operations taking place, etc.: This model achieves a peak performance of about 86% accuracy within a few hours of training time on a GPU. Please see below and the code for details. It consists of 1,068,298 learnable parameters and requires about 19.5M
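Caffe does not report these figures directly, but both can be derived from a loaded net in pycaffe. A rough sketch (file names assumed; the multiply-accumulate estimate ignores grouped convolutions and non-conv/fc layers):

import caffe

net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Learnable parameters: sum the sizes of all weight/bias blobs.
n_params = sum(w.data.size for name in net.params for w in net.params[name])
print('learnable parameters:', n_params)

# Rough multiply-accumulate count: (weights per output element) * (output elements).
macs = 0
for name, params in net.params.items():
    top = net.top_names[name][0]
    weights = params[0].data
    out = net.blobs[top].data
    if weights.ndim == 4:                       # convolution: (out_c, in_c, kh, kw)
        macs += weights[0].size * out[0].size   # per-pixel cost * output volume
    elif weights.ndim == 2:                     # inner product: (out, in)
        macs += weights.size
print('approx. multiply-accumulates per image:', macs)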

Extracting weights from .caffemodel without caffe installed in Python

落爺英雄遲暮 submitted on 2019-12-03 20:44:16
Is there a relatively simple way to extract the weights in Python from one of the many pretrained models in the Caffe Model Zoo WITHOUT Caffe (nor pycaffe)? i.e. parsing .caffemodel to hdf5/numpy or whatever format that can be read by Python? All the answers I found use C++ code with caffe classes or pycaffe. I have looked at pycaffe's code and it looks like you really need caffe to make sense of the binary; is that the only solution? I had to resolve that exact issue just now. Assuming you have a .caffemodel (binary proto format), it turns out to be quite simple. Download the latest caffe.proto . Compile into
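A sketch of that approach using only the protobuf package: compile caffe.proto with protoc and parse the .caffemodel as a NetParameter message. The model file name below is an assumed example:

# First, without installing caffe:  protoc --python_out=. caffe.proto
import numpy as np
import caffe_pb2                        # module generated by protoc above

net_param = caffe_pb2.NetParameter()
with open('bvlc_reference_caffenet.caffemodel', 'rb') as f:    # assumed file name
    net_param.ParseFromString(f.read())

weights = {}
# Newer models store layers in the 'layer' field; very old ones use 'layers'.
for layer in list(net_param.layer) or list(net_param.layers):
    blobs = []
    for blob in layer.blobs:
        dims = list(blob.shape.dim) if blob.shape.dim else \
               [blob.num, blob.channels, blob.height, blob.width]
        blobs.append(np.array(blob.data, dtype=np.float32).reshape(dims))
    if blobs:
        weights[layer.name] = blobs     # typically [weights, bias] per layer

print({k: [b.shape for b in v] for k, v in weights.items()})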

Caffe: Understanding the expected lmdb data structure for blobs

末鹿安然 submitted on 2019-12-03 16:52:00
Question: I'm trying to understand how data is interpreted in Caffe. For that I've taken a look at the MNIST tutorial. Looking at the input data definition: layers { name: "mnist" type: DATA data_param { source: "mnist_train_lmdb" backend: LMDB batch_size: 64 scale: 0.00390625 } top: "data" top: "label" } I've now looked at mnist_train_lmdb and taken one of the entries (shown in hex): 0801101C181C229006 00000000000000000000000000000000000000000000000000000000
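Each lmdb value is a serialized caffe Datum protobuf; the leading bytes 08 01 10 1C 18 1C 22 90 06 decode as channels=1, height=28, width=28, followed by a 784-byte data field (the zero bytes are the black border pixels of the digit image). A small sketch for inspecting one entry, assuming the lmdb and protobuf Python packages are available:

import lmdb
import numpy as np
from caffe.proto import caffe_pb2   # or a protoc-generated caffe_pb2 without caffe installed

env = lmdb.open('mnist_train_lmdb', readonly=True)
with env.begin() as txn:
    key, value = next(txn.cursor().iternext())   # first serialized Datum
    datum = caffe_pb2.Datum()
    datum.ParseFromString(value)

print(datum.channels, datum.height, datum.width, datum.label)
img = np.frombuffer(datum.data, dtype=np.uint8).reshape(
    datum.channels, datum.height, datum.width)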

How to reconstruct a caffe net using pycaffe

雨燕双飞 submitted on 2019-12-03 16:09:49
What I want is: after loading a net, decompose certain layers and save the new net. For example: Original net: data -> conv1 -> conv2 -> fc1 -> fc2 -> softmax; New net: data -> conv1_1 -> conv1_2 -> conv2_1 -> conv2_2 -> fc1 -> fc2 -> softmax. During this process I am stuck on the following: 1. How to create a new layer with specified layer parameters in pycaffe? 2. How to copy the layer parameters from existing layers (such as fc1 and fc2 above)? I know that by using caffe::net_spec we can define a new net manually, but caffe::net_spec cannot specify a layer from a
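A sketch of the parameter-copying part, assuming the new prototxt (with conv1_1, conv1_2, ...) has already been written, e.g. with caffe.NetSpec() or by editing the original prototxt; file names are placeholders:

import caffe

old_net = caffe.Net('original.prototxt', 'original.caffemodel', caffe.TEST)
new_net = caffe.Net('decomposed.prototxt', caffe.TEST)

# Copy layers that kept their names (fc1, fc2, ...) blob by blob; the decomposed
# layers (conv1_1, conv1_2, ...) must be filled with whatever factorization you compute.
for name in new_net.params:
    if name in old_net.params:
        for i, blob in enumerate(old_net.params[name]):
            new_net.params[name][i].data[...] = blob.data

new_net.save('decomposed.caffemodel')   # writes the new weights to disk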

Where is the layers module defined in PyCaffe

我怕爱的太早我们不能终老 submitted on 2019-12-03 15:35:26
I am modifying a Caffe tutorial to implement a neural network, but I am struggling to identify where some of the pycaffe modules are located, in order to see certain function definitions. For example, the tutorial uses: import caffe from caffe import layers as L, params as P .... L.Convolution(bottom, kernel_size=ks, stride=stride, num_output=nout, pad=pad, group=group) L.InnerProduct(bottom, num_output=nout) L.ReLU(fc, in_place=True) ... Where can I find these function definitions, and where can I see what other types of layers are pre-defined? I see that layers and params are defined here
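The L and P objects come from caffe/python/caffe/net_spec.py; their attributes (L.Convolution, L.ReLU, ...) are generated dynamically rather than defined one by one, so there is no per-layer Python source to read. The registered layer types come from the C++ layer registry, and each layer's parameters are described in caffe.proto. A short way to locate and list them (layer_type_list() is assumed to be present, as in recent Caffe versions):

import inspect
import caffe
from caffe import layers as L, params as P

print(inspect.getsourcefile(caffe.net_spec))   # path to net_spec.py, where L and P live
print(caffe.layer_type_list())                 # all layer types registered in the C++ build
print(L.Convolution)                           # attributes are created on the fly by net_spec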