caffe

How to solve this issue in Python (creating weights for Infogain Loss layer)?

烂漫一生 submitted on 2019-12-11 06:36:24
Question: I am working on semantic segmentation using CNNs. I have an imbalanced number of pixels for each class. Based on this link, I am trying to create the weight matrix H needed to define an InfogainLoss layer for my imbalanced classes. My data has five classes. I wrote the following code in Python. Read a sample image: im=imread(sample_img_path) Count the number of pixels of each class: cl0=np.count_nonzero(im == 0) #0=background class . . cl4=np.count_nonzero(im == 4) #4=class 4 output: 39817
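
A minimal sketch of one common way to turn such per-class pixel counts into an infogain matrix H (inverse-frequency weights on the diagonal) and save it as a binaryproto that an InfogainLoss layer can load. The weighting scheme, the output file name, and the variables cl1–cl3 (assumed to be computed the same way as cl0 and cl4 above) are illustrative assumptions, not the asker's exact setup:

import numpy as np
import caffe

# per-class pixel counts; cl1..cl3 assumed computed like cl0 and cl4 above
counts = np.array([cl0, cl1, cl2, cl3, cl4], dtype=np.float64)

# inverse-frequency weights, normalized so the average weight is 1
weights = counts.sum() / (len(counts) * counts)

# H is a diagonal num_classes x num_classes matrix for InfogainLoss
H = np.diag(weights).astype(np.float32)

# serialize H as a 1x1x5x5 blob so infogain_loss_param { source: ... } can read it
blob = caffe.io.array_to_blobproto(H.reshape(1, 1, 5, 5))
with open('infogain_H.binaryproto', 'wb') as f:
    f.write(blob.SerializeToString())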

How can I know whether “bias” exists in a layer?

雨燕双飞 submitted on 2019-12-11 06:17:59
Question: I'm trying to read the weights and biases of a caffe network with pycaffe. Here is my code: weight = net.params[layer_name][0].data bias = net.params[layer_name][1].data But some layers in my network have no bias, so this raises an "Index out of range" error. My question is: can I use something like if(net.params[layer_name][1] exists): bias = net.params[layer_name][1].data to guard the assignment to bias? And how should I write that code? Answer 1: You can simply iterate over net.params[layer_name]:
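
A small sketch of the check the answer hints at (my own illustration, not the answer's original code): len() on net.params[layer_name], or iterating over it, tells you how many parameter blobs the layer actually exposes, so the bias read can be guarded:

# a layer without bias exposes only one parameter blob, so check the length first
params = net.params[layer_name]
weight = params[0].data
bias = params[1].data if len(params) > 1 else None

# or simply iterate over whatever blobs are present, as the answer suggests
for i, blob in enumerate(net.params[layer_name]):
    print(layer_name, 'blob', i, 'shape:', blob.data.shape)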

caffe: Check failed: target_blobs.size() == source_layer.blobs_size() (2 vs. 1) Incompatible number of blobs for layer conv1

…衆ロ難τιáo~ submitted on 2019-12-11 06:14:15
Question: I modified the FCN net and designed a new net, in which I use two ImageData layers as inputs and want the net to produce a picture as output. Here are the train_val.prototxt and the deploy.prototxt. The original picture and the label are both grayscale images, and both are 224*224. I've trained a caffemodel and use infer.py to run segmentation with it, but I get the error: F0505 06:15:08.072602 30713 net.cpp:767] Check failed: target_blobs.size() == source_layer.blobs_size() (2 vs. 1)

How can I generate a data layer (HDF5) for training and testing in the same prototxt?

人走茶凉 submitted on 2019-12-11 06:03:20
Question: I have a data layer of HDF5 type. It contains both the TRAIN and TEST phases, as expected: name: "LogisticRegressionNet" layer { name: "data" type: "HDF5Data" top: "data" top: "label" include { phase: TRAIN } hdf5_data_param { source: "examples/hdf5_classification/data/train.txt" batch_size: 10 } } layer { name: "data" type: "HDF5Data" top: "data" top: "label" include { phase: TEST } hdf5_data_param { source: "examples/hdf5_classification/data/test.txt" batch_size: 10 } } I want to use Python to
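
One hedged way to emit exactly this pair of phase-specific HDF5Data layers from Python is to build the NetParameter message directly with caffe.proto.caffe_pb2; the sources and batch size are copied from the excerpt, everything else (output file name, omission of the rest of the net) is an assumption:

from caffe.proto import caffe_pb2

net = caffe_pb2.NetParameter()
net.name = 'LogisticRegressionNet'

for phase, source in [(caffe_pb2.TRAIN, 'examples/hdf5_classification/data/train.txt'),
                      (caffe_pb2.TEST,  'examples/hdf5_classification/data/test.txt')]:
    layer = net.layer.add()
    layer.name = 'data'
    layer.type = 'HDF5Data'
    layer.top.extend(['data', 'label'])
    layer.include.add().phase = phase          # restrict the layer to one phase
    layer.hdf5_data_param.source = source
    layer.hdf5_data_param.batch_size = 10

# text-format protobuf is exactly what .prototxt files contain
with open('train_test.prototxt', 'w') as f:
    f.write(str(net))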

How should I use blobs in a Caffe Python layer, and when does their training take place?

喜夏-厌秋 submitted on 2019-12-11 05:53:08
Question: I am creating a network using Caffe, for which I need to define my own layer. I would like to use a Python layer for this. My layer will contain some learned parameters. From this answer, I am told that I will need to create a blob vector for this. Is there any specification this blob needs to follow, such as constraints on its dimensions, etc.? Irrespective of what my layer does, can I create a one-dimensional blob and use its elements, one each, for any computation in
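
As a hedged illustration only (not taken from the thread): a parameterized Python layer typically adds its learned blobs in setup() with self.blobs.add_blob(...), fills their diffs in backward(), and lets the solver apply the actual update. The per-channel scale, the shapes, and the class name below are all made up for the sketch:

import caffe
import numpy as np

class LearnedScaleLayer(caffe.Layer):
    """Hypothetical Python layer that learns one scale per channel."""

    def setup(self, bottom, top):
        # add one parameter blob; its shape is up to the layer, caffe itself
        # imposes no dimensionality constraint
        self.blobs.add_blob(bottom[0].channels)
        self.blobs[0].data[...] = 1.0

    def reshape(self, bottom, top):
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        w = self.blobs[0].data.reshape(1, -1, 1, 1)
        top[0].data[...] = bottom[0].data * w

    def backward(self, top, propagate_down, bottom):
        # accumulate the gradient w.r.t. the learned blob; the solver reads
        # self.blobs[0].diff at the end of the iteration and does the update
        self.blobs[0].diff[...] += (top[0].diff * bottom[0].data).sum(axis=(0, 2, 3))
        if propagate_down[0]:
            bottom[0].diff[...] = top[0].diff * self.blobs[0].data.reshape(1, -1, 1, 1)

In the prototxt this would presumably be declared as a type: "Python" layer; training of the blob happens inside the normal solver loop, which updates any blob whose diff the layer fills in backward().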

caffe - network produce zero gradient and not learning

别等时光非礼了梦想. submitted on 2019-12-11 05:33:08
Question: I'm training CaffeNet with multilabel data. However, the loss is not decreasing during the training phase. I'm now trying to check whether backward() is working properly. I have this code to check if there is a gradient: import numpy as np import os.path as osp import matplotlib.pyplot as plt from pprint import pprint from copy import copy % matplotlib inline plt.rcParams['figure.figsize'] = (6, 6) caffe_root = '../' # this file is expected to be in {caffe_root}/examples sys.path.append(caffe
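
A small sketch of one way to actually probe the gradients once such a net is loaded in TRAIN mode; the net, the prototxt path, and the multilabel setup are assumptions based on the question:

import numpy as np
import caffe

caffe.set_mode_cpu()
# assumed path; the question's own train prototxt would go here
net = caffe.Net('train_val.prototxt', caffe.TRAIN)

net.forward()          # one pass to get a loss
net.backward()         # backpropagate the gradients

# parameter gradients: all-zero means backward() produced nothing to learn from
for name, blobs in net.params.items():
    print(name, ['%.3e' % np.abs(b.diff).mean() for b in blobs])

# activation gradients, layer by layer, to see where the signal dies
for name, blob in net.blobs.items():
    print(name, 'mean |diff| = %.3e' % np.abs(blob.diff).mean())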

Derivatives in some Deconvolution layers are mostly all zeroes

做~自己de王妃 submitted on 2019-12-11 05:28:27
Question: This is a really weird error, partly a follow-up to the previous question (Deconvolution layer FCN initialization - loss drops too fast). However I initialize the Deconv layers (bilinear or Gaussian), I get the same situation: 1) Weights are updated; I checked this for multiple iterations. The size of the deconvolution/upsample layers is the same: (2,2,8,8). First of all, net_mcn.layers[idx].blobs[0].diff returns matrices of floats; the last Deconv layer (upscore5) produces two arrays with the same
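
Not from the question itself, just a hedged way to quantify how sparse those Deconvolution diffs really are, assuming net_mcn is the loaded caffe.Net from the excerpt and a forward()/backward() pass has already been run:

import numpy as np

# walk the layers and report the fraction of non-zero weight gradients in
# each Deconvolution layer
for idx, name in enumerate(net_mcn._layer_names):
    layer = net_mcn.layers[idx]
    if layer.type == 'Deconvolution' and len(layer.blobs) > 0:
        d = layer.blobs[0].diff
        print(name, 'nonzero diff fraction: %.4f' % (np.count_nonzero(d) / float(d.size)))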

undefined symbol: _ZdlPvm

强颜欢笑 submitted on 2019-12-11 05:28:12
Question: I am using apollocaffe and Reinspect. Apollocaffe is a C++ library, and Reinspect is Python code that uses the apollocaffe library. I built apollocaffe using g++-4.8.5. When I run the command python -m pdb train.py --config config.json, I get this error: ImportError: '/home/xxx/Softwares/Libraries/apollocaffe_22_3_17/build_debug/lib/libcaffe.so: undefined symbol: _ZdlPvm' What could be wrong? Answer 1: Based on this (theano) thread, it looks like you're using incompatible gcc versions. Try to move to

HDF5Data Processing with Caffe's Transformer for training

有些话、适合烂在心里 submitted on 2019-12-11 05:14:54
Question: I am trying to load data into the network. Since I need a custom data input (3 tops: 1 for the data image, 2 for different labels), I load the data from HDF5 files. It looks similar to this: layer { name: "data" type: "HDF5Data" top: "img" top: "alabels" top: "blabels" include { phase: TRAIN } hdf5_data_param { source: "path_to_caffe/examples/hdf5_classification/data/train.txt" batch_size: 64 } } I want to preprocess the images using Caffe's own Transformer (for standardization); how can I do this when I
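
A hedged sketch of the usual workaround: since the HDF5Data layer does not apply transform_param itself, caffe.io.Transformer can be run while building the HDF5 file instead. The input shape, mean values, labels, and file names below are placeholders, not values from the question:

import caffe
import h5py
import numpy as np

shape = (1, 3, 224, 224)                        # assumed network input shape
transformer = caffe.io.Transformer({'img': shape})
transformer.set_transpose('img', (2, 0, 1))     # HxWxC -> CxHxW
transformer.set_channel_swap('img', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('img', 255.0)         # caffe.io.load_image returns [0, 1]
transformer.set_mean('img', np.array([104.0, 117.0, 123.0]))  # example BGR mean

paths = ['img0.png', 'img1.png']                # hypothetical image list
imgs = np.stack([transformer.preprocess('img', caffe.io.load_image(p)) for p in paths])

with h5py.File('train.h5', 'w') as f:           # dataset names must match the tops
    f.create_dataset('img', data=imgs.astype(np.float32))
    f.create_dataset('alabels', data=np.zeros(len(paths), dtype=np.float32))
    f.create_dataset('blabels', data=np.ones(len(paths), dtype=np.float32))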

caffe: model definition: write same layer with different phase using caffe.NetSpec()

爱⌒轻易说出口 submitted on 2019-12-11 04:38:08
Question: I want to set up a caffe CNN with Python, using the caffe.NetSpec() interface. Although I saw that we can put the test net in solver.prototxt, I would like to write it in model.prototxt with a different phase. For example, a caffe model prototxt can implement two data layers with different phases: layer { name: "data" type: "Data" top: "data" top: "label" include { phase: TRAIN } .... } layer { name: "data" type: "Data" top: "data" top: "label" include { phase: TEST } .... } How can I do this in Python to get such
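
A hedged sketch of a common NetSpec workaround: because a NetSpec attribute name can only be used once, the TRAIN-phase data layer is written into its own NetSpec and its prototxt text is concatenated with the NetSpec holding the TEST-phase data layer plus the rest of the model. The LMDB sources, batch size, and the tiny body of the net are placeholders:

import caffe
from caffe import layers as L, params as P

def lmdb_data(phase, source):
    # one phase-restricted Data layer; backend and batch_size are illustrative
    return L.Data(source=source, backend=P.Data.LMDB, batch_size=64, ntop=2,
                  include=dict(phase=phase))

train = caffe.NetSpec()
train.data, train.label = lmdb_data(caffe.TRAIN, 'train_lmdb')

test = caffe.NetSpec()
test.data, test.label = lmdb_data(caffe.TEST, 'test_lmdb')
# the shared body of the net hangs off the second NetSpec
test.ip1 = L.InnerProduct(test.data, num_output=2,
                          weight_filler=dict(type='xavier'))
test.loss = L.SoftmaxWithLoss(test.ip1, test.label)

# both fragments end up in one model.prototxt; each data layer keeps the
# same name ("data") but carries a different include { phase: ... } rule
with open('model.prototxt', 'w') as f:
    f.write(str(train.to_proto()) + str(test.to_proto()))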