pycaffe

Net surgery: How to reshape a convolution layer of a caffemodel file in caffe?

只愿长相守 · Submitted on 2019-12-06 02:56:09
I'm trying to reshape a convolution layer of a caffemodel (this is a follow-up to this question). Although there is a tutorial on net surgery, it only shows how to copy weight parameters from one caffemodel to another of the same size. Instead, I need to add a new all-zero channel to my convolution filter, so that its size changes from the current (64 x 3 x 3 x 3) to (64 x 4 x 3 x 3). Say the convolution layer is called 'conv1'. This is what I have tried so far: # Load the original network and extract the fully connected layers' parameters. net = caffe.Net('..
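A minimal sketch of the zero-channel expansion itself, with plain numpy standing in for the caffe blobs (the caffe-specific loading and saving steps, including the file names, are hypothetical and shown only as comments):

```python
import numpy as np

# In caffe you would load two nets and read the old weights, e.g.:
#   old_net = caffe.Net('old_deploy.prototxt', 'old.caffemodel', caffe.TEST)
#   new_net = caffe.Net('new_deploy.prototxt', caffe.TEST)  # conv1 declared with 4 input channels
#   old_w = old_net.params['conv1'][0].data                 # shape (64, 3, 3, 3)
# Here a random array stands in for the old conv1 weights.
old_w = np.random.randn(64, 3, 3, 3).astype(np.float32)

# Append one all-zero input channel along axis 1: (64, 3, 3, 3) -> (64, 4, 3, 3).
zeros = np.zeros((old_w.shape[0], 1) + old_w.shape[2:], dtype=old_w.dtype)
new_w = np.concatenate([old_w, zeros], axis=1)

assert new_w.shape == (64, 4, 3, 3)
# Back in caffe you would then write the result into the new net and save it:
#   new_net.params['conv1'][0].data[...] = new_w
#   new_net.save('new.caffemodel')
```

Because the added channel is all zeros, the expanded filter initially produces the same responses as the original one on the first three input channels.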

Multiple pretrained networks in Caffe

会有一股神秘感。 · Submitted on 2019-12-05 05:43:45
Is there a simple way (e.g. without modifying the caffe code) to load weights from multiple pretrained networks into one network? The target network contains layers with the same dimensions and names as both pretrained networks. I am trying to achieve this using NVIDIA DIGITS and Caffe. EDIT: I thought it wouldn't be possible to do this directly from DIGITS, as confirmed by the answers. Can anyone suggest a simple way to modify the DIGITS code so that multiple pretrained networks can be selected? I checked the code a bit, and thought the training script would be a good place to start, but I don't have in-depth
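In caffe itself this is usually possible without any code changes, because `copy_from` fills weights by layer name and can be called once per pretrained model. A sketch of that name-matching rule, with plain dicts of numpy arrays standing in for the parameter blobs (the caffe calls in the comments are the real API; everything else is illustrative):

```python
import numpy as np

# With caffe installed this is just:
#   net = caffe.Net('combined.prototxt', caffe.TEST)
#   net.copy_from('model_a.caffemodel')  # fills layers whose names match model A
#   net.copy_from('model_b.caffemodel')  # fills layers whose names match model B

def copy_from(target, pretrained):
    """Copy parameters into `target` for every layer name both nets share."""
    for name, weights in pretrained.items():
        if name in target:
            target[name] = weights.copy()

net = {'conv1': np.zeros(3), 'conv2': np.zeros(3), 'fc1': np.zeros(3)}
model_a = {'conv1': np.ones(3), 'conv2': np.full(3, 2.0)}
model_b = {'fc1': np.full(3, 5.0), 'unused': np.ones(3)}

copy_from(net, model_a)  # conv1 and conv2 filled from model A
copy_from(net, model_b)  # fc1 filled from model B; 'unused' is silently skipped
```

Layers present in a pretrained model but absent from the target net are simply ignored, which is what makes combining two models by name practical.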

deep learning - a number of naive questions about caffe

╄→гoц情女王★ · Submitted on 2019-12-04 21:14:35
I am trying to understand the basics of caffe, in particular how to use it with Python. My understanding is that the model definition (say, a given neural-net architecture) must be included in a '.prototxt' file, and that when you train the model on data using that '.prototxt', you save the weights/model parameters to a '.caffemodel' file. Also, there is a difference between the '.prototxt' file used for training (which includes learning rate and regularization parameters) and the one used for
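One clarification worth making explicit: in caffe the learning rate and regularization live in a separate solver prototxt, which references the training net definition and periodically snapshots the learned weights to a '.caffemodel'. A minimal sketch (all file names hypothetical):

```protobuf
# solver.prototxt -- training hyperparameters live here, not in the net definition
net: "train_val.prototxt"     # architecture plus data and loss layers, used for training
base_lr: 0.01                 # learning rate
weight_decay: 0.0005          # regularization
snapshot_prefix: "mymodel"    # weights saved as mymodel_iter_N.caffemodel
```

The deploy prototxt then repeats the same architecture but replaces the data layer with an input shape and drops the loss layers, so the saved '.caffemodel' weights can be applied at inference time.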

Extracting weights from .caffemodel without caffe installed in Python

落爺英雄遲暮 · Submitted on 2019-12-03 20:44:16
Is there a relatively simple way to extract weights in Python from one of the many pretrained models in the Caffe Model Zoo WITHOUT CAFFE (nor pyCaffe)? That is, parsing a .caffemodel to hdf5/numpy or whatever format can be read by Python? All the answers I found use C++ code with caffe classes, or pycaffe. I have looked at pycaffe's code, and it looks like you really need caffe to make sense of the binary; is that the only solution? I had to resolve that exact issue just now. Assuming you have a .caffemodel (binary proto format), it turns out to be quite simple. Download the latest caffe.proto. Compile into
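A sketch of the caffe-free route described above. The protoc steps and the `caffe_pb2` usage are shown as comments (they require the generated module); the last step, turning a BlobProto's flat data plus its shape into a numpy array, is demonstrated with a hypothetical stand-in blob:

```python
import numpy as np
from collections import namedtuple

# Steps, assuming only protobuf is installed (no caffe):
#   1) download caffe.proto from the caffe repository
#   2) protoc --python_out=. caffe.proto        -> generates caffe_pb2.py
#   3) parse the binary model:
#        import caffe_pb2
#        net = caffe_pb2.NetParameter()
#        net.ParseFromString(open('model.caffemodel', 'rb').read())
# Each layer then carries BlobProto messages: a flat repeated-float `data`
# field plus a `shape.dim` list. That final conversion looks like this.

Blob = namedtuple('Blob', ['data', 'dims'])  # stand-in for BlobProto

def blob_to_array(blob):
    """Turn a flat repeated-float field plus its shape into a numpy array."""
    return np.asarray(blob.data, dtype=np.float32).reshape(blob.dims)

blob = Blob(data=list(range(24)), dims=(2, 3, 4))  # toy 2x3x4 "weight blob"
w = blob_to_array(blob)
```

From there the arrays can be dumped to .npz or hdf5 with ordinary Python tools, with no caffe dependency at any point.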

How to reconstruct a caffe net using pycaffe

雨燕双飞 · Submitted on 2019-12-03 16:09:49
What I want is this: after loading a net, I will decompose certain layers and save the new net. For example: Original net: data -> conv1 -> conv2 -> fc1 -> fc2 -> softmax; New net: data -> conv1_1 -> conv1_2 -> conv2_1 -> conv2_2 -> fc1 -> fc2 -> softmax. During this process I am stuck on the following: 1. How to create a new layer with specified layer parameters in pycaffe? 2. How to copy the layer parameters from existing layers (such as fc1 and fc2 above)? I know that by using caffe::net_spec we can define a new net manually, but caffe::net_spec cannot specify a layer from a
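A hedged sketch of what the decomposed prototxt might look like (the layer parameters here are hypothetical). The key design choice is that the surviving layers keep their original names (fc1, fc2), so that calling net.copy_from() with the old caffemodel still fills them by name, and only the new conv1_1/conv1_2-style layers need their weights set by hand through net.params:

```protobuf
layer { name: "conv1_1" type: "Convolution" bottom: "data"    top: "conv1_1"
        convolution_param { num_output: 64 kernel_size: 3 } }
layer { name: "conv1_2" type: "Convolution" bottom: "conv1_1" top: "conv1_2"
        convolution_param { num_output: 64 kernel_size: 3 } }
# ... conv2_1 and conv2_2 analogously ...
layer { name: "fc1" type: "InnerProduct" bottom: "conv2_2" top: "fc1"
        inner_product_param { num_output: 4096 } }   # same name as the old fc1
```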

Caffe CNN: diversity of filters within a conv layer [closed]

谁都会走 · Submitted on 2019-12-03 00:42:13
Closed. This question needs to be more focused and is not currently accepting answers. I have the following theoretical questions regarding the conv layer in a CNN. Imagine a conv layer with 6 filters (conv1 layer and its 6 filters in the figure). 1) What guarantees the diversity of the learned filters within a conv layer? I mean, how does the learning (optimization) process

Windows 7 + Visual Studio 2013 + CUDA 7.5: building caffe and configuring matcaffe and pycaffe

风流意气都作罢 · Submitted on 2019-12-02 15:04:52
With a friend's guidance, I finally managed to build caffe on Windows 7; this post records the build process. Preparing the installation files: 1. the Visual Studio 2013 installer; 2. CUDA 7.5 (optional); 3. the Windows version of caffe; 4. cuDNN (optional); 5. the Anaconda installer (optional); 6. the Matlab installer (optional). Install Visual Studio 2013; install CUDA 7.5 (optional); install Python via Anaconda (optional); install Matlab (optional). Modify the configuration files: 1. extract the downloaded caffe-windows archive; 2. enter the windows folder; 3. copy and rename the configuration file; 4. edit the configuration file and the project property files (4.1 notes on the configuration file; 4.2 caffe without CUDA; 4.3 caffe with CUDA). Build caffe: 1. open the solution named Caffe; 2. build the libcaffe project; 3. build the caffe project; 4. build pycaffe; 5. build matcaffe; 6. build the remaining projects. Run the first caffe test program. Configure Python (optional). Configure Matlab (optional). 1. Preparing the installation files. 1.1 Downloading the Visual Studio 2013 installer: go to the Visual Studio download page and select Visual Studio 2013–

Caffe CNN: diversity of filters within a conv layer [closed]

ⅰ亾dé卋堺 · Submitted on 2019-12-02 14:26:11
I have the following theoretical questions regarding the conv layer in a CNN. Imagine a conv layer with 6 filters (conv1 layer and its 6 filters in the figure). 1) What guarantees the diversity of the learned filters within a conv layer? I mean, how does the learning (optimization) process make sure that it does not learn the same (or similar) filters? 2) Is diversity of filters within a conv layer a good thing or not? Is there any research on this? 3) During learning (optimization), is there any interaction between the filters of the same layer? If yes, how? 1. Assuming you are training
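The usual answer to (1) is random initialization: it breaks the symmetry between filters. If two filters in the same layer start identical, they receive identical gradients and can never separate; if they start different, the optimization keeps them distinct. A toy numpy sketch (not caffe, just a linear layer whose output is the sum of filter responses under squared loss) illustrates exactly this symmetry argument:

```python
import numpy as np

def sgd_step(filters, x, y, lr=0.02):
    """One gradient step on a toy layer whose output is the sum of filter responses."""
    pred = sum(float(f @ x) for f in filters)
    grad = 2.0 * (pred - y)                       # dLoss/dpred for squared loss
    return [f - lr * grad * x for f in filters]   # same gradient for every filter

x, y = np.array([1.0, 2.0]), 3.0

# Identical init: both filters get identical updates and stay equal forever.
same = [np.zeros(2), np.zeros(2)]
for _ in range(20):
    same = sgd_step(same, x, y)
assert np.allclose(same[0], same[1])

# Random init: the initial difference is preserved, so the filters stay distinct.
rng = np.random.default_rng(0)
diff = [rng.normal(size=2), rng.normal(size=2)]
for _ in range(20):
    diff = sgd_step(diff, x, y)
assert not np.allclose(diff[0], diff[1])
```

In a real CNN the nonlinearities make differently initialized filters receive genuinely different gradients, so they diverge further rather than merely keeping their initial offset; the toy model only shows why identical initialization can never produce diversity.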

I “import caffe” from IPython, but I get a “RuntimeWarning”. How can I resolve it?

生来就可爱ヽ(ⅴ<●) · Submitted on 2019-12-02 04:02:04
I have read the article "Ubuntu Installation -- Guide for Ubuntu 14.04 with a 64 bit processor" on GitHub (https://github.com/tiangolo/caffe/blob/ubuntu-tutorial-b/docs/install_apt2.md). Now I open IPython to test that PyCaffe is working. I enter the "ipython" command to reach the IPython prompt. Then I enter "import caffe", but I get the following warning: /root/code/caffe-master/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared_ptr >
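These "to-Python converter ... already registered" messages are warnings, not errors: they mean a boost converter was registered twice, and the import itself still succeeds. Assuming that is the case here, one common workaround (it hides the message rather than fixing the duplicate registration) is to filter just that warning before importing caffe:

```python
import warnings

# Silence only the duplicate boost converter warning; all other
# RuntimeWarnings still surface normally.
warnings.filterwarnings(
    "ignore",
    category=RuntimeWarning,
    message=".*to-Python converter.*already registered.*",
)
# import caffe  # would now import without printing those warnings
```

If the import fails outright rather than merely warning, the problem is elsewhere (typically a stale build or a wrong PYTHONPATH), and suppressing warnings will not help.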

How to modify batch normalization layers (DeconvNet) to be able to run with caffe?

人盡茶涼 · Submitted on 2019-12-02 03:49:39
I wanted to run the DeconvNet on my data, but it seems it was written for another version of caffe. Does anyone know how to change batch_params? The layer in DeconvNet:

layers {
  bottom: 'conv1_1'
  top: 'conv1_1'
  name: 'bn1_1'
  type: BN
  bn_param {
    scale_filler { type: 'constant' value: 1 }
    shift_filler { type: 'constant' value: 0.001 }
    bn_mode: INFERENCE
  }
}

And the one that caffe provides in the cifar10 example:

layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "pool1"
  top: "bn1"
  batch_norm_param { use_global_stats: true }
  param { lr_mult: 0 }
  param { lr_mult: 0 }
  param { lr_mult: 0 }
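A hedged sketch of how the old fork's BN layer is commonly mapped onto mainline caffe (the layer and blob names here follow the DeconvNet snippet; verify against your net). Mainline caffe splits the operation in two: BatchNorm does only the normalization (use_global_stats: true playing the role of bn_mode: INFERENCE), and a following Scale layer with bias_term: true supplies the learned scale and shift that the old scale_filler/shift_filler initialized:

```protobuf
layer {
  name: "bn1_1"  type: "BatchNorm"
  bottom: "conv1_1"  top: "conv1_1"
  batch_norm_param { use_global_stats: true }
  param { lr_mult: 0 } param { lr_mult: 0 } param { lr_mult: 0 }
}
layer {
  name: "scale1_1"  type: "Scale"
  bottom: "conv1_1"  top: "conv1_1"
  scale_param {
    bias_term: true                                # enables the shift (beta) term
    filler { type: "constant" value: 1 }           # replaces the old scale_filler
    bias_filler { type: "constant" value: 0.001 }  # replaces the old shift_filler
  }
}
```

Note that any pretrained BN parameters from the old model would also have to be split across the BatchNorm and Scale blobs; the prototxt rewrite alone only makes the architecture load.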