caffe

LMDB files and how they are used for caffe deep learning network

丶灬走出姿态 Posted on 2020-01-01 06:07:47
Question: I am quite new to deep learning and I am having some problems using the caffe deep learning network. Basically, I didn't find any documentation explaining how I can solve a series of questions and problems I am dealing with right now. Please, let me explain my situation first. I have thousands of images and I must do a series of pre-processing operations on them. For each pre-processing operation, I have to save these pre-processed images as 4D matrices and also store a vector with the images
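
A minimal Python sketch of one common way to store such pre-processed images in an LMDB that a caffe "Data" layer can read. This is not from the question: the file name, array shapes and label values below are placeholders.

import lmdb
import numpy as np
import caffe

# Pretend 4D batch of pre-processed images: 10 images, 3 channels, 64x64 pixels.
images = np.random.randint(0, 256, (10, 3, 64, 64)).astype(np.uint8)
labels = np.random.randint(0, 2, 10)

# map_size is the maximum database size; it must exceed the total data size.
env = lmdb.open('train_lmdb', map_size=int(1e9))
with env.begin(write=True) as txn:
    for i in range(images.shape[0]):
        # array_to_datum packs one (C, H, W) array and its label into a caffe Datum.
        datum = caffe.io.array_to_datum(images[i], int(labels[i]))
        # Keys must be unique; caffe reads them back in lexicographic order.
        txn.put('{:08d}'.format(i).encode('ascii'), datum.SerializeToString())
env.close()

A "Data" layer with backend LMDB and source 'train_lmdb' can then stream these records during training.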

Caffe snapshots: .solverstate vs .caffemodel

久未见 Posted on 2020-01-01 04:48:06
Question: When training a network, the snapshots taken every N iterations come in two forms together. One is the .solverstate file, which I presume is exactly what it sounds like, storing the state of the loss functions and gradients, etc. The other is the .caffemodel file, which I know stores the trained parameters. The .caffemodel is the file you need if you want a pre-trained model, so I imagine it's also the file you want if you are going to test your network. What is the .solverstate good for? In
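
As a hedged illustration (the file names are placeholders), resuming training with pycaffe looks roughly like this: the .caffemodel alone only brings back the learned weights, while the .solverstate also brings back the solver's bookkeeping so training continues where it stopped.

import caffe

solver = caffe.get_solver('solver.prototxt')

# Weights only: fine for fine-tuning or testing, but training restarts at iteration 0.
# solver.net.copy_from('snapshot_iter_10000.caffemodel')

# Full solver state: restores the iteration count, the position in the learning-rate
# schedule and the solver history (e.g. momentum buffers), so training resumes seamlessly.
solver.restore('snapshot_iter_10000.solverstate')
solver.solve()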

How can I speed up deep learning on a non-NVIDIA setup?

余生长醉 Posted on 2020-01-01 02:51:46
Question: Since I only have an AMD A10-7850 APU and do not have the funds to spend on an $800-$1200 NVIDIA graphics card, I am trying to make do with the resources I have in order to speed up deep learning via tensorflow/keras. Initially, I used a pre-compiled version of Tensorflow. InceptionV3 would take about 1000-1200 seconds to compute 1 epoch. It has been painfully slow. To speed up calculations, I first self-compiled Tensorflow with optimizations (using AVX and SSE4 instructions). This led to a

Python real time image classification problems with Neural Networks

吃可爱长大的小学妹 Posted on 2019-12-30 01:36:14
Question: I'm attempting to use caffe and python to do real-time image classification. I'm using OpenCV to stream from my webcam in one process, and in a separate process, using caffe to perform image classification on the frames pulled from the webcam. Then I'm passing the result of the classification back to the main thread to caption the webcam stream. The problem is that even though I have an NVIDIA GPU and am performing the caffe predictions on the GPU, the main thread gets slowed down. Normally
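
A rough sketch of the two-process layout described above (not the asker's code): the prototxt/weight paths, the "data"/"prob" blob names and the 227x227 input size are placeholders, and the preprocessing is deliberately minimal.

import multiprocessing as mp
import cv2

def classify_worker(frame_q, result_q):
    # Import and initialise caffe inside the worker so the GPU context
    # belongs to this process only.
    import caffe
    caffe.set_mode_gpu()
    caffe.set_device(0)
    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
    while True:
        frame = frame_q.get()
        if frame is None:
            break
        # Resize to the network input size and reorder HxWxC -> 1xCxHxW.
        blob = cv2.resize(frame, (227, 227)).astype('float32').transpose(2, 0, 1)[None, ...]
        net.blobs['data'].reshape(*blob.shape)
        net.blobs['data'].data[...] = blob
        out = net.forward()
        result_q.put(out['prob'][0].argmax())

if __name__ == '__main__':
    frame_q = mp.Queue(maxsize=1)
    result_q = mp.Queue()
    worker = mp.Process(target=classify_worker, args=(frame_q, result_q))
    worker.daemon = True
    worker.start()
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_q.empty():          # drop frames rather than letting them queue up
            frame_q.put(frame)
        while not result_q.empty():  # caption with the latest available prediction
            print('predicted class id:', result_q.get())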

What is “Parameter” layer in caffe?

若如初见. Posted on 2019-12-29 09:05:34
Question: Recently I came across the "Parameter" layer in caffe. It seems like this layer exposes its internal parameter blob to "top". What is this layer used for? Can you give a usage example? Answer 1: This layer was introduced in pull request #2079, with the following description: "This layer simply holds a parameter blob of user-defined shape, and shares it as its single top." That is exactly what you expected. It was introduced in the context of issue #1474, which basically proposes to treat
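
For example, a hedged NetSpec sketch (not from the answer): the 1x16 shape, the Input layer and the Eltwise consumer are invented, and the parameter_param field name is taken from that pull request, so check it against your caffe.proto.

import caffe
from caffe import layers as L

n = caffe.NetSpec()
# A learnable 1x16 blob exposed as this layer's single top.
n.weights = L.Parameter(parameter_param=dict(shape=dict(dim=[1, 16])))
n.data = L.Input(input_param=dict(shape=dict(dim=[1, 16])))
# The learned blob can then be combined with ordinary data, e.g. added element-wise.
n.summed = L.Eltwise(n.data, n.weights)
print(n.to_proto())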

Fully Convolutional Network Training Image Size

喜你入骨 Posted on 2019-12-29 06:59:46
Question: I'm trying to replicate the results of Fully Convolutional Networks (FCN) for Semantic Segmentation using TensorFlow. I'm stuck on feeding training images into the computation graph. The fully convolutional network used the PASCAL VOC dataset for training. However, the training images in the dataset are of varied sizes. I just want to ask whether they preprocessed the training images to make them have the same size, and how they preprocessed the images. If not, did they just feed batches of images of
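
For what it's worth, the reference caffe implementation of FCN trains with batch size 1 and leaves each image at its native resolution. A rough pycaffe sketch of that idea (the paths, blob name and image size are placeholders):

import numpy as np
import caffe

net = caffe.Net('fcn_deploy.prototxt', 'fcn.caffemodel', caffe.TEST)

# Pretend (C, H, W) image; a real pipeline would subtract the mean, swap channels, etc.
image = np.random.rand(3, 375, 500).astype(np.float32)

# A fully convolutional net accepts any spatial size, so instead of resizing or
# padding, reshape the input blob to this particular image and use batch size 1.
net.blobs['data'].reshape(1, *image.shape)
net.blobs['data'].data[...] = image
output = net.forward()  # per-pixel class scores at (roughly) the input resolution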

How should “BatchNorm” layer be used in caffe?

人盡茶涼 Posted on 2019-12-29 06:44:08
Question: I am a little confused about how I should use/insert the "BatchNorm" layer in my models. I see several different approaches, for instance:

ResNets: "BatchNorm" + "Scale" (no parameter sharing)

The "BatchNorm" layer is followed immediately by a "Scale" layer:

layer {
  bottom: "res2a_branch1"
  top: "res2a_branch1"
  name: "bn2a_branch1"
  type: "BatchNorm"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  bottom: "res2a_branch1"
  top: "res2a_branch1"
  name: "scale2a_branch1"
  type: "Scale"
  scale_param {
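
As an aside, the same "BatchNorm" + "Scale" pattern can be generated from Python with pycaffe's NetSpec. A rough sketch, not from the question (the Input shape and the use_global_stats value are placeholders); the Scale layer's bias_term: true supplies the learnable shift that caffe's BatchNorm itself does not provide:

import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(input_param=dict(shape=dict(dim=[1, 64, 56, 56])))
# BatchNorm only normalizes; Scale with bias_term adds the learnable scale and shift.
n.bn = L.BatchNorm(n.data, batch_norm_param=dict(use_global_stats=False), in_place=True)
n.scaled = L.Scale(n.bn, scale_param=dict(bias_term=True), in_place=True)
n.relu = L.ReLU(n.scaled, in_place=True)
print(n.to_proto())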

How to Create CaffeDB training data for siamese networks out of image directory

牧云@^-^@ Posted on 2019-12-28 04:04:28
Question: I need some help creating a CaffeDB for a siamese CNN out of a plain directory of images and a label text file. A Python way to do it would be best. The problem is not walking through the directory and making pairs of images; my problem is making a CaffeDB out of those pairs. So far I have only used convert_imageset to create a CaffeDB out of an image directory. Thanks for the help! Answer 1: Why don't you simply make two datasets using good old convert_imageset?

layer {
  name: "data_a"
  top: "data_a"
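
A hedged NetSpec sketch of the two-Data-layer setup that snippet starts to show (the LMDB paths and batch size are placeholders; the two databases must store the paired images, and the shared similarity label, in the same order, since the layers read them in lockstep):

import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
n.data_a, n.sim = L.Data(source='pairs_a_lmdb', backend=P.Data.LMDB,
                         batch_size=32, ntop=2)
n.data_b, n.sim_b = L.Data(source='pairs_b_lmdb', backend=P.Data.LMDB,
                           batch_size=32, ntop=2)
# n.sim_b duplicates the pair label stored in both databases; it can be ignored
# or fed to a Silence layer so it is not reported as a network output.
print(n.to_proto())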