caffe

Can Caffe or Caffe2 be given input data directly from gpu?

假装没事ソ submitted on 2020-01-30 08:30:08
Question: I've read the Caffe2 tutorials and tried the pre-trained models. I know Caffe2 will leverage the GPU to run the model/net, but the input data always seems to be given from CPU (i.e. host) memory. For example, in Loading Pre-Trained Models, after the model is loaded we can predict an image with result = p.run([img]). However, the image "img" has to be read in CPU scope. What I am looking for is a framework that can pipeline images (decoded from a video and still residing in GPU memory) directly to the prediction
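For reference, a minimal sketch of the CPU-side flow described above, based on the Caffe2 workspace.Predictor API used in the Loading Pre-Trained Models tutorial; the file paths and the dummy input are placeholders, not the tutorial's exact values:

import numpy as np
from caffe2.python import workspace

# Load the serialized nets of a pre-trained model (paths are placeholders)
with open("init_net.pb", "rb") as f:
    init_net = f.read()
with open("predict_net.pb", "rb") as f:
    predict_net = f.read()

p = workspace.Predictor(init_net, predict_net)

# img must already be a host-side NCHW float32 array; this host round trip is
# exactly what the question wants to avoid for frames decoded on the GPU.
img = np.random.rand(1, 3, 227, 227).astype(np.float32)  # stand-in for a real preprocessed image
results = p.run([img])
print(results[0].shape)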

How to programmatically generate deploy.txt for caffe in python

谁都会走 submitted on 2020-01-28 09:37:04
Question: I have written Python code to programmatically generate the training and validation .prototxt files for a convolutional neural network (CNN) in Caffe. Below is my function:

def custom_net(lmdb, batch_size):
    # define your own net!
    n = caffe.NetSpec()
    # keep this data layer for all networks
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                             ntop=2, transform_param=dict(scale=1. / 255))
    n.conv1 = L.Convolution(n.data, kernel_size=6, num_output=48, weight_filler=dict
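Since the title asks about deploy.prototxt specifically, one common approach, sketched below with the layer names, filler and input shape as assumptions, is to build the same NetSpec but replace the LMDB Data layer with an Input layer and leave out the label/loss layers:

import caffe
from caffe import layers as L

def custom_deploy(input_dim=(1, 3, 227, 227)):
    # Same topology as custom_net, but data comes from an Input layer
    # and there is no label or loss layer.
    n = caffe.NetSpec()
    n.data = L.Input(shape=dict(dim=list(input_dim)))
    n.conv1 = L.Convolution(n.data, kernel_size=6, num_output=48,
                            weight_filler=dict(type='xavier'))
    # ... repeat the remaining layers of custom_net here ...
    n.prob = L.Softmax(n.conv1)  # placeholder head; use your network's real top layer
    return n.to_proto()

with open('deploy.prototxt', 'w') as f:
    f.write(str(custom_deploy()))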

Caffe: copy pretrained weights to train a network that supports larger input

大憨熊 submitted on 2020-01-24 00:46:08
Question: How can I use a pretrained model (e.g. AlexNet) to train a CNN that supports larger (e.g. 500x500) input image sizes? In other words, how can I copy only the convolutional filters to the new network (using the Matlab wrapper)? Source: https://stackoverflow.com/questions/38244793/caffe-copy-pretrained-weights-to-train-a-network-which-supports-for-larger-input
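Caffe copies weights between networks by matching layer names, so the usual trick, sketched here with pycaffe rather than the Matlab wrapper the question mentions, and with placeholder prototxt/weight paths, is to write a new network definition with the larger input, keep the convolutional layer names identical to AlexNet, and rename or drop the fully connected layers so they are re-initialized:

import caffe

# New definition: larger input size, conv layer names identical to AlexNet,
# fully connected layers renamed (e.g. fc6 -> fc6_new) so they start fresh.
net = caffe.Net('alexnet_500x500_train_val.prototxt',   # placeholder path
                'bvlc_alexnet.caffemodel',              # pretrained weights
                caffe.TRAIN)
# Layers whose names match receive the pretrained filters; renamed layers do not.
net.save('alexnet_500x500_init.caffemodel')

Equivalently, the same name-matching happens when training from the command line with caffe train --solver=... --weights=bvlc_alexnet.caffemodel.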

How to convert .npy file into .binaryproto?

纵饮孤独 submitted on 2020-01-21 09:56:04
Question: I have created a mean image file using Python and saved it as a numpy .npy file. I would like to know how I can convert this .npy file into a .binaryproto file. I am using this file to train GoogLeNet.

Answer 1: You can simply use numpy together with the caffe io functions to create the .binaryproto:

import caffe
# avg_img is your numpy array with the average data
blob = caffe.io.array_to_blobproto(avg_img)
with open('mean.binaryproto', 'wb') as f:
    f.write(blob.SerializeToString())

Answer 2: Here's an
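An end-to-end sketch of the same conversion, assuming the mean was saved as mean.npy with shape (channels, height, width); the added leading singleton axis is an assumption to match the 4-D (N, C, H, W) blob layout Caffe expects for mean files:

import numpy as np
import caffe

avg_img = np.load('mean.npy')                       # (C, H, W) mean image, placeholder path
blob = caffe.io.array_to_blobproto(avg_img.reshape((1,) + avg_img.shape))
with open('mean.binaryproto', 'wb') as f:
    f.write(blob.SerializeToString())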

How to set up the Caffe imagenet_solver.prototxt file for fewer JPEGs; program exited after iteration 0

|▌冷眼眸甩不掉的悲伤 submitted on 2020-01-16 18:19:28
Question: We need help understanding the parameters to use for a smaller training set (6000 JPEGs) and validation set (170 JPEGs). Our run was killed and exited after the test score 0/1 line in Iteration 0. We are trying to run the ImageNet example from the Caffe website tutorial at http://caffe.berkeleyvision.org/gathered/examples/imagenet.html. Instead of using the full set of ILSVRC2 images in the package, we use our own training set of 6000 JPEGs and validation set of 170 JPEG images. They are each 256 x 256 JPEG
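The solver fields most affected by a smaller dataset are test_iter, test_interval and max_iter: test_iter times the validation batch size should cover the validation set, and max_iter should be scaled to the number of training images. Below is a sketch of how such a solver could be generated in Python via caffe.proto; every concrete value is an assumption for 6000 train / 170 val images, not the tutorial's setting:

from caffe.proto import caffe_pb2

num_train, num_val = 6000, 170
train_batch, val_batch = 32, 17          # assumed batch sizes in the train/val prototxt

s = caffe_pb2.SolverParameter()
s.net = 'models/mini_imagenet/train_val.prototxt'    # placeholder path
s.test_iter.append(num_val // val_batch)             # 10 iters x 17 images covers the val set
s.test_interval = num_train // train_batch           # test roughly once per epoch
s.max_iter = 30 * (num_train // train_batch)         # ~30 epochs instead of the ImageNet default
s.base_lr = 0.01
s.lr_policy = 'step'
s.gamma = 0.1
s.stepsize = 10 * (num_train // train_batch)
s.momentum = 0.9
s.weight_decay = 0.0005
s.display = 20
s.snapshot = 1000
s.snapshot_prefix = 'models/mini_imagenet/caffenet_train'
s.solver_mode = caffe_pb2.SolverParameter.GPU

with open('solver.prototxt', 'w') as f:
    f.write(str(s))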

Test net output #0: accuracy = 1 - always - Caffe

試著忘記壹切 submitted on 2020-01-16 08:34:12
Question: I always get the same accuracy, and when I run classification it always predicts the same one label. I went through many articles and everyone recommends shuffling the data. I did that using random.shuffle and also tried the convert_imageset script, but it did not help. Please find my solver.prototxt and caffenet_train.prototxt below. I have 1000 images in my dataset: 833 images in train_lmdb and the rest in validation_lmdb. Training logs: I1112 22:41:26.373661 10633 solver.cpp:347] Iteration
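For reference, a minimal sketch of shuffling an image/label listing before building the LMDB (the listing file names are placeholders); convert_imageset can also do this itself via its --shuffle flag:

import random

# train.txt lines look like: "path/to/image.jpg 3" (placeholder file name)
with open('train.txt') as f:
    lines = f.readlines()

random.seed(0)            # reproducible shuffle
random.shuffle(lines)

with open('train_shuffled.txt', 'w') as f:
    f.writelines(lines)
# Then build the LMDB from train_shuffled.txt, or pass --shuffle to convert_imageset.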

Creating large LMDBs for Caffe with numpy arrays

邮差的信 submitted on 2020-01-16 05:21:04
Question: I have two 60 x 80921 matrices, one filled with data and one with reference values. I would like to store the values as key/value pairs in two different LMDBs, one for training (I'll slice around the 60000 column mark) and one for testing. Here is my idea; does it work?

X_train = X[:, :60000]
Y_train = Y[:, :60000]
X_test = X[:, 60000:]
Y_test = Y[:, 60000:]
X_train = X_train.astype(int)
X_test = X_test.astype(int)
Y_train = Y_train.astype(int)
Y_test = Y_test.astype(int)
map_size = X_train.nbytes * 10
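A sketch of how each column could be written into an LMDB using caffe's Datum helper; the reshape to (channels, height, width) = (60, 1, 1) and the database names are assumptions, and since the reference here is itself 60-dimensional, data and reference each get their own LMDB per split:

import lmdb
import numpy as np
import caffe

def write_matrix_to_lmdb(db_name, M):
    """Write each column of M (shape 60 x N) as one Datum into an LMDB."""
    map_size = M.nbytes * 10
    env = lmdb.open(db_name, map_size=map_size)
    with env.begin(write=True) as txn:
        for i in range(M.shape[1]):
            sample = M[:, i].reshape(60, 1, 1)       # array_to_datum expects a 3-D (C, H, W) array
            datum = caffe.io.array_to_datum(sample.astype(float))  # non-uint8 data is stored as float_data
            txn.put('{:08}'.format(i).encode('ascii'), datum.SerializeToString())
    env.close()

# Usage sketch (placeholder names):
# write_matrix_to_lmdb('X_train_lmdb', X_train)
# write_matrix_to_lmdb('Y_train_lmdb', Y_train)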

What is the appropriate input image size for a Faster RCNN Caffe model?

爱⌒轻易说出口 submitted on 2020-01-15 09:44:07
Question: I am trying to train Faster RCNN with Caffe on a custom dataset. I understand that the Faster RCNN Caffe model is built assuming an input image size of 600*1000. I have many images of size 300*400 in my custom dataset. Do I need to zero-pad the images up to 600*1000 or upscale them? If neither, what modification should be made to the images before giving them as input to the network? Please suggest. Thank you. Answer 1: Faster RCNN was trained on pascal VOC images with image
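For context, py-faster-rcnn does not feed a fixed 600*1000 crop: its data layer rescales each image so the shorter side becomes 600 pixels while capping the longer side at 1000. A sketch of that scaling rule is below (using OpenCV; the 600/1000 targets are the library's defaults, the function name is illustrative):

import cv2

def rescale_like_faster_rcnn(img, target_size=600, max_size=1000):
    """Scale img so its shorter side is target_size, without the longer side exceeding max_size."""
    h, w = img.shape[:2]
    scale = float(target_size) / min(h, w)
    if round(scale * max(h, w)) > max_size:          # cap the longer side
        scale = float(max_size) / max(h, w)
    resized = cv2.resize(img, None, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    return resized, scale

# A 300*400 image is simply upscaled by 2x to 600*800 under this rule.
# resized, scale = rescale_like_faster_rcnn(cv2.imread('image.jpg'))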