chainer

Stacked Autoencoder

时光毁灭记忆、已成空白 submitted on 2019-12-13 04:07:30
Question: I have a basic autoencoder structure that I want to change into a stacked autoencoder. From what I know, a stacked AE differs in two ways: it is made up of layers of sparse vanilla AEs, and it does layer-wise training. Is sparsity a necessity for stacked AEs, or will just increasing the number of hidden layers in a vanilla AE structure make it a stacked AE?

```python
class Autoencoder(Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            # encoder part
            self.l1 = L.Linear(1308608, 500)
```

InvalidType: Invalid operation is performed

风流意气都作罢 submitted on 2019-12-13 03:58:33
Question: I am trying to write a stacked autoencoder. Since this is a stacked autoencoder, we need to train the first autoencoder and pass its weights to the second one, so during training we need to define train_data_for_next_layer. Here I get the error:

```
InvalidType: Invalid operation is performed in: LinearFunction (Forward)

Expect: x.shape[1] == W.shape[1]
Actual: 784 != 250
```

The last line causes the problem. Is this due to an incorrect model layer definition? I want to know what the issue is.
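The error itself is just Chainer's type check on L.Linear: the weight W is stored as (out_size, in_size), and a forward pass requires x.shape[1] == W.shape[1]. The check can be reproduced with plain NumPy (784 and 250 as in the error message; the helper function is hypothetical, not Chainer's):

```python
import numpy as np

def linear_forward(x, W):
    """Mimics Chainer's LinearFunction shape check; W is (out_size, in_size)."""
    if x.shape[1] != W.shape[1]:
        raise TypeError(
            "Expect: x.shape[1] == W.shape[1]  Actual: %d != %d"
            % (x.shape[1], W.shape[1]))
    return x.dot(W.T)

x = np.zeros((32, 784), dtype=np.float32)    # raw input batch, 784 features
W1 = np.zeros((250, 784), dtype=np.float32)  # first layer: 784 -> 250
h = linear_forward(x, W1)                    # OK: h.shape == (32, 250)

W2 = np.zeros((100, 250), dtype=np.float32)  # second layer must take 250 inputs
h2 = linear_forward(h, W2)                   # OK: h2.shape == (32, 100)

# Feeding the raw 784-dim input to the second layer reproduces the error:
try:
    linear_forward(x, W2)
except TypeError as e:
    print(e)  # Expect: x.shape[1] == W.shape[1]  Actual: 784 != 250
```

In the stacked setting this typically means train_data_for_next_layer must be the first encoder's 250-dim output, not the original 784-dim input.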

Does slicing or indexing a chainer.Variable to get an item have backward ability?

流过昼夜 submitted on 2019-12-12 04:37:50
Question: Does a chainer.Variable in the following code still hold the graph and support backward (gradient flow) after slicing (a[0, 1]) or indexing (a[0])?

```python
>>> a = chainer.Variable(np.array([[1, 2, 3], [10, 11, 12]]))
>>> a
variable([[ 1,  2,  3],
          [10, 11, 12]])
>>> a[0]
variable([1, 2, 3])
>>> a[0, 1]
variable([1])
```

Answer 1: Yes. Indexing of chainer.Variable supports backprop.

Source: https://stackoverflow.com/questions/45931252/does-slice-or-index-of-chainer-variable-to-get-item-in-chainer-has-backward-abil

Why does the vgg.prepare() method create 9 copies of the given image?

浪尽此生 submitted on 2019-12-11 04:24:45
Question: I get this result when I apply vgg.prepare() to the following image. I use this line of code:

```python
Image.fromarray(np.uint8(vgg.prepare(pep).reshape(224, 224, 3)))
```

and get an image that is a combination of 9 copies of the given image.

Answer 1: I finally got what you did... the only mistake is .reshape. Because the image is transposed, not reshaped, you have to re-transpose to restore the original image:

```python
pep = pep.transpose((1, 2, 0))     # transpose
pep += [103.939, 116.779, 123.68]  # un-normalize
pep = pep
```
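The truncated answer can be sketched end-to-end with NumPy. vgg.prepare() transposes HWC to CHW and subtracts the BGR channel means, so undoing it means re-transposing and adding the means back; the random array below is a stand-in for an actual prepared image:

```python
import numpy as np

MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

# stand-in for the output of vgg.prepare(): CHW layout, mean-subtracted
prepared = (np.random.rand(3, 224, 224).astype(np.float32) - 0.5) * 50.0

pep = prepared.transpose((1, 2, 0))      # CHW -> HWC: re-transpose, don't reshape
pep = pep + MEAN_BGR                     # un-normalize (add the means back)
pep = np.clip(pep, 0, 255).astype(np.uint8)  # valid pixel range for an image
print(pep.shape)  # (224, 224, 3)
```

Using .reshape on a transposed array scrambles the memory layout, which is exactly what produces the tiled "9 copies" artifact.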

How to implement a separate learning rate or optimizer for different layers in Chainer?

与世无争的帅哥 submitted on 2019-12-11 04:22:16
Question: In my NN structure, I want to use a different learning rate or optimizer (e.g. AdaGrad) in each layer. How can I implement this?

Answer 1: After you set up the optimizer with the model, each parameter of each link in the model has an update_rule attribute (e.g. AdaGradRule in this case), which defines how that parameter is updated. Each update_rule has its own hyperparam attribute, so you can overwrite these hyperparameters per parameter in the link. Below is a sample code: class MLP

In Chainer, how to early-stop iteration using chainer.training.Trainer?

泄露秘密 submitted on 2019-12-08 03:58:11
Question: I am using the Chainer framework (deep learning). Suppose I have to stop iterating once the gap between two iterations' objective values is small: f - old_f < eps. But chainer.training.Trainer's stop_trigger is an (args.epoch, 'epoch') tuple. How do I trigger an early stop?

Answer 1: I implemented an EarlyStoppingTrigger example according to @Seiya Tokui's answer, based on your situation:

```python
from chainer import reporter
from chainer.training import util

class EarlyStoppingTrigger(object):
    """Early stopping trigger It
```
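The trigger class above is cut off in the source. A simplified, self-contained version of the same idea (without Chainer's reporter machinery, which the full answer uses) looks like this — a stop trigger is just a callable that returns True when training should stop, here fed the loss value directly:

```python
class SimpleEarlyStoppingTrigger(object):
    """Stop when the monitored value improves by less than `eps`.

    Simplified stand-in for the reporter-based trigger in the answer: the
    value is passed in directly instead of being read from trainer.observation.
    """
    def __init__(self, eps):
        self.eps = eps
        self._previous = None

    def __call__(self, value):
        stop = (self._previous is not None
                and self._previous - value < self.eps)
        self._previous = value
        return stop

trigger = SimpleEarlyStoppingTrigger(eps=0.01)
losses = [1.0, 0.5, 0.3, 0.295, 0.294]
for i, loss in enumerate(losses):
    if trigger(loss):
        print("early stop at iteration", i)  # early stop at iteration 3
        break
```

With a real Trainer, the trigger instead receives the trainer object and reads the monitored metric from it, but the improvement test (f - old_f < eps) is the same.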

How to use CUDA pinned "zero-copy" memory for a memory-mapped file?

◇◆丶佛笑我妖孽 submitted on 2019-12-05 02:42:33
Question: Objective/Problem: In Python, I am looking for a fast way to read/write data from a memory-mapped file to a GPU. In a previous SO post [Cupy OutOfMemoryError when trying to cupy.load larger dimension .npy files in memory map mode, but np.load works fine], it is mentioned that this is possible using CUDA pinned "zero-copy" memory. Furthermore, it seems this method was developed by the author of [cuda - Zero-copy memory, memory-mapped file], though that person was working in C++.
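The actual zero-copy transfer needs CuPy and a CUDA device, but the host side of the recipe is a plain NumPy memory map. A hedged sketch of that half (the file name and shapes are arbitrary; the GPU step is only described in a comment because it cannot run without CUDA hardware):

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "data.dat")

# create a memory-mapped array backed by the file and fill it
mm = np.memmap(path, dtype=np.float32, mode="w+", shape=(1000, 256))
mm[:] = 1.0
mm.flush()

# re-open read-only; pages are loaded lazily rather than copied into RAM up front
ro = np.memmap(path, dtype=np.float32, mode="r", shape=(1000, 256))
print(ro[0, :3])  # [1. 1. 1.]

# On a machine with a GPU, the pinned "zero-copy" step would involve allocating
# page-locked host memory (e.g. via CuPy's pinned-memory facilities) and doing
# an asynchronous host-to-device copy from it; that part is omitted here.
```

The open question from the post, whether the memory map itself can be registered as pinned memory rather than copied through a pinned staging buffer, is not answered by this sketch.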