lasagne

Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network — Chinese Translation of the Paper

Submitted by ♀尐吖头ヾ on 2020-04-17 14:14:20
Author: Tyan. Blog: noahsnail.com | CSDN | 简书. Note: the author translated this paper for learning purposes only; in case of infringement, please contact the author to have the post deleted. Collected paper translations: https://github.com/SnailTyan/deep-learning-papers-translation

Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

Abstract: Despite the breakthroughs in accuracy and speed of single-image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying, in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss uses a discriminator network to push our solution toward the natural image manifold
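For reference, the perceptual loss the abstract describes combines the two terms as a weighted sum; in the SRGAN paper's notation (the 10^-3 weighting is the value the authors report, restated here for orientation):

```latex
% SRGAN perceptual loss: content term plus weighted adversarial term
l^{SR} \;=\; \underbrace{l^{SR}_{X}}_{\text{content loss}}
\;+\; \underbrace{10^{-3}\, l^{SR}_{Gen}}_{\text{adversarial loss}}
```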

How to keep the weight value at zero in a particular location using theano or lasagne?

Submitted by 心不动则不痛 on 2019-12-24 01:37:08
Question: I'm a theano and lasagne user. I have a problem dealing with variable-length input matrices, i.e.:

    x1 = [0, 1, 3]
    x2 = [1, 2]
    matrix_embedding = [
        [0.1, 0.2, 0.3],
        [0.4, 0.5, 0.6],
        [0.2, 0.3, 0.5],
        [0.5, 0.6, 0.7],
    ]
    matrix_embedding[x1] = [
        [0.1, 0.2, 0.3],
        [0.4, 0.5, 0.6],
        [0.5, 0.6, 0.7],
    ]
    matrix_embedding[x2] = [
        [0.4, 0.5, 0.6],
        [0.2, 0.3, 0.5],
    ]

So, I try to use padding.

    matrix_padding_embedding = [
        [0.1, 0.2, 0.3],
        [0.4, 0.5, 0.6],
        [0.2, 0.3, 0.5],
        [0.5, 0.6,
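One common approach to this padding idea is to reserve an extra all-zero row in the embedding matrix and pad every short index list with that row's index; the sketch below shows it in plain numpy (names like `PAD` and `augment` are illustrative, not from the question — with theano one would do the same gather with symbolic indexing and re-zero or mask the padding row after each update):

```python
import numpy as np

# Embedding matrix from the question.
embedding = np.array([
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.2, 0.3, 0.5],
    [0.5, 0.6, 0.7],
])

PAD = embedding.shape[0]                            # index 4 -> reserved padding row
padded = np.vstack([embedding, np.zeros((1, 3))])   # append the all-zero row

x1 = [0, 1, 3]
x2 = [1, 2]
max_len = max(len(x1), len(x2))
batch = np.array([x + [PAD] * (max_len - len(x)) for x in (x1, x2)])

looked_up = padded[batch]   # shape (2, 3, 3); padded slots are all-zero rows
```

After each training step one would reset `padded[PAD] = 0` (or zero that row's gradient) so the padding row stays at zero.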

Lasagne DropoutLayer does not utilize GPU efficiently

Submitted by 寵の児 on 2019-12-24 00:37:13
Question: I am using theano and lasagne for a DNN speech enhancement project. I use a feed-forward network very similar to the mnist example in the lasagne documentation (https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py). This network uses several dropout layers. I train my network on an Nvidia Titan X GPU. However, when I do not use dropout, my GPU utilization is approximately 60% and one epoch takes around 60 s, but when I use dropout my GPU utilization drops to 8% and each epoch takes
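For context, what a dropout layer does each mini-batch is draw a fresh random mask and rescale the survivors; the numpy sketch below shows that inverted-dropout behavior (it is a sketch of the concept, not Lasagne's implementation — the relevant point for GPU utilization is that the mask is regenerated on every call, so a slow random-number path can starve the GPU):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, deterministic=False):
    """Inverted dropout (conceptual sketch of lasagne.layers.DropoutLayer).

    At train time each unit is zeroed with probability p and survivors are
    scaled by 1/(1-p); with deterministic=True (test time) it is the identity.
    """
    if deterministic or p == 0:
        return x
    mask = rng.random(x.shape) >= p      # fresh random mask every call
    return x * mask / (1.0 - p)

x = np.ones((4, 8))
y = dropout(x, p=0.5)                    # entries are either 0.0 or 2.0
```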

How to implement Weighted Binary CrossEntropy on theano?

Submitted by 我与影子孤独终老i on 2019-12-21 12:06:14
Question: How to implement Weighted Binary CrossEntropy on theano? My convolutional neural network only predicts values between 0 and 1 (sigmoid). I want to penalize my predictions in this way: basically, I want to penalize MORE when the model predicts 0 but the truth was 1.

Question: How can I create this weighted binary crossentropy function using theano and lasagne?

I tried this below:

    prediction = lasagne.layers.get_output(model)
    import theano.tensor as T

    def weighted_crossentropy(predictions, targets):
        # Copy
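A minimal sketch of the loss being asked for, written in numpy so it is easy to check (the weights `w_fn`/`w_fp` are illustrative names; with theano one would write the same expression with `T.clip` and `T.log` on symbolic tensors):

```python
import numpy as np

def weighted_binary_crossentropy(pred, target, w_fn=2.0, w_fp=1.0, eps=1e-7):
    """Weighted binary cross-entropy.

    w_fn scales the term that fires when the truth is 1 (so predicting 0
    for a true 1 is penalized more); w_fp scales the truth-is-0 term.
    """
    pred = np.clip(pred, eps, 1 - eps)           # avoid log(0)
    return -(w_fn * target * np.log(pred)
             + w_fp * (1 - target) * np.log(1 - pred))
```

With `w_fn > w_fp`, a confident false negative costs more than an equally confident false positive, which is exactly the asymmetry the question describes.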


Input dimension mismatch binary crossentropy Lasagne and Theano

Submitted by 放肆的年华 on 2019-12-13 13:23:19
Question: I read all the posts on the net addressing the issue where people forgot to change the target vector to a matrix, but since the problem remains after this change, I decided to ask my question here. Workarounds are mentioned below, but new problems show up, and I am thankful for suggestions! Using a convolutional network setup and binary crossentropy with a sigmoid activation function, I get a dimension mismatch problem, but not during training, only during validation / test data evaluation. For some
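The "target vector to a matrix" fix the question refers to is usually a shape issue: a sigmoid output layer produces predictions of shape (N, 1), while labels often arrive as a flat (N,) vector. A numpy sketch of the mismatch and the reshape that resolves it (the data here is made up for illustration):

```python
import numpy as np

y = np.array([0, 1, 1, 0])                       # labels, shape (4,) -> mismatch
y_col = y.reshape(-1, 1)                         # shape (4, 1) -> matches output

pred = np.array([[0.2], [0.8], [0.6], [0.1]])    # sigmoid output, shape (N, 1)

# Elementwise binary cross-entropy; shapes now agree, no broadcasting surprise.
loss = -(y_col * np.log(pred) + (1 - y_col) * np.log(1 - pred))
```

The same reshape must be applied consistently to the training, validation, and test targets, which is why the error can appear only at evaluation time if one split was converted and another was not.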

Lasagne vs Theano possible version mismatch (Windows)

Submitted by ぃ、小莉子 on 2019-12-12 14:21:48
Question: So I finally managed to get theano up and running on the GPU using this guide. (The test code runs fine, telling me it used the GPU, YAY!!) I then wanted to try it out and followed this guide for training a CNN on digit recognition. The problem is: I get errors from the way lasagne calls theano (I guess there is a version mismatch here):

    Using gpu device 0: GeForce GT 730M (CNMeM is disabled, cuDNN not available)
    Traceback (most recent call last):
      File "C:\Users\Soren Jensen\Desktop\CNN-test

Realtime Data augmentation in Lasagne

Submitted by 若如初见. on 2019-12-12 09:09:49
Question: I need to do real-time augmentation on my dataset as input to a CNN, but I am having a really tough time finding suitable libraries for it. I have tried caffe, but its DataTransform doesn't support many real-time augmentations like rotation, etc. So for ease of implementation I settled on Lasagne. But it seems that it also doesn't support real-time augmentation. I have seen some posts related to facial keypoints detection where the author uses the BatchIterator of nolearn.lasagne. But I am not sure
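The core of real-time augmentation is a per-mini-batch transform applied on the fly rather than to the stored dataset; the numpy sketch below shows the idea with random horizontal flips (illustrative only — in nolearn.lasagne the same logic would live inside a `BatchIterator` subclass's `transform` method, which is called once per mini-batch during training):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_batch(X):
    """Randomly flip roughly half the images in a batch horizontally.

    X has NCHW layout (batch, channels, height, width); the input is
    copied so the stored dataset is never modified.
    """
    X = X.copy()
    flip = rng.random(len(X)) < 0.5
    X[flip] = X[flip, :, :, ::-1]      # reverse the width axis
    return X

batch = rng.random((8, 1, 4, 4)).astype(np.float32)
augmented = augment_batch(batch)
```

Because a fresh mask is drawn every call, each epoch sees a different augmented view of the same underlying data, which is the point of doing it in real time.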

How to calculate F1-micro score using lasagne

Submitted by ╄→尐↘猪︶ㄣ on 2019-12-11 11:49:14
Question:

    import theano.tensor as T
    import numpy as np
    from nolearn.lasagne import NeuralNet

    def multilabel_objective(predictions, targets):
        epsilon = np.float32(1.0e-6)
        one = np.float32(1.0)
        pred = T.clip(predictions, epsilon, one - epsilon)
        return -T.sum(targets * T.log(pred) + (one - targets) * T.log(one - pred), axis=1)

    net = NeuralNet(
        # your other parameters here (layers, update, max_epochs...)
        # here are the ones you're interested in:
        objective_loss_function=multilabel_objective,
        custom_score=(
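A micro-averaged F1 pools true positives, false positives, and false negatives over all labels before computing precision and recall; a self-contained numpy sketch (in practice one would typically pass `sklearn.metrics.f1_score(..., average="micro")` rather than hand-rolling it, and the `threshold` parameter here is an assumption for turning sigmoid probabilities into hard labels):

```python
import numpy as np

def f1_micro(y_true, y_pred_proba, threshold=0.5):
    """Micro-averaged F1: pool TP/FP/FN across all samples and labels."""
    y_pred = (np.asarray(y_pred_proba) >= threshold).astype(int)
    y_true = np.asarray(y_true).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A function with this signature (true labels, predicted probabilities) is the shape nolearn's `custom_score` expects, passed as a `("name", callable)` pair.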

Convolutional Neural Network accuracy with Lasagne (regression vs classification)

Submitted by  ̄綄美尐妖づ on 2019-12-10 22:54:22
Question: I have been playing with Lasagne for a while now on a binary classification problem using a convolutional neural network. However, although I get okay(ish) results for training and validation loss, my validation and test accuracy is always constant (the network always predicts the same class). I have come across this, someone who has had the same problem as me with Lasagne. Their solution was to set regression=True, as they are using Nolearn on top of Lasagne. Does anyone know how to set this
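Before reaching for configuration flags, it is worth confirming the symptom directly: "accuracy is always constant" usually means every prediction row has the same argmax. A tiny numpy diagnostic for that (illustrative helper, not part of either library):

```python
import numpy as np

def predicts_single_class(probs):
    """Return True when the argmax of every prediction row is the same
    class, i.e. the network output has collapsed onto one class."""
    labels = np.argmax(np.asarray(probs), axis=1)
    return len(np.unique(labels)) == 1

# Collapsed output: every row favors class 0.
collapsed = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]]
# Healthy output: predictions vary across classes.
healthy = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
```

If the output has indeed collapsed, the usual suspects are a too-high learning rate, unbalanced classes, or a mismatched objective, which is the territory the `regression=True` workaround lives in for Nolearn users.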