neural-network

What is a batch in TensorFlow?

Submitted by 强颜欢笑 on 2019-12-17 23:47:34
Question: The introductory documentation I am reading (TOC here) introduces the term without having defined it. [1] https://www.tensorflow.org/get_started/ [2] https://www.tensorflow.org/tutorials/mnist/tf/

Answer 1: Let's say you want to do digit recognition (MNIST) and you have defined your network architecture (a CNN). Now you could start feeding the images from the training data to the network one by one and getting a prediction (up to this step it is called doing inference), then compute the …
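The batching idea the answer is building toward can be sketched in plain Python (a toy illustration, not the TensorFlow API; the dataset and batch size here are made up):

```python
def make_batches(samples, batch_size):
    """Split a dataset into consecutive mini-batches.

    Instead of feeding samples one by one (batch size 1) or all at
    once (full-batch), training usually processes a small batch per
    step: one forward pass, one averaged gradient, one weight update.
    """
    return [samples[i:i + batch_size]
            for i in range(0, len(samples), batch_size)]

# 10 toy "images", batch size 4 -> batches of 4, 4, and 2 samples
dataset = list(range(10))
batches = make_batches(dataset, 4)
print([len(b) for b in batches])  # [4, 4, 2]
```

The last batch can be smaller than the rest when the dataset size is not a multiple of the batch size; frameworks typically either keep it or drop it, depending on configuration.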

Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'

Submitted by 非 Y 不嫁゛ on 2019-12-17 23:12:31
Question: I got this error message when declaring the input layer in Keras:

    ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution' (op: 'Conv2D') with input shapes: [?,1,28,28], [3,3,28,32].

My code is like this:

    model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))

Sample application: https://github.com/IntellijSys/tensorflow/blob/master/Keras.ipynb

Answer 1: By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to …
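The arithmetic behind the error can be sketched with the standard convolution output-size formula (a plain-Python illustration assuming stride 1 and 'valid'-style padding; the helper name is mine, not a Keras API). Read with channels-last conventions, the shape (1, 28, 28) means a height of 1, which a 3x3 kernel cannot fit:

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    """Spatial output size of a convolution along one dimension:
    out = (in + 2*padding - kernel) // stride + 1.
    A non-positive result means the kernel does not fit, which is
    exactly the "negative dimension size" error in the question."""
    return (input_size + 2 * padding - kernel_size) // stride + 1

# channels-last reading of (1, 28, 28): height 1, width 28, 28 channels
print(conv_output_size(1, 3))   # -1 -> invalid, triggers the error
# channels-first intent: a 28x28 image with 1 channel
print(conv_output_size(28, 3))  # 26 -> a 26x26 feature map
```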

Choosing from different cost function and activation function of a neural network

Submitted by 戏子无情 on 2019-12-17 23:02:51
Question: Recently I started toying with neural networks, and I was trying to implement an AND gate with TensorFlow. I am having trouble understanding when to use different cost and activation functions. This is a basic neural network with only input and output layers, no hidden layers. First I tried to implement it this way. As you can see, it is a poor implementation, but I think it gets the job done, at least in some way. I used only the raw outputs, not one-hot encoded outputs. For activation …
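One common pairing for this kind of binary problem is a sigmoid activation with a cross-entropy cost. Here is a minimal pure-Python sketch of that combination on the AND gate (my own toy gradient-descent loop, not the asker's TensorFlow code; the learning rate and epoch count are arbitrary choices):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# AND gate truth table
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5

for _ in range(2000):
    for (x1, x2), t in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # With sigmoid activation + cross-entropy cost, the output
        # gradient conveniently simplifies to (prediction - target).
        err = p - t
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for x1, x2 in X]
print(preds)  # [0, 0, 0, 1]
```

A single neuron suffices here because AND is linearly separable; XOR, by contrast, would need a hidden layer.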

What is the difference between loss function and metric in Keras? [duplicate]

Submitted by 旧街凉风 on 2019-12-17 22:44:56
Question: This question already has answers here: What is "metrics" in Keras? (4 answers). Closed 10 months ago. The difference between a loss function and metrics in Keras is not clear to me, and the documentation was not helpful.

Answer 1: The loss function is used to optimize your model; it is the function that gets minimized by the optimizer. A metric is used to judge the performance of your model; it is only for you to look at and has nothing to do with the optimization process.

Answer 2: The …
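The distinction can be shown without Keras at all. In this pure-Python sketch (a made-up one-parameter model, not Keras code), the loss gradient drives every weight update, while the metric is merely computed and reported:

```python
# Toy 1-parameter model y = w * x, fit to data generated by y = 2 * x.
# The LOSS (mean squared error) is what gradient descent minimizes;
# the METRIC (fraction of predictions within 0.1 of the target) is
# only computed for reporting -- it never enters the update rule.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0
lr = 0.05

for epoch in range(100):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - t) * x for x, t in data) / len(data)
    w -= lr * grad  # only the loss gradient updates the model

metric = sum(abs(w * x - t) < 0.1 for x, t in data) / len(data)
print(round(w, 2), metric)  # w approaches 2.0, metric reaches 1.0
```

This is also why a metric can be non-differentiable (like accuracy) while the loss must be differentiable: only the loss is ever differentiated.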

Open-source .NET neural network library? [closed]

Submitted by 喜欢而已 on 2019-12-17 22:16:12
Question: Closed. This question is off-topic and is not currently accepting answers. Closed last year. Can anyone recommend a good open-source .NET neural network library? Thanks!

Answer 1: The best .NET neural network library is AForge. Links: MainPage, CodeProject article.

Answer 2: Encog is a free, open-source neural network API for both Java and .NET. http://www.heatonresearch.com/encog

Answer 3: In case this helps anybody else, MS …

Fast sigmoid algorithm

Submitted by 放肆的年华 on 2019-12-17 21:45:04
Question: The sigmoid function is defined as f(x) = 1 / (1 + exp(-x)). I found that using the C built-in function exp() to calculate the value of f(x) is slow. Is there any faster algorithm to calculate the value of f(x)?

Answer 1: You don't have to use the actual, exact sigmoid function in a neural network algorithm; you can replace it with an approximation that has similar properties but is faster to compute. For example, you can use the "fast sigmoid" function f(x) = x / (1 + abs(x)). Using the first terms of the series …
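The fast-sigmoid approximation from the answer can be compared side by side with the exact logistic sigmoid (a Python sketch of the formulas above; the sample points are arbitrary). Note that x / (1 + |x|) ranges over (-1, 1) rather than (0, 1), so it needs rescaling if used as a drop-in replacement:

```python
import math

def sigmoid(x):
    # exact logistic sigmoid: requires an exp() call
    return 1.0 / (1.0 + math.exp(-x))

def fast_sigmoid(x):
    # x / (1 + |x|): no exp() call, similar S-shape,
    # but its range is (-1, 1) instead of (0, 1)
    return x / (1.0 + abs(x))

def fast_sigmoid_rescaled(x):
    # shift/scale the fast sigmoid into (0, 1) to match the logistic
    return 0.5 * (fast_sigmoid(x) + 1.0)

for x in (-2.0, 0.0, 2.0):
    print(x, round(sigmoid(x), 3), round(fast_sigmoid_rescaled(x), 3))
```

The two functions agree exactly at x = 0 and diverge somewhat in the tails; whether that accuracy trade-off is acceptable depends on the network.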

OpenCL / AMD: Deep Learning [closed]

Submitted by こ雲淡風輕ζ on 2019-12-17 21:38:59
Question: Closed. This question needs to be more focused and is not currently accepting answers. Closed 11 months ago. While googling and doing some research, I was not able to find any serious/popular framework/SDK for scientific GPGPU computing with OpenCL on AMD hardware. Is there any literature and/or software I missed? I am especially interested in deep learning. For all I know …

Shaping data for LSTM, and feeding output of dense layers to LSTM

Submitted by 被刻印的时光 ゝ on 2019-12-17 20:59:41
Question: I'm trying to figure out the proper syntax for the model I'm trying to fit. It's a time-series prediction problem, and I want to use a few dense layers to improve the representation of the time series before I feed it to the LSTM. Here's a dummy series that I'm working with:

    import pandas as pd
    import matplotlib.pyplot as plt
    plt.style.use('seaborn-whitegrid')
    import numpy as np
    import keras as K
    import tensorflow as tf

    d = pd.DataFrame(data={"x": np.linspace(0, 100, 1000)})
    d['l1_x'] = d.x …
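Whatever the rest of the model looks like, the core shaping requirement is that an LSTM layer expects a 3-D input of shape (samples, timesteps, features). A numpy sketch of turning a 1-D series like the question's into that shape (the window length of 10 is an arbitrary choice, and the helper name is mine):

```python
import numpy as np

def make_windows(series, timesteps):
    """Reshape a 1-D series into the (samples, timesteps, features)
    tensor an LSTM layer expects: each sample is a sliding window of
    `timesteps` consecutive values, with one feature per step."""
    windows = [series[i:i + timesteps]
               for i in range(len(series) - timesteps)]
    return np.asarray(windows)[..., np.newaxis]

series = np.linspace(0, 100, 1000)  # the question's dummy x series
X = make_windows(series, timesteps=10)
print(X.shape)  # (990, 10, 1)
```

Dense layers applied per timestep before the LSTM would change only the last (features) dimension; the (samples, timesteps, ...) structure must survive up to the LSTM.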

Using SparseTensor as a trainable variable?

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-17 19:08:44
Question: I'm trying to use SparseTensor to represent weight variables in a fully-connected layer. However, it seems that TensorFlow 0.8 doesn't allow using a SparseTensor as a tf.Variable. Is there any way around this? I've tried:

    import tensorflow as tf

    a = tf.constant(1)
    b = tf.SparseTensor([[0,0]], [1], [1,1])

    print a.__class__  # shows <class 'tensorflow.python.framework.ops.Tensor'>
    print b.__class__  # shows <class 'tensorflow.python.framework.ops.SparseTensor'>

    tf.Variable(a)  # Variable is …
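One common workaround for a fixed sparsity pattern is to keep a dense trainable weight matrix together with a fixed binary mask: the effective weights are weights * mask, so masked-out entries never contribute, and their gradients can simply be zeroed. A numpy sketch of the forward pass only (my own illustration of the idea, not TensorFlow 0.8 code; shapes and the index list are made up):

```python
import numpy as np

# Fixed sparsity pattern: only these positions are "real" weights.
indices = [(0, 0), (2, 1)]
mask = np.zeros((3, 2))
for r, c in indices:
    mask[r, c] = 1.0

rng = np.random.default_rng(0)
weights = rng.standard_normal((3, 2))  # dense, fully trainable

x = np.ones((1, 3))
out = x @ (weights * mask)  # only masked-in weights contribute
print(out.shape)  # (1, 2)
```

The trade-off is that the dense matrix still costs full memory and compute; the mask only enforces the sparsity pattern, it does not exploit it for speed.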

What is the difference between an Embedding Layer and a Dense Layer?

Submitted by 自闭症网瘾萝莉.ら on 2019-12-17 18:48:57
Question: The docs for an Embedding layer in Keras say: "Turns positive integers (indexes) into dense vectors of fixed size. eg. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]". I believe this could also be achieved by encoding the inputs as one-hot vectors of length vocabulary_size and feeding them into a Dense layer. Is an Embedding layer merely a convenience for this two-step process, or is something fancier going on under the hood?

Answer 1: Mathematically, the difference is this: an embedding layer performs …
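The mathematical equivalence the question suspects can be checked directly: multiplying a one-hot vector by a weight matrix selects one row, which is exactly what an embedding lookup does. A numpy sketch (vocabulary size, embedding dimension, and token index are arbitrary):

```python
import numpy as np

vocab_size, embed_dim = 5, 2
rng = np.random.default_rng(1)
W = rng.standard_normal((vocab_size, embed_dim))  # shared weight matrix

token = 3

# Embedding layer: a plain row lookup
embedded = W[token]

# Equivalent dense layer (no bias, linear activation) on a one-hot input
one_hot = np.zeros(vocab_size)
one_hot[token] = 1.0
dense_out = one_hot @ W

print(np.allclose(embedded, dense_out))  # True
```

The practical difference is efficiency: the lookup indexes one row in O(embed_dim), while the dense route materializes a length-vocab_size one-hot vector and multiplies it through the whole matrix.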