neural-network

How can I apply multithreading to the backpropagation neural network training?

与世无争的帅哥 submitted on 2019-12-18 11:56:43
Question: For my university project I am creating a neural network that classifies the likelihood that a credit card transaction is fraudulent. I am training it with backpropagation, and I am writing it in Java. I would like to apply multithreading, because my computer is a quad-core i7, and it bugs me to spend hours training while most of my cores sit idle. But how would I apply multithreading to backpropagation? Backprop works by propagating the errors backwards through the network. One layer must be
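A common answer is to parallelise at the batch level rather than inside a single backward pass: each worker computes gradients for its own slice of a mini-batch, and the gradients are then summed and applied once. Below is a minimal, hedged sketch of that idea in Python with NumPy (the question is about Java, but the structure carries over directly); the one-layer linear model, the worker_grad helper and the chunking are illustrative assumptions, not the asker's code.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def worker_grad(args):
    """Gradient of squared error for a linear model y = X @ w on one chunk."""
    X, y, w = args
    pred = X @ w
    return X.T @ (pred - y)          # d(loss)/dw for this chunk only

def parallel_gradient_step(X, y, w, lr=0.01, n_workers=4):
    # Split the mini-batch into one chunk per worker.
    X_chunks = np.array_split(X, n_workers)
    y_chunks = np.array_split(y, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(worker_grad,
                              [(Xc, yc, w) for Xc, yc in zip(X_chunks, y_chunks)]))
    # Sum the per-chunk gradients and apply a single weight update.
    total_grad = sum(grads) / len(X)
    return w - lr * total_grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1024, 10))
    true_w = rng.normal(size=10)
    y = X @ true_w
    w = np.zeros(10)
    for _ in range(100):
        w = parallel_gradient_step(X, y, w)
```

In Java the same pattern maps onto an ExecutorService: submit one gradient task per batch slice, wait for all futures, sum the gradients, then apply the update on the main thread.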

Using Dropout in Pytorch: nn.Dropout vs. F.dropout

て烟熏妆下的殇ゞ submitted on 2019-12-18 11:47:57
Question: In PyTorch there are two ways to apply dropout: torch.nn.Dropout and torch.nn.functional.dropout. I struggle to see the difference between them: when should I use which? Does it make a difference? I don't see any performance difference when I switch them around. Answer 1: The technical differences have already been shown in the other answer. However, the main difference is that nn.Dropout is a torch Module itself, which brings some conveniences. A short example for illustration of some
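The practical difference is that nn.Dropout is a module registered on the model, so model.eval() switches it off automatically, while F.dropout must be told explicitly whether the model is training. A minimal sketch of both (the two small models below are illustrative, not taken from the question):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModuleDropout(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 10)
        self.drop = nn.Dropout(p=0.5)      # registered as a submodule

    def forward(self, x):
        # Dropout is active in training mode and disabled by model.eval().
        return self.drop(self.fc(x))

class FunctionalDropout(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 10)

    def forward(self, x):
        # self.training must be passed by hand; otherwise dropout ignores
        # eval()/train() entirely.
        return F.dropout(self.fc(x), p=0.5, training=self.training)

x = torch.randn(4, 10)
m = ModuleDropout(); m.eval()
f = FunctionalDropout(); f.eval()
print(torch.equal(m(x), m(x)))   # True: dropout is off in eval mode
print(torch.equal(f(x), f(x)))   # True only because self.training was forwarded
```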

TensorFlow: how is dataset.train.next_batch defined?

…衆ロ難τιáo~ submitted on 2019-12-18 11:28:28
Question: I am trying to learn TensorFlow and am studying the example at: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/autoencoder.ipynb I have some questions about the code below:

    for epoch in range(training_epochs):
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
        # Display
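mnist.train.next_batch comes from the old tensorflow.examples.tutorials.mnist input_data helper: it simply returns the next batch_size examples from an in-memory array and reshuffles once an epoch has been consumed. A rough, simplified sketch of what such a method does (not the actual TensorFlow source) in plain NumPy:

```python
import numpy as np

class SimpleDataSet:
    """Simplified stand-in for the old mnist.train object (illustrative only)."""

    def __init__(self, images, labels):
        self._images = images
        self._labels = labels
        self._num_examples = images.shape[0]
        self._index = 0

    def next_batch(self, batch_size):
        # Reshuffle and start over once the current epoch is exhausted.
        if self._index + batch_size > self._num_examples:
            perm = np.random.permutation(self._num_examples)
            self._images = self._images[perm]
            self._labels = self._labels[perm]
            self._index = 0
        start, end = self._index, self._index + batch_size
        self._index = end
        return self._images[start:end], self._labels[start:end]

# Usage, mirroring the training loop in the question:
train = SimpleDataSet(np.random.rand(1000, 784), np.random.rand(1000, 10))
batch_xs, batch_ys = train.next_batch(64)
```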

Are evolutionary algorithms and neural networks used in the same domains?

浪子不回头ぞ submitted on 2019-12-18 11:21:49
Question: I am trying to get a feel for the difference between the various classes of machine-learning algorithms. I understand that the implementations of evolutionary algorithms are quite different from the implementations of neural networks. However, they both seem to be geared toward determining a relationship between inputs and outputs from a potentially noisy set of training/historical data. From a qualitative perspective, are there problem domains that are better targets for neural networks as

Merge 2 sequential models in Keras

我只是一个虾纸丫 submitted on 2019-12-18 11:10:00
Question: I am trying to merge 2 sequential models in Keras. Here is the code:

    model1 = Sequential(layers=[
        # input layers and convolutional layers
        Conv1D(128, kernel_size=12, strides=4, padding='valid', activation='relu', input_shape=input_shape),
        MaxPooling1D(pool_size=6),
        Conv1D(256, kernel_size=12, strides=4, padding='valid', activation='relu'),
        MaxPooling1D(pool_size=6),
        Dropout(.5),
    ])
    model2 = Sequential(layers=[
        # input layers and convolutional layers
        Conv1D(128, kernel_size=20, strides=5,
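The usual way to combine two such branches is the functional API: run both branches, concatenate their outputs, and add a shared head on top. A minimal sketch with made-up shapes and layer sizes (the single shared input, the Dense head and the input shape are assumptions, not the asker's full architecture):

```python
from tensorflow.keras.layers import (Input, Conv1D, MaxPooling1D, Dropout,
                                     Flatten, Dense, concatenate)
from tensorflow.keras.models import Model

def conv_branch(inputs, kernel_size, strides):
    """One convolutional branch, mirroring the structure of model1/model2."""
    x = Conv1D(128, kernel_size=kernel_size, strides=strides,
               padding='valid', activation='relu')(inputs)
    x = MaxPooling1D(pool_size=6)(x)
    x = Dropout(0.5)(x)
    return Flatten()(x)

inp = Input(shape=(4096, 1))                # assumed input shape
branch1 = conv_branch(inp, kernel_size=12, strides=4)
branch2 = conv_branch(inp, kernel_size=20, strides=5)

merged = concatenate([branch1, branch2])     # join the two branches
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
```

Here both branches read the same Input for simplicity; two separate Input layers work the same way, with both passed to Model(inputs=[...]).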

Siamese Neural Network in TensorFlow

只谈情不闲聊 submitted on 2019-12-18 10:35:52
Question: I'm trying to implement a Siamese Neural Network in TensorFlow, but I cannot really find any working example on the Internet (see the Yann LeCun paper). The architecture I'm trying to build would consist of two LSTMs sharing weights and only connected at the end of the network. My question is: how do I build two neural networks sharing their weights (tied weights) in TensorFlow, and how do I connect them at the end? Thanks :) Edit: I implemented a simple and working example of a siamese
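In Keras (and TF 2.x generally), the simplest way to tie weights is to create one layer object and call it on both inputs; the same variables are then reused by both branches. A hedged sketch of a Siamese pair of LSTMs joined by an L1 distance at the end (shapes, sizes and the distance head are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Lambda, Dense
from tensorflow.keras.models import Model

seq_len, n_features = 20, 8              # assumed input shape

# One LSTM object => one set of weights, shared by both branches.
shared_lstm = LSTM(64)

left_in = Input(shape=(seq_len, n_features))
right_in = Input(shape=(seq_len, n_features))

left_vec = shared_lstm(left_in)           # same weights...
right_vec = shared_lstm(right_in)         # ...reused here

# Connect the two branches at the end with an element-wise L1 distance.
distance = Lambda(lambda t: tf.abs(t[0] - t[1]))([left_vec, right_vec])
out = Dense(1, activation='sigmoid')(distance)   # similar / dissimilar

siamese = Model(inputs=[left_in, right_in], outputs=out)
siamese.compile(optimizer='adam', loss='binary_crossentropy')
siamese.summary()
```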

keras: how to save the training history attribute of the history object

让人想犯罪 __ submitted on 2019-12-18 10:35:06
Question: In Keras, we can capture the output of model.fit in a history object as follows:

    history = model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch, validation_data=(X_test, y_test))

Now, how do I save the history attribute of the history object to a file for later use (e.g. to draw plots of accuracy or loss against epochs)? Answer 1: What I use is the following:

    with open('/trainHistoryDict', 'wb') as file_pi:
        pickle.dump(history.history, file_pi)

In this way I save the history as a dictionary in
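A small end-to-end sketch of the save-and-reload round trip with pickle (the file name and the tiny model are assumptions for illustration; history.history itself is just a plain dict of lists, which is why pickling it works):

```python
import pickle
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Dense(8, activation='relu', input_shape=(4,)), Dense(1)])
model.compile(optimizer='adam', loss='mse')

X, y = np.random.rand(100, 4), np.random.rand(100, 1)
history = model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)

# Save only the metrics dict, not the History object itself.
with open('train_history.pkl', 'wb') as f:
    pickle.dump(history.history, f)

# Later, possibly in a separate session:
with open('train_history.pkl', 'rb') as f:
    hist = pickle.load(f)
print(hist.keys())      # e.g. dict_keys(['loss', 'val_loss'])
print(hist['loss'])     # one entry per epoch, ready for plotting
```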

How to calculate prediction uncertainty using Keras?

梦想的初衷 submitted on 2019-12-18 10:33:32
Question: I would like to calculate NN model certainty/confidence (see "What my deep model doesn't know"): when the NN tells me an image represents "8", I would like to know how certain it is. Is my model 99% certain it is "8", or is it 51% certain it is "8" but it could also be a "6"? Some digits are quite ambiguous and I would like to know for which images the model is just "flipping a coin". I have found some theoretical writings about this but I have trouble putting this in code. If I understand correctly, I
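The approach usually referenced from that post is Monte Carlo dropout: keep dropout active at prediction time, run the same input through the model many times, and read the spread of the outputs as an uncertainty estimate. A hedged sketch in Keras (the tiny MNIST-style model and the 100 forward passes are illustrative choices, not the only way to do this):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model

inp = Input(shape=(784,))
x = Dense(128, activation='relu')(inp)
x = Dropout(0.5)(x)
out = Dense(10, activation='softmax')(x)
model = Model(inp, out)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# ... train the model as usual ...

def mc_dropout_predict(model, x, n_samples=100):
    """Run n_samples stochastic forward passes with dropout left on."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)   # mean prediction, per-class spread

x_new = np.random.rand(1, 784).astype('float32')    # stand-in for a real image
mean_probs, std_probs = mc_dropout_predict(model, x_new)
print('predicted class:', mean_probs.argmax())
print('spread on that class:', std_probs[0, mean_probs.argmax()])
```

A small spread across the sampled forward passes suggests the model is confident; a large spread flags the "flipping a coin" images the question asks about.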

Can neural networks approximate any function given enough hidden neurons?

試著忘記壹切 submitted on 2019-12-18 10:12:45
Question: I understand that neural networks with any number of hidden layers can approximate nonlinear functions; however, can they approximate f(x) = x^2? I can't think of how they could. It seems like a very obvious limitation of neural networks that could limit what they can do. For example, because of this limitation, neural networks probably can't properly approximate many functions used in statistics, like an exponential moving average, or even variance. Speaking of moving averages, can recurrent
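The universal approximation theorem only promises approximation on a bounded interval, and in practice a small MLP fits x^2 on such an interval quite well. A quick hedged sketch (layer sizes, interval and epoch count are arbitrary choices for illustration):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Fit f(x) = x^2 on the bounded interval [-3, 3].
x = np.linspace(-3, 3, 2000).reshape(-1, 1)
y = x ** 2

model = Sequential([
    Dense(32, activation='tanh', input_shape=(1,)),
    Dense(32, activation='tanh'),
    Dense(1)                        # linear output for regression
])
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=200, batch_size=64, verbose=0)

test = np.array([[-2.0], [0.5], [2.5]])
print(model.predict(test, verbose=0))   # should be close to [4.0, 0.25, 6.25]

# Outside the training interval the fit degrades quickly, which is
# exactly the "bounded interval" caveat of the theorem.
print(model.predict(np.array([[10.0]]), verbose=0))  # far from 100
```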