neural-network

Machine Learning Algorithm for Predicting Order of Events?

浪尽此生 submitted on 2019-12-18 09:55:16
Question: A simple machine learning question, probably with numerous ways to solve it: there is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_3', 'event_4'. The events do not come in a completely random order. We will assume that there are some complex patterns to the order in which most events arrive, and that the rest of the events are just random. We do not know the patterns ahead of time, though. After each event is received, I want to predict what the next event will be based on the …
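A minimal sketch of one common approach: treat the stream as a Markov chain and predict the most frequent successor of the current event. The class name and the order-1 history are my own choices for illustration, not from the question:

```python
from collections import defaultdict, Counter

class NextEventPredictor:
    """Order-1 Markov predictor: counts event -> next-event transitions
    and predicts the most frequent successor of the latest event."""

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, event):
        # Update transition counts as each event arrives from the stream.
        if self.prev is not None:
            self.transitions[self.prev][event] += 1
        self.prev = event

    def predict_next(self):
        # Most frequent successor of the current event, or None if unseen.
        successors = self.transitions.get(self.prev)
        if successors:
            return successors.most_common(1)[0][0]
        return None
```

A longer history window (an order-k Markov model) would capture more complex patterns at the cost of needing more data.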

Ordering of batch normalization and dropout?

情到浓时终转凉″ submitted on 2019-12-18 09:53:56
Question: The original question was in regard to TensorFlow implementations specifically. However, the answers apply to implementations in general; this general answer is also the correct answer for TensorFlow. When using batch normalization and dropout in TensorFlow (specifically via contrib.layers), do I need to worry about the ordering? It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch …
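The core of the worry can be shown numerically: inverted dropout preserves an activation's mean but inflates its variance, so a batch-norm layer placed after dropout learns statistics that no longer hold at test time when dropout is switched off. The numbers below are a synthetic illustration, not taken from the answers:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)   # activations entering the layer

keep_prob = 0.5
mask = rng.random(x.shape) < keep_prob
x_dropped = np.where(mask, x / keep_prob, 0.0)  # inverted dropout (train time)

# Batch statistics that a following batch-norm layer would estimate:
print(f"variance without dropout: {x.var():.2f}")        # ~1.0 (test-time input)
print(f"variance after dropout:   {x_dropped.var():.2f}")  # ~2.0 (train-time input)
```

This train/test mismatch is why the answers to this question generally recommend placing dropout after normalization, e.g. Dense → BatchNorm → Activation → Dropout.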

Multithreaded backpropagation

拥有回忆 submitted on 2019-12-18 09:35:18
Question: I have written a backpropagation class in VB.NET (it works well) and I'm using it in a C# artificial intelligence project. But I have an AMD Phenom X3 at home and an Intel i5 at school, and my neural network is not multithreaded. How do I convert that backpropagation class to a multithreaded algorithm? Or how do I use GPGPU programming in it? Or should I use a third-party library that has a multithreaded backpropagation neural network? Answer 1: Jeff Heaton has recommended that you use resilient …
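The standard way to multithread backpropagation is data parallelism: split each batch across workers, compute partial gradients independently, and combine them. A toy sketch in Python/NumPy (a linear model stands in for the question's full network; in .NET the same pattern maps onto Parallel.For):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_grad(X, t, w):
    # Un-normalized MSE gradient for the linear model y = X @ w.
    return 2.0 * X.T @ (X @ w - t)

def parallel_grad(X, t, w, n_workers=4):
    # Split the batch, compute partial gradients in threads, then combine.
    chunks = np.array_split(np.arange(len(X)), n_workers)
    with ThreadPoolExecutor(n_workers) as pool:
        grads = pool.map(lambda idx: partial_grad(X[idx], t[idx], w), chunks)
    return sum(grads) / len(X)

rng = np.random.default_rng(0)
X, t, w = rng.normal(size=(64, 5)), rng.normal(size=64), rng.normal(size=5)
g = parallel_grad(X, t, w)
print(g.shape)  # (5,)
```

The combined gradient is identical to the serial one because the loss is a sum over samples. Jeff Heaton's Encog library ships a multithreaded resilient-propagation (RPROP) trainer, which is presumably what the truncated answer goes on to recommend.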

Tensorflow Keras Copy Weights From One Model to Another

ぐ巨炮叔叔 submitted on 2019-12-18 07:40:37
Question: Using Keras from TensorFlow 1.4.1, how does one copy weights from one model to another? As some background, I'm trying to implement a deep Q-network (DQN) for Atari games following the DQN publication by DeepMind. My understanding is that the implementation uses two networks, Q and Q'. The weights of Q are trained using gradient descent and are then periodically copied to Q'. Here's how I build Q and Q': ACT_SIZE = 4 LEARN_RATE = 0.0025 OBS_SIZE = 128 def buildModel(): model = tf…
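In Keras the copy itself is one call: get_weights returns a list of NumPy arrays and set_weights loads them into an architecturally identical model. A minimal sketch (the two dense layers are a stand-in for the question's buildModel, not the actual DQN architecture):

```python
import numpy as np
import tensorflow as tf

ACT_SIZE = 4
OBS_SIZE = 128

def build_model():
    # Stand-in architecture; the real DQN uses convolutional layers.
    return tf.keras.models.Sequential([
        tf.keras.Input(shape=(OBS_SIZE,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(ACT_SIZE),
    ])

q = build_model()         # trained by gradient descent
q_target = build_model()  # Q', refreshed periodically

# Periodic hard update: copy every weight tensor from Q to Q'.
q_target.set_weights(q.get_weights())
```

This is a hard update; some DQN variants instead blend the two weight lists for a soft (Polyak) update.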

How to solve XOR problem with MLP neural network?

心不动则不痛 submitted on 2019-12-18 07:38:08
Question: Tomorrow morning I have to take my neural network final exam, but there is a problem: I cannot solve the XOR problem with an MLP, and I don't know how to assign weights and bias values :( Answer 1: Seeing as you posted this 2 days ago, I guess I'm a little late to help with your exam :( However, learning is always a good thing, and learning about neural nets doubly so! Normally I'd answer this question by telling you to use a network with 2 input units (one for each boolean), 2 hidden units, and 1 output unit …
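The 2-2-1 assignment the answer is heading toward can be written out explicitly: with step activations, one hidden unit computes OR, the other NAND, and the output ANDs them. These particular weight and bias values are one standard choice, not the only one:

```python
def step(z):
    return 1 if z >= 0 else 0

def xor_mlp(x1, x2):
    h1 = step(1 * x1 + 1 * x2 - 0.5)    # OR:   fires unless both inputs are 0
    h2 = step(-1 * x1 - 1 * x2 + 1.5)   # NAND: fires unless both inputs are 1
    return step(1 * h1 + 1 * h2 - 1.5)  # AND of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))  # prints 0, 1, 1, 0
```

The constants subtracted inside step (0.5, 1.5, 1.5) are the bias terms the question asks about; each one shifts its unit's decision threshold.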

XOR problem solvable with 2x2x1 neural network without bias?

▼魔方 西西 submitted on 2019-12-18 06:09:54
Question: Is a neural network with 2 input nodes, 2 hidden nodes and an output node supposed to be able to solve the XOR problem, provided there is no bias? Or can it get stuck? Answer 1: Leave the bias in. It doesn't see the values of your inputs. In terms of a one-to-one analogy, I like to think of the bias as the offsetting c-value in the straight-line equation y = mx + c; it adds an independent degree of freedom to your system that is not influenced by the inputs to your network. Answer 2: If I remember …
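The "independent degree of freedom" point can be made concrete: with no biases and an origin-symmetric activation such as tanh, the network's output at input (0, 0) is pinned to 0 no matter what the weights are, so that corner of the XOR table is outside the weights' control entirely. A small numerical illustration, not taken from the answers:

```python
import numpy as np

def forward_no_bias(x, W1, W2):
    # 2-2-1 network with tanh activations and no bias terms.
    return np.tanh(W2 @ np.tanh(W1 @ x))

rng = np.random.default_rng(0)
for _ in range(5):
    W1 = rng.normal(size=(2, 2))
    W2 = rng.normal(size=(1, 2))
    out = forward_no_bias(np.zeros(2), W1, W2)
    print(out)  # always [0.]: tanh(0) = 0 propagates through every layer
```

Adding a bias to each unit restores that lost degree of freedom, which is exactly the c in the answer's y = mx + c analogy.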

What is “batch normalization”? Why use it? How does it affect prediction?

此生再无相见时 submitted on 2019-12-18 05:02:14
Question: Recently, many deep architectures use "batch normalization" for training. What is "batch normalization"? What does it do mathematically? In what way does it help the training process? How is batch normalization used during training? Is it a special layer inserted into the model? Do I need to normalize before each layer, or only once? Suppose I used batch normalization for training. Does this affect my test-time model? Should I replace the batch normalization with some other/equivalent layer …
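Mathematically, batch normalization standardizes each feature using batch statistics and then applies a learned scale and shift: y = γ·(x − μ)/√(σ² + ε) + β. A NumPy sketch of the train-time forward pass; the running-average bookkeeping mirrors what real layers do, though the momentum value here is my own choice:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, running, momentum=0.9, eps=1e-5):
    # Train time: normalize with the current batch's statistics...
    mu, var = x.mean(axis=0), x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # ...and update running statistics for use at test time.
    running["mean"] = momentum * running["mean"] + (1 - momentum) * mu
    running["var"] = momentum * running["var"] + (1 - momentum) * var
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=(256, 4))   # batch of 256 samples, 4 features
running = {"mean": np.zeros(4), "var": np.ones(4)}
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4), running=running)
print(y.mean(axis=0).round(3), y.var(axis=0).round(3))  # ~0 and ~1 per feature
```

This also answers the test-time question: the layer stays in the model at inference, but it normalizes with running["mean"] and running["var"] instead of the current batch's statistics, so it is not replaced, only switched to fixed statistics.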

Caffe: how to get the phase of a Python layer?

我们两清 submitted on 2019-12-18 04:06:28
Question: I created a "Python" layer "myLayer" in Caffe and use it in the net's train_val.prototxt. I insert the layer like this:

layer {
  name: "my_py_layer"
  type: "Python"
  bottom: "in"
  top: "out"
  python_param {
    module: "my_module_name"
    layer: "myLayer"
  }
  include { phase: TRAIN }  # THIS IS THE TRICKY PART!
}

Now, my layer only participates in the TRAIN phase of the net. How can I know that in my layer's setup function?

class myLayer(caffe.Layer):
    def setup(self, bottom, top):
        # I want to know here …
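One workaround (a sketch under assumptions; the actual answer is truncated above): since the prototxt already selects the phase with include { phase: TRAIN }, you can pass the same information to the layer explicitly through the param_str field of python_param, e.g. param_str: '{"phase": "TRAIN"}', and parse it in setup. Newer Caffe builds also expose self.phase directly.

```python
import json

try:
    import caffe
    LayerBase = caffe.Layer
except ImportError:        # lets the sketch run without Caffe installed
    LayerBase = object

class myLayer(LayerBase):
    def setup(self, bottom, top):
        # param_str carries whatever string python_param put in the prototxt.
        raw = getattr(self, "param_str", "")
        params = json.loads(raw) if raw else {}
        self.is_training = params.get("phase") == "TRAIN"
```

The downside is that the phase is stated twice in the prototxt (in include and in param_str), so the two must be kept in sync by hand.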

Interpreting a Self Organizing Map

匆匆过客 submitted on 2019-12-18 02:45:07
Question: I have been reading about self-organizing maps, and I understand the algorithm (I think), but something still eludes me. How do you interpret the trained network? How would you then actually use it for, say, a classification task (once you have done the clustering with your training data)? All of the material I can find (printed and digital) focuses on training the algorithm. I believe I may be missing something crucial. Regards. Answer 1: SOMs are mainly a dimensionality …
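For the classification part of the question, a common recipe is: after training, label each map node by the majority class of the training samples whose best-matching unit (BMU) it is, then classify a new sample by its BMU's label. A self-contained toy sketch (the grid size, learning-rate schedule, and data are invented for illustration):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
# Toy data: two well-separated 2-D clusters with labels 0 and 1.
X = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
               rng.normal([5, 5], 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

grid = rng.normal(2.5, 1.0, size=(9, 2))   # a 3x3 SOM, nodes flattened

def bmu(x):
    # Best-matching unit: the node whose weight vector is nearest to x.
    return int(np.argmin(((grid - x) ** 2).sum(axis=1)))

# Crude training loop: pull each sample's BMU toward the sample.
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)
    for i in rng.permutation(len(X)):
        n = bmu(X[i])
        grid[n] += lr * (X[i] - grid[n])

# Interpretation step: label every node by majority vote of its samples.
votes = [Counter() for _ in range(len(grid))]
for xi, yi in zip(X, y):
    votes[bmu(xi)][yi] += 1
node_label = {n: c.most_common(1)[0][0] for n, c in enumerate(votes) if c}

def classify(x):
    return node_label.get(bmu(x))  # None for nodes that never won a sample

print(classify(np.array([0.2, -0.1])), classify(np.array([4.8, 5.1])))
```

A real SOM would also update each BMU's grid neighbors with a shrinking neighborhood radius; that is omitted here for brevity, since the interpretation step is the same either way.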

How to create & train a neural model to use for Core ML [closed]

ぃ、小莉子 submitted on 2019-12-18 02:12:18
Question: Closed. This question needs to be more focused and is not currently accepting answers. Closed 2 years ago. Apple introduced Core ML. There are many third parties providing trained models. But what if I want to create a model myself? How can I do that, and what tools and technologies can I use? Answer 1: Core ML doesn't provide a way to train your own models. You can only convert existing ones …
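The workflow the answer points to is: train with a standard framework, then convert with Apple's coremltools package. A minimal sketch using scikit-learn (the toy data and output filename are made up; the conversion lines are left commented because coremltools typically runs on macOS):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: 1 feature, class boundary around 1.5.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.5]]))  # -> [1]

# Conversion step (needs `pip install coremltools`):
# import coremltools
# mlmodel = coremltools.converters.sklearn.convert(clf)
# mlmodel.save("MyClassifier.mlmodel")
```

Apple later added Create ML and Turi Create for training models directly, but at the time of this question the converter route above was the standard advice.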