neural-network

Keras ImageDataGenerator for Cloud ML Engine

情到浓时终转凉 — submitted on 2019-12-21 21:23:52

Question: I need to train a neural net fed by raw images that I store on Google Cloud Storage. To do that, I'm using the flow_from_directory method of my Keras image generator to find all the images and their associated labels on the storage:

```python
training_data_directory = args.train_dir
testing_data_directory = args.eval_dir
training_gen = datagenerator.flow_from_directory(
    training_data_directory,
    target_size=(img_width, img_height),
    batch_size=32)
validation_gen = basic_datagen.flow_from_directory(
```
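The key convention behind flow_from_directory is that class labels are inferred from subdirectory names, which is also why it expects a local filesystem path rather than a gs:// URL. A pure-Python sketch of that label-inference step (not using Keras itself; the cats/dogs class names and file names are made up for illustration):

```python
import os
import tempfile

# flow_from_directory infers labels from subdirectory names:
#   train_dir/cats/a.png -> class "cats"
#   train_dir/dogs/c.png -> class "dogs"
root = tempfile.mkdtemp()
for cls, names in {"cats": ["a.png", "b.png"], "dogs": ["c.png"]}.items():
    os.makedirs(os.path.join(root, cls))
    for n in names:
        open(os.path.join(root, cls, n), "wb").close()

# A minimal re-implementation of the label-inference step:
classes = sorted(d for d in os.listdir(root)
                 if os.path.isdir(os.path.join(root, d)))
class_index = {c: i for i, c in enumerate(classes)}
samples = [(os.path.join(root, c, f), class_index[c])
           for c in classes
           for f in sorted(os.listdir(os.path.join(root, c)))]
print(class_index)   # {'cats': 0, 'dogs': 1}
print(len(samples))  # 3
```

On Cloud ML Engine a common workaround is to copy the training images from the bucket to local disk first, then point flow_from_directory at the local copy.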

brain.js correct training of the neuralNetwork

百般思念 — submitted on 2019-12-21 20:54:11

Question: I must clearly have misunderstood something in the brain.js instructions on training. I played around with this repl.it code:

```javascript
const brain = require('brain.js');
const network = new brain.NeuralNetwork();
network.train([
  { input: { doseA: 0 },   output: { indicatorA: 0 } },
  { input: { doseA: 0.1 }, output: { indicatorA: 0.02 } },
  { input: { doseA: 0.2 }, output: { indicatorA: 0.04 } },
  { input: { doseA: 0.3 }, output: { indicatorA: 0.06 } },
  { input: { doseA: 0.4 }, output: { indicatorA: 0.08 } },
```

Neural net input/output

Deadly — submitted on 2019-12-21 20:29:33

Question: Can anyone explain how to handle more complex data sets, such as team stats, weather, dice, or complex number types? I understand all the math and how everything works; I just don't know how to input more complex data, or how to read the data the network spits out. Examples in Python would be a big help.

Answer 1: You have to encode your input and your output into something that can be represented by the neural network units (for example, 1 for "x has a certain property p", -1 for
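The usual recipe the answer is pointing at: scale numeric features to a small fixed range and one-hot encode categorical ones, then concatenate everything into one flat vector. A minimal sketch with a made-up "team stats + weather" record (the feature names and the 150-point scale are assumptions for illustration):

```python
# Fixed category order must be decided up front so vectors are comparable.
WEATHER = ["sunny", "rain", "snow"]

def one_hot(value, categories):
    """Encode a categorical value as a 1-of-K vector."""
    return [1.0 if value == c else 0.0 for c in categories]

def encode(record):
    # Numeric features are scaled to roughly [0, 1]; categorical
    # features are one-hot encoded and concatenated on the end.
    return ([record["win_rate"],            # already in [0, 1]
             record["avg_points"] / 150.0]  # assumed max ~150 points
            + one_hot(record["weather"], WEATHER))

x = encode({"win_rate": 0.62, "avg_points": 105, "weather": "rain"})
print(x)  # [0.62, 0.7, 0.0, 1.0, 0.0]
```

Reading the output works the same way in reverse: for classification, take the index of the largest output unit; for regression, undo the scaling.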

Add low layers in a Tensorflow model

若如初见. — submitted on 2019-12-21 19:18:23

Question: While developing a transfer learning algorithm, I use pre-trained neural networks and add layers on top. I am using TensorFlow and Python. Reusing existing graphs seems quite common in TensorFlow: you import the graph, for example from a MetaGraph, then you add new top layers by appending nodes. For example, I found this code:

```python
vgg_saver = tf.train.import_meta_graph(dir + '/vgg/results/vgg-16.meta')
# Access the graph
vgg_graph = tf.get_default_graph()
# Retrieve VGG inputs
self.x_plh =
```
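Independent of the TensorFlow graph mechanics, the transfer-learning pattern itself is: freeze the pretrained lower layers and train only the newly added head. A framework-agnostic NumPy sketch of that pattern (all numbers are made up; the gradient step folds constant factors into the learning rate):

```python
import numpy as np

W_pretrained = np.array([[0.5, -0.5, 1.0],
                         [0.25, 0.5, -0.5]])  # frozen lower layer
W_new = np.zeros((3, 2))                      # new trainable head

x = np.array([[1.0, -1.0]])
y = np.array([[1.0, 0.0]])

def loss(W_head):
    features = np.maximum(x @ W_pretrained, 0.0)  # frozen ReLU features
    return float(np.sum((features @ W_head - y) ** 2))

# One gradient step on the head only; W_pretrained is never updated.
features = np.maximum(x @ W_pretrained, 0.0)
grad = features.T @ (features @ W_new - y)
before = loss(W_new)
W_new = W_new - 0.1 * grad
after = loss(W_new)
print(before > after)  # True: the head fits the target, the base stays fixed
```

In TensorFlow terms, "train only the head" corresponds to passing just the new variables to the optimizer (e.g. via the var_list argument of minimize).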

Google Inceptionism: obtain images by class

懵懂的女人 — submitted on 2019-12-21 17:09:54

Question: In the famous Google Inceptionism article, http://googleresearch.blogspot.jp/2015/06/inceptionism-going-deeper-into-neural.html, they show images obtained for each class, such as banana or ant. I want to do the same for other datasets. The article does describe how the images were obtained, but I feel the explanation is insufficient. There is related code at https://github.com/google/deepdream/blob/master/dream.ipynb, but what it does is produce a random dreamy image, rather than specifying a
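The core idea behind the per-class images is gradient ascent on the input: start from noise and repeatedly nudge the image in the direction that increases the chosen class's score. A toy NumPy illustration of just that loop, with a stand-in scoring function whose maximum is known (in the real setting you would backpropagate a trained CNN's class logit back to the image pixels instead):

```python
import numpy as np

target = np.array([0.2, 0.8, -0.5])  # made-up "ideal" input for the class

def class_score(x):
    return -np.sum((x - target) ** 2)  # peaks exactly at x == target

def grad_score(x):
    return -2.0 * (x - target)         # analytic gradient w.r.t. the input

rng = np.random.default_rng(42)
x = rng.normal(size=3)                 # start from random noise
for _ in range(200):
    x += 0.05 * grad_score(x)          # gradient *ascent* on the input

print(np.allclose(x, target, atol=1e-3))  # True
```

The difference from the deepdream notebook is only the objective: instead of amplifying whatever activations are already present, you maximize one specific class unit (usually with some regularization, e.g. blurring or jitter, to keep the image natural-looking).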

Selectively zero weights in TensorFlow?

痞子三分冷 — submitted on 2019-12-21 16:03:01

Question: Let's say I have an NxM weight variable weights and a constant NxM matrix of 1s and 0s, mask. If a layer of my network is defined like this (with other layers defined similarly):

```python
masked_weights = mask * weights
layer1 = tf.nn.relu(tf.matmul(layer0, masked_weights) + biases1)
```

Will this network behave as if the corresponding 0s in mask were zeros in weights during training (i.e. as if the connections represented by those weights had been removed from the network entirely)? If not, how can I achieve
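It does behave that way: by the chain rule, the gradient with respect to weights picks up a factor of mask, so masked entries receive exactly zero gradient, and even if an optimizer moved them, the forward pass would still multiply them by zero. A NumPy sketch of the gradient computation (tiny made-up values):

```python
import numpy as np

weights = np.array([[0.5, -1.0],
                    [2.0,  0.3]])
mask = np.array([[1.0, 0.0],
                 [1.0, 1.0]])          # one connection "removed"
layer0 = np.array([[1.0, 2.0]])

masked = mask * weights
out = layer0 @ masked                   # forward pass (bias omitted for brevity)

grad_out = np.ones_like(out)            # pretend upstream gradient
grad_masked = layer0.T @ grad_out       # gradient w.r.t. masked weights
grad_weights = mask * grad_masked       # chain rule through the mask

print(grad_weights)  # [[1. 0.]
                     #  [2. 2.]]
```

The masked position gets a gradient of exactly 0, so it never moves during training.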

How to do machine learning when the inputs are of different sizes?

依然范特西╮ — submitted on 2019-12-21 12:20:02

Question: In standard cookbook machine learning, we operate on a rectangular matrix; that is, all of our data points have the same number of features. How do we cope with situations in which our data points have different numbers of features? For example, if we want to do visual classification but all of our pictures are of different dimensions, or if we want to do sentiment analysis but all of our sentences have different numbers of words, or if we want to do stellar classification but all of
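The most common workaround is to force everything to one fixed size: resize images, and pad or truncate sequences. A minimal sketch of the pad-or-truncate step for word sequences (PAD = 0 is an assumed padding id):

```python
PAD = 0

def to_fixed_length(seq, length):
    """Pad with PAD on the right, or truncate, so len(result) == length."""
    return (seq + [PAD] * length)[:length]

sentences = [[4, 9, 2], [7], [3, 3, 8, 1, 5]]
batch = [to_fixed_length(s, 4) for s in sentences]
print(batch)  # [[4, 9, 2, 0], [7, 0, 0, 0], [3, 3, 8, 1]]
```

The alternatives are models that handle variable sizes natively: recurrent networks for sequences, or fully convolutional networks with global pooling for images.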

How to take the average of the weights of two networks?

瘦欲@ — submitted on 2019-12-21 12:08:04

Question: Suppose in PyTorch I have model1 and model2, which have the same architecture. They were trained further on the same data, or one model is an earlier version of the other, but that is not technically relevant to the question. Now I want to set the weights of a model to the average of the weights of model1 and model2. How would I do that in PyTorch?

Answer 1:

```python
beta = 0.5  # The interpolation parameter
params1 = model1.named_parameters()
params2 = model2.named_parameters()
dict_params2 = dict(params2)
for
```
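The interpolation logic in the answer can be sketched framework-agnostically: treat each model's parameters as a name-to-values dict (as PyTorch's named_parameters/state_dict do; plain lists stand in for tensors here, and the parameter names are made up) and blend entry-wise:

```python
beta = 0.5  # interpolation parameter: 0.5 gives the plain average

params1 = {"fc.weight": [0.25, 0.5], "fc.bias": [1.0]}
params2 = {"fc.weight": [0.75, 0.0], "fc.bias": [0.0]}

averaged = {
    name: [beta * a + (1 - beta) * b for a, b in zip(p1, params2[name])]
    for name, p1 in params1.items()
}
print(averaged)  # {'fc.weight': [0.5, 0.25], 'fc.bias': [0.5]}
```

In PyTorch you would perform the same per-name blend over the two models' parameters and load the result back with load_state_dict.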

How to read json files in Tensorflow?

送分小仙女 — submitted on 2019-12-21 09:33:24

Question: I'm trying to write a function that reads JSON files in TensorFlow. The JSON files have the following structure:

```json
{
  "bounding_box": { "y": 98.5, "x": 94.0, "height": 197, "width": 188 },
  "rotation": { "yaw": -27.97019577026367, "roll": 2.206029415130615, "pitch": 0.0 },
  "confidence": 3.053506851196289,
  "landmarks": {
    "1": { "y": 180.87722778320312, "x": 124.47326660156205 },
    "0": { "y": 178.60653686523438, "x": 183.41931152343795 },
    "2": { "y": 224.5936889648438, "x": 141.62365722656205 }
  }
}
```

I
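TensorFlow has no built-in JSON-parsing op, so this kind of file is usually parsed with plain Python and fed in (e.g. wrapped in a py_function or a generator-based dataset). A stdlib-only sketch that flattens the structure above into feature lists (values copied from the question):

```python
import json

raw = '''{ "bounding_box": { "y": 98.5, "x": 94.0, "height": 197, "width": 188 },
"rotation": { "yaw": -27.97019577026367, "roll": 2.206029415130615, "pitch": 0.0},
"confidence": 3.053506851196289,
"landmarks": { "1": { "y": 180.87722778320312, "x": 124.47326660156205},
"0": { "y": 178.60653686523438, "x": 183.41931152343795},
"2": { "y": 224.5936889648438, "x": 141.62365722656205 }}}'''

record = json.loads(raw)
bbox = [record["bounding_box"][k] for k in ("x", "y", "width", "height")]
# Landmarks are keyed "0", "1", ...; sort numerically into a flat x,y list.
landmarks = [record["landmarks"][k][axis]
             for k in sorted(record["landmarks"], key=int)
             for axis in ("x", "y")]
print(bbox)            # [94.0, 98.5, 188, 197]
print(len(landmarks))  # 6
```

The resulting flat lists can then be converted to tensors directly.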

How to load training data in PyBrain?

痞子三分冷 — submitted on 2019-12-21 09:33:11

Question: I am trying to use PyBrain for some simple NN training. What I don't know how to do is load the training data from a file. It is not explained anywhere on their website. I don't care about the format, because I can define it now, but I need to load it from a file instead of adding rows manually one by one, because I will have several hundred rows.

Answer 1: Here is how I did it:

```python
ds = SupervisedDataSet(6, 3)
tf = open('mycsvfile.csv', 'r')
for line in tf.readlines():
    data = [float(x) for x in line.strip(
```
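The answer's approach, sketched self-containedly with the stdlib csv module: each row holds 6 input values followed by 3 target values, matching the SupervisedDataSet(6, 3) layout. The file contents here are made up; with PyBrain you would then call ds.addSample(inputs, targets) for each pair:

```python
import csv
import io

# Two made-up rows: 6 inputs, then 3 targets per line.
csv_text = """0.1,0.2,0.3,0.4,0.5,0.6,1,0,0
0.9,0.8,0.7,0.6,0.5,0.4,0,1,0
"""

samples = []
for row in csv.reader(io.StringIO(csv_text)):
    values = [float(x) for x in row]
    inputs, targets = values[:6], values[6:]
    samples.append((inputs, targets))

print(len(samples))   # 2
print(samples[0][1])  # [1.0, 0.0, 0.0]
```

With a real file, replace the io.StringIO wrapper with open('mycsvfile.csv') — csv.reader handles the splitting and quoting that a bare line.strip().split(',') would get wrong on edge cases.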