artificial-intelligence

Reshaping Keras layers

Submitted by 允我心安 on 2020-05-10 04:27:18
Question: I have an input image of 416×416. How can I create an output of 4 × 10, where 4 is the number of columns and 10 the number of rows? My label data is a 2D array with 4 columns and 10 rows. I know about the reshape() method, but it requires that the resulting shape have the same number of elements as the input. With a 416 × 416 input and max-pooling layers, the smallest output I can reach is 13 × 13. Is there a way to achieve a 4 × 10 output without loss of data? My label data looks, for example, like [[ 0 0 0 0] [ 0 0…
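The excerpt cuts off, but the usual resolution is to stop trying to reshape the spatial map directly and instead learn the target shape: flatten the final 13 × 13 feature map, map it to 4 × 10 = 40 units with a Dense layer, then Reshape to (10, 4). A minimal NumPy sketch of that shape arithmetic (random weights stand in for a trained Dense layer; this is not the asker's model):

```python
import numpy as np

rng = np.random.default_rng(0)
batch = 2

# Stand-in for the 13 x 13 feature map left after the max-pooling layers.
feature_map = rng.random((batch, 13, 13, 1))

flat = feature_map.reshape(batch, -1)   # (2, 169): Flatten()
W = rng.random((flat.shape[1], 40))     # stand-in for Dense(40) weights
dense_out = flat @ W                    # (2, 40)

# 40 units reshape cleanly into 10 rows x 4 columns: Reshape((10, 4))
target = dense_out.reshape(batch, 10, 4)
print(target.shape)  # (2, 10, 4)
```

In Keras itself this corresponds to stacking `Flatten()`, `Dense(40)`, and `Reshape((10, 4))` after the convolutional base, so no information is discarded by pooling down further.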

Simple neural network gives wrong output after training

Submitted by 前提是你 on 2020-04-18 06:10:17
Question: I've been working on a simple neural network. It takes in a data set with 3 columns; if the first column's value is 1, then the output should be 1. I've provided comments so it is easier to follow. The code is as follows:

```python
import numpy as np
import random

def sigmoid_derivative(x):
    return x * (1 - x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def think(weights, inputs):
    sum = (weights[0] * inputs[0]) + (weights[1] * inputs[1]) + (weights[2] * inputs[2])
    return sigmoid(sum)

if __name__ == "_…
```
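The excerpt is cut off before the training loop. A minimal complete version of such a single-neuron network might look like the following; the training data, seed, and iteration count are my own assumptions, not the asker's:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Derivative expressed in terms of the sigmoid's own output.
    return x * (1 - x)

def think(weights, inputs):
    return sigmoid(np.dot(inputs, weights))

# The label equals the first column of each row.
training_inputs = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1]])
training_outputs = np.array([1, 1, 0, 0])

rng = np.random.default_rng(42)
weights = rng.uniform(-1, 1, 3)

for _ in range(10000):
    output = think(weights, training_inputs)
    error = training_outputs - output
    # Gradient step scaled by the sigmoid derivative at each output.
    weights += np.dot(training_inputs.T, error * sigmoid_derivative(output))

print(think(weights, np.array([1, 0, 0])))  # close to 1
```

A common cause of "wrong output after training" in code like the original is updating the weights with the raw error instead of `error * sigmoid_derivative(output)`, or running too few iterations.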

How to run python code with support of GPU

Submitted by こ雲淡風輕ζ on 2020-04-11 12:14:29
Question: I have created a Flask service that accepts requests with camera URLs as parameters, for finding objects (table, chair, etc.) in the camera frame. I have written the Flask code for accepting POST requests:

```python
@app.route('/rest/detectObjects', methods=['GET', 'POST'])
def detectObjects():
    ...
    json_result = function_call_for_detecting_objects()
    ...
    return
```

Inside the function, it loads the TF model for object detection and returns the result. A large number of requests need to be processed simultaneously…

brain.js - predicting next 10 values

Submitted by 旧巷老猫 on 2020-04-10 06:30:08
Question: On the brain.js page there is a simple example of LSTMTimeStep: https://github.com/BrainJS/brain.js

```javascript
var net = new brain.recurrent.LSTMTimeStep();
net.train([
  [1, 3],
  [2, 2],
  [3, 1],
]);
var output = net.run([[1, 3], [2, 2]]); // [3, 1]
```

This is good enough to predict the next value/label. But what if I have thousands of training samples and thousands of test samples, and I want to predict the next 10 or 100 values? How can I do this?

Answer 1: I think that you need to use the forecast method in order…

How important is the loss difference between training and validation data at the beginning of training a neural network?

Submitted by 只愿长相守 on 2020-03-25 15:49:49
Question: Short question: is the difference between validation and training loss at the beginning of training (the first epochs) a good indicator of the amount of data that should be used? E.g., would it be a good method to increase the amount of data until the difference at the beginning is as small as possible? It would save me time and computation. Background: I am working on a neural network that overfits very fast. The best result after applying many different techniques like dropout, batch…
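The quantity the question is about, the per-epoch gap between validation and training loss, is easy to compute from a Keras-style history dict. A small sketch with made-up loss numbers (the values below are illustrative, not from the asker's run); what usually matters for diagnosing overfitting is the trend of this gap over epochs, not its value in the first epoch alone:

```python
# Hypothetical per-epoch losses in the shape Keras' History.history uses.
history = {
    "loss":     [1.20, 0.80, 0.55, 0.40, 0.30],
    "val_loss": [1.25, 0.95, 0.85, 0.82, 0.81],
}

def generalization_gap(history):
    """val_loss - loss per epoch; a widening gap suggests overfitting."""
    return [v - t for t, v in zip(history["loss"], history["val_loss"])]

gaps = generalization_gap(history)
print(gaps[0], gaps[-1])  # gap grows from ~0.05 to ~0.51
```

Here the gap is tiny at epoch 1 yet grows steadily, which is the classic overfitting signature; a small early-epoch gap by itself would not have predicted it.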

word2vec gensim multiple languages

Submitted by 做~自己de王妃 on 2020-03-22 06:42:53
Question: This problem is going completely over my head. I am training a Word2Vec model using gensim. I have provided data in multiple languages, i.e. English and Hindi. When I try to find the words closest to 'man', this is what I get:

```python
model.wv.most_similar(positive=['man'])
Out[14]:
[('woman', 0.7380284070968628),
 ('lady', 0.6933152675628662),
 ('monk', 0.6662989258766174),
 ('guy', 0.6513140201568604),
 ('soldier', 0.6491742134094238),
 ('priest', 0.6440571546554565),
 ('farmer', 0…
```
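One common way to keep languages separable when training a single Word2Vec model on a mixed corpus is to prefix every token with a language tag before training, so the shared vocabulary never mixes them. A stdlib-only preprocessing sketch (the sentences and the tagging scheme are illustrative assumptions, not the asker's data):

```python
def tag_tokens(sentences, lang):
    """Prefix every token with its language so English 'man' and a
    Hindi token can never collide in the shared vocabulary."""
    return [[f"{lang}:{tok}" for tok in sent] for sent in sentences]

english = [["the", "man", "walked"], ["a", "woman", "smiled"]]
hindi = [["आदमी", "चला"]]  # illustrative Hindi sentences

corpus = tag_tokens(english, "en") + tag_tokens(hindi, "hi")
print(corpus[0])  # ['en:the', 'en:man', 'en:walked']
```

After training gensim's Word2Vec on a corpus tagged this way, neighbors would be queried per language, e.g. `model.wv.most_similar(positive=['en:man'])`, and the nearest neighbors of an English word stay English by construction.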

Object detection ARKit vs CoreML

Submitted by 随声附和 on 2020-03-20 07:55:33
Question: I am building an ARKit application for iPhone. I need to detect a specific perfume bottle and display content depending on what is detected. I used the demo app from developer.apple.com to scan a real-world object and export an .arobject file, which I can use in assets. It works, although since the bottle is glass, detection is very poor: it only detects near the location where the scan was made, taking anywhere from 2 to 30 seconds, or doesn't detect at all. Merging scans doesn't improve the situation; something making…