tensorflow

Best strategy to reduce false positives: Google's new Object Detection API on Satellite Imagery

孤街浪徒 submitted on 2021-02-05 13:43:18
Question: I'm setting up the new TensorFlow Object Detection API to find small objects in large areas of satellite imagery. It works quite well: it finds all 10 objects I want, but I also get 50-100 false positives (things that look a little like the target object, but aren't). I'm using the sample config from the 'pets' tutorial to fine-tune the faster_rcnn_resnet101_coco model they offer. I've started small, with only 100 training examples of my objects (just 1 class) and 50 examples in my validation…
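Two common mitigations for this situation are hard-negative mining (adding the false-positive regions back into the training set as background-only examples) and simply raising the score threshold used to keep detections. The sketch below illustrates only the second, cheaper option with plain Python; the detection tuples and function name are hypothetical, not the Object Detection API's actual output format.

```python
# Hedged sketch: trade a little recall for far fewer false positives by
# filtering detections on their confidence score. The (box, score) tuple
# format here is illustrative, not the API's real output structure.

def filter_detections(detections, score_threshold=0.9):
    """Keep only detections whose confidence meets the threshold.

    `detections` is a list of (box, score) pairs, where `box` is any box
    representation and `score` is a float in [0, 1].
    """
    return [(box, score) for box, score in detections if score >= score_threshold]

detections = [
    ((0, 0, 10, 10), 0.97),   # likely a true positive
    ((5, 5, 20, 20), 0.42),   # low-confidence lookalike
    ((8, 2, 12, 9), 0.91),
]
kept = filter_detections(detections, score_threshold=0.9)
# kept holds only the two high-confidence boxes
```

Thresholding alone won't help if the false positives score as highly as the real objects; in that case hard negatives in the training data are the more principled fix.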

What is the reason to use parameter server in distributed tensorflow learning?

可紊 submitted on 2021-02-05 13:20:27
Question: Short version: can't we store variables in one of the workers and not use parameter servers? Long version: I want to implement synchronous distributed learning of a neural network in TensorFlow. I want each worker to have a full copy of the model during training. I've read the distributed TensorFlow tutorial and the code for distributed ImageNet training, and I didn't get why we need parameter servers. I see that they are used for storing the values of variables, and replica_device_setter takes care that…
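The role a parameter server plays can be shown without any TensorFlow at all: it holds the single authoritative copy of the weights, and in synchronous training it averages the gradients that the workers send before applying one update. This is a framework-free conceptual sketch; all names are illustrative, none of them are TensorFlow API.

```python
# Toy, framework-free sketch of the parameter-server role in synchronous
# training: workers compute gradients on their data shards; the central
# store averages them and updates the one authoritative weight copy.

class ParameterServer:
    def __init__(self, weights):
        self.weights = list(weights)  # the authoritative copy

    def apply_averaged_gradients(self, worker_grads, lr=0.1):
        """Average one gradient vector per worker, then take an SGD step."""
        n = len(worker_grads)
        for i in range(len(self.weights)):
            avg = sum(grads[i] for grads in worker_grads) / n
            self.weights[i] -= lr * avg

ps = ParameterServer([1.0, 2.0])
grads_from_workers = [[0.5, 1.0], [1.5, 3.0]]  # two workers, same shapes
ps.apply_averaged_gradients(grads_from_workers, lr=0.1)
# averaged gradients are [1.0, 2.0], so weights become [0.9, 1.8]
```

Nothing stops one worker from hosting this store (which is essentially what the question proposes); a dedicated parameter-server task mainly keeps the variable-hosting load and the compute load on separate processes.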

Prevent TensorFlow from accessing the GPU? [duplicate]

☆樱花仙子☆ submitted on 2021-02-05 13:13:45
Question: This question already has answers here: Can Keras with Tensorflow backend be forced to use CPU or GPU at will? (7 answers). Closed 3 years ago. Is there a way to run TensorFlow purely on the CPU? All of the memory on my machine is hogged by a separate process running TensorFlow. I have tried setting per_process_memory_fraction to 0, unsuccessfully. Answer 1: Have a look at this question or this answer. To summarise, you can add this piece of code: import os; os.environ["CUDA_VISIBLE_DEVICES"] =…
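The answer above is cut off mid-snippet, so the value being assigned is missing from the excerpt. A common form of this pattern sets `CUDA_VISIBLE_DEVICES` to `"-1"` (or an empty string) so that no CUDA device is visible; crucially, the assignment has to happen before TensorFlow is first imported, since the library reads the variable at import time.

```python
# Hide all GPUs from TensorFlow by setting CUDA_VISIBLE_DEVICES *before*
# the first `import tensorflow`. "-1" means no CUDA devices are visible.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf  # must come after the assignment above
```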

Keras NN regression model gives low loss and 0 accuracy

送分小仙女□ submitted on 2021-02-05 12:30:15
Question: I am having a problem with this NN regression model in Keras. I am working on a cars dataset to predict the price based on 13 dimensions. In short, I read it as a pandas DataFrame, converted numeric values to float, scaled the values, and then used one-hot encoding for categorical values, which created a lot of new columns, but that does not concern me much at this point. What concerns me is that the accuracy is practically 0%, and I cannot figure out why. The dataset can be found here:…
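Zero accuracy alongside a healthy loss is expected here: classification-style accuracy counts exact matches between prediction and target, and a continuous regression output almost never equals the target exactly. An error metric such as MAE is the meaningful number to track. The toy figures below are illustrative, not from the questioner's dataset.

```python
# Why `accuracy` is ~0 for regression: no continuous prediction exactly
# equals its target, so exact-match accuracy is zero even when the mean
# absolute error (the metric that matters) is small relative to the prices.

def exact_match_accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [10000.0, 15500.0, 22300.0]   # actual prices
y_pred = [10250.0, 15400.0, 22800.0]   # close, but never exactly equal

acc = exact_match_accuracy(y_true, y_pred)   # 0.0
mae = mean_absolute_error(y_true, y_pred)    # (250 + 100 + 500) / 3
```

In Keras terms, the fix is to compile the model with a regression metric, e.g. `model.compile(loss="mse", metrics=["mae"])`, instead of `metrics=["accuracy"]`.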

How to load tensorflow-js weights from express using tf.loadLayersModel()?

ε祈祈猫儿з submitted on 2021-02-05 12:26:15
Question: I get the error RangeError: attempting to construct out-of-bounds TypedArray on ArrayBuffer when I try to load a tf-js model into React.js. I'm using Express.js to send the json+bin files to React so that I can run inference in the browser itself. Here's the relevant Express.js code (the json+bin files are all in the same folder): app.use("/api/pokeml/classify", express.static(path.join(__dirname, "classifier_models/original/model.json"))) Here's how I'm loading this in React: import * as…
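One likely cause, given the route shown: `express.static` is pointed at the `model.json` file itself, but `tf.loadLayersModel` also fetches the `.bin` weight shards at URLs relative to the model's URL, so those requests miss and the loader ends up parsing something other than weight bytes. A plausible fix (a sketch, with paths taken from the question and adjusted to your layout) is to serve the whole directory and load the JSON file under that mount:

```javascript
// Hedged sketch: serve the model *directory*, not just model.json, so the
// .bin weight shards referenced inside model.json resolve as well.
const express = require("express");
const path = require("path");

const app = express();

app.use(
  "/api/pokeml/classify",
  express.static(path.join(__dirname, "classifier_models/original"))
);

// In React, point loadLayersModel at the JSON file under that mount:
// const model = await tf.loadLayersModel("/api/pokeml/classify/model.json");
```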

How to implement recursive neural networks in Tensorflow?

。_饼干妹妹 submitted on 2021-02-05 12:24:06
Question: I am trying to implement a very basic recursive neural network into my linear regression analysis project in TensorFlow, one that takes two inputs passed to it and then a third value that it previously calculated. My project is trying to calculate something across the next x number of years, and after the first year I want it to keep taking the value of the last year. Currently, my training data has two inputs, not three, predicting one output, so how could I make it recursive, so it keeps…
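The feedback loop being described can be separated from the model itself: train a network on three inputs (the two features plus last year's value), then at prediction time roll it forward year by year, feeding each prediction back in as the third input. The sketch below is framework-free; `predict` is a stand-in for the trained network, not TensorFlow code.

```python
# Framework-free sketch of the recursive rollout: the model takes
# (input_a, input_b, previous_output), and we unroll it over future
# years, feeding each year's prediction back in as `previous_output`.

def predict(a, b, prev):
    # Placeholder for the trained model: any function of the three inputs.
    return 0.5 * a + 0.5 * b + 0.9 * prev

def roll_forward(inputs_per_year, initial_prev=0.0):
    """inputs_per_year: list of (a, b) feature pairs, one per future year."""
    prev = initial_prev
    outputs = []
    for a, b in inputs_per_year:
        prev = predict(a, b, prev)  # this year's output feeds next year
        outputs.append(prev)
    return outputs

preds = roll_forward([(1.0, 2.0), (1.0, 2.0), (1.0, 2.0)])
# year 1: 1.5; year 2: 1.5 + 0.9 * 1.5 = 2.85; year 3: 1.5 + 0.9 * 2.85 = 4.065
```

The same pattern works with a Keras model in place of `predict`: build training rows that include the previous year's value as a third feature, then use a loop like `roll_forward` for multi-year prediction.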
