tensorflow

How to predict new data with a trained neural network (Tensorflow 2.0, regression analysis)?

╄→尐↘猪︶ㄣ Submitted on 2020-12-11 05:17:17
Question: I am new to machine learning and to TensorFlow. I have trained a neural network for regression by following the tutorial on the TensorFlow website. I have 3 input columns and 2 output columns, which I have marked as "labels". The network seemingly predicts fine on the test data, but when I try to predict data outside the testing and training set, by importing a file with only the 3 input columns, it gives me an error saying "expected dense_input to have shape (5,) but got array with
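
A minimal sketch of the setup the question describes, assuming TF 2.x and hypothetical file/column names: the 3 feature columns and the 2 label columns are kept separate, so the model's input layer has shape (3,) and model.predict() accepts a file containing only the 3 inputs (the "(5,)" in the error suggests all 5 columns, inputs plus labels, were fed to the model during training).

    import pandas as pd
    import tensorflow as tf

    train = pd.read_csv('train.csv')                     # hypothetical training file
    features = train[['in1', 'in2', 'in3']].values       # the 3 input columns
    labels = train[['out1', 'out2']].values              # the 2 label columns

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(3,)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(2),                        # 2 regression outputs
    ])
    model.compile(optimizer='adam', loss='mse')
    model.fit(features, labels, epochs=10, verbose=0)

    # New data with only the 3 input columns now matches the (3,) input shape.
    new_data = pd.read_csv('new_inputs.csv')[['in1', 'in2', 'in3']].values
    predictions = model.predict(new_data)                # shape: (n_rows, 2)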

How to give multi-dimensional inputs to tflite via C++ API

依然范特西╮ Submitted on 2020-12-11 05:05:36
Question: I am trying out the tflite C++ API for running a model that I built. I converted the model to the tflite format with the following snippet:

    import tensorflow as tf
    converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5')
    tfmodel = converter.convert()
    open("model.tflite", "wb").write(tfmodel)

I am following the steps provided in the official tflite guide, and my code up to this point looks like this:

    // Load the model
    std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel:
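
The question targets the C++ API, where the analogous calls are Interpreter::ResizeInputTensor() followed by Interpreter::AllocateTensors(). As a hedged illustration of the same flow, here is the equivalent with the Python tf.lite.Interpreter, assuming the model.tflite produced above and a hypothetical input shape of [1, 224, 224, 3]:

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path='model.tflite')
    input_details = interpreter.get_input_details()

    # Resize the input tensor to the multi-dimensional shape before allocation.
    interpreter.resize_tensor_input(input_details[0]['index'], [1, 224, 224, 3])
    interpreter.allocate_tensors()

    # Feed data of that shape and run inference.
    dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
    interpreter.set_tensor(input_details[0]['index'], dummy)
    interpreter.invoke()
    output = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])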

tensorflow object detection API : training fails silently

自闭症网瘾萝莉.ら Submitted on 2020-12-10 07:57:07
Question: I am using TensorFlow's object detection API with my custom dataset. I am currently training "ssd_mobilenet_v1_coco". Every time I try, training starts but then stops silently and randomly, without an error message. (Using the COMMAND below, the command prompt shows the number of steps up to a point.) It seems that the GPU (CUDA) also stops. I've already tried changing batch_size ("64" gives the best score) and "ssd_mobilenet_v2_coco". Is this a parameter problem (like "sample_1_of_n_eval_examples=1") or a GPU problem? OS
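
Not an answer to the question itself, just a common first debugging step when an object detection API training run exits without output: raising TensorFlow's log verbosity (TF 1.x API, which that version of the object detection API uses) so the last logged step before the silent stop is visible.

    import tensorflow as tf

    # Print INFO-level messages (per-step logs, checkpointing, eval triggers)
    # so the point at which training stops silently is easier to locate.
    tf.logging.set_verbosity(tf.logging.INFO)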

Hacker News Digest 2020-12-06

自古美人都是妖i Submitted on 2020-12-10 07:55:54
Last updated: 2020-12-06 23:00
Antioxidants prevent health-promoting effects of physical exercise [pdf] - (pnas.org) Score: 20 | Comments: 8
Diem – A rebrand of Facebook Libra - (diem.com) Score: 70 | Comments: 58
Hardware-Accelerated TensorFlow and TensorFlow Addons for macOS 11.0 - (github.com/apple) Score: 95 | Comments: 38
More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru - (theguardian.com) Score: 40 | Comments: 19
How I Collected a Debt from an Unscrupulous Merchant - (mtlynch.io) Score: 314 | Comments: 171

DeepFM TensorFlow model export and use from Java

一世执手 Submitted on 2020-12-10 06:31:16
Continuing from the previous post, the Python export:

    from tensorflow.python import pywrap_tensorflow
    import tensorflow as tf
    from tensorflow.python.framework import graph_util

    def getAllNodes(checkpoint_path):
        reader = pywrap_tensorflow.NewCheckpointReader(checkpoint_path)
        var_to_shape_map = reader.get_variable_to_shape_map()
        # Print tensor names (and optionally values)
        for key in var_to_shape_map:
            print("tensor_name: ", key)
            # print(reader.get_tensor(key))

    def freeze_graph(ckpt, output_graph):
        output_node_names = 'feat_index,feat_value,label,dropout_keep_fm,dropout_keep_deep,train_phase,output/predictlabel'
        # saver = tf.train.import_meta_graph
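
Before wiring the frozen graph into Java, it can be sanity-checked from Python. A minimal sketch, assuming TF 1.x (matching the export code above) and a hypothetical output file name frozen_model.pb:

    import tensorflow as tf

    # Load the frozen GraphDef and confirm the exported node names are present.
    with tf.gfile.GFile('frozen_model.pb', 'rb') as f:   # hypothetical output path
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        # Same node name as listed in output_node_names above.
        print(graph.get_tensor_by_name('output/predictlabel:0'))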

Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [585,1024,3], [batch]: [600,799,3]

╄→гoц情女王★ Submitted on 2020-12-10 04:07:19
Question: I am trying to train a model. At first I had a dataset of 5,000 images and training worked fine; now I have added a couple more images, so my dataset contains 6,423 images. I am using Python 3.6.1 on Ubuntu 18.04; my TensorFlow version is 1.15 and my NumPy version is 1.16 (I had the same versions before and it worked fine). Now when I run: python model_main.py --logtostderr --pipeline_config_path=training/faster_rcnn_resnet50_coco.config --model_dir=training It starts setting up for a couple of
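
Not the object detection API configuration itself, but a minimal sketch of why the error appears: a tf.data pipeline can only batch tensors of identical shape, so images of differing sizes (here 585x1024 vs 600x799) must be resized or padded before .batch(). Paths and the target size below are illustrative.

    import tensorflow as tf

    def load_and_resize(path):
        # Decode a JPEG and force every image to one fixed size before batching.
        image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
        return tf.image.resize(image, [600, 800])

    paths = tf.data.Dataset.list_files('images/*.jpg')   # hypothetical location
    dataset = paths.map(load_and_resize).batch(8)        # batching now succeeds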

'tensorflow.python.framework.ops.EagerTensor' object has no attribute '_in_graph_mode'

寵の児 Submitted on 2020-12-10 02:56:25
Question: I am trying to visualize CNN filters by optimizing a random 'image' so that it produces a high mean activation on that filter, which is somewhat similar to the neural style transfer algorithm. For that purpose, I am using TensorFlow==2.2.0-rc. But during the optimization process, an error occurs saying 'tensorflow.python.framework.ops.EagerTensor' object has no attribute '_in_graph_mode'. I tried debugging it and it somehow works when I don't use opt.apply_gradients() and instead apply
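
A minimal sketch of the usual cause, under the assumption that the random image is a plain EagerTensor: optimizers can only update tf.Variable objects, so wrapping the image in tf.Variable lets opt.apply_gradients() work. The Conv2D layer stands in for the real CNN filter and is purely illustrative.

    import tensorflow as tf

    # Stand-in for the layer whose filter is being visualized (hypothetical).
    feature_layer = tf.keras.layers.Conv2D(8, 3, activation='relu')

    # The image must be a tf.Variable, not a plain EagerTensor, to be updated.
    image = tf.Variable(tf.random.uniform([1, 64, 64, 3]))
    opt = tf.keras.optimizers.Adam(learning_rate=0.1)

    for step in range(50):
        with tf.GradientTape() as tape:
            activation = feature_layer(image)
            loss = -tf.reduce_mean(activation[..., 0])    # maximize filter 0's mean activation
        grads = tape.gradient(loss, [image])
        opt.apply_gradients(zip(grads, [image]))          # works because image is a Variable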

Why do tech giants open-source their technology? A look at how tech companies approach open-source software

假如想象 Submitted on 2020-12-10 01:35:42
In the first half of this year, Google released the pre-trained models and fine-tuning code for Big Transfer (BiT), a deep-learning computer-vision model. According to Google, Big Transfer lets anyone reach top performance on the corresponding task, even with only a few labeled images per class. BiT is just one of the many products this tech giant has made freely available; in fact, it is not unusual in the tech world for major industry players to release free and practical open-source software. So why do large technology companies do this? Is it really pure goodwill? In the late 1990s, when the Open Source Initiative had only just appeared, the idea of making source code public was widely seen as irrational. Proprietary software was the standard, after all, and companies and organizations did everything they could to protect their software. By 2020, however, the concept of open source has changed enormously, and open-source thinking is steadily becoming mainstream. There are now a great many open-source technology companies and organizations, some earning over a hundred million dollars a year, even over a billion; that club includes the likes of Red Hat, MongoDB, Cloudera, MuleSoft, HashiCorp, Databricks (Spark), and Confluent (Kafka). Beyond the tech companies making high-profile acquisitions of and investments in open-source projects, even traditional giants such as Google and Facebook are pushing open-source strategies, which shows that open source, for