tensorflow

TensorFlow or PyTorch

别来无恙 submitted on 2021-02-11 02:29:06
Since you are reading this article, I assume you have already started your deep learning journey and have been working with artificial neural networks for a while; or perhaps you are just about to start. Either way, you found this article because you ran into some confusion. You have probably browsed all sorts of deep learning frameworks and libraries, but two of them stand out as the most popular: TensorFlow and PyTorch. If you can't pinpoint what fundamentally distinguishes the two, don't worry! I will add one more article to the web's endless storage that may help clear a few things up. I'll quickly give you five brief points. Just five. Let's get started!

Point one: Although TensorFlow and PyTorch are both open source, they were created by two different companies. TensorFlow was developed by Google, drawing on Theano, while PyTorch was developed by Facebook, building on Torch.

Point two: The biggest difference between the two frameworks is how they define the computation graph. TensorFlow defines a static graph, while PyTorch defines a dynamic graph. What does that mean? In TensorFlow you must first define the entire computation graph and only then run your machine learning model, whereas in PyTorch you can define or manipulate the graph on the fly while it runs, which is very useful when a neural network has to handle variable-length inputs. A small sketch of this difference follows below.

Point three: Compared with PyTorch, TensorFlow
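A minimal sketch of the static-versus-dynamic distinction from point two, assuming a TensorFlow 1.x (graph-mode) install and PyTorch; the numbers are arbitrary illustrations, not from the article:

```python
# Static graph (TensorFlow 1.x style): define the whole graph first, then run it in a session.
import tensorflow as tf

a = tf.placeholder(tf.float32, shape=())      # declare inputs up front
b = tf.placeholder(tf.float32, shape=())
c = a * b                                     # builds a graph node; nothing is computed yet

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 2.0, b: 3.0}))   # the graph only executes here

# Dynamic graph (PyTorch): operations execute immediately, so ordinary Python control
# flow (loops, ifs, variable-length inputs) can shape the computation as it runs.
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x * 3.0                                   # computed right away
y.backward()                                  # the graph was recorded on the fly
print(x.grad)                                 # tensor(3.)
```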

Use GPU installation of tensorflow/cuda in spyder under ubuntu 14.04

元气小坏坏 submitted on 2021-02-11 02:01:06
Question: I am running Ubuntu 14.04 with an Anaconda2 installation and would like to use TensorFlow in combination with CUDA. So far the steps I performed are:
- Installed CUDA 7.5 and cuDNN
- Installed TensorFlow (GPU version) through a DEB package. Note that I don't want to use the conda package of TensorFlow, since that one is not the GPU version.
- Added Anaconda, CUDA and cuDNN to the path.
- Created a conda environment for TensorFlow (conda create -n tensorflow python=2.7)
Now if I start python or IDLE from
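The question is cut off above. As a side note (not part of the original question), a quick way to check whether the GPU build of TensorFlow is the one actually imported inside the conda environment is to list the devices it can see; this sketch assumes a TensorFlow build where tensorflow.python.client.device_lib is available:

```python
# Sanity check: does the installed TensorFlow see the GPU from this environment?
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
# If CUDA and cuDNN are on the library path, a "/gpu:0" (or "/device:GPU:0") entry
# should appear alongside the CPU device.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
```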

Setting initial state in dynamic RNN

萝らか妹 submitted on 2021-02-11 01:51:14
Question: Based on this link: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn In the examples there, the initial state is defined in the first example but not in the second. Could anyone please explain the purpose of the initial state? What is the difference between setting it and not setting it? Is it only required for a single RNN cell and not for a stacked cell like in the example provided in the link? I'm currently debugging my RNN model, as it seemed to classify
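The question is truncated above. For reference, a sketch (not taken from the linked docs) of the two calling styles in TensorFlow 1.x; batch_size, num_units and the other dimensions are illustrative values:

```python
import tensorflow as tf

batch_size, max_time, input_depth, num_units = 32, 10, 8, 64
inputs = tf.placeholder(tf.float32, [batch_size, max_time, input_depth])

# Variant 1: pass an explicit initial state (here just the zero state, but it could
# also be the final state carried over from the previous chunk of a long sequence).
with tf.variable_scope("with_initial_state"):
    cell_a = tf.nn.rnn_cell.BasicLSTMCell(num_units)
    init_state = cell_a.zero_state(batch_size, tf.float32)
    outputs_a, final_state_a = tf.nn.dynamic_rnn(cell_a, inputs, initial_state=init_state)

# Variant 2: omit initial_state; dynamic_rnn then builds a zero state itself,
# which is why it needs dtype to know what kind of state to create.
with tf.variable_scope("without_initial_state"):
    cell_b = tf.nn.rnn_cell.BasicLSTMCell(num_units)
    outputs_b, final_state_b = tf.nn.dynamic_rnn(cell_b, inputs, dtype=tf.float32)
```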

Result changes every time I run Neural Network code

烈酒焚心 submitted on 2021-02-10 22:47:28
Question: I got the results by running the code provided in this link: Neural Network – Predicting Values of Multiple Variables. I was able to compute losses, accuracy, etc. However, every time I run this code I get a different result. Is it possible to get the same (consistent) result? Answer 1: The code is full of random.randint() calls! Furthermore, the weights are usually initialized randomly as well, and the batch_size also has an influence (although a pretty minor one) on the result. Y_train, X_test, X_train
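The answer is cut off above. As a general illustration only (not the answerer's exact code), making such a script repeatable usually means fixing every random seed it relies on; a minimal sketch for the TensorFlow 1.x API:

```python
# Fix the seeds of every source of randomness the script uses.
# (Full determinism can still depend on GPU kernels and library versions.)
import random
import numpy as np
import tensorflow as tf

SEED = 42
random.seed(SEED)          # Python's random.randint() calls
np.random.seed(SEED)       # NumPy-based shuffling and weight initialization
tf.set_random_seed(SEED)   # TensorFlow 1.x graph-level seed (tf.random.set_seed in 2.x)
```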

A problem when run tflite model(the result of tflite model is nan)

老子叫甜甜 submitted on 2021-02-10 22:42:24
Question: I trained a model to convert sketch images into color images. [Image: the middle is the ground truth, the left is the original, and the right is the prediction.] This result was produced by model.h5, but I get a wrong result when I run the program using model.tflite. This is my conversion command: tflite_convert --keras_model_file=G:/pix2pix/generator.h5 --output_file=G:/pix2pix/convert.tflite and this is the result produced by model.tflite: [[[[nan nan nan] ... Answer 1: I found a solution to this question. Just change the
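The answer is cut off above, so the actual fix is unknown here. As a side note only, the same conversion can be driven from Python instead of the tflite_convert CLI, which makes it easier to experiment with converter options; this sketch assumes a TensorFlow 1.x release that ships tf.lite.TFLiteConverter and reuses the paths from the question:

```python
import tensorflow as tf

# Python-API equivalent of the tflite_convert command above (TensorFlow 1.x).
converter = tf.lite.TFLiteConverter.from_keras_model_file("G:/pix2pix/generator.h5")
tflite_model = converter.convert()

with open("G:/pix2pix/convert.tflite", "wb") as f:
    f.write(tflite_model)
```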

Tensorflow serving custom gpu op cannot find dependency when compiling

十年热恋 submitted on 2021-02-10 22:18:09
Question: I followed the guides on making a custom GPU op for TensorFlow and was able to build the shared lib. For tensorflow-serving I adapted the required paths, but I get an error when building: ERROR: /home/g360/Documents/eduardss/serving/tensorflow_serving/custom_ops/CUSTOM_OP/BUILD:32:1: undeclared inclusion(s) in rule '//tensorflow_serving/custom_ops/CUSTOM_OP:CUSTOM_OP_ops_gpu': this rule is missing dependency declarations for the following files included by 'tensorflow_serving/custom_ops/CUSTOM_OP/cc/magic_op.cu

Tensorflow Lite - ValueError: Cannot set tensor: Dimension mismatch

拜拜、爱过 submitted on 2021-02-10 20:47:46
Question: This is probably going to be a stupid question, but I am new to machine learning and TensorFlow. I am trying to run the object detection API on a Raspberry Pi using TensorFlow Lite. I am trying to modify my code with the help of this example: https://github.com/freedomtan/tensorflow/blob/deeplab_tflite_python/tensorflow/contrib/lite/examples/python/object_detection.py This piece of code detects objects in an image, but instead of an image I want to detect objects in real time through the Pi camera. I
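The question (and the dimension-mismatch error from the title) is cut off above. That error usually means the frame being fed to the interpreter does not match the model's expected input shape. As a rough sketch only (the OpenCV capture, the model path and the float normalization are assumptions, not taken from the linked example), each camera frame can be resized to the interpreter's input shape before inference:

```python
import cv2
import numpy as np
import tensorflow as tf  # tf.lite.Interpreter; on a Pi, tflite_runtime.interpreter also works

interpreter = tf.lite.Interpreter(model_path="detect.tflite")  # hypothetical model file
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]   # e.g. [1, 300, 300, 3]

cap = cv2.VideoCapture(0)  # Pi camera exposed as a V4L2 device (an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize the frame to exactly the shape the model expects, then add the batch dimension.
    resized = cv2.resize(frame, (width, height))
    input_data = np.expand_dims(resized, axis=0)
    if input_details[0]["dtype"] == np.float32:
        input_data = (np.float32(input_data) - 127.5) / 127.5   # typical float-model scaling
    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()
    boxes = interpreter.get_tensor(output_details[0]["index"])  # output layout depends on the model
cap.release()
```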