tensorflow

TensorFlow: Setting allow_growth to true still allocates memory on all my GPUs

Submitted by ◇◆丶佛笑我妖孽 on 2021-01-23 11:09:09
Question: I have several GPUs, but I only want to use one GPU for my training. I am using the following options:

    config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
    config.gpu_options.allow_growth = True
    with tf.Session(config=config) as sess:

Despite setting all of these options, every one of my GPUs allocates memory, and #processes = #GPUs. How can I prevent this from happening? Note: I do not want to set the devices manually, and I do not want to set CUDA_VISIBLE_DEVICES, since I want
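A minimal sketch in the question's own TF 1.x style, assuming the goal is to train on GPU 0 (the index is an illustration; the excerpt is cut off before saying which GPU is wanted): besides allow_growth, tf.GPUOptions also has a visible_device_list field that restricts which physical devices the process maps in at all, without touching CUDA_VISIBLE_DEVICES.

    import tensorflow as tf

    config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
    # allow_growth only changes HOW memory is allocated on visible devices;
    # visible_device_list changes WHICH devices this process sees at all.
    config.gpu_options.allow_growth = True
    config.gpu_options.visible_device_list = "0"  # expose only GPU 0 to this process

    with tf.Session(config=config) as sess:
        pass  # training code goes here; the other GPUs stay untouched

Inside the session the single visible GPU is renumbered to /gpu:0 regardless of its physical index.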

Making Sense of the 2020 Wave of AI: Tracing the AI Footprints of Baidu and Google

Submitted by ぃ、小莉子 on 2021-01-23 09:35:56
Source: 脑极体. 2020 is behind us, and whether we came through it smoothly or found it extraordinarily hard, it is a year we will all remember. Looking back on 2020, an unusual year shaped by the pandemic, technology became key to humanity's fight against the virus, and putting AI into that fight was arguably a first in human history. At the same time, AI has been applied deeply across production, daily life, and public administration. Behind the explosion of AI application scenarios is the effort of AI companies worldwide to step from the back end onto the front stage, and from the laboratory into the depths of industry. Recently, Baidu and Google each happened to publish long articles summing up AI's development in 2020. Google's chief AI scientist Jeff Dean posted the ten-thousand-word piece "Google Research: Looking Back at 2020, and Forward to 2021" on the Google blog, detailing the applied progress Google made with AI across many fields in 2020. A little earlier, Baidu released the similarly lengthy《百度AI的2020》(Baidu AI in 2020) and《百度研究院2021年十大科技趋势预测》(Baidu Research's Top 10 Technology Trend Predictions for 2021), likewise cataloguing the past year's results across its AI technology stack and industry-enablement system, and offering clear judgments about AI technology and application areas in 2021. Comparing Google's and Baidu's practice in AI technology and industrial application closely, we are struck to find that these two companies, which both started out almost simultaneously as search-engine businesses, have applied AI throughout their product systems. We also see Google and Baidu building a global picture of AI-driven intelligence across thousands of industries, though where the two giants differ is in

A Hands-On Guide to Linear Regression with TensorFlow

Submitted by 匆匆过客 on 2021-01-22 13:17:45
Linear regression is a form of regression analysis that models the relationship between one or more independent variables and a dependent variable using a least-squares linear function, the linear regression equation; it is a very widely used statistical method for quantifying how two or more variables depend on one another. Linear regression is also entry-level machine-learning material, so let's walk through implementing it with Python + TensorFlow.

1. The linear regression equation. A single-variable linear regression equation can be written as y = w*x + b. In this example we generate an artificial dataset in code: a random sample satisfying w = 2.0 and b = 1, plus Gaussian noise scaled by 0.4, i.e. the underlying equation is y = 2.0*x + 1.

2. Generating the artificial dataset.

    %matplotlib inline
    import matplotlib.pyplot as plt
    import numpy as np
    import tensorflow as tf
    # Set the random seed
    np.random.seed(5)
    # Generate 100 evenly spaced points in [-1, 1]
    x_data = np.linspace(-1, 1, 100)
    # y = 2x + 1, with noise of the same shape as x_data
    y_data = 2*x_data + 1.0 + np.random.randn(*x_data.shape)*0.4
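The excerpt stops at data generation. A minimal sketch of how training could continue in the same TF 1.x style (the learning rate, epoch count, and initial values of w and b are assumptions, not taken from the original tutorial):

    # Placeholders for one sample at a time; variables for the fitted parameters
    x = tf.placeholder(tf.float32, name="x")
    y = tf.placeholder(tf.float32, name="y")
    w = tf.Variable(1.0, name="w")
    b = tf.Variable(0.0, name="b")
    pred = w * x + b                            # the model y = w*x + b
    loss = tf.reduce_mean(tf.square(y - pred))  # mean squared error
    train_op = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(10):
            for xs, ys in zip(x_data, y_data):
                sess.run(train_op, feed_dict={x: xs, y: ys})
        print(sess.run([w, b]))  # should approach (2.0, 1.0)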

TensorFlow on macOS: Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA

Submitted by 痴心易碎 on 2021-01-22 09:08:38
Question: I tried to validate my TensorFlow installation on my Mac using these instructions, https://www.tensorflow.org/install/install_mac#ValidateYourInstallation, but got the result below. Is that OK? Bad? How can I fix this? Thanks.

    sess = tf.Session()
    Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    print(sess.run(hello))
    b'Hello, TensorFlow!'

macOS version: macOS High Sierra 10.13.6. Here is the full installation and validation output: usermacbook:tensorflowve someuser
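For context: this message is an informational warning, not an error. The prebuilt pip binary was compiled without AVX2/FMA support, so it simply will not use those CPU instructions, and the b'Hello, TensorFlow!' output shows the installation itself works. A common way to silence the message, sketched here, is the TF_CPP_MIN_LOG_LEVEL environment variable:

    import os
    # 0 = all logs, 1 = hide INFO, 2 = hide INFO and WARNING, 3 = hide all but FATAL
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    import tensorflow as tf  # must be imported AFTER the variable is set

    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    print(sess.run(hello))  # the AVX2/FMA line is no longer printed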

TensorFlow model prediction is slow

Submitted by 廉价感情. on 2021-01-22 08:34:21
Question: I have a TensorFlow model with a single Dense layer:

    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    model.build(input_shape=(None, None, 25))

I construct a single input vector in float32:

    np_vec = np.array(np.random.randn(1, 1, 25), dtype=np.float32)
    vec = tf.cast(tf.convert_to_tensor(np_vec), dtype=tf.float32)

I want to feed that to my model for prediction, but it is very slow. If I call predict or __call__, it takes a really long time compared to doing the same operation in NumPy.
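A plausible explanation with a hedged sketch: model.predict() is designed for batch inference and wraps every call in a tf.data pipeline, so for a single small vector the fixed per-call overhead dwarfs the one matrix multiply. Calling the model directly, or tracing the forward pass once with tf.function, usually removes most of that cost (the helper name forward is illustrative, not from the original question):

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    model.build(input_shape=(None, None, 25))

    vec = tf.convert_to_tensor(np.random.randn(1, 1, 25).astype(np.float32))

    # Direct call: no tf.data pipeline is built, unlike model.predict()
    out = model(vec, training=False)

    # For repeated single-vector calls, compile the forward pass once so
    # later calls run the traced graph instead of re-entering Keras Python code
    forward = tf.function(lambda t: model(t, training=False))
    out = forward(vec)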

TensorFlow flatten vs. NumPy flatten: effect on machine-learning training

Submitted by 我是研究僧i on 2021-01-22 07:00:34
Question: I am getting started with deep learning using Keras and TensorFlow, and at the very first stage I am stuck on a doubt: when I use tf.contrib.layers.flatten (API 1.8) to flatten an image (which could be multichannel as well), how is this different from using the flatten function from NumPy? How does this affect training? I can see that tf.contrib.layers.flatten takes longer than NumPy's flatten. Is it doing something more? This is a very close question, but there the accepted answer includes Theano
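A sketch of the key behavioral difference (the shapes are illustrative): tf.contrib.layers.flatten adds a symbolic op to the TF graph and preserves the batch dimension, so gradients can flow through it during training, whereas NumPy's flatten runs eagerly outside the graph and collapses all axes, batch included; the extra time observed is largely graph-construction and session overhead rather than extra numerical work.

    import numpy as np
    import tensorflow as tf  # TF 1.x, matching the question's tf.contrib API

    imgs = np.zeros((32, 28, 28, 3), dtype=np.float32)

    # NumPy: executes immediately and flattens EVERYTHING -> shape (75264,)
    print(imgs.flatten().shape)

    # tf.contrib.layers.flatten: builds a graph op, keeps the batch axis,
    # and stays differentiable -> shape (?, 2352) until the graph is run
    x = tf.placeholder(tf.float32, shape=(None, 28, 28, 3))
    flat = tf.contrib.layers.flatten(x)
    print(flat.shape)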
