tensorflow

WARNING:tensorflow:`write_grads` will be ignored in TensorFlow 2.0 for the `TensorBoard` Callback

Submitted by 白昼怎懂夜的黑 on 2020-12-10 00:19:08
Question: I am using the following lines of code to visualise the gradients of an ANN model with TensorBoard:

    tensorboard_callback = tf.compat.v1.keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=1, write_graph=True, write_grads=True, write_images=False)
    tensorboard_callback.set_model(model)
    %tensorboard --logdir ./Graph

I received a warning message saying "WARNING:tensorflow: write_grads will be ignored in TensorFlow 2.0 for the TensorBoard Callback." I get the TensorBoard output,
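In TF 2.x the Keras TensorBoard callback no longer writes gradient histograms, so they have to be logged manually. Below is a minimal sketch of one way to do this with tf.GradientTape and tf.summary; the log directory, loss function and batch variables are placeholders, not taken from the question above.

```python
import tensorflow as tf

# Sketch only: log per-variable gradient histograms to TensorBoard by hand,
# since the TF2 TensorBoard callback ignores write_grads.
writer = tf.summary.create_file_writer('./Graph/gradients')  # placeholder log dir

def log_gradients(model, loss_fn, x_batch, y_batch, step):
    with tf.GradientTape() as tape:
        preds = model(x_batch, training=True)
        loss = loss_fn(y_batch, preds)
    grads = tape.gradient(loss, model.trainable_variables)
    with writer.as_default():
        for var, grad in zip(model.trainable_variables, grads):
            if grad is not None:
                tf.summary.histogram('gradients/' + var.name, grad, step=step)
    writer.flush()
```

Calling log_gradients once per epoch (for example from a custom callback's on_epoch_end) recovers roughly what write_grads used to record.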

Tutorial | How to Generate Anime Character Images with a Variational Autoencoder (VAE)

Submitted by 巧了我就是萌 on 2020-12-10 00:13:02
Variational autoencoders (VAE) are constantly compared with generative adversarial networks (GAN), and the former is applied to image generation far less widely than the latter. Can a VAE only produce meaningful output on the MNIST dataset? In this article the author tries to generate anime character portraits automatically with a VAE, with quite good results. The images above are anime samples generated by the variational autoencoder. The code for this article is available on GitHub: https://github.com/wuga214/IMPLEMENTATION_Variational-Auto-Encoder In image generation, people always like to compare the variational autoencoder (VAE) with the generative adversarial network (GAN). The consensus is that a VAE is easier to train and makes an explicit (Gaussian) distributional assumption for both the latent representation and the observations, whereas a GAN captures the distribution of the observations better and makes no assumption about it. As a consequence, everyone believes that only a GAN can create clear and vivid images. That may well be true, since in theory a GAN captures the correlations between pixels, but few people have tried to prove it by training a VAE on images larger than the 28x28 MNIST digits. There are plenty of VAE implementations on the MNIST dataset, but very few people try something different on other datasets, because the original variational autoencoder paper only used
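For readers who have only seen MNIST examples, here is a compact sketch of a convolutional VAE in tf.keras. The 64x64x3 input size, layer widths and latent dimension are illustrative assumptions, not the configuration used in the linked repository.

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 64
image_shape = (64, 64, 3)  # assumed anime-face crop size

class Sampling(layers.Layer):
    """Reparameterization trick: z = mean + sigma * eps; also adds the KL term."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(tf.shape(z_mean))
        kl = -0.5 * tf.reduce_sum(
            1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)
        self.add_loss(tf.reduce_mean(kl))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

inputs = tf.keras.Input(shape=image_shape)
h = layers.Conv2D(32, 4, strides=2, padding='same', activation='relu')(inputs)
h = layers.Conv2D(64, 4, strides=2, padding='same', activation='relu')(h)
h = layers.Flatten()(h)
z_mean = layers.Dense(latent_dim)(h)
z_log_var = layers.Dense(latent_dim)(h)
z = Sampling()([z_mean, z_log_var])

h = layers.Dense(16 * 16 * 64, activation='relu')(z)
h = layers.Reshape((16, 16, 64))(h)
h = layers.Conv2DTranspose(32, 4, strides=2, padding='same', activation='relu')(h)
outputs = layers.Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')(h)

vae = tf.keras.Model(inputs, outputs)
# Pixel-wise reconstruction term as the compiled loss; the KL term was added inside Sampling.
vae.compile(optimizer='adam', loss='binary_crossentropy')
# vae.fit(images, images, epochs=..., batch_size=...)  # train as an autoencoder
```

In practice the weighting between the reconstruction and KL terms strongly affects how sharp the generated faces look.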

Performance Optimization: Reclaiming Thread Resources

Submitted by 一世执手 on 2020-12-09 19:25:29
This article comes from the PerfMa technical community; see the PerfMa (笨马网络) official website.
1. The problem: ranking requests to the model serving platform frequently timed out, occasionally accompanied by null pointer exceptions.
2. What changed around the time the problem appeared: the recall engine enlarged its recall volume, so the number of items per ranking request increased.
3. The affected model: a full-ranking model based on XGBoost prediction.
4. Project background: web-rec-model is the model serving platform. It manages the deployment, rollback, consistency testing and serving of ranking models (XGBoost, TensorFlow, PMML, ...) and recall models (item2item, key2item, vec2item, ...).
5. The flow of a single ranking request: (1) As shown in the figure below, one ranking request consists of feature fetching, vector fetching, data processing and prediction. These three steps all run in parallel on multiple threads, executed as subtasks. Between the stages, data processing is handled by the main thread; every stage's tasks return on timeout, and the main thread also uses a timeout-wait strategy when waiting for the subtask threads (a tree-shaped task-execution, timeout-wait threading framework written by a colleague; see the sketch below). (2) Feature data loop: this step runs asynchronously and records the features, scores and model version used in the ranking computation, which later serve as training samples and close the feature loop. (3) Within one ranking request, feature fetching and vector fetching are network I/O (I/O-bound tasks); on timeout they can be interrupted directly and their threads return quickly. Data processing and model prediction are computation steps (CPU-bound tasks). (4) Current request latency
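The timeout-wait pattern described in step (1) can be sketched in a few lines of Python (the real platform is a JVM service; fetch_features, the pool size and the timeout below are placeholders): subtasks are submitted to a pool, the main thread waits with a hard deadline, and anything still running is treated as timed out.

```python
import concurrent.futures
import time

def fetch_features(item_id):
    # Stand-in for the feature-fetch network call (I/O-bound).
    time.sleep(0.05)
    return {'item': item_id, 'features': [0.1, 0.2]}

def rank_request(item_ids, timeout_s=0.2):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=8)
    futures = {pool.submit(fetch_features, i): i for i in item_ids}
    done, not_done = concurrent.futures.wait(futures, timeout=timeout_s)
    for f in not_done:
        f.cancel()             # only queued tasks are cancelled; running ones keep going
    pool.shutdown(wait=False)  # return immediately instead of blocking the request thread
    return [f.result() for f in done], len(not_done)

results, timed_out = rank_request(range(100))
print(len(results), 'items ranked,', timed_out, 'timed out')
```

Note that shutdown(wait=False) lets the request return quickly, but the timed-out tasks keep consuming worker threads until they finish, which is why reclaiming thread resources matters here.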

'Dense' object has no attribute 'op'

Submitted by 落爺英雄遲暮 on 2020-12-08 06:11:37
Question: I am trying to make a fully connected model using tensorflow.keras; here is my code:

    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, Dense, Flatten

    def load_model(input_shape):
        input = Input(shape=input_shape)
        dense_shape = input_shape[0]
        x = Flatten()(input)
        x = Dense(dense_shape, activation='relu')(x)
        x = Dense(dense_shape, activation='relu')(x)
        x = Dense(dense_shape, activation='relu')(x)
        x = Dense(dense_shape, activation='relu')(x)
        x = Dense(dense
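The excerpt is cut off, but a frequent cause of "'Dense' object has no attribute 'op'" is passing a Dense layer object, rather than its output tensor, to Model, typically by dropping the trailing (x) call on the last layer. A minimal working version of a function like the one above might look as follows; the output size and activation are assumptions.

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Flatten

def load_model(input_shape):
    inputs = Input(shape=input_shape)
    dense_shape = input_shape[0]
    x = Flatten()(inputs)
    for _ in range(4):
        x = Dense(dense_shape, activation='relu')(x)
    # The layer must be *called* on the previous tensor. Writing
    # `outputs = Dense(1, activation='sigmoid')` without the trailing `(x)`
    # hands the layer object itself to Model and triggers
    # "'Dense' object has no attribute 'op'".
    outputs = Dense(1, activation='sigmoid')(x)  # assumed output head
    return Model(inputs=inputs, outputs=outputs)

model = load_model((64,))
model.summary()
```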

logits and labels must be broadcastable error in Tensorflow RNN

Submitted by 泪湿孤枕 on 2020-12-08 05:48:10
Question: I am new to TensorFlow and deep learning. I am trying to see how the loss decreases over 10 epochs in my RNN model, which I created to read a Kaggle dataset containing credit card fraud data. I am trying to classify the transactions as fraud (1) and not fraud (0). When I try to run the code below I keep getting the following error:

    2018-07-30 14:59:33.237749: W tensorflow/core/kernels/queue_base.cc:277] _1_shuffle_batch/random_shuffle_queue: Skipping cancelled enqueue attempt with queue
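The title's "logits and labels must be broadcastable" error almost always means the logits and labels tensors have incompatible shapes at the cross-entropy op. A small illustrative sketch (not the poster's model) of the two shape pairings TensorFlow accepts:

```python
import tensorflow as tf

batch_size, num_classes = 32, 2
logits = tf.random.normal([batch_size, num_classes])  # [batch, num_classes]

# Pairing 1: integer class IDs of shape [batch] -> sparse cross-entropy.
sparse_labels = tf.random.uniform([batch_size], maxval=num_classes, dtype=tf.int32)
loss_sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)

# Pairing 2: one-hot labels of shape [batch, num_classes] -> dense cross-entropy.
onehot_labels = tf.one_hot(sparse_labels, depth=num_classes)
loss_dense = tf.nn.softmax_cross_entropy_with_logits(
    labels=onehot_labels, logits=logits)

# Both give the same per-example loss; a mismatch such as labels of shape
# [batch, 1] against logits of shape [batch, 2] raises the broadcastable error.
print(tf.reduce_mean(loss_sparse).numpy(), tf.reduce_mean(loss_dense).numpy())
```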

How to create a One-hot Encoded Matrix from a PNG for Per Pixel Classification in Tensorflow 2

Submitted by 血红的双手。 on 2020-12-07 14:46:20
Question: I'm attempting to train a U-Net to assign a label to each pixel of a 256x256 image, similar to the tutorial given here. In the example, the predictions of the U-Net are a (128x128x3) output, where the 3 denotes one of the classifications assigned to each pixel. In my case, I need a (256x256x10) output with 10 different classifications (essentially a one-hot encoded array for each pixel in the image). I can load the images, but I'm struggling to convert each image's corresponding segmentation
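Assuming the label PNG stores each pixel's class index (0-9) in a single channel, one way to build the (256, 256, 10) one-hot target is tf.one_hot on the decoded mask. A short sketch follows; the path and the index-per-pixel assumption are mine, not from the question.

```python
import tensorflow as tf

NUM_CLASSES = 10  # assumed number of per-pixel classes

def load_one_hot_mask(png_path):
    mask = tf.io.decode_png(tf.io.read_file(png_path), channels=1)  # (256, 256, 1), uint8
    mask = tf.squeeze(tf.cast(mask, tf.int32), axis=-1)             # (256, 256) class indices
    return tf.one_hot(mask, depth=NUM_CLASSES)                      # (256, 256, 10) one-hot

# Example: map it over a dataset of mask file paths.
# masks = tf.data.Dataset.list_files('labels/*.png').map(load_one_hot_mask)
```

If the PNG encodes classes as RGB colours instead of indices, each colour first has to be mapped to an integer index before tf.one_hot can be applied.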

Issue in creating Tflite model populated with metadata (for object detection)

Submitted by 谁说胖子不能爱 on 2020-12-07 07:26:49
Question: I am trying to run a TFLite model on Android for object detection. To that end, I have successfully trained the model on my own set of images as follows:

(a) Training:

    !python3 object_detection/model_main.py \
      --pipeline_config_path=/content/drive/My\ Drive/Detecto\ Tutorial/models/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config \
      --model_dir=training/

(modifying the config file to point to where my specific TFRecords are mentioned)

(b) Export inference graph:

    !python
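For reference, the usual follow-up to the export step is to convert the exported model to a .tflite flatbuffer and then attach metadata with the tflite_support metadata writer so the Android Task Library can consume it. The sketch below assumes a TF2 SavedModel export; the paths, normalization values and label file are placeholders, and a TF1 SSD pipeline like the one above may need the dedicated TFLite export script instead.

```python
import tensorflow as tf
from tflite_support.metadata_writers import object_detector, writer_utils

# 1) Convert the exported SavedModel to a TFLite flatbuffer (placeholder paths).
converter = tf.lite.TFLiteConverter.from_saved_model('exported_model/saved_model')
tflite_model = converter.convert()
with open('detect.tflite', 'wb') as f:
    f.write(tflite_model)

# 2) Populate the flatbuffer with metadata (normalization and label map are placeholders).
writer = object_detector.MetadataWriter.create_for_inference(
    writer_utils.load_file('detect.tflite'),
    [127.5],           # input normalization mean
    [127.5],           # input normalization std
    ['labelmap.txt'])  # label file packed into the model
writer_utils.save_file(writer.populate(), 'detect_with_metadata.tflite')
```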

tensorflow gpu is only running on CPU

Submitted by 柔情痞子 on 2020-12-07 07:17:43
Question: I installed Anaconda Navigator on Windows 10 and all the necessary NVIDIA/CUDA packages, created a new environment called tensorflow-gpu-env, updated PATH information, etc. When I run a model (built using tensorflow.keras), I see that CPU utilization increases significantly, GPU utilization stays at 0%, and the model just does not train. I ran a couple of tests to see how things look:

    print(tf.test.is_built_with_cuda())
    True

The above output ('True') looks correct. Another try: from tensorflow
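A couple of additional checks (illustrative, not from the question, which is cut off here) that distinguish "the wheel was built with CUDA" from "TensorFlow can actually see a GPU at runtime":

```python
import tensorflow as tf

# True only means the installed wheel was compiled with CUDA support.
print(tf.test.is_built_with_cuda())

# An empty list here means no usable GPU is visible to this environment,
# e.g. because the CUDA/cuDNN runtime versions don't match the wheel.
print(tf.config.list_physical_devices('GPU'))

# Log the device each op is placed on while the model runs.
tf.debugging.set_log_device_placement(True)
```

If list_physical_devices returns an empty list, the model silently falls back to the CPU, which matches the 0% GPU utilization described above.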