tensorflow

How can I set the name of my loss operation in TensorFlow?

Submitted by 旧时模样 on 2021-01-21 10:53:09
Question: In TensorFlow I can assign names to operations and tensors so that I can retrieve them later. For example, in one function I can do

    input_layer = tf.placeholder(tf.float32, shape=[None, 300], name='input_layer')

and then, in another function later, I can do

    input_layer = get_tensor_by_name('input_layer:0')

I have come to believe that this is handy for keeping my TF code as modular as possible. I would like to do the same with my loss, but how can I assign a custom name to that operation? The problem is
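One common approach (a minimal sketch, not taken from the question; the function name build_loss and the name 'my_loss' are illustrative assumptions) is to pass a name to the loss op, or to wrap the final tensor in tf.identity, which pins a retrievable name on it:

    import tensorflow as tf

    def build_loss(logits, labels):
        # most loss ops accept a name argument directly ...
        raw_loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))
        # ... or wrap the final tensor in tf.identity to fix its name
        return tf.identity(raw_loss, name='my_loss')

    # later, in another function:
    loss = tf.get_default_graph().get_tensor_by_name('my_loss:0')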

How do I preprocess and tokenize a TensorFlow CsvDataset inside the map method?

Submitted by 允我心安 on 2021-01-21 10:39:09
Question: I made a TensorFlow CsvDataset, and I'm trying to tokenize the data as follows:

    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
    from tensorflow import keras
    import tensorflow as tf
    from tensorflow.keras.preprocessing.text import Tokenizer

    os.chdir('/home/nicolas/Documents/Datasets')
    fname = 'rotten_tomatoes_reviews.csv'

    def preprocess(target, inputs):
        tok = Tokenizer(num_words=5_000, lower=True)
        tok.fit_on_texts(inputs)
        vectors = tok.texts_to_sequences(inputs)
        return vectors,
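A frequently suggested workaround (a sketch under assumptions: the column layout, padding length, and placeholder texts below are not from the question) is to fit the Tokenizer once outside the pipeline and invoke it through tf.py_function inside map, because the Keras Tokenizer cannot operate on the symbolic tensors that map passes in:

    import tensorflow as tf
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    # placeholder corpus; in practice, load the review texts from the CSV up front
    texts = ["an example review", "another example review"]

    tok = Tokenizer(num_words=5_000, lower=True)
    tok.fit_on_texts(texts)

    def tokenize(target, inputs):
        def _tok(t):
            # t arrives as an eager string tensor inside py_function
            seqs = tok.texts_to_sequences([t.numpy().decode('utf-8')])
            return pad_sequences(seqs, maxlen=100)[0]
        vec = tf.py_function(_tok, [inputs], tf.int32)
        vec.set_shape([100])
        return vec, target

    dataset = tf.data.experimental.CsvDataset(
        'rotten_tomatoes_reviews.csv', [tf.int32, tf.string], header=True)
    dataset = dataset.map(tokenize).batch(32)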

How to display the weights and biases of a model on TensorBoard using Python

Submitted by て烟熏妆下的殇ゞ on 2021-01-21 08:58:09
Question: I have created the following model for training and want to visualize it on TensorBoard:

    ## Basic Cell LSTM tensorflow
    index_in_epoch = 0
    perm_array = np.arange(x_train.shape[0])
    np.random.shuffle(perm_array)

    # function to get the next batch
    def get_next_batch(batch_size):
        global index_in_epoch, x_train, perm_array
        start = index_in_epoch
        index_in_epoch += batch_size
        if index_in_epoch > x_train.shape[0]:
            np.random.shuffle(perm_array)  # shuffle permutation array
            start = 0  # start next
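For reference, the usual TF1-style recipe (a sketch; the variable names W and b and the layer shape are assumptions, since the excerpt above does not show the model's variables) is to attach tf.summary.histogram ops to the weight and bias variables, merge the summaries, and write them with a tf.summary.FileWriter so they appear under TensorBoard's Histograms and Distributions tabs:

    import tensorflow as tf

    # hypothetical weight and bias variables of one layer
    W = tf.get_variable('W', shape=[300, 128])
    b = tf.get_variable('b', shape=[128])

    # record their distributions for TensorBoard
    tf.summary.histogram('weights', W)
    tf.summary.histogram('bias', b)
    merged = tf.summary.merge_all()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        writer = tf.summary.FileWriter('./logs', sess.graph)
        summary = sess.run(merged)
        writer.add_summary(summary, global_step=0)
        writer.close()

Launching tensorboard --logdir ./logs then shows the recorded weight and bias distributions.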

Keras Model for Siamese Network not Learning and always predicting the same output

Submitted by 做~自己de王妃 on 2021-01-21 05:55:12
Question: I am trying to train a Siamese neural network using Keras, with the goal of identifying whether two images belong to the same class or not. My data is shuffled and has an equal number of positive and negative examples. My model is not learning anything and always predicts the same output; I get the same loss, validation accuracy, and validation loss every time. Training Output

    def convert(row):
        return imread(row)

    def contrastive_loss(y_true, y_pred):
        margin = 1
        square_pred = K
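For context, the standard contrastive loss that the truncated snippet appears to implement (a sketch of the usual Hadsell-style formulation; only the margin of 1 comes from the excerpt) looks like this with the Keras backend:

    from tensorflow.keras import backend as K

    def contrastive_loss(y_true, y_pred, margin=1.0):
        # y_true: 1 for pairs of the same class, 0 for pairs of different classes
        # y_pred: Euclidean distance between the two embeddings
        y_true = K.cast(y_true, y_pred.dtype)
        square_pred = K.square(y_pred)                              # pulls similar pairs together
        margin_square = K.square(K.maximum(margin - y_pred, 0.0))  # pushes dissimilar pairs apart up to the margin
        return K.mean(y_true * square_pred + (1.0 - y_true) * margin_square)

If the label convention is inverted (1 for dissimilar pairs), the two terms swap, which is a common cause of a network that converges to a constant output.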

Dissecting TensorFlow Eager | DevFest 2018 Live Transcript

Submitted by 旧城冷巷雨未停 on 2021-01-21 02:58:33
The long-awaited DevFest 2018 live transcripts are finally out! November 25, a date with 1,125 developers — were you there? What, you missed DevFest 2018? Don't worry, we are publishing the speakers' talk transcripts, so you can still get all the takeaways even if you weren't on site. Previous recaps:
Senior Google engineer 顾仁民: TensorFlow Extended helps you get projects to production fast | DevFest 2018 transcript
AI practitioners, don't miss this! Google Cloud engineer Shirley shows you how to build custom machine-learning models with AutoML | DevFest 2018 transcript
How does today's hottest real-time video communication use deep learning? 声网 (Agora) chief scientist 钟声 explains | DevFest 2018 transcript
Google mobile technology expert Palances Liao walks you through PWA/AMP and Web trends | DevFest 2018 transcript
Google machine-learning expert 江骏 explains TensorFlow Hub & Tensor2Tensor | DevFest 2018 transcript
Merculet chief architect 吴翔彬 on building converged infrastructure with public and consortium blockchains | DevFest 2018 transcript
In this article, Google machine-learning expert and 了得研究院 CEO 彭靖田 shares "TensorFlow Eager". 1 About the speaker 2 Talk transcript Opening remarks: Let me briefly introduce myself. My name is 彭靖田, and I started working on TensorFlow in 2016

Can not use both bias and batch normalization in convolution layers

Submitted by 泪湿孤枕 on 2021-01-20 23:55:50
Question: I use the slim framework for TensorFlow because of its simplicity, but I want a convolutional layer with both biases and batch normalization. In vanilla TensorFlow, I have:

    def conv2d(input_, output_dim, k_h=5, k_w=5, d_h=2, d_w=2, name="conv2d"):
        with tf.variable_scope(name):
            w = tf.get_variable('w', [k_h, k_w, input_.get_shape()[-1], output_dim],
                                initializer=tf.contrib.layers.xavier_initializer(uniform=False))
            conv = tf.nn.conv2d(input_, w, strides=[1, d_h, d_w, 1], padding='SAME')
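For background, slim.conv2d skips the bias whenever a normalizer_fn is supplied, so one way to get both (a sketch only, with an assumed layer shape and activation, not necessarily the answer the poster received) is to keep the bias in the convolution and apply slim.batch_norm as a separate call:

    import tensorflow as tf
    import tensorflow.contrib.slim as slim

    def conv2d_bn(input_, output_dim, k=5, stride=2, is_training=True, scope="conv2d_bn"):
        with tf.variable_scope(scope):
            # the bias is created here because no normalizer_fn is passed to conv2d
            conv = slim.conv2d(input_, output_dim, [k, k], stride=stride,
                               activation_fn=None,
                               biases_initializer=tf.zeros_initializer())
            # batch normalization is applied explicitly, after the biased convolution
            bn = slim.batch_norm(conv, is_training=is_training, center=True, scale=True)
            return tf.nn.relu(bn)

Note that batch norm's own beta (the center parameter) already acts as a per-channel bias, which is why slim drops the convolution bias by default when the two are fused.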

[Embedded AI Weekly, 2020-11-27] NanoDet object-detection model, porting ncnn to RISC-V, and more!

Submitted by 谁说我不能喝 on 2021-01-20 22:50:10
Overview: This is the 2020-11-27 issue of the AI briefing, bringing you 8 related news items that we hope you will find useful. Roughly 2,000+ characters in total; reading the whole issue takes about 5-7 minutes.
1. NanoDet: a lightweight (1.8 MB), ultra-fast (97 fps on mobile) object-detection project. GitHub: https://github.com/RangiLyu/nanodet A project named nanodet recently appeared on GitHub, open-sourcing a real-time anchor-free detection model for mobile devices that aims to match the YOLO family in performance while remaining just as easy to train and port. Only two days after going live, the project had already passed 200 stars. NanoDet is an ultra-fast, lightweight anchor-free object-detection model for mobile, with the following advantages: ultra-lightweight — the model file is only 1.8 MB; ultra-fast — 97 fps (10.23 ms) on a mobile ARM CPU; training-friendly — far lower GPU memory cost than other models, with a batch size of 80 running on a GTX 1060 6G; easy to deploy — a C++ implementation based on the ncnn inference framework and an Android demo are provided.
2. Under 1,000 lines of code and 2,000 GitHub stars: genius hacker open-sources the deep-learning framework tinygrad. GitHub: https://github.com/geohot/tinygrad Video: https://www

Error importing tensorflow in anaconda on Mac OSX

Submitted by 风格不统一 on 2021-01-20 20:24:11
Question: I am trying to import tensorflow using Python and Anaconda on Mac OS X 10.11.6 (El Capitan). I have followed the instructions on tensorflow.org for installation with Anaconda, as follows:

    conda create -n tensorflow pip python=3.6
    source activate tensorflow
    sudo -H pip3 install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.10.0-py3-none-any.whl

Then starting python and typing import tensorflow produces an error:

    ImportError: dlopen(
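One quick diagnostic (a sketch; it does not rely on the truncated error text above) is to check, from inside the activated environment, which interpreter and which TensorFlow build actually get picked up, since sudo pip3 often installs into the system Python rather than the conda environment:

    import sys
    print(sys.executable)   # should point inside the 'tensorflow' conda env, e.g. .../envs/tensorflow/bin/python
    import tensorflow as tf
    print(tf.__version__)   # which TensorFlow build this interpreter loads
    print(tf.__file__)      # where that build is installed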

Keras - Validation Loss and Accuracy stuck at 0

Submitted by 此生再无相见时 on 2021-01-20 19:20:47
Question: I am trying to train a simple two-layer fully connected neural net for binary classification in TensorFlow Keras. I have split my data into training and validation sets with an 80-20 split using sklearn's train_test_split(). When I call model.fit(X_train, y_train, validation_data=[X_val, y_val]), it shows 0 validation loss and accuracy for all epochs, yet it trains just fine. Also, when I try to evaluate it on the validation set, the output is non-zero. Can someone please explain why I am
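For what it's worth, Keras documents validation_data as a tuple, and in some TensorFlow versions passing a list is interpreted differently; a minimal sketch of the tuple form (the synthetic data, layer sizes, and hyperparameters below are assumptions for illustration) is:

    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import train_test_split

    # placeholder data standing in for the real features and binary labels
    X = np.random.rand(1000, 20).astype('float32')
    y = np.random.randint(0, 2, size=(1000,))

    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # pass validation_data as a tuple, not a list
    model.fit(X_train, y_train, epochs=10, batch_size=32,
              validation_data=(X_val, y_val))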