tensorflow

[NLP in Practice] Named Entity Recognition with TensorFlow

Submitted by 余生长醉 on 2020-12-01 00:31:35
Hands-on practice is the best way to learn a technique, and the only way to understand one deeply. The NLP column is therefore launching a practice series, so that interested readers can try things for themselves after reading. This article covers a very important task in natural language processing: named entity recognition (NER). The most common approach is a BiLSTM+CRF model; this article introduces another effective model, Dilated-CNN+CRF, though code for both models is provided.

Author & editor: 小Dream哥

1 The named entity recognition task

The author previously gave a systematic introduction to the concepts behind NER and to corpus annotation schemes; readers unfamiliar with them can start with: 【NLP-NER】What is named entity recognition? For the theory behind the BiLSTM and Dilated-CNN models, see: 【NLP-NER】The two deep learning models most commonly used for named entity recognition. Without further ado, since this is a hands-on article, let's get started.

2 Data preprocessing

1) Inspect the data format

First, a look at the data format, to make the later processing easier to follow. As shown in the figure below, the corpus uses the standard BIO annotation scheme: each character and its tag are separated by a space, and sentences are separated by a blank line.

2) Load the training data

def load_sentences(path, lower, zeros):
    """Load the training, test, and validation data."""
    sentences = []
    sentence = []
    num = 0
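The truncated loader above can be fleshed out as follows — a minimal sketch, assuming the BIO corpus format just described (one space-separated char/tag pair per line, blank lines between sentences). The `lower` and `zeros` flags follow common NER preprocessing conventions (lowercase the token; replace every digit with '0'); their exact behavior in the original code is an assumption:

```python
import re

def load_sentences(path, lower=False, zeros=False):
    """Load BIO-annotated sentences: one 'char tag' pair per line,
    sentences separated by blank lines."""
    sentences = []
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip()
            if zeros:
                line = re.sub(r"\d", "0", line)  # map all digits to '0'
            if not line:                 # blank line ends the current sentence
                if sentence:
                    sentences.append(sentence)
                    sentence = []
            else:
                pair = line.split()      # [char, tag]
                if lower:
                    pair[0] = pair[0].lower()
                sentence.append(pair)
    if sentence:                         # corpus may not end with a blank line
        sentences.append(sentence)
    return sentences
```

Each element of the returned list is one sentence, itself a list of `[character, tag]` pairs, which is a convenient shape for the later mapping to character and tag IDs.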

GradientTape convergence much slower than keras.Model.fit

Submitted by 蹲街弑〆低调 on 2020-11-30 12:25:09
Question: I am currently trying to get a hold of the TF 2.0 API, but as I compared GradientTape to a regular keras.Model.fit I noticed:

- It ran slower (probably due to eager execution).
- It converged much slower (and I am not sure why).

+-------+--------------+------------------------+-----------------+
| Epoch | GradientTape | GradientTape shuffling | keras.Model.fit |
+-------+--------------+------------------------+-----------------+
|   1   |    0.905     |         0.918          |     0.8793      |
+-------+--------------+------------------------+-----------------+
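One common cause of this gap: `keras.Model.fit` shuffles the training data every epoch by default, while a hand-written `GradientTape` loop typically iterates batches in a fixed order (differing optimizer defaults are the other usual suspect). The order-of-presentation effect can be illustrated without TensorFlow at all — a pure-Python SGD sketch on a toy linear-regression problem; every name here is illustrative, not from the question's code:

```python
import random

def sgd_linear(data, epochs=20, lr=0.01, shuffle=False, seed=0):
    """Fit y = w*x + b by per-sample SGD, optionally reshuffling each epoch."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    data = list(data)
    for _ in range(epochs):
        if shuffle:
            rng.shuffle(data)    # what Model.fit does by default (shuffle=True)
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x    # gradient of 0.5 * err**2 w.r.t. w
            b -= lr * err        # gradient of 0.5 * err**2 w.r.t. b
    return w, b

# Toy data: y = 2x + 1, presented in a fixed, sorted order.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-20, 21)]
w_fixed, b_fixed = sgd_linear(data, shuffle=False)
w_shuf, b_shuf = sgd_linear(data, shuffle=True)
```

On this convex toy problem both orders converge, but on real, non-convex models with correlated batch order the fixed-order loop can lag noticeably — which is why custom `GradientTape` loops usually shuffle (e.g. via `tf.data.Dataset.shuffle`) and why it is worth checking that the loop's optimizer and learning rate match what `compile()` was given.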

tf object detection api - extract feature vector for each detection bbox

Submitted by 北城以北 on 2020-11-30 07:30:23
Question: I'm using the Tensorflow object detection API, working with a pretrained ssd-mobilenet model. Is there a way to extract the last global pooling of the mobilenet for each bbox as a feature vector? I can't find the name of the operation holding this info. I've been able to extract the detection labels and bboxes based on the example on GitHub:

image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
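What the question asks for, conceptually, is a pooled feature vector over the feature-map region covered by each detection box. The graph tensor holding this differs per exported model (one usually finds it by listing `op.name for op in detection_graph.get_operations()`), but the pooling operation itself can be sketched in plain Python — a hypothetical `pooled_feature` helper, with the feature map as nested lists of shape H x W x C and the box in normalized [ymin, xmin, ymax, xmax] coordinates, matching the API's box convention:

```python
def pooled_feature(feature_map, box):
    """Average-pool an H x W x C feature map over a normalized bbox."""
    h, w = len(feature_map), len(feature_map[0])
    c = len(feature_map[0][0])
    ymin, xmin, ymax, xmax = box
    # Map normalized coordinates to cell index ranges (at least one cell each).
    y0, y1 = int(ymin * h), max(int(ymin * h) + 1, int(ymax * h))
    x0, x1 = int(xmin * w), max(int(xmin * w) + 1, int(xmax * w))
    vec = [0.0] * c
    n = 0
    for y in range(y0, min(y1, h)):
        for x in range(x0, min(x1, w)):
            for k in range(c):
                vec[k] += feature_map[y][x][k]
            n += 1
    return [v / n for v in vec]
```

In TensorFlow terms this is what `tf.image.crop_and_resize` followed by a mean over the spatial axes computes; applying it to the backbone's last feature map with the boxes from `detection_boxes` yields one vector per detection.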

speed benchmark for testing tensorflow install

Submitted by 无人久伴 on 2020-11-30 04:50:58
Question: I'm doubting whether TensorFlow is correctly configured on my GPU box, since it's about 100x slower per iteration to train a simple linear regression model (batch size = 32, 1500 input features, 150 output variables) on my fancy GPU machine than on my laptop. I'm using a Titan X with a modern CPU, etc. nvidia-smi says that I'm only at 10% GPU utilization, but I expect that's because of the small batch sizes. I'm not using a feed_dict to move data into the computation graph. Everything is
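A quick way to separate "TensorFlow is misconfigured" from "my model is just too small to feed the GPU" is to time an identical, fixed workload on both machines. A minimal, library-agnostic timing helper (pure Python; in practice `fn` would wrap one training step or one large matmul):

```python
import time

def benchmark(fn, warmup=3, iters=10):
    """Return mean wall-clock seconds per call of fn(), after warmup calls."""
    for _ in range(warmup):      # warmup absorbs one-time setup/compilation cost
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Example: time a CPU-bound dummy step.
step = lambda: sum(i * i for i in range(10000))
mean_seconds = benchmark(step)
```

For a GPU check specifically, a large dense matmul (say 8192x8192) is a better probe than this tiny regression model: with batch size 32 the per-step arithmetic is so small that Python and graph-dispatch overhead dominates, so low `nvidia-smi` utilization — and even a GPU slower than a laptop CPU — is expected regardless of configuration.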

Tensorflow in Scala reflection

Submitted by 回眸只為那壹抹淺笑 on 2020-11-29 19:36:22
Question: I am trying to get TensorFlow for Java to work in Scala. I am using the tensorflow Java library without any Scala wrapper. In sbt I have:

If I run the HelloWorld example found here, it works fine, with the Scala adaptations:

import org.tensorflow.Graph
import org.tensorflow.Session
import org.tensorflow.Tensor
import org.tensorflow.TensorFlow

val g = new Graph()
val value = "Hello from " + TensorFlow.version()
val t = Tensor.create(value.getBytes("UTF-8"))
// The Java API doesn't yet include
