tensor

TensorFlow Training CNN on Custom Images

Submitted by 十年热恋 on 2019-12-25 00:13:31
Question: All the TensorFlow tutorials do a great job; however, they all use preprocessed, downloadable datasets that work out of the box. Their MNIST tutorial is the perfect example. For a school project, four others and I have been assigned to train a CNN on supplied data in the form of PNG images. It's just a directory with 150 images, and the labels are contained in the image file names. The way the code sits now, we are getting an error, which I will include below. We followed the MNIST code found here:
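A minimal sketch (not the poster's code) of one way to feed PNGs whose labels live in the file names into a small CNN with tf.data and tf.keras. The directory path, the "label_index.png" naming scheme, and the 10-class output are assumptions for illustration only.

import os
import tensorflow as tf

IMAGE_DIR = "data/images"  # hypothetical directory holding the 150 PNGs
file_paths = [os.path.join(IMAGE_DIR, f) for f in os.listdir(IMAGE_DIR) if f.endswith(".png")]
# assume names look like "3_0017.png" -> class id 3 (adjust the parsing to the real naming scheme)
labels = [int(os.path.basename(p).split("_")[0]) for p in file_paths]

def load_example(path, label):
    img = tf.io.read_file(path)
    img = tf.image.decode_png(img, channels=1)
    img = tf.image.resize(img, [28, 28]) / 255.0  # scale to [0, 1] like the MNIST tutorial
    return img, label

dataset = (tf.data.Dataset.from_tensor_slices((file_paths, labels))
           .map(load_example)
           .shuffle(150)
           .batch(16))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # assumes integer labels in 0..9
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=5)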

matmul function for vector with tensor multiplication in tensorflow

Submitted by 丶灬走出姿态 on 2019-12-25 00:05:21
Question: In general, when we multiply a vector v of dimension 1*n by a tensor T of dimension m*n*k, we expect to get a matrix/tensor of dimension m*k (or m*1*k). This means that our tensor consists of m matrix slices of dimension n*k; v is multiplied by each matrix and the resulting vectors are stacked together. To do this multiplication in TensorFlow, I came up with the following formulation. I am just wondering if there is any built-in function that does this standard multiplication
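A minimal sketch of how this "multiply v against every n*k slice of T" operation can be written with a built-in, here tf.einsum (tf.tensordot works as well); shapes are illustrative.

import tensorflow as tf

v = tf.random.normal([5])        # the 1*n vector, stored as shape [n]
T = tf.random.normal([3, 5, 7])  # the m*n*k tensor

out = tf.einsum('n,mnk->mk', v, T)    # shape [m, k]
out_keepdim = tf.expand_dims(out, 1)  # shape [m, 1, k], if that layout is preferred
# equivalently: tf.tensordot(T, v, axes=[[1], [0]]) also yields shape [m, k]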

Tensorflow python: reshape input [batchsize] to tensor [batchsize, 2] with specific order

Submitted by China☆狼群 on 2019-12-24 21:49:34
Question: I have a tensor (shape=[batchsize]). I want to reshape the tensor into shape=[-1,2] in a specific order, so that the elements land at [0,0], [1,0], [0,1], [1,1], [0,2], [0,3], [2,1], [3,1], and so on for an unknown batchsize. Here is example code with a tensor range=(0 to input=8):

import tensorflow as tf
import numpy as np
batchsize = tf.placeholder(shape=[], dtype=tf.int32)
x = tf.range(0, batchsize, 1)
x = tf

Keras - MS-SSIM as loss function

Submitted by 眉间皱痕 on 2019-12-24 18:32:17
Question: (Edit: updated since I had initially misinterpreted the paper.) I am trying to implement a custom loss function for Keras such that the objective is to minimize the MS-SSIM (http://www.cns.nyu.edu/~zwang/files/papers/msssim.pdf). I am getting the following error:

Traceback (most recent call last):
File "kerasmodel_const_init_customloss.py", line 318, in <module>
model.fit(x=[np.array(training_data_LR), np.array(training_data_MC)], y=[np.array(training_data_HR)], batch_size=128, epochs=2, verbose
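A minimal sketch (not the poster's code) of an MS-SSIM based Keras loss built on the built-in tf.image.ssim_multiscale; it assumes 4-D image batches scaled to [0, 1] (hence max_val=1.0) and images large enough for the default multi-scale downsampling.

import tensorflow as tf

def ms_ssim_loss(y_true, y_pred):
    # ssim_multiscale returns a per-image similarity in [0, 1];
    # minimizing (1 - MS-SSIM) therefore maximizes the similarity.
    return 1.0 - tf.reduce_mean(tf.image.ssim_multiscale(y_true, y_pred, max_val=1.0))

# usage (illustrative): model.compile(optimizer='adam', loss=ms_ssim_loss)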

AOE Engineering Practice: The NCNN Component

Submitted by 北城余情 on 2019-12-24 16:16:06
Author: 杨科. NCNN is a high-performance neural-network forward-inference framework open-sourced by Tencent and heavily optimized for mobile devices. In the AOE open-source project we provide an NCNN component; below we use the SqueezeNet object-recognition sample as an example to walk through the design and usage of the NCNN component. Drawbacks of integrating NCNN directly: to hook SqueezeNet up to NCNN, the model files, the NCNN headers and libraries, the JNI calls, and the pre-processing and post-processing business logic all end up inside the SqueezeNet sample project. The problems with this simple, direct integration are obvious: it is tightly coupled to the business logic and not reusable, and the pre- and post-processing are specific to the SqueezeNcnn sample, so they cannot easily be offered to other business components. Thinking further, if we hand each AI capability to business developers as a separate AI component, this is what happens: every component has to depend on and bundle the NCNN library, and the developers of every component have to learn the NCNN interfaces, write C calling code, and write JNI. So it is natural to extract a standalone NCNN component, for example like this: The NCNN component in the AOE SDK. In the AOE open-source SDK we provide an NCNN component; below we discuss it from four aspects:
- the design of the NCNN component
- the changes made to the SqueezeNet sample
- how an application integrates the NCNN component
- some reflections on the NCNN component
Design of the NCNN component: the design philosophy of the NCNN component is that it contains no concrete business logic

How to modify the return tensor from tf.nn.embedding_lookup()?

Submitted by 女生的网名这么多〃 on 2019-12-24 08:57:39
Question: I want to use scatter_nd_update to change the contents of the tensor returned from tf.nn.embedding_lookup(). However, the returned tensor is not mutable, and scatter_nd_update() requires a mutable tensor as input. I spent a lot of time trying to find a solution, including using gen_state_ops._temporary_variable and tf.sparse_to_dense, but unfortunately all attempts failed. Is there an elegant solution to this?

with tf.device('/cpu:0'), tf.name_scope("embedding"):
    self.W = tf
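A minimal sketch (not the poster's code) of one possible approach: instead of mutating the lookup result, tf.tensor_scatter_nd_update builds a new tensor with selected entries replaced. Note this op only exists in newer TensorFlow releases (roughly 1.15/2.x onward); the shapes below are illustrative.

import tensorflow as tf

W = tf.Variable(tf.random.uniform([100, 8]))   # embedding matrix
ids = tf.constant([[3, 7, 7, 1]])              # [batch, seq_len]
embedded = tf.nn.embedding_lookup(W, ids)      # shape [1, 4, 8], an immutable tensor

indices = tf.constant([[0, 2]])                # overwrite batch 0, position 2
updates = tf.zeros([1, 8])                     # the replacement row(s)
patched = tf.tensor_scatter_nd_update(embedded, indices, updates)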

tensorflow - map_fn to do computation on every possible combination of two tensors

Submitted by 家住魔仙堡 on 2019-12-24 07:09:53
Question: Does anyone know how to use map_fn or any other TensorFlow function to run a computation on every combination of two input tensors? What I want is something like this: given two arrays ([1,2] and [4,5]), I want, as a result, a matrix with the output of the computation (e.g. add) on every possible combination of the two arrays. So the result would be [[5,6], [6,7]]. I used map_fn, but that only pairs the elements index-wise: [[5] [7]]. Does anyone have an idea how to implement this? Thanks. Answer 1: You can
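A minimal sketch of one common way to do this (the answer above is truncated, so this may or may not be what it goes on to describe): broadcasting. Expand one tensor to a column and the other to a row, and the elementwise op then runs on every pair.

import tensorflow as tf

a = tf.constant([1, 2])
b = tf.constant([4, 5])

result = tf.expand_dims(a, 1) + tf.expand_dims(b, 0)  # shape [2, 2]
# result == [[5, 6], [6, 7]], i.e. a[i] + b[j] for every combination of i and j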

Is this declaration of an Eigen::Tensor in C++ safe, or buggy? And should I submit an issue for it?

Submitted by 非 Y 不嫁゛ on 2019-12-23 22:26:49
Question: Using Eigen's unsupported Tensor module, if I do:

size_t dim0 = 3;
size_t dim1 = 2;
size_t dim2 = 4;
Eigen::Tensor<double, 3> var(dim0, dim1, dim2);

I get the following error:

/usr/local/include/eigen3/unsupported/Eigen/CXX11/src/Tensor/TensorDimensions.h:287:167: error: non-constant-expression cannot be narrowed from type 'unsigned long' to 'std::__1::array<long, 3>::value_type' (aka 'long') in initializer list [-Wc++11-narrowing]

But the code compiles OK if I explicitly cast the dimensions

How to use a core dump

Submitted by 六眼飞鱼酱① on 2019-12-23 21:03:00
How to print the model structure in TfLite Micro: tflite for micro includes files such as micro_optional_debug_tools.h/cc that produce a printout like the following. The printed information:

Interpreter has 16 tensors and 7 nodes
Inputs: 1
Outputs: 0
Tensor 0 Identity kTfLiteFloat32 kTfLiteArenaRw 16 bytes ( 0.0 MB) 1 4
Tensor 1 conv2d_10_input kTfLiteFloat32 kTfLiteArenaRw 1536 bytes ( 0.0 MB) 1 128 3 1
Tensor 2 sequential_5/conv2d_10/Conv2D/ReadVariableOp kTfLiteFloat32 kTfLiteMmapRo 384 bytes ( 0.0 MB) 1 4 3 8
Tensor 3 sequential_5/conv2d_10/Conv2D_bias kTfLiteFloat32 kTfLiteMmapRo 32 bytes ( 0.0 MB) 8
Tensor 4 sequential_5/conv2d_10/Relu kTfLiteFloat32 kTfLiteArenaRw 12288

Implementing an SVM with TensorFlow

Submitted by 旧巷老猫 on 2019-12-23 18:46:36
Environment setup:
- Windows 10
- Python 3.6
- tensorflow 1.15
- scipy
- matplotlib (you may run into a problem with the tkinter module at runtime)
- sklearn, a third-party Python module; the sklearn library bundles a number of commonly used machine-learning methods

Code implementation:

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn import datasets
from tensorflow.python.framework import ops
ops.reset_default_graph()
sess = tf.Session()

Session API (the detailed role of Session): a Session is the class in TensorFlow that executes ops and evaluates tensors.

framework API. Some additional terms:
- Tensor: TensorFlow programs use the tensor data structure to represent all data; in the computation graph, the data passed between operations are tensors. You can think of a TensorFlow tensor as an n-dimensional array or list.
- Variable: commonly used to define the parameters of a model, values obtained through ongoing training, such as weights and biases.
- Placeholder: the carrier for input variables
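Following on from the setup above, here is a minimal sketch of a linear soft-margin SVM in TF 1.x style (hinge loss plus L2 regularization). It assumes a two-class split of sklearn's iris data with labels encoded as +1/-1; feature choice, regularization strength, and step count are illustrative, not the article's exact code.

import numpy as np
import tensorflow as tf
from sklearn import datasets

iris = datasets.load_iris()
x_data = np.array([[x[0], x[3]] for x in iris.data], dtype=np.float32)   # sepal length, petal width
y_data = np.array([1.0 if y == 0 else -1.0 for y in iris.target], dtype=np.float32).reshape(-1, 1)

x = tf.placeholder(shape=[None, 2], dtype=tf.float32)
y = tf.placeholder(shape=[None, 1], dtype=tf.float32)
W = tf.Variable(tf.random_normal(shape=[2, 1]))
b = tf.Variable(tf.random_normal(shape=[1, 1]))

output = tf.subtract(tf.matmul(x, W), b)
# soft-margin objective: hinge loss plus L2 penalty on the weights
hinge = tf.reduce_mean(tf.maximum(0.0, 1.0 - y * output))
loss = hinge + 0.01 * tf.reduce_sum(tf.square(W))

train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(500):
        sess.run(train_op, feed_dict={x: x_data, y: y_data})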