deep-learning

Run a TensorFlow model in C++

我只是一个虾纸丫 submitted on 2021-01-29 04:53:14
Question: I trained my model using tf.keras. I converted this model to '.pb' with:

```python
import os
import tensorflow as tf
from tensorflow.keras import backend as K
K.set_learning_phase(0)
from tensorflow.keras.models import load_model

model = load_model('model_checkpoint.h5')
model.save('model_tf2', save_format='tf')
```

This creates a folder 'model_tf2' containing 'assets', 'variables', and saved_model.pb. I'm trying to load this model in C++. Referring to many other posts (mainly, Using Tensorflow checkpoint to restore
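Before wiring up the C++ side, it can help to confirm what the export actually contains. A minimal Python sanity check, assuming the 'model_tf2' folder produced above: load the SavedModel back and print its signatures, since the C++ loader needs the exact signature and tensor names.

```python
import tensorflow as tf

# Load the exported SavedModel back in Python and inspect its serving
# signature; the tensor names printed here are what the C++ loader
# (e.g. tensorflow::LoadSavedModel) must reference.
loaded = tf.saved_model.load('model_tf2')
print(list(loaded.signatures.keys()))        # typically ['serving_default']
infer = loaded.signatures['serving_default']
print(infer.structured_input_signature)
print(infer.structured_outputs)
```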

How to implement a Gaussian renderer with mean and variance values as input in any deep learning framework (needs to support backpropagation)

有些话、适合烂在心里 submitted on 2021-01-29 04:32:54
Question: Imagine a typical auto-encoder (encoder-decoder) model. However, instead of a generic decoder, where deconvolutions together with upscaling are used to create/synthesize a tensor similar to the model's input, I need to implement a structured/custom decoder. Here, the decoder takes its input, e.g. a 10x2 tensor where each row represents x,y positions or coordinates, and renders an image of fixed, predefined size in which 10 Gaussian distributions are generated at the locations specified by the input.
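One way to make such a renderer backpropagatable is to evaluate the Gaussians analytically on a fixed pixel grid, so gradients flow back to the input coordinates. A minimal PyTorch sketch (the framework choice is an assumption, since the question is framework-agnostic; variance is fixed and isotropic here, but it could equally be passed per row alongside the means):

```python
import torch

def render_gaussians(coords, img_size=64, sigma=2.0):
    """coords: (N, 2) tensor of (x, y) centers in pixels; fully differentiable."""
    rng = torch.arange(img_size, dtype=coords.dtype, device=coords.device)
    grid_y, grid_x = torch.meshgrid(rng, rng, indexing='ij')   # (H, W) pixel grid
    dx = grid_x[None] - coords[:, 0, None, None]               # (N, H, W)
    dy = grid_y[None] - coords[:, 1, None, None]
    blobs = torch.exp(-(dx**2 + dy**2) / (2 * sigma**2))       # one Gaussian per row
    return blobs.sum(dim=0)                                    # composite (H, W) image

coords = (torch.rand(10, 2) * 64).requires_grad_()   # leaf tensor, like the 10x2 input
image = render_gaussians(coords)
image.sum().backward()        # gradients reach the input coordinates
print(coords.grad.shape)      # torch.Size([10, 2])
```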

CNN in PyTorch: how are parameters selected and how do they flow between layers

寵の児 submitted on 2021-01-29 04:23:49
Question: I'm pretty new to CNNs and have been following the code below. I'm not able to understand how and why each argument of Conv2d() and nn.Linear() was selected, i.e. the output, filter, channels, weights, padding, and stride, although I do understand what each of them means. Can someone very succinctly explain the flow through each layer? (The input image size is 32x32x3.)

```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__(
```
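The truncated snippet appears to be the standard PyTorch CIFAR-10 tutorial network; a sketch of that net with shape annotations (an assumption about the rest of the question's code) shows how each layer's arguments follow from the previous layer's output:

```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # in_channels=3 (RGB), 6 filters, kernel_size=5:
        # 32x32x3 -> 28x28x6   (32 - 5 + 1 = 28; no padding, stride 1)
        self.conv1 = nn.Conv2d(3, 6, 5)
        # 2x2 max pooling halves the spatial size: 28x28x6 -> 14x14x6
        self.pool = nn.MaxPool2d(2, 2)
        # 14x14x6 -> 10x10x16, then pooled to 5x5x16
        self.conv2 = nn.Conv2d(6, 16, 5)
        # Linear layers must match the flattened size: 16 * 5 * 5 = 400
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)   # 10 output classes

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # -> (N, 6, 14, 14)
        x = self.pool(F.relu(self.conv2(x)))   # -> (N, 16, 5, 5)
        x = x.view(-1, 16 * 5 * 5)             # flatten -> (N, 400)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```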

Android app using HED with OpenCV and deep learning (Java)

最后都变了- submitted on 2021-01-28 21:58:46
Question: I am developing an Android app to scan documents with my phone. I am using OpenCV with Canny edge detection, and it works OK, but if I try to scan a document on a background without enough contrast between the document and the background it fails. I have tried other apps in the Play Store and they are still able to scan the document with less contrast. So I was looking for ways to improve my edge detection and found this: https://www.pyimagesearch.com/2019/03/04/holistically-nested-edge
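That tutorial runs HED (holistically-nested edge detection) through OpenCV's DNN module, which is also available on Android. A Python sketch of the same pipeline (the model file names are assumptions based on that tutorial; OpenCV needs a custom crop layer registered for HED's Caffe model):

```python
import cv2
import numpy as np

class CropLayer:
    """Custom layer HED's Caffe graph needs: center-crops the first
    input blob to the spatial size of the second."""
    def __init__(self, params, blobs):
        self.xstart = self.ystart = self.xend = self.yend = 0

    def getMemoryShapes(self, inputs):
        in_shape, target = inputs[0], inputs[1]
        batch, channels = in_shape[0], in_shape[1]
        h, w = target[2], target[3]
        self.ystart = (in_shape[2] - h) // 2
        self.xstart = (in_shape[3] - w) // 2
        self.yend, self.xend = self.ystart + h, self.xstart + w
        return [[batch, channels, h, w]]

    def forward(self, inputs):
        return [inputs[0][:, :, self.ystart:self.yend, self.xstart:self.xend]]

cv2.dnn_registerLayer('Crop', CropLayer)
net = cv2.dnn.readNetFromCaffe('deploy.prototxt', 'hed_pretrained_bsds.caffemodel')

image = cv2.imread('document.jpg')
h, w = image.shape[:2]
# HED expects a mean-subtracted BGR blob at the image's own resolution.
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(w, h),
                             mean=(104.00698793, 116.66876762, 122.67891434),
                             swapRB=False, crop=False)
net.setInput(blob)
edges = (255 * net.forward()[0, 0]).astype(np.uint8)   # edge probability map
cv2.imwrite('edges.png', edges)
```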

Arrange each pixel of a Tensor according to another Tensor

浪子不回头ぞ submitted on 2021-01-28 20:31:19
Question: I am working on image registration using deep learning with the Keras backend. The task is to register two images, fixed and moving. In the end I get a deformation field D of shape (200, 200, 2), where 200 is the image size and 2 represents the offset of each pixel, (dx, dy). I should apply D to moving and compute the loss against fixed. The problem is: is there a way to rearrange the pixels of moving according to D inside a Keras model? Answer 1: You should be able to
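A common way to apply a dense deformation field differentiably is bilinear resampling (a spatial transformer). One sketch, assuming TensorFlow Addons is available: tfa.image.dense_image_warp samples moving at grid positions displaced by the flow, and gradients propagate through both the image and D.

```python
import tensorflow as tf
import tensorflow_addons as tfa

moving = tf.random.normal([1, 200, 200, 1])   # image to be warped
fixed = tf.random.normal([1, 200, 200, 1])    # registration target
D = tf.Variable(tf.zeros([1, 200, 200, 2]))   # per-pixel offsets to learn

with tf.GradientTape() as tape:
    warped = tfa.image.dense_image_warp(moving, D)    # bilinear resampling
    loss = tf.reduce_mean(tf.square(warped - fixed))  # compare against fixed
grads = tape.gradient(loss, [D])   # gradients flow through the warp to D
```

Inside a Keras model, the same call can be wrapped in a Lambda layer so the warp becomes part of the computation graph.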

Error parsing text-format caffe.NetParameter: 54:17: Message type “caffe.ConvolutionParameter” has no field named “sparse_ratio”

泄露秘密 submitted on 2021-01-28 13:58:56
Question: I hope you are doing well. I tried to run a Python script that I downloaded from here: "https://github.com/may0324/DeepCompression-caffe/tree/master/examples/mnist". I am using Ubuntu 16.04 and Python (2.7, 3.5).

```python
import sys
import os

sparse_ratio_vec = [0.33, 0.8, 0.9, 0.8]   # sparse ratio of each layer
iters = [500, 1000, 10500, 11000, 500]     # max iteration of each stage

def generate_data_layer():
    data_layer_str = '''
name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
```
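The error in the title means the Caffe binary parsing the generated prototxt was built from a caffe.proto that has no sparse_ratio field; that field exists only in the DeepCompression fork, so the proper fix is to build and run that fork. As a rough workaround sketch (an assumption: you only want stock Caffe to parse the file, which forfeits the fork's pruning behavior), the unknown fields can be stripped before parsing:

```python
# Remove the fork-specific `sparse_ratio` lines so stock Caffe can parse the
# prototxt. This disables the pruning the fork implements; building the
# DeepCompression fork itself is the real fix. File names are hypothetical.
with open('lenet_train.prototxt') as f:
    lines = f.readlines()
with open('lenet_train_stock.prototxt', 'w') as f:
    f.writelines(line for line in lines if 'sparse_ratio' not in line)
```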

sess.run() and .eval() in TensorFlow programming

ⅰ亾dé卋堺 submitted on 2021-01-28 12:12:11
Question: In TensorFlow programming, can someone please tell me the difference between .eval() and sess.run()? What does each of them do, and when should each be used? Answer 1: A session object encapsulates the environment in which Tensor objects are evaluated. If x is a tf.Tensor object, tf.Tensor.eval is shorthand for tf.Session.run, where sess is the current tf.get_default_session. You can make a session the default as below:

```python
x = tf.constant(5.0)
y = tf.constant(6.0)
z = x * y
with tf.Session() as sess:
```
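The answer's snippet is cut off; a sketch completing it (assuming the TF1-style graph/session API the question is about) shows the two forms side by side:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()   # assumption: running TF1-style code under TF2

x = tf.constant(5.0)
y = tf.constant(6.0)
z = x * y

with tf.Session() as sess:        # `sess` is the default session in this block
    print(z.eval())               # shorthand for sess.run(z) -> 30.0
    print(sess.run(z))            # equivalent single fetch
    print(sess.run([x, y, z]))    # sess.run can fetch several tensors in one pass
```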

Computing the gradient of a model with modified weights

别等时光非礼了梦想. submitted on 2021-01-28 11:16:47
Question: I was implementing Sharpness-Aware Minimization (SAM) using TensorFlow. The algorithm, simplified, is as follows:

1. Compute the gradient using the current weights W
2. Compute ε according to the equation in the paper
3. Compute the gradient using the weights W + ε
4. Update the model using the gradient from step 3

I have implemented steps 1 and 2 already, but I am having trouble implementing step 3 given the code below:

```python
def train_step(self, data, rho=0.05, p=2, q=2):
    if (1 / p) + (1 / q) != 1:
        raise tf.python.framework.errors
```
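One way to realize step 3 is to temporarily add ε to the variables, take a second gradient, then restore the weights before applying the update. A minimal sketch of one SAM step (assumptions: the L2 case p = q = 2 from the paper, and that model, loss_fn, optimizer, and a batch (x, y) are already defined):

```python
import tensorflow as tf

def sam_train_step(model, loss_fn, optimizer, x, y, rho=0.05):
    # Step 1: gradient at the current weights W.
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)

    # Step 2: epsilon = rho * g / ||g||_2, with the norm over all variables.
    grad_norm = tf.linalg.global_norm(grads)
    eps = [rho * g / (grad_norm + 1e-12) for g in grads]

    # Step 3: temporarily perturb to W + eps and recompute the gradient there.
    for v, e in zip(model.trainable_variables, eps):
        v.assign_add(e)
    with tf.GradientTape() as tape:
        loss_perturbed = loss_fn(y, model(x, training=True))
    sam_grads = tape.gradient(loss_perturbed, model.trainable_variables)
    for v, e in zip(model.trainable_variables, eps):
        v.assign_sub(e)   # restore W before the update

    # Step 4: update W with the gradient taken at W + eps.
    optimizer.apply_gradients(zip(sam_grads, model.trainable_variables))
    return loss
```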

Keras ValueError: No gradients provided for any variable

£可爱£侵袭症+ submitted on 2021-01-28 09:01:14
Question: I've read related threads but haven't been able to solve my problem. I'm currently trying to get my model to run in order to classify 5000 different events, all of which currently fall under the same category (so my "labels" dataset consists of 5000 1s). I'm using one-hot encoding for my labels dataset:

```python
labels = np.loadtxt("/content/drive/My Drive/5000labels1.csv")
from keras.utils import to_categorical
labels = to_categorical(labels)  # convert labels to one-hot encoding
```

I then define my model like
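Since the model definition is cut off, only a general note fits here: this ValueError usually means the loss never connects to any trainable variable, most often because no loss (or a misspelled one) was passed to compile(), or because the target and output shapes do not line up. A hypothetical minimal setup that compiles and fits cleanly on labels shaped like the question's:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

labels = to_categorical(np.ones(5000))   # shape (5000, 2), mirroring the question
features = np.random.rand(5000, 10)      # placeholder features (assumption)

model = Sequential([
    Dense(16, activation='relu', input_shape=(10,)),
    Dense(labels.shape[1], activation='softmax'),
])
# A concrete loss wired into compile() is what provides gradients to train on.
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(features, labels, epochs=1, verbose=0)
```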