tensorflow

“IndexError: list index out of range” in model.fit() method when using Dataset in Tensorflow Keras classifier

Submitted by 六月ゝ 毕业季﹏ on 2021-02-11 12:36:33
Question: I'm new to TensorFlow and I'm trying to create a classifier using Keras. My training data is split into two files: one with the training examples, where each example is a vector of 64 floats, and a second with the labels, where each label is an int in the range (0, ..., SIZE) (SIZE is 100) describing a class. Both files are quite large and I can't fit them into memory, so I've used tf.data.Dataset. I create two Datasets (one for features and one for labels) and then merge them using tf.data.Dataset.zip(). However
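
A minimal sketch of the zip-and-batch pattern the question describes, using small in-memory arrays as hypothetical stand-ins for the two large files (the shapes follow the question: 64-float feature vectors, integer labels in [0, SIZE)):

```python
import numpy as np
import tensorflow as tf

SIZE = 100
# Stand-ins for the contents of the two files.
features = np.random.rand(10, 64).astype("float32")
labels = np.random.randint(0, SIZE, size=(10,)).astype("int64")

feature_ds = tf.data.Dataset.from_tensor_slices(features)
label_ds = tf.data.Dataset.from_tensor_slices(labels)

# zip() pairs each feature vector with its label; model.fit() expects
# (inputs, targets) tuples, and the dataset must be batched before use.
train_ds = tf.data.Dataset.zip((feature_ds, label_ds)).batch(4)
```

A common cause of index errors at this stage is feeding `fit()` an unbatched dataset or one whose two halves have mismatched lengths, so it is worth checking one element before training.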

Why doesn't the Adadelta optimizer decay the learning rate?

Submitted by 倖福魔咒の on 2021-02-11 12:32:07
Question: I have initialised an Adadelta optimizer in Keras (using the TensorFlow backend) and assigned it to a model: my_adadelta = keras.optimizers.Adadelta(learning_rate=0.01, rho=0.95) my_model.compile(optimizer=my_adadelta, loss="binary_crossentropy") During training, I am using a callback to print the learning rate after every epoch: class LRPrintCallback(Callback): def on_epoch_end(self, epoch, logs=None): lr = self.model.optimizer.lr print(K.eval(lr)) However, this prints the same (initial)

How to implement gradient ascent in a Keras DQN

Submitted by 落爺英雄遲暮 on 2021-02-11 12:30:01
Question: I have built a Reinforcement Learning DQN with variable-length sequences as inputs, and positive and negative rewards calculated for actions. Some problem with my DQN model in Keras means that although the model runs, average rewards decrease over time, over single and multiple cycles of epsilon. This does not change even after a significant period of training. My thinking is that this is due to using MeanSquaredError in Keras as the loss function (minimising error). So I am trying to implement
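
One standard way to get gradient ascent out of a minimisation API is simply to negate the objective: minimising `-f` ascends `f`. A self-contained sketch (a toy objective, not the question's DQN loss) with a plain TensorFlow training loop:

```python
import tensorflow as tf

# Maximise f(x) = -(x - 3)^2, whose maximum is at x = 3,
# by minimising its negation with a stock optimizer.
x = tf.Variable(0.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(100):
    with tf.GradientTape() as tape:
        objective = -(x - 3.0) ** 2  # quantity we want to maximise
        loss = -objective            # minimising this ascends the objective
    grads = tape.gradient(loss, [x])
    opt.apply_gradients(zip(grads, [x]))
```

After the loop `x` is close to 3.0. Note, though, that a standard DQN is supposed to *minimise* the TD error with MSE; if rewards still trend downward, the reward signal or target computation is a likelier culprit than the loss direction.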

Is there a way to generate real time depthmap from single camera video in python/opencv?

Submitted by 蓝咒 on 2021-02-11 12:23:25
Question: I'm trying to convert single images into depth maps, but I can't find any useful tutorial or documentation. I'd like to use OpenCV, but if you know a way to get the depth map using, for example, TensorFlow, I'd be glad to hear it. There are numerous tutorials for stereo vision, but I want to make it cheaper because it's for a project to help blind people. I'm currently using an ESP32-CAM to stream frame by frame, receiving the images in Python using OpenCV. Answer 1: Usually, we need a

TFX Pipeline Error While Executing TFMA: AttributeError: 'NoneType' object has no attribute 'ToBatchTensors'

Submitted by 本秂侑毒 on 2021-02-11 12:21:02
Question: Basically I only reused code from the iris utils and iris pipeline examples, with a minor change to the serving input: def _get_serve_tf_examples_fn(model, tf_transform_output): model.tft_layer = tf_transform_output.transform_features_layer() feature_spec = tf_transform_output.raw_feature_spec() print(feature_spec) feature_spec.pop(_LABEL_KEY) @tf.function def serve_tf_examples_fn(*args): parsed_features = {} for arg in args: parsed_features[arg.name.split(":")[0]] = arg print(parsed_features) transformed

How do I build a TFmodel from NumPy array files?

Submitted by 家住魔仙堡 on 2021-02-11 12:19:54
Question: I have a directory with NumPy array files: bias1.npy, kernel1.npy, bias2.npy, kernel2.npy. How can I build a TF model that uses those arrays as the kernels and biases of its layers? Answer 1: To avoid confusion: for consistency with the NumPy files, the bias matrix is a 2D matrix with one column. This post shows how I reproduced TF's model from the NumPy weights and biases. class NumpyInitializer(tf.keras.initializers.Initializer): # custom class converting numpy arrays to tf's initializers # used to
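
Besides a custom initializer, a simpler route is to build the model first and then call `set_weights` per layer. A sketch with random stand-ins for the `.npy` files (in practice they would come from `np.load`); the layer sizes here are hypothetical and must match the array shapes:

```python
import numpy as np
import tensorflow as tf

# Stand-ins for np.load("kernel1.npy") etc.; shapes chosen to match the model.
kernel1 = np.random.rand(64, 32).astype("float32")
bias1 = np.random.rand(32).astype("float32")
kernel2 = np.random.rand(32, 10).astype("float32")
bias2 = np.random.rand(10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10),
])
# Each Dense layer expects [kernel, bias] in that order.
model.layers[0].set_weights([kernel1, bias1])
model.layers[1].set_weights([kernel2, bias2])
```

If a saved bias file is a 2D one-column matrix as the answer notes, it would need flattening (`bias.reshape(-1)`) before `set_weights`.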

Keras Deploy for Tensorflow.js Usage

Submitted by 帅比萌擦擦* on 2021-02-11 12:05:04
Question: I need to be able to deploy a Keras model for TensorFlow.js prediction, but the Firebase docs only seem to support a TFLite object, which TF.js cannot accept. TF.js appears to accept JSON files for loading (loadGraphModel() / loadLayersModel()), but not a Keras SavedModel (.pb + /assets + /variables). How can I achieve this? Note for the TensorFlow.js portion: there are a lot of pointers to the tfjs_converter, but the closest API function offered to what I'm looking for is the
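
The usual bridge is to export the Keras model to HDF5 and run it through the converter from the `tensorflowjs` pip package, which emits the `model.json` + weight-shard format that `loadLayersModel()` expects. A sketch (the model here is a trivial stand-in):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
model.save("model.h5")  # Keras HDF5 format, accepted by the converter

# Then, assuming the tensorflowjs pip package is installed, run on the CLI:
#   tensorflowjs_converter --input_format keras model.h5 web_model/
# web_model/ will contain model.json plus binary weight shards, which
# tf.loadLayersModel("web_model/model.json") can load in the browser.
```

The generated directory can be hosted as static files (e.g. on Firebase Hosting) rather than deployed through the TFLite-only model API.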

Input Pipeline for LSTM with Timeseries Data Using a Large Dataset with Multiple .csv in Tensorflow

Submitted by 此生再无相见时 on 2021-02-11 11:59:32
Question: Currently I can train an LSTM network using one CSV file, based on this tutorial: https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/ This code generates sliding windows where the last n_steps of the features are saved to predict the actual target (similar to this: Keras LSTM - feed sequence data with Tensorflow dataset API from the generator): #%% Import import pandas as pd import tensorflow as tf from tensorflow.python.keras.models import Sequential,
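
The sliding-window step itself can be pushed into `tf.data`, which then scales to many files (e.g. by building one dataset per CSV and interleaving or concatenating them). A sketch on a toy series standing in for rows read from the CSVs:

```python
import numpy as np
import tensorflow as tf

n_steps = 3
# Stand-in for a column read from one CSV file.
series = tf.data.Dataset.from_tensor_slices(np.arange(10, dtype="float32"))

# Each window holds n_steps inputs plus one target value.
windows = series.window(n_steps + 1, shift=1, drop_remainder=True)
# window() yields sub-datasets; batch each one into a single tensor.
windows = windows.flat_map(lambda w: w.batch(n_steps + 1))
# Split each window into (features, target).
dataset = windows.map(lambda w: (w[:-1], w[-1]))
```

The first element is `([0., 1., 2.], 3.)`. For multiple CSVs, the per-file windowing keeps windows from spanning a file boundary, which a naive concatenation of raw rows would not.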

Reward function for learning to play Curve Fever game with DQN

Submitted by 会有一股神秘感。 on 2021-02-11 10:40:41
Question: I've made a simple version of Curve Fever, also known as "Achtung, die Kurve!". I want the machine to figure out how to play the game optimally. I copied and slightly modified an existing DQN from some Atari game examples, built with Google's TensorFlow. I'm trying to figure out an appropriate reward function. Currently, I use this reward setup: 0.1 for every frame it does not crash; -500 for every crash. Is this the right approach? Do I need to tweak the values? Or do I need a completely
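
The question's reward scheme is easy to state in code; the concern with such values is scale imbalance, since a single crash at -500 outweighs 5000 frames of survival at +0.1, which can dominate the TD targets. A sketch using the question's own constants (which would need tuning):

```python
# Per-frame reward exactly as described in the question.
SURVIVE_REWARD = 0.1
CRASH_PENALTY = -500.0

def reward(crashed: bool) -> float:
    return CRASH_PENALTY if crashed else SURVIVE_REWARD

# Undiscounted return of an episode that survives n frames then crashes:
def episode_return(n_frames: int) -> float:
    return n_frames * SURVIVE_REWARD + CRASH_PENALTY
```

With these values every episode shorter than 5000 frames has a negative return, so a milder penalty (or normalising rewards to roughly [-1, 1], as many DQN implementations do) is a common adjustment.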