tensorflow2.0

WARNING:tensorflow:sample_weight modes were coerced from … to ['…']

心已入冬 submitted on 2020-06-24 04:58:07
Question: Training an image classifier using .fit_generator() or .fit() and passing a dictionary to class_weight= as an argument. I never got errors in TF1.x, but in 2.1 I get the following output when starting training: WARNING:tensorflow:sample_weight modes were coerced from ... to ['...'] What does it mean to coerce something from ... to ['...']? The source for this warning in tensorflow's repo is here; the comments placed there are: Attempt to coerce sample_weight_modes to the target structure. This
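For context, the warning in question is benign: Keras wraps a single output's sample-weight mode into a one-element list to match the model's output structure. A minimal sketch of the kind of call that surfaces it on TF 2.1 (the toy arrays and tiny Dense model below are stand-ins for the questioner's image generator and classifier):

```python
import numpy as np
import tensorflow as tf

# Hypothetical two-class toy data standing in for the image generator.
x = np.random.rand(32, 8).astype("float32")
y = np.random.randint(0, 2, size=(32,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Passing class_weight= is part of the setup that surfaces the (harmless)
# coercion warning in TF 2.1; keys are class indices, values are weights.
history = model.fit(x, y, class_weight={0: 1.0, 1: 2.0}, epochs=1, verbose=0)
```

Training proceeds normally either way; the warning does not indicate that class weights are being ignored.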

How to create a model easily convertible to TensorFlow Lite?

谁都会走 submitted on 2020-06-23 14:27:28
Question: How to create a TensorFlow model which can be converted to TensorFlow Lite (tflite) and used in an Android application? Following the examples in the Google ML Crash Course, I created a classifier and trained a model. I exported the model as a SavedModel. I wanted to convert the model to a .tflite file and use it for inference on Android. Soon (actually later) I understood that my model uses an unsupported operation, ParseExampleV2. Here is the classifier I'm using for training the model:
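ParseExampleV2 typically enters the graph through estimator/feature-column input pipelines rather than the model itself. A hedged sketch of the approach that usually sidesteps it: build the classifier from plain Keras layers (the tiny Dense model below is illustrative, not the questioner's architecture) and convert the in-memory model directly.

```python
import tensorflow as tf

# Hypothetical minimal classifier built from plain Keras layers, which keeps
# the graph to ops in TFLite's builtin set (no ParseExampleV2, which comes
# from estimator/feature-column serving signatures).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Convert directly from the in-memory Keras model; the result is the raw
# bytes of the .tflite flatbuffer, ready to write to disk for Android.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
```

Writing `tflite_bytes` to a file with `open("model.tflite", "wb").write(tflite_bytes)` produces the artifact the Android interpreter loads.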

How to use tensorflow's FFT?

痴心易碎 submitted on 2020-06-17 15:51:51
Question: I am having some trouble reconciling my FFT results from MATLAB and TF; the results are actually very different. Here is what I have done: 1) I would attach my data file here but didn't find a way to do so. Anyway, my data is stored in a .mat file, and the variable we will work with is called 'TD'. In MATLAB, I first subtract the mean of the data and then perform the FFT: f_hat = TD-mean(TD); x = fft(f_hat); 2) In TF, I use tf.math.reduce_mean to calculate the mean, and it only differs from
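A sketch of how the two pipelines can be made to agree. The usual culprit is dtype handling: tf.signal.fft requires a complex input, and casting to complex64 (float32 precision) instead of complex128 produces results that look "very different" from MATLAB's double-precision FFT. The 128-point random vector below is a stand-in for 'TD'; NumPy's fft matches MATLAB's conventions, so it serves as the reference.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the 'TD' variable from the .mat file.
td = np.random.rand(128).astype(np.float64)

# MATLAB-style: subtract the mean, then FFT (NumPy matches MATLAB here).
f_hat = td - td.mean()
x_np = np.fft.fft(f_hat)

# tf.signal.fft expects a complex tensor; casting float64 -> complex128
# keeps precision comparable to MATLAB's double-precision fft.
td_tf = tf.constant(td)
f_hat_tf = td_tf - tf.math.reduce_mean(td_tf)
x_tf = tf.signal.fft(tf.cast(f_hat_tf, tf.complex128))

print(np.allclose(x_np, x_tf.numpy()))  # expect True
```

As a side check, the DC bin (index 0) should be ~0 after mean subtraction in both libraries, which is a quick way to confirm the mean was removed consistently.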

The output of my regression NN with LSTMs is wrong even with low val_loss

亡梦爱人 submitted on 2020-06-17 09:41:47
Question: The Model: I am currently working on a stack of LSTMs, trying to solve a regression problem. The architecture of the model is as below: comp_lstm = tf.keras.models.Sequential([ tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.LSTM(64), tf.keras.layers.Dense(units
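The excerpt cuts off inside the final Dense layer. A completed sketch of the stacked-LSTM regressor, assuming the usual single-value regression head (Dense(1) with linear activation; the compile settings are also assumptions, not from the question):

```python
import tensorflow as tf

# Completed sketch of the stacked-LSTM regressor from the question.
comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),                  # last LSTM returns only the final state
    tf.keras.layers.Dense(units=1),            # assumed: linear head for regression
])
comp_lstm.compile(optimizer="adam", loss="mse")
```

Note that the intermediate LSTMs need return_sequences=True so the next LSTM receives a full sequence, while the last one collapses the sequence before the Dense head.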

Model fitting doesn't use all of the provided data [duplicate]

自闭症网瘾萝莉.ら submitted on 2020-06-17 09:38:06
Question: This question already has answers here: Keras not training on entire dataset (2 answers); TensorFlow only running on 1/32 of the training data provided (1 answer). Closed last month. I ran into a problem when playing with the introductory tutorial for TensorFlow 2.0 Keras (https://www.tensorflow.org/tutorials/keras/classification). The problem: there should be (and there are) 60,000 images to fit the model. I checked this by printing out the length of train_images and train_labels. The
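The usual explanation, and the one behind the linked duplicates, is that in TF 2.x the fit() progress bar counts batches (steps), not samples. With the default batch size of 32, all 60,000 images are used, but the bar tops out at 1875, which looks like 1/32 of the data. A quick sanity check of the arithmetic:

```python
# In TF 2.x, model.fit's progress bar counts *batches*, not samples.
# With the default batch_size of 32, 60,000 images display as 1875 steps,
# which makes it look as if only a fraction of the data is used.
num_samples = 60_000
batch_size = 32  # Keras default
steps_per_epoch = num_samples // batch_size
print(steps_per_epoch)  # → 1875
```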

Tensorflow 2.0rc not detecting GPUs

☆樱花仙子☆ submitted on 2020-06-17 02:14:25
Question: TF2 is currently not detecting GPUs. I migrated from TF1.14, where tf.keras.utils.multi_gpu_model(model=model, gpus=2) was used; it is now returning an error: ValueError: To call `multi_gpu_model` with `gpus=2`, we expect the following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1']. However this machine only has: ['/cpu:0', '/xla_cpu:0', '/xla_gpu:0', '/xla_gpu:1', '/xla_gpu:2', '/xla_gpu:3']. Try reducing `gpus`. Running nvidia-smi returns the following information +-------------------------
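A sketch of the TF2-native checks. The XLA_GPU devices in the error message are not usable by Keras; only plain GPU devices count, so the first step is to see what the runtime registers. multi_gpu_model is deprecated in TF2, and tf.distribute.MirroredStrategy is the supported replacement (it also falls back gracefully to one CPU replica on a machine with no visible GPUs, as in this sketch's environment):

```python
import tensorflow as tf

# Only '/gpu:N' devices (not the '/xla_gpu:N' ones from the error message)
# are usable by Keras; this lists what the runtime actually registered.
gpus = tf.config.list_physical_devices("GPU")
print(gpus)

# multi_gpu_model is deprecated in TF2; MirroredStrategy is the supported
# replacement. With no visible GPUs it runs with a single CPU replica.
strategy = tf.distribute.MirroredStrategy()
print(strategy.num_replicas_in_sync)
```

If list_physical_devices returns an empty list while nvidia-smi sees the cards, the mismatch is usually a CUDA/cuDNN version incompatible with the installed TF build rather than a TF bug.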

In TensorFlow 2.0, how can I see the number of elements in a dataset?

邮差的信 submitted on 2020-06-16 13:03:21
Question: When I load a dataset, I wonder if there is any quick way to find the number of samples or batches in that dataset. I know that if I load a dataset with with_info=True, I can see, for example, total_num_examples=6000, but this information is not available if I split a dataset. Currently, I count the number of samples as follows, but am wondering if there is a better solution: train_subsplit_1, train_subsplit_2, train_subsplit_3 = tfds.Split.TRAIN.subsplit(3) cifar10_trainsub3 = tfds.load(
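For a plain tf.data pipeline (as opposed to tfds split metadata), the cardinality helper often answers this without iterating, provided the size is statically known. A sketch with a toy dataset of 6000 elements standing in for the split:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(6000).batch(32)

# tf.data.experimental.cardinality returns the number of *elements* (here,
# batches) without iterating, when the size is statically known; otherwise
# it returns the UNKNOWN_CARDINALITY sentinel.
n_batches = tf.data.experimental.cardinality(ds).numpy()
print(n_batches)  # → 188  (ceil(6000 / 32))

# Fallback when cardinality is UNKNOWN: count by iterating (one full pass).
n_iter = sum(1 for _ in ds)
```

Datasets coming through arbitrary generators or filters report UNKNOWN, in which case the iterate-and-count fallback is the only exact option.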

What are _get_hyper and _set_hyper in TensorFlow optimizers?

▼魔方 西西 submitted on 2020-06-16 04:18:22
Question: I see it in __init__ of, e.g., the Adam optimizer: self._set_hyper('beta_1', beta_1). There are also _get_hyper and _serialize_hyperparameter throughout the code. I don't see these in Keras optimizers - are they optional? When should or shouldn't they be used when creating custom optimizers? Answer 1: They enable setting and getting Python literals (int, str, etc.), callables, and tensors. Usage is for convenience and consistency: anything set via _set_hyper can be retrieved via _get_hyper,
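A minimal custom-optimizer sketch of the pair in action. ToySGD is an illustrative name, not a real API; the _set_hyper/_get_hyper methods belong to the "v2" optimizer base class, which newer TF releases expose under tf.keras.optimizers.legacy, so the guard below picks whichever module provides it.

```python
import tensorflow as tf

# The v2 optimizer base class moved under tf.keras.optimizers.legacy in
# newer releases; fall back to the top-level module on older ones.
optimizers = getattr(tf.keras.optimizers, "legacy", tf.keras.optimizers)

class ToySGD(optimizers.Optimizer):
    def __init__(self, learning_rate=0.01, name="ToySGD", **kwargs):
        super().__init__(name, **kwargs)
        # Floats, tensors, and callables (e.g. LR schedules) are all
        # accepted; _set_hyper stores them uniformly.
        self._set_hyper("learning_rate", learning_rate)

    def _resource_apply_dense(self, grad, var, apply_state=None):
        # Retrieve the stored value cast to the variable's dtype.
        lr = self._get_hyper("learning_rate", var.dtype.base_dtype)
        return var.assign_sub(lr * grad)

    def get_config(self):
        config = super().get_config()
        # _serialize_hyperparameter makes tensors/schedules JSON-friendly.
        config["learning_rate"] = self._serialize_hyperparameter("learning_rate")
        return config

opt = ToySGD(learning_rate=0.1)
print(float(opt._get_hyper("learning_rate", tf.float32)))
```

The payoff is consistency: everything stored via _set_hyper round-trips through get_config/from_config and can be read back at any dtype, which hand-rolled attributes do not give you for free.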