tensorflow

How to fine-tune a keras model with existing plus newer classes?

Submitted by 南笙酒味 on 2021-01-27 14:03:10
Question: Good day! I have a celebrity dataset on which I want to fine-tune a Keras built-in model. So far, from what I have explored and done, we remove the top layers of the original model (or, preferably, pass include_top=False) and add our own layers, and then train our newly added layers while keeping the previous layers frozen. This whole process is fairly intuitive. Now what I require is that my model learns to identify the celebrity faces while also being able to detect all the other …
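
Below is a minimal sketch of the frozen-base setup the question describes; the MobileNetV2 base, input shape, head sizes, and the num_celebrities count are all assumptions, not the asker's actual code.

    import tensorflow as tf

    # Pretrained base without its classification head
    base = tf.keras.applications.MobileNetV2(include_top=False,
                                             weights="imagenet",
                                             input_shape=(224, 224, 3),
                                             pooling="avg")
    base.trainable = False  # keep the pretrained layers frozen

    num_celebrities = 100  # hypothetical class count
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(num_celebrities, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])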

Transfer learning: model is giving unchanged loss results. Is it not training? [closed]

Submitted by 一笑奈何 on 2021-01-27 13:33:14
Question (closed as opinion-based; no longer accepting answers): I'm trying to train a regression model on Inception V3. Inputs are images of size (96, 320, 3). There are 16k+ images in total, of which 12k+ are for training and the rest for validation. I have frozen all layers in Inception, but unfreezing them …
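
For reference, a frozen-Inception regression head typically looks like the sketch below; the input shape comes from the question, while the pooling, head size, and optimizer are assumptions.

    import tensorflow as tf

    base = tf.keras.applications.InceptionV3(include_top=False,
                                            weights="imagenet",
                                            input_shape=(96, 320, 3),
                                            pooling="avg")
    for layer in base.layers:
        layer.trainable = False  # freeze every Inception layer

    # Single-unit linear head for regression
    output = tf.keras.layers.Dense(1, activation="linear")(base.output)
    model = tf.keras.Model(base.input, output)
    model.compile(optimizer="adam", loss="mse")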

neural network does not learn (loss stays the same)

Submitted by 我与影子孤独终老i on 2021-01-27 13:14:35
Question: My project partner and I are currently facing a problem in our latest university project. Our task is to implement a neural network that plays the game Pong. We feed the ball position, the ball speed, and the positions of the paddles to our network, and it has three outputs: UP, DOWN, DO_NOTHING. After a player reaches 11 points, we train the network with all states, the decisions made, and the rewards for those decisions (see reward_cal()). The problem we are facing is that the loss is …
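
A minimal sketch of a network with the inputs and three actions the question lists; the six-feature input layout (ball x/y, ball vx/vy, two paddle positions) and the layer sizes are assumptions.

    import tensorflow as tf

    model = tf.keras.Sequential([
        # 6 inputs: ball x/y, ball vx/vy, paddle positions (assumed layout)
        tf.keras.layers.Dense(64, activation="relu", input_shape=(6,)),
        tf.keras.layers.Dense(64, activation="relu"),
        # three action outputs: UP, DOWN, DO_NOTHING
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")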

ValueError: Data cardinality is ambiguous

Submitted by 跟風遠走 on 2021-01-27 13:09:35
Question: I'm trying to train an LSTM network on data taken from a DataFrame. Here's the code:

    x_lstm = x.to_numpy().reshape(1, x.shape[0], x.shape[1])

    model = keras.models.Sequential([
        keras.layers.LSTM(x.shape[1], return_sequences=True,
                          input_shape=(x_lstm.shape[1], x_lstm.shape[2])),
        keras.layers.LSTM(NORMAL_LAYER_SIZE, return_sequences=True),
        keras.layers.LSTM(NORMAL_LAYER_SIZE),
        keras.layers.Dense(y.shape[1])
    ])

    optimizer = keras.optimizers.Adadelta()
    model.compile(loss="mse", optimizer=optimizer)

    for i in …
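
This error is typically raised when x and y disagree on the number of samples: here x_lstm reports exactly one sample after the reshape, while y presumably still has one row per timestep. A sketch of the idea, continuing from the names above and assuming y is also meant to be a single sample of targets:

    # Hypothetical fix: give y the same sample count (1) as x_lstm
    y_lstm = y.to_numpy().reshape(1, -1)

    # Both arrays now report one sample in their first dimension
    assert x_lstm.shape[0] == y_lstm.shape[0]
    model.fit(x_lstm, y_lstm, epochs=10)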

Choose random validation data set

Submitted by 纵然是瞬间 on 2021-01-27 13:09:05
Question: I have a NumPy array of data that is generated over time by a simulation. Based on this, I'm using TensorFlow and Keras to train a neural network, and my question refers to this line of code in my model:

    model.fit(X1, Y1, epochs=1000, batch_size=100, verbose=1, shuffle=True, validation_split=0.2)

After reading the Keras documentation, I found out that the validation data set (in this case 20% of the original data) is sliced from the end. As I'm generating data …
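
Indeed, validation_split always takes the last fraction of the arrays, and shuffle=True only shuffles the training portion after that split. A common workaround is to shuffle the whole dataset yourself first; a sketch using the names from the question:

    import numpy as np

    # Random permutation over all samples, applied before Keras slices
    # off the last 20% for validation
    idx = np.random.permutation(len(X1))
    X1_shuffled, Y1_shuffled = X1[idx], Y1[idx]

    model.fit(X1_shuffled, Y1_shuffled, epochs=1000, batch_size=100,
              verbose=1, shuffle=True, validation_split=0.2)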

Does tf.keras.layers.Conv1D support RaggedTensor input?

Submitted by ⅰ亾dé卋堺 on 2021-01-27 13:04:47
Question: The TensorFlow Conv1D layer documentation says: 'When using this layer as the first layer in a model, provide an input_shape argument (tuple of integers or None, e.g. (10, 128) for sequences of 10 vectors of 128-dimensional vectors, or (None, 128) for variable-length sequences of 128-dimensional vectors.' So I understand that we can input variable-length sequences, but when I use a ragged tensor as input to a Conv1D layer, it gives me an error: ValueError: Layer conv1d does not support …
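
One workaround (an assumption, not from the original post) is to pad the ragged batch to a dense tensor before it reaches Conv1D:

    import tensorflow as tf

    # Two variable-length sequences of 128-dimensional vectors
    ragged = tf.ragged.constant([[[1.0] * 128] * 10,
                                 [[2.0] * 128] * 7])
    dense = ragged.to_tensor()  # zero-pads to shape (2, 10, 128)

    conv = tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu")
    out = conv(dense)  # Conv1D accepts the padded dense tensor

    # Caveat: the convolution also slides over the zero-padded steps,
    # so downstream code may need masking.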

Can you activate multiple Python virtual environments at once?

Submitted by 柔情痞子 on 2021-01-27 12:50:20
Question: I want to use tensorflow through a virtual environment. However, the Python script I want to run requires me to use a separate virtual environment that does not include tensorflow. Is it possible to activate these simultaneously? If not, can I merge the two virtual environments somehow?

Answer 1: Check this out. You could also activate different virtual environments in different terminal sessions.

Answer 2: You could try adding the site-packages dir of the other virtualenv to your PYTHONPATH variable.
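
A sketch of the PYTHONPATH idea from Answer 2, done from inside Python instead; the path is hypothetical and depends on your Python version and venv location:

    import sys

    # Hypothetical location of the other venv's installed packages
    OTHER_VENV_SITE = "/path/to/other_venv/lib/python3.8/site-packages"
    if OTHER_VENV_SITE not in sys.path:
        sys.path.append(OTHER_VENV_SITE)

    import tensorflow as tf  # now importable from the other venv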

Reducing TFLite model size?

Submitted by 一世执手 on 2021-01-27 12:41:44
Question: I'm currently building a multi-label image classification model by following this guide (it uses Inception as the base model): https://towardsdatascience.com/multi-label-image-classification-with-inception-net-cbb2ee538e30 After converting from .pb to .tflite, the model is only approximately 0.3 MB smaller. Here is my conversion code:

    toco \
      --graph_def_file=optimized_graph.pb \
      --output_file=output/optimized_graph.tflite \
      --output_format=TFLITE \
      --input_shape=1,299,299,3 \
      --input_array=Mul \
      …
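
One common way to shrink a TFLite model substantially (an assumption, not part of the original post) is post-training weight quantization. A sketch using the TF 1.x-style converter, matching the frozen graph above; the output array name is hypothetical:

    import tensorflow as tf

    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file="optimized_graph.pb",
        input_arrays=["Mul"],
        output_arrays=["final_result"],  # hypothetical output tensor name
    )
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
    tflite_model = converter.convert()

    with open("output/optimized_graph.tflite", "wb") as f:
        f.write(tflite_model)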

dynamically catch exceptions in TensorFlow as part of the graph execution

Submitted by 烈酒焚心 on 2021-01-27 12:12:08
Question: For example, the QueueBase.dequeue function can raise an OutOfRangeError exception, which I will receive in Python from the Session.run call. Is there any way I can catch the exception inside the graph, similar to tf.cond? E.g. something like:

    result = tf.on_exception(queue.dequeue(), lambda: 42)

Maybe the first argument would also need to be a lambda so that it can properly set the context. To make this work, as with tf.cond, the results from both arguments would need to be of the same type.

Answer 1: …
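
The tf.on_exception the question proposes is hypothetical; the usual pattern is to catch the error on the Python side of Session.run. A runnable TF1-compat sketch with the 42 fallback from the question:

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()
    queue = tf.compat.v1.FIFOQueue(capacity=10, dtypes=[tf.int32])
    enqueue = queue.enqueue(7)
    dequeue = queue.dequeue()
    close = queue.close()

    with tf.compat.v1.Session() as sess:
        sess.run(enqueue)
        sess.run(close)
        print(sess.run(dequeue))       # prints 7
        try:
            print(sess.run(dequeue))   # queue is closed and empty
        except tf.errors.OutOfRangeError:
            print(42)                  # Python-side fallback value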