tensorflow2.0

How do I get the gradient of a keras model with respect to its inputs?

我的未来我决定 submitted on 2020-01-22 03:09:18
Question: I just asked a question on the same topic but for custom models (How do I find the derivative of a custom model in Keras?), but quickly realised that this was trying to run before I could walk, so that question has been marked as a duplicate of this one. I've tried to simplify my scenario and now have a (non-custom) Keras model consisting of two Dense layers:

    inputs = tf.keras.Input((cols,), name='input')
    layer_1 = tf.keras.layers.Dense(10, name='layer_1', input_dim=cols, use_bias=True, kernel…
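
For reference, a minimal sketch of the standard TF 2.x approach: run the model under tf.GradientTape and differentiate the output with respect to the (explicitly watched) input tensor. The toy model stands in for the two-Dense-layer model above, with cols fixed to 5 for the example:

    import tensorflow as tf

    inputs = tf.keras.Input((5,), name='input')
    x = tf.keras.layers.Dense(10, activation='relu', name='layer_1')(inputs)
    outputs = tf.keras.layers.Dense(1, name='layer_2')(x)
    model = tf.keras.Model(inputs, outputs)

    x_batch = tf.random.normal((3, 5))
    with tf.GradientTape() as tape:
        tape.watch(x_batch)              # the input is a plain Tensor, so watch it explicitly
        y = model(x_batch)
    grads = tape.gradient(y, x_batch)    # shape (3, 5): d(output)/d(input) per sample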

How can I check/release GPU-memory in tensorflow 2.0b?

守給你的承諾、 submitted on 2020-01-21 05:34:05
Question: In my tensorflow2.0b program I get an error like this:

    ResourceExhaustedError: OOM when allocating tensor with shape[727272703] and type int8 on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:TopKV2]

The error occurs after a number of GPU-based operations in this program have executed successfully. I would like to release all GPU memory associated with these past operations in order to avoid the above error. How can I do this in tensorflow-2.0b? How could I check…
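
For reference, a hedged sketch of the usual mitigation: TF reserves nearly all GPU memory up front by default, so enabling memory growth often avoids this class of OOM. The introspection API mentioned in the comment is an assumption about later releases, not about 2.0b:

    import tensorflow as tf

    # Allocate GPU memory incrementally instead of reserving it all at start-up.
    # Must run before the first GPU op creates the device context.
    for gpu in tf.config.experimental.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)

    # To *check* usage: tf.config.experimental.get_memory_info('GPU:0') exists in
    # later 2.x releases (not 2.0b); on 2.0b, nvidia-smi or pynvml can read the
    # allocator state from outside the process.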

Is there an alternative to tf.py_function() for custom Python code?

生来就可爱ヽ(ⅴ<●) submitted on 2020-01-16 19:33:12
Question: I have started using TensorFlow 2.0 and have some uncertainty about one aspect. Suppose I have this use case: while ingesting data with tf.data.Dataset, I want to apply some specific augmentation operations to some images. However, the external libraries I am using require the image to be a numpy array, not a tensor. When using tf.data.Dataset.from_tensor_slices(), the flowing data needs to be of type Tensor. Concrete example:

    def my_function(tensor_image):
        print…
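
For reference, a minimal sketch of the standard pattern, which still goes through the py_function family: tf.numpy_function hands each element to Python as a numpy array, so any external library can process it. The augmentation below is a hypothetical stand-in, and the static shape has to be restored by hand:

    import numpy as np
    import tensorflow as tf

    def augment_np(image):
        # 'image' arrives here as a numpy array, so any external library can be used.
        return np.fliplr(image).astype(np.float32)

    def tf_augment(image):
        out = tf.numpy_function(augment_np, [image], tf.float32)
        out.set_shape(image.shape)  # shape information is lost across the Python boundary
        return out

    images = np.random.rand(8, 32, 32, 3).astype(np.float32)
    ds = tf.data.Dataset.from_tensor_slices(images).map(tf_augment)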

Importing Tensorflow 2.0 gpu from different Processes

柔情痞子 submitted on 2020-01-16 09:03:09
Question: I'm working on a project with a Python module that implements an iterative process, where some computations are performed on the GPU using tensorflow 2.0. The module works correctly when used stand-alone from a single process. Since I have to perform several runs with different parameters, I'd like to parallelize the calls, but when I call the module (which imports tensorflow) from a different process, I get CUDA_ERROR_OUT_OF_MEMORY and an infinite loop of CUDA_ERROR_NOT_INITIALIZED, so the…
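
For reference, a hedged sketch of the common workaround: start workers with the 'spawn' method and import TensorFlow only inside the child, so no half-initialised CUDA context is inherited through fork(). The worker body here is a placeholder for the real module call:

    import multiprocessing as mp

    def worker(params):
        # Import TF inside the child so each process creates its own CUDA context;
        # a context inherited through fork() is what typically triggers
        # CUDA_ERROR_NOT_INITIALIZED.
        import tensorflow as tf
        for gpu in tf.config.experimental.list_physical_devices('GPU'):
            tf.config.experimental.set_memory_growth(gpu, True)
        return float(tf.reduce_sum(tf.random.normal((params, params))))

    if __name__ == '__main__':
        ctx = mp.get_context('spawn')   # 'spawn' avoids forking an initialised CUDA context
        with ctx.Pool(2) as pool:
            print(pool.map(worker, [128, 256]))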

Install TensorFlow addons

假如想象 submitted on 2020-01-16 08:36:10
Question: I have a venv with the following details: Python 3.6, TensorFlow 2.0.0. I tried to install tensorflow-addons using the following:

    pip install -q --no-deps tensorflow-addons~=0.6

But I keep receiving the following error:

    Could not find a version that satisfies the requirement tensorflow-addons~=0.6 (from versions: )
    No matching distribution found for tensorflow-addons~=0.6
    You are using pip version 18.0, however version 19.3.1 is available.
    You should consider upgrading via the 'python -m…
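
A likely explanation, offered as an assumption rather than from the original thread: tensorflow-addons is distributed as manylinux2010 wheels, which pip 18 cannot match, hence the empty "(from versions: )" list. The usual fix is exactly what the warning suggests:

    python -m pip install --upgrade pip
    pip install --no-deps "tensorflow-addons~=0.6"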

Visualize TFLite graph and get intermediate values of a particular node?

北慕城南 submitted on 2020-01-16 04:27:25
Question: I was wondering whether there is a way to know the list of inputs and outputs for a particular node in TFLite. I know that I can get input/output details, but this does not allow me to reconstruct the computation process that happens inside an Interpreter. So what I do is:

    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.get_tensor_details()

The…
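
For reference, a hedged sketch built on that same API: get_tensor_details() lists every tensor in the graph, and get_tensor() can read one after invoke(). The model path and the tensor index 42 are hypothetical, and note that the runtime may reuse intermediate buffers, so values read this way are not guaranteed to survive:

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path='model.tflite')  # hypothetical path
    interpreter.allocate_tensors()

    for t in interpreter.get_tensor_details():
        print(t['index'], t['name'], t['shape'], t['dtype'])

    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
    interpreter.invoke()
    intermediate = interpreter.get_tensor(42)  # 42 = hypothetical index from the listing above

For the graph structure itself, a viewer such as Netron can render a .tflite file.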

Moved to Tensorflow 2.0, training now hangs after third step

纵饮孤独 submitted on 2020-01-15 11:44:06
Question: Recently I decided to move from Tensorflow version 1.14 (GPU variant) to the current version 2.0. My current setup is: Tensorflow (GPU variant) 2.0, cuDNN 7.6.4, CUDA 10, Python 3.6, IDE: Visual Studio 2019. I did expect there would be some pain involved, but this caught me off guard. When I tried to run one of my (now adjusted) 1.14 projects, the model built with no issue and the training process began smoothly, only to stop completely after the third step. The same project runs just fine on CPU…
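
Not from the original thread, but a cheap diagnostic sketch: logging device placement makes TF print the device of every op it runs, so the last line printed before the hang points at the stalling kernel:

    import tensorflow as tf

    # Call at program start, before any ops are created.
    tf.debugging.set_log_device_placement(True)

    # ...build and fit the model as usual; each op now logs its device as it executes.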

Setting hyperparameter optimization bounds in GPflow 2.0

拜拜、爱过 submitted on 2020-01-15 08:20:07
Question: In GPflow 1.0, if I wanted to set hard bounds on a parameter like lengthscale (i.e. to limit the optimisation range for the parameter), transforms.Logistic(a=4., b=6.) would bound the parameter between 4 and 6. GPflow 2.0's documentation says that transforms are handled by TensorFlow Probability's Bijector classes. Which Bijector class handles setting hard limits on parameters, and what is the proper way to implement it? A similar question was asked here (Kernel's hyper-parameters;…
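
For reference, a sketch of one common answer, assuming GPflow 2 with the TFP of that era: there is no single "hard bounds" bijector, but chaining Sigmoid (which maps onto (0, 1)) with an affine rescaling reproduces GPflow 1's Logistic(a, b):

    import gpflow
    import tensorflow_probability as tfp

    tfb = tfp.bijectors
    low, high = 4.0, 6.0

    # Chain applies right-to-left: Sigmoid maps R -> (0, 1), the affine part
    # rescales to (low, high). Newer TFP deprecates AffineScalar in favour of
    # tfb.Shift(low)(tfb.Scale(high - low)) and also accepts Sigmoid(low=..., high=...).
    logistic = tfb.Chain([tfb.AffineScalar(shift=low, scale=high - low), tfb.Sigmoid()])

    kernel = gpflow.kernels.SquaredExponential()
    kernel.lengthscales = gpflow.Parameter(5.0, transform=logistic)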

Get Gradients with Keras Tensorflow 2.0

不打扰是莪最后的温柔 submitted on 2020-01-13 11:40:12
Question: I would like to keep track of the gradients in TensorBoard. However, since session run statements are no longer a thing and the write_grads argument of tf.keras.callbacks.TensorBoard is deprecated, I would like to know how to keep track of gradients during training with Keras or tensorflow 2.0. My current approach is to create a new callback class for this purpose, but without success. Maybe someone else knows how to accomplish this kind of advanced task. The code created for testing…
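
For reference, a minimal sketch of the custom-loop route: compute gradients with tf.GradientTape and write them with tf.summary.histogram, which covers what write_grads used to do. The log directory and the toy model/data are placeholders:

    import tensorflow as tf

    writer = tf.summary.create_file_writer('logs/grads')   # hypothetical log dir
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    loss_fn = tf.keras.losses.MeanSquaredError()
    optimizer = tf.keras.optimizers.Adam()

    x = tf.random.normal((32, 4))
    y = tf.random.normal((32, 1))

    for step in range(10):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        with writer.as_default():
            for var, g in zip(model.trainable_variables, grads):
                tf.summary.histogram('grad/' + var.name.replace(':', '_'), g, step=step)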

Does Model.fit() upload the whole training dataset to the GPU?

让人想犯罪 __ submitted on 2020-01-11 10:31:57
Question: I'm training an LSTM on a dataset of a couple of GB using the Keras API with the tensorflow backend. When running Model.fit() on some in-memory data (numpy), it allocates 8 GB of memory in one request, which doesn't happen when loading only a small subset of the data. My GPU can't hold both the model parameters and that 8 GB; it goes out of memory and stops. I'm pretty sure this started happening after I upgraded from TF2 beta to TF2rc. Here's how I call fit:

    tb = tf.keras.callbacks.TensorBoard(log_dir=log_dir…
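
For reference, a hedged sketch of the usual workaround: hand fit() a tf.data pipeline built from a generator, so only one batch at a time is materialised as a TF tensor instead of the whole numpy array being converted in one request. The shapes and the model are placeholders:

    import numpy as np
    import tensorflow as tf

    x = np.random.rand(100000, 20, 8).astype(np.float32)   # stand-in for the large array
    y = np.random.rand(100000, 1).astype(np.float32)

    def gen():
        for i in range(len(x)):            # yields one sample at a time; nothing large
            yield x[i], y[i]               # is handed to TF until batching happens

    ds = tf.data.Dataset.from_generator(
        gen,
        output_types=(tf.float32, tf.float32),
        output_shapes=((20, 8), (1,)),
    ).batch(64)

    model = tf.keras.Sequential([tf.keras.layers.LSTM(32, input_shape=(20, 8)),
                                 tf.keras.layers.Dense(1)])
    model.compile(optimizer='adam', loss='mse')
    model.fit(ds, epochs=1)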