tpu

Output of Keras predict method has the wrong shape when using Google Colab's TPU strategy

Submitted by ぃ、小莉子 on 2021-02-11 15:14:34
Question: I made the following architecture:

Layer (type)                 Output Shape              Param #
=================================================================
embedding_7 (Embedding)      (None, 50, 64)            512000
_________________________________________________________________
bidirectional_5 (Bidirection (None, 200)               132000
_________________________________________________________________
dense_9 (Dense)              (None, 1)                 201
=================================================================
Total params: 644,201
Trainable params:
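The excerpt is cut off before the actual predict call, so only the model summary is visible. For reference, here is a minimal sketch (not the asker's code) of building an equivalent model under tf.distribute.TPUStrategy in Colab and checking that predict returns one row per input sample; the vocabulary size of 8000 is inferred from the 512,000 embedding parameters, and the binary-classification head and batch size are assumptions.

import os
import numpy as np
import tensorflow as tf

# Standard Colab TPU boilerplate (TF 2.x; older releases use
# tf.distribute.experimental.TPUStrategy instead).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Layer sizes mirror the summary above: 8000 * 64 = 512,000 embedding parameters,
# a bidirectional LSTM with 100 units per direction, and a single-unit head.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(8000, 64, input_length=50),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(100)),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')

# Predict on a batch whose size is a multiple of the replica count (8 on a Colab TPU)
# and compare the output shape with the input shape: (64, 50) in, (64, 1) out.
x = np.random.randint(0, 8000, size=(64, 50))
preds = model.predict(x, batch_size=64)
print(x.shape, preds.shape)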

TPU training freezes in the middle of training

Submitted by 余生颓废 on 2021-02-11 12:32:39
Question: I'm trying to train a CNN regression net in TF 1.12, using a TPU v3-8 1.12 instance. The model successfully compiles with XLA and starts the training process, but somewhere after half the iterations of the 1st epoch it freezes and does nothing. I cannot find the root of the problem.

def read_tfrecord(example):
    features = {
        'image': tf.FixedLenFeature([], tf.string),
        'labels': tf.FixedLenFeature([], tf.string)
    }
    sample = tf.parse_single_example(example, features)
    image = tf.image.decode_jpeg(sample[
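The snippet above is cut off inside decode_jpeg; a minimal sketch of a complete TF 1.x input pipeline along the same lines is shown below. The image size, label encoding, and pipeline parameters are assumptions rather than the asker's values; the drop_remainder=True batching is the detail most often relevant to mid-epoch TPU stalls, because TPUs require fully static batch shapes.

import tensorflow as tf

IMAGE_SIZE = 224  # assumed; the question does not state the input resolution

def read_tfrecord(example):
    # Schema matches the truncated snippet: JPEG bytes plus serialized labels.
    features = {
        'image': tf.FixedLenFeature([], tf.string),
        'labels': tf.FixedLenFeature([], tf.string),
    }
    sample = tf.parse_single_example(example, features)
    image = tf.image.decode_jpeg(sample['image'], channels=3)
    image = tf.image.resize_images(image, [IMAGE_SIZE, IMAGE_SIZE])
    image = tf.cast(image, tf.float32) / 255.0
    labels = tf.decode_raw(sample['labels'], tf.float32)  # assumed encoding of the regression targets
    return image, labels

def make_dataset(file_pattern, batch_size):
    files = tf.data.Dataset.list_files(file_pattern)
    dataset = files.interleave(tf.data.TFRecordDataset, cycle_length=8)
    dataset = dataset.map(read_tfrecord, num_parallel_calls=8)
    dataset = dataset.repeat()
    # drop_remainder=True matters on TPU: a partial final batch changes the static
    # shape and is a common reason a pipeline stalls partway through an epoch.
    dataset = dataset.batch(batch_size, drop_remainder=True)
    return dataset.prefetch(1)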

Hard-swish for TFLite

Submitted by 依然范特西╮ on 2021-01-03 22:45:02
Question: I have a custom neural network written in Tensorflow.Keras and apply the hard-swish function as activation (as used in the MobileNetV3 paper).

Implementation:

def swish(x):
    return x * tf.nn.relu6(x+3) / 6

I am running quantization-aware training and write a protobuf file at the end. Then I use this code to convert to tflite (and finally deploy it on the EdgeTPU):

tflite_convert --output_file test.tflite --graph_def_file=test.pb --inference_type=QUANTIZED_UINT8 --input_arrays=input_1 -
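Both the activation and the tflite_convert command are cut off in the excerpt. As a small, self-contained sketch (not the asker's graph), the variant below writes hard-swish with a constant multiply instead of a division, which tends to quantize more cleanly, and checks its defining values; it uses only ops (add, relu6, mul) that the quantized TFLite/EdgeTPU kernels support. The check assumes TF 2.x eager execution.

import tensorflow as tf

def hard_swish(x):
    # Hard-swish as in the MobileNetV3 paper: x * relu6(x + 3) / 6, written as a
    # multiply by the constant 1/6 so the graph contains only add, relu6 and mul.
    return x * tf.nn.relu6(x + 3.0) * (1.0 / 6.0)

# Sanity check of the definition: hard_swish(-3) == 0, hard_swish(3) == 3,
# and for large positive inputs it approaches the identity.
x = tf.constant([-4.0, -3.0, 0.0, 3.0, 6.0])
print(hard_swish(x).numpy())  # [0. 0. 0. 3. 6.]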

File system scheme '[local]' not implemented in Google Colab TPU

Submitted by 。_饼干妹妹 on 2021-01-02 19:13:11
Question: I am using the TPU runtime in Google Colab, but I'm having problems reading files (I'm not sure that's the cause). I initialized the TPU with:

import tensorflow as tf
import os
import tensorflow_datasets as tfds

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config
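The excerpt cuts off at the device-listing call. For context, the "[local]" scheme error arises because the Colab TPU workers are separate machines that cannot see the notebook VM's local disk, so input files have to come from GCS (or be held in memory). Below is a minimal sketch along those lines; the dataset name, bucket path and batch size are placeholders, not the asker's values.

import os
import tensorflow as tf
import tensorflow_datasets as tfds

# Same TPU initialization as in the question (tf.distribute.experimental.TPUStrategy
# on TF 2.3 and earlier).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Option 1: let tensorflow_datasets read from its public GCS mirror rather than local disk.
ds = tfds.load('mnist', split='train', as_supervised=True, try_gcs=True)

# Option 2: stage your own files in a GCS bucket and read them with a gs:// path
# (bucket name is a placeholder).
# files = tf.io.gfile.glob('gs://your-bucket/data/*.tfrec')
# ds = tf.data.TFRecordDataset(files)

ds = ds.batch(128, drop_remainder=True).prefetch(tf.data.experimental.AUTOTUNE)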

In Google Colab, is there a way to check what TPU version is running?

Submitted by 孤街醉人 on 2020-12-06 19:22:22
Question: Colab offers free TPUs. It's easy to see how many cores are given, but I was wondering if it's possible to see how much memory there is per core?

Answer 1: As far as I know we don't have a TensorFlow op or similar for accessing memory info, though in XRT we do. In the meantime, would something like the following snippet work?

import os
from tensorflow.python.profiler import profiler_client

tpu_profile_service_address = os.environ['COLAB_TPU_ADDR'].replace('8470', '8466')
print(profiler_client.monitor(tpu
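The answer's snippet is cut off inside the monitor call; it presumably continues roughly as below (the duration_ms and level arguments here are illustrative, not the answerer's exact values). Port 8466 is the TPU profiler service, and the report it prints includes the TPU type (e.g. "TPU v2") along with utilization figures.

import os
from tensorflow.python.profiler import profiler_client

# Port 8470 is the TPU gRPC endpoint; 8466 is the profiler service.
tpu_profile_service_address = os.environ['COLAB_TPU_ADDR'].replace('8470', '8466')

# Query the profiler for a short window and print the monitoring report.
print(profiler_client.monitor(tpu_profile_service_address, duration_ms=100, level=2))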

How to use torchaudio with torch xla on Google Colab TPU

Submitted by 大兔子大兔子 on 2020-08-10 01:07:48
Question: I'm trying to run a PyTorch script which uses torchaudio on a Google TPU. To do this I'm using pytorch xla following this notebook; more specifically, I'm using this code cell to load the xla:

!pip install torchaudio
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
VERSION = "20200220"  #@param ["20200220","nightly", "xrt==1.15.0"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup
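The setup cell is cut off at the env-setup script URL. Assuming that script has run and installed a torch_xla build matching the installed torch (and that the torchaudio wheel pulled in by pip is compatible with that torch version), a minimal sketch of actually using torchaudio on the TPU looks like this; the sample rate and waveform are placeholders.

import torch
import torch_xla.core.xla_model as xm
import torchaudio

# Acquire the TPU as an XLA device.
device = xm.xla_device()

# torchaudio transforms are ordinary nn.Modules, so they can be moved to the XLA
# device like any other module and applied to tensors that live there.
waveform = torch.randn(1, 16000)  # placeholder: one second of audio at 16 kHz
mel_transform = torchaudio.transforms.MelSpectrogram(sample_rate=16000).to(device)

mel = mel_transform(waveform.to(device))
print(mel.device, mel.shape)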
