google-cloud-ml

Using Instance Keys for Batch Prediction with TensorFlow

Submitted by *爱你&永不变心* on 2021-01-28 06:08:31
Question: I am trying to figure out how to do batch prediction using Google Cloud. Specifically, I'm looking to do object detection, going from a Faster R-CNN TensorFlow ckpt to a graph/saved model. My issue is that I need to be able to recover some kind of ID for my input images, perhaps an index or a filename. I'm not entirely sure how to do this in my situation, since this link mentions using instance keys, and the only relevant examples I've found regarding instance keys use JSON as the input…
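One common pattern for recovering an ID per input is to give the exported SavedModel an extra string input that is forwarded untouched into the predictions. Below is a minimal TF 1.x sketch of that idea, assuming an Estimator-based export; the detection graph is replaced by a stand-in, and the input/output names (image_bytes, key, detection_scores) are illustrative, not the actual object-detection signature:

```python
import tensorflow as tf  # TF 1.x API, matching a Faster R-CNN ckpt workflow

def serving_input_receiver_fn():
    """Accepts image bytes plus a caller-supplied key (index or filename)."""
    image_bytes = tf.placeholder(tf.string, shape=[None], name="image_bytes")
    key = tf.placeholder(tf.string, shape=[None], name="key")
    features = {"image_bytes": image_bytes, "key": key}
    return tf.estimator.export.ServingInputReceiver(features, features)

def model_fn(features, labels, mode):
    # Stand-in for the real detection graph; swap in the Faster R-CNN outputs here.
    batch_size = tf.shape(features["image_bytes"])[0]
    predictions = {
        "detection_scores": tf.zeros([batch_size, 1]),
        # The key is passed straight through, so every prediction row carries
        # the ID of the image it belongs to.
        "key": tf.identity(features["key"]),
    }
    return tf.estimator.EstimatorSpec(mode, predictions=predictions)

# Exporting assumes a trained checkpoint already exists in model_dir.
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir="model_dir")
estimator.export_saved_model("export_dir", serving_input_receiver_fn)
```

With this export, each batch-prediction input carries its own key, and matching outputs back to files or indices is just a lookup on that field.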

ai-platform: No eval folder or export folder in outputs when running TensorFlow 2.1 training job using Estimators

Submitted by ≯℡__Kan透↙ on 2021-01-28 03:38:54
Question: The Problem: My code works locally, but I am not able to get any evaluation data or exports from my TensorFlow Estimator when submitting online training jobs after having upgraded to TensorFlow 2.1. Here's the bulk of my code:

    def build_estimator(model_dir, config):
        return tf.estimator.LinearClassifier(
            feature_columns=feature_columns,
            n_classes=2,
            optimizer=tf.keras.optimizers.Ftrl(
                learning_rate=args.learning_rate,
                l1_regularization_strength=args.l1_strength
            ),
            model_dir=model_dir,
            config…
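For reference, here is a minimal, self-contained sketch of the TrainSpec/EvalSpec wiring around such an estimator: the eval/ folder comes from evaluation runs, and the export/ folder only appears if an exporter is attached to the EvalSpec. The toy feature column and input functions are placeholders, not the asker's pipeline:

```python
import tensorflow as tf  # TF 2.1, tf.estimator API

feature_columns = [tf.feature_column.numeric_column("x")]

def train_input_fn():
    ds = tf.data.Dataset.from_tensor_slices(({"x": [[1.0], [2.0]]}, [0, 1]))
    return ds.repeat().batch(2)

def eval_input_fn():
    ds = tf.data.Dataset.from_tensor_slices(({"x": [[1.0], [2.0]]}, [0, 1]))
    return ds.batch(2)

def serving_input_fn():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name="x")
    return tf.estimator.export.ServingInputReceiver({"x": x}, {"x": x})

estimator = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    n_classes=2,
    optimizer=tf.keras.optimizers.Ftrl(learning_rate=0.05),
    model_dir="model_dir",
)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=100)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=1,
    # Without an exporter here, no export/ folder is written at all.
    exporters=[tf.estimator.FinalExporter("serving", serving_input_fn)],
)

# Writes checkpoints to model_dir/, eval summaries to model_dir/eval/, and a
# SavedModel to model_dir/export/serving/ once training finishes.
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```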

Google AI Platform vs ML Engine

Submitted by 狂风中的少年 on 2021-01-22 19:06:49
Question: I have searched a lot, but I cannot understand the difference between Google AI Platform and ML Engine. It seems that both of them can be used for training and deploying models. Other terms like google-cloud-automl and google ai hub are also very confusing. What are the differences between them? Thanks.
Answer 1: The short answer is: there isn't one. In 2019 "ML Engine" was renamed to "AI Platform", and over time some services changed and expanded. To see what has changed, check the release notes…

TensorFlow model serving on Google AI Platform online prediction too slow with instance batches

Submitted by 可紊 on 2020-12-12 02:54:46
Question: I'm trying to deploy a TensorFlow model to Google AI Platform for Online Prediction. I'm having latency and throughput issues. The model runs on my machine in less than 1 second for a single image (with only an Intel Core i7-4790K CPU). I deployed it to AI Platform on a machine with 8 cores and an NVIDIA T4 GPU. When running the model on AI Platform with that configuration, it takes a little less than a second when sending only one image. If I start sending many requests, each with…
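For context, the usual way to send more than one image per request is to pack several entries into the instances list of a single predict call. Below is a sketch using the googleapiclient library; the project, model, and version names are placeholders, and the image_bytes input name is an assumption about the model's serving signature:

```python
import base64

from googleapiclient import discovery

# Placeholder resource names, not the asker's real deployment.
PROJECT, MODEL, VERSION = "my-project", "my-model", "v1"

def predict_batch(image_paths):
    """Sends several images in one online-prediction request."""
    instances = []
    for path in image_paths:
        with open(path, "rb") as f:
            # Binary payloads are wrapped as {"b64": ...} in the JSON request body.
            encoded = base64.b64encode(f.read()).decode("utf-8")
        instances.append({"image_bytes": {"b64": encoded}})

    service = discovery.build("ml", "v1")
    name = "projects/{}/models/{}/versions/{}".format(PROJECT, MODEL, VERSION)
    response = service.projects().predict(
        name=name, body={"instances": instances}
    ).execute()

    if "error" in response:
        raise RuntimeError(response["error"])
    return response["predictions"]
```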

Cannot deploy trained model to Google Cloud AI Platform with custom prediction routine: Model requires more memory than allowed

Submitted by 匆匆过客 on 2020-07-05 04:44:06
Question: I am trying to deploy a pretrained PyTorch model to AI Platform with a custom prediction routine. After following the instructions described here, the deployment fails with the following error: ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: Model requires more memory than allowed. Please try to decrease the model size and re-deploy. If you continue to have error, please contact Cloud ML. The contents of the model folder are 83.89 MB in total…
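For reference, the custom prediction routine interface AI Platform expects is a class with a from_path factory and a predict method. Below is a lean PyTorch sketch of that interface; the model filename, the float input handling, and loading on CPU in eval mode are assumptions for this example, not a guaranteed fix for the memory limit:

```python
import os

import torch

class TorchPredictor(object):
    """Minimal custom prediction routine wrapping a PyTorch model."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        # Instances arrive as plain Python lists parsed from the JSON request.
        inputs = torch.tensor(instances, dtype=torch.float32)
        with torch.no_grad():
            outputs = self._model(inputs)
        return outputs.tolist()

    @classmethod
    def from_path(cls, model_dir):
        # "model.pt" is a hypothetical filename for the uploaded weights;
        # map_location="cpu" keeps the load from requesting GPU memory.
        model = torch.load(os.path.join(model_dir, "model.pt"), map_location="cpu")
        model.eval()
        return cls(model)
```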

TensorFlow 2 on Google Cloud AI Platform

Submitted by £可爱£侵袭症+ on 2020-05-27 05:09:29
Question: Any news on when TensorFlow 2 will be supported on Google Cloud AI Platform? According to the list, 1.15 is the last version to be supported: https://cloud.google.com/ml-engine/docs/runtime-version-list
Answer 1: We will support TF 2.1 officially in early February; the delay is due to large corresponding changes on the service side. Thank you for your patience!
Answer 2: Add a dependency on tensorflow-gpu==2.0 to the setup.py file to train with TensorFlow 2.0 on AI Platform runtime version 1.15. TensorFlow 2.1 can't use GPUs due to…
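A sketch of the setup.py arrangement Answer 2 describes, pinning tensorflow-gpu==2.0 so the training job installs TF 2.0 on top of runtime version 1.15; the package name and metadata are placeholders:

```python
from setuptools import find_packages, setup

setup(
    name="trainer",          # placeholder package name
    version="0.1",
    packages=find_packages(),
    # Pinning TF 2.0 here makes AI Platform install it inside the
    # runtime 1.15 training container.
    install_requires=["tensorflow-gpu==2.0"],
    include_package_data=True,
)
```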
