Question
I am trying to use an existing TensorFlow model, which I have so far run locally, with Google Cloud ML Engine.
The model currently obtains its training data by passing filenames such as my_model.train and my_model.eval into tf.data.TextLineDataset. These filenames are currently hardcoded in the model's trainer, but I plan to refactor it so that it receives them as training application parameters (along with --job-dir) on the command line instead, e.g. like so:
my_trainer.py --job-dir job \
  --filename-train my_model.train --filename-eval my_model.eval
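For concreteness, here is a minimal sketch of how the refactored trainer might parse these parameters. Only the flag names are taken from the command line above; the argparse wiring and the rest of the body are illustrative assumptions, not my actual trainer:

import argparse

import tensorflow as tf

def main():
    # Receive the dataset locations as application parameters instead of
    # hardcoding them (flag names from the command line above).
    parser = argparse.ArgumentParser()
    parser.add_argument('--job-dir', required=True)
    parser.add_argument('--filename-train', required=True)
    parser.add_argument('--filename-eval', required=True)
    args = parser.parse_args()

    # tf.data.TextLineDataset takes the path string as given; TensorFlow's
    # file-system layer decides how to open it (local file, gs:// URI, ...).
    train_dataset = tf.data.TextLineDataset(args.filename_train)
    eval_dataset = tf.data.TextLineDataset(args.filename_eval)

    # ... build, train, and evaluate the model here ...

if __name__ == '__main__':
    main()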
This refactoring should then also allow me to run the trainer locally with Cloud ML Engine:
gcloud ml-engine local train \
--job-dir job \
...
-- \
--filename-train my_model.train \
--filename-eval my_model.eval
Am I making correct assumptions so far, and could I also run the same trainer in Google's cloud (after uploading my dataset files into my_bucket) by replacing the local filenames with Google Cloud Storage gs:// URIs, e.g. like so:
gcloud ml-engine local train \
--job-dir job \
...
-- \
--filename-train gs://my_bucket/my_model.train \
--filename-eval gs://my_bucket/my_model.eval
In other words, can tf.data.TextLineDataset handle gs:// URIs as "filenames" transparently, or do I have to include special code in my trainer to process such URIs beforehand?
Answer 1:
Yes, tf.read_file, tf.TextLineReader, and tf.data.TextLineDataset all handle GCS implicitly. Just make sure you pass in GCS URLs of the form gs://my_bucket/path/to/data.csv as the "filename".
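For illustration, a minimal sketch of this point; the bucket and object names are placeholders:

import tensorflow as tf

# The gs:// URI goes exactly where a local filename would;
# no special handling is required in the trainer.
dataset = tf.data.TextLineDataset('gs://my_bucket/my_model.train')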
One thing to be careful about: always use os.path.join() to combine "directory" names and "file" names. While most Linux distributions handle paths like /some/path//somefile.txt by ignoring the repeated slash, GCS (being a key-value store) considers it different from /some/path/somefile.txt. So, use os.path.join to make sure you are not repeating directory separators.
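A quick sketch of that pitfall (the paths are hypothetical):

import os

data_dir = 'gs://my_bucket/data/'   # note the trailing slash

# Naive concatenation produces a repeated slash; on a local filesystem both
# paths typically resolve to the same file, but on GCS they name different objects:
bad = data_dir + '/my_model.train'                # gs://my_bucket/data//my_model.train
good = os.path.join(data_dir, 'my_model.train')   # gs://my_bucket/data/my_model.train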
Source: https://stackoverflow.com/questions/47952143/does-google-cloud-ml-engine-trainer-have-to-be-explicitly-aware-of-google-cloud