google-compute-engine

GAE doesn't see gunicorn, but it is already installed

拈花ヽ惹草 · submitted on 2021-02-18 10:23:09
Question: I am trying to deploy a Django app with Google App Engine. My app.yaml file:

    # [START runtime]
    runtime: python
    api_version: 1
    threadsafe: true
    env: flex
    entrypoint: gunicorn -b :$PORT wsgi
    runtime_config:
      python_version: 3.4
    env_variables:
      CLOUDSQL_CONNECTION_NAME: ugram-mysql
      CLOUDSQL_USER: root
    handlers:
    - url: /
      script: wsgi.application
    # [END runtime]

But when I run gcloud app deploy, the deployment runs for about 5 minutes and then fails: Updating service [default]...failed. ERROR: (gcloud …
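
A hedged guess at the failure, not a confirmed diagnosis: this app.yaml mixes standard-environment keys (api_version, threadsafe, script: handlers) into a flexible-environment (env: flex) config, where they are not used. In flex, all requests go to the process started by entrypoint, and gunicorn must be declared in requirements.txt, because the flex builder installs only what that file lists, regardless of what is installed locally. A minimal flex config for this setup might look like:

    runtime: python
    env: flex
    entrypoint: gunicorn -b :$PORT wsgi    # assumes wsgi.py exposes an `application` callable
    runtime_config:
      python_version: 3.4
    env_variables:
      CLOUDSQL_CONNECTION_NAME: ugram-mysql
      CLOUDSQL_USER: root

with a requirements.txt alongside it that declares at least Django and gunicorn.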

Limit access to metadata on GCE instance

主宰稳场 · submitted on 2021-02-11 16:59:26
Question: Is there some way to limit access to the internal metadata IP? Background: https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/ When I fetch all the data with curl, I can see the email address of my Google account, among other things. I'd like to limit both the data itself and access to it as much as possible. As far as I know, metadata is required during setup and boot. Is there some way around this, or at least some way to lock down access …
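
For context, the sketch below shows how little it takes to read the data in question from inside a VM: the metadata server answers any local process that sends the documented Metadata-Flavor: Google header. Documented levers for limiting the blast radius include creating the instance with no attached service account (gcloud compute instances create ... --no-service-account --no-scopes) and filtering 169.254.169.254 in the guest OS firewall, with the caveat that the guest agent and boot process depend on the endpoint. The example assumes Python 3 with the requests library installed:

    # Minimal sketch: reading GCE instance metadata from inside a VM.
    import requests

    METADATA_ROOT = "http://metadata.google.internal/computeMetadata/v1"

    def get_metadata(path):
        resp = requests.get(
            METADATA_ROOT + "/" + path,
            headers={"Metadata-Flavor": "Google"},  # required; requests without it are rejected
            timeout=2,
        )
        resp.raise_for_status()
        return resp.text

    # The service-account email is one of the values the linked blog post shows leaking.
    print(get_metadata("instance/service-accounts/default/email"))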

Fixing broken /etc/network/interfaces

淺唱寂寞╮ · submitted on 2021-02-11 14:47:22
Question: I have an Ubuntu 16.04 VM on Google Compute Engine. I was adding some commands to /etc/network/interfaces and restarted the VM to test them. They were apparently incorrect, and I can no longer SSH into the VM. Is there a way to edit the /etc/network/interfaces file without SSH to recover my VM? Answer 1: This answer is based on the article "Resolving getting locked out of a Compute Engine". Minor corrections were made and the solution was checked against the Debian 9 image. As in the case of bare …
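
One recovery path that needs no SSH (a sketch, assuming the instance metadata is still writable; VM_NAME is a placeholder): push a startup-script that rewrites the file with a minimal DHCP configuration, then reset the VM so the script runs at boot. Attaching the boot disk to a rescue VM and using the interactive serial console are the other documented options.

    gcloud compute instances add-metadata VM_NAME --metadata startup-script='#! /bin/bash
    cat > /etc/network/interfaces <<EOF
    # minimal known-good config; the primary NIC may be ens4 on newer images
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet dhcp
    EOF'
    gcloud compute instances reset VM_NAME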

Python script using Cron Job Not saving output to file in Google VM Instance

这一生的挚爱 · submitted on 2021-02-11 14:34:04
Question: I am trying to run a Python script on my Google VM instance using cron jobs. The script is supposed to log some data from a website and store it in a CSV file. I tried running it with the usual python3 kb_sc.py and it worked just fine.

    ...
    # scrape the website
    print("checkpoint")
    if not os.path.isdir(path):
        os.makedirs(path)
    if not os.path.isfile(path + file):
        data_new.to_csv(path + file, index=False)
    else:
        data = pd.read_csv(path + file)
        data = data.append(data_new)
        data.to_csv(path + file, …
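
The usual culprit (an assumption, since the question is truncated): cron starts jobs from a different working directory with a minimal environment, so a relative path resolves somewhere unexpected and the CSV lands elsewhere or fails silently. A sketch of the standard fix, anchoring the output directory to the script's own location and logging cron's output:

    import os

    # Absolute base directory, valid no matter where cron starts the process.
    BASE_DIR = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(BASE_DIR, "data") + os.sep  # hypothetical output directory
    file = "scraped.csv"                            # hypothetical file name

    # Matching crontab entry (crontab -e), capturing stdout/stderr for debugging:
    #   */10 * * * * /usr/bin/python3 /home/USER/kb_sc.py >> /home/USER/kb_sc.log 2>&1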

TPU training freezes in the middle of training

余生颓废 · submitted on 2021-02-11 12:32:39
Question: I'm trying to train a CNN regression net in TF 1.12 on a TPU v3-8 (1.12) instance. The model successfully compiles with XLA and starts training, but somewhere past the halfway point of the first epoch it freezes and does nothing. I cannot find the root of the problem.

    def read_tfrecord(example):
        features = {
            'image': tf.FixedLenFeature([], tf.string),
            'labels': tf.FixedLenFeature([], tf.string)
        }
        sample = tf.parse_single_example(example, features)
        image = tf.image.decode_jpeg(sample[…
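
One hedged explanation that fits "freezes mid-epoch": if the input dataset is not repeated and steps_per_epoch * global_batch_size exceeds the number of examples, the tf.data pipeline runs dry and the TPU training loop blocks waiting for the next batch. A TF 1.x input-pipeline sketch (make_dataset and its arguments are assumptions; read_tfrecord is the function from the question):

    import tensorflow as tf

    def make_dataset(filenames, batch_size):
        dataset = tf.data.TFRecordDataset(filenames)
        dataset = dataset.map(read_tfrecord, num_parallel_calls=4)
        dataset = dataset.repeat()  # never let the pipeline run dry mid-epoch
        # TPUs require static shapes, so partial final batches must be dropped.
        dataset = dataset.batch(batch_size, drop_remainder=True)
        return dataset.prefetch(1)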

Too many open files: '/home/USER/PATH/SERVICE_ACCOUNT.json' when calling Google's Natural Language API

无人久伴 · submitted on 2021-02-11 06:27:39
Question: I'm working on a sentiment-analysis project using the Google Cloud Natural Language API and Python (this question may be similar to this other question). What I'm doing is the following:

- Read a CSV file from Google Cloud Storage; the file has approximately 7000 records.
- Convert the CSV into a pandas DataFrame.
- Iterate over the DataFrame and call the Natural Language API to perform sentiment analysis on one of the DataFrame's columns; in the same for loop I extract the score and magnitude …
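
A likely cause, given the symptom (an assumption, since the code is not shown): constructing a new API client inside the loop. Each client construction opens the service-account JSON key file, and after thousands of iterations the process exhausts its file-descriptor limit. The standard fix is to build the client once and reuse it; a sketch assuming google-cloud-language 2.x:

    from google.cloud import language_v1

    # Build the client once, outside the loop; the key file is opened only here.
    client = language_v1.LanguageServiceClient.from_service_account_json(
        "/home/USER/PATH/SERVICE_ACCOUNT.json"
    )

    def analyze(text):
        document = language_v1.Document(
            content=text, type_=language_v1.Document.Type.PLAIN_TEXT
        )
        sentiment = client.analyze_sentiment(
            request={"document": document}
        ).document_sentiment
        return sentiment.score, sentiment.magnitude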

Is there no way to use GCP TCP load balancing and IPv6 for an HTTP/HTTPS website?

删除回忆录丶 · submitted on 2021-02-10 16:50:38
Question: I have a website all set up and ready to go in a Docker environment behind an NGINX proxy. I've configured SSL so the website works over both HTTP and HTTPS, and it is working over IPv4. Now I need to add IPv6 support. It seems I can't attach an IPv6 address directly to my VM; I have to create a load balancer. I don't want to use the HTTP(S) load balancer, because that would involve redoing my whole setup: configuring new certificates for the LB, routines for renewing them, and so on. So I've …
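
A hedged sketch of the route the question seems headed toward: TCP proxy load balancing accepts global IPv6 frontends and forwards the TCP stream unterminated, so TLS can still end at the existing NGINX. All names below are placeholders, and the backend service is assumed to exist already; note that TCP proxy load balancing supports only specific frontend ports (443 among them) and hides the client IP unless PROXY protocol is enabled:

    gcloud compute addresses create www-ipv6 --ip-version=IPV6 --global
    gcloud compute target-tcp-proxies create www-tcp-proxy --backend-service=www-backend
    gcloud compute forwarding-rules create www-fr-ipv6 --global \
        --address=www-ipv6 --ports=443 --target-tcp-proxy=www-tcp-proxy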

Run python file daily on Google Compute Engine with Linux Ubuntu

半腔热情 提交于 2021-02-10 14:15:43
Question: I need to run my Python file once a day on a Google Compute Engine instance running Ubuntu 18.04.
Answer 1: Use crontab to schedule the script (here's the crontab documentation) and make your .py file executable with chmod +x script.py. Similar topics were discussed here and here.
Source: https://stackoverflow.com/questions/61584857/run-python-file-daily-on-google-compute-engine-with-linux-ubuntu
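
A minimal sketch of such a crontab entry (the schedule, script path, and log path are assumptions):

    # crontab -e — run every day at 06:00, logging output for troubleshooting
    0 6 * * * /usr/bin/python3 /home/USER/script.py >> /home/USER/script.log 2>&1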
