google-cloud-platform

What is the rate limit for GCE instance metadata service?

Submitted by 筅森魡賤 on 2021-02-08 09:11:07
Question: On a GCP compute environment, I need an id_token (it expires every 3600 s) for service-to-service authentication (calling GCF, Cloud Run, etc.). I get this id_token from the instance metadata service: http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=[...] Instead of implementing some form of caching + TTL for this identity token, I'm wondering whether I can simply call this endpoint every time I make an outbound RPC (and I might make a lot of them). Hence my question: what is the rate limit for the GCE instance metadata service?
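
As far as I know the GCE documentation does not publish a hard request quota for the metadata server, and the usual guidance is to cache tokens rather than fetch one per RPC. Below is a minimal Python sketch of such a cache, assuming the 3600 s lifetime stated above; the five-minute refresh margin and the function name id_token are my own choices, not part of any API.

    import time
    import urllib.request

    METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                    "instance/service-accounts/default/identity?audience={aud}")
    TOKEN_TTL = 3600      # identity tokens expire after one hour (per the question)
    REFRESH_MARGIN = 300  # arbitrary safety margin: refresh five minutes early

    _cache = {}  # audience -> (token, fetched_at)

    def id_token(audience):
        token, fetched_at = _cache.get(audience, (None, 0.0))
        if token is None or time.time() - fetched_at > TOKEN_TTL - REFRESH_MARGIN:
            req = urllib.request.Request(
                METADATA_URL.format(aud=audience),
                headers={"Metadata-Flavor": "Google"},  # required by the metadata server
            )
            token = urllib.request.urlopen(req).read().decode()
            _cache[audience] = (token, time.time())
        return token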

How do you successfully invoke gsutil rsync from a Python script?

Submitted by 谁都会走 on 2021-02-08 08:10:50
Question: I am trying to execute gsutil -m rsync s3://input gs://output from Python. When I run this line in a shell it works fine. In a Python script, however, I run it as subprocess.Popen(["gsutil", "-m", "rsync", "s3://input", "gs://output"]) and it just hangs forever, printing only: Building synchronization state... Starting synchronization... The bash command successfully prints: Building synchronization state...
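
Two usual suspects for this symptom: Popen does not wait for the child, and a pipe that nobody drains will block the child once the OS buffer fills. A hedged sketch that avoids both by using subprocess.run, which waits for completion and drains any captured output (the bucket names are the placeholders from the question):

    import subprocess

    result = subprocess.run(
        ["gsutil", "-m", "rsync", "s3://input", "gs://output"],
        capture_output=True,  # drain stdout/stderr so the child cannot block on a full pipe
        text=True,            # decode output as str instead of bytes
        check=True,           # raise CalledProcessError on a non-zero exit status
    )
    print(result.stdout)

If you don't need the output, dropping capture_output and letting gsutil write straight to the terminal is simpler still.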

BigQuery - Transpose arrays into columns

Submitted by 旧街凉风 on 2021-02-08 06:33:29
Question: We have a table in BigQuery like the one below.

Input table:

    Name | Interests
    -----+-----------
    Bob  | ["a"]
    Sue  | ["a","b"]
    Joe  | ["b","c"]

We want to convert it to the format below to make it BI/visualisation friendly.

Target/required table:

    +------+---+---+---+
    | Name | a | b | c |
    +------+---+---+---+
    | Bob  | 1 | 0 | 0 |
    | Sue  | 1 | 1 | 0 |
    | Joe  | 0 | 1 | 1 |
    +------+---+---+---+

Note: the Interests column is an array datatype. Is this sort of transformation possible in BigQuery? If yes, how?
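
It is, at least when the set of interest values is small and known up front (a, b, c here): UNNEST the array and aggregate with COUNTIF. A sketch via the google-cloud-bigquery Python client, where my_dataset.people and the column names are assumptions taken from the question:

    from google.cloud import bigquery

    client = bigquery.Client()
    sql = """
        SELECT
          Name,
          COUNTIF(i = 'a') AS a,
          COUNTIF(i = 'b') AS b,
          COUNTIF(i = 'c') AS c
        FROM `my_dataset.people`, UNNEST(Interests) AS i
        GROUP BY Name
    """
    for row in client.query(sql).result():  # run the pivot query and wait for rows
        print(row.Name, row.a, row.b, row.c)

A dynamic column set (interests not known in advance) would need generated SQL, e.g. via EXECUTE IMMEDIATE, or a pivot on the BI side.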

System is not terminated in a Scala application in Docker on GKE

Submitted by こ雲淡風輕ζ on 2021-02-08 06:24:37
Question: I have a Scala application that uses Akka Streams and runs as a CronJob in Google Kubernetes Engine. But the pod stays in the "Running" state (not "Completed"), and the Java process is still running inside the container. Here's what I do exactly: I build the Docker image with sbt-native-packager and sbt docker:publish. When the job is done, I terminate it with a regular system.terminate call:

    implicit val system: ActorSystem = ActorSystem("actor-system")
    /* doing actual stuff */
    stream
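
A frequent reason the JVM outlives system.terminate() is a lingering non-daemon thread (a custom ExecutionContext, an HTTP client pool, a stream that never completes), in which case terminating the actor system alone is not enough. A hedged Scala sketch of a shutdown sequence that should let the pod reach "Completed"; the Source(1 to 3) stream is a stand-in for the real job:

    import akka.Done
    import akka.actor.ActorSystem
    import akka.stream.scaladsl.{Sink, Source}
    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration._

    object Main extends App {
      implicit val system: ActorSystem = ActorSystem("actor-system")
      import system.dispatcher

      // Stand-in for the real work: any stream that materializes a Future.
      val done: Future[Done] = Source(1 to 3).runWith(Sink.ignore)

      done.onComplete(_ => system.terminate())      // terminate on success or failure
      Await.result(system.whenTerminated, 1.minute) // block until the system is down
      sys.exit(0) // force-exit so a leftover non-daemon thread cannot keep the JVM up
    }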

BigQuery Data Transfer Service with BigQuery partitioned table [closed]

Submitted by 怎甘沉沦 on 2021-02-08 06:12:56
Question: I have access to a project within BigQuery. I'm looking to create a table partitioned by ingestion time (partitioned by day), then set up a BigQuery Data Transfer Service process that brings Avro files in from multiple directories within a Google Cloud Storage bucket.
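
Creating the ingestion-time, day-partitioned destination table is the easy half; a hedged sketch with the google-cloud-bigquery Python client follows, where the project, dataset, and table names are placeholders. A Data Transfer Service run configured for the bucket can then load the Avro files into this table on its schedule.

    from google.cloud import bigquery

    client = bigquery.Client()

    table = bigquery.Table("my-project.my_dataset.events")
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY  # no field set -> partition on ingestion time
    )
    client.create_table(table)  # the transfer config can now target this table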
