gcp

GCP Point Custom Domain to Specific App Engine Service

Submitted by 陌路散爱 on 2020-01-03 12:47:40
Question: I currently have a Google App Engine Flexible project with four services. When I map my custom domain to my project using the documentation at https://cloud.google.com/appengine/docs/standard/python/mapping-custom-domains, it automatically points to the default service, which is not the frontend application. How do I map it to a different service?

Answer 1: You cannot map a certain (sub)domain to a certain service in the app-level custom domain mapping; mapping is done only at the app level (as a …
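The usual mechanism for per-hostname routing, offered here as a sketch rather than something the excerpt confirms, is a dispatch.yaml file deployed alongside the app; the hostnames and service names below are hypothetical placeholders:

    # dispatch.yaml - routes matching requests to named services
    # (hostnames and service names are hypothetical placeholders)
    dispatch:
      - url: "www.example.com/*"
        service: frontend
      - url: "api.example.com/*"
        service: api

It would be deployed with gcloud app deploy dispatch.yaml; App Engine supports up to 20 routing rules per dispatch file.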

Copy files from one Google Cloud Storage Bucket to other using Apache Airflow

Submitted by 狂风中的少年 on 2020-01-03 02:30:09
Question: I want to copy files from a folder in a Google Cloud Storage bucket (e.g. Folder1 in Bucket1) to another bucket (e.g. Bucket2). I can't find any Airflow operator for Google Cloud Storage that copies files.

Answer 1: I know this is an old question, but I found myself dealing with this task too. Since I'm using Google Cloud Composer, GoogleCloudStorageToGoogleCloudStorageOperator was not available in the current version. I managed to solve this issue by using a simple BashOperator from airflow …
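A minimal sketch of that BashOperator workaround, assuming gsutil is available on the Composer workers and reusing the bucket and folder names from the question (the DAG id, schedule, and start date are placeholders):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash_operator import BashOperator

    with DAG("gcs_copy_example",
             start_date=datetime(2020, 1, 1),
             schedule_interval=None) as dag:
        # -m parallelizes the copy, -r recurses into the source folder
        copy_folder = BashOperator(
            task_id="copy_folder1_to_bucket2",
            bash_command="gsutil -m cp -r gs://Bucket1/Folder1 gs://Bucket2/",
        )

On newer Airflow/Composer versions the dedicated GoogleCloudStorageToGoogleCloudStorageOperator (later GCSToGCSOperator) mentioned in the answer would be the more idiomatic choice.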

Does Kubernetes Federation rebalance pods across clusters after it recovers from an outage?

Submitted by 徘徊边缘 on 2019-12-25 09:42:55
Question: When the federation plane recovers from a zone outage, would it discover the changes I made to the other cluster? For instance, assume a federation with clusters A and B, where cluster A hosts the federation. I have a pod deployment with 4 replicas; clusters A and B get 2 each. When cluster A goes down, and hence the federation plane goes down, if I increase the replica count on cluster B to 4 to compensate for the loss of cluster A, what happens when the federation comes back up? Would it overwrite cluster B …

GCP MySQL server has gone away (Google SQL MySQL 2nd Gen 5.7)

Submitted by 痞子三分冷 on 2019-12-24 02:23:58
Question: We are running on Google Compute Engine/Debian 9/PHP/Lumen/Doctrine2 <-> Google SQL MySQL 2nd Gen 5.7. Usually it works without hiccups, but we are now getting error messages, similar to the one below, with increasing frequency:

Error while sending QUERY packet. PID=123456
PDOStatement::execute(): MySQL server has gone away

Any idea why this is happening and how I would fix it?

Answer 1: As noted here, there is a list of cases which may be causing this error. A few are: You have encountered a …
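The excerpt's answer is cut off, but two of the commonly listed causes of this error are an exceeded packet size and an exceeded idle timeout; these are assumptions about this particular setup, not something the excerpt confirms. On Cloud SQL the corresponding server flags can be raised with, for example, gcloud sql instances patch INSTANCE_NAME --database-flags max_allowed_packet=33554432 (INSTANCE_NAME and the value are placeholders), and on the Doctrine side long-idle connections are usually closed and reopened between requests rather than held across them.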

How can I create a composite index without using GAE?

Submitted by 百般思念 on 2019-12-23 22:13:13
Question: I'm working on Google Cloud Datastore with the Go SDK, and I'm hitting a GQL query error: "Your Datastore does not have the composite index (developer-supplied) required for this query." I'm aware that I need to create the composite index, but the Google Datastore documentation assumes that the application is up and running on GAE, while in my case we run it on GKE and use the Go SDK to work with Datastore. So my question is, do I need to have a GAE instance just for creating a composite index?
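You don't need a GAE instance for this. A sketch of the usual SDK-only path, with a hypothetical kind and property set standing in for whatever the failing query needs: define the index in an index.yaml file,

    # index.yaml - composite index definition
    # (kind and properties are hypothetical placeholders)
    indexes:
      - kind: Task
        properties:
          - name: done
          - name: priority
            direction: desc

then submit it from any machine with the Cloud SDK using gcloud datastore indexes create index.yaml; no App Engine deployment is involved.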

Change the horizontal-pod-autoscaler-sync-period with GKE

Submitted by 流过昼夜 on 2019-12-20 06:08:47
Question: How can I change the horizontal-pod-autoscaler-sync-period setting with GKE? I want to change it from the default of 30 seconds.

Answer 1: There is no way to add or remove flags when using GKE; that's the downside of it being managed for you and not by you.

Source: https://stackoverflow.com/questions/46317275/change-the-horizontal-pod-autoscaler-sync-period-with-gke

StackOverflowError when applying PySpark ALS's "recommendProductsForUsers" (although a cluster with >300GB RAM is available)

Submitted by ぐ巨炮叔叔 on 2019-12-20 03:43:24
Question: I am looking for expertise to guide me on the issue below.

Background: I'm trying to get going with a basic PySpark script inspired by this example. As deployment infrastructure I use a Google Cloud Dataproc cluster. The cornerstone of my code is the function "recommendProductsForUsers", documented here, which gives me back the top X products for all users in the model.

Issue I incur: The ALS.train script runs smoothly and scales well on GCP (easily >1 million customers). However, applying the predictions, i.e. using …
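The excerpt cuts off before the error details, but a StackOverflowError in iterative MLlib jobs is commonly mitigated by truncating the RDD lineage via checkpointing and by enlarging the JVM thread stack; both are offered here as assumptions, not a confirmed fix for this case. A minimal PySpark sketch (the checkpoint bucket and stack size are placeholders):

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("als-recommend")
    # Enlarge executor thread stacks (placeholder size); the driver's stack
    # must be raised at submit time instead, e.g.
    # spark-submit --driver-java-options -Xss32m
    conf.set("spark.executor.extraJavaOptions", "-Xss32m")
    sc = SparkContext(conf=conf)

    # With a checkpoint directory set, ALS can periodically checkpoint,
    # keeping the lineage of each training iteration short.
    sc.setCheckpointDir("gs://my-bucket/spark-checkpoints/")  # placeholder bucket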

Error in creating a GPU Google Cloud instance

Submitted by 自闭症网瘾萝莉.ら on 2019-12-19 12:49:19
Question: I have tried creating a GPU instance on Google Cloud Platform, but every time I try to create an instance it shows "You've reached your limit of 0 GPUs NVIDIA K80". I am trying to create an instance with 4 vCPUs, 8-15 GB memory, and 1 GPU in us-east1-c or us-west1-b. Please help with the following.

Answer 1: Follow all the steps in the specified order, because otherwise GPUs won't be seen in the Quotas page. You need to go to the Quotas section of IAM & Admin: https://console.cloud.google.com/projectselector/iam …
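As a quick check before filing the request, the current limit can also be read from the CLI, for example with gcloud compute regions describe us-east1, whose quotas list includes an NVIDIA_K80_GPUS metric; a limit of 0 there confirms that a quota increase is required before the instance can be created.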

How to reset Google Cloud Shell user persistent disk?

Submitted by 送分小仙女□ on 2019-12-12 19:06:16
Question: OK, folks. Pushing the limits (of my understanding), I've broken my Google Cloud Shell on Google Cloud Platform, and I can no longer open a shell session. When I click the shell icon >_ on the toolbar, the shell opens on the lower half of the screen for a moment, states that it is provisioning the instance (if it has been over an hour), establishes a connection, and then, poof, it closes. I was able to time a screen capture just right to see the following: Welcome to Cloud Shell! For help, visit …

Permission Denied When Making Request to GCP Video Intelligence API

Submitted by ♀尐吖头ヾ on 2019-12-12 16:00:13
Question: I am able to make a valid request to the Video Intelligence API with the sample video given in the quickstart: https://cloud.google.com/video-intelligence/docs/getting-started. I have tried many different ways of authenticating to the API as well. The API token I am using was created from the Credentials page in the console. There are no options to tie it to the Video API, so I figured it should automatically work. The API has been enabled on my account.

export TOKEN="foobar"
curl -XPOST -s …
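For comparison, a sketch of an authenticated call through the Python client library, which uses a service-account key via GOOGLE_APPLICATION_CREDENTIALS rather than a bare API key (the call shapes below match the 1.x client versions current when this was asked; the timeout is a placeholder):

    from google.cloud import videointelligence

    # Credentials are picked up from the GOOGLE_APPLICATION_CREDENTIALS
    # environment variable; an API key alone is typically not accepted here.
    client = videointelligence.VideoIntelligenceServiceClient()

    operation = client.annotate_video(
        input_uri="gs://cloud-samples-data/video/cat.mp4",  # quickstart sample
        features=[videointelligence.enums.Feature.LABEL_DETECTION],
    )
    result = operation.result(timeout=300)
    print(result.annotation_results[0])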