google-compute-engine

How to migrate old Google Compute Engine disks?

有些话、适合烂在心里 submitted on 2019-12-24 03:13:01
Question: I am using Google Compute Engine in Europe and the maintenance window just hit us. The "automatic migration" didn't work, so all of our servers are offline. During the recovery from backup, we found a few files missing. I have a persistent boot disk with data, created from the debian-7-wheezy-v20130617 image, which I am trying to access. I came up with two possible solutions to access the data: Create a new VM with the old boot disk. Sounds easy, but Google changed something and the VM won't boot
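If the old persistent disk still exists in the project, one recovery path that sidesteps the boot problem entirely is to attach it to a fresh VM as a secondary data disk and mount it read-only. A minimal sketch with gcloud; instance, disk, and zone names here are hypothetical:

```shell
# Create a fresh VM from a current image, then attach the old disk as a
# non-boot data disk (names and zone are placeholders).
gcloud compute instances create recovery-vm --zone europe-west1-b

gcloud compute instances attach-disk recovery-vm \
    --disk old-boot-disk --zone europe-west1-b

# On the VM: mount the attached disk's root partition read-only and copy data off.
gcloud compute ssh recovery-vm --zone europe-west1-b \
    --command 'sudo mkdir -p /mnt/old && sudo mount -o ro /dev/sdb1 /mnt/old'
```

Mounting read-only protects the old filesystem while you recover the missing files.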

VM Instance is not accessible: "The project you requested is unavailable"

对着背影说爱祢 submitted on 2019-12-24 03:00:55
Question: Just signed up for the free Google Cloud account ($300 credit) to see if it supports exporting VMs in OVF format. Created a new project, and by clicking on Compute > Compute Engine > VM Instances I see the error message below: "The project you requested is unavailable." There is no extra information provided on the screen. Answer 1: Google Compute Engine currently doesn't support exporting VMs in OVF or OVA format. You can use free tools, e.g. VirtualBox, to convert GCE images from the RAW format to VMDK, VDI
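For the OVF part of the question: once you have a GCE disk image as a RAW file on your machine, VirtualBox's convertfromraw subcommand can produce VMDK or VDI files that other hypervisors accept. File names here are hypothetical:

```shell
# Convert a RAW disk image to hypervisor-friendly formats.
VBoxManage convertfromraw disk.raw disk.vmdk --format VMDK
VBoxManage convertfromraw disk.raw disk.vdi --format VDI
```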

GCP MySQL server has gone away (Google SQL MySQL 2nd Gen 5.7)

痞子三分冷 submitted on 2019-12-24 02:23:58
Question: We are running on Google Compute Engine/Debian 9/PHP/Lumen/Doctrine 2 <-> Google Cloud SQL MySQL 2nd Gen 5.7. Usually it works without hiccups, but we are now getting error messages, similar to the one below, with increasing frequency: Error while sending QUERY packet. PID=123456 PDOStatement::execute(): MySQL server has gone away Any idea why this is happening and how I would fix it? Answer 1: As noted here, there is a list of cases which may be causing this error. A few are: You have encountered a
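Two of the usual triggers are a packet larger than max_allowed_packet and an idle connection exceeding wait_timeout. On Cloud SQL these are adjusted as database flags rather than in my.cnf; a sketch, with the instance name, host, and values as hypothetical placeholders:

```shell
# Inspect the current server-side limits on the Cloud SQL instance.
mysql -h 10.0.0.3 -u root -p \
  -e "SHOW VARIABLES WHERE Variable_name IN ('wait_timeout','max_allowed_packet');"

# Raise them via database flags (instance name and values are placeholders).
gcloud sql instances patch my-instance \
    --database-flags wait_timeout=600,max_allowed_packet=67108864
```

Note that --database-flags replaces the instance's full flag set, so include any flags that are already configured.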

Is it valid to assume that Google virtual CPUs are all on one socket (if < 16 vCPUs)?

岁酱吖の submitted on 2019-12-24 01:58:18
Question: We're building a high-performance scientific computing application (lots and lots of computations) using Java. To the best of our knowledge, Google Compute Engine does not provide the "true" physical socket information, nor does it have a service like AWS's dedicated hosts (https://aws.amazon.com/ec2/dedicated-hosts/ and then see the section on "affinity") where (for a fee) one could see the actual physical sockets. However, based on our understanding, the JIT compiler will do a lot better if it
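Whatever the physical layout, you can at least inspect the virtual topology the guest sees; on Linux, lscpu and /proc/cpuinfo report the socket/core layout that GCE exposes to the VM, which is what the JVM and OS scheduler actually act on:

```shell
# Virtual topology as seen by the guest; "Socket(s)" here is the
# virtualized value the hypervisor presents, not proof of the physical layout.
lscpu | grep -E 'Socket|Core|Thread|^CPU'

# Distinct physical ids across vCPUs (one id => one virtual socket).
grep 'physical id' /proc/cpuinfo | sort -u
```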

Google compute engine load balancing not routing properly

寵の児 submitted on 2019-12-24 01:58:08
Question: I am new to Google Compute Engine and I am trying to set up network load balancing with 2 VMs serving web pages. For example, I have 2 VMs - app1 and app2 - both running an Apache server and serving a simple web page. Both VMs are running Red Hat Enterprise Linux Server release 7.0 (Maipo). I am able to access both web pages through the IP in a browser. I created the network load balancing setup, and both apps show green in the target pool, which means the load balancer is able to connect to both VMs.
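When the target pool shows green but pages don't load through the load-balanced IP, the usual suspects are a missing firewall rule for the forwarding rule's port or Apache bound only to localhost. A couple of diagnostic commands, with hypothetical names:

```shell
# On each backend: confirm Apache answers locally and listens on 0.0.0.0:80.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/
sudo ss -tlnp | grep ':80'

# Ensure a firewall rule admits TCP 80 to the instances (rule name is a placeholder).
gcloud compute firewall-rules create www-allow --allow tcp:80
```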

db.model_from_protobuf() equivalents outside of AppEngine?

六月ゝ 毕业季﹏ submitted on 2019-12-24 01:53:02
Question: In the Google App Engine (GAE) environment, I can do the following to convert a Protobuf byte string back to a Datastore model: from google.appengine.ext import db byte_str = .... model = db.model_from_protobuf(byte_str.decode("base64")) Outside of GAE, I normally use the google-cloud-datastore client to access Datastore models: from google.cloud import datastore ... client = datastore.Client(project_id) query = client.query(kind='Event', order=('-date',)) for result in query.fetch(limit=100): print

Google Cloud public hostname

懵懂的女人 submitted on 2019-12-23 23:32:06
Question: Is there any solution to get a public hostname in Google Cloud like on other cloud platforms? Currently the machine name is: computername.c.googleprojectid.internal but I want something like on Amazon or Azure: computername.cloudapp.net Answer 1: You can use the Google Cloud DNS service to update the DNS record for your host on startup. (You could also use a service like dyn-dns, but I'm assuming that you want to use the Google tools where possible.) It looks like you'd want to use the "create
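The Cloud DNS update flow the answer describes looks roughly like this with the gcloud CLI: look up the VM's external IP from the metadata server, then add an A record inside a record-set transaction. Zone and domain names here are hypothetical:

```shell
# Fetch this VM's external IP from the metadata server.
IP=$(curl -s -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip')

# Add an A record via a Cloud DNS transaction (zone/domain are placeholders).
gcloud dns record-sets transaction start --zone my-zone
gcloud dns record-sets transaction add --zone my-zone \
    --name vm1.example.com. --type A --ttl 300 "$IP"
gcloud dns record-sets transaction execute --zone my-zone
```

Run from a startup script, this gives each VM a stable public hostname in a zone you control.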

Autoscaling GCE Instance groups based on Cloud pub/sub queue

删除回忆录丶 submitted on 2019-12-23 22:51:07
Question: Can GCE instance groups be scaled up/down based on Google Cloud Pub/Sub queue counts or other asynchronous task queues such as PSQ? Answer 1: Yes! The feature is now in alpha: https://cloud.google.com/compute/docs/autoscaler/scaling-queue-based Answer 2: I haven't tried this myself, but looking at the documentation, it looks possible to set up autoscaling against Pub/Sub message queue counts. This page [0] explains how to set up the autoscaler to scale based on a standard metric provided by the Cloud
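As a sketch of the queue-based approach in Answer 1, the flags below scale a managed instance group on a subscription's undelivered-message count. Group, zone, and subscription names are hypothetical, and the exact flags may have changed since the feature was in alpha:

```shell
# Scale so each instance handles ~100 undelivered messages (names are placeholders).
gcloud compute instance-groups managed set-autoscaling my-group \
    --zone us-central1-a --max-num-replicas 10 \
    --update-stackdriver-metric \
      pubsub.googleapis.com/subscription/num_undelivered_messages \
    --stackdriver-metric-filter \
      'resource.type = "pubsub_subscription" AND resource.labels.subscription_id = "my-sub"' \
    --stackdriver-metric-single-instance-assignment 100
```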

gcloud compute copy-files succeeds but no files appear

孤人 submitted on 2019-12-23 12:44:35
Question: I am copying data from my local machine to a Compute Engine instance: gcloud compute copy-files /Users/me/project/data.csv instance-name:~/project The command runs and completes: data.csv 100% 74KB 73.9KB/s 00:00 However, I cannot find it anywhere on my Compute Engine instance. It is not visible in the ~/project folder. Is it failing silently, or am I looking in the wrong place? Answer 1: Short answer: Most likely, you're looking in the wrong $HOME. Make sure you're looking in the home directory
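gcloud logs in with a particular username, and ~/project resolves to that user's home directory; if you later SSH as a different user, you land in a different $HOME. Hypothetical commands to locate the file:

```shell
# See which user/home the copy landed in, then search all homes for the file.
gcloud compute ssh instance-name --command 'echo "$USER -> $HOME"; ls -l ~/project'
gcloud compute ssh instance-name --command 'sudo find /home /root -name data.csv'
```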

Compute Engine: “This site can’t be reached”

天涯浪子 submitted on 2019-12-23 06:47:07
Question: SITUATION: I am following this tutorial. When I get to the part where I create an instance and execute the necessary commands, I get to the following: To see the application running, go to http://[YOUR_INSTANCE_IP]:8080, where [YOUR_INSTANCE_IP] is the external IP address of your instance. PROBLEM: The page doesn't load. I get the following error message: This site can’t be reached QUESTION: What could have gone wrong? All previous steps worked perfectly and I was able to access my website
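A frequent cause of this symptom is that no firewall rule admits TCP 8080, or the app listens only on 127.0.0.1. Two checks, with hypothetical rule and instance names:

```shell
# Open port 8080 (rule name is a placeholder; narrow --source-ranges in production).
gcloud compute firewall-rules create allow-8080 \
    --allow tcp:8080 --source-ranges 0.0.0.0/0

# Confirm the app listens on 0.0.0.0:8080, not just localhost.
gcloud compute ssh instance-name --command 'ss -tlnp | grep 8080'
```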