google-cloud-pubsub

Error code 503 in GCP pubsub.v1.Subscriber.StreamingPull

Submitted by 和自甴很熟 on 2020-06-29 20:42:59

Question: I am trying to use the Pub/Sub service and noticed the following error code in my dashboard. Here is a link to what code 503 is. Is there anything that would allow me to prevent it? -Askar

Answer 1: As explained in the documentation link about error codes that you shared, the HTTP code 503 ("UNAVAILABLE") is returned when the Pub/Sub service was not able to process a request. In general, these types of errors tend to be transient; there is no way to avoid them entirely, you can only work around…
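The usual workaround for transient UNAVAILABLE errors is to retry with exponential backoff, which the client libraries already do for most operations. A minimal sketch of such a backoff schedule (the parameter values here are illustrative, not the client library's actual defaults):

```javascript
// Sketch: exponential backoff with a cap, as one might configure for
// retrying transient UNAVAILABLE (503) errors from StreamingPull.
// initialMs / multiplier / maxMs values are illustrative assumptions.
function backoffDelays({ initialMs = 100, multiplier = 2, maxMs = 60000, attempts = 8 } = {}) {
  const delays = [];
  let delay = initialMs;
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(delay, maxMs)); // never wait longer than the cap
    delay *= multiplier;
  }
  return delays;
}

console.log(backoffDelays({ attempts: 5 })); // [ 100, 200, 400, 800, 1600 ]
```

Because StreamingPull connections are expected to terminate periodically, a 503 on that method in the dashboard is often just the stream being re-established rather than lost messages.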

How to manually ack/nack a PubSub message in Camel Route

Submitted by 无人久伴 on 2020-06-28 08:16:29

Question: I am setting up a Camel route with ackMode=NONE, meaning acknowledgements are not done automatically. How do I explicitly acknowledge the message in the route? In my Camel route definition I've set ackMode to NONE. According to the documentation, I should be able to manually acknowledge the message downstream: https://github.com/apache/camel/blob/master/components/camel-google-pubsub/src/main/docs/google-pubsub-component.adoc "AUTO = exchange gets ack'ed/nack'ed on completion. NONE = downstream…
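Whatever the Camel-specific mechanism turns out to be, the behavior being requested is that the route itself decides, per message, whether to ack (remove) or nack (redeliver). A plain-JavaScript sketch of that decision logic (this is not Camel or the Pub/Sub client API; settleMessages and process are illustrative names):

```javascript
// Sketch: with automatic acknowledgement disabled, the consumer must decide
// explicitly whether each message is acknowledged (not redelivered) or
// nacked (redelivered). `process` is a hypothetical per-message handler.
function settleMessages(messages, process) {
  const acked = [];
  const nacked = [];
  for (const msg of messages) {
    try {
      process(msg);
      acked.push(msg.ackId);  // success: acknowledge
    } catch (err) {
      nacked.push(msg.ackId); // failure: nack so Pub/Sub redelivers
    }
  }
  return { acked, nacked };
}
```

In Camel terms, the equivalent decision would have to be signalled from the exchange once processing succeeds or fails.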

Executing cloud functions after n seconds on demand

Submitted by 时光怂恿深爱的人放手 on 2020-06-17 13:09:07

Question: I am working on an application where I have to send a notification to a user regarding a job, and the user has to accept the job within one minute; if they don't, the job request should be sent to the next user. I am using Firestore as the database. When I create a job, a trigger sends a notification to the assigned user. I then have to wait 60 seconds to confirm whether the user has accepted the job and started the procedure; if not, I have to assign the job to a new user. I am not sure how I can…
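One common pattern for this is to enqueue a Cloud Task whose scheduleTime is 60 seconds in the future, and have the task's handler check Firestore and reassign the job if it is still unaccepted. A sketch of building such a task payload (field names follow the Cloud Tasks REST API; the queue URL and handler name are placeholders):

```javascript
// Sketch: build a Cloud Tasks HTTP task that fires `delaySeconds` from now.
// The handler it targets would check Firestore and reassign the job if the
// user has not accepted it yet. The URL is a placeholder.
function buildDelayedTask(jobId, delaySeconds, nowMs = Date.now()) {
  return {
    httpRequest: {
      httpMethod: 'POST',
      url: 'https://example-region-project.cloudfunctions.net/checkJob', // placeholder
      body: Buffer.from(JSON.stringify({ jobId })).toString('base64'),
    },
    // Cloud Tasks holds the task until this absolute time
    scheduleTime: { seconds: Math.floor(nowMs / 1000) + delaySeconds },
  };
}

const task = buildDelayedTask('job-123', 60, 0);
console.log(task.scheduleTime.seconds); // 60
```

Unlike a setTimeout inside a function, the delay lives in the queue, so it survives instance shutdown and scales to many pending jobs.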

Get execution ID for Google Cloud Functions triggered from PubSub event

Submitted by 荒凉一梦 on 2020-06-17 07:54:05

Question: For Google Cloud Functions triggered over HTTP, it is possible to retrieve the execution ID by inspecting the headers of the HTTP request ("Function-Execution-Id"):

```go
package p

import (
	"fmt"
	"net/http"
)

func F(w http.ResponseWriter, r *http.Request) {
	executionID := r.Header.Get("Function-Execution-Id")
	fmt.Println(executionID)
}
```

However, for functions triggered by Pub/Sub events, I can't find how to retrieve this execution ID:

```go
package p

import (
	"context"
)

type PubSubMessage struct {
	Data []byte `json:"data"`
}
```

…

How to implement the “locked” feature in AWS/SQS when using Google Cloud Pub/Sub?

Submitted by 99封情书 on 2020-06-13 06:13:09

Question: When you implement a producer/consumer pattern on top of Google Cloud Pub/Sub, you expect each message to be processed by only one consumer, but Google Cloud Pub/Sub sends each message to all the subscribers. AWS SQS has the following feature that makes this guarantee easy: "When a message is received, it becomes 'locked' while being processed. This keeps other computers from processing the message simultaneously. If the message processing fails, the lock will expire and…
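Pub/Sub delivers each message to every subscription, but subscribers attached to the same subscription share messages between them, and the subscription's ack deadline plays the role of SQS's visibility timeout: a delivered-but-unacked message is not redelivered until the deadline lapses. A toy in-memory model of that lease behavior (simplified; not the client API):

```javascript
// Sketch: in-memory model of Pub/Sub's ack deadline, the analogue of the
// SQS visibility timeout. While a message is leased (delivered, not yet
// acked) it is invisible to other pulls; after `ackDeadlineMs` it becomes
// deliverable again. Simplified: one subscription, caller supplies the clock.
class LeaseModel {
  constructor(ackDeadlineMs) {
    this.ackDeadlineMs = ackDeadlineMs;
    this.leases = new Map(); // messageId -> lease expiry (ms)
  }
  pull(messageId, nowMs) {
    const expiry = this.leases.get(messageId);
    if (expiry !== undefined && nowMs < expiry) return false; // still locked
    this.leases.set(messageId, nowMs + this.ackDeadlineMs);   // take the lease
    return true;
  }
  ack(messageId) {
    // In real Pub/Sub an ack removes the message permanently; this toy
    // model only drops the lease.
    this.leases.delete(messageId);
  }
}

const lease = new LeaseModel(10000);
console.log(lease.pull('m1', 0));     // true  (first consumer gets it)
console.log(lease.pull('m1', 5000));  // false (locked for 10 s)
console.log(lease.pull('m1', 15000)); // true  (deadline passed, redelivered)
```

So the SQS-style "locked" behavior comes for free by attaching all competing consumers to a single subscription.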

How does pubsub know how many messages I published at a point in time?

Submitted by 会有一股神秘感。 on 2020-05-17 07:41:26

Question: Code for publishing the messages:

```typescript
async function publishMessage(topicName) {
  console.log(`[${new Date().toISOString()}] publishing messages`);
  const pubsub = new PubSub({ projectId: PUBSUB_PROJECT_ID });
  const topic = pubsub.topic(topicName, {
    batching: {
      maxMessages: 10,
      maxMilliseconds: 10 * 1000,
    },
  });
  const n = 5;
  const dataBufs: Buffer[] = [];
  for (let i = 0; i < n; i++) {
    const data = `message payload ${i}`;
    const dataBuffer = Buffer.from(data);
    dataBufs.push(dataBuffer);
  }
  const …
```
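The client buffers published messages and flushes a batch when either maxMessages is reached or maxMilliseconds has elapsed since the first buffered message, whichever comes first; with n = 5 and maxMessages: 10 as above, the five messages go out together when the 10 s timer fires. A toy model of that flush rule (my own simplification, not the library's internals):

```javascript
// Sketch: toy model of client-side batching. A batch is flushed when it
// reaches maxMessages, or when maxMilliseconds has elapsed since the first
// buffered message, whichever happens first. `ts` is a publish timestamp.
function batchMessages(events, { maxMessages, maxMilliseconds }) {
  const batches = [];
  let current = [];
  let firstTs = null;
  for (const { payload, ts } of events) {
    if (firstTs !== null && ts - firstTs >= maxMilliseconds) {
      batches.push(current); current = []; firstTs = null; // timer fired
    }
    if (firstTs === null) firstTs = ts;
    current.push(payload);
    if (current.length === maxMessages) {
      batches.push(current); current = []; firstTs = null; // size reached
    }
  }
  if (current.length) batches.push(current); // final flush
  return batches;
}
```

So the service does not "know" how many messages you intended to publish at an instant; it only sees whatever batches the client's size and time thresholds produced.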

What is the IP range(s) of Google pub/sub?

Submitted by 放肆的年华 on 2020-05-14 09:08:26

Question: I have a Google Pub/Sub subscription that pushes messages for a topic to an App Engine standard service endpoint. I want to restrict access to the App Engine standard service to user IPs while still allowing messages coming from Google Pub/Sub. In the App Engine firewall, the only option is to allow certain IP ranges. What are the IP ranges of Google Pub/Sub?

Answer 1: I've noticed that all the IP requests from Pub/Sub push subscriptions come from 2002:axx:xxxx::. As per IETF RFC 3056, 2002::…
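RFC 3056 (6to4) embeds an IPv4 address in bits 16..47 of a 2002::/16 address, i.e. in the second and third hextets, so the underlying IPv4 source of those push requests can be recovered. A decoder sketch:

```javascript
// Sketch: recover the IPv4 address embedded in a 6to4 (2002::/16) address.
// Per RFC 3056, V4ADDR occupies the 2nd and 3rd hextets.
function sixToFourToIPv4(addr) {
  const hextets = addr.split(':');
  if (hextets[0].toLowerCase() !== '2002') return null; // not a 6to4 address
  const h1 = parseInt(hextets[1] || '0', 16);
  const h2 = parseInt(hextets[2] || '0', 16);
  return [h1 >> 8, h1 & 0xff, h2 >> 8, h2 & 0xff].join('.');
}

console.log(sixToFourToIPv4('2002:a9fe:1234::')); // "169.254.18.52"
```

Note this is a decoding trick, not an access-control guarantee; relying on the push endpoint's authentication token is generally safer than firewalling by observed source ranges.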

“Callback function is not a function” Error when following Google Cloud Scheduler / PubSub tutorial

Submitted by 别来无恙 on 2020-04-16 03:49:06

Question: I am trying to create a start/stop schedule for my VM instance on Google Cloud. I am following this tutorial created by Google, but when I get to the "(Optional) Verify the functions work" section and try to test the stopInstancePubSub function, passing the {"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsICJsYWJlbCI6ImVudj1kZXYifQo="} JSON object, I get the following error:

2019-06-09 17:23:54.225 EDT stopInstancePubSub ipmdukx38xpw TypeError: callback is not a function at exports.stopInstancePubSub (/srv…