Firebase Cloud Function with Firestore returning “Deadline Exceeded”

Backend · unresolved · 5 answers · 1562 views
Asked by 暗喜 on 2020-12-09 09:28

I took one of the sample functions from the Firestore documentation and was able to successfully run it from my local Firebase environment. However, once I deployed it to my Firebase project, calls to the function started failing with “Deadline Exceeded”.

5 Answers
  • 2020-12-09 09:56

    I tested this by having 15 concurrent AWS Lambda functions write 10,000 requests to different collections/documents in the database, milliseconds apart. I did not get the DEADLINE_EXCEEDED error.

    Please see the Firebase documentation:

    'deadline-exceeded': Deadline expired before operation could complete. For operations that change the state of the system, this error may be returned even if the operation has completed successfully. For example, a successful response from a server could have been delayed long enough for the deadline to expire.

    In our case we are writing small amounts of data, and it works most of the time, but losing data is unacceptable. I have not concluded why Firestore intermittently fails to write such small pieces of data.
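    Given the quoted semantics — the operation may have succeeded even though deadline-exceeded was returned — any retry should target idempotent writes (e.g. a set() to a fixed document id rather than an add()). A minimal, Firestore-agnostic sketch of such a retry with exponential backoff; withRetry and the simulated flaky operation are illustrative names of mine, not part of the answer's code:

    ```typescript
    // Illustrative retry helper: retries a (preferably idempotent) async
    // operation with exponential backoff before giving up.
    async function withRetry<T>(
        op: () => Promise<T>,
        maxAttempts = 4,
        baseDelayMs = 100,
    ): Promise<T> {
        let lastErr: unknown;
        for (let attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return await op();
            } catch (err) {
                lastErr = err;
                // Wait 100ms, 200ms, 400ms, ... before the next attempt.
                await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
            }
        }
        throw lastErr;
    }

    // Demo with a simulated write that fails twice, then succeeds.
    let calls = 0;
    const flakyWrite = async (): Promise<string> => {
        calls++;
        if (calls < 3) throw new Error("deadline-exceeded");
        return "ok";
    };

    withRetry(flakyWrite).then((res) => console.log(res, calls)); // ok 3
    ```

    Because the write may already have landed on the server, a retried set() with the same document id simply overwrites the same data, whereas a retried add() could create duplicates.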

    SOLUTION:

    My workaround uses an AWS Lambda function with an SQS event trigger (Serverless Framework configuration):

      # This function receives requests from the queue and handles them
      # by persisting the survey answers for the respective users.
      QuizAnswerQueueReceiver:
        handler: app/lambdas/quizAnswerQueueReceiver.handler
        timeout: 180 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.
        reservedConcurrency: 1 # optional, reserved concurrency limit for this function. By default, AWS uses account concurrency limit    
        events:
          - sqs:
              batchSize: 10 # Invoke with up to 10 messages per batch.
              maximumBatchingWindow: 60 # The maximum amount of time in seconds to gather records before invoking the function
              arn:
                Fn::GetAtt:
                  - SurveyAnswerReceiverQueue
                  - Arn
        environment:
          NODE_ENV: ${self:custom.myStage}
    

    I am using a dead letter queue connected to my main queue for failed events.

      Resources:
        QuizAnswerReceiverQueue:
          Type: AWS::SQS::Queue
          Properties:
            QueueName: ${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}
            # The SQS visibility timeout must be greater than the Lambda function's timeout:
            # https://lumigo.io/blog/sqs-and-lambda-the-missing-guide-on-failure-modes/
            # While a delivered message is invisible, other consumers cannot receive it,
            # which gives the initial consumer time to process and delete it from the queue.
            VisibilityTimeout: 900

            # How long Amazon SQS retains a message: from 60 seconds (1 minute) to 1,209,600 seconds (14 days).
            MessageRetentionPeriod: 345600 # 4 days
            RedrivePolicy:
              deadLetterTargetArn:
                "Fn::GetAtt":
                  - QuizAnswerReceiverQueueDLQ
                  - Arn
              maxReceiveCount: 5 # The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
        QuizAnswerReceiverQueueDLQ:
          Type: "AWS::SQS::Queue"
          Properties:
            QueueName: "${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}DLQ"
            MessageRetentionPeriod: 1209600 # 14 days in seconds
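
    The answer references app/lambdas/quizAnswerQueueReceiver.handler but does not show it. A minimal, hypothetical sketch of what such an SQS batch handler could look like, with the actual Firestore write abstracted behind an injected persist function (all names here are illustrative, not taken from the answer):

    ```typescript
    // Minimal shapes of the SQS event payload the handler receives.
    interface SQSRecord { messageId: string; body: string; }
    interface SQSEvent { Records: SQSRecord[]; }

    // Hypothetical handler: `persist` stands in for the actual Firestore write,
    // e.g. db.collection("answers").doc(id).set(payload).
    async function handleQuizAnswers(
        event: SQSEvent,
        persist: (answer: unknown) => Promise<void>,
    ): Promise<{ processed: number }> {
        let processed = 0;
        // Process records sequentially to avoid bursts against the same document.
        for (const record of event.Records) {
            const answer = JSON.parse(record.body);
            await persist(answer); // a throw here makes SQS retry / DLQ the message
            processed++;
        }
        return { processed };
    }
    ```

    Letting a failed write throw (rather than swallowing the error) is what makes the redrive policy above work: after maxReceiveCount failed deliveries, the message moves to the dead letter queue instead of being lost.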
    
  • 2020-12-09 10:02

    If the error appears after around 10 seconds, it's probably not your internet connection; it may be that your function is not returning a promise. In my experience, I got the error simply because I had wrapped a Firebase set operation (which already returns a promise) inside another promise. You can do this:

    return db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(() => {
            // set() resolves with a WriteResult once the write is committed.
            const SuccessResponse = { "code": "200" };
            return JSON.stringify(SuccessResponse);
        }).catch(err => {
            console.log('Quiz Error OCCURRED ', err);
            const FailureResponse = { "code": "400" };
            return JSON.stringify(FailureResponse);
        });
    

    instead of

    return new Promise((resolve,reject)=>{ 
        db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
            var SuccessResponse = {
                "code": "200"
            }
    
            var resp = JSON.stringify(SuccessResponse);
            return resp;
        }).catch(err => {
            console.log('Quiz Error OCCURED ', err);
            var FailureResponse = {
                "code": "400",
            }
    
            var resp = JSON.stringify(FailureResponse);
            return resp;
        });
        // Note: resolve()/reject() are never called here, so this outer
        // promise never settles and the function eventually times out.
    });
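
    The failure mode of the second version can be shown without Firestore at all: a manually constructed promise whose executor never calls resolve() or reject() simply never settles. This illustrative sketch (fakeSet stands in for the Firestore write; the names are mine) contrasts the two shapes:

    ```typescript
    // Simulated async write (stands in for db.collection(...).doc(...).set(...)).
    const fakeSet = (): Promise<string> => Promise.resolve("written");

    // Anti-pattern: the outer promise never calls resolve(), so it never settles.
    function broken(): Promise<string> {
        return new Promise<string>((resolve) => {
            fakeSet().then((res) => {
                return res; // returns from .then(), NOT from the outer promise
            });
            // resolve() is never invoked -> the caller waits forever.
        });
    }

    // Fix: return the existing chain; its settled value reaches the caller.
    function fixed(): Promise<string> {
        return fakeSet().then((res) => res);
    }

    fixed().then((v) => console.log(v)); // "written"
    ```

    In a Cloud Function, the never-settling promise means the runtime cannot tell the work is done, so it holds the invocation open until the deadline expires.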
    
  • 2020-12-09 10:05

    In my own experience, this problem can also happen when you try to write documents using a bad internet connection.

    I use a solution similar to Jurgen's suggestion: inserting documents in batches of fewer than 500 at a time. This error appears when I'm on an unstable Wi-Fi connection; when I plug in a cable, the same script with the same data runs without errors.

  • 2020-12-09 10:06

    I have written this little script, which uses batch writes (max 500) and only writes one batch after the other.

    Use it by first creating a batch worker: let batch: any = new FbBatchWorker(db); Then add anything to the worker: batch.set(ref.doc(docId), MyObject); Finally, finish it via batch.commit(). The API is the same as for the normal Firestore batch (https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes); however, it currently only supports set and delete.

    import { firestore } from "firebase-admin";
    
    class FBWorker {
        callback: Function;
    
        constructor(callback: Function) {
            this.callback = callback;
        }
    
        work(data: {
            type: "SET" | "DELETE";
            ref: FirebaseFirestore.DocumentReference;
            data?: any;
            options?: FirebaseFirestore.SetOptions;
        }) {
            if (data.type === "SET") {
                data.ref.set(data.data, data.options)
                    // Invoke the callback even on failure, so commit() cannot hang.
                    .catch((err) => console.error("FBWorker set failed:", err))
                    .then(() => this.callback());
            } else if (data.type === "DELETE") {
                data.ref.delete()
                    .catch((err) => console.error("FBWorker delete failed:", err))
                    .then(() => this.callback());
            } else {
                this.callback();
            }
        }
    }
    
    export class FbBatchWorker {
        db: firestore.Firestore;
        batchList2: {
            type: "SET" | "DELETE";
            ref: FirebaseFirestore.DocumentReference;
            data?: any;
            options?: FirebaseFirestore.SetOptions;
        }[] = [];
        elemCount: number = 0;
        private _maxBatchSize: number = 490;
    
        public get maxBatchSize(): number {
            return this._maxBatchSize;
        }
        public set maxBatchSize(size: number) {
            if (size < 1) {
                throw new Error("Size must be positive");
            }
    
            if (size > 490) {
                throw new Error("Size must not be larger than 490");
            }
    
            this._maxBatchSize = size;
        }
    
        constructor(db: firestore.Firestore) {
            this.db = db;
        }
    
        async commit(): Promise<any> {
            const workerProms: Promise<any>[] = [];
            const maxWorker = this.batchList2.length > this.maxBatchSize ? this.maxBatchSize : this.batchList2.length;
            for (let w = 0; w < maxWorker; w++) {
                workerProms.push(
                    new Promise((resolve) => {
                        const A = new FBWorker(() => {
                            if (this.batchList2.length > 0) {
                                A.work(this.batchList2.pop()!);
                            } else {
                                resolve();
                            }
                        });
    
                        // tslint:disable-next-line: no-floating-promises
                        A.work(this.batchList2.pop()!);
                    }),
                );
            }
    
            return Promise.all(workerProms);
        }
    
        set(dbref: FirebaseFirestore.DocumentReference, data: any, options?: FirebaseFirestore.SetOptions): void {
            this.batchList2.push({
                type: "SET",
                ref: dbref,
                data,
                options,
            });
        }
    
        delete(dbref: FirebaseFirestore.DocumentReference) {
            this.batchList2.push({
                type: "DELETE",
                ref: dbref,
            });
        }
    }
    
  • 2020-12-09 10:07

    Firestore has limits, and “Deadline Exceeded” probably happens because you are hitting one of them.

    See https://firebase.google.com/docs/firestore/quotas — in particular:

    Maximum sustained write rate to a document: 1 per second

    https://groups.google.com/forum/#!msg/google-cloud-firestore-discuss/tGaZpTWQ7tQ/NdaDGRAzBgAJ
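
    If a client hammers one document faster than the sustained limit, spacing the writes out on the client side is one way to stay under it. A hedged sketch of a per-key throttle (purely illustrative, not Firestore-specific; KeyedThrottle is my name):

    ```typescript
    // Minimal per-key throttle: successive operations on the same key start at
    // least `intervalMs` apart; different keys are not delayed by each other.
    class KeyedThrottle {
        private nextSlot = new Map<string, number>();
        constructor(private intervalMs: number) {}

        async run<T>(key: string, op: () => Promise<T>): Promise<T> {
            const now = Date.now();
            // Reserve the next available start time for this key.
            const slot = Math.max(now, this.nextSlot.get(key) ?? now);
            this.nextSlot.set(key, slot + this.intervalMs);
            if (slot > now) {
                await new Promise((r) => setTimeout(r, slot - now));
            }
            return op();
        }
    }

    // Usage sketch: space out writes to the same document at >= 1s intervals,
    // e.g. throttle.run("users/alice", () => docRef.set(data)).
    const throttle = new KeyedThrottle(1000);
    ```

    This only addresses a single process; writers spread across many clients or functions still need coordination (e.g. the SQS queue with reservedConcurrency: 1 shown in another answer).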
