Firebase Cloud Function with Firestore returning “Deadline Exceeded”

Backend · Unresolved · 5 replies · 1590 views
暗喜 · 2020-12-09 09:28

I took one of the sample functions from the Firestore documentation and was able to successfully run it from my local Firebase environment. However, once I deployed to my fi

5 replies
  •  盖世英雄少女心
    2020-12-09 09:56

    I tested this by having 15 concurrent AWS Lambda functions write 10,000 requests into the database, into different collections/documents, milliseconds apart. I did not get the DEADLINE_EXCEEDED error.

    Please see the Firebase documentation:

    'deadline-exceeded': Deadline expired before operation could complete. For operations that change the state of the system, this error may be returned even if the operation has completed successfully. For example, a successful response from a server could have been delayed long enough for the deadline to expire.

    In our case we are writing a small amount of data and it works most of the time, but losing data is unacceptable. I have not determined why Firestore fails on such small, simple writes.
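    One mitigation (a sketch, not part of the original answer) is to wrap each write in a small retry helper that backs off when the SDK reports `deadline-exceeded`; the helper name, attempt count, and delays below are illustrative:

```javascript
// Hypothetical retry helper: re-attempts a Firestore write when it fails
// with the deadline error ('deadline-exceeded' in the Firebase SDKs,
// gRPC numeric code 4). Any other error is rethrown immediately.
async function withRetry(writeFn, { attempts = 3, baseDelayMs = 200 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await writeFn();
    } catch (err) {
      lastErr = err;
      if (err.code !== 'deadline-exceeded' && err.code !== 4) throw err;
      // Exponential backoff before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

    Note that, per the documentation quoted above, the write may have actually succeeded even when this error is returned, so retried writes should be idempotent (e.g. `set` on a deterministic document ID rather than `add`).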

    SOLUTION:

    I am using an AWS Lambda function that uses an SQS event trigger.

      # This function receives requests from the queue and handles them
      # by persisting the survey answers for the respective users.
      QuizAnswerQueueReceiver:
        handler: app/lambdas/quizAnswerQueueReceiver.handler
        timeout: 180 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.
        reservedConcurrency: 1 # optional, reserved concurrency limit for this function. By default, AWS uses account concurrency limit    
        events:
          - sqs:
              batchSize: 10 # Up to 10 messages are delivered per invocation.
              maximumBatchingWindow: 60 # The maximum amount of time in seconds to gather records before invoking the function
              arn:
                Fn::GetAtt:
                  - SurveyAnswerReceiverQueue
                  - Arn
        environment:
          NODE_ENV: ${self:custom.myStage}
    

    I am using a dead letter queue connected to my main queue for failed events.

      Resources:
        QuizAnswerReceiverQueue:
          Type: AWS::SQS::Queue
          Properties:
            QueueName: ${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}
            # The VisibilityTimeout MUST be greater than the Lambda function's timeout: https://lumigo.io/blog/sqs-and-lambda-the-missing-guide-on-failure-modes/
    
            # The length of time during which a message will be unavailable after a message is delivered from the queue.
            # This blocks other components from receiving the same message and gives the initial component time to process and delete the message from the queue.
            VisibilityTimeout: 900 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.
    
            # The number of seconds that Amazon SQS retains a message. You can specify an integer value from 60 seconds (1 minute) to 1,209,600 seconds (14 days).
            MessageRetentionPeriod: 345600 # 4 days
            RedrivePolicy:
              deadLetterTargetArn:
                "Fn::GetAtt":
                  - QuizAnswerReceiverQueueDLQ
                  - Arn
              maxReceiveCount: 5 # The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
        QuizAnswerReceiverQueueDLQ:
          Type: "AWS::SQS::Queue"
          Properties:
            QueueName: "${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}DLQ"
            MessageRetentionPeriod: 1209600 # 14 days in seconds
    
