retry-logic

Nginx keeps retrying the request every 60 seconds

偶尔善良 submitted on 2021-01-29 05:16:10
Question: I have a request that sometimes needs more than a minute to execute. My service sits behind Nginx -> Tyk API Gateway. What happens is that after the service has been working on the request for 60 seconds, Nginx sends the same request to the service again, ignoring the first one. From the client's perspective it looks like a single request that takes about 5 minutes (because in fact there are 5 requests). When I invoke the service directly by IP (without Nginx; only Tyk and the service) there are no retries.
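
One common cause to check is Nginx's own proxy timeouts: proxy_read_timeout defaults to 60 seconds, and proxy_next_upstream by default treats a timeout as a reason to pass the request to another upstream server, which can show up as the same request being sent again (for example when the upstream name resolves to more than one address). A minimal configuration sketch, assuming a plain proxy_pass setup; the location path and upstream name are illustrative, not taken from the question:

    location / {
        proxy_pass           http://backend;
        proxy_read_timeout   300s;   # default is 60s, matching the observed retry interval
        proxy_send_timeout   300s;
        proxy_next_upstream  off;    # do not re-send the request after an error/timeout
    }

If Tyk also enforces its own upstream timeout, it may need to be raised as well.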

EventGrid-triggered, Python Azure Function keeps triggering after successfully running?

蓝咒 submitted on 2021-01-28 18:11:46
Question: There are a couple of other topics out there, but none with solutions, and none pertaining to Python Functions.

Background:
- EventGrid-triggered, Python Azure Function
- EventGrid messages are created only when a blob is uploaded to a given Storage Account
- The Function receives the message, downloads the blob from the message URL and does "stuff"
- The Function can run for several seconds/minutes (up to 120 seconds for large blobs)

Example of the issue: 4 files uploaded to the blob container in the correct Storage Account; the Function …
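
Event Grid delivers events at least once and will redeliver an event if it does not receive a timely success response, so a long-running handler can be invoked again for the same event. One common mitigation is to make the function idempotent by keying work off the event id. A minimal sketch, assuming the standard azure-functions Event Grid binding; already_processed, mark_processed and process_blob stand in for a durable store and the real work, and are hypothetical:

    import logging
    import azure.functions as func

    def main(event: func.EventGridEvent) -> None:
        # Event Grid may deliver the same event more than once, so dedupe on event.id.
        if already_processed(event.id):          # hypothetical lookup in a durable store
            logging.info("Skipping duplicate event %s", event.id)
            return

        data = event.get_json()                  # BlobCreated payload includes the blob URL
        blob_url = data.get("url")
        process_blob(blob_url)                   # hypothetical long-running work

        mark_processed(event.id)                 # hypothetical: record completion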

FCM: Retry-after and exponential backoff

强颜欢笑 submitted on 2021-01-28 07:04:21
Question: As I understand it, when a message fails to be delivered, the Retry-After header is sometimes included in the response and sometimes not. But what happens if I first receive an error response with Retry-After included, resend the message, and then receive another error response without Retry-After? I know I should use exponential backoff, but how does that work when the previous waiting time came from the Retry-After header? Imagine this sequence of requests and responses: Request 1: no waiting …
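
A minimal sketch of one way to combine the two, assuming each attempt reports either success, a Retry-After value in seconds, or a plain failure. The send_once helper and the numeric limits are illustrative assumptions, not FCM's documented behaviour:

    import random
    import time

    def send_with_backoff(message, max_attempts=6, base_wait=1.0, max_wait=120.0):
        wait = base_wait
        for _ in range(max_attempts):
            # send_once is hypothetical: (True, None) on success,
            # (False, seconds) when Retry-After was present, (False, None) otherwise.
            ok, retry_after = send_once(message)
            if ok:
                return True
            if retry_after is not None:
                wait = float(retry_after)          # server-specified wait takes precedence
                time.sleep(wait)
            else:
                wait = min(wait * 2, max_wait)     # double the previous wait, whatever its source
                time.sleep(wait * random.uniform(0.5, 1.5))  # jitter to avoid synchronized retries
        return False

When a failure arrives without Retry-After, the exponential step simply treats the last wait as its starting point, even if that wait came from a Retry-After header.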

Retry logic - retry whole class if one test fails - Selenium

杀马特。学长 韩版系。学妹 submitted on 2019-12-24 07:15:48
Question: Following are the classes used to implement the retry logic.

TestRetry class:

    public class TestRetry implements IRetryAnalyzer {
        int counter = 0;
        int retryLimit = 2;

        @Override
        public boolean retry(ITestResult result) {
            if (counter < retryLimit) {
                TestReporter.logStep("Retrying Test " + result.getName()
                        + " for number of times: " + (counter + 1));
                counter++;
                return true;
            }
            return false;
        }
    }

RetryListener class:

    public class RetryListener implements IAnnotationTransformer {
        @Override
        public void transform …

How to do a `getOrWaitUntilNonEmpty` as a single liner?

≯℡__Kan透↙ submitted on 2019-12-13 15:10:40
Question: I have a high-level code structure that looks like this:

    val block: (=> Option[Seq[String]]) = ...
    val matches = block().get.toArray

The problem is that this code may fail, i.e. .get being None depending on the timing, e.g. when I'm page-scraping Google too often; then I'd wait and retry ... I can do the waiting like this, i.e. random waits between 11-16 s:

    val random = new Random()
    Thread.sleep((11000 * random.nextFloat() + 6000).ceil.toInt)

What would be an elegant single-liner way to [waiting] loop …

Is there a way I can delay the retry for a service bus message in an Azure function?

霸气de小男生 submitted on 2019-12-11 08:35:59
Question: I have a function which pulls messages off a subscription and forwards them to an HTTP endpoint. If the endpoint is unavailable, an exception is thrown. When this happens, I would like to delay the next attempt of that specific message for a certain amount of time, e.g. 15 minutes. So far, I have found the following solutions:
- Catch the exception, sleep, then throw. This is a terrible solution, as I will be charged for CPU usage while it is sleeping, and it will affect the throughput of the …
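
One commonly suggested alternative is to let the function complete the failing message normally and re-enqueue a copy as a scheduled message, so the retry only becomes visible after the desired delay. A minimal sketch using the azure-servicebus Python SDK; the connection string, queue name and payload handling are illustrative assumptions (for a topic, get_topic_sender works the same way):

    from datetime import datetime, timedelta, timezone
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    def reschedule(payload: bytes, conn_str: str, queue_name: str, delay_minutes: int = 15) -> None:
        # Re-enqueue a copy of the failed message; it stays invisible until the scheduled time.
        with ServiceBusClient.from_connection_string(conn_str) as client:
            with client.get_queue_sender(queue_name=queue_name) as sender:
                sender.schedule_messages(
                    ServiceBusMessage(payload),
                    datetime.now(timezone.utc) + timedelta(minutes=delay_minutes),
                )

Note that the scheduled copy starts with a fresh delivery count, so any "max retries" limit has to be tracked in the message's application properties.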

What is the benefit of using exponential backoff?

本秂侑毒 submitted on 2019-12-03 15:54:24
Question: When code is waiting for some condition whose delay is not deterministic, many people seem to choose exponential backoff, i.e. wait N seconds and check whether the condition is satisfied; if not, wait 2N seconds, check again, and so on. What is the benefit of this over checking at a constant or linearly increasing interval?

Answer 1: This is the behavior of TCP congestion control. If the network is extremely congested, effectively no traffic gets through. If every node waits for …
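
A minimal sketch of the polling pattern the question describes, with randomized ("jittered") exponential backoff; the check() predicate and the numeric bounds are illustrative assumptions. The exponential growth means that if the condition stays unsatisfied for a long time, the number of checks (and hence the load added to an already-struggling resource) grows only logarithmically with the waiting time, and the jitter keeps many independent clients from retrying in lock-step:

    import random
    import time

    def wait_for(check, base=1.0, cap=60.0, timeout=600.0) -> bool:
        """Poll check() with jittered exponential backoff until it returns True or time runs out."""
        deadline = time.monotonic() + timeout
        delay = base
        while time.monotonic() < deadline:
            if check():                              # hypothetical predicate; each call adds load
                return True
            time.sleep(random.uniform(0, delay))     # "full jitter": sleep anywhere in [0, delay]
            delay = min(delay * 2, cap)              # double the ceiling, up to a cap
        return False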