Sending a request to an ASP.NET Core Web API running on a specific node in a Service Fabric Cluster

Asked by 名媛妹妹 on 2021-01-14 23:44

I am working on a Service Fabric application that contains a bunch of ASP.NET Core Web APIs. Now when I run my application on my local

2 Answers
  •  悲&欢浪女
     2021-01-15 00:40

    Your problem comes from using your API itself to do the work of a background worker.

    You should use your API only to schedule the work in the background (in a separate process or worker service) and return a token or operation id to the user. The user can then use this token to query the status of the task or to cancel it.

    The first step: when your API is called, generate a GUID (or insert a row in a database), put a message in a queue (e.g. Service Bus), and return the GUID to the caller.
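    A minimal sketch of this first step, assuming Azure Service Bus as the queue. The queue name "jobs", the JobRequest type, and the DI registration of ServiceBusClient are illustrative assumptions, not part of the original answer:

```csharp
// Sketch: accept a job request, assign a GUID, enqueue a message, return the GUID.
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class JobsController : ControllerBase
{
    private readonly ServiceBusSender _sender;

    // ServiceBusClient is assumed to be registered as a singleton in DI.
    public JobsController(ServiceBusClient client)
    {
        _sender = client.CreateSender("jobs"); // assumed queue name
    }

    [HttpPost]
    public async Task<IActionResult> StartJob([FromBody] JobRequest request)
    {
        var jobId = Guid.NewGuid();

        // Optionally persist the job id (DB or cache) here if you need to track status.
        var payload = JsonSerializer.Serialize(new { JobId = jobId, request.Data });
        await _sender.SendMessageAsync(new ServiceBusMessage(payload));

        // Return the id so the caller can poll for status or request cancellation later.
        return Accepted(new { jobId });
    }
}

public record JobRequest(string Data);
```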

    The second step: a worker process runs in your cluster, listening to this queue and processing messages as they arrive. You can make this a single-threaded service that processes one message at a time in a loop, or a multi-threaded service that processes multiple messages with one thread per message (a minimal single-threaded sketch follows the list below). It depends on how complex you want it to be:

    • With a single-threaded listener, you scale the application by spinning up multiple instances so that multiple tasks run in parallel. In Service Fabric this is a simple scale command, and SF will distribute the service instances across your available nodes.

    • With a multi-threaded version you have to manage the concurrency yourself for better performance, and you might have to consider memory, CPU, disk and so on; otherwise you risk putting too much load on a single node.
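    Here is a minimal single-threaded sketch of such a worker, assuming a Service Fabric stateless service that reads from the same Service Bus queue. The queue name "jobs", the ProcessJobAsync placeholder, and the ServiceBusClient constructor parameter (which your service factory would have to supply) are all assumptions:

```csharp
// Sketch: a stateless Service Fabric worker that processes one queue message at a time.
// Scaling out means raising the instance count; SF spreads instances across nodes.
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class JobWorkerService : StatelessService
{
    private readonly ServiceBusReceiver _receiver;

    public JobWorkerService(System.Fabric.StatelessServiceContext context, ServiceBusClient client)
        : base(context)
    {
        _receiver = client.CreateReceiver("jobs"); // assumed queue name
    }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // Wait up to 30 seconds for the next message; null means the queue was empty.
            var message = await _receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(30), cancellationToken);
            if (message == null) continue;

            try
            {
                await ProcessJobAsync(message.Body.ToString(), cancellationToken);
                await _receiver.CompleteMessageAsync(message, cancellationToken);
            }
            catch (Exception)
            {
                // Abandon so the message becomes visible again and can be retried.
                await _receiver.AbandonMessageAsync(message, cancellationToken: cancellationToken);
            }
        }
    }

    private Task ProcessJobAsync(string payload, CancellationToken cancellationToken)
    {
        // Illustrative placeholder for the actual long-running work.
        return Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
    }
}
```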

    The third step, cancellation: the cancellation process is straightforward, and there are many approaches:

    • Use a similar approach and enqueue a cancellation message:
      • Your service listens for cancellation messages in a separate thread and cancels the running task (if it is running).
      • Using a dedicated queue for the cancellation messages is better.
      • If you run multiple listener instances, consider a topic instead of a queue.
    • Use a cache key to store the job status and check on every iteration whether cancellation has been requested (sketched in the code after this list).
    • Use a table with the job status, checked on every iteration just as you would with the cache key.
    • Create a remoting endpoint so the API can call the service directly and trigger a cancellation token.
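    A minimal sketch of the cache-key/status-flag approach: the API flips a shared flag to Cancelled, and the worker polls it between iterations of its work. IJobStatusStore, the JobStatus enum, and the chunked work loop are hypothetical; the store could be backed by Redis, a SQL table, or anything similar:

```csharp
// Sketch: cooperative cancellation via a shared job-status flag checked each iteration.
using System;
using System.Threading;
using System.Threading.Tasks;

public enum JobStatus { Pending, Running, Cancelled, Completed }

// Assumed abstraction over whatever cache or table actually holds the status.
public interface IJobStatusStore
{
    Task<JobStatus> GetStatusAsync(Guid jobId);
    Task SetStatusAsync(Guid jobId, JobStatus status);
}

public class CancellableJobProcessor
{
    private readonly IJobStatusStore _statusStore;

    public CancellableJobProcessor(IJobStatusStore statusStore) => _statusStore = statusStore;

    public async Task RunAsync(Guid jobId, CancellationToken serviceShutdownToken)
    {
        await _statusStore.SetStatusAsync(jobId, JobStatus.Running);

        for (int step = 0; step < 100; step++) // illustrative chunks of work
        {
            // Stop if the caller requested cancellation or the service itself is shutting down.
            if (serviceShutdownToken.IsCancellationRequested ||
                await _statusStore.GetStatusAsync(jobId) == JobStatus.Cancelled)
            {
                await _statusStore.SetStatusAsync(jobId, JobStatus.Cancelled);
                return;
            }

            await DoWorkChunkAsync(step, serviceShutdownToken);
        }

        await _statusStore.SetStatusAsync(jobId, JobStatus.Completed);
    }

    private Task DoWorkChunkAsync(int step, CancellationToken token) =>
        Task.Delay(TimeSpan.FromMilliseconds(200), token); // placeholder for real work
}
```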

    There are many approaches; these are simple ones, and you might combine several of them to gain better control over your tasks.
