Azure Machine Learning Request Response latency

Asked by 遥遥无期 on 2020-12-15 12:28

I have made an Azure Machine Learning Experiment which takes a small dataset (12x3 array) and some parameters, and does some calculations using a few Python modules (a linear

1 Answer
  • 2020-12-15 12:45

    First, I am assuming you are doing your timing test on the published AML endpoint.

    When a call is made to the AML endpoint, the first call must warm up the container. By default a web service has 20 containers. Each container starts cold, and a cold container can cause a large (~30 sec) delay. In the string returned by the AML endpoint, only count requests that have the isWarm flag set to true. By hitting the service with MANY requests (relative to how many containers you have running), you can get all your containers warmed.
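    To measure steady-state latency rather than cold-start latency, you can filter your timing samples by that flag. A minimal sketch, assuming the endpoint's response body is JSON containing an `isWarm` field as described above (the exact payload shape is an assumption, not a documented contract):

```python
import json

def warm_latencies(responses):
    """Keep timing samples only from containers that were already warm.

    `responses` is a list of (latency_seconds, response_json_string)
    pairs. The `isWarm` flag is assumed to appear in the endpoint's
    returned JSON, per the answer above; adjust the key if your
    payload differs.
    """
    warm = []
    for latency, body in responses:
        payload = json.loads(body)
        if payload.get("isWarm"):
            warm.append(latency)
    return warm

samples = [
    (31.2, '{"isWarm": false, "result": 42}'),  # cold-start hit, discard
    (0.8,  '{"isWarm": true,  "result": 42}'),
    (0.7,  '{"isWarm": true,  "result": 42}'),
]
print(warm_latencies(samples))  # → [0.8, 0.7]
```

    Averaging only the warm samples gives a fairer picture of the latency your users will see once the containers are hot.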

    If you are sending out dozens of requests per instance, the endpoint might be getting throttled. You can adjust the number of calls your endpoint can accept by going to manage.windowsazure.com/

    1. Go to manage.windowsazure.com/
    2. Open the Azure ML section from the left bar
    3. Select your workspace
    4. Go to the web services tab
    5. Select your web service from the list
    6. Adjust the number of calls with the slider
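    Until the call quota is raised, the client can also cope with throttling by backing off and retrying. A generic client-side sketch (not an Azure-specific API), assuming the endpoint signals throttling with an HTTP 429 status:

```python
import time

def call_with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry `call` with exponential backoff while it reports throttling.

    `call` is any zero-argument function returning (status_code, body);
    status 429 is treated as "throttled, try again later".
    """
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:
            return status, body
        # Wait 0.5s, 1s, 2s, ... before the next attempt.
        time.sleep(base_delay * (2 ** attempt))
    return status, body  # give up and surface the last throttled response
```

    Spreading retries out like this keeps a burst of client requests from tripping the same throttle repeatedly.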

    By enabling debugging on your endpoint, you can get logs of the execution time for each of your modules. You can use these to determine whether a module is not running as you intended, which may add to the total time.
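    Once you have those logs, a small script can total up where the time goes. The log line format below is hypothetical (real AML diagnostic logs may use a different layout), but the parsing approach carries over:

```python
import re

# Hypothetical log line format; adjust the pattern to your actual logs.
LOG_LINE = re.compile(r"Module '(?P<name>[^']+)' finished in (?P<secs>[\d.]+)s")

def module_times(log_text):
    """Extract per-module execution times (seconds) from debug log text."""
    return {m.group("name"): float(m.group("secs"))
            for m in LOG_LINE.finditer(log_text)}

log = """
Module 'Import Data' finished in 0.4s
Module 'Execute Python Script' finished in 2.1s
"""
print(module_times(log))
# → {'Import Data': 0.4, 'Execute Python Script': 2.1}
```

    A module dominating the total is the first place to look for an unintended slow path.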

    Overall, there is some overhead when using the Execute Python Script module, but I'd expect this request to complete in under 3 seconds.
