Inserting 1000 rows into Azure Database takes 13 seconds?

Submitted by 北慕城南 on 2020-01-02 14:07:09

Question


Can anyone please tell me why it might be taking 12+ seconds to insert 1000 rows into a SQL database hosted on Azure? I'm just getting started with Azure, and this is (obviously) absurd...

Create Table xyz (ID int primary key identity(1,1), FirstName varchar(20))
GO

create procedure InsertSomeRows as
set nocount on
Declare @StartTime datetime = getdate()
Declare @x int = 0;
While @X < 1000
Begin
    insert into xyz (FirstName) select 'john' 
    Set @X = @X+1;
End

Select  count(*) as Rows, DateDiff(SECOND, @StartTime, GetDate()) as SecondsPassed 
from    xyz
GO

Exec InsertSomeRows
Exec InsertSomeRows
Exec InsertSomeRows

GO
Drop Table xyz
Drop Procedure InsertSomeRows

Output:

Rows        SecondsPassed
----------- -------------
1000        11

Rows        SecondsPassed
----------- -------------
2000        13

Rows        SecondsPassed
----------- -------------
3000        14

Answer 1:


It's likely the performance tier you are on that is causing this. With a Standard S0 tier you only have 10 DTUs (Database Transaction Units). If you haven't already, read up on the SQL Database service tiers. If you aren't familiar with DTUs, they're a bit of a shift from on-premises SQL Server: the amount of CPU, memory, log IO, and data IO you get is all wrapped up in which service tier you select. Just like on-premises, if you start to hit the upper bounds of what your machine can handle, things slow down, start to queue up, and eventually start timing out.

Run your test again just as you have been doing, but use the Azure Portal to watch the DTU % used while the test is underway. If you see that the DTU % is getting maxed out, then the issue is that you've chosen a service tier that doesn't have enough resources to handle the load you've applied without slowing down. If the speed isn't acceptable, move up to the next service tier until it is. You pay more for more performance.

I'd recommend not paying too close attention to the service tier based on this test, but rather on the actual load you want to apply to the production system. This test will give you an idea and a better understanding of DTUs, but it may or may not represent the actual throughput you need for your production loads (which could be even heavier!).

Don't forget that in Azure SQL DB you can also scale your database as needed, so that you have the performance you need but can scale back down during times you don't. The database remains accessible during most of a scaling operation (though note it can take some time to complete, and there may be a second or two when you can't connect).
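As a rough illustration of the scale-up/scale-down approach described above, the service objective can be changed with a single T-SQL statement (the database name and the 'S3'/'S0' objectives here are placeholders; check the current Azure documentation for available tiers):

```sql
-- Scale up to a higher service objective before a heavy load...
ALTER DATABASE MyAzureDb MODIFY (SERVICE_OBJECTIVE = 'S3');

-- ...run the heavy workload, then scale back down afterwards.
ALTER DATABASE MyAzureDb MODIFY (SERVICE_OBJECTIVE = 'S0');
```

The statement returns immediately, but the scaling operation itself completes asynchronously in the background.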




Answer 2:


Two factors made the biggest difference. First, I wrapped all the inserts into a single transaction. That got me from 100 inserts per second to about 2,500. Then I upgraded the server to a Premium P4 tier, and now I can insert 25,000 rows per second (inside a transaction).
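The transaction-wrapping change can be sketched against the original test procedure like this (a variant of the question's `InsertSomeRows`, not the answerer's actual code):

```sql
-- Same test loop as the question, but with one explicit transaction so all
-- 1000 inserts share a single log flush at commit time instead of one each.
create procedure InsertSomeRowsTx as
set nocount on
declare @StartTime datetime = getdate();
declare @x int = 0;
begin tran;
while @x < 1000
begin
    insert into xyz (FirstName) values ('john');
    set @x = @x + 1;
end
commit tran;

select count(*) as Rows,
       datediff(second, @StartTime, getdate()) as SecondsPassed
from xyz;
```

Each autocommitted insert forces its own synchronous log write, which is exactly the per-IO cost that the DTU-limited log throughput throttles; one transaction amortizes that cost across all 1000 rows.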

Using an Azure server is going to take some getting used to, along with learning which best practices give me the results I need.




Answer 3:


My theory: each insert is one log IO. Here, that would be about 100 IOs/sec, which sounds like a plausible limit on an S0. Can you try with a transaction wrapped around the inserts?

So wrapping the inserts in a single transaction did indeed speed this up: inside the transaction it can insert about 2,500 rows per second.

So that explains it. Now the results are no longer catastrophic. I would now advise looking at metrics such as the Azure dashboard DTU utilization and wait stats. If you post them here I'll take a look.




Answer 4:


One way to improve performance is to look at the wait stats of the query.

Looking at wait stats will show you the exact bottleneck while a query is running. In your case, it turned out to be log IO. See SQL Server Performance Tuning Using Wait Statistics to learn more about this approach.

I also recommend changing the while loop to something set-based, if this is not just a throwaway test query and you are running it very often.
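As a starting point for the wait-stats approach, Azure SQL Database exposes a database-scoped DMV you can query directly (the threshold and TOP count here are arbitrary choices):

```sql
-- Top cumulative waits for the current database. On Azure SQL Database use
-- sys.dm_db_wait_stats; the server-wide sys.dm_os_wait_stats is the
-- on-premises equivalent.
select top (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
from sys.dm_db_wait_stats
where wait_time_ms > 0
order by wait_time_ms desc;
```

On a log-bound workload like the one in the question, you would expect log-related wait types (e.g. WRITELOG or the log rate governor) near the top of this list.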

Set based solution:

 create procedure usp_test
     @n int
 as
 begin
     begin try
         begin tran;
             insert into yourtable
             select n, 'John'
             from   numbers
             where  n < @n;
         commit tran;
     end try
     begin catch
         if @@trancount > 0 rollback tran;
         -- handle/log errors here
     end catch
 end

You will have to create a numbers table for this to work.
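The answer doesn't show how to build the numbers table; one common way to generate it (the 10,000-row size is an arbitrary choice) is a cross join over a system catalog:

```sql
-- Build a 1..10000 numbers table from a catalog cross join; sys.all_objects
-- has enough rows that the cross join easily covers the TOP count.
select top (10000)
       row_number() over (order by (select null)) as n
into   numbers
from   sys.all_objects a
cross join sys.all_objects b;

-- Index it so range predicates like "where n < @n" are cheap seeks.
create unique clustered index ix_numbers_n on numbers (n);
```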




Answer 5:


I had terrible performance problems with updates & deletes in Azure until I discovered a few techniques:

  1. Copy data to a temporary table and make updates in the temp table, then copy back to a permanent table when done.

  2. Create a clustered index on the table being updated (partitioning didn't work as well).

For inserts, I am using bulk inserts and getting acceptable performance.
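The answer doesn't show its bulk-insert code; one T-SQL option on Azure SQL Database is BULK INSERT from Azure Blob Storage (the data source name, file path, and file layout below are all assumptions, and the external data source must be created beforehand):

```sql
-- Hypothetical sketch: bulk-load a CSV staged in Azure Blob Storage.
-- 'MyAzureBlobStorage' is an assumed, pre-created external data source.
BULK INSERT xyz
FROM 'data/names.csv'
WITH (
    DATA_SOURCE = 'MyAzureBlobStorage',
    FORMAT = 'CSV',       -- requires SQL Server 2017+ / Azure SQL Database
    FIRSTROW = 2          -- skip the header row
);
```

Client-side alternatives such as SqlBulkCopy or table-valued parameters achieve a similar effect of minimally logged, batched inserts.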



Source: https://stackoverflow.com/questions/33573010/inserting-1000-rows-into-azure-database-takes-13-seconds
