How to Optimize Code Performance in .NET [closed]


Question


I have an export job migrating data from an old database into a new database. The problem I'm having is that the user population is around 3 million and the job takes a very long time to complete (15+ hours). The machine I am using only has 1 processor so I'm not sure if threading is what I should be doing. Can someone help me optimize this code?

static void ExportFromLegacy()
{
    var usersQuery = _oldDb.users.Where(x =>
        x.status == "active");

    var exportStart = DateTime.Now; // referenced when printing the total running time

    int BatchSize = 1000;
    var errorCount = 0;
    var successCount = 0;
    var batchCount = 0;

    // Using MoreLinq's Batch for sequences
    // https://www.nuget.org/packages/MoreLinq.Source.MoreEnumerable.Batch
    foreach (IEnumerable<users> batch in usersQuery.Batch(BatchSize))
    {
        Console.WriteLine(String.Format("Batch count at {0}", batchCount));
        batchCount++;

        foreach(var user in batch)
        {
            try
            {               
                var userData = _oldDb.userData.Where(x =>
                    x.user_id == user.user_id).ToList();

                if (userData.Count > 0)
                {                   
                    // Insert into table
                    var newData = new newData()
                    {                       
                        UserId = user.user_id // shortened code for brevity.
                    };

                    _db.newUserData.Add(newData);
                    _db.SaveChanges();

                    // Insert item(s) into table
                    foreach (var item in userData)
                    {
                        if (!_db.userDataItems.Any(x => x.id == item.id))
                        {
                            var newItem = new Item()
                            {                               
                                UserId = user.user_id, // shortened code for brevity.   
                                DataId = newData.id // id from object created above
                            };

                            _db.userDataItems.Add(newItem);
                        }

                        _db.SaveChanges();
                        successCount++;
                    }
                }               
            }
            catch(Exception ex)
            {
                errorCount++;
                Console.WriteLine(String.Format("Error saving changes for user_id: {0} at {1}.", user.user_id.ToString(), DateTime.Now));
                Console.WriteLine("Message: " + ex.Message);
                Console.WriteLine("InnerException: " + ex.InnerException);
            }
        }
    }

    Console.WriteLine(String.Format("End at {0}...", DateTime.Now));
    Console.WriteLine(String.Format("Successful imports: {0} | Errors: {1}", successCount, errorCount));
    Console.WriteLine(String.Format("Total running time: {0}", (exportStart - DateTime.Now).ToString(@"hh\:mm\:ss")));
}

Answer 1:


Unfortunately, the major issue is the number of database round-trips.

You make a round-trip:

  • For every user, you retrieve user data by user id in the old database
  • For every user, you save user data in the new database
  • For every user, you save each user data item in the new database

So if you have 3 million users, and every user has an average of 5 user data items, that means you make at least 3M + 3M + 15M = 21 million database round-trips, which is insane.

The only way to dramatically improve the performance is by reducing the number of database round-trips.

Batch - Retrieve user by id

You can quickly reduce the number of database round-trips by retrieving all the user data for a batch at once. Since you don't need to track these entities, use AsNoTracking() for even more performance gains.

var list = batch.Select(x => x.user_id).ToList();
var userDatas = _oldDb.userData
                  .AsNoTracking()
                  .Where(x => list.Contains(x.user_id))
                  .ToList();

foreach(var userData in userDatas)
{
    ....
}

This change alone should already save you a few hours.

Batch - Save Changes

Every time you save a user data record or an item, you perform a database round-trip.

Disclaimer: I'm the owner of the project Entity Framework Extensions

This library allows you to perform:

  • BulkSaveChanges
  • BulkInsert
  • BulkUpdate
  • BulkDelete
  • BulkMerge

You can either call BulkSaveChanges at the end of the batch, or create lists to insert and use BulkInsert directly for even more performance.

You will, however, have to set a reference to the newData instance instead of using the ID directly.

foreach (IEnumerable<users> batch in usersQuery.Batch(BatchSize))
{
    // Retrieve all users for the batch at once.
    var list = batch.Select(x => x.user_id).ToList();
    var userDatas = _oldDb.userData
                          .AsNoTracking()
                          .Where(x => list.Contains(x.user_id))
                          .ToList();

    // Create lists used for BulkInsert
    var newDatas = new List<newData>();
    var newDataItems = new List<Item>();

    foreach(var userData in userDatas)
    {
        // newDatas.Add(newData);

        // newDataItem.OwnerData = newData;
        // newDataItems.Add(newDataItem);
    }

    _db.BulkInsert(newDatas);
    _db.BulkInsert(newDataItems);
}

EDIT: Answer subquestion

One of the properties of a newDataItem is the id of newData (e.g. newDataItem.newDataId), so newData would have to be saved first in order to generate its id. How would I BulkInsert if there is a dependency on another object?

You must use navigation properties instead. With a navigation property, you never have to specify the parent id; you set the parent object instance instead.

public class UserData
{
    public int UserDataID { get; set; }
    // ... properties ...

    public List<UserDataItem> Items { get; set; }
}

public class UserDataItem
{
    public int UserDataItemID { get; set; }
    // ... properties ...

    public UserData OwnerData { get; set; }
}

var userData = new UserData();
var userDataItem = new UserDataItem();

// Use navigation property to set the parent.
userDataItem.OwnerData = userData;

Tutorial: Configure One-to-Many Relationship

Also, I don't see a BulkSaveChanges in your example code. Would that have to be called after all the BulkInserts?

BulkInsert inserts directly into the database. You don't have to call SaveChanges or BulkSaveChanges; once you invoke the method, it's done ;)

Here is an example using BulkSaveChanges:

foreach (IEnumerable<users> batch in usersQuery.Batch(BatchSize))
{
    // Retrieve all users for the batch at once.
    var list = batch.Select(x => x.user_id).ToList();
    var userDatas = _oldDb.userData
                          .AsNoTracking()
                          .Where(x => list.Contains(x.user_id))
                          .ToList();

    // Create lists used for BulkInsert
    var newDatas = new List<newData>();
    var newDataItems = new List<Item>();

    foreach(var userData in userDatas)
    {
        // newDatas.Add(newData);

        // newDataItem.OwnerData = newData;
        // newDataItems.Add(newDataItem);
    }

    var context = new UserContext();
    context.userDatas.AddRange(newDatas);
    context.userDataItems.AddRange(newDataItems);
    context.BulkSaveChanges();
}

BulkSaveChanges is slower than BulkInsert because it has to use some internal Entity Framework methods, but it is still way faster than SaveChanges.

In the example, I create a new context for every batch to avoid memory issues and gain some performance. If you re-use the same context for all batches, you will end up with millions of tracked entities in the ChangeTracker, which is never a good idea.
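
If you prefer, you can wrap the per-batch context in a using block so it is disposed before the next batch starts; a minimal sketch with the same names as the example above:

// Same idea as above, but the using block guarantees each batch's context
// (and its ChangeTracker) is released before the next batch begins.
using (var context = new UserContext())
{
    context.userDatas.AddRange(newDatas);
    context.userDataItems.AddRange(newDataItems);
    context.BulkSaveChanges();
}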




Answer 2:


Entity Framework is a very bad choice for importing large amounts of data. I know this from personal experience.

That being said, I found a few ways to optimize things when I tried to use it in the same way you are.

The Context will cache objects as you add them, and the more inserts you do, the slower future inserts will get. My solution was to limit each context to about 500 inserts before I disposed of that instance and created a new one. This boosted performance significantly.
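
For illustration, the pattern looked roughly like this (MyDbContext, NewUserData and entitiesToInsert are placeholder names, not your actual types):

// Rough sketch: recycle the context every N inserts so its internal cache
// never grows large. All names here are placeholders.
const int RecycleEvery = 500;
var context = new MyDbContext();
var inserted = 0;

foreach (var entity in entitiesToInsert)
{
    context.NewUserData.Add(entity);

    if (++inserted % RecycleEvery == 0)
    {
        context.SaveChanges();
        context.Dispose();
        context = new MyDbContext();   // fresh context, empty cache
    }
}

context.SaveChanges();                 // flush whatever is left
context.Dispose();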

I was able to make use of multiple threads to increase performance, but you will have to be very careful about resource contention. Each thread definitely needs its own Context; don't even think about trying to share one between threads. My machine had 8 cores, so threading paid off for me; with your single processor I doubt it will help you at all.
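
If you do end up on a machine with several cores, the shape of it was roughly the following. This is only a sketch with placeholder names (usersToMigrate should be a fully materialized list, MyDbContext is your target context), and every batch gets its own context:

// Rough sketch only: process batches in parallel, one context per batch,
// never shared across threads. Batch() is MoreLinq; requires
// System.Threading.Tasks. All names are placeholders.
Parallel.ForEach(
    usersToMigrate.Batch(500),
    new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
    batch =>
    {
        using (var context = new MyDbContext())
        {
            foreach (var user in batch)
            {
                // build and Add() the new entities exactly as in the
                // sequential version
            }
            context.SaveChanges();
        }
    });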

Turn off automatic change detection with AutoDetectChangesEnabled = false; change tracking is incredibly slow. Unfortunately this means you have to modify your code to make all changes directly through the context. No more Entity.Property = "Some Value"; it becomes something like Context.Entry(entity).Property(e => e.Property).CurrentValue = "Some Value" (or close to that, I don't remember the exact syntax), which makes the code ugly.
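
For reference, turning it off in EF6 looks roughly like this (placeholder names again; entities added with Add() are still saved because Add() sets their state explicitly):

// Sketch (EF6): disable automatic change detection before adding many
// entities. MyDbContext / NewUserData / rowsToInsert are placeholder names.
using (var context = new MyDbContext())
{
    context.Configuration.AutoDetectChangesEnabled = false;

    foreach (var row in rowsToInsert)
    {
        // Add() marks the entity as Added explicitly, so it is written
        // on SaveChanges even with automatic change detection off.
        context.NewUserData.Add(row);
    }

    context.SaveChanges();
}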

Any queries you do should definitely use AsNoTracking.

With all that, I was able to cut a ~20 hour process down to about 6 hours, but I still don't recommend using EF for this. It was an extremely painful project due almost entirely to my poor choice of EF to add data. Please use something else... anything else...

I don't want to give the impression that EF is a bad data access library; it is great at what it was designed to do. Unfortunately, bulk data import is not what it was designed for.




Answer 3:


I can think of a few options.

1) A small speed increase can be gained by moving your _db.SaveChanges() below the closing bracket of your foreach()

foreach (...){
}
successCount += _db.SaveChanges();

2) Add items to a list, and then add the list to the context

List<ObjClass> list = new List<ObjClass>();
foreach (...)
{
  list.Add(new ObjClass() { ... });
}
_db.newUserData.AddRange(list);
successCount += _db.SaveChanges();

3) If it's a large amount of data, save in batches

List<ObjClass> list = new List<ObjClass>();
int cnt=0;
foreach (...)
{
  list.Add(new ObjClass() { ... });
  if (++cnt % 100 == 0) // batches of 100
  {
    _db.newUserData.AddRange(list);
    successCount += _db.SaveChanges();
    list.Clear();
    // Optional if a HUGE amount of data
    if (cnt % 1000 == 0)
    {
      _db = new MyDbContext();
    } 
  }
}
// Don't forget that!
_db.newUserData.AddRange(list);
successCount += _db.SaveChanges();
list.Clear();

4) If the amount of data is really big, consider using bulk inserts. There are a few examples on the internet and a few free libraries around. Ref: https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/
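
For SQL Server, the approach in that link boils down to SqlBulkCopy. A rough sketch (table and column names are made up for illustration; requires System.Data and System.Data.SqlClient):

// Sketch of a raw bulk insert with SqlBulkCopy; illustrative names only.
var table = new DataTable();
table.Columns.Add("UserId", typeof(int));
table.Columns.Add("DataId", typeof(int));

foreach (var item in newDataItems)            // list built beforehand
    table.Rows.Add(item.UserId, item.DataId);

using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "dbo.UserDataItems";
    bulk.BatchSize = 5000;
    bulk.WriteToServer(table);
}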

With most of these options you lose some control over error handling, as it becomes difficult to know which record failed.



Source: https://stackoverflow.com/questions/38353112/how-to-optimize-code-performance-in-net
