MongoDB C# Driver doesn't release connections, then errors

The C# driver has a connection pool, and the maximum size of the connection pool is 100 by default. So you should never see more than 100 connections to mongod from a single C# client process. The 1.1 version of the C# driver did have an occasional problem under heavy load, where an error on one connection could result in a storm of disconnects and connects. You would be able to tell if that was happening to you by looking at the server logs, where a log entry is written every time a connection is opened or closed. If so, can you try the 1.2 C# driver that was released this week?
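For anyone who wants to check or change the pool size, here is a minimal sketch rather than code from the question. The `maxPoolSize` option is part of the standard MongoDB connection string format, and `MaxConnectionPoolSize` is its programmatic counterpart on the modern driver's `MongoClientSettings` (the 1.x driver discussed in this answer exposed the same limit through its own settings API):

using MongoDB.Driver;

// Pool size via the connection string (spec-defined option name)...
var client = new MongoClient("mongodb://localhost:27017/?maxPoolSize=100");

// ...or via settings (modern driver API shown; the 1.x API differed).
var settings = new MongoClientSettings
{
    Server = new MongoServerAddress("localhost", 27017),
    MaxConnectionPoolSize = 100
};
var client2 = new MongoClient(settings);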

You should not have needed to create a queue of pending updates. The connection pool acts as a queue of sorts by limiting the number of concurrent requests.

Let me know if you can find anything in the server logs, and if there is anything further I can help you with.

The solution was to stop saving records on each individual thread and to start adding them to a "pending to save" list in memory. A separate thread then handles all saves to MongoDB synchronously. I don't know why the async calls cause the C# driver to trip up, but this is working beautifully now. Here is some sample code if others run into this problem:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Threading;

public static class UserUpdateSaver
{
    // Guards PendingUserUpdates; a dedicated lock object is safer than
    // locking on the list itself, which is public and gets reassigned.
    private static readonly object SyncRoot = new object();

    public static List<UserUpdateView> PendingUserUpdates;

    public static void Initialize()
    {
        PendingUserUpdates = new List<UserUpdateView>();
        var saveUserUpdatesTime = Convert.ToInt32(ConfigurationBL.ReadApplicationValue("SaveUserUpdatesTime"));
        LogWriter.Write("Setting up timer to save user updates every " + saveUserUpdatesTime + " seconds", LoggingEnums.LogEntryType.Warning);

        // Single background thread that drains the pending list on a fixed interval.
        var worker = new BackgroundWorker();
        worker.DoWork += delegate(object s, DoWorkEventArgs args)
        {
            while (true)
            {
                // Process pending user updates every x seconds.
                Thread.Sleep(saveUserUpdatesTime * 1000);
                ProcessPendingUserUpdates();
            }
        };
        worker.RunWorkerAsync();
    }

    public static void AddUserUpdateToSave(UserUpdateView userUpdate)
    {
        // lock is the exception-safe equivalent of Monitor.Enter/Exit.
        lock (SyncRoot)
        {
            PendingUserUpdates.Add(userUpdate);
        }
    }

    private static void ProcessPendingUserUpdates()
    {
        // Snapshot the pending updates under the lock so the copy cannot
        // race with writers calling AddUserUpdateToSave on other threads.
        List<UserUpdateView> pendingUserUpdates;
        lock (SyncRoot)
        {
            pendingUserUpdates = new List<UserUpdateView>(PendingUserUpdates);
        }

        if (pendingUserUpdates.Count > 0)
        {
            var startDate = DateTime.Now;

            foreach (var userUpdate in pendingUserUpdates)
            {
                try
                {
                    UserUpdateStore.Update(userUpdate);
                }
                catch (Exception exc)
                {
                    LogWriter.WriteError(exc);
                }
                finally
                {
                    // Remove the item whether or not the save succeeded,
                    // so one bad record cannot be retried forever.
                    lock (SyncRoot)
                    {
                        PendingUserUpdates.Remove(userUpdate);
                    }
                }
            }

            var duration = DateTime.Now.Subtract(startDate);
            LogWriter.Write(String.Format("Processed {0} user updates in {1} seconds",
                pendingUserUpdates.Count, duration.TotalSeconds), LoggingEnums.LogEntryType.Warning);
        }
        else
        {
            LogWriter.Write("No user updates to process", LoggingEnums.LogEntryType.Warning);
        }
    }
}
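For completeness, the wiring amounts to two calls; `UserUpdateView`, `Initialize`, and `AddUserUpdateToSave` come from the class above, while the call sites are hypothetical:

// At application startup:
UserUpdateSaver.Initialize();

// From any request or worker thread, enqueue instead of saving directly:
UserUpdateSaver.AddUserUpdateToSave(new UserUpdateView());

On frameworks where System.Collections.Concurrent is available, a ConcurrentQueue<UserUpdateView> drained by the background thread would give the same single-writer behavior without the explicit locks.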

Have you heard about message queueing? You could put a bunch of boxes behind a message queue to handle this kind of load and save your data to MongoDB from there. In that case, though, your message queue must support concurrent publish/subscribe. A free message-queue stack (a very good one, in my opinion) is MassTransit with RabbitMQ.

The workflow would be: 1. Publish your data to the message queue; 2. Once it's there, launch as many boxes as you want running subscribers that save and process your Mongo data. A sketch of both steps follows.
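This is a minimal sketch, not code from the original answer: the message type `UserUpdateMessage`, the queue name `user-updates`, and the RabbitMQ address are assumptions, and the bus-factory calls shown are from a recent MassTransit release (the API at the time this answer was written looked different):

using System.Threading.Tasks;
using MassTransit;

// Hypothetical message contract carrying one user update.
public class UserUpdateMessage
{
    public string UserId { get; set; }
}

// Subscriber: each box runs one or more of these and saves to MongoDB.
public class UserUpdateConsumer : IConsumer<UserUpdateMessage>
{
    public Task Consume(ConsumeContext<UserUpdateMessage> context)
    {
        // Persist context.Message to MongoDB here.
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        // Step 2: a receive endpoint with a consumer; add more boxes to scale out.
        var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("rabbitmq://localhost");
            cfg.ReceiveEndpoint("user-updates", e => e.Consumer<UserUpdateConsumer>());
        });

        await bus.StartAsync();
        try
        {
            // Step 1: publish the data to the queue.
            await bus.Publish(new UserUpdateMessage { UserId = "42" });
        }
        finally
        {
            await bus.StopAsync();
        }
    }
}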

This approach is a good fit if you need to scale.
