Caching in C#/.Net

离开以前 2020-11-30 02:22

I wanted to ask: what is the best approach to implementing a cache in C#? Can it be done with existing .NET classes, or something similar? Perhaps something like a …

12 Answers
  •  温柔的废话
    2020-11-30 03:06

    As mentioned in other answers, the default choice in the .NET Framework is MemoryCache, along with the related implementations in Microsoft NuGet packages (e.g. Microsoft.Extensions.Caching.Memory). All of these caches bound size in terms of memory used: they estimate memory consumption by tracking how total physical memory grows relative to the number of cached objects, and a background thread then periodically 'trims' entries.
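
    For illustration, here is a minimal sketch of the Microsoft.Extensions.Caching.Memory flavour; the key, the size units, and the "expensive load" stand-in are assumptions for the example, not part of the question:

    using System;
    using Microsoft.Extensions.Caching.Memory;

    var cache = new MemoryCache(new MemoryCacheOptions
    {
        // Entries must declare a size; trimming kicks in when the total exceeds this limit.
        SizeLimit = 1024,
    });

    var report = cache.GetOrCreate("report:42", entry =>
    {
        entry.SetSize(1);                                    // counts as 1 unit toward SizeLimit
        entry.SetSlidingExpiration(TimeSpan.FromMinutes(5)); // evict if unused for 5 minutes
        return "report body for id 42";                      // stand-in for an expensive load
    });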

    MemoryCache etc. share some limitations:

    1. Keys are strings, so if the key type is not natively string, you will be forced to constantly allocate strings on the heap. This can really add up in a server application when items are 'hot' (a sketch follows this list).
    2. Has poor 'scan resistance': if some automated process rapidly loops through all the items that exist, the cache can grow faster than the background trimming thread can remove entries. This can result in memory pressure, page faults, induced GC, or, when running under IIS, process recycling due to exceeding the private bytes limit.
    3. Does not scale well with concurrent writes.
    4. Contains perf counters that cannot be disabled (which incur overhead).
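
    To illustrate point 1 with System.Runtime.Caching, a hedged sketch (the id, value, and expiration are made up for the example):

    using System;
    using System.Runtime.Caching;

    static string GetGreeting(MemoryCache cache, int id)
    {
        string key = id.ToString();     // a fresh heap string on every lookup
        if (cache.Get(key) is string cached)
        {
            return cached;
        }

        string value = $"hello #{id}";  // stand-in for an expensive load
        cache.Set(key, value, DateTimeOffset.UtcNow.AddMinutes(5));
        return value;
    }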

    Your workload will determine the degree to which these things are problematic. An alternative approach to caching is to bound the number of objects in the cache (rather than estimating memory used). A cache replacement policy then determines which object to discard when the cache is full.

    Below is the source code for a simple cache with a least-recently-used eviction policy:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading;

    public sealed class ClassicLru<K, V>
    {
        // Stand-in for the original code's Defaults.ConcurrencyLevel helper.
        private const int DefaultConcurrencyLevel = 8;

        private readonly int capacity;
        private readonly ConcurrentDictionary<K, LinkedListNode<LruItem>> dictionary;
        private readonly LinkedList<LruItem> linkedList = new LinkedList<LruItem>();

        private long requestHitCount;
        private long requestTotalCount;

        public ClassicLru(int capacity)
            : this(DefaultConcurrencyLevel, capacity, EqualityComparer<K>.Default)
        {
        }

        public ClassicLru(int concurrencyLevel, int capacity, IEqualityComparer<K> comparer)
        {
            if (capacity < 3)
            {
                throw new ArgumentOutOfRangeException(nameof(capacity), "Capacity must be greater than or equal to 3.");
            }

            if (comparer == null)
            {
                throw new ArgumentNullException(nameof(comparer));
            }

            this.capacity = capacity;
            this.dictionary = new ConcurrentDictionary<K, LinkedListNode<LruItem>>(concurrencyLevel, this.capacity + 1, comparer);
        }

        public int Count => this.linkedList.Count;

        public double HitRatio => (double)requestHitCount / (double)requestTotalCount;

        public bool TryGet(K key, out V value)
        {
            Interlocked.Increment(ref requestTotalCount);

            LinkedListNode<LruItem> node;
            if (dictionary.TryGetValue(key, out node))
            {
                LockAndMoveToEnd(node);
                Interlocked.Increment(ref requestHitCount);
                value = node.Value.Value;
                return true;
            }

            value = default(V);
            return false;
        }

        public V GetOrAdd(K key, Func<K, V> valueFactory)
        {
            if (this.TryGet(key, out var value))
            {
                return value;
            }

            var node = new LinkedListNode<LruItem>(new LruItem(key, valueFactory(key)));

            if (this.dictionary.TryAdd(key, node))
            {
                LinkedListNode<LruItem> first = null;

                lock (this.linkedList)
                {
                    if (linkedList.Count >= capacity)
                    {
                        first = linkedList.First;
                        linkedList.RemoveFirst();
                    }

                    linkedList.AddLast(node);
                }

                // Remove from the dictionary outside the lock. This means that the dictionary at this moment
                // contains an item that is not in the linked list. If another thread fetches this item,
                // LockAndMoveToEnd will ignore it, since it is detached. This means we potentially 'lose' an
                // item just as it was about to move to the back of the LRU list and be preserved. The next request
                // for the same key will be a miss. Dictionary and list are eventually consistent.
                // However, all operations inside the lock are extremely fast, so contention is minimized.
                if (first != null && dictionary.TryRemove(first.Value.Key, out var removed))
                {
                    if (removed.Value.Value is IDisposable d)
                    {
                        d.Dispose();
                    }
                }

                return node.Value.Value;
            }

            // Another thread won the race to add this key; retry so the winner's value is returned.
            return this.GetOrAdd(key, valueFactory);
        }

        public bool TryRemove(K key)
        {
            if (dictionary.TryRemove(key, out var node))
            {
                // If the node has already been removed from the list, ignore.
                // E.g. thread A reads x from the dictionary. Thread B adds a new item, removes x from
                // the List & Dictionary. Now thread A will try to move x to the end of the list.
                if (node.List != null)
                {
                    lock (this.linkedList)
                    {
                        if (node.List != null)
                        {
                            linkedList.Remove(node);
                        }
                    }
                }

                if (node.Value.Value is IDisposable d)
                {
                    d.Dispose();
                }

                return true;
            }

            return false;
        }

        // Guards against this race: thread A reads x from the dictionary. Thread B adds a new item.
        // Thread A moves x to the end. Thread B now removes the new first node (removal is atomic
        // on both data structures).
        private void LockAndMoveToEnd(LinkedListNode<LruItem> node)
        {
            // If the node has already been removed from the list, ignore.
            // E.g. thread A reads x from the dictionary. Thread B adds a new item, removes x from
            // the List & Dictionary. Now thread A will try to move x to the end of the list.
            if (node.List == null)
            {
                return;
            }

            lock (this.linkedList)
            {
                if (node.List == null)
                {
                    return;
                }

                linkedList.Remove(node);
                linkedList.AddLast(node);
            }
        }

        private class LruItem
        {
            public LruItem(K k, V v)
            {
                Key = k;
                Value = v;
            }

            public K Key { get; }

            public V Value { get; }
        }
    }
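
    A hypothetical usage sketch for the class above (the key and value types are arbitrary):

    var lru = new ClassicLru<int, string>(capacity: 128);

    // First call invokes the factory; later calls return the cached value.
    string value = lru.GetOrAdd(42, k => $"value-{k}");

    // TryGet counts toward HitRatio and refreshes the item's position in the LRU list.
    if (lru.TryGet(42, out var cached))
    {
        Console.WriteLine($"{cached} (hit ratio {lru.HitRatio:P0})");
    }

    lru.TryRemove(42); // also disposes the value if it implements IDisposable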
    

    This is just to illustrate a thread-safe cache; it probably has bugs and can become a bottleneck under heavy concurrent workloads (e.g. in a web server).

    A thoroughly tested, production-ready, scalable concurrent implementation is a bit beyond a Stack Overflow post. To solve this in my projects, I implemented a thread-safe pseudo LRU (think ConcurrentDictionary, but with constrained size). Performance is very close to a raw ConcurrentDictionary: ~10x faster than MemoryCache, ~10x better concurrent throughput than the ClassicLru above, and a better hit rate. A detailed performance analysis is provided in the GitHub link below.

    Usage looks like this:

    int capacity = 666;
    var lru = new ConcurrentLru<int, SomeItem>(capacity);
    
    var value = lru.GetOrAdd(1, (k) => new SomeItem(k));
    

    GitHub: https://github.com/bitfaster/BitFaster.Caching

    Install-Package BitFaster.Caching
    
