How to maximise the largest contiguous block of memory in the Large Object Heap


You might review the TransferMode property of your binding to see whether you meet the requirements to change it from its default value of "Buffered" to "Streamed" or "StreamedResponse".

Also, review the values for maxBufferPoolSize and maxBufferSize. Increasing the size of the internal buffers can help with memory utilization, especially when processing large messages.

maxReceivedMessageSize is also likely already set if you're receiving large messages, but I would review that value as well.
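
If it helps to see those knobs in one place, here is a minimal sketch of setting them programmatically on a BasicHttpBinding. The contract, address, and sizes are placeholders I've made up; the same limits can also be set through the equivalent attributes on the binding element in config.

    using System;
    using System.IO;
    using System.ServiceModel;

    // Hypothetical contract standing in for whatever service you are calling.
    [ServiceContract]
    public interface IMyLargeDocumentService
    {
        [OperationContract]
        Stream GetLargeDocument(string id);
    }

    public static class BindingSetup
    {
        public static ChannelFactory<IMyLargeDocumentService> CreateFactory(Uri address)
        {
            // The sizes below are illustrative, not recommendations; tune them
            // to the largest message you realistically expect.
            var binding = new BasicHttpBinding
            {
                // Stream the response instead of buffering it whole in memory.
                TransferMode = TransferMode.StreamedResponse,
                // Upper bound (bytes) on the size of an incoming message.
                MaxReceivedMessageSize = 64L * 1024 * 1024,
                // In streamed mode this only limits the buffered headers.
                MaxBufferSize = 64 * 1024,
                // Total size of the pooled buffers reused across calls.
                MaxBufferPoolSize = 512 * 1024
            };

            return new ChannelFactory<IMyLargeDocumentService>(
                binding, new EndpointAddress(address));
        }
    }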

I've seen a request that exceeds one of the thresholds above fail with an obscure, memory-related message. The original exception was actually hidden by the message that surfaced in my application. Enabling WCF tracing helped me diagnose the problem and see the real error: I needed to increase the value of one or more of the binding properties above.

I couldn't tell from your post which binding you're using, but I believe these settings are common across the major ones. Check out the MSDN documentation on basicHttpBinding, for example.

If it is truly LOH fragmentation, there isn't much to be done about it once tuning efforts have been exhausted. A rolling recycle of the application might be required to mitigate it (I hate recommending that), but if you've run out of other options you may be left with it.

I cannot address any of the WCF-specific issues, but if you need to maximize LOH space for a 32-bit process, you should make the application large address aware and run it on 64-bit Windows. A large-address-aware 32-bit process can address the entire 4 GB address space when run on 64-bit Windows, instead of the usual 2 GB. That gives you a sizable chunk of memory above the address range the process normally uses.
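
For reference, one common way to set that flag on a managed executable is to run editbin (from the Visual C++ build tools) against the built binary, for example as a post-build step; the file name here is just a placeholder:

    editbin /LARGEADDRESSAWARE MyService.exe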

I think your problem might be an assembly leak caused by using XmlSerializer with a constructor other than the two indicated in this MSDN article:

To increase performance, the XML serialization infrastructure dynamically generates assemblies to serialize and deserialize specified types. The infrastructure finds and reuses those assemblies. This behavior occurs only when using the following constructors:

XmlSerializer.XmlSerializer(Type)

XmlSerializer.XmlSerializer(Type, String)

If you use any of the other constructors, multiple versions of the same assembly are generated and never unloaded, which results in a memory leak and poor performance.

Nice, huh? The answer is to cache your XmlSerializer instances (assuming you even create them yourself).
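
If you do need one of the other constructors (say, the overload that takes an XmlRootAttribute), the usual workaround looks roughly like this; the cache key and helper name are my own invention, so adapt as needed:

    using System;
    using System.Collections.Concurrent;
    using System.Xml.Serialization;

    // Sketch of the usual fix: build each serializer once and reuse it, so the
    // dynamically generated assembly behind it is only emitted a single time.
    public static class SerializerCache
    {
        private static readonly ConcurrentDictionary<Type, XmlSerializer> Cache =
            new ConcurrentDictionary<Type, XmlSerializer>();

        public static XmlSerializer For(Type type, XmlRootAttribute root)
        {
            // Keying only by Type assumes the same root attribute is always used
            // for a given type; extend the key if that is not true for you.
            return Cache.GetOrAdd(type, t => new XmlSerializer(t, root));
        }
    }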

To really figure it out you need to do what Tess tells you to do. She's a freakin genius.

If possible I would go for a stream-based approach and use a forward-only XML parser in combination, which should give you better performance as well.
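
As a rough illustration of that idea, something like the following reads the document forward-only off the stream rather than loading it all into one big string or DOM; the element name is made up:

    using System.IO;
    using System.Xml;

    public static class ForwardOnlyParsing
    {
        // Illustrative forward-only parse: the "Item" element name is invented,
        // but the pattern (XmlReader over a stream, no full DOM or giant
        // in-memory string) is the point.
        public static int CountItems(Stream responseStream)
        {
            var settings = new XmlReaderSettings { IgnoreWhitespace = true };
            int count = 0;
            using (XmlReader reader = XmlReader.Create(responseStream, settings))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element && reader.Name == "Item")
                    {
                        count++;
                    }
                }
            }
            return count;
        }
    }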

If you don't absolutely have to use WCF, you can issue your own HttpWebRequest and pass the response stream to an XmlSerializer to deserialize it yourself. That might give you more control and insight into where the problem actually occurs. You can also experiment with a mock service that returns very large documents of the type you are expecting. We had a lot of headaches with LOH fragmentation as well, so I really feel your pain.
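
A rough sketch of that approach, assuming a plain XML payload and a made-up LargeDocument type:

    using System;
    using System.IO;
    using System.Net;
    using System.Xml.Serialization;

    // Hypothetical stand-in for whatever the service actually returns.
    public class LargeDocument
    {
        public string Payload { get; set; }
    }

    public static class ManualFetch
    {
        // Created once and reused (see the caching note above).
        private static readonly XmlSerializer Serializer =
            new XmlSerializer(typeof(LargeDocument));

        public static LargeDocument Fetch(Uri address)
        {
            var request = (HttpWebRequest)WebRequest.Create(address);
            request.Method = "GET";

            using (var response = (HttpWebResponse)request.GetResponse())
            using (Stream body = response.GetResponseStream())
            {
                // Deserialize straight off the network stream instead of
                // reading the whole document into a string first.
                return (LargeDocument)Serializer.Deserialize(body);
            }
        }
    }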

A problem I noticed when building up buffers is that .NET tends to double the capacity every time the buffer fills up, which causes memory fragmentation: a 10 MB document ends up being allocated in many steps. If you know the needed buffer size in advance, it is more efficient to allocate it all at once. So if you know how big the incoming document will be, you can create a StringBuilder with exactly that capacity.
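
For example, something along these lines; how you learn the expected size (a Content-Length header, a length field in the protocol, and so on) is up to you:

    using System;
    using System.Text;

    public static class BufferSizing
    {
        // Allocate the builder once at the expected size instead of letting it
        // grow by doubling and leaving a trail of abandoned large buffers.
        public static StringBuilder CreateBuilder(long? expectedLength)
        {
            const int DefaultCapacity = 16 * 1024; // fallback when the size is unknown

            int capacity = expectedLength.HasValue && expectedLength.Value > 0
                ? (int)Math.Min(expectedLength.Value, int.MaxValue)
                : DefaultCapacity;

            return new StringBuilder(capacity);
        }
    }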
