large-object-heap

How can I know the ACTUAL maximum number of elements with which a .NET array of a given type can be allocated?

我与影子孤独终老i submitted on 2021-01-27 20:35:51
Question: I know that all arrays in .NET are limited to 2 GB. Under that premise, I try not to allocate more than n = ((2^31) - 1) / 8 doubles in an array. Nevertheless, even that number of elements doesn't seem to be valid. Does anyone know how I can determine at run time the maximum number of elements, given sizeof(T)? I know that any quantity approaching that number is just a lot of elements but, for all intents and purposes, let's say I need it. Note: I'm in a 64-bit environment, with a target…
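For what it's worth, the real ceiling sits a little below 2^31 bytes because each object also carries header overhead, which is why ((2^31) - 1) / 8 doubles is already slightly too many. A minimal sketch for finding the count the runtime will actually grant, by binary-searching allocation sizes (ProbeMaxElements is a hypothetical helper, not a framework API; on .NET 6+ you could also consult Array.MaxLength, though that reports the index limit rather than what fits in your address space):

    static long ProbeMaxElements()
    {
        long lo = 0, hi = int.MaxValue;      // element counts, not bytes
        while (lo < hi)
        {
            long mid = lo + (hi - lo + 1) / 2;
            try
            {
                var probe = new double[mid]; // throws if the runtime refuses
                GC.KeepAlive(probe);
                lo = mid;                    // mid worked, try bigger
            }
            catch (OutOfMemoryException)
            {
                hi = mid - 1;                // mid failed, try smaller
            }
        }
        return lo;                           // largest count that succeeded
    }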

StringBuilder growing over 85k and moving to LOH? [duplicate]

旧时模样 submitted on 2020-01-03 03:00:10
Question: Possible duplicate of: How does StringBuilder's capacity change? (closed 8 years ago). Let's say a StringBuilder is allocated and then it grows to over 85k; will it get moved over to the Large Object Heap?

Answer 1: StringBuilder doesn't "grow". In pre-4.0 SB, it simply allocated a new, bigger buffer and copied the content from the old one to the new. So in the end, yes, the internal buffer was moved to the LOH. The SB object itself was not, because it's very small (to make it simple, it could have been simply a reference to the buffer and the length of the string in the buffer).
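A small sketch to watch this from the outside (the printed numbers are illustrative, not contractual): on .NET 3.5 the single internal char[] roughly doubles as it fills, and once it passes ~42,500 chars (85,000 bytes) it lands on the LOH; from 4.0 onward StringBuilder keeps a linked list of chunks instead, so appending never creates one huge char[].

    var sb = new StringBuilder();
    int lastCapacity = -1;
    for (int i = 0; i < 200000; i++)
    {
        sb.Append('x');
        if (sb.Capacity != lastCapacity)   // report each growth step
        {
            lastCapacity = sb.Capacity;
            Console.WriteLine("Length " + sb.Length + ": capacity " + sb.Capacity);
        }
    }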

Difference between 3rd gen objects and large object heap

只谈情不闲聊 submitted on 2020-01-02 06:25:17
Question: What is the difference between the large object heap and GC 3rd-generation objects?

Answer 1: The LOH (Large Object Heap) is a single heap where large objects are allocated directly and stay until they are collected. Objects are allocated directly into the LOH based on their size, e.g. being equal to or greater than 85,000 bytes. Generational objects are "small" objects that are allocated into the SOH (Small Object Heap), which is a single heap. Objects in the SOH have an associated generation, which starts at 0 and increases (up to 2) as the object survives garbage collections.
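A quick way to see both halves of this answer is GC.GetGeneration, which reports LOH-resident objects as generation 2, since the LOH is collected together with gen 2:

    byte[] small = new byte[1000];    // below 85,000 bytes -> SOH
    byte[] large = new byte[100000];  // at/above 85,000 bytes -> LOH
    Console.WriteLine(GC.GetGeneration(small)); // 0: fresh SOH allocation
    Console.WriteLine(GC.GetGeneration(large)); // 2: LOH reports as gen 2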

Avoiding the LOH when reading a binary

 ̄綄美尐妖づ submitted on 2019-12-20 07:20:40
Question: This question is a follow-up to "Efficient way to transfer many binary files into SQL Server database". I originally asked why using File.ReadAllBytes was causing rapid memory use, and the conclusion was that the method puts the data on the large object heap, which cannot easily be reclaimed at run time. My question now is how to avoid that situation?

    using (var fs = new FileStream(path, FileMode.Open))
    {
        using (var ms = new MemoryStream())
        {
            byte[] buffer = new byte[2048];
            int bytesRead;
            // Note: once ms grows past 85,000 bytes its internal buffer
            // still ends up on the LOH, so this alone does not avoid it.
            while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                ms.Write(buffer, 0, bytesRead);
            }
        }
    }
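One way out, sketched under the assumption that the data ultimately goes to SQL Server as in the linked question (the table, column, and connectionString below are hypothetical): .NET 4.5's SqlClient can stream a parameter value directly from a FileStream, so no byte[] larger than its small copy buffer is ever allocated.

    using System.Data;
    using System.Data.SqlClient;

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("INSERT INTO Files (Data) VALUES (@data)", conn))
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    {
        // -1 means varbinary(max); ADO.NET pulls from the stream in chunks.
        cmd.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = fs;
        conn.Open();
        cmd.ExecuteNonQuery();
    }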

Issues parsing a 1GB json file using JSON.NET

孤街浪徒 submitted on 2019-12-20 06:05:50
Question: I have been given an application whose input has been scaled up from 50K location records to 1.1 million location records. This has caused serious issues, as the entire file was previously deserialized into a single object. The size of that object is ~1 GB for a production-like file with 1.1 million records. Because of large-object GC issues, I want to keep the deserialized objects below the 85K mark. I'm trying to parse out a single location object at a time and deserialize it so I can control the…
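The usual streaming pattern with JSON.NET is to walk the file with a JsonTextReader and hand the serializer one element at a time; a sketch, assuming the file is a JSON array of location objects and that Location and Process(...) stand in for the real type and handler:

    using Newtonsoft.Json;

    using (var sr = new StreamReader(path))
    using (var reader = new JsonTextReader(sr))
    {
        var serializer = new JsonSerializer();
        while (reader.Read())
        {
            // Each StartObject inside the array is one location record;
            // Deserialize advances the reader past that object only.
            if (reader.TokenType == JsonToken.StartObject)
            {
                Location location = serializer.Deserialize<Location>(reader);
                Process(location);
            }
        }
    }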

Memory usage serializing chunked byte arrays with Protobuf-net

旧巷老猫 submitted on 2019-12-19 06:06:07
Question: In our application we have some data structures which, amongst other things, contain a chunked list of bytes (currently exposed as a List<byte[]>). We chunk the bytes up because if we allowed the byte arrays onto the large object heap, then over time we would suffer from memory fragmentation. We've also started using Protobuf-net to serialize these structures, using our own generated serialization DLL. However, we've noticed that Protobuf-net is creating very large in-memory buffers while…
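One commonly suggested mitigation (offered here as an assumption, since the question is truncated): protobuf-net buffers length-prefixed sub-messages so it can emit their byte count first; marking the member with group encoding replaces the length prefix with start/end markers, letting it write chunks straight to the output stream.

    using ProtoBuf;
    using System.Collections.Generic;

    [ProtoContract]
    public class ChunkedData
    {
        // DataFormat.Group uses start/end markers instead of a length
        // prefix, so protobuf-net does not have to buffer the whole
        // member in memory before writing it out.
        [ProtoMember(1, DataFormat = DataFormat.Group)]
        public List<byte[]> Chunks { get; set; }
    }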

Large unexplained memory in the memory dump of a .NET process

送分小仙女□ submitted on 2019-12-14 01:44:37
Question: I can't explain most of the memory used by a C# process. The total memory is 10 GB, but reachable and unreachable objects together total only 2.5 GB. I wonder what the other 7.5 GB could be? I'm looking for the most likely explanations, or a method to find out what this memory is. Here is the precise situation: the process is .NET 4.5.1. It downloads pages from the internet and processes them with machine learning. The memory is almost entirely in the managed heap, as shown by VMMap. This…
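A likely suspect for this shape of problem is free space the GC keeps committed inside the managed heap (classic LOH fragmentation): it counts toward the process total but belongs to no object. A minimal sketch comparing the two numbers from inside the process (the gap approximates fragmentation plus heap segments not yet returned to the OS):

    long managed = GC.GetTotalMemory(forceFullCollection: false);
    long privateBytes = System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize64;
    Console.WriteLine("Live managed objects: " + managed / (1024 * 1024) + " MB");
    Console.WriteLine("Private bytes:        " + privateBytes / (1024 * 1024) + " MB");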

Large array support in ASP.NET

折月煮酒 submitted on 2019-12-06 07:08:17
Question: As of .NET 4.5, users can allocate more than 2 GB of memory for a single object. To do that, they can set gcAllowVeryLargeObjects to true in the app.config file, and things work fine. However, I am having difficulty finding this setting for ASP.NET. I have a web site in which I need to test whether this is really supported. I know that the VS built-in server is a 32-bit process, so users can't simply launch the website and test it for large arrays…
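For a plain process the switch lives in app.config under <runtime>, as below; for ASP.NET the commonly cited place (an assumption here, worth verifying for your IIS setup) is the framework-wide Aspnet.config next to the CLR, e.g. %windir%\Microsoft.NET\Framework64\v4.0.30319\Aspnet.config, since web.config does not honor <runtime> GC settings:

    <configuration>
      <runtime>
        <!-- Permits single objects (e.g. arrays) larger than 2 GB on
             64-bit; per-dimension element-count limits still apply. -->
        <gcAllowVeryLargeObjects enabled="true" />
      </runtime>
    </configuration>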

Cannot create JVM with -XX:+UseLargePages enabled

我只是一个虾纸丫 submitted on 2019-12-05 18:24:17
Question: I have a Java service that currently runs with a 14 GB heap. I am keen to try out the -XX:+UseLargePages option to see how it might affect the performance of the system. I have configured the OS as described by Oracle, using appropriate shared-memory and page values (these can also be calculated with an online tool). Once the OS is configured, I can see that it allocates the expected amount of memory as huge pages. However, starting the VM with the -XX:+UseLargePages option set always results…
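For reference, a sketch of the Linux-side setup for a 14 GB heap with 2 MB huge pages (14 GB / 2 MB = 7168 pages; the headroom and the group id are assumptions to adapt to your system):

    # reserve the pages (7168 needed, a little headroom added)
    sysctl -w vm.nr_hugepages=7400
    # gid of the user that runs the JVM
    sysctl -w vm.hugetlb_shm_group=1001
    # shared-memory segment cap must cover the heap, in bytes
    sysctl -w kernel.shmmax=17179869184
    # allow the JVM process to lock memory
    ulimit -l unlimited

    java -Xms14g -Xmx14g -XX:+UseLargePages -version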