memory-management

How do I limit the memory resource of a group of docker containers?

♀尐吖头ヾ submitted on 2020-01-15 01:48:40
Question: I understand that I can use --memory and --memory-swap to limit memory resources per container. But how do I limit memory resources for a group of containers? My system has 8 GB of RAM and runs 2 Docker containers. I want to set a single 8 GB limit covering both containers. I do not want to set a 4 GB memory limit for each container, as a container may use more than 4 GB. Both containers won't use 4 GB at the same time, so it would make sense to give the unused memory of…

Gradual increase in Resident Memory usage by Jboss(Java) process

随声附和 submitted on 2020-01-14 16:03:18
Question: We are facing an issue where the resident memory of a Java process grows gradually. We have -Xmx set to 4096 MB and -XX:MaxPermSize=1536m. There are ~1500 active threads, with -Xss set to 256 KB. When the application server (JBoss 6.1) starts, the resident memory used is ~5.6 GB (monitored with the top command); it grows gradually (around 0.3 to 0.5 GB per day) until it reaches ~7.4 GB, at which point the kernel's OOM killer kills the process due to a shortage of RAM (the server has…

Achieving the equivalent of a variable-length (local) array in CUDA

喜夏-厌秋 submitted on 2020-01-14 14:36:14
Question: I have some code which uses local memory (I might have used registers, but I need dynamic addressing). Since the amount of memory I use depends on the input and on the number of threads in the block (which also depends on the input, at run time, although before launch time), it can't be a fixed-size array. On the other hand, I can't write __global__ foo(short x) { int my_local_mem_array[x]; } (which is valid, if problematic, C99, but not valid C++ even on the host side). How can I achieve the…
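One common workaround is to back the "local" array with dynamically sized shared memory, whose size is fixed only at launch time. The sketch below illustrates that approach; the kernel name foo, the per-thread element count n, and the launch helper are placeholders rather than names from the question, and the per-block total must fit within the device's shared-memory limit.

    // Each thread carves its own slice out of the dynamically sized
    // shared-memory region declared with "extern __shared__".
    #include <cuda_runtime.h>

    __global__ void foo(int n)                          // n = elements per thread (hypothetical)
    {
        extern __shared__ int scratch[];                // size supplied at launch time
        int* my_local = scratch + threadIdx.x * n;      // this thread's private slice

        for (int i = 0; i < n; ++i)
            my_local[i] = i;                            // used like a variable-length local array
    }

    void launch(int n, int threads_per_block, int blocks)
    {
        size_t shmem_bytes = size_t(threads_per_block) * n * sizeof(int);
        foo<<<blocks, threads_per_block, shmem_bytes>>>(n);   // third argument = dynamic shared memory
    }

If the per-block total would exceed the shared-memory budget, a cudaMalloc'd global scratch buffer indexed by thread ID is the usual fallback.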

How do I use MemoryFailPoint?

假如想象 submitted on 2020-01-14 13:43:09
Question: A MemoryFailPoint (MSDN) "checks for sufficient memory resources before executing an operation." But how is it actually used correctly? Does the MemoryFailPoint automatically reserve some memory for the next big object I create, or does it simply check whether the memory would be free, without reserving it? Does it check physical memory, physical memory plus page file, virtual address space, or something else entirely? When do I dispose of it? Do I need to dispose of the MemoryFailPoint before…

Correct way to allocate memory to std::shared_ptr

巧了我就是萌 submitted on 2020-01-14 13:10:40
Question: I have implemented a function whose interface is given to me and is out of my control. It returns std::shared_ptr<const void>. In the function I allocate an arbitrary amount of memory and return access to it through the shared_ptr. My memory allocation is done with new unsigned char[123]. The problem is that valgrind detects a mismatch between the new and delete variants used: while I allocate memory with new[](unsigned), the shared_ptr destructor uses delete(void*) to deallocate it,…
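A minimal sketch of one way to make the two sides match: hand the shared_ptr a deleter that calls delete[] on the original pointer type. The function name make_buffer and the size 123 simply mirror the question and are otherwise placeholders.

    #include <memory>

    std::shared_ptr<const void> make_buffer()
    {
        // The deleter is stored in the control block and is invoked on the
        // original unsigned char* pointer, so delete[] pairs with new[].
        return std::shared_ptr<const void>(
            new unsigned char[123],
            std::default_delete<unsigned char[]>());
    }

A lambda deleter taking unsigned char* and calling delete[] would work equally well, since the deleter receives the pointer with its original type.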

How do I implement dynamic shared memory resizing?

故事扮演 submitted on 2020-01-14 09:56:10
Question: Currently I use shm_open to get a file descriptor and then use ftruncate and mmap whenever I want to add a new buffer to the shared memory. Each buffer is used individually for its own purposes. Now I need to resize buffers arbitrarily, and also to munmap buffers and reuse the freed space later. The only solution I can come up with for the first problem is: ftruncate(file_size + old_buffer_size + extra_size), mmap, copy the data across into the new buffer and then munmap the original…
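A minimal sketch of the grow-in-place variant, assuming a single buffer mapped from offset 0 of the shared-memory object: because the pages live in the shm object rather than in any one mapping, extending the object with ftruncate and remapping preserves the data without an explicit copy. grow_mapping is a hypothetical helper, not part of the question's code.

    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstddef>

    // fd comes from shm_open(); old_addr/old_size describe the current mapping.
    void* grow_mapping(int fd, void* old_addr, std::size_t old_size, std::size_t new_size)
    {
        if (ftruncate(fd, static_cast<off_t>(new_size)) == -1)
            return MAP_FAILED;                          // could not extend the shm object

        munmap(old_addr, old_size);                     // drop the old, smaller view
        return mmap(nullptr, new_size,                  // map the enlarged object; existing
                    PROT_READ | PROT_WRITE,             // contents are still there because they
                    MAP_SHARED, fd, 0);                 // are backed by the object, not the mapping
    }

On Linux specifically, mremap can grow the mapping without the unmap/remap pair; reusing holes left by unmapped buffers generally requires a small allocator over the object's offset space.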

Profiling memory leaks with Instruments - huge difference between iPhone 4 and iOS 5 Simulator

荒凉一梦 submitted on 2020-01-14 07:46:08
Question: When profiling my app with Instruments (looking for memory leaks), I get extremely different results with the iOS 5 iPhone Simulator from those I get with my iPhone 4 running iOS 5. The first screenshot shows the results of profiling on the real device, and the second on the simulator. [screenshots: real device vs. iOS 5 Simulator] This profile is taken up to the same point in the app in both cases: completion of viewDidLoad in the rootViewController's view lifecycle. I have waited in both of them…

Is it guaranteed that std::vector default construction does not call new?

[亡魂溺海] submitted on 2020-01-14 07:06:09
Question: According to the reference, a simple std::vector<T> vec; creates an empty container (default constructor). Does this guarantee that there is no dynamic memory allocation, or may an implementation choose to reserve some memory? I know that, for this empty constructor, there is no construction of the type T since C++11. However, I wonder if there is also a guarantee that nothing is allocated on the heap, i.e. that the above line leaves just a few nullptrs on the stack/in the object's members. I tested it with vc140, where…
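One way to observe what a given implementation does is to plug in a counting allocator, so that any allocation made by the default constructor becomes visible. This is a sketch for experimentation only; CountingAlloc and alloc_count are made-up names, and a result of 0 on one compiler is not by itself a standard-level guarantee.

    #include <cstddef>
    #include <iostream>
    #include <new>
    #include <vector>

    static std::size_t alloc_count = 0;   // incremented on every allocation

    template <class T>
    struct CountingAlloc {
        using value_type = T;
        CountingAlloc() = default;
        template <class U> CountingAlloc(const CountingAlloc<U>&) {}
        T* allocate(std::size_t n)
        {
            ++alloc_count;
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t) { ::operator delete(p); }
        template <class U> bool operator==(const CountingAlloc<U>&) const { return true; }
        template <class U> bool operator!=(const CountingAlloc<U>&) const { return false; }
    };

    int main()
    {
        std::vector<int, CountingAlloc<int>> vec;                 // default construction only
        std::cout << "allocations: " << alloc_count << '\n';      // 0 on common implementations
    }

Mainstream standard libraries print 0 here; whether the standard formally forbids an allocating default constructor is exactly what the question is asking.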

application:didFinishLaunchingWithOptions: memory management

China☆狼群 submitted on 2020-01-14 06:40:28
Question: I have a question about memory management. In my app delegate I have the following method, where welcomeViewController is an ivar: - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { welcomeViewController = [[CBWelcomeViewController alloc] init]; UINavigationController *appNavigationController = [[UINavigationController alloc] initWithRootViewController:welcomeViewController]; [self.window addSubview: [appNavigationController…