memory-management

Why doesn't free() zero out the memory prior to releasing it?

Submitted by 懵懂的女人 on 2019-12-29 06:26:59
Question: When we free() memory in C, why is that memory not filled with zeros? Is there a good way to ensure this happens as a matter of course when calling free()? I'd rather not risk leaving sensitive data in memory released back to the operating system... Answer 1: Zeroing out a memory block when freeing it requires extra time. Since most of the time there is no need for it, it is not done by default. If you really need it (say you used the memory for storing a password or a cryptographic key), call…

MPI Fortran code: how to share data on node via openMP?

Submitted by 元气小坏坏 on 2019-12-29 06:15:28
Question: I am working on a Fortran code that already uses MPI. Now I am facing a situation where a set of data grows very large but is the same for every process, so I would prefer to store it in memory only once per node and have all processes on one node access the same data. Storing it once for every process would go beyond the available RAM. Is it somehow possible to achieve something like that with OpenMP? Data sharing per node is the only thing I would like to have, no other per-node parallelisation…

Java “for” statement implementation prevents garbage collecting

Submitted by 北城余情 on 2019-12-29 05:46:23
Question: UPD 21.11.2017: the bug is fixed in the JDK, see the comment from Vicente Romero. Summary: if a for-each statement is used over any Iterable implementation, the collection remains in heap memory until the end of the current scope (method or statement body) and will not be garbage collected, even if you have no other references to the collection and the application needs to allocate new memory. http://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8175883 https://bugs.openjdk.java.net/browse/JDK-8175883

Memory write performance - GPU CPU Shared Memory

Submitted by ╄→尐↘猪︶ㄣ on 2019-12-29 04:29:36
Question: I'm allocating both the input and output MTLBuffer using posix_memalign according to the shared GPU/CPU documentation provided by memkite. Aside: it is easier to just use the latest API than muck around with posix_memalign:

let metalBuffer = self.metalDevice.newBufferWithLength(byteCount, options: .StorageModeShared)

My kernel function operates on roughly 16 million complex-value structs and writes an equal number of complex-value structs out to memory. I've performed some experiments and my Metal…

Cocoa Touch: When does an NSFetchedResultsController become necessary to manage a Core Data fetch?

Submitted by ≡放荡痞女 on 2019-12-29 04:29:10
Question: I'm developing an iPhone application that makes heavy use of Core Data, primarily for its database-like features (such as the ability to set a sort order or predicate on fetch requests). I'm presenting all the data I fetch in various UITableViewControllers. What I'd like to know is a rough idea of how many objects I can fetch before it becomes a good idea to use an NSFetchedResultsController to handle the request. In the Core Data docs, it says that SQLite stores consider "10,000 objects to…

Explicit getters/setters for @properties (MRC)

Submitted by 大城市里の小女人 on 2019-12-29 04:23:07
Question: I started programming in Objective-C in the middle of 2012, at a time when ARC had replaced MRC as general practice, making the latter almost unnecessary to learn. Now I am trying to understand some basics of MRC to deepen my knowledge of memory management in Objective-C. What I am interested in now is how to write the getters/setters for declared @properties explicitly, by hand. So far, the only sane example I have found is from the "Advanced Memory Management Programming Guide" by…

.NET Free memory usage (how to prevent overallocation / release memory to the OS)

Submitted by 邮差的信 on 2019-12-29 03:38:11
Question: I'm currently working on a website that makes heavy use of cached data to avoid round trips. At startup we get a "large" graph (hundreds of thousands of objects of different kinds). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory usage didn't seem to fit with how much memory we should need "after" we're done initializing), and I end up with this report. Now, what we can gather from…

How do I allocate memory and copy 2D arrays between CPU / GPU in CUDA without flattening them?

Submitted by ▼魔方 西西 on 2019-12-29 02:00:12
Question: I want to allocate 2D arrays and copy them between the CPU and GPU in CUDA, but I am a total beginner and the other online materials are very difficult for me to understand or are incomplete. It is important that I am able to access them as 2D arrays in the kernel code, as shown below. Note that height != width for the arrays; that further confuses me, if it is even possible, as I always struggle with choosing grid size. I've considered flattening them, but I really want to get it…

Custom Memory Allocator for STL map

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-28 20:34:53
Question: This question is about construction of instances of a custom allocator during insertion into a std::map. Here is a custom allocator for std::map<int,int>, along with a small program that uses it:

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>   /* for malloc/free, missing from the excerpt */
#include <map>
#include <typeinfo>

class MyPool {
public:
    void *GetNext() { return malloc(24); }
    void Free(void *ptr) { free(ptr); }
};

template<typename T>
class MyPoolAlloc {
public:
    static MyPool *pMyPool;
    typedef size_t size_type;
    typedef ptrdiff_t difference…