memory

Random-access container that does not fit in memory?

浪子不回头ぞ submitted on 2020-01-02 03:54:43
Question: I have an array of objects (say, images) which is too large to fit into memory (e.g. 40 GB), but my code needs to be able to randomly access these objects at runtime. What is the best way to do this? From my code's point of view it shouldn't matter, of course, whether some of the data is on disk or temporarily stored in memory; it should have transparent access:

    container.getObject(1242)->process();
    container.getObject(479431)->process();

But how should I implement this container? Should it just …
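One common answer to this kind of question, sketched below under stated assumptions: memory-map the backing file and hand out pointers into the mapping, so the kernel pages data in and out on demand and only the working set occupies RAM. The Image record type, its fixed size, and the images.bin file name are hypothetical placeholders, not from the question.

    // mmap_container.cpp: sketch of a disk-backed random-access container (POSIX)
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstddef>
    #include <stdexcept>

    struct Image {                       // hypothetical fixed-size record type
        unsigned char pixels[64 * 64];
        void process() { /* ... */ }
    };

    class MappedContainer {
        void*  base_  = nullptr;
        size_t bytes_ = 0;
        size_t count_ = 0;
    public:
        explicit MappedContainer(const char* path) {
            int fd = open(path, O_RDWR);
            if (fd < 0) throw std::runtime_error("open failed");
            struct stat st;
            if (fstat(fd, &st) != 0) { close(fd); throw std::runtime_error("fstat failed"); }
            bytes_ = static_cast<size_t>(st.st_size);
            count_ = bytes_ / sizeof(Image);
            // The kernel faults pages in as they are touched and evicts cold
            // ones under memory pressure, so only the working set stays resident.
            base_ = mmap(nullptr, bytes_, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            close(fd);                   // the mapping survives the close
            if (base_ == MAP_FAILED) throw std::runtime_error("mmap failed");
        }
        ~MappedContainer() { if (base_ && base_ != MAP_FAILED) munmap(base_, bytes_); }

        Image* getObject(size_t i) {     // transparent access, as in the question
            if (i >= count_) throw std::out_of_range("bad index");
            return static_cast<Image*>(base_) + i;
        }
    };

    int main() {
        MappedContainer container("images.bin");  // hypothetical 40 GB data file
        container.getObject(1242)->process();
    }

With fixed-size records, getObject is pure pointer arithmetic; variable-size objects would need an offset index on top. A more portable variant of the same idea is boost::interprocess::file_mapping.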

detecting the memory page size

不打扰是莪最后的温柔 submitted on 2020-01-02 02:48:07
Question: Is there a portable way to detect (programmatically) the memory page size using C or C++ code? Answer 1: Since Boost is a fairly portable library, you could use the mapped_region::get_page_size() function to retrieve the memory page size. The C++ standard itself provides no such facility. Answer 2: C doesn't know anything about memory pages. On POSIX systems you can use long pagesize = sysconf(_SC_PAGE_SIZE); Answer 3: Yes, this is platform-specific. On Linux there's sysconf(_SC_PAGESIZE), which also seems to …
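A minimal sketch combining the answers above: the POSIX branch follows Answers 2 and 3, while the Windows branch (GetSystemInfo) is an addition not covered in the excerpt:

    // page_size.cpp: query the memory page size on POSIX and Windows
    #include <cstdio>

    #ifdef _WIN32
    #include <windows.h>
    static long page_size() {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        return static_cast<long>(si.dwPageSize);
    }
    #else
    #include <unistd.h>
    static long page_size() {
        return sysconf(_SC_PAGESIZE);  // POSIX; returns -1 on error
    }
    #endif

    int main() {
        std::printf("page size: %ld bytes\n", page_size());
        return 0;
    }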

Redis - monitoring memory usage

北慕城南 submitted on 2020-01-02 01:53:30
Question: I am currently testing insertion of keys into a local Redis database. I have more than 5 million keys and only 4 GB of RAM, so at some point I exhaust the RAM and the system starts swapping (and my PC grinds to a halt). My problem: how can I monitor memory usage on the machine hosting the Redis database, and raise an alert so that I stop inserting keys? Thanks. Answer 1: Concerning memory usage, I'd advise you to look at the redis.io FAQ and this article about using …
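One way to avoid the swap death spiral entirely, as a hedged sketch (the 3gb cap is an arbitrary value for a 4 GB machine, not from the answer): cap Redis's memory so it rejects writes once the cap is reached, and poll live usage from the CLI.

    # redis.conf (or CONFIG SET at runtime); values are illustrative
    maxmemory 3gb
    maxmemory-policy noeviction   # inserts fail with an OOM error instead of swapping

    # check live usage from a shell
    redis-cli INFO memory | grep used_memory_human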

Python Pandas Merge Causing Memory Overflow

末鹿安然 submitted on 2020-01-02 01:12:10
Question: I'm new to Pandas and am trying to merge a few subsets of data. I'm giving a specific case where this happens, but the question is general: how/why does it happen, and how can I work around it? The data I load is around 85 MB, yet I often watch my Python session climb to nearly 10 GB of memory usage and then raise a memory error. I have no idea why this happens, but it's killing me, as I can't even get started looking at the data the way I want to. Here's what I've done: Importing the Main …
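A hedged sketch of the usual first checks; df_a, df_b, the CSV paths, and the id join key are hypothetical names, not from the question. A merge typically explodes like this when the join key is duplicated on both sides (a many-to-many join), since a key appearing m times on one side and n times on the other emits m*n output rows.

    import pandas as pd

    # hypothetical inputs; 'id' is an assumed join key
    df_a = pd.read_csv("a.csv")
    df_b = pd.read_csv("b.csv")

    # How many duplicate keys on each side? Large counts on both
    # sides mean the merge output can dwarf the 85 MB inputs.
    print(df_a["id"].duplicated().sum(), df_b["id"].duplicated().sum())

    # validate raises a MergeError up front instead of silently
    # materializing a huge cartesian-style result.
    merged = df_a.merge(df_b, on="id", how="inner", validate="one_to_one")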

Getting memory error trying to debug managed memory with a big minidump file

家住魔仙堡 submitted on 2020-01-01 23:11:44
Question: I'm trying to "Debug Managed Memory" with Visual Studio 2015 Enterprise Edition. The dump file is 1.2 GB, and while it is loading I get the error message "Memory analysis could not be completed due to insufficient memory" after pressing "Debug Managed Memory". What can I do to still be able to look into the memory with the PDB files? Can I start Visual Studio 2015 with more memory (the computer has 25 GB of memory free)? I guess it has to do with Visual Studio running as an x86 process. Answer 1: It …
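A hedged workaround not mentioned in the excerpt: devenv.exe is a 32-bit process, so a 64-bit debugger such as WinDbg can open dumps whose analysis exhausts Visual Studio's address space; MyApp.dmp is a placeholder name.

    windbg -z MyApp.dmp
    .loadby sos clr       # load the SOS managed-debugging extension
    !dumpheap -stat       # summarize the managed heap by type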

running Nokogiri in JRuby vs. just Ruby

只谈情不闲聊 submitted on 2020-01-01 22:29:54
Question: I found a startling difference in CPU and memory consumption. It seems garbage collection is not happening when I run the following Nokogiri script:

    require 'rubygems'
    require 'nokogiri'
    require 'open-uri'

    def getHeader()
      doz = Nokogiri::HTML(open('http://losangeles.craigslist.org/wst/reb/1484772751.html'))
      puts doz.xpath("html[1]\/body[1]\/h2[1]")
    end

    (1..10000).each do |a|
      getHeader()
    end

When run in JRuby, CPU consumption is over 10, and memory consumption (%) rises with time (starts from …
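A hedged aside, not from the excerpt: JRuby runs on the JVM, so apparent memory growth is often just the JVM keeping heap it has already claimed rather than a true leak. Bounding the heap with a pass-through JVM option makes a genuine leak show up as an OutOfMemoryError instead of unbounded growth (256m is an arbitrary value, script.rb a placeholder):

    jruby -J-Xmx256m script.rb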

Tomcat server's JVM free memory not returned to OS

半腔热情 submitted on 2020-01-01 19:53:20
Question: My Tomcat server is behaving strangely: it has allocated 6 GB of memory from the system, but more than 4 GB of that is marked as "free". This is a screen capture from the Tomcat server status page (image not included in this excerpt). I understand what "Free memory" in the JVM means, but I do not understand why, in this situation, it does not return at least 3 GB back to the system. Env: Java 8, Tomcat 8, Debian 8.3; total memory on machine: 64 GB. Answer 1: Since you haven't overridden any JVM options, Tomcat uses the default garbage collector, which is ParallelGC in JDK …
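A hedged note, since the answer above is cut off: the default ParallelGC rarely shrinks the committed heap, so the usual mitigations are capping the heap outright or switching to a collector that is more willing to return memory, such as G1. The flag values below are illustrative examples, not recommendations:

    # e.g. in bin/setenv.sh
    CATALINA_OPTS="-Xms1g -Xmx4g -XX:+UseG1GC"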

What's the difference between “gld/st_throughput” and “dram_read/write_throughput” metrics?

余生颓废 submitted on 2020-01-01 19:08:09
Question: In the CUDA Visual Profiler, version 5, I know that "gld/st_requested_throughput" are the memory throughputs requested by the application. However, when I try to find the actual hardware throughput, I am confused because there are two pairs of metrics that both seem to qualify: "gld/st_throughput" and "dram_read/write_throughput". Which pair is the actual hardware throughput, and what does the other one measure? Answer 1: gld/st_throughput includes transactions served by the L1 and …
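As a hedged aside, both metric pairs can also be collected side by side from the command line with nvprof (metric availability varies by GPU architecture and CUDA version; ./my_app is a placeholder):

    nvprof --metrics gld_throughput,gst_throughput,dram_read_throughput,dram_write_throughput ./my_app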

Using memory_order_relaxed for storing with memory_order_acquire for loading

只谈情不闲聊 submitted on 2020-01-01 16:55:57
Question: I have a question related to the following code:

    #include <atomic>
    #include <thread>
    #include <assert.h>

    std::atomic<bool> x, y;
    std::atomic<int> z;

    void write_x_then_y() {
        x.store(true, std::memory_order_relaxed);
        y.store(true, std::memory_order_relaxed);
    }

    void read_y_then_x() {
        while (!y.load(std::memory_order_acquire));
        if (x.load(std::memory_order_acquire))
            ++z;
    }

    int main() {
        x = false;
        y = false;
        z = 0;
        std::thread a(write_x_then_y);
        std::thread b(read_y_then_x);
        a.join();
        b.join();
        assert(z.load() != 0);  // reconstructed: the excerpt cuts off at "assert"
    }
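A hedged note on what makes this snippet interesting: because both stores are relaxed, nothing orders them, so thread b may observe y == true while still reading x == false, and the final assert can fire. Upgrading the second store to a release pairs it with the acquire load of y and restores the needed happens-before edge:

    void write_x_then_y_fixed() {
        x.store(true, std::memory_order_relaxed);
        y.store(true, std::memory_order_release);  // pairs with the acquire load of y
    }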