memory-access

Is there fmemopen() in MinGW

Submitted on 2019-12-01 17:28:38

I'm trying to compile some code that uses the fmemopen function with MinGW, and I found out that this function is not available in MinGW. I need a function equivalent to fmemopen(). Are there any alternative functions that I can use?

Answer: There are no fmemopen equivalents on Win32 because of missing functionality in the kernel. I think Cygwin implements it using a temp file, like this one: https://github.com/kespindler/python-tesseract/blob/master/util-fmemopen.c

Source: https://stackoverflow.com/questions/7307955/is-there-fmemopen-in-mingw

Qt signal slot cv::Mat unable to read memory access violation

Submitted on 2019-12-01 14:31:58

I have a Microsoft Visual Studio application that grabs frames from cameras, and I am trying to display those frames in a Qt application. I do some processing on the frames with OpenCV, so the frames are Mat objects. I use QThreads to parallelize the application. I get an Access Violation reading location when I try to emit a Mat signal from my CameraThread class.

main.cpp

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        MainWindow window;
        window.show();
        return app.exec();
    }

mainwindow.cpp

    #include "main_window.h"

    MainWindow::MainWindow()
    {
        // create a

How can the L1, L2, L3 CPU caches be turned off on modern x86/amd64 chips?

Submitted on 2019-11-30 12:22:00

Every modern high-performance CPU of the x86/x86_64 architecture has a hierarchy of data caches: L1, L2, sometimes L3 (and L4 in very rare cases), and data moving to/from main RAM is cached in some of them. Sometimes the programmer may want some data not to be cached in some or all cache levels (for example, when memsetting 16 GB of RAM while keeping other data in the cache); there are non-temporal (NT) instructions for this, like MOVNTDQA (https://stackoverflow.com/a/37092, http://lwn.net/Articles/255364/). But is there a programmatic way (for some AMD or Intel CPU families

What happens if two threads read & write the same piece of memory

Submitted on 2019-11-29 02:23:35

It's my understanding that if two threads are reading from the same piece of memory, and no thread is writing to that memory, then the operation is safe. However, I'm not sure what happens if one thread is reading and the other is writing. What would happen? Is the result undefined? Or would the read just be stale? If a stale read is not a concern, is it OK to have an unsynchronized read-write on a variable? Or is it possible the data would be corrupted, and neither the read nor the write would be correct, so one should always synchronize in this case? I want to say that I've learned it is the

Multiple accesses to main memory and out-of-order execution

Submitted on 2019-11-28 12:50:05

Question: Let us assume that I have two pointers pointing to unrelated addresses that are not cached, so both will have to come all the way from main memory when dereferenced.

    int load_and_add(int *pA, int *pB)
    {
        int a = *pA; // will most likely miss in cache
        int b = *pB; // will most likely miss in cache

        // ... some code that does not use a or b

        int c = a + b;
        return c;
    }

If out-of-order execution allows executing the code before the value of c is computed, how will the fetching of

Efficiency: arrays vs pointers

Submitted on 2019-11-26 18:24:49

Memory access through pointers is said to be more efficient than memory access through an array. I am learning C, and the above is stated in K&R. Specifically, they say:

    Any operation that can be achieved by array subscripting can also be done
    with pointers. The pointer version will in general be faster.

I disassembled the following code using Visual C++. (Mine is a 686 processor; I have disabled all optimizations.)

    int a[10], *p = a, temp;

    void foo()
    {
        temp = a[0];
        temp = *p;
    }

To my surprise, I see that memory access through a pointer takes three instructions compared to the two taken by memory access through

Using the extra 16 bits in 64-bit pointers

Submitted on 2019-11-26 18:00:58

I read that a 64-bit machine actually uses only 48 bits of the address (specifically, I'm using an Intel Core i7). I would expect that the extra 16 bits (bits 48-63) are irrelevant for the address and would be ignored. But when I try to access such an address I get a signal EXC_BAD_ACCESS. My code is:

    int *p1 = &val;
    int *p2 = (int *)((long)p1 | 1ll << 48); // set bit 48, which should be irrelevant
    int v = *p2;  // Here I receive a signal EXC_BAD_ACCESS.

Why is this so? Is there a way to use these 16 bits? This could be used to build a more cache-friendly linked list. Instead of using 8 bytes for next ptr,