low-level

Is there a way to enforce specific endianness for a C or C++ struct?

会有一股神秘感。 Submitted on 2019-11-30 06:24:59
Question: I've seen a few questions and answers regarding the endianness of structs, but they were about detecting the endianness of a system, or converting data between the two different endiannesses. What I would like to know, however, is whether there is a way to enforce a specific endianness for a given struct. Are there some good compiler directives or other simple solutions besides rewriting the whole thing with a lot of macros manipulating bitfields? A general solution would be nice, but I would be…
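One portable option, absent a compiler extension, is to keep the struct in native order and serialize it explicitly, byte by byte, in the byte order the format requires. Below is a minimal sketch of that idea; the `Record` layout and field names are made up for illustration. (GCC users may also want to look at its `scalar_storage_order` type attribute.)

```cpp
#include <cstdint>

// Hypothetical on-wire record: a 16-bit id followed by a 32-bit value,
// both stored little-endian regardless of the host's native byte order.
struct Record {
    uint16_t id;
    uint32_t value;
};

// Serialize explicitly, byte by byte, instead of memcpy-ing the struct.
void store_le(const Record& r, unsigned char* out) {
    out[0] = static_cast<unsigned char>(r.id);
    out[1] = static_cast<unsigned char>(r.id >> 8);
    out[2] = static_cast<unsigned char>(r.value);
    out[3] = static_cast<unsigned char>(r.value >> 8);
    out[4] = static_cast<unsigned char>(r.value >> 16);
    out[5] = static_cast<unsigned char>(r.value >> 24);
}

// Deserialize the same way; this works identically on big- and
// little-endian hosts because the shifts, not the memory layout,
// define the byte order.
Record load_le(const unsigned char* in) {
    Record r;
    r.id    = static_cast<uint16_t>(in[0] | (in[1] << 8));
    r.value = static_cast<uint32_t>(in[2]) |
              (static_cast<uint32_t>(in[3]) << 8) |
              (static_cast<uint32_t>(in[4]) << 16) |
              (static_cast<uint32_t>(in[5]) << 24);
    return r;
}
```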

Which is faster: x<<1 or x<<10?

眉间皱痕 Submitted on 2019-11-30 06:19:18
Question: I don't want to optimize anything, I swear, I just want to ask this question out of curiosity. I know that on most hardware there's an assembly bit-shift instruction (e.g. shl, shr), which is a single instruction. But does it matter (nanosecond-wise, or CPU-cycle-wise) how many bits you shift? In other words, is either of the following faster on any CPU? x << 1; and x << 10; And please don't hate me for this question. :) Answer 1: Potentially depends on the CPU. However, all modern CPUs (x86, ARM) use…
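As a minimal way to check this yourself, the two functions below can be compiled with optimization and the generated assembly inspected (for example with `g++ -O2 -S`); on CPUs with a barrel shifter both typically compile to a single shift instruction with the same latency, regardless of the shift count.

```cpp
#include <cstdint>

// Two shift-by-constant functions; inspecting the optimized assembly
// typically shows one shift instruction for each, so the shift count
// does not change the cost on hardware with a barrel shifter.
uint32_t shift_by_1(uint32_t x)  { return x << 1; }
uint32_t shift_by_10(uint32_t x) { return x << 10; }
```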

Can I change a user's keyboard input?

血红的双手。 Submitted on 2019-11-30 03:31:09
Question: I found this keyboard hook code, which I'm trying to slightly modify for my purposes: http://blogs.msdn.com/toub/archive/2006/05/03/589423.aspx As an overview, I want the user to press a key, say 'E', and have the keyboard return a different character, 'Z', to whatever app is in focus. The relevant method I changed now looks like: private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam) { if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN) { //The truly typed character:…
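The linked sample is C#; purely as a sketch of the same idea in plain Win32 C++, the low-level keyboard hook below swallows 'E' key-downs and injects 'Z' instead. The 'E'→'Z' mapping and the message-loop scaffolding are assumptions for illustration, not the asker's actual code.

```cpp
#include <windows.h>

static HHOOK g_hook = nullptr;

// Low-level keyboard hook: suppress 'E' key-downs and inject 'Z' instead.
LRESULT CALLBACK LowLevelKeyboardProc(int nCode, WPARAM wParam, LPARAM lParam) {
    if (nCode == HC_ACTION && wParam == WM_KEYDOWN) {
        const KBDLLHOOKSTRUCT* kb = reinterpret_cast<const KBDLLHOOKSTRUCT*>(lParam);
        // Skip events we injected ourselves, or we would loop forever.
        if (kb->vkCode == 'E' && !(kb->flags & LLKHF_INJECTED)) {
            INPUT inputs[2] = {};
            inputs[0].type = INPUT_KEYBOARD;
            inputs[0].ki.wVk = 'Z';                  // key down
            inputs[1].type = INPUT_KEYBOARD;
            inputs[1].ki.wVk = 'Z';
            inputs[1].ki.dwFlags = KEYEVENTF_KEYUP;  // key up
            SendInput(2, inputs, sizeof(INPUT));
            return 1;  // swallow the original 'E' keystroke
        }
    }
    return CallNextHookEx(g_hook, nCode, wParam, lParam);
}

int main() {
    g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, LowLevelKeyboardProc,
                               GetModuleHandleW(nullptr), 0);
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0)) {  // a message loop keeps the hook alive
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}
```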

Bit hacking and modulo operation

£可爱£侵袭症+ Submitted on 2019-11-30 01:36:57
Question: While reading this: http://graphics.stanford.edu/~seander/bithacks.html#ReverseByteWith64BitsDiv I came across the phrase: "The last step, which involves modulus division by 2^10 - 1, has the effect of merging together each set of 10 bits (from positions 0-9, 10-19, 20-29, ...) in the 64-bit value." (it is about reversing the bits in a number)... so I did some calculations: reverted = (input * 0x0202020202ULL & 0x010884422010ULL) % 1023; b = 74 : 01001010 b * 0x0202020202 :…
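As a small sketch, the exact expression from that page can be dropped into a helper and checked against the asker's example value: 74 is 01001010 in binary, and its bit-reversal is 01010010, i.e. 82. The function name is made up.

```cpp
#include <cstdint>
#include <cstdio>

// Reverse the bits of one byte with the 64-bit multiply / mask / modulus
// trick from the linked page. The multiply spreads copies of the byte
// across the 64-bit value, the mask picks out single bits at 10-bit
// intervals, and the "% 1023" (2^10 - 1) folds those 10-bit groups
// together into the reversed byte.
uint8_t reverse_byte(uint8_t b) {
    return static_cast<uint8_t>((b * 0x0202020202ULL & 0x010884422010ULL) % 1023);
}

int main() {
    // 74 is 01001010 in binary; its bit-reversal is 01010010, i.e. 82.
    std::printf("%u\n", reverse_byte(74));  // prints 82
    return 0;
}
```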

How to simulate mouse click from Mac App to other Application

别说谁变了你拦得住时间么 Submitted on 2019-11-30 01:10:09
Question: I am trying to simulate a mouse click on the iPhone Simulator from a macOS app; for that I am using CGEvents. The process id of the iPhone Simulator is 33554. let point = CGPoint(x: 500 , y:300) let eventMouseDown = CGEvent(mouseEventSource: nil, mouseType: .leftMouseDown, mouseCursorPosition: point, mouseButton: .left) let eventMouseUp = CGEvent(mouseEventSource: nil, mouseType: .leftMouseUp, mouseCursorPosition: point, mouseButton: .left) eventMouseDown?.postToPid(33554) eventMouseUp?.postToPid(33554)…

Which programming languages aren't considered high-level? [closed]

大兔子大兔子 Submitted on 2019-11-30 01:08:12
In informatics theory I hear and read about high-level and low-level languages all the time. Yet I don't understand why this is still relevant, as there aren't any (relevant) low-level languages except assembler in use today. So you get: Low-level: Assembler. Definitely not low-level: C, BASIC, FORTRAN, COBOL, ... High-level: C++, Ruby, Python, PHP, ... And if assembler is low-level, how could you put, for example, C into the same list? I mean: C is extremely high-level compared to assembler. The same goes for COBOL, Fortran, etc. So why does everybody keep mentioning high- and low-level languages if assembler is…

Why is vectorization faster, in general, than loops?

白昼怎懂夜的黑 Submitted on 2019-11-29 19:32:47
Why, at the lowest level of the hardware performing operations and the general underlying operations involved (i.e. things common to all programming languages' actual implementations when running code), is vectorization typically so dramatically faster than looping? What does the computer do when looping that it doesn't do when using vectorization (I'm talking about the actual computations the computer performs, not what the programmer writes), or what does it do differently? I have been unable to convince myself why the difference should be so significant. I could probably be persuaded…
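To make the "one instruction, multiple elements" point concrete, here is a hedged sketch contrasting a scalar loop with an SSE-intrinsics version that adds four floats per iteration; the function and array names are made up, and a real compiler may auto-vectorize the scalar loop anyway.

```cpp
#include <immintrin.h>  // SSE intrinsics
#include <cstddef>

// Scalar: one load, one add, one store per element, plus per-element
// loop overhead (increment, compare, branch).
void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// Vectorized: the same work on 4 floats per iteration with one 128-bit
// add, so both the arithmetic and the loop overhead are amortized.
void add_sse(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i)          // handle the leftover tail elements
        out[i] = a[i] + b[i];
}
```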

Is there un-buffered I/O on Windows?

感情迁移 Submitted on 2019-11-29 19:16:50
Question: I want to find low-level C/C++ APIs, equivalent to "write" on Linux systems, that don't have a buffer. Is there one? Buffered I/O such as fread and fwrite is not what I want. Answer 1: Look at CreateFile with the FILE_FLAG_NO_BUFFERING option. Answer 2: http://www.codeproject.com/Articles/51678/Improve-responsiveness-in-Windows-with-the-FILE_FL The only method to prevent the cache from being swapped out is to open files with the FILE_FLAG_NO_BUFFERING flag. This, however, requires disk I/O requests to have…
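A minimal sketch of the Answer 1 approach follows; the file path is hypothetical. FILE_FLAG_NO_BUFFERING requires the file offset, transfer size, and buffer address to be multiples of the volume sector size; VirtualAlloc's page-aligned allocations usually satisfy the buffer-alignment part.

```cpp
#include <windows.h>

int main() {
    // Bypass the system file cache; all reads/writes must then be
    // sector-aligned (offset, length, and buffer address).
    HANDLE h = CreateFileW(L"C:\\temp\\data.bin",  // hypothetical path
                           GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                           nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // VirtualAlloc returns page-aligned memory, which satisfies the usual
    // 512-byte or 4096-byte sector alignment requirement.
    const DWORD size = 4096;
    void* buf = VirtualAlloc(nullptr, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!buf) { CloseHandle(h); return 1; }

    DWORD written = 0;
    WriteFile(h, buf, size, &written, nullptr);  // one sector-aligned, unbuffered write

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}
```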

Which has better performance: multiplication or division?

我的未来我决定 Submitted on 2019-11-29 11:22:33
Which version is faster: x * 0.5 or x / 2? I had a course at university called Computer Systems some time ago. From back then I remember that multiplying two values can be achieved with comparably "simple" logic gates, but division is not a "native" operation and requires a sum register that is repeatedly increased by the divisor in a loop and compared to the dividend. Now I have to optimise an algorithm with a lot of divisions. Unfortunately it's not just dividing by two, so binary shifting is not an option. Will it make a difference to change all divisions to multiplications? Update: I have changed…
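When the same divisor is reused across many elements, one common rewrite, sketched below with made-up function names, is to pay for a single division up front and multiply by the reciprocal afterwards. Note that x * (1/d) can differ from x / d in the last bit, because the reciprocal is itself rounded.

```cpp
#include <cstddef>

// Dividing every element by the same divisor d ...
void scale_div(double* x, std::size_t n, double d) {
    for (std::size_t i = 0; i < n; ++i)
        x[i] = x[i] / d;           // one division per element
}

// ... versus one division up front, then only multiplications.
// Caution: multiplying by 1/d may differ from dividing by d in the
// last bit, because the reciprocal itself is rounded.
void scale_mul(double* x, std::size_t n, double d) {
    const double inv = 1.0 / d;    // single division, hoisted out of the loop
    for (std::size_t i = 0; i < n; ++i)
        x[i] = x[i] * inv;
}
```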

Are there fixed-size integers in GCC?

廉价感情. Submitted on 2019-11-29 09:35:52
On the MSVC++ compiler, one can use the __int8 , __int16 , __int32 and similar types for integers with specific sizes. This is extremely useful for applications which need to work with low-level data structures like custom file formats, hardware control data structures and the like. Is there a similar equivalent I can use on the GCC compiler? Answer (Jason Coco): ISO standard C, starting with the C99 standard, adds the standard header <stdint.h> that defines these: uint8_t - unsigned 8 bit, int8_t - signed 8 bit, uint16_t - unsigned 16 bit, int16_t - signed 16 bit, uint32_t - unsigned 32 bit, int32_t -…
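As a short sketch of the answer's suggestion, the exact-width types from <stdint.h> (or <cstdint> in C++) can replace the MSVC-specific __intN types in a file-format header; the struct and field names below are made up, and struct padding/packing still needs separate attention.

```cpp
#include <cstdint>

// Hypothetical file-format header built from the exact-width types in
// <cstdint>; each field has the same size on every conforming compiler,
// GCC and MSVC included.
struct FileHeader {
    uint32_t magic;        // 4 bytes
    uint16_t version;      // 2 bytes
    uint8_t  flags;        // 1 byte
    int8_t   reserved;     // 1 byte
    uint32_t payload_len;  // 4 bytes
};

static_assert(sizeof(uint32_t) == 4, "uint32_t is exactly 32 bits");
```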