x86-64

Why isn't RDTSC a serializing instruction?

Submitted by 萝らか妹 on 2019-12-28 12:14:07
Question: The Intel manuals for the RDTSC instruction warn that out-of-order execution can change when RDTSC actually executes, so they recommend inserting a CPUID instruction in front of it, because CPUID serializes the instruction stream (CPUID is never executed out of order). My question is simple: if they had the ability to make instructions serializing, why didn't they make RDTSC serializing? The entire point of it appears to be to get cycle-accurate timings. Is there a situation under which …

Repeated integer division by a runtime constant value

Submitted by 本秂侑毒 on 2019-12-28 12:03:54
Question: At some point in my program I compute an integer divisor d. From that point onward, d is constant. Later in the code I will divide by that d several times, performing an integer division, since the value of d is not a compile-time constant. Given that integer division is relatively slow compared to other kinds of integer arithmetic, I would like to optimize it. Is there some alternative format in which I could store d, so that the division process would perform …

Why is the construction of std::optional<int> more expensive than a std::pair<int, bool>?

Submitted by 我的未来我决定 on 2019-12-28 11:40:51
Question: Consider these two approaches that can represent an "optional int": using std_optional_int = std::optional<int>; using my_optional_int = std::pair<int, bool>; Given these two functions... auto get_std_optional_int() -> std_optional_int { return {42}; } auto get_my_optional() -> my_optional_int { return {42, true}; } ...both g++ trunk and clang++ trunk (with -std=c++17 -Ofast -fno-exceptions -fno-rtti) produce the following assembly: get_std_optional_int(): mov rax, rdi mov DWORD PTR [rdi], …

How to Tell if a .NET Assembly Was Compiled as x86, x64 or Any CPU

Submitted by 允我心安 on 2019-12-28 05:48:05
Question: What's the easiest way to discover (without access to the source project) whether a .NET assembly DLL was compiled as 'x86', 'x64' or 'Any CPU'? Update: A command-line utility was sufficient to meet my immediate needs, but just for the sake of completeness, if someone wants to tell me how to do it programmatically, that would be of interest too. Answer 1: If you just want to find this out for a given DLL, you can use the CorFlags tool that is part of the Windows SDK: CorFlags.exe …

Running 32-bit assembly code on a 64-bit Linux & 64-bit processor: explain the anomaly

Submitted by 喜夏-厌秋 on 2019-12-28 05:46:26
Question: I'm in an interesting situation: I forgot I'm using a 64-bit machine and OS and wrote 32-bit assembly code. I don't know how to write 64-bit code. This is the x86 32-bit assembly code for the GNU Assembler (AT&T syntax) on Linux: //hello.S #include <asm/unistd.h> #include <syscall.h> #define STDOUT 1 .data hellostr: .ascii "hello wolrd\n"; helloend: .text .globl _start _start: movl $(SYS_write), %eax //ssize_t write(int fd, const void *buf, size_t count); movl $(STDOUT), %ebx movl $hellostr, %ecx …

Force gcc to compile 32 bit programs on 64 bit platform

Submitted by 不想你离开。 on 2019-12-28 04:54:22
Question: I've got a proprietary program that I'm trying to use on a 64-bit system. When I launch the setup it works OK, but afterwards it tries to update itself and compile some modules, and it fails to load them. I suspect this is because it uses gcc, and gcc compiles them for a 64-bit system, so the program cannot use these modules. Is there any way (environment variables or something like that) to force gcc to do everything for a 32-bit platform? Would a 32-bit chroot work? …
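A sketch of the usual approach, assuming a multilib-capable gcc (package names such as gcc-multilib vary by distribution; `prog.c` and the `./configure` line are placeholders for the program's own build):

```shell
# Ask the compiler driver to target 32-bit x86 directly:
gcc -m32 -o prog prog.c

# Many build systems only need the flag exported through the
# standard variables:
CFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32 ./configure

# Sanity check: under -m32 the preprocessor defines __i386__,
# not __x86_64__:
gcc -m32 -dM -E - </dev/null | grep __i386__
```

A 32-bit chroot also works, and is the heavier-weight fallback when the vendor build scripts hardcode a bare `gcc` and ignore the flags.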

How to convert Linux 32-bit gcc inline assembly to 64-bit code? [closed]

Submitted by 旧街凉风 on 2019-12-28 04:09:03
Question: It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. Closed 7 years ago. I'm attempting to convert the RR0D Rasta Ring 0 Debugger from 32-bit mode to 64-bit mode (long mode) on Linux, using gcc. I'm familiar with x86 32-bit assembly (in the MS-DOS environment) but I'm a beginner at x86 64-bit …

push on 64bit intel osx

Submitted by 我们两清 on 2019-12-28 03:06:10
Question: I want to push a 64-bit address onto the stack, as below: __asm("pushq $0x1122334455667788"); But I get a compilation error, and I can only push the following way: __asm("pushq $0x11223344"); Can someone help me understand my mistake? I am new to assembly, so please excuse me if my question sounds stupid. Answer 1: x86-64 has some interesting quirks which aren't obvious even if you're familiar with 32-bit x86... Most instructions can only take a 32-bit immediate value, which is sign-extended to 64 bits if …

What specifically marks an x86 cache line as dirty - any write, or is an explicit change required?

Submitted by 青春壹個敷衍的年華 on 2019-12-28 03:05:27
Question: This question is specifically aimed at modern x86-64 cache-coherent architectures - I appreciate the answer can be different on other CPUs. If I write to memory, the MESI protocol requires that the cache line is first read into the cache, then modified in the cache (the value is written to the cache line, which is then marked dirty). In older write-through micro-architectures, this would then trigger the cache line being flushed; under write-back, the flush can be delayed for some …

How to use RIP Relative Addressing in a 64-bit assembly program?

Submitted by 。_饼干妹妹 on 2019-12-27 14:42:24
Question: How do I use RIP-relative addressing in a Linux assembly program for the AMD64 architecture? I am looking for a simple example (a Hello World program) that uses the AMD64 RIP-relative addressing mode. For example, the following 64-bit assembly program would work with normal (absolute) addressing: .text .global _start _start: mov $0xd, %rdx mov $msg, %rsi pushq $0x1 pop %rax mov %rax, %rdi syscall xor %rdi, %rdi pushq $0x3c pop %rax syscall .data msg: .ascii "Hello world!\n" I am guessing that …