x86-64

Linking OpenCV 2.3 program in Mac OS X Lion: symbol(s) not found for architecture x86_64

血红的双手。 Submitted on 2019-12-20 04:19:43
Question: I'm having a problem when trying to compile the program in this tutorial from the official OpenCV 2.3 documentation. I created the CMakeLists.txt as described in the link. It didn't work. After a long time searching Google and trying to fix it, I added the correct lib and include folders to OpenCVConfig.cmake (at /opt/local/share/opencv here). This is the output when I try to make it:

    $ cmake .
    -- Configuring done
    -- Generating done
    -- Build files have been written
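For reference, a minimal CMakeLists.txt for an OpenCV 2.x tutorial program usually looks like the sketch below (project and source-file names are placeholders, not from the question). `find_package(OpenCV REQUIRED)` is what consumes OpenCVConfig.cmake, so pointing CMake at the right install with `-DOpenCV_DIR=...` is generally cleaner than editing that file by hand:

```cmake
cmake_minimum_required(VERSION 2.8)
project(DisplayImage)

# find_package reads OpenCVConfig.cmake; if it is not found, pass the
# location explicitly:  cmake -DOpenCV_DIR=/opt/local/share/opencv .
find_package(OpenCV REQUIRED)

include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(DisplayImage DisplayImage.cpp)

# Linking against ${OpenCV_LIBS} is what fixes "symbol(s) not found for
# architecture x86_64" when the OpenCV libraries were missing from the link.
target_link_libraries(DisplayImage ${OpenCV_LIBS})
```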

x86-64 segmentation fault saving stack pointer

落花浮王杯 Submitted on 2019-12-20 03:17:45
Question: I am currently following along with this tutorial, but I'm not a student at that school. GDB gives me a segmentation fault in thread_start on the line:

    movq %rsp, (%rdi)    # save sp in old thread's tcb

Here's additional info when I backtrace:

    #0  thread_start () at thread_start.s:16
    #1  0x0000000180219e83 in _cygtls::remove(unsigned int)::__PRETTY_FUNCTION__ () from /usr/bin/cygwin1.dll
    #2  0x00000000ffffcc6b in ?? ()
    Backtrace stopped: previous frame inner to this frame (corrupt stack?)

Being a

64-bit executable runs slower than 32-bit version

无人久伴 Submitted on 2019-12-20 03:07:17
Question: I have a 64-bit Ubuntu 13.04 system. I was curious to see how 32-bit applications perform against 64-bit applications on a 64-bit system, so I compiled the following C program as a 32-bit and a 64-bit executable and recorded the time each took to execute. I used gcc flags to compile for 3 different architectures:

    -m32  : Intel 80386 architecture (int, long, pointer all 32 bits (ILP32))
    -m64  : AMD's x86-64 architecture (int 32 bits; long, pointer 64 bits (LP64))
    -mx32 : AMD's x86-64

Why do 32-bit applications work on 64-bit x86 CPUs?

我怕爱的太早我们不能终老 Submitted on 2019-12-19 21:44:39
Question: 32-bit application executables contain machine code for a 32-bit CPU, but the assembly and internal architecture (number of registers, register width, calling convention) of 32-bit and 64-bit Intel CPUs differ, so how can a 32-bit exe run on a 64-bit machine? Wikipedia's x86-64 article says: x86-64 is fully backwards compatible with 16-bit and 32-bit x86 code. Because the full x86 16-bit and 32-bit instruction sets remain implemented in hardware without any intervening emulation, existing

ROL / ROR on variable using inline assembly in Objective-C

非 Y 不嫁゛ Submitted on 2019-12-19 21:26:06
Question: I would like to perform ROR and ROL operations on variables in an Objective-C program. However, I can't manage it; I am not an assembly expert. Here is what I have done so far:

    uint8_t v1 = ....;
    uint8_t v2 = ....; // v2 is either 1, 2, 3, 4 or 5
    asm("ROR v1, v2");

The error I get is: Unknown use of instruction mnemonic with unknown size suffix. How can I fix this? Edit: The code does not need to use inline assembly. However, I haven't found a way to do this using Objective-C / C++ / C

How to tell gcc to disable padding inside struct? [duplicate]

我们两清 Submitted on 2019-12-19 19:49:34
Question: This question already has answers here: memory alignment within gcc structs (6 answers). Closed 3 years ago. I'm unsure whether it's normal or a compiler bug, but I have a C struct with a lot of members. Among them there are:

    struct list {
        ...
        ...
        const unsigned char nop=0x90;  // 27 bytes since the beginning of the structure
        const unsigned char jump=0xeb; // 28 bytes since the beginning of the structure
        const unsigned char hlt=0xf4;  // 29 bytes since the beginning of the structure

Is it possible to execute 32-bit code in 64-bit process by doing mode-switching?

自闭症网瘾萝莉.ら Submitted on 2019-12-19 17:41:32
Question: On this page, http://www.x86-64.org/pipermail/discuss/2004-August/005020.html, the author says there is a way to mix 32-bit code and 64-bit code in an application. He assumes the application is 32-bit (in compatibility mode) and then switches to 64-bit mode to execute 64-bit code, and vice versa. Assume my OS is 64-bit Linux and my application is 64-bit. I do a far jump to switch to compatibility mode and execute 32-bit code. Can this work correctly when I make a system call or function call? Is

Printing floating point numbers from x86-64 seems to require %rbp to be saved

浪子不回头ぞ Submitted on 2019-12-19 11:35:11
Question: When I write a simple assembly language program, linked with the C library, using gcc 4.6.1 on Ubuntu, and I try to print an integer, it works fine:

    .global main
    .text
    main:
        mov  $format, %rdi
        mov  $5, %rsi
        mov  $0, %rax
        call printf
        ret
    format: .asciz "%10d\n"

This prints 5, as expected. But now if I make a small change and try to print a floating point value:

    .global main
    .text
    main:
        mov   $format, %rdi
        movsd x, %xmm0
        mov   $1, %rax
        call  printf
        ret
    format: .asciz "%10.4f\n"
    x:      .double 15.5

This

Linux's security measures against executing shellcode

南楼画角 Submitted on 2019-12-19 10:25:21
Question: I'm learning the basics of computer security and I'm trying to execute some shellcode I've written. I followed the steps given here: http://dl.packetstormsecurity.net/papers/shellcode/own-shellcode.pdf http://webcache.googleusercontent.com/search?q=cache:O3uJcNhsksAJ:dl.packetstormsecurity.net/papers/shellcode/own-shellcode.pdf+own+shellcode&cd=1&hl=nl&ct=clnk&gl=nl

    $ cat pause.s
    xor %eax,%eax
    mov $29,%al
    int $0x80
    $ as -o pause.o pause.s
    $ ld -o pause pause.o
    ld: warning: cannot find entry

x86 Code Injection into an x86 Process from an x64 Process

…衆ロ難τιáo~ Submitted on 2019-12-19 09:08:32
Question: I realize the title's a bit convoluted, so let me explain what I'm trying to do: I just finished writing a simple DLL injector for a proof of concept I'm trying to write. The program takes a snapshot of the current processes, enumerates the process tree, and injects a DLL into its direct parent process. Now, under ideal conditions, that works fine: the 32-bit version of the injector can inject into 32-bit parent processes, and the 64-bit version of the injector can inject into the 64-bit