x86-64

Execution stuck at printf — assembly

╄→尐↘猪︶ㄣ posted on 2019-12-08 07:39:13
Question: I am modifying some assembly code to do a printf. The first time printf is called, it works perfectly. But on the second call, it gets stuck at the call to printf. I am using gdb to debug. The original partial code is

movq 192(%rsp), %rax  # 8-byte Reload
movq (%rax), %rcx
movq 200(%rsp), %rdx  # 8-byte Reload
movq (%rdx), %rsi
movq %rcx, %rdi
movq 24(%rsp), %r8    # 8-byte Reload
addq %r8, %rdi
movq $0, (%rsi)
movq 216(%rsp), %r9   # 8-byte Reload
movq (%r9), %r10
movq 208(%rsp), %r11  #
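A frequent reason a second printf call hangs or crashes when the first one worked is a calling-convention violation rather than anything in printf itself: under the System V AMD64 ABI the stack must be 16-byte aligned at the call, %al must hold the number of vector registers used by the variadic call, and every caller-saved register may be clobbered. The following GAS sketch shows a conforming call; the function name, the label fmt, and the use of %rbx are illustrative and not taken from the code in the question.

    .section .rodata
fmt:
    .asciz  "value = %ld\n"

    .text
    .globl demo_print
demo_print:                        # argument arrives in %rdi
    pushq   %rbx                   # save a callee-saved register; the push also
                                   # restores 16-byte stack alignment after the
                                   # caller's call pushed the return address
    movq    %rdi, %rbx             # keep the value somewhere printf cannot clobber
    leaq    fmt(%rip), %rdi        # 1st argument: format string
    movq    %rbx, %rsi             # 2nd argument: the value to print
    xorl    %eax, %eax             # %al = 0: no vector registers used by varargs
    call    printf
    popq    %rbx                   # restore the caller's %rbx
    ret

After the call, %rax, %rcx, %rdx, %rsi, %rdi, %r8-%r11 and all xmm registers must be treated as destroyed, which is typically why a second call misbehaves when the first one appears fine.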

Looking for mmap flag values

蹲街弑〆低调 posted on 2019-12-08 05:11:35
Question: I was wondering where I could find the mmap flag values on OS X. The man pages for mmap say to use MAP_PRIVATE, MAP_... and such, but if you are dealing with assembly you have to know the actual numeric values to make the syscall. I tried looking for the header files that define these constants but I could not find them. Could someone possibly link to them? Answer 1: Using the -E option with gcc allows you to see the output of a source file after the preprocessor. Using gcc -E test.c on the following source file
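As a reference for the assembly case, here is a sketch of an anonymous private mapping done directly through the macOS syscall interface. The constants are the ones commonly found in the macOS SDK's <sys/mman.h> (MAP_PRIVATE = 0x0002, MAP_ANON = 0x1000, PROT_READ = 0x01, PROT_WRITE = 0x02), together with the BSD syscall class offset 0x2000000 and mmap as syscall 197; treat all of these as values to verify against your own headers rather than as authoritative.

    .text
    .globl _map_page
_map_page:
    movq    $0x20000C5, %rax       # 0x2000000 (BSD class) + 197 (SYS_mmap)
    xorl    %edi, %edi             # addr   = NULL, let the kernel choose
    movq    $4096, %rsi            # length = one page
    movq    $0x3, %rdx             # prot   = PROT_READ | PROT_WRITE
    movq    $0x1002, %r10          # flags  = MAP_ANON | MAP_PRIVATE
    movq    $-1, %r8               # fd     = -1 for anonymous mappings
    xorl    %r9d, %r9d             # offset = 0
    syscall                        # on return, %rax holds the mapped address
    ret

On newer systems the header lives inside the Xcode or Command Line Tools SDK rather than under /usr/include, which is likely why a plain filesystem search does not turn it up.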

Link object files without libSystem macOS

牧云@^-^@ posted on 2019-12-08 04:54:52
Question: I'm writing a compiler for macOS on x86-64, but when I link the object files together, ld says

ld: dynamic main executables must link with libSystem.dylib for inferred architecture x86_64

But since libSystem contains libc, I don't want to use it (it would give me lots of duplicates). How can I get around this? Answer 1: Use -macosx_version_min 10.6 as an ld parameter. This will generate LC_UNIXTHREAD instead of LC_MAIN in your executable. If you want even more control you'd need to get rid of ld in
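For reference, a freestanding program that avoids libSystem has to make its own syscalls and expose the right entry symbol. The sketch below assumes the entry point is named start (ld64's traditional default when it emits LC_UNIXTHREAD) and uses the BSD syscall numbers 0x2000004 for write and 0x2000001 for exit; verify both against your toolchain before depending on them.

    .globl start
start:
    movq    $0x2000004, %rax       # write(fd, buf, len)
    movl    $1, %edi               # fd = stdout
    leaq    msg(%rip), %rsi        # buf
    movq    $6, %rdx               # len
    syscall

    movq    $0x2000001, %rax       # exit(status)
    xorl    %edi, %edi
    syscall

msg:
    .ascii  "hello\n"

Linking something along the lines of ld -macosx_version_min 10.6 -e start hello.o -o hello is the usual route; the exact flags accepted vary with the ld64 version.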

Is xmm8 register value preserved across calls?

心不动则不痛 posted on 2019-12-08 03:17:30
Question: My Windows program, compiled with Visual Studio 2017, does the following: (1) calls a routine that has a default argument with value 35.05; (2) initializes the Java Virtual Machine through the C interface; (3) calls the routine that has the default argument with value 35.05 again. In the first call, the default argument gets the correct 35.05. In the second call, that value is garbage. I looked at the generated assembly: during the first call, the default argument 35.05 is copied to xmm8 from a
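Background that this scenario hinges on: in the Windows x64 calling convention, xmm6 through xmm15 are nonvolatile, so the compiler is entitled to keep 35.05 parked in xmm8 across a call, and any conforming callee that touches xmm8 must save and restore it; in the System V ABI used on Linux and macOS, by contrast, every xmm register is call-clobbered. A minimal sketch of what a conforming Windows callee has to do (written in GAS/AT&T syntax only for consistency with the rest of this page):

conforming_callee:
    subq    $24, %rsp              # 16 bytes of save space + alignment padding
    movaps  %xmm8, (%rsp)          # preserve the caller's xmm8 (nonvolatile)
    # ... the body may now use xmm8 freely ...
    movaps  (%rsp), %xmm8          # restore it before returning
    addq    $24, %rsp
    ret

If a library loaded into the process does not honor this, or disturbs floating-point state in some other way, a value the caller cached in xmm8 can come back as garbage exactly as described.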

Pillow, the Python image processing module

一笑奈何 posted on 2019-12-08 03:01:14
Original source: Pillow. Installation. Warning: * Pillow cannot coexist with PIL in the same environment; uninstall PIL before installing Pillow. * Since Pillow 1.0, import Image is no longer supported; use from PIL import Image instead. * Since Pillow 2.1.0, import _imaging is no longer supported; use from PIL.Image import core as _imaging instead. Notes: * Pillow versions before 2.0.0 support only Python 2.4, 2.5, 2.6 and 2.7. * Pillow 2.0.0 and later support Python 2.6, 2.7, 3.2, 3.3, 3.4 and 3.5. Basic installation. Note: installing from PyPI works on Windows, OS X and Linux; installing from the source package requires setting up the build dependencies. Install Pillow with pip: $ pip install Pillow, or with easy_install: $ easy_install Pillow. Windows installation. Official Pillow binaries (wheels, eggs and 32-bit and 64-bit installers) are provided, and they bundle all the third-party libraries: $ pip install Pillow, or: $ easy_install Pillow. OS X installation. The official OS X

x86_64 - Self-modifying code performance

醉酒当歌 posted on 2019-12-08 00:32:48
Question: I am reading the Intel architecture documentation, vol. 3, section 8.1.3:

Self-modifying code will execute at a lower level of performance than non-self-modifying or normal code. The degree of the performance deterioration will depend upon the frequency of modification and specific characteristics of the code.

So, if I respect the rules:

(* OPTION 1 *)
Store modified code (as data) into code segment;
Jump to new code or an intermediate location;
Execute new code;

(* OPTION 2 *)
Store modified
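A minimal sketch of the OPTION 1 sequence, assuming patch_site lies on a page that is both writable and executable (for example, memory obtained via mmap with PROT_READ|PROT_WRITE|PROT_EXEC); ordinary .text pages are read-only, so a real program has to arrange that first:

    .text
    .globl smc_demo
smc_demo:
    movb    $0xC3, patch_site(%rip)   # 1. store new code as data (0xC3 = ret)
    jmp     1f                        # 2. jump to an intermediate location
1:
    call    patch_site                # 3. execute the freshly written instruction
    ret

patch_site:
    .byte   0x90                      # placeholder nop that gets overwritten

The performance caveat in the manual applies regardless: writing to a region that is also cached as instructions forces the pipeline and any cached decoded instructions for that region to be discarded, which is where the slowdown comes from.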

Are Intel x86_64 processors not only pipelined, but also superscalar?

烂漫一生 posted on 2019-12-07 22:58:32
Question: Are Intel x86_64 processors not only pipelined, but also superscalar?

Pipelining - these two sequences execute in parallel (different stages of the same pipeline unit in the same clock; for example, an ADD with 4 stages):

stage1 -> stage2 -> stage3 -> stage4 -> nothing
nothing -> stage1 -> stage2 -> stage3 -> stage4

Superscalar - these two sequences execute in parallel (two instructions can be launched to different pipeline units in the same clock; for example, ADD and MUL):

ADD
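A concrete illustration of the superscalar part: the two instructions in the first pair below have no data dependence, so a superscalar core can issue them in the same clock on two different execution units, whereas the second pair is serialized by the dependence on %rbx. The register choices are illustrative; the exact issue width and port assignments depend on the microarchitecture.

    # independent - eligible to issue in the same cycle
    addq    %rax, %rbx
    imulq   %rcx, %rdx

    # dependent - the multiply must wait for the add's result
    addq    %rax, %rbx
    imulq   %rbx, %rdx             # reads the %rbx produced just above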

Calling C function from x64 assembly with registers instead of stack

筅森魡賤 posted on 2019-12-07 22:12:25
Question: This answer puzzled me. According to the standard C calling conventions, the standard way to call C functions is to push arguments onto the stack and then call the subroutine. That is clearly different from syscalls, where you set different registers to the appropriate arguments and then execute syscall. However, the answer mentioned above gives this GAS code:

.global main
.section .data
hello: .asciz "Hello\n"
.section .text
main:
    movq $hello, %rdi
    movq $0, %rax
    call printf
    movq $0, %rax
    ret

which works
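The quoted code works because the x86-64 System V ABI passes the first six integer or pointer arguments in %rdi, %rsi, %rdx, %rcx, %r8 and %r9; pushing arguments on the stack is the older 32-bit cdecl convention. The snippet does cut two corners, though: it uses an absolute address for hello (which typically fails to link as a position-independent executable) and it calls printf with the stack misaligned by 8. A slightly more defensive variant, offered only as an illustration:

    .section .rodata
hello:
    .asciz  "Hello\n"

    .section .text
    .global main
main:
    subq    $8, %rsp               # main is entered with %rsp % 16 == 8;
                                   # this restores 16-byte alignment
    leaq    hello(%rip), %rdi      # 1st argument in a register, not on the stack
    xorl    %eax, %eax             # %al = 0: no vector registers used (varargs)
    call    printf
    xorl    %eax, %eax             # return 0 from main
    addq    $8, %rsp
    ret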

Does a legitimate epilog need to include a dummy rsp adjustment even if not otherwise necessary?

末鹿安然 posted on 2019-12-07 21:24:36
Question: The x86-64 Windows ABI has the concept of a legitimate epilog, which is a special type of function epilog that can be simulated during exception handling in order to restore the caller's context [1], as described here: "If the RIP is within an epilog [when an exception occurs], then control is leaving the function, ... and the effects of the epilog must be continued to compute the context of the caller function. To determine if the RIP is within an epilog, the code stream from RIP on is examined."
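For context, the shape the Windows x64 unwinder is prepared to recognize as an epilog is quite rigid: at most one stack restore (an add to rsp or an lea from the frame pointer), followed by pops of nonvolatile registers, followed by a ret or a tail-call jmp, with nothing else in between. A sketch of a typical legitimate epilog (written in GAS/AT&T syntax here purely for consistency with the rest of this page; the Microsoft documentation shows it in Intel/MASM syntax):

some_function:
    # ... prolog and body elided ...
    addq    $0x28, %rsp            # the single allowed stack adjustment
    popq    %rsi                   # pops of nonvolatile registers only
    popq    %rdi
    popq    %rbx
    ret                            # or a jmp for a tail call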

JMP unexpected behavior in shellcode when the next (skipped) instruction is a variable definition

扶醉桌前 posted on 2019-12-07 20:23:56
Question: Purpose: I was trying to take advantage of RIP-relative addressing in x86-64. Even though the assembly performs as expected on its own, the shellcode does not. The problem: concisely, what I tried was this:

jmp l1
str1: db "some string"
l1:
; other code
lea rax, [rel str1]

I used the above at various places; it failed only at certain places and succeeded at others. I tried to play around and could not find any pattern to when it fails. When the variable's (str1: db instruction's) position is after the
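For comparison, the same jump-over-data pattern in GAS/AT&T syntax (the question uses NASM). Both forms are position independent: the jmp guarantees the embedded bytes are never executed, and the lea computes the string's address from the current %rip, which is what makes the fragment relocatable as shellcode. The labels and string are illustrative.

    jmp     past_data
str1:
    .ascii  "some string"          # embedded data; must never be executed
past_data:
    leaq    str1(%rip), %rax       # %rax = run-time address of str1
    # ... the rest of the payload uses %rax ...

An equally common layout places the data after the payload's final instruction so that no jmp is needed at all; either way, the essential property is that execution never falls through into the data bytes.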