shellcode

Address woes from Hacking: The Art of Exploitation [closed]

狂风中的少年 submitted on 2019-12-04 20:40:37
Closed. This question is off-topic and is not currently accepting answers. Closed 6 years ago. I bought this book recently, titled Hacking: The Art of Exploitation (2nd Edition), and it's been bugging me so much lately. Anyway, with one of the examples, firstprog.c:

```c
#include <stdio.h>

int main() {
    int i;
    for (i = 0; i < 10; i++) {     // Loop 10 times.
        printf("Hello, world!\n"); // Put the string to the output.
    }
    return 0; // Tell OS the program exited without errors.
}
```

It has you compile it with gcc (obviously…

execle() also specifies the environment. What does that mean?

北城余情 submitted on 2019-12-04 10:03:33
I am reading a book called "Hacking: The Art of Exploitation" and I came across this paragraph:

"With execl(), the existing environment is used, but if you use execle(), the entire environment can be specified. If the environment array is just the shellcode as the first string (with a NULL pointer to terminate the list), the only environment variable will be the shellcode. This makes its address easy to calculate. In Linux, the address will be 0xbffffffa, minus the length of the shellcode in the environment, minus the length of the name of the executed program. Since this address will be exact, …"

Why cast “extern puts” to a function pointer “(void(*)(char*))&puts”?

限于喜欢 submitted on 2019-12-04 05:14:58
I'm looking at example abo3.c from Insecure Programming and I'm not grokking the casting in the example below. Could someone enlighten me?

```c
int main(int argv, char **argc) {
    extern system, puts;
    void (*fn)(char *) = (void (*)(char *))&system;
    char buf[256];

    fn = (void (*)(char *))&puts;
    strcpy(buf, argc[1]);
    fn(argc[2]);
    exit(1);
}
```

(Note that argv and argc are deliberately swapped in the original example.) So, what's with the casting for system and puts? They both return an int, so why cast it to void? I'd really appreciate an explanation of the whole program to put it in perspective. [EDIT] Thank you both for your input! Jonathan Leffler, there is actually a reason for the code…

Format string bugs - exploitation

谁说胖子不能爱 submitted on 2019-12-03 20:30:35
I'm trying to exploit my format string bug, which lies in this program:

```c
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

void foo(char *tmp, char *format)
{
    /* write into tmp a string formatted as the format argument specifies */
    sprintf(tmp, format);
    /* just print the tmp buffer */
    printf("%s", tmp);
}

int main(int argc, char **argv)
{
    char tmp[512];
    char format[512];

    while (1) {
        /* fill memory with a constant byte */
        memset(format, '\0', 512);
        /* read at most 512 bytes into format */
        read(0, format, 512);
        /* compare two strings */
        if (…
```

Can't link object file using ld - Mac OS X

不打扰是莪最后的温柔 submitted on 2019-12-03 08:55:38

```nasm
; exit.asm
[SECTION .text]
global _start

_start:
    xor eax, eax
    xor ebx, ebx
    mov al, 1
    int 0x80
```

First I used nasm -f elf exit.asm to generate the object file, then I ran the following ld command on my Mac OS X 10.7; it produced these outputs and warnings. When I tried it on my 32-bit Linux machine, everything went through just fine. Could you please explain why the linker would not work on my Mac? Thank you! Alfred says:

```
ld -o exiter exit.o
ld: warning: -arch not specified
ld: warning: -macosx_version_min not specified, assuming 10.7
ld: warning: …
```

why can't Javascript shellcode exploits be fixed via “data execution prevention”?

我们两清 submitted on 2019-12-03 06:07:39
The "heap spraying" Wikipedia article suggests that many JavaScript exploits involve positioning shellcode somewhere in the script's executable code or data-space memory and then having the interpreter jump there and execute it. What I don't understand is: why can't the interpreter's entire heap be marked as "data", so that the interpreter would be prevented from executing the shellcode by DEP? Meanwhile, the execution of JavaScript-derived bytecode would be done by a virtual machine that would not allow it to modify memory belonging to the interpreter (this wouldn't work on V8, which seems to execute…

Shellcode for a simple stack overflow: Exploited program with shell terminates directly after execve(“/bin/sh”)

允我心安 submitted on 2019-12-02 16:22:14
I played around with buffer overflows on Linux (amd64) and tried exploiting a simple program, but it failed. I disabled the security features (address space layout randomization with sysctl -w kernel.randomize_va_space=0, and the NX bit in the BIOS). It jumps to the stack and executes the shellcode, but it doesn't start a shell. The execve syscall succeeds, but afterwards it just terminates. Any idea what's wrong? Running the shellcode standalone works just fine. Bonus question: why do I need to set rax to zero before calling printf? (See the comment in the code.) Vulnerable file buffer.s:

```asm
.data
.fmtsp: …
```

x86_64 Executing Shellcode fails:

风格不统一 submitted on 2019-12-02 09:06:38
I'm using Python 2.7 on 64-bit Linux. I have the following Python script, which should execute a simple Hello World shellcode:

```python
import urllib2
import ctypes

shellcode = "\xb8\x01\x00\x00\x00\xbf\x01\x00\x00\x00\x48\xbe\xd8\x00\x60\x00\x00\x00\x00\xba\x0e\x00\x00\x00\x0f\x05\xb8\x3c\x00\x00\x00\xbf\x00\x00\x00\x00\x0f\x05"

# Create buffer in memory
shellcode_buffer = ctypes.create_string_buffer(shellcode, len(shellcode))

# Function pointer
shellcode_func = ctypes.cast(shellcode_buffer, ctypes.CFUNCTYPE(ctypes.c_void_p))

# Execute shellcode
shellcode_func()
```

If I run python Scriptname.py I get a memory…