Trying to disable paging through cr0 register

Submitted by 三世轮回 on 2021-01-29 11:11:54

Question


I'm trying to disable paging completely with an LKM (don't ask me why I'm just experimenting).

I've tried just changing the value directly with the LKM.

void disable_paging(void)
{
    asm("movq %cr0, %rax\n\t"
        "movq $0xFFFFFFFEFFFFFFFF, %rbx\n\t"
        "and  %rbx, %rax\n\t"
        "movq %rax, %cr0\n\t");
}

Well the expected result would be the bit being flipped. The actual result is a segfault.


Answer 1:


TL:DR: This can't work, but your attempt didn't disable paging because you cleared bit 32 instead of bit 31. IDK why that would result in a SIGSEGV for any user-space process, though.

Any badness you get from this is from clobbering RAX + RBX without telling the compiler.


You're obviously building a module for x86-64 Linux which runs in long mode. But long mode requires paging to be enabled.

According to the osdev forum thread "x86_64 - disabling paging?":

If you disable paging in long mode, you will no longer be in long mode.

If that's actually true (rather than just trapping with a #GP exception or something), then obviously it's a complete disaster!!

Code fetch from EIP instead of RIP is extremely unlikely to fetch anything, and REX prefixes would decode as inc/dec if you do happen to end up with EIP pointing at some 64-bit code somewhere in the low 4GiB of physical address space. (Kernel addresses are in the upper canonical range, but it's remotely possible that the low 32 bits of RIP could be the physical address of some code.)

Also related: Why does long mode require paging - probably because supporting unpaged 64-bit mode is an unnecessary hardware expense that would never get much real use.


I'm not sure why you'd get a segfault. That's what I'd expect if you tried to run this code in user-space, where mov %cr0, %rax faults because it's privileged, and the kernel delivers SIGSEGV in response to that user-space #GP exception.

If you are running this function from an LKM's init function, then, as Brendan says, the expected result would be crashing the kernel on that core. Or possibly the kernel would catch that and deliver SIGSEGV to modprobe(1).


Also, you're using GNU C Basic asm (without any clobbers), so GCC's code-gen assumes that registers (including RAX and RBX) aren't modified. Of course disabling paging is also a jump when your code isn't in an identity-mapped page, so it doesn't really matter whether you make other small lies to the compiler or not. If this function doesn't inline into anything, then in practice clobbering RAX won't hurt. But clobbering RBX definitely can; it's call-preserved in the x86-64 System V calling convention.

And BTW, CR0 only has 32 significant bits. You could use and $0x7fffffff, %eax to clear bit 31 (PG). Or btr $31, %rax if you'd rather clear bit 31 in a 64-bit register. https://wiki.osdev.org/CPU_Registers_x86

According to Section 2.5 of the Intel manual Volume 3 (January 2019):

Bits 63:32 of CR0 and CR4 are reserved and must be written with zeros. Writing a nonzero value to any of the upper 32 bits results in a general-protection exception, #GP(0).

According to Section 3.1.1 of the AMD manual Volume 2 (December 2017):

In long mode, bits 63:32 are reserved and must be written with zero, otherwise a #GP occurs.

So it would be fine to truncate RAX to EAX, at least for the foreseeable future. New stuff tends to get added to MSRs, not CR bits. Since there's no way to do this in Linux without crashing, you might as well just keep it simple for silly computer tricks.


0xFFFFFFFEFFFFFFFF clears bit 32, not bit 31

All of the above is predicated on the assumption that you were actually clearing the paging-enable bit. So maybe SIGSEGV is simply due to corrupting registers with GNU C basic asm without actually changing the control register at all.

https://wiki.osdev.org/CPU_Registers_x86 shows that Paging is bit 31 of CR0, and that there are no real bits in the high half. https://en.wikipedia.org/wiki/Control_register#CR0 says CR0 is a 64-bit register in long mode. (But there still aren't any bits that do anything in the high half.)

Your mask actually clears bit 32, the low bit of the high half. The right AND mask is 0x7FFFFFFF. Or btr $31, %eax. Truncating RAX to EAX is fine.

This will actually crash your kernel in long mode like you were trying to:

// disable paging, should crash
    asm volatile(
        "mov  %%cr0, %%rax        \n\t"   // assembles with no REX prefix, same as mov %cr0,%eax
        "btr  $31, %%eax          \n\t"   // reset (clear) bit 31
        "mov  %%rax, %%cr0        \n\t"
        ::
        : "rax", "memory"
     );


Source: https://stackoverflow.com/questions/57123211/trying-to-disable-paging-through-cr0-register
