How do Operating Systems prevent programs from accessing memory?

Submitted by 别等时光非礼了梦想 on 2021-02-08 19:12:53

Question


My understanding currently is,

  • I can write an operating system in C

  • I can write a program for that operating system in C

  • When I write an operating system I can see all of the memory

  • When I write a program, the operating system hides other programs' memory from me.

  • Whenever a program runs inside an OS, it appears to the program as if the memory allocated to it is all the memory the computer has

How does the CPU / OS achieve this? Is this something purely implemented on the software level? Or does it require a hardware implementation as well?


Answer 1:


It is not purely at the software level. For the Intel architecture, in a few sentences:

The address space of each process is isolated; each process has the same virtual address range (let's simplify: 0x00000000 to 0xffffffff), but it maps to different physical locations.

An address space is a collection of memory pages. Pages are mapped to physical memory only when needed. Pages that have not been accessed for a long time (there are special algorithms for choosing them) are evicted from physical memory; if they contain dynamically modified data, they are written to a 'swap' file on the hard drive.

Each page belongs to a specific process (except for some system pages) and has an assigned virtual address and access flags: read/write/execute. What appears to be a contiguous array may be allocated across several non-contiguous pages, and some of them may even be swapped out to the hard drive at any given moment.

A program (process) can see only its own address space. There are a few ways to reach another process's address space, but regular programs rarely do that.

The address space is not completely accessible: if the program tries to access an unallocated address, or to write to a write-protected page, a memory access violation is triggered.
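To make that concrete, here is a minimal, deliberately broken C snippet (assuming a typical desktop OS): both kinds of bad access end this process with a segmentation fault instead of silently touching someone else's memory.

    /* Both accesses below are illegal; on a typical desktop OS the hardware
     * faults and the kernel delivers SIGSEGV, killing only this process. */
    #include <stdio.h>

    int main(void) {
        int *unmapped = (int *)0x10;  /* address that is almost certainly not mapped */
        *unmapped = 1;                /* access to an unallocated address -> fault   */

        char *ro = "read-only";       /* string literals usually live in a read-only page */
        ro[0] = 'X';                  /* write to a write-protected page -> fault    */

        puts("never reached");
        return 0;
    }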

Generally, a program can allocate, deallocate, or change access flags for pages only in its own address space. There are several types of memory regions (for the loaded executable image, for the stack, and for several different flavors of allocatable memory).
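As a rough user-space sketch (POSIX assumed; mmap/mprotect/munmap are the usual calls, and the kernel edits the page tables on the program's behalf), this is what allocating a page, changing its access flags, and deallocating it looks like, all strictly within the program's own address space:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        /* Map one anonymous read+write page into this process's address space. */
        char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "hello");                       /* allowed: page is writable */

        /* Change the access flags: make the page read-only. */
        if (mprotect(p, page, PROT_READ) != 0) { perror("mprotect"); return 1; }
        printf("still readable: %s\n", p);        /* allowed: page is readable */
        /* p[0] = 'X'; */                         /* would now fault (SIGSEGV) */

        munmap(p, page);                          /* deallocate the page */
        return 0;
    }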

Sorry, I do not remember the book title; I read it a very long time ago.




Answer 2:


How do Operating Systems prevent programs from accessing memory?

Short answer: On x86 processors they do it by activating Protected Mode (32-bit) or Long Mode (64-bit). ARM and other processors implement similar concepts. Protected mode isolates the memory spaces of different processes from each other, giving each process its own memory space. This concept is called virtual memory.

In hardware this is realized by the MMU (for memory accesses from the CPU) or the IOMMU (for accesses from I/O devices), which blocks access to certain regions of the memory space.
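To give a rough idea of what the MMU consults, here is a simplified sketch of the classic 32-bit x86 page-table entry (not a complete or authoritative layout): each mapped page is described by an entry whose bits encode the physical frame and the permissions the MMU checks on every access.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified sketch of a classic 32-bit x86 page-table entry. */
    typedef uint32_t pte_t;

    #define PTE_PRESENT  (1u << 0)    /* page is mapped in physical memory     */
    #define PTE_WRITABLE (1u << 1)    /* writes are allowed                    */
    #define PTE_USER     (1u << 2)    /* user-mode (ring 3) code may access it */
    #define PTE_FRAME    0xFFFFF000u  /* bits 31..12: physical frame address   */

    /* Would a user-mode write to the page described by `pte` fault? */
    static int user_write_faults(pte_t pte) {
        return !(pte & PTE_PRESENT) ||
               !(pte & PTE_WRITABLE) ||
               !(pte & PTE_USER);
    }

    int main(void) {
        pte_t pte = 0x12345000u | PTE_PRESENT | PTE_USER;  /* read-only user page */
        printf("user write faults: %d\n", user_write_faults(pte));  /* prints 1 */
        return 0;
    }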

How does the CPU / OS achieve this? Is this something purely implemented on the software level? Or does it require a hardware implementation as well?

As mentioned above, this is better implemented in hardware to be efficient; it cannot be done (efficiently) purely at the software level.

As a thought experiment for advanced readers: try to implement process isolation (preventing another process from accessing this process's memory) in Real Mode.

A (reasonable) answer:
The only software-only implementation I know of is a virtual machine that checks the bounds of every memory access of every instruction - which is essentially what an MMU does.
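A minimal sketch of that idea (a toy interpreter-style check, not a real VM): every emulated store is range-checked in software against the region the "process" owns before it touches the backing memory, which is the same bookkeeping an MMU does in hardware on every access.

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t phys[64 * 1024];            /* emulated physical memory        */

    struct proc { uint32_t base, limit; };     /* region this "process" may touch */

    /* Software-checked store: the toy equivalent of an MMU permission check. */
    static int checked_store(const struct proc *p, uint32_t addr, uint8_t val) {
        if (addr < p->base || addr - p->base >= p->limit)
            return -1;                         /* software "protection fault"     */
        phys[addr] = val;
        return 0;
    }

    int main(void) {
        struct proc a = { .base = 0x0000, .limit = 0x1000 };
        printf("in-bounds store : %d\n", checked_store(&a, 0x0010, 42));  /* 0  */
        printf("out-of-bounds   : %d\n", checked_store(&a, 0x2000, 7));   /* -1 */
        return 0;
    }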




Answer 3:


The current common solution is to use an MMU, a memory management unit. No need to think only of Intel or ARM.

You can look up the terms virtual memory and physical memory, although there is a problem with how the term virtual memory tends to be used (more on that below).

Physical memory is the processor's address space, from 0x000...0000 to 0xFFF...FFF, for however many address bits there are.

Virtual memory does not require a separate processor mode, but implementations generally use one, which allows for isolation between the kernel (the OS, if you will) and the application(s). On the address bus between the processor core and the MMU, an id is presented along with the address and data. The operating system sets up MMU tables that define chunks of virtual memory and describe the physical addresses they map to. So the 16 KB chunk of virtual memory at 0x00000000 for a specific application may map to 0x12300000 in physical memory. For that same application, 0x00004000 may map to 0x32100000, and so on. This makes memory allocation much easier for the operating system: if it wants to allocate a megabyte of memory, it does not have to find a linear/aligned chunk of free physical memory; it can build it out of smaller chunks of unallocated/free memory. This, among other things, allows the application to think it has access to a large portion of the processor's memory space.
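A toy sketch of that lookup, using the made-up numbers from the paragraph above (real MMUs use multi-level tables, do this in hardware, and handle faults, none of which this toy does):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy single-level page table with 16 KB pages, matching the example:
     * virtual page 0 -> physical 0x12300000, virtual page 1 -> 0x32100000. */
    #define PAGE_SHIFT 14u                      /* 16 KB pages                 */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    static const uint32_t page_table[] = { 0x12300000u, 0x32100000u };

    static uint32_t translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;  /* virtual page number         */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);
        return page_table[vpn] + offset;        /* physical frame base + offset */
    }

    int main(void) {
        printf("virtual 0x%08x -> physical 0x%08x\n", 0x00000010u, translate(0x00000010u));
        printf("virtual 0x%08x -> physical 0x%08x\n", 0x00004020u, translate(0x00004020u));
        return 0;
    }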

There are different design implementations, but for protection between the OS and an application, the id used on the bus distinguishes between the applications and the OS. If a bus transaction contains a combination of an id and an address that the id does not have access to (each chunk has access/protection bits that indicate, in some form, whether an id may access that virtual address), then the MMU generates a fault, some sort of exception/interrupt delivered in a processor-specific way, which switches the processor to the protected/kernel mode and lands in an interrupt/exception handler.

A fault is not necessarily a bad thing. For example, when running a virtual machine instead of an ordinary application, the virtual machine software could intentionally be designed so that a particular virtual address is an emulation of some peripheral, an Ethernet controller for example, so the VM can have access to the network. When the guest hits that address, the fault happens, but instead of shutting down the application and notifying the user that there was a problem, you instead, based on that address, emulate the peripheral by reacting to the access or returning a result that the guest cannot tell apart from a real peripheral.

Another use of faults is the layman's (not programmer / software/hardware engineer) version of "virtual memory", and this is where your application can think it has access to all of the computer's memory. The applications may have used up all the free memory (RAM) in the system, but within their virtual address spaces none of them has actually done that. At one point an application may have had physical 0x11100000 mapped to virtual 0x20000000, but now there is a demand on the system for an allocation of memory and no more is available. The operating system can use an algorithm to decide that this application has not used that chunk for a while (or, more likely, a randomized lottery), take the chunk at physical 0x11100000, copy its contents to the hard drive or other non-RAM storage, mark virtual 0x20000000 so that it will fault if accessed, and give physical 0x11100000 to the current memory allocation request (which could come from the same application or a different one). When the first application comes back and accesses the memory at 0x20000000, the operating system gets the fault, picks some other chunk of memory, saves it to disk, marks it to fault, pulls what was at this application's 0x20000000 back from disk into RAM, resolves the fault, and the application keeps going. This is why performance falls off a cliff when you run out of memory and the system starts using "swap" space, which is sometimes also called virtual memory.
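A toy simulation of that swap dance (made-up sizes and a trivial policy; a real OS tracks dirty pages, actually writes them to disk, and uses much smarter eviction):

    #include <stdio.h>

    #define VPAGES 4                 /* virtual pages the application uses      */
    #define FRAMES 2                 /* physical frames actually available      */

    static int frame_of[VPAGES];     /* -1 = not resident (access would fault)  */
    static int owner_of[FRAMES];     /* which virtual page occupies each frame  */
    static int next_victim;          /* trivial round-robin eviction policy     */

    static void touch(int vpage) {
        if (frame_of[vpage] >= 0) {                       /* resident: no fault */
            printf("vpage %d: hit in frame %d\n", vpage, frame_of[vpage]);
            return;
        }
        int frame = next_victim;                          /* page fault         */
        next_victim = (next_victim + 1) % FRAMES;
        if (owner_of[frame] >= 0) {                       /* evict to "swap"    */
            printf("vpage %d: fault, evicting vpage %d from frame %d\n",
                   vpage, owner_of[frame], frame);
            frame_of[owner_of[frame]] = -1;
        } else {
            printf("vpage %d: fault, using free frame %d\n", vpage, frame);
        }
        owner_of[frame] = vpage;                          /* "load from swap"   */
        frame_of[vpage] = frame;
    }

    int main(void) {
        for (int i = 0; i < VPAGES; i++) frame_of[i] = -1;
        for (int i = 0; i < FRAMES; i++) owner_of[i] = -1;
        touch(0); touch(1); touch(0); touch(2); touch(1); /* forces an eviction */
        return 0;
    }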

If the MMU is there and the processor is designed to be used with operating systems, then ideally there is a fast way to switch the MMU tables. For a single-threaded processor, to keep this simpler, only one thing runs at a time: even though it feels to the user as if many things are going on at once, only one instruction stream is executing at any moment, and it belongs either to a specific application or to a handler within the operating system. Each id needs its own MMU table, one per application plus one for the kernel itself (you do not normally turn the MMU off; you either give the kernel full access to the memory space, or the MMU knows that a specific id is not checked, depending on the design of the MMU/system). The MMU tables live in memory, but the MMU does not have to go through itself to reach them; it is not a chicken-and-egg problem, because the operating system simply never allocates that memory to anyone else and protects it. The MMU can combine the id and the upper bits of the virtual address to find the table entry, or in a single-threaded system there could be one active table and the OS switches which table is used or which id has access to which chunks; or, to put it another way, you could get by with only two ids on a single-threaded system. This is getting vague; you would need to look at specific processors/architectures/implementations to see how each one works: how the processor modes work, what ids are generated from them, how the MMU reacts to them, and so on.

Another feature here, which makes life so much easier for all of us, is that this also allows application A to have its program at 0x00000000, application B to have its program at (virtual address) 0x00000000, and application C to have its program at 0x00000000, because their physical addresses all sit in different places. We can therefore compile programs for that operating system so that they all operate in the same virtual memory space. Pre-MMU, or without an MMU, then 1) you are likely unprotected, but 2) you can certainly still have an operating system with applications. You would need the operating system either to move memory around, so that each launched application starts at a known address with another application moved/swapped out of the way, or to force position-independent code, so that each application can start at a different address. To support memory allocation, the OS would also need to work harder to keep track of what is in use, and to use an algorithm that tries to avoid fragmentation, sometimes having to copy data when an application re-allocates.
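A small demo of that "everyone at the same virtual address" point (POSIX assumed): after fork(), parent and child print the same virtual address for the same variable, yet a write in the child is invisible to the parent, because that shared virtual address is backed by different physical pages in each process.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 42;   /* same virtual address in parent and child */

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {                       /* child                          */
            value = 1000;                     /* modifies only the child's copy */
            printf("child : &value=%p value=%d\n", (void *)&value, value);
            return 0;
        }

        wait(NULL);                           /* parent: wait for the child     */
        printf("parent: &value=%p value=%d\n", (void *)&value, value);  /* 42   */
        return 0;
    }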



Source: https://stackoverflow.com/questions/58896670/how-do-operating-systems-prevent-programs-from-accessing-memory
