According to Wikipedia:
A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space, but not loaded in physical memory.
"page that is mapped in the virtual address space, but not loaded in physical memory" does not imply that it previously was in physical memory. Suppose you map a file? It's still on disk, not in memory yet.
Suppose you map a log file and keep appending to it. Every time you write past the end of the committed region, a page fault occurs, the OS provides you with a new empty page, and the file length is adjusted.
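Roughly what that pattern looks like with the Win32 mapping APIs (a sketch; the file name and the 16 MB mapping cap are arbitrary, and the exact fault-driven growth mechanics vary by OS):

```c
#include <windows.h>
#include <string.h>

int main(void)
{
    /* "app.log" and the 16 MB cap are placeholders for this sketch. */
    HANDLE file = CreateFileA("app.log", GENERIC_READ | GENERIC_WRITE, 0,
                              NULL, OPEN_ALWAYS, 0, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Creating the mapping extends the file to the 16 MB maximum up front;
       physical pages are only materialized as the writes below touch them. */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                        0, 16 * 1024 * 1024, NULL);
    if (!mapping) return 1;

    char *log = (char *)MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, 0);
    if (!log) return 1;

    SIZE_T used = 0;
    const char *line = "something happened\n";
    for (int i = 0; i < 1000; i++) {
        /* Whenever `used` crosses into a page that has never been touched,
           this memcpy raises a page fault and the OS wires in a fresh page
           backed by the file. */
        memcpy(log + used, line, strlen(line));
        used += strlen(line);
    }

    UnmapViewOfFile(log);
    CloseHandle(mapping);

    /* Trim the file from the 16 MB mapping size down to what was written. */
    SetFilePointer(file, (LONG)used, NULL, FILE_BEGIN);
    SetEndOfFile(file);
    CloseHandle(file);
    return 0;
}
```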
It could also be access violations which are caught and handled by the program.
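For instance, a program can field an access violation itself with structured exception handling; a minimal MSVC-specific sketch:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    int * volatile p = NULL; /* deliberately unmapped address */

    __try {
        *p = 42; /* faults: no page is mapped at address 0 */
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        printf("caught and handled the access violation\n");
    }
    return 0;
}
```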
It could also be that the program touches more memory regions than fit in the TLB (which is a cache for page table entries). When pages are contiguous, they can sometimes be covered by a single entry (e.g., a large page). But if memory is fragmented in the physical address space, many separate page table entries are needed, and they may not all fit in the TLB. On architectures with a software-managed TLB, a TLB miss invokes the OS, which looks up the mapping in the process's page table and reloads the TLB.
In some ways, this is a variation on Dean's answer: the pages are already in physical RAM, and the OS does need to load those mappings into the TLB, but not because of IPC.
Brian pointed out that x86 (and therefore all Win32 systems) handles TLB misses in hardware, without a page fault.
Yet another cause of page faults is tripping the guard pages used for stack growth, and copy-on-write faults; usually those do not occur without bound. I'm not 100% sure whether those show up as access violations: they are flagged as access violations on entry to the MMU trap, but they are probably handled by the OS page fault handler and never surface as user-mode (SEH) access violations.
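A user-armed guard page makes the mechanics visible: the first touch raises STATUS_GUARD_PAGE_VIOLATION and clears the guard bit, and the second touch is an ordinary access (a minimal sketch):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Commit one page and arm it as a guard page. */
    BYTE *page = (BYTE *)VirtualAlloc(NULL, si.dwPageSize,
                                      MEM_RESERVE | MEM_COMMIT,
                                      PAGE_READWRITE | PAGE_GUARD);
    if (!page) return 1;

    __try {
        page[0] = 1; /* first touch trips the guard */
    }
    __except (GetExceptionCode() == STATUS_GUARD_PAGE_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        printf("guard page tripped; the guard bit is now cleared\n");
    }

    page[0] = 2; /* no fault: the page is now an ordinary committed page */
    VirtualFree(page, 0, MEM_RELEASE);
    return 0;
}
```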
(I'm the author of Process Hacker.)
Firstly:
A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space, but not loaded in physical memory.
That's not entirely correct, as explained later in the same article (Minor page fault). There are soft page faults, where all the kernel needs to do is add a page to the working set of the process. Here's a table from the Windows Internals book (I've excluded the ones that result in an access violation):

- Accessing a page that isn't resident in memory but is on disk in a page file or a mapped file: allocate a physical page and read the page in from disk into the working set.
- Accessing a page that is on the standby or modified list: transition the page back into the process (or system) working set.
- Accessing a demand-zero page: add a zero-filled page to the working set.
- Writing to a copy-on-write page: make a process-private copy of the page and replace the original in the working set.
- Writing to a guard page: guard-page violation (for a user-mode stack, perform automatic stack expansion).
Page faults can occur for a variety of reasons, as you can see above, and only one of them involves reading from the disk. If you allocate a block from the heap and the heap manager commits new pages, the first access to those pages raises a demand-zero page fault. If you try to hook a function in kernel32 by writing to kernel32's pages, you'll get a copy-on-write fault, because those pages are silently copied so your changes don't affect other processes.
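A demand-zero fault is easy to provoke on purpose: commit a region and touch each page once (a sketch; the 64-page size is arbitrary). Watching the process in a tool like Process Hacker should show the page fault counter rising by roughly one per page:

```c
#include <windows.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    SIZE_T size = 64 * si.dwPageSize;

    /* Committing charges the memory but maps no physical pages yet. */
    BYTE *mem = (BYTE *)VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                                     PAGE_READWRITE);
    if (!mem) return 1;

    /* The first touch of each page raises a demand-zero page fault; the
       kernel wires in a zero-filled physical page. No disk I/O happens. */
    for (SIZE_T off = 0; off < size; off += si.dwPageSize)
        mem[off] = 1;

    VirtualFree(mem, 0, MEM_RELEASE);
    return 0;
}
```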
Now to answer your question more specifically: Process Hacker only seems to have page faults when updating its service information - that is, when it calls EnumServicesStatusEx, which RPCs to the SCM (services.exe). My guess is that in the process, a lot of memory is being allocated, leading to demand-zero page faults (the service information requires several pages to store, IIRC).
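For reference, the usual two-call pattern for EnumServicesStatusEx looks like this (a sketch with most error handling trimmed); the buffer allocated in the middle is exactly the kind of multi-page allocation that produces demand-zero faults:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_ENUMERATE_SERVICE);
    if (!scm) return 1;

    DWORD needed = 0, count = 0, resume = 0;

    /* First call fails with ERROR_MORE_DATA and reports the required size. */
    EnumServicesStatusEx(scm, SC_ENUM_PROCESS_INFO, SERVICE_WIN32,
                         SERVICE_STATE_ALL, NULL, 0, &needed, &count,
                         &resume, NULL);

    /* This buffer typically spans many fresh pages; the first touches of
       those pages are demand-zero page faults. */
    BYTE *buf = (BYTE *)HeapAlloc(GetProcessHeap(), 0, needed);
    if (buf && EnumServicesStatusEx(scm, SC_ENUM_PROCESS_INFO, SERVICE_WIN32,
                                    SERVICE_STATE_ALL, buf, needed, &needed,
                                    &count, &resume, NULL))
        printf("%lu services enumerated\n", count);

    if (buf) HeapFree(GetProcessHeap(), 0, buf);
    CloseServiceHandle(scm);
    return 0;
}
```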
Operating systems use paging to group related items so they can be placed in physical memory together and moved between physical memory and disk. Most of the time, the data items that share a single page are related to each other. When the data in a page hasn't been used for a long time, the operating system moves the page out to disk (the page file or swap space) to free up physical memory. When a page that has been swapped out is needed again, the operating system moves it back from disk into physical memory. That is a page fault!
And remember, different operating systems use different paging algorithms.
Basics of Page Faults
A slow but steady source of page faults is the OS probing for infrequently accessed pages. In this case, the operating system marks some pages not present, but leaves them in memory as-is. If an application accesses the page, then the #PF trap occurs and the OS simply marks the page present again without further ado. If a "long time" passes and a page never trips a fault, then the OS knows the page is a good candidate for swapping should the need arise. This mechanism can run proactively even in times of no resource pressure.
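In pseudocode, the probing mechanism looks roughly like this (a purely conceptual sketch; pte_t and both functions are illustrative, not any real kernel's code):

```c
#include <stddef.h>

/* Conceptual sketch only. pte_t stands in for a real page-table entry. */
typedef struct {
    int present;        /* the hardware "valid" bit the CPU checks */
    int recently_used;  /* bookkeeping for the replacement policy */
    /* ... frame number, protection bits, ... */
} pte_t;

/* Periodic pass: mark pages not-present without evicting anything. */
static void aging_pass(pte_t *table, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        table[i].recently_used = 0;
        table[i].present = 0;   /* page stays in RAM; next access traps */
    }
}

/* #PF handler for this case: the data never left RAM, so there is
   nothing to load; just flip the bit back and note the page is hot. */
static void soft_fault_handler(pte_t *pte)
{
    pte->present = 1;
    pte->recently_used = 1;
}

int main(void)
{
    pte_t table[4] = {{1, 0}, {1, 0}, {1, 0}, {1, 0}};
    aging_pass(table, 4);
    soft_fault_handler(&table[2]); /* simulate the program touching page 2 */
    /* Pages 0, 1, and 3 still have recently_used == 0 afterward: they are
       good candidates for eviction if memory pressure appears. */
    return 0;
}
```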
Any time a mmap'd section is read for the first time, a page fault is generated, and that includes whenever you load a DLL. So loading a DLL doesn't actually read the whole DLL into memory; it only causes pages to be faulted in as the code is executed.
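The same laziness is visible with an explicit file mapping (a sketch; "somefile.bin" is a placeholder path). Mapping costs almost nothing; data comes off the disk only when the view is first read:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("somefile.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping) return 1;

    const BYTE *view = (const BYTE *)MapViewOfFile(mapping, FILE_MAP_READ,
                                                   0, 0, 0);
    if (!view) return 1;

    /* Nothing has been read from the file yet. This first access raises a
       page fault, and the kernel pulls in just the page that was touched. */
    printf("first byte: 0x%02x\n", view[0]);

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```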
You'll see soft page faults when memory is being shared between processes. Basically, if you have a memory-mapped file shared between two processes, when the second process loads the memory-mapped file, soft page faults are generated - the memory is already in physical RAM, but the operating system needs to fix up the memory manager's tables so that the virtual memory address in your process points to the correct physical page.
Particularly for something like Process Hacker, which likely injects code into every running process (in order to collect information), heavy use of shared memory for IPC is to be expected.
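A minimal sketch of that kind of sharing (the section name is hypothetical). The second process to map the section takes soft faults, not disk reads, on its first accesses:

```c
#include <windows.h>
#include <string.h>

/* Hypothetical section name, shared by both processes. */
#define SHARED_NAME "Local\\ExampleSharedSection"

int main(void)
{
    /* Process A: create a pagefile-backed named section and map a view. */
    HANDLE mapping = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, 4096, SHARED_NAME);
    if (!mapping) return 1;

    char *view = (char *)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (!view) return 1;

    strcpy(view, "hello from process A"); /* demand-zero fault on first touch */

    /* Process B would call OpenFileMapping(FILE_MAP_READ, FALSE, SHARED_NAME)
       and MapViewOfFile to reach the same physical pages. Its first access
       is a soft fault: the data is already in RAM, so the kernel only has to
       point B's page tables at the existing physical frames. */

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    return 0;
}
```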