Is there a limit on the stack size of a process in Linux? Is it simply dependent on the RAM of the machine?
It largely depends on what architecture you're on (32- or 64-bit) and whether the process is multithreaded or not.
By default, in a single-threaded process, i.e. the main thread created by the OS at exec() time, the stack grows on demand up to the stack resource limit (RLIMIT_STACK, which ulimit -s reports; commonly 8 MB, but it can be raised or set to unlimited). If that limit is unlimited, the stack can keep growing until it hits some other mapping in the address space, so on a 32-bit machine it is possible to end up with, say, 1 GB of stack.
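As a quick sanity check, a process can query its own stack limit with getrlimit(); a minimal sketch:

```c
#include <stdio.h>
#include <sys/resource.h>

static void print_limit(const char *label, rlim_t lim)
{
    if (lim == RLIM_INFINITY)
        printf("%s: unlimited\n", label);
    else
        printf("%s: %llu bytes\n", label, (unsigned long long)lim);
}

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_STACK bounds the main thread's stack (what ulimit -s shows). */
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    print_limit("soft stack limit", rl.rlim_cur);
    print_limit("hard stack limit", rl.rlim_max);
    return 0;
}
```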
However, this is definitely NOT the case in a multithreaded 32-bit process. In a multithreaded process, all the thread stacks have to share the same address space and therefore have to be allocated up front, so each thread typically gets a comparatively small, fixed amount of address space (e.g. 1 MB or a few MB) so that many threads can be created without exhausting the address space.
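For illustration, a thread's stack size is fixed at creation time and can be inspected or overridden through pthread attributes; a minimal sketch (the 256 KiB figure is just an arbitrary example, compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *worker(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    size_t stacksize;
    pthread_t tid;

    /* Report the implementation's default per-thread stack size. */
    pthread_attr_init(&attr);
    pthread_attr_getstacksize(&attr, &stacksize);
    printf("default thread stack size: %zu bytes\n", stacksize);

    /* Request a smaller, fixed stack for the new thread (e.g. 256 KiB). */
    if (pthread_attr_setstacksize(&attr, 256 * 1024) != 0) {
        fprintf(stderr, "requested stack size rejected\n");
        return EXIT_FAILURE;
    }

    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return EXIT_FAILURE;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return EXIT_SUCCESS;
}
```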
So in a multithreaded process each stack is small and fixed; in a single-threaded one, the stack can effectively grow until it hits something else in the address space (and the default mapping layout tries to ensure that doesn't happen too soon).
On a 64-bit machine, of course, there is far more address space to play with.
In any case you can still run out of stack or virtual memory; overflowing the stack typically kills the process with SIGSEGV (occasionally SIGBUS).
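If you want to see this for yourself, a deliberately overflowing recursion is normally killed with SIGSEGV; note that a handler for it has to run on an alternate stack (sigaltstack/SA_ONSTACK), since the normal stack has no room left by then. A minimal sketch, with sizes chosen purely for illustration:

```c
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void on_segv(int sig)
{
    (void)sig;
    /* Only async-signal-safe functions may be used in a handler. */
    static const char msg[] = "caught SIGSEGV - most likely stack overflow\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

static int recurse(int depth)
{
    volatile char pad[4096];              /* consume roughly 4 KiB per call */
    pad[0] = (char)depth;
    return recurse(depth + 1) + pad[0];   /* not a tail call, so no TCO     */
}

int main(void)
{
    static char altstack[64 * 1024];
    stack_t ss = { .ss_sp = altstack, .ss_size = sizeof altstack, .ss_flags = 0 };
    struct sigaction sa = { 0 };

    /* The handler must run on an alternate stack (SA_ONSTACK):
       once the normal stack overflows there is no space left on it. */
    sigaltstack(&ss, NULL);
    sa.sa_handler = on_segv;
    sa.sa_flags = SA_ONSTACK;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    return recurse(0);
}
```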