How does the compiler or OS distinguish between a sig_atomic_t variable and a normal int variable, and ensure that operations on it are atomic? Programs using both produce the same assembler code. How is extra care taken to make the operation atomic?
Although this is an old question, I think it's still worth addressing this part of it specifically. On Linux, sig_atomic_t is provided by glibc, where it is simply a typedef for int and receives no special treatment (as of this writing). The glibc docs address this:
In practice, you can assume that int is atomic. You can also assume that pointer types are atomic; that is very convenient. Both of these assumptions are true on all of the machines that the GNU C Library supports and on all POSIX systems we know of.
In other words, it just so happens that a regular int already satisfies the requirements of sig_atomic_t on every platform glibc supports, so no special support is needed. Nonetheless, the C and POSIX standards mandate sig_atomic_t because there could be some exotic machine, on which someone wants to implement C and POSIX, where int does not fulfill those requirements.
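To make the "why" concrete, here is the classic pattern sig_atomic_t exists for: a signal handler that does nothing but store a flag, which the main loop then polls. This is only a minimal sketch (the handler name and message are mine, not from the question); on glibc/Linux the store to the flag compiles to the same plain int store you observed in the assembler output, which is exactly the point, it is already atomic there.

#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* volatile: the compiler must re-read the flag on every loop iteration.
   sig_atomic_t: a signal arriving mid-access can never observe a
   partially written value. */
static volatile sig_atomic_t got_sigint = 0;

static void handle_sigint(int signum)
{
    (void)signum;
    got_sigint = 1;   /* the handler does one atomic store and nothing else */
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = handle_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);

    while (!got_sigint)
        pause();      /* sleep until a signal is delivered */

    puts("caught SIGINT, exiting cleanly");
    return 0;
}

On a hypothetical machine where int stores were not single instructions, an implementation would have to make sig_atomic_t a narrower or specially aligned type; the program above would still be correct, which is why the standards ask you to use sig_atomic_t rather than int even though the generated code is identical on common platforms.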