Why does getpid() return pid_t instead of int?

Submitted by こ雲淡風輕ζ on 2019-11-28 19:32:08

Question


What's the logic behind calls like getpid() returning a value of type pid_t instead of an unsigned int? Or int? How does this help?

I'm guessing this has to do with portability? Guaranteeing that pid_t is the same size across different platforms that may have different sizes of ints etc.?


Answer 1:


I think it's the opposite: making the program portable across platforms, regardless of whether, e.g., a PID is 16 or 32 bits (or even longer).




Answer 2:


The reason is to allow nasty historical implementations to still be conformant. Suppose your historical implementation had (rather common):

short getpid(void);

Of course modern systems want pids to be at least 32-bit, but if the standard mandated:

int getpid(void);

then all historical implementations that had used short would become non-conformant. This was deemed unacceptable, so pid_t was created and the implementation was allowed to define pid_t whichever way it prefers.

Note that you are by no means obligated to use pid_t in your own code as long as you use a type that's large enough to store any pid (intmax_t for example would work just fine). The only reason pid_t needs to exist is for the standard to define getpid, waitpid, etc. in terms of it.




Answer 3:


On different platforms and operating systems, different types (pid_t for example) might be 32 bits (unsigned int) on a 32-bit machine or 64 bits (unsigned long) on a 64-bit machine. Or, for some other reason, an operating system might choose to have a different size. Additionally, it makes it clear when reading the code that this variable represents an "object", rather than just an arbitrary number.




Answer 4:


The purpose of it is to make pid_t, or any other type of the sort, platform-independent, such that it works properly regardless of how it's actually implemented. This practice is used for any type that needs to be platform-independent, such as:

  • pid_t: Has to be large enough to store a PID on the system you're coding for. Maps to int as far as I'm aware, although I'm not the most familiar with the GNU C library.
  • size_t: An unsigned variable able to store the result of the sizeof operator. Generally equal in size to the word size of the system you're coding for.
  • int16_t (intX_t): Has to be exactly 16 bits, regardless of platform. It won't be defined on platforms that can't support a 16-bit two's complement integer type — typically systems whose word and byte sizes aren't powers of two (such as a 36-bit system) and which don't provide some means of accessing exactly 16 bits of a larger type (e.g., the PDP-10's "bytes", which could be any number of contiguous bits out of a 36-bit word, and thus could be exactly 16 bits). Generally maps to short on modern computers, although it may be an int on older ones.
  • int_least32_t (int_leastX_t): Has to be the smallest type that can store at least 32 bits, such as a 36-bit type on a 36-bit or 72-bit system. Generally maps to int on modern computers, although it may be a long on older ones.
  • int_fastX_t: Has to be the fastest type that can store at least X bits. Generally, it's the system's word size if X <= word_size (or sometimes char for int_fast8_t), and acts like int_leastX_t if X > word_size.
  • intmax_t: Has to be the maximum integer width supported by the system. Generally, it'll be at least 64 bits on modern systems, although some systems may support extended types larger than long long (and if so, intmax_t is required to be the largest of those types).
  • And more...

Mechanically, it allows the compiler's installer to typedef the appropriate type to the identifier (whether a standard type or an awkwardly-named internal type) behind the scenes, whether by creating appropriate header files, coding it into the compiler's executable, or some other method. For example, on a 32-bit system, Microsoft Visual Studio will implement the intX_t and similar types as follows (note: comments added by me):

// Signed ints of exactly X bits.
typedef signed char int8_t;
typedef short int16_t;
typedef int int32_t;

// Unsigned ints of exactly X bits.
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;

// Signed ints of at least X bits.
typedef signed char int_least8_t;
typedef short int_least16_t;
typedef int int_least32_t;

// Unsigned ints of at least X bits.
typedef unsigned char uint_least8_t;
typedef unsigned short uint_least16_t;
typedef unsigned int uint_least32_t;

// Speed-optimised signed ints of at least X bits.
// Note that int_fast16_t and int_fast32_t are both 32 bits, as a 32-bit processor will generally operate on a full word faster than a half-word.
typedef char int_fast8_t;
typedef int int_fast16_t;
typedef int int_fast32_t;

// Speed-optimised unsigned ints of at least X bits.
typedef unsigned char uint_fast8_t;
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;

typedef _Longlong int64_t;
typedef _ULonglong uint64_t;

typedef _Longlong int_least64_t;
typedef _ULonglong uint_least64_t;

typedef _Longlong int_fast64_t;
typedef _ULonglong uint_fast64_t;

On a 64-bit system, however, they may not necessarily be implemented the same way, and I can guarantee that they won't be implemented the same way on an archaic 16-bit system, assuming you can find a version of MSVS compatible with one.

Overall, it allows code to work properly regardless of the specifics of your implementation, and to meet the same requirements on any standards-compatible system (e.g. pid_t can be guaranteed to be large enough to hold any valid PID on the system in question, no matter what system you're coding for). It also prevents you from having to know the nitty-gritty, and from having to look up internal names you may not be familiar with. In short, it makes sure your code works the same regardless of whether pid_t (or any other similar typedef) is implemented as an int, a short, a long, a long long, or even a __Did_you_really_just_dare_me_to_eat_my_left_shoe__, so you don't have to.


Additionally, it serves as a form of documentation, allowing you to tell what a given variable is for at a glance. Consider the following:

int a, b;

....

if (a > b) {
    // Nothing wrong here, right?  They're both ints.
}

Now, let's try that again:

size_t a;
pid_t b;

...

if (a > b) {
    // Why are we comparing sizes to PIDs?  We probably messed up somewhere.
}

If used as such, it can help you locate potentially problematic segments of code before anything breaks, and can make troubleshooting much easier than it would otherwise be.




Answer 5:


Each process has its own process ID. Calling getpid() returns the ID assigned to the current process. Knowing the pid is especially important when we use fork(), because fork() returns 0 in the child copy and a non-zero value (the child's pid) in the parent copy.

An example: Suppose we have the following C program:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>


int main(int argc, char *argv[])
{
    printf("I am %d\n", (int) getpid());
    pid_t pid = fork();
    printf("fork returned: %d\n", (int) pid);
    if (pid < 0) {
        perror("fork failed");
    }
    if (pid == 0) {
        printf("This is a child with pid %d\n", (int) getpid());
    } else if (pid > 0) {
        printf("This is a parent with pid %d\n", (int) getpid());
    }

    return 0;
}

If you run it, fork() returns 0 in the child and a value greater than zero (the child's pid) in the parent.




Answer 6:


One thing to point out: most answers say something along the lines of "using pid_t makes the code work on different systems", which is not necessarily true.

I believe the precise wording should be: it makes the code *compile* on different systems.

For instance, compiling the code on a system that uses a 32-bit pid_t produces a binary that will probably break if run on another system that uses a 64-bit pid_t.



Source: https://stackoverflow.com/questions/7378854/why-does-getpid-return-pid-t-instead-of-int
