Is it safe to disable buffering with stdout and stderr?

Submitted by 独自空忆成欢 on 2019-12-02 23:54:02

Why are all streams line-buffered by default?

They are buffered for performance: the library tries hard to avoid making system calls, because each one is expensive. And not all streams are buffered by default. For instance, stderr is usually unbuffered, and stdout is line-buffered only when it refers to a tty; when redirected to a file or pipe it is fully buffered.

Then is it safe to do this?

It is safe to disable buffering, but I must say it's not the best debugging technique.

A possible approach is to have a global bool dodebug flag and define a macro like e.g.

#ifdef NDEBUG
#define debugprintf(Fmt, ...) do {} while (0)
#else
#define debugprintf(Fmt, ...) do { if (dodebug) {                  \
    printf("%s:%d " Fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__);  \
    fflush(stdout); } } while (0)
#endif

Then inside your code, have some

debugprintf("here i=%d", i);

Of course, in the macro above, you could use fprintf to stderr instead. Notice the fflush and the newline appended to the format.

Disabling buffering should probably be avoided for performance reasons.

Uh, well, you're wrong. For diagnostic output, appearing immediately matters more than performance, and precisely for this reason stderr is not buffered by default.

EDIT: Also, as a general suggestion, try using debugger breakpoints instead of printfs. Makes life much easier.

It is "safe" in one sense, and unsafe in another. It is unsafe to add debug printfs, and for the same reason unsafe to add code to modify the stdio buffering, in the sense that it is a maintenance nightmare. What you are doing is NOT a good debugging technique.

If your program gets a segfault, you should simply examine the core dump to see what happened. If that is not adequate, run the program in a debugger and step through it to follow the action. This sounds difficult, but it's really very simple and is an important skill to have. Here's a sample:

$ gcc -o segfault -g segfault.c   # compile with -g to get debugging symbols
$ ulimit -c unlimited             # allow core dumps to be written
$ ./segfault                      # run the program
Segmentation fault (core dumped)
$ gdb -q segfault /cores/core.3632  # On linux, the core dump will exist in
                                    # whatever directory was current for the
                                    # process at the time it crashed.  Usually
                                    # this is the directory from which you ran
                                    # the program.
Reading symbols for shared libraries .. done
Reading symbols for shared libraries . done
Reading symbols for shared libraries .. done
#0  0x0000000100000f3c in main () at segfault.c:5
5               return *x;          <--- Oh, my, the segfault occurred at line 5
(gdb) print x                       <--- And it's because the program dereferenced
$1 = (int *) 0x0                     ... a NULL pointer.

If your program writes a lot of output, disabling buffering will likely make it somewhere between 10 and 1000 times slower. This is usually undesirable. If your aim is just consistency of output when debugging, try adding explicit fflush calls where you want output flushed rather than turning off buffering globally. And preferably don't write crashing code...
