The commonly used definition of a translation unit is what comes after preprocessing: the source file together with its included headers, with macros expanded, etc.
Compilers are free to translate several source files at the same time, but doing so must not change their semantics.
Translating several files together will likely be somewhat faster (the compiler starts only once) and permits better whole-program optimization: the source code of functions defined in other translation units is then available at the point of call. The compiler can inspect the called code and use that information, much as it can within a single translation unit. From the gcc 6.3.0 manual:
The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them.
Called functions can be inspected for absence of aliasing, actual const-ness of pointed-to objects, and so on, enabling the compiler to perform optimizations that would be wrong in the general case.
And, of course, such functions can be inlined.
But there are semantics of (preprocessing) translation units (which correspond to source files after preprocessing, per your standard quote) which the compiler must respect. @Malcolm mentioned one, file-static variables. My gut feeling is that there may be other, more subtle issues concerning declarations and declaration order.
Another obvious source-scope issue concerns macro definitions. From the N1570 draft, 6.10.3.5:
A macro definition lasts (independent of block structure) until a corresponding #undef directive is encountered or (if none is encountered) until the end of the preprocessing translation unit.
Both issues forbid simple C source file concatenation; the compiler must additionally apply some rudimentary logic.