I am researching high-performance logging in Python and so far have been disappointed by the performance of the Python standard logging module - but there seem to be no alternatives.
Python is not truly multi-threaded in the traditional sense. Whenever a thread is executing Python bytecode it has to hold the GIL (global interpreter lock). "Threads" yield the GIL whenever they call into the operating system or have to wait on I/O, which allows the interpreter to run other Python "threads" in the meantime. In effect this behaves like asynchronous I/O.
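A quick way to see this is to time two I/O-bound threads against the same work run sequentially. A minimal sketch, where `simulated_io` is a made-up stand-in for any blocking call that releases the GIL:

```python
import threading
import time

def simulated_io():
    # time.sleep releases the GIL while waiting, just as a
    # blocking read or write would
    time.sleep(0.5)

# Sequential: roughly 1.0s total
start = time.perf_counter()
simulated_io()
simulated_io()
print(f"sequential: {time.perf_counter() - start:.2f}s")

# Two threads: roughly 0.5s total, because each thread yields
# the GIL while it waits, letting the other one run
start = time.perf_counter()
threads = [threading.Thread(target=simulated_io) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"threaded:   {time.perf_counter() - start:.2f}s")
```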
Regardless of whether the result of a logging call is used or dropped, all of the work to evaluate its arguments is done up front, as mentioned in other responses. What is missed (and where the multi-threaded part of your question comes in) is that while writing a large amount of data to disk may be slow, modern machines have many cores, so the actual write can be handed off to the operating system while the interpreter moves on to another Python "thread". The OS completes the disk write asynchronously, and little to no interpreter time is lost to it.
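To illustrate the eager-evaluation point, a small sketch (the `expensive_repr` function is invented for the example): even when the record is dropped by the level filter, the argument is still computed, and the usual workaround is an `isEnabledFor` guard:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger(__name__)

def expensive_repr():
    # Stand-in for costly work, e.g. serializing a large object
    print("expensive_repr was called")
    return "big payload"

# This message is below WARNING and will be dropped, but
# expensive_repr() is still evaluated to build the call's arguments.
log.debug("payload: %s", expensive_repr())

# Guarding the call skips the argument evaluation entirely.
if log.isEnabledFor(logging.DEBUG):
    log.debug("payload: %s", expensive_repr())
```

Note that the deferred `%s` formatting only saves the string interpolation; it does nothing about the cost of producing the arguments themselves, which is why the explicit guard matters for expensive ones.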
As long as the interpreter always has another thread to switch to, virtually no time will be lost to the writes. The interpreter only actually loses time if all Python "threads" are blocked on I/O at once, which is unlikely unless you are really swamping your disk.
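If you do find logging I/O sitting on your critical path, one common mitigation is to hand records to a background thread so the write happens off the caller's thread. A sketch using the standard library's `QueueHandler`/`QueueListener` (available since Python 3.2); the `app.log` filename is just for the example:

```python
import logging
import logging.handlers
import queue

# Records go into this queue from the hot path; a background
# thread drains it and performs the (slow) file write.
log_queue = queue.Queue(-1)  # unbounded

file_handler = logging.FileHandler("app.log")
file_handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
log.addHandler(logging.handlers.QueueHandler(log_queue))

# The caller only pays for enqueueing the record, not the disk write.
log.info("request handled")

listener.stop()  # flush remaining records and join the background thread
```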