Logback losing my log messages to file


Question


I wrote a test program to verify the performance improvements of logback over log4j, but to my surprise I ran into this strange problem. I am writing some 200k log messages in a loop to a file using the async and file appenders. Every time, it only logs some 140k or so messages and stops after that. It just prints my last log statement indicating that it has written everything in the buffer, and the program terminates. If I run the same program with Log4j, I can see all 200k messages in the log file. Is there any fundamental architectural difference that makes this happen? Is there any way to avoid it? We are thinking of switching from log4j to logback, and now this is making me re-think.

This is my logback configuration:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>logback.log</file>
        <encoder>
            <pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="FILE" />
    </appender>

    <root level="info">
        <appender-ref ref="ASYNC" />
    </root>
</configuration>

This is my code:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.slf4j.LoggerFactory;

    public class LogbackTest {

        public static void main(String[] args) throws InterruptedException {
            org.slf4j.Logger logbackLogger = LoggerFactory
                    .getLogger(LogbackTest.class);

            List<Integer> runs = Arrays.asList(1000, 5000, 50000, 200000);
            ArrayList<Long> logbackRuntimes = new ArrayList<>(4);

            for (int run = 0; run < runs.size(); run++) {
                logbackLogger.info("------------------------>Starting run: "
                        + (run + 1));
                // logback test
                long stTime = System.nanoTime();
                int i = 0;
                for (i = 1; i <= runs.get(run); i++) {
                    Thread.sleep(1);
                    logbackLogger.info(
                            "This is a Logback test log, run: {}, iter: {}",
                            run, i);
                }
                logbackRuntimes.add(System.nanoTime() - stTime);
                logbackLogger.info("logback run - " + (run + 1) + " " + i);
            }
            Thread.sleep(5000);

            // print results (elapsed time converted from nanoseconds to milliseconds)
            logbackLogger.info("Run times:");
            logbackLogger.info("Run\tNoOfMessages\tLogback Time(ms)");
            for (int run = 0; run < runs.size(); run++) {
                logbackLogger.info((run + 1) + "\t" + runs.get(run) + "\t"
                        + logbackRuntimes.get(run) / 1e6d);
            }
        }
    }

Answer 1:


According to the documentation:

[...] by default, when less than 20% of the queue capacity remains, AsyncAppender will drop events of level TRACE, DEBUG and INFO, keeping only events of level WARN and ERROR. This strategy ensures non-blocking handling of logging events (hence excellent performance) at the cost of losing events of level TRACE, DEBUG and INFO when the queue has less than 20% capacity. Event loss can be prevented by setting the discardingThreshold property to 0 (zero).
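For reference, a minimal sketch of how the ASYNC appender from the question could be adjusted so that no events are discarded. The discardingThreshold value of 0 comes from the documentation quoted above; the queueSize value shown is an illustrative assumption, not something the answer prescribes:

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <!-- 0 disables discarding: TRACE/DEBUG/INFO events are kept even when the queue is nearly full -->
        <discardingThreshold>0</discardingThreshold>
        <!-- illustrative assumption: a larger queue than the default of 256 reduces how often the caller blocks -->
        <queueSize>512</queueSize>
        <appender-ref ref="FILE" />
    </appender>

With discardingThreshold set to 0, the appender blocks the logging thread instead of dropping events once the queue fills, so throughput may drop but INFO messages are no longer lost.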



Source: https://stackoverflow.com/questions/19107813/logback-losing-my-log-messages-to-file
