logging

Intercept console.log but keep stack

我是研究僧i · Submitted on 2020-01-13 20:29:07

Question: I know it's easy to intercept a function in JS, among other ways:

    console.log = (function () {
        var log = console.log;
        return function () {
            alert(arguments);
            log.apply(console, arguments);
        };
    })();

But is there a way to wrap console.log such that when a user calls console.log("hi") // in random.js in the console, it shows the random.js origin, and not the location of the intercept?

Answer 1: Use a try/catch rather than returning a function:

    console.log = Function("a", "try { console.info(a); } catch(e)
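A similar caller-location problem exists in Python's logging module, where the stacklevel argument (Python 3.8+) lets a wrapper attribute each record to its caller rather than to the wrapper itself. This is only a loose Python analogue of what the question asks for, not the accepted JS answer:

```python
import io
import logging

# Capture log output in memory so we can inspect which function
# each record is attributed to.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(funcName)s: %(message)s"))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_wrapper(msg):
    # stacklevel=2 (Python 3.8+) skips this wrapper's frame, so the
    # record reports the wrapper's caller as its origin.
    logger.info(msg, stacklevel=2)

def business_code():
    log_wrapper("hi")

business_code()
print(stream.getvalue())  # the record names business_code, not log_wrapper
```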

how to make rails+unicorn logger thread safe?

走远了吗. · Submitted on 2020-01-13 19:12:27

Question: We've been using unicorn to deploy our application. Everything went fine except for the production.log file, which turned out to be unreadable because of the way unicorn was designed: every instance of unicorn wrote to the same file, causing all the lines to be spaghetti'd together. So is there a way to tell the logger to log independently across multiple unicorn instances?

Answer 1: Edit your unicorn.conf.rb and change the after_fork block to something like:

    after_fork do |server, worker|
      filepath = "#

Common.Logging for TraceSource

为君一笑 · Submitted on 2020-01-13 16:26:10

Question: I am trying to adopt Common.Logging in our application, but I am having some trouble setting it up with System.Diagnostics. It works with plain Trace, but not with TraceSource. I was using Common.Logging.Simple.TraceLoggerFactoryAdapter. Do I need a different adapter for TraceSource?

Answer 1: This is pretty late, but maybe it will still help you... According to the Common.Logging source, the TraceLoggerFactoryAdapter does support configuration such that it uses TraceSources. The

Spark output: log-style vs progress-style

大憨熊 · Submitted on 2020-01-13 10:53:27

Question: spark-submit output on two different clusters (both running Spark 1.2) looks different. One is "log-style", i.e., a voluminous stream of messages like:

    15/04/06 14:53:13 INFO TaskSetManager: Starting task 262.0 in stage 4.0 (TID 894, XXXXX, PROCESS_LOCAL, 1785 bytes)
    15/04/06 14:53:13 INFO TaskSetManager: Finished task 255.0 in stage 4.0 (TID 892) in 155 ms on XXXXX (288/300)
    15/04/06 14:53:13 INFO BlockManagerInfo: Added rdd_16_262 in memory on XXXXX:49388 (size: 14.3 MB, free: 1214.5 MB)
    15/04/06

spring boot war log4j2

旧时模样 · Submitted on 2020-01-13 10:28:31

Question: I'm developing an application using Spring Boot 1.3.5 (Spring 4.2.6), with Log4j2 2.4.1 as the logging system. When working in STS (Spring Tool Suite) and running on the embedded Tomcat, logging works fine (both console and file), but when I build a WAR file and deploy it on an external Tomcat 8, the log file is created but my log entries never appear in it. I've looked for similar issues and tested some solutions: setting the logging.config property on Tomcat, configuring 'application

Enterprise Library Rolling Flat File is not rolling

僤鯓⒐⒋嵵緔 · Submitted on 2020-01-13 10:27:29

Question: I'm trying to rotate log files, one per day of the week, and this configuration file is not working. If I change the roll interval from midnight to one minute, it only records a single file covering one minute; no new files are being generated. Are there any known bugs in the latest version of Enterprise Library around rolling flat files not working? Is there any problem with my current configuration? Thank you!

    <loggingConfiguration name="" tracingEnabled="true" defaultCategory="General

logging module for python reports incorrect timezone under cygwin

回眸只為那壹抹淺笑 · Submitted on 2020-01-13 09:44:30

Question: I am running a Python script that uses the logging module under Cygwin on Windows 7. The date command reports the correct time:

    $ date
    Tue, Aug 14, 2012 2:47:49 PM

However, the Python script is five hours off:

    2012-08-14 19:39:06,438: Done!

I don't do anything fancy when I configure logging for the script:

    logging.basicConfig(format='%(asctime)-15s: %(message)s', level=logging.DEBUG)

Can someone tell me what is going on and how I can fix it?

Answer 1: You need to unset the "TZ" environment variable in your Python
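The answer is cut off, but it points at the TZ environment variable: the value Cygwin's shell exports can be misread by the C runtime Python links against, shifting asctime. A minimal sketch of that suggested fix (assuming the answer's approach; time.tzset is only available on Unix-like builds, hence the guard):

```python
import logging
import os
import time

# Drop the TZ value inherited from the Cygwin shell so Python falls
# back to the operating system's local time zone.
os.environ.pop("TZ", None)
if hasattr(time, "tzset"):
    time.tzset()  # re-read time zone settings where supported

logging.basicConfig(format="%(asctime)-15s: %(message)s", level=logging.DEBUG)
logging.debug("Done!")
```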

Logging camel exceptions and sending to the dead letter channel

不羁岁月 · Submitted on 2020-01-13 09:43:30

Question: I have a Camel route, running within Karaf, to which I've added a dead letter channel. This is to handle cases where the route fails and I want to keep the problem message and log the cause. I can't throw the exception back to the calling application, as I'm handling some processing asynchronously. From reading the documentation and trying a number of cases, it's not clear to me how to both log the exception in Karaf's log and deposit the original message onto the dead letter queue. Here's

How to insert variables into a logger formatter?

回眸只為那壹抹淺笑 · Submitted on 2020-01-13 09:25:10

Question: I currently have:

    FORMAT = '%(asctime)s - %(levelname)s - %(message)s'
    logging.basicConfig(format=FORMAT, datefmt='%d/%m/%Y %H:%M:%S',
                        filename=LOGFILE,
                        level=getattr(logging, options.loglevel.upper()))

...which works great. However, I'm trying to do:

    FORMAT = '%(MYVAR)s %(asctime)s - %(levelname)s - %(message)s'

and that just throws KeyErrors, even though MYVAR is defined. Is there a workaround? MYVAR is a constant, so it would be a shame to have to pass it every time I invoke the logger.
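The entry cuts off before the answer, but one standard workaround for custom fields in a format string is logging.LoggerAdapter, which merges a fixed extra dict into every record so %(MYVAR)s resolves without passing it on each call (a sketch; the MYVAR value and logger name are invented):

```python
import io
import logging

MYVAR = "myapp"  # hypothetical constant to surface in every log line

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    "%(MYVAR)s %(asctime)s - %(levelname)s - %(message)s",
    datefmt="%d/%m/%Y %H:%M:%S"))
base = logging.getLogger("app")
base.addHandler(handler)
base.setLevel(logging.DEBUG)

# The adapter injects {"MYVAR": ...} into each record's attributes,
# so the formatter's %(MYVAR)s lookup succeeds on every call.
log = logging.LoggerAdapter(base, {"MYVAR": MYVAR})
log.info("hello")
print(stream.getvalue())  # e.g. "myapp 13/01/2020 09:25:10 - INFO - hello"
```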