logrotate

How do I create a logrotate-friendly file writer in Java or on other platforms?

我们两清 submitted on 2019-12-03 06:11:09
What are the best practices for implementing a file writer/logger in Java that is compatible with logrotate? The goal would be to allow logrotate to be used for all log management, instead of using the built-in rotation/management of a logging API (Log4j, etc.). I'd be interested in hearing comments/answers for other development platforms, aside from Java.

You simply need to periodically close and re-open the log file inside your application. You need a handler that keeps track of the last close time: the handler should close and reopen the file if (for example) 20 seconds have passed since the last close when a log entry arrives.
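A minimal sketch of that reopening handler in Java, using the answer's 20-second example as the threshold; the class name and structure are illustrative assumptions, not code from the original post:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Logrotate-friendly writer: it re-opens the log file in append mode
    // once REOPEN_INTERVAL_MS has passed since the last open, so a rename
    // by logrotate is picked up within that interval.
    public class ReopeningLogWriter {
        private static final long REOPEN_INTERVAL_MS = 20_000; // "for example, 20 seconds"

        private final String path;
        private PrintWriter out;
        private long lastOpened;

        public ReopeningLogWriter(String path) throws IOException {
            this.path = path;
            open();
        }

        private void open() throws IOException {
            // append mode; creates the file anew if logrotate renamed it away
            out = new PrintWriter(new FileWriter(path, true));
            lastOpened = System.currentTimeMillis();
        }

        public synchronized void log(String line) throws IOException {
            if (System.currentTimeMillis() - lastOpened > REOPEN_INTERVAL_MS) {
                out.close(); // release the (possibly renamed) old file
                open();      // re-open under the original path
            }
            out.println(line);
            out.flush();
        }

        public synchronized void close() {
            out.close();
        }
    }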

Forever log and logrotate

醉酒当歌 submitted on 2019-12-03 03:10:30
I use forever to launch my Node.js server, and I choose the log file: forever -l /home/api/log/api_output.log start server.js. I use logrotate to move the log file every day (as advised here: NodeJS/Forever archive logs). After one day my directory looks like this:

    -rw-r--r-- 1 root root     0 avril 18 12:00 api_output.log
    -rw-r--r-- 1 root root 95492 avril 18 12:01 api_output.log-20140418

So rotation is working, but the log messages are now written to api_output.log-20140418 instead of api_output.log. Maybe somebody can help me?

I had forgotten the copytruncate option in my config file; now it's working: /etc
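For reference, a stanza of the kind the fix implies; the path comes from the question, while the other directives are assumptions for illustration:

    /home/api/log/api_output.log {
        daily
        rotate 7
        dateext
        missingok
        notifempty
        # copy the log aside and truncate it in place, so forever keeps
        # writing to the same open file descriptor
        copytruncate
    }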

logrotate cron job not rotating certain logs

做~自己de王妃 submitted on 2019-12-03 02:10:00
I added two scripts in the "logrotate.d" directory for my application logs to be rotated. This is the config for one of them:

    <myLogFilePath> {
        compress
        copytruncate
        delaycompress
        dateext
        missingok
        notifempty
        daily
        rotate 30
    }

There is a "logrotate" script in the "cron.daily" directory (which seems to be running daily, as per the cron logs):

    #!/bin/sh
    echo "logrotate_test" >>/tmp/logrotate_test
    #/usr/sbin/logrotate /etc/logrotate.conf >/dev/null 2>&1
    /usr/sbin/logrotate -v /etc/logrotate.conf &>>/root/logrotate_error
    EXITVALUE=$?
    if [ $EXITVALUE != 0 ]; then
        /usr/bin/logger -t logrotate "ALERT exited
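When a stanza is silently skipped like this, the usual first step is a dry run and then a forced rotation against the same config (standard logrotate flags, not taken from the post):

    # show what logrotate would decide to do, without rotating anything
    /usr/sbin/logrotate -d /etc/logrotate.conf
    # force one rotation, to rule out state-file timing in /var/lib/logrotate*
    /usr/sbin/logrotate -f /etc/logrotate.conf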

Logrotate to clean up date stamped files

爱⌒轻易说出口 submitted on 2019-12-02 22:08:22
I'm currently trying to work out a method of tidying up the Oracle recover log files that are created by cron. Currently, our Oracle standby recover process is invoked by cron every 15 minutes using the following command:

    0,15,30,45 * * * * /data/tier2/scripts/recover_standby.sh SID >> /data/tier2/scripts/logs/recover_standby_SID_`date +\%d\%m\%y`.log 2>&1

This creates files that look like this:

    $ ls -l /data/tier2/scripts/logs/
    total 0
    -rw-r--r-- 1 oracle oinstall 0 Feb 1 23:45 recover_standby_SID_010213.log
    -rw-r--r-- 1 oracle oinstall 0 Feb 2 23:45 recover_standby_SID_020213.log
    -rw-r--r-- 1 oracle
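Because each file already carries its date stamp in the name, a cron-driven find is often simpler than logrotate for this kind of cleanup; a sketch, where the 30-day retention is an assumed value:

    # remove recover logs older than 30 days; run once a day from cron
    find /data/tier2/scripts/logs/ -name 'recover_standby_*.log' -mtime +30 -delete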

How to solve the problem of oversized log files

风流意气都作罢 submitted on 2019-12-01 14:50:37
When running Tomcat there is one log file that is never split: it keeps accumulating until an alert fires or the disk fills up, which leads to files that cannot be opened, degraded performance, and logs that cannot be archived. Here we introduce the logrotate tool to split logs automatically.

1. Verify that the machine has the logrotate tool and install it if not, e.g. on a CentOS system (the tool has been part of Linux for a long time and ships with most distributions, so it usually does not need to be installed, but stripped-down systems may omit it, so check first):

    # yum install -y logrotate

2. Write the config script:

    # vim /etc/logrotate.d/tomcat

Enter the following, where /home/admin/tomcat/logs/catalina.out is the log file to be split (multiple files can be separated by spaces; if a file name contains spaces, quote it, e.g. "/home/admin/access.log" /home/dubbo.log):

    /home/admin/tomcat/logs/catalina.out {
        daily
        rotate 30
        dateext
        dateformat .%Y-%m-%d
        notifempty
        copytruncate
    }

The example section shows triggering the automatic split by file size; besides k, the size unit can also be M or G.

    # vim /etc/logrotate.d/nginx

    /usr/local
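A sketch of the size-triggered variant the text refers to; the 100M threshold and the nginx log path are illustrative assumptions, since the original snippet is cut off:

    /usr/local/nginx/logs/access.log {
        # rotate as soon as the file exceeds 100 MB (units: k, M, G)
        size 100M
        rotate 10
        notifempty
        copytruncate
    }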

Solving the problem of oversized MongoDB log files (cleanup)

别说谁变了你拦得住时间么 submitted on 2019-12-01 14:46:12
With logappend=true set, MongoDB keeps appending to the same log file, so over time it naturally becomes very large. The solution is as follows (note in particular: mongod must have been started with --logpath pointing to a log file):

    cd /home/myleguan/mongo
    sudo mongod -f /etc/mongod.conf

Clean up the log:

    use admin
    db.auth('root','myleguan_root')
    db.adminCommand( { logRotate : 1 } )

Then delete the old log files by hand, or set up a scheduled script to delete them.

Alternatively, connect to the server with the mongo shell:

    use admin                      // switch to the admin database
    db.runCommand({logRotate:1})

This makes mongod close the current log file and start a new one, without stopping the mongodb service.
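To automate this with logrotate instead of manual shell sessions, mongod also rotates its log on SIGUSR1; a sketch, assuming systemLog.logRotate is set to reopen in mongod.conf (so the signal reopens the file rather than renaming it) and assuming the usual packaged paths:

    /var/log/mongodb/mongod.log {
        daily
        rotate 14
        compress
        missingok
        notifempty
        create 0600 mongodb mongodb
        postrotate
            # tell mongod to reopen its log file after the rename
            /bin/kill -USR1 $(cat /var/run/mongodb/mongod.pid 2>/dev/null) || true
        endscript
    }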

logrotate

天涯浪子 submitted on 2019-12-01 08:44:49
Running logrotate manually for testing: logrotate -d (debug mode, a dry run) or -f (force, actually perform the rotation), followed by the standalone config file of the log rotation you want to run.

The difference between the config-file parameters create and copytruncate, in short: create = mv + create, while copytruncate = cp + echo > logfile. In detail:

1) create: this is the default scheme; the file's permissions and ownership can be set through the create directive. The idea is to rename the original log file and create a new one. The detailed steps are:

- Rename the log file that is currently being written. Renaming only changes the directory entry, while the process writes to the file through its inode, so the program keeps writing to the original log without interruption.
- Create a new log file with the same name as the original. Note that at this point only the name is the same; the inode number is different, so the program is still writing to the old log file.
- Finally, notify the program by some means to reopen the log file. Reopening uses the file path rather than the inode number, so the program now opens the new log file.

This is logrotate's default mode of operation: mv + create, then tell the application to write to the new file. mv and create are both cheap and close to atomic; if the application supports reopening its log file, as syslog, nginx, mysql and others do, this is the best approach. However
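A sketch of a create-style stanza for nginx, one of the reopen-capable programs the text names; the path, mode, and pid file location are conventional defaults assumed for illustration:

    /var/log/nginx/access.log {
        daily
        rotate 14
        missingok
        notifempty
        create 0640 nginx adm
        sharedscripts
        postrotate
            # step three above: ask nginx to reopen its logs after the rename
            [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
        endscript
    }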

How to configure logrotate with php logs

泪湿孤枕 submitted on 2019-12-01 00:24:25
Question: I'm running PHP5 FPM with APC as an opcode and application cache. As usual, I am logging PHP errors to a file. Since that file is becoming quite large, I tried to configure logrotate. It works, but after rotation PHP continues to log to the existing log file, even once it has been renamed. This results in scripts.log being a 0 B file, and scripts.log.1 continuing to grow further. I think (I haven't tried it) that running php5-fpm reload in postrotate could resolve this, but that would clear my APC cache
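Since reloading php5-fpm would empty the APC cache, copytruncate is the usual way to rotate without touching the process at all; a sketch, where everything except the file name scripts.log is an assumption:

    /var/log/php/scripts.log {
        weekly
        rotate 8
        compress
        delaycompress
        missingok
        notifempty
        # copy the log aside, then truncate the original in place, so
        # php-fpm keeps its open file descriptor and no reload is needed
        copytruncate
    }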

logrotate log rotation

妖精的绣舞 submitted on 2019-11-30 12:13:48
1 Log management

1.1 Problem

Check whether the rsyslog service is running. Check whether the file /var/log/admin.log exists. Configure the rsyslog service so that an extra copy of all of this host's log messages is saved to /var/log/admin.log.

1.2 Approach

The key directory for logs is /var/log, and the important log file is /var/log/messages. The Linux logging service is rsyslog (syslog in release 5). The service name is rsyslog and its config file is /etc/rsyslog.conf; it runs as a standalone service. The record format in /etc/rsyslog.conf is: facility.priority ... destination.

1.3 Steps

Implementing this case takes the following steps.

Step 1: check whether the rsyslog service is running:

    [root@youyi /]# /etc/init.d/rsyslog status
    rsyslogd (pid 1513) is running...
    [root@youyi /]#

Step 2: check whether /var/log/admin.log exists:

    [root@youyi /]# ls /var/log/admin.log
    ls: cannot access /var/log/admin.log: No such file or directory
    [root@youyi /]#

Step 3: configure the rsyslog service to save an extra copy of all of this host's log messages to /var/log
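The rule for step 3, in the facility.priority format described in 1.2, would look like this in /etc/rsyslog.conf (a sketch of the standard syntax rather than the tutorial's exact line):

    # every facility, every priority: also write a copy to admin.log
    *.*     /var/log/admin.log

After adding the rule, restart the service (e.g. /etc/init.d/rsyslog restart on this SysV-init system) for it to take effect.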

Apache and logrotate configuration

半城伤御伤魂 submitted on 2019-11-30 10:32:35
Question: Last week I found a problem on my server, because disk usage was at 100%, and I found out Apache had created a huge error.log file of 60 GB. I then changed the LogLevel to emerg, but after one week it is again at 1.3 GB, which is definitely too much. Moreover, I have an access.log of 6 MB and an other_vhosts_access.log of 167 MB. So I found out that the problem could be logrotate not working. Indeed, the gzipped versions of the logs have a very old date (23 February). So I tried first to change the
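A quick way to check whether the apache2 stanza itself still works is a dry run followed by a forced rotation in verbose mode (standard logrotate flags; the config path is the Debian/Ubuntu default and an assumption here):

    # show what logrotate would do for the apache2 config, without rotating
    logrotate -d /etc/logrotate.d/apache2
    # then force a real rotation and watch the verbose output for errors
    logrotate -vf /etc/logrotate.d/apache2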