2015-05-31

linux trick: too many logs

Recently I found myself once again in that familiar situation on a Linux server: the partition where logs are stored hit 100%. In such a case, the sensible move is to purge old, useless logfiles. My typical first step is to run logrotate manually with

logrotate -f /etc/logrotate.conf

But this time that was not enough. A developer had forgotten to remove some debugging output, and the logs were accumulating far more data than I could free up with a bit of janitoring.

To avoid losing logs, we can move the logfile to a partition that has space and replace it with a symbolic link. That's good enough until the partition gets resized or the logs get cleaned up. But when this is done on a live logfile, the running process that writes to it still holds a file descriptor pointing at the old, moved file. The process has to be restarted so that it reopens the path, follows the symlink, and gets a new descriptor on the new partition.
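For illustration, the move-and-symlink step looks like this (the paths here are made up for the example):

mv /var/log/app/app.log /mnt/big/app.log
ln -s /mnt/big/app.log /var/log/app/app.log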

Then a colleague pointed out that this could be done without a restart, using gdb. It's a pretty neat trick (provided you have gdb installed on your production server, which may not always be the case, and for good reasons). Anyway, I had it at hand. The idea is to duplicate the log descriptor as a backup, close the original, then open the new file so that it gets assigned the lowest free descriptor number, i.e. the one we just closed. Here is the sequence, assuming the process logs to stdout (fd 1):

touch /path/to/new/logfile   # create the target first: open() below is not passed O_CREAT

gdb -p <pid>

(gdb) call dup(1)            # back up the log fd (1 = stdout here)
$1 = 3
(gdb) call close(1)          # free descriptor 1
$2 = 0
(gdb) call open("/path/to/new/logfile", 2)   # 2 = O_RDWR; reuses the lowest free fd, i.e. 1
$3 = 1
(gdb) call close($1)         # drop the backup
$4 = 0
(gdb)
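Two side notes on that sequence. First, fd 1 is an assumption (a process that logs to stdout); you can check which descriptor actually points at the logfile before attaching:

ls -l /proc/<pid>/fd

Second, the 2 passed to open() is O_RDWR, so writes start at offset 0. To have the process keep appending instead, pass O_RDWR|O_APPEND, which on Linux is 02002 octal (1026 decimal):

(gdb) call open("/path/to/new/logfile", 1026)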

This gave me a taste for digging a little deeper into how gdb can interact with live processes.
