
Saving core dumps from mod_perl

I’ve had some free time lately and have been spending it working on hort.net. Some of the pages still had design elements from 2000 (tables galore!) that I’ve been converting to a more responsive layout. The design is still, well, what it is (I’m not a graphic designer), but the tables and colored pixels stretched to all different sizes are mostly gone. I did uncover other weirdnesses, but more on that later.

The site leverages mod_perl heavily on Rocky 9, and the logs would occasionally say something like

[Sun Sep 15 05:55:53.652955 2024] [core:notice] [pid 1475:tid 1475] AH00051: child pid 8943 exit signal Segmentation fault (11), possible coredump in /tmp

Of course, there weren’t any core dumps to look at, and I never really saw the logs in time to do anything about it anyhow. Today, I decided to change that.

The first thing to do was allow core dumps in systemd, so I created /etc/systemd/system/httpd.service.d/91-core.conf:


[Service]
LimitCORE=infinity
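
If you want to double-check that systemd picked the drop-in up, systemctl can print the unit together with any overrides; the new LimitCORE line should show up at the bottom:

systemctl cat httpd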

The next step was to allow core dumps in Apache itself, so I created /etc/httpd/conf.d/00-core.conf:

CoreDumpDirectory /var/crash/httpd

Finally, I ran

mkdir /var/crash/httpd
chown apache:apache /var/crash/httpd
systemctl daemon-reload
systemctl restart httpd
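
Before trusting it, a quick sanity check is worthwhile. This assumes httpd is already running; note also that if kernel.core_pattern points at a pipe handler such as systemd-coredump or abrt, cores get intercepted there and CoreDumpDirectory never comes into play:

# confirm the unlimited core limit reached the running httpd
systemctl show -p LimitCORE httpd
grep -i "core" /proc/$(pgrep -o httpd)/limits

# see where the kernel will actually write core files
sysctl kernel.core_pattern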

At this point, any core dumps will go into /var/crash/httpd. And they will keep going until the disk is full if they happen often enough. The other problem is that I would never notice they were there until I ran out of disk space, and at that point I’d have to delete them anyhow. A plan was needed.

I arbitrarily decided that I only needed the last twenty core dumps, but I also knew that I needed a notification any time a core dump was generated. That would give me a sense of frequency and a timestamp outside the system to line up against the logs, and the twenty-file cap would give me automated cleanup.

inotify seemed like a great tool for this. It watches for filesystem events in a given directory, so I wrote a script around it to carry out the plan.

First, I installed inotify-tools:


dnf -y install inotify-tools
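
Just to see what the script below will be parsing, a quick manual run of inotifywait shows the output format: the two status lines go to stderr, and each event is printed as the watched directory, the event name, and the new filename (the core file name here is only illustrative):

inotifywait -m -e create /var/crash/httpd/
Setting up watches.
Watches established.
/var/crash/httpd/ CREATE core.8943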

Then I created this script in /usr/local/scripts/watchforcoredumps:

#!/bin/bash

# the directory to monitor
WATCH_DIR="/var/crash/httpd/"
# who to notify (this isn't the real address)
EMAIL="devnull@mallorn.com"

inotifywait -m -e create "$WATCH_DIR" | while read -r path action file; do
    if [[ "$file" == core* ]]; then
        # Send an email notification
        echo "A new core dump was created on $(hostname): $file in $path" | mail -s "Core Dump Notification" "$EMAIL"
        # List files newest-first, keep the twenty most recent, and delete the rest
        find "$WATCH_DIR" -maxdepth 1 -type f -printf '%T@ %p\n' | sort -rn | cut -d' ' -f2- | sed -e '1,20d' | tr '\n' '\0' | xargs -0 rm -f
    fi
done

And then I made the script executable and started it in the background:

chmod +x /usr/local/scripts/watchforcoredumps
nohup /usr/local/scripts/watchforcoredumps > /dev/null 2>&1 &
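
nohup only lasts until the next reboot. If you'd rather let systemd keep the watcher alive, a minimal unit sketch (a hypothetical /etc/systemd/system/watchforcoredumps.service, assuming the script path above) might look something like this:

[Unit]
Description=Watch for httpd core dumps
After=httpd.service

[Service]
ExecStart=/usr/local/scripts/watchforcoredumps
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload followed by systemctl enable --now watchforcoredumps.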

What do you need to change if you're running this script on your own site? Set WATCH_DIR and EMAIL to whatever fits your setup. If you want to keep more than twenty files, change the 20 in '1,20d' to however many you want to keep; for 50, make it '1,50d'.

And finally, one last comment about the deletion line.

You don't know what filename you'll end up with for your core dump, and if the directory holds non-core files (say, a file-upload site), you don't know what characters will be in those filenames either. What if there's a semicolon in a filename? An exclamation point? A space or a quote? Generally speaking you're probably safe from ne'er-do-wells, but the best way to protect a pipeline like this from filename trickery is to stop using newlines as the separator. If the filenames are handed to rm NUL-delimited, a strange name can't be split into extra arguments.

So how do you get rid of newlines?

We take the list of filenames coming out of sed and run it through tr before handing it to xargs:


tr '\n' '\0' | xargs -0 [command]

That turns every linefeed into a NUL character. xargs's -0 option treats NUL as the argument separator (and turns off its usual quote and backslash handling), which prevents surprises from weird characters in filenames.
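
To see the difference, here's a throwaway demo with hypothetical filenames in a scratch directory, using echo in front of rm so nothing actually gets deleted. Plain xargs splits names on whitespace and trips over the apostrophe; the NUL-delimited version keeps each filename intact:

cd "$(mktemp -d)"
touch "core.1" "core.2 with spaces" "core.3 isn't simple"

# newline-delimited: plain xargs chokes on the quote and splits names on spaces
find . -maxdepth 1 -type f | xargs echo rm

# NUL-delimited: each filename survives as a single argument
find . -maxdepth 1 -type f | tr '\n' '\0' | xargs -0 echo rm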

Now I’ve delayed things long enough, and I probably need to start loading up these dumps with gdb. That’s not going to be nearly as fun…