
What effect will tariffs have on home gardeners in the US?

[Photo: a gardener shocked by his bill. How did these tomatoes end up costing so much? Enough to give me telekinesis! (AI-generated)]

Tariffs have been in the news a lot lately. As a home gardener, how much will they impact you?

Right now you probably won’t see much of a difference. Most seeds, cuttings, pots, soil mixes, and fertilizers for the 2025 spring season were already ordered before tariffs were enacted. You might see higher prices on soil mixes after your first frost-free date, so if you’re watching your budget it could make sense to stock up now.

If tariffs continue through the summer, expect price increases on fall offerings like bulbs, chrysanthemums, and cool-season vegetables.

If they’re still around by winter, many 2026 seed orders will be affected, and you’ll likely see noticeable price jumps on spring vegetable starts, ornamentals, and the supplies used to grow them.

Why will our prices go up if tariffs are imposed on other countries?

Well, there’s no guarantee that they will, but it’s very likely. Basically, a tariff is a tax the U.S. charges on goods brought in from other countries, and it’s paid by the importer. To maintain their profit margins, importers and suppliers will often raise their prices to offset the tariffs.

That means a U.S. distributor or grower will now be paying more than they used to for the same materials — at least until a less expensive alternative is found. Since they also need to make a profit, they’ll likely raise their prices as well.

Essentially, that cost increase gets passed along the entire supply chain — all the way to the end: you.

So what exactly comes from other countries?

  • Pots and plastic trays: China and other Asian countries (especially for low-cost trays)
  • Bulbs: primarily the EU, Israel, and New Zealand
  • Seeds (non-open-pollinated heirlooms): the Netherlands, Chile, China, India, Thailand, the EU
  • Young plants and cuttings: Canada, the Netherlands, Central America, Asia
  • Unrooted cuttings (like geraniums and petunias): Central America, Mexico, China, Israel, Brazil, Ethiopia
  • Fertilizer: Canada, Trinidad, Morocco
  • Rockwool: Denmark, the Netherlands, Germany, India, China
  • Sphagnum: Canada
  • Coco coir: Sri Lanka, India, the Philippines, Mexico
  • Florist mums: Colombia, Ecuador

The United States imports more than 50% of its seeds from other countries.  If tariffs persist, we may see higher prices and fewer variety options.

Where do most US sellers source their seed stock?

  • Netherlands: hybrid vegetables, flowers, onions, bulbs
  • Italy: tomatoes, chicories, beans, radicchio
  • India: eggplants, okra, cucurbits, carrots, tomatoes, peppers, melons
  • Thailand: hot peppers, Asian vegetables
  • China: brassicas, cucurbits, beans, Asian vegetables
  • Japan: high-end hybrid tomatoes, radishes, cucumbers, broccoli
  • Chile: hybrid vegetable seeds such as sweet corn, cucurbits, and tomatoes


What’s still grown domestically?

  • Open-pollinated heirlooms from small farms and domestic seed savers
  • Most corn seed
  • Some beans, peas, and squash
  • Many native plant seeds

These may become more prominent in future seasons if import prices continue to rise.

What else should you know?

While food gardening may get more expensive because of seed and supply costs, ornamentals from certain nurseries may not be hit as hard. Some nurseries source their stock from wholesalers who import materials from the countries listed above, but others — especially smaller mail-order nurseries — propagate plants directly from their own gardens. That means they might collect seed, take cuttings, divide plants, etc., so those particular plants wouldn’t be subject to tariffs.

That said, they’ll still be affected by rising prices on pots, fertilizers, and soil mixes.  Over time, costs for things like irrigation and greenhouse supplies, tools, lighting, and vehicle repairs will also indirectly increase the cost of raising plants.

There’s one last caveat: the economy is complicated. The actual impact of tariffs could shift depending on how the U.S. dollar performs against foreign currencies. If the dollar strengthens, tariffs may not increase prices much. But if it weakens, the added cost could be even more significant.

Saving core dumps from mod_perl

I’ve had some free time lately and have been spending it working on hort.net. Some of the pages still had design elements from 2000 (tables galore!) that I’ve been converting to a more responsive layout. The design is still, well, what it is (I’m not a graphic designer), but the tables and colored pixels stretched to all different sizes are mostly gone. I did uncover other weirdnesses, but more on that later.

The site relies heavily on mod_perl on Rocky 9, and the logs would occasionally say something like

[Sun Sep 15 05:55:53.652955 2024] [core:notice] [pid 1475:tid 1475] AH00051: child pid 8943 exit signal Segmentation fault (11), possible coredump in /tmp

Of course, there weren’t any core dumps to look at, and I never really saw the logs in time to do anything about it anyhow. Today, I decided to change that.

The first thing to do was allow core dumps in systemd, so I created /etc/systemd/system/httpd.service.d/91-core.conf:


[Service]
LimitCORE=infinity

The next step was to allow core dumps in Apache. I created /etc/httpd/conf.d/00-core.conf:

CoreDumpDirectory /var/crash/httpd

Finally, I ran

mkdir /var/crash/httpd
chown apache:apache /var/crash/httpd
systemctl daemon-reload
systemctl restart httpd
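
If you want to verify the plumbing before waiting for a real crash, you can segfault an httpd worker yourself (Apache will just spawn a replacement). One caveat: if kernel.core_pattern pipes cores to systemd-coredump, which is a common EL9 default, the dump may land there instead of in CoreDumpDirectory.

# send SIGSEGV to one httpd worker, then look for the core
pgrep -u apache httpd | head -1 | xargs kill -SEGV
ls -l /var/crash/httpd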

At this point, any core dumps will go into /var/crash/httpd. And they will keep going until the disk is full if they happen often enough. The other problem is that I would never notice they were there until I ran out of disk space, and at that point I’d have to delete them anyhow. A plan was needed.

I arbitrarily decided that I only needed the last twenty core dumps, but I also knew that I needed to get a notification any time a core dump was generated. This would give me a sense of frequency, a time stamp outside of the system to use when looking at logs, and an automated cleanup process.

inotify seemed like a great tool for this. It watches for filesystem events in a given directory, so I wrote a script around it to implement the plan above.

First, I installed inotify-tools:


dnf -y install inotify-tools

Then I created this script in /usr/local/scripts/watchforcoredumps:

#!/bin/bash

# the directory to monitor
WATCH_DIR="/var/crash/httpd/"
# who to notify (this isn't the real address)
EMAIL="devnull@mallorn.com"

inotifywait -m -e create "$WATCH_DIR" | while read -r path action file; do
   if [[ "$file" == core* ]]; then
      # Send an email notification
      echo "A new core dump was created on $(hostname): $file in $path" | mail -s "Core Dump Notification" "$EMAIL"
      # List files newest-first, keep the first twenty, delete the rest
      find "$WATCH_DIR" -maxdepth 1 -type f -printf '%T@ %p\n' | sort -rn | cut -d' ' -f2- | sed -e '1,20d' | tr '\n' '\0' | xargs -0 rm -f
   fi
done
done

And then I made the script executable and ran it:


chmod +x /usr/local/scripts/watchforcoredumps
nohup /usr/local/scripts/watchforcoredumps > /dev/null &

What do you need to change if you’re running this script on your own site? WATCH_DIR and EMAIL, at a minimum. If you want to keep more than twenty files, change the 20 in ‘1,20d’ to however many you want to keep; for 50, make it ‘1,50d’.
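
nohup works, but the watcher won’t come back after a reboot. If you’d rather have systemd supervise it, a minimal unit would look something like this (a sketch; the unit name is my own, saved as /etc/systemd/system/watchforcoredumps.service):

[Unit]
Description=Watch /var/crash/httpd for new core dumps

[Service]
ExecStart=/usr/local/scripts/watchforcoredumps
Restart=always

[Install]
WantedBy=multi-user.target

Then run systemctl daemon-reload followed by systemctl enable --now watchforcoredumps.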

And finally, one last comment about the deletion line.

You don’t know what filename you’ll end up with for your core dump, and if you have non-core files (like a file upload site) you don’t know what characters will be in the filename. What if there’s a semicolon in the filename? An exclamation point? Generally speaking you’re probably safe from ne’er-do-wells, but the best way to protect against command injection via filenames is to remove the newlines. If there’s no newline, no command can be run.

So how do you get rid of newlines?

We take the output of our find command that’s piped into sed and run it through tr:


tr '\n' '\0' | xargs -0 [command]

That takes every linefeed and turns it into a NUL character. xargs has a -0 flag that treats NUL as the argument separator instead of whitespace, which keeps weird characters in filenames from being reinterpreted along the way.
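
If you want to be extra careful, GNU find and coreutils can carry NUL delimiters through the whole pipeline, which protects even filenames that contain embedded newlines. Here’s a sketch of the cleanup line rewritten that way (assuming the GNU versions of find, sort, cut, and tail):

find "$WATCH_DIR" -maxdepth 1 -type f -printf '%T@ %p\0' \
   | sort -zrn | cut -z -d' ' -f2- | tail -z -n +21 | xargs -0 rm -f

Every record is NUL-terminated from the start, so there’s no newline conversion step at all; tail -z -n +21 skips the twenty newest files and hands the rest to rm.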

Now I’ve delayed things long enough, and I probably need to start loading up these dumps with gdb. That’s not going to be nearly as fun…

Scheduling a reboot via puppet after a kernel update with RHEL-style distributions

I’ve had people ask me how to schedule a reboot of their system via puppet when a kernel upgrade has been installed. I like to use the ‘at’ command:


class yoursite::common::rebootforkernelchange {

   exec { 'schedule-kernel-reboot':
      command => '/bin/echo "/usr/sbin/reboot" | at -m 4:00',
      path    => '/usr/bin:/bin',
      unless  => [
         # skip if an at job is already queued
         'test `atq | wc -l` != 0',
         # skip if the running kernel is already the newest-built installed kernel
         'test `uname -r` = `rpm -q kernel --queryformat "%{RPMTAG_BUILDTIME} %{VERSION}-%{RELEASE}.%{ARCH}\n" | sort -nr | cut -d" " -f 2 | head -1`',
      ],
   }
}

This compares the current kernel against the latest built kernel that is installed, and if they don’t match it sets an at job to reboot the system at 4am.

If you want to check the latest installed kernel instead of the latest built kernel, change RPMTAG_BUILDTIME to RPMTAG_INSTALLTIME.
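
Either way, you can sanity-check the comparison by hand before wiring it into puppet; these are the same two values the second unless test compares:

uname -r
rpm -q kernel --queryformat '%{RPMTAG_BUILDTIME} %{VERSION}-%{RELEASE}.%{ARCH}\n' | sort -nr | cut -d' ' -f2 | head -1

If the two lines differ, the exec will schedule the reboot.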

Setting up GitLab as a docker container behind a secure reverse proxy on CentOS 7.x

Last weekend I decided to install GitLab for Mallorn Computing. I’d been meaning to do it for ages, but I never really got around to it. Surprisingly, I couldn’t find much about the process online, so I thought I’d document and share it.

I dutifully downloaded the installation package to a fairly empty server, ran it, and immediately freaked out. My server wasn’t just going to run GitLab; it already ran Apache, other Redis databases, MariaDB, and more. GitLab didn’t seem to play nice with other applications and took over all kinds of ports. The install eventually failed when it realized it couldn’t bind to port 80 since Apache was already installed, and I was left with a mess (hint: the command to clean things up is gitlab-ctl cleanse).

I really didn’t want to set up a new dedicated server, so I hoped there was a Docker container I could install instead. Thankfully, a quick Google search showed there was.

The first thing I did was to install the Docker container:

docker run --detach --hostname git.mallorn.com --publish 4443:4443 --publish 2211:2211 --name gitlab --restart always gitlab/gitlab-ee:latest

Then I edited the GitLab config file and tried a bunch of settings for a few hours. Ultimately, I figured out which ones worked. Running docker exec -it gitlab vi /etc/gitlab/gitlab.rb opened a vi session editing the configuration file for GitLab. I had to change the variables below to the values you see.

   
   external_url 'https://git.mallorn.com:4443'
   nginx['ssl_client_certificate'] = "/etc/gitlab/trusted-certs/cert.pem"
   nginx['ssl_certificate'] = "/etc/gitlab/trusted-certs/fullchain.pem"
   nginx['ssl_certificate_key'] = "/etc/gitlab/trusted-certs/privkey.pem"
   gitlab_rails['trusted_proxies'] = ["172.16.1.10"]
   nginx['proxy_set_headers'] = {
     "X-Forwarded-Proto" => "https",
     "X-Forwarded-Ssl" => "on"
   }

Note that trusted_proxies is my internal network; set it to the IP of the interface on your proxy server that will talk to your GitLab installation.
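
One thing worth knowing: changes to gitlab.rb don’t take effect until GitLab reconfigures itself. Restarting the container triggers that automatically, or you can run it by hand from the host:

docker exec -it gitlab gitlab-ctl reconfigure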

Once that was done, I had to tell the container’s ssh daemon to listen on port 2211 (I now realize I could have just changed my startup parameters when I created the container, but too late now). I ran docker exec -it gitlab vi /assets/sshd_config, changed the port to 2211, and saved the file. The last step was to restart the sshd process with docker exec -it gitlab /etc/init.d/sshd restart.

At this point I needed to import my SSL certificates because I had told GitLab to use certs that didn’t exist yet. I used letsencrypt to generate a certificate, then copied it into my Docker container:


   cd /etc/letsencrypt/live/git.mallorn.com
   docker cp -a -L cert.pem gitlab:/etc/gitlab/trusted-certs
   docker cp -a -L privkey.pem gitlab:/etc/gitlab/trusted-certs
   docker cp -a -L fullchain.pem gitlab:/etc/gitlab/trusted-certs

Your directory will be different; change into the one for your server. After this is done, restart your container by running sudo docker restart gitlab.

The last step was to create /etc/httpd/conf.d/git.mallorn.com.conf:


   <VirtualHost *:80 *:443>

      ServerName git.mallorn.com
      RewriteEngine on
      RewriteCond %{HTTPS} !=on
      RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]
           
      SSLEngine on
      SSLHonorCipherOrder on
      SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
      SSLCipherSuite "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA!RC4:EECDH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS"
      SSLCompression off
      SSLSessionTickets off
      SSLCertificateFile /etc/letsencrypt/live/git.mallorn.com/cert.pem
      SSLCertificateKeyFile /etc/letsencrypt/live/git.mallorn.com/privkey.pem
      SSLCertificateChainFile /etc/letsencrypt/live/git.mallorn.com/chain.pem

      <Proxy *>
         Require all granted
      </Proxy>
   
      ProxyPreserveHost on
           
      SSLProxyEngine on
      SSLProxyVerify none
      SSLProxyCheckPeerCN off
      SSLProxyCheckPeerName off
      SSLProxyCheckPeerExpire off
      ProxyRequests Off
      ProxyPass / https://git.mallorn.com:4443/ nocanon
      ProxyPassReverse / https://git.mallorn.com:4443/
      AllowEncodedSlashes NoDecode

      Header edit Location ^http://git.mallorn.com/ https://git.mallorn.com/
      RequestHeader unset Accept-Encoding
      RequestHeader set Host "git.mallorn.com"
      RequestHeader add X-Forwarded-Ssl on
      RequestHeader set X-Forwarded-Proto "https"
           
      LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\""
      ErrorLog /var/log/httpd/git.mallorn.com/error
      TransferLog /var/log/httpd/git.mallorn.com/access
   </VirtualHost>

I created the log directory with mkdir -p /var/log/httpd/git.mallorn.com (Apache won’t start if it’s missing), ran apachectl graceful to reload Apache, and from that point forward git.mallorn.com answered HTTP requests.

Next I had to make ssh work. Create an ssh key:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/gitlab.id_rsa

Then set up ~/.ssh/config to reflect that key:


   host git.mallorn.com
    HostName git.mallorn.com
    Port 2211
    IdentityFile ~/.ssh/gitlab.id_rsa
    User git

Paste the contents of your ~/.ssh/gitlab.id_rsa.pub file into your key settings in GitLab and you will now be able to check files in and out via ssh. (The ssh user is always git; GitLab figures out who you are from your key.)
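
To confirm everything is wired up, run a test connection; GitLab should greet you by username rather than drop you to a shell:

ssh -T git@git.mallorn.com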

Finally, set up a cron job to copy your SSL certs into the container regularly. This cron entry (saved as /etc/cron.d/gitlab) runs each morning at 5:30, copies the certificates in, and restarts the container if any of them changed within the last day:

   30 5 * * * root cd /etc/letsencrypt/live/git.mallorn.com && if [ -n "$(find -L . -mmin -1440 -name '*.pem')" ]; then docker cp -a -L cert.pem gitlab:/etc/gitlab/trusted-certs && docker cp -a -L privkey.pem gitlab:/etc/gitlab/trusted-certs && docker cp -a -L fullchain.pem gitlab:/etc/gitlab/trusted-certs && docker restart gitlab; fi

Note that you could make the certs available directly to the container instead by starting it with the -v option, and that may ultimately be the route that I take.
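
For reference, creating the container with the certificates mounted would look something like this (a sketch; note that letsencrypt’s live/ directory is made of symlinks into archive/, so you mount all of /etc/letsencrypt and point gitlab.rb’s certificate paths at /etc/letsencrypt/live/git.mallorn.com instead):

docker run --detach --hostname git.mallorn.com \
   --publish 4443:4443 --publish 2211:2211 \
   --volume /etc/letsencrypt:/etc/letsencrypt:ro \
   --name gitlab --restart always gitlab/gitlab-ee:latest

You’d still want to restart the container after a renewal so nginx picks up the new certificate.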

Lastly, don’t forget to set up backups!
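
A reasonable starting point is the omnibus backup task (this writes the backup inside the container, under the path set by gitlab_rails['backup_path'], so you’ll want to copy or mount it somewhere safe):

docker exec -t gitlab gitlab-rake gitlab:backup:create

Note that the backup deliberately excludes /etc/gitlab/gitlab.rb and /etc/gitlab/gitlab-secrets.json; save those separately.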