20250617 Emil - Well, Would You Look At The Time

On Solving All Of The Problems At The Same Time

I fixed the critical issue with Ergo the exact way you're supposed to: by being lucky! I made a backup of Ergo and got a copy of Ergo 2.16.0, the latest version; I had previously been running an earlier experimental build. I simply copied over my configuration & database and, magically, the issue was fixed and the database was recovered from the abyss of "who the hell knows" onto a mainline release. Upgrades are technically automated on the server, but not for Ergo, as it isn't in any distribution release and its database format could change at any time, meaning upgrading is more involved than downloading the package and restarting the daemon.
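
Roughly, the manual dance is: stop the daemon, drop the old config and database onto the new build, start it back up. A sketch only - the paths and service name here are assumptions, not what's actually on the box:

systemctl stop ergo
cp -a ircd.yaml ircd.db /opt/ergo-2.16.0/   # assuming the usual ircd.yaml config and ircd.db datastore names
systemctl start ergo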

I had to fix one remaining issue with Ergo: it lacked the proper certs for a new domain it was supposed to serve. Egor on the IRC told me the proper fix faster than I could read the manual, and it was solved as follows. I'm not a fan of YAML, but this wasn't unpleasant beyond not being what I expected.

server:
  name: xolatile.top
  listeners:
    ":6697":
      tls-certificates:
        - cert: .../xolatile.top/fullchain.pem
          key: .../xolatile.top/privkey.pem
        - cert: .../chud.cyou/fullchain.pem
          key: .../chud.cyou/privkey.pem

Limiting filthy mail users was an easy process. I had to do some slightly evil things like:

fallocate -l 64G .../home
mkfs.btrfs .../home
echo '.../home /home btrfs noexec,autodefrag,compress=zstd 0 0' >> /etc/fstab
mkdir .../mig ; mount .../home .../mig
mv /home/* /home/.* .../mig   # mv has no -a flag; plain mv keeps ownership & permissions anyway
# I should probably have made a backup of /home here, but I have a recent snapshot on my VPS from before I started doing all this
umount .../mig ; mount -a

Past that, I wanted to properly limit user accounts' storage. I unfortunately still do mail on a per-user basis instead of a database or centralized system that makes multi-domain hosting easy - which is something I still need to do. Limits are relatively easy with subvolumes, so I went that route. I made a snapshot, deleted the user directories, and ran

for i in $(cat .userlist) ; do btrfs subvolume create $i ; done
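
One catch: qgroup limits only bite once quotas are enabled on the filesystem, so if that hasn't been done already it's a single command (with /home being the mount point from the fstab entry above):

btrfs quota enable /home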

I wrote two files containing the subvolume IDs matching the users that would be in each group - I didn't want mass allocations, as one user could then easily take storage away from another. There was already a limited global capacity that would keep this from being excessive.
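
For reference, the subvolume IDs can be read straight off the standard listing:

btrfs subvolume list /home   # the number after "ID" is what goes into 0/<id>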

for i in $(cat .usermail) ; do btrfs qgroup limit 256M 0/$i . ; done # Mail users. AKA all users
for i in $(cat .userssh) ; do btrfs qgroup limit 2G 0/$i . ; done # Privileged SSH users.
for i in $(cat .userlist) ; do cp snapshot/$i/* snapshot/$i/.* $i/ -rf ; done
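
A quick sanity check that the limits actually landed:

btrfs qgroup show -re .   # -r and -e add the max referenced / max exclusive columns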

This was relatively pleasant to set up. I had also looked into ext4 user quotas, however I decided that, since I was going to be using a loopback filesystem either way, I may as well opt for the one with compression and learn both ways of doing it.
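
For the curious, the ext4 route I skipped would have looked roughly like this - a sketch only, with the same elided image path and a made-up username:

mkfs.ext4 .../home
mount -o usrquota .../home /home
quotacheck -cum /home                          # build the aquota.user index
quotaon /home
setquota -u someuser 262144 262144 0 0 /home   # ~256M soft/hard block limit, no inode limit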

Due to the new users, I had gotten the bug to secure precious data, and checked permissions on /etc/ files, such as those of Postfix and Dovecot, and my SSL directory. This will have consequences. I went on to do what I've already talked about and returned to verify that mail was still working. It broke tragically. This led me down a path of finding several other issues with the system I had in place: a broken Postfix configuration that should've been using the inet connection scheme for OpenDKIM instead of the generally faulty socket scheme. I had also made some mistakes with my overly zealous permission changes and had to debug them quickly.
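
For reference, the inet flavour is a couple of lines in main.cf plus a matching listener in opendkim.conf - the address and port below are the commonly documented defaults, not necessarily what's on my box:

# /etc/postfix/main.cf
smtpd_milters = inet:127.0.0.1:8891
non_smtpd_milters = inet:127.0.0.1:8891
milter_default_action = accept

# /etc/opendkim.conf - the matching listener
Socket inet:8891@127.0.0.1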

I decided to check the Postfix queue while debugging and found many strange mails sitting in it. Some were there temporarily due to the interruption in OpenDKIM; others were root-to-daemon emails, which brought me to set up aliases for many system users. I found a bunch of mail in system users' directories that led me to issues with a couple of older cleanup scripts & some mail about a bug I had already fixed.
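
The queue poking and the alias fix boil down to roughly this - the alias targets are illustrative, not my actual mapping:

postqueue -p   # list what's sitting in the queue
postqueue -f   # try delivering everything again once OpenDKIM behaves

# /etc/aliases - stop daemon mail from piling up in system accounts
daemon:   root
www-data: root
root:     emil
# run newaliases afterwards so Postfix picks up the change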

All in all, this was pretty quick and painless for dealing with mail.

I fixed chud.cyou; I hadn't properly managed the NGINX configuration, having left it out of the deployed directory. I decided to go from a granular description of each subdomain and apex in sites-available to simply just the apex. The way to simplify the description of many subdomains - I'm not even sure if I know the proper way here - is to make an include in the base directory, named after the apex, that contains:

listen 80;
listen [::]:80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM,EDH+AESGCM";
ssl_dhparam /etc/nginx/dhparam;
ssl_session_timeout 1d;
ssl_certificate .../xolatile.top/fullchain.pem;
ssl_certificate_key .../xolatile.top/privkey.pem;
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
location ~ /\.git {
    # see the last bloog for the justification - in short git repos are placed in directories both indirectly and directly
    deny all;
}
error_page 404 /404.html;

This is shared among all apexes; ideally I could generalize it with some variables, however while writing it was simply easier to copy it over and make the small changes needed. Don't do this! Use variables. Additionally, I would include HTTP/3 support here, however I'm using an earlier version of NGINX without support. Ideally, in a few thousand thousand million decades, Debian can get to at least NGINX 1.26.0.
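
For completeness, each apex's actual server block then just pulls the shared file in - the paths and root below are illustrative, not my real layout:

# /etc/nginx/sites-available/xolatile.top
server {
    server_name xolatile.top www.xolatile.top;
    include /etc/nginx/xolatile.top;   # the shared listen/ssl/deny block from above
    root /var/www/xolatile.top;
    index index.html;
}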

I secured my upload page, as it is completely unvetted. This wasn't so bad, but I'm sure there's a better way to do it than the following:

location = / {
    include authorized;
    deny all;
    # FastCGI PHP here...
}
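
An include like that would typically just be a pile of allow directives, with the deny all catching everyone else - the addresses below are the documentation ranges, stand-ins for the real ones:

# authorized
allow 203.0.113.7;     # a trusted IPv4 address
allow 2001:db8::/32;   # a trusted IPv6 range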

Ideally the firewall would be the BSD pf, however I've yet to bake my own kernel with it sprinkled in. iptables confuses and angers me, and pf is comparatively divine. For now, there is no strict fail2ban configuration or firewall - all I've done is make sure that every daemon binds only where it should. However, I'm not sure if this is sane, and I'm sure it is not fool-proof.
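
Checking that is mostly a matter of staring at the listener list - standard iproute2 tooling:

ss -tlnp   # TCP listeners with the owning process
ss -ulnp   # UDP listeners
# anything bound to 0.0.0.0 or [::] that should be loopback-only sticks out here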

The server has yet to explode.

Nothing ever happens.

