debian related

This is mostly a work-in-progress collection of snippets for things I have stumbled across.

ipv6

Using he.net's tunnelbroker.

The tunnel broker gave me the following:

IPv6 Tunnel Endpoints
    Server IPv4 address:    216.66.80.26
    Server IPv6 address:    2001:470:1f08:ed8::1/64
    Client IPv4 address:    91.212.182.147
    Client IPv6 address:    2001:470:1f08:ed8::2/64

Available DNS Resolvers
    Anycasted IPv6 Caching Nameserver:  2001:470:20::2
    Anycasted IPv4 Caching Nameserver:  74.82.42.42

Routed IPv6 Prefixes and rDNS Delegations
    Routed /48:         2001:470:9411::/48
    Routed /64:         2001:470:1f09:ed8::/64
    RDNS Delegation NS1:    ns2.ednevitable.co.uk
    RDNS Delegation NS2:    ns1.ednevitable.co.uk 

In this situation I've created the following in the interface file /etc/network/interfaces:

auto 6in4
iface 6in4 inet6 v4tunnel
        address 2001:470:1f08:ed8::2
        netmask 64
        endpoint 216.66.80.26
        gateway 2001:470:1f08:ed8::1
        ttl 64
        pre-up /bin/sleep 5
        up /sbin/ip link set mtu 1280 dev $IFACE
        up /sbin/ip address add 2001:470:1f08:ed8::3/64 dev $IFACE
        up /sbin/ip address add 2001:470:1f08:ed8::4/64 dev $IFACE
        up /sbin/ip address add 2001:470:9411::1/48 dev $IFACE
        up /sbin/ip address add 2001:470:9411::2/48 dev $IFACE
        up /sbin/ip address add 2001:470:9411::3/48 dev $IFACE
        up /sbin/ip address add 2001:470:9411::4/48 dev $IFACE
        up /sbin/ip address add 2001:470:1f09:ed8::1/64 dev $IFACE
        up /sbin/ip address add 2001:470:1f09:ed8::2/64 dev $IFACE
        up /sbin/ip address add 2001:470:1f09:ed8::3/64 dev $IFACE
        up /sbin/ip address add 2001:470:1f09:ed8::4/64 dev $IFACE

The important things here are the sleep (at boot time the eth0 interface isn't always ready) and the fact that I'm using the routed /64 and /48 blocks rather than the endpoint prefix. I'd been banging my head against the wall over reverse DNS and found that it was the routed blocks that were usable with it.

If you need help with reverse DNS for blocks like this please feel free to give me a shout and I can provide the reverse DNS for your block.
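
As a quick check that reverse DNS is resolving for the routed blocks, dig against addresses you've assigned (addresses from the tunnel details above):

$ dig +short -x 2001:470:9411::1
$ dig +short -x 2001:470:1f09:ed8::1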

wheezy

So, I got a little frustrated with Ubuntu 12.04 being slow and just irritating me. It is very hard to put my finger on what it was exactly that made me decide to drop the axe on it and split the laptop and Ubuntu asunder.

One of the nice things that Ubuntu provided was a working install with random proprietary firmware, such as that for the laptop's Intel Corporation PRO/Wireless 3945ABG [Golan] network card.

So, first things first, the net-install disk didn't work very well, despite my getting the firmware onto my mobile phone and pointing the installer at that USB device. Time was short, so I got the first full ISO and installed from that. This was enough to give me a working system.

It would not have been possible to get the network running without my mobile phone. Thankfully the system was complete enough to recognise the USB tethering, from which I could get the firmware-iwlwifi package and bingo, the wireless worked.

Why was this required? Some vendors don't release free drivers, shamefully. Intel, I'm looking at you.

The default pointer scheme wasn't to my liking either, so I installed the dmz-cursor-theme package and ran:

# apt-get install dmz-cursor-theme 
# update-alternatives --config x-cursor-theme

to set the pointer scheme. It was nice getting a blast from the past though.

systemd

If you're wanting to experiment with systemd on wheezy, you can do so as simply as:

# apt-get install systemd

to pull down and deploy all the systemd packages required. If this all works to plan then you'll want to modify the init process during a grub boot:

# vi /etc/default/grub

and modify GRUB_CMDLINE_LINUX_DEFAULT="quiet" to GRUB_CMDLINE_LINUX_DEFAULT="quiet init=/bin/systemd".

For this to take effect you'll need to run:

# update-grub

This takes care of the preparation. Next you'll need to reboot; do this at a time that suits you best. If you wish to boot into regular SysV init, modify the grub entry at boot time using 'e' and remove the init argument or set it to /sbin/init.
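
After rebooting, a quick way to confirm which init you actually booted into is to look at PID 1:

$ ps -p 1 -o comm=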

To investigate systemd have a play with:

# systemctl
# systemd-journalctl

avoiding systemd

So, if you're thinking of going systemd-less, there's something else you can try:

# apt-get install sysvinit-core && reboot

That's it. Job done. Debian seem to have gone to a lot of trouble to make the systemd transition as compatible with other init systems as possible. In my opinion they've done a good job.

The majority of the systemd "the world's going to split open and we're going to die" posts on slashdot et al seem to be blown out of proportion. Basically trolling.

If you have gnome3, then you're screwed as they're the guys who seem to be pushing the systemd infection. Find something else, IMO. I'm using evilwm... and it works well, especially on small displays. Ratpoison is also a favourite of mine.

Installing XFCE will bring in xfce4-session, which requires libsystemd-daemon0, libsystemd-login0 and libpam-systemd.
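
You can check ahead of time what a package will drag in; for example:

$ apt-cache depends xfce4-session | grep -i systemd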

So, time to find another window manager, not another distro. I think this all stems from gnome3 rather than Debian. Annoyingly the default is now systemd, but I think this is just to keep the desktop install hassle-free; otherwise you'd have to reboot to use gnome, which I think would annoy desktop users more.

nexus4 mounting

Unlike phones which came before it, the Nexus doesn't act as a USB mass storage device. Instead, if you wish to copy data onto it, you'll need to interface with it as an MTP device.

# apt-get install libmtp-dev fuse libmtp9 pkg-config libfuse-dev libglib2.0-dev libmad0-dev libid3tag0-dev

get mtpfs-1.1.tar.gz from http://www.adebenham.com/mtpfs/

$ tar zxvf mtpfs-1.1.tar.gz
$ cd mtpfs-1.1 && make && sudo make install
$ mkdir nexus && mtpfs nexus

As this is a fuse mounted device, you'll need to umount using

$ fusermount -u nexus

keys

W: GPG error: http://ftp.debian.org bookworm InRelease: The following
signatures couldn't be verified because the public key is not available:
NO_PUBKEY 648ACFD622F3D138 NO_PUBKEY 0E98404D386FA1D9
...
W: There is no public key available for the following key IDs:
F8D2585B8783D481

either:

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0E98404D386FA1D9
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 648ACFD622F3D138

or:

apt-get install debian-archive-keyring

journald filling

I dislike systemd's log defaults. Here's a one-liner to change them:

(cat /etc/systemd/journald.conf | grep -v -E '^#+?SystemMaxUse=' \
| sed -e 's/^\[Journal\]/[Journal]\nSystemMaxUse=50M/') \
> /etc/systemd/journald.conf.$$ \
&& mv /etc/systemd/journald.conf.$$ /etc/systemd/journald.conf \
&& systemctl restart systemd-journald.service

To just rotate it now:

journalctl --rotate --vacuum-size=100M
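
To see how much the journal is currently using on disk:

journalctl --disk-usage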

apache SSL auth

See also apache.

Forbidden

You don't have permission to access this resource.
Reason: Cannot perform Post-Handshake Authentication.
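
This error comes from Apache itself: with TLSv1.3, requesting a client certificate per-directory (SSLVerifyClient inside a <Location> or <Directory>) relies on post-handshake authentication, which most browsers don't offer. A minimal workaround, assuming the certificate requirement can't move up to the vhost level, is to drop TLSv1.3 on that vhost:

# in the affected vhost, which uses per-directory SSLVerifyClient
SSLProtocol all -TLSv1.3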

mariadb java connector

Chances are your mysql connector no longer speaks to mysql or mariadb. Try installing libmariadb-java and including it in your CLASSPATH, or downloading the mariadb/mysql connector and referencing that.

sbuild

To use sbuild inside an lxc, I found I needed to do something a bit like this:

please apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y dist-upgrade \
&& apt-get -y install rustc cargo build-essential git debcargo sbuild \
devscripts reprepro debootstrap dh-cargo schroot autopkgtest vim
please useradd -m ed
please usermod -a -G tty ed
please sbuild-adduser ed

# replace ip address with your local apt-cacher-ng
please sbuild-createchroot \
--include=eatmydata,ccache,gnupg,dh-cargo,cargo,lintian,perl-openssl-defaults \
--chroot-prefix debcargo-unstable unstable \
/srv/chroot/debcargo-unstable-amd64-sbuild \
http://192.168.1.100:3142/ftp.us.debian.org/debian

please sed -i -e 's,^union-type=none,union-type=overlay,' \
/etc/schroot/chroot.d/debcargo-unstable-amd64-sbuild-*
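
With the chroot in place, a build as the sbuild user then looks something like this (the .dsc name here is just an example):

$ sbuild -d debcargo-unstable rust-foo_1.0-1.dsc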

what if the proxy is offline?

You can override the proxy setting with an option:

please apt-get -o Acquire::http::proxy=false install ...

Failed to connect: org.bluez.Error.Failed br-connection-profile-unavailable

This may happen because PulseAudio isn't running:

$ systemctl --user enable pulseaudio
$ systemctl --user start pulseaudio

Then reconnect via the bluetooth panel or bluetoothctl connect.

clang

thread 'main' panicked at 'Unable to find libclang: "couldn't find any valid shared libraries matching: ['libclang.so', 'libclang-*.so', 'libclang.so.*',
'libclang-*.so.*'], set the `LIBCLANG_PATH` environment variable to a path where one of these files can be found (invalid: [])"', /home/...
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

clang libraries were missing:

please apt-get install libclang-dev

mariadb

Importing a MySQL dump into MariaDB:

ERROR 1227 (42000) at line : Access denied; you need (at least one of) the SUPER, BINLOG ADMIN privilege(s) for this operation

This can be fixed with a trivial sed:

mysqldump ... | \
sed -e 's/^\(SET @@GLOBAL.GTID_PURGED=\|SET @@SESSION.SQL_LOG_BIN\)/-- \1/g' \
| mysql ...

ERROR 1273 (HY000) at line : Unknown collation: 'utf8mb4_0900_ai_ci'

Another sed!

mysqldump ... | sed -e 's/utf8mb4_0900_ai_ci/utf8mb4_unicode_ci/g' | mysql ...

load local infile

java.sql.SQLException: The used command is not allowed because the MariaDB server or client has
disabled the local infile capability
        at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:130)
        at
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
        at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:916)
        at com.mysql.cj.jdbc.ClientPreparedStatement.execute(ClientPreparedStatement.java:354)

Enable it on the client side, something like pleaseedit /etc/mysql/mariadb.conf.d/50-client-local.cnf:

[client]
loose-local-infile = 1

Or in the connection string:

conn = DriverManager.getConnection(
    "jdbc:mysql://" + host + "/" + database
    + "?user=" + user + "&password=" + pass
    + "&allowLoadLocalInfile=true");

lost connection

mysqldump: Error 2013: Lost connection to server during query when dumping table ... at row: 135

This interesting error sometimes happens on very large BLOB data. Set a larger packet size on the server:

[mysqld]
max_allowed_packet=1G

You'll also need to set the client dump variable if you're using mysqldump:

[mysqldump]
max_allowed_packet=1G
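
You can confirm what the server is actually using with:

mysql -e 'SELECT @@max_allowed_packet;'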

upgrade

/var/log/syslog filling?

2023-08-27T09:55:08.352813+01:00 ns1 mariadbd[7909]: 2023-08-27  9:55:08 155693 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'hist_type' at position 9 to have type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB','JSON_HB'), found type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB').

2023-08-27T09:55:08.352868+01:00 ns1 mariadbd[7909]: 2023-08-27  9:55:08 155693 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'histogram' at position 10 to have type longblob, found type varbinary(255).

Run mysql_upgrade --user=root --password as either the root or mysql user.

mail queue of spam

Debian normally ships with postfix these days. If the queue gets populated with spam, you can either drop the whole queue, or be more selective.

Deleting the whole queue is pretty straightforward:

postsuper -D all

To be more selective, use mailq to get the queue ID:

-Queue ID-  --Size-- ----Arrival Time---- -Sender/Recipient-------
126D85ECDD      655 Fri Jul 24 06:25:05  root
                                         root

and feed it into postcat:

postcat -qv 126D85ECDD
...
regular_text: Subject:
...

The message lines will be prefixed with regular_text. You have to find something common to each spam message in order to feed the matching queue IDs into postsuper -d:

mailq | grep -E '^[A-Z0-9]+\s' | sed -e 's/\s.*//g' \
| while IFS= read ID; do
  postcat -vq "$ID" | grep -q -E '^regular_text: Subject: (nasty|strings)' \
  && postsuper -d "$ID";
done

This will not work for every situation as it is highly dependent on the exact spam that is in the queue.

If this is from web spam, you can try and align the date (Fri Jul 24 06:25:05 in this case) with something in the web logs:

grep 06:25: /var/log/apache2/*log

That might help a little.

logrotate but keep existing file perms

Sometimes you need to keep the existing file perms but might not know what they are ahead of time, such as with a path that includes a glob matching files with multiple owners.

"/path/to/*/file" {
    daily
    rotate 7
    postrotate
        touch "$1"
        chown --reference "$2" "$1"
        chmod --reference "$2" "$1"
    endscript
}

ajp marshal

[proxy_ajp:error] [pid 836016] [client 127.0.0.1:57040] AH02646: ajp_marshal_into_msgb: Error appending attribute AJP_LOCAL_ADDR=127.0.0.1
[proxy_ajp:error] [pid 836016] [client 127.0.0.1:57040] AH00988: ajp_send_header: ajp_marshal_into_msgb failed
[proxy_ajp:error] [pid 836016] (120001)APR does not understand this error code: [client 127.0.0.1:57040] AH00868: request failed to 127.0.0.1:8009 (127.0.0.1:8009)

SEVERE [ajp-nio-8009-exec-10] org.apache.coyote.ajp.AjpMessage.processHeader Invalid message received with signature 514

In short, apache/tomcat are trying to send too much to each other.

In your vhost (not within a Location etc.), set ProxyIOBufferSize:

ProxyIOBufferSize 16384
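
For context, a minimal sketch of the vhost side (backend address as in the logs above):

<VirtualHost *:80>
    ProxyIOBufferSize 16384
    ProxyPass / ajp://127.0.0.1:8009/
</VirtualHost>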

Also set the packetSize in conf/server.xml:

    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
               packetSize="16384"
               socket.appReadBufSize="16384"
               connectionTimeout="20000"
    />

userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]

Add to /etc/ssh/sshd_config:

PubkeyAuthentication yes
PubkeyAcceptedKeyTypes=+ssh-rsa

varnish vcl

There's a neat feature of varnish that helps weed out expensive (backend) requests.

Within a vcl you can hook into the cache 'miss' target:

sub vcl_miss {
    if (req.http.CIDR_RS_CC ~ "^(CC|CODE)$" ) {
        return(synth(491, "Access denied"));
    }
}

Replace CC|CODE with a list of |-separated country codes.

What this does is effectively abort the request if the content isn't already in the cache; you'd use this in a case where a bot in another country is hitting the backend a little too greedily.

Want to check the conf? See 'test a varnish conf before reloading' below.

imaps login

Helpful test for IMAPS authentication with openssl:

$ openssl s_client -quiet -connect imap.server:993 -crlf
* OK ...
a login USERNAME PASSWORD
a OK [CAPABILITY ... ] Logged in
a logout
* BYE Logging out
a OK Logout completed.

apt-cacher-ng

I have too many VMs and containers. Far too many. Each needs to be updated with a regular set of packages and OS maintenance. Retrieving this over the wider internet consumes resources both at the mirror and on the WAN port here.

apt-cacher-ng is a great tool that improves WAN resource use by keeping a local copy of the packages that are retrieved. You could use something like squid to cache instead, but squid sometimes treats requests for the same file as unique, for example when mirrors have different DNS names.

Setting up apt-cacher-ng is easy; run this on the machine that is to be the cache, let's call it cachy:

apt-get install apt-cacher-ng

For each machine that's to use the apt cache, all you need do is run this (replacing cachy with the name or address of the machine where you just ran the apt-get):

echo 'Acquire::http { Proxy "http://cachy:3142"; }' \
| please tee /etc/apt/apt.conf.d/01proxy

On all the desktop, mail and web machines etc., I'd run the above echo line and they'd send their apt requests through the cache.

Not only is this faster, it frees up the WAN device as the package updates go through the LAN. This makes a noticeable difference when some users might already be heavy internet users.
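
To check that requests really are going through the cache, point curl at it as a proxy (deb.debian.org here is just an example mirror) and watch the log:

curl -sx http://cachy:3142 http://deb.debian.org/debian/dists/stable/Release | head
tail -f /var/log/apt-cacher-ng/apt-cacher.log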

missing backend

E: The repository 'http://gb.archive.ubuntu.com/ubuntu jammy Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://gb.archive.ubuntu.com/ubuntu jammy-updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://gb.archive.ubuntu.com/ubuntu jammy-backports Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Check by hand that your backends file has a working mirror; it's normally named /etc/apt-cacher-ng/backends*.
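
The format is just a list of mirror URLs, one per line; for example, /etc/apt-cacher-ng/backends_debian might contain:

http://deb.debian.org/debian/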

rdp

If you need to connect to an RDP server within a remote network that you can SSH to, rather than setting up port forwards you can often use SOCKS and xfreerdp. The beauty of SOCKS is already well known, but not all RDP clients can be wrapped with a socksifying library. The xfreerdp maintainers have done a wonderful job and implemented SOCKS support; the CLI syntax is as follows:

xfreerdp /proxy:socks5://localhost:1085/ /v:10.6.6.6

localhost:1085 corresponds to the -D [port] argument in your ssh command, such as

ssh -D 1085 remotefirewall

Where remotefirewall is the internet-connected remote machine.

10.6.6.6 is the RDP server within the remote network that you reach via 'remotefirewall'.

monit

I was using an old example config when I got this error from a monit reload:

error    : 'apache2' error -- unknown resource ID: [4]

After removing the loadavg entry, the following config parsed OK:

check process apache2 with pidfile /var/run/apache2/apache2.pid
      start program = "/usr/bin/systemctl restart apache2.service"
      stop program = "/usr/bin/systemctl stop apache2.service || killall -9 apache2"
      alert ed@s5h.net
      if cpu is greater than 75% for 5 cycles then restart
      if failed host localhost port 80 protocol http then restart
      mode active
      group server
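
For reference, the loadavg entry I removed looked something like this (reconstructed from memory, so treat it as illustrative):

      if loadavg (5min) greater than 4 for 5 cycles then restart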

apt release info

Repository '... buster InRelease' changed its 'Suite' value from 'stable' to 'oldstable'

This happens because the release has changed name. It is trivial to manage; just run:

apt-get update --allow-releaseinfo-change

done!

kernel package

Dependencies

apt-get install libncurses-dev flex bison devscripts bc rsync libelf-dev libssl-dev gcc make

Building

make bindeb-pkg
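
A typical flow, assuming you're in an unpacked kernel source tree and want to reuse the running kernel's config as a starting point:

cp /boot/config-$(uname -r) .config
make olddefconfig
make -j$(nproc) bindeb-pkg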

pwgen

pwgen is an extremely useful password generator. It's often helpful to generate the password and the crypt at the same time; this way there are no copy/paste errors when putting passwords into 'passwd'-like programs.

With this patch (see the checkout instructions at the end of this section) you can do the following:

$ ./pwgen -1 -e
amaeX8nu $5$KNMblpAQ.maXLkF9$.xgnuJp53NMgD3iyXQKFFPqvYxrtvq9E0BxUbs0MW31

You can change the crypt method by setting the SALTPREFIX environment variable (the default is $5$, which currently has good strength and backward compatibility):

$ SALTPREFIX= ./pwgen -1 -e
Una8jai3 auDtFc2Uo2Wy2

$ SALTPREFIX='$1$' ./pwgen -1 -e
pesahR5i $1$UT8osyYI$QVBABZ84AVbZ4CoNJMMyF.

$ SALTPREFIX='$6$' ./pwgen -1 -e
AiGhoh0k $6$Wodl2O5dOQcz1a7B$U.8a1tzhqDAdwzdREt87qL32QOJ/ruScU3S5wfslKeyWVVithsxai9PHzDypbswcq/w4F9NkWWxw/IstPApnO1

You see the password and the encrypted string, which can be supplied to user admin programs like this:

usermod -p '$5$KNMblpAQ.maXLkF9$.xgnuJp53NMgD3iyXQKFFPqvYxrtvq9E0BxUbs0MW31' username

or

useradd -m -s /bin/bash -p '$5$KNMblpAQ.maXLkF9$.xgnuJp53NMgD3iyXQKFFPqvYxrtvq9E0BxUbs0MW31' username
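
If you want to double-check that a crypt string really matches the generated password, mkpasswd (from the whois package) can reproduce it from the salt; this should print the same string as above:

$ mkpasswd -m sha-256 -S 'KNMblpAQ.maXLkF9' amaeX8nu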

This is also useful with htauth files:

AuthType Basic
AuthName "Keep out!"
AuthUserFile "/etc/apache2/restricted"
Require valid-user

/etc/apache2/restricted would then contain:

username:$5$KNMblpAQ.maXLkF9$.xgnuJp53NMgD3iyXQKFFPqvYxrtvq9E0BxUbs0MW31

That file would be just fine with nginx's auth too:

location / {
    auth_basic "Keep out!";
    auth_basic_user_file /etc/nginx/restricted;
}

To checkout and compile:

apt-get install git automake gcc make
git clone -b crypt https://github.com/edneville/pwgen.git
cd pwgen && autoupdate && autoconf && ./configure && make

squid with certbot

I needed to front a site with squid and Let's Encrypt. Here's a simple way, in /etc/squid/conf.d/local.conf:

http_port 80 accel defaultsite=www.s5h.net no-vhost
https_port 443 accel tls-cert=/etc/letsencrypt/live/www.s5h.net/fullchain.pem \
tls-key=/etc/letsencrypt/live/www.s5h.net/privkey.pem \
defaultsite=www.s5h.net no-vhost

cache_peer s5h.net parent 80 0 no-query originserver name=bf

acl challenge urlpath_regex ^/.well-known/acme-challenge

cache_peer_access bf deny challenge

cache_peer 127.0.0.1 parent 5555 0 no-query originserver name=certbot
cache_peer_access certbot allow challenge
cache_peer_access certbot deny all

acl all src 0.0.0.0/0.0.0.0
http_access allow all

All traffic will go to the cache_peer bf, unless it matches the acme-challenge urlpath, in that case it will go to localhost on port 5555.

The certbot needs to run as follows:

certbot certonly --standalone --preferred-challenges http --http-01-port 5555 --deploy-hook 'systemctl reload squid' -d www.s5h.net

When this runs it will receive the challenge. Until certbot has run, though, we don't have a tls-key or tls-cert, so use the snakeoil certificate until then. The config above won't start up without one, of course, but that's what it should look like in the end.

Snakeoils are normally like this:

tls-cert=/etc/ssl/certs/ssl-cert-snakeoil.pem
tls-key=/etc/ssl/private/ssl-cert-snakeoil.key

Squid oddly needs the tls-cert specified first.

squid in front of multiple web servers

For the sake of example, abc.s5h.net and xyz.s5h.net are to be considered as two isolated web sites.

http_port 80 accel
https_port 443 accel \
tls-cert=/etc/letsencrypt/live/abc.s5h.net/fullchain.pem \
tls-key=/etc/letsencrypt/live/abc.s5h.net/privkey.pem \
tls-cert=/etc/letsencrypt/live/xyz.s5h.net/fullchain.pem \
tls-key=/etc/letsencrypt/live/xyz.s5h.net/privkey.pem

acl challenge urlpath_regex ^/.well-known/acme-challenge

cache_peer 192.168.1.100 parent 443 0 no-query originserver name=server_1 tls
acl sites_server_1 dstdomain abc.s5h.net
cache_peer_access server_1 deny challenge
cache_peer_access server_1 allow sites_server_1

cache_peer 192.168.2.100 parent 443 0 no-query originserver name=server_2 tls
acl sites_server_2 dstdomain xyz.s5h.net
cache_peer_access server_2 deny challenge
cache_peer_access server_2 allow sites_server_2

cache_peer 127.0.0.1 parent 5555 0 no-query originserver name=certbot
cache_peer_access certbot allow challenge
cache_peer_access certbot deny all

acl all src all
http_access allow all

limiting cache_peer (backend) requests

During some high-load events you may wish to prevent a class of requests from causing backend access, while still being happy to serve them content that is already in the cache:

acl bad_ua req_header User-Agent -i .*curl.*
cache_peer_access server_1 deny bad_ua

memory limiting services

systemd, for all its faults, does allow memory limiting of services:

[Unit]
Description=hungry memory service
After=remote-fs.target

[Service]
ExecStart=/home/www/hungry.sh
Restart=on-failure
MemoryMax=1G
MemorySwapMax=128M

[Install]
WantedBy=multi-user.target

Run systemctl daemon-reload and restart your service. With this unit the service will only be able to use 1G of RSS memory and will swap up to 128M.

Previously, MemoryMax was MemoryLimit.
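
To see what the service is actually using (assuming the unit above is installed as hungry.service):

systemctl show -p MemoryCurrent hungry.service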

serial console

If, like me, you prefer text consoles to web browsers or graphical KVMs, then the following may be of interest. Once this is set up you can use a serial device to connect to your system. Remember things like Lantronix?

If you have a web console, run this now, so that you can then connect over serial and do the remaining steps with copy-and-paste:

systemctl start serial-getty@ttyS0.service

Put this in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="splash quiet"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 rootdelay=60"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"

Run update-grub; the next boot will then send the grub menu to your serial port. Then run these two commands to start a serial getty, one at the next boot and one now:

systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service

If you're a proxmox user, you can test this with qm terminal <vmid>, or use something like cu, tip, or minicom to attach to a physical port.

A qemu serial can be started with a TCP socket:

-serial telnet:localhost:3123,server,nowait

Replace 3123 with a listening port of your choice. As this is a network socket you can connect to it with telnet:

telnet localhost 3123

If the serial device is physical (serial port, or USB), then you could use screen too, replacing /dev/ttyS0 with the device:

screen /dev/ttyS0 115200

debugfs

Ever needed to change a ctime?

for i in `seq 1 1000`; do
   touch "$i";
   echo "set_inode_field /home/ed/$i ctime 20240101121201";
done | debugfs -w -f - /dev/mapper/host--vg-root
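
When verifying the change with stat, bear in mind the kernel may still have the old inode cached; as root, dropping caches first helps:

echo 2 > /proc/sys/vm/drop_caches
stat /home/ed/1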

c code from djb

You might hit errors like this:

./load cdbget cdb.a buffer.a unix.a byte.a
/usr/bin/ld: errno: TLS definition in /lib/x86_64-linux-gnu/libc.so.6 section .tbss mismatches non-TLS reference in cdb.a(cdb.o)
/usr/bin/ld: /lib/x86_64-linux-gnu/libc.so.6: error adding symbols: bad value
collect2: error: ld returned 1 exit status
make: *** [Makefile:116: cdbget] Error 1

Just change conf-cc:

gcc -O2 -include /usr/include/errno.h

test a php pool file before reloading

Typically if you don't and there's an error, the pool will crash out on reload. This is worth wiring in as a pre-reload test within the systemd unit (see the sketch at the end of this section).

/usr/sbin/php-fpm7.4 -t

To test the default php conf, or if you wish to test a modified conf file elsewhere, specify that with the -y argument:

/usr/sbin/php-fpm8.2 -t -y /etc/php/8.2/fpm/php-fpm-edit.conf
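
One way of wiring this in as a pre-reload test is a systemd drop-in (systemctl edit php8.2-fpm); a sketch, relying on multiple ExecReload= commands running in order with a failing one aborting the reload:

[Service]
ExecReload=
ExecReload=/usr/sbin/php-fpm8.2 -t
ExecReload=/bin/kill -USR2 $MAINPID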

test a varnish conf before reloading

As with other daemons, if the conf is invalid the daemon will shut down.

/usr/sbin/varnishd -C -f /etc/varnish/default.vcl

If there are faults it will complain with "VCL compilation failed", so let's grep for that:

/usr/sbin/varnishd -C -f /etc/varnish/default.vcl 2>&1 \
| grep -c 'VCL compilation failed' | grep ^0