Adventures in upgrading a Laravel Forge 20.04 server to 22.04 and then 24.04

Disclaimer: Upgrading an Ubuntu LTS server that’s running Forge from one LTS to the next isn’t supported.

It isn’t supported because it’s a bit too complicated, and too much can go wrong. Also, the Forge team doesn’t want to spend the whole day fixing servers (see Laravel Vapor).

But if you’re an experienced sys-admin / programmer and you’re brave enough, you can try it. I tried, and here are my upgrade notes. It was really messy, but so far it’s been a lot of fun, including a disk (GRUB) recovery.

When you upgrade Ubuntu from one version to the next, your existing configuration files are compared against the new versions the upgrade wants to install. The geniuses (Taylor?) at Laravel have one key “thing” in mind when making changes:

  • Stick to the defaults
  • Stick to the defaults
  • Don’t make changes where changes aren’t needed

So just like you get “MVP” for programming, there is a secret society of sys-admins that also does “MVP”, but they call it SWTDS (stick with the defaults, stupid). But what if you can’t? That’s where CHANGE CONTROL and documentation come in. But that’s another topic.

Having said that, upgrading my 4 year old Ubuntu 20.04 server to 22.04 was…a mission. Of course, all Laravel applications stopped working as the upgrade progressed, but this was to be expected.

When Ubuntu upgrades, it asks you one key question “every now and again”. In my own words:

You have a non-default version of a configuration file. Do you want to keep yours, or go with the new one?

Of course, there is no single answer. If you favour stability, keep the old one. If you “want to keep up with the times”, install the package maintainer’s newer version. Will you lose all your settings? Yes you will. BUT, if you have another, newer server to reference, you will be able to recover.
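
Before kicking anything off, you can preview which config files you’ve diverged from the package defaults. A minimal sketch, assuming the debsums package (not installed by default):

apt install debsums
debsums -ce        # lists conffiles whose checksums differ from what the package shipped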

With that out of the way, let’s get the upgrade going (20.04 to 22.04):

You start like this:

do-release-upgrade

Then you pour a strong drink. This will be a long night.

These are the configuration files the system warned me about:

  • /etc/nginx/fastcgi_params
  • /etc/nginx/nginx.conf
  • /etc/nginx/sites-available/default
  • /etc/crontab
  • /etc/sysctl.conf
  • /etc/redis/redis.conf
  • /etc/apt/apt.conf.d/50unattended-upgrades
  • /etc/ssh/sshd_config
  • /etc/snmp/snmpd.conf

All of them are worth discussing, some more than others.
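
Handy to know: whichever way you answer, dpkg keeps the other version next to the live file (.dpkg-old if you took the maintainer’s version, .dpkg-dist if you kept yours), so you can diff at leisure afterwards:

find /etc \( -name '*.dpkg-old' -o -name '*.dpkg-dist' \)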

The main one that will take your applications offline is default. Forge has customised this file, so restore it from a newer Forge server; I copied from one running Ubuntu 24.04. After restoring that, with nginx still running, I tried restoring nginx.conf. No luck so far, but let’s leave that till later.
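
A sketch of that restore (reference-2404 is a stand-in for my newer Forge server; always run the config test before reloading):

scp root@reference-2404:/etc/nginx/sites-available/default /etc/nginx/sites-available/default
nginx -t && systemctl reload nginx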

I didn’t make any further changes. Instead, once the 22.04 upgrade was complete, I ran this again:

do-release-upgrade

Might as well get the long-tail work out of the way.

From 22.04 to 24.04 there was only one configuration file to deal with:

  • /etc/adduser.conf

Then, on reboot, we got this:

SeaBIOS
Machine UUID
Booting from Hard Disk…
error: symbol grub_disk_native_sectors not found.

Entering rescue mode:
grub rescue>

Don’t waste time with grub rescue commands, especially if you can’t find normal. Rather do this:

If you can boot from a live Ubuntu Desktop ISO and open a terminal, you’ll need to chroot into your installed system, fix GRUB, and ensure that all files are properly in place. Here’s a ChatGPT step-by-step guide that worked really well.

1. Mount the root partition:

Assuming /dev/sda1 is your root partition, mount it somewhere (e.g., /mnt):

sudo mount /dev/sda1 /mnt

2. Mount essential filesystems:

Once the main partitions are mounted, bind the essential directories:

sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
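
If you have a separate /boot partition, mount it into the chroot as well before continuing (the umount list in step 5 assumes you did; sdaX is a placeholder for the right partition):

sudo mount /dev/sdaX /mnt/boot   # only if /boot is a separate partition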

3. Enter the chroot environment:

This lets you operate as if you’re running the installed system:

sudo chroot /mnt

4. Check and reinstall GRUB:

First, ensure that GRUB’s configuration is correct:

update-grub

Then reinstall GRUB to the disk (not a partition; this should be the disk the system boots from):

grub-install /dev/sda
update-grub

Replace `/dev/sda` with the correct disk if necessary. The `update-grub` command will ensure that GRUB finds your kernel and initrd files.

5. Exit the chroot and unmount everything, or just reboot FFS (for fuck’s sake).

Once GRUB is reinstalled, exit the chroot and unmount the partitions:

exit
sudo umount /mnt/dev
sudo umount /mnt/proc
sudo umount /mnt/sys
sudo umount /mnt/boot
sudo umount /mnt

6. Reboot:

Remove the ISO and reboot your VM. It should boot into the repaired GRUB menu and allow you to select your installed Ubuntu system.

Next Steps

  • Panel beat PHP back into shape because the DNS server had gone missing.
  • Fix all PHP 7.4 issues by blindly ignoring compatibility and just upgrading to PHP 8.3 (sketched below).
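
For the PHP jump, the shape of it was roughly this (a sketch assuming the ppa:ondrej/php packages that Forge servers typically use, and only the extensions I happened to need):

apt install php8.3-fpm php8.3-cli php8.3-mysql php8.3-mbstring php8.3-xml php8.3-curl php8.3-zip php8.3-gd php8.3-redis
update-alternatives --set php /usr/bin/php8.3   # make PHP 8.3 the CLI default
systemctl enable --now php8.3-fpm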

At this point, the applications were STILL not working. Error:

File not found.
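
A likely culprit (and a cheap thing to check): nginx handing requests to a PHP-FPM socket that no longer exists, because the site configs still point at php7.4-fpm.sock. A sketch of the check and fix, with socket paths assuming Forge’s usual layout:

grep -r fastcgi_pass /etc/nginx/sites-available/    # see which sockets the sites reference
sed -i 's/php7\.4-fpm\.sock/php8.3-fpm.sock/' /etc/nginx/sites-available/*
nginx -t && systemctl restart nginx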

Two hours later, restart Nginx, and the websites load. Well, kind of. This is the next error:

[2025-03-20 22:46:57] production.ERROR: SQLSTATE[HY000] [2002] Connection refused (Connection: mysql, SQL: select * from `sessions` where `id` = 0DGhbZgMlQSJTXEJa52EpeJw0IIdIJgfVmXkCzUE limit 1) {"exception":"[object] (Illuminate\\Database\\QueryException(code: 2002): SQLSTATE[HY000] [2002] Connection refused (Connection: mysql, SQL: select * from `sessions` where `id` = 0DGhbZgMlQSJTXEJa52EpeJw0IIdIJgfVmXkCzUE limit 1) at /home/forge/manage2.vander.host/vendor/laravel/framework/src/Illuminate/Database/Connection.php:825)
[stacktrace]

Oh la-laah. MySQL is broken. Just as ChatGPT warned me:

Other potential reconfigurations include:

Database connections: If the upgrade includes a newer version of MySQL or MariaDB, you may encounter authentication method changes or updated configuration directives that need to be addressed.

/root/.forge/provision-97819105.sh: line 3: mysql: command not found

Oops.

Next,

root@server01:~# mariadb
ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)
root@server01:~# mysql
ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)
root@server01:~# apt install mariadb-client-core

Next, we discover that neither MariaDB nor MySQL is actually installed anymore!! Haha, wow.

This is completely nuts. Let’s hope the data is there 😉
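
First sanity check: is the data still on disk? In my experience, reinstalling the server package leaves an existing data directory alone, so:

ls -la /var/lib/mysql        # the database directories should still be sitting here
apt install mariadb-server   # reinstall; the existing data dir is picked up as-is

During that reinstall, up popped the conffile question again: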

Configuration file '/etc/mysql/mariadb.conf.d/50-server.cnf'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** 50-server.cnf (Y/I/N/O/D/Z) [default=N] ?

YES, let’s go with the package maintainer’s version.

# service mysql status
× mariadb.service - MariaDB 10.11.8 database server
     Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: enabled)
    Drop-In: /etc/systemd/system/mariadb.service.d
             └─migrated-from-my.cnf-settings.conf
     Active: failed

What’s really interesting is that a fresh Ubuntu 24.04 server has version 11.4.2, so I guess upgrading the OS didn’t upgrade MariaDB. What a drag.

What I thought was going to be 3 hours is now going to be at least 6 hours.

Good night for now.

Part 2 – Fixing MariaDB

2025-03-21 4:00:55 0 [ERROR] COLLATION 'utf8mb4_general_ci' is not valid for CHARACTER SET 'utf8mb3'

vi /etc/mysql/my.cnf

Comment out:

#character-set-server = utf8
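
For reference, the other way out (untested by me) is to make the pair agree instead of removing it, e.g. in the same file:

[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci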

Upgrading MariaDB from 10.11.8 to 11.4.2 – failure

Since our fresh and newly installed reference server on Ubuntu 24.04 has MariaDB 11.4.2, let’s do the upgrade.

Added this to a .list file in /etc/apt/sources.list.d, which completely broke the system:

deb https://archive.mariadb.org/mariadb-11.4/repo/ubuntu/ noble main
# deb-src https://archive.mariadb.org/mariadb-11.4/repo/ubuntu/ noble main
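
Worth noting: on a modern apt this repo won’t verify unless MariaDB’s signing key is installed and referenced. Something like the below (key URL as I recall it from MariaDB’s repo instructions; double-check against their docs):

mkdir -p /etc/apt/keyrings
curl -fsSL -o /etc/apt/keyrings/mariadb-keyring.pgp https://mariadb.org/mariadb_release_signing_key.pgp
# then add [signed-by=/etc/apt/keyrings/mariadb-keyring.pgp] to the deb line above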

Then

apt update

Then

apt remove mariadb-server

Then

sudo apt-get install mariadb-server galera-4 mariadb-client libmariadb3 mariadb-backup mariadb-common

Next, MariaDB doesn’t start. Now I’m actually sorry I attempted an upgrade:

Mar 21 04:20:41 server01.example.net mariadbd[246815]: 2025-03-21 4:20:41 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
Mar 21 04:20:41 server01.example.net mariadbd[246815]: 2025-03-21 4:20:41 0 [ERROR] /usr/sbin/mariadbd: unknown variable 'provider_bzip2=force_plus_permanent'
Mar 21 04:20:41 server01.example.net mariadbd[246815]: 2025-03-21 4:20:41 0 [ERROR] /usr/sbin/mariadbd: unknown variable 'provider_lz4=force_plus_permanent'
Mar 21 04:20:41 server01.example.net mariadbd[246815]: 2025-03-21 4:20:41 0 [ERROR] /usr/sbin/mariadbd: unknown variable 'provider_lzma=force_plus_permanent'
Mar 21 04:20:41 server01.example.net mariadbd[246815]: 2025-03-21 4:20:41 0 [ERROR] /usr/sbin/mariadbd: unknown variable 'provider_lzo=force_plus_permanent'
Mar 21 04:20:41 server01.example.net mariadbd[246815]: 2025-03-21 4:20:41 0 [ERROR] /usr/sbin/mariadbd: unknown variable 'provider_snappy=force_plus_permanent'
Mar 21 04:20:41 server01.example.net mariadbd[246815]: 2025-03-21 4:20:41 0 [ERROR] Aborting
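
With hindsight, those unknown variables come from the 10.11 compression-provider plugin configs (shipped in files like provider_bzip2.cnf under /etc/mysql/mariadb.conf.d/, if memory serves). Commenting them out might have let 11.4 start; an untested guess:

grep -rl provider_ /etc/mysql/                             # find the leftover provider configs
sed -i 's/^provider_/#provider_/' /etc/mysql/mariadb.conf.d/provider_*.cnf
systemctl start mariadb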

Rather than chase that, I decided to drop the apt repo files and revert to the old version:

  1. Remove the .list file.
  2. apt update.
  3. Remove all kinds of MySQL/MariaDB packages.
  4. Reinstall the version that comes with the OS.
  5. I got a scary message about /var/lib/mysql being renamed and having to restore all databases.
  6. I simply renamed the new directory back and things were running again (sketched in shell below).
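
Roughly, in shell form (directory names in the last two mv lines are placeholders; the real ones come from apt’s warning):

rm /etc/apt/sources.list.d/mariadb.list             # 1. the repo file added earlier
apt update                                          # 2.
apt purge mariadb-server mariadb-client galera-4    # 3.
apt install mariadb-server                          # 4. back to the version the OS ships
systemctl stop mariadb                              # 5./6. swap the data directories back
mv /var/lib/mysql /var/lib/mysql.fresh              # the newly created, empty data dir
mv /var/lib/mysql-<renamed> /var/lib/mysql          # the original data, per the warning
systemctl start mariadb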

With the MariaDB upgrade abandoned, I moved on to making sure all my applications were working.

Final Notes

I found this weirdo on the server:

# systemctl list-units --type=service | grep mta-sts
postfix-mta-sts-resolver.service loaded active running Provide MTA-STS policy map to Postfix
root@server01:/home/forge/phpmyadmin.fintechsystems.net/public# systemctl disable postfix-mta-sts-resolver.service
Synchronizing state of postfix-mta-sts-resolver.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install disable postfix-mta-sts-resolver
Removed “/etc/systemd/system/multi-user.target.wants/postfix-mta-sts-resolver.service”.
root@server01:/home/forge/phpmyadmin.fintechsystems.net/public# systemctl stop postfix-mta-sts-resolver.service
