Background
Hyper-V server with a fresh Ubuntu install. The disk was set to 25GB at install time, and LVM was chosen.
This procedure works for any Ubuntu file system that sits on an LVM logical volume, and has also been tested on a Proxmox server.
The guide is quite specific to LVM and Ubuntu and running an actual Virtual Machine guest.
If you’re using Ubuntu on VMware and you don’t have LVM, you might be able to skip the pvresize command and substitute growpart for lvextend instead. See here. The actual commands for this scenario are:
sudo growpart /dev/sda 2 (2 = your partition number)
sudo resize2fs /dev/sda2
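As a hedged aside (this detail is not from the linked answer): on Ubuntu, growpart is normally provided by the cloud-guest-utils package, so the non-LVM scenario above would look roughly like this:

sudo apt install cloud-guest-utils    # provides the growpart command on Ubuntu
sudo growpart /dev/sda 2              # grow partition 2 to the end of the disk
sudo resize2fs /dev/sda2              # grow the ext4 filesystem to fill the partition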
But for now, let’s head back to:
- Ubuntu
- A virtual machine guest
- LVM volume
Issue and How to Do it
Only 3.9 GB available on the root volume. At this point, it’s important to point out that the root volume’s name is /dev/mapper/ubuntu--vg-ubuntu--lv
as this will be used in multiple places in this document.
Because LVM was chosen, you’re in luck: LVM volumes can be resized online, so no downtime is required. In fact, even plain EXT4 in a VM is easy to resize online.
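Before starting, it can be worth confirming that the root filesystem really is ext4 sitting on an LVM logical volume. This is a hedged pre-check of my own, not part of the original steps:

df -T /      # the Type column should read ext4 for the resize2fs step below to apply
lsblk -f     # /dev/sda3 should show as LVM2_member with the ubuntu--vg volume on top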
Steps TL;DR
- fdisk -l (note it’s partition 3 by looking at the current Size)
- parted
- resizepart, Fix (if prompted), 3, 100% (type this instead of accepting the default End), quit
- pvresize /dev/sda3
- lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
- resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
- df -h
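For convenience, here is the same sequence in a copy-pasteable form. Treat it as a hedged sketch: the device (/dev/sda), partition number (3) and LV path are assumptions based on a default Ubuntu install, so check them against your own fdisk -l output first.

sudo parted /dev/sda                                 # at the (parted) prompt: resizepart, Fix (if asked), 3, 100%, quit
sudo pvresize /dev/sda3                              # tell LVM the physical volume has grown
sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv     # grow the ext4 filesystem, online
df -h /                                              # confirm the new size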
Transcript of 50 GB to 100 GB Resizing
If you just want to see how a 50 GB to 100 GB Resizing event goes, scroll to the end of the article.
Complete Steps and Explanation
ssh to server, sudo -i to become root
Confirm there is a limited amount of disk space. In the output below, note the / (root) volume only has 3.9 GB disk space:
user@server:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 394M 1.1M 393M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 3.9G 3.2G 489M 87% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/loop0 89M 89M 0 100% /snap/core/7270
/dev/sda2 976M 77M 833M 9% /boot
/dev/loop1 90M 90M 0 100% /snap/core/7713
tmpfs 394M 0 394M 0% /run/user/1000
Next, confirm in Linux that there is actually a lot more space available. As you will see from the output below, the /dev/sda3 partition is 24G. The other 1GB is used by the boot partition and the BIOS boot partition.
user@server:~# fdisk -l
Disk /dev/loop0: 88.5 MiB, 92778496 bytes, 181208 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 89 MiB, 93327360 bytes, 182280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
...
Disk /dev/sda: 25 GiB, 26843545600 bytes, 52428800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ED41F7A6-5D09-457B-A55C-C7F1E30DE419
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 2101247 2097152 1G Linux filesystem
/dev/sda3 2101248 52426751 50325504 24G Linux filesystem
Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
One set of instructions referred to starting this process with parted, so the parted log is below. The instructions, as found on Stack, read something like:
“First you need to run parted and use its resizepart command to expand the partition to use the whole disk, then run pvresize to tell LVM about the new space, then run lvresize to grow the logical volume, and finally resize2fs on the logical volume to grow the filesystem to use the new space. This can be done without a reboot.”
Please note, however, that I diverged slightly from these steps. The first command I learnt / used in parted was just to print the partition information:
user@server:~# parted
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 26.8GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 2097kB 1049kB bios_grub
2 2097kB 1076MB 1074MB ext4
3 1076MB 26.8GB 25.8GB
(parted) quit
This was followed by checking and re-checking the information using fdisk and df -h to make sure I understood the current situation.
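A few extra read-only commands can also help at this stage. This is a hedged aside of mine rather than part of the original transcripts:

lsblk        # tree view: sda3 with ubuntu--vg-ubuntu--lv nested underneath it
sudo pvs     # physical volumes and their sizes
sudo vgs     # volume groups, with free space in the VFree column
sudo lvs     # logical volumes and their current sizes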
Then I finally started the procedure:
user@server:~# parted
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) resizepart
Partition number? 3
End? [26.8GB]?
(parted) quit
Information: You may need to update /etc/fstab.
…
user@server:~# pvresize /dev/sda3
Physical volume "/dev/sda3" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
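As an optional sanity check after pvresize (hedged; not in the original transcript), the LVM reporting commands should now show the extra space:

sudo pvs /dev/sda3     # PSize/PFree should reflect the enlarged partition
sudo vgs ubuntu-vg     # VFree shows the extents now available to lvextend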
At this point I tried using lvresize but couldn’t get it to work. It turns out there are both lvresize and lvextend. As per the microhowto.info article:
“The difference is that lvextend can only increase the size of a volume, whereas lvresize can increase or reduce it.”
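For reference, here is a hedged sketch of what an lvresize invocation could look like; I never got my own lvresize attempt to work, so treat this as untested in my environment:

sudo lvresize -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv     # -r (--resizefs) also grows the filesystem, so no separate resize2fs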
The first attempt, specifying 24GB, did not work. After I completed the procedure I found that you can also specify +100%FREE, but unfortunately I didn’t find that out in time and just ended up specifying 23GB. If I have the time I might try the 100% flag. For reference, the command is lvextend -l +100%FREE /dev/VGNAME/LVNAME.
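If you’re unsure what to substitute for VGNAME/LVNAME, these read-only commands will list them. On a default Ubuntu install they are ubuntu-vg and ubuntu-lv, but treat that as an assumption and check your own system:

sudo vgs          # VG column = VGNAME, VFree = space available for +100%FREE
sudo lvs          # LV column = LVNAME, VG column = its volume group
sudo lvdisplay    # full details, including the LV Path to pass to lvextend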
Below is the first attempt, which failed with insufficient free space:
user@server:~# lvextend --size 24G /dev/mapper/ubuntu--vg-ubuntu--lv
Insufficient free space: 5120 extents needed, but only 5119 available
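In hindsight the failure makes sense once you count extents. LVM allocates in 4 MiB physical extents by default, which the later output confirms (the 4 GiB volume is 1024 extents), so assuming that extent size:

echo $(( 24 * 1024 / 4 ))     # 6144 extents for a 24 GiB LV
echo $(( 4 * 1024 / 4 ))      # 1024 extents already allocated to the 4 GiB LV
echo $(( 6144 - 1024 ))       # 5120 extra extents needed, but the VG only had 5119 free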
Second attempt, reducing the amount of space to allocate:
user@server:~# lvextend --size 23G /dev/mapper/ubuntu--vg-ubuntu--lv
Size of logical volume ubuntu-vg/ubuntu-lv changed from 4.00 GiB (1024 extents) to 23.00 GiB (5888 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
After lvextend, one has to run resize2fs. This was kind of scary because I was about to work on a live, mounted disk, but it went quickly.
user@server:~# resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 6029312 (4k) blocks long.
TADA! The root filesystem is now 23G, as seen in the df -h output below:
user@server:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 394M 1.1M 393M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 23G 3.2G 19G 15% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/loop0 89M 89M 0 100% /snap/core/7270
/dev/sda2 976M 77M 833M 9% /boot
/dev/loop1 90M 90M 0 100% /snap/core/7713
tmpfs 394M 0 394M 0% /run/user/1000
user@server:~#
Caveats
As a side note, I logged into Hyper-V Manager but could not find an easy place to resize the disk. On right-click > Inspect, I could however see that the disk was 25GB, as specified when I installed the VM.
Transcript of 50 GB to 100 GB Resizing
Here is the output of a 50 GB to 100 GB resizing. In this case, the Resize Disk option in Proxmox was chosen.
The console spits out red text, giving you a heads-up that it has detected the host changed the disk size. In the transcript below, everything typed follows the prompts; the rest is command output.
Please note that a sector is not the same as a byte, but by comparing the sector counts you can see that an increase is on the way.
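To see it, multiply the sector counts by the 512-byte sector size; a quick sanity check, not part of the procedure:

echo $(( 104857600 * 512 ))   # 53687091200 bytes = 50 GiB, the old size
echo $(( 209715200 * 512 ))   # 107374182400 bytes = 100 GiB, the new size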
root@server:~# fdisk -l
...
GPT PMBR size mismatch (104857599 != 209715199) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 396A0315-7944-49AC-BBFF-7AD2F7747AC9
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 2101247 2097152 1G Linux filesystem
/dev/sda3 2101248 104857566 102756319 49G Linux filesystem
Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 48.102 GiB, 52609155072 bytes, 102752256 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@server:~# parted
GNU Parted 3.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) resizepart
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an
extra 104857600 blocks) or continue with the current setting?
Fix/Ignore? Fix
Partition number? 3
End? [53.7GB]? 100%
(parted) quit
Information: You may need to update /etc/fstab.
root@server:~# pvresize /dev/sda3
Physical volume "/dev/sda3" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
root@server:~# lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
Size of logical volume ubuntu-vg/ubuntu-lv changed from <49.00 GiB (12543 extents) to <99.00 GiB (25343 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
root@server:~# resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 7, new_desc_blocks = 13
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 25951232 (4k) blocks long.
root@server:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.9G 0 2.9G 0% /dev
tmpfs 595M 1.2M 594M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 98G 32G 62G 34% /
tmpfs 3.0G 0 3.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
/dev/sda2 976M 198M 712M 22% /boot
/dev/loop1 56M 56M 0 100% /snap/core18/1932
/dev/loop2 72M 72M 0 100% /snap/lxd/16099
/dev/loop0 56M 56M 0 100% /snap/core18/1944
/dev/loop3 68M 68M 0 100% /snap/lxd/18150
/dev/loop4 32M 32M 0 100% /snap/snapd/10492
/dev/loop5 32M 32M 0 100% /snap/snapd/10707
tmpfs 595M 0 595M 0% /run/user/1000
References
- https://askubuntu.com/questions/983890/how-to-allow-partitions-to-dynamically-extend
- https://www.linuxtechi.com/extend-lvm-partitions/
- http://www.microhowto.info/howto/increase_the_size_of_an_lvm_logical_volume.html#idp25840
- http://manpages.ubuntu.com/manpages/trusty/man8/lvresize.8.html
- https://serverfault.com/questions/692340/how-can-i-tell-pvresize-to-expand-a-physical-volume-to-include-all-available-spa
- https://www.thegeekdiary.com/centos-rhel-converting-an-existing-root-filesystem-to-lvm-partition/
Interested in More?
If you’re interested in file systems, I learnt a ton by reading this article:
The key take-away when comparing that article with this knowledgebase article is that this one relies on LVM, which is not mentioned at all over there.
More References
File systems can be a complex and wide topic. Here is another article to get your appetite going:
40 thoughts on “How to resize/expand an Ubuntu LVM disk”
This was EXACTLY what I needed. It helped me immensely.
Thanks
This helped me too, thanks for the great write up and steps
@DON You’re welcome!
Super useful. thank you!
Any ideas what I missed when installing the server to end up with only a 3G root?
Seems to catch out lots of others too….
Cheers
Hi @KEITHY,
I don’t believe you did anything wrong. In fact, I believe you made the correct choice as prompted by the installation wizard to use “LVM”. The Ubuntu wizard encourages its use because of snapshotting and ease of resizing, but I’ve noted on a few newer systems that it then goes on to choose a really small partition. In my opinion it’s a poorly designed installation wizard, as it’s not entirely obvious that you’re going to end up with that small partition, nor how to easily make it bigger at installation time.
this worked
wow!
lvextend --size 893.2G /dev/mapper/ubuntu--vg-ubuntu--lv
then
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
Great Work! Saved me time and helped me a lot
Thanks for this. I installed a new 220 GB disk with a new Linux install and did not understand why the filesystem was only half the size. But this fixed it. Great.
Thanks a lot. I was stuck at the /dev/VGNAME/LVNAME chapter ’cause I intended to use your tip about expanding to the 100% free space. The key (after googling a while, of course): lvdisplay.
Not an error by far. Like a charm.
Mad genius
Excellent!!!! You saved my drive!
Thanks for the detailed step by step guide, you’re awesome
solved it, should be 1st on Google, tnxs very much
You sir saved my life. THANK YOU A LOT.
thanks so much !
Thanks, good job
Well done and this article helped me for my needs. Thank you!!
Wow this just saved me a ton of time! Thank you!
Your instruction and sample demo was to the point.
This fixed our volume space issue in minutes
Luckily, I found this article, thank you very much.!!!
Anyone reading this, this actually did work for me. For some reason when I installed Ubuntu 20 on my WD 500 GB drive, it only partitioned half of it. These instructions allowed me to recover the rest of the drive without a rebuild, while gparted would not. Just take your time and read carefully, it will work.
Very helpful! Tnx!
Thank you it helped me
For some reason I did a fresh install of Ubuntu 20 and it used 200 GB out of an 800 GB disk. Thanks to this guide I reverted to using 100% of the disk. THANKS
genius
It Works for me. Really thanks for the easy and clear method with greats explanation.
Great works !!!
i can confirm (did it just now) that “-l +100%FREE” in place of --size does work like a charm
Just brilliant mate, thanks for the write-up.
FYI your CSS isn’t working properly, it’s showing up as:
” fdisk -l“
Hi Ted,
Thanks so much for pointing that out. There is something weird going on, because as you say on the front-end it’s outputting HTML but when I go into the classic editor in WordPress in the back-end it’s 100%. We’ll keep on working on it till it’s solved.
kind regards,
Eugene
Thank you so much for this. I’m a linux idiot and every other page I found assumed I had some basic knowledge to get started.
This resolved my issue of resizing my Zabbix filesystem so now I’ve got tons of room! Awesome!
Thanks again
Awesome guide! I was about to buy a larger SSD but was able to use the available but unused space because of your guide.
Thank you very much!
It worked perfectly in my case, on an ubuntu server 20.04, resizing a volume from 29 GiB to 60 GiB
Thank you
thank you very much, I searched so many websites but didn’t find such a neat tutorial. It fixed my problem in VMware with an Ubuntu VM easily
Thanks, worked beautifully for me! Bookmarked for future reference. 🙂
Worked like a charm. Thanks a lot
On point. It works! Thank you so much
Thank you so much! 100% on point and saved me!
Very good, this solve my problem.
Thank you so much.
Thank you, it is very helpful!
Thank you so much!!! This helps me a lot!
Thanks! Great short guide. Never used parted to do this, but this was much easier than the old fdisk way that I used to do. Thanks!