Error Mounting Docker/Linstor Volume after Pool Resize

Hello,

I am currently trying out the Linstor Docker plugin on a small cluster. Everything worked well, but after I resized my storage pool I can no longer start Docker containers with new volumes. I did do a linstor volume resize after the pool resize and then ran resize2fs from inside the container. Maybe this broke something?

The old volumes, created before the pool resize, cause no problems. Maybe this is a known issue and someone can point me in the right direction.

I get error messages like:

root@brh-tools01:/opt/docker-ha/mc-dev02# docker compose up -d
[+] Running 1/1
 ✘ Container mc-dev02  Error response from daemon: failed to populate volume: error while mounting volume '': VolumeDriver.Mount: resize of device /dev/drbd1020 f...                      2.6s 
Error response from daemon: failed to populate volume: error while mounting volume '': VolumeDriver.Mount: resize of device /dev/drbd1020 failed: exit status 1. resize2fs output: resize2fs 1.45.7 (28-Jan-2021)
Filesystem at /dev/drbd1020 is mounted on /var/lib/docker-volumes/linstor/dckvol_minecraft_data_mc-dev02; on-line resizing required
resize2fs: Permission denied to resize filesystem
old_desc_blocks = 2, new_desc_blocks = 2

Some details:

  • I tried it with new Compose setups and new volumes
  • I tried it with “docker run …”
  • I rebooted the whole cluster
  • Manually running resize2fs works fine, but reports that there is nothing to do.
  • After the unsuccessful attempt, a mounted DRBD device is left stuck and has to be unmounted manually (see the example below).
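
For reference, this is roughly how I clean up the stuck mount each time (the device name is just from my setup, adjust as needed):

mount | grep drbd1020      # find where the device is still mounted
sudo umount /dev/drbd1020  # release it manually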

Happy to get some ideas from you!

Cheers,

Melanie

I think you should show the whole series of steps to reproduce the problem from scratch, including exactly what sort of storage pool you’ve set up (is it LVM?)

In principle, growing a storage pool, growing a volume, and using online resize2fs inside the volume when its size increases are all fine things to do (and I have done them successfully without problems, albeit using Proxmox not Docker). But resize2fs will not reduce the size of a volume while it is mounted.

Now, I see you’re getting an “Error response from [docker] daemon” at startup time. Is docker itself trying to resize the volume again at container start?

You should check the exact sizes of volumes e.g. with blockdev --getsize64 /dev/drbdXXXX
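
For example, something along these lines (an untested sketch; substitute your actual DRBD minor) compares the device size with the size the ext4 superblock claims:

sudo blockdev --getsize64 /dev/drbdXXXX                       # device size in bytes
sudo dumpe2fs -h /dev/drbdXXXX | grep -E 'Block (count|size)' # filesystem size = block count * block size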

Note that the volume you get from Linstor may be slightly bigger than you asked for. For example, suppose you ask for 1GiB = 1024MiB. On LVM that’s 256 extents (4MiB). DRBD needs some space for metadata, so Linstor asks for the next size up, which is 1028MiB (257 extents). DRBD metadata goes at the end, but it doesn’t use the whole 4MiB, so you end up with a volume that’s slightly larger than 1GiB.

Usually that’s not a problem, unless you subsequently try to copy this onto a volume on some other system which is exactly 1GiB.

However, I can imagine the following scenario:

  1. You said you called “resize2fs” yourself inside the container. If you didn’t specify the size, it will grow the filesystem to the full available size, which is slightly more than 1GiB
  2. Docker mounts the volume
  3. Docker itself calls “resize2fs” again at container startup, but explicitly specifying the size it originally requested (i.e. 1GiB)
  4. At this point, resize2fs barfs because Docker is asking it to do an online shrink of the volume (see the sketch after this list)
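
To illustrate step 4 (purely hypothetical numbers, assuming the originally requested size was exactly 1GiB, i.e. 262144 blocks of 4KiB):

sudo resize2fs /dev/drbdXXXX 262144   # ask a mounted, slightly-larger filesystem to go back to exactly 1GiB
# this would be an on-line shrink, which ext4/resize2fs refuses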

The way to fix this would be to run resize2fs while the volume is unmounted, giving the exact size required. resize2fs will then move stuff around to shrink the filesystem. Aside: you need to run e2fsck -f before resize2fs in this case, but resize2fs will refuse to run if you haven’t.
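
Roughly like this, as an unmounted-shrink sketch (not tested against your setup; 262144 again stands in for whatever size you originally requested):

sudo umount /dev/drbdXXXX
sudo e2fsck -f /dev/drbdXXXX          # resize2fs insists on a fresh fsck before an offline shrink
sudo resize2fs /dev/drbdXXXX 262144   # shrink to exactly the originally requested size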

And if this is the problem, the solution is not to call resize2fs yourself inside the container, but to let Docker do it.

Good luck!

Hello Brian,

thank you so much for replying so quickly.

I will take the time next week to look deeper into this.

I will follow up on your suggestion that the problem could be related to slightly different sizes being expected by the various layers involved. And if I am not successful there, I will try to reproduce the problem with a new storage pool.

I am happy to reply to the group with more detailed information in a few days. I was actually hoping this was something simple that someone else had already run into. But maybe I have to dive in deeper! :smiley:

Cheers,

Melanie


Hello Brian,

sorry for my late reply, I had a bad flu and only had time to check now. Creating volumes for new, other Docker Compose setups now works without problems.

For the two old volumes, however, the problem persisted.

I was then finally able to solve the problem for one of them by recreating it as an xfs volume instead of the ext4 I was using before.

This did not work out of the box for the second volume, although I repeated the same steps. After a few attempts, the following sequence of commands succeeded. Note that I deliberately changed the name of the volume when I recreated it:

sudo umount /dev/drbd1020
sudo rmdir /var/lib/docker/plugins/1550f84a82a15e0738b44199d0e1e24e47a6aee44a98e5e8ab698212ebab90c5/propagated-mount/dckvol_minecraft_data_mc-dev02
docker volume rm dckvol_minecraft_data_mc-dev02
docker volume create --opt fs=xfs --opt size=15G -d linbit/linstor-docker-volume dckvol_mc_data_mc-dev02
docker run --rm -d --name deleteme2 --mount volume-driver=linbit/linstor-docker-volume,source=dckvol_mc_data_mc-dev02,destination=/data ubuntu sleep 1h

I have not fully grasped the cause of the problem, but as this is a test cluster I will not dig deeper for now. If others run into the same problem, feel free to contact me; in that case I would be happy to investigate further. The same offer stands if it seems worth turning this into an issue/bug report. I suspect, though, that the cause is my cluster not being set up 100% correctly. If I stumble into the same problem again, I will investigate deeper and be happy to share my findings.

Still, here are some details of the problem for others at this point:

My problem was that the following sequence of commands did not work as expected:

  1. Create a volume with “docker volume create --opt fs=xfs --opt size=15G -d linbit/linstor-docker-volume dckvol_minecraft_data_mc-dev01”

  2. Mount the volume to a container “docker run --rm -d --name deleteme --mount volume-driver=linbit/linstor-docker-volume,source=dckvol_minecraft_data_mc-dev01,destination=/data ubuntu sleep 1h”
    This led to the following error:
    Error response from daemon: failed to populate volume: error while mounting volume ‘’: VolumeDriver.Mount: resize of device /dev/drbd1006 failed: exit status 1. resize2fs output: resize2fs 1.45.7 (28-Jan-2021)
    Filesystem at /dev/drbd1006 is mounted on /var/lib/docker-volumes/linstor/dckvol_minecraft_data_mc-dev01; on-line resizing required
    resize2fs: Permission denied to resize filesystem
    old_desc_blocks = 2, new_desc_blocks = 2

  3. After this the volume was stuck mounted. Irritatingly, mount reported it mounted twice:

mel@brh-tools01:/opt/docker-ha/mc-dev01$ mount | grep 1006
/dev/drbd1006 on /var/lib/docker/plugins/1550f84a82a15e0738b44199d0e1e24e47a6aee44a98e5e8ab698212ebab90c5/propagated-mount/dckvol_minecraft_data_mc-dev01 type ext4 (rw,relatime)
/dev/drbd1006 on /var/lib/docker/plugins/1550f84a82a15e0738b44199d0e1e24e47a6aee44a98e5e8ab698212ebab90c5/propagated-mount/dckvol_minecraft_data_mc-dev01 type ext4 (rw,relatime)

Cheers,

Melanie


Hello Brian, 

please ignore the message before. I was too clumsy with the forum’s editor. :(

Here is my post again:

Actually, just yesterday the problem occurred again and I put quite some effort into looking into it.

I came to the following observation. It aligns well with what you suggested:

For a functioning volume, the sizes of the DRBD device and the filesystem match exactly. For reference, I used the following commands on a volume that worked fine after the resize:

mel@brh-tools02:~$ sudo dumpe2fs /dev/drbd1048 | grep 'Block count\|Block size'
dumpe2fs 1.47.0 (5-Feb-2023)
Block count:              38894
Block size:               4096
mel@brh-tools02:~$ dc
38894
4096*
p
159309824
--
mel@brh-tools02:~$ echo $(( $(cat /sys/block/drbd1048/size) * 512 ))
159309824
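
The same check as a compact two-liner, in case anyone wants to repeat it on their own volumes (just the commands above combined; the device name is from my setup):

sudo dumpe2fs -h /dev/drbd1048 2>/dev/null | awk '/^Block count:/ {c=$3} /^Block size:/ {s=$3} END {print c*s}'   # filesystem size in bytes
echo $(( $(cat /sys/block/drbd1048/size) * 512 ))                                                                 # DRBD device size in bytes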

But for the problematic volume, the sizes do not match. Unfortunately, resize2fs does not manage to align them exactly. Any ideas?

This is the size of the problematic DRBD device:
mel@brh-tools01:/opt/docker-ha-nfs$ echo $(( $(cat /sys/block/drbd1046/size) * 512 ))
53688164352

This should be, in filesystem blocks:
mel@brh-tools01:/opt/docker-ha-nfs$ dc
8k
53688164352
4096/
p
13107462.00000000

But resize2fs finishes with a different size: instead of the requested 13107462 blocks, it only gives me 13107200:
mel@brh-tools01:/opt/docker-ha-nfs$ sudo resize2fs /dev/drbd1046 13107462
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/drbd1046 is mounted on /var/lib/docker/plugins/1550f84a82a15e0738b44199d0e1e24e47a6aee44a98e5e8ab698212ebab90c5/propagated-mount/dckvol_nfs_shares; on-line resizing required
old_desc_blocks = 5, new_desc_blocks = 7
The filesystem on /dev/drbd1046 is now 13107200 (4k) blocks long.

Could this be the reason? Any ideas how to solve it? I would even be happy to know whether there is a way to calculate which volume sizes are expected to work and which will not.

Additional information: I also tried to shrink the filesystem to its minimum size using the "-M" parameter of resize2fs, hoping that Docker would then happily grow it back again. But the error persisted.

The Docker log messages look like this:

Nov 15 13:37:39 brh-tools01 kernel: EXT4-fs (drbd1046): mounted filesystem 20e42172-e67d-4002-806a-5d02e5b3033a r/w with ordered data mode. Quota mode: none.
Nov 15 13:37:39 brh-tools01 systemd[1]: var-lib-docker-overlay2-6454faa4dea035127afa44f158204a89eee56086f1689e55ceed99d5f960f3dd-merged.mount: Deactivated successfully.
Nov 15 13:37:40 brh-tools01 dockerd[2073756]: time="2025-11-15T13:37:40.112033316+01:00" level=error msg="Handler for POST /v1.51/containers/f4d892651151d6dba502dedaea1ac71e69506a4013d5aa419c1596121bc23805/start returned error: error while mounting volume '/var/lib/docker-volumes/linstor/dckvol_nfs_shares/data': VolumeDriver.Mount: resize of device /dev/drbd1046 failed: exit status 1. resize2fs output: resize2fs 1.45.7 (28-Jan-2021)\nFilesystem at /dev/drbd1046 is mounted on /var/lib/docker-volumes/linstor/dckvol_nfs_shares; on-line resizing required\nresize2fs: Permission denied to resize filesystem\nold_desc_blocks = 5, new_desc_blocks = 7\n" spanID=45502b40a27ea882 traceID=20a62bc4a45925b28de006dd3790dd71

And yes, these are LVM storage pools.

I am grateful for any ideas on how to proceed from here.

Cheers,

Melanie