Error Mounting Docker/Linstor Volume after Pool Resize

Hello,

I am currently trying out the Linstor Docker plugin on a small cluster. Everything worked well, but after I resized my storage pool I can no longer start Docker containers with new volumes. I did run linstor volume resize after the pool resize, and then resize2fs from inside the container. Maybe that broke something?

The old volumes (created before the pool resize) cause no problems. Maybe this is a known problem and someone can point me in the right direction.

I get error messages like:

root@brh-tools01:/opt/docker-ha/mc-dev02# docker compose up -d
[+] Running 1/1
 ✘ Container mc-dev02  Error response from daemon: failed to populate volume: error while mounting volume '': VolumeDriver.Mount: resize of device /dev/drbd1020 f...                      2.6s 
Error response from daemon: failed to populate volume: error while mounting volume '': VolumeDriver.Mount: resize of device /dev/drbd1020 failed: exit status 1. resize2fs output: resize2fs 1.45.7 (28-Jan-2021)
Filesystem at /dev/drbd1020 is mounted on /var/lib/docker-volumes/linstor/dckvol_minecraft_data_mc-dev02; on-line resizing required
resize2fs: Permission denied to resize filesystem
old_desc_blocks = 2, new_desc_blocks = 2

Some details:

  • I tried it with new Compose setups and new volumes
  • I tried it with “docker run …”
  • I rebooted the complete cluster
  • Running resize2fs manually works fine, but it reports that there is nothing to do.
  • After the unsuccessful attempt, the DRBD device is left mounted and has to be unmounted manually.

Happy to get some ideas from you!

Cheers,

Melanie

I think you should show the whole series of steps to reproduce the problem from scratch, including exactly what sort of storage pool you’ve set up (is it LVM?)

In principle, growing a storage pool, growing a volume, and using online resize2fs inside the volume when its size increases are all fine things to do (and I have done them successfully without problems, albeit using Proxmox not Docker). But resize2fs will not reduce the size of a volume while it is mounted.

Now, I see you’re getting an “Error response from [docker] daemon” at startup time. Is docker itself trying to resize the volume again at container start?

You should check the exact sizes of the volumes, e.g. with blockdev --getsize64 /dev/drbdXXXX
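For an ext4 volume you can also compare the device size with the size the filesystem itself believes it has. A sketch (run on the node where the DRBD device is primary; drbd1020 is taken from the error message above):

```shell
# Device size in bytes, as seen by the block layer:
dev=/dev/drbd1020
blockdev --getsize64 "$dev"

# Filesystem size in bytes, as recorded in the ext4 superblock:
dumpe2fs -h "$dev" 2>/dev/null \
  | awk '/^Block count:/ {bc=$3} /^Block size:/ {bs=$3} END {print bc * bs}'
```

If the second figure is larger than the size being re-requested at mount time, any mount-time resize2fs would be an online shrink, which fails exactly as shown in the error.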

Note that the volume you get from Linstor may be slightly bigger than you asked for. For example, suppose you ask for 1GiB = 1024MiB. On LVM that’s 256 extents (4MiB). DRBD needs some space for metadata, so Linstor asks for the next size up, which is 1028MiB (257 extents). DRBD metadata goes at the end, but it doesn’t use the whole 4MiB, so you end up with a volume that’s slightly larger than 1GiB.
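The rounding can be sketched with shell arithmetic. The 4 MiB extent size is the LVM default; the 1 MiB of DRBD metadata is an illustrative placeholder (the real amount depends on volume size and peer count):

```shell
requested=1024   # MiB asked for (1 GiB)
md=1             # MiB of DRBD metadata (illustrative assumption)
extent=4         # MiB per LVM extent (LVM default)

needed=$((requested + md))
extents=$(( (needed + extent - 1) / extent ))   # round up to whole extents
echo "$extents extents = $((extents * extent)) MiB"
# prints: 257 extents = 1028 MiB
```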

Usually that’s not a problem, unless you subsequently try to copy this onto a volume on some other system which is exactly 1GiB.

However, I can imagine the following scenario:

  1. You said you called “resize2fs” yourself inside the container. If you didn’t specify a size, it will grow the filesystem to the full available size, which is slightly more than 1GiB
  2. Docker mounts the volume
  3. Docker itself calls “resize2fs” again at container startup, but explicitly specifying the size it originally requested (i.e. 1GiB)
  4. At this point, resize2fs barfs because Docker is asking it to do an online shrink of the volume

The way to fix this would be to run resize2fs while the volume is unmounted, giving the exact size required. resize2fs will then move data around to shrink the filesystem. Aside: you need to run e2fsck -f before resize2fs in this case, and resize2fs will refuse to run if you haven’t.
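The procedure can be rehearsed on a file-backed image, since e2fsck and resize2fs work on unmounted filesystems without root. A sketch (on a real node you would unmount the volume and substitute the /dev/drbdXXXX device for the image file; -F is needed only because the target here is a regular file):

```shell
truncate -s 1028M fs.img    # slightly oversized, like the volume Linstor hands out
mkfs.ext4 -q -F fs.img      # filesystem fills the whole 1028 MiB
e2fsck -f -p fs.img         # mandatory fresh fsck; resize2fs refuses to shrink without it
resize2fs fs.img 1024M      # offline shrink to exactly the requested 1 GiB
e2fsck -f -p fs.img         # optional sanity check afterwards
```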

And if this is the problem, the solution is not to call resize2fs yourself inside the container, but to let Docker do it.

Good luck!

Hello Brian,

thank you so much for reacting so quickly.

I will take the time next week to look deeper into this.

I will follow up on your suggestion that the problem could be related to the slightly different sizes expected by the multiple layers involved.
If I am not successful there, I will try to reproduce the problem with a new storage pool.

I am happy to reply to the group with more detailed information in a few days. I was actually hoping this was something simple that someone else had already experienced. But maybe I have to dive in deeper! :smiley:

Cheers,

Melanie

On Sun, 2025-08-03 at 09:19 +0000, Brian Candler via LINBIT Community wrote:


Hello Brian,

sorry for my late reply; I had a bad flu and only found time to check now. Meanwhile, creating volumes for new, other Docker Compose setups worked without problems.

For the two old volumes, however, the problem persisted.

I was then finally able to solve the problem for one of them by recreating the volume with xfs instead of the ext4 I had been using before.

This did not work for the second volume out of the box, although I repeated the same steps. After a few repetitions, though, the following sequence of commands succeeded. Note that I deliberately changed the name of the volume when I recreated it:

sudo umount /dev/drbd1020
sudo rmdir /var/lib/docker/plugins/1550f84a82a15e0738b44199d0e1e24e47a6aee44a98e5e8ab698212ebab90c5/propagated-mount/dckvol_minecraft_data_mc-dev02
docker volume rm dckvol_minecraft_data_mc-dev02
docker volume create --opt fs=xfs --opt size=15G -d linbit/linstor-docker-volume dckvol_mc_data_mc-dev02
docker run --rm -d --name deleteme2 --mount volume-driver=linbit/linstor-docker-volume,source=dckvol_mc_data_mc-dev02,destination=/data ubuntu sleep 1h

I did not fully grasp the cause of the problem, but as this is a test cluster I will not dig deeper for now. If others hit the same problem, feel free to contact me; in that case I would be happy to investigate further. The same goes if it seems worth turning this into an issue or bug report. That said, I suspect the cause is my cluster not being set up 100% correctly. If I stumble into the same problem again, I will investigate deeper and be happy to share my findings.

Still, here are some details of the problem for anyone who hits it later:

My problem was that the following sequence of commands did not work as expected:

  1. Create a volume with “docker volume create --opt fs=xfs --opt size=15G -d linbit/linstor-docker-volume dckvol_minecraft_data_mc-dev01”

  2. Mount the volume to a container “docker run --rm -d --name deleteme --mount volume-driver=linbit/linstor-docker-volume,source=dckvol_minecraft_data_mc-dev01,destination=/data ubuntu sleep 1h”
    This led to the following error:
    Error response from daemon: failed to populate volume: error while mounting volume ‘’: VolumeDriver.Mount: resize of device /dev/drbd1006 failed: exit status 1. resize2fs output: resize2fs 1.45.7 (28-Jan-2021)
    Filesystem at /dev/drbd1006 is mounted on /var/lib/docker-volumes/linstor/dckvol_minecraft_data_mc-dev01; on-line resizing required
    resize2fs: Permission denied to resize filesystem
    old_desc_blocks = 2, new_desc_blocks = 2

  3. After this the volume was stuck mounted. Confusingly, mount reported it as mounted twice:

mel@brh-tools01:/opt/docker-ha/mc-dev01$ mount | grep 1006
/dev/drbd1006 on /var/lib/docker/plugins/1550f84a82a15e0738b44199d0e1e24e47a6aee44a98e5e8ab698212ebab90c5/propagated-mount/dckvol_minecraft_data_mc-dev01 type ext4 (rw,relatime)
/dev/drbd1006 on /var/lib/docker/plugins/1550f84a82a15e0738b44199d0e1e24e47a6aee44a98e5e8ab698212ebab90c5/propagated-mount/dckvol_minecraft_data_mc-dev01 type ext4 (rw,relatime)

Cheers,

Melanie
