Zpool creation failed on Linux in OCI
Recently I was provisioning a RackWare server for a DR exercise using OCI as the cloud provider, and I ran into an issue creating the storage pool for RackWare images. Though I encountered the issue while working with RackWare, it's really an issue with creating ZFS pools on a default OCI image with standard block storage attached.
The issue occurred whenever RackWare (or I, manually) tried to create a ZFS pool using the command 'zpool create -f rwzpool {device name}'; the error returned was "/dev/sdb is in use and contains a unknown filesystem."
[root@rmm-mt-lab ~]# zpool create -f rwzpool /dev/sdb
/dev/sdb is in use and contains a unknown filesystem.
The error is caused by device-mapper multipath in the Linux OS. When a block storage volume is attached, the multipath daemon claims the device and creates a multipath map on top of it; that map holds the underlying device open, which the ZFS utilities interpret as the device being in use.
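Before digging further, you can confirm that device-mapper is what's holding the disk by listing the disk's holders in sysfs (substitute your own device for sdb):

ls /sys/block/sdb/holders/

A dm-* entry here means a device-mapper device has claimed the disk; on this host it lists dm-0, the multipath map that appears in the outputs below.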
Show the list of devices using the 'lsblk' command:
[root@rmm-mt-lab ~]# lsblk
NAME                                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb                                         8:16  0   512G  0 disk
└─3603e5aea26a9462ea95921984ac0d1a7       252:0  0   512G  0 mpath
  ├─3603e5aea26a9462ea95921984ac0d1a7p1   252:1  0   512G  0 part
  └─3603e5aea26a9462ea95921984ac0d1a7p9   252:2  0     8M  0 part
sdc                                         8:32  0   512G  0 disk
└─3609a331b373e41abafffb97c6848725d       252:3  0   512G  0 mpath
sda                                          8:0  0   200G  0 disk
├─sda2                                       8:2  0     8G  0 part  [SWAP]
├─sda3                                       8:3  0  38.4G  0 part  /
└─sda1                                       8:1  0   200M  0 part  /boot/efi
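As an aside, if you only want the disk-to-WWID mapping, lsblk can print it directly; exact column support varies a little between lsblk versions:

lsblk -o NAME,SIZE,TYPE,WWN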
To show the multipath IDs, I used the command 'multipath -ll':
[root@rmm-mt-lab ~]# multipath -ll
3609a331b373e41abafffb97c6848725d dm-3 ORACLE ,BlockVolume
size=512G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 3:0:0:2 sdc 8:32 active ready running
3603e5aea26a9462ea95921984ac0d1a7 dm-0 ORACLE ,BlockVolume
size=512G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 4:0:0:3 sdb 8:16 active ready running
Notice in the 'lsblk' output that the device /dev/sdb, which is a block storage volume, has a long string labeled "mpath":
sdb                                         8:16  0   512G  0 disk
└─3603e5aea26a9462ea95921984ac0d1a7       252:0  0   512G  0 mpath
When we look at the 'multipath -ll' output, we see the same ID again:
3603e5aea26a9462ea95921984ac0d1a7 dm-0 ORACLE ,BlockVolume
size=512G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 4:0:0:3 sdb 8:16 active ready running
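If you need to go the other way and find the WWID for one specific disk, you can point 'multipath -ll' at a single device, or query udev's scsi_id helper (the helper's path varies by distribution; /usr/lib/udev/scsi_id is common):

multipath -ll /dev/sdb
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb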
When trying to create a ZFS storage pool, that "mpath" mapping is what prevents ZFS from creating the pool. To resolve the conflict and allow ZFS to create the pool, flush the multipath map from the block volume(s) the ZFS pool will use. This is done with the 'multipath -f {multipath id}' command.
[root@rmm-mt-lab ~]# multipath -f 3603e5aea26a9462ea95921984ac0d1a7
The multipath map is removed, which we can confirm with 'lsblk':
[root@rmm-mt-lab ~]# lsblk
NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb                                     8:16  0   512G  0 disk
sdc                                     8:32  0   512G  0 disk
└─3609a331b373e41abafffb97c6848725d   252:3  0   512G  0 mpath
sda                                      8:0  0   200G  0 disk
├─sda2                                   8:2  0     8G  0 part  [SWAP]
├─sda3                                   8:3  0  38.4G  0 part  /
└─sda1                                   8:1  0   200M  0 part  /boot/efi
Now we can create our ZFS pool using the device /dev/sdb:
[root@rmm-mt-lab ~]# zpool create rwzpool /dev/sdb
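A caveat worth noting: /dev/sdX names are not guaranteed to be stable across reboots, so for a pool you plan to keep, it is safer to build it on a persistent name if your host provides one. For example (the by-id path below is illustrative; check what udev actually created with 'ls /dev/disk/by-id/'):

zpool create rwzpool /dev/disk/by-id/scsi-3603e5aea26a9462ea95921984ac0d1a7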
We can verify the pool was created using 'zpool list':
[root@rmm-mt-lab ~]# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rwzpool   508G   360K   508G        -         -     0%     0%  1.00x  ONLINE  -
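One last note: 'multipath -f' only flushes the map on the running system, and depending on the image, multipathd may claim the device again after a reboot or a device rescan. A persistent fix is to blacklist the volume's WWID in /etc/multipath.conf; the snippet below is a minimal sketch using the WWID from this host, so verify it against your distribution's multipath defaults before applying it, and only blacklist volumes the OS itself does not depend on:

blacklist {
    wwid 3603e5aea26a9462ea95921984ac0d1a7
}

After editing the file, reload the daemon with 'systemctl reload multipathd' (or 'multipathd reconfigure') and confirm with 'multipath -ll' that the map stays gone.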