From 24f98939ae0cb05f4b400192de03fa3cd6f0635f Mon Sep 17 00:00:00 2001
From: Laszlo Ersek
Date: Mon, 20 Sep 2021 07:23:33 +0200
Subject: [PATCH] test-md-and-lvm-devices: work around RAID0 regression in
 Linux v3.14/v5.4

The "test-md-and-lvm-devices" test case creates, among other things, a
RAID0 array (md127) that spans two *differently sized* block devices
(sda1: 20MB, lv0: 16MB).

In Linux v3.14, the layout of such arrays was changed incompatibly and
undetectably. If an array were created with a pre-v3.14 kernel and
assembled on a v3.14+ kernel, or vice versa, data could be corrupted.

In Linux v5.4, a mitigation was added, requiring the user to specify the
layout version of such RAID0 arrays explicitly, as a module parameter. If
the user fails to specify a layout version, the v5.4+ kernel refuses to
assemble such arrays. This is why "test-md-and-lvm-devices" currently
fails with any v5.4+ appliance kernel.

Until we implement a more general solution (see the bugzilla link below),
work around the issue by sizing sda1 and lv0 identically. For this,
increase the size of sdb1 to 24MB: when one 4MB extent is spent on LVM
metadata, the resultant lv0 size (20MB) will precisely match the size of
sda1.

This workaround only affects sizes, and does not interfere with the
original purpose of this test case, which is to test various *stackings*
between disk partitions, software RAID (md), and LVM logical volumes.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=2005485
Signed-off-by: Laszlo Ersek
Message-Id: <20210920052335.3358-3-lersek@redhat.com>
Acked-by: Richard W.M. Jones
[lersek@redhat.com: remove stray empty line]
---
 tests/md/test-md-and-lvm-devices.sh | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/tests/md/test-md-and-lvm-devices.sh b/tests/md/test-md-and-lvm-devices.sh
index 5e82e3a4f..5c4fb9319 100755
--- a/tests/md/test-md-and-lvm-devices.sh
+++ b/tests/md/test-md-and-lvm-devices.sh
@@ -53,7 +53,7 @@ trap cleanup INT QUIT TERM EXIT
 # sda2: 20M PV (vg1)
 # sda3: 20M MD (md125)
 #
-# sdb1: 20M PV (vg0)
+# sdb1: 24M PV (vg0) [*]
 # sdb2: 20M PV (vg2)
 # sdb3: 20M MD (md125)
 #
@@ -66,6 +66,9 @@ trap cleanup INT QUIT TERM EXIT
 # vg3 : VG (md125)
 # lv3 : LV (vg3)
 #
+# [*] The reason for making sdb1 4M larger than sda1 is that the LVM metadata
+# will consume one 4MB extent, and we need lv0 to offer exactly as much space
+# as sda1 does, for combining them in md127. Refer to RHBZ#2005485.
 guestfish <
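[Editorial note, not part of the patch.] The v5.4 mitigation the commit message refers to is the `raid0.default_layout` kernel parameter: a multi-zone RAID0 array can still be assembled on v5.4+ kernels if the user declares which layout the array was created with. The following is an illustrative configuration sketch only; the layout value chosen (1 vs. 2) depends on which kernel originally created the array, and picking the wrong one risks data corruption.

```shell
# On the kernel command line (raid0 driver built in):
#
#   raid0.default_layout=2    # 2 = v3.14+ layout, 1 = pre-v3.14 layout
#
# Or when the raid0 driver is loaded as a module, e.g. via
# /etc/modprobe.d/raid0.conf:
#
#   options raid0 default_layout=2
```

This is a global default applied to all multi-zone RAID0 arrays on the system, which is one reason the patch above prefers the more targeted fix of making the member devices equally sized.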