Increasing/adjusting the RAID size when upgrading disk sizes

I have a server with eight disks in a Linux RAID setup where the boot partitions are RAID1 and the partitions holding the root filesystem and the VM images are RAID10. I need to switch to larger disks and increase the RAID size so that all of the extra space ends up in the partition holding the VM images. Each disk is partitioned as follows:

Disk /dev/sda: 480.1 GB, 480103981056 bytes, 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: E2951863-ACAA-4516-A8C0-58FC2098382C


#         Start          End    Size  Type            Name
 1         2048    921036799  439.2G  Linux RAID
 2    921036800    934143999    6.3G  Linux RAID
 3    934144000    935194623    513M  Linux RAID
 4    935194624    935604223    200M  Linux RAID
 5    935604224    937701375      1G  Linux RAID
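
Before the swap, I assume each new 960 GB disk first needs the same five-partition layout, but with partition 1 enlarged and partitions 2-5 kept at their current sizes. Something like this with sgdisk, where /dev/sdX is a placeholder for one of the new disks and the sizes are derived from the sector counts above:

# /dev/sdX is a placeholder for one of the new 960 GB disks; FD00 = Linux RAID
sgdisk --zap-all /dev/sdX
# partition 1 takes everything except ~8200 MiB reserved at the end for partitions 2-5
sgdisk -n 1:0:-8200M -t 1:FD00 /dev/sdX
sgdisk -n 2:0:+6400M -t 2:FD00 /dev/sdX   # root member, same size as before
sgdisk -n 3:0:+513M  -t 3:FD00 /dev/sdX   # swap member
sgdisk -n 4:0:+200M  -t 4:FD00 /dev/sdX   # boot_efi member
sgdisk -n 5:0:+1G    -t 5:FD00 /dev/sdX   # boot member

(Partitions 2-5 add up to 8137 MiB, so reserving 8200 MiB leaves a little slack at the end of the disk.)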

I am replacing all eight 480 GB disks with 960 GB disks. As you can see, on this server the virtual machine images live on partition 1, the large partition. Can I replace the disks and expand in this case? The RAID configuration is as follows (the procedure I have in mind is sketched after it):

[root@nebula208-1 ~]# cat /proc/mdstat
Personalities : [raid10] [raid1]
md123 : active raid1 sdc4[10] sdb4[9] sde4[8] sdh4[6] sdd4[3] sda4[0] sdg4[5] sdf4[4]
      204736 blocks super 1.0 [8/8] [UUUUUUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid10 sdc1[10] sdb1[9] sde1[8] sdd1[3] sda1[0] sdh1[6] sdg1[5] sdf1[4]
      1841541120 blocks super 1.2 512K chunks 2 near-copies [8/8] [UUUUUUUU]
      bitmap: 8/14 pages [32KB], 65536KB chunk

md125 : active raid1 sdc5[10] sdb5[9] sde5[8] sdf5[4] sdd5[3] sda5[0] sdg5[5] sdh5[6]
      1046528 blocks super 1.2 [8/8] [UUUUUUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid10 sdc2[10] sdb2[9] sde2[8] sdf2[4] sdd2[3] sdh2[6] sdg2[5] sda2[0]
      26193920 blocks super 1.2 512K chunks 2 near-copies [8/8] [UUUUUUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid10 sdc3[10] sdb3[9] sde3[8] sdf3[4] sdh3[6] sda3[0] sdg3[5] sdd3[3]
      2093056 blocks super 1.2 512K chunks 2 near-copies [8/8] [UUUUUUUU]

unused devices: <none>
[root@nebula208-1 ~]# mdadm --detail /dev/md12
md123  md124  md125  md126  md127
[root@nebula208-1 ~]# mdadm --detail /dev/md123
/dev/md123:
           Version : 1.0
     Creation Time : Sun Nov 28 10:07:23 2021
        Raid Level : raid1
        Array Size : 204736 (199.94 MiB 209.65 MB)
     Used Dev Size : 204736 (199.94 MiB 209.65 MB)
      Raid Devices : 8
     Total Devices : 8
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Dec 31 01:00:04 2023
             State : clean
    Active Devices : 8
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : boot_efi
              UUID : f0f402f2:cdf1d443:8ef69168:9e61cca3
            Events : 644

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       9       8       20        1      active sync   /dev/sdb4
      10       8       36        2      active sync   /dev/sdc4
       3       8       52        3      active sync   /dev/sdd4
       4       8       84        4      active sync   /dev/sdf4
       5       8      100        5      active sync   /dev/sdg4
       6       8      116        6      active sync   /dev/sdh4
       8       8       68        7      active sync   /dev/sde4
[root@nebula208-1 ~]# mdadm --detail /dev/md124
/dev/md124:
           Version : 1.2
     Creation Time : Sun Nov 28 10:01:31 2021
        Raid Level : raid10
        Array Size : 1841541120 (1756.23 GiB 1885.74 GB)
     Used Dev Size : 460385280 (439.06 GiB 471.43 GB)
      Raid Devices : 8
     Total Devices : 8
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Jan  6 11:03:14 2024
             State : active
    Active Devices : 8
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : vm_images
              UUID : a88079e5:87afdc5a:ace3b41f:11cb43c0
            Events : 280803

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       9       8       17        1      active sync set-B   /dev/sdb1
      10       8       33        2      active sync set-A   /dev/sdc1
       3       8       49        3      active sync set-B   /dev/sdd1
       4       8       81        4      active sync set-A   /dev/sdf1
       5       8       97        5      active sync set-B   /dev/sdg1
       6       8      113        6      active sync set-A   /dev/sdh1
       8       8       65        7      active sync set-B   /dev/sde1
[root@nebula208-1 ~]# mdadm --detail /dev/md125
/dev/md125:
           Version : 1.2
     Creation Time : Sun Nov 28 10:07:16 2021
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 8
     Total Devices : 8
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Jan  3 09:46:31 2024
             State : clean
    Active Devices : 8
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : boot
              UUID : f2c80fb6:2b15cd8b:0b9968b3:c776c45e
            Events : 726

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       9       8       21        1      active sync   /dev/sdb5
      10       8       37        2      active sync   /dev/sdc5
       3       8       53        3      active sync   /dev/sdd5
       4       8       85        4      active sync   /dev/sdf5
       5       8      101        5      active sync   /dev/sdg5
       6       8      117        6      active sync   /dev/sdh5
       8       8       69        7      active sync   /dev/sde5
[root@nebula208-1 ~]# mdadm --detail /dev/md126
/dev/md126:
           Version : 1.2
     Creation Time : Sun Nov 28 10:07:07 2021
        Raid Level : raid10
        Array Size : 26193920 (24.98 GiB 26.82 GB)
     Used Dev Size : 6548480 (6.25 GiB 6.71 GB)
      Raid Devices : 8
     Total Devices : 8
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Jan  6 11:10:48 2024
             State : clean
    Active Devices : 8
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : root
              UUID : 01f17235:c48db095:1dd7d4c7:6a78beed
            Events : 950594

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync set-A   /dev/sda2
       9       8       18        1      active sync set-B   /dev/sdb2
      10       8       34        2      active sync set-A   /dev/sdc2
       3       8       50        3      active sync set-B   /dev/sdd2
       4       8       82        4      active sync set-A   /dev/sdf2
       5       8       98        5      active sync set-B   /dev/sdg2
       6       8      114        6      active sync set-A   /dev/sdh2
       8       8       66        7      active sync set-B   /dev/sde2
[root@nebula208-1 ~]# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Sun Nov 28 10:07:29 2021
        Raid Level : raid10
        Array Size : 2093056 (2044.00 MiB 2143.29 MB)
     Used Dev Size : 523264 (511.00 MiB 535.82 MB)
      Raid Devices : 8
     Total Devices : 8
       Persistence : Superblock is persistent

       Update Time : Sat Jan  6 03:07:05 2024
             State : clean
    Active Devices : 8
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : swap
              UUID : d5dc111a:9935871c:6b7c4333:304a60f3
            Events : 389689

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync set-A   /dev/sda3
       9       8       19        1      active sync set-B   /dev/sdb3
      10       8       35        2      active sync set-A   /dev/sdc3
       3       8       51        3      active sync set-B   /dev/sdd3
       4       8       83        4      active sync set-A   /dev/sdf3
       5       8       99        5      active sync set-B   /dev/sdg3
       6       8      115        6      active sync set-A   /dev/sdh3
       8       8       67        7      active sync set-B   /dev/sde3
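
My working assumption for the swap itself is to cycle through the disks one at a time and then grow only md124, roughly as below. The device names are just the example of the first disk; growing the component size of a RAID10 needs a reasonably recent kernel, and some mdadm versions want the internal bitmap removed around the grow:

# for each of the eight disks in turn (example: the sda members)
mdadm /dev/md124 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md126 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md127 --fail /dev/sda3 --remove /dev/sda3
mdadm /dev/md123 --fail /dev/sda4 --remove /dev/sda4
mdadm /dev/md125 --fail /dev/sda5 --remove /dev/sda5
# physically replace the 480 GB disk with a 960 GB disk, partition it as above, then:
mdadm /dev/md123 --add /dev/sda4
mdadm /dev/md124 --add /dev/sda1
mdadm /dev/md125 --add /dev/sda5
mdadm /dev/md126 --add /dev/sda2
mdadm /dev/md127 --add /dev/sda3
watch cat /proc/mdstat                     # wait for the resync before touching the next disk

# once all eight disks are in, grow the vm_images array into the larger partitions
mdadm --grow /dev/md124 --bitmap=none      # some mdadm versions require this first
mdadm --grow /dev/md124 --size=max
mdadm --grow /dev/md124 --bitmap=internal  # restore the write-intent bitmap
# finally grow the filesystem, e.g. resize2fs /dev/md124 or xfs_growfs on its mountpoint

Only md124 should need the grow step, since partitions 2-5 (and the arrays on them) keep their original sizes. Is this roughly the right procedure?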

Best Answer
