NAS542: How to remove disks from RAID 1?

frameworker
edited April 2019 in Personal Cloud Storage
I made a mistake when expanding a RAID 1 composed of two 1TB disks. I added two 4TB disks and expanded the RAID using the appropriate option, "expand by adding more disks".
The RAID synced, and I then thought I could remove the two 1TB disks; the RAID degraded, and I expected to be able to choose the repair option (as the warning dialog says!).
But there is no option to choose; the Action menu is greyed out.
At the moment the two 1TB disks are inserted again and the RAID will be repairing for the next five hours.

So the question still remains: if a RAID 1 holds an exact copy of the data on each disk, why can't I remove two of them in the way described above?
And how should I manage the repair action?

#NAS_Apr_2019

All Replies

  • Mijzelf
    What is the volume size of the array? I think you added a third disk, converting the array to raid5, instead of exchanging the disks.
    If it is raid1, the volume should be 1TB.
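    If you can log in over ssh, a quick way to check would be to look at the raid level in /proc/mdstat. This is just a sketch, and the array name md2 is an assumption:

        grep md2 /proc/mdstat
        # a line like 'md2 : active raid1 ...' means it is still raid1; 'raid5' there would confirm the conversion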
  • frameworker
    edited April 2019
    No, it's just an expanded RAID1 with 1TB capacity:

        Bay     1 2 3 4
        hdd TB  1 1 4 4

    now expanded and repaired, so we have just a RAID 1 with 1TB capacity.

    Now I want to remove the two 1TB disks, like this:

        Bay     1 2 3 4
        hdd TB  x x 4 4
    The RAID degrades, and I thought I could then repair it to a 1TB RAID with "unused" space (because there are two 4TB disks now).

    Finally, expand the RAID from 1TB to 4TB capacity via the volume manager.

    P.S. The code formatting doesn't seem to work correctly.



  • Mijzelf
    You can remove the disks manually from the array, from the command line. Maybe there is also a way in the GUI, I don't know.

    Log in over ssh (maybe you'll have to enable the ssh server first).
    Then execute
        cat /proc/partitions
        cat /proc/mdstat
    The first command shows you the disks and partitions (and the flash partitions, which you can ignore). The sizes are given in blocks of 1KiB. Here you can find the device names of the 'unwanted' members of the raid array.
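    For illustration only (the numbers below are made up, not taken from this NAS): a 1TB disk shows up with roughly 976 million 1KiB blocks and a 4TB disk with roughly 3.9 billion, so the two sizes are easy to tell apart. In this made-up example sda would be a 1TB disk and sdd a 4TB disk:

        major minor  #blocks  name
           8        0  976762584 sda
           8        3  972893184 sda3
           8       48 3907018584 sdd
           8       51 3902886912 sdd3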
    The second command shows the status of the raid arrays. There are at least 3 arrays: 2 raid1 arrays of 2GB each for the firmware, and 1 (probably md2) containing your data partition.
    If that array is indeed raid1, you can remove the two partitions with
        su
        mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3
    From the info in /proc/mdstat and /proc/partitions you can tell what to put for md2 and sda3.
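    As a sketch of the whole sequence, assuming the unwanted members really turn out to be sda3 and sdb3 (verify the names first), and assuming you also want the array to stop expecting four members afterwards:

        su
        # remove both unwanted members (the partition names are an assumption, check /proc/partitions first)
        mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3
        mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3
        # shrink the member count so the array is no longer considered degraded
        mdadm --grow /dev/md2 --raid-devices=2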

    "P.S. The code formatting doesn't seem to work correctly."
    I know. I think that instead of using one of the freely available forum platforms, ZyXEL decided to build their own. Fortunately it's possible to edit the raw HTML, using the '</>' button.
  • frameworker
    edited April 2019
    Now the RAID1 with four disks is repaired, with 1TB total capacity. I don't understand the internals:

        ~ # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
        md2 : active raid1 sdb3[6] sda3[5] sde3[4] sdd3[3]
              972631040 blocks super 1.2 [4/4] [UUUU]

        md1 : active raid1 sdb2[7] sda2[4] sdd2[5] sde2[6]
              1998784 blocks super 1.2 [4/4] [UUUU]

        md0 : active raid1 sdb1[7] sda1[4] sdd1[5] sde1[6]
              1997760 blocks super 1.2 [4/4] [UUUU]

    What does this mean? With these internals I'm completely lost; I expected only one row for this command.

        ~ # df -k
        Filesystem           1K-blocks      Used Available Use% Mounted on
        ubi7:ubi_rootfs2         92892     49816     38296  57% /firmware/mnt/nand
        /dev/md0               1966352    183152   1683312  10% /firmware/mnt/sysdisk
        /dev/loop0              142799    126029     16770  88% /ram_bin
        /dev/loop0              142799    126029     16770  88% /usr
        /dev/loop0              142799    126029     16770  88% /lib/security
        /dev/loop0              142799    126029     16770  88% /lib/modules
        /dev/loop0              142799    126029     16770  88% /lib/locale
        /dev/ram0                 5120         4      5116   0% /tmp/tmpfs
        /dev/ram0                 5120         4      5116   0% /usr/local/etc
        ubi3:ubi_config           2408       140      2112   6% /etc/zyxel
        /dev/md2             957368980 929637600  27731380  97% /i-data/90901ca4
        /dev/md2             957368980 929637600  27731380  97% /usr/local/apache/htdocs/desktop,/pkg
        /dev/sdc1            961402172 924432268  36969904  96% /e-data/11ff2243f8c8e18f5ac5aa3f1df7d693

    So this shows that md2 is my repaired RAID, with sda, sdb, sdd and sde as members.
    The device sdc is an external USB backup disk.

    What can I do now? I want to remove /dev/sda and /dev/sdb from the RAID. Is this possible?

        ~ # mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.2
          Creation Time : Thu Jun 29 17:20:50 2017
             Raid Level : raid1
             Array Size : 972631040 (927.57 GiB 995.97 GB)
          Used Dev Size : 972631040 (927.57 GiB 995.97 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent

            Update Time : Sat Apr 20 15:10:01 2019
                  State : clean
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0

                   Name : nas542:2  (local to host nas542)
                   UUID : 90901ca4:d807cf67:b7f8bd07:32031856
                 Events : 16753

            Number   Major   Minor   RaidDevice State
               6       8       19        0      active sync   /dev/sdb3
               5       8        3        1      active sync   /dev/sda3
               4       8       67        2      active sync   /dev/sde3
               3       8       51        3      active sync   /dev/sdd3
    The mdadm guide at https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm#Removing_a_disk_from_an_array gives some details about reshaping the RAID.
    Can I use those commands?



  • frameworker
    I will try the command line today, after making a complete backup via the "Archive" method from the GUI (taking hours...).
    It seems there's no way to achieve this task other than from the command line. So for non-experts in UNIX, the Zyxel GUI, at least on the 542 model, is not very user friendly, I mean.
  • Mijzelf
    edited April 2019
    Yes, that command is suitable. But you can't assume it's sda and sdb that have to be removed. Look at the output of 'cat /proc/partitions'.

    "What does this mean?"

        md2 : active raid1 sdb3[6] sda3[5] sde3[4] sdd3[3]
              972631040 blocks super 1.2 [4/4] [UUUU]

    It means that md2 is a raid1 array containing the partitions sdb3, sda3, sde3 and sdd3 as members. The total size is a bit less than 1TB, it has a version 1.2 header, 4 members out of 4 are available, and they are all up ([UUUU]).
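    If it helps, a one-liner to pull out just those members with their sizes; the partition names are taken from the /proc/mdstat output above, and the two with the smallest block counts are on the 1TB disks, i.e. the candidates for removal:

        # show just the md2 member partitions and their sizes in 1KiB blocks
        grep -E 'sd[abde]3' /proc/partitions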
  • frameworker
    edited April 2019
    Not successful at all. I could remove the first disk via mdadm; the GUI recognized the freed disk, and the RAID was shown normally with three disks. But the NAS beeped as if it were degraded, so I rebooted it. After that, the entire volume configuration is gone. Bump! I have the backup... the external disk is displayed normally.
        ~ # mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3
        mdadm: set /dev/sda3 faulty in /dev/md2
        mdadm: hot removed /dev/sda3 from /dev/md2
        ~ # mdadm --grow /dev/md2 --raid-devices 3
        raid_disks for /dev/md2 set to 3
    And now:

        ~ # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
        md1 : active raid1 sda2[4] sde2[6] sdd2[5] sdb2[7]
              1998784 blocks super 1.2 [4/4] [UUUU]

        md0 : active raid1 sda1[4] sde1[6] sdd1[5] sdb1[7]
              1997760 blocks super 1.2 [4/4] [UUUU]
    RAID is gone....

    The user, group and directory configuration is still there, but shown as lost; just the RAID config is gone. I suspect the data on all of the disks is OK, so the question is whether there's a way to create a RAID1 from the existing disks, without creating a completely new RAID.
    Another way would be to restore a recent NAS configuration. I'm not sure which would be the better way.

  • Mijzelf
    Interesting. The array was not degraded; the '--fail --remove' turned the 4-disk raid1 into a 3-disk one, from 300% redundancy to 200%. I don't know the effect of that grow command.
    The firmware runs mdadm as a daemon to detect raid changes, and apparently it gave feedback on your action. I can imagine the firmware isn't programmed to deal with this kind of change.

    But I can't think of what happened to your 3-disk raid1 on reboot. How did you reboot?

    Can you post the output of
        mdadm --examine /dev/sd[abde]3

  • frameworker
    Seems ok:

        ~ # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 90901ca4:d807cf67:b7f8bd07:32031856
                   Name : nas542:2  (local to host nas542)
          Creation Time : Thu Jun 29 16:20:50 2017
             Raid Level : raid1
           Raid Devices : 4

         Avail Dev Size : 1945262080 (927.57 GiB 995.97 GB)
             Array Size : 972631040 (927.57 GiB 995.97 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 4890160e:c8f2b046:22ea7efc:8db5b2e0

            Update Time : Mon Apr 22 09:44:17 2019
               Checksum : 4a55252f - correct
                 Events : 16753

            Device Role : Active device 1
            Array State : AAAA ('A' == active, '.' == missing)
        ~ # mdadm --examine /dev/sdb3
        /dev/sdb3:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 90901ca4:d807cf67:b7f8bd07:32031856
                   Name : nas542:2  (local to host nas542)
          Creation Time : Thu Jun 29 16:20:50 2017
             Raid Level : raid1
           Raid Devices : 3

         Avail Dev Size : 1945262080 (927.57 GiB 995.97 GB)
             Array Size : 972631040 (927.57 GiB 995.97 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : e2be7b18:b594ce83:f457b14a:415a624b

            Update Time : Mon Apr 22 09:49:10 2019
               Checksum : 4ab60965 - correct
                 Events : 16758

            Device Role : Active device 0
            Array State : AAA ('A' == active, '.' == missing)
        ~ # mdadm --examine /dev/sdd3
        /dev/sdd3:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 90901ca4:d807cf67:b7f8bd07:32031856
                   Name : nas542:2  (local to host nas542)
          Creation Time : Thu Jun 29 16:20:50 2017
             Raid Level : raid1
           Raid Devices : 3

         Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
             Array Size : 972631040 (927.57 GiB 995.97 GB)
          Used Dev Size : 1945262080 (927.57 GiB 995.97 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 4dd54617:7b76ba9a:003fadb0:582e5929

            Update Time : Mon Apr 22 09:49:10 2019
               Checksum : 1b00cb8 - correct
                 Events : 16758

            Device Role : Active device 2
            Array State : AAA ('A' == active, '.' == missing)
        ~ # mdadm --examine /dev/sde3
        /dev/sde3:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 90901ca4:d807cf67:b7f8bd07:32031856
                   Name : nas542:2  (local to host nas542)
          Creation Time : Thu Jun 29 16:20:50 2017
             Raid Level : raid1
           Raid Devices : 3

         Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
             Array Size : 972631040 (927.57 GiB 995.97 GB)
          Used Dev Size : 1945262080 (927.57 GiB 995.97 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 8a1a317f:8d7bbb7c:c351db88:6ca8a67e

            Update Time : Mon Apr 22 09:49:10 2019
               Checksum : 7916e3df - correct
                 Events : 16758

            Device Role : Active device 1
            Array State : AAA ('A' == active, '.' == missing)
        ~ #


  • Mijzelf
    Ok, I have a theory. sd[bde]3 concur that they are members of a 3-disk raid1 array:

        Array State : AAA ('A' == active, '.' == missing)
        Array UUID : 90901ca4:d807cf67:b7f8bd07:32031856

    but sda3 thinks it is a member of the same array, and that it has 4 disks:

        Array State : AAAA ('A' == active, '.' == missing)
        Array UUID : 90901ca4:d807cf67:b7f8bd07:32031856

    Somehow this confuses the raid manager. So clear the header of sda3, and reboot:

        mdadm --zero-superblock /dev/sda3
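    As a sketch of that whole step, with a check afterwards (this assumes sda3 really is the odd one out, as the --examine output above suggests):

        su
        # wipe the stale raid header on the partition that still claims a 4-disk array
        mdadm --zero-superblock /dev/sda3
        reboot
        # after the reboot, check that md2 comes back with the three remaining members
        cat /proc/mdstat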



