ZFS disk removal

Because I need to help a friend with backing up data from his broken PC, I need to temporarily remove my 2TB disks from my ZFS pool so I have enough disk space.

This is my current pool with the status:

file# zpool status
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h18m, 28.93% done, 0h45m to go
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  mirror    ONLINE       0     0     0
	    ad8     ONLINE       0     0     0
	    ad6     ONLINE       0     0     0  83.7G resilvered
	  mirror    ONLINE       0     0     0
	    ad10    ONLINE       0     0     0
	    ad14    ONLINE       0     0     0

errors: No known data errors
file# zpool list 
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank  2.27T   550G  1.73T    23%  ONLINE  -
file# zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         550G  1.73T      4      0   348K  48.0K
  mirror     294G  1.53T    875     47  78.1M  86.8K
    ad8         -      -      3      0   391K  51.8K
    ad6         -      -      0    773    235  78.1M
  mirror     256G   208G      4      0   293K  47.9K
    ad10        -      -      2      0   294K  47.9K
    ad14        -      -      2      0   294K  47.9K
----------  -----  -----  -----  -----  -----  -----

file#

As you can see, I have two 2TB disks and two 500GB disks in my pool, each pair in a mirror. The plan is to remove the 2TB disks from the pool and store the data temporarily on the two 500GB disks. This means I will temporarily lose some redundancy, but I can't really help that :). Right now, the plan is as follows:

Remove the first 2TB disk from the mirror, create a new pool with just this disk, and move all data from the old pool to this new pool. After this, I can destroy the old pool and create a new pool on the 500GB disks to store the data temporarily. This seems pretty easy, doesn't it? 😛 It actually took me some time to figure out the best way to do it, as I don't want to lose any data.

To create the new pool, the following commands are needed:

file# zpool detach tank /dev/ad8
file# zpool create tank2 /dev/ad8
file# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank   2.27T   550G  1.73T    23%  ONLINE  -
tank2  1.81T    75K  1.81T     0%  ONLINE  -

Now that the new pool exists, the data needs to be moved from the old pool to the temporary new pool:


file# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
tank                    550G  1.69T    66K  /tank
tank/annemarie         30.6G  1.69T  30.6G  /tank/annemarie
tank/annemarie/muziek  1.89M  2.00G  1.89M  /tank/annemarie/muziek
tank/dell               128G   372G   126G  /tank/dell
tank/downloads          220G  1.69T   220G  /tank/downloads
tank/open              6.04G   494G  6.04G  /tank/open
tank/prive               21K  1.69T    21K  /tank/prive
tank/toshiba           79.1G   121G  78.9G  /tank/toshiba
tank/zolder            86.1G  1.69T  86.1G  /tank/zolder
tank2                  70.5K  1.78T    21K  /tank2
file# zfs create tank2/annemarie
file# zfs create tank2/annemarie/muziek
file# zfs create tank2/dell
file# zfs create tank2/downloads
file# zfs create tank2/toshiba
file# zfs create tank2/zolder

Besides creating these (actually unneeded, but ok) datasets to match the existing mount points, I enabled compression on them as well.
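Enabling compression is nothing more than setting a property on each dataset; roughly like this (just a sketch, repeated for each of the datasets created above, with compression=on picking the default algorithm):

file# zfs set compression=on tank2/annemarie
file# zfs set compression=on tank2/downloads

...and so on for the rest. After that, I just did: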

file# time cp -r /tank /tank2

But after a few minutes I already discovered a few issues: it was missing files and the speed was not really good. So I decided to switch to rsync instead, and that looks like it is going a lot faster.
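The rsync run was something along these lines (a sketch, so the exact flags may have differed; -a preserves permissions and timestamps, and the trailing slashes copy the directory contents into the already mounted tank2 datasets):

file# rsync -av /tank/ /tank2/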

Once all the data was copied, I wanted to remove the other 2TB disk as well:

file# zpool detach tank /dev/ad6
cannot detach /dev/ad6: only applicable to mirror and replacing vdevs

This is pretty logical: ad6 is no longer part of a mirror (I detached ad8 earlier), and removing a plain vdev like this would mean losing data. In this case that is exactly what I want, but normally you don't :).
So instead, I just destroy the whole pool:

file# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank   2.27T   548G  1.73T    23%  ONLINE  -
tank2  1.81T   565G  1.26T    30%  ONLINE  -
file# zpool destroy tank
file# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank2  1.81T   565G  1.26T    30%  ONLINE  -

And now I create a new pool with the two 500GB disks:

file# zpool create tank /dev/ad10 /dev/ad14
file# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank    928G    78K   928G     0%  ONLINE  -
tank2  1.81T   565G  1.26T    30%  ONLINE  -
file# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  ad10      ONLINE       0     0     0
	  ad14      ONLINE       0     0     0

errors: No known data errors

  pool: tank2
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank2       ONLINE       0     0     0
	  ad8       ONLINE       0     0     0

errors: No known data errors
file# zpool add tank2 /dev/ad6 
file# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  ad10      ONLINE       0     0     0
	  ad14      ONLINE       0     0     0

errors: No known data errors

  pool: tank2
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank2       ONLINE       0     0     0
	  ad8       ONLINE       0     0     0
	  ad6       ONLINE       0     0     0

errors: No known data errors
file# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank    928G    78K   928G     0%  ONLINE  -
tank2  3.62T   565G  3.07T    15%  ONLINE  -
file# 

Besides creating the new pool, I also added the other 2TB disk to the temporary pool, giving me the space I needed in the first place :).
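Note that zpool add puts ad6 next to ad8 as a second stripe, so tank2 has no redundancy at all. If I had wanted a mirror again instead of the extra space, attach rather than add would be the command, something along the lines of:

file# zpool attach tank2 /dev/ad8 /dev/ad6

For a temporary pool, though, the extra space is worth more to me than the redundancy.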
Now the only thing left to do is move all data from /tank2 to /tank and we are done :).
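That last copy is just the earlier rsync in the other direction, roughly like this (again a sketch, and it assumes the datasets get recreated on the new tank first):

file# rsync -av /tank2/ /tank/

After that, the space on tank2 is free for the backup job.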
