My First Manual ZPOOL

Written by Michael Cole - June 18th 2015


This is my first manually created ZFS pool. Before I get into the details, let me say this is on FreeBSD 10.1, so if you are trying to use this as a guide it may not work on other versions. Also, I am not responsible for anything that may occur if you try any of these commands. Use this information at your own risk.

So I figured my drives were 4k sector, but I thought I would double check to make sure. I ran "camcontrol identify da0" and looked for the line "sector size logical 512, physical 4096, offset 0". As you can see, the physical sector size is 4k. The kernel parameter vfs.zfs.min_auto_ashift defaults to 9, meaning 512-byte sectors, so I changed it to 12, which is 4k.
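The ashift value is just the base-2 logarithm of the sector size, which a quick bit of shell arithmetic confirms:

```shell
# ashift is log2 of the sector size: 2^9 = 512 bytes, 2^12 = 4096 bytes
echo $((1 << 9)) $((1 << 12))
# prints: 512 4096
#
# On FreeBSD the tunable itself would be set with (not run here):
#   sysctl vfs.zfs.min_auto_ashift=12
```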

Now I created a GPT (GUID Partition Table) on each disk. For those just catching my posts for the first time, I have six 3TB Western Digital Red drives. The command is "gpart create -s gpt" followed by the device name, and I had to run it 6 times, for da0 through da5.
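Rather than typing it six times, a loop covers all the disks. This is a dry-run sketch that only echoes the commands; on FreeBSD you would drop the echo to actually run them:

```shell
# dry run: print the gpart command for each of the six disks, da0-da5
# (remove "echo" to actually create the partition tables on FreeBSD)
for d in 0 1 2 3 4 5; do
  echo gpart create -s gpt "da$d"
done
```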

Looking at my device names, they were generic da0-5. I thought: if my ZFS pool has an issue in the future, how on earth will I know which drive to replace? So I tested this out a little. I unplugged the top drive in my case, then checked dmesg. Guess what: the drive I would like to be drive 1 is da5.

My schema for labeling will be simple, since all of my disks are internal for now: internal disk #, partition #. So in my case it was "gpart add -a 4k -l id1p1 -t freebsd-zfs da5". If you want to view the labels, look at "glabel status". I will follow the same format for all my drives. This way, when I do have an issue, I'm not trying to figure out which disk is which. I figure it's better to find this information while the device is actually working and commands will run against it. If a drive fails later on and I have a degraded pool, unplugging a good drive by mistake could lead to more issues.

For reference, here's how scrambled the ordering can be: da3 was disk 2, da2 was disk 3, da4 was disk 4, da1 was disk 5, and da0 was disk 6. And yes, I know that re-cabling could fix this, but storing the labels on the drives has several advantages. For starters, I can move the drives around all I like and the labels will follow them. Second, mine could be matched up by position, but only because they all started at da0. If I used other controllers, or hooked internal connections up to, say, eSATA or some other external enclosure, I might label them in more detail, like enclosure 1 drive 1, or e1d1p1.
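The whole labeling pass can also be scripted from the observed disk-to-device mapping. Another dry-run sketch, using the mapping above (disk 1 = da5, disk 2 = da3, and so on); drop the echo to actually run it on FreeBSD:

```shell
# dry run: print the label command for each internal disk in physical order,
# using the device mapping observed via dmesg (disk 1 -> da5, ... disk 6 -> da0)
i=1
for dev in da5 da3 da2 da4 da1 da0; do
  echo gpart add -a 4k -l "id${i}p1" -t freebsd-zfs "$dev"
  i=$((i + 1))
done
```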

Anyways, enough about labels. Now that we have our labels and partitions, it's time to make the zpool. I have decided on RAIDZ2. Here goes nothing: "zpool create storage raidz2 gpt/id1p1 gpt/id2p1 gpt/id3p1 gpt/id4p1 gpt/id5p1 gpt/id6p1". It worked, no warnings, no errors. With the parity overhead, hard drive size rounding, and my 4k alignment of the drives, total usable space is about 10.5T. Not too bad.

zpool list
NAME      SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
storage  16.2T   756K  16.2T     0%         -     0%  1.00x  ONLINE  -

zpool status
        NAME           STATE     READ WRITE CKSUM
        storage        ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            gpt/id1p1  ONLINE       0     0     0
            gpt/id2p1  ONLINE       0     0     0
            gpt/id3p1  ONLINE       0     0     0
            gpt/id4p1  ONLINE       0     0     0
            gpt/id5p1  ONLINE       0     0     0
            gpt/id6p1  ONLINE       0     0     0

zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
storage              480K  10.5T   192K  /storage
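The 10.5T figure roughly matches a back-of-the-envelope estimate: RAIDZ2 spends two of the six drives on parity, and drive makers count a terabyte as 10^12 bytes while ZFS reports in powers of two. A quick sketch of the arithmetic (the real number comes out a bit lower because ZFS reserves some space for itself):

```shell
# rough usable-space estimate for 6x 3TB in raidz2:
# (6 disks - 2 parity) * 3*10^12 bytes, converted to TiB (2^40 bytes)
awk 'BEGIN { printf "%.1f TiB\n", (6 - 2) * 3e12 / 2^40 }'
# prints: 10.9 TiB
```

The gap between the estimated 10.9 TiB and the reported 10.5T is ZFS metadata and internal reservations.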

Now that I have my space, I can start working on defining some datasets and jails, etc. But that will be another day.