TrueNAS ZFS pool expansion

First, a quick word of introduction: for details on ZFS itself, I refer you to the material [TrueNas ZFS - why is it awesome?].


In ZFS, the organizational unit in which we keep data is a pool. To this pool we attach so-called vDevs (virtual devices). The basic vDev, without which our NAS setup would not make sense, is the data vDev, where the data itself is kept. The disks inside a vDev attached to the pool must be organized somehow: as a stripe, a mirror or a RAID-Z. It is mainly the expansion of pools built on this RAID-Z that today's material is about.
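
As a small illustration (the pool name tank and the disk names da0 through da3 are made up for this example), this is roughly what creating a pool with a single RAID-Z2 data vDev looks like at the command line; on TrueNAS you would normally do the same thing through the web UI:

    # Create a pool named "tank" with one RAID-Z2 data vDev built from four disks.
    # Names are illustrative - check your actual disk names in the TrueNAS UI first.
    zpool create tank raidz2 da0 da1 da2 da3

    # Show the resulting layout: the pool, its vDevs and the disks inside them.
    zpool status tank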

ZFS pool - when to expand?

To begin with, a few recommendations related to today's topic.


When is it worth considering expanding the pool? The occupancy limit we should allow is 80%. To keep our NAS with ZFS at optimal performance levels, we should always keep a minimum of 20% of the space free. One reason for this is that ZFS uses a Copy on Write mechanism. In short, it works in such a way that when we overwrite some data, ZFS actually writes it to a completely different place underneath and does not touch the location where the data originally sits. Only once the data is successfully written to the new location does the pointer to the data change and start pointing there. This makes ZFS significantly less susceptible to data corruption during writes, but at the cost of precisely this: we need to keep 20% of the pool free for its performance to stay optimal. Will something blow up if free space drops below 20%? Of course not, you can use all the space, just count on the performance of such a filled-up pool dropping.
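
To check where you stand, a quick sketch (the pool name tank is an assumption):

    # The CAP column shows pool occupancy in percent; keep it below ~80%.
    zpool list tank

    # More detail: total, allocated and free space plus fragmentation.
    zpool get size,allocated,free,capacity,fragmentation tank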

ZFS pool - how to expand?

Regarding the expansion itself, there are three main ways to expand a pool. The first, perhaps the most obvious, is to buy a new set of disks, create a new pool, or even a new server, and migrate the data. But that is not what today's material is about, at least mostly not. Today we'll try a more subtle approach.

ZFS pool expansion - adding a new vDev

One way is to add a new data vDev. This is a potentially nice solution, provided we thought about it in advance. Consider it something of a natural, planned way to expand the pool. When we buy a server with future growth in mind, we either leave drive bays free or keep the option of connecting external disk shelves. It's just that, as we all know, this is not always the case.


The obvious disadvantage of this solution is that we have to keep slots free from the beginning. The next restriction is that it is recommended to expand the pool with an identical data vDev: the same drives and the same RAID-Z level.
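
As a sketch, assuming an existing pool tank that already consists of one four-disk RAID-Z2 vDev (disk names are again made up), adding a second, identical vDev corresponds at the ZFS level to:

    # Dry run first: -n only prints what the new layout would look like.
    zpool add -n tank raidz2 da4 da5 da6 da7

    # Add a second RAID-Z2 data vDev matching the existing one
    # (same number and size of disks, same RAID-Z level).
    zpool add tank raidz2 da4 da5 da6 da7

In TrueNAS itself you would normally use the corresponding add-vDev action on the pool in the web UI rather than the command line.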


The reason for these requirements is that ZFS performs load balancing when writing to the pool. That is, it spreads writes evenly across all vDevs, and if one is larger or faster than another, the whole thing ends up adjusting to the slower one. Again, will anything bad happen if the drives are different? Not if you accept your server's suboptimal performance.


That said, I think we can mostly set this sub-optimality aside, because the same load balancing that splits the write work evenly between the two vDevs means that, theoretically and to a good approximation, we can expect write performance close to twice that of the weaker vDev. In practice I would not expect a full doubling, but the speed-up of writes in particular will be significant. As for reads, it depends on which vDev the specific data sits on. Over time the data should spread out fairly evenly and reads should also speed up significantly.
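
If you want to watch how writes and allocated data actually spread across the vDevs, a simple way to do it (pool name assumed) is:

    # Per-vDev view: space allocated on each vDev plus current read/write
    # operations and bandwidth, refreshed every 5 seconds.
    zpool iostat -v tank 5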


The performance and efficiency of ZFS depend very much on how disks are organized into vDevs and vDevs into the pool. This is a complex topic and there should be a separate piece about it soon. As a teaser, I'll just add that in some applications the performance difference between organizing the disks right and wrong can be severalfold.

ZFS pool expansion - disk replacement

However, if you already have your server and all its slots are filled, there is another interesting way: simply swap out the disks, one by one, for larger ones, and the pool will figure out on its own that it has more space. I think this can be a good solution in many cases. We don't have to migrate anything or change the layout, just swap the drives for bigger ones.


Ideally, we have a free disk slot where we can plug in a new drive and migrate piece by piece. In that case, if the disks are hot-swappable, we can carry out the whole operation without any server downtime. Otherwise, you can pull out a disk, insert a new one and let the RAID rebuild, but this carries the risk of data loss if anything happens to the remaining disks during resynchronization. Keep in mind that rebuilding a large RAID is quite a stress test for the disks, and if they are already old we can shoot ourselves in the foot. To avoid this, we can even use an adapter and connect the new drive via USB: we attach the new drive over USB, replace the old drive in the pool with it, pull the replaced drive out of its slot, and move the USB-attached drive into that slot. Then we connect the next drive via USB, and so on, one by one.
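
At the ZFS level, one round of such a replacement could look roughly like this (pool and disk names are assumptions; the TrueNAS web UI exposes the same steps as a disk replace operation):

    # Let the pool grow automatically once every disk in the vDev is larger.
    zpool set autoexpand=on tank

    # Replace the old disk with the new, larger one; ZFS resilvers onto it.
    zpool replace tank da0 da8

    # Watch the resilver and only move on to the next disk once it finishes.
    zpool status tank

    # If autoexpand was off during the swaps, expand each disk manually afterwards.
    zpool online -e tank da8

Note that the extra capacity only shows up once every disk in the vDev has been replaced with a larger one.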


The advantage of this solution is that there is no need to replace the server, keep slots free or buy disk shelves. The disadvantage compared to adding a new vDev may be that the old disks are taken out of service, and if we have no other use for them, they just lie around. Also note that this option leaves the performance of the pool more or less unchanged, provided the difference between the old and new disks is not huge.


Before you start your labs, please remember that such an operation can always fail. Something may happen that we did not foresee: a power failure, a disk failure. Before you proceed on your own site, make sure you have copies of your TrueNAS configuration and of the data itself somewhere else.


If you would like to learn more about TrueNAS, write to us. We will tell you how it works and why it is worth it.