At the moment I have my NAS set up as a Proxmox VM, with a hardware RAID card handling six 2TB disks. My VMs run on NVMe drives, and the NAS VM handles data storage, with the RAIDed volume passed directly through to it in Proxmox. I’m running it as one large ext4 partition. It holds mostly photos, personal docs and a few films, and only I really use it. My desktop and laptop mount it over NFS, and I have restic backups running weekly to two external HDDs. It has all worked well for years.
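
The backup side is nothing fancy; the weekly job boils down to something like this (repo and source paths here are placeholders, not my actual layout):

    # Back up the NAS export to each external HDD in turn.
    restic -r /mnt/backup-hdd-1 backup /srv/nas
    restic -r /mnt/backup-hdd-2 backup /srv/nas
    # Thin out old snapshots so the drives don't fill up.
    restic -r /mnt/backup-hdd-1 forget --keep-weekly 8 --prune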

I am now getting ZFS-curious. I know I’ll need to flash the HBA to IT mode, or get another one. I’m guessing it’s best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?
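
In case it helps frame the question, my understanding is that the two options look roughly like this (pool name, VM ID and disk IDs below are placeholders, not my real ones):

    # Option A: build the pool on the Proxmox host, then share it with the VM.
    zpool create tank raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
        /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

    # Option B: hand the raw disks to the VM (ID 101 here) and run
    # zpool create inside the guest instead.
    qm set 101 -scsi1 /dev/disk/by-id/ata-DISK1
    qm set 101 -scsi2 /dev/disk/by-id/ata-DISK2
    # ...and so on for the remaining four disks.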

  • SzethFriendOfNimi · 13 days ago

    If I recall correctly, it’s important to be running ECC memory, right?

    Otherwise corrupted bits/data can cause file system issues or loss.

    • ShortN0te · 13 days ago

      You recall wrong. ECC is recommended for any server system but not necessary.

      • RaccoonBall · 13 days ago

        And if you don’t have ECC, ZFS just might save your bacon where a more basic filesystem would allow corruption.

        • Avid Amoeba · 12 days ago (edited)

          It might also save it from shit controllers and cables, which ECC can’t help with. (It has for me.)

        • conorab · 9 days ago

          I don’t think ZFS can do anything for you if you have bad memory, other than help with diagnosing it. I’ve had two machines running ZFS where the memory went bad, and every disk in the pool showed data corruption errors for the affected writes, so the data was unrecoverable. Memory was later confirmed to be the problem by a Memtest run.
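
          (If you hit the same thing, a scrub plus a status check is what surfaces it; “tank” below is a placeholder pool name.)

              # Re-read and checksum everything in the pool.
              zpool scrub tank
              # Show per-device error counters and any files known to be damaged.
              zpool status -v tank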

    • snowfalldreamland · 13 days ago

      I think ECC isn’t any more required for ZFS than for any other file system. But the idea that many people have is that if somebody goes through the trouble of using RAID and ZFS, then the data must be important, and so ECC makes sense.

      • farcaller · 13 days ago

        ECC is slightly more important for ZFS because its ARC is generally more aggressive than the usual Linux caching subsystem. That said, it’s not a hard requirement. My current NAS was converted from my old Windows box (which had apparently worked for years with bad RAM). ZFS uncovered the problem within the first two days by reporting (recoverable) data corruption in the pool. When I fixed the RAM issue and hash-checked against the old backup, all the data was good. So, effectively, ZFS uncovered memory corruption and remained resilient against it.
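
        The hash check itself was nothing ZFS-specific, roughly this shape (both paths are placeholders):

            # Checksum every file on the pool and on the old backup, then compare.
            (cd /tank/data && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/pool.sums
            (cd /mnt/old-backup && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/backup.sums
            diff /tmp/pool.sums /tmp/backup.sums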