Currently working on an Arch server for my self-hosting needs. I love Arch; in my eyes it's the perfect platform for self-hosting. There's no bloat, making it lightweight and resource-efficient. It's also very stable if you go down the LTS route and have the time and skills to head off problems before they become catastrophic.

The downsides? For someone who's a semi-noob there's a very steep learning curve. Arch is very well documented, but when you hit a problem or a brick wall it's very frustrating. My low tolerance for bullshit means I take hours- or days-long breaks from it. There are also time demands in the real world, so needless to say I've been going at it for a few weeks now.

Unraid is very appealing: nice clean interface, out-of-the-box solutions for whatever you want to do, easy NAS management… What's not to like? If it were fully open-source I would've bought into it from the start. At least once a day I think "I'm done. Sign me up, Unraid". It's taking an age to set up the Arch server; if I went for Unraid I could be self-hosting in a matter of hours. Unraid is the antithesis of Arch. Arch is for masochists.

Do you ever look at products like Unraid and think "fuck this shit, gimme some of that"? What's your version of this? Have you ever actually done it and regretted it/lived happily ever after?

  • Flamekebab@piefed.social · 2 hours ago

    Looks like I angered people by not loving ZFS. I don’t feel like being bagged on further for using it wrong or whatever.

      • Flamekebab@piefed.social · 1 hour ago

        I was trying to use it for a mirrored setup with TrueNAS and found it flaky to the point of uselessness. I was essentially told I was using it wrong because I had USB disks. It let me set everything up and gave no warnings, but after losing my test data for the fifth time (brand-new disks - that wasn't the issue) I gave up and set up a simple rsync job to mirror data between the two ext4 disks (sketched below).

        If losing power effectively wipes my data then it’s no damn use to me. I’m sure it’s great in a hermetically sealed data centre or something but if I can’t pull one of the mirrored disks and plug it into another machine for data recovery then it’s no damn good to me.
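
        A minimal sketch of the kind of mirror job described above (the mount points and schedule are hypothetical):

        ```bash
        #!/bin/sh
        # Mirror the primary ext4 disk onto the second one.
        # -a preserves permissions, timestamps, and symlinks;
        # --delete keeps the mirror an exact copy by removing
        # files that were deleted from the source.
        rsync -a --delete /mnt/disk1/ /mnt/disk2/
        ```

        Run from cron (e.g. `0 3 * * * /usr/local/bin/mirror.sh`), that gives a nightly copy on a plain ext4 disk that either machine can read on its own.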

        • JGrffn@lemmy.world · edited · 29 minutes ago

          Wait, so you built a pool using removable USB media and were surprised it didn't work? Lmao

          That’s like being angry that a car wash physically hurt you because you drove in on a bike, then using a hose on your bike and claiming that the hose is better than the car wash.

          ZFS is a low-level system meant for PCIe or SATA, not USB, which sits many layers above SATA and PCIe. rsync was the right choice for this scenario, since it's a higher-level program that doesn't care about anything other than the data itself and will work over USB, Ethernet, WiFi, etc. But you've got to understand why it was the right choice instead of just throwing shade at one of the most robust filesystems out there because it wasn't designed for your specific use case.
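
          To illustrate the layering point: rsync only needs a mounted path or an SSH endpoint, never raw block-device access, so the same command works regardless of transport (host and paths here are made up):

          ```bash
          # Local mirror between two USB-mounted filesystems:
          rsync -a --delete /mnt/usb-a/ /mnt/usb-b/

          # The same data pushed over Ethernet/WiFi via SSH:
          rsync -a --delete /mnt/usb-a/ backupbox:/srv/mirror/
          ```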

        • non_burglar@lemmy.world · 44 minutes ago

          Ah, I hear you, and sorry you had that experience. GUI controls of ZFS aren’t usually very intuitive.

          Also, ZFS assumes it has direct access to the block device, and certain USB implementations (those without UAS) use sync operations that sit somewhere between the HAL and userland. So ZFS really wants direct-attached storage; it's a caveat, to be sure.
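
          If you do ever retry it on direct-attached disks, the usual advice is to build the pool from stable /dev/disk/by-id paths. A rough sketch (the device names are placeholders):

          ```bash
          # Create a two-way mirror from direct-attached SATA disks,
          # referenced by ID so device reordering can't break the pool:
          zpool create tank mirror \
            /dev/disk/by-id/ata-DISK_SERIAL_1 \
            /dev/disk/by-id/ata-DISK_SERIAL_2

          # Verify both halves of the mirror are ONLINE:
          zpool status tank
          ```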

          If you ever change your mind, https://klarasystems.com/zfs/ has a ton of reading and tutorials on ZFS.