Currently working on an Arch server for my self-hosting needs. I love Arch; in my eyes it’s the perfect platform for self-hosting. There is no bloat, making it lightweight and resource-efficient. It’s also very stable if you go down the LTS route and have the time and skills to head off problems before they become catastrophic.

The downsides: for someone who is a semi-noob there is a very steep learning curve. Arch is very well documented, but when you hit a problem or a brick wall it’s very frustrating. My low tolerance for bullshit means I take hours- or days-long breaks from it. There are also time demands in the real world, so needless to say I’ve been going at it for a few weeks now.

Unraid is very appealing: nice clean interface, out-of-the-box solutions for whatever you want to do, easy NAS management… What’s not to like? If it were fully open-source I would’ve bought into it from the start. At least once a day I think “I’m done. Sign me up, Unraid.” It’s taking an age to set up the Arch server. If I went for Unraid I could be self-hosting in a matter of hours. Unraid is the antithesis of Arch. Arch is for masochists.

Do you ever look at products like Unraid and think “fuck this shit, gimme some of that”? What is your version of this? Have you ever actually done it and regretted it/lived happily ever after?

  • Flamekebab@piefed.social · 3 hours ago

    I was trying to use it for a mirrored setup with TrueNAS and found it to be flaky to the point of uselessness. I was essentially told that I was using it wrong because I had USB disks. It let me set everything up with no warnings, but after losing my test data for the fifth time (brand-new disks, so that wasn’t the issue) I gave up and set up a simple rsync job to mirror data between the two ext4 disks.
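
    For anyone who wants the same escape hatch, here’s a minimal sketch of that rsync job, assuming the two disks are mounted at /mnt/disk1 and /mnt/disk2 (both paths hypothetical):

    ```sh
    #!/bin/sh
    # Hypothetical mount points - adjust to your own layout.
    SRC=/mnt/disk1/
    DST=/mnt/disk2/

    # -a preserves ownership, permissions, and timestamps; --delete keeps
    # the mirror exact by removing files that are gone from the source.
    rsync -a --delete "$SRC" "$DST"

    # Run periodically from cron, e.g. hourly:
    # 0 * * * * /usr/local/bin/mirror.sh
    ```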

    If losing power effectively wipes my data then it’s no damn use to me. I’m sure it’s great in a hermetically sealed data centre or something but if I can’t pull one of the mirrored disks and plug it into another machine for data recovery then it’s no damn good to me.

    • Andres@social.ridetrans.it · 1 hour ago

      @Flamekebab @non_burglar Sounds like SnapRAID might be a better fit for your needs. Since it runs on top of the filesystem, if you lose a disk you can still access files from the other disk(s). It’s better than rsync in that it provides regular data validation (‘snapraid scrub’ once per week or so). It’s designed more for a RAID 5-style parity setup than for mirroring (RAID 1), however.
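
      A rough sketch of what that layout looks like, with hypothetical mount points (SnapRAID wants a dedicated parity disk at least as large as the biggest data disk):

      ```
      # /etc/snapraid.conf - illustrative only; all paths are hypothetical
      parity /mnt/parity/snapraid.parity
      content /var/snapraid/snapraid.content
      content /mnt/disk1/snapraid.content
      data d1 /mnt/disk1/
      data d2 /mnt/disk2/
      ```

      Then ‘snapraid sync’ after data changes, plus a weekly ‘snapraid scrub -p 12’ from cron, gives you the regular validation mentioned above.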

    • non_burglar@lemmy.world · 3 hours ago

      Ah, I hear you, and sorry you had that experience. GUI controls of ZFS aren’t usually very intuitive.

      Also, ZFS assumes it has direct access to the block device, and certain USB implementations (not UAS) use sync operations that sit between the HAL and userland somewhere. So ZFS likes direct-attached storage; it’s a caveat, to be sure.
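
      For reference, the direct-attached setup itself is tiny. The device IDs below are hypothetical, but using /dev/disk/by-id/ paths keeps the pool stable if drives get re-enumerated:

      ```sh
      # Hypothetical IDs - list your own with: ls -l /dev/disk/by-id/
      zpool create tank mirror \
        /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1 \
        /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL2

      # A mirror with one disk missing still imports (degraded) on another
      # machine, which covers the pull-one-disk recovery case.
      zpool status tank
      ```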

      If you ever change your mind, https://klarasystems.com/zfs/ has a ton of reading and tutorials on ZFS.

      • Flamekebab@piefed.social · 2 hours ago

        I found the whole experience tremendously frustrating and as you can see from some of the other responses and votes, the community does not consider that to be a reasonable reaction.

        Which is why I bailed on the whole thing. I don’t need the grief.

    • JGrffn@lemmy.world · 3 hours ago (edited)

      Wait, so you built a pool using removable USB media and were surprised it didn’t work? Lmao

      That’s like being angry that a car wash physically hurt you because you drove in on a bike, then using a hose on your bike and claiming that the hose is better than the car wash.

      ZFS is a low-level system meant for PCIe or SATA, not USB, which is many layers above SATA and PCIe. Rsync was the right choice for this scenario, since it’s a higher-level program that doesn’t care about anything other than the data and will work over USB, Ethernet, wifi, etc., but you gotta understand why it was the right choice instead of just throwing shade at one of the most robust filesystems out there just because it wasn’t designed for your specific use case.

      • Flamekebab@piefed.social · 2 hours ago

        I was told a tool was a resilient approach to drive management. It wasn’t, outside of a very specific set of circumstances.

        Your analogy not only makes no sense but is exactly why I’m hostile about this. Because I’m not an expert on the specific limitations of a niche hard disk technology, I must be a fucking moron or something, and ridicule is clearly an appropriate reaction.

        My idea of a useful tool for dealing with hard disks is not one that loses its shit when a hard disk is temporarily disconnected. That is not a ridiculous assumption. If that’s an issue then that should be made abundantly clear.

        I assigned drives based on serial number and passed them through to TrueNAS and it couldn’t handle that reliably. I do not think I was asking for the moon on a stick.

        The USB interface is a temporary measure; I was going to move the disks to an internal setup after testing. But if it can’t handle something that basic then like fuck am I trusting it with something like migrating from USB SATA to internal SATA.

        If I need both disks to access mirrored data then it’s as useful as a chocolate teapot.