• 3 Posts
  • 92 Comments
Joined 2 years ago
Cake day: December 12th, 2023


  • My web-facing server has just enough packages installed to (kinda securely) host Caddy and Kiwix Docker containers behind my domain name and to make a comfortable work environment over SSH. The Pi for my Home Assistant Docker container has even less because it’s locked down to just my local network.

    I also wrote my own install scripts so reinstalling everything and getting it back to a running state would take about 15 minutes for each device.

    And I also wrote my own backup/restore scripts that evolved over 3/4 of a year. I use them often so I have confidence in those scripts.

    I personally don’t worry too much. I have multiple ways of dealing with issues for something that’s a hobby to me, which is why I stick to simplicity.

    I’m sure this is a thing for people to worry about when dealing with more complex setups. I just wanna vibe out in my tiny corner of the internet.



  • I’ve read about that and I already have that in my notes as well.

    It doesn’t really affect my needs because my ISP blocks incoming traffic on those ports anyways. Also, I’m choosing not to use a tunnel at the moment, so I’ll be using a higher port regardless.

    The last time I asked about it, a few people seemed to agree it was something to do with the firewall settings. That seems most likely since I was able to connect when I disabled my firewall. I’m not a fan of working with iptables. The language for that type of networking is gibberish to me.

    I had also tried going from Docker Compose to rootful Podman Compose and ran into the same issue. I’m trying to move away from Podman Compose in the future, just taking it in steps.
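
    For anyone debugging something similar, a couple of generic checks (nothing specific to my setup) show whether the firewall is the layer eating the traffic:

    # Confirm something is actually listening on the expected port
    ss -tlnp
    # Dump the current iptables rules; Docker and rootful Podman insert
    # their own chains here without telling tools like UFW
    iptables -L -n -v --line-numbers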


  • Yeah, I mainly just want to move to more open projects. When I first started, everyone kept suggesting Cloudflare. After half a year using their service, I just felt icky the entire time.

    In the past couple of months I was able to move away, choosing instead to protect myself by learning how to harden my server and by hiding it behind multiple layers of obscurity.

    With my current setup, the only site traffic has been my own, and my custom SSH port only gets hit by bots about 3-10 times a week according to my logs. Only time will tell how well my layers of obscurity hold up, but so far they satisfy my needs better than I was expecting.

    Once I get Podman into a state I like, I’ll pretty much be all open source, and all I’ll have to do for myself is stay in maintenance mode unless I care to add a new service. I like to keep things simple, so I don’t normally go crazy adding new services anyways.


  • Thank you for the offer. I still need a bit more time to experiment and zero in on the issue again. Fortunately my setup is quite simple and the only bottleneck will be Caddy.

    I basically run Caddy, which routes to a statically generated blog, a simple file server page and a Kiwix instance. I’m mostly making a self-hosted reference site of Linux and scripting materials.
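
    Roughly, the Caddyfile looks something like this (domains and ports here are placeholders, not my real ones):

    # Caddyfile sketch: three sites behind one Caddy instance
    blog.example.com:12345 {
        root * /srv/blog
        file_server
    }

    files.example.com:12345 {
        root * /srv/files
        file_server browse
    }

    kiwix.example.com:12345 {
        # Kiwix container listens on loopback only
        reverse_proxy 127.0.0.1:8080
    }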

    One day I may add a Forgejo instance, but currently my entire workflow exists around rsync. I’m happy just having my single-file scripts hosted as text files and don’t really need the power of git. At least not at the moment.


  • I’ve been making another attempt to replace Docker with Podman. The issue is I can’t connect to my server through a web browser. I think it’s a firewall issue.
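
    A sanity check that helps me split “container problem” from “firewall problem” (the port here is an example):

    # Does the container think the port is published?
    podman ps --format '{{.Names}} {{.Ports}}'
    # Does it answer locally? If this works but remote browsers fail,
    # the container is fine and the firewall is the suspect.
    curl -I http://127.0.0.1:8080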

    Networking and network troubleshooting are a bit confusing for me, and that’s my least favourite part of self-hosting. Turns out I actually enjoy writing scripts more, especially the challenge of writing POSIX scripts.

    If I can figure it out, I’ll probably write a guide for setting up Podman and Caddy on Alpine Linux since there isn’t a lot of recent information out there from what I found in my searches so far.


  • I had what appeared to be two Orb Weavers living in my garden. One amongst the beans growing up my deck railing (https://lemmy.dbzer0.com/post/54351215) and another that lived in my tomato plant leaves (https://lemmy.dbzer0.com/post/53053891) in my other garden.

    I accidentally damaged the web in the tomato plants and after that I made every effort to not damage it any further. It made for some interesting acrobatics trying to pick tomatoes from behind the web.

    Both spiders got crazy fat because I made an effort to plant lots of pollinator-attracting plants, as well as letting local “weeds” grow with my crops. Both spiders were well fed.

    I was very happy to have both spiders plus an additional, very fat and fluffy bunny living in my gardens this year.

    Nice to see more spider friends out and about :)


  • I use rsync for many of the reasons covered in the video. It’s widely available and has a long history. To me that feels important because it’s had time to become stable and reliable. Using Linux is a hobby for me so my needs are quite low. It’s nice to have a tool that just works.

    I use it for all my backups, for moving those backups to off-network locations, and for file/folder transfers on my own network.

    I even made my own tool (https://codeberg.org/taters/rTransfer) to simplify all my rsync commands into readable files because rsync commands can get quite long and overwhelming. It’s especially useful for chaining multiple rsync commands together to run under a single command.
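
    The idea is roughly this (a made-up sketch, not rTransfer’s actual file format; the paths and hosts are placeholders):

    #!/bin/sh
    # Run several long rsync jobs under one short command
    set -e
    rsync --archive --verbose --human-readable --partial --progress \
        /home/user/documents/ /mnt/backup/documents/
    rsync --archive --verbose --human-readable --partial --progress \
        -e 'ssh -p 8022' u0_a205@192.168.40.210:/storage/emulated/0/sync/ \
        /mnt/backup/phone/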

    I’ve tried other backup and syncing programs and I’ve had bad experiences with all of them. Other backup programs have failed to restore my system. Syncing programs constantly stopped working and I got tired of always troubleshooting. Rsync, when set up properly, has given me far fewer headaches.


  • I don’t have root access on my phone but I still copy backups of my media and apps that export data to accessible files.

    I keep my process very simple using Termux with the rsync, openssh and termux-services packages.

    I created a dedicated folder on my phone for syncing between phone and computer, called sync, but you can change this for your needs.

    From a fresh Termux install, the setup should look something like the following:

    # Update package list and packages
    pkg update && pkg upgrade
    # Install required packages
    pkg install rsync openssh termux-services
    # Setup Termux's access to your phone's files
    termux-setup-storage
    # Make the required folder
    mkdir ~/storage/shared/sync/
    cd ~/storage/shared/sync/
    # Automatically start your SSH server when you open Termux
    sv-enable sshd
    
    • Get your phone’s username:
    ~ $ whoami
    u0_a205
    
    • Optional: Set up a password with the passwd command (as far as I know, this only matters if you log in with a password instead of keys)

    A quick note: Termux on Android has a file system layout quite different from a typical computer, so file and directory paths can get quite long. The pwd command shows /data/data/com.termux/files/home/storage/shared/sync for my sync folder.

    This can be made simpler by using the realpath command. realpath /data/data/com.termux/files/home/storage/shared/sync then shows /storage/emulated/0/sync as a result. If you’re working in the CLI, this can make your commands much easier to read.

    Now you can start to build your rsync command to transfer your files. When setting up an rsync command, ALWAYS use the --dry-run option first. This simulates the “transfer” without any files being moved.

    • From my computer (data transfer direction: Phone -> Computer):
    rsync --dry-run --archive --verbose --human-readable --partial --progress --compress -e 'ssh -p 8022' u0_a205@192.168.40.210:/storage/emulated/0/sync/ /home/computer_username/backup/
    
    • From my phone (data transfer direction: Phone -> Computer):
    rsync --dry-run --archive --verbose --human-readable --partial --progress --compress /storage/emulated/0/sync/ computer_username@192.168.40.205:/home/computer_username/backup/
    

    Explanation:

    • --archive preserves several file attributes
    • --verbose --human-readable --partial --progress creates a readable output to see what is happening
    • --compress compresses the data during the actual transfer (good for over a network)
    • -e 'ssh -p 8022' tells rsync to connect over SSH on port 8022, the port sshd listens on in Termux
    • u0_a205@192.168.40.210:/storage/emulated/0/sync/ and computer_username@192.168.40.205:/home/computer_username/backup/ are how rsync identifies remote folders. Basic format is <username>@<remote IP address>:/path/to/folder/
    • /home/computer_username/backup/ and /storage/emulated/0/sync/ are the local folders, relative to the machine the rsync command is being run from.

    To reverse the direction of a transfer relative to the machine you are running on, simply swap the remote folder and local folder in the command. For example, running both directions from my computer:

    # Direction: Phone -> Computer
    rsync --dry-run --archive --verbose --human-readable --partial --progress --compress -e 'ssh -p 8022' u0_a205@192.168.40.210:/storage/emulated/0/sync/ /home/computer_username/backup/
    
    # Direction: Computer -> Phone
    rsync --dry-run --archive --verbose --human-readable --partial --progress --compress -e 'ssh -p 8022' /home/computer_username/backup/ u0_a205@192.168.40.210:/storage/emulated/0/sync/
    

    In order to actually transfer files, remove the --dry-run option from the previous rsync commands. The output in your terminal will show additional information regarding transfer status.

    Additionally, you can add the --delete option to the rsync command. This makes the destination mirror the source, file by file: any file in the destination folder that does not exist in the source folder gets deleted.

    A command WITHOUT --dry-run and WITH --delete would look like the following (CAUTION: THIS CAN DELETE FILES IF UNTESTED):

    rsync --delete --archive --verbose --human-readable --partial --progress --compress -e 'ssh -p 8022' u0_a205@192.168.40.210:/storage/emulated/0/sync/ /home/computer_username/backup/
    
    

    I personally transfer my backups manually onto an encrypted external drive that I decrypt by hand. /u/emhl@feddit.org has a suggestion for automated encrypted backups if that’s closer to your needs.


  • Yeah, a few weeks ago I achieved my state of “secure” for my server. I just happened to notice a dramatic decrease in activity, and that’s what prompted this question that’s been sitting in the back of my mind for weeks now.

    I do think it’s important to talk about though, because there seems to be a lack of discussion about security in general for self-hosting. So many guides focus on getting services up and running as fast as possible but don’t give security much thought.

    I just happened to gain more of an interest in the security side of self-hosting than in hosting actual services. My risk from self-hosting is extremely low, so I’ve reached a point of diminishing returns on security, but the mind is still curious and wants to know more.

    I might write up a guide/walkthrough of my setup in the future but that’s low priority. I have some other not self hosting related things I want to focus on first.


  • I think I am already doing that. My Kiwix Docker container’s port is set to 127.0.0.1:8080:8080, and my reverse proxy is only open on port 12345 but will forward kiwi.example.com:12345 to port 8080 on the local machine.
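
    In compose terms, the binding looks something like this (the image name and service layout are assumptions, not my exact file):

    services:
      kiwix:
        image: ghcr.io/kiwix/kiwix-serve
        ports:
          # Loopback-only binding: reachable by the local reverse proxy,
          # never published on outside interfaces
          - "127.0.0.1:8080:8080"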

    I’ve learned that Docker likes to manipulate iptables without notifying other programs like UFW, so I have to be explicit in making sure my Docker containers announce themselves to the local machine only.

    I’ve also used this guide to harden Caddy, adjusted to my needs. I took the advice from another user and use wildcard domain certs instead of issuing certs for each subdomain; that way only the wildcard domain is visible when I look it up at https://crt.sh/, and I’m not advertising which subdomains I actually use.



  • My ISP blocks incoming data on common ports unless you get a business account. That’s why I used Cloudflare’s tunnel service initially. My plans for the domain name I currently own have changed, and I don’t feel comfortable giving more power and data to an American tech company, so this is my alternative path.

    I use Caddy as my reverse proxy so I only have one uncommon port open. My plans changed from many people accessing my site to just me and a few select friends, which does not need a business account.


  • I get that.

    I was generally (in my head) speaking about all my devices. If someone stole my computer, the full disk encryption is more of a deterrent than a guarantee that my data is fully secured. My hope is that the thief is more likely to wipe it than to access it. If I catch the attention of someone who actually wants my data, I have bigger issues to worry about than the security of my electronic devices.


  • I agree with the last point. I only mentioned it because I don’t really know what other setting in my sshd config is hiding my SSH port from nmap scans. That just happened to be the last change I remember making before running an nmap scan again and finding my SSH port no longer showed up.

    Accessing SSH still works as expected with my keys and for my use case, I don’t believe I need an additional passphrase. Self hosting is just a hobby for me and I am very intentional with what I place on my web facing server.

    I want to be secure enough but I’m also very willing to unplug and walk away if I happen to catch unwanted attention.


  • Thanks for the insight. It’s useful to know what tools are out there and what they can do. I was only aware of nmap before, which I use to make sure the only open ports are the ones I want open.

    My web-facing device only serves static sites and a file server with non-identifiable data that I feel indifferent about being on the internet. No databases, and no stress if it gets targeted or goes down.

    Even then, I still like to know how things work. Technology today is built on so many layers of abstraction, it all feels like an infinite rabbit hole now. It’s hard to look at any piece of technology as secure these days.


  • I use a different port for SSH and I also use authorized keys. My sshd is set up to only accept keys, with no passwords and no keyboard-interactive input. Also, when I run nmap on my server, the SSH port does not show up. I’ve never been too sure how hidden the SSH port is beyond the nmap scan, but I’ve just assumed it would be discovered somehow if someone were determined enough.
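
    The relevant part of the sshd config looks something like this (the port number is a made-up example):

    # /etc/ssh/sshd_config (excerpt)
    Port 54321
    PubkeyAuthentication yes
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin no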

    In the past month I renamed my devices and accounts to things less obvious. I also took the suggestion from someone in this community and set up my TLS to use wildcard domain certs, so my subdomains aren’t advertised in the public Certificate Transparency logs that Certificate Authorities feed. I simply don’t use the base domain name anymore.
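
    In Caddy, that pattern looks roughly like the following (hypothetical names; a wildcard cert needs the ACME DNS-01 challenge, so a DNS provider plugin has to be compiled in and configured in the commented tls block):

    *.example.com {
        # tls {
        #     dns <your_provider> <credentials>
        # }

        @kiwix host kiwix.example.com
        handle @kiwix {
            reverse_proxy 127.0.0.1:8080
        }

        # Unmatched subdomains get dropped instead of advertised
        handle {
            abort
        }
    }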


  • Early on, when I was learning self-hosting, I lost my work and progress a lot. Through all that, I learned how to make a really solid backup/restore system that works consistently.

    Each device I own has its own local backup. I copy those backups to a partition on my computer dedicated to backups, and that partition gets copied again to an external SSD which can be disconnected. Restoring from the external SSD to my computer’s backup partition to each device all works to my liking. I feel quite confident with my setup. It took a lot of failure to gain that confidence.

    I also spent time hardening my system. I went through this Linux hardening guide and applied what I thought would be appropriate for my web-facing server. Since the guide seems aimed more at a personal computer (I think), the majority of it didn’t apply to my use case. I also use Alpine Linux, so there was even less I could do for my system, but it was still helpful in understanding how much effort it takes to secure a computer.


  • That’s been my main goal throughout securing my personal devices, including my web-facing server: to make things as inconvenient as possible for potential outside interference, even if it means simply wasting their time.

    With how complex computers and other electronic devices have become, I never expect anything I own to be 100% secure even if I take steps I think will make me secure.

    I’ve been on the internet long enough to have built a habit of obscuring my online and digital presence. It won’t save me, but it makes me less of a target.