I know for many of us every day is selfhosting day, but I liked the alliteration. Or do you have fixed dates for maintenance and tinkering?
Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.
This post is proudly sent from my very own Lemmy instance, which has been running on my home server for about ten days now. So far, it’s been a very nice endeavor.
what’s maintenance? is that when an auto-update breaks everything and you spend an entire weeknight looking up tutorials because you forgot what you did to get this mess working in the first place?
No, you just keep updating until it’s fixed again.
I’ve had this happen twice in two weeks since installing Watchtower and have since scheduled it to only run on Friday evening…
Nothing better than wrecking your weekend evening just trying to watch a movie on a broken Jellyfin server :'D
I do love how little maintenance is needed until you have to re-learn everything you forgot
Yes
I know you’re half joking. But nevertheless, I’m not missing this opportunity to share a little selfhosting wisdom.
Never use auto update. Always schedule a time to do it manually.
Virtualize as many services as possible and take a snapshot or backup before updating.
And last, documentation, documentation, documentation!
Happy selfhosting Sunday.
I think auto update is perfectly fine, just check what kind of versioning the devs are using and pin the part of the version that would introduce breaking changes.
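For illustration only (nothing from the thread, and the version numbers are made up): the “pin the breaking part” idea boils down to a check like this, assuming the project follows semantic versioning so breaking changes only land in a new major version.

```python
# Sketch of the pinning rule above, assuming semantic versioning:
# auto-update freely within the pinned major version, stop at a major bump.

def safe_to_auto_update(pinned_major: str, candidate_tag: str) -> bool:
    return candidate_tag.split(".")[0] == pinned_major

print(safe_to_auto_update("16", "16.4.1"))  # True  - minor/patch, let the auto-updater take it
print(safe_to_auto_update("16", "17.0"))    # False - potential breaking changes, review first
```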
I just like it when things break during scheduled maintenance and I have time to fix them or can roll back with minimal data loss, instead of an auto update forcing me to spend a weeknight fixing it or running a broken system until I have the time.
You can have the best of both worlds: scheduled auto updates at a time that usually works for you.
With growing complexity, there are so many components to update, it’s too easy to miss some in my experience. I don’t have everything automated yet (in fact, most updates aren’t) but I definitely strive towards it.
In my experience, the more complex a system is, the more auto updates can mess things up and make troubleshooting a nightmare. I’m not saying auto updates can’t be a good solution in some cases, but in general I think it’s a liability. Maybe I’m just at the point where I want my setup to work without the risk of it breaking unexpectedly and having to tinker with it when I’m not in the mood. :)
There’s a fine line between “auto-updates are bad” and “welp, the horribly outdated and security hole riddled CI tool or CMS is how they got in”. I tend to lean toward using something like renovate to queue up the updates and then approve them all at once. I’ve been seriously considering building out a staging and prod env for my homelab. I’m just not sure how to test stuff in staging to the point that I’d feel comfortable auto promoting to prod.
I recently set up Music Assistant and have been trying to make it work across my VLANs with my ESP32 devices. It has been slow going. Nothing has the level of logging required to easily debug the issues I’ve encountered, but I’m slowly working through it all.
I’m patiently (read: impatiently) awaiting the arrival of an Aoostar WTR Pro and the components to build my first NAS and full Arr stack for Linux ISOs.
I completed a proof of concept on a Pi 5 a month ago to learn the ropes, and I can’t wait to get my hands dirty with something more real!
I’ll take any advice anyone throws my way :D and thanks to this community for the learning and inspiration since I joined Lemmy!
I’ve been hosting Emby forever (and the requisite software to acquire content 😉).
Recently I added Nextcloud to facilitate cutting several Google products out of my life. Combined with a few FOSS apps, it’s currently doing the job of Drive (storage) and Keep (notes), and I’m planning to move my contacts and calendar this week.
I’m doing that as well (mostly done, except for some tinkering and optimizations). It’s my third time setting up Nextcloud, but this time it’s for real.
I got a new job, and the group chat is on WhatsApp, so I’m looking into running a Synapse server with a bridge to it. I really don’t want to have to use Meta’s apps on my phone.
From what I’ve read so far, it seems like it’s going to be the most convoluted install process I’ll have encountered in my self-hosting journey. I’m excited to tackle it, but also a bit overwhelmed. Which is why I’ve been putting it off :P
It was a huge PITA to get it running, but I have it.
One thing about the WA bridge is that Element won’t let me assign display names or look up the contact’s number, so the people I’m chatting with don’t have names, just “their number (WA)”.
Holy crap, you’re me. Except I plan on using slidge-whatsapp.
Try conduwuit instead of Synapse if you get stuck. For me, it was really simple to install and the dev is really nice.
I wrote myself a new Python script for a Palworld server I run. I wanted to figure out a generic way to track active connections without running something in front of the daemon. That’s easy to do for TCP, but since UDP has no concept of an established connection, the regular tools wouldn’t work. Then I realized I could use conntrack to get at the Linux firewall’s connection tracking data, which works outside of TCP/UDP concepts and maintains its own active-connection state based on timeouts, which is what I was going to do anyway. Now I can issue SIGSTOP/SIGCONT to keep buildings from degrading on the server when nobody’s online to deal with it, along with saving the CPU resources of an empty game server. Rather niche project, but I figured I’d publish it anyway. https://github.com/sugoidogo/pausepal
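For anyone curious how that looks in practice, here’s a minimal sketch of the same approach (not the actual pausepal code; the port, process name and poll interval are example values, with 8211 assumed as Palworld’s default UDP port). It needs root, or at least permission to read the conntrack table and signal the server process.

```python
#!/usr/bin/env python3
# Minimal sketch of the approach described above, not the pausepal code itself:
# count tracked UDP flows to the game port via conntrack(8), then pause/resume
# the server with SIGSTOP/SIGCONT. Requires conntrack-tools and pgrep.
import os
import signal
import subprocess
import time

GAME_PORT = 8211            # assumed Palworld default UDP port
PROCESS_NAME = "PalServer"  # example process name to look up via pgrep
POLL_SECONDS = 60           # example poll interval

def tracked_flows() -> int:
    """Count conntrack entries whose destination port is the game port."""
    out = subprocess.run(["conntrack", "-L", "-p", "udp"],
                         capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if f"dport={GAME_PORT} " in line)

def server_pid() -> int:
    """Find the game server's PID by (example) process name."""
    out = subprocess.run(["pgrep", "-f", PROCESS_NAME],
                         capture_output=True, text=True).stdout
    return int(out.split()[0])

paused = False
while True:
    if tracked_flows() == 0 and not paused:
        os.kill(server_pid(), signal.SIGSTOP)  # nobody online: freeze the server, no decay, no CPU
        paused = True
    elif tracked_flows() > 0 and paused:
        os.kill(server_pid(), signal.SIGCONT)  # someone connected: wake it back up
        paused = False
    time.sleep(POLL_SECONDS)
```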
Looking to install Immich, BitDefender Password Manager and YouTube downloader on the NAS this week.
I’m working on my first Kubernetes cluster. I’m trying to set the systems up with NixOS. I can get a kubelet and a control plane running, but I’m getting permission errors when trying to use kubectl rootless on the system running the control plane. I think I figured out which file I need to change; now I just want to record that change in my configuration.nix.
I’m curious how this goes for you. I run all my machines on NixOS except my k8s cluster which is Talos for now. I have been thinking of switching to Nix for that too.
I followed the NixOS wiki for Kubernetes, and creating the “master” kubelet is super easy when you set easyCerts = true. The problem is, it spits out files into /var/lib/kubernetes/secrets/ that are owned by root. Specifically, the cluster-admin.pem file. If I want to push commands to the cluster using kubectl, I have to elevate to a root shell. I could just chmod or chown the file, but that seems like a security risk.
Now, I’m not familiar with k8s at all. This is my first go-through, so I could be doing something wrong or missing a step. I saw something about role-based security but I haven’t jumped down that rabbit hole yet. Any tips for running kubectl without root?
nixos doesn’t play well with rootless containers in my experience
Ah sorry to hear that. Did you find something better that works for you? I’m open to suggestions :D
Not who you asked but I moved to Talos Linux for k8s
OciContainers just added rootless mode for Podman. I was planning on playing a bit more with it, but I’m quite busy and haven’t found the time recently. For the time being I run everything as rootful, since I don’t expose stuff directly to the internet.
I might respond here, if I don’t forget, once I’ve experimented a bit more.
Migrating from proxmox to incus, continued.
- got a manually-built wireguard instance rolling and tested, it’s now “production”
- setting up and testing backups now
- going to export some NFS and iSCSI shares to host video files, to test playback over the network from Jellyfin
- building ansible playbooks to rebuild instances
- looking into ansible to add system monitoring, should be easy enough
Lots of fun, actually!
What’s your motivation for the switch? Second time in a short while I’ve heard about people migrating to incus.
I’ve moved to all containers and I’m gradually automating everything. The metaphor for orchestration and provisioning is much clearer in Incus than it was in LXD, and it makes way more sense than Proxmox.
Proxmox is fine, I’ve used it for going on 8 years now, and I’m still using it, in fact. But it’s geared toward a “safe” view of abstraction that makes LXC containers seem like virtual machines, and they absolutely aren’t; they are much, much more flexible and powerful than VMs.
There are also really annoying deficiencies in proxmox that I’ve taken for granted for a long time as well:
- horrible built-in resource usage metrics. I’m happy to run my InfluxDB/Grafana stack to monitor things, but users should be able to access those metrics locally and natively, especially if they’re going to be exported by the default metrics export anyway.
- weird hangovers from early Proxmox versions around IO delay. Proxmox still makes users chase down iostat rabbit holes to figure out why io_wait and “IO delay” are not the same metric, and why the root cause is almost always disk, yet Proxmox shows the io_wait stat as if it could be “anything”
- device passthrough is a solved problem, even for LXC, yet the bulk of questions from noobs is about exactly that. Passthrough is solved on so many platforms; why Proxmox just doesn’t offer it as a GUI option for LXC is baffling.
- no install option for ZFS-on-root on a single disk (why???)
- etc
Ultimately, I have more flexibility with a vanilla bookworm install with incus.
Thanks a lot for your response! I too was a bit misled by the way Proxmox presents LXCs, but I’m mostly on VMs and haven’t explored LXCs further so far.
No worries. And don’t misunderstand: I think Proxmox is great, I’ve simply moved on to a different way of doing things.
I finally got IPv6 working in Docker Swarm…by moving from Docker Swarm to regular Docker.
Traefik now properly gets IPv6 addresses and forwards them to the backend.
What’s the big benefit of moving to IPv6 for a LAN? Just wondering if there are any other benefits beyond more addresses. My UniFi kit can convert us to IPv6, but I’m hesitant without knowing what devices it will break.
Copying from an older comment of mine:
IPv6 is pretty much identical to IPv4 in terms of functionality.
The biggest difference is that there is no more need for NAT with IPv6 because of the sheer amount of IPv6 addresses available. Every device in an IPv6 network gets their own public IP.
For example: I get 1 public IPv4 address from my ISP but 4,722,366,482,869,645,213,696 IPv6 addresses. That’s a number I can’t even pronounce and it’s just for me.
There are a few advantages that this brings:
- Any client in the network can get a fresh IP every day to reduce tracking
- It is pretty much impossible to run a full network scan on this amount of IP addresses
- Every device can expose their own service on their own IP (For example: You can run multiple web servers on the same port without a reverse proxy or multiple people can host their own game server on the same port)
There are some more smaller changes that improve performance compared to IPv4, but it’s minimal.
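Quick back-of-the-envelope numbers for the above (plain Python; the scan rate is just an example figure, and the /56 delegation is the prefix size the address count quoted above corresponds to):

```python
# Rough arithmetic behind the claims above. The probe rate is an example
# figure, not a measurement.

prefix_len = 56                       # a /56 delegation from the ISP
addresses = 2 ** (128 - prefix_len)   # -> 4,722,366,482,869,645,213,696
print(f"/{prefix_len} delegation: {addresses:,} addresses")

# Why a full network scan is hopeless: even one /64 (a single LAN segment)
# at a very optimistic probe rate takes geological time to enumerate.
lan_addresses = 2 ** 64
probes_per_second = 1_000_000
years = lan_addresses / probes_per_second / (60 * 60 * 24 * 365)
print(f"Scanning one /64 at {probes_per_second:,} probes/s: ~{years:,.0f} years")
```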
> My UniFi kit can convert us to IPv6 but I’m hesitant without knowing what devices it will break.
You don’t usually “convert” to IPv6 but run in dual stack, with both IPv4 and IPv6 working simultaneously. Make sure your ISP supports IPv6 first, there is little use to only run IPv6 internally.
Very helpful, thanks for digging that up for me.
Fumbling around with k3s to get my toes into deploying a Kubernetes cluster from scratch for the first time ever. No real long term usage planned, just some testing to gather experience.
Finally upgrading my Plex server from Ubuntu 22.04 to 24.04! I’ve been putting it off out of habit, as I always wait for the *.1 releases but I’ve done several of these for clients and every single one went flawlessly. But I still waited it out.
Also thinking about switching my ext4 mirrored softRAID to ZFS, since Ubuntu has the only acceptable ZFS implementation outside of UNIX proper (Ubuntu’s is in-kernel, everyone else uses kernel modules, which I hate). But that’s going to be extra work I may not be in the mood for. But damn, would compression and deduplication be nice! So, still a maybe.
That is one thing I still need to do: upgrade my Ubuntu server from 22.04 to 24.04. Last time I tried this I noticed many Python packages were missing or failing, so I reverted to the backup. Maybe now is the time to do the switch and iron out whatever kinks are left afterwards.
Wait, you mean you host Plex servers for clients? Or that you work with Ubuntu in general? As for the ZFS thing, it doesn’t really matter whether it’s in-kernel or not; at the end of the day they all work the same. I’m using ZFS on my Arch machine, for example, and everything works just fine (DKMS). ZFS is super easy in general, you should definitely try it.
I’m integrating my Mac mini (running Asahi Linux) into my server setup. It’s slow going as I also have to move some data around so I can repurpose some hard drives.
Finally set up Synology Surveillance Station and got my local cameras all hooked in with motion events. Very swish.
Attempted and failed to set up some sort of fail2ban between my Cloudflared container and the website I host at home.
I run everything off my gaming rig, so maintenance is kinda already a part of it.
I just don’t really look forward to the day I need to reinstall :p