I know for many of us every day is selfhosting day, but I liked the alliteration. Or do you have fixed dates for maintenance and tinkering?

Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.

This post is proudly sent from my very own Lemmy instance, which has been running on my home server for about ten days now. So far, it’s been a very nice endeavor.

  • metaStatic · 8 points · 1 month ago

    what’s maintenance? is that when an auto-update breaks everything and you spend an entire weeknight looking up tutorials because you forgot what you did to get this mess working in the first place?

    • @IronKrill@lemmy.ca · 1 point · 30 days ago

      I’ve had this happen twice in two weeks since installing Watchtower and have since scheduled it to only run on Friday evening…
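      For anyone else who wants the same guardrail: Watchtower reads a cron-style schedule from an environment variable. A sketch of a compose service (times and options here are illustrative, not a recommendation):

      ```yaml
      services:
        watchtower:
          image: containrrr/watchtower
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
          environment:
            # Six-field cron spec (seconds first): 20:00 every Friday
            - WATCHTOWER_SCHEDULE=0 0 20 * * FRI
            # Prune old images after updating
            - WATCHTOWER_CLEANUP=true
      ```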

      • @Appoxo@lemmy.dbzer0.com · 1 point · 30 days ago

        Nothing greater than crashing your weekend evening just trying to watch a movie on a broken jellyfin server :'D

    • @daddycool@lemmy.world · 10 points · edited · 1 month ago

      I know you’re half joking. But nevertheless, I’m not missing this opportunity to share a little selfhosting wisdom.

      Never use auto-update. Always schedule updates and run them manually.

      Virtualize as many services as possible and take a snapshot or backup before updating.

      And last, documentation, documentation, documentation!

      Happy Selfhosting Sunday.

      • @tofu@lemmy.nocturnal.garden (OP) · 3 points · 1 month ago

        I think auto update is perfectly fine; just check what kind of versioning the devs are using, and pin the part of the version that would introduce breaking changes.
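        For an image that follows semver, that pinning can look like this in a compose file (image and tag are only an example, assuming the project publishes major.minor tags):

        ```yaml
        services:
          jellyfin:
            # Patch releases (10.10.x) still auto-update; the breaking
            # 10.x -> 11.x jump requires a deliberate manual edit.
            image: jellyfin/jellyfin:10.10
        ```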

        • @daddycool@lemmy.world · 7 points · 1 month ago

          I just like it when things break during scheduled maintenance and I have time to fix them, or the possibility to roll back with minimal data loss, instead of an auto update forcing me to spend a weeknight fixing it or running a broken system till I have the time.

          • @tofu@lemmy.nocturnal.garden (OP) · 1 point · 1 month ago

            You can have the best of both worlds: scheduled auto updates at a time that usually works for you.

            With growing complexity, there are so many components to update, it’s too easy to miss some in my experience. I don’t have everything automated yet (in fact, most updates aren’t) but I definitely strive towards it.

            • @daddycool@lemmy.world · 3 points · 1 month ago

              In my experience, the more complex a system is, the more auto updates can mess things up and make troubleshooting a nightmare. I’m not saying auto updates can’t be a good solution in some cases, but in general I think it’s a liability. Maybe I’m just at the point where I want my setup to work without the risk of it breaking unexpectedly and having to tinker with it when I’m not in the mood. :)

              • @iggy@lemmy.world · 1 point · 30 days ago

                There’s a fine line between “auto-updates are bad” and “welp, the horribly outdated, security-hole-riddled CI tool or CMS is how they got in”. I tend to lean toward using something like Renovate to queue up the updates and then approve them all at once. I’ve been seriously considering building out staging and prod environments for my homelab. I’m just not sure how to test stuff in staging to the point that I’d feel comfortable auto-promoting to prod.
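                A minimal renovate.json for that “queue, then approve in one batch” workflow could look like this (a sketch using Renovate’s documented options; the dependency dashboard issue is where the approvals happen):

                ```json
                {
                  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
                  "extends": ["config:recommended"],
                  "dependencyDashboard": true,
                  "packageRules": [
                    {
                      "matchUpdateTypes": ["major"],
                      "dependencyDashboardApproval": true
                    }
                  ]
                }
                ```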

  • @tux7350@lemmy.world · 5 points · edited · 30 days ago

    I’m working on my first Kubernetes cluster, trying to set the systems up with NixOS. I can get a kubelet and a control plane running, but I’m getting permission errors when trying to use kubectl rootless on the system running the control plane. I think I figured out which file I need to change; now I just want to record that change in my configuration.nix.

      • @tux7350@lemmy.world · 2 points · edited · 30 days ago

        Ah sorry to hear that. Did you find something better that works for you? I’m open to suggestions :D

        • @L_Acacia@lemmy.ml · 1 point · 23 days ago

          OciContainers just added rootless mode for Podman. I was planning on playing a bit more with it, but I’m quite busy and haven’t found the time recently. For the time being I run everything as rootful, since I don’t expose anything directly to the internet.

          I might respond here, if I don’t forget, once I’ve experimented a bit more.

    • @refreeze@lemmy.world · 3 points · 1 month ago

      I’m curious how this goes for you. I run all my machines on NixOS except my k8s cluster which is Talos for now. I have been thinking of switching to Nix for that too.

      • @tux7350@lemmy.world · 2 points · 30 days ago

        I followed along with the NixOS wiki for Kubernetes, and creating the “master” kubelet is super easy when you set easyCerts = true. Problem is, it spits out files to /var/lib/kubernetes/secrets/ that are owned by root, specifically the cluster-admin.pem file. If I want to push commands to the cluster using kubectl, I have to elevate to a root shell. I could just chmod or chown the file, but that seems like a security risk.

        Now, I’m not familiar with k8s at all. This is my first go-through, so I could be doing something wrong or missing a step. I saw something about role-based access control but I haven’t jumped down that rabbit hole yet. Any tips for running kubectl without root?
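        Not a definitive answer, but one direction to try, sketched as a configuration.nix fragment. Unverified assumptions: that the module creates a kubernetes group with read access to the generated admin kubeconfig, and that easyCerts writes it to the path below; check what your system actually generates first.

        ```nix
        {
          # Hypothetical sketch: give your user group-level read access to
          # the generated admin kubeconfig instead of chown-ing secrets.
          users.users.myuser.extraGroups = [ "kubernetes" ];

          # Point kubectl at the cluster-admin kubeconfig for all sessions.
          environment.variables.KUBECONFIG =
            "/etc/kubernetes/cluster-admin.kubeconfig";
        }
        ```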

  • @rumba@lemmy.zip · 5 points · 30 days ago

    What should I do next?

    1. Set up PeerTube in a Proxmox VM. Difficulty: my hosting provider doesn’t allow 443 or 80. I have Cloudflare working for other things, but I think this violates their TOS.

    2. Set up Immich in a Proxmox VM. Difficulty: I need regular off-site backups and it’s going to be pretty large. My wife is a professional photographer.

    3. Set up my Coral TPU with Frigate, replacing my aging Win10 Blue Iris setup.

    • @samsi@lemmy.world · 2 points · 29 days ago

      I am also struggling with off-site backups. Mainly because I don’t have a cheap and regular way of doing it.

      • Estebiu · 1 point · 29 days ago

        You could have a friend host them for you, and vice versa.

        • @samsi@lemmy.world · 2 points · 29 days ago

          That would be the idea, but then my friend would need to have a server running at his place. And there is still the problem of how to transfer the data securely over the network to my friend without poking (too many) holes in the firewall.

    • @rumba@lemmy.zip · 3 points · 30 days ago

      Non-SSL behind your ingress proxy is professionally acceptable in most circumstances; assuming your network is properly segmented, it’s not really a big deal.

      Self-signing and adding the CA is a bit of a pain in the ass and adds another unnecessary layer of failure in a home network.

      If it really grinds your gears, you could issue yourself a real wildcard cert from Let’s Encrypt, then add DNS names with that wildcard on your local DNS server pointing at internal IPs. But to auto-renew it, you’re going to have to do some pretty decent DNS work.
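      That DNS work can be scripted. A sketch using certbot’s Cloudflare DNS-01 plugin (assumptions: Cloudflare hosts the zone, the plugin is installed, and an API token with DNS edit rights sits in the credentials file; the domains are placeholders):

      ```shell
      # Issue a wildcard cert via the DNS-01 challenge (no port 80/443 needed)
      certbot certonly \
        --dns-cloudflare \
        --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
        -d 'home.example.com' -d '*.home.example.com'

      # Renewal then works unattended via the same plugin
      certbot renew --quiet
      ```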

      To be honest, I’ve scrapped most of my reverse proxies for a nice Tailscale network. Fewer moving parts, encrypted end to end.

  • @Little8Lost@lemmy.world · 4 points · 29 days ago

    Yesterday I managed to safely host a simple HTML page (it’s more of a network test). The path is nginx → OpenWrt → router → internet. Now I only need to:

    • back up
    • set up the domain (managed via Cloudflare)
    • set up certificates
    • properly document the setup, plus some guides on stuff that I will repeat

    and then I can throw everything I want on it :D

  • @sugoidogo@discuss.online · 2 points · 27 days ago

    I wrote myself a new Python script for a Palworld server I run. I wanted to figure out a generic way to track active connections without running something in front of the daemon. That’s easy to do for TCP, but since UDP has no concept of an established connection, the regular tools wouldn’t work. I realized I could use conntrack to get the Linux firewall’s connection-tracking data, which works outside of TCP/UDP concepts and maintains its own active-connection state based on timeouts, which is what I was going to do anyway. Now I can issue SIGSTOP/SIGCONT to keep buildings from degrading on the server when nobody’s online to deal with it, along with saving the CPU resources of an empty game server. Rather niche project, but I figured I’d publish it anyway. https://github.com/sugoidogo/pausepal
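    The core trick can be sketched in a few lines of Python (a hypothetical simplification, not pausepal’s actual code; 8211 is just Palworld’s default port):

    ```python
    import os
    import re
    import signal
    import subprocess

    def count_udp_conns(conntrack_output: str, port: int) -> int:
        """Count conntrack entries whose destination is the given UDP port."""
        return sum(
            1
            for line in conntrack_output.splitlines()
            if line.startswith("udp") and re.search(rf"\bdport={port}\b", line)
        )

    def pause_or_resume(pid: int, port: int = 8211) -> None:
        """Freeze the server process when nobody is connected, thaw it otherwise."""
        out = subprocess.run(
            ["conntrack", "-L", "-p", "udp"],  # needs root / CAP_NET_ADMIN
            capture_output=True, text=True,
        ).stdout
        sig = signal.SIGSTOP if count_udp_conns(out, port) == 0 else signal.SIGCONT
        os.kill(pid, sig)
    ```

    The timeout-based expiry is conntrack’s own; the script only has to poll and count.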

  • @non_burglar@lemmy.world · 8 points · 30 days ago

    Migrating from proxmox to incus, continued.

    • got a manually built WireGuard instance rolling and tested; it’s now “production”
    • setting up and testing backups now
    • going to export some NFS and iSCSI shares to host video files, to test playback over the network from Jellyfin
    • building Ansible playbooks to rebuild instances
    • looking into Ansible to add system monitoring, should be easy enough

    Lots of fun, actually!

      • @non_burglar@lemmy.world · 2 points · 30 days ago

        I’ve moved to all containers and I’m gradually automating everything. The metaphor for orchestration and provisioning is much clearer in incus than it was in lxd, and makes way more sense than proxmox.

        Proxmox is fine. I’ve used it for going on 8 years now, and I’m still using it, in fact. But it’s geared toward a “safe” view of abstraction that makes LXC containers seem like virtual machines, and they absolutely aren’t; they are much, much more flexible and powerful than VMs.

        There are also really annoying deficiencies in proxmox that I’ve taken for granted for a long time as well:

        • Horrible built-in resource usage metrics. I’m happy to run my InfluxDB/Grafana stack to monitor, but users should be able to access those metrics locally and natively, especially if they’re going to be exported by the default metrics export anyway.
        • Weird hangovers from early Proxmox versions around IO delay. Proxmox still makes users chase down iostat rabbit holes to figure out why io_wait and “IO delay” are not the same metric, and why the root cause is almost always disk, yet Proxmox shows the io_wait stat as if it could be “anything”.
        • Integration of passthrough devices is a solved problem, even for LXC, yet the bulk of questions from noobs is about just that. Passthrough is solved on so many platforms; why Proxmox just doesn’t offer it as a GUI option for LXC is baffling.
        • No install choice for ZFS on root on a single disk (why???)
        • Etc.
        • etc

        Ultimately, I have more flexibility with a vanilla bookworm install with incus.

        • @tofu@lemmy.nocturnal.garden (OP) · 1 point · 30 days ago

          Thanks a lot for your response! I too was a bit misled by the way Proxmox presents LXCs, but I’m mostly on VMs and haven’t explored LXCs further so far.

          • @non_burglar@lemmy.world · 2 points · 30 days ago

            No worries. And don’t misunderstand: I think Proxmox is great; I’ve simply moved on to a different way of doing things.

  • @harsh3466@lemmy.ml · 3 points · 30 days ago

    I’m integrating my Mac mini (running Asahi Linux) into my server setup. It’s slow going as I also have to move some data around so I can repurpose some hard drives.

  • @bigDottee@geekroom.tech · 4 points · 1 month ago

    Just found Redirecterr and set that up, but that’s just for me, since no one else seems to use Overseerr.

    Purchased a new-to-me EOL enterprise switch that will let me expand my network while replacing existing hardware that is limited. It also enables me to move to 10G networking, woot!

  • @dfense@lemmy.world · 6 points · edited · 1 month ago

    Currently trying to step up my game by setting up Kubernetes. The cluster is running, but I’m really struggling to get the combination of domain name, Let’s Encrypt and Traefik working without a cloud load balancer. I feel like I’ve gone through most of the tutorials available, but it seems each one is missing a crucial part. Gonna invest some more hours today…

    • @dfense@lemmy.world · 1 point · edited · 6 days ago

      Just a quick update and shout-out to a cool project. After trying cloudflared but not getting it to run stably, I ended up using Pangolin, a tunneled mesh reverse proxy.

    • @Cpo@lemm.ee · 1 point · 1 month ago

      Without a supported load balancer, Kubernetes is no fun / not doable, in my opinion.

      For Hetzner, for example, there are some recipes to be found for using an LB and also volumes.

      I’ve stepped back to Docker Compose with a Traefik proxy, which takes labels from the containers to decide where to route what.

      Highly recommended!
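      For reference, the label-driven routing looks roughly like this (router name, host rule, and cert resolver are placeholders for your own setup):

      ```yaml
      services:
        whoami:
          image: traefik/whoami
          labels:
            - "traefik.enable=true"
            - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
            - "traefik.http.routers.whoami.entrypoints=websecure"
            - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
      ```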

  • @Wrongdoer4094@lemmy.world · 6 points · 1 month ago

    I have had success with a monthly reminder in my Google calendar. Sometimes I skip it, but I have been updating and keeping everything nice and tidy much more frequently than I used to!

  • @dingdongitsabear@lemmy.ml · 6 points · 1 month ago

    Switched my server from an i7-870 (my ex-workstation) to a Pentium G6405 (got it for free). The switch went without a hitch: Debian with a ton of Docker services (Jellyfin, Servarr, Pi-hole, Radicale, etc.) on only 8 GB of RAM. Although it’s a quad-core to dual-core switch, there are no performance issues. I know there are better options out there, but I don’t spend money unless I really have to.

    • @MangoPenguin@lemmy.blahaj.zone · 4 points · edited · 1 month ago

      That G6405 is actually about 25% faster overall and 50% faster per thread, so performance should be better now. Not to mention much faster RAM and IO.

      Core count doesn’t mean much when the CPUs are 12 years apart!

      • @dan@upvote.au · 3 points · 1 month ago

        Not to mention all the extra instruction sets the newer CPU supports. The i7-870 is old enough that it doesn’t even support AES-NI, so encryption/decryption is significantly slower compared to even the lowest-end modern Intel or AMD x86 CPU.

  • SmokeyDope · 6 points · 30 days ago

    I just spent a good few hours optimizing my LLM rig: disabling the graphical interface to squeeze 150 MB of VRAM back from Xorg, setting the program’s CPU niceness to the highest priority, and tweaking settings to find the memory limits.

    I was able to increase the token speed by half a second while doubling the context size. I don’t have the budget for any big VRAM upgrade, so I’m trying to make the most of what I’ve got.

    I have two desktop computers. One has better RAM, CPU, and overclocking, but a worse GPU. The other has a better GPU but worse RAM and CPU, and no overclocking. I’m contemplating whether it’s worth swapping GPUs to really make the most of the available hardware. It’s been years since I took apart a PC and I’m scared of doing something wrong and damaging everything. I dunno if it’s worth the time, effort, and risk for the squeeze.

    Otherwise I’m loving my self-hosted LLM hobby. I’ve been very into learning computers and ML for the past year. Crazy advancements, exciting stuff.