Just an explorer in the threadiverse.

  • 2 Posts
  • 44 Comments
Joined 2 years ago
Cake day: June 4th, 2023

  • I use k8s at work and have built a k8s cluster in my homelab… but I did not like it. I tore it down and am currently using podman, and I don’t think I would go back to k8s (though I would definitely use docker as an alternative to podman, and would probably even recommend it over podman for beginners even though I’ve settled on podman for myself).

    1. K8s itself is quite resource-consuming, especially on RAM. My homelab is built on old/junk hardware from retired workstations. I don’t want the kubelet itself sucking up half my RAM. Things like k3s help with this considerably, but that’s not precisely k8s either. If I’m going to start trimming off the parts of k8s I don’t need, I end up going all the way to single-node podman/docker… not the halfway point that is k3s.
    2. If you don’t use hostNetworking, the k8s model where traffic routes only within the cluster except for egress is pure overhead. It’s totally necessary when you have a thousand engineers slinging services around your cluster, but there’s no benefit to that level of rigor in service management in a homelab. Here again, the networking in podman/docker is more straightforward and maps better to the stuff I want to do in my homelab.
    3. Podman accepts a subset of k8s resource-yaml as a docker-compose-like config interface. This lets me use my familiarity with k8s configs in my podman setup (sketched at the end of this comment).

    Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraints k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively… but I don’t have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s’ solutions to them is annoying there.
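    To illustrate point 3, here’s a minimal sketch of what that looks like in practice. The pod name, image, and ports are placeholders I picked for the example, not anything from a real setup:

    ```bash
    # A k8s-style Pod spec that podman can run directly.
    # Names, image, and ports are placeholders.
    cat > whoami-pod.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: whoami
    spec:
      containers:
        - name: whoami
          image: docker.io/traefik/whoami:latest
          ports:
            - containerPort: 80
              hostPort: 8080
    EOF

    # Run it with podman's kube support (podman 4.x; older versions use `podman play kube`).
    podman kube play whoami-pod.yaml

    # Tear it down again from the same file.
    podman kube down whoami-pod.yaml
    ```

    Note that podman only understands a subset of resource kinds (Pods, Deployments, ConfigMaps, and the like), so fancier specs won’t translate.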


  • This is a great approach, but I find myself not trusting Jellyfin’s preauth security posture. I’m just too concerned about a remote unauthenticated exploit that 2fa does nothing to prevent.

    As a result, I’m much happier having Jellyfin access gated behind Tailscale or something similar; brute force attacks against Jellyfin then become impossible in normal operation, and I don’t sweat 2fa much anymore. This is also 100% client compatible, since Tailscale is transparent to the client and direct network communication with Jellyfin isn’t possible. And of course, Tailscale has a very tightly controlled preauth attack surface… essentially none if you use the free/commercial Tailscale, and even self-hosting Headscale I’m much more inclined to trust their code as being security-conscious than Jellyfin’s.
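    For anyone wanting to copy this, a rough sketch of one way to wire it up. The volume paths are placeholders, and this assumes tailscaled is already installed and logged in on the host:

    ```bash
    # This host's tailnet IPv4 address (requires tailscale to be up and authenticated).
    TS_IP="$(tailscale ip -4)"

    # Publish Jellyfin's HTTP port only on the Tailscale interface instead of 0.0.0.0.
    docker run -d --name jellyfin \
      -p "${TS_IP}:8096:8096" \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media:ro \
      docker.io/jellyfin/jellyfin:latest
    ```

    Clients then point at the tailnet address, and nothing outside the tailnet can even reach the port.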



  • This isn’t exactly an answer to your question, but an alternative monitoring architecture that sidesteps this problem entirely is to run netdata on each of your servers.

    • It appears to collect WAY more useful data than Uptime Kuma, and requires basically no config. It also collects data on docker containers running on the server, so you automatically get per-service metrics as well.
    • Health probes for several protocols including ping and HTTP can be custom-defined in config files if you want that (a sketch follows at the end of this comment).
    • There’s no cross-server config or discovery required; it just collects data from the system it’s running on (though health probes can hit remote systems if you wish).
    • If any individual service or collection of services is down, I see it immediately in their metrics.
    • If the server itself is down, it’s obvious and I don’t need a monitoring system to show a red streak for me to know. I’ve never wasted more than a minute differentiating between a broken service and a broken server.

    This approach needs no external monitoring hosts. It’s not as elegant as a remote monitoring host that shows everything from a third-party perspective, but it also has the benefit of not false-positiving when the monitoring host itself goes down or loses its network path to the monitored host… Netdata can always see what’s happening because it’s right there when it happens.
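    As promised above, a sketch of what a custom HTTP probe looks like with the go.d httpcheck collector. The target URL and config path are illustrative and may differ depending on how netdata was installed:

    ```bash
    # Define an HTTP health probe via netdata's go.d httpcheck collector.
    # URL and config path are placeholders.
    sudo tee /etc/netdata/go.d/httpcheck.conf >/dev/null <<'EOF'
    jobs:
      - name: jellyfin
        url: http://192.168.1.20:8096/health
        update_every: 10
    EOF

    # Restart netdata so it picks up the new job.
    sudo systemctl restart netdata
    ```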


    1. If a service supports SQLite, I’ll often use that option. It provides everything a self-hoster needs from a DB with basically no operational overhead.
    2. If I do need a proper RDBMS (because the software I’m using doesn’t support SQLite), I’m going to use…
      1. A single Postgres container.
      2. Configured with multiple logical “databases” (the container for schemas and tables), one DB for each app connecting.

    I do this because I’m always memory constrained and the RDBMS is generally the most memory-hungry part of any software stack. By sharing one DB process across all the apps that need it, I get the most out of my DB cache memory, etc. And by using multiple logical DBs, I get good separation between my apps, and they’re straightforward to migrate to a truly isolated physical DB if needed… but that’s never been needed.
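    Concretely, the setup looks something like this. The app names and passwords are obviously placeholders, and docker works the same way as podman here:

    ```bash
    # One shared Postgres container for everything (podman shown; docker is equivalent).
    podman run -d --name postgres \
      -e POSTGRES_PASSWORD=changeme \
      -v pgdata:/var/lib/postgresql/data \
      -p 5432:5432 \
      docker.io/library/postgres:16

    # One role and one logical database per app; app names are placeholders.
    podman exec -i postgres psql -U postgres <<'EOF'
    CREATE ROLE nextcloud LOGIN PASSWORD 'secret1';
    CREATE DATABASE nextcloud OWNER nextcloud;
    CREATE ROLE miniflux LOGIN PASSWORD 'secret2';
    CREATE DATABASE miniflux OWNER miniflux;
    EOF
    ```

    Each app gets its own role and connection string, so moving one of them to a dedicated DB later is just a dump and restore.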


  • … advertisement and push they did on sites like reddit…

    The lemmy world admins advertised on Reddit? Can you link an example?

    … their listing on join-lemmy.org

    Until recently EVERY lemmy instance was listed on join-lemmy.

    And with the name Lemmy.world they did nothing to dissuade anyone from thinking that.

    They run a family of servers under the .world TLD, including at least Mastodon, Lemmy, and Calckey instances. They’re all named similarly.

    I also saw nothing from .world not claiming to be the bigger instance(super lemmy)

    They ARE the biggest instance, but that happened organically. It’s not based on any marketing claims from the admin team about being a flagship/super/mega/whatever instance. People just joined, and the admins didn’t stop them (nor should they). It’s not a conspiracy to take over lemmy. It’s just an instance that… until recently… happened to work pretty well when some were struggling.


  • I think the issue is that .world has put itself forward as some sort of super lemmy.

    Citation needed. All the admins of lemmy world ever purported to do was host a well-run general-purpose (aka not topic-oriented) lemmy instance. It was and remains that, and part of being a well-run general purpose instance is managing legal risk when a small subset of the community generates an outsized portion of it.

    Being well run meant that they scaled up and remained operational during the first reddit migration wave. People appreciated that, but continuing to function does not amount to a declaration of being a super lemmy.

    World also has kept signups open through good times, and more recently bad ones. Other instances at various times shut down signups or put irritating steps and purity tests along the way. Keeping signups open is a pretty bare-minimum bar for running a service though; it is again not a declaration of being a super-lemmy.

    Essentially lemmy world just… kept working (until recently, when it has done a pretty poor job of that). I dunno where you found a declaration that lemmy world is a super-lemmy, but it’s not coming from the lemmy world admins; it’s likely randos spouting off.


  • I use postgres for my install and had a similar thing happen to me. I tried moving an org credential to a folder, which moved the folder to the org, and kicked all other credentials to “no folder”.

    Thanks for confirming with your DB. That saves me sweating whether I should rebuild on PG at least, and also makes me feel better that it’s a folder bug and not generalized database corruption.

    Having finished the heavy organizing, my rate of big org transfers has slowed and I haven’t reproduced it again yet. Hopefully this will be uncommon enough to be a non-issue. Thanks again for the info.




  • A very common DDoS attack uses UDP services to amplify your request to a bigger response, but then spoof your src ip to the target.

    Having followed many reports of denial of service activity against Lemmy, I don’t think this is the common mode. Attacks I’ve heard of involve:

    • Using regular lemmy APIs backed by heavy database queries. I haven’t heard discussion of query rates, but Lemmy instances are typically single-machine deployments on modest 4-core to 32-core hardware. Dozens to thousands of queries per second to the heaviest API endpoints are sufficient to saturate them. There’s no need for distributed attack networks to be involved.
    • Uploading garbage images to fill storage.

    Essentially the low-hanging fruit is low enough that distributed attacks, amplification, and attacks on bandwidth or the networking stack itself are just unnecessary. A WAF is still a good idea if OP’s instance is indeed getting attacked, but I’d be surprised if WAFs have built-in rules for Lemmy yet. I somewhat suspect one would have to do the DB query analysis to identify slow queries and then write custom WAF rules to rate limit the corresponding API calls. But it’s worth noting that OP has provided no evidence of an attack. It’s at least equally likely that they DoS’ed themselves by running too many services on a crappy VPS and running out of RAM. The place to start is probably basic capacity analysis.
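    For concreteness, this is the flavor of custom rule I mean: a hypothetical nginx snippet that rate-limits one heavy Lemmy API endpoint per client IP. The endpoint, rates, and backend port are illustrative guesses, not a tested ruleset:

    ```bash
    # Hypothetical nginx snippet: per-client-IP rate limit on one heavy Lemmy endpoint.
    # 8536 is lemmy's default backend port; the endpoint and rates are guesses.
    sudo tee /etc/nginx/conf.d/lemmy-ratelimit.conf >/dev/null <<'EOF'
    limit_req_zone $binary_remote_addr zone=lemmy_search:10m rate=2r/s;

    server {
        listen 80;
        server_name lemmy.example.org;

        location /api/v3/search {
            limit_req zone=lemmy_search burst=5 nodelay;
            proxy_pass http://127.0.0.1:8536;
        }

        location / {
            proxy_pass http://127.0.0.1:8536;
        }
    }
    EOF

    sudo nginx -t && sudo systemctl reload nginx
    ```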

    Some recent sources:


  • I have to wonder if NLNet has some process for amending commitments made in light of new lessons learned. By a wide variety of metrics, the impact of the project has increased beyond all imagination and ambition that people could have had in January. And the technology and quality of the project have improved way, way faster as it’s accrued new contributors. This is really a case where the right milestones to measure by have changed.

    One might also hope that a call for help from contributors on these specific milestones might just get them back on track.

    But speculation aside… yeah your description of their funding challenges is accurate.


  • If you’re serious about this, there’s a post up calling for sysops: https://lemmy.world/post/2769245

    It’s somewhat of a commitment, rather than drop-in drop-out… but that’s what it takes to make a difference here. There are already several sharp and experienced database engineers working on the Lemmy world team. The problem is that the site is under repeated denial of service attack, and there isn’t one bad query to fix… each time one query gets addressed, the attackers move on to a new one.

    While it’s always possible that someone has missed a silver bullet, it’s much more likely that a series of ongoing independent mitigations and optimizations is needed to reach a tipping point where lemmy is more or less protectable, with some hidden DoS-able bits, rather than more or less trivially DoS-able everywhere.


  • Docker is a powerful tool to increase confidence in your backups.

    • In a VM, the way you figure out which files to back up is to read the docs. If they’re wrong or you misread them, the only way you’ll find out is by doing a full restore test… which is often painful and complex in home setups.
    • In docker, the filesystem outside volumes is destroyed every time the container is recreated. If your volume setup is insufficient, you’ll repeatedly lose state during your initial installation process, and you’ll keep testing your state management throughout the lifetime of the service as containers are recreated. This leaves a much smaller window for backup mistakes (a minimal sketch follows at the end of this comment).

    The tradeoff with docker is that the networking is complex (well, everything is complex… but the networking is where it often hurts). But if you’re able to deal with that one-time pain, it’s superior almost all the time for home setups. I think the only things I run outside docker are ssh and netdata. SSH because it’s stateless and works perfectly out of the box, and netdata because it wants permissions to everything… and is functionally stateless for me because I don’t care if I drop my observability data.
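    A minimal sketch of the pattern, with placeholder names and image; the point is that all state lives in one named volume that’s trivial to snapshot:

    ```bash
    # All service state lives in one named volume (names and image are placeholders).
    docker volume create app_data
    docker run -d --name app -v app_data:/data some-image:latest

    # Back the volume up with a throwaway container; restore is the same idea in reverse.
    docker run --rm \
      -v app_data:/data:ro \
      -v "$PWD":/backup \
      alpine tar czf /backup/app_data.tar.gz -C /data .
    ```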




  • In the vast majority of cases, one can support variation in admin preferences by exposing a configuration parameter. Your downvote example is perfect because Beehaw doesn’t run a customized lemmy codebase. There is a checkbox exposed to lemmy admins that enables/disables downvotes.

    Running a custom codebase is generally the highest-hassle method of achieving some custom-config goal. The absence of communities around this approach isn’t an accident; the people who develop customizations generally try to work with the upstream unless the devs give them good reason not to.


  • Are there instances that run modified versions of the base Lemmy software? For example, that use their own sorting algorithms, or provide users ways to block instances or specific users, etc?

    If one had developed code to do these things, why would one not upstream it so it’s released in core lemmy and all instances can benefit from that capability?


  • ZFS zRAID is pretty good for this I think. You hook up the drives from one “pool” to a new machine, and ZFS can detect them and see that they constitute a pool and import them.

    I second this approach, but if one isn’t down with ZFS, LVM can bodge a raid onto any filesystem at the block layer. I don’t remember when I got over hardware raid envy and decided that I preferred software raid for my home lab, but it was a long while ago and I’ve never regretted it. Being able to plug some drives into any old USB, sata, or whatever port on any Linux box is super valuable when things start going sideways and you don’t have budget for spare hardware or rapid-response support contracts.
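    Rough sketches of both options, with placeholder pool, device, and volume-group names:

    ```bash
    # ZFS: on the new machine, scan attached disks for importable pools, then import by name.
    zpool import          # lists pools found on the attached drives
    zpool import tank     # "tank" is a placeholder pool name

    # LVM alternative: mirror at the block layer, then put any filesystem on top.
    # /dev/sdb and /dev/sdc are placeholder devices.
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_data /dev/sdb /dev/sdc
    lvcreate --type raid1 -m 1 -L 500G -n lv_data vg_data
    mkfs.ext4 /dev/vg_data/lv_data
    ```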


  • I enabled it and out of the box none of my containers could resolve DNS, even though aardvark was running.

    I experienced this on Ubuntu as well, and addressed it by opening up a firewall rule on the network interface for my podman network allowing the ip-range of the podman network to issue DNS requests to the gateway-ip (which is where aardvark-dns sets up shop).

    Also had to add a firewall rule to open whatever ports I exposed from all src-ips to the podman network range before exposing hostPorts would work.
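    For anyone hitting the same thing, roughly what those rules look like with ufw. The interface, subnet, gateway, and port are placeholders; read the real values out of `podman network inspect <name>`:

    ```bash
    # Placeholder values; read the real ones from `podman network inspect <name>`.
    PODMAN_IF=podman1
    PODMAN_NET=10.89.0.0/24
    PODMAN_GW=10.89.0.1

    # Let containers on the podman network reach aardvark-dns on the gateway IP.
    sudo ufw allow in on "$PODMAN_IF" from "$PODMAN_NET" to "$PODMAN_GW" port 53 proto udp
    sudo ufw allow in on "$PODMAN_IF" from "$PODMAN_NET" to "$PODMAN_GW" port 53 proto tcp

    # Allow forwarded traffic from anywhere to a published hostPort (8096 as an example).
    sudo ufw route allow proto tcp from any to "$PODMAN_NET" port 8096
    ```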

    Again, not critiquing the very capable macvlan setup, just sharing tips I’ve picked up on making netavark work.


  • This is a pretty awesome how-to. I knew nothing about containerizing GPU workloads before this, and it seems quite a lot less scary/involved than I feared.

    FWIW, I think some of your DNS and general networking woes may be due to the macvlan setup rather than using netavark. Netavark seems like the golden path going forward for a batteries-included experience. Not that I have anything against macvlan, in many ways macvlan feels simplest and nicest for homelab setups and I’ve used it with LXC and other container runtimes in the past. But for the most docker-like “it just works” experience, I feel like netavark is getting the upstream love.