beefnugs 5 days ago

I tried to get a 3-node bare-metal cluster running, and I came to the conclusion that it's not worth it unless you have 50+ services running on 5+ servers and really, really need instant automatic failover. Otherwise Docker Compose is so, so much easier.

I found very poor documentation on how to recover from failure scenarios. I don't like how the control plane has to be separate from the workers. I happened to choose k3s, which was abandoned. I never understood why etcd NEEDS more than 2 nodes. I hated all the "beta" configuration tags/domains (or whatever they are called); there was no good way to know when they changed or what they changed to. I thought the whole point of Helm was that you could update things easily, but it's the opposite: it breaks things. And I couldn't believe there was no built-in support for uninterruptible power supplies; maybe that's just a k3s thing.

I am sure it works for huge-scale, entire/multiple data center stuff, but for anything in one location I bet it's a mistake.

  • dvfjsdhgfv 3 days ago

    > why etcd NEEDS more than 2

    The consensus algorithm (Raft in this case) requires that a majority of nodes agree before anything is committed. With 2 nodes the majority is still 2, so losing either node halts the cluster; an even count buys you nothing extra. And since 1 node gives you no redundancy at all, 3 is the practical minimum (and in practice works very well).
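
    To make the quorum arithmetic concrete, here's a tiny illustrative Python snippet (nothing etcd-specific, just the counting argument):

        def quorum(n: int) -> int:
            # Minimum number of nodes that must agree for Raft to commit.
            return n // 2 + 1

        def tolerated_failures(n: int) -> int:
            # How many nodes can be lost while a majority is still reachable.
            return n - quorum(n)

        for n in (1, 2, 3, 4, 5):
            print(f"{n} nodes: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")

        # 1 nodes: quorum=1, tolerates 0 failure(s)
        # 2 nodes: quorum=2, tolerates 0 failure(s)  <- no better than 1 node
        # 3 nodes: quorum=2, tolerates 1 failure(s)  <- smallest useful HA cluster
        # 4 nodes: quorum=3, tolerates 1 failure(s)  <- extra node, no extra tolerance
        # 5 nodes: quorum=3, tolerates 2 failure(s)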

  • Thev00d00 5 days ago

    Wait k3s is abandoned?!

    • speedgoose 5 days ago

      I can't find anything about this. The project still looks active.

      Perhaps it's being confused with k3OS, a k3s operating system. https://github.com/rancher/k3os

      • zorlack 4 days ago

        Lol. Probably because it's actually confusing.

        I love all of the oddball projects being spun out by Rancher/Suse. It can be hard to keep track of them.

        Oh wait, this one is a Kubernetes distro, and that one's an OS, and this thing... well, it's a... err... binary container something-or-other?

    • nunez 4 days ago

      Nope; still active afaik.

mstaoru 4 days ago

We have a couple of 4U8N Supermicro "fat twins" hooked up to a MikroTik CRS17 (10 GbE). Each node has 6 drive bays; to simplify things, we installed similar-capacity drives and use ZFS raidz2. 7 nodes out of 8 are production, and the last node is staging. Each node runs 2x older v4 Xeons (for a total of 64 virtual cores per node) with 256G RAM. This cost us about US$8k upfront per server: the chassis were decommissioned second-hand (they looked nice), the CPUs second-hand, and everything else new. We pay around US$2k for unmetered 30M colocation in Shanghai. In the EU/US it must be cheaper and faster.

We provisioned the servers by hand, just installing Proxmox on each of them. Proxmox is nice because it supports ZFS root out of the box, without any magic. All other Proxmox services are disabled.

After that we installed K3s on top via Ansible, with each node being both master and worker. There's no need for separation unless you have some really write-heavy workloads that would throttle etcd and cause split-brain.

On top of K3s we have three types of workloads:

- stateless workloads, or workloads with volumes from ConfigMaps or Secrets,

- stateful highly available workloads where HA is managed by application (Postgres, Redis...),

- stateful single unit workloads.

For HA workloads we use ZFS directly via the openebs-zfs CSI driver. This ties a workload to a node, but since HA is managed by the application and we run a minimum of 5 or 7 nodes, we can usually turn a node off for maintenance and reseed the workload on another node without much trouble. Most Helm charts have this functionality built in (e.g. Postgres has Patroni).
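
Roughly, the StorageClass for that driver looks like the sketch below (the pool and class names are placeholders, not our actual config); it is normally just a YAML manifest, shown here via the Python kubernetes client for illustration:

    from kubernetes import client, config

    config.load_kube_config()

    # StorageClass for the OpenEBS ZFS LocalPV CSI driver. "tank" is a
    # placeholder for the ZFS pool that exists on each node.
    # WaitForFirstConsumer keeps the volume (and therefore the workload)
    # pinned to the node whose pool backs it.
    storage_class = {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": "openebs-zfspv"},
        "provisioner": "zfs.csi.openebs.io",
        "parameters": {
            "poolname": "tank",  # per-node ZFS pool (placeholder)
            "fstype": "zfs",
        },
        "volumeBindingMode": "WaitForFirstConsumer",
    }

    client.StorageV1Api().create_storage_class(body=storage_class)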

For stateful non-HA workloads (e.g. our knowledgebase) we use Longhorn on top of a zvol formatted with ext4. Longhorn gives us nice backups to any S3-compatible storage, plus durability. We usually go with 5 replicas, or 3 for less important workloads.

It has worked nicely like this for a few years and has survived several minor faults. A failing node is drained and replaced without any drama.

We mostly use Helm charts for applications: Bitnami for Redis and other things, TimescaleDB's chart for Postgres, and our own charts for our applications. Before rolling out to production we test on the staging machines. Of course, it's not the same, but it's close enough.

For dev we use k3d, which is multi-node K3s on top of Docker.

  • mingus88 2 days ago

    It wasn't until reading this comment that I realized I had misread "in house" as "at home".

    That would have been a very impressive home lab.

    • mstaoru 2 days ago

      Hah, you can totally run K3s on a cluster of RPis these days!

eldridgea 5 days ago

I use a single-node MicroK8s instance, mainly so I can stay familiar with the syntax and tooling for work rather than as an actual high-availability system.

But I use Portainer and store my files on GitHub so Portainer can auto-update a deployment when I submit a code change, so it's a kind of rudimentary CI/CD which could probably be fleshed out more with some GitHub Actions.

I use iSCSI-mounted storage from my NAS on the host, with k8s volumes storing configs there. Actual app data is on the NAS, accessed via NFS from the relevant apps.

So a new deployment usually goes: test locally on my laptop; once it's good, commit the code to GitHub and either let the deployment auto-update, or go to Portainer and do it manually if it's a new deployment. Ingress traffic is handled via Cloudflare Tunnels deployed in k8s.

I keep most apps in a single namespace called prod unless they need more than 1-2 pods. If I were doing this again I'd use a namespace per app; I do use a dedicated namespace for anything with a Helm deployment or that needs a lot of pods (e.g. Immich).

  • LikeAnElephant 5 days ago

    Can you tell me more about your Portainer setup? Does it just update your app from an image or is it checking out code from a git repo on deploy? This approach sounds very interesting

speedgoose 4 days ago

I stopped doing it, but I used to manage in-house k8s servers like pets and not cattle.

We had a MAAS installation at some point, which was neat while it worked. The server boots from the network, runs some kind of tiny Linux distribution to register itself in MAAS, and shuts off. You can later provision it from MAAS and it will boot with Wake-on-LAN, install the image you selected, and be ready for SSH after a little while.

We also had an OpenStack cluster, but we went bare metal after some years because it was cooler at the time. This infrastructure was there to learn, experiment, and have fun.

Monitoring was done in a strange way. Nowadays I would install kube-prometheus-stack and be done. At the time it was Munin and some bespoke monitoring script linked to a stack light. https://en.m.wikipedia.org/wiki/Stack_light

iooi 3 days ago

I run k8s "in my house", which is probably different than what you had in mind, but it might still be useful for others.

I use ESXi and run a separate master node and a separate worker node. There are around 12 services running on the worker node, mostly things related to media. I maintain a workbook for how I bring up a new node and how I upgrade the cluster, which are the normal operations I've had to do in the past. For example, at one point I had allocated too little storage to the worker node and it was easier to bring up a new one than to edit the existing one.

I use dynamic volumes backed by NFS on my NAS for any data that needs to persist across pod restarts. This works surprisingly well. I use nfs-client-provisioner, installed with Helm.
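
A claim against that provisioner looks roughly like this ("nfs-client" is the chart's default StorageClass name, yours may differ); sketched with the Python kubernetes client rather than YAML, just for illustration:

    from kubernetes import client, config

    config.load_kube_config()

    # PVC bound to the NFS provisioner's StorageClass. The provisioner
    # carves out a subdirectory on the NAS export and creates a matching
    # PV on demand, so data survives pod restarts and rescheduling.
    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "app-data"},
        "spec": {
            "accessModes": ["ReadWriteMany"],   # NFS allows shared access
            "storageClassName": "nfs-client",   # chart default (assumed)
            "resources": {"requests": {"storage": "10Gi"}},
        },
    }

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )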

I also use a combination of MetalLB, an nginx ingress controller, and a BIND service, so I can point the DNS on my laptop at the BIND server and access all my services by DNS name instead of by IP.

juancn 3 days ago

It's rather complex and there's a whole team handling it. We have servers in ~20 data centers distributed globally; large pools have >20k pods running on each, and each DC has 100 TB to 1 PB of RAM available.

We have pod affinity rules (we usually flush entire racks for infra updates) so failures don't bring down services.
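
A rule of that shape looks roughly like the fragment below; it's purely illustrative (with a made-up "rack" node label and app name, not our actual config), written as Python dicts in the Kubernetes API's structure:

    # Anti-affinity fragment for a Deployment's pod template. Assuming nodes
    # are labelled with a hypothetical "rack" key, replicas of "my-service"
    # are forced onto different racks, so flushing one rack for an infra
    # update never takes out all replicas at once.
    pod_anti_affinity = {
        "podAntiAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": [
                {
                    "labelSelector": {"matchLabels": {"app": "my-service"}},
                    "topologyKey": "rack",  # placeholder node label key
                }
            ]
        }
    }
    # This dict goes under spec.template.spec.affinity in the manifest.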

Node failure is rather unusual; it's more likely that we either need to flush a rack to update it or that some service has an issue.

We have separate environments with isolated hardware pools for production and testing (it may be colocated in the same dc).

Nodes have a high-performance NAS available and ephemeral local storage (SSDs) that is wiped on pod restart.

If a node fails, you remove it from the pool and send someone to replace it when feasible.

Provisioning depends on the application: you can provision your own pod (if you have the right access), but applications tend to have deployer services that handle provisioning for them.

mindcrash 4 days ago

My current client has recently started running several Kubernetes clusters to support DTAP on NKE (Nutanix Kubernetes Engine), on several VMs on a single physical Nutanix cluster.

Although we faced some initial hiccups setting it up, because the client does not have physical equipment for application delivery and Nutanix does not provide MetalLB out of the box, everything seems to work beautifully as we speak.

Management of the cluster is basically a combination of web-based Nutanix tools and kubectl, and nodes are virtualized and thus will survive hardware outages.

raarts 4 days ago

We run a 7-node cluster in a Proxmox cluster, currently consisting of two ProLiants and an MSA using SAS controllers. We use NFS for permanent storage. K8s can be redeployed using Kubespray from a GitLab pipeline. Currently experimenting with Capsule to run tenants inside the cluster.

zorlack 4 days ago

My team does quite a bit of this. We handle it in two different ways:

For some clusters we carve nodes out of VMware simply using OS templates. For other nodes we use cheap-and-deep blade servers and install the OS on bare metal using PXE. Once the nodes are provisioned we use Ansible to deploy Kubernetes. (Lately it's been RKE2 on top of Rocky.)

Generally speaking, VM-based nodes are extremely reliable and seldom have to be rebuilt. (If we're paying to run VMware, it's because the underlying hardware is high-quality.) Bare-metal nodes, on the other hand, are built on inexpensive hardware and tend to fail in different ways. When they fail we cordon them, remove them from the cluster, and put them in a list. (We maintain sufficient overcapacity to soak up failures as they come.)

If we're using persistence we have to take care that the StatefulSets are configured correctly. Sometimes we use local-disk persistence so that our services can benefit from local NVMe performance. Other times we use NFS (when we need persistence but not performance).
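
For context (not our actual manifests), the local-disk case is usually expressed through a StatefulSet's volumeClaimTemplates against a node-local StorageClass; a minimal fragment, with a made-up class name, might look like:

    # volumeClaimTemplates fragment for a StatefulSet. "local-nvme" is a
    # hypothetical node-local StorageClass; each replica gets its own PVC
    # pinned to the node it is first scheduled on, which is what gives it
    # local NVMe performance (at the cost of tying the pod to that node).
    volume_claim_templates = [
        {
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "storageClassName": "local-nvme",  # placeholder
                "resources": {"requests": {"storage": "100Gi"}},
            },
        }
    ]
    # Goes under spec.volumeClaimTemplates in the StatefulSet manifest.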

We monitor cluster node health internally to Kubernetes and also externally using Nagios (shudder).

Kubernetes upgrades are a pain in the ass. Lots of times we'll just set up a second cluster to avoid the risk of a failure during an upgrade.

surrTurr 3 days ago

Talos + Terraform on Hetzner

  • cassianoleal 2 days ago

    I'm doing a PoC deploying Talos on Proxmox via Terraform.

    One snag I've hit is applying specific config to the cluster. Some things need to be patched on a single master node, which requires me to have some ugly conditionals in the HCL code.

    Have you had that situation? How did you solve it?

    • chaostheo a day ago

      Don't you still find that ZFS is a bit slow for NFS file service (for real clients, or for any VM/container)?

      2x servers, Xeon 62xy, 192G, 13x18T r6 xfs, 100Gb ConnectX-6, IPoIB, NFSv4:

          ./elbencho -r -b 1M -t 8 -s 10g --direct F{1..8}

          OPERATION  RESULT TYPE         FIRST DONE  LAST DONE
          =========  ================    ==========  =========
          READ       Elapsed time     :      7.151s     7.786s
                     IOPS             :       10789      10521
                     Throughput MiB/s :       10789      10521
                     Total MiB        :       77157      81920

      Network stats on ibp216s0 during the run (rxcmp/s, txcmp/s and rxmcst/s columns omitted, all 0.00):

          Time         IFACE        rxpck/s    txpck/s       rxkB/s    txkB/s  %ifutil
          09:01:36 PM  ibp216s0        0.00       0.00         0.00      0.00     0.00
          09:01:37 PM  ibp216s0        0.00       0.00         0.00      0.00     0.00
          09:01:38 PM  ibp216s0  1500050.00   53390.00   3055651.88   3698.86    25.03
          09:01:39 PM  ibp216s0  5653739.00  200654.00  11516727.26  13894.61    94.35
          09:01:40 PM  ibp216s0  5705062.00  203018.00  11621316.50  14052.73    95.20
          09:01:41 PM  ibp216s0  5620242.00  196400.00  11448537.04  13633.62    93.79
          09:01:42 PM  ibp216s0  5699689.00  187514.00  11610387.99  13142.38    95.11
          09:01:43 PM  ibp216s0  5713953.00  186006.00  11639418.27  13060.07    95.35
          09:01:44 PM  ibp216s0  5725531.00  193176.00  11663012.59  13484.04    95.54
          09:01:45 PM  ibp216s0  5610783.00  187174.00  11429270.82  13089.09    93.63
          09:01:46 PM  ibp216s0  1942932.00   62188.00   3957775.14   4379.84    32.42
          09:01:47 PM  ibp216s0        0.00       0.00         0.00      0.00     0.00
          09:01:48 PM  ibp216s0        0.00       0.00         0.00      0.00     0.00

      And still not tried creating snapshots as reflink copies, or even duperemove? Then dream on with ZFS.