About

Why KubeSolo exists

The design decisions behind single-node Kubernetes, and where it fits.

The problem KubeSolo solves

Kubernetes is the right answer for container orchestration at scale. The ecosystem, the tooling, the declarative model, RBAC, the operator pattern: none of it has a real equivalent outside of Kubernetes. That's not in dispute.

The problem is that standard Kubernetes is designed for multi-node clusters, and a lot of the real world runs on single nodes. Edge devices. Factory gateways. Developer laptops. Remote site hardware. IoT controllers. The millions of machines that have been running Docker or Podman because standing up a full cluster was overhead that couldn't be justified for a single workload host.

That creates a gap. You either run Docker and give up the Kubernetes ecosystem entirely, or you run K3s or MicroK8s and accept that you're carrying clustering machinery you'll never use. KubeSolo closes the gap by taking a different starting position: remove the clustering code rather than disable it.

The design decision

Most "lightweight" Kubernetes distributions are full distributions that have been slimmed down. They still contain the multi-node code. It's just not active by default.

KubeSolo starts from the other end. The question we asked was: what does Kubernetes look like if you remove everything that requires more than one node? No etcd quorum, no leader election, no multi-node CNI overlay, no control plane distribution. What you're left with is still real Kubernetes (full API, full control loop, full ecosystem compatibility) but none of the unused weight.

The practical result: under 200 MB RAM at idle, optimized for flash storage, and an install path that takes under 60 seconds on hardware ranging from a Raspberry Pi to an industrial gateway.

The broader use case

KubeSolo was originally built with the device edge in mind: far-edge deployments, industrial OT environments, constrained embedded hardware where resources are finite and connectivity is intermittent.

That use case still defines the design. But the same properties that make KubeSolo right for edge hardware make it right for any single-node deployment scenario: developer workstations that want local Kubernetes without a hypervisor, remote and branch sites where one node per location makes operational sense, kiosk and appliance workloads, CI runners, and anywhere you'd previously reached for Docker because a full cluster was overkill.

The framing we use internally: if you would have run Docker there, you can run KubeSolo instead. Same images, better runtime, full ecosystem.
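To make "same images" concrete: a container you might previously have started with `docker run -d -p 8080:80 nginx:1.27` can be expressed as a standard Kubernetes Deployment and applied to a KubeSolo node with `kubectl`. This is plain upstream Kubernetes YAML, nothing KubeSolo-specific; the names and image here are illustrative, not part of KubeSolo itself:

```yaml
# Docker equivalent: docker run -d -p 8080:80 nginx:1.27
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # illustrative name
spec:
  replicas: 1                 # one node, one replica
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # the same image you'd pull with Docker
          ports:
            - containerPort: 80
```

Save it as `web.yaml` and run `kubectl apply -f web.yaml`; because KubeSolo exposes the full Kubernetes API, standard manifests like this work unchanged.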

Relationship to Portainer

KubeSolo is built and maintained by Portainer, the team behind the open source container management platform of the same name. KubeSolo runs standalone and does not require Portainer.

Where Portainer adds value is at the fleet level. If you're operating multiple KubeSolo nodes, Portainer's Edge Agent connects each node to a centralized operator control plane: GitOps deployments via the Portainer operator, RBAC across your fleet, and lifecycle management for every node from a single interface. The Edge Agent initiates an outbound connection from the node, so it works behind NAT and strict firewalls without requiring any inbound rules.

KubeSolo is the runtime. Portainer is the optional management layer for when you need more than one.

License

KubeSolo is released under the MIT license. Source is on GitHub. Contributions, issues, and pull requests are welcome.