kubernetes
Google's open-source version of container cluster management
Reviews for kubernetes
Hear what real users highlight about this tool.
Makers praise Kubernetes for dependable orchestration at scale and cloud portability. The makers of Dodo Payments say it powers their containerized workloads and smooth deployments. The makers of xpander.ai highlight customers running AI agent infrastructure on their own clusters. The makers of Pulse for Elasticsearch and OpenSearch deploy it across all major clouds. Broader reviewers echo strong autoscaling, reliability, and flexibility for microservices, noting zero-downtime rollouts and freedom from vendor lock-in. Some mention alternatives like ECS, but the consensus favors Kubernetes for consistent, scalable operations across environments.
This AI-generated snapshot distills top reviewer sentiments.
Our customers are using the xpander.ai self-service deployment capabilities to run AI Agent infrastructure on their own Kubernetes clusters
It’s the backbone of our orchestration. K8s gives us control, isolation, and scalability across all our agent workloads. Still the best way to scale complex, stateful AI infra.
We self-host everything on our own servers in the Netherlands using Kubernetes to stay fully in control of reliability, privacy, and cost. It lets us scale efficiently, roll out updates with zero downtime, and avoid vendor lock-in, which is critical for a platform like UniDeck that offers self-hosted Enterprise licenses.
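As an illustration of the zero-downtime rollouts reviewers mention, a Deployment can use a rolling-update strategy so new pods become ready before old ones are removed. This is a minimal sketch, not UniDeck's actual setup; the name `unideck-web`, image, replica count, and probe path are assumptions for illustration.

```yaml
# Hypothetical Deployment sketch: a rolling update that keeps every replica
# serving during a rollout. Names, image, and counts are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unideck-web            # assumed name, not the reviewer's real workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: unideck-web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never take a serving pod down early
      maxSurge: 1              # bring up one new pod at a time
  template:
    metadata:
      labels:
        app: unideck-web
    spec:
      containers:
        - name: web
          image: registry.example.com/unideck/web:1.2.3   # placeholder image
          readinessProbe:                                  # gate traffic on readiness
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```

With `maxUnavailable: 0`, traffic only shifts to a new pod once its readiness probe passes, which is the usual way a zero-downtime rollout is achieved.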
Been using Kubernetes for a few years to manage production workloads. Once you get past the learning curve, it's incredibly reliable for scaling, deployment, and monitoring. It's now a core part of our DevOps workflow and makes managing containerized apps so much easier.
We’ve been relying on Kubernetes to run and scale our applications, and it’s been a huge step forward. The flexibility it gives us to manage deployments, scale services automatically, and keep everything resilient has made a real difference for our team.
Kubernetes gives us full control and scalability for our infrastructure. Instead of being tied to one provider’s limitations, we can orchestrate our services in a way that’s resilient, portable, and ready for growth. It’s the backbone that keeps Topik reliable as we scale.
As a platform leader, I’ve seen firsthand how it brings consistency, reliability, and flexibility to complex deployments. It’s a game-changer for any team serious about modern infrastructure. AWS ECS is a good alternative.
As we’re building a next-gen AI workflow discovery platform, reliability and scalability are critical. Kubernetes empowers us to manage complex microservices and scale our vector databases, worker pods, and context engines seamlessly.
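For context on the automatic scaling these reviewers describe, a HorizontalPodAutoscaler can grow and shrink a worker Deployment based on observed load. The sketch below is hypothetical; `worker-pool`, the replica bounds, and the CPU target are assumptions, not any reviewer's actual configuration.

```yaml
# Hypothetical HorizontalPodAutoscaler sketch: scales a worker Deployment
# between 2 and 20 replicas, targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-pool-hpa       # assumed name for illustration
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker-pool         # assumed Deployment name
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```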
Kubernetes has been essential for deploying and scaling StakeCircle reliably. It manages the orchestration of containers, ensuring that the platform remains highly available and resilient across different environments as usage grows.
What we use it for: Container orchestration and scaling Why we like it: Kubernetes lets us run our real-time inference infrastructure at scale, with the flexibility to optimize for performance, cost, and availability.
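To illustrate the performance, cost, and availability tuning mentioned above, per-container resource requests and limits are the usual lever: requests drive scheduling and bin-packing cost, while limits cap a pod's burst. The snippet below is a generic sketch with assumed names and values, not the reviewer's configuration.

```yaml
# Hypothetical resource sketch for an inference pod: the scheduler reserves
# the requested CPU/memory; limits cap usage. Values are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker      # assumed name for illustration
spec:
  containers:
    - name: model-server
      image: registry.example.com/inference/server:latest   # placeholder image
      resources:
        requests:
          cpu: "2"            # reserved at scheduling time (cost floor)
          memory: 4Gi
        limits:
          cpu: "4"            # burst ceiling
          memory: 8Gi
```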
The intelligence at the heart of Speaknbuild relies on our AI agents, which require a robust and flexible infrastructure. That's why we use Kubernetes to orchestrate them. Kubernetes allows us to run our AI agents reliably and ensures the necessary scalability to handle a growing number of users and complex requests. This makes Speaknbuild better and guarantees fast performance even under heavy load.
We selected Kubernetes for its unmatched orchestration capabilities and vast ecosystem, making our infrastructure more robust and portable than using ECS or traditional VMs.