
Kubernetes

A new paradigm for cloud-native infrastructure


What if spinning up a fully isolated Kubernetes cluster took seconds instead of hours, and cost a fraction of traditional managed Kubernetes? What if that cluster could run worker nodes anywhere in the world, even across clouds, while still being centrally managed? What if the Control Plane itself could be treated as a workload, scaling elastically and sharing infrastructure with hundreds of other clusters? What if all of this functionality was available now?

Rackspace launched "Spot", a Kubernetes offering with a clear mission: to provide fully managed Kubernetes clusters at compelling cost-efficiency, powered by dynamic spot/auction compute, and delivered as a turnkey experience.

In doing so, Rackspace had to confront a fundamental question: how do you build a multi-tenant service that can spin up hundreds, or even thousands, of isolated Kubernetes clusters, each with its own Control Plane, without the overhead and complexity that traditional architectures entail?

This called for far more than simple cluster-creation automation: Rackspace needed an architecture built for scale, elasticity, and efficient multi-tenant orchestration. That's where Kamaji came in.

Streamlining Node Access with K9s and kubectl-node-shell

Debugging Kubernetes clusters often requires direct access to nodes, and there are several ways to get it: SSH, iLO/DRAC, kubectl debug, and so on. I love shortcuts, aliases, functions, and scripts that help me quickly gather data and speed up my troubleshooting. In this quick post I'll show you K9s, a powerful terminal UI for Kubernetes, and how to enhance it with kubectl-node-shell for seamless node access. Hopefully it gives you another tool to use with your Kubernetes clusters.
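As a taste of what the post covers, a K9s plugin for kubectl-node-shell can be wired up in the K9s plugins file (typically `~/.config/k9s/plugins.yaml`) along the lines of the snippet below; the shortcut key is a matter of preference, and this sketch assumes the kubectl-node-shell plugin is already installed:

```yaml
plugins:
  node-shell:
    shortCut: s
    description: Open node shell
    scopes:
      - nodes
    command: kubectl
    background: false
    args:
      - node-shell
      - $NAME
```

With this in place, highlighting a node in K9s and pressing the shortcut drops you into a shell on that node.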

Create Octavia Load Balancers Dynamically with Kubernetes and the OpenStack Cloud Controller Manager


Load Balancers are essential in Kubernetes for exposing services to users in a cloud native way by distributing network traffic across multiple nodes, ensuring high availability, fault tolerance, and optimal performance for applications.

By integrating with OpenStack’s Load Balancer as a Service (LBaaS) solution, Octavia, Kubernetes can automate the creation and management of these critical resources through the OpenStack Cloud Controller Manager. The controller identifies Services of type LoadBalancer and automagically creates cloud load balancers on OpenStack Flex with the Kubernetes nodes as members.
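For example, a Service manifest like the following is all the controller needs to provision a load balancer; the names, labels, and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # triggers the cloud controller manager
  selector:
    app: web
  ports:
    - port: 80         # port exposed on the Octavia load balancer
      targetPort: 8080 # port the pods listen on
```

Once applied, the controller creates the Octavia load balancer and writes its address back into the Service's status.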

Deploy a Fully Automated Talos Cluster in Under 180 Seconds with Pulumi TypeScript


Talos is a modern operating system designed for Kubernetes, providing a secure and minimal environment for running your clusters. Deploying Talos on OpenStack Flex can be streamlined using Pulumi, an infrastructure as code tool that allows you to define cloud resources using familiar programming languages like TypeScript.

In this guide, we'll walk through setting up the necessary network infrastructure on OpenStack Flex using Pulumi and TypeScript, laying the groundwork for running Talos.
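As a rough sketch of where the guide is headed, and assuming the `@pulumi/openstack` provider, the network and subnet for the Talos nodes might be declared like this; the resource names and CIDR are illustrative:

```typescript
import * as openstack from "@pulumi/openstack";

// Private network to carry Talos node traffic.
const network = new openstack.networking.Network("talos-net", {
    adminStateUp: true,
});

// Subnet the Talos nodes will draw addresses from.
const subnet = new openstack.networking.Subnet("talos-subnet", {
    networkId: network.id,
    cidr: "10.0.10.0/24",
    ipVersion: 4,
});

// Export the IDs so later stacks (instances, routers) can reference them.
export const networkId = network.id;
export const subnetId = subnet.id;
```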

Getting Started with Pulumi and OpenStack


Pulumi is an open-source infrastructure-as-code (IaC) platform that enables you to define, deploy, and manage cloud infrastructure using familiar programming languages like Python, JavaScript, TypeScript, Go, and C#. By leveraging your existing coding skills and knowledge, Pulumi allows you to build, deploy, and manage infrastructure on any cloud provider, including AWS, Azure, Google Cloud, Kubernetes, and OpenStack. Unlike traditional tools that rely on YAML files or domain-specific languages, Pulumi offers a modern approach by utilizing general-purpose programming languages for greater flexibility and expressiveness. This means you can use standard programming constructs—such as loops, conditionals, and functions—to create complex infrastructure deployments efficiently.
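To illustrate the point about general-purpose constructs, plain TypeScript loops and conditionals can stamp out per-environment resource definitions. The `ServerSpec` shape and flavor names below are illustrative, not a real provider API:

```typescript
// Shape of a server definition we want to generate per environment.
interface ServerSpec {
    name: string;
    flavor: string;
    monitoring: boolean;
}

const environments = ["dev", "staging", "prod"];

// Loops and conditionals replace copy-pasted YAML stanzas.
const servers: ServerSpec[] = environments.map((env) => ({
    name: `web-${env}`,
    flavor: env === "prod" ? "m1.large" : "m1.small", // size prod differently
    monitoring: env !== "dev",                        // skip monitoring in dev
}));

// In a real Pulumi program, each spec would feed a resource constructor,
// e.g. new openstack.compute.Instance(spec.name, { ... }).
console.log(servers.map((s) => `${s.name}:${s.flavor}`).join(","));
// → web-dev:m1.small,web-staging:m1.small,web-prod:m1.large
```

The same pattern extends to any repeated infrastructure: one typed definition, one loop, and the differences between environments live in ordinary expressions.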

Running Teleport Cluster on OpenStack Flex


Teleport is a modern security gateway for remotely accessing clusters of Linux servers via SSH or Kubernetes. In this guide, we will walk through deploying Teleport on an OpenStack Flex instance. As operators, we will create a new instance, install the Teleport software, and configure the service to run on it. This setup will allow us to access the Teleport web interface, create new users and roles, and manage access to the instance. The intent of this guide is to provide a simple example of how to deploy Teleport on an OpenStack Flex instance.

Running Crunchydata Postgres on OpenStack Flex


Crunchy Data provides a Postgres Operator that simplifies the deployment and management of PostgreSQL clusters on Kubernetes. In this guide, we will walk through deploying the Postgres Operator from Crunchy Data on an OpenStack Flex instance. As operators, we will create a new instance, install the Postgres Operator software, and configure the service to run on it. The intent of this guide is to provide a simple, functional example of how to deploy the Postgres Operator from Crunchy Data on Kubernetes running on OpenStack Flex.

Running MetalLB on OpenStack Flex


MetalLB is a network load balancer implementation for Kubernetes clusters. It runs as a Kubernetes controller that watches for Services of type LoadBalancer and assigns them addresses, using an implementation based on standard routing protocols. In this post we'll set up a set of allowed address pairs on the OpenStack Flex network to allow MetalLB to assign floating IPs to the load balancer service.
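The post builds toward a MetalLB address pool. As a sketch, a Layer 2 setup would look something like the following, where the address range is illustrative and must match the addresses permitted by the allowed address pairs on the node ports:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: flex-pool
  namespace: metallb-system
spec:
  addresses:
    # Range MetalLB may hand out to LoadBalancer Services;
    # must be covered by the OpenStack allowed address pairs.
    - 172.16.0.100-172.16.0.110
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: flex-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - flex-pool
```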

Running CockroachDB on OpenStack Flex

CockroachDB is a distributed SQL database that provides consistency, fault tolerance, and scalability, purpose-built for the cloud. In this guide, we will walk through deploying CockroachDB on an OpenStack Flex instance. As operators, we will create a new instance, install the CockroachDB software, and configure the service to run on it. The intent of this guide is to provide a simple, functional example of how to deploy CockroachDB on Kubernetes running on OpenStack Flex.

Running Longhorn on OpenStack Flex


Longhorn is a distributed block storage system for Kubernetes that is designed to be easy to deploy and manage. In this guide, we will walk through deploying Longhorn on an OpenStack Flex instance. As operators, we will need to create a new instance, install the Longhorn software, and configure the service to run on the instance. This setup will allow us to access the Longhorn web interface and create new volumes, snapshots, and backups. The intent of this guide is to provide a simple example of how to deploy Longhorn on an OpenStack Flex instance.