
Blog

Create Octavia Load Balancers Dynamically with Kubernetes and OpenStack Cloud Controller Manager

octavia

Load Balancers are essential in Kubernetes for exposing services to users in a cloud native way by distributing network traffic across multiple nodes, ensuring high availability, fault tolerance, and optimal performance for applications.

By integrating with OpenStack’s Load Balancer as a Service (LBaaS) solutions like Octavia, Kubernetes can automate the creation and management of these critical resources through the OpenStack Cloud Controller Manager. The controller identifies services of type LoadBalancer and automatically creates cloud load balancers on OpenStack Flex with the Kubernetes nodes as members.
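As a minimal sketch, a manifest like the one below is all the controller needs to see; the service name, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-web          # placeholder name
spec:
  type: LoadBalancer      # this type is what the cloud controller manager watches for
  selector:
    app: demo-web         # pods backing the service
  ports:
    - name: http
      port: 80            # port exposed on the Octavia load balancer
      targetPort: 8080    # container port on the pods
```

Once applied, the controller provisions an Octavia load balancer and publishes its address in the service's `status.loadBalancer` field.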

Using Ansible to create resources on Flex cloud

Ansible has a wide range of modules available to create and manage resources such as OpenStack flavors, images, keypairs, networks, and routers on the Flex cloud. These modules live in the openstack.cloud Ansible collection. In this post we will discuss creating resources on the Flex cloud using these modules.
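As a rough sketch of what that looks like, the playbook below creates a couple of resources with the collection's modules; it assumes the collection is installed (`ansible-galaxy collection install openstack.cloud`) and that `clouds.yaml` contains an entry named `flex` (the cloud name and resource names are placeholders):

```yaml
---
- name: Create basic resources on the Flex cloud
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a private network
      openstack.cloud.network:
        cloud: flex                 # entry in clouds.yaml (assumed name)
        name: demo-network
        state: present

    - name: Upload an SSH keypair
      openstack.cloud.keypair:
        cloud: flex
        name: demo-key
        public_key_file: ~/.ssh/id_rsa.pub
        state: present
```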

Deploy a Fully Automated Talos Cluster in Under 180 Seconds with Pulumi TypeScript

pulumi talos-linux

Talos is a modern operating system designed for Kubernetes, providing a secure and minimal environment for running your clusters. Deploying Talos on OpenStack Flex can be streamlined using Pulumi, an infrastructure as code tool that allows you to define cloud resources using familiar programming languages like TypeScript.

In this guide, we'll walk through setting up the necessary network infrastructure on OpenStack Flex using Pulumi and TypeScript, preparing the groundwork for running Talos.
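As a hedged sketch of that groundwork, the program below uses the `@pulumi/openstack` provider to create a private network, subnet, and router; the CIDR and the external network ID (supplied via `pulumi config set externalNetworkId <uuid>`) are assumptions, not values from the guide:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as openstack from "@pulumi/openstack";

const config = new pulumi.Config();
// ID of the external/provider network in your region (assumed to be set in config).
const externalNetworkId = config.require("externalNetworkId");

// Private network and subnet for the Talos nodes.
const network = new openstack.networking.Network("talos-net", {
    adminStateUp: true,
});

const subnet = new openstack.networking.Subnet("talos-subnet", {
    networkId: network.id,
    cidr: "10.0.10.0/24",
    ipVersion: 4,
});

// Router with a gateway on the external network, attached to the subnet.
const router = new openstack.networking.Router("talos-router", {
    externalNetworkId: externalNetworkId,
});

new openstack.networking.RouterInterface("talos-router-iface", {
    routerId: router.id,
    subnetId: subnet.id,
});

export const networkId = network.id;
```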

Using Ansible to manage instances on Flex cloud

In this blog post we will discuss how to use Ansible to manage instances running on a Flex cloud. While it is also possible to create resources on an OpenStack cloud with Ansible, the main aim of this post is to show how to manage existing instances within a project. The examples provided are for instances running Ubuntu 20.04 LTS as the base OS; the instructions can be adapted to accommodate any other OS as well.
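A minimal sketch of that kind of management task is shown below: it applies package updates to existing Ubuntu 20.04 instances, assuming an inventory that lists them (for example one built with the `openstack.cloud.openstack` dynamic inventory plugin) and SSH access as the `ubuntu` user:

```yaml
---
- name: Apply package updates to existing instances
  hosts: all
  become: true
  remote_user: ubuntu            # default user on Ubuntu cloud images
  tasks:
    - name: Update apt cache and upgrade packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: safe
```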

Agentic AI in Action: Deploying Secure, Task-Driven Agents in Rackspace Cloud

The concept of AI agents has emerged as a transformative tool, empowering organizations to create AI systems capable of secure, real-world interactions. By leveraging a language model’s natural language abilities alongside function-calling capabilities, an agentic AI system can interact with external systems, retrieve data, and perform complex tasks autonomously. Enterprises can harness the full potential of AI by designing agent workflows that interact securely with business data, ensuring control and privacy. In this post, we explore building an AI agentic workflow with Meta’s LLaMA 3.1, specifically crafted for interacting with private data from a database. We’ll dive into the technical foundation of agentic systems, examine how function calls operate, and show how to securely deploy all of this within a private cloud, keeping data secure.
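To make the function-calling idea concrete, here is a rough TypeScript sketch against an OpenAI-compatible endpoint (servers such as vLLM expose one for Llama 3.1); the endpoint URL, model name, and the `get_customer_orders` tool are hypothetical placeholders, not the workflow from the post:

```typescript
import OpenAI from "openai";

// Hypothetical OpenAI-compatible endpoint serving Llama 3.1 inside the private cloud.
const client = new OpenAI({
    baseURL: "http://llm.internal.example:8000/v1",
    apiKey: "not-needed-for-private-endpoint",
});

async function main() {
    const completion = await client.chat.completions.create({
        model: "meta-llama/Llama-3.1-8B-Instruct",
        messages: [{ role: "user", content: "How many open orders does customer 42 have?" }],
        tools: [
            {
                type: "function",
                function: {
                    name: "get_customer_orders",   // hypothetical tool backed by a private database
                    description: "Return open orders for a customer",
                    parameters: {
                        type: "object",
                        properties: { customerId: { type: "integer" } },
                        required: ["customerId"],
                    },
                },
            },
        ],
    });

    // The model replies with a structured tool call instead of free text;
    // the agent executes the function and feeds the result back for a final answer.
    const toolCalls = completion.choices[0].message.tool_calls ?? [];
    for (const call of toolCalls) {
        console.log(call.function.name, call.function.arguments);
    }
}

main();
```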

Getting Started with Pulumi and OpenStack

pulumi

Pulumi is an open-source infrastructure-as-code (IaC) platform that enables you to define, deploy, and manage cloud infrastructure using familiar programming languages like Python, JavaScript, TypeScript, Go, and C#. By leveraging your existing coding skills and knowledge, Pulumi allows you to build, deploy, and manage infrastructure on any cloud provider, including AWS, Azure, Google Cloud, Kubernetes, and OpenStack. Unlike traditional tools that rely on YAML files or domain-specific languages, Pulumi offers a modern approach by utilizing general-purpose programming languages for greater flexibility and expressiveness. This means you can use standard programming constructs—such as loops, conditionals, and functions—to create complex infrastructure deployments efficiently.
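For example, an ordinary loop replaces copy-and-pasted resource blocks; the sketch below (using the `@pulumi/openstack` provider, with placeholder names) stamps out one network per environment:

```typescript
import * as openstack from "@pulumi/openstack";

// Using a plain loop to create one network per environment --
// the kind of construct a general-purpose language gives you for free.
const environments = ["dev", "staging", "prod"];

export const networkIds = environments.map(env =>
    new openstack.networking.Network(`${env}-network`, {
        adminStateUp: true,
    }).id,
);
```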

Running Teleport Cluster on OpenStack Flex


Teleport is a modern security gateway for remotely accessing clusters of Linux servers via SSH or Kubernetes. In this guide, we will walk through deploying Teleport on an OpenStack Flex instance. As operators, we will need to create a new instance, install the Teleport software, and configure the service to run on the instance. This setup will allow us to access the Teleport web interface, create new users and roles, and manage access to the instance. The intent of this guide is to provide a simple example of how to deploy Teleport on an OpenStack Flex instance.
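The configuration step usually comes down to a small `/etc/teleport.yaml`; a minimal sketch is shown below, with `teleport.example.com` standing in for the instance's real DNS name (an illustrative assumption, not the exact config from the guide):

```yaml
version: v3
teleport:
  nodename: teleport-flex            # placeholder node name
  data_dir: /var/lib/teleport
auth_service:
  enabled: true
  cluster_name: teleport.example.com
proxy_service:
  enabled: true
  web_listen_addr: 0.0.0.0:443
  public_addr: teleport.example.com:443
ssh_service:
  enabled: true
```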

Running Crunchydata Postgres on OpenStack Flex

Crunchy Data

Crunchy Data provides a Postgres Operator that simplifies the deployment and management of PostgreSQL clusters on Kubernetes. In this guide, we will walk through deploying the Postgres Operator from Crunchy Data on an OpenStack Flex instance. As operators, we will need to create a new instance, install the Postgres Operator software, and configure the service to run on the instance. The intent of this guide is to provide a simple, functional example of how to deploy the Postgres Operator from Crunchy Data on Kubernetes running on OpenStack Flex.
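Once the operator is running, a single custom resource describes the cluster; a minimal `PostgresCluster` sketch (cluster name, Postgres version, and storage sizes are placeholders) looks roughly like this:

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo                         # placeholder cluster name
spec:
  postgresVersion: 16
  instances:
    - name: instance1
      replicas: 2                     # two Postgres pods for availability
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 5Gi
```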

Running MetalLB on OpenStack Flex


MetalLB is a network load balancer implementation for Kubernetes clusters. It runs as a Kubernetes controller that watches for services of type LoadBalancer and assigns them addresses using standard routing protocols. In this post we'll set up allowed address pairs on the OpenStack Flex network so that MetalLB can assign floating IPs to the load balancer service.
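Once the address range is permitted on the nodes' Neutron ports via allowed address pairs, MetalLB is told about the same range; a minimal sketch of the pool and its L2 advertisement (the range itself is a placeholder and must match what the allowed address pairs permit) looks like this:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: flex-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.10.200-10.0.10.220   # placeholder range allowed on the node ports
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: flex-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - flex-pool
```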