Blog

Agentic AI in Action: Deploying Secure, Task-Driven Agents in Rackspace Cloud

AI agents have emerged as a transformative tool, empowering organizations to create AI systems capable of secure, real-world interactions. By combining a language model's natural language abilities with function-calling capabilities, an agentic AI system can interact with external systems, retrieve data, and perform complex tasks autonomously. Enterprises can harness the full potential of AI by designing agent workflows that interact securely with business data, ensuring control and privacy. In this post, we explore building an agentic AI workflow with Meta's Llama 3.1, specifically crafted for interacting with private data from a database. We'll dive into the technical foundation of agentic systems, examine how function calls operate, and show how to deploy all of this within a private cloud, keeping data secure.
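
As a rough illustration of how a function call flows through such a system, the sketch below assumes Llama 3.1 is served behind an OpenAI-compatible endpoint inside the private cloud (for example via vLLM); the endpoint URL, model name, and the get_customer_orders lookup are all hypothetical stand-ins.

```python
import json
from openai import OpenAI  # any OpenAI-compatible endpoint works here

# Assumption: Llama 3.1 is served behind an OpenAI-compatible API inside the
# private cloud; the URL, API key, and model name below are illustrative.
client = OpenAI(base_url="http://llm.internal:8000/v1", api_key="not-needed")

# Describe the local function the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_customer_orders",  # hypothetical database lookup
        "description": "Return recent orders for a customer from the private database.",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}]

def get_customer_orders(customer_id: str) -> list[dict]:
    # Placeholder for a parameterized query against the private database.
    return [{"order_id": "A-1001", "status": "shipped"}]

messages = [{"role": "user", "content": "What did customer 42 order recently?"}]
response = client.chat.completions.create(
    model="llama-3.1-70b-instruct", messages=messages, tools=tools
)

# For the sketch we assume the model chose to call the tool.
call = response.choices[0].message.tool_calls[0]
result = get_customer_orders(**json.loads(call.function.arguments))

# Feed the tool result back so the model can answer in natural language.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
final = client.chat.completions.create(model="llama-3.1-70b-instruct", messages=messages)
print(final.choices[0].message.content)
```

The key property is that the model never touches the database directly: it only requests a named function, and the application decides whether and how to execute it, which is what keeps the private data under your control.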

Getting Started with Pulumi and OpenStack

Pulumi is an open-source infrastructure-as-code (IaC) platform that enables you to define, deploy, and manage cloud infrastructure using familiar programming languages like Python, JavaScript, TypeScript, Go, and C#. By leveraging your existing coding skills, you can target any cloud provider, including AWS, Azure, Google Cloud, Kubernetes, and OpenStack. Unlike traditional tools that rely on YAML files or domain-specific languages, Pulumi uses general-purpose programming languages for greater flexibility and expressiveness, which means you can use standard programming constructs, such as loops, conditionals, and functions, to build complex infrastructure deployments efficiently.
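
As a hedged sketch of what that looks like in Python with the pulumi_openstack provider, the program below loops over three identical workers; the flavor, image, network, and key pair names are placeholders for values in your own project.

```python
"""Minimal Pulumi sketch that creates OpenStack instances.

Assumes the pulumi_openstack provider is installed and your clouds.yaml /
OS_* environment variables point at your OpenStack Flex project.
"""
import pulumi
import pulumi_openstack as openstack

# Ordinary Python: a loop builds several identical workers.
for i in range(3):
    server = openstack.compute.Instance(
        f"worker-{i}",
        flavor_name="gp.0.4.8",        # hypothetical flavor name
        image_name="Ubuntu 22.04",      # hypothetical image name
        key_pair="my-keypair",          # hypothetical key pair
        networks=[{"name": "tenant-net"}],
    )
    pulumi.export(f"worker_{i}_ip", server.access_ip_v4)
```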

Running Teleport Cluster on OpenStack Flex

Teleport is a modern security gateway for remotely accessing clusters of Linux servers via SSH or Kubernetes. In this guide, we will walk through deploying Teleport on an OpenStack Flex instance. As operators, we will need to create a new instance, install the Teleport software, and configure the service to run on the instance. This setup will allow us to access the Teleport web interface, create new users and roles, and manage access to the instance. The intent of this guide is to provide a simple example of how to deploy Teleport on an OpenStack Flex instance.
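
For the first step, creating the instance that will host Teleport, a minimal openstacksdk sketch might look like the following; the cloud entry, image, flavor, network, and key pair names are placeholders, and installing and configuring Teleport itself happens afterwards on the resulting server.

```python
import openstack

# Assumption: a "flex" entry exists in clouds.yaml; all names below are placeholders.
conn = openstack.connect(cloud="flex")

image = conn.compute.find_image("Ubuntu 22.04")
flavor = conn.compute.find_flavor("gp.0.4.8")
network = conn.network.find_network("tenant-net")

# Boot the server that will run the Teleport service.
server = conn.compute.create_server(
    name="teleport-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="my-keypair",
)
server = conn.compute.wait_for_server(server)
print(server.name, server.access_ipv4)
```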

Running Crunchydata Postgres on OpenStack Flex

Crunchy Data provides a Postgres Operator that simplifies the deployment and management of PostgreSQL clusters on Kubernetes. In this guide, we will walk through deploying the Postgres Operator from Crunchy Data on an OpenStack Flex instance. As operators, we will need to create a new instance, install the Postgres Operator software, and configure the service to run on the instance. The intent of this guide is to provide a simple, functional example of how to deploy the Postgres Operator from Crunchy Data on Kubernetes running on OpenStack Flex.
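
Once the operator is installed, a cluster is requested by submitting a PostgresCluster custom resource. The sketch below does that with the Kubernetes Python client; the names, namespace, and sizes are illustrative, and the spec loosely follows the PGO v5 quickstart, so field names may differ between operator versions.

```python
from kubernetes import client, config

# Sketch: submit a minimal PostgresCluster custom resource to a cluster where
# the Crunchy Postgres Operator is already installed.
config.load_kube_config()

postgres_cluster = {
    "apiVersion": "postgres-operator.crunchydata.com/v1beta1",
    "kind": "PostgresCluster",
    "metadata": {"name": "hippo", "namespace": "postgres-operator"},
    "spec": {
        "postgresVersion": 16,
        "instances": [{
            "name": "instance1",
            "dataVolumeClaimSpec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "5Gi"}},
            },
        }],
        "backups": {"pgbackrest": {"repos": [{
            "name": "repo1",
            "volume": {"volumeClaimSpec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "5Gi"}},
            }},
        }]}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="postgres-operator.crunchydata.com",
    version="v1beta1",
    namespace="postgres-operator",
    plural="postgresclusters",
    body=postgres_cluster,
)
```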

Running MetalLB on OpenStack Flex

MetalLB is a network load balancer implementation for Kubernetes clusters. It runs as a Kubernetes controller that watches for Services of type LoadBalancer and assigns and announces addresses for them using standard routing protocols. In this post we'll set up allowed address pairs on the OpenStack Flex network so that MetalLB can assign IPs to LoadBalancer services.
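
A hedged sketch of that step with openstacksdk is shown below; the cloud name, network name, and MetalLB address pool are placeholders, and only ports owned by Nova instances (the Kubernetes nodes) are updated.

```python
import openstack

# Sketch: permit the MetalLB address pool on the Kubernetes nodes' ports so
# the announced service IPs are not dropped by port security.
conn = openstack.connect(cloud="flex")

metallb_pool = "172.16.100.0/29"  # hypothetical MetalLB address pool

network = conn.network.find_network("tenant-net")
for port in conn.network.ports(network_id=network.id):
    # Skip router/DHCP ports; only touch instance ports.
    if not (port.device_owner or "").startswith("compute:"):
        continue
    pairs = list(port.allowed_address_pairs or [])
    pairs.append({"ip_address": metallb_pool})
    conn.network.update_port(port, allowed_address_pairs=pairs)
    print(f"updated {port.id}")
```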

Running CockroachDB on OpenStack Flex

CockroachDB is a distributed SQL database purpose-built for the cloud, providing consistency, fault tolerance, and scalability. In this guide, we will walk through deploying CockroachDB on an OpenStack Flex instance. As operators, we will need to create a new instance, install the CockroachDB software, and configure the service to run on the instance. The intent of this guide is to provide a simple, functional example of how to deploy CockroachDB on Kubernetes running on OpenStack Flex.
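
Because CockroachDB speaks the PostgreSQL wire protocol, a quick way to verify the deployment is to connect with an ordinary PostgreSQL driver. The sketch below uses psycopg2; the service hostname, user, and TLS settings are placeholders for whatever your deployment created.

```python
import psycopg2

# Sketch: connect to the cluster's public service and print its version.
# Host, credentials, and TLS mode below are illustrative placeholders.
conn = psycopg2.connect(
    host="cockroachdb-public.cockroachdb.svc.cluster.local",
    port=26257,
    user="roach",
    dbname="defaultdb",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
```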

Running Longhorn on OpenStack Flex

Longhorn is a distributed block storage system for Kubernetes that is designed to be easy to deploy and manage. In this guide, we will walk through deploying Longhorn on an OpenStack Flex instance. As operators, we will need to create a new instance, install the Longhorn software, and configure the service to run on the instance. This setup will allow us to access the Longhorn web interface and create new volumes, snapshots, and backups. The intent of this guide is to provide a simple example of how to deploy Longhorn on an OpenStack Flex instance.
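
Once Longhorn is running, workloads consume it through its StorageClass (named longhorn by default). The sketch below uses the Kubernetes Python client to request a volume; the PVC name, namespace, and size are illustrative.

```python
from kubernetes import client, config

# Sketch: create a PersistentVolumeClaim backed by Longhorn's StorageClass.
config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="longhorn",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```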

Running Talos on OpenStack Flex

As developers, we're constantly seeking platforms that streamline our workflows and enhance the performance and reliability of our applications. Talos is a container-optimized Linux distribution reimagined for distributed systems. Designed with minimalism and practicality in mind, Talos brings a host of features that are particularly advantageous for OpenStack environments. By stripping away unnecessary components, it reduces the attack surface and resource consumption, and it comes secure by default, with out-of-the-box configurations that alleviate the need for extensive hardening.
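
As a small, hedged sketch of the first step, the snippet below shells out to talosctl to generate machine configuration for a cluster on OpenStack Flex; the cluster name and control-plane endpoint are illustrative, and talosctl is assumed to be installed locally.

```python
import subprocess

# Hypothetical endpoint; substitute the address of your control plane on Flex.
cluster_endpoint = "https://203.0.113.10:6443"

# Generate Talos machine configs (controlplane.yaml, worker.yaml, talosconfig)
# in the current directory. "flex-talos" is an illustrative cluster name.
subprocess.run(
    ["talosctl", "gen", "config", "flex-talos", cluster_endpoint],
    check=True,
)
```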