Bringing the AMD Radeon AI PRO R9700 Online in OpenStack Flex

I'll be honest. When the AMD Radeon AI PRO R9700 first showed up on my radar, I wasn't sure what to make of it. It's not a traditional datacenter card, and it's not a gaming card either. The R9700 is a 32 GB professional GPU that won't break the bank, and it sits in a product category that didn't really exist eighteen months ago.

This week our team brought a pair of R9700 GPUs online in Rackspace OpenStack Flex. Like any good story, there was a bit of drama along the way: servers, placement, shipping times, cable oddities, a chassis crisis, and more. We had the makings of a feature-length K-drama, twists and turns included. Once we got past the drama and the parts were installed and powered on, the actual deployment took about ten minutes, a testament to Genestack's Kubernetes-native architecture and OpenStack's hardware-agnostic design.

Getting Started with AMD GPU Compute on Rackspace OpenStack Flex

Your instance is up, your AMD GPU is attached, and you're staring at a terminal with no nvidia-smi to lean on. Welcome to the other side.

If you've read our NVIDIA getting started guide, you know the drill: provision an instance, install drivers, verify the hardware, start computing. The AMD path follows the same logic but with different tooling. Instead of CUDA, you're working with ROCm. Instead of nvidia-smi, you've got rocm-smi. Instead of a driver ecosystem that's had two decades of cloud deployment polish, you've got one that's been moving fast and getting dramatically better, but still has some rough edges worth knowing about.
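As a first sanity check in that spirit, a minimal sketch (not the guide's exact procedure) of the "verify the hardware" step: a small wrapper that reports whether rocm-smi is on the PATH and runs it if so.

```python
# Minimal "is the AMD GPU stack visible?" probe (a sketch).
# rocm-smi plays the role nvidia-smi does on the NVIDIA side.
import shutil
import subprocess


def probe_rocm() -> str:
    """Return rocm-smi's report if installed, else a pointer to install ROCm."""
    if shutil.which("rocm-smi") is None:
        return "rocm-smi not found; install the ROCm driver stack first"
    result = subprocess.run(["rocm-smi"], capture_output=True, text=True)
    return result.stdout


print(probe_rocm())
```

On a freshly provisioned instance this tells you immediately whether the driver install took; the same pattern works for any CLI tool you want to verify before relying on it.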

The Business Case for CPU-Based AI Inference

Your finance team doesn't care about tokens per second. They care about predictable costs, compliance risk, and vendor lock-in. Here's how CPU inference stacks up.

The other week I published a technical deep-dive on running LLM inference with AMD EPYC processors and ZenDNN. The benchmarks showed that a $0.79/hour VM can push 40-125 tokens per second depending on model size: genuinely usable performance for a surprising range of workloads.
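The per-token economics fall out of simple arithmetic. A quick sketch using the figures above (the $0.79/hour rate and the 40-125 tokens-per-second benchmark range):

```python
# Back-of-envelope cost per million tokens, using the article's own
# figures: a $0.79/hr EPYC VM pushing 40-125 tokens per second.
HOURLY_RATE = 0.79  # USD per hour for the VM


def cost_per_million_tokens(tokens_per_second: float) -> float:
    """Dollars to generate one million tokens at a given throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return HOURLY_RATE / tokens_per_hour * 1_000_000


for tps in (40, 125):  # slow end vs. fast end of the benchmark range
    print(f"{tps} tok/s -> ${cost_per_million_tokens(tps):.2f} per 1M tokens")
```

That works out to roughly $5.49 per million tokens at the slow end and about $1.76 at the fast end, which is the kind of number a finance team can actually budget against.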

But benchmarks don't answer the question that actually matters: Should you do this?

Running AI Inference on AMD EPYC Without a GPU in Sight

Spoiler: You don't need a $40,000 GPU to run LLM inference. Sometimes 24 CPU cores and the right software stack will do just fine.

The AI infrastructure conversation has become almost synonymous with GPU procurement battles, NVIDIA allocation queues, and eye-watering hardware costs. But here's a reality that doesn't get enough attention: for many inference workloads, especially during development, testing, and moderate-scale production, modern CPUs with optimized software can deliver surprisingly capable performance at a fraction of the cost.

Solving GPU Passthrough Memory Addressing in OpenStack

Delivering accelerator-enabled Developer Cloud functionality on Rackspace OpenStack Flex.

When AMD launched the AMD Developer Cloud, we took notice. Here was a streamlined platform giving developers instant access to high-performance MI300X GPUs, complete with pre-configured containers, Jupyter environments, and pay-as-you-go pricing. The offering resonated with the AI/ML community because it eliminated friction: spin up a GPU instance, start training, destroy it when done.