Deploying Kubernetes Clusters with Cluster API and Talos on Rackspace OpenStack Flex
We detail how to build immutable, secure, and minimal Kubernetes clusters by combining Cluster API (CAPI) with Talos Linux. This stack lets you leverage the cloud-agnostic management capabilities of CAPI while benefiting from Talos's minimal attack surface. Deploying on Rackspace OpenStack Flex gives you complete control over the underlying infrastructure, maximizes resource efficiency, and provides a production-ready cloud-native environment. This integration simplifies day-2 operations and delivers enterprise-grade security for your private cloud.

Note
For the management cluster, in this case we are going to make use of a namespace in the Flex management cluster.
For the purposes of this blog post and environment, the following options were chosen:
Fetch the Talos image in your Rackspace OpenStack Flex environment:
wget https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v1.11.1/openstack-amd64.raw.xz
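The downloaded image is xz-compressed, while the conversion step below expects the raw file, so decompress it first (this assumes the xz utility is available on the host):
xz -d openstack-amd64.raw.xz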
Note
If qemu-img isn't installed, it can be installed with: sudo apt-get install qemu-utils
qemu-img convert -f raw \
-O qcow2 openstack-amd64.raw talos-v1.11.1.qcow2
Upload the converted image to Glance:
openstack image create \
--progress \
--disk-format qcow2 \
--container-format bare \
--file ./talos-v1.11.1.qcow2 \
--property hw_vif_multiqueue_enabled=true \
--property hw_qemu_guest_agent=yes \
--property hypervisor_type=kvm \
--property img_config_drive=optional \
--os-cloud image-services \
--public \
Talos-v1.11.1
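Optionally, confirm that the image is active before moving on; for example:
openstack image show Talos-v1.11.1 --os-cloud image-services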
K-ORC is a Kubernetes API for declarative management of OpenStack resources. By fully controlling the order of OpenStack operations, it allows consumers to easily create, manage, and reproduce complex deployments. ORC aims to be easily consumed both directly by users and by higher-level controllers, and to cover every OpenStack API that can be expressed declaratively. Install the latest release into the management cluster:
kubectl apply -f https://github.com/k-orc/openstack-resource-controller/releases/latest/download/install.yaml
Output
namespace/orc-system created
customresourcedefinition.apiextensions.k8s.io/flavors.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/floatingips.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/images.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/networks.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/ports.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/projects.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/routerinterfaces.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/routers.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/securitygroups.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/servergroups.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/servers.openstack.k-orc.cloud created
customresourcedefinition.apiextensions.k8s.io/subnets.openstack.k-orc.cloud created
serviceaccount/orc-controller-manager created
role.rbac.authorization.k8s.io/orc-leader-election-role created
clusterrole.rbac.authorization.k8s.io/orc-image-editor-role created
clusterrole.rbac.authorization.k8s.io/orc-image-viewer-role created
clusterrole.rbac.authorization.k8s.io/orc-manager-role created
clusterrole.rbac.authorization.k8s.io/orc-metrics-auth-role created
clusterrole.rbac.authorization.k8s.io/orc-metrics-reader created
rolebinding.rbac.authorization.k8s.io/orc-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/orc-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/orc-metrics-auth-rolebinding created
service/orc-controller-manager-metrics-service created
deployment.apps/orc-controller-manager created
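As a quick sanity check (not part of the original output above), you can verify that the ORC controller manager is running:
kubectl get pods -n orc-system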
Note
It's assumed that clusterctl is pre-installed. If not, refer to these instructions.
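The initialization output below was produced by clusterctl init; a likely invocation, assuming the default kubeadm providers plus the OpenStack infrastructure provider, is:
clusterctl init --infrastructure openstack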
Output
Fetching providers
Skipping installing cert-manager as it is already installed
Installing provider="cluster-api" version="v1.10.4" targetNamespace="capi-system"
Installing provider="bootstrap-kubeadm" version="v1.10.4" targetNamespace="capi-kubeadm-bootstrap-system"
Installing provider="control-plane-kubeadm" version="v1.10.4" targetNamespace="capi-kubeadm-control-plane-system"
Installing provider="infrastructure-openstack" version="v0.12.4" targetNamespace="capo-system"
Your management cluster has been initialized successfully!
CAPI has bootstrap and control-plane provider support for Talos:
- CACPPT repo
- CABPT repo
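The provider entries below can be listed from the local clusterctl configuration; the column layout matches clusterctl config repositories, so a likely command (an assumption, filtered to the relevant providers) is:
clusterctl config repositories | grep -E 'talos|openstack'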
Output
talos BootstrapProvider https://github.com/siderolabs/cluster-api-bootstrap-provider-talos/releases/latest/ bootstrap-components.yaml
talos ControlPlaneProvider https://github.com/siderolabs/cluster-api-control-plane-provider-talos/releases/latest/ control-plane-components.yaml
openstack InfrastructureProvider https://github.com/kubernetes-sigs/cluster-api-provider-openstack/releases/latest/ infrastructure-components.yaml
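With the Talos providers known to clusterctl, they can be installed into the management cluster; a likely command (the exact invocation is not shown in the original) is:
clusterctl init --bootstrap talos --control-plane talos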
Output
Note
As noted earlier, we are using a namespace in the Flex management cluster; the workload cluster resources below are created in the talos-cluster namespace.
Create a clouds.yaml file using the project credentials:
clouds:
default:
auth:
auth_url: https://<keystone_public_url>/v3
project_name: <project name>
project_domain_name: <Project domain name>
username: <username>
password: <password>
user_domain_name: <User domain name>
interface: public
region_name: <Region name>
identity_api_version: "3"
Now let's create a secret and label it using the commands below:
kubectl create secret generic talos-demo-cloud-config --from-file=clouds.yaml='clouds.yaml' --from-literal=cacert="" -n talos-cluster
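Note: the command above assumes the talos-cluster namespace already exists in the management cluster; if it does not, create it first:
kubectl create namespace talos-cluster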
Below is the talos.yaml manifest that we will use to create our Kubernetes workload cluster:
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: talos-cluster
namespace: talos-cluster
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
serviceDomain: cluster.local
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: TalosControlPlane
name: talos-cluster-control-plane
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
name: talos-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
name: talos-cluster
namespace: talos-cluster
spec:
disableAPIServerFloatingIP: false
externalNetwork:
# adjust to the floating IP (external) network ID in the openstack project
id: 158704f8-4de4-42f7-8a2a-704c3427aa6e
managedSubnets:
- cidr: 10.6.0.0/24
dnsNameservers:
- 8.8.8.8
- 8.8.4.4
identityRef:
cloudName: default
name: talos-demo-cloud-config
managedSecurityGroups:
allNodesSecurityGroupRules:
- description: Created by cluster-api-provider - Talos API nodes
direction: ingress
etherType: IPv4
name: Talos-API-NODES
portRangeMax: 50001
portRangeMin: 50000
protocol: tcp
remoteIPPrefix: "0.0.0.0/0"
- description: Created by cluster-api-provider - Talos API LB
direction: ingress
etherType: IPv4
name: Talos-API-LB
portRangeMax: 50000
portRangeMin: 50000
protocol: tcp
# this means the talosctl control port is open to the world
# adjust as seen fit
remoteIPPrefix: "0.0.0.0/0"
- description: Created by cluster-api-provider - ICMP Echo Request/Reply ingress
direction: ingress
etherType: IPv4
name: ICMP-Echo-ingress
protocol: icmp
portRangeMin: 8
portRangeMax: 0
remoteManagedGroups:
- controlplane
- worker
- description: Created by cluster-api-provider - ICMP Echo Request/Reply egress
direction: egress
etherType: IPv4
name: ICMP-Echo-egress
protocol: icmp
portRangeMin: 8
portRangeMax: 0
remoteManagedGroups:
- controlplane
- worker
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachineTemplate
metadata:
name: talos-cluster-control-plane
namespace: talos-cluster
spec:
template:
spec:
flavor: gp.8.4.8
image:
filter:
name: Talos-v1.11.1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachineTemplate
metadata:
name: talos-cluster-md-0
namespace: talos-cluster
spec:
template:
spec:
flavor: gp.8.4.8
image:
filter:
name: Talos-v1.11.1
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: TalosConfigTemplate
metadata:
name: talos-cluster-md-0
namespace: talos-cluster
spec:
template:
spec:
generateType: join
configPatches:
- op: replace
path: /machine/install/disk
value: /dev/vda
# adjust to your talos version
talosVersion: 1.11.1
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: talos-cluster-md-0
namespace: talos-cluster
spec:
clusterName: talos-cluster
# adjust to the desired number of worker nodes
replicas: 1
selector:
matchLabels: null
template:
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: TalosConfigTemplate
name: talos-cluster-md-0
clusterName: talos-cluster
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachineTemplate
name: talos-cluster-md-0
# adjust to the desired kubernetes version
version: v1.31.0
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: TalosControlPlane
metadata:
name: talos-cluster-control-plane
namespace: talos-cluster
spec:
infrastructureTemplate:
kind: OpenStackMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
name: talos-cluster-control-plane
namespace: talos-cluster
controlPlaneConfig:
controlplane:
generateType: controlplane
# adjust to your talos version
talosVersion: 1.11.1
configPatches:
- op: replace
path: /machine/install/disk
value: /dev/vda
- op: add
# this is required to deploy the cloud-controller-manager
# which is responsible for running cloud specific controllers
path: /cluster/externalCloudProvider
value:
enabled: true
manifests:
- https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
- https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
- https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
# adjust to the desired number of control planes (kubernetes master nodes)
replicas: 1
# adjust to the desired kubernetes version
version: v1.31.0
Now let's apply this manifest to create our workload cluster.
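Assuming the manifest above was saved as talos.yaml, applying it looks like:
kubectl apply -f talos.yaml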
Output
cluster.cluster.x-k8s.io/talos-cluster created
openstackcluster.infrastructure.cluster.x-k8s.io/talos-cluster created
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/talos-cluster-control-plane created
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/talos-cluster-md-0 created
talosconfigtemplate.bootstrap.cluster.x-k8s.io/talos-cluster-md-0 created
machinedeployment.cluster.x-k8s.io/talos-cluster-md-0 created
taloscontrolplane.controlplane.cluster.x-k8s.io/talos-cluster-control-plane created
After 2-3 minutes, we can verify that our workload cluster has been created. Extract the kubeconfig using the clusterctl command and create a pod in the new cluster to confirm that it is functional.
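A minimal sketch of extracting the kubeconfig with clusterctl (the output file name kubeconfig.talos is just an example):
clusterctl get kubeconfig talos-cluster -n talos-cluster > kubeconfig.talos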
Get the status of the various components using the commands below:
kubectl get -n talos-cluster taloscontrolplane.controlplane.cluster.x-k8s.io/talos-cluster-control-plane
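The node listing that follows matches the columns of kubectl get nodes -o wide run against the workload cluster; a likely invocation, using the kubeconfig extracted above, is:
kubectl --kubeconfig kubeconfig.talos get nodes -o wide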
Output
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
talos-cluster-control-plane-5jr7r Ready control-plane 9m34s v1.31.0 10.6.0.39 <none> Talos (v1.10.6) 6.12.40-talos containerd://2.0.5
talos-cluster-md-0-ltdrq-zkld4 Ready <none> 9m28s v1.31.0 10.6.0.118 <none> Talos (v1.10.6) 6.12.40-talos containerd://2.0.5
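Similarly, the pod listing below appears to come from querying the kube-system namespace of the workload cluster, for example:
kubectl --kubeconfig kubeconfig.talos get pods -n kube-system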
Output
NAME READY STATUS RESTARTS AGE
coredns-958d7d544-4p5hw 1/1 Running 0 10m
coredns-958d7d544-h7fjz 1/1 Running 0 10m
kube-apiserver-talos-cluster-control-plane-5jr7r 1/1 Running 0 9m48s
kube-controller-manager-talos-cluster-control-plane-5jr7r 1/1 Running 2 (10m ago) 9m2s
kube-flannel-gbtpl 1/1 Running 0 9m58s
kube-flannel-mxgzj 1/1 Running 0 10m
kube-proxy-frjfg 1/1 Running 0 9m58s
kube-proxy-ppcw8 1/1 Running 0 10m
kube-scheduler-talos-cluster-control-plane-5jr7r 1/1 Running 2 (10m ago) 8m55s
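To confirm the cluster can schedule workloads, create a simple test pod; one possible way (not necessarily the exact command used for the output below) is:
kubectl --kubeconfig kubeconfig.talos run nginx --image=nginx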
Output
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
The Kubernetes cluster resources can also be seen in the OpenStack project as newly created virtual machines, networks, and related resources.
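For example, using the OpenStack CLI (assuming it is configured with the same project credentials as clouds.yaml):
openstack server list
openstack network list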
Now let's get the talosconfig secret from the talos-cluster namespace in our management cluster so that we can run some talosctl commands.
kubectl get secret talos-cluster-talosconfig -n talos-cluster -o jsonpath="{.data.talosconfig}" | base64 -d > talosconfig.new
The following talosctl commands can be run to interact with the Talos cluster. For security reasons, Talos does not allow SSH by default, adhering to the principle of least privilege.
Info
In the commands below, -e is the endpoint, i.e. the floating IP of the control-plane VM, and -n is the IP of the target node (control-plane or worker) in the cluster.
talosctl --talosconfig talosconfig.new -e 204.232.x.x -n 204.232.x.x get disks
talosctl --talosconfig talosconfig.new -e 204.232.x.x -n 10.22.0.226 mounts
talosctl --talosconfig talosconfig.new -e 204.232.x.x -n 10.22.0.226 cluster show
talosctl --talosconfig talosconfig.new -e 204.232.x.x -n 10.22.0.148 config info
talosctl --talosconfig talosconfig.new -e 204.232.x.x -n 10.22.0.148 containers
talosctl --talosconfig talosconfig.new -e 204.232.x.x -n 10.22.0.226 dashboard