Platform9 KubeVirt solution.

 

This is my opinion about this platform.

A few weeks ago, Platform9 announced a Hands-on Lab for their KubeVirt implementation. After using Harvester to run VMs, mainly for deploying Rancher RKE clusters, I got my hands on this platform, and the differences are huge.

First, Platform9 keeps its offering very close to the upstream project. What does this mean? It looks as if you had installed KubeVirt manually in your K8s cluster, and this is good: you stay familiar with the solution, and when the time comes to move to another KubeVirt offering, the changes will be minimal.

As you may know, Kubernetes comes first: PMK (Platform9 Managed Kubernetes) needs to be installed.

https://platform9.com/docs/kubernetes/get-started-bare-metal

pf9ctl is the tool used to create a K8s cluster managed from PMK. In the previous link, you can see how easy it is to create a cluster with just one Master node (for testing, of course!) and one Worker; this was the scenario of the Hands-on Lab.

The prep-node option of pf9ctl will install an agent and begin promoting the server to a PMK node that can be used to build a cluster. This progress can be monitored in the Infrastructure -> Nodes section of the platform.
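For reference, this is roughly what that looks like from the CLI; the exact flags vary between pf9ctl versions, so take this as a sketch rather than the literal commands:

    # Point pf9ctl at the PMK account (prompts for the account URL, user, and region)
    pf9ctl config set

    # Optional pre-flight checks on the server
    pf9ctl check-node

    # Install the Platform9 agent and promote the server to a PMK node
    pf9ctl prep-node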


These two nodes are already assigned to a cluster, and there you can see the Role assigned to each of them.

With a K8s cluster already running, it is time to add KubeVirt. Platform9 provides this as an add-on; it can be installed with just one click!

From Infrastructure -> Clusters -> Managed, a list of managed clusters will appear; there we select the one intended for KubeVirt.


There are some similarities with the Nodes section from Infrastructure, but here the information is about Kubernetes. Let’s click Add-ons and search for KubeVirt. In this cluster, the add-on is already active, but as I said, it is just one click away.


In the Platform9 KubeVirt documentation, the steps are detailed for a cluster with the KubeVirt add-on enabled at build time, which is the fastest way for a new cluster; if the cluster already exists, the add-on can be added without issues. One dependency for KubeVirt is Luigi, which is a network plugin operator.
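Because the offering stays so close to upstream, the add-on can also be verified with plain kubectl. A minimal sketch, assuming a kubeconfig for the cluster and the default namespaces (the Luigi namespace in particular is my assumption):

    # Upstream KubeVirt components land in the kubevirt namespace
    kubectl get pods -n kubevirt

    # The KubeVirt custom resource reports the overall phase (it should say Deployed)
    kubectl get kubevirt -n kubevirt

    # Luigi, the network plugin operator, should be running as well (namespace name assumed)
    kubectl get pods -n luigi-system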

KubeVirt section.


There is a lot of information here. In the Virtual Machines section, you can easily see the total number of VMs, the running ones, and the VMs being migrated.

Virtual Machine creation.

Still in the KubeVirt section of the platform, we need to go to Virtual Machines, where we have three areas of interest: All VMs, Live Migrations, and Instance Types.

All VMs is where all the created VMs will appear. In the top right, we have Add Virtual Machine.


Clicking Create using wizard will bring up this page:


The best part is that while we select the desired options for our VM, the YAML on the right side of the wizard updates itself!

That’s a great feature: this way we can start learning how to do the YAML version of the VM creation process, and maybe run some CI/CD and automagically get VMs.
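To give an idea of what that YAML looks like, here is a minimal VirtualMachine manifest of the kind you could apply with kubectl; the name, memory size, and container disk image are placeholders I picked for illustration, not what the wizard generates for you:

    # demo-vm.yaml - a minimal KubeVirt VirtualMachine (sketch)
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: demo-vm
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 1Gi
          volumes:
            - name: rootdisk
              containerDisk:
                image: quay.io/containerdisks/fedora:latest

Saved as demo-vm.yaml, it can be applied with kubectl apply -f demo-vm.yaml, which is exactly the kind of step a CI/CD pipeline could run.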

What can we do with VMs on this implementation of KubeVirt?

From the Virtual Machines -> All VMs section, the list of available VMs will appear; there we can manage those VMs.


Selecting a VM gives us more information and a lot of other parameters to modify, like disk size, memory size, and networking.
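Since this is plain upstream KubeVirt, the same VMs can also be managed from the command line. A few examples, assuming virtctl is installed and demo-vm is the name of one of the VMs:

    # Start, stop, or restart a VM
    virtctl start demo-vm
    virtctl stop demo-vm
    virtctl restart demo-vm

    # Open a serial console or a VNC session
    virtctl console demo-vm
    virtctl vnc demo-vm

    # The running instance is also a regular Kubernetes object
    kubectl get vmi demo-vm -o wide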


There is a lot more to talk about; I’m planning to keep digging into the Platform9 KubeVirt solution and do a comparison with Harvester!

While creating our cluster, we selected an older version of Kubernetes; the idea is to be able to run an upgrade and see how things are handled for our VMs.

In Infrastructure -> Clusters -> Managed we can select the cluster that will be upgraded; in my case, there is only one.


Here I selected Patch and clicked Upgrade Now.


The steps for the upgrade are very similar to the initial install.

While upgrading, I noticed that the VMs were first moved to the Worker node. This is expected: on K8s, the Master nodes are the first to be upgraded.
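This movement can also be followed from the CLI while the upgrade runs; a sketch of the commands I would use, assuming kubectl access to the cluster:

    # Watch the VM instances change nodes as they are live-migrated
    kubectl get vmi -A -o wide --watch

    # The migrations show up as VirtualMachineInstanceMigration objects
    kubectl get vmim -A

    # After the upgrade, the nodes should report the new Kubernetes version
    kubectl get nodes -o wide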

Now we are at 1.26.14-pmk.


Of course, a cluster with just one Master and one Worker is not a production-ready cluster, and upgrading it will cause connectivity loss and other issues.

Next, I will try to get PMK access to build a cluster in my homelab, where I will be testing more things related to Storage and Networking, MetalLB being the most interesting one!

Just like the OpenStack HoL version, there will be some videos on YouTube. Stay tuned!

 

Creating Linux VM with Harvester HCI.

In a previous article, we saw how to integrate Harvester into the Rancher UI, and from there we were able to request a new K8s cluster with just a few clicks. Now it is Virtual Machine time. How fast can we deploy a Linux VM?

For installing Harvester, look at https://arielantigua.com/weblog/2023/12/harvester-hci-en-el-homelab/

Linux VM.

This is easier than expected. You just need an .img or .qcow2 file imported into Harvester. Navigate to Images and click Create.
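Behind the UI, Harvester stores these images as VirtualMachineImage objects, so the same import can be described in YAML. A rough sketch, assuming the harvesterhci.io/v1beta1 API and an Ubuntu cloud image URL as the example source; field names may differ between Harvester versions:

    # ubuntu-image.yaml - image download request for Harvester (sketch)
    apiVersion: harvesterhci.io/v1beta1
    kind: VirtualMachineImage
    metadata:
      name: ubuntu-22.04
      namespace: default
    spec:
      displayName: ubuntu-22.04-server
      sourceType: download
      url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

Applied with kubectl apply -f ubuntu-image.yaml against the Harvester cluster, it should show up under Images just like one created from the UI.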


Ceph on Proxmox as Storage Provider.

For a few months, I’ve been reading about Ceph and how it works. I love distributed stuff; maybe the reason is that I can have multiple machines, and the idea of clustering has always fascinated me. With Ceph, the more the better!

If you have multiple machines with lots of SSD/NVMe drives, Ceph performance will be very different from a 3-node cluster with only one OSD per node. The latter is my case, and the solution has been working well.

Installing Ceph on Proxmox is just a few clicks away; it is already documented at https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
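For reference, those few clicks map to the pveceph CLI; a sketch of the sequence, where the replication network and the device name are placeholders from my setup:

    # Install the Ceph packages on the node
    pveceph install

    # Initialize Ceph with the network used for replication (run once)
    pveceph init --network 10.10.10.0/24

    # Create a monitor and a manager on this node
    pveceph mon create
    pveceph mgr create

    # Add the NVMe device as an OSD
    pveceph osd create /dev/nvme0n1

    # Create a pool for VM disks
    pveceph pool create vm-storage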

At first, I had only two nodes, and the state of Ceph was faulty.


The CRUSH map created by Proxmox assumes a 3-host configuration, with each host adding at least one OSD to the cluster; at this point, there were only 2 hosts with 1 OSD each, so Ceph could not place the third replica and reported a degraded state.
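The state is easy to confirm from a shell on any of the Proxmox nodes; these are standard Ceph commands, and the pool name is just an example:

    # Overall cluster health (with only 2 hosts the PGs show up as undersized/degraded)
    ceph -s

    # How hosts and OSDs are laid out in the CRUSH map
    ceph osd tree

    # The replication size the pool asks for (3 by default)
    ceph osd pool get vm-storage size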


Ceph as my storage provider?

 


Ceph is the future of storage; where traditional systems fail to deliver, Ceph is designed to excel. Leverage your data for better business decisions and achieve operational excellence through scalable, intelligent, reliable, and highly available storage software. Ceph supports object, block and file storage, all in one unified storage system.

That’s the official definition from the Ceph website. Is it true?

I don’t know, but I want to find out!

For a few weeks now, I’ve been in the planning stage to install and configure Ceph in a 3-node cluster, everything done via the Proxmox UI. One of the main issues with this solution is the storage devices. How’s that?

Well... it doesn’t like consumer SSDs/disks/NVMe.

BoM:

  • Supermicro X9SRL with Xeon E5-2680 v2 + 128GB of RAM + Intel P3600 1.6TB PCIe NVMe
  • HP Z440 with Xeon E5-2640 v4 + 128GB of RAM + Intel P3600 1.2TB PCIe NVMe
  • HP Z440 with Xeon E5-2640 v4 + 64GB of RAM + Intel P3600 1.2TB PCIe NVMe

Note: The storage listed here will be used for Ceph OSDs; there is a dual 10GbE card on each host for replication.

I have a pair of Samsung 970 EVO Plus (1TB) drives that were working fine with vSAN ESA, but I decided to move to Intel enterprise NVMe because a lot of information around the web points to bad performance with this type of consumer NVMe.

The Supermicro machine is already running Proxmox, so let the Ceph adventure begin!

This picture is one of the Z440s; it is full in there!!
