So I'm thinking about getting rid of VMware and going all in on FOSS, now that I'm running Fedora 31 on both desktop and laptop. I need to be able to run two virtualization servers in a cluster/HA/failover configuration with an iSCSI or NFS storage backend (no Gluster, as they have no local storage except an SSD for boot). I like KVM.
There you have it - any suggestions?
@selea Tried it two years ago. Not impressed then, but maybe it's matured by now?
What was it that did not impress you?
I've been using it in small and pretty big environments, and it works nicely. I am actually thinking about going 100% proxmox at my job too.
@selea Mainly issues with throughput/performance when doing live migrations. Also pretty much no support for FC SAN storage whatsoever, which is a bummer since I have a nice FC SAN too.
But as long as you can mount the Fibre Channel SAN in Linux, you can use it in Proxmox. You just have to mount it as an ordinary drive.
Live migration can be slow sometimes yes, but personally that does not really matter for me.
@selea Not true. You need a cluster-aware file system. EXT4 and the like do not cut it. Without cluster awareness, one node can write changes to the SAN and the other node will not see them until it re-mounts the SAN. This is not an issue with iSCSI and NFS of course, but then you take a bit of a performance hit.
Proxmox will create LVM volumes for each VM, and only the active node will write to the allocated space.
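If it helps, here's a rough sketch of how that shared-LVM setup can look on a SAN LUN (the device path, VG name `vmdata`, and storage name `san-lvm` are all made-up examples; this assumes the LUN is already visible as a block device on both nodes):

```shell
# On one node: initialize the SAN LUN (here assumed to show up as
# /dev/mapper/mpatha via multipath) and create a volume group on it.
pvcreate /dev/mapper/mpatha
vgcreate vmdata /dev/mapper/mpatha

# Register the VG as shared LVM storage cluster-wide. Proxmox then
# carves out one logical volume per VM disk, and only the node
# currently running the VM activates and writes to its volumes,
# which is why no cluster filesystem is needed.
pvesm add lvm san-lvm --vgname vmdata --shared 1 --content images
```

One trade-off worth knowing: plain (thick) LVM like this doesn't support snapshots, unlike lvm-thin on local disks.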
I remember that I needed to use GFS (or something like that) when I used DRBD
@selea I need to investigate this further. I got two Super Micro servers with 8 cores and 128GB RAM each that I'll use. Once I've migrated my VM's to another storage I can play around with this some more. Thanks for your input.
Ah nice, which ones?
I am very fond of Supermicro, mainly because I own one too :P
@selea To be honest, I can't remember the model name. They only have four cores each btw, but both of them are dual Xeons. I run my current VMware cluster on two Dell R610s with 192GB RAM and 2x8 cores each, and the idea is to install Proxmox or whatever on them after testing is done. Can I do a Proxmox cluster with servers of different CPU generations, or is the same generation required across the board like in VMware?
Yes, you can run proxmox on different CPU generations, it will be compatible with the oldest generation (just like VMware does).
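As far as I know, the way to make that safe is to pin guests to a baseline virtual CPU model instead of `host`, so the flags the guest sees exist on every node (the VMID 100 here is just an example):

```shell
# Give the guest a lowest-common-denominator virtual CPU so live
# migration works between different Xeon generations. The "host"
# type passes through all flags of the current node's CPU and can
# break migration to older hardware.
qm set 100 --cpu kvm64
```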
I'd also suggest checking out Ceph if you spin up three or more nodes in the cluster - then you can have your own HCI environment :)
@selea Can I do live migration between servers of different xeon generations and models?
@selea Awesome. that makes my life sooo much easier.
@selea Ceph requires local drives and I prefer to keep it all in one big box. I'm weird that way.
well that's the normal way to do it :)
@selea I tried doing this with some really hairy LVM hacks and it worked... sort of. But it never felt safe enough. That's the upside with VMware - it really never lets you down.
and yes, the last updates (5.0 and later) have been a great improvement
@selea I'm gonna give it a shot then. My new iSCSI based SAN will be ready soon.
That is supported by the proxmox GUI at least :) So you don't even need to visit the CLI in order to mount it :)
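For the CLI-inclined, the same iSCSI attach can be sketched roughly like this (the portal address, target IQN, and storage name are placeholders, not real values):

```shell
# Register the iSCSI target cluster-wide; its LUNs then show up as
# block devices on every node.
pvesm add iscsi san-iscsi --portal 192.168.1.50 \
    --target iqn.2020-01.com.example:storage

# Verify the storage is online and visible.
pvesm status
```

Typically you'd then layer shared LVM on top of a LUN rather than handing raw LUNs to guests directly.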
@joacim Proxmox. But for live migrations/HA you would need 3 nodes minimum. With two nodes you would have downtime during migration.
@fgra Bummer. That works fine with VMware and vCenter.
I just tested a live migration after the latest update, and I was able to do it on a guest with local storage (raw, lvm-thin) between 2 nodes.
This was not possible before.
You still need 3 nodes to have quorum/HA, but at least now you can live migrate.
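A minimal sketch of that kind of migration from the CLI (the VMID and node name are made up):

```shell
# Live-migrate guest 100 to node pve2; --with-local-disks copies the
# local storage (raw/lvm-thin) over the migration network instead of
# requiring shared storage.
qm migrate 100 pve2 --online --with-local-disks
```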
@fgra Alright, good to know. I'm gonna start doing some testing this weekend or whenever I can finish my new SAN and move my VM's off my current FC based one.