Virtualization is a must if you want to have 50x mail servers, 20x Windows servers and 10x Linux servers on the same machine 😉 meaning: if your requirements reach a certain complexity, you will not be able to get all this software to work properly in one system.
also: virtualization allows easy snapshot & restore of RUNNING (!) VMs… try that with a physical machine.
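For example, with VirtualBox a snapshot of a running VM can be taken live from the command line – a minimal sketch, where the VM name "WinServer" and the snapshot name are placeholders:

```shell
# take a snapshot of a RUNNING VM without stopping it
VBoxManage snapshot "WinServer" take "before-update" --live

# later: power the VM off and roll back to that snapshot
VBoxManage controlvm "WinServer" poweroff
VBoxManage snapshot "WinServer" restore "before-update"
VBoxManage startvm "WinServer" --type headless
```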
since 2013 the small business (still 20 employees) has switched from Citrix Xen to MS Windows 7 as host + VirtualBox … which actually ran quite well and reliably.
Why VirtualBox? Because it is the only hypervisor that allows VMs to be moved without major fuss between Linux and Windows hosts. (the fuss = Windows HAL)
No other virtualization technology i know of can do that (maybe VMWare can – but i don’t know).
i did NOT do the VirtualBox updates – which of course is a security risk – but it guarantees stability.
We got a new server and moved all VMs from VirtualBox 4.2 to VirtualBox 5.1 – and besides selecting a new network card and manually editing config files (a bug IMHO), things went pretty smoothly; the setup is now under testing.
NOTE: The best way to move VirtualBox VMs to a new server is to:
- mount shared folder of new server as X:
- select X: as new General Working Directory – „Default Machine Folder“
- powerdown VM
- clone VM
- double click *.vbox on new server to import.
After this – reset your „Default Machine Folder“ to the previous folder.
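The steps above can also be scripted with VBoxManage – a minimal sketch, where the VM name "MailVM" and the folder paths are placeholders:

```shell
# on the OLD server: power the VM down, then clone it onto the share (X: = mounted share)
VBoxManage controlvm "MailVM" poweroff
VBoxManage clonevm "MailVM" --name "MailVM-new" --basefolder "X:\VMs" --mode all

# on the NEW server: register the cloned machine (instead of double-clicking the .vbox)
VBoxManage registervm "D:\VMs\MailVM-new\MailVM-new.vbox"
```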
Actually PrimoCache showed that C:\WINDOWS makes heavy use of the cache – accessing the same files over and over again 😉 (90% cache hit rate)
while data partitions only had a 1-10% cache hit rate – but nevertheless some hit rate.
Of course you should NEVER use RAM caching massively without a reliable battery backup:
The LSI MegaRAID 1068e controller is not the fastest in town… but it also worked reliably and reliably sent e-mails if a disk went down. Having replacement disks stacked right on top of the server… it was a matter of seconds to swap them.
Backups are done with StorageCraft's ShadowProtect (a recommendation by the German hardware vendor ctt.de – they usually recommend only professional high-quality gear), which is a good alternative to Acronis. In contrast to Acronis, even the basic version of ShadowProtect can back up dynamic (resized, resizable) partitions – which Acronis Home can NOT do – you would have to buy the 1000$ Acronis Server edition.
Next move could be – to jumper the LSI MegaRAID 1068 to SATA mode only
+ stay with VirtualBox but on a Linux host (have to test the network latency windows->linux->windows-vm)
+ use Linux's software RAID mdadm to do a 3x-harddisk RAID1 (2x harddisks can fail without data loss).
Having 6 harddisk slots on the backplane – you can combine 2x RAID1 (with 3x harddisks each) into a RAID0 – and have both speed and reliability.
And you don't have to rush if one harddisk fails… because 2x can fail 😀
only Linux SOFTWARE RAID mdadm can do RAID1 with more than 2x harddisks (AFAIK)
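A minimal mdadm sketch of that 2x(3-way RAID1) + RAID0 layout – the partition device names are assumptions for a 6-slot backplane:

```shell
# two 3-way mirrors: in each mirror, 2 of the 3 disks may fail without data loss
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1
# stripe both mirrors together for speed (RAID0 over RAID1)
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
# save the layout so the arrays assemble on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```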
a small business of 20 employees was running quite well for the last 5 years with the free version of Citrix Xen.
as storage we used an NFS-attached QNAP Turbo NAS TS-559 Pro II (RAID5 with 2TB disks, giving 7.35TB of storage).
i could not get the internal LSI 1068 RAID card to work with Citrix Xen (which uses Red Hat 4/5) and Supermicro boards (using LSI M1068E soft/hard RAID controllers, where the controller has 512MB cache but uses the Xeon CPU for the computing).
in 2013 we switched to VirtualBox using Windows as the server, where drivers were available.
amazingly: the harddisk performance is not better than with Citrix Xen + NFS over the 1GBit-attached QNAP NAS.
disadvantage: the network PING / LAG / LATENCY of virtualized computers is increased by 1-5ms, which can be too much for some applications that rely on low-latency SMB shares (Windows network shares).
i really wonder if it would perform faster if we used Linux as the base, but then again, one can only use the LSI MEGA 1068 RAID card as a SATA controller (and then do software RAID with mdadm – that's what i will do next).
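To check whether your application is affected, compare a plain ping against the physical server and against a VM – the host names here are placeholders:

```shell
# baseline: round-trip time to the physical machine
ping -c 100 physical-server
# virtualized guest: expect roughly 1-5ms more round-trip time
ping -c 100 windows-vm
```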
So we are forced to use windows.
but now the NAS can operate as a backup target for the VirtualBox Windows server.
i installed almost all of the hypervisors below and evaluated them a little.
vmware esxi / vmware vsphere
o if hardware is supported (it has MASSIVE PROBLEMS WITH ICH SATA controllers -> NO CDROM) then it's easy to get going
+ has a p2v converter tool
+ reliable if you get it installed on your hardware
– hardware compatibility (ICH-Sata)
In my test the setup of VMware vSphere ESXi on a Supermicro CSE-745TQ-R800B barebone with an X8DT3-F board, Rev. 2 (basically the same mobo as the X8DTi) WORKED ONLY THROUGH THE USE OF THE WEB CDROM MOUNTING FEATURE!!!
It will not boot from CDROM, because ESXi has no drivers (!?) for the ICH10R controller.
(Debian has them, CentOS has them… come on guys… this is NOT PROFESSIONAL!)
+ vsphere also has a web frontend
i am evaluating it right now and it really seems to be the most „feature-rich“ software (they started early! 😉)
the console is way more complex than that of xen… which can be good 🙂 if you like having many possibilities, or a little confusing at first, but should be okay after some trial and error.
vmware server 2
+ web frontend
+ still good for easy and quick virtualization on windows machines (but not enterprise reliability)
– last update 2008!?
xen hypervisor by citrix
o if hardware is supported (CDROM & HD) then it's easy to get going
– no software raid configurable during setup
+ also runs on hardware lacking VT-CPU support
+ good support of NFS connection to QNAP (20% faster than SMB)
+ reliable (14 days uptime, no problems; even p2v VMs run reliably – tested with Windows Server 2003 Small Business edition)
+ speed is good (reboot of Debian 7 in seconds, although the VM is COMPLETELY on the NAS)
+ easy gui
+ COMPLETELY free; they will nag you in the GUI sometimes („buy pro, buy pro… with these and those features“) but that's it.
+ backup of the VMs and the server installation/config through the GUI
+ i managed to get OpenVZ (Linux-container-based virtualization/separation of systems) running on Xen, BUT you need to configure routes on the OpenVZ host machine.
internet access / IP forwarding networking for the OpenVZ host on the Xen host (tested and working on Openwall/Owl OpenVZ) – put this on the OpenVZ host into /etc/rc.d/rc.local:
# config gateway
ifconfig eth0 192.168.0.123 netmask 255.255.255.0 up
route add default gw 192.168.0.1
# load iptables NAT kernel module
modprobe iptable_nat
# forward outgoing traffic from VPS to internet/gateway of host
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 192.168.0.123
# forward incoming traffic from host/internet/gateway to VPS
# forward ssh connections to the hardware node on port 44449 to port 22 of the guest/VPS
iptables -t nat -A PREROUTING -p tcp -d 192.168.0.123 --dport 44449 -i eth0 -j DNAT --to-destination 192.168.0.124:22
xen on debian
+ software RAID quite easy to set up during the Debian/Ubuntu setup
hyper-v (by microsoft)
+ windows always had good driver support for the latest hardware
– said to be „free“ in a very Microsoft-ish way. yes, the Hyper-V Windows 2008 R2 CORE SERVER (!!!) is free, but this does not include any GUI tools that you might NEED to get ANY VM up and running on Hyper-V. It's called Core Server because it has no GUI, no desktop, no icons, nothing to click on, except a cmd prompt. If you want a GUI, you need to buy it: 500$ or 800$ for a full Windows Server 2008 R2, or a cheap alternative: http://www.5nine.com/5nine-manager-for-hyper-v-free.aspx (but it's also not 100% free)
o i think they have support for NFS now (but not tested!)
– accessing VMs on a QNAP-SMB share is slower than NFS
kvm (on linux)
+ quite easy to get going, just install Ubuntu + virt-manager
+ software RAID quite easy to set up during the Debian/Ubuntu setup
– the VNC display of Windows XP guests shows ugly rectangle artifacts
– PCI pass-through not really working for me (Fritz!Card AVM ISDN)
– seems quite reliable but still lots of bugs
o when it works reliably, NFS is faster than SMB (server <-> QNAP Linux NAS)
virtualbox (by oracle)
+ super easy to get going, just install some OS and VirtualBox
+ support for many OS host platforms
– harddisk performance (though not worse than Microsoft's)
– PCI pass-through does not exist; USB pass-through exists and kind of works.
+ pretty stable
+ scriptable, so suspending and then backing up your VMs in one go is possible
– 3-5ms of added network latency might be too much for your application (if it relies heavily on fast Windows shares)
+ absolutely free!
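A sketch of such a suspend-then-backup loop, assuming a Linux host with the default machine folder and a hypothetical backup path on the NAS:

```shell
#!/bin/bash
# suspend all running VMs, copy their folders to the NAS, resume them
BACKUP="/mnt/nas/vm-backup/$(date +%F)"   # backup target is an assumption
mkdir -p "$BACKUP"
# "VBoxManage list runningvms" prints lines like: "MailVM" {uuid}
VBoxManage list runningvms | cut -d'"' -f2 | while read -r VM; do
    VBoxManage controlvm "$VM" savestate            # suspend the VM to disk
    cp -r "$HOME/VirtualBox VMs/$VM" "$BACKUP/"     # copy the whole VM folder
    VBoxManage startvm "$VM" --type headless        # resume it
done
```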
to sum it up:
my first choice: VirtualBox + install everything that cannot deal with higher network latency on the physical machine.
second choice: Xen Server 5.6. why? because i don't see the point of having to buy Microsoft's GUI and still not getting more than VirtualBox offers. (OK, they have live migration now i think 😀 but i don't need it… i can suspend… and clone… and start.)
even if it might lose support by the Linux kernel guys, Xen will always run on Linux, but maybe not in super-accelerated mode.
but it's enough for me to have a system that can run Windows & Linux guests RELIABLY. (Windows 7 is no problem, conversion of Windows Server 2003 went smoothly.)
and the support of the QNAP-NFS-NAS is PERFECT! (Mounting of NFS no problem at all, speed is good)
I have a QNAP TS-559 Pro+ Turbo NAS connected DIRECTLY (no switch) to one of the network cards of the server, so i get the FULL 1000MBit. (assign that card as a management interface with a fixed IP)
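On a Linux host, that direct attachment boils down to a fixed IP on the dedicated NIC plus an NFS mount – a sketch where the interface name, IPs and export path are assumptions:

```shell
# give the directly connected NIC a fixed IP in the NAS's subnet
ifconfig eth1 192.168.2.1 netmask 255.255.255.0 up
# mount the QNAP NFS export where the VMs live
mkdir -p /mnt/qnap
mount -t nfs 192.168.2.100:/share/VMs /mnt/qnap
```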
And it JUST WORKS! WHAT ELSE DO YOU WANT?