got the server cheap from eBay.
unfortunately it has only 4x 2.5″ SAS bays (3.5″ would be better = more storage)
one SAS drive was already faulty.
using the native hardware RAID10 (4x 450GB Hitachi HGST Ultrastar C10K600 (HUC106045CSS600), 2.5″ SAS2, 64MB cache, 10000 RPM)
libvirtd --version
libvirtd (libvirt) 4.5.0

lshw -class tape -class disk -class storage -short
H/W path            Device     Class       Description
==========================================================
/0/100/1/0          scsi0      storage     Smart Array G6 controllers
/0/100/1/0/1.0.0    /dev/sda   disk        900GB LOGICAL VOLUME
installed CentOS 7 with its fast default XFS and an encrypted harddisk.
Win 7 runs pretty well – it installed fast; now the CrystalDiskMark 6 harddisk benchmark:
setup:
basic gui (optional)
one might want to have a basic gui:
# tested on
hostnamectl
   Static hostname: hp.centos
         Icon name: computer
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 5.1.15
      Architecture: x86-64

yum update
yum groupinstall "X Window System"
yum groupinstall "Fonts"
yum install gdm mate-desktop mate-control-center mate-terminal mate-settings-daemon caja caja-open-terminal

# make gui default mode to boot into
systemctl set-default graphical.target

# make gui default - manual
unlink /etc/systemd/system/default.target
ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target

# start gui
systemctl isolate graphical.target
https://dokuwiki.tachtler.net/doku.php?id=tachtler:centos_7_-_minimal_desktop_installation
install kvm:
virt-manager is the gui tool.
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils

# allow non-root user "user" to also use virt-manager-kvm
usermod -a -G libvirt user
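the packages alone do not start the daemon; a small sketch to make sure libvirtd is enabled and running before opening virt-manager ("libvirtd" is the systemd unit name on CentOS 7):

```shell
#!/bin/bash
# sketch: enable and start the libvirt daemon before connecting with
# virt-manager; needs root and a running systemd
if command -v systemctl >/dev/null 2>&1; then
    systemctl enable --now libvirtd 2>/dev/null \
        || echo "could not start libvirtd (needs root and a running systemd)"
    state=$(systemctl is-active libvirtd 2>/dev/null)
    state=${state:-unknown}
    echo "libvirtd state: $state"
else
    state="skipped (no systemctl here)"
    echo "$state"
fi
```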
start it:
reboot the server and one should see the GNOME login screen.
login with one’s usual non-root terminal user (root user should work too).
Hit Alt+F2 and type “virt-manager”
there should be a list of programs coming up.
Now one will probably need to define some storage space.
CentOS per default separates between the OS root (/) partition and the /home partition.
this is where one wants the very large vm harddisk files to reside.
df -H
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 8.4G     0  8.4G   0% /dev
tmpfs                    8.4G     0  8.4G   0% /dev/shm
tmpfs                    8.4G   11M  8.4G   1% /run
tmpfs                    8.4G     0  8.4G   0% /sys/fs/cgroup
/dev/mapper/centos-root   54G   19G   35G  36% /
/dev/sda1                1.1G  294M  771M  28% /boot
/dev/mapper/centos-home  837G   76G  762G   9% /home
tmpfs                    1.7G  4.1k  1.7G   1% /run/user/42
tmpfs                    1.7G  2.3M  1.7G   1% /run/user/1000
tmpfs                    1.7G     0  1.7G   0% /run/user/0

# become root
su - root

# this is where the vms will go
mkdir /home/vms

# this is where the windows.iso shall go
mkdir -p /home/software/iso
in virt-manager right-click QEMU/KVM -> select Details
one can delete the other storage pools.
create another storage pool “software” and copy all one's windows.iso / linux.iso files into it.
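the same storage pools can also be defined from the command line with virsh; a sketch (pool names "vms"/"software" and the target paths match the directories created above):

```shell
#!/bin/bash
# sketch: directory-backed storage pools via virsh instead of virt-manager
if command -v virsh >/dev/null 2>&1; then
    virsh pool-define-as vms dir --target /home/vms
    virsh pool-start vms
    virsh pool-autostart vms
    virsh pool-define-as software dir --target /home/software/iso
    virsh pool-start software
    virsh pool-autostart software
    result=$(virsh pool-list --all 2>&1)
else
    result="virsh not installed here, skipping"
fi
echo "$result"
```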
now one can start to create a vm:
and for example download: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.141-1/virtio-win-0.1.141.iso
and install some drivers for the virtual devices (e.g. the RedHat balloon driver) for perfect integration.
- Balloon, the balloon driver, affects the PCI standard RAM Controller in the System devices group.
- vioserial, the serial driver, affects the PCI Simple Communication Controller in the System devices group.
- NetKVM, the network driver, affects the Network adapters group.
- This driver is only available if a virtio NIC is configured.
- viostor, the block driver, affects the Disk drives group. This driver is only available if a virtio disk is configured.
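instead of clicking through virt-manager, a guest with virtio disk/NIC and the virtio-win driver ISO attached could also be defined from the CLI; a sketch where name, memory, disk size and paths are example values (--dry-run validates without creating anything):

```shell
#!/bin/bash
# sketch: virt-install with virtio devices plus the driver ISO as a 2nd cdrom
if command -v virt-install >/dev/null 2>&1; then
    virt-install \
        --name win7 \
        --memory 4096 --vcpus 4 \
        --os-variant win7 \
        --disk path=/home/vms/win7.qcow2,size=60,bus=virtio \
        --network network=default,model=virtio \
        --cdrom /home/software/iso/windows.iso \
        --disk path=/home/software/iso/virtio-win-0.1.141.iso,device=cdrom \
        --dry-run
    rc=$?
else
    echo "virt-install not present here, skipping"
    rc=0
fi
```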
have fun! 🙂
per default the files of a vm are stored here:
hostnamectl; # tested with
  Operating System: Debian GNU/Linux 10 (buster)
            Kernel: Linux 4.19.0-13-amd64
      Architecture: x86-64

/etc/libvirt/qemu/vmname.xml;         # config file
/var/log/libvirt/qemu/vmname.log;     # log
/var/lib/libvirt/images/vmname.qcow2; # harddisk image
sequential harddisk benchmark:
cat /scripts/bench_harddisk.sh
#!/bin/bash
echo "========== get mobo model =========="
dmidecode -t 2
echo "========== what cpu =========="
lscpu
echo "========== number of cores =========="
grep -c ^processor /proc/cpuinfo
echo "========== show bogomips per core =========="
grep bogomips /proc/cpuinfo
echo "========== what harddisk / controllers are used =========="
lshw -class tape -class disk -class storage -short
echo "========== writing 3GB of zeroes to /root/testfile =========="
time dd if=/dev/zero of=/root/testfile bs=3G count=1 oflag=direct
echo "========== reading 3GB of zeroes from /root/testfile =========="
time dd if=/root/testfile bs=3GB count=1 of=/dev/null
echo "========== tidy up, removing testfile =========="
rm -f /root/testfile
result:
========== get mobo model ==========
# dmidecode 3.1
Getting SMBIOS data from sysfs.
SMBIOS 2.7 present.

========== what cpu ==========
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 26
Model name:            Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
Stepping:              5
CPU MHz:               1844.213
CPU max MHz:           2533.0000
CPU min MHz:           1600.0000
BogoMIPS:              5066.63
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm pti tpr_shadow vnmi flexpriority ept vpid dtherm ida

========== number of cores ==========
16

========== show bogomips per core ==========
bogomips : 5066.63
bogomips : 5065.93
bogomips : 5066.63
bogomips : 5065.93
bogomips : 5066.63
bogomips : 5065.93
bogomips : 5066.63
bogomips : 5065.93
bogomips : 5066.63
bogomips : 5065.93
bogomips : 5066.63
bogomips : 5065.93
bogomips : 5066.63
bogomips : 5065.93
bogomips : 5066.63
bogomips : 5065.93

========== what harddisk / controllers are used ==========
H/W path            Device     Class       Description
==========================================================
/0/100/1/0          scsi0      storage     Smart Array G6 controllers
/0/100/1/0/1.0.0    /dev/sda   disk        900GB LOGICAL VOLUME

========== writing 3GB of zeroes to /root/testfile ==========
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 9.89083 s, 217 MB/s

real 0m10.003s
user 0m0.000s
sys  0m2.520s

========== reading 3GB of zeroes from /root/testfile ==========
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 3.25841 s, 659 MB/s

real 0m3.388s
user 0m0.001s
sys  0m1.174s

========== tidy up, removing testfile ==========
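a side note on the "0+1 records" above: dd cannot move a full 3G block in a single read()/write() pair, so it truncates the block to ~2.1GB; using many smaller blocks transfers the intended amount. A small demo (tiny sizes so it finishes instantly):

```shell
#!/bin/bash
# demo: many 1MiB blocks instead of one huge block, so nothing is truncated
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
dd if="$testfile" of=/dev/null bs=1M 2>&1 | tail -n 1
written=$(stat -c %s "$testfile")   # 64 x 1MiB = 67108864 bytes
rm -f "$testfile"
echo "wrote $written bytes"
```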
now let's test whether nested esxi can work:
virt-host-validate
  QEMU: Checking for hardware virtualization                 : PASS
  QEMU: Checking if device /dev/kvm exists                   : PASS
  QEMU: Checking if device /dev/kvm is accessible            : PASS
  QEMU: Checking if device /dev/vhost-net exists             : PASS
  QEMU: Checking if device /dev/net/tun exists               : PASS
  QEMU: Checking for cgroup 'memory' controller support      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point  : PASS
  QEMU: Checking for cgroup 'cpu' controller support         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point : PASS
  QEMU: Checking for cgroup 'cpuset' controller support      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point  : PASS
  QEMU: Checking for cgroup 'devices' controller support     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point : PASS
  QEMU: Checking for cgroup 'blkio' controller support       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point   : PASS
  QEMU: Checking for device assignment IOMMU support         : PASS
  QEMU: Checking if IOMMU is enabled by kernel               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
   LXC: Checking for Linux >= 2.6.26                         : PASS
   LXC: Checking for namespace ipc                           : PASS
   LXC: Checking for namespace mnt                           : PASS
   LXC: Checking for namespace pid                           : PASS
   LXC: Checking for namespace uts                           : PASS
   LXC: Checking for namespace net                           : PASS
   LXC: Checking for namespace user                          : PASS
   LXC: Checking for cgroup 'memory' controller support      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point  : PASS
   LXC: Checking for cgroup 'cpu' controller support         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point : PASS
   LXC: Checking for cgroup 'cpuset' controller support      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point  : PASS
   LXC: Checking for cgroup 'devices' controller support     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point : PASS
   LXC: Checking for cgroup 'blkio' controller support       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point   : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists   : PASS

vim /etc/default/grub
GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.luks.uuid=luks-23c92dca-43f2-4a55-80cf-acfecb0ca482 rd.lvm.lv=centos/swap rhgb intel_iommu=on"
GRUB_DISABLE_RECOVERY="true"

# activate the config
grub2-mkconfig -o /boot/grub2/grub.cfg
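after rebooting with the new grub config, a quick sketch to verify the flag really is on the running kernel's command line and that the kernel brought up the IOMMU:

```shell
#!/bin/bash
# sketch: verify intel_iommu=on took effect (DMAR/IOMMU boot messages only
# show up on VT-d capable hardware; exact wording varies between kernels)
if grep -q intel_iommu=on /proc/cmdline 2>/dev/null; then
    echo "intel_iommu=on is active on the current kernel"
else
    echo "intel_iommu=on is NOT active (not rebooted yet?)"
fi
dmesg 2>/dev/null | grep -iE 'DMAR|IOMMU' | head -n 5
checked=yes
```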
command line:
manpage: virsh.man.txt
# list all vms that run with current user privileges
virsh list

# list all vms (off and on)
virsh list --all

# stop all vms
for i in `sudo virsh list | grep running | awk '{print $2}'`; do sudo virsh shutdown $i; done;
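a friendlier variant of the "stop all vms" loop: ask every running guest to shut down, wait a while, then force it off; a sketch (the 60-second timeout is an arbitrary pick):

```shell
#!/bin/bash
# sketch: graceful shutdown of all running guests with a fallback to destroy
if command -v virsh >/dev/null 2>&1; then
    for vm in $(virsh list --name); do
        virsh shutdown "$vm"
        for _ in $(seq 1 60); do
            virsh domstate "$vm" 2>/dev/null | grep -q 'shut off' && break
            sleep 1
        done
        virsh domstate "$vm" 2>/dev/null | grep -q 'shut off' \
            || virsh destroy "$vm"   # force off if the guest ignored ACPI
    done
    stopped="all guests handled"
else
    stopped="virsh not installed here, skipping"
fi
echo "$stopped"
```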
about kvm:
- Storage: KVM is able to use any storage supported by Linux, including some local disks and network-attached storage (NAS). Multipath I/O may be used to improve storage and provide redundancy. KVM also supports shared file systems so VM images may be shared by multiple hosts. Disk images support thin provisioning, allocating storage on demand rather than all up front.
src: https://www.redhat.com/en/topics/virtualization/what-is-KVM
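thin provisioning can be seen in practice with qemu-img: a qcow2 image with a large virtual size occupies almost no real disk space until the guest actually writes data. A small sketch:

```shell
#!/bin/bash
# demo: a sparse 20G qcow2 image - compare "virtual size" vs "disk size"
if command -v qemu-img >/dev/null 2>&1; then
    img=/tmp/thin-demo.$$.qcow2
    qemu-img create -f qcow2 "$img" 20G
    qemu-img info "$img" | grep -E 'virtual size|disk size'
    rm -f "$img"
    demo="20G virtual image created sparse"
else
    demo="qemu-img not installed here, skipping"
fi
echo "$demo"
```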
who uses kvm?
https://www.reddit.com/r/aws/comments/7bfjbk/aws_is_leaving_xen_for_kvm/
https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine
vmware esxi 6.5 inside kvm
yes it can be done! (BUT JUST AS OSX INSIDE KVM QEMU IT WILL BE VERY BUGGY AND PRACTICALLY UNUSABLE)
did not use the latest version of kvm, but the default one coming with yum packages from CentOS 7.
(did no long term stability testing… )
what one will need:
- hardware that is supported by esxi (the HP ProLiant DL360 G6’s CPU is only supported with esxi 6.5 not esxi 6.7)
- Vendor ID: GenuineIntel, CPU family: 6, Model: 26, Model name: Intel(R) Xeon(R) CPU E5540 @ 2.53GHz, Stepping: 5
- modify grub, pass kernel option intel_iommu=on
- one can even pci pass through all sorts of hardware with kvm (even the GPU, but usually GPU can only be used by one (real or virtual) machine): https://dwaves.de/2019/08/18/kvm-qemu-virt-manager-for-a-test-drive-on-hp-proliant-dl360-g6-windows-7-64bit-guest-harddisk-benchmark/
- nvidia is working on GPU sharing between VMs: https://devblogs.nvidia.com/dgx2-server-virtualization-nvswitch-faster-gpu-virtual-machines/
- create a new kvm-vm in virt-manager
- use “Customize configuration before install” to edit settings before the first start
- now one should be able to install esxi 6.5 without complaints
- recommend to use the Default-US keyboard and specify a password like:
dfghj123.
or one will have trouble logging in via web afterwards (keyboard layout mismatch = password mismatch)
- checkout this demo setup of esxi 6.5 in VirtualBox (VirtualBox does not support nested virtualization here; it was just for testing)
- browse to the IP of one's esxi 6.5 installation and log in with username root and password “dfghj123.”
- in contrast to the setup inside VirtualBox one will actually be able to run vms on the esxi (no long-term testing was done, so nothing can be said about long-term stability; esxi shows screen output via html5 stream inside the browser window – no need to install dedicated windows desktop programs anymore! 🙂)
Links:
https://www.linuxtechi.com/install-kvm-hypervisor-on-centos-7-and-rhel-7/
liked this article?
- only together we can create a truly free world
- plz support dwaves to keep it up & running!
- (yes the info on the internet is (mostly) free but beer is still not free (still have to work on that))
- really really hate advertisement
- contribute: whenever a solution was found, blog about it for others to find!
- talk about, recommend & link to this blog and articles
- thanks to all who contribute!