How to install KVM & the Kimchi graphical web interface on Ubuntu 15.04

KVM is an excellent virtualization engine, but it lacks an easy-to-use interface. Kimchi changes that. Kimchi lets you handle the basic management tasks, like creating, starting, and stopping virtual machines, adding iSCSI targets and NFS shares, and much more. The interface is beautiful and it's pretty easy to set up. Today, I'll show you how.

Note: Kimchi requires systemd, so Ubuntu 14.04 LTS will NOT work. You might be able to use 14.10, if systemd is installed. I am using Ubuntu 15.04 for this guide, which uses systemd by default.
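
If you're unsure which init system a machine is running, a quick sanity check (not part of the original requirements, just a convenience) is:

#  ps -p 1 -o comm=

If that prints systemd, you're good to go; if it prints init, Kimchi will not work on that release.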

How to install KVM on Ubuntu 15.04

First, let’s make sure everything is updated and upgraded. I’m working with a minimal installation of Ubuntu 15.04, with only OpenSSH server installed.

#  sudo apt-get update

#  sudo apt-get upgrade

Now, let's install KVM and all the dependencies needed for Kimchi.

#  sudo apt-get install gcc make autoconf automake gettext git \
python-cherrypy3 python-cheetah python-libvirt libvirt-bin \
python-imaging python-pam python-m2crypto python-jsonschema \
qemu-kvm libtool python-psutil python-ethtool sosreport \
python-ipaddr python-ldap python-lxml nfs-common open-iscsi \
lvm2 xsltproc python-parted nginx firewalld python-guestfs \
libguestfs-tools python-requests websockify novnc spice-html5 \
wget unzip

At some point during the installation, a postfix configuration window will appear. Unless you have a reason to do otherwise, I suggest you select “Local only.”
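
(If you pick the wrong option, you can re-run the postfix configuration at any time with sudo dpkg-reconfigure postfix.)

Before moving on to Kimchi itself, it's worth verifying that KVM is actually usable. A quick check, using the kvm-ok utility from the cpu-checker package (not in the list above) and virsh from libvirt:

#  sudo apt-get install cpu-checker

#  sudo kvm-ok

#  sudo virsh list --all

kvm-ok should report that KVM acceleration can be used, and virsh should connect and print an (empty) table of virtual machines.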

[Screenshot: the postfix configuration dialog]


How to install a nested hypervisor on an ESXi virtual machine without a vSphere server

If you read my blog, you've probably noticed I've been doing a lot of stuff with hypervisors lately, more specifically setting up OpenStack. I've always been a VMware guy; I like the simplicity of ESXi and the intuitiveness of its interface. Since OpenStack really works best with at least three servers, two of which don't do much of anything, I decided to use an ESXi server to host the OpenStack infrastructure. The controller node and network node do not provide any virtualization capabilities, but the compute node(s) do.

ESXi, at least since version 5.1, has supported running 64-bit hypervisor guests, or "nested" hypervisors, on any Intel i3 or newer CPU. Specifically, your CPU needs to support one of the following:

  • Intel VT-x or AMD-V for 32-bit nested virtualization
  • Intel EPT or AMD RVI for 64-bit nested virtualization

In my case, my Xeon W5580 has VT-x and EPT support, so I can run 64-bit nested virtual machines.
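
If you're not sure what your CPU supports, one quick check (assuming you can boot some Linux environment on the hardware; the ESXi shell doesn't expose a Linux-style /proc/cpuinfo) is to look for the relevant CPU flags:

#  grep -cE 'vmx|svm' /proc/cpuinfo

#  grep -cE 'ept|npt' /proc/cpuinfo

A non-zero count from the first command means VT-x or AMD-V is present; a non-zero count from the second means EPT (or NPT, which is how AMD's RVI shows up in /proc/cpuinfo). Intel's ark.intel.com spec pages list the same information per model.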

This will allow you to run any nested hypervisor within an ESXi 5.1 or newer host. I've run Xen, KVM, OpenStack, Proxmox, and ESXi; they all worked great.

How To Enable

The virtual machine setting that passes the VT-x functionality through to the guest virtual machine is called HV (as in hypervisor). The problem is that you have to be running the new vSphere Web Client to get at the nice little check box that turns this on. The vSphere Desktop Client does not have this functionality, and unless you have a license for vSphere server, there is no way to enable HV on a virtual machine using the GUI. However, there is a VERY easy workaround: you simply add a single line to the .vmx file of the virtual machine you need HV enabled on.
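
For reference, the setting VMware documents for this on ESXi 5.1 and newer is the per-VM vhv.enable flag; with the VM powered off, the line added to its .vmx file is:

vhv.enable = "TRUE"

(On ESXi 5.0, the equivalent was a host-wide vhv.allow = "TRUE" in /etc/vmware/config; the per-VM flag replaced it in 5.1.)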

To do this, fire up the vSphere Client, and make sure the host is selected in the left pane. Also, verify the VM is powered OFF.

[Screenshot: the vSphere Client with the host selected in the left pane]


Expanding & Resizing an LVM Partition / Group on Ubuntu 14.04 LTS

I have a server dedicated to hosting an ownCloud instance, ownCloud 8 to be exact. It's an Ubuntu 14.04 LTS virtual machine on an ESXi 5 hypervisor. This is my own server, not any sort of revenue-generating customer service; it has become a "Dropbox" replacement for myself and a few select friends and family. Recently, I found the original 1 TB I allocated was filling up quickly, so I started doing some Google searches to see how I could go about resizing, or expanding, an LVM group (much like resizing a partition). I found an enormous wealth of information, much of it conflicting. As I started going through a guide that closely matched my configuration (everything was the same, actually, except the size of the disk), I instantly ran into problems with commands not working. It was frustrating. Eventually I navigated through it and successfully expanded the logical volume, so I figured I would document my troubles to make others' lives a little easier.

First, you need to run a couple of commands to see what you're working with: df and fdisk -l. You should see something like this:

mike@cloud:~$ df
Filesystem                  1K-blocks     Used Available Use% Mounted on
/dev/mapper/cloud--vg-root 1048254140 65112056 929870740   7% /
none                                4        0         4   0% /sys/fs/cgroup
udev                          4077252        4   4077248   1% /dev
tmpfs                          817700      524    817176   1% /run
none                             5120        0      5120   0% /run/lock
none                          4088496        0   4088496   0% /run/shm
none                           102400        0    102400   0% /run/user
/dev/sda1                      240972    67164    161367  30% /boot


mike@cloud:~$ sudo fdisk -l
[sudo] password for mike:

Disk /dev/sda: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c9595

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758  2147481599  1073489921    5  Extended
/dev/sda5          501760  2147481599  1073489920   8e  Linux LVM

Disk /dev/mapper/cloud--vg-root: 1090.7 GB, 1090661646336 bytes
255 heads, 63 sectors/track, 132598 cylinders, total 2130198528 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/cloud--vg-root doesn't contain a valid partition table

Disk /dev/mapper/cloud--vg-swap_1: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
