
Monthly Archives: April 2015

How to change the MTU in Windows Server 2008 & 2012

When I finally got a Windows Server 2012 image built and deployed on OpenStack, I started having some seriously squirrely problems with networking. I was able to ping and resolve DNS. I was even able to browse network shares on other servers that were well up the chain outside of the virtual environment, but I was unable to actually browse the internet from the Windows Server 2012 instance on OpenStack. I was having no issues with Linux-based images.

I immediately suspected MTU as the culprit. I double-checked my neutron-dnsmasq.conf file to make sure the MTU was set to 1454 via DHCP configuration. It was. So, I checked the MTU setting on the Windows image, and it was in fact 1500. For some reason the DHCP option was having no effect on the Windows image. This is supposed to be handled by the CloudBase VirtIO driver, which allows the MTU to be set via DHCP in OpenStack environments, but it obviously wasn't working. You can check your MTU by doing the following:


Open an Administrator command prompt.

netsh interface ipv4 show interfaces

[Screenshot: output of netsh interface ipv4 show interfaces]

This will show your current MTU settings. Pay close attention to the Idx number of the Ethernet interface; you will need it to change the MTU. To change the MTU to 1454, use the command shown below (replace the "10" with the Idx of your Ethernet interface).
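For reference, the standard netsh syntax for this is the following (shown with an Idx of 10 and the persistent store flag so the change survives a reboot; swap in your own Idx):

netsh interface ipv4 set subinterface "10" mtu=1454 store=persistent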

Click here to read the entire tutorial!

How to install a nested hypervisor on an ESXi virtual machine without a vSphere server

If you read my blog, you've probably noticed I've been doing a lot of stuff with hypervisors lately, more specifically setting up OpenStack. I've always been a VMware guy. I like the simplicity of ESXi and the intuitiveness of the interface. Since OpenStack really works best with at least 3 servers, 2 of which don't do much of anything, I decided to use an ESXi server to host the OpenStack infrastructure. The controller node and network node do not provide any type of virtualization capabilities, but the compute node(s) do.

ESXi, at least since version 5.1, has supported running 64-bit hypervisor guests, or "nested" hypervisors, on any Intel i3 or newer CPU. Specifically, your CPU needs to support one of the following:

  • Intel VT-x or AMD-V for 32-bit nested virtualization
  • Intel EPT or AMD RVI for 64-bit nested virtualization

In my case, my Xeon W5580 has VT-x and EPT support, so I can run 64-bit nested virtual machines.

This will allow you to run any nested hypervisor within an ESXi 5.1 or newer host. I've run Xen, KVM, OpenStack, Proxmox, and ESXi; they all worked great.

How To Enable

The feature, or setting, of the virtual machine that allows the VT-x functionality to be passed through to the guest virtual machine is called HV (as in hypervisor). The problem is that you have to be running the new vSphere Web Client to get at the nice little check box that turns this on. The vSphere Desktop Client does not have this functionality, and unless you have a license for vSphere server, there is no way to enable HV on a virtual machine using the GUI. However, there is a VERY easy workaround: you simply add a single line to the .vmx file of the virtual machine you need HV enabled on, as shown below.
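On ESXi 5.1 and newer, the line in question is the virtual hardware-assisted virtualization flag:

vhv.enable = "TRUE"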

To do this, fire up the vSphere Client, and make sure the host is selected in the left pane. Also, verify the VM is powered OFF.

[Screenshot: vSphere Client with the host selected in the left pane and the VM powered off]

Click here to read the entire tutorial!

Installing Ubuntu OpenStack on a Single Machine, Instead of 7

For an updated guide click here to read “Install OpenStack on a Single Ubuntu 16.04 Xenial Xerus Server – Updated!”

If you’ve read my other recent posts, you’ve probably noticed I’ve been spending a lot of time with different cloud architectures. My previous guide on using DevStack to deploy a fully functional OpenStack environment on a single server was fairly involved, but not too bad. I’ve read quite a bit about Ubuntu OpenStack, and it seems that Canonical has spent a lot of energy developing their spin on it. So, now I want to set up Ubuntu OpenStack. All of Ubuntu’s official documentation and guides state a minimum requirement of 7 machines (servers). However, although I could probably round up 7 machines, I really do not want to spend that much effort and electricity. After scouring the internet for many hours, I finally found some obscure documentation stating that Ubuntu OpenStack could in fact be installed on a single machine. It does need to be a pretty powerful machine; the minimum recommended specifications are:

  • 8 CPUs (4 hyperthreaded will do just fine)
  • 12GB of RAM (the more the merrier)
  • 100GB Hard Drive (I highly recommend an SSD)

With the minimum recommended specs being what they are, my little 1U server may or may not make the cut, but I really don’t want to take any chances. I’m going to use another server, a much larger 4U box, to do this. Here are the specs of the server I’m using:

  • Supermicro X7DAL Motherboard
  • Xeon W5580 4 Core CPU (8 Threads)
  • 12GB DDR3 1333MHz ECC Registered RAM
  • 256GB Samsung SSD
  • 80GB Western Digital Hard Drive

I have installed Ubuntu 14.04 LTS, with OpenSSH Server being the only package selected during installation. So, if you have a machine that is somewhat close to the minimum recommended specs, go ahead and install Ubuntu 14.04 LTS. Be sure to run a sudo apt-get upgrade before proceeding.

Let’s Get Started

First, we need to add the OpenStack installer PPA. Then, we need to update apt. Do the following:
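A minimal sketch of those two steps, assuming the cloud-installer PPA Canonical published for its single-machine installer (double-check the PPA name against the full tutorial before running):

sudo apt-add-repository ppa:cloud-installer/stable
sudo apt-get update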

Click here to read the entire tutorial!

MailCleaner Spam Filter – How To Open a Port & Add IPTables Firewall Rules

MailCleaner is a nice Open Source Linux distribution that creates a spam filter appliance. It is designed to sit between an email server and the internet and filter spam out of email using advanced rules, DNS RBLs (real-time blacklists), and many other techniques. It also scans email for viruses. Although I no longer use MailCleaner (I have replaced it with ScrollOut F1), I remember coming across a big issue in the past that took me some time to figure out, so I thought I would share it.

Because MailCleaner is more or less an appliance, most aspects of the operating system are controlled by MailCleaner. The majority of the settings you need to change are easily available in the web interface; however, firewall rules are not. MailCleaner is designed to manage all IPTables rules itself. If you manually add an IPTables rule from the command line, once the rules are reloaded or the system is rebooted, the rule is gone. That is because MailCleaner wipes out and reloads its IPTables rules from data stored in its MySQL database. So, in order to open any additional ports, you must modify the database. I encountered this dilemma when I installed a remote monitoring client (the Nagios-based Check_MK, to be exact) and needed to open a port to allow the monitoring server to connect.

Let’s assume I need to open up SSH (port 22) and RSYNC (port 873), and I only want my mail server’s IP, 1.2.3.4, to be able to connect. Normally we would enter the following iptables commands:

sudo iptables -A INPUT -s 1.2.3.4/32 -p tcp -m tcp --dport 873 -j ACCEPT
sudo iptables -A INPUT -s 1.2.3.4/32 -p tcp -m tcp --dport 22 -j ACCEPT

But in this case, we cannot. The good news is that MailCleaner will do it for you if you add the correct info to its MySQL database. Here’s how you do that (from a command prompt on the MailCleaner server):
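As a rough sketch of the idea, you insert one row per rule into the firewall table of MailCleaner’s configuration database and let MailCleaner regenerate IPTables from it. The database, table, and column names below are illustrative assumptions, not the verified schema; the full tutorial has the exact statements:

-- illustrative only: table/column names are assumptions, not MailCleaner's verified schema
INSERT INTO external_access (service, port, protocol, allowed_ip) VALUES ('SSH', '22', 'TCP', '1.2.3.4/32');
INSERT INTO external_access (service, port, protocol, allowed_ip) VALUES ('RSYNC', '873', 'TCP', '1.2.3.4/32');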

Click Here To Read The Entire Tutorial!

How To Install VMware tools on CentOS 6 and CentOS 7 / RHEL

This is a quick and dirty guide on installing VMware tools (vmtools) on a CentOS 6 or CentOS 7 virtual machine as well as RHEL (Red Hat Enterprise Linux).

First, you will need to install the VMware tools prerequisites:

yum install make gcc kernel-devel kernel-headers glibc-headers perl net-tools

Now you will need to mount the VMware Tools ISO by selecting the “Install/Upgrade VMware Tools” option in ESXi. This can be done a few different ways. I prefer to right-click on the virtual machine, then go to Guest and click “Install/Upgrade VMware Tools.”

[Screenshot: right-click menu on the VM showing Guest > Install/Upgrade VMware Tools]
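Once the ISO is attached, the in-guest steps generally look like this (the standard VMware Tools tarball install; the exact tarball version number will differ):

mount /dev/cdrom /mnt
tar -xzf /mnt/VMwareTools-*.tar.gz -C /tmp
/tmp/vmware-tools-distrib/vmware-install.pl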

Click Here To Read The Entire Tutorial!

Installing OpenStack on a Single CentOS 7 Server

This guide will help you install OpenStack on CentOS 7.  If you would like to install Openstack on Ubuntu, here is a guide to install OpenStack on a single Ubuntu 14.04 server, and this one will help you get OpenStack installed on a single Ubuntu 16.04 server.

I’ve always been rather curious about OpenStack and what it can and can’t do. I’ve been mingling with various virtualization platforms for many, many years. Most of my production-level experience has been with VMware, but I’ve definitely seen the tremendous value and possibilities the OpenStack platform has to offer. A few days ago I came across DevStack while reading up on what it takes to get an OpenStack environment set up. DevStack is pretty awesome. It’s basically a powerful script that was created to make installing OpenStack stupid easy, on a single server, for testing and development. You can install DevStack on a physical server (which I will be doing), or even a VM (virtual machine). Obviously, this is nothing remotely resembling a production-ready deployment of OpenStack, but if you want a quick and dirty environment to get your feet wet, or do some development work, this is absolutely the way to go.

The process to get DevStack up and running goes like this:

  1. Pick a Linux distribution and install it. I’m using CentOS 7.
  2. Download DevStack and do a basic configuration (condensed below).
  3. Kick off the install and grab a cup of coffee.

A few minutes later you will have a ready-to-go OpenStack infrastructure to play with.
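In condensed form, steps 2 and 3 boil down to something like this (the GitHub URL shown is the openstack-dev mirror commonly used at the time; run it as a regular user rather than root, since stack.sh refuses to run as root):

git clone https://github.com/openstack-dev/devstack
cd devstack
./stack.sh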

Server Setup and Specs

I have always been fond of CentOS and it is always my go-to OS of choice for servers, so that is what I’m going to use here. CentOS version 7 to be exact. Just so you know, DevStack works on Ubuntu 14.04 (Trusty), Fedora 20, and CentOS/RHEL 7. The setup is pretty much the same for all three so if you’re using one of the other supported OS’s, you should be able to follow along without issues, but YMMV.

Click Here To Read The Entire Post!

Expanding & Resizing an LVM Partition / Group on Ubuntu 14.04 LTS

I have a server dedicated to the purpose of hosting an ownCloud instance, ownCloud 8 to be exact. It’s an Ubuntu 14.04 LTS virtual machine on an ESXi 5 hypervisor. This is my own server, not any sort of revenue-generating customer service. It has become a “Dropbox” replacement for myself and a few select friends and family. Recently, I found the original 1TB I allocated to be filling up quickly. So, I started doing some Google searches to see how I could go about resizing, or expanding, an LVM group (like a partition). I found an enormous wealth of information, much of it conflicting. As I started going through a guide that closely matched my configuration (everything was the same, actually, except the size of the disk), I instantly faced problems with commands not working. It was frustrating. Eventually I navigated through it and successfully expanded the logical volume. I figured I would go ahead and document my troubles so that I can make others’ lives a little easier.

First, you need to run a couple of commands to see what you’re working with: df and fdisk -l. You should see something like this:

mike@cloud:~$ df
Filesystem                  1K-blocks     Used Available Use% Mounted on
/dev/mapper/cloud--vg-root 1048254140 65112056 929870740   7% /
none                                4        0         4   0% /sys/fs/cgroup
udev                          4077252        4   4077248   1% /dev
tmpfs                          817700      524    817176   1% /run
none                             5120        0      5120   0% /run/lock
none                          4088496        0   4088496   0% /run/shm
none                           102400        0    102400   0% /run/user
/dev/sda1                      240972    67164    161367  30% /boot


mike@cloud:~$ sudo fdisk -l
[sudo] password for mike:

Disk /dev/sda: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c9595

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758  2147481599  1073489921    5  Extended
/dev/sda5          501760  2147481599  1073489920   8e  Linux LVM

Disk /dev/mapper/cloud--vg-root: 1090.7 GB, 1090661646336 bytes
255 heads, 63 sectors/track, 132598 cylinders, total 2130198528 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/cloud--vg-root doesn't contain a valid partition table

Disk /dev/mapper/cloud--vg-swap_1: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
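For orientation, once the underlying virtual disk has been grown, the general expansion sequence on a layout like this typically looks like the following. This is a sketch of the standard LVM workflow, not the post’s exact steps; growing the /dev/sda5 partition itself first (the part where my troubles started) is covered in the full post:

sudo pvresize /dev/sda5
sudo lvextend -l +100%FREE /dev/cloud-vg/root
sudo resize2fs /dev/cloud-vg/root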

Click here to see the entire post!