Virtualization - Page 3

Anything related to virtualization.

How To Enable SNMP On ESXi 5 / 5.5 / 6 For Remote Monitoring

SNMP isn’t exactly new technology, but it’s reliable and just about every monitoring system out there supports it. There are certainly more in-depth monitoring solutions for ESXi, but if you are looking for a quick and dirty way to fold an ESXi host into a monitoring platform you already have, SNMP will do the trick. This post describes how to set up SNMP on ESXi 5, 5.5, and 6. I’m fairly certain it will work on older versions of ESXi as well, but I have not tested that theory.

How to enable SNMP on ESXi 5 / 5.5 / 6

There are a few steps involved in getting SNMP functional on ESXi. They go something like this.

  • Set the SNMP community string
  • Enable the SNMP service
  • Add necessary firewall rules
  • Enable the added firewall rule
  • Restart the SNMP daemon

It’s a pretty straightforward process, so let’s get started. First, you need to connect to your ESXi host via SSH. If you don’t know how to do this, click here to read my post on how to enable SSH on ESXi. After logging in to your ESXi host via SSH, run the following commands.

#  esxcli system snmp set --communities YOUR_STRING
#  esxcli system snmp set --enable true

You will need to change YOUR_STRING to the name of your SNMP community. This is often public or private, but I suggest you use something different and unique for security purposes.

Next, we need to add a firewall rule to allow inbound SNMP traffic (UDP port 161), and then enable it.
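For reference, ESXi 5.1 and newer ship with a built-in snmp ruleset, so enabling it and restarting the daemon typically boils down to something like this (a sketch; the full tutorial walks through adding the rule by hand on builds that lack it):

#  esxcli network firewall ruleset set --ruleset-id snmp --enabled true
#  /etc/init.d/snmpd restart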

Click Here To Read The Entire Tutorial!

How To Install VMware VMtools (VMware Tools) on Ubuntu Linux

So, you need to install vmtools on Ubuntu. You’ve come to the right place. I’ve done it hundreds of times, but recently a friend of mine was having some difficulty doing this. I thought I would put a quick how-to together so I could maybe help some more people out. Here goes.

How To Install VMtools on Ubuntu

First things first. Before going any further, I suggest you update apt and then upgrade. This will make sure everything on your virtual machine is up to date.

#  sudo apt-get -y update

#  sudo apt-get -y upgrade

Now, you need to attach the VMware Tools installation disc to your virtual machine. In ESXi / vSphere, just right-click the virtual machine in the left pane, go to Guest, then select “Install/Upgrade VMware Tools.”

If you are using VMware Workstation or VMware Fusion, select the virtual machine in the library, then under the Virtual Machine pull-down menu at the top, select “Install VMware Tools.”
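Once the disc is attached, the classic installer flow inside the guest typically looks something like this (the mount point is arbitrary and the tarball version number will vary):

#  sudo mkdir -p /mnt/cdrom
#  sudo mount /dev/cdrom /mnt/cdrom
#  tar xzvf /mnt/cdrom/VMwareTools-*.tar.gz -C /tmp
#  cd /tmp/vmware-tools-distrib
#  sudo ./vmware-install.pl -d

The -d flag accepts the installer’s default answers. On recent Ubuntu releases, sudo apt-get install open-vm-tools is a simpler alternative to the bundled installer.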

Click Here To Read The Entire Tutorial!

How to Install Docker on CentOS 7 and Set Up A Ghost Blog

Docker is a slick container-based virtualization platform that allows you to run “images” with minimal overhead. There are many different images available, from full-blown OSes such as Ubuntu or CentOS to web apps like WordPress or Ghost. The possibilities are endless, and because resource usage is minimal, you can really do a lot with little hardware. You can install Docker on all of the major Linux distributions, as well as Windows. It works fine in a virtual machine or on a VPS. I will be installing Docker on a CentOS 7 VM, running on an ESXi hypervisor.

Let’s Get Started

I’m assuming you already have your operating system installed and you are sitting at a command prompt. Installation and configuration are very easy on CentOS 7. By default, CentOS uses firewalld, and Docker and firewalld do not get along nicely. Docker creates iptables rules directly for access to running containers, and if firewalld is refreshed or restarted, all of the iptables rules Docker created get wiped by firewalld. So, we will disable firewalld and install the classic iptables functionality. Here are the steps involved:

  • Install Docker
  • Disable firewalld
  • Install iptables configuration scripts
  • Download Ghost Docker image and run

First, we will go ahead and install Docker, which requires only a single, simple command.

#  sudo yum install docker

Let’s set up Docker to start at boot time. CentOS 7 uses systemd, so the native command is:

#  sudo systemctl enable docker

(The classic sudo chkconfig docker on also works here; CentOS 7 simply forwards it to systemd.)

There will be a handful of dependencies, nothing out of the ordinary. If you are already running as root, you can omit the sudo. Next, we need to get firewalld stopped and removed, and the iptables configuration scripts installed.
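The full tutorial covers this step by step, but the gist typically looks like the following (iptables-services is the stock CentOS 7 package that provides the classic init scripts):

#  sudo systemctl stop firewalld
#  sudo systemctl disable firewalld
#  sudo yum install iptables-services
#  sudo systemctl enable iptables

With that sorted, pulling and running the Ghost image is roughly a one-liner; something like this maps the container’s port 2368 out to port 80 on the host (the container name is arbitrary):

#  sudo docker run -d --name ghost-blog -p 80:2368 ghost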

Click Here To Read The Entire Tutorial!

How to open up all ports on VMware ESXi 5, 5.1 & 5.5 to specific IP addresses or subnet

In a lab environment, and in very limited production scenarios, it’s often very useful to open all ports, TCP and UDP, but only to certain IP addresses, subnets, or IP address ranges. I have found very little info on this specifically, so I thought I would whip up this guide so you know an easy way to open up all ports for specific addresses. This will work on VMware ESXi 5, 5.1, and 5.5 for sure, and it will most likely work on other versions of ESXi as well, although I have not tested them. Please let me know in the comments if you have luck on non-5.x versions, specifically 4.x and 6.x.

Basically, we are going to create four firewall rules, one for each of the following:

  • Open all UDP ports inbound (ports 1-60,000).
  • Open all UDP ports outbound (ports 1-60,000).
  • Open all TCP ports inbound (ports 1-60,000).
  • Open all TCP ports outbound (ports 1-60,000).

Once that’s done, we’ll lock access down to specific address(es) via the vSphere Client. First, go ahead and SSH into your ESXi host. Once you are at a command prompt, you will need to edit /etc/vmware/firewall/service.xml. I prefer nano, but that’s not available on ESXi, so we have to use vi. Before editing, let’s make a backup of the file and change its permissions so we can write to it.

# cp /etc/vmware/firewall/service.xml /etc/vmware/firewall/service.xml.bak
# chmod 644 /etc/vmware/firewall/service.xml
# chmod +t /etc/vmware/firewall/service.xml

Now we have a backup of the service.xml file, called service.xml.bak. We have also allowed writes to service.xml and set the sticky bit, which tells ESXi to preserve the file across reboots. Let’s go ahead and open service.xml with vi.

# vi /etc/vmware/firewall/service.xml

The service.xml file is the main template for firewall rules, specifically pertaining to ports. It is what populates the information on the Security Profile > Firewall tab in the vSphere Client. It is here that we are going to add our four rules. If you are unfamiliar with vi, it can be a bit confusing. Here are some pointers for you:

  • When you first enter vi, you cannot manipulate any text. To do so, hit the “i” key. This puts you in “insert” mode.
  • Once in insert mode, you can move about freely and add or edit at will.
  • After making all needed changes, press the “Esc” key, then the “:” key. This puts you in vi command mode.
  • At the “:” prompt, enter “w” (for write) and “q” (for quit), then press Enter. It should look like this: :wq
  • You have just saved and exited. That’s it. So, let’s continue.
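To give you an idea of where this is headed, one of the four rules follows a structure like this (the service id and name here are illustrative; the full tutorial lists all four rules):

<service id="0042">
  <id>openAllTCPInbound</id>
  <rule id="0000">
    <direction>inbound</direction>
    <protocol>tcp</protocol>
    <porttype>dst</porttype>
    <port>
      <begin>1</begin>
      <end>60000</end>
    </port>
  </rule>
  <enabled>true</enabled>
  <required>false</required>
</service>

After saving the file, reload the firewall so it picks up the new rules:

#  esxcli network firewall refresh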

Click here to continue reading this tutorial

How to install LSI MegaRAID Storage Manager (MSM) on VMware ESXi 5.5

This morning I got an email from the datacenter that informed me of a loud alarm coming from one of my servers. I knew right away it was the LSI card sounding off due to a hard drive failure. Since I almost always use RAID 10 in critical arrays, I was more annoyed than concerned. So, off to the datacenter I went, new drive in hand. While diagnosing the issues, I realized there is no out-of-the-box way to be notified of a drive failure within ESXi. As far as I could tell, everything was fine, except for an audible alarm I would have never heard.

The RAID card in this particular server is an LSI 9260-8i; however, this guide is the same for all of the 92xx series cards, like the 9265-8i or 9265-16i. VMware includes drivers for these cards starting in ESXi 5.1, if I remember correctly. However, there is no health data for drives and no management interface for arrays. After a couple of Google searches, I quickly found that there is a lot of conflicting information and tons of problems that go along with installing the LSI MegaRAID Storage Manager (MSM) on ESXi. I also ran into some problems. So, I thought I would put together a quick, easy, clear guide to save others the hassle of going through what I went through. So, here we go.

 

How to install MSM on ESXi 5.5

To complete this process, you will have to put your ESXi host into maintenance mode, and you will have to reboot. So make sure your VMs are all shut down before proceeding.

You will need to have the following items:

  • The LSI SMIS provider VIB for your RAID card (from LSI’s support downloads)
  • The MegaRAID Storage Manager installer for the VM you will manage from
  • An SCP client such as WinSCP to copy files to the host

The process is pretty straightforward. In a nutshell, here are the steps we will take:

  1. Enable SSH on the ESXi host.
  2. Copy the LSI SMIS provider to the ESXi host via WinSCP.
  3. Configure the host and install the SMIS provider.
  4. Install MegaRAID Storage Manager on a VM.

 

Log into your host using the vSphere Client or the web interface, then go to the Configuration tab and select Security Profile.
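Once SSH is enabled and the SMIS provider VIB has been copied over, the install itself typically looks like this (the VIB filename is illustrative; use the one you downloaded from LSI):

#  esxcli system maintenanceMode set --enable true
#  esxcli software vib install -v /tmp/vmware-esx-provider-lsiprovider.vib
#  reboot

After the reboot, take the host out of maintenance mode and point MSM on your management VM at the host’s IP address.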

 

Click here to read the entire tutorial

How to install a nested hypervisor on an ESXi virtual machine without a vSphere server

If you read my blog, you’ve probably noticed I’ve been doing a lot of stuff with hypervisors lately, more specifically setting up OpenStack. I’ve always been a VMware guy. I like the simplicity of ESXi and the intuitiveness of the interface. Since OpenStack really works best with at least 3 servers, 2 of which don’t do much of anything, I decided to use an ESXi server to host the OpenStack infrastructure. The controller node and network node do not provide any virtualization capabilities, but the compute node(s) do.

ESXi, at least since version 5.1, has supported running 64-bit hypervisor guests, or “nested” hypervisors, on any Intel i3 or newer CPU. Specifically, your CPU needs one of the following:

  • Intel VT-x or AMD-V for 32-bit nested virtualization
  • Intel EPT or AMD RVI for 64-bit nested virtualization

In my case, my Xeon W5580 has VT-x and EPT support, so I can run 64-bit nested virtual machines.

This will allow you to run any nested hypervisor within an ESXi 5.1 or newer host. I’ve run Xen, KVM, OpenStack, Proxmox, and ESXi; they all worked great.

How To Enable

The feature, or setting, of the virtual machine that allows the VT-x functionality to be passed through to the guest virtual machine is called HV (as in hypervisor). The problem is that you have to be running the new vSphere Web Client to get at the nice little check box that turns this on. The vSphere Desktop Client does not have this functionality, and unless you have a license for vSphere server, there is no way to enable HV on a virtual machine using the GUI. However, there is a VERY easy workaround: you simply add a single line to the .vmx file of the virtual machine you need HV enabled on.
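On ESXi 5.1 and newer, the line in question is the vhv.enable flag:

vhv.enable = "TRUE"

(On ESXi 5.0, the equivalent was a host-wide vhv.allow = "TRUE" in /etc/vmware/config rather than a per-VM .vmx setting.)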

To do this, fire up the vSphere Client, and make sure the host is selected in the left pane. Also, verify the VM is powered OFF.

Click here to read the entire tutorial!

Installing Ubuntu OpenStack on a Single Machine, Instead of 7

For an updated guide click here to read “Install OpenStack on a Single Ubuntu 16.04 Xenial Xerus Server – Updated!”

If you’ve read my other recent posts, you’ve probably noticed I’ve been spending a lot of time with different cloud architectures. My previous guide on using DevStack to deploy a fully functional OpenStack environment on a single server was fairly involved, but not too bad. I’ve read quite a bit about Ubuntu OpenStack, and it seems that Canonical has spent a lot of energy developing their spin on it. So, now I want to set up Ubuntu OpenStack. All of Ubuntu’s official documentation and guides state a minimum requirement of 7 machines (servers). However, although I could probably round up 7 machines, I really do not want to spend that much effort and electricity. After scouring the internet for many hours, I finally found some obscure documentation stating that Ubuntu OpenStack could in fact be installed on a single machine. It does need to be a pretty powerful machine; the minimum recommended specifications are:

  • 8 CPUs (4 hyperthreaded will do just fine)
  • 12GB of RAM (the more the merrier)
  • 100GB Hard Drive (I highly recommend an SSD)

With the minimum recommended specs being what they are, my little 1U server may or may not make the cut, and I really don’t want to take any chances. I’m going to use another server, a much larger 4U, to do this. Here are the specs of the server I’m using:

  • Supermicro X7DAL Motherboard
  • Xeon W5580 4 Core CPU (8 Threads)
  • 12GB DDR3 1333MHz ECC Registered RAM
  • 256GB Samsung SSD
  • 80GB Western Digital Hard Drive

I have installed Ubuntu 14.04 LTS, with OpenSSH Server being the only package selected during installation. So, if you have a machine that is somewhat close to the minimum recommended specs, go ahead and install Ubuntu 14.04 LTS. Be sure to run sudo apt-get update followed by sudo apt-get upgrade before proceeding.

Let’s Get Started

First, we need to add the OpenStack installer PPA. Then, we need to update apt. Do the following:
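At the time of writing, those commands look something like this (the PPA name comes from Canonical’s cloud-installer project; double-check it against the full tutorial before running):

#  sudo apt-add-repository ppa:cloud-installer/stable
#  sudo apt-get update
#  sudo apt-get install openstack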

Click here to read the entire tutorial!