
Virtualization

Anything related to virtualization.

Increasing VMware ESXi 6 & 6.5 Host Client Session Timeout for Web Interface

With the latest version(s) of VMware ESXi, 6 and 6.5, VMware decided it would be most convenient to automatically log off sessions every 15 minutes.  So, after 15 minutes of inactivity, you have to log back in to the ESXi Host Client Web Interface, again and again.  I found this extremely annoying, especially in a lab environment when testing various features or troubleshooting issues.  Fortunately, this automatic logoff timeout can be increased so it’s not quite so painful.

How to Increase Session Timeout on ESXi 6 & 6.5

To increase the session timeout, all you need to do is change one advanced configuration parameter in the ESXi Host Client Web Interface.

First, log in to the web interface.  After doing so, navigate to Host > Manage > System > Advanced Settings.  Scroll down or search for the UserVars.HostClient.SessionTimeout key.

The default value for UserVars.HostClient.SessionTimeout is 900.  Because this value is in seconds, by default you will be logged out after 15 minutes of inactivity.  Personally, I would like to set this to 24 hours, but that isn’t possible.
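
If you would rather skip the GUI, the same value can usually be changed with esxcli over SSH.  Treat this as a hedged sketch: the option path below is my best guess at how the setting name maps into esxcli on ESXi 6.x, so confirm it with the list command first.

# Find the exact advanced option path for the session timeout
esxcli system settings advanced list | grep -i sessiontimeout

# Set the timeout to one hour (3600 seconds), substituting the path from the output above
esxcli system settings advanced set -o /UserVars/HostClientSessionTimeout -i 3600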

Click Here to Keep Reading!!

How to Fix ‘setkeycodes 00’ and ‘Unknown key pressed’ Console Errors on OpenStack!

Earlier today, I wrote an updated tutorial on using devstack to install OpenStack on a single Ubuntu 16.04 server.  That deployment went so smoothly that it was no surprise when I ran into a roadblock trying to console into my first instance.

The Problem

When accessing the console through the web browser, I wasn’t able to use the keyboard.  Every time I hit a key, these two lines would appear in the console:

[ 74.003678] atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.

[ 74.004462] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0).

Click Here To Keep Reading!

Install OpenStack on One Virtual Machine, the Easy Way, On Ubuntu 16.04 LTS!

Many of you have emailed me or posted to voice your gripes about the painful process of installing an OpenStack environment to play around with. I feel your pain! My recent article on deploying OpenStack using conjure-up worked great until a developer committed some defective code.  Some of you even reverted to my old guide on deploying OpenStack on Ubuntu 14.04 from last year.  So, I set out to give you a foolproof, 100% guaranteed deployment method that’s EASY, STABLE, and works on Ubuntu 16.04 Xenial.  Here you go!

Requirements

For this guide, you will need a server that meets at least the following specs.

  • Virtual Machine on a real hypervisor (ESXi, KVM, Xen, etc) or a bare metal server with virtualization support.
  • 14GB of RAM is the recommended minimum.
  • 100GB of hard disk space, at least.
  • Ubuntu 16.04 LTS server, having already run sudo apt update && sudo apt upgrade
  • About an hour and a cup of coffee.

Installing OpenStack
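
The full walkthrough is behind the link below, but for orientation, a stock devstack deployment follows this general pattern.  Treat it as a minimal sketch rather than the article’s exact steps: the passwords are placeholders, and devstack expects to run as a regular (non-root) user with sudo rights.

# Grab devstack (as a normal user, not root)
sudo apt install -y git
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack

# Minimal local.conf with placeholder passwords
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
EOF

# Kick off the deployment; this is where the cup of coffee comes in
./stack.sh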

Click Here To Keep Reading!

Install OpenStack on a Single Ubuntu 16.04.1 Xenial Xerus Server Using Conjure-up

Introduction

It’s been some time since I wrote Installing Ubuntu OpenStack on a Single Machine, Instead of 7.  Since then, there have been many updates to both OpenStack and Ubuntu.

This tutorial will guide you through installing OpenStack on a single Ubuntu 16.04 Server.  I will be installing Ubuntu and OpenStack within a virtual machine hosted on a VMware ESXi Hypervisor, but any fresh installation of Ubuntu 16.04 should work fine, as long as it meets the minimum requirements below.  I will be using conjure-up to install the environment because Ubuntu’s openstack-install package doesn’t work on Ubuntu 16.04.1 at this time.

Note:  I have written an updated guide on Installing OpenStack on Ubuntu 16.04 LTS using devstack.  I suggest following that guide unless you have a specific reason for using the conjure-up method.  From my experience, the devstack method requires fewer resources, runs faster, and performs much better once deployed.

Minimum Requirements

To install the entire environment on a single physical server or virtual machine, you will need at least:

  • 8 CPUs (vCPUs will work just fine)
  • 12GB of RAM (minimum needed to successfully start everything, more is better)
  • 100GB Disk Space (SSD preferred, but rotating disk will work)
  • Ubuntu 16.04.1 Xenial Xerus x64 Server (only OpenSSH Server installed)
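
For reference, the conjure-up workflow itself is short once the requirements are met.  A hedged sketch follows; at the time, conjure-up was distributed as a snap (a PPA route also existed), so take the install command as one plausible path rather than the article’s exact steps.

# Install conjure-up and launch the interactive installer
sudo snap install conjure-up --classic
conjure-up

# Then pick the OpenStack spell from the menu and follow the prompts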

Click Here To Continue Reading!

How to Update Proxmox VE When “You Do Not Have A Valid Subscription”

If you have recently dove into the Proxmox VE world, your mind is probably blown.  Proxmox gives you the unmatched ability to run full virtual machines, as well as containers, side by side with High Availability.  It’s an amazing virtualization platform, and if you haven’t tried it out yet I highly recommend you do so.  After installing Proxmox 4.2 on one of my lab servers, I found the need to update it, and I wasn’t about to pay for an Enterprise Subscription for my home lab.

How to update Proxmox when you see “You do not have a valid subscription for this server, please visit www.proxmox.com to get a list of available options” and keep your Proxmox server updated!

There are a few steps involved and they go something like this:

  1. Disable the enterprise repository that is configured by default
  2. Add the no-subscription repository
  3. Update apt so it knows what can be updated
  4. Use apt to upgrade any packages
  5. Upgrade the entire distribution, using apt, of course

First, let’s disable the enterprise repository.  Without a subscription, running apt-get update against it will fail with an error, so let’s comment out that repo so it isn’t checked.  Go ahead and PuTTY / SSH / console into your Proxmox server, and run the following command:


sed -i.bak 's|deb https://enterprise.proxmox.com/debian jessie pve-enterprise|\# deb https://enterprise.proxmox.com/debian jessie pve-enterprise|' /etc/apt/sources.list.d/pve-enterprise.list
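
The remaining steps follow the same pattern.  Here is a sketch for the Debian jessie based Proxmox 4.x shown above; double-check the repository line against the Proxmox wiki for your version:

# Add the free no-subscription repository
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

# Refresh package lists, then upgrade packages and the distribution
apt-get update
apt-get upgrade -y
apt-get dist-upgrade -y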

Click Here to Continue Reading!

How to Create an iSCSI Target & Extent / Share on FreeNAS 9 (and previous versions)

Today, I’m going to guide you through the process of creating an iSCSI target / extent on FreeNAS-9.  This will also work on previous versions of FreeNAS, such as versions 7 and 8.  There are a few different ways you can go about creating an iSCSI share.  You can dedicate an entire device (hard drive or RAID array) to the iSCSI share, or you can simply create a volume and put multiple iSCSI shares on it, each one simply a file on the volume.  This approach works well because you can use part of a volume as an NFS share, part of it as a CIFS share for Windows, and if you want a few separate iSCSI targets you can just create a single file for each.  Let’s get started.

How to create an iSCSI Target / Share on FreeNAS

First, we need to add a volume using your hard drive or RAID array that is connected to your FreeNAS server. If you have already done this, you can skip this step.  Let’s get started with the rest.

Log into your FreeNAS web interface, and go to Storage > Volumes > Volume Manager.  Fill in a volume name (make sure it starts with a letter, NOT a number, otherwise you will get an error).  Add one or more of your Available Disks (by clicking the + sign).  Select a RAID type if you wish to do so.  In my case, I’m using hardware RAID, so I will leave the default (single drive stripe, i.e., JBOD).  Now click Add Volume.

Now that we have added a volume, we can begin the process of creating an iSCSI share.  This process requires multiple steps, in the following order (a quick client-side test appears after the list):

  1. Add a Portal
  2. Add an Initiator
  3. Add a Target
  4. Create an Extent (the file that corresponds to the iSCSI share)
  5. Link the Target and the Extent together
  6. Start the iSCSI service
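
Once all six steps are done, you can sanity-check the target from any Linux client using open-iscsi.  This is a hedged example; the portal IP below is a placeholder for your FreeNAS server’s address.

# Install the initiator tools (Debian/Ubuntu client)
sudo apt-get install open-iscsi

# Discover targets offered by the FreeNAS portal (placeholder IP)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target; a new /dev/sdX block device should appear
sudo iscsiadm -m node --login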

Click Here to Continue Reading!

How to Add a VLAN to a Cisco UCS Using Unified Computing System Manager

Cisco’s UCS platform is an amazing blade infrastructure.  It’s extremely reliable, very fast, and easily expanded.  Today, I’m going to briefly go over how to add a VLAN to your Cisco UCS setup, using the Cisco Unified Computing System Manager.  This guide assumes you have already configured the VLAN on your network and you have trunk-enabled ports being fed into your UCS and/or Fabric switches.

Go ahead and log into the Cisco UCS Manager.  Once you have logged in, select the LAN tab, then VLANs (in the left column).  Once there, click the New button, up at the top, and then Create VLANs.

For the VLAN Name/Prefix, give the VLAN a unique, identifiable name.  In the VLAN IDs field, you need to enter the exact VLAN ID that was assigned to the VLAN when you configured it on your network infrastructure.  Once you have filled in those two fields, click OK.
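
If you prefer the UCS Manager CLI, the equivalent takes only a few commands.  This is a sketch with a hypothetical VLAN name and ID; verify the syntax against the UCSM CLI configuration guide for your firmware release.

# SSH to the UCS Manager virtual IP, then:
UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan vlan100 100
UCS-A /eth-uplink/vlan* # commit-buffer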

Click Here To Continue Reading!