
Linux - Page 6

Anything Linux related.

How to Install Docker on CentOS 7 and Set Up A Ghost Blog

Docker is a slick container-based virtualization platform that allows you to run “images” with minimal overhead. There are many different images available, from full-blown OS’s, such as Ubuntu or CentOS, to web apps like WordPress or Ghost. The possibilities are endless, and because resource usage is minimal, you can really do a lot with very little. You can install Docker on all of the major Linux distributions, as well as Windows. It works fine in a virtual machine or VPS. I will be installing Docker on a CentOS 7 VM, running on an ESXi hypervisor.

Let’s Get Started

I’m assuming you already have your operating system installed and are sitting at a command prompt. Installation and configuration are very easy on CentOS 7. By default, CentOS uses firewalld. Docker and firewalld do not get along nicely: Docker creates iptables rules directly to allow access to running containers, and if firewalld is refreshed or restarted, all of the iptables rules Docker created get wiped out. So, we will disable firewalld and install the classic iptables functionality. Here are the steps involved:

  • Install Docker
  • Disable firewalld
  • Install iptables configuration scripts
  • Download Ghost Docker image and run

First, we will go ahead and install Docker. To do this only requires a single, simple command.

#  sudo yum install docker

Let’s set up Docker to start at boot time. (On CentOS 7, chkconfig calls are forwarded to systemd, so sudo systemctl enable docker accomplishes the same thing.)

#  sudo chkconfig docker on

There will be a handful of dependencies, nothing out of the ordinary. If you are already running as root, you can omit the sudo. Next, we need to get firewalld stopped, removed, and iptables configuration scripts installed.
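The firewalld removal and iptables setup boil down to a handful of commands. Here is a rough sketch, collected into a small helper script so you can review it first (it assumes CentOS 7 with systemd and root access; the exact steps are the standard firewalld-to-iptables swap, not something specific to this tutorial):

```shell
# Sketch of the firewalld -> iptables swap on CentOS 7 (assumed steps,
# written to a helper script for review rather than executed directly)
cat > swap-firewall.sh <<'EOF'
#!/bin/sh
set -e
systemctl stop firewalld          # stop the running firewalld service
systemctl disable firewalld       # keep it from starting at boot
yum -y install iptables-services  # classic iptables init/config scripts
systemctl enable iptables         # start iptables at boot
systemctl start iptables          # load the default ruleset now
EOF
sh -n swap-firewall.sh  # syntax check only; run it yourself with: sudo sh swap-firewall.sh
```

Writing the steps to a script and syntax-checking with sh -n lets you look everything over before pulling the firewall out from under a remote session.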

Click Here To Read The Entire Tutorial!

How to Run Bandwidth Speed Tests From the Linux Command Line With Speedtest.net

Believe it or not, there is a way to use Speedtest.net’s speed test service from a Linux command line. Usually, one would fire up a web browser, go to Speedtest.net, and let the Flash utility load. Obviously, that is impossible from a command line. If you have a cloud instance or virtual private server (VPS), you don’t have a GUI or a web browser. So, here is how to run an Internet speed test from the Linux command line.

To achieve this, there is a package called speedtest-cli. It is a Python-based utility that has more or less the same functionality as the GUI. When run with defaults, it will locate the closest server, run a download test, then an upload test, and display the results when it’s finished. You can do this by running:

#  wget -O - https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py | python

After the script downloads and runs, you’ll see something like this:

Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from AT&T U-verse (108.238.104.79)...
Selecting best server based on latency...
Hosted by TekLinks (Birmingham, AL) [103.61 km]: 30.383 ms
Testing download speed........................................
Download: 98.96 Mbit/s
Testing upload speed..................................................
Upload: 56.06 Mbit/s

Personally, I like to select a specific server from a specific location when I run a speed test. I’ve found that the closest server isn’t always the fastest. Just because a speed test server is located a couple hundred miles from you does not mean the path to it is direct, and it doesn’t mean their connection is fast enough to saturate your own. Not to worry: you can also select a server to your liking. There are two ways to approach this. You can either install the speedtest-cli package using your package manager, or you can download the script manually. I’ll cover both.

To install the speedtest-cli package on Ubuntu:

#  sudo apt-get install speedtest-cli

After installing the package, you can simply run:

#  speedtest-cli

Now, if you’re using a distribution other than Ubuntu, or do not wish to install the package, you can simply download the script. To do that, do the following:

#  wget https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py

#  chmod +x speedtest_cli.py

The chmod command gives execute permission to the file. This is required to run it. Once you have downloaded the script, you can run it by doing this:

#  ./speedtest_cli.py

There are quite a few options you can use with the script. I’ll go over the few that I have used. First up is --share. This option gives you a web link to share your speed test results with others. You’ve probably seen the little PNG boxes before. They look like this:

[speed test results image]

So to get a nice automatically generated results picture like this, just run this command:

#  ./speedtest_cli.py --share

or

#  speedtest-cli --share

It will run the speed test like normal, but the very last line will have a link to your results. Now, as I was saying earlier, I like to specify the server the speed test runs against. To do that, you first need to know the ID of the server you want to use. To get a list of available speed test servers and their IDs, run this command:

#  ./speedtest_cli.py --list | more

or

#  speedtest-cli --list | more

My favorite server’s ID is 3595, so I’ll use it in my example. Once you have the ID of the server you want to use, all you need to do is specify it with the --server option. Be sure to swap out 3595 with the ID of your preferred server. Like this:

#  ./speedtest_cli.py --server 3595

or

#  speedtest-cli --server 3595

There are some other pretty cool options available if you want to play around some more. You can display values in bytes instead of bits, use the URL of a Speedtest Mini server, and even select the source IP address you want to bind to. If you want to check out the other options available, run this command:

#  ./speedtest_cli.py --help

or

#  speedtest-cli --help
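If you run these tests on a schedule (from cron, say) and only want to keep the numbers, the output is easy to post-process. As a sketch, this parses a saved run; the sample lines are copied from the output shown earlier, and in practice you would generate the log with speedtest-cli > speedtest.log:

```shell
# Sample output saved from an earlier run (taken from the results above)
cat > speedtest.log <<'EOF'
Download: 98.96 Mbit/s
Upload: 56.06 Mbit/s
EOF
# Pull out just the two figures, e.g. for appending to a CSV
awk '/^Download:/ {d=$2} /^Upload:/ {u=$2} END {print d "," u}' speedtest.log
# prints: 98.96,56.06
```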

That’s all there is to it. If you run into any trouble, feel free to ask for help in the comments below. Thanks!

Simple Method to Benchmark Disk Read & Write Speeds From the Linux Command Line

Recently, I’ve been exploring high availability iSCSI targets and using them as virtual machine storage. I have always been a bit wary of iSCSI performance over gigabit networks due to some not-so-great experiences many years ago. iSCSI technology has progressed quite a bit since then. FreeNAS has an excellent implementation of iSCSI, as does Nexenta. I wanted to get a good grasp on how well everything was performing, so I decided to run some basic benchmarks.

The virtual machine I’m working with has Ubuntu 15.04 installed, but these commands will work on just about any Linux distribution in existence. The hypervisor is running VMware ESXi 6, with this particular virtual machine stored on an iSCSI target served from a FreeNAS virtual machine running on another VMware ESXi hypervisor. The FreeNAS virtual machine has been given PCI passthrough to a 3ware 9650SE-16ML, connected to 4x1TB Hitachi SATA hard drives exported as JBOD (individual drives, no RAID). FreeNAS configured a RAID10 with the four drives, which is where the iSCSI target resides. The network is gigabit, with an Adtran NetVanta 1524ST switch. I have not enabled jumbo frames. The theoretical maximum transfer speed on a gigabit network is about 125MB/s; of course, with the overhead associated with TCP/IP and iSCSI encapsulation, a single link should deliver a little less. Let’s get started.

To test the WRITE speed of a hard disk using the dd command:

#  sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 10.8496 s, 99.0 MB/s

So, this command writes a bunch of zeros to a file called tempfile, with a size of 1024MB. If you want to use a larger test file, you can change “1024” to a higher number; for instance, changing 1024 to 10000 would write a 10GB temp file. This command only tests the write speed, and because the data comes from /dev/zero, it is a best-case sequential test. As you can see, the reported write speed was 99MB/s. Not too shabby.
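The READ side can be sketched the same way: write a test file, then time dd reading it back into /dev/null. One caveat to be aware of: a file you just wrote is likely sitting in the page cache, so on a real benchmark you should drop caches first (that step needs root, so it is only shown as a comment here). The file below is scaled down to 64MB so the example runs quickly:

```shell
# Write a small 64 MB test file first (scale count up for a real benchmark)
sync; dd if=/dev/zero of=tempfile bs=1M count=64 2>write.log; sync
# On a real benchmark, drop the page cache first so reads hit the disk:
#   echo 3 | sudo tee /proc/sys/vm/drop_caches
# Then time the read back:
dd if=tempfile of=/dev/null bs=1M 2>read.log
tail -n 1 write.log   # dd prints its MB/s summary on stderr
tail -n 1 read.log
rm -f tempfile        # clean up the test file
```

Without the drop_caches step, the read figure mostly measures RAM, which is why it can come out implausibly high.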

Click here to view the entire tutorial

Installing Ubuntu OpenStack on a Single Machine, Instead of 7

For an updated guide click here to read “Install OpenStack on a Single Ubuntu 16.04 Xenial Xerus Server – Updated!”

If you’ve read my other recent posts, you’ve probably noticed I’ve been spending a lot of time with different cloud architectures. My previous guide on using DevStack to deploy a fully functional OpenStack environment on a single server was fairly involved, but not too bad. I’ve read quite a bit about Ubuntu OpenStack, and it seems that Canonical has spent a lot of energy developing their spin on it. So, now I want to set up Ubuntu OpenStack. All of Ubuntu’s official documentation and guides state a minimum requirement of 7 machines (servers). However, although I could probably round up 7 machines, I really do not want to spend that much effort and electricity. After scouring the internet for many hours, I finally found some obscure documentation stating that Ubuntu OpenStack could, in fact, be installed on a single machine. It does need to be a pretty powerful machine; the minimum recommended specifications are:

  • 8 CPUs (4 hyperthreaded will do just fine)
  • 12GB of RAM (the more the merrier)
  • 100GB Hard Drive (I highly recommend an SSD)

With the minimum recommended specs being what they are, my little 1u server may or may not make the cut, but I really don’t want to take any chances. I’m going to use another server, a much larger 4u, to do this. Here are the specs of the server I’m using:

  • Supermicro X7DAL Motherboard
  • Xeon W5580 4 Core CPU (8 Threads)
  • 12GB DDR3 1333MHz ECC Registered RAM
  • 256GB Samsung SSD
  • 80GB Western Digital Hard Drive

I have installed Ubuntu 14.04 LTS, with OpenSSH Server being the only package selected during installation. So, if you have a machine that is somewhat close to the minimum recommended specs, go ahead and install Ubuntu 14.04 LTS. Be sure to run sudo apt-get update && sudo apt-get upgrade before proceeding.

Let’s Get Started

First, we need to add the OpenStack installer PPA. Then, we need to update apt. Do the following:
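As a sketch, the PPA setup looks something like the following. Treat the PPA name (ppa:cloud-installer/stable) and the package name (openstack) as assumptions based on the Ubuntu OpenStack installer of that era; verify them against current documentation before running:

```shell
# Sketch of adding the installer PPA and pulling the installer package
# (PPA and package names are assumptions; written to a helper script
# for review rather than executed directly)
cat > add-openstack-ppa.sh <<'EOF'
#!/bin/sh
set -e
apt-add-repository -y ppa:cloud-installer/stable  # add the installer PPA
apt-get update                                    # refresh package lists
apt-get -y install openstack                      # pull in the installer
EOF
sh -n add-openstack-ppa.sh  # syntax check only; run it with sudo on the server
```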

Click here to read the entire tutorial!

MailCleaner Spam Filter – How To Open a Port & Add IPTables Firewall Rules

MailCleaner is a nice open source Linux distribution that creates a spam filter appliance. It is designed to sit between an email server and the internet and filter spam out of email using advanced rules, DNS RBLs (real-time blackhole lists), and many other techniques. It also scans email for viruses. Although I no longer use MailCleaner (I have replaced it with ScrollOut F1), I remember coming across a big issue in the past that took me some time to figure out, so I thought I would share it.

Because MailCleaner is more or less an appliance, most aspects of the operating system are controlled by MailCleaner. A majority of the settings you need to change are easily available in the web interface; however, firewall rules are not. MailCleaner is designed so that it manages all IPTables rules. If you manually add an IPTables rule from the command line, once the rules are reloaded or the system is rebooted, the rule is gone. That is because MailCleaner wipes out and reloads IPTables rules from data stored in its MySQL database. So, in order to open any additional ports, you must modify the database. I encountered this dilemma when I installed a remote monitoring client (the Nagios-based Check_MK, to be exact) and needed to open a port to allow the monitoring server to connect.

Let’s assume I need to open up SSH (port 22) and rsync (port 873), and I only want my mail server’s IP, 1.2.3.4, to connect. Normally, we would enter the following iptables commands:

sudo iptables -A INPUT -s 1.2.3.4/32 -p tcp -m tcp --dport 873 -j ACCEPT
sudo iptables -A INPUT -s 1.2.3.4/32 -p tcp -m tcp --dport 22 -j ACCEPT

But in this case, we cannot. The good news is that MailCleaner will do it for you if you add the correct info into the MySQL database. Here’s how you do that (from a command prompt on the MailCleaner server):

Click Here To Read The Entire Tutorial!

How To Install VMware tools on CentOS 6 and CentOS 7 / RHEL

This is a quick and dirty guide on installing VMware tools (vmtools) on a CentOS 6 or CentOS 7 virtual machine as well as RHEL (Red Hat Enterprise Linux).

First, you will need to install the VMware tools prerequisites:

#  yum install make gcc kernel-devel kernel-headers glibc-headers perl net-tools

Now you will need to mount the VMware Tools ISO by selecting the “Install/Upgrade VMware Tools” option in ESXi. This can be found a few different ways; I prefer to right-click on the virtual machine, go to Guest, and click “Install/Upgrade VMware Tools.”
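Once the ISO is attached, the in-guest steps typically look like the sketch below. The mount point and extraction paths are common defaults, and the tarball version number will vary, so adjust to match what you see on the CD:

```shell
# Sketch of the in-guest VMware Tools install (typical default paths;
# written to a helper script for review rather than executed directly)
cat > install-vmtools.sh <<'EOF'
#!/bin/sh
set -e
mount /dev/cdrom /mnt                           # mount the attached Tools ISO
tar -xzf /mnt/VMwareTools-*.tar.gz -C /tmp      # unpack the installer
/tmp/vmware-tools-distrib/vmware-install.pl -d  # -d accepts all defaults
EOF
sh -n install-vmtools.sh  # syntax check only; run it as root inside the VM
```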


Click Here To Read The Entire Tutorial!

Installing OpenStack on a Single CentOS 7 Server

This guide will help you install OpenStack on CentOS 7. If you would like to install OpenStack on Ubuntu, here is a guide to install OpenStack on a single Ubuntu 14.04 server, and this one will help you get OpenStack installed on a single Ubuntu 16.04 server.

I’ve always been rather curious about OpenStack and what it can and can’t do. I’ve been mingling with various virtualization platforms for many, many years. Most of my production-level experience has been with VMware, but I’ve definitely seen the tremendous value and possibilities the OpenStack platform has to offer. A few days ago, I came across DevStack while reading up on what it takes to get an OpenStack environment set up. DevStack is pretty awesome. It’s basically a powerful script that was created to make installing OpenStack stupid easy, on a single server, for testing and development. You can install DevStack on a physical server (which I will be doing), or even a VM (virtual machine). Obviously, this is nothing remotely resembling a production-ready deployment of OpenStack, but if you want a quick and dirty environment to get your feet wet, or do some development work, this is absolutely the way to go.

The process to get DevStack up and running goes like this:

  1. Pick a Linux distribution and install it. I’m using CentOS 7.
  2. Download DevStack and do a basic configuration.
  3. Kick off the install and grab a cup of coffee.

A few minutes later you will have a ready-to-go OpenStack infrastructure to play with.
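For reference, step 2 above usually amounts to cloning the devstack repository and dropping a minimal local.conf next to stack.sh. A sketch of that file, with placeholder passwords you should change, looks like this (the [[local|localrc]] header and variable names follow DevStack’s sample configuration):

```ini
[[local|localrc]]
# Placeholder credentials; pick your own values
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```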

Server Setup and Specs

I have always been fond of CentOS, and it is always my go-to OS of choice for servers, so that is what I’m going to use here: CentOS version 7, to be exact. Just so you know, DevStack works on Ubuntu 14.04 (Trusty), Fedora 20, and CentOS/RHEL 7. The setup is pretty much the same for all three, so if you’re using one of the other supported OS’s, you should be able to follow along without issues, but YMMV.

Click Here To Read The Entire Post!