
Storage - Page 2

BYOC Series #1 – How to install Pydio on Ubuntu 14.04 – Your own private Dropbox clone

This is the first post in the “Build Your Own Cloud” (BYOC) series of guides. Each BYOC post will help you build the foundation of your own personal, private cloud. Today I’m setting my sights on Pydio. Pydio is an Open Source platform that mimics the functionality of Dropbox. There are a few Open Source Dropbox clones out there, including OwnCloud, which I’ve written about in the past. Pydio is definitely more visually appealing than OwnCloud, and quite possibly more so than Dropbox itself. It’s also packed full of features. Some of Pydio’s key features include the following.

  • File Sharing – Web UI, Desktop Sync Client, & Mobile Apps
  • Web Access – Drag and drop files from your desktop, view & edit files online
  • Mobile Access – Native Android and iOS apps for phones and tablets
  • Flexible Backend Storage – Works with AWS, OpenStack, Samba, FTP, and even Dropbox
  • Directory Authentication – Will authenticate against LDAP, Active Directory, WordPress, Drupal, Google, and more
  • Very Secure – Supports Encryption as well as File & Folder ACLs
  • Compatible Platform – PHP-based & runs on LAMP or Windows IIS

It’s powerful enough to do everything Dropbox does, but you maintain control of your own data and personal information. You don’t have to pay a monthly fee to get large amounts of storage for yourself, your company, or even your family. Let’s get started.

Installing Pydio on Ubuntu 14.04

I’m installing Pydio on a virtual machine running Ubuntu 14.04, minimal server installation, with OpenSSH server running. First things first, let’s make sure everything is updated and upgraded.

#  sudo apt-get -y update
#  sudo apt-get -y upgrade

We need to add the Debian package sources for Pydio to sources.list.

#  sudo nano /etc/apt/sources.list
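The rest of the walkthrough is behind the link below. For illustration, a Pydio entry in sources.list takes the usual one-line Debian repository form; the URL and suite shown here are placeholders, so check Pydio's own documentation for the real values.

```
# Hypothetical example entry; substitute the repository URL from Pydio's docs
deb http://example.com/pydio/apt trusty main
```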

Click Here To Read The Rest!

How to install GlusterFS + NFS on CentOS 7 for Virtual Machine Storage

GlusterFS is one of the fastest growing Open Source storage platforms in existence. It’s very simple to install, scale, and manage. What makes Gluster so amazing is its ability to scale and replicate; it really sets the bar for software-defined storage systems. It runs on whitebox hardware or virtual machines. Lately, I’ve come across quite a few people who seem to be scared of Gluster and don’t know where to begin. I am here to help! Today, we’re going to install and configure GlusterFS on a CentOS 7 virtual machine, and we’re going to make it NFS-accessible for VM storage. Every hypervisor in existence supports NFS storage for virtual machines, including VMware ESXi / vSphere, Proxmox, Xen, KVM, oVirt, OpenStack, and all the others.

Installing GlusterFS Server and Client on CentOS 7 (two nodes)

I am using two virtual machines, each running CentOS 7. Their hostnames are gfs1 and gfs2. I have added a 40GB second disk to each VM that will be dedicated to GlusterFS. I suggest you have an identically sized second partition or drive on each of your systems as well.

As always, after connecting via SSH or console, go ahead and make sure everything is updated and upgraded on both nodes.

yum -y update

And, let’s go ahead and install a few useful packages (both nodes).

yum -y install nano net-tools wget

Edit the hosts file on both nodes. Make sure both nodes can resolve each other’s hostnames.

nano /etc/hosts

[Screenshot: /etc/hosts entries for gfs1 and gfs2]
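For illustration, the entries might look like this; the IP addresses are made up for the example, so use your nodes' real addresses.

```
# Add to /etc/hosts on BOTH nodes (example addresses)
192.168.1.101   gfs1
192.168.1.102   gfs2
```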

Now we can download and install Gluster (both nodes).
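The full steps are behind the link. As a hedged sketch, a typical sequence on CentOS 7 looks like the following; the repo package name reflects current CentOS packaging and is an assumption, since the original 2015 post may have pulled the packages from a different repository.

```
# Enable the community Gluster repo and install the server (run on BOTH nodes).
# Package names are an assumption; verify against the Gluster docs for your release.
yum -y install centos-release-gluster
yum -y install glusterfs-server
systemctl enable glusterd
systemctl start glusterd
```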

Click Here To Read The Rest!

Need to recover a FreeNAS server? How to import an existing FreeNAS iSCSI target that existed on a prior installation

Last night I noticed a new version of FreeNAS 9.3 was released. Just two days earlier I had built this FreeNAS server, so I wanted everything to be up to date. When I tried to update FreeNAS via the web GUI, it errored out. As I came to find out, this was one of the bugs addressed in the update I was trying to install. It was a catch-22. So, I downloaded the installation disc, burnt it to CD, and booted the FreeNAS server from it. That errored out as well. I had no choice but to blow away the existing installation and do a fresh FreeNAS load. All of my shares and iSCSI targets were stored on a 4-disk RAID-Z array, and FreeNAS itself is installed on an 8GB USB thumb drive. So, I expected my data to stay intact.

When I booted the fresh installation for the first time, it automatically imported the zpool stored on the RAID array. I was able to re-create the SMB shares and point them to the /mnt folders those shares pointed to before, and everything was going well until I got to work trying to bring my iSCSI target volumes back online. In Storage > Volumes, I could see all of the volumes that matched up with my previous iSCSI targets, but I couldn’t import them, and I couldn’t figure out how to do anything with them. All of my virtual machines were stored on these volumes, so I was desperate to find a solution. I did.

Have you lost your FreeNAS installation? Just recovered from a catastrophe? Recently reinstalled FreeNAS and need to get your iSCSI and other shares back? Going through a FreeNAS recovery? You’ve come to the right place.

How to import an iSCSI target volume from an old FreeNAS installation

First, let’s make sure the volumes that previously correlated to iSCSI targets are visible. Navigate to Storage > View Volumes. Here is what mine looks like.

[Screenshot: Storage > View Volumes showing the imported volumes]
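If you'd rather confirm from the command line that the zvols survived the reinstall, the FreeNAS shell can list them. This is a general ZFS command, not a step from the original post; the old iSCSI extents are backed by these block volumes.

```
# List ZFS block volumes (zvols) on the imported pool
zfs list -t volume
```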

Click Here To Read The Rest Of This Post!

How To Add An iSCSI Target To Proxmox VE 3.4 And Create LVM Group

I’ve been digging into Proxmox VE 3.4 quite a bit lately. I have a FreeNAS server on my network that I use for VM storage in my lab. When I went to add an iSCSI target on Proxmox for virtual machine and image storage, it was a bit confusing. So, I thought I would put a quick step-by-step guide together to help other folks in the same boat. Here goes.

How to add an iSCSI target in Proxmox

First, log into your Proxmox VE 3.4 server via the web interface. Make sure Datacenter (top level) is selected in the left pane, and make sure you are on the Storage tab on the right pane. It should look like this.

[Screenshot: Datacenter Storage tab in the Proxmox web interface]

Now, click on the Add pull down menu, and select iSCSI.

[Screenshot: Add pull-down menu with iSCSI selected]
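The same storage can also be added from the Proxmox shell with the pvesm storage manager; the storage ID, portal IP, and target IQN below are placeholders for the example, not values from the original post.

```
# Hypothetical values; substitute your own storage ID, portal address, and IQN
pvesm add iscsi freenas-iscsi \
    --portal 192.168.1.50 \
    --target iqn.2005-10.org.freenas.ctl:vmstore
```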

Click Here To Read The Rest!

How To Show Hidden Files In Finder on Apple Mac OS X

Recently, I needed to locate an obscure file on my Apple MacBook Pro. I quickly realized that showing hidden files in the OS X Finder is not a very intuitive process. As a matter of fact, it doesn’t appear to even be possible to change the “show hidden files” setting through the GUI. So, to Google I went. I found a defaults command-line option to enable and disable the “AppleShowAllFiles” setting, which toggles the ability to see hidden files through the Finder GUI. It’s a relatively easy process. Here is how it’s done.

How to show hidden files in OS X

To show hidden files in Finder, open Terminal (command prompt) and run the following commands.

$  defaults write com.apple.finder AppleShowAllFiles TRUE
$  killall Finder

It’s important to capitalize the “F” in Finder when running the killall command; otherwise it will not kill the process properly. After running killall Finder, Finder will restart and reopen the windows you had open before.

[Screenshot: Finder window with hidden files visible]

You will quickly notice just how many files are hidden, and having them all visible gets pretty annoying. Once I did what I needed to do, I was ready to re-hide them.

How to re-hide hidden files in OS X
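The full steps are behind the link, but re-hiding is simply the reverse of the earlier command: write FALSE instead of TRUE, then restart Finder.

```
# Hide the hidden files again, then restart Finder to apply
defaults write com.apple.finder AppleShowAllFiles FALSE
killall Finder
```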

Click Here To Read The Entire Tutorial!

How To Enable Data Deduplication In Windows Server 2012 On An Existing Volume

I have a very large RAID 6 array that is used to store movies, TV shows, personal files, and various other things. Its formatted capacity is about 36TB. Believe it or not, it’s pretty much full. It currently consists of 20x2TB hard drives, and I really don’t want to add any more drives to it in its current form. Later this year I’m planning on building a new array to replace it, using fewer 6TB or 8TB drives. The server that manages the array had Server 2008 R2 installed. After getting down to the last few gigs of free space, it dawned on me: why not install Server 2012 R2 and set up data deduplication? I’ve read some pretty impressive articles online, where people were able to reclaim up to 60% of their storage using the dedup mechanism in Server 2012. So, I went ahead and upgraded. I started poking around, and enabling dedup wasn’t very obvious, so I put this guide together to help you get started.

Enabling Deduplication in Server 2012 R2

First, we need to install the Data Deduplication service. It’s part of File and Storage Services. Open Server Manager, select Local Server in the left side pane, then go to the Add Roles and Features wizard, under Manage.

[Screenshot: Server Manager with the Manage menu open]

Go through the first few windows, and when you get to Server Roles, you need to make sure Data Deduplication is selected, at minimum, under File and Storage Services. This is also a good opportunity to install any other roles or services you might be interested in.

[Screenshot: Server Roles page with Data Deduplication selected]

Click Here To Read The Entire Tutorial!

Simple Method to Benchmark Disk Read & Write Speeds From the Linux Command Line

Recently, I’ve been exploring high-availability iSCSI targets and using them as virtual machine storage. I have always been a bit wary of iSCSI performance over gigabit networks due to some not-so-great experiences many years ago. iSCSI technology has progressed quite a bit since then. FreeNAS has an excellent implementation of iSCSI, as does Nexenta. I wanted to get a good grasp on how well everything was performing, so I decided to run some basic benchmarks.

The virtual machine I’m working with has Ubuntu 15.04 installed, but these commands will work on just about any Linux distribution in existence. The hypervisor is running VMware ESXi 6, with this particular virtual machine stored on an iSCSI target served from a FreeNAS virtual machine running on another VMware ESXi hypervisor. The FreeNAS virtual machine has been given PCI passthrough to a 3ware 9650SE-16ML, connected to 4x1TB Hitachi SATA hard drives exported as JBOD (individual drives, no RAID). FreeNAS configured a RAID10 with the four drives, which is where the iSCSI target resides. The network is gigabit, with an Adtran NetVanta 1524ST switch. I have not enabled jumbo frames. The theoretical maximum transfer speed on a gigabit network is about 125MB/s; of course, with the overhead associated with TCP/IP and iSCSI encapsulation, a single link should deliver a little less. Let’s get started.

To test WRITE speed of hard disk using the DD command:

#  sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 10.8496 s, 99.0 MB/s

So, this command writes 1024MB of zeros to a file called tempfile. The surrounding sync commands flush the filesystem cache, so the reported number reflects the disk rather than RAM. If you want to use a larger test file, you can change “1024” to a higher number; for instance, changing 1024 to 10000 would write a roughly 10GB temp file. This command only tests the write speed. As you can see, the reported write speed was 99MB/s, not too shabby.
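To test the READ speed, a matching command can read the temp file back with dd. The catch is the page cache: if the file is still cached, you will measure RAM speed instead of disk speed, so the sketch below drops the cache first (that step needs root, and is skipped when not run as root).

```shell
# Re-create the 1 GiB test file (same parameters as the write test above)
dd if=/dev/zero of=tempfile bs=1M count=1024
# Flush dirty pages; if root, also drop the page cache so the read
# below comes from disk instead of RAM
sync
if [ "$(id -u)" -eq 0 ]; then echo 3 > /proc/sys/vm/drop_caches; fi
# Time the read back
dd if=tempfile of=/dev/null bs=1M count=1024
# Clean up the temp file
rm -f tempfile
```

Without dropping the cache first, expect the read to report several GB/s, which tells you about memory bandwidth rather than the iSCSI target.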

Click here to view the entire tutorial