My Iomega ix2 and my new 3 TB USB drive

Purchased a 3 TB Seagate USB 3.0 drive from Amazon.

Waited… Very excited to connect to my ix2….

A few days later my 3 TB USB expansion drive arrived.  I hurried to unpack it and connect it to my ix2, expecting plug-and-play.  I plugged, but no play.


An overwhelming feeling of sadness consumed me, followed by WTF then the joy of knowing I could and would hack this to make it work.

Knowing that this Iomega thing had to be running Linux, I began to scour the web for how to enable SSH on Firmware Version 3.x.

Found plenty of how-to information on Firmware Version 2.x, but 3.x (Cloud Enabled Firmware) coverage is a bit more sparse.

Finally to enable SSH:  http://ip/diagnostics.html


With SSH now enabled, I opened PuTTY and SSH’d to the device.

Username:  root
Password:  soho

Boom!  In….


A quick “df -h” shows my currently configured capacity:


A quick “cat /proc/scsi/usb-storage/4” followed by a “fdisk -l” reveals the drive is being seen by the ix2.



Created partition on /dev/sdc, “fdisk /dev/sdc”


Now what?

Hmmmmm…. Maybe I can create a mount point on /mnt/pools/B/B0, seems logical.



Whoops forgot to mkfs.

Run “mkfs /dev/sdc1”


“mount /dev/sdc1 /mnt/pools/B/B0/”



“umount /dev/sdc1”

Tried to partition with parted (core dumps; the ix2 runs parted ver 1.8, and I’m pretty sure GPT partition support was not ready for primetime in ver 1.8).

Let’s see if I can get a newer version of parted.

Enabled apt-get (required a little work)

cd /mnt/pools/A/A0
mkdir .system
cd .system

mkdir -p ./var/lib/apt/lists/partial
mkdir -p ./var/cache/apt/archives/partial
mkdir -p ./var/lib/aptitude

(I think that is all the required dirs, you will know soon enough)

cd /var/lib
ln -s /mnt/pools/A/A0/.system/var/lib/apt/ apt
ln -s /mnt/pools/A/A0/.system/var/lib/aptitude/ aptitude
cd /var/cache
ln -s /mnt/pools/A/A0/.system/var/cache/apt/ apt

run “apt-get update”

Should run without issue.

run “aptitude update”
Note:  Should run without issue.
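The directory and symlink setup above condenses into a small helper.  This is a sketch based on the paths in this post; the relocate_apt name and the root parameter are mine, added so the layout can be dry-run somewhere harmless (pass "" on the ix2 itself, i.e. the real root).

```shell
# Sketch of the apt relocation above: create apt's state/cache directories on
# the data pool, then symlink them into /var. The root parameter lets you
# dry-run the layout; use "" (the real root) on the ix2.
relocate_apt() {                      # relocate_apt <root>
    base="$1/mnt/pools/A/A0/.system"
    mkdir -p "$base/var/lib/apt/lists/partial" \
             "$base/var/cache/apt/archives/partial" \
             "$base/var/lib/aptitude" \
             "$1/var/lib" "$1/var/cache"
    ln -s "$base/var/lib/apt"      "$1/var/lib/apt"
    ln -s "$base/var/lib/aptitude" "$1/var/lib/aptitude"
    ln -s "$base/var/cache/apt"    "$1/var/cache/apt"
}
```

After running this on the device, apt-get update and aptitude update should work as described above.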


Jettison that idea: not enough space on root and /mnt/apps to install the new version of parted and its required dependencies.

New approach:

run “dd if=/dev/zero of=/dev/sdc”

Let it run for a minute or so to clear all partition info (Ctrl-C to stop).
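Rather than timing a Ctrl-C, the same wipe can be bounded: zeroing just the first few megabytes is enough to clear the MBR/partition table.  The wipe_start wrapper is a sketch of mine; the device name comes from above, and this is destructive, so double-check it first.

```shell
# Zero only the start of the device -- enough to clear the partition table,
# no Ctrl-C timing needed. DESTRUCTIVE: verify the target name first.
wipe_start() {                        # wipe_start <device-or-file>
    dd if=/dev/zero of="$1" bs=1M count=10 conv=notrunc 2>/dev/null
}

# On the ix2 this would be: wipe_start /dev/sdc
```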

Download EASEUS Partition Master 9.2.1 from FileHippo.

Install EASEUS Partition Master 9.2.1 on a Windows 7 desktop
Connect the 3 TB Seagate USB drive to the Windows 7 desktop
Partition the drive and format the partition as ext3 using EASEUS Partition Master 9.2.1
Note:  This takes a little while.

Once complete I connected the drive to my Iomega ix2




Cleaned up the “/mnt/pools/B” directory I created earlier (“rm -rf /mnt/pools/B”)

Rebooted my ix2 (to make sure I didn’t jack anything up) and enjoyed my added capacity.


Pretty sick footprint for ~ 4.5 TB of storage (1.8 TB of it R1 protected).

DNS and Disaster Recovery

I’ve been conducting DR tests and site failovers for years using a myriad of host-based and array-based replication technologies.  By now the task of failing hosts over from site A to site B and gaining access to replicated data is a highly predictable and controllable event.  What I often find is that little issues tend to slow you down: time being out-of-sync due to an NTP server issue, a host needing to be rejoined to the domain, or the dreaded missing or fat-fingered DNS entry.

I recently ran a DR test where, in the prior test, a DNS entry had been fat-fingered; the bad DNS entry impacted the failback and extended the test time by about 5 hours.  Prior to this year’s test I decided to safeguard the DNS component of the test.  I crafted a small shell script to record and check the DNS entries (forward and reverse).  The plan would be as follows:

  1. Capture DNS entries prior to the DR test and save as the production gold copy (known working production DNS records)
  2. Capture DNS entries following the failover to the DR location and DNS updates.  Ensure that the DNS entries match the documented DR site IP schema.
  3. Finally capture the DNS entries post failback to the production site.  Diff the pre-failover production site DNS entries (gold copy) with the post-failback production site DNS entries.

The fail-safe DNS checks proved to be very valuable, uncovering a few issues on failover and failback.  Below is my script.  I ran the shell script from a Linux host; if you need to run it on Windows and don’t want to rewrite it, you could try Cygwin (I don’t believe the “host” command is natively packaged with Cygwin, but it could probably be compiled; I haven’t looked around much) or you could download VirtualBox and run a Linux VM.  Hopefully you find this useful.

Note:  you will need two input files:  “” and “”. These input files should contain your lookups for each site.

.in file example (syntax for .in files is “hostname | IP [space] record type”):
host1 a
host2 a a a

Syntax to execute the script is as follows “./ [prod | dr]”
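The script itself did not survive the cut-and-paste, so here is a minimal sketch of the kind of check described above (NOT the original script).  capture_dns reads the .in file format shown (hostname followed by one or more record types) and resolves each entry with the host command; compare_dns diffs a capture against the production gold copy.

```shell
# Hypothetical reconstruction of the DNS check described above (NOT the
# original script). capture_dns resolves each entry in an .in file with
# `host` and appends the answers to a capture file; compare_dns diffs a
# capture against the gold copy taken before the test.
capture_dns() {                       # capture_dns <in-file> <out-file>
    : > "$2"
    while read -r name types; do
        [ -z "$name" ] && continue
        for t in ${types:-a}; do      # a line may list several record types
            host -t "$t" "$name" >> "$2" 2>&1
        done
    done < "$1"
}

compare_dns() {                       # compare_dns <gold-file> <current-file>
    if diff -u "$1" "$2"; then
        echo "DNS OK: captures match"
    else
        echo "DNS MISMATCH: review the diff above" >&2
        return 1
    fi
}
```

Capture once before failover (the gold copy), again after failover, and again after failback, then diff the captures as outlined in the plan above.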

Mapping RDM Devices

This post was driven by a requirement to map RDM volumes on the target side in preparation for a disaster recovery test.  I thought I would share some of my automation and process with regard to mapping RDM devices that will be used to present RecoverPoint-replicated devices to VMs as part of a DR test.

Step 1:  Install VMware vCLI and PowerCLI
Step 2:  Open PowerCLI command prompt
Step 3:  Execute addvcli.ps1 (. .\addvcli.ps1)

Step 4:  Execute getluns.ps1 (. .\getluns.ps1)

Step 5:  Execute mpath.ps1 (. .\mpath.ps1)

Step 6:  Get SP collect from EMC CLARiiON / VNX array
At this point you should have all the data required to map the RDM volumes on the DR side.  I simply import the two CSVs generated by the scripts into excel (scsiluns.csv, mpath.csv) as well as the LUNs tab from the SP Collect (cap report).
Using Excel and some simple vlookups with the data gathered above you can create a table that looks like the following:
I could probably combine these three scripts into one, but I was under a time crunch and just needed the data.  Maybe I will work on that at a later date, or maybe someone can do it and share it with me.
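For what it’s worth, the Excel vlookup step can also be approximated on the command line.  This is an illustrative sketch, not part of the original workflow: it assumes both CSVs share a key in their first column (e.g. a LUN identifier); adjust for the real scsiluns.csv / mpath.csv column layouts.

```shell
# Illustrative stand-in for the Excel vlookup: inner-join two CSVs on their
# first column. Assumes the join key is column 1 in both files -- adjust the
# sort/join fields for the actual scsiluns.csv / mpath.csv layouts.
join_csvs() {                         # join_csvs <a.csv> <b.csv>
    a=$(mktemp); b=$(mktemp)
    sort -t, -k1,1 "$1" > "$a"        # join requires sorted input
    sort -t, -k1,1 "$2" > "$b"
    join -t, "$a" "$b"                # rows present in both files, merged
    rm -f "$a" "$b"
}
```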

Repurposing old EMC Centera nodes

This is the first in a multi-part series on repurposing old EMC equipment.  I recently acquired six EMC Centera nodes, two with 4x1TB SATA drives and four with 4x500GB SATA drives, so I started thinking about what I could do with these Pentium-based machines with 1 GB of RAM and a boatload of storage.  An idea hit me: create a NAS share leveraging a global file system to aggregate the capacity and performance across all the Centera nodes.

Seemingly simple, but there was a challenge here: most modern-day global file systems like GlusterFS or GFS2 require a 64-bit processor architecture, and the Centera nodes use 32-bit Pentium processors.  After spending a vacation day researching, I identified two possible global file systems as potential options: XtreemFS and FraunhoferFS (fhgfs).  I discovered fhgfs first and it looked pretty interesting, a fairly traditional global file system consisting of metadata nodes and storage nodes (I came across a presentation which provides a good overview of the FraunhoferFS).  While fhgfs provided the basics of what I was looking for, the missing link was how I was going to protect the data; fhgfs for the most part relies on hardware RAID for node survivability, and because the Centera nodes are built to run EMC’s CentraStar, an OS which leverages RAIN (Redundant Array of Independent Nodes), no redundancy is built in at the node level.  (EMC acquired Centera and CentraStar from a Belgian company named FilePool in 2002.)  As I thought through some possible workarounds I stumbled across XtreemFS, an interesting object-based global file system; what was most interesting was its ability to replicate objects for redundancy.

At this point I decided to attempt to move forward with XtreemFS.  My single-node install went well, no issues to really speak of, but as I moved towards the multi-node configuration I began to get core dumps when starting the daemons.  It was at this point that I decided to give fhgfs a try, thinking that in phase 2 of the project I could layer on either rsync or drbd to protect the data (not there yet, so not sure how well this theory will play out).  fhgfs installed fairly easily and is up and running; the rest of this blog will walk you through the steps I took to prepare the old Centera nodes, install and configure Ubuntu Server, and install and configure fhgfs.

Because the Centera nodes came out of a production environment they were wiped prior to leaving the production data center (as a side note, DBAN booted from a USB key was used to perform the wipe of each node).  So with no data on the four Centera node internal drives, the first step was to install a base OS on each node.  Rather than use a USB CD-ROM (I only had one) I decided to build an unattended PXE boot install.

Phase 1:  Basic Environment Prep:

Step 1:  Build a PXE server (because this is not a blog on how to build a PXE server, I suggest doing some reading).  I built my PXE boot server on Ubuntu 12.04 Server, and the process is pretty much as documented in the guides you will find if you Google “ubuntu pxe boot server”.

Note:  One key is to be sure to install Apache and copy your Ubuntu distro to an HTTP-accessible path.  This is important when creating your kickstart configuration file (ks.cfg) so you can perform a completely automated install.

Step 1A:  Enter BIOS on each Centera and reset to factory defaults, make sure that each node has PXE boot enabled on the NICs.

Note:  I noticed on some of the nodes that the hardware NIC enumeration does not match Ubuntu’s eth interface enumeration (i.e., on the 500GB nodes eth0 is NIC2).  Just pay attention to this as it could cause some issues; if you have the ports, just cable all the NICs to make life a little easier.

Step 1B:  Boot servers and watch the magic of PXE.  Ten minutes from now all the servers will be booted and at the “kickstart login:” prompt.

Step 2:  Change the hostname and install openssh-server on each node.  Log in to each node, vi /etc/hostname and update it to “nodeX”, then execute aptitude install openssh-server (openssh-server will be installed from the PXE server repo; I only do this now so I can do the rest of the work remotely instead of sitting at the console).

Step 3:  After Step 2 is complete reboot the node.

Step 4:  Update /etc/apt/sources.list

Step 4 Alternative:  I didn’t have the patience to wait for the repo to mirror, but you may want to do this and copy your sources.list.orig back to sources.list at a later date.

Note:  If you need to generate a sources.list file with the appropriate repos, there are online generators you can check out.

Step 4A:  Add the FHGFS repo to the /etc/apt/sources.list file

deb deb6 non-free

Step 4B:  Once you update the sources.list file, run apt-get update to refresh the repo, followed by apt-get upgrade to upgrade the distro to the latest revision.

Step 5:  Install lvm2, default-jre, fhgfs-admon packages

aptitude install lvm2
aptitude install default-jre
aptitude install fhgfs-admon

Phase 2:  Preparing Storage on each node:

Because the Centera nodes use JBOD drives I wanted to get the highest performance by striping within the node (horizontally) and across the nodes (vertically).  This section focuses on the configuration of horizontal striping on each node.

Note:  I probably could have taken a more elegant approach here, like booting from a USB key and using the entire capacity of the four internal disks for data, but this was a PoC so I didn’t get overly focused on this.  Some of the workarounds I use below could probably have been avoided.

  1. Partition the individual node disks
    1. Run fdisk -l (will let you see all disks and partitions)
    2. For devices that do not have partitions, create a primary partition on each disk with fdisk (in my case /dev/sda1 contained my node OS, /dev/sda6 was free, and /dev/sdb, /dev/sdc and /dev/sdd had no partition table, so I created primary partitions /dev/sdb1, /dev/sdc1 and /dev/sdd1)
  2. Create LVM Physical Volumes (Note: If you haven’t realized it yet /dev/sda6 will be a little smaller than the other devices, this will be important later.)
      1. pvcreate /dev/sda6
      2. pvcreate /dev/sdb1
      3. pvcreate /dev/sdc1
      4. pvcreate /dev/sdd1
  3. Create a Volume Group that contains the above physical volumes
    1. vgcreate fhgfs_vg /dev/sda6 /dev/sdb1 /dev/sdc1 /dev/sdd1
    2. vgdisplay (make sure the VG was created)
  4. Create Logical Volume
    1. lvcreate -i4 -I4 -l90%FREE -nfhgfs_lvol fhgfs_vg --test
      1. The above command runs a test.  Notice the -l90%FREE flag, which says to use only 90% of the free extents in each physical volume.  Because this is a stripe and the available extents differ on /dev/sda6, we need to equalize the extents by consuming only 90% of the available extents.
    2. lvcreate -i4 -I4 -l90%FREE -nfhgfs_lvol fhgfs_vg
      1. Create the logical volume
    3. lvdisplay (verify that the lvol was created)
    4. Note:  The above commands were performed on a node with 1TB drives; I also have nodes with 500GB drives in the same fhgfs cluster.  Depending on the drive size in the nodes you will need to make adjustments so that the extents are equalized across the physical volumes.  As an example, on the nodes with the 500GB drives the lvcreate command looks like this: lvcreate -i4 -I4 -l83%FREE -nfhgfs_lvol fhgfs_vg.
  5. Make a file system on the logical volume
    1. mkfs.ext4 /dev/fhgfs_vg/fhgfs_lvol
  6. Mount newly created file system and create relevant directories
    1. mkdir /data
    2. mount /dev/fhgfs_vg/fhgfs_lvol /data
    3. mkdir /data/fhgfs
    4. mkdir /data/fhgfs/meta
    5. mkdir /data/fhgfs/storage
    6. mkdir /data/fhgfs/mgmtd
  7. Add file system mount to fstab
    1. echo "/dev/fhgfs_vg/fhgfs_lvol     /data     ext4     errors=remount-ro     0     1" >> /etc/fstab

Note:  This is not a LVM tutorial, for more detail Google “Linux LVM”
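Pulled together, the storage prep for a 1 TB node looks like the sketch below.  The prep_node wrapper and the RUN=echo dry-run switch are mine; the commands themselves are the ones from the steps above (the fdisk partitioning is omitted).

```shell
# Condensed per-node storage prep (1 TB nodes). Call with RUN=echo to print
# the commands instead of executing them; leave RUN unset on a real node.
prep_node() {
    pvs="/dev/sda6 /dev/sdb1 /dev/sdc1 /dev/sdd1"
    for pv in $pvs; do
        $RUN pvcreate "$pv"                           # one PV per partition
    done
    $RUN vgcreate fhgfs_vg $pvs                       # aggregate into one VG
    # 4-way stripe; 90%FREE equalizes extents against the smaller /dev/sda6
    $RUN lvcreate -i4 -I4 -l90%FREE -nfhgfs_lvol fhgfs_vg
    $RUN mkfs.ext4 /dev/fhgfs_vg/fhgfs_lvol
    $RUN mkdir -p /data
    $RUN mount /dev/fhgfs_vg/fhgfs_lvol /data
    $RUN mkdir -p /data/fhgfs/meta /data/fhgfs/storage /data/fhgfs/mgmtd
}

RUN=echo prep_node      # dry run; remove RUN=echo to actually execute
```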

Enable password-less ssh login (based on a public/private key pair) on all nodes

  1. On node that will be used for management run ssh-keygen (in my environment this is fhgfs-node01-r5)
    1. Note:  I have a six node fhgfs cluster fhgfs-node01-r5 to fhgfs-node06-r5
  2. Copy the ssh key to all other nodes.  From fhgfs-node01-r5 run the following commands:
    1. cat ~root/.ssh/ | ssh root@fhgfs-node02-r5 'cat >> .ssh/authorized_keys'
    2. cat ~root/.ssh/ | ssh root@fhgfs-node03-r5 'cat >> .ssh/authorized_keys'
    3. cat ~root/.ssh/ | ssh root@fhgfs-node04-r5 'cat >> .ssh/authorized_keys'
    4. cat ~root/.ssh/ | ssh root@fhgfs-node05-r5 'cat >> .ssh/authorized_keys'
    5. cat ~root/.ssh/ | ssh root@fhgfs-node06-r5 'cat >> .ssh/authorized_keys'
  3. Note:  for more info Google “ssh with keys”
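The five copy commands collapse into a loop.  A sketch: the public-key filename was lost in the original, so ssh-keygen’s default is assumed here, and the DO_COPY guard is mine so the loop does not fire by accident.

```shell
# Loop form of the key-copy commands above. Assumes ssh-keygen's default
# public key name (the filename was lost in the original post). Set
# DO_COPY=1 to actually push the key to the other five nodes.
nodes() {
    for n in 02 03 04 05 06; do echo "fhgfs-node${n}-r5"; done
}

if [ "${DO_COPY:-0}" = 1 ]; then
    for h in $(nodes); do
        ssh "root@$h" 'cat >> .ssh/authorized_keys' < "$HOME/.ssh/"
    done
fi
```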

Configure FraunhoferFS (how can you not love that name)

  1. Launch the fhgfs-admon-gui
    1. I do this using Cygwin-X on my desktop, sshing to the fhgfs-node01-r5 node, exporting the DISPLAY back to my desktop and then launching the fhgfs-admon-gui.  If you don’t want to install Cygwin-X, Xming is a good alternative.
      1. java -jar /opt/fhgfs/fhgfs-admon-gui/fhgfs-admon-gui.jar
    2. Note:  This is not a detailed fhgfs install guide; reference the install guide for more detail
  2. Add metadata servers, storage servers and clients
    1. (screenshot)
  3. Create the basic configuration
    1. (screenshot)
  4. Start the services
    1. (screenshot)
  5. There are also a number of CLI commands that can be used
    1. e.g. – fhgfs-check-servers (screenshot)
  6. If all works well a “df -h” yields the /mnt/fhgfs mount point (pretty cool)
    1. (screenshot)

Creating a CIFS/NFS share

  1. Depending on how you did the install of your base Ubuntu system you will likely need to load the Samba and NFS packages (Note:  I only loaded these on my node01 and node02 nodes, using these nodes as my CIFS and NFS servers respectively)
    1. aptitude install nfs-kernel-server
    2. aptitude install samba
  2. Configure Samba and/or NFS shares from /mnt/fhgfs
    1. There are lots of ways to do this; this is not a blog on NFS or Samba, so refer to the following two links for more information:
      1. NFS:
      2. Samba/CIFS:
    2. As a side note I like to load Webmin on the nodes for easy web-based administration of all the nodes, as well as NFS and Samba
      1. wget
      2. Then use dpkg -i webmin_1.590_all.deb to install

Side note:  Sometimes when installing a Debian package using dpkg you will have unsatisfied dependencies.  To solve this problem just follow these steps:

  1. dpkg -i webmin_1.590_all.deb
  2. apt-get -f --force-yes --yes install

Performance testing, replicating, etc…

Once I finished the install it was time to play a little.  From a Windows client I mapped to the share that I created from fhgfs-node01-r5 and started running some I/O to the FraunhoferFS.  I started benchmarking with IOzone; my goal is to compare and contrast my FraunhoferFS NAS performance with other NAS products like NAS4Free, OpenFiler, etc.  I also plan to do some testing with Unison, rsync and drbd for replication.

This is a long post, so I decided to create a separate post for performance and replication.  To whet your appetite, here are some of the early numbers from the FhGFS NAS testing.





I created the above output quickly; in my follow-up performance post I will document the test bed and publish all the test variants and platform comparisons.  Stay tuned…

Ghetto Fabulous

Most environments running VMware would like some way to back up, protect and revision VMs. There are a number of commercial products that do a good job protecting VMs, such as Veeam Backup and Replication, Quest Software (formerly Vizioncore) vRanger and PHD Virtual Backup, to name a few. This post will focus on the implementation of a much lower-cost (free) backup and recovery solution for VMware. As with any free or open source software there is no right or wrong implementation model, so this post will talk about how ghettoVCB was implemented with Data Domain to enhance the protection of VMs.


What was the driver behind the requirement for image-level protection of VMs in this particular instance? The customer referenced in this post has a fairly large ESX farm at their production site. Most of the production infrastructure is replicated to a DR location, with the exception of some of the “less critical” systems. The DR site also has some running VMs, such as domain controllers, also deemed “less critical”, so these are not replicated either. You may ask why: the short answer is the customer uses EMC RecoverPoint to replicate data from Site A to Site B in conjunction with VMware SRM to facilitate failover, and until recently (VNX) RecoverPoint had a capacity-based license, so dollars were saved by only replicating critical systems. Backups are taken of all systems, but this does not provide the ability to restore an older VM image.

A storage migration was being done from an older SAN infrastructure to a new SAN infrastructure. The migration was deemed complete, but there was one VMFS volume that was missed and never migrated; the OEM was contracted to do a data erasure on the old SAN prior to removing it from the data center. It was at that time that the “less critical” systems were lost and everyone realized that they were not really “less critical”. VMs needed to be rebuilt; this was labor intensive and could have been avoided had a good VM backup strategy been in place.

Discussions around how to protect against this in the future started to occur. The interesting thing was that as part of the new infrastructure a Data Domain had been implemented as a backup-to-disk target, but there was no money left in the budget to implement a commercial VMware image-level backup product. vGhetto ghettoVCB to the rescue! With a little bit of design, ghettoVCB was implemented on all the ESX servers and has been running successfully for over a year.

How to get started…

Download the appropriate ghettoVCB code from the vGhetto Script Repository; there are multiple versions (you should use the latest version; the implementation discussed in this post uses ghettoVCBg2). All of the prerequisites and usage are well documented on the vGhetto site. Take your time and read; don’t jump into this without reading the documentation.

Note: You will have to edit configuration files for vGhetto to setup alerts, retention, backup locations, etc… be sure to read the documentation carefully.

The Implementation details…

High-level Topology

Note: Site A and Site B backups target a share on each respective DD670 (e.g. \\siteADD670\siteAvmbackup for daily backups at Site A); these are replicated to the peer DD670. Replicated data is accessible at the target side by accessing the backup sharename (e.g. \\siteADD670\siteAvmbackup replicated data would be accessible at \\siteBDD670\backup\siteA_vm_backup_replica).

In the environment where this deployment was done, all of the ESX servers are running ESX 4.1 full (not ESXi), so the service console was leveraged. Deployment models can differ, from using the remote support console to using the vMA (vSphere Management Assistant). This is why it is critical that you read the ghettoVCB documentation.


  • Develop and document an architecture / design; this will require a little planning to make deployment as easy as possible.
  • Create a CIFS or NFS share on the Data Domain or other CIFS/NFS target.
    • If you want to keep the cost to nearly zero I recommend Opendedup
    • In this case Data Domain 670s already existed in both locations
    • I created two shares in each location one for daily backups and one for monthly backups (see High-level topology)

The reason for two shares is that only one (1) monthly is retained on the monthly share and fourteen (14) daily backups are maintained on the daily share. There is a tape backup job monthly that vaults the VM image backups from the monthly share.

  • There are basically three tasks that need to be performed on every ESX server in the environment:
    • Mount the target backup share(s):
      • Create mountpoint: mkdir /mnt/backup
      • For NFS: mount servername|IP:/sharename /mnt/backup
      • For CIFS: mount -t cifs //servername|IP/sharename /mnt/backup -o username=USERNAME,password=PASSWORD
  • Add the target backup share(s) to /etc/fstab to make them persistent:
    • For CIFS: echo "//servername|IP/sharename /mnt/backup cifs credentials=/root/.smbcreds" >> /etc/fstab
Note: For CIFS, create a .smbcreds file that contains the CIFS share login credentials. This file should contain the following two lines:
username=USERNAME
password=PASSWORD
    • For NFS: echo "servername|IP:/sharename /mnt/backup nfs [any NFS mount options]" >> /etc/fstab
  • Create cron job(s):
    • Daily Job (runs Monday thru Friday at midnight): 0 0 * * 1-5 root /mnt/backup/.files/ghettoVCB/ -a > /mnt/backup/.files/logs/hostname_ghettoVCB.log 2>&1
    • Monthly Job (runs Saturday at midnight): 0 0 * * 6 root /mnt/monthly_backup/.files/ghettoVCB/ -a > /mnt/monthly_backup/.files/logs/hostname_ghettoVCB.log 2>&1
Note: You will notice that the path to the script is .files on the CIFS | NFS share; this is so I can make modifications post-deployment, and since all the ESX servers use a shared location it is easy to maintain. More on this when I walk through my deployment methodology.

Note: crontab entries need to go in /etc/crontab. If you place them in the user crontab using crontab -e or vi /var/spool/cron/root it will NOT work.
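The three per-host tasks above (mountpoint, fstab entry, /etc/crontab entry) can be sketched as one function.  This is a minimal sketch, not the actual deployment script: the share name and schedule come from the post, the backup-script path is a parameter because its filename did not survive, and the root argument lets you rehearse against a scratch directory (use / on a real ESX host).

```shell
# Sketch of the per-host ghettoVCB setup described above. Parameterized by a
# root dir so it can be rehearsed safely; root=/ on a real host.
setup_ghettovcb() {                   # setup_ghettovcb <root> <script-cmd>
    root=$1; script=$2
    mkdir -p "$root/mnt/backup" "$root/etc"
    echo "//siteADD670/siteAvmbackup /mnt/backup cifs credentials=/root/.smbcreds 0 0" \
        >> "$root/etc/fstab"
    # entries must live in /etc/crontab -- user crontabs do NOT work here
    echo "0 0 * * 1-5 root $script -a > /mnt/backup/.files/logs/hostname_ghettoVCB.log 2>&1" \
        >> "$root/etc/crontab"
}
```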


Once you complete the above steps and test on a single server you are ready to roll out to all the servers in your environment. To simplify this I recommend storing the config files, scripts, etc… in a hidden directory on the CIFS or NFS share.

In my case I have a .files directory in the daily backup and monthly backup directories. This includes the ghettoVCB code, .smbcreds file and the deployment scripts.

Deployment Scripts:

Note: The above scripts assume a CIFS target; modify accordingly for an NFS target.

Deployment is easy: as new ESX servers come online, using plink I remotely execute a mount of the appropriate share, copy the deployment script to /tmp and execute it.

All the changes are made to the fstab, cron, etc.. and VM image backups will now run on a regular basis.

Accessing backed up data…

You will now be able to browse the //servername|IP/sharename from any host and see your backups organized by date:

I use vmware-mount.exe, which is part of the VMware Virtual Disk Development Kit, on the virtual center server to mount the backup vmdk files for individual file restores; obviously for a full restore I just copy the vmdk back to the production datastore.

The following are the key steps to mount a backed up vmdk:

  • Mount the CIFS share (if using NFS you can usually share the volume via CIFS/SMB as well and gain access from Windows to use the process I am outlining here)
    • net use v: //servername|IP/sharename
    • net use

You should see something similar to this:

  • v:
  • dir (you should see all you VM backup dirs)
  • cd to the VM you want to perform a recovery from
  • cd to the proper backup image
  • dir

This is what the above command sequence looks like:

  • Now mount the vmdk
    • vmware-mount.exe z: “2003 SP2 Template.vmdk”
    • You can verify a successful mount by just typing vmware-mount.exe
  • z:
  • dir

You are now looking at the c: drive from the “2003 SP2 Template” VM from January 24, 2012.

You can navigate and copy files just like any normal drive.

Verizon Actiontec Router and Local DNS

I have been really busy and not posting much, but I have my home lab pretty much built out and have a bunch of new projects in the hopper; more on that in future posts.  If you have FIOS like I do, you probably have an Actiontec router provided by Verizon.  When building out my home lab I wanted to use my Actiontec router as my DNS server, for obvious reasons, but the web interface became frustrating pretty quickly.  So many clicks, and the ability to enter only a single host registration at a time:


The ability to edit DNS from telnet is actually really nice on the Actiontec router.  The commands are pretty simple.

1) Enable Telnet on the router (Advanced -> Local Administration)


2) Once telnet is enabled, you can now telnet to your router using the same credentials used with the web interface.


3) After the telnet session is established there are basically three commands you need to be familiar with:

  • dns_get:  lists all DNS server entries
  • dns_set:  adds a DNS entry
  • dns_del:  deletes a dns entry

The syntax is pretty simple:

  • dns_get:  used by itself to list all DNS entries
  • dns_set:  dns_set ID HOSTNAME IP_ADDRESS (e.g. – dns_set 1 host1 IP_ADDRESS)
  • dns_del:  dns_del ID (e.g. – dns_del 1)

This method of adding and removing DNS entries from the Actiontec router is significantly faster than using the web interface.

I use a Google Docs spreadsheet to track my IPs and build the commands to add and remove DNS entries.  I have shared my template here:

Best Remote Connection Tool

I have tested a ton of tabbed remote connection tools.

RDTabs:  Like it for pure RDP; no SSH, HTTP, etc.

Terminals:  Slow and a little buggy IMO.

Remote Desktop Manager:  Overbuilt app, not portable, etc.

I am now using mRemoteNG.  Love it!


This fits all my needs.  It supports all the protocols that I require, and a no-install portable version is available, which is perfect for me.  I have the portable version in my Dropbox folder so I can launch it on any machine and have all my connections readily available.  I can add connections anywhere and they are synced via Dropbox.  The perfect solution for me.  The app is lightweight and fast; give it a try.

App that provides dramatic productivity improvements (for parents)

So this may seem like a strange post, as most people will think that I am going to be talking about an IDE, a RAD tool, a CRM application or some sort of text-to-speech processor.  Regardless of what you are expecting, I can almost guarantee you will be expecting something a little more sexy than what you are about to see (especially if you are not a parent).

I think this app is so useful I am not only posting it to this blog but also cross-posting it to my other blog, because it is that good.

Let me provide some background.  I have two wonderful little girls, a 5 year old and a 6 month old.  Anyone with children knows we have retooled the human machine (ourselves) to have a CPU that is focused on work and a coprocessor that deals with our children while we try to focus (we can flip this paradigm as well).  I have to say my time-slicing skills are second to none; you learn how to work in 2 minute slices while breaking away for 30 seconds to lend some CPU cycles to an often overheating parental coprocessor.  I often read emails back later that had the same thought double-typed, missing words, etc.  This is because I am processing too much information; my mental programming is way off.  I have this huge array of things I need to do, things I am doing, things I am being told to do, things my kids want to do, yadda, yadda, yadda…. Let’s just say that I often suffer pointer corruption, which leads to memory leaks, corruption and eventually a segmentation fault (in non-techie lingo this is known as a freak-out, but this is a technical blog, hence the techie speak).

So, to the point of the post.  There is this brilliant lady named Julie Aigner-Clark, the founder of The Baby Einstein Company; absolutely the best videos for kids under the age of one to help cool down the coprocessor (why didn’t I start filming shiny lights and hand puppets 10 years ago?).  My 5 year old will even watch the videos.  There is this great website called YouTube where you can find Baby Einstein videos as well as other great videos like Oswald, WordGirl, Hannah Montana and The Pink Panther (a few of my older daughter’s favorites).  So you are probably asking what relevance this has.  I will explain; be patient.  I know how difficult this probably is because your 6 month old wants to eat and your 5 year old wants you to “Play Barbies” with her.

I am in my office trying to work and my daughter comes in; she wants me to stop what I am doing to play with her.  I attempt to stall and concentrate at the same time (very difficult).  I eventually sit her on my lap (applies to the 6 month old and the 5 year old), open YouTube in my browser and start playing our favorite Baby Einstein or WordGirl video.  Good so far.  I pop out the video window from YouTube, resize my Excel sheet and attempt to work.  Here is a screen shot of what I am left with:


So on the left my daughter(s) can sit on my lap and watch the video while I work on the spreadsheet on the right.  Now here is the issue: I only have 3/4 of the screen, which can be a little annoying, and if I need to use another app it can be a big issue.  So what is the effect of me switching windows?


Oh no, the video moved to the background; scramble to resize the browser window to avoid a complete meltdown.  My reflexes are not that good, so I rarely accomplish the goal.

Now for the introduction of a must-have application that dramatically improves productivity, focus and sanity.  The app is called DeskPins, and it simply allows you to pin any window to the foreground.  So let’s look at a couple of examples of how I use this.

I follow the same process as before, finding a video on YouTube and popping out the video window, but now I pin the video window to the foreground.


Now I can maximize my spreadsheet (far better) without the video moving to the background, and I can move the video window around as needed.  I can open Firefox and not worry about losing the video to the background.


The app works on 32- and 64-bit versions of Windows (I am running it on 32-bit XP, 32-bit Win 7 and 64-bit Win 7) and has become an invaluable tool for me.  Hopefully this post provides some useful examples and helps other parents occupy their children in times of need.  Enjoy!

Hello from Cisco Live 2010

Got in yesterday (6/28/2010) and planned to attend an afternoon session, but I got hung up on a few items that required my attention.  Attendance looks pretty good; food was a bit weak this AM, but I am more of a coffee-only person in the morning so not a huge deal for me.  Internet connectivity is stellar thus far; hopefully this keeps up.  Looking forward to the sessions this week.  I am starting the week with a session entitled Mastering IP Subnetting Forever.  I will be blogging as always from the sessions I attend.  TTFN

Avamar sizing brain dump

Avamar DS18 = Utility Node + Spare Node + 16 Active Data Nodes

For a 3.3 TB Gen-3 Grid:

  • Raw Capacity ~102 TB
  • Spare Node ~6 TB
  • RAID5 ~15 TB
  • Checkpoint  / GC ~28 TB
  • RAIN ~3 TB
  • Available for Active Backups ~49 TB

RAID Configuration:

  • RAID 1 for 3.3 TB node
  • RAID 5 for 2 TB nodes
  • RAID 1 for 1 TB nodes

How to calculate the required capacity:

  • Seed (initial backups)
    + (Daily Change * Retention in Days)
    + RAIN
    = GSAN Utilization

  • Need min available space for 4 checkpoints
  • 3 checkpoints maintained by default
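The GSAN utilization formula above can be sketched as a quick back-of-the-envelope calculator.  The input figures below are purely illustrative (not from any real assessment):

```python
def gsan_utilization_tb(seed_tb, daily_change_tb, retention_days, rain_overhead_tb):
    """Estimate GSAN utilization in TB:
    seed + (daily change * retention in days) + RAIN overhead."""
    return seed_tb + daily_change_tb * retention_days + rain_overhead_tb

# Example: 10 TB seed, 0.1 TB daily change, 30-day retention, 1 TB RAIN overhead.
used = gsan_utilization_tb(10.0, 0.1, 30, 1.0)
print(round(used, 1))  # 14.0
```

Remember this is GSAN utilization only; you still need to leave headroom for the checkpoint space noted above.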

Data Gathering

Note:  Agent only vs. data store depends on the desired RPO

  • xfer_rate = GB/hr * .70
  • data_size = total size of the data set to be backed up
  • restore_time = data_size * .65 / xfer_rate

If RTO < restore_time then data store, else agent only.
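The agent-only vs. data store decision above is simple enough to script.  A minimal sketch, using the derating factors from the formulas (0.70 on the transfer rate, 0.65 on the data set size); the example sizes and link rate are made up for illustration:

```python
def restore_time_hours(data_size_gb, link_rate_gb_per_hr):
    """Estimated restore time: derate the nominal transfer rate by 30%
    (xfer_rate = rate * .70) and assume 65% of the data set is restored."""
    xfer_rate = link_rate_gb_per_hr * 0.70
    return data_size_gb * 0.65 / xfer_rate

def deployment_model(rto_hours, data_size_gb, link_rate_gb_per_hr):
    """If the RTO is shorter than the estimated restore time, a local
    data store is needed; otherwise agent-only suffices."""
    t = restore_time_hours(data_size_gb, link_rate_gb_per_hr)
    return "data store" if rto_hours < t else "agent only"

# A 2000 GB data set over a 100 GB/hr link restores in ~18.6 hours:
print(deployment_model(8, 2000, 100))   # data store
print(deployment_model(24, 2000, 100))  # agent only
```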

Always use 3.3 TB nodes when configuring unless additional nodes are required to increase the ingestion rate.

Use the default de-dupe rate unless a POC or assessment has been performed.

Sizing Considerations:

  • Data Types
    • File Systems
    • Databases
    • Large Clients > 2 TB
    • Dense File Systems (excluding EMC Celerra and NetApp)
  • Organic Growth
  • RTO
  • Replication Window
  • Maintenance Window

Non-RAIN nodes must be replicated; this includes single-node Avamar deployments and 1×2 configurations (1 utility node and 2 data store nodes – a non-RAIN config).

**** Remember this: As a general rule, transactional databases are better suited to be backed up to Data Domain and NOT with Avamar, as the hashing of databases is generally very slow.

VMware (specifically using the VMware Storage APIs) and CIFS are well suited for Avamar.

Data save rates:

  • 100 – 150 GB/hr per avtar stream on the latest server types
    • Note:  it is possible to launch multiple avtar daemons with some tweaking, but an out-of-the-box install only launches a single avtar process.
  • VM guest backups can be slower
  • Default assumption is that the chunk-compress-hash process runs at a rate of 100 GB/hr
    • This is the process that bottlenecks database backups (ideally the avtar stream rate should match the chunk-compress-hash rate)

Scan rate:

  • ~1 million files per hour
    • 1 TB of file data will take about 1 hour to back up
    • A 1 TB DB will take ~10 hours to complete


  • 1 TB/hr per node in the grid (all file data)
  • At 80% file (800 GB file) and 20% DB (200 GB DB), the performance level drops off to .5 TB/hr
  • E.g. – DS18 perf will be ~15-16 TB/hr
  • Per node ingest rate ~8 GB/hr
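The per-node rates above make a rough backup-window estimate straightforward.  A sketch, assuming ~1 TB/hr per active data node for file-only data, ~0.5 TB/hr for the 80/20 file/DB mix noted above, and a DS18's 16 active data nodes (the 32 TB example data set is made up):

```python
def backup_window_hours(data_tb, active_nodes=16, mixed_workload=False):
    """Estimate backup window: ~1 TB/hr per node for pure file data,
    ~0.5 TB/hr per node for an 80/20 file/database mix."""
    per_node_rate = 0.5 if mixed_workload else 1.0  # TB/hr per node
    return data_tb / (active_nodes * per_node_rate)

print(backup_window_hours(32))                       # 2.0  (hours, file data on a DS18)
print(backup_window_hours(32, mixed_workload=True))  # 4.0  (hours, with 20% DB data)
```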


Data Fetch Process

  • Per-node assumptions
    • Chunk size 24 KB
    • Each chunk is referenced in a hash index stripe
    • Speed:
      • 5 MB/s
      • 18 GB/hr (compressed chunks)
      • 25 GB/hr (rehydrated chunks)
  • E.g. – A DS18 will restore at a rate of .5 TB/hr
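The DS18 restore figure follows from the per-node fetch rates above — a quick sanity check, assuming all 16 active data nodes fetch in parallel:

```python
# ~25 GB/hr of rehydrated data per node, across a DS18's 16 active data nodes.
per_node_rehydrated_gb_hr = 25
active_nodes = 16

grid_restore_gb_hr = per_node_rehydrated_gb_hr * active_nodes
print(grid_restore_gb_hr / 1000)  # 0.4  (TB/hr, roughly the ~.5 TB/hr cited)
```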

NDMP Sizing:

  • Size of the NDMP data set
  • Type of filer (Celerra or NetApp)
  • Number of volumes, file systems, qtrees
  • Size of volumes
  • Number of files per volume / file system

L-0 fulls only happen once (we don’t want to size for them).

Size for L-1 incremental which will happen in perpetuity following the completion of the L-0 full.

  • Important L-1 sizing data
    • Number of files in the L-1 backup
    • Backup window

2 Accelerator Nodes

Config    Max Files            Max Data             Max Streams
          Celerra   NetApp     Celerra   NetApp     Celerra   NetApp
6 GB      5 m       30 m       4-6 TB    4-6 TB     1-2       1-2
36 GB     40 m      60 m       8-12 TB   8-12 TB    4         4

NDMP throughput ~ 100 – 150 GB/hr

Assumed DeDupe Rates:

  • File data
    • Initial backup:  70% commonality (30% of the data is unique)
      • e.g. – 30% of 10 TB = 3 TB stored
    • Subsequent backups:  .3% daily change
      • e.g. – .3% of 10 TB = 30 GB stored per day
  • Database data
    • Initial backup:  35% commonality (65% of the data is unique)
      • e.g. – 65% of 10 TB = 6.5 TB stored
    • Subsequent backups:  4% daily change
      • e.g. – 4% of 10 TB = 400 GB stored per day
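The assumed de-dupe rates above can be turned into a stored-capacity estimate over a retention period.  A sketch for a 10 TB data set over 30 days, using the default commonality and daily-change figures from the notes:

```python
def stored_tb(data_tb, days, unique_fraction, daily_change):
    """Capacity stored on the grid: the unique portion of the initial
    backup plus the daily change stored for each day of retention."""
    initial = data_tb * unique_fraction      # unique data in the first backup
    dailies = data_tb * daily_change * days  # change stored per day, summed
    return initial + dailies

# File data: 30% unique, .3% daily change -> 3.0 + 0.9 = ~3.9 TB stored
print(round(stored_tb(10, 30, 0.30, 0.003), 2))
# Database:  65% unique, 4% daily change  -> 6.5 + 12.0 = ~18.5 TB stored
print(round(stored_tb(10, 30, 0.65, 0.04), 2))
```

The gap between those two numbers is the capacity side of the tip below: databases both scan slowly and store far more data per TB protected.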

Tip:  Based on scan rate and the amount of data stored for DB backups you can see why Avamar may not be the best choice for DB backups.

NDMP Tips:

  • The Avamar NDMP accelerator node should be on the same LAN segment as the filer, and on the same switch when possible
  • No include/exclude rules are supported
  • Able to run up to 4 NDMP backups simultaneously
    • Most effective with large files
    • Min of 4 GB of memory per accelerator node per stream
    • 4 simultaneous NDMP backups are scheduled as group backups

Desktop / Laptop


  • Number of clients
  • Amount of data per client
    • user files
    • DB/PST files

DS18 can support ~ 5000 clients

Number of streams per node default is 18 (17 are usable, one should be reserved for restores).

That completes the brain dump.  Wish I had more but that is all for now.