Recent Email

I recently sent an internal email with some of my favorite educational (technology-centric) web links.  Here is a copy for your enjoyment.

Looking for good reading material?  Here are my suggestions:

If you like the content that Storage magazine has to offer, save a tree and look at these online publications from TechTarget:

If you want to absorb the most content in the shortest amount of time I highly recommend podcasts….  These are a few of my favorites:

Other great online resources:

How to increase your computer aptitude – must-watch documentaries (all can be downloaded to your iPod)!

Understand the Open Source movement:

Hacker sub-culture:

A site that deserves its own category:

.edu:

  • One of my favorites – http://webcast.rice.edu/webcast.php?action=details&event=196

Professional Societies:


Holiday perspective….

Happy Holidays…. This holiday season proved to be both enjoyable and exhausting.  I received a few quick-read self-help/motivational books this holiday season.  I finished one today entitled It’s Not How Good You Are, It’s How Good You Want To Be by Paul Arden.  The book is a quick read aimed at provoking thought to help you be the best you that you can be.  Like the "bible" (as so many refer to it on Amazon), I don’t think the ideas conveyed are meant to be taken literally but rather absorbed, digested and interpreted in a way that helps you climb higher.

In the world of technology we spend so much time learning…. formulas, calculations, etc…   Many of the concepts are apropos and really got me thinking, so I thought I would share them with you.

It’s Wrong To Be Right
Being right is based upon knowledge and experience and is often provable.
Knowledge comes from the past, so it’s safe.  It is also out of date.  It’s the opposite of originality.
Experience is built from solutions to old situations and problems.  The old situations are probably different from the present ones, so that old solutions will have to be bent to fit new problems (and possibly fit badly).  Also the likelihood is that, if you’ve got the experience, you’ll probably use it.
This is lazy.
Experience is the opposite of being creative.
If you can prove you’re right, you’re set in concrete.  You cannot move with the times or with other people.
Being right is also being boring.  Your mind is closed.  You are not open to new ideas.  You are rooted in your own rightness, which is arrogant.  Arrogance is a valuable tool but only if used sparingly.
Worst of all, being right has a tone of morality about it.  To be anything else sounds weak or fallible, and people who are right would hate to be thought fallible.
So:  it’s wrong to be right, because people who are right are rooted in the past, rigid-minded, dull and smug.
There’s no talking to them.

It’s Right To Be Wrong
Start being wrong and suddenly anything is possible.
You’re no longer trying to be infallible.
You’re in the unknown.  There is no way of knowing what can happen, but there’s more chance of it being amazing than if you try to be right.
Of course, being wrong is a risk.
People worry about suggesting stupid ideas because of what others will think.
You will have been in meetings where new thinking has been called for, at your original suggestion.
Instead of saying, ‘That’s the kind of suggestion that leads us to novel solutions’, the room goes quiet, they look up to the ceiling, roll their eyes and return to the discussion.
Risks are a measure of people.  People who won’t take them are trying to preserve what they have.
People who do take them often end up having more.
Some risks have a future, and some people call them wrong.  But being right may be like walking backwards proving where you’ve been.
Being wrong isn’t in the future, or the past.
Being wrong isn’t anywhere but being here.
Best place to be, eh?

Do Not Try To Win Awards
Nearly everybody likes to win awards.
Awards create glamour and glamour creates income.
But be aware.
Awards are judged in committee by consensus of what is known.
In other words, what is in fashion.
But originality can’t be fashionable, because it hasn’t yet had the approval of the committee.
Don’t try to follow fashion.
Be true to your subject and you will be far more likely to create something that is timeless.
That’s where the true art lies.

The moral is: think for yourself, don’t be afraid to be wrong or unfashionable, be creative, innovate, succeed at being all you can be and be humble!

I will finish up with this final passage:

Do Not Covet Your Ideas
Give away everything you know and more will come back to you.
You will remember from school other students preventing you from seeing their answers by placing their arm around their exercise book or exam paper.
It is the same at work, people are secretive with ideas.  ‘Don’t tell them that, they’ll take the credit for it.’
The problem with hoarding is you end up living off your reserves.  Eventually you will become stale.
If you give away everything you have, you are left with nothing.  This forces you to look, to be aware, to replenish.
Somehow the more you give away the more comes back to you.
Ideas are open knowledge.  Don’t claim ownership.
They’re not your ideas anyway, they’re someone else’s.  They are out there floating by on the ether.
You just have to put yourself in a frame of mind to pick them up.

Happy Holidays!  A little more food for thought before you finalize your New Year’s resolutions 🙂

Interesting perspective…. Got me thinking….

Just finished reading Tony Asaro’s iSCSI – What Happens Next? post, and the concept of VMware purchasing LeftHand Networks is an interesting one.  Let’s review the current situation: EMC owns 90% of VMware, everyone willingly or reluctantly acknowledges that VMware is crushing it in the marketplace, and the virtualization craze is truly one of those things in our industry that rolls around every 20 years.  VMware’s business model, value prop, etc. preclude them from any and all affinities to a particular hardware vendor.  This may seem odd given that EMC owns 90% of the company, but the reality is that while VMware was a phenomenal acquisition for EMC from both an intellectual property and a shareholder value perspective, VMware continues to be a major catalyst in the commoditization of the hardware market.  EMC has been a major beneficiary of slow commoditization in the enterprise storage hardware market for years (I should note that EMC is a very, very large software player, but the hardware platform is still a critical component of the EMC business model), and VMware threatens this space.

Now look at LeftHand Networks, who have morphed their business and aligned themselves with the virtualization movement.  A purchase of a player such as LeftHand may allow VMware to develop an end-to-end agnostic solution that includes features traditionally handled at the array level, such as replication.  I should also note that Celerra, Centera, CLARiiON, Avamar, etc. (the list goes on and on) are nothing more than software running on a hardened platform.  Many of these have insatiable appetites for horsepower, so it remains to be seen if they can successfully move to a hardware-agnostic model; additionally, support becomes increasingly difficult and QoS may suffer in favor of flexibility and cost.  Is this the right model for everyone?  Probably not, but in the large portion of the market where VMware is widely adopted it is a tradeoff most are willing to make.

Why LeftHand vs. the others?  Well, I think LeftHand has done a good job publicizing their virtual approach; the release of a LeftHand Virtual Appliance demonstrates a conscious alignment.  Finally, LeftHand is large enough to validate the technology but small enough to be a blip on the radar of VMware’s hardware partners, in particular IBM and HP.  The acquisition of LeftHand by HP or IBM could potentially cause even bigger problems for EMC.  If I were EMC I would look to either move the Celerra code to a virtual appliance quickly or acquire someone; being first in this market could be huge.  Only time will tell what happens here, but it should be interesting to watch.

The space is getting annoying…

I have not gone off on this one before and I am in the mood to write, so I thought I would rant about two of my favorite topics. I am sure you are wondering what they are, so here you go:

  • The VAR aka “Valueless Annoying Reseller”
  • The PS aka “Please Stop” only agnostic partner

The VAR or Value Added Reseller… IMO this is an acronym that should be reserved for niche players who focus on a specific technology segment. Why? A VAR should bring some semblance of tangible value to a specific market (technological, geographical, intellectual, etc.) that outshines the manufacturer. This in no way states that there cannot be a super VAR, a VAR that displays superior competence in a number of technology disciplines. The value of the VAR typically presents itself in the form of practical application of the technology and ultimately the development of some level of intellectual property built on the foundation provided by the OEM. The VAR to me should represent a field-based integrator rich in engineering talent. A VAR should bring together technology, process, best practices and intellect to develop offerings that differentiate a solution in the marketplace. Where my annoyance mostly lies is with the “expert” VAR that represents 5000 products and employs 50 people. You don’t need to be a rocket scientist to figure out that this “VAR” cannot possibly be a subject matter expert in all 5000 products. If the OEM, who employs tens of thousands, struggles to understand, test and integrate the hundreds of products they manufacture, it would be moronic to believe that the VAR who reps those hundreds of products along with thousands of others has it covered. The true VAR is focused: they understand practically what can be delivered within a finite product set and concentrate on leveraging those technologies to solve business problems. The reality is the VAR market has become what I like to call the “smorgasbord sale” when it should be the “sit-down sale with French service”.

Now onto the even more annoying “PS only agnostic partner”. The pitch here is that because this 50-person company only provides services and doesn’t care who you source the product from, they have some level of agnosticism. The answer is that they probably do; the problem is they also have skills that are wide, not deep – often the jack of all trades, master of none – and certifications trump practical experience in this space. The concept of owning a business solution end-to-end is a nearly impossible feat with this partner. Experts in the PS (Please Stop) regardless of the platform? IMO the “PS” agnostic partner is dangerous because unlike the VAR (Valueless Annoying Reseller), who is looking to be a fulfillment channel (no risk here, a low-cost provider of product – nothing more), the PS partner is looking to deliver business solutions. The words “agnostic”, “engineer” and “technology” should never be uttered in the same breath – technologists are some of the most opinionated people on the planet, so it just does not work for me. If you don’t believe me, just take a look around the blogosphere to see how opinionated we are; it is just not possible to do this day in and day out and not formulate an opinion. A deep understanding of the physical infrastructure is pretty important when architecting a solution, and even more important when implementing the architecture. Bottom line: the more practical experience you have, the more apt you are to steer clear of problem architectures and save the customer time and money.

Finally, last night I was lying in bed thinking about the real value of the VAR. Having worked for both the OEM and the VAR, here is what I have come up with. A good OEM puts their best and brightest into the product development cycle, which usually limits the amount of time they can spend solving unique customer problems. A good VAR has their best and brightest on the front lines working with customers and leveraging their deep skill set in a focused technology to solve business problems. A good PS organization is focused on business process and not technology (i.e. Accenture, CSC, BearingPoint, etc.); these companies are typically segmented by vertical market (i.e. Financial Services, Life Sciences, etc.) because their value is in coupling business process with technology. Another word that accurately depicts the PS-focused organization is scale; these businesses are built to be able to throw people at a problem. PS-only organizations who do not adhere to the rules I have outlined above are, IMO, more consistent with staff augmentation and NOT professional services.

Oracle Storage Guy: Direct NFS on EMC NAS

I have been chomping at the bit to test VMware on dNFS on EMC NAS for a couple of reasons.  A number of my customers who are looking at EMC NAS, in particular the NS20, would like to consolidate storage, servers, file services, etc. onto a unified platform and leverage a single replication technology like Celerra Replicator.   dNFS may offer this possibility: .vmdks can now reside on an NFS volume, CIFS shares can be consolidated to the NS20, and everything can be replicated with Celerra Replicator.  The only downside to this solution that I can see right now is that the replicated volumes will be crash-consistent copies, but I think with some VMware scripting even this concern can be addressed.  I hope to stand this configuration up in the lab in the next couple of weeks, so I should have more detail and a better idea of its viability shortly.  You may be wondering why this post is entitled Oracle Storage Guy… the answer is that I was searching the blogosphere for an unbiased opinion and some performance metrics for VMware on dNFS, and this was the blog I stumbled upon.

The performance numbers I have seen for VMware on dNFS come very close to the numbers I have seen for iSCSI; both technologies offer benefits, but for the use case I mention above dNFS may become very compelling.  I recommend reading the post Oracle Storage Guy: Direct NFS on EMC NAS, it offers some great commentary on the performance characteristics and benefits of dNFS.

How things have changed…

I started composing this blog on my train ride up to Boston from NYC on Tuesday night, and I am sure I will actually finish it later, which is why I mention this (it is now Friday and I am finally completing this blog on my train ride into NYC; correction, it is now Tuesday afternoon and I am finally finishing it, having been sidetracked by a few other activities and a more factual blog that I worked on Friday but did not finish).  The train ride between NY and Boston provides plenty of captive time to catch up on industry happenings.

I am a fairly regular reader of Dave Hitz’s blog – great content (since I will most likely not drive a Tesla anytime in the near future, I need to read Dave’s blog to glean some insight), and I also happen to think he is a very intelligent individual; I am not alone there.  From time to time when I am bored I also read Jonathan Schwartz’s blog; he is a bit maniacal IMO, but there are some good tidbits to be gleaned from the piles and piles of rhetoric – for instance, reporting earnings on your blog, c’mon, sometimes it is just a bit over the top.  I also wish someone would create a “Java sucks, kill it now” petition; I would be more than happy to be the first person to sign it – sometimes portability is just not worth the anguish (with that said, SUN has done an incredible job proliferating this utterly painful technology).  Tonight I read both Dave’s and Jonathan’s blogs hoping to gain some additional insight into the litigation saga.  BTW – .NET and Mono provide a pretty decent alternative to Java: good looking, decent performing, portable, etc.  Now I am off on a tangent: as a long-time Linux user and long-time VMware user, can someone please explain to me why the hell VMware would remove Linux support for the ESX console in ESX 3.x?  I understand that the bulk of the desktop users are running Windows, so Linux support may seem insignificant, but I should remind everyone that Linux users needing to run Windows were some of the early adopters of VMware’s technology – show us some love.  The sad part is that the VMware console is built on .NET; some planning and Mono compliance would have given them portability to multiple platforms.  Anyway, I thought I would throw that in there.  Onward….

The litigation situation between SUN and NetApp is what prompted this unorganized stream of consciousness.  It got me thinking about how much innovation has slowed due to the need to “litigate vs. innovate” (taken from Schwartz’s blog – obviously written during one of his cosmic moments of consciousness).  SUN has changed a lot from the days of Khosla, Bechtolsheim (happy to see him back), Joy and McNealy.

I really enjoy technology history for a couple of reasons: it is factual, finite and tangible, and it is well documented with little speculation over what actually happened.  Historical accounts of technology lack the personal opinions that taint world history, religious history and most other historical narratives.  Why is this relevant?  You will see in a moment – hopefully…

If I were to mention the names Dan Bricklin and Bob Frankston, most people would respond with a blank stare and possibly a “who?”  In fact, Dan Bricklin and Bob Frankston are two of the greatest innovators in personal computing history; they wrote an application called VisiCalc, which was originally released for the Apple II in 1979.  Interestingly enough, the creators of VisiCalc did not pursue a patent for their work; while today I am sure Dan and Bob would file for a patent, back in the day almost no one filed for patents related to software inventions.  A few other software inventions not protected by patent law are word wrapping, cut and paste, the word processing ruler, sorting and compression algorithms, hypertext linking, and compiler techniques.  Could you imagine how innovation would have slowed if these technologies had all been patented…  The ability to leverage each other’s inventions dramatically accelerates the innovation cycle.  The open source community is proof positive that this works; the problem is that so many of the market leaders (e.g. Microsoft) would rather protect their market share by locking the doors than just being the best and forcing themselves to out-innovate their competition.  It is a complex problem and not one that I have an answer for.  With that said, I would like choice; I would like an open document format that works seamlessly across multiple word processors, etc.  Stifling competitive innovation through lockout is not the right answer.

Another name that most would struggle with is Gary Kildall, one of the key pioneers of the PC industry who was “upstaged” by Microsoft.  Gary Kildall was a hobbyist and the creator of the MS-DOS predecessor CP/M.  Back in the day when Microsoft was headquartered in a strip mall in Albuquerque, New Mexico developing BASIC for the Altair, Gary Kildall was in Pacific Grove, California running a company called Digital Research and writing a hobbyist operating system called CP/M.  CP/M would be the model for the code base that would eventually become MS-DOS and propel Microsoft to heights beyond their wildest dreams.

While the developers of the modern operating system and the modern spreadsheet arguably stood by – and continued to innovate – as Microsoft and Lotus became multi-billion dollar companies, today two industry giants like NetApp and SUN would rather take a position just to take a position…  The sad part is that corporations have so much control today that the Bricklins, Frankstons and Kildalls of our era – the Torvalds, the Raymonds, etc… – have a significantly larger hill to climb.

Sometimes I actually feel pain when I read Jonathan Schwartz’s blog.  With that said: WAFL, ZFS, who cares… move on… NetApp, ZFS is not even close to a threat; SUN is not and never will be a storage company.  SUN, how about spending less time pulling patents and reviewing them to see what you can litigate against.  I just wish someone would focus their time, money and energy on building something new and innovative rather than trying to keep current technology alive by boxing each other out.

I for sure do not know everything but I do know that the rate of development and innovation has slowed dramatically over the past 30 years.

The Cache Effect

Following a fit of rage last night after I inadvertently deleted two hours’ worth of content, I have now calmed down enough to recreate the post.

The story starts out like this: a customer who recently installed an EMC CX3-80 was working on a backup project rollout, and the plan was to leverage ATA capacity in the CX3-80 as a backup-to-disk (B2D) target.  Once they rolled out the backup application they experienced very poor performance for the backup jobs running to disk; additionally, the customer did some file system copies to this particular device and the performance appeared similarly slow.

The CX3-80 is actually a fairly large array, but for the purposes of this post I will focus on the particular ATA RAID group that was the target of the backup job and where the performance problem was identified.

I was aware that the customer only had one power rail due to some power constraints in their current data center.  The plan was to power up the CX using just the A-side power until they could decommission some equipment and power on the B side.  My initial thought was that cache was the culprit, but I wanted to investigate further before drawing a conclusion.

My first step was to log into the system and validate that cache was actually disabled, which it was.  This was due to the fact that the SPS (standby power supply) only had one power feed and the batteries were not charging.  In this case write-back cache is disabled to protect against potential data loss.  Once I validated that cache was in fact disabled, I thought I would take a scientific approach to resolving the issue by baselining the performance without cache, then enabling cache and running the performance test again.

The ATA RAID group I was testing on was configured as a 15-drive RAID 5 (R5) group with 5 LUNs (50 – 54) of ~ 2 TB in size.

Figure 1:  Physical disk layout

R5

My testing was run against drive F:, which is LUN 50, residing on the 15-drive R5 group depicted above.  LUNs 51, 52, 53 and 54 were not in use, so the RAID group was servicing only the benchmark I was running against LUN 50.

Figure 2:  Benchmark results before cache was enabled

Pre-cache

As you can see, the write performance is abysmal.  I will focus on the 64k test as we progress through the rest of this blog.  You will see above that the 64k test only pushed ~ 4.6 MB/s – very poor performance for a 15-drive stripe.  I have a theory for why this is, but I will get to that later in the post.

Before cache could be enabled we needed to power the second power supply on the SPS; this was done by plugging the B power supply on the SPS into the A-side power rail.  Once this was complete and the SPS battery was charged, cache was enabled on the CX and the benchmark was run a second time.

Figure 3:  Benchmark results post cache being enabled (Note the scale on this chart differs from the above chart)

Post-cache

As you can see, the performance increased from ~ 4.6 MB/s for 64k writes to ~ 160.9 MB/s.  I have to admit I would not have expected write cache to have this dramatic an effect.

After thinking about it for a while I formulated some theories that I hope to fully prove out in the near future.  I believe the performance characteristics that presented themselves in this particular situation were a combination of factors: the fact that the stripe width was 15 drives, combined with cache being disabled, created the huge gap in performance.

Let me explain some RAID basics so hopefully the explanation will become a bit clearer.

A RAID group has two key components that we need to be concerned with for the purposes of this discussion:

  1. Stripe width – typically synonymous with the number of drives in the RAID group
  2. Stripe depth – the size of the write that the controller performs before it round-robins to the next physical spindle (depicted in Figure 4, and sketched in code below)

Figure 4: Stripe Depth

Stripe_depth
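To make the geometry concrete, here is a small sketch of how a logical offset maps onto spindles given a stripe depth and width.  This is my own toy model for illustration only; it ignores parity placement, alignment and real CLARiiON internals:

```python
# Toy model of RAID striping: map a logical byte offset to the spindle
# (column) and stripe row it falls in.  Illustration only -- real array
# firmware also handles parity placement, alignment, element size, etc.

def locate(offset_bytes, stripe_depth=8 * 1024, stripe_width=4):
    """Return (spindle, stripe_row) for a logical byte offset."""
    element = offset_bytes // stripe_depth     # which stripe element overall
    spindle = element % stripe_width           # round-robin across the drives
    stripe_row = element // stripe_width       # how far down each drive
    return spindle, stripe_row

# A 16k host write against an 8k stripe depth spans two elements, i.e. two
# adjacent spindles in the same stripe row (the situation shown in Figure 5).
for offset in (0, 8 * 1024):
    print(offset, locate(offset))
```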

The next concept is write cache, specifically two features of write cache known as write-back cache and write-gathering cache.

First let’s examine the I/O pattern without the use of cache.  Figure 5 depicts a typical 16k I/O on an array with an 8k stripe depth and a 4-drive stripe width, with no write cache.

Figure 5:  Array with no write cache

No_cache

The effect of having no write cache is twofold.  First, there is no write-back, so each I/O needs to be acknowledged by the physical disk, which is obviously much slower than an ack from memory.  Second, because there is no write-gathering, full-stripe writes cannot be facilitated, which means more back-end I/O operations – affectionately referred to as the read-modify-write penalty.
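To put rough numbers on that penalty, here is a back-of-the-envelope sketch (a simplified model of RAID 5 I/O accounting, not an EMC formula) comparing the back-end operations for a small partial-stripe write against a gathered full-stripe write:

```python
# Simplified RAID 5 back-end I/O accounting.  Assumes the classic small-write
# path (read old data + read old parity + write new data + write new parity)
# and that a gathered full-stripe write needs no reads because parity can be
# computed entirely from the incoming data.

def partial_stripe_ops(elements_touched):
    """Back-end disk ops for a read-modify-write of N stripe elements."""
    return 2 * elements_touched + 2        # data reads/writes plus parity read/write

def full_stripe_ops(stripe_width):
    """Back-end disk ops for one gathered full-stripe write (data + parity)."""
    return stripe_width                    # (width - 1) data writes + 1 parity write

# A 64k host write with a 64k stripe depth touches a single element:
print("partial-stripe 64k write:", partial_stripe_ops(1), "back-end I/Os")     # 4
# The same element written as part of a gathered full stripe on a 4-drive
# group costs 4 back-end I/Os for 3 elements of host data (~1.3 per element):
print("full-stripe write, 4-drive group:", full_stripe_ops(4), "back-end I/Os")
```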

Now let’s examine the same configuration with write cache enabled, depicted in Figure 6.

Figure 6:  Array with write cache enabled

W_cache

Here you will note that acks are sent back to the host before the data is written to the physical spindles, which dramatically improves performance.  Second, write-gathering cache is used to facilitate full-stripe writes, which negates the read-modify-write penalty.

Finally, my conclusion is that the loss of write cache could be somewhat mitigated by reducing stripe widths from 15 drives to 3 or 4 drives and creating a metaLUN to accommodate larger LUN sizes.  With a 15-drive RAID group the read-modify-write penalty can be severe, as I believe we saw in Figure 2.  This theory needs to be tested, which I hope to do in the near future.  Obviously write-back cache also had an impact, but I am not sure it was as important as write-gathering in this case.  I could probably also have tuned the stripe depth and file system I/O size to improve efficiency without cache.
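As a rough illustration of the theory (my own sketch, not a validated result), compare how much contiguous data it takes to fill one stripe at a 64k stripe depth as the group gets wider:

```python
# For RAID 5, a full stripe holds (width - 1) data elements plus parity.
# Without write-gathering cache, any host write smaller than the full-stripe
# size pays the read-modify-write penalty, and the wider the group, the less
# likely a write is to line up as a full stripe.

def full_stripe_kb(stripe_width, stripe_depth_kb=64):
    """Contiguous data (KB) needed to fill one RAID 5 stripe."""
    return (stripe_width - 1) * stripe_depth_kb

for width in (15, 5, 4, 3):
    print(f"{width:2d}-drive R5 group: full stripe = {full_stripe_kb(width)} KB")

# A 15-drive group needs 896 KB of contiguous data per full stripe, so a 64k
# backup write can never fill one on its own; a 4-drive group needs only
# 192 KB, which gathered or larger sequential writes can hit far more often.
```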

Server sprawl with the added cost of OS sprawl…

Virtualization has definitely reached revolution status.  This is apparent when you look at VMware’s stock price and market cap, which has eclipsed the market cap of Ford Motor Company.  What makes the fact that VMware’s market cap is greater than that of Ford Motor Co. insane is that only 10% of VMware is publicly traded – the other 90% is owned by EMC and EMC’s shareholders.  OK – obviously I am not a finance guy, but somehow the math is not making sense to me.  Here is how I see it:

  • VMware is trading a total pool of ~ 383 million shares at a price of ~ $77, which gives them a market cap of ~ $29.5 billion.
  • EMC is trading a total pool of 2.10 billion shares at a price of ~ $19, which gives them a market cap of ~ $40 billion.
  • EMC still owns 90% of VMware, so that value should be rolled into the EMC market cap, right?  Obviously, wrong!  If it were (valuing all of VMware at the $77 share price – see the quick calculation after this list), EMC would have a market cap of ~ $305 billion.
  • If 100% of VMware were public, would that mean there would be ~ 3.8 billion shares outstanding and a market cap of ~ $295 billion?
  • Is the stock price inflated because of the limited number of outstanding VMW shares?
  • Should EMC be seeing a greater effect from VMware’s stock price?
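To make my back-of-the-envelope math explicit, here is the arithmetic behind the numbers above (the share counts and prices are the rough figures I quoted, so treat the output as approximate):

```python
# Rough market-cap arithmetic using the approximate figures quoted above.

vmw_public_shares = 383e6     # ~383 million VMW shares trading
vmw_price = 77.0              # ~$77 per share
emc_shares = 2.10e9           # ~2.10 billion EMC shares
emc_price = 19.0              # ~$19 per share

vmw_float_cap = vmw_public_shares * vmw_price       # ~$29.5 billion
emc_cap = emc_shares * emc_price                     # ~$40 billion

# If the ~383 million public shares are only ~10% of VMware, then pricing the
# whole company at $77 a share implies:
vmw_full_cap = vmw_float_cap / 0.10                  # ~$295 billion
emc_with_stake = emc_cap + 0.90 * vmw_full_cap       # ~$305 billion

print(f"VMW public float cap: ${vmw_float_cap / 1e9:.1f}B")
print(f"EMC market cap:       ${emc_cap / 1e9:.1f}B")
print(f"VMW valued in full:   ${vmw_full_cap / 1e9:.1f}B")
print(f"EMC + 90% of VMW:     ${emc_with_stake / 1e9:.1f}B")
```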

Personally I think VMware stock is way overvalued based on the indicators that surround it, but like I said I am not a finance guy, so I welcome some additional clarification.

I still stand by a prediction I made in the past: I think VMware continues to suffer from “better technology always wins” syndrome, the disease that led to the demise of Netscape.

VMware and virtualization have created a new marketplace; I contend that server vendors and Microsoft have suffered no financial impact from the incredible success of VMware.

While virtualization may have artificially quelled server sprawl, server vendors are now selling more sophisticated equipment like 16-way boxes and blades, which removes some of the commoditization from the server market and provides a way for server vendors to infuse margin back into a market that was headed nowhere – if you are a server vendor, this is goodness.

For a while I wondered why Microsoft was not more aggressively attacking VMware; I now think I have a solid hypothesis.  While VMware represents a new market, VMware is also creating an OS sprawl problem which is driving increased operating system sales for Microsoft – if you are Microsoft, this is goodness.

In the past end users may have had 1 or 2 test and dev environments; now that virtualization has made provisioning so simple, they are literally creating hundreds of virtual test and dev environments.  To date, if you are Microsoft, why would you be overly concerned….  It is also important to note that, in my estimation, many VMware users will reach pre-virtualization physical server counts not long after virtualizing.  This is due in large part to the ease of provisioning and management associated with virtualization; users are provisioning more VMs more frequently – all representing big wins for OS providers (namely Microsoft), application providers, server providers and VMware.

Now onto the technology…  There is no doubt that VMware has a hypervisor that is far superior to Microsoft’s, but Microsoft will eventually catch up – what happens once Microsoft begins to offer pricing concessions on the Microsoft OS and applications when running in a virtualized environment on the MS hypervisor vs. the VMware hypervisor?  To me this sounds reminiscent of the IE vs. Netscape desktop battle – the inferior technology won that battle.  We also watched it happen with Novell, undoubtedly a better NOS than Windows NT, but how many users made the switch?

One final thought: users buy servers and operating systems to run applications that solve business problems.  One of the biggest problems Novell faced, IMO, was that it was a file server and not an application platform.  If I am Microsoft, I break out the Netscape and Novell playbook and go to work.

Friday morning commute…

So I commute into Manhattan just about every day from New Jersey; other than the fairly standard New Jersey Transit delays and overcrowded trains, most days are fairly uneventful. This morning was not uneventful. I arrived at the station at 6 AM and hopped on the Hoboken express train (I do this often depending on when I get to the station, and transfer in Newark to a train bound for NY Penn Station). I got on the train, secured a decent seat and broke out my laptop – thus far a fairly mundane morning. Completed a few open tasks leftover from the previous day – again, a fairly normal morning train ride. Once I completed a few tasks I spent the rest of the time stumbling the internet (I suggest the Firefox plugin). As the train pulled into Newark Penn Station I closed my laptop and reached under my legs to grab my backpack. As I tried to lift the backpack onto the seat it was seemingly tethered to the floor (I know this seems really odd); upon further examination it appeared that one of the dangling straps from the backpack had made its way into a vent slot on the floor of the train. The strap was stuck, really stuck. Needless to say I did not have a pair of scissors, a pocket knife or my trusty Leatherman (just kidding, I don’t carry a Leatherman) to cut the strap and release my bag, so I was forced to stay on the train, and I missed my connection in Newark as I attempted to wrestle my bag from the grasp of New Jersey Transit. After 2 to 3 minutes of tugging, yanking and pulling the bag was free; the strap was completely destroyed, as if it had been eaten by the train’s ventilation system. I guess the moral of the story here is use the overhead storage – the floor of NJT trains is a dangerous place. Hopefully tonight on the way home I can grab some pictures of the vents and put together a pictorial follow-up post. Not a good start to my day.

Checking in on two Open Source storage projects

Most of you have probably heard of Amanda (Advanced Maryland Automatic Network Disk Archiver), the backup utility originally developed at the University of Maryland.  Zmanda took the popular Amanda Open Source project and supercharged it, targeting the SMB space as a lower-cost alternative to the traditional options; I think they may be onto something.  The Open Source project that really intrigues me is Cleversafe.  The concept is extremely interesting in an era where disaster recovery is at the forefront of every user’s mind.  I can see a number of applications for Cleversafe: today we have RAID (Redundant Array of Independent Disks), sync and async remote mirroring, and RAIN (Redundant Array of Independent Nodes), and Cleversafe may be onto what I would call RAIS (Redundant Array of Independent Sites), the ultimate in protection for the geographically dispersed enterprise.  I will keep a close watch on the progress of the Cleversafe project, and I recommend checking out both Zmanda and Cleversafe.
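To illustrate the RAIS idea in the simplest possible terms, here is a toy sketch using single-parity XOR dispersal (far simpler than Cleversafe’s actual information dispersal algorithms) showing how data sliced across independent sites can survive the loss of any one site:

```python
from functools import reduce

# Toy "RAIS" illustration: slice data round-robin across N sites and store an
# XOR parity slice at one more site, so the loss of any single site can be
# recovered.  Cleversafe's real information dispersal is k-of-n and far more
# sophisticated; this only demonstrates the basic idea.

def xor_bytes(*chunks):
    """Byte-wise XOR of equal-length byte strings."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*chunks))

def disperse(data: bytes, sites: int = 3):
    """Split data into `sites` equal-length slices plus one parity slice."""
    slices = [data[i::sites] for i in range(sites)]
    width = max(len(s) for s in slices)
    slices = [s.ljust(width, b"\x00") for s in slices]   # pad to equal length
    return slices, xor_bytes(*slices)

def rebuild(slices, parity, lost):
    """Recover the slice from the lost site using parity plus the survivors."""
    survivors = [s for i, s in enumerate(slices) if i != lost]
    return xor_bytes(parity, *survivors)

slices, parity = disperse(b"replicate me across independent sites")
lost = 1                                                 # pretend site 1 is offline
assert rebuild(slices, parity, lost) == slices[lost]
print("slice from the lost site was rebuilt successfully")
```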