Recent Email

I recently sent an internal Email with some of my favorite educational (technology centric) web links.  Here is a copy for your enjoyment.

Looking for good reading material?  Here are my suggestions:

If you like the content that Storage magazine has to offer, save a tree and look at these online publications from TechTarget:

If you want to absorb the most content in the shortest amount of time, I highly recommend podcasts….  These are a few of my favorites:

Other great online resources:

How to increase your computer aptitude – must-watch documentaries (all can be downloaded to your iPod)!

Understand the Open Source movement:

Hacker sub-culture:

A site that deserves its own category:

.edu:

  • One of my favorites – http://webcast.rice.edu/webcast.php?action=details&event=196

Professional Societies:

Holiday perspective….

Happy Holidays…. This holiday season proved to be both enjoyable and exhausting.  I received a few quick-read self-help / motivational books this holiday season.  I finished one today entitled It’s Not How Good You Are, It’s How Good You Want To Be – by Paul Arden.  The book is a quick read aimed at provoking thought to help you be the best you that you can be.  Like the "bible" (as so many refer to it on Amazon), I don’t think the ideas conveyed are meant to be taken literally but rather absorbed, digested and interpreted in a way that helps you climb higher.

In the world of technology we spend so much time learning…. formulas, calculations, etc…  Many of the concepts are apropos and really got me thinking, so I thought I would share them with you.

It’s Wrong To Be Right
Being right is based upon knowledge and experience and is often provable.
Knowledge comes from the past, so it’s safe.  It is also out of date.  It’s the opposite of originality.
Experience is built from solutions to old situations and problems.  The old situations are probably different from the present ones, so that old solutions will have to be bent to fit new problems (and possibly fit badly).  Also the likelihood is that, if you’ve got the experience, you’ll probably use it.
This is lazy.
Experience is the opposite of being creative.
If you can prove you’re right, you’re set in concrete.  You cannot move with the times or with other people.
Being right is also being boring.  Your mind is closed.  You are not open to new ideas.  You are rooted in your own rightness, which is arrogant.  Arrogance is a valuable tool but only if used sparingly.
Worst of all, being right has a tone of morality about it.  To be anything else sounds weak or fallible, and people who are right would hate to be thought fallible.
So:  it’s wrong to be right, because people who are right are rooted in the past, rigid-minded, dull and smug.
There’s no talking to them.

It’s Right To Be Wrong
Start being wrong and suddenly anything is possible.
You’re no longer trying to be infallible.
You’re in the unknown.  There is no way of knowing what can happen, but there’s more chance of it being amazing than if you try to be right.
Of course, being wrong is a risk.
People worry about suggesting stupid ideas because of what others will think.
You will have been in meetings where new thinking has been called for, at your original suggestion.
Instead of saying, ‘That’s the kind of suggestion that leads us to a novel solution’, the room goes quiet, they look up to the ceiling, roll their eyes and return to the discussion.
Risks are a measure of people.  People who won’t take them are trying to preserve what they have.
People who do take them often end up having more.
Some risks have a future, and some people call them wrong.  But being right may be like walking backwards proving where you’ve been.
Being wrong isn’t in the future, or the past.
Being wrong isn’t anywhere but being here.
Best place to be, eh?

Do Not Try To Win Awards
Nearly everybody likes to win awards.
Awards create glamour and glamour creates income.
But be aware.
Awards are judged in committee by consensus of what is known.
In other words, what is in fashion.
But originality can’t be fashionable, because it hasn’t yet had the approval of the committee.
Don’t try to follow fashion.
Be true to your subject and you will be far more likely to create something that is timeless.
That’s where the true art lies.

The moral is think for yourself, don’t be afraid to be wrong or unfashionable, be creative, innovate, succeed at being all you can be and be humble!

I will finish up with this final passage:

Do Not Covet Your Ideas
Give away everything you know and more will come back to you.
You will remember from school other students preventing you from seeing their answers by placing their arm around their exercise book or exam paper.
It is the same at work, people are secretive with ideas.  ‘Don’t tell them that, they’ll take the credit for it.’
The problem with hoarding is you end up living off your reserves.  Eventually you will become stale.
If you give away everything you have, you are left with nothing.  This forces you to look, to be aware, to replenish.
Somehow the more you give away the more comes back to you.
Ideas are open knowledge.  Don’t claim ownership.
They’re not your ideas anyway, they’re someone else’s.  They are out there floating by on the ether.
You just have to put yourself in a frame of mind to pick them up.

Happy Holidays!  A little more food for thought before you finalize the New Year’s resolution 🙂

Interesting perspective…. Got me thinking….

Just finished reading Tony Asaro’s iSCSI – What Happens Next? post, and the concept of VMware purchasing LeftHand Networks is an interesting one.  Let’s review the current situation: EMC owns 90% of VMware, everyone willingly or reluctantly knows that VMware is crushing it in the marketplace, and the virtualization craze is truly one of those things in our industry that rolls around every 20 years.  VMware’s business model, value prop, etc… preclude them from any and all affinities to a particular hardware vendor.  This may seem interesting seeing how EMC owns 90% of the company, but the reality is that while VMware was a phenomenal acquisition for EMC from both an access-to-intellectual-property and a shareholder-value perspective, VMware continues to be a major catalyst in the commoditization of the hardware market.  EMC has been a major beneficiary of slow commoditization in the enterprise storage hardware market for years (I should note that EMC is a very, very large software player, but the hardware platform is still a critical component of the EMC business model); VMware threatens this space.

Now look at LeftHand Networks, who have morphed their business and aligned themselves with the virtualization movement.  A purchase of a player such as LeftHand may allow VMware to develop an end-to-end agnostic solution that includes features that have traditionally been handled at the array level, such as replication.  I should also note that Celerra, Centera, CLARiiON, Avamar, etc…. – the list goes on and on – are nothing more than software running on a hardened platform.  Many of these have insatiable appetites for horsepower, so it remains to be seen if they can successfully move to a hardware-agnostic model; additionally, support becomes increasingly difficult and QoS may suffer in favor of flexibility and cost.  Is this the right model for everyone?  Probably not, but in a large portion of the market where VMware is widely adopted this is a tradeoff most are willing to make.

Why LeftHand vs. the others?  Well, I think LeftHand has done a good job publicizing their virtual approach, and the release of a LeftHand Virtual Appliance demonstrates a conscious alignment.  Finally, LeftHand is large enough to validate the technology but small enough to be a blip on the radar of VMware’s hardware partners, in particular IBM and HP.  The acquisition of LeftHand by HP or IBM could potentially cause even bigger problems for EMC.  If I were EMC I would look to either move the Celerra code to a virtual appliance quickly or acquire someone; being first in this market could be huge.  Only time will tell what happens in this market, but it should be interesting to watch.

The space is getting annoying…

I have not gone off on this one before and I am in the mood to write, so I thought I would rant about two of my favorite topics. I am sure you are wondering what they are, so here you go:

  • The VAR aka “Valueless Annoying Reseller”
  • The PS aka “Please Stop” only agnostic partner

The VAR or Value Added Reseller… IMO is an acronym that should be reserved for niche players who focus on a specific technology segment. Why? A VAR should bring some semblance of tangible value to a specific market (technological, geographical, intellectual, etc…) that outshines the manufacturer. This in no way states that there cannot be a super VAR, a VAR that displays superior competence in a number of technology disciplines. The value of the VAR typically presents itself in the form of practical application of the technology and ultimately the development of some level of intellectual property built on the foundation provided by the OEM. The VAR to me should represent a field-based integrator rich in engineering talent. A VAR should bring together technology, process, best practices and intellect to develop offerings that differentiate a solution in the marketplace. Where my annoyance mostly lies is with the “expert” VAR that represents 5000 products and employs 50 people. You don’t need to be a rocket scientist to figure out that this “VAR” cannot possibly be a subject matter expert in all 5000 products. If the OEM who employs tens of thousands struggles to understand, test and integrate the hundreds of products that they manufacture, it would be moronic to believe that the VAR who reps those hundreds of products along with thousands of other products has it covered. The true VAR is focused; they understand practically what can be delivered within a finite product set and are focused on leveraging these technologies to solve business problems. The reality is the VAR market has become what I like to call the “smorgasbord sale” when in reality it should be the “sit-down sale with French service”.

Now onto the even more annoying “PS only agnostic partner”. The pitch here is that because this 50-person company only provides services and they don’t care who you source the product from, they have some level of agnosticism. The answer is that they do; the problem is they also have skills that are wide, not deep – often the jack of all trades, master of none – and certifications trump practical experience in this space. The concept of owning a business solution end-to-end is a nearly impossible feat with this partner. Experts in the PS (Please Stop) regardless of the platform? IMO the “PS” agnostic partner is dangerous because unlike the VAR (Valueless Annoying Reseller), who is looking to be a fulfillment channel (no risk here, low-cost provider of product – nothing more), the PS partner is looking to deliver business solutions. The words “agnostic”, “engineer” and “technology” should never be uttered in the same breath – technologists are some of the most opinionated people on the planet, so it just does not work for me. If you don’t believe me just take a look around the blogosphere to see just how opinionated we are; it just is not possible to do this day in and day out and not formulate an opinion. A deep understanding of the physical infrastructure is pretty important when architecting a solution, and even more important when implementing the architecture. Bottom line: the more practical experience you have, the more apt you are to steer clear of problem architectures and save the customer time and money.

Finally, last night I was lying in bed thinking about the real value of the VAR. Having worked for both the OEM and the VAR, here is what I have come up with. A good OEM puts their best and brightest into the product development cycle; this usually limits the amount of time they can spend solving unique customer problems. A good VAR has their best and brightest on the front lines working with customers and leveraging their deep skill set in a focused technology to solve business problems. A good PS organization is focused on business process and not technology (i.e. – Accenture, CSC, BearingPoint, etc…); these companies are typically segmented by vertical markets (i.e. – Financial Services, Life Sciences, etc…) because their value is in coupling the business process with technology. Another word that accurately depicts the PS-focused organization is scale; these businesses are built to be able to throw people at a problem. PS-only organizations who do not adhere to the rules that I have outlined above IMO are more consistent with staff augmentation and NOT professional services.

Oracle Storage Guy: Direct NFS on EMC NAS

I have been chomping at the bit to test VMware on dNFS on EMC NAS for a couple of reasons.  A number of my customers who are looking at EMC NAS, in particular the NS20, would like to consolidate storage, servers, file services, etc… onto a unified platform and leverage a single replication technology like Celerra Replicator.  dNFS may offer this possibility: .vmdks can now reside on an NFS volume, CIFS shares can be consolidated to the NS20, and all of it can be replicated with Celerra Replicator.  The only downside to this solution that I can see right now is that the replicated volumes will be crash-consistent copies, but I think with some VMware scripting even this concern can be addressed.  I hope to stand this configuration up in the lab in the next couple of weeks, so I should have more detail and a better idea of its viability shortly.  You may be wondering why this post is entitled Oracle Storage Guy…… the answer is I was searching the blogosphere for an unbiased opinion and some performance metrics for VMware and dNFS, and this was the blog that I stumbled upon.
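As a rough illustration of the kind of VMware scripting I have in mind, here is a minimal sketch that takes a quiesced snapshot of each guest before a Celerra Replicator update and removes it afterwards, so the replicated .vmdks are closer to application-consistent than crash-consistent.  This assumes the ESX 3.x service console’s vmware-cmd utility and uses hypothetical .vmx paths; the exact snapshot arguments can vary by ESX release, so treat the invocation as something to verify rather than a recipe.

```python
# Hypothetical sketch: quiesce VMs before a Celerra Replicator update so the
# replicated .vmdks are application-consistent rather than crash-consistent.
# Assumes the ESX 3.x service console "vmware-cmd" utility; verify the exact
# snapshot arguments against your ESX release before relying on this.
import subprocess

VMX_PATHS = [
    "/vmfs/volumes/nfs_datastore/vm1/vm1.vmx",   # hypothetical paths
    "/vmfs/volumes/nfs_datastore/vm2/vm2.vmx",
]

def snapshot(vmx, quiesce=True, include_memory=False):
    subprocess.check_call([
        "vmware-cmd", vmx, "createsnapshot", "pre-replication",
        "taken before Celerra Replicator update",
        "1" if quiesce else "0",
        "1" if include_memory else "0",
    ])

def remove_snapshots(vmx):
    subprocess.check_call(["vmware-cmd", vmx, "removesnapshots"])

if __name__ == "__main__":
    for vmx in VMX_PATHS:
        snapshot(vmx)
    # ... trigger the Celerra Replicator refresh here (array-side step) ...
    for vmx in VMX_PATHS:
        remove_snapshots(vmx)
```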

The performance numbers I have seen for VMware on dNFS come very close to the numbers I have seen for iSCSI; both technologies offer benefits, but for the use case I mention above dNFS may become very compelling.  I recommend reading the post Oracle Storage Guy: Direct NFS on EMC NAS – it offers some great commentary on the performance characteristics and benefits of dNFS.

How things have changed…

I started composing this blog on my train ride up to Boston from NYC on Tuesday night; I was sure I would actually finish it later, which is why I mention this (it is now Friday and I am finally completing this blog on my train ride into NYC – correction, it is now Tuesday afternoon and I am finally finishing it; I was sidetracked by a few other activities and a more factual blog – worked on it on Friday but did not finish).  The train ride between NY and Boston provides plenty of captive time to catch up on industry happenings.

I am a fairly regular reader of Dave Hitz’s blog – great content (since I will most likely not drive a Tesla anytime in the near future, I need to read Dave’s blog to glean some insight), and I also happen to think that he is a very intelligent individual – not alone there.  Also, from time to time when I am bored I read Jonathan Schwartz’s blog; he is a bit maniacal IMO, but there are some good tidbits which can be gleaned from the piles and piles of rhetoric – for instance, reporting earnings on your blog, c’mon, sometimes it is just a bit over the top.  I also wish someone would create a “Java sucks, kill it now” petition; I would be more than happy to be the first person to sign it – sometimes portability is just not worth the anguish (with that said, SUN has done an incredible job proliferating this utterly painful technology).  Tonight I read both Dave’s and Jonathan’s blogs hoping to gain some additional insight into the litigation saga.  BTW – .NET and Mono provide a pretty decent solution for a good-looking, decent-performing, portable, etc… alternative to Java.  Now I am off on a tangent: as a long-time Linux user and long-time VMware user, can someone please explain to me why the hell VMware would remove Linux support for the ESX console in ESX 3.x?  I understand that the bulk of the desktop users are running Windows, so Linux support may seem insignificant, but I should remind everyone that Linux users needing to run Windows were some of the early adopters of VMware’s technology – show us some love.  The sad part is that the VMware console is built on .NET; some planning and Mono compliance would have given them portability to multiple platforms.  Anyway, thought I would throw that in there.  Onward….

The litigation situation between SUN and NetApp is what prompted this unorganized stream of consciousness.  It got me thinking about how much innovation has slowed due to the need to “litigate vs. innovate” (taken from Schwartz’s blog – obviously written during one of his cosmic moments of consciousness).  SUN has changed a lot from the days of Khosla, Bechtolsheim (happy to see him back), Joy and McNealy.

I really enjoy technology history for a couple of reasons: it is factual, finite and tangible, and it is well documented with little speculation over what actually happened.  Historical accounts of technology lack the personal opinions that taint world history, religious history and most other historical happenings.  Why is this relevant?  You will see in a moment – hopefully…

If I were to mention the names Dan Bricklin and Bob Frankston, most people would respond with a blank stare and possibly a “who?”  In fact, Dan Bricklin and Bob Frankston are two of the greatest innovators in personal computing history; they wrote an application called VisiCalc which was originally released for the Apple II in 1979.  Interestingly enough, the creators of VisiCalc did not pursue a patent for their work; while today I am sure that Dan and Bob would file for a patent, back in the day almost no one filed for patents related to software inventions.  A few other software inventions not protected by patent law are word wrapping, cut and paste, the word processing ruler, sorting and compression algorithms, hypertext linking, and compiler techniques.  Could you imagine how innovation would have slowed if these technologies had all been patented…  The ability to leverage each other’s inventions dramatically accelerates the innovation cycle.  The open source community is proof positive that this does work; the problem is that so many of the market leaders (e.g. – Microsoft) would rather protect their market share by locking the doors rather than just being the best and forcing themselves to out-innovate their competition.  It is a complex problem and not one that I have an answer for.  With that said, I would like choice; I would like an open document format that works seamlessly across multiple word processors, etc…  Stifling competitive innovation through lockout is not the right answer.

Another name that most would struggle with is Gary Kildall, one of the key pioneers of the PC industry who was “upstaged” by Microsoft.  Gary Kildall was a hobbyist and the creator of the MS-DOS predecessor CP/M.  Back in the day when Microsoft was a small shop in Albuquerque, New Mexico, developing BASIC for the Altair, Gary Kildall was in Pacific Grove, California, running a company called Digital Research and writing a hobbyist operating system called CP/M.  CP/M would serve as the model for the code that eventually became MS-DOS and propelled Microsoft to heights beyond their wildest dreams.

While, arguably, the developers of the modern operating system and the modern spreadsheet application stood by as Microsoft and Lotus became multi-billion-dollar companies and continued to innovate, today two industry giants like NetApp and SUN would rather take a position just to take a position…  The sad part is that corporations have so much control today that the Bricklins, Frankstons and Kildalls of today – the Torvalds, Raymonds, etc… – have a significantly larger hill to climb.

Sometimes I actually feel pain when I read Jonathan Schwartz’s blog.  With that said: WAFL, ZFS, who cares… move on… NetApp, ZFS is not even close to a threat; SUN is not and never will be a storage company.  SUN, how about spending less time pulling patents and reviewing them to see what you can litigate against.  I just wish someone would focus their time, money and energy on building something new and innovative rather than trying to keep current technology alive by boxing each other out.

I for sure do not know everything but I do know that the rate of development and innovation has slowed dramatically over the past 30 years.

The Cache Effect

Following a fit of rage last night after I inadvertently deleted two hours’ worth of content, I have now calmed down enough to recreate the post.

The story starts out like this: a customer who recently installed an EMC CX3-80 was working on a backup project rollout, and the plan was to leverage ATA capacity in the CX3-80 as a backup-to-disk (B2D) target.  Once they rolled out the backup application they were experiencing very poor performance for the backup jobs that were running to disk; additionally, the customer did some file system copies to this particular device and the performance appeared to be just as slow.

The CX3-80 is actually a fairly large array, but for the purposes of this post I will focus on the particular ATA RAID group which was the target of the backup job and where the performance problem was identified.

I was aware that the customer only had one power rail due to some power constraints in their current data center.  The plan was to power up the CX using just the A side power until they could decommission some equipment and power on the B side.  My initial thought was that cache was the culprit, but I wanted to investigate further before drawing a conclusion.

My first step was to log into the system and validate that cache was actually disabled, which it was.  This was due to the fact that the SPS (standby power supply) only had one power feed and the batteries were not charging; in this case write-back cache is disabled to protect against potential data loss.  Once I validated that cache was in fact disabled, I thought I would take a scientific approach to resolving the issue by baselining the performance without cache, then enabling cache and running the performance test again.

The ATA RAID group which I was testing on was configured as a 15-drive RAID 5 (R5) group with 5 LUNs (50 – 54) of ~ 2 TB in size.

Figure 1:  Physical disk layout


My testing was run against drive F:, which is LUN 50, residing on the 15-drive R5 group depicted above.  LUNs 51, 52, 53 and 54 were not being used, so the RAID group was only servicing the benchmark I was running against LUN 50.
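For reference, the benchmark itself was nothing exotic – a sequential write test swept across block sizes.  A rough stand-in in Python (not the actual tool I used; the file path and sizes below are purely illustrative) would look something like this:

```python
# Rough stand-in for the benchmark used here (the actual tool isn't named in
# this post): write a test file sequentially at several block sizes and report
# throughput. Run it against the LUN under test (e.g. drive F:).
import os
import time

TARGET = r"F:\cache_test.bin"        # hypothetical path on LUN 50
TOTAL_MB = 512                       # amount written per block size

def write_test(block_kb):
    block = os.urandom(block_kb * 1024)
    iterations = (TOTAL_MB * 1024) // block_kb
    start = time.time()
    with open(TARGET, "wb", buffering=0) as f:   # bypass Python-level buffering
        for _ in range(iterations):
            f.write(block)
        os.fsync(f.fileno())         # push data to the array, not just the OS cache
    elapsed = time.time() - start
    return TOTAL_MB / elapsed        # MB/s

for kb in (4, 8, 16, 32, 64):
    print(f"{kb:>2} KB blocks: {write_test(kb):6.1f} MB/s")
```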

Figure 2:  Benchmark results before cache was enabled


As you can see, the write performance is abysmal.  I will focus on the 64k test as we progress through the rest of this blog.  You will see above that the 64k test only pushed ~ 4.6 MB/s – very poor performance for a 15-drive stripe.  I have a theory for why this is, but I will get to that later in the post.

Before cache could be enabled we needed to power on the second power supply on the SPS; this was done by plugging the B power supply on the SPS into the A side power rail.  Once this was complete and the SPS battery was charged, cache was enabled on the CX and the benchmark was run a second time.

Figure 3:  Benchmark results post cache being enabled (Note the scale on this chart differs from the above chart)


As you can see, the performance for 64k writes increased from ~ 4.6 MB/s to ~ 160.9 MB/s.  I have to admit I would not have expected write cache to have this dramatic of an effect.

After thinking about it for a while I formulated some theories that I hope to fully prove out in the near future.  I believe the performance characteristics that presented themselves in this particular situation were the result of a combination of things: the 15-drive stripe width and the disabled cache together created the huge gap in performance.

Let me explain some RAID basics so hopefully the explanation will become a bit clearer.

A RAID group has two key components that we need to be concerned with for the purposes of this discussion (a small mapping sketch follows the list):

  1. Stripe width – typically synonymous with the number of drives in the RAID group
  2. Stripe depth – the size of the write that the controller performs to one spindle before it round-robins to the next physical spindle (depicted in Figure 4)
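Here is the mapping sketch: it shows how a logical byte offset lands on a spindle given a stripe depth and stripe width.  Parity rotation is ignored for simplicity, and the numbers are illustrative rather than this array’s exact settings.

```python
# Sketch: map a logical byte offset to a physical spindle and stripe, given a
# stripe depth (element size) and stripe width. Parity rotation is ignored and
# the values below are illustrative, not the customer's actual settings.

STRIPE_DEPTH = 64 * 1024   # bytes written to one disk before moving to the next
STRIPE_WIDTH = 15          # number of spindles in the RAID group

def locate(offset_bytes):
    """Return (disk_index, stripe_number) for a logical offset."""
    element = offset_bytes // STRIPE_DEPTH      # which stripe element overall
    disk = element % STRIPE_WIDTH               # round-robin across spindles
    stripe = element // STRIPE_WIDTH            # how far down each spindle
    return disk, stripe

if __name__ == "__main__":
    for off in (0, 64 * 1024, 15 * 64 * 1024):
        print(off, locate(off))
```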

Figure 4: Stripe Depth


The next concept is write cache, specifically two features of write cache known as write-back cache and write-gathering cache.

First let’s examine the I/O pattern without the use of cache.  Figure 5 depicts a typical 16k I/O on an array with an 8k stripe depth and a 4-drive stripe width, with no write cache.

Figure 5:  Array with no write cache


The effect of no write cache is twofold.  First, there is no write-back, so the I/O needs to be acknowledged by the physical disk, which is obviously much slower than an ack from memory.  Second, because there is no write-gathering, full-stripe writes cannot be facilitated, which means more back-end I/O operations – affectionately referred to as the read-modify-write penalty.
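A quick back-of-the-envelope sketch of the penalty (simplified – real controllers have further optimizations, so treat the counts as illustrative):

```python
# Sketch: back-end disk operations for a RAID 5 write, with and without
# write-gathering. Illustrative only; real controllers optimize further.

def raid5_backend_ios(write_kb, stripe_depth_kb, width, full_stripe=False):
    """Rough count of physical disk I/Os needed to commit one host write."""
    data_disks = width - 1                       # one element's worth of parity per stripe
    if full_stripe:
        # Write-gathering coalesces enough data to cover whole stripes:
        # write every data element plus the parity element, no reads needed.
        stripes = -(-write_kb // (data_disks * stripe_depth_kb))   # ceiling division
        return stripes * width                   # (data elements + parity) per stripe
    # Partial-stripe write: classic read-modify-write, roughly 4 I/Os
    # (read old data, read old parity, write new data, write new parity)
    # for every stripe element touched.
    elements = -(-write_kb // stripe_depth_kb)
    return elements * 4

# Against the 15-drive group with an assumed 64 KB stripe depth:
print(raid5_backend_ios(64, 64, 15))                         # partial stripe: 4 disk I/Os for 64 KB
print(raid5_backend_ios(14 * 64, 64, 15, full_stripe=True))  # full stripe: 15 disk I/Os for 896 KB
```

In other words, a small 64k write that has to be read-modify-written costs roughly four back-end I/Os for 64 KB of payload, while a gathered full-stripe write on the same group moves 896 KB for 15 back-end I/Os – which is why write-gathering matters so much on a wide stripe.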

Now let’s examine the same configuration with write cache enabled, depicted in Figure 6.

Figure 6:  Array with write cache enabled


Here you will note that acks are sent back to the host before the data is written to the physical spindles, which dramatically improves performance.  Second, write-gathering cache is used to facilitate full-stripe writes, which negates the read-modify-write penalty.

Finally, my conclusion is that the loss of write cache could be somewhat negated by reducing stripe widths from 15 drives to 3 or 4 drives and creating a metaLUN to accommodate larger LUN sizes.  With a 15-drive RAID group the read-modify-write penalty can be severe, as I believe we have seen in Figure 2.  This theory needs to be tested, which I hope to do in the near future.  Obviously write-back cache also had an impact, but I am not sure that it was as important as write-gathering in this case.  I could probably have tuned the stripe depth and file system I/O size to improve the efficiency without cache as well.
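To put some rough numbers behind that conclusion, here is how stripe width changes the amount of data that has to be gathered before a full-stripe write can be issued.  A 64 KB stripe element is assumed (a common CLARiiON default) – not something I verified on this particular array.

```python
# Sketch: how stripe width changes the amount of data needed for one
# full-stripe RAID 5 write. A 64 KB stripe element is assumed, not measured.

ELEMENT_KB = 64

def full_stripe_kb(width):
    """Data needed for one full-stripe RAID 5 write (parity element excluded)."""
    return (width - 1) * ELEMENT_KB

for width in (15, 5, 4, 3):
    print(f"{width:>2} drives: {full_stripe_kb(width):>4} KB per full-stripe write")
```

The narrower the group, the easier it is for write-gathering (or a well-tuned file system I/O size) to hit full stripes, and a metaLUN can still present the larger capacity to the host.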

Dell to Pay $1.4B for EqualLogic – Information Lifecycle Management (ILM) News Analysis – Byte and Switch

Dell to Pay $1.4B for EqualLogic – Information Lifecycle Management (ILM) News Analysis – Byte and Switch.

Interesting acquisition by DELL… I am attending the EMC Partner Advocate Conference today and tomorrow.  DELL was added to the agenda as a key discussion topic – no surprise there.  I am sure there will be some continued discussion on this over the next couple of days and I will pass on information where possible.  The overwhelming message thus far is “business as usual”: DELL will leverage EqualLogic as an extension of the PowerVault series, which has always been based on LSI controller technology (not EMC), and this will not impact the market where DELL has positioned EMC?  EMC has recently re-inked the relationship with DELL and expects the relationship to continue to grow.

If you read this article it certainly sounds like DELL will try to take the PowerVault up-market.  I have to believe that uncertainty in the field and other factors will in fact lead to a more heated co-opetition arrangement between DELL and EMC.

“…Specifically, Dell is planning to build EqualLogic’s technology into its 1000, 3000, and 3000i PowerVault disk arrays. Arterbury would not go into specific details about this integration plan, but he confirmed that EqualLogic’s PS Series hardware could form the basis of a new high-end PowerVault device…”

I have to believe this will erode some portion of what I understand to be ~ 600 – 700 MM in DELL-generated commercial-space revenue.  In total DELL accounts for ~ 1.4 billion in EMC annual revenues, and this number has been growing.  The big question is: will DELL begin to account for less revenue, which provides opportunity for smaller integrators, or will some portion of this revenue shift to EqualLogic?

I dunno, I remember the days of the EMC & HP relationship when 1 billion in annual revenue disappeared overnight; the EMC of today is far more diversified, so the effects of this acquisition are far less magnified but still noteworthy.

Bare Metal Recovery

I recently received a comment on my demonstration of W2K3 recovery using EMC Legato NetWorker. The question raised was: does Legato support true bare metal recovery (BMR) for both Windows and UNIX – implying that a system can be restored without actually reinstalling a basic operating system as I demonstrate in the video tutorial. This is a multi-part answer and I will do my best to answer each question, provide some insight and make some recommendations.

For Windows 2003 and XP, Microsoft and EMC NetWorker support something called Automated System Recovery (ASR). Unfortunately this is the supported BMR process for EMC NetWorker on Windows. IMO this is not a production-viable bare metal recovery process due to the need to use a floppy – this is contrary to Microsoft’s opinion… floppies? What are they thinking? For UNIX there is not a supported NetWorker BMR process. The process of installing a base OS and the NetWorker client and then initiating a full system restore actually works better on UNIX than it does on Windows. Most UNIX distributions have rich support for network OS boot and installation (i.e. – PXE boot, Solaris JumpStart, etc…); for this reason I do not see many UNIX environments investing in BMR technologies. With that said, if BMR is a concern for a heterogeneous UNIX and Windows environment, I would recommend investigating products such as EMC HomeBase or Unitrends (assuming you are looking for commercial products – there are good open source alternatives which I will touch on later in the post).

In an all-Windows environment the options dramatically increase. EMC recently acquired a company called Indigo Stone and a product called HomeBase to facilitate true BMR. There are a number of other products in the market; personally I have used Acronis True Image, Unitrends, Ghost and a few others. Depending on your goal, all of these products have pluses and minuses. The problems with traditional BMR products are usually HAL related – I have not consistently moved to disparate hardware platforms with BMR. BMR products attempt to alter the HAL so that a W2K3 image taken from an AMD box with 4 GB of RAM can be restored on an Intel box with 8 GB of RAM. While BMR has improved tremendously over the years, IMO it is far from perfect.

Many organizations looking for a quick recovery method in the event of a host failure are looking at virtualization technology. A process called P2V (physical-to-virtual) can be performed to create virtual images of physical servers. These virtual images abstract the physical hardware and are highly portable and easy to maintain. My personal preference for facilitating this process is a product called PlateSpin PowerConvert; with this said, I am also investigating how HomeBase would facilitate this process. If you just need an image backup of a system to speed time to recovery without the need for hardware independence, I would look at a couple of open source options. The Personal Backup Appliance is a virtual appliance that will get you imaging your systems as quickly as possible. I have also used Partition Image for Linux successfully. If you are looking for other options you can find a comprehensive list here: http://www.thefreecountry.com/utilities/backupandimage.shtml . Hope this post was helpful, I did not want to bury it as a comment.

California Co-Lo(s)


I was talking with a customer last week about power requirements in their co-location facility. I am sure it does not come as a shock to most that the biggest problem and cost facing data centers is not space but rather power. Despite all of our green efforts, computer equipment continues to suck a massive amount of juice and I don’t think there is a realistic solution to this age-old problem on the near-term horizon. Many data centers are aging and require a complete and costly retooling – enter California Co-Lo(s), a wholly owned subsidiary of California Closets – obviously I am joking, but is this not a catchy idea :). At California Co-Lo(s) we have our eye on designing the green data center of tomorrow while incorporating organization, aesthetics and a sense of style to accommodate even the most discriminating Metrotechie. Our data center architects are certified Feng Shui practitioners, so not only will your data center be green, economical, organized and stylish, but California Co-Lo(s) invokes a sense of peace amidst the drab white backdrop, dull hum of equipment, blinking equipment lights, flickering fluorescent lighting, arctic temperatures and halon systems that comprise what we call the data center. Reach out and call one of our Co-Lo counselors. Could you imagine – LOL….