App that provides dramatic productivity improvements (for parents)

So this may seem like a strange post, as most people will think that I am going to be talking about an IDE, a RAD tool, a CRM application or some sort of text-to-speech processor. Regardless of what you are expecting, I can almost guarantee you are expecting something a little sexier than what you are about to see (especially if you are not a parent).

I think this app is so useful that I am posting it not only to my appoftheday.org blog but also to my gotitsolutions.org blog, because it is that good.

Let me provide some background.  I have two wonderful little girls, a 5 year old and a 6 month old.  Anyone with children knows we have retooled the human machine (ourselves) to have a CPU that is focused on work and a coprocessor that deals with our children while we try to focus (we can flip this paradigm as well).  I have to say my time slicing skills are second to none; you learn how to work in 2 minute slices while breaking away for 30 seconds to lend some CPU cycles to an often overheating parental coprocessor.  I often read emails back later that had the same thought double typed, missing words, etc… This is because I am processing too much information; my mental programming is way off.  I have this huge array of things I need to do, things I am doing, things I am being told to do, things my kids want to do, yadda, yadda, yadda… Let’s just say that I often suffer pointer corruption, which leads to memory leaks and eventually a segmentation fault (in non-techie lingo this is known as a freak out, but this is a technical blog, hence the techie speak).

So to the point of the post.  There is a brilliant lady named Julie Aigner-Clark, the founder of The Baby Einstein Company, who makes the absolute best videos for kids under the age of one to help cool down the coprocessor (why didn’t I start filming shiny lights and hand puppets 10 years ago?).  My 5 year old will even watch the videos.  There is this great website called YouTube where you can find Baby Einstein videos as well as other great videos like Oswald, WordGirl, Hannah Montana and The Pink Panther (a few of my older daughter’s favorites).  So you are probably asking what relevance this has.  I will explain, be patient; I know how difficult this probably is because your 6 month old wants to eat and your 5 year old wants you to “Play Barbies” with her.

I am in my office trying to work and my daughter comes in; she wants me to stop what I am doing to play with her, and I attempt to stall and concentrate at the same time (very difficult).  I eventually sit her on my lap (applies to both the 6 month old and the 5 year old), open YouTube in my browser and start playing our favorite Baby Einstein or WordGirl video.  Good so far.  I pop out the video window from youtube.com, resize my Excel sheet and attempt to work.  Here is a screen shot of what I am left with:

[Screenshot: popped-out YouTube video window on the left, resized Excel sheet on the right]

So on the left my daughter(s) can sit on my lap and watch the video while I work on the spreadsheet on the right.  Now here is the issue: I only have 3/4 of the screen, which can be a little annoying, and if I need to use another app it can be a big issue.  So what is the effect of me switching windows?

[Screenshot: the video window pushed behind the newly focused application]

Oh no, the video moved to the background, and I scramble to resize the browser window to avoid a complete meltdown.  My reflexes are not that good, so I rarely accomplish the goal.

Now for the introduction of a must-have application that dramatically improves productivity, focus and sanity.  The app is called DeskPins, and it simply allows you to pin any window to the foreground.  Let’s look at a couple of examples of how I use it.

I follow the same process as before, finding a video on YouTube and popping out the video window, but now I pin the video window to the foreground.

[Screenshot: the video window pinned to the foreground with DeskPins]

Now I can maximize my spreadsheet (far better) without the video moving to the background, and I can move the video window around as needed.  I can open Firefox and not worry about losing the video to the background.

[Screenshot: maximized spreadsheet with the pinned video still on top]

The app works on 32 and 64 bit versions of Windows (I am running it on 32 bit XP, 32 bit Win 7 and 64 bit Win 7) and has become an invaluable tool for me.  Hopefully this post helps with some use case examples and helps other parents occupy their children in times of need.  Enjoy!

Hello from Cisco Live 2010

Got in yesterday (6/28/2010) and planned to attend an afternoon session, but I got hung up on a few items that required my attention.  Attendance looks pretty good; food was a bit weak this AM, but I am more of a coffee-only person in the morning so not a huge deal for me.  Internet connectivity is stellar thus far; hopefully this keeps up.  Looking forward to the sessions this week.  I am starting the week with a session entitled Mastering IP Subnetting Forever.  I will be blogging as always from the sessions I attend.  TTFN

Avamar sizing brain dump

Avamar DS18 = Utility Node + Spare Node + 16 Active Data Nodes

For a 3.3 TB Gen-3 grid (rough arithmetic sketched below):

  • Raw Capacity ~102 TB
  • Spare Node ~6 TB
  • RAID5 ~15 TB
  • Checkpoint  / GC ~28 TB
  • RAIN ~3 TB
  • Available for Active Backups ~49 TB
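
The arithmetic behind that breakdown is simple subtraction; here is a minimal Python sketch using the approximate figures above (the small gap versus the ~49 TB figure is just rounding in my notes):

```python
# Back-of-the-napkin DS18 (Gen-3, 3.3 TB nodes) capacity math.
# All figures are the approximate overheads from my notes, not exact numbers.
raw = 102.0            # ~raw TB across the grid
spare = 6.0            # spare node reserve
raid5 = 15.0           # RAID5 overhead
checkpoint_gc = 28.0   # checkpoint / garbage-collection headroom
rain = 3.0             # RAIN overhead

available = raw - spare - raid5 - checkpoint_gc - rain
print(f"~{available:.0f} TB available for active backups")  # ~50 TB (notes say ~49)
```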

RAID Configuration:

  • RAID 1 for 3.3 TB node
  • RAID 5 for 2 TB nodes
  • RAID 1 for 1 TB nodes

How to calculate the required capacity (a quick sketch follows this list):

  • Seed (initial backups) + (Daily Change × Retention in Days) + RAIN = GSAN Utilization
  • Need minimum available space for 4 checkpoints
  • 3 checkpoints are maintained by default
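
Here is a minimal Python sketch of that formula; the inputs (3 TB seed, 30 GB/day change, 30-day retention, 0.5 TB RAIN overhead) are hypothetical numbers for illustration, not from any real sizing:

```python
def gsan_utilization_tb(seed_tb, daily_change_tb, retention_days, rain_tb):
    """GSAN utilization per the formula above; all inputs in TB (my assumption)."""
    return seed_tb + daily_change_tb * retention_days + rain_tb

# Hypothetical example: 3 TB seed, 30 GB/day of change, 30-day retention, 0.5 TB RAIN
util = gsan_utilization_tb(seed_tb=3.0, daily_change_tb=0.03, retention_days=30, rain_tb=0.5)
print(f"GSAN utilization ~{util:.1f} TB")  # ~4.4 TB, plus free space for 4 checkpoints
```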

Data Gathering

Note:  Agent only vs. data store depends on the desired RTO

  • xfer_rate = Gb/hr × .70
  • data_size = total size of the data set to be backed up
  • restore_time = data_size × .65 / xfer_rate

If RTO < restore_time, then data store; else agent only (see the sketch below).
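
A rough Python sketch of that decision rule; note that I am assuming GB and GB/hr as the units (my notes say Gb/hr, so adjust if your numbers are in gigabits):

```python
def agent_or_datastore(rto_hours, data_size_gb, nominal_gb_per_hr):
    """Agent only vs. data store, per the rule of thumb above."""
    xfer_rate = nominal_gb_per_hr * 0.70          # derate nominal throughput by 30%
    restore_time = data_size_gb * 0.65 / xfer_rate
    return ("data store" if rto_hours < restore_time else "agent only"), restore_time

choice, hours = agent_or_datastore(rto_hours=8, data_size_gb=2000, nominal_gb_per_hr=150)
print(f"{choice} (estimated restore ~{hours:.1f} hrs)")  # data store (~12.4 hrs > 8 hr RTO)
```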

Always use 3.3 TB nodes when configuring unless additional nodes are required to increase the ingestion rate.

Use the default de-dupe rate unless a POC or assessment has been performed.

Sizing Considerations:

  • Data Types
    • File Systems
    • Databases
    • Large Clients > 2 TB
    • Dense File Systems (excluding EMC Celerra and NetApp)
  • Organic Growth
  • RTO
  • Replication Window
  • Maintenance Window

Non-RAIN nodes must be replicated; this includes single-node Avamar deployments and 1×2 configurations (1 utility node and 2 data store nodes, which is a non-RAIN config).

**** Remember this: as a general rule it seems that transactional databases are better suited to be backed up to Data Domain and NOT with Avamar, as the hashing of databases is generally very slow.

VMware (specifically using the VMware Storage APIs) and CIFS are well suited for Avamar.

Data save rates:

  • 100 – 150 GB/hr per avtar stream on the latest server types
    • Note:  it is possible to launch multiple avtar daemons with some tweaking, but an out-of-the-box install only launches a single avtar process.
  • VM guest backups can be slower (very scientific, I know)
  • Default assumption is that the chunk-compress-hash process runs at a rate of 100 GB/hr
    • This is the process that bottlenecks database backups (ideally it seems the avtar stream rate should match the chunk-compress-hash rate)

Scan rate:

  • ~ 1 million files per hour
    • 1 TB of file data will take about 1 hour to back up
    • 1 TB DB will take ~ 10 hours to complete (a quick window estimator follows)
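
Putting the scan and save rates together, here is a very rough window estimator; the model (file backups scan-bound, DBs bound by chunk-compress-hash) and the example numbers are my own simplification:

```python
def backup_window_hours(file_count, db_gb):
    """Rule-of-thumb backup window from the rates above: file backups are
    scan-bound (~1M files/hr), while DB backups are bound by the ~100 GB/hr
    chunk-compress-hash step, which reprocesses the full data set."""
    return file_count / 1_000_000 + db_gb / 100.0

# Hypothetical example: 3M files plus a 500 GB database
print(f"~{backup_window_hours(file_count=3_000_000, db_gb=500):.0f} hrs")  # ~8 hrs
```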

Performance:

  • 1 TB/hr per node in the grid (all file data)
  • With an 80% file (800 GB) / 20% DB (200 GB) mix, the performance level drops off to .5 TB/hr
  • E.g. – DS18 perf will be ~ 15-16 TB/hr
  • Per node ingest rate ~ 8 GB/hr

Restores:

Data Fetch Process

  • Per node assumptions
    • Chunk size 24 KB
    • Each chunk is referenced in a hash index stripe
    • Speed:
      • 5 MB/s
      • 18 GB/hr (compressed chunks)
      • 25 GB/hr (rehydrated chunks)
  • E.g. – a DS18 will restore at a rate of ~ .5 TB/hr (arithmetic sketched below)
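
Quick sanity check on that DS18 restore rate using the per-node fetch numbers above (my arithmetic, not an official figure):

```python
# Grid restore rate = per-node rehydrated fetch rate x active data nodes.
rehydrated_gb_per_hr = 25      # per data node, from the figures above
active_nodes = 16              # DS18 = utility + spare + 16 active data nodes
grid_tb_per_hr = rehydrated_gb_per_hr * active_nodes / 1000.0
print(f"~{grid_tb_per_hr:.1f} TB/hr")  # ~0.4 TB/hr, in line with the ~.5 TB/hr figure
```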

NDMP Sizing:

  • Size of the NDMP data set
  • Type of filer (Celerra or NetApp)
  • Number of volumes, file systems, qtrees
  • Size of volumes
  • Number of files per volume / file system

L-0 fulls only happen once (we don’t want to size for them).

Size for L-1 incremental which will happen in perpetuity following the completion of the L-0 full.

  • Important L-1 sizing data
    • Number of files in the L-1 backup
    • Backup window

2 Accelerator Nodes

Config   Max Files           Max Data            Max Streams
         Celerra   NetApp    Celerra   NetApp    Celerra   NetApp
6 GB     5 m       30 m      4-6 TB    4-6 TB    1-2       1-2
36 GB    40 m      60 m      8-12 TB   8-12 TB   4         4

NDMP throughput ~ 100 – 150 GB/hr

Assumed DeDupe Rates (a quick calculator is sketched after this list):

  • File data
    • Initial backup:  70% commonality (30% of the data is unique)
      • e.g. – 30% of 10 TB = 3 TB stored
    • Subsequent backups:  .3% daily change
      • e.g. – .3% of 10 TB = 30 GB stored per day
  • Database data
    • Initial backup:  35% commonality (65% of the data is unique)
      • e.g. – 65% of 10 TB = 6.5 TB stored
    • Subsequent backups:  4% daily change
      • e.g. – 4% of 10 TB = 400 GB stored per day
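
A minimal calculator built from these assumed rates; the 30-day retention in the example is hypothetical. Note how the DB numbers balloon, which supports the tip below:

```python
def stored_tb(dataset_tb, unique_fraction, daily_change, retention_days):
    """Seed (unique data on the initial backup) plus retained daily changes,
    using the assumed dedupe rates above."""
    return dataset_tb * unique_fraction + dataset_tb * daily_change * retention_days

# Hypothetical 10 TB data sets with 30-day retention:
print(stored_tb(10, 0.30, 0.003, 30))  # file data: ~3.9 TB stored
print(stored_tb(10, 0.65, 0.04, 30))   # DB data:  ~18.5 TB stored
```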

Tip:  Based on scan rate and the amount of data stored for DB backups you can see why Avamar may not be the best choice for DB backups.

NDMP Tips:

  • The Avamar NDMP accelerator node should be on the same LAN segment as the filer, and on the same switch when possible
  • No include/exclude rules are supported
  • Able to run up to 4 NDMP backups simultaneously
    • most effective with large files
    • minimum of 4 GB of memory per accelerator node per stream
    • 4 simultaneous NDMP backups are scheduled as group backups

Desktop / Laptop

Sizing:

  • Number of clients
  • Amount of data per client
    • user files
    • DB/PST files

DS18 can support ~ 5000 clients

The default number of streams per node is 18 (17 are usable; one should be reserved for restores).

That completes the brain dump.  Wish I had more but that is all for now.

VMotion Over Distance with EMC VPLEX

If you have not heard, EMC announced a product called VPLEX at EMC World 2010 this week.

Note:  I was watching and documenting at the same time so feel free to make any corrections to the data below.

What is the VPLEX recipe (what am I tasting in the session):

  • 1/4 tablespoon storage VMotion capability in a geo-dispersed deployment model
  • 1/4 tablespoon EMC Invista
  • 1/4 tablespoon V-Max engine
  • 1/4 tablespoon FAST

What you get:

  • Datacenter Load Balancing
  • Disaster avoidance datacenter maintenance
  • Zero-downtime datacenter moves

The concept of VMotion over distance is facilitated by the presentation of a VPLEX Virtual-Volume.

Some infrastructure details:

  • Up to 8 VPLEX directors per cluster
  • Peer relationship
  • 32 GB of cache per director
  • Cache coherency is maintained between the peer VPLEX clusters

VPLEX Metro is a geo dispersed cluster-pair that presents one VPLEX virtual volume across data centers.  Read I/O benefits from local reads when accessing a VPLEX virtual volume.

Requirements/Rules:

  • Implementation is synchronous
  • 100 km distance limit
  • < 5 ms of round-trip latency (see the latency sketch after this list)
  • Stretched layer-2 network (check out OTV to avoid the STP issues associated with stretched layer-2 networks)
    • shared layer-2 broadcast domain
  • Do not stretch VMware clusters between data centers
  • Use the Fixed policy with VMware NMP
  • Storage VMotion should be used for non-disruptive migrations
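
Quick sanity check on why 100 km fits comfortably within the 5 ms budget, assuming the common rule of thumb of roughly 5 microseconds per km one-way for light in fiber (switching and storage gear add latency on top of this):

```python
# Fiber propagation delay for the 100 km / < 5 ms RTT rule.
distance_km = 100
one_way_ms_per_km = 0.005          # ~5 microseconds/km, a common rule of thumb
rtt_ms = 2 * distance_km * one_way_ms_per_km
print(f"~{rtt_ms:.1f} ms RTT from propagation alone")  # ~1.0 ms of the 5 ms budget
```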

Simulation/Example:

  • 2 data centers separated by 100 km
  • Shared VM and VMotion networks
  • Shared data stores through VPLEX Metro
  • Two-node ESX cluster at each site with a single vCenter host
  • PowerPath/VE set to adaptive
  • 500 GB LUNs presented directly
  • Storage VMotion used for non-disruptive migration
  • No storage resignaturing as this is only required on ESX 3.5
  • more stuff I did not get…

Testing was performed on MOSS 2007, SQL Server, SAP and Oracle using workload simulation tools.

Test Scenarios:

  • Scenario 1:  VMotion between 2 data centers in a VPLEX environment compared to a stretched SAN
  • Result 1:  Storage VMotion followed by VMotion in a stretched SAN took approximately 25x longer than a VMotion using a shared VPLEX LUN
  • Scenario 2:  VMotion between 2 data centers with VPLEX separated by 100 km
  • Result 2:  VMotion performance was well within spec and did not impact app performance or user experience

Note:  With these requirements this technology will pretty much be relegated to enterprise-class customers with dark fibre between sites.  With that said, the technology looks pretty cool if you can afford the infrastructure.

According to The Storage Anarchist’s blog, “VPLEX comes to market with two mainstream customer references: AOL and Melbourne IT (who will be replacing their sheep farmer-endorsed product with the more applicable VPLEX)”

Check out the VPLEX/VMotion whitepaper.

EMC World 2010 – Day 3 Update #emcworld

Well, I have pretty much gotten my ass kicked by EMC certification exams this year.  Day one’s Centera exam was a big miss (kind of expected this one), and the BRS TA E20-329 was a near miss yesterday, with 28 questions of 70 on Avamar and Data Domain.  I wonder if this provides any insight on the future of Networker; considering that on the previous BURA TA the majority of the questions were Networker questions, I find this pretty telling.  This morning brought a near miss on the RecoverPoint E22-275 exam, going 1 of 7 on the Brocade SAS section (ouch), but who the heck uses Brocade fabric splitters?

Anyway, hopefully I can get a win this afternoon.  I am feeling pretty beat down, and my cold is not helping.  I wonder how much Sudafed I can take before I damage vital organs 🙂

CLARiiON FAST Cache #emcworld

  • Available later this year (2010)
  • Aligned with FAST: “Place data on the most appropriate storage resources”
    • Temporarily relocates often-used data to faster storage resources
      • Provide Flash drive performance to hottest data
      • Reduces load and improves performance of other resources
    • Fully automated application acceleration
  • Performance proposition
    • Large enough to contain a high percentage of working set over long time intervals
    • Fast enough to provide order of magnitude performance improvements
  • Traditional DRAM cache vs FAST Cache (access times in seconds)
    • DRAM cache is limited in size and very fast (~10^-9 s)
    • 15K FC disk drive ~10^-3 s
    • Flash drive ~10^-6 s
  • Requirements
    • FLARE R30 Required for FAST Cache
    • Dedicated FLASH drives
    • Native mirrored protection for read/write cache
    • Can be unprotected for read cache only
  • Implementation
    • Memory map tracks host address usage and ownership
      • 64 KB extents (not LUN movement, much more granularity)
    • All I/O flows through the FAST cache driver and memory map
      • Memory map lookup is very low impact
      • Memory map does take some DRAM space so there will be marginally less DRAM cache available (~ 1 GB of DRAM per 1 TB of FAST Cache – see the sketch at the end of this post)
    • No “FORCED FLUSHING”, so for bursty workloads that invoke traditional DRAM forced cache flushes this may help
    • A background process runs on the CX to clean up the extents
  • Benefits
    • Flash Cache read hits = Flash drive response times
    • Flash Cache write hits flush faster
    • Flash Cache hits offload HDDs
    • Lower net application response time enables higher IOPs
    • Efficient use of Flash drive technology
  • Key concept for max Flash cache benefit
    • Understand Locality of Reference
      • Total GB of actively referenced data
      • Same areas referenced over short periods and multiple times
  • What makes for a good Flash cache workload
    • Small to moderate working sets
    • High frequency of access to same chunks – rehits
    • Perf limited by disk technology not SPs
  • Profiles of common apps
    • DB OLTP/DSS
      • Oracle, MS SQL
    • Exchange
    • File Servers
  • Determine appropriate subset of LUNs for use with Fast cache

Note:  Sequential workloads are still (typically) better served by traditional rotational media  (e.g. – backup-to-disk)

  • Tools
    • FAST cache analyzer
      • Will require RBA traces for FAST cache analysis
  • Uber tiering with FAST cache plus FAST
    • DRAM cache  <-> FAST cache <-> FC <-> SATA
  • FAST Cache is a license so the CX enabler will be required (there is a bundle for both FAST and FAST cache)

Questions:

  • Are you limited to 2 TB FAST cache?  Can you have multiple FAST cache LUNs?
    • Not limited to 2 TB; it really depends on how much DRAM capacity you want to consume with the memory map
    • Limited to a single FAST cache LUN
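
One back-of-the-napkin check on the memory-map overhead mentioned above; the 2 TB cache size is just a hypothetical example pulled from the question:

```python
# Memory-map overhead from the ~1 GB DRAM per 1 TB of FAST Cache figure
# and the 64 KB extent size noted above.
fast_cache_tb = 2                                # hypothetical cache size
extents = fast_cache_tb * (1024**3) // 64        # KB of cache / 64 KB per extent
dram_gb = fast_cache_tb * 1.0                    # ~1 GB of DRAM map per TB of cache
print(f"~{extents/1e6:.1f}M extents, ~{dram_gb:.0f} GB of DRAM for the map")
# ~33.6M extents -> roughly 64 bytes of map per extent, which squares with the 1 GB/TB figure
```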

Intro to Unisphere session at #emcworld

  • Unified management platform for CLARiiON, Celerra and RecoverPoint
    • Unified look and feel across all aforementioned products
    • Support for all CLARiiON >= FLARE 19
  • Release date Q3 2010
  • Only functions at the presentation layer
    • Will not impact the CLI so scripts will not be impacted by Unisphere
  • Views
    • Dashboard
      • Unified alerting sorted by severity
      • System list in the top toolbar provides the ability to drill down from the domain level to the physical system to view details
    • System View
      • Graphical hardware depiction (finally)
    • Summary Views
      • Storage
        • Bye bye CAP reports!!!!!  (thank you)
      • Replication
    • Note:  View Blocks are widgets placed on the screen that aggregate data, graphically displaying utilization, capacity, etc…
      • Ability to customize views; these views are tied to the client, not the user, so they will stay on the client rather than move with the user
  • Right-click provides context menu similar to what is currently available in Navisphere
  • Array data is cached locally on Unisphere client
    • This will hopefully help out performance
    • Data collection from array is time stamped so you can ensure you are working with the correct data set
    • A refresh (manual) needs to be performed to query the array and refresh the Unisphere cache
    • Once the data is cached it can be manipulated
  • Context sensitive help and actions
  • Replication Manager (RM) and USM (Unisphere Service Manager) integration (via link and launch)
    • If the apps are not installed, it launches PowerLink and grabs RM or USM
    • Performs the install
    • Launches the app
  • USM replaces NST and unifies the ability to perform service tasks on CLARiiON or Celerra
    • USM adds the ability to track FCOs, Support Advisories, etc… via RSS feeds
    • Also provides the ability to launch to the proper location to research things (e.g. – EMC Support Forums), USM remains as the presentation layer, no need to jump around between apps.
    • Service requests can be created directly from USM

Interoperability Matrix

Platform                Native       Managed
CLARiiON CX, CX3, CX4   FLARE R30+   FLARE R19+
Celerra                 DART 6.0+    DART 6.0+
CLARiiON AX4            TBD          FLARE R23+
  • Navisphere Off-Array is being replaced by the Unisphere client and server
    • Support for Navi Analyzer off-array
  • Navisphere manager license supports Unisphere, no need for an upgrade

Overall the app looks really nice, considering most of us are used to using a fractured set of tools across CX, NS and RP.  I will be interested in seeing how Unisphere helps us map the use of the array.  It appears to still be a Java based application, so the jury is out on performance until I see it.

Audience questions:

  • Can you get to engineering mode from Unisphere?
    • Unanswered.  Translation:  assume access to engineering mode on the CX will still require us to log into Navisphere.
  • Is support for Centera on the roadmap?
    • Answered, yes.  No timeframe given.
  • What is the user experience like on a very large CLARiiON that is busy?
    • Answer: it depends.  Translation:  it will likely still be slow.

EMC World 2010 Initial Thoughts

Sitting in room 153A at a rather rudimentary Cisco session, so I thought I would take a few minutes to write up my initial observations and comments on EMC World 2010 thus far (only 1 hour and 30 minutes in):

  • I am hoping that the Cloud message is less nebulous by the time I leave on Thursday
  • Green is a theme; nice work with the bamboo plates at breakfast.  Also enjoyed NOT getting five million data sheets in my EMC goodie bag.
  • No backpacks or laptop cases this year, they have been replaced by a generic EMC branded bag.  Works for me because I can fit the giveaway bag into my normal bag so I don’t have to carry two bags all day.
    • Assumed this was due to cutbacks but I was told by the guy at the registration desk that it feels like there are more attendees this year than in the past.  He thought the number was somewhere around 10k attendees.
  • Social networking is very visible, with Twitter and FourSquare leading the way.
    • Not sure that everyone here gets the FourSquare thing; EMC has created a ton of venues, but you would think that a swarm would be easily attainable at a technology conference with nearly 10k attendees.  I think a fair number of attendees may need a mobile device upgrade; the StarTAC is no longer an acceptable cellular device, and having the StarTAC car kit in your Mercedes does not justify holding onto the phone 🙂
  • Nice work with the mobile app
    • http://emc.tripbuilder.com/mobi
    • Who knows, maybe next year there will be an actual iPhone and Android app so we can jettison the session guide and save a few trees.
  • Finally, it is obvious that, like Apple’s ownership of the letter “i”, EMC has laid claim to the letter “V”
    • I understand, but does every product really need to be prefixed with a “V”?
  • Only one guru level session, very disappointing.
  • Looking forward to hearing about V-Plex, Unisphere and attending the session on automating Virtual Provisioning with Windows PowerShell
  • Hoping the V-Plex and Unisphere sessions are more than just marchitecture

Here is wishing everyone a great EMC World 2010.  If you don’t have FourSquare installed on your mobile device, I will assume you are either a StarTAC user or posting your updates to MySpace 🙂

The best thing and the worst thing….

It has been said that the best thing about the internet is that anyone can publish; unfortunately, the worst thing about the internet is that anyone can publish.  Another fitting cliché is that opinions are like buttholes, and I propose that the blogosphere has become a public restroom, reeking from the stench of personal opinion backed by analogies, anecdotes, etc… and little fact (much like the opinions expressed in this blog).  By no means am I absolving myself from the claims made in this post; I am as guilty as the next guy when it comes to writing valueless content, but I do feel like I mix in some valuable content that is based on empirical data and facts.  I am all for some good rhetoric, but let’s face it people, we all like to hear ourselves talk regardless of how little value the commentary actually has.  When your platform is largely opinions based loosely on facts as defined in the “marchitecture” documentation, you have to be willing to accept the 50% of people who will agree with your perspective and the 50% who won’t.  Why does anyone even care what influences the author’s perspective; why does it matter to the content consumer?  The assumption we all should make is that the content author is motivated by something, and this motivation can be pure or corrupt in nature.  The great thing is you can decide to either agree or disagree, offer up some additional conjecture or not; that is the beauty of free will.

A WORD OF CAUTION TO BLOG CONTENT CONSUMERS

Every person has a predisposition to one perspective or another; thus the concept of a non-biased view of the world, policy, product, etc… is made impossible by this little thing we call human nature.  But wait, it's worse: beyond just human nature we have what I believe to be two additional key aspects that influence behavior:

Indoctrination:  The belief system in which we participate (e.g. – WAFL vs CoFW).  There is no doubt that a long-time NetApp employee, user, etc… who has been indoctrinated into the culture, ideology, thought process, etc… will believe that WAFL is a superior technology when compared to CoFW.  In contrast, a person indoctrinated into the EMC culture could likely argue why CoFW is a superior technology.  The problem is that both of these perspectives speak to the technology and not the use case.

Personal Gain (monetary or otherwise):  My favorite, because it has a huge impact: the so-called independent analysts in the technology community (who will remain unnamed due to the litigious nature of the world we live in) are really marketing mercenaries or blackmail artists, depending on your perspective.  This is not to say that analysts do not initiate coverage on technologies they are not being paid to follow, but let’s just say that the coverage of technologies they are being paid to follow is a bit more substantial.  It is funny how, as human beings, our opinions tend to align with our goals.

So my word of caution is as follows: trust ONLY yourself (and yes, you can trust yourself; it is true that you likely have an agenda, but it is also true that this agenda is likely in your best interest), read lots of differing opinions and formulate your own.  Realize that reading information found on the web can be a dangerous thing if you don’t take what you have learned, internalize it and think for yourself.  My favorite example here is researching symptoms on WebMD: use the WebMD Symptom Checker and enter the req’d info (e.g. – male, 35-44 years), click submit, then drill down on the head; once you get to the symptom picker, choose Headache (worst ever) and take note of the only possible condition.  Enough said about the dangers of the internet and not thinking for yourself.  So PLEASE apply some modicum of logic, reason and realism when digesting opinionated content.

So what prompted this seemingly common-sense cautionary tale?  My opinionated colleague over at RecoveryMonkey.net posted an OPINION entitled “More FUD busting: Deduplication – is variable-block better than fixed-block, and should you care?” on his blog that received criticism from other opinionated ministers of public enlightenment and propaganda.  I have read through the posts, and the short answer is everyone is correct and equally adept at the art of FUD slinging; what a tragedy.  IMHO the market today (especially among the big boys) has parity +/- 1% (exclusive of the features that don’t work or no one cares about, and yes, they exist in all products).  The 1% differentiation is often littered with caveats; the blogs outlining these caveats, workarounds, use cases, etc… are the valuable ones.  Spend more time consuming this content and less time reading content that reminds me more of TMZ than a technical blog.

It should be fairly easy when consuming content to determine what is valuable and what is not, just read Scott Lowe’s Blog to see what good content looks like.

One final thought: was the FTC warning really necessary, really????  See what I mean about the litigious nature of our society.  It all starts with nationalizing health care; the next thing you know, the FTC is commandeering your blog.  Where does it end?