VMotion Over Distance with EMC VPLEX

If you have not heard, EMC announced a product called VPLEX at EMC World 2010 this week.

Note:  I was watching and documenting at the same time so feel free to make any corrections to the data below.

What is the VPLEX recipe (what I am tasting in this session):

  • 1/4 tablespoon Storage VMotion capability in a geo-dispersed deployment model
  • 1/4 tablespoon EMC Invista
  • 1/4 tablespoon V-Max engine
  • 1/4 tablespoon FAST

What you get:

  • Datacenter Load Balancing
  • Disaster avoidance and datacenter maintenance
  • Zero-downtime datacenter moves

The concept of VMotion over distance is facilitated by the presentation of a VPLEX virtual volume.

Some infrastructure details:

  • Up to 8 VPLEX directors per cluster
  • Peer relationship
  • 32GB of cache per director
  • Cache coherency is maintained between the VPLEX peers

VPLEX Metro is a geo-dispersed cluster pair that presents one VPLEX virtual volume across data centers.  Read I/O benefits from local reads when accessing a VPLEX virtual volume.

Requirements/Rules:

  • Implementation is synchronous
  • 100 km distance limit
  • < 5 ms of round-trip latency (see the latency sketch after this list)
  • Stretched layer-2 network (check out OTV to avoid the STP issues associated with stretched layer-2 networks)
    • Shared layer-2 broadcast domain
  • Do not stretch VMware clusters between data centers
  • Use the Fixed policy with VMware NMP
  • Storage VMotion should be used for non-disruptive migrations
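
To put the 100 km / 5 ms pairing in perspective, here is a quick Python sketch of my own (not from the session) that estimates raw round-trip propagation delay over fibre for a given site separation.  It only counts time in the glass, not switching or protocol overhead, so treat it as a lower bound.

```python
# Rough estimate of round-trip propagation latency between two sites on
# dark fibre. Light travels at roughly 2/3 of c in glass, i.e. about
# 5 microseconds per km one way. Switching gear, DWDM and protocol
# overhead are NOT included, so real-world RTT will be higher.

SPEED_OF_LIGHT_KM_S = 300_000   # km/s in a vacuum
FIBRE_FACTOR = 2 / 3            # refractive index slows light to ~2/3 c

def fibre_rtt_ms(distance_km: float) -> float:
    """Theoretical round-trip time in milliseconds for a given site separation."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return one_way_s * 2 * 1000

if __name__ == "__main__":
    for km in (50, 100, 200):
        print(f"{km:>4} km -> ~{fibre_rtt_ms(km):.2f} ms RTT (propagation only)")
    # 100 km works out to ~1 ms of raw propagation delay, which leaves
    # headroom under the < 5 ms requirement for switching and protocol overhead.
```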

Simulation/Example:

  • 2 data centers separated by 100 km
  • Shared VM and VMotion networks
  • Shared datastores through VPLEX Metro
  • Two-node ESX cluster at each site with a single vCenter host
  • PowerPath/VE installed and set to adaptive
  • 500 GB LUNs presented directly
  • Storage VMotion used for non-disruptive migration
  • No storage resignaturing, as this is only required on ESX 3.5
  • More stuff I did not get…

Testing was performed on MOSS 2007, SQL Server, SAP and Oracle using workload simulation tools.

Test Scenarios:

  • Scenario 1:  VMotion between 2 data centers in a VPLEX environment compared to a stretched SAN
  • Result 1:  Storage VMotion followed by VMotion in the stretched SAN took approximately 25x longer than a VMotion using a shared VPLEX LUN
  • Scenario 2:  VMotion between 2 data centers with VPLEX separated by 100 km
  • Result 2:  VMotion performance was well within spec and did not impact application performance or user experience

Note:  With these requirements, this technology will pretty much be relegated to enterprise-class customers with dark fibre between sites.  With that said, the technology looks pretty cool if you can afford the infrastructure.

According to The Storage Anarchist’s blog, “VPLEX comes to market with two mainstream customer references: AOL and Melbourne IT (who will be replacing their sheep farmer-endorsed product with the more applicable VPLEX)”

Check out the VPLEX/VMotion whitepaper.

EMC World 2010 – Day 3 Update #emcworld

Well, I have pretty much gotten my ass kicked by EMC certification exams this year.  The day one Centera exam was a big miss (kind of expected this one), and the BRS TA E20-329 was a near miss yesterday; 28 of the 70 questions were on Avamar and Data Domain, which makes me wonder about the future of NetWorker, considering the majority of the questions on the previous BURA TA were NetWorker questions.  I find that pretty telling.  This morning brought a near miss on the RecoverPoint E22-275 exam, going 1 of 7 on the Brocade SAS section (ouch), but who the heck uses Brocade fabric splitters?

Anyway, hopefully I can get a win this afternoon, feeling pretty beat down, my cold is not helping.  I wonder how much Sudafed I can take before I damage vital organs 🙂

CLARiiON FAST Cache #emcworld

  • Available later this year (2010)
  • Aligned with FAST: “Place data on the most appropriate storage resources”
    • Temporarily relocates often-used data to faster storage resources
      • Provide Flash drive performance to hottest data
      • Reduces load and improves performance of other resources
    • Fully automated application acceleration
  • Performance proposition
    • Large enough to contain a high percentage of working set over long time intervals
    • Fast enough to provide order of magnitude performance improvements
  • Traditional DRAM cache vs. FAST Cache (access times in seconds)
    • DRAM cache: limited in size but very fast (~10^-9)
    • Flash drive: ~10^-6
    • 15K FC disk drive: ~10^-3
  • Requirements
    • FLARE R30 Required for FAST Cache
    • Dedicated FLASH drives
    • Native mirrored protection for read/write cache
    • Can be unprotected for read cache only
  • Implementation
    • Memory map tracks host address usage and ownership
      • 64 KB extents (not LUN-level movement, much finer granularity)
    • All I/O flows through the FAST cache driver and memory map
      • Memory map lookup is very low impact
      • Memory map does take some DRAM space so there will be marginally less DRAM cache available (~ 1 GB of DRAM per 1 TB of FAST Cache)
    • No “FORCED FLUSHING”, so this may help bursty workloads that invoke traditional DRAM forced cache flushes
    • A background process runs on the CX to clean up the extents
  • Benefits
    • Flash Cache read hits = Flash drive response times
    • Flash Cache write hits flush faster
    • Flash Cache hits offload HDDs
    • Lower net application response time enables higher IOPs
    • Efficient use of Flash drive technology
  • Key concept for max FAST Cache benefit
    • Understand locality of reference
      • Total GB of actively referenced data
      • Same areas referenced over short periods and multiple times
  • What makes for a good FAST Cache workload (a toy simulation sketch follows this list)
    • Small to moderate working sets
    • High frequency of access to the same chunks – rehits
    • Performance limited by disk technology, not the SPs
  • Profiles of common apps
    • DB OLTP/DSS
      • Oracle, MS SQL
    • Exchange
    • File Servers
  • Determine the appropriate subset of LUNs for use with FAST Cache

Note:  Sequential workloads are still (typically) better served by traditional rotational media  (e.g. – backup-to-disk)

  • Tools
    • FAST cache analyzer
      • Will require RBA traces for FAST cache analysis
  • Uber tiering with FAST cache plus FAST
    • DRAM cache  <-> FAST cache <-> FC <-> SATA
  • FAST Cache is a licensed feature, so the CX enabler will be required (there is a bundle for both FAST and FAST Cache)
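
To make the “small working set plus frequent rehits” point concrete, here is a toy Python simulation of my own (not an EMC tool, and the sizes are arbitrary examples): it replays a skewed access pattern against a simple LRU cache tracking 64 KB extents and compares the hit rate to a uniform pattern over the same LUN.

```python
# Toy illustration (mine, not an EMC tool) of why FAST Cache favours small
# working sets with frequent rehits: a skewed access pattern against a
# modest LRU cache yields a high hit rate, while a uniform pattern over
# the same address space does not. All sizes are arbitrary examples.

import random
from collections import OrderedDict

EXTENT_KB = 64        # FAST Cache tracks 64 KB extents
LUN_GB = 500          # example LUN size
CACHE_GB = 25         # example FAST Cache capacity serving this LUN

def hit_rate(skewed: bool, accesses: int = 1_000_000) -> float:
    extents = LUN_GB * 1024 * 1024 // EXTENT_KB        # extents on the LUN
    cache_slots = CACHE_GB * 1024 * 1024 // EXTENT_KB  # extents the cache can hold
    hot_set = extents // 100                           # a "hot" 1% working set
    cache = OrderedDict()                              # extent -> None, ordered by recency
    hits = 0
    for _ in range(accesses):
        if skewed and random.random() < 0.9:           # 90% of I/O lands on the hot 1%
            ext = random.randrange(hot_set)
        else:
            ext = random.randrange(extents)
        if ext in cache:
            hits += 1
            cache.move_to_end(ext)                     # refresh LRU position
        else:
            cache[ext] = None
            if len(cache) > cache_slots:
                cache.popitem(last=False)              # evict the least recently used extent
    return hits / accesses

if __name__ == "__main__":
    print(f"skewed workload  hit rate: {hit_rate(True):.1%}")   # high: lots of rehits
    print(f"uniform workload hit rate: {hit_rate(False):.1%}")  # low: working set too large
```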

Questions:

  • Are you limited to 2 TB of FAST Cache?  Can you have multiple FAST Cache LUNs?
    • Not limited to 2 TB; it really depends on how much DRAM capacity you want to consume with the memory map (see the back-of-the-envelope sketch after these questions)
    • Limited to a single FAST Cache LUN
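
As a sanity check on the “~1 GB of DRAM per 1 TB of FAST Cache” figure above, here is a back-of-the-envelope Python sketch.  The 64-byte per-extent entry size is my assumption (it is simply the value that makes the quoted ratio work out), not anything EMC stated.

```python
# Back-of-the-envelope check of the memory-map overhead quoted in the
# session (~1 GB of DRAM per 1 TB of FAST Cache). The per-extent entry
# size below is an assumption, not an EMC-published number.

EXTENT_BYTES = 64 * 1024      # FAST Cache tracks 64 KB extents
ASSUMED_ENTRY_BYTES = 64      # assumed memory-map entry per extent

def memory_map_gb(fast_cache_tb: float) -> float:
    """Estimate DRAM consumed by the memory map for a given FAST Cache size."""
    extents = fast_cache_tb * 1024**4 / EXTENT_BYTES
    return extents * ASSUMED_ENTRY_BYTES / 1024**3

if __name__ == "__main__":
    for tb in (0.5, 1, 2):
        print(f"{tb:>4} TB FAST Cache -> ~{memory_map_gb(tb):.1f} GB of DRAM for the map")
    # 1 TB / 64 KB = ~16.8M extents; at ~64 bytes each that is ~1 GB of DRAM,
    # which lines up with the quoted ratio and explains why the practical
    # FAST Cache ceiling is tied to available SP memory rather than a hard cap.
```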

Intro to Unisphere session at #emcworld

  • Unified management platform for CLARiiON, Celerra and RecoverPoint
    • Unified look and feel across all aforementioned products
    • Support for all CLARiiON >= FLARE 19
  • Release date Q3 2010
  • Only functions at the presentation layer
    • Will not impact the CLI so scripts will not be impacted by Unisphere
  • Views
    • Dashboard
      • Unified alerting sorted by severity
      • The system list in the top tool bar provides the ability to drill down from the domain level to the physical system to view details
    • System View
      • Graphical hardware depiction (finally)
    • Summary Views
      • Storage
        • Bye bye CAP reports!!!!!  (thank you)
      • Replication
    • Note:  View Blocks are widgets placed on the screen that aggregate data, graphically displaying utilization, capacity, etc…
      • Ability to customize views; these views are tied to the client, not the user, so they will stay on the client rather than move with the user
  • Right-click provides context menu similar to what is currently available in Navisphere
  • Array data is cached locally on the Unisphere client (see the caching sketch after this list)
    • This will hopefully help out performance
    • Data collection from the array is time stamped so you can ensure you are working with the correct data set
    • A manual refresh needs to be performed to query the array and refresh the Unisphere cache
    • Once the data is cached it can be manipulated
  • Context sensitive help and actions
  • Replication Manager (RM) and USM (Unisphere Service Manager) integration (via link and launch)
    • If the apps are not installed, it launches PowerLink and grabs RM or USM
    • Performs the install
    • Launches the app
  • USM replaces NST and unifies the ability to perform service tasks on CLARiiON or Celerra
    • USM adds the ability to track FCOs, Support Advisories, etc… via RSS feeds
    • Also provides the ability to launch to the proper location to research issues (e.g. – EMC Support Forums); USM remains the presentation layer, so there is no need to jump around between apps.
    • Service requests can be created directly from USM
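
The manual-refresh model is simple enough to sketch.  The Python below is purely illustrative (the class and names are mine, not the Unisphere API); it just shows the pattern of a time-stamped local cache that only re-queries the array when explicitly refreshed.

```python
# Illustrative sketch (mine, NOT the Unisphere API) of the client-side
# caching pattern described above: query the array once, time stamp the
# result, and work against the local copy until the user hits refresh.

from datetime import datetime, timezone
from typing import Any, Callable, Dict, Optional

class TimestampedArrayCache:
    def __init__(self, collect: Callable[[], Dict[str, Any]]):
        self._collect = collect                 # callable that actually queries the array
        self._data: Dict[str, Any] = {}
        self.collected_at: Optional[datetime] = None

    def refresh(self) -> None:
        """Explicit, user-driven refresh: re-query the array and re-stamp the data."""
        self._data = self._collect()
        self.collected_at = datetime.now(timezone.utc)

    def get(self, key: str) -> Any:
        """Read from the local copy; never triggers an array round trip."""
        return self._data.get(key)

if __name__ == "__main__":
    fake_array_query = lambda: {"pool_utilization": "73%", "faults": 0}  # stand-in for a real query
    cache = TimestampedArrayCache(fake_array_query)
    cache.refresh()                             # the only point at which the "array" is queried
    print(cache.get("pool_utilization"), "as of", cache.collected_at)
```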

Interoperability Matrix

Platform                 Native        Managed
CLARiiON CX, CX3, CX4    FLARE R30+    FLARE R19+
Celerra                  DART 6.0+     DART 6.0+
CLARiiON AX4             TBD           FLARE R23+

  • Navisphere Off-Array is being replaced by the Unisphere client and server
    • Support for Navi Analyzer off-array
  • Navisphere manager license supports Unisphere, no need for an upgrade

Overall the app looks really nice, considering most of us are used to using a fractured set of tools between CX, NS and RP.  I will be interested in seeing how Unisphere helps us map the use of the array.  It appears to still be a Java-based application, so the jury is out on performance until I see it.

Audience questions:

  • Can you get to engineering mode from Unisphere?
    • Unanswered.  Translation: assume access to engineering mode on the CX will still require us to log into Navisphere.
  • Is support for Centera on the roadmap?
    • Answered: yes.  No timeframe given.
  • What is the user experience like on a very large CLARiiON that is busy?
    • Answer: it depends.  Translation: it will likely still be slow.

EMC World 2010 Initial Thoughts

I am sitting in room 153A at a rather rudimentary Cisco session, so I thought I would take a few minutes to write up my initial observations and comments on EMC World 2010 thus far (only 1 hour and 30 minutes in):

  • I am hoping that the Cloud message is less nebulous by the time I leave on Thursday
  • Green is a theme; nice work with the bamboo plates at breakfast.  Also enjoyed NOT getting five million data sheets in my EMC goodie bag.
  • No backpacks or laptop cases this year, they have been replaced by a generic EMC branded bag.  Works for me because I can fit the giveaway bag into my normal bag so I don’t have to carry two bags all day.
    • Assumed this was due to cutbacks but I was told by the guy at the registration desk that it feels like there are more attendees this year than in the past.  He thought the number was somewhere around 10k attendees.
  • Social networking is very visible, with Twitter and FourSquare leading the way.
    • Not sure that everyone here gets the FourSquare thing.  EMC has created a ton of venues, but you would think that a swarm would be easily attainable at a technology conference with nearly 10k attendees.  I think a fair number of attendees may need a mobile device upgrade; the StarTAC is no longer an acceptable cellular device, and having the StarTAC car kit in your Mercedes does not justify holding onto the phone 🙂
  • Nice work with the mobile app
    • http://emc.tripbuilder.com/mobi
    • Who knows, maybe next year there will be an actual iPhone and Android app so we can jettison the session guide and save a few trees.
  • Finally, it is obvious that, like Apple’s ownership of the letter “I”, EMC has laid claim to the letter “V”
    • I understand, but does every product really need to be prefixed with a “V”?
  • Only one guru level session, very disappointing.
  • Looking forward to hearing about V-Plex, Unisphere and attending the session on automating Virtual Provisioning with Windows PowerShell
  • Hoping the V-Plex and Unisphere sessions are more than just marchitecture

Here is wishing everyone a great EMC World 2010.  If you don’t have FourSquare installed on your mobile device, I will assume you are either a StarTAC user or posting your updates to MySpace 🙂