If you have not heard, EMC announced a product called VPLEX at EMC World 2010 this week.
Note: I was watching and documenting at the same time, so feel free to correct any of the data below.
What is the VPLEX recipe (what am I tasting in this session)?
- 1/4 tablespoon Storage VMotion capability in a geo-dispersed deployment model
- 1/4 tablespoon EMC Invista
- 1/4 tablespoon V-Max engine
- 1/4 tablespoon FAST
What you get:
- Datacenter Load Balancing
- Disaster avoidance and datacenter maintenance
- Zero-downtime datacenter moves
The concept of VMotion is facilitated by the presentation of a VPLEX virtual volume.
Some infrastructure details:
- Up to 8 VPLEX directors per cluster
- Peer relationship
- 32GB of cache per director
- Cache coherency is maintained between the peer VPLEX directors
VPLEX Metro is a geo-dispersed cluster pair that presents one VPLEX virtual volume across data centers. Read I/O benefits from local reads when accessing a VPLEX virtual volume.
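To make the "local reads plus cache coherency" idea concrete, here is a toy sketch of a distributed virtual volume. To be clear: the real VPLEX cache-coherency protocol is not public, and every name here (`Site`, `VirtualVolume`, the write-through/invalidate behavior) is invented purely for illustration.

```python
# Toy model of a virtual volume presented at two sites, with reads
# served locally and writes kept coherent by invalidating the peer's
# cached copy. Illustrative only -- NOT the actual VPLEX protocol.

class Site:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # block -> data cached at this site

class VirtualVolume:
    """One volume visible at both sites of a metro cluster pair."""
    def __init__(self, site_a, site_b):
        self.sites = [site_a, site_b]
        self.backend = {}        # block -> data (authoritative copy)

    def read(self, site, block):
        # Reads hit the local cache when possible, so hosts avoid a
        # cross-site round trip on every I/O.
        if block in site.cache:
            return site.cache[block]
        data = self.backend.get(block)
        site.cache[block] = data
        return data

    def write(self, site, block, data):
        # A write updates the authoritative copy and invalidates the
        # now-stale copy at the peer site, keeping caches coherent.
        self.backend[block] = data
        site.cache[block] = data
        for peer in self.sites:
            if peer is not site:
                peer.cache.pop(block, None)

a, b = Site("DC-A"), Site("DC-B")
vol = VirtualVolume(a, b)
vol.write(a, 0, "v1")
print(vol.read(b, 0))   # "v1" -- first read at DC-B populates its cache
vol.write(b, 0, "v2")   # invalidates DC-A's cached copy
print(vol.read(a, 0))   # "v2" -- DC-A re-reads the fresh data
```

The point of the sketch is just the division of labor: read locality gives you the performance win across sites, while the invalidate-on-write step is what lets both data centers see one consistent volume.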
Requirements/Rules:
- Implementation is synchronous
- 100 km distance limit
- < 5 ms of round-trip latency
- Stretched layer-2 network (Check out OTV to avoid STP issues associated with stretched layer-2 networks)
- Shared layer-2 broadcast domain
- Do not stretch VMware clusters between data centers
- Use the Fixed policy with VMware NMP
- Storage VMotion should be used for non-disruptive migrations
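The 100 km and 5 ms numbers above can be sanity-checked from first principles. Light propagates through fiber at roughly 200,000 km/s (about two-thirds of its speed in a vacuum, a commonly used rule of thumb), so 100 km of fiber costs about 1 ms round trip before any switch or protocol overhead. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the VPLEX Metro distance/latency rules.
# Assumes light travels through fiber at ~200,000 km/s (= 200 km/ms),
# a standard rule of thumb, not a vendor-published figure.

FIBER_SPEED_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Propagation-only round-trip time over a fiber link."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

rtt = round_trip_ms(100)        # the 100 km VPLEX Metro limit
print(f"{rtt:.1f} ms")          # 1.0 ms of raw propagation delay
```

So at the maximum supported distance, raw propagation uses only about 1 ms of the 5 ms round-trip budget; the rest is headroom for switches, protocol overhead, and real-world fiber paths that are longer than the straight-line distance.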
Simulation/Example:
- 2 data centers separated by 100 km
- Shared VM and VMotion networks
- Shared data stores through VPLEX metro
- Two-node ESX cluster at each site with a single vCenter host
- PowerPath/VE set to adaptive
- 500GB LUNs presented directly
- Storage VMotion used for non-disruptive migration
- No storage resignaturing, as this is only required on ESX 3.5
- More stuff I did not get…
Testing was performed on MOSS 2007, SQL Server, SAP and Oracle using workload simulation tools.
Test Scenarios:
- Scenario 1: VMotion between 2 data centers in a VPLEX environment compared to a stretched SAN
- Result 1: Storage VMotion followed by VMotion in a stretched SAN took approximately 25x longer than a VMotion using a shared VPLEX LUN
- Scenario 2: VMotion between 2 data centers with VPLEX separated by 100 km
- Result 2: VMotion performance was well within specs and did not impact application performance or user experience
Note: With these requirements, this technology will pretty much be limited to enterprise-class customers with dark fibre between sites. That said, the technology looks pretty cool if you can afford the infrastructure.
According to The Storage Anarchist’s blog, “VPLEX comes to market with two mainstream customer references: AOL and Melbourne IT (who will be replacing their sheep farmer-endorsed product with the more applicable VPLEX).”
Check out the VPLEX/VMotion whitepaper.