Celerra Performance Management Session

Had to leave the Maintaining High NAS Performance Levels: Celerra Performance Analysis and Troubleshooting session early, which was a little disappointing because I was enjoying it.

Once again, I'm posting this quickly, so please excuse any spelling and grammar errors. Also, if you find any information that is incorrect, please comment.

Notes from the session:

Start performance troubleshooting by characterizing the workload (Note: this is performance troubleshooting 101, regardless of platform):

  1. Rate (IOPs)
  2. Size (Transfer Size / KB)
  3. Direction (Read/Write)
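
A quick worked example of why all three matter (my numbers, not from the session): 5,000 IOPS at an 8 KB transfer size is roughly 40 MB/s, while the same 5,000 IOPS at a 64 KB transfer size is roughly 320 MB/s – a very different load on the backend – and the read/write mix changes the picture again.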

Key Celerra commands for looking at performance data:

Protocol stats summary for CIFS and NFS:

server_stats server_x -summary nfs,cifs -interval 10 -count 6

  • Determine Protocol type and R/W ratio

Protocol stats summary for NFS only:

server_stats server_x -summary nfs -i 10 -c 6

Investigate response times:

server_stats server_x -table nfs -i 10 -c 6

Where is the I/O destined for (which file system):

server_stats server_x -table fsvol -i 10 -c 6

  • Look for I/O balance across the available resources

Where is the I/O destined for (basic volumes):

server_stats server_x -table dvol -i 10 -c 6

  • Again, look for I/O balance across the available resources

A high level of activity on root_ldisk is usually indicative of a high number of ufslog messages; check the server log with:

server_log server_x | tail -f
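
Putting the commands above together, here is a minimal shell sketch (my own wrapper, not from the session) for capturing one pass of these views into files for later comparison. It assumes you are logged into the Control Station and that your data mover is server_2; the output path is just an example.

#!/bin/bash
# Hypothetical baseline capture of the server_stats views covered above
MOVER=server_2                                      # data mover to sample (assumption)
OUTDIR=/home/nasadmin/perf_$(date +%Y%m%d_%H%M%S)   # example output location
mkdir -p "$OUTDIR"

# Protocol mix and read/write ratio
server_stats $MOVER -summary nfs,cifs -i 10 -c 6 > "$OUTDIR/summary.txt"

# NFS response times
server_stats $MOVER -table nfs -i 10 -c 6 > "$OUTDIR/nfs_table.txt"

# Per-file-system and per-disk-volume I/O balance
server_stats $MOVER -table fsvol -i 10 -c 6 > "$OUTDIR/fsvol.txt"
server_stats $MOVER -table dvol -i 10 -c 6 > "$OUTDIR/dvol.txt"

# Recent data mover log entries (watch for ufslog noise if root_ldisk is busy)
server_log $MOVER | tail -200 > "$OUTDIR/server_log.txt"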

Hopefully some of my peers took notes after I departed the session, and I can post them later today.

Notes from Tucci / Maritz Keynote

EMC Market Share

  • External Disk 28%
  • Networked Storage 28%
  • VMware Environments 48%

Areas of Growing Interest

  • Server Virtualization
  • FCoE
  • Cloud-based Storage
  • Datacenter efficiency "green"

Tucci Quote:

  • "SSD (aka EFD) Tuned Arrays will Totally Change the Game"

Declining EFD Pricing

  • 1Q08 – EFD 40x more expensive than FC rotational disk
  • 3Q08 – EFD 22x more expensive than FC rotational disk
  • 1Q09 – EFD 8x more expensive than FC rotational disk

What’s Next in IT

  • Virtual Data Centers
  • Cloud Computing
  • Virtual Clients
  • Virtual Applications

Data Center Computing vs. Cloud Computing

  • Data Center Attributes: Trusted, Controlled, Reliable, Secure
  • Cloud Attributes: Dynamic, Efficient, On-demand, Flexible

EMC (Tucci) proposes that by Virtualizing the Data Center we are converting the traditional data center into an internal cloud... ahhhh, vCloud :). Could this picture be any more cloudy?
I see the thought process and vision here, but will the average end user comprehend this?
 
I fully expect someone to ask me next week if they can vMotion from their internal cloud (legacy vMware farm) to the EMC AtmosSphere.
 
Ohhhh my, now we have a slide federating the private and public cloud; can’t wait to fill out that qualifier.
 
Favorite Cloud Quote:
Paul Maritz – clouds as "Hotel California" motels: "check your applications in, but they can’t get out" – Hmmm, sounds like a little knock on Amazon’s S3/SimpleDB.
 
Final thought: has EMC/VMware decided to prefix all terms with "v" (e.g., vSphere) while Microsoft has decided to use the letter "V" as a suffix (e.g., Hyper-V)?

 

Symm V-Max Overview with Enginuity Overview Session

Below are my notes from the Symm V-Max Overview with Enginuity Overview session; please excuse any typographical errors, as I posted this as quickly as possible. If you see any content errors or missing information, please comment.

  • Symm V-Max and V-Max SE
    • Distributed Global Memory
  • Symm V-Max Configuration Overview
    • 1 – 8 V-Max Engines
    • Up to 128 FC FE Ports
    • Up to 64 FiCON FE Ports
    • Up to 64 gigE FE Ports
    • Up to 1 TB of Mem
    • Up to 10 Storage Bays
    • 96-2400 Drives
    • Up to 2 PB of total capacity
  • Director Boards are populated from 1 – 16, bottom to top
  • Engines are populated from inside out
  • Symm V-Max SE
    • Single V-Max Engine (Engine 4)
    • Up to 16 FC FE Ports
    • Up to 8 FiCON FE Ports
    • Up to 8 gigE FE Ports
    • Up to 128 GB of Mem
    • Up to 120 Drives
  • V-Max Engine Overview
    • 2 Director Boards
    • Redundant PS, battery, fans, etc…
    • FA and DA on V-Max Engine
    • Backend I/O module supports up to 4 DAEs
      • 4 Backend I/O modules will support up to 8 DAEs in a redundant configuration
    • EFD support for 200 and 400 GB drives
  • FC I/O modules, FiCON I/O module, iSCSI/gigE I/O module
  • Memory Config
    • 32GB, 64GB, 128GB options
    • Memory can be configured by adding memory to an existing V-Max engine or by adding additional V-Max engines
    • Memory is mirrored across V-Max engines for improved availability
  • Hard to show the picture I am looking at, but the rear of the V-Max Engine chassis has the following ports:
    • Virtual Matrix interfaces
    • Backend I/O module interfaces
    • Front End I/O module interfaces
  • Symm V-Max Matrix Interface Board Enclosure (MIBE)
  • Each V-Max Engine can be directly connected to 8 DAEs
    • Depending on the configuration up to two additional DAEs can be daisy chained beyond the primary DAE
  • V-Max provides 2x the direct drive-bay connectivity of the DMX, going from 64 drive loops to 128 drive loops
  • I/O Flow
    • Each director board has 2 quad-core procs
    • CPUs are mapped as A-H slices