- Available later this year (2010)
- Aligned with FAST: “Place data on the most appropriate storage resources”
- Temporarily promotes frequently accessed data to faster storage resources
- Provides Flash drive performance for the hottest data
- Reduces load and improves performance of other resources
- Fully automated application acceleration
- Performance proposition
- Large enough to contain a high percentage of the working set over long time intervals
- Fast enough to provide order-of-magnitude performance improvements
- Traditional DRAM cache vs. FAST Cache: access-time orders of magnitude
- DRAM cache: very fast (~10^-9 s) but limited in size
- Flash drive: ~10^-6 s
- 15K FC disk drive: ~10^-3 s
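A quick back-of-the-envelope model of why an intermediate Flash tier matters. Only the latency orders of magnitude come from the list above; the hit ratios and the sketch itself are illustrative assumptions, not CLARiiON internals.

```python
# Rough effective-latency model for a DRAM cache + FAST Cache + disk hierarchy.
# Latencies are the orders of magnitude noted above; hit rates are assumed.

DRAM_LATENCY_S = 1e-9    # traditional SP cache hit
FLASH_LATENCY_S = 1e-6   # FAST Cache hit (Flash drive)
DISK_LATENCY_S = 1e-3    # miss serviced by a 15K FC spindle

def effective_latency(dram_hit, flash_hit):
    """Average service time given hypothetical hit ratios at each level."""
    miss = 1.0 - dram_hit - flash_hit
    return (dram_hit * DRAM_LATENCY_S
            + flash_hit * FLASH_LATENCY_S
            + miss * DISK_LATENCY_S)

# Without FAST Cache: everything that misses DRAM goes to disk.
baseline = effective_latency(dram_hit=0.20, flash_hit=0.0)
# With FAST Cache absorbing a large share of the re-hits.
with_fast_cache = effective_latency(dram_hit=0.20, flash_hit=0.60)

print(f"baseline        ~{baseline * 1000:.3f} ms")
print(f"with FAST Cache ~{with_fast_cache * 1000:.3f} ms")
```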
- Requirements
- FLARE R30 is required for FAST Cache
- Dedicated Flash drives
- Native mirrored protection for read/write cache
- Can be unprotected for read cache only
- Implementation
- Memory map tracks host address usage and ownership
- 64 KB extents (not whole-LUN movement; much finer granularity)
- All I/O flows through the FAST Cache driver and memory map
- The memory map lookup is very low impact
- The memory map consumes some DRAM, so marginally less DRAM cache is available (~1 GB of DRAM per 1 TB of FAST Cache)
- No "forced flushing": bursty workloads that would otherwise trigger forced flushes of the traditional DRAM cache may benefit
- A background process runs on the CX to clean up extents
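The actual FLARE data structures aren't described in these notes; the sketch below just illustrates the idea of a 64 KB-granular map consulted on every I/O, using a plain Python dict keyed by (LUN, extent index) purely as an assumption for illustration.

```python
# Minimal sketch of an extent-granular cache map consulted in the I/O path.
# The real FLARE structures are not documented here; this only shows why a
# 64 KB-granularity lookup is cheap.

EXTENT_SIZE = 64 * 1024  # tracking/promotion granularity

class FastCacheMap:
    def __init__(self):
        # extent key -> location of the promoted copy on the Flash drives
        self.extent_to_flash = {}

    @staticmethod
    def extent_key(lun_id, offset_bytes):
        # Round the host address down to its 64 KB extent boundary.
        return (lun_id, offset_bytes // EXTENT_SIZE)

    def lookup(self, lun_id, offset_bytes):
        """O(1) check on every I/O: has this extent been promoted?"""
        return self.extent_to_flash.get(self.extent_key(lun_id, offset_bytes))

    def promote(self, lun_id, offset_bytes, flash_location):
        self.extent_to_flash[self.extent_key(lun_id, offset_bytes)] = flash_location

cache_map = FastCacheMap()
cache_map.promote(lun_id=5, offset_bytes=130_000, flash_location="flash-extent-42")
print(cache_map.lookup(5, 131_071))   # same 64 KB extent -> "flash-extent-42"
print(cache_map.lookup(5, 200_000))   # different extent  -> None
```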
- Benefits
- FAST Cache read hits = Flash drive response times
- FAST Cache write hits flush faster
- FAST Cache hits offload the HDDs
- Lower net application response time enables higher IOPS
- Efficient use of Flash drive technology
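One way to see the "lower response time enables higher IOPS" point is Little's Law: with a fixed number of outstanding I/Os, throughput is concurrency divided by average response time. The queue depth and response times below are assumptions for illustration, not measurements.

```python
# Little's Law sketch: IOPS = outstanding I/Os / average response time.

def achievable_iops(outstanding_ios, avg_response_s):
    return outstanding_ios / avg_response_s

QUEUE_DEPTH = 8                                   # assumed host concurrency
print(achievable_iops(QUEUE_DEPTH, 0.0008))       # ~10,000 IOPS at ~0.8 ms
print(achievable_iops(QUEUE_DEPTH, 0.0002))       # ~40,000 IOPS at ~0.2 ms
```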
- Key concept for maximum FAST Cache benefit
- Understand Locality of Reference
- Total GB of actively referenced data
- The same areas are referenced multiple times over short periods
- What makes a good FAST Cache workload
- Small to moderate working sets
- High frequency of access to the same chunks (re-hits); a rough re-hit scoring sketch follows the application profiles below
- Performance limited by disk technology, not the SPs
- Profiles of common apps
- DB OLTP/DSS
- Oracle, MS SQL
- Exchange
- File Servers
- Determine the appropriate subset of LUNs to enable for FAST Cache
Note: Sequential workloads are still (typically) better served by traditional rotational media (e.g., backup-to-disk)
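To make the re-hit idea concrete, here is a rough way one might score a block trace at 64 KB extent granularity. The trace format and example values are assumptions made for this sketch; in practice the FAST Cache analyzer works from the RBA traces mentioned under Tools below.

```python
# Rough working-set / re-hit scoring of an I/O trace, binned at 64 KB extents.
from collections import Counter

EXTENT_SIZE = 64 * 1024

def score_trace(ios):
    """ios: iterable of (lun_id, offset_bytes) read/write accesses."""
    hits_per_extent = Counter(
        (lun, offset // EXTENT_SIZE) for lun, offset in ios
    )
    total_ios = sum(hits_per_extent.values())
    rehits = total_ios - len(hits_per_extent)      # accesses beyond the first touch
    working_set_gb = len(hits_per_extent) * EXTENT_SIZE / 1024 ** 3
    return working_set_gb, rehits / total_ios if total_ios else 0.0

# A small working set touched repeatedly is the FAST Cache sweet spot; a large
# data set streamed once (e.g. a sequential backup) is not.
trace = [(0, 0), (0, 4096), (0, 70_000), (0, 4096), (1, 0), (0, 0)]
print(score_trace(trace))   # (~0.0002 GB, re-hit ratio 0.5)
```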
- Tools
- FAST Cache analyzer
- Will require RBA traces for FAST Cache analysis
- Uber tiering with FAST Cache plus FAST
- DRAM cache <-> FAST Cache <-> FC <-> SATA
- FAST Cache is a licensed feature, so the CX enabler will be required (there is a bundle covering both FAST and FAST Cache)
Questions:
- Are you limited to 2 TB of FAST Cache? Can you have multiple FAST Cache LUNs?
- Not hard-limited to 2 TB; it really depends on how much DRAM capacity you are willing to consume with the memory map
- Limited to a single FAST Cache LUN
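A quick sizing check based on the ~1 GB of DRAM per 1 TB figure noted under Implementation; the implied per-extent byte count is derived here, not a published spec.

```python
# Back-of-the-envelope memory-map sizing from the ~1 GB DRAM per 1 TB of
# FAST Cache rule of thumb and the 64 KB extent granularity.

EXTENT_SIZE = 64 * 1024
MAP_DRAM_PER_TB = 1 * 1024 ** 3          # ~1 GB of DRAM per 1 TB of FAST Cache

extents_per_tb = 1024 ** 4 // EXTENT_SIZE            # 16,777,216 extents per TB
bytes_per_extent = MAP_DRAM_PER_TB / extents_per_tb  # derived, not documented

for cache_tb in (0.5, 1, 2):
    map_gb = cache_tb * MAP_DRAM_PER_TB / 1024 ** 3
    print(f"{cache_tb} TB FAST Cache -> ~{map_gb:.1f} GB of DRAM for the map")

print(f"implied metadata per 64 KB extent: ~{bytes_per_extent:.0f} bytes")
```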