I believe it is fair to say that disk I/O performance characteristics have not been a focus for VMware in the past, but VMware seems to have taken some strides to address this in VI3. The ADC0135 – Choosing and Architecting Storage for Your Environment session was a bit basic for someone who understands disk technology, but I think that over the past few years so much emphasis has been placed on server consolidation that much of the VMware community has ignored the disk I/O discussion. I don’t think this was intentional; the value proposition around consolidation and test/dev was so impressive that I/O was not a primary concern, and the target audience has often been server engineering teams rather than storage engineering teams.
During the session the presenters reviewed some rudimentary topics such as SAN, NAS, iSCSI, and DAS and where each is applicable, as well as technological differentiators between technologies such as Fibre Channel (FC) and ATA (Advanced Technology Attachment) (e.g. – Tagged Command Queuing). As a proof point, the moderator polled the audience of about 300 strong, asking if anyone had ever heard of HIPPI (High Performance Parallel Interface), and about 3 people raised their hands. This is understandable, as the target audience for VMware has traditionally been the server engineering team and/or developers and not the storage engineers, hence the probable lack of a detailed understanding of storage interconnects.
With VMware looking for greater adoption rates in the corporate production IT environment by leveraging new value propositions focused on business continuity, disaster recovery, and a host of others, virtualized servers will demand high I/O performance from both a transaction and a bandwidth perspective. Storage farms will grow and become more sophisticated, and more attention will be paid to integrating VMware technology with complex storage technologies such as platform-based replication (e.g. – EMC SRDF), snapshot technology (e.g. – EMC TimeFinder), and emerging technologies like CDP (Continuous Data Protection).
A practical example of what I believe has been a lack of education around storage best practices is that many VMware users are unaware of partition offset alignment. Offset alignment is a best practice that absolutely should be followed. This is not a function or responsibility of VMware, but it is often overlooked – engineers who grew up in the UNIX world and are familiar with command strings like “sync;sync;sync” typically align partition offsets, but I find that admins who grew up in the Windows world often overlook offset alignment unless they are very savvy Exchange or SQL performance gurus. Windows users have become accustomed to partitioning using Disk Manager, from which it is not possible to align offsets; diskpar must be used to partition and align offsets.
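To make the arithmetic behind this concrete, here is a small sketch (in Python, with an assumed 64 KB array stripe size – substitute your array’s actual stripe/element size) of why the old Windows default partition start is misaligned and a diskpar-style 64 KB offset is not:

```python
# Why partition offset alignment matters, in numbers.
# Windows Disk Manager historically started the first partition at sector 63
# (63 * 512 = 32,256 bytes), which does not fall on a typical array stripe
# boundary, so a single guest I/O can straddle two stripes and cost two
# back-end operations. The 64 KB stripe size here is an assumption.

SECTOR_BYTES = 512
STRIPE_BYTES = 64 * 1024  # assumed 64 KB array stripe/element size

def is_aligned(start_sector: int) -> bool:
    """True if the partition's starting byte offset lands on a stripe boundary."""
    return (start_sector * SECTOR_BYTES) % STRIPE_BYTES == 0

print(is_aligned(63))   # classic Windows default start -> False (misaligned)
print(is_aligned(128))  # 128 * 512 = 64 KB offset -> True (aligned)
```

A tool like diskpar lets you set that starting offset explicitly when creating the partition, which is exactly what Disk Manager does not allow.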
I would be interested in some feedback on how many VMware / Windows users did not do this during their Windows VM installs. Be honest! If you are not using diskpar to create partitions and align offsets, it means that we need to do a better job educating.
Other notable points from the session:
- In ESX 2.5.x and earlier, FC-AL was not supported by VMware; VI3 supports FC-AL.
- VI3 supports 3 outstanding tagged command queues per VMDK vs. the single command tag queue per VMFS available in ESX 2.5.x and earlier – if someone else can verify this it would be great, because I have a question mark next to my notes, which means I may not have heard it correctly.