I believe it is fair to say that disk I/O performance characteristics have not been a focus for VMware in the past, but VMware seems to have taken some strides to address this in VI3. The ADC0135 – Choosing and Architecting Storage for Your Environment session was a bit basic for someone who understands disk technology, but over the past few years so much emphasis has been put on server consolidation that much of the VMware community has ignored the disk I/O discussion. I don’t think this was intentional; the value proposition around consolidation and test/dev was so impressive that the I/O discussion was not a primary concern, and the target audience has often been server engineering teams rather than storage engineering.
During the session the presenters reviewed some rudimentary topics such as SAN, NAS, iSCSI, and DAS and where each is applicable, as well as technological differentiators between technologies such as Fibre Channel (FC) and ATA (Advanced Technology Attachment) (e.g., Tagged Command Queuing). As a proof point, the moderator polled the audience of about 300 strong, asking if anyone had ever heard of HIPPI (High Performance Parallel Interface); about 3 people raised their hands. This is understandable, as the target audience for VMware has traditionally been the server engineering team and/or developers rather than storage engineers, hence the probable lack of a detailed understanding of storage interconnects.
With VMware looking for greater adoption in corporate production IT environments by leveraging new value propositions focused on business continuity, disaster recovery, and a host of others, virtualized servers will demand high I/O performance from both a transaction and a bandwidth perspective. Storage farms will grow and become more sophisticated, and more attention will be paid to integrating VMware technology with complex storage technologies such as platform-based replication (e.g., EMC SRDF), snapshot technology (e.g., EMC TimeFinder), and emerging technologies like CDP (continuous data protection).
A practical example of what I believe has been a lack of education around storage and storage best practices is that many VMware users are unaware of partition offset alignment. Offset alignment is a best practice that absolutely should be followed. This is not a function or responsibility of VMware, but it is often overlooked: engineers who grew up in the UNIX world and are familiar with command strings like “sync;sync;sync” typically align partition offsets, but admins who grew up in the Windows world, I find, often overlook offset alignment unless they are very savvy Exchange or SQL performance gurus. Windows users have become accustomed to partitioning with Disk Manager, from which it is not possible to align offsets; diskpar must be used to partition and align offsets.
I would be interested in some feedback on how many VMware / Windows users did not do this when configuring their Windows VMs. Be honest! If you are not using diskpar to create partitions and align offsets, it means we need to do a better job educating.
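To make the alignment math concrete, here is a minimal sketch (in Python, with illustrative names of my own, not anything from VMware or Microsoft tooling) of why the legacy Windows default start sector causes misalignment: a partition that starts at sector 63 never lands on a 64 KB stripe boundary, so a single guest I/O can straddle two array stripes and cost an extra back-end operation.

```python
# Hypothetical helper: check whether a partition's starting offset
# (in bytes) falls on an array stripe boundary. The 64 KB stripe
# size is an assumption for illustration; use your array's value.
def is_aligned(start_offset_bytes: int, stripe_bytes: int = 64 * 1024) -> bool:
    """True if the partition start lands exactly on a stripe boundary."""
    return start_offset_bytes % stripe_bytes == 0

# Legacy MBR default: partitions started at sector 63
# (63 * 512 = 32,256 bytes), which is not 64 KB aligned.
print(is_aligned(63 * 512))    # -> False (misaligned legacy default)

# A 128-sector offset (128 * 512 = 65,536 bytes = 64 KB) is aligned.
print(is_aligned(128 * 512))   # -> True
```

This is the same arithmetic diskpar performs for you when you specify an aligned starting offset.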
Other notable points from the session:
- In ESX 2.5.x and earlier, FC-AL was not supported by VMware; VI3 supports FC-AL.
- VI3 supports 3 outstanding tagged command queues per VMDK vs. the single command tag queue per VMFS that was available in ESX 2.5.x and earlier. If someone else can verify this it would be great, because I have a question mark next to my notes, which means I may not have heard it correctly.
Just an FYI regarding partition offsets: the VMware guides say that you shouldn’t create VMFS partitions during installation. Instead, you should create them from Virtual Center. If you create them with Virtual Center, it’s supposed to take care of the offset issue for you.
It should be noted that the information regarding offsets from VMware is contradictory. Always create a VMFS using the VI client, or offset the partition by 128 blocks (64 KB at 512-byte sectors) if using fdisk.
A bit of clarification on the partition offset alignment comments: the alignment offset I was referring to was not associated with VMFS volumes but rather with RDM volumes, where the physical disk is mapped directly to the VM. It is important to note that diskmgmt.msc in Windows 2008 now aligns partitions, so administrators no longer need to use diskpar to perform the alignment. Remember that RDM volumes are still required in order to use advanced array-based technologies, so users should be educated beyond what the VI client does for them.