Hi All ..
In this part, we'll dive into the basic concepts of vSphere Storage (Datastores), as well as best practices for using them in production.
Credits:
- Larus Hjartarson
- Mike Da Costa
Let's Start....
1. SAN Storage Device Naming:
This official VMware article explains the naming convention for storage devices:
vSphere 5.5 Documentation Center - Understanding Storage Device Naming
Another article, by Larus Hjartarson, explains it even more clearly:
VMware Storage Basics Explanation + Video | Virtual-Ice
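As a quick illustration of the naming convention described above, this is how you can inspect SAN devices on an ESXi 5.x host from the command line. The naa identifier shown in the comments is a made-up example, not a real device:

```shell
# List all storage devices with their identifiers (run on an ESXi 5.x host).
esxcli storage core device list

# A typical SAN LUN appears with an NAA identifier, e.g. (hypothetical):
#   naa.60a98000572d54724a34655733506751
#     Display Name: <vendor> Fibre Channel Disk (naa.60a9...)
#     Devfs Path:   /vmfs/devices/disks/naa.60a9...
#
# The same LUN also has a runtime name of the form vmhbaN:C0:T0:L0, where
# vmhbaN = host bus adapter, C = channel, T = target, L = LUN number.
esxcfg-scsidevs -l
```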
2. iSCSI Adapters (Initiators):
An official VMware article comparing the different iSCSI initiators:
vSphere 5.5 Documentation Center - iSCSI Initiators
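For reference, here is a minimal sketch of enabling and checking the software iSCSI initiator from the ESXi 5.x command line. The adapter name (vmhba33) and target IP are hypothetical placeholders; adjust them to your environment:

```shell
# Enable the software iSCSI initiator on the host.
esxcli iscsi software set --enabled=true

# Confirm it is enabled and list the iSCSI adapters (software and hardware).
esxcli iscsi software get
esxcli iscsi adapter list

# Add a dynamic (SendTargets) discovery address, then rescan the adapter.
# vmhba33 and 10.0.0.50 are placeholders for your initiator and array.
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.0.50:3260
esxcli storage core adapter rescan -A vmhba33
```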
3. Best Practices for Storage Designing:
1-) Dedicate disks whenever possible, i.e., don't mix workloads (vSphere datastores, SQL databases, etc.) on the same arrays.
2-) Confirm storage settings after each software change or update to make sure no change has happened or is needed.
3-) Always GAC: Guess your loads, Analyze the data pulled from your performance-monitoring tools, then Correct your guess until you reach the right design for your storage.
4. Upgrading from VMFS-3 to VMFS-5 Best Practice:
When upgrading a datastore from VMFS-3 to VMFS-5, the old block size is preserved. This may prevent VAAI from working, as VAAI only supports the 1 MB block size used by VMFS-5. To solve this, it's recommended to evacuate all VMs from the datastore, delete it, then re-create it freshly formatted with VMFS-5. The only drawback of this method is that the datastore goes offline for some time, whereas an in-place upgrade is done while the datastore is online and VMs are operating on it.
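Before deciding between an in-place upgrade and a re-creation, you can check a datastore's current VMFS version and block size from the command line; and if you take the re-create route, you can format the LUN fresh as VMFS-5. A sketch, where the datastore name, naa identifier, and partition number are hypothetical placeholders:

```shell
# Show the VMFS version and block size of an existing datastore.
vmkfstools -Ph /vmfs/volumes/datastore1

# List all mounted filesystems with their VMFS versions.
esxcli storage filesystem list

# After evacuating all VMs, re-create the datastore as native VMFS-5
# (the naa ID and partition number below are placeholders).
vmkfstools -C vmfs5 -S datastore1 /vmfs/devices/disks/naa.600000000000000000000001:1
```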
5. NFS Datastores Best Practices:
1-) Mount all NFS datastores on all hosts using the same name or IP address, so they are shared correctly across all hosts.
2-) For NFS load distribution, in case your NFS storage has multiple NICs, you can mount the same NFS export multiple times, each time using the IP address of a different NIC. Make sure you create a vSwitch for each connection, with a VMkernel port in the same subnet and VLAN as the NIC used. At this point, your single NFS export will appear to the ESXi host as several NFS datastores. Finally, distribute your load by unregistering the VMs on this NFS datastore and re-registering them spread across all the datastores that appeared.
3-) Be careful when mounting NFS datastores not to mount them as read-only, which can't be corrected until you unmount and delete them, then remount them again.
4-) When mounting an NFS datastore, the ESXi host uses the root account to access it. Check that the export is configured with the (no_root_squash) option; without it, the server squashes root to an anonymous user, which either prevents root access to the datastore or leaves it read-only.
5-) Configure a non-routable, dedicated VLAN for the NFS connection, with only one VMkernel port configured in it, to ensure that only that VMkernel port is used for the NFS connection (as mentioned in no. 2). This issue is described in detail in this article, written by Mike Da Costa and officially published by VMware:
Challenges with Multiple VMkernel Ports in the Same Subnet | VMware Support Insider - VMware Blogs
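The multi-mount trick from tip no. 2 can be sketched with esxcli. The IP addresses and export path below are hypothetical; each IP belongs to a different NIC on the same NFS server:

```shell
# Mount the same NFS export twice, once per storage NIC, so each mount
# uses a different network path (IPs and share path are placeholders).
esxcli storage nfs add --host=10.0.1.10 --share=/export/vmstore --volume-name=NFS-DS-A
esxcli storage nfs add --host=10.0.2.10 --share=/export/vmstore --volume-name=NFS-DS-B

# Verify the mounts; a read-only mount also shows up here (see tip no. 3).
esxcli storage nfs list
```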
6. Removing a Storage Device from vSphere 5.x Best Practices:
Removing a storage device from ESXi hosts is a long, critical process. The following KB article from VMware is a step-by-step guide for it:
Keep in mind that RDM disks must be removed manually from the VM: in the VM settings GUI, select the disk and choose Remove and Delete from Disk. This deletes the mapping file but not the LUN itself. You can then re-format and reuse the LUN.
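As a companion to the KB's GUI steps, the detach itself can also be done with esxcli. The naa identifier below is a hypothetical placeholder:

```shell
# Gracefully detach the device before unpresenting the LUN on the array
# (naa.600... below is a placeholder device identifier).
esxcli storage core device set --state=off -d naa.600000000000000000000001

# The device now appears in the detached list.
esxcli storage core device detached list

# After the LUN is unpresented on the array, clean up the stale entry
# and rescan all adapters.
esxcli storage core device detached remove -d naa.600000000000000000000001
esxcli storage core adapter rescan --all
```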
Share the knowledge ...
Previous: vSphere 5.x Notes & Tips - Part V