Channel: VMware Communities : Blog List - All Communities

Interview Questions SDRS


Ques 1: Can two datastores that are part of two different datacenters be added to a datastore cluster?

Ans: No, we cannot add datastores to a datastore cluster if they belong to two different datacenters.

 

Ques 2: Can we add new datastores to a datastore cluster without incurring any downtime?

Ans: Yes, we can add new datastores to a datastore cluster without incurring any downtime.


Ques 3: If a datastore's space utilization is above the configured threshold, is initial placement of a new VM still possible on that datastore?

Ans: Yes, initial placement is possible on a datastore that has already crossed the utilization threshold, provided it has enough capacity for the new VM. Initial placement is always a manual process: you are prompted to select a datastore from the datastore cluster when creating a new VM, or when migrating a VM from a datastore outside the cluster onto one inside it.


Ques 4: What are pre-requisite migrations in terms of SDRS?

Ans: The set of migration recommendations generated by SDRS for existing VMs before the initial placement of a new VM are called prerequisite migrations.


Ques 5: What is meant by Datastore cluster defragmentation?

Ans: When there is enough free space at the datastore cluster level, but no single datastore has enough space to accommodate a new incoming VM, the datastore cluster is said to be fragmented. To place the new VM, SDRS defragments the cluster by migrating existing VMs from one datastore to another until enough space is freed on a single datastore to hold the newly created VM.
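The fragmentation condition described above can be sketched in a few lines. This is a toy illustration only; the function name and the sample sizes are made up and this is not VMware's actual algorithm.

```python
# Toy sketch of the datastore-cluster fragmentation check described above.
# Names and numbers are illustrative, not VMware's implementation.

def is_fragmented(free_per_datastore, new_vm_size):
    """Cluster has enough total free space, but no single datastore does."""
    total_free = sum(free_per_datastore.values())
    fits_somewhere = any(free >= new_vm_size
                         for free in free_per_datastore.values())
    return total_free >= new_vm_size and not fits_somewhere

free = {"ds1": 40, "ds2": 35, "ds3": 30}  # GB free per datastore
print(is_fragmented(free, 50))  # True: 105 GB free overall, no single datastore has 50 GB
```

When this condition is true, SDRS has to generate prerequisite migrations before the new VM can be placed.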


Ques 6: What is space utilization ratio difference and what is its default value? What is the purpose of defining space utilization ratio difference?

Ans: To avoid unnecessary migrations from an overloaded datastore to a datastore that is itself near the configured threshold, SDRS uses the space utilization ratio difference to determine which datastores should be considered as destinations for virtual machine migrations.

By default the value is set to 5%: a VM is migrated from the heavily loaded datastore to a less loaded one only when the difference in space utilization between the two datastores is at least 5%.
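As a rough sketch, the destination filter behaves like the comparison below. The function name and utilization figures are invented for illustration; this is not VMware's implementation.

```python
# Toy sketch of the space-utilization ratio difference filter (default 5%).
# Illustrative only; not VMware's actual algorithm.

def eligible_destinations(source_util, datastores, min_diff=5.0):
    """Return datastores whose space utilization is at least min_diff
    percentage points below the source datastore's utilization."""
    return [name for name, util in datastores.items()
            if source_util - util >= min_diff]

utilization = {"ds2": 82.0, "ds3": 79.0}        # percent used
print(eligible_destinations(86.0, utilization))  # ['ds3'] -- ds2 is only 4 points lower
```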

 

Ques 7: Why is migration of powered-off VMs preferred by SDRS for load balancing a datastore cluster?

Ans: Migration of a powered-off VM is preferred by SDRS because no changes occur inside the VM during the migration, so SDRS does not have to track which blocks inside the VMDK changed while the migration was in progress.

Note: If the swap file of a VM is stored at a user-defined location rather than inside the VM directory, SDRS will leave the swap file untouched during migration of that VM.


Ques 8: What is VM.Observed.Latency in terms of SDRS?

Ans: It is the time elapsed between a VM sending an I/O request and ESXi receiving the response for that request from the datastore.

Note: In vSphere 5.0, SDRS only measured the time between an I/O request leaving the ESXi host and the response coming back from the datastore; in vSphere 5.1 the measurement starts as soon as the I/O request is generated by the VM.


Ques 9: What is meant by "Performance Correlated Datastores"? How does it affect the migration recommendations generated by SDRS?

Ans: Performance-correlated datastores are datastores that share the same backend resources, such as the same disk group or the same RAID group on the storage array. By default SDRS avoids migration recommendations for a VM between two performance-correlated datastores, because if one datastore is experiencing high latency, another datastore carved out of the same disk group or RAID group is likely to experience the same latency.

Note: In vSphere 5.0, SDRS depended on VASA to identify performance-correlated datastores, but in vSphere 5.1 SDRS leverages SIOC for the same.


Ques 10: What is the default invocation period for SDRS, and why is SDRS not invoked after that default interval the first time it is enabled on a datastore cluster?

Ans: The default invocation period for SDRS is 8 hours. However, when SDRS is first enabled on a datastore cluster it will not be invoked after 8 hours, because it requires at least 16 hours of historical data to make any space- or I/O-related migration recommendations. Once 16 hours have elapsed and SDRS has data to work with, it is invoked every 8 hours from then on, but always analyzes the data from the last 16 hours.


Ques 11: What are the different conditions under which SDRS is invoked?

Ans: The following are the situations in which SDRS is invoked:

  1) A datastore entering maintenance mode.
  2) A new datastore is added to the datastore cluster.
  3) A datastore exceeds its configured threshold.
  4) During initial placement of a VM.
  5) When SDRS is invoked manually by an administrator.
  6) The datastore cluster configuration is updated.


Ques 12: How do ESXi hosts in a cluster learn what latency other ESXi hosts observe on a given datastore?

Ans: On each datastore a file named "iormstats.sf" is created and shared among all ESXi hosts connected to that datastore. Every ESXi host periodically writes its average latency and number of I/Os for that datastore into this file. Each ESXi host then reads the file and calculates the datastore-wide average latency.
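The datastore-wide figure is effectively an I/O-weighted average of what each host reports. The sketch below illustrates that calculation; the host names, numbers, and function are invented for illustration, and this is not the actual iormstats.sf format or VMware's code.

```python
# Toy sketch of the datastore-wide latency computation: each host publishes
# its average latency and I/O count (as via the shared iormstats.sf file),
# and any host can compute the I/O-weighted average. Illustrative only.

def datastore_wide_latency(host_stats):
    """host_stats: {host: (avg_latency_ms, io_count)} -> weighted average."""
    total_ios = sum(ios for _, ios in host_stats.values())
    if total_ios == 0:
        return 0.0
    return sum(lat * ios for lat, ios in host_stats.values()) / total_ios

stats = {"esxi01": (10.0, 300), "esxi02": (20.0, 100)}
print(datastore_wide_latency(stats))  # 12.5 (ms)
```

Weighting by I/O count matters: a host issuing few I/Os should not skew the cluster-wide view as much as a busy one.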


Ques 13: How do we enable SIOC logging, and how can we monitor the SIOC logs?

Ans: SIOC logging can be enabled by editing the advanced settings of the ESXi host: set the value of the Misc.SIOControlLoglevel parameter to 7 (the maximum). The log entries can then be monitored in the vmkernel logs of the host.

Note: SIOC needs to be restarted for the new log level to take effect. It can be restarted by logging into the ESXi host and running the command /etc/init.d/storageRM restart.


Ques 14: If someone has changed the SIOC log level, which file would you consult to find out?

Ans: When the SIOC log level is changed, the event is logged in the vmkernel log file (/var/log/vmkernel.log).


Ques 15: Why is it not considered a best practice to group datastores from different storage arrays together in a single datastore cluster?

Ans: When datastores from different types of storage arrays are grouped together in a datastore cluster, the performance of a VM varies across these datastores. Also, SDRS will be unable to leverage VAAI offloading during VM migration between two datastores that belong to different storage arrays.


Ques 16: How is SDRS affected if datastores with extents are used in a datastore cluster?

Ans: Extents are used to grow the size of a datastore, but datastores with extents should not be used in a datastore cluster, because SDRS disables I/O load balancing for such datastores. SIOC will also be disabled on such a datastore.


Ques 17: Can we migrate VMs with independent disks using SDRS? If yes, how; if no, why not?

Ans: By default SDRS does not migrate VMs with independent disks. This behavior can be changed by adding the advanced option "sdrs.disableSDRSonIndependentDisks" and setting its value to false.

Note: This works only for non-shared independent disks. Moving shared independent disks is not supported by SDRS.


Ques 18: How does SDRS compute the space requirement for thin-provisioned VMs?

Ans: For a thin-provisioned VM, SDRS considers the allocated disk size instead of the provisioned size when generating migration recommendations. When determining the placement of a virtual machine, Storage DRS verifies the disk usage of the files stored on the datastore. To avoid being caught out by sudden data growth in existing thin-disk VMDKs, Storage DRS adds a buffer space to each thin disk. This buffer zone is determined by the advanced setting "PercentIdleMBinSpaceDemand".

This setting controls how conservative Storage DRS is with determining the available space on the datastore for load balancing and initial placement operations of virtual machines.

SDRS also analyzes the data growth rate inside a thin-provisioned VM; if it is very high, SDRS attempts to avoid migrating such a VM to datastores where it could push space utilization above the configured threshold in the near future.
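The buffer calculation can be sketched as: space demand = allocated space plus a percentage of the idle (provisioned minus allocated) space. This toy version assumes a buffer of 25%, which the linked article reports as the default; the function name and figures are illustrative, not VMware's implementation.

```python
# Toy sketch of the thin-disk space-demand calculation described above,
# driven by the PercentIdleMBinSpaceDemand setting. Illustrative only.

def space_demand(allocated_gb, provisioned_gb, percent_idle=25):
    """Allocated space plus a buffer taken from the idle (unallocated) space."""
    idle = provisioned_gb - allocated_gb
    return allocated_gb + idle * percent_idle / 100.0

# A 100 GB thin disk with 40 GB written: SDRS budgets 40 + 25% of 60 = 55 GB
print(space_demand(allocated_gb=40, provisioned_gb=100))  # 55.0
```

Setting the percentage to 100 makes SDRS treat thin disks as if they were thick, i.e. it budgets the full provisioned size.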


For more info, follow this link:

http://frankdenneman.nl/2012/10/01/avoiding-vmdk-level-over-commitment-while-using-thin-disks-and-storage-drs/


Ques 19: What is the mirror driver and how does it work?

Ans: The mirror driver is used by SDRS to track block changes in a VM's VMDK while the storage migration of that VM is in progress. If write operations are issued during the migration, the mirror driver commits these disk writes to both the source and the destination.

The mirror driver works at the VMkernel level and uses the datamover to migrate VM disks from one datastore to another. Before the mirror driver is enabled for a VM, the VM is first stunned, and it is unstunned after the mirror driver has been enabled. The datamover then performs a single-pass block copy of the disks from the source to the destination datastore.


Ques 20: What are the types of datamovers which can be used by SDRS?

Ans: There are three types of datamovers that can be used by SDRS:

  1) fsdm: the legacy 3.0 datamover present in the ESXi host; the slowest of the three.
  2) fs3dm: the datamover introduced in vSphere 4.0; faster than the legacy fsdm.
  3) fs3dm (hardware offload): introduced in vSphere 4.1; the fastest of the three. It leverages VAAI to offload the disk migration between two datastores to the storage array.

 

Ques 21: Why is it recommended to avoid mixing datastores with different block sizes in a datastore cluster?

Ans: When the destination datastore is hosted on a different storage array, or has a different block size from the source datastore, SDRS is forced to use the "fsdm" datamover, which is the slowest one.

 

Note: When the source and destination datastores are on the same storage array and have the same block size, SDRS utilizes the "fs3dm" datamover.

When the storage array has VAAI functionality, and the source and destination datastores have the same block size and are hosted on that same array, SDRS uses the "fs3dm" hardware-offload datamover.


Ques 22: What enhancements were made to SvMotion in vSphere 5.1 compared to vSphere 5.0?

Ans: vSphere 5.1 allows 4 parallel disk copies per SvMotion process. Prior to vSphere 5.1, disks were copied serially. In 5.1, if a VM has 5 disks, the first four disks are copied in parallel, and the fifth disk starts copying as soon as any of the four completes.
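This "four slots, fifth waits for a free slot" behavior is the classic bounded worker-pool pattern, sketched below. The disk names and the copy function are placeholders; this just illustrates the scheduling, not VMware's code.

```python
# Toy sketch of the vSphere 5.1 copy scheduling described above: at most four
# disk copies run in parallel; the fifth starts when a slot frees up.
# Illustrative only.
from concurrent.futures import ThreadPoolExecutor

def copy_disk(disk):
    # Stand-in for the actual disk copy work.
    return f"{disk} copied"

disks = ["disk1", "disk2", "disk3", "disk4", "disk5"]
with ThreadPoolExecutor(max_workers=4) as pool:  # 4 parallel copy slots
    results = list(pool.map(copy_disk, disks))   # map preserves input order
print(results)
```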


Ques 23: What is the maximum number of simultaneous SvMotion processes associated with a datastore? How can this value be changed?

Ans: The maximum number of simultaneous SvMotion operations on a datastore is 8. This can be throttled by editing vpxd.cfg or through the advanced settings of vCenter Server. In the vpxd.cfg file, modify the parameter "MaxCostPerEsx41DS".


Ques 24: Why should partially connected datastores not be used in a datastore cluster?

Ans: When a datastore cluster contains partially connected datastores, I/O load balancing is disabled by SDRS on that datastore cluster. In that case SDRS performs load balancing based on space utilization only.

