VCAP5-DCA Objective 1.1 – Implement and Manage complex storage

Objectives

  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Determine appropriate RAID level for various Virtual Machine workloads
  • Apply VMware storage best practices
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS resignaturing
  • Understand and apply LUN masking using PSA‐related commands
  • Analyze I/O workloads to determine storage performance requirements
  • Identify and tag SSD devices
  • Administer hardware acceleration for VAAI
  • Configure and administer profile-based storage
  • Prepare storage for maintenance (mounting/un-mounting)
  • Upgrade VMware storage infrastructure

Last update 16-09-2012

Determine use cases for and configure VMware DirectPath I/O

Official Documentation:
vSphere Virtual Machine Administration, Chapter 8, Section “Add a PCI Device in the vSphere Client”, page 149.

Summary:
vSphere DirectPath I/O allows a guest operating system on a virtual machine to directly access physical PCI and PCIe devices connected to a host. Each virtual machine can be connected to up to six PCI devices. PCI devices connected to a host can be marked as available for passthrough from the Hardware Advanced Settings in the Configuration tab for the host. Snapshots are not supported with DirectPath I/O PCI devices.

Prerequisites:

  • To use DirectPath I/O, verify that the host has Intel® Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization Technology (IOMMU) enabled in the BIOS.
  • Verify that the PCI devices are connected to the host and marked as available for passthrough.
  • Verify that the virtual machine is using hardware version 7 or later.

This action is supported in both the vSphere Web Client and the vSphere Client. Figure 1

Installation is a two-step process. First, mark the PCI device as available for passthrough at the host level. When finished, add the PCI device to the virtual machine configuration. Figure 2
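As a quick check from the ESXi Shell you can list the PCI devices the host sees (a minimal sketch; the exact output fields differ per ESXi build):

~ # lspci
~ # esxcli hardware pci list | more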

Note: Adding a PCI device creates a memory reservation for the VM. Removing the PCI device does not release the reservation.
Other references:

Determine requirements for and configure NPIV

Official Documentation:
vSphere Virtual Machine Administration, Chapter 8, Section “Configure Fibre Channel NPIV Settings in the vSphere Web Client / vSphere Client”, page 123. Detailed information can be found in the vSphere Storage Guide, Chapter 4, Section “N-Port ID Virtualization”, page 41.

Summary:
N-Port ID Virtualization (NPIV) lets you control virtual machine access to LUNs on a per-virtual machine basis. NPIV provides the ability to share a single physical Fibre Channel HBA port among multiple virtual ports, each with unique identifiers. NPIV support is subject to the following limitations:

  • NPIV must be enabled on the SAN switch. Contact the switch vendor for information about enabling NPIV on their devices.
  • NPIV is supported only for virtual machines with RDM disks. Virtual machines with regular virtual disks continue to use the WWNs of the host’s physical HBAs.
  • The physical HBAs on the ESXi host must have access to a LUN using its WWNs in order for any virtual machines on that host to have access to that LUN using their NPIV WWNs. Ensure that access is provided to both the host and the virtual machines.
  • The physical HBAs on the ESXi host must support NPIV. If the physical HBAs do not support NPIV, the virtual machines running on that host will fall back to using the WWNs of the host’s physical HBAs for LUN access.
  • Each virtual machine can have up to 4 virtual ports. NPIV-enabled virtual machines are assigned exactly 4 NPIV-related WWNs, which are used to communicate with physical HBAs through virtual ports. Therefore, virtual machines can utilize up to 4 physical HBAs for NPIV purposes.

NOTE: To use vMotion for virtual machines with NPIV enabled, make sure that the RDM files of the virtual machines are located on the same datastore. You cannot perform Storage vMotion or vMotion between datastores when NPIV is enabled. Figure 3

Other references:

Determine appropriate RAID level for various Virtual Machine workloads

Official Documentation:
Not much.

Summary:
Choosing a RAID level depends first of all on the underlying storage hardware. Most storage arrays support more than one RAID level, such as RAID-5, RAID-6, RAID-10 or RAID-50. The choice is always a trade-off between performance, net capacity and things like the performance impact of a rebuild. Performance characteristics are usually available and it is not too hard to calculate net capacity. Imho, another factor apart from the RAID level is the type of disk: SATA 7.2K, SAS 10K or 15K, or SSD. Modern storage arrays even combine SSD and traditional disks in one enclosure and decide where to place a LUN (auto-tiering).
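As a hedged rule-of-thumb calculation (the workload numbers and per-disk IOPS figures below are assumptions, not measurements): a workload of 1000 IOPS with a 70/30 read/write ratio generates roughly 700 + (300 x 2) = 1300 back-end IOPS on RAID-10 (write penalty 2), 700 + (300 x 4) = 1900 on RAID-5 (write penalty 4) and 700 + (300 x 6) = 2500 on RAID-6 (write penalty 6). Dividing the back-end IOPS by what a single spindle can deliver (very roughly 75-80 IOPS for SATA 7.2K, 175-200 IOPS for SAS 15K) gives a first estimate of the number of disks needed per RAID level.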

Other references:

Apply VMware storage best practices

Official Documentation:
Overview at: http://www.vmware.com/technical-resources/virtual-storage/best-practices.html Documentation can be found at: http://www.vmware.com/technical-resources/virtual-storage/resources.html

Summary:
Many of the best practices for physical storage environments also apply to virtual storage environments. It is best to keep in mind the following rules of thumb when configuring your virtual storage infrastructure:

Configure and size storage resources for optimal I/O performance first, then for storage capacity
This means that you should consider throughput capability and not just capacity. Imagine a very large parking lot with only one lane of traffic for an exit. Regardless of capacity, throughput is affected. It’s critical to take into consideration the size and storage resources necessary to handle your volume of traffic—as well as the total capacity.

Aggregate application I/O requirements for the environment and size them accordingly.
As you consolidate multiple workloads onto a set of ESX servers that have a shared pool of storage, don’t exceed the total throughput capacity of that storage resource. Looking at the throughput characterization of the physical environment prior to virtualization can help you predict what throughput each workload will generate in the virtual environment.

Base your storage choices on your I/O workload.
Use an aggregation of the measured workload to determine what protocol, redundancy protection and array features to use, rather than using an estimate. The best results come from measuring your applications' I/O throughput and capacity for a period of several days prior to moving them to a virtualized environment.

Remember that pooling storage resources increases utilization and simplifies management, but can lead to contention.
There are significant benefits to pooling storage resources, including increased storage resource utilization and ease of management. However, at times, heavy workloads can have an impact on performance. It’s a good idea to use a shared VMFS volume for most virtual disks, but consider placing heavy I/O virtual disks on a dedicated VMFS volume or an RDM to reduce the effects of contention.

Other references:

Understand use cases for Raw Device Mapping

Official Documentation:
Chapter 14 in the vSphere Storage Guide is dedicated to Raw Device Mappings (starting on page 135). The chapter starts with an introduction to RDMs, discusses their characteristics, and concludes with information on how to create RDMs and how to manage paths for a mapped raw LUN.

Summary:
An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. The RDM allows a virtual machine to directly access and use the storage device. The RDM contains metadata for managing and redirecting disk access to the physical device. The file gives you some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access. Use cases for raw LUNs with RDMs are:

  • When SAN snapshot or other layered applications run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN.
  • In any MSCS clustering scenario that spans physical hosts – virtual-to-virtual clusters as well as physical-to-virtual clusters. In this case, cluster data and quorum disks should be configured as RDMs rather than as virtual disks on a shared VMFS.

Think of an RDM as a symbolic link from a VMFS volume to a raw LUN. The mapping makes LUNs appear as files in a VMFS volume. The RDM, not the raw LUN, is referenced in the virtual machine configuration. The RDM contains a reference to the raw LUN. Using RDMs, you can:

  • Use vMotion to migrate virtual machines using raw LUNs.
  • Add raw LUNs to virtual machines using the vSphere Client.
  • Use file system features such as distributed file locking, permissions, and naming.

Two compatibility modes are available for RDMs (a vmkfstools example follows this list):

  • Virtual compatibility mode allows an RDM to act exactly like a virtual disk file, including the use of snapshots.
  • Physical compatibility mode allows direct access of the SCSI device for those applications that need lower level control.
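
For illustration, an RDM mapping file can also be created from the ESXi Shell with vmkfstools; -r creates a virtual compatibility mode RDM and -z a physical compatibility mode RDM (a sketch only; the device name and destination path are placeholders):

~ # vmkfstools -r /vmfs/devices/disks/naa.<device_id> /vmfs/volumes/<datastore>/<vm>/<vm>_rdm.vmdk
~ # vmkfstools -z /vmfs/devices/disks/naa.<device_id> /vmfs/volumes/<datastore>/<vm>/<vm>_rdmp.vmdk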

RDM offers several benefits (shortlist).

  • User-Friendly Persistent Names
  • Dynamic Name Resolution
  • Distributed File Locking
  • File Permissions
  • File System Operations
  • Snapshots
  • vMotion
  • SAN Management Agents
  • N-Port ID Virtualization (NPIV)

Limitations of Raw Device Mapping

  • The RDM is not available for direct-attached block devices or certain RAID devices. The RDM uses a SCSI serial number to identify the mapped device. Because block devices and some direct-attach RAID devices do not export serial numbers, they cannot be used with RDMs.
  • If you are using the RDM in physical compatibility mode, you cannot use a snapshot with the disk. Physical compatibility mode allows the virtual machine to manage its own, storage-based, snapshot or mirroring operations. Virtual machine snapshots are available for RDMs in virtual compatibility mode.
  • You cannot map to a disk partition. RDMs require the mapped device to be a whole LUN.

Comparing features available with virtual disks and RDMs: Figure 4
In 2008 VMware presented the performance study “Performance Characterization of VMFS and RDM Using a SAN”. Based on ESX 3.5, tests were run to compare the performance of VMFS and RDM. The conclusions are:

  • For random reads and writes, VMFS and RDM yield a similar number of I/O operations per second.
  • For sequential reads and writes, performance of VMFS is very close to that of RDM (except on sequential reads with an I/O block size of 4K). Both RDM and VMFS yield a very high throughput in excess of 300 megabytes per second depending on the I/O block size.
  • For random reads and writes, VMFS requires 5 percent more CPU cycles per I/O operation compared to RDM.
  • For sequential reads and writes, VMFS requires about 8 percent more CPU cycles per I/O operation compared to RDM.

Another paper “Performance Best Practices for VMware vSphere 5.0” comes to the following conclusion: “Ordinary VMFS is recommended for most virtual disk storage, but raw disks might be desirable in some cases”
Other references:

Configure vCenter Server storage filters

Official Documentation:
vSphere Storage Guide, Chapter 13 “Working with Datastores”, page 125.

Summary:
When you perform VMFS datastore management operations, vCenter Server uses default storage protection filters. The filters help you to avoid storage corruption by retrieving only the storage devices that can be used for a particular operation. Unsuitable devices are not displayed for selection. You can turn off the filters to view all devices. There are 4 types of storage filters:

  • config.vpxd.filter.vmfsFilter, VMFS Filter
  • config.vpxd.filter.rdmFilter, RDM Filter
  • config.vpxd.filter.SameHostAndTransportsFilter, Same Host and Transports Filter
  • config.vpxd.filter.hostRescanFilter, Host Rescan Filter

VMFS Filter
Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter Server.

RDM Filter
Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM.

Same Host and Transports Filter
Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility. Prevents you from adding the following LUNs as extents:

  • LUNs not exposed to all hosts that share the original VMFS datastore.
  • LUNs that use a storage type different from the one the original VMFS datastore uses. For example, you cannot add a Fibre Channel extent to a VMFS datastore on a local storage device.

Host Rescan Filter
Automatically rescans and updates VMFS datastores after you perform datastore management operations. The filter helps provide a consistent view of all VMFS datastores on all hosts managed by vCenter Server.
NOTE: If you present a new LUN to a host or a cluster, the hosts automatically perform a rescan no matter whether you have the Host Rescan Filter turned on or off.
The vCenter Server storage protection filters are part of vCenter Server and are managed with the vSphere Client. The filters are turned on by default. To turn off a storage filter:

  • In the vSphere Client, select Administration > vCenter Server Settings.
  • In the settings list, select Advanced Settings.
  • In the Key text box, type a key, for example config.vpxd.filter.vmfsFilter.
  • In the Value text box, type False for the specified key.
  • Click Add.
  • Click OK.

Figure 5
Other references:

Understand and apply VMFS resignaturing

Official Documentation:
vSphere Storage Guide, Chapter 13 “Working with Datastores”, page 122.

Summary:
When a storage device contains a VMFS datastore copy, you can mount the datastore with the existing signature or assign a new signature.
Each VMFS datastore created in a storage disk has a unique UUID that is stored in the file system superblock. When the storage disk is replicated or snapshotted, the resulting disk copy is identical, byte-for-byte, with the original disk. As a result, if the original storage disk contains a VMFS datastore with UUID X, the disk copy appears to contain an identical VMFS datastore, or a VMFS datastore copy, with exactly the same UUID X.

ESXi can detect the VMFS datastore copy and display it in the vSphere Client. You can mount the datastore copy with its original UUID or change the UUID, thus resignaturing the datastore.
In addition to LUN snapshotting and replication, the following storage device operations might cause ESXi to mark the existing datastore on the device as a copy of the original datastore:

  • LUN ID changes
  • SCSI device type changes, for example, from SCSI-2 to SCSI-3
  • SPC-2 compliancy enablement

Mount a VMFS Datastore with an Existing Signature, example: You can keep the signature if, for example, you maintain synchronized copies of virtual machines at a secondary site as part of a disaster recovery plan. In the event of a disaster at the primary site, you mount the datastore copy and power on the virtual machines at the secondary site.

IMPORTANT: You can mount a VMFS datastore copy only if it does not collide with the original VMFS datastore that has the same UUID. To mount the copy, the original VMFS datastore has to be offline.

When you mount the VMFS datastore, ESXi allows both reads and writes to the datastore residing on the LUN copy. The LUN copy must be writable. The datastore mounts are persistent and valid across system reboots.

Procedure:
1. Log in to the vSphere Client and select the server from the inventory panel.
2. Click the Configuration tab and click Storage in the Hardware panel.
3. Click Add Storage.
4. Select the Disk/LUN storage type and click Next.
5. From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next. The name present in the VMFS Label column indicates that the LUN is a copy that contains a copy of an existing VMFS datastore.
6. Under Mount Options, select Keep Existing Signature.
7. In the Ready to Complete page, review the datastore configuration information and click Finish.

Use datastore resignaturing if you want to retain the data stored on the VMFS datastore copy.
When resignaturing a VMFS copy, ESXi assigns a new UUID and a new label to the copy, and mounts the copy as a datastore distinct from the original.
The default format of the new label assigned to the datastore is snap-snapID-oldLabel, where snapID is an integer and oldLabel is the label of the original datastore.
When you perform datastore resignaturing, consider the following points:

  • Datastore resignaturing is irreversible.
  • The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
  • A spanned datastore can be resignatured only if all its extents are online.
  • The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
  • You can mount the new VMFS datastore without a risk of its UUID colliding with UUIDs of any other datastore, such as an ancestor or child in a hierarchy of LUN snapshots.

The procedure is the same as above, except for step 6:
6. Under Mount Options, select Assign a New Signature.
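
The same operations are also available in the esxcli storage vmfs snapshot namespace (a sketch; replace the label with the label of your own datastore copy):

~ # esxcli storage vmfs snapshot list
~ # esxcli storage vmfs snapshot mount -l <VMFS_label>
~ # esxcli storage vmfs snapshot resignature -l <VMFS_label>

The list command shows unresolved VMFS copies, mount keeps the existing signature and resignature assigns a new one.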

Other references:

Understand and apply LUN masking using PSA‐related commands

Official Documentation:
vSphere Storage Guide, Chapter 17 “Understanding Multipathing and Failover”, page 169.

Summary:
The purpose of LUN masking is to prevent the host from accessing storage devices or LUNs or from using individual paths to a LUN. Use the esxcli commands to mask the paths. When you mask paths, you create claim rules that assign the MASK_PATH plug-in to the specified paths.

You can run the esxcli command directly in the ESXi Shell, or use the vMA or the vCLI. The syntax is slightly different when using esxcli from the vMA or vCLI: you have to add the --server=server_name option.
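
For example, the same claim rule listing from the local ESXi Shell and from the vMA/vCLI (the host name and prompt are only an illustration):

~ # esxcli storage core claimrule list
vi-admin@vma:~> esxcli --server=esxi01.example.local --username=root storage core claimrule list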

Procedure for Masking a LUN, in this example a Datastore named “IX2-iSCSI-LUNMASK”. Figure 6

Open the datastore “Properties” and then “Manage Paths” to check the paths involved. VMware KB 1009449 is more detailed than the Storage Guide; I have followed the steps in the KB.
1. Log into an ESXi host
2. Look at the Multipath Plug-ins currently installed on your ESXi host with the command:

~ # esxcfg-mpath -G

MASK_PATH
NMP

3. List all the claimrules currently on the ESXi host with the command:

~ # esxcli storage core claimrule list

Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ---------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP        vendor=* model=*

This is the default output.

4. Add a rule to hide the LUN.

First, find the naa device of the datastore you want to hide with the command:

~ # esxcfg-scsidevs -m

t10.ATA_____GB0160CAABV_____________________________5RX7BZHC____________:3 /vmfs/devices/disks/t10.ATA_____GB0160CAABV_____________________________5RX7BZHC____________:3 4c13c151-2e6c6f81-ab84-f4ce4698970c  0  ml110g5-local
naa.5000144f77827768:1                                                     /vmfs/devices/disks/naa.5000144f77827768:1                                                     4f9eca2e-3a28f563-c184-001b2181d256  0  IX2-iSCSI-01
naa.5000144f80206240:1                                                     /vmfs/devices/disks/naa.5000144f80206240:1                                                     4fa53d67-eac91517-abd8-001b2181d256  0  IX2-iSCSI-LUNMASK

naa.5000144f80206240:1, display name IX2-iSCSI-LUNMASK, is the device we want to mask.

Another command to show all devices and paths:

~ # esxcfg-mpath -L

vmhba35:C0:T1:L0 state:active naa.5000144f80206240 vmhba35 0 1 0 NMP active san iqn.1998-01.com.vmware:ml110g5 00023d000001,iqn.1992-04.com.emc:storage.StorCenterIX2.IX2-iSCSI-02,t,1
vmhba32:C0:T0:L0 state:active mpx.vmhba32:C0:T0:L0 vmhba32 0 0 0 NMP active local usb.vmhba32 usb.0:0
vmhba35:C0:T0:L0 state:active naa.5000144f77827768 vmhba35 0 0 0 NMP active san iqn.1998-01.com.vmware:ml110g5 00023d000001,iqn.1992-04.com.emc:storage.StorCenterIX2.IX2-iSCSI-01,t,1
vmhba0:C0:T0:L0 state:active t10.ATA_____GB0160CAABV_____________________________5RX7BZHC____________ vmhba0 0 0 0 NMP active local sata.vmhba0 sata.0:0
vmhba1:C0:T0:L0 state:active mpx.vmhba1:C0:T0:L0 vmhba1 0 0 0 NMP active local sata.vmhba1 sata.0:0

Second, check all of the paths that device naa.5000144f80206240 has (vmhba35:C0:T1:L0):

~ # esxcfg-mpath -L | grep naa.5000144f80206240

vmhba35:C0:T1:L0 state:active naa.5000144f80206240 vmhba35 0 1 0 NMP active san iqn.1998-01.com.vmware:ml110g5 00023d000001,iqn.1992-04.com.emc:storage.StorCenterIX2.IX2-iSCSI-02,t,1

As you apply the rule to -A vmhba35 -C 0 -L 0, verify that there is no other device with those parameters.

~ # esxcfg-mpath -L | egrep "vmhba35:C0.*L0"
vmhba35:C0:T1:L0 state:active naa.5000144f80206240 vmhba35 0 1 0 NMP active san iqn.1998-01.com.vmware:ml110g5 00023d000001,iqn.1992-04.com.emc:storage.StorCenterIX2.IX2-iSCSI-02,t,1
vmhba35:C0:T0:L0 state:active naa.5000144f77827768 vmhba35 0 0 0 NMP active san iqn.1998-01.com.vmware:ml110g5 00023d000001,iqn.1992-04.com.emc:storage.StorCenterIX2.IX2-iSCSI-01,t,1

Add a rule for this LUN with the command:

~ # esxcli storage core claimrule add -r 103 -t location -A vmhba35 -C 0 -T 1 -L 0 -P MASK_PATH

5. Verify that the rule is in effect with the command:

~ # esxcli storage core claimrule list

Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ----------------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            103  file     location   MASK_PATH  adapter=vmhba35 channel=0 target=1 lun=0
MP          65535  runtime  vendor     NMP        vendor=* model=*

6. Reload your claimrules in the VMkernel with the command:

~ # esxcli storage core claimrule load

7. Re-examine your claimrules and verify that you can see both the file and runtime class. Run the command:

~ # esxcli storage core claimrule list

Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ----------------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            103  runtime  location   MASK_PATH  adapter=vmhba35 channel=0 target=1 lun=0
MP            103  file     location   MASK_PATH  adapter=vmhba35 channel=0 target=1 lun=0
MP          65535  runtime  vendor     NMP        vendor=* model=*

8. Unclaim all paths to a device and then run the loaded claimrules on each of the paths to reclaim them.

~ # esxcli storage core claiming reclaim -d naa.5000144f80206240

~ # esxcli storage core claimrule run

9. Verify that the masked device is no longer used by the ESXi host.

~ # esxcfg-scsidevs -m

t10.ATA_____GB0160CAABV_____________________________5RX7BZHC____________:3 /vmfs/devices/disks/t10.ATA_____GB0160CAABV_____________________________5RX7BZHC____________:3 4c13c151-2e6c6f81-ab84-f4ce4698970c  0  ml110g5-local
naa.5000144f77827768:1                                                     /vmfs/devices/disks/naa.5000144f77827768:1                                                     4f9eca2e-3a28f563-c184-001b2181d256  0  IX2-iSCSI-01

The masked datastore does not appear in the list.

To see all the LUNs, use the “esxcfg-scsidevs -c” command.

~ # esxcfg-scsidevs -c

Device UID                                                                Device Type      Console Device                                                                                Size      Multipath Plugin  Display Name

mpx.vmhba1:C0:T0:L0                                                       CD-ROM           /vmfs/devices/cdrom/mpx.vmhba1:C0:T0:L0                                                       0MB       NMP     Local TSSTcorp CD-ROM (mpx.vmhba1:C0:T0:L0)
mpx.vmhba32:C0:T0:L0                                                      Direct-Access    /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0                                                      3815MB    NMP     Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)
naa.5000144f77827768                                                      Direct-Access    /vmfs/devices/disks/naa.5000144f77827768                                                      307200MB  NMP     EMC iSCSI Disk (naa.5000144f77827768)
t10.ATA_____GB0160CAABV_____________________________5RX7BZHC____________  Direct-Access    /vmfs/devices/disks/t10.ATA_____GB0160CAABV_____________________________5RX7BZHC____________  152627MB  NMP     Local ATA Disk (t10.ATA_____GB0160CAABV_____________________________5RX7BZHC____________)

To verify that a masked LUN is no longer an active device, run the command:

~ # esxcfg-mpath -L | grep naa.5000144f80206240

~ #

Empty output indicates that the LUN is not active.

Procedure for Unmasking a Path

1. List the current claimrules

~ # esxcli storage core claimrule list

Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ----------------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            103  runtime  location   MASK_PATH  adapter=vmhba35 channel=0 target=1 lun=0
MP            103  file     location   MASK_PATH  adapter=vmhba35 channel=0 target=1 lun=0
MP          65535  runtime  vendor     NMP        vendor=* model=*

2. Delete the MASK_PATH rule.

~ # esxcli storage core claimrule remove -r 103

3. Verify that the claimrule was deleted correctly.

~ # esxcli storage core claimrule list

Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ----------------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            103  runtime  location   MASK_PATH  adapter=vmhba35 channel=0 target=1 lun=0
MP          65535  runtime  vendor     NMP        vendor=* model=*

4. Reload the path claiming rules from the configuration file into the VMkernel.

~ # esxcli storage core claimrule load

5. Run the esxcli storage core claiming unclaim command for each path to the masked storage device:

~ # esxcli storage core claiming unclaim -t location -A vmhba35 -C 0 -T 1 -L 0

6. Run the path claiming rules.

~ # esxcli storage core claimrule run

Your host can now access the previously masked storage device.

Other references:

Analyze I/O workloads to determine storage performance requirements

Official Documentation:
The VMware website “Solutions” section contains information about virtualizing common business applications like Microsoft Exchange, SQL Server, SharePoint, Oracle DB and SAP, plus lots of related resources.

Figure 7

Summary:
This topic is about analyzing existing I/O workloads on (physical) systems to determine the required storage performance in the virtual environment.

Imho, this is different from monitoring the I/O load in a virtual environment; VMware and other parties like VKernel have tools and documentation on that subject. To name a few: performance graphs, esxtop, vscsiStats, etc.
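
A minimal sketch of capturing an I/O workload on an ESXi host with esxtop in batch mode and with vscsiStats (the sample interval, number of samples and world group ID are assumptions):

~ # esxtop -b -d 10 -n 360 > /tmp/esxtop-capture.csv
~ # vscsiStats -l
~ # vscsiStats -s -w <worldGroupID>
~ # vscsiStats -p ioLength -w <worldGroupID>
~ # vscsiStats -x -w <worldGroupID>

The batch capture can be analyzed afterwards in perfmon or a spreadsheet; vscsiStats prints per-virtual disk histograms such as ioLength, seekDistance, interarrival and latency.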

Other references:

Identify and tag SSD devices

Official Documentation:
vSphere Storage Guide, Chapter 15 “Solid State Disks Enablement”, page 143. This new chapter is dedicated to SSD devices and contains topics like “Tag Devices as SSD”, “Identify SSD Devices” and so on.

Summary:
Identify SSD devices

You can identify the SSD devices in your storage network. Before you identify an SSD device, ensure that the device is tagged as SSD.

Procedure:

  1. List the devices.
    # esxcli storage core device list
    
    #
    

    The command output includes the following information about the listed device.
    Is SSD: true

  2. Verify that the value of the flag Is SSD is true (the other possible value is false). Note that this is different from the Drive Type column shown in the vSphere Client.

Figure 8

Tag SSD devices

If ESXi does not automatically identify a device as an SSD, you can tag it using PSA SATP claim rules. The procedure to tag an SSD device is straightforward and has a lot in common with the MASK_PATH procedure.

  1. Identify the device to be tagged and its SATP.
    # esxcli storage nmp device list
    #
    
  2. Note down the SATP associated with the device.
  3. Add a PSA claim rule to mark the device as SSD. There are 4 different ways; for example, by specifying the device name:

    # esxcli storage nmp satp rule add -s SATP --device device_name --option=enable_ssd
    #
    


  4. Unclaim the device. Again there are 4 possible ways; for example, by device name:
    # esxcli storage core claiming unclaim --type device --device device_name
    #
    
  5. Reclaim the device by running the following commands.
    # esxcli storage core claimrule load
    # esxcli storage core claimrule run
    #
    
  6. Verify whether the device is tagged as SSD.
    # esxcli storage core device list -d device_name
    #
    
  7. The command output indicates if a listed device is tagged as SSD.
    Is SSD: true

If the SSD device that you want to tag is shared among multiple hosts, make sure that you tag the device from all the hosts that share the device.
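
A worked example of steps 1 to 6, assuming a hypothetical device naa.<device_id> that is claimed by the SATP VMW_SATP_DEFAULT_AA (verify the actual SATP in step 1 before adding the rule):

~ # esxcli storage nmp device list -d naa.<device_id>
~ # esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA --device naa.<device_id> --option=enable_ssd
~ # esxcli storage core claiming unclaim --type device --device naa.<device_id>
~ # esxcli storage core claimrule load
~ # esxcli storage core claimrule run
~ # esxcli storage core device list -d naa.<device_id> | grep "Is SSD"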

In case you do not have an SSD device available, you can trick ESXi into treating a local disk as an SSD device by performing the procedure presented by William Lam.

Other references:

Administer hardware acceleration for VAAI

Official Documentation:
vSphere Storage Guide, Chapter 18 “Storage Hardware Acceleration”, page 173, is dedicated to VAAI.

Summary:

When the hardware acceleration functionality is supported, the ESXi host can get hardware assistance and perform several tasks faster and more efficiently.

The host can get assistance with the following activities:

  • Migrating virtual machines with Storage vMotion
  • Deploying virtual machines from templates
  • Cloning virtual machines or templates
  • VMFS clustered locking and metadata operations for virtual machine files
  • Writes to thin provisioned and thick virtual disks
  • Creating fault-tolerant virtual machines
  • Creating and cloning thick disks on NFS datastores

vSphere Storage APIs – Array Integration (VAAI) were first introduced with vSphere 4.1, enabling offload capabilities support for three primitives:

  1. Full copy, enabling the storage array to make full copies of data within the array
  2. Block zeroing, enabling the array to zero out large numbers of blocks
  3. Hardware-assisted locking, providing an alternative mechanism to protect VMFS metadata

With vSphere 5.0, support for the VAAI primitives has been enhanced and additional primitives have been introduced:

  1. Thin Provisioning, enabling the reclamation of unused space and monitoring of space usage for thin-provisioned LUNs
  2. Hardware acceleration for NAS
  3. SCSI standardization by T10 compliancy for full copy, block zeroing and hardware-assisted locking

Imho, support for NAS devices is one of the biggest improvements. Prior to vSphere 5.0, a virtual disk on an NFS datastore was always created as a thin-provisioned disk; creating a thick disk was not even possible. Starting with vSphere 5.0, VAAI NAS extensions enable NAS vendors to reserve space for an entire virtual disk, which enables the creation of thick disks on NFS datastores.

NAS VAAI plug-ins are not shipped with vSphere 5.0. They are developed and distributed by storage vendors.

Hardware acceleration is on by default, but it can be disabled if needed. Read my post “Veni, Vidi, VAAI” for more info on how to check the Hardware Acceleration Support Status.
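
As a quick sketch, the hardware acceleration status of a device and the related advanced settings can be checked from the ESXi Shell (the device name is a placeholder):

~ # esxcli storage core device vaai status get -d naa.<device_id>
~ # esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
~ # esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
~ # esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

Setting the integer value of one of these options to 0 (esxcli system settings advanced set -i 0 -o <option>) disables the corresponding primitive.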

It is also possible to add Hardware Acceleration Claim Rules.

Remember, you need to add two claim rules, one for the VAAI filter and another for the VAAI plug-in. For the new claim rules to be active, you first define the rules and then load them into your system.

Procedure:

1. Define a new claim rule for the VAAI filter by running:

# esxcli --server=server_name storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER
#

2. Define a new claim rule for the VAAI plug-in by running:

# esxcli --server=server_name storage core claimrule add --claimrule-class=VAAI
#

3. Load both claim rules by running the following commands:

# esxcli --server=server_name storage core claimrule load --claimrule-class=Filter
# esxcli --server=server_name storage core claimrule load --claimrule-class=VAAI
#

4. Run the VAAI filter claim rule by running:

# esxcli --server=server_name storage core claimrule run --claimrule-class=Filter
#

NOTE: Only the Filter-class rules need to be run. When the VAAI filter claims a device, it automatically finds the proper VAAI plug-in to attach.

Procedure for installing a NAS plug-in

This procedure is different from the previous one and presumes the installation of a VIB package.

Procedure:

1. Place your host into maintenance mode.

2. Check and, if necessary, set the host acceptance level:

# esxcli software acceptance get
# esxcli software acceptance set --level=value
#

This command controls which VIB packages are allowed on the host. The value can be one of the following: VMwareCertified, VMwareAccepted, PartnerSupported, CommunitySupported. The default is PartnerSupported.

3. Install the VIB package:

# esxcli software vib install -v|--viburl=URL
#

The URL specifies the URL to the VIB package to install. http:, https:, ftp:, and file: are supported.

4. Verify that the plug-in is installed:

# esxcli software vib list
#

5. Reboot your host for the installation to take effect.

When you use the hardware acceleration functionality, certain considerations apply.

Several reasons might cause a hardware-accelerated operation to fail.

For any primitive that the array does not implement, the array returns an error. The error triggers the ESXi host to attempt the operation using its native methods.

The VMFS data mover does not leverage hardware offloads and instead uses software data movement when one of the following occurs:

  • The source and destination VMFS datastores have different block sizes.
  • The source file type is RDM and the destination file type is non-RDM (regular file).
  • The source VMDK type is eagerzeroedthick and the destination VMDK type is thin.
  • The source or destination VMDK is in sparse or hosted format.
  • The source virtual machine has a snapshot.
  • The logical address and transfer length in the requested operation are not aligned to the minimum alignment required by the storage device. All datastores created with the vSphere Client are aligned automatically.
  • The VMFS has multiple LUNs or extents, and they are on different arrays.
  • Hardware cloning between arrays, even within the same VMFS datastore, does not work.

TIP: When playing around with esxcli, remember that VMware has put a lot of effort into making esxcli a great command; it contains a lot of built-in help.

Example:

At first sight, this command seems to have no further options…

# esxcli storage core claimrule list

Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ---------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP        vendor=* model=*
#

But type this:

# esxcli storage core claimrule list -h

Error: Invalid option -h

Usage: esxcli storage core claimrule list [cmd options]

Description:
  list                  List all the claimrules on the system.

Cmd options:
  -c|--claimrule-class=<str>

Indicate the claim rule class to use in this operation [MP, Filter, VAAI, all].

So this command will give us more information:

# esxcli storage core claimrule list -c all

Rule Class   Rule  Class    Type       Plugin            Matches
----------  -----  -------  ---------  ----------------  ---------------------------------
MP              0  runtime  transport  NMP               transport=usb
MP              1  runtime  transport  NMP               transport=sata
MP              2  runtime  transport  NMP               transport=ide
MP              3  runtime  transport  NMP               transport=block
MP              4  runtime  transport  NMP               transport=unknown
MP            101  runtime  vendor     MASK_PATH         vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH         vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP               vendor=* model=*
Filter      65430  runtime  vendor     VAAI_FILTER       vendor=EMC model=SYMMETRIX
Filter      65430  file     vendor     VAAI_FILTER       vendor=EMC model=SYMMETRIX
Filter      65431  runtime  vendor     VAAI_FILTER       vendor=DGC model=*
Filter      65431  file     vendor     VAAI_FILTER       vendor=DGC model=*
Filter      65432  runtime  vendor     VAAI_FILTER       vendor=EQLOGIC model=*
Filter      65432  file     vendor     VAAI_FILTER       vendor=EQLOGIC model=*
Filter      65433  runtime  vendor     VAAI_FILTER       vendor=NETAPP model=*
Filter      65433  file     vendor     VAAI_FILTER       vendor=NETAPP model=*
Filter      65434  runtime  vendor     VAAI_FILTER       vendor=HITACHI model=*
Filter      65434  file     vendor     VAAI_FILTER       vendor=HITACHI model=*
Filter      65435  runtime  vendor     VAAI_FILTER       vendor=LEFTHAND model=*
Filter      65435  file     vendor     VAAI_FILTER       vendor=LEFTHAND model=*
VAAI        65430  runtime  vendor     VMW_VAAIP_SYMM    vendor=EMC model=SYMMETRIX
VAAI        65430  file     vendor     VMW_VAAIP_SYMM    vendor=EMC model=SYMMETRIX
VAAI        65431  runtime  vendor     VMW_VAAIP_CX      vendor=DGC model=*
VAAI        65431  file     vendor     VMW_VAAIP_CX      vendor=DGC model=*
VAAI        65432  runtime  vendor     VMW_VAAIP_EQL     vendor=EQLOGIC model=*
VAAI        65432  file     vendor     VMW_VAAIP_EQL     vendor=EQLOGIC model=*
VAAI        65433  runtime  vendor     VMW_VAAIP_NETAPP  vendor=NETAPP model=*
VAAI        65433  file     vendor     VMW_VAAIP_NETAPP  vendor=NETAPP model=*
VAAI        65434  runtime  vendor     VMW_VAAIP_HDS     vendor=HITACHI model=*
VAAI        65434  file     vendor     VMW_VAAIP_HDS     vendor=HITACHI model=*
VAAI        65435  runtime  vendor     VMW_VAAIP_LHN     vendor=LEFTHAND model=*
VAAI        65435  file     vendor     VMW_VAAIP_LHN     vendor=LEFTHAND model=*
~ #

Other references:

Configure and administer profile-based storage

Official Documentation:
vSphere Storage Guide, Chapter 21 “Virtual Machine Storage Profiles”, page 195.

Also, vSphere Storage Guide, Chapter 20 “Using Storage Vendor Providers”, page 191.

Summary:
In a few words: with Profile-Driven Storage you can describe storage capabilities in terms of capacity, performance, fault tolerance, replication, etc. The information comes from the storage vendors (see Chapter 20; this is also known as “vSphere Storage APIs – Storage Awareness” or VASA) or is user-defined. In the final step, a VM is associated with a storage profile. Depending on its placement, the VM is compliant or not.

And that is exactly what happens. It is just a bit cumbersome imho.

Important Note: Profile-driven storage does not support RDMs.

In fact, it comes down to performing the following tasks to get Profile-Driven Storage in place:

  1. If your storage does not support VASA, create your user-defined capabilities. Go to “VM Storage Profiles”. Figure 9
  2. and select “Manage Storage Capabilities”. Add the new storage capabilities. Figure 10
  3. Create your VM Storage Profiles (bind them to capabilities). Figure 11
  4. Result: we have created 3 storage profiles: Gold, Silver and Bronze. Figure 12
  5. Assign storage capabilities to datastores (this is necessary when using user-defined capabilities).
  6. Go to Datastores and select a datastore. Figure 13
  7. and assign a defined storage capability. Figure 14
  8. The result. Figure 15
  9. Return, and now enable storage profiles. Figure 16
  10. Select Hosts or Cluster, check the licenses and click Enable. KOE-HADRS01 is now enabled.
  11. Assign VMs to an associated storage profile. Figure 17
  12. Do not forget to Propagate to disks.
  13. The result. Figure 18
  14. Check Compliance. Figure 19
  15. Finished

Other references:

  • vSphere Storage APIs – Storage Awareness FAQ, http://kb.vmware.com/kb/2004098
  • A sneak-peek at how some of VMware’s Storage Partners are implementing VASA, a VMware blog post with some real life examples.

Prepare storage for maintenance (mounting/un-mounting)

Official Documentation:
vSphere Storage Guide, Chapter 13 “Working with Datastores”, page 128, describes how to unmount a VMFS or NFS datastore.

Summary:

When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you specify. The datastore continues to appear on other hosts, where it remains mounted.

Important NOTE: vSphere HA heartbeating does not prevent you from unmounting the datastore. If a datastore is used for heartbeating, unmounting it might cause the host to fail and restart any active virtual machine. If the heartbeating check fails, the vSphere Client displays a warning.

Before unmounting VMFS datastores, make sure that the following prerequisites are met:

  • No virtual machines reside on the datastore.
  • The datastore is not part of a datastore cluster.
  • The datastore is not managed by Storage DRS.
  • Storage I/O control is disabled for this datastore.
  • The datastore is not used for vSphere HA heartbeating.

The procedure is simple: display the datastore of choice, right-click and select Unmount.

If the datastore is shared, you can select which hosts should no longer access the datastore. Before finishing the task, the prerequisites are presented one more time.

Figure 20

Mounting a datastore is a bit simpler. There is a slight difference between mounting a shared and an unshared VMFS datastore.
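
The same can be done from the ESXi Shell with the esxcli storage filesystem namespace (a sketch; use your own datastore label or UUID):

~ # esxcli storage filesystem list
~ # esxcli storage filesystem unmount -l <datastore_label>
~ # esxcli storage filesystem mount -l <datastore_label>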

Other references:

Upgrade VMware storage infrastructure

Official Documentation:
vSphere Storage Guide, Chapter 13 “Working with Datastores”, page 120 has a section on Upgrading VMFS Datastores.

Summary:

  • A VMFS3 datastore can be upgraded directly to VMFS5 (see the command sketch after this list).
  • A VMFS2 datastore must first be upgraded to VMFS3 before it can be upgraded to VMFS5. You will need an ESX/ESXi 4.x host to perform this step.
  • A datastore upgrade is a one-way process.
  • Remember, an upgraded VMFS5 datastore does not have the same characteristics as a newly created VMFS5 datastore.
  • All hosts accessing a VMFS5 datastore must support this version.
  • Before upgrading to VMFS5, check that the volume has at least 2 MB of free blocks and 1 free file descriptor.
  • The upgrade process is non-disruptive.
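
A minimal sketch of the VMFS3-to-VMFS5 upgrade from the ESXi Shell (the datastore label is a placeholder); vmkfstools -P can be used to check the VMFS version before and after:

~ # vmkfstools -P /vmfs/volumes/<datastore_label>
~ # esxcli storage vmfs upgrade -l <datastore_label>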

Other references:
