VCAP DCD Study – Home Lab Design Part 8

Objective 3.5 – Determine Virtual Machine Configuration for a vSphere 5.x Physical Design
Knowledge

 

1. Describe the applicability of using an RDM or a virtual disk for a given VM.

RDMs

Only use RDMs when necessary, e.g. Microsoft clustering, SAN management agents that require direct access to the LUN, or migrations; there is very little performance difference between an RDM and a VMFS virtual disk.

Skills and Abilities

2. Based on the service catalog and given functional requirements, for each service: Determine the most appropriate virtual machine configuration for the design.

• Implement the service based on the required infrastructure qualities (a configuration sketch in code follows this list):

  • Always start with only 1 vCPU
  • Enable TPS
  • Always install VMware Tools
  • Only allocate RAM needed
  • Align virtual disks
  • Remove Floppy and any unneeded I/O devices or VM Hardware
  • Paravirtual SCSI (PVSCSI) adapter for data disks (not the OS disk); typically used for workloads above 2,000 IOPS
  • VMXNET3 Ethernet Adapters
  • If redirecting VM swap files, place them on shared storage for better vMotion performance
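
As a rough illustration of what this checklist looks like in code, here is a minimal pyVmomi sketch that builds a VM config spec along these lines. The pyVmomi SDK, the portgroup name, and the RAM figure are my assumptions, not part of the blueprint:

```python
# Minimal sketch: a config spec reflecting the checklist above --
# 1 vCPU, only the RAM needed, VMXNET3 NIC, PVSCSI for data disks.
from pyVmomi import vim

def baseline_config_spec(name, ram_mb):
    spec = vim.vm.ConfigSpec(name=name, numCPUs=1, memoryMB=ram_mb)

    # VMXNET3 adapter instead of the default emulated NIC
    nic = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=vim.vm.device.VirtualVmxnet3(
            backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
                deviceName="VM Network")))  # placeholder portgroup name

    # Paravirtual SCSI controller for the *data* disks only
    pvscsi = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=vim.vm.device.ParaVirtualSCSIController(
            busNumber=1,
            sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing))

    spec.deviceChange = [nic, pvscsi]
    return spec
```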

3. Based on an existing logical design, determine appropriate virtual disk type and placement.

 

  • Thick Provision Lazy Zeroed: Creates a virtual disk in the default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand on first write from the virtual machine. Because the default flat format does not zero the allocated space up front, it does not eliminate the possibility of recovering deleted files or restoring old data that might be present on that space. You cannot convert a flat disk to a thin disk.
  • Thick Provision Eager Zeroed: A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time, and in contrast to the flat format, the data remaining on the physical device is zeroed out when the virtual disk is created. It can take much longer to create disks in this format than to create other types of disks.
  • Thin Provision: Use this format to save storage space. For a thin disk, you provision as much datastore space as the disk would require based on the value you enter for the disk size, but the thin disk starts small and at first uses only as much datastore space as it needs for its initial operations. NOTE: If a virtual disk supports clustering solutions such as Fault Tolerance, do not make the disk thin. If the thin disk needs more space later, it can grow to its maximum capacity and occupy the entire datastore space provisioned to it. You can also manually convert a thin disk into a thick disk. (The sketch below shows how these three formats map to flags in code.)
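
For reference, these three formats come down to two flags on the virtual disk's flat-file backing. A minimal sketch, assuming the pyVmomi SDK; the helper function and its argument names are illustrative:

```python
# How the three disk types map to the backing flags: lazy zeroed is
# the default (both flags off), eager zeroed scrubs at creation time
# (required for FT/clustering), thin allocates on demand.
from pyVmomi import vim

def disk_backing(disk_type):
    """Return a flat-file backing for 'lazy', 'eager', or 'thin'."""
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="persistent")
    backing.thinProvisioned = (disk_type == "thin")
    backing.eagerlyScrub = (disk_type == "eager")  # zeroed up front
    return backing
```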

 

4. Size VMs appropriately according to application requirements, incorporating VMware best practices.

BrownBag notes again!

  • Start with 1 vCPU and allocate only the RAM required by the ISV (Independent Software Vendor) for a given application
  • For storage, start from a current-state analysis, then add enough for growth (patches/updates), the .vswp file, logging, and other overhead: (avg size of VMs * number of VMs on the datastore) + 20%, then round up the final number (see the sizing sketch after this list)
  • Size VM resources in accordance with NUMA boundaries: if each NUMA node has 4 cores, assign vCPUs in multiples of 4; with 6 cores per node, multiples of 6, and so on
  • If you overallocate RAM, more memory overhead is consumed per VM, wasting RAM; this is more of a concern in larger environments
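
A quick sketch of the sizing arithmetic above in plain Python; the example figures are made up:

```python
import math

def datastore_size_gb(avg_vm_gb, vm_count, overhead_pct=20):
    """(avg VM size * number of VMs) + 20%, rounded up."""
    return math.ceil(avg_vm_gb * vm_count * (1 + overhead_pct / 100))

def numa_aligned_vcpus(requested, cores_per_node):
    """Round a vCPU request up to the next NUMA-node multiple."""
    return math.ceil(requested / cores_per_node) * cores_per_node

print(datastore_size_gb(60, 10))   # 720 GB for ten 60 GB VMs
print(numa_aligned_vcpus(5, 4))    # 8 vCPUs on a 4-core NUMA node
```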

5. Determine appropriate reservations, shares, and limits.

Shares, Reservations, and Limits:

  • Deploy VMs with default settings unless there is a clear reason to do otherwise
  • Use sparingly, if at all!
  • Are there apps that need resources even during contention? Then use reservations (a sketch follows this list)
  • These settings add complexity and administration overhead
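
When a reservation is warranted, it is a small reconfigure. A minimal pyVmomi sketch, with placeholder MHz/MB values; shares and limits are deliberately left at their defaults:

```python
# Reserve CPU (MHz) and memory (MB) for a VM that must get resources
# even under contention; everything else stays at the default.
from pyVmomi import vim

def reserve_resources(cpu_mhz=1000, mem_mb=2048):
    return vim.vm.ConfigSpec(
        cpuAllocation=vim.ResourceAllocationInfo(reservation=cpu_mhz),
        memoryAllocation=vim.ResourceAllocationInfo(reservation=mem_mb))

# Applied with vm.ReconfigVM_Task(spec=reserve_resources()) on a
# connected VirtualMachine object.
```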

6. Based on an existing logical design, determine virtual hardware options.

From the performance best practice doc.

Allocate to each virtual machine only as much virtual hardware as that virtual machine requires.

Provisioning a virtual machine with more resources than it requires can, in some cases, reduce the performance of that virtual machine as well as other virtual machines sharing the same host.

Disconnect or disable any physical hardware devices that you will not be using. These might include devices such as:

  • COM ports
  • LPT ports
  • USB controllers
  • Floppy drives
  • Optical drives (that is, CD or DVD drives)
  • Network interfaces
  • Storage controllers

Disabling hardware devices (typically done in the BIOS) can free interrupt resources. Additionally, some devices, such as USB controllers, operate on a polling scheme that consumes extra CPU resources. Lastly, some PCI devices reserve blocks of memory, making that memory unavailable to ESXi.

Unused or unnecessary virtual hardware devices can impact performance and should be disabled. For example, Windows guest operating systems poll optical drives (that is, CD or DVD drives) quite frequently. When virtual machines are configured to use a physical drive, and multiple guest operating systems simultaneously try to access that drive, performance could suffer. This can be reduced by configuring the virtual machines to use ISO images instead of physical drives, and can be avoided entirely by disabling optical drives in virtual machines when the devices are not needed.
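
A minimal sketch of auditing a VM for the removable devices called out above, assuming the pyVmomi SDK and an already-connected VirtualMachine object named vm:

```python
# Build a reconfigure spec that removes every floppy, CD/DVD drive,
# and USB controller found on an existing VM.
from pyVmomi import vim

REMOVABLE = (vim.vm.device.VirtualFloppy,
             vim.vm.device.VirtualCdrom,
             vim.vm.device.VirtualUSBController)

def removal_spec(vm):
    changes = [
        vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
            device=dev)
        for dev in vm.config.hardware.device
        if isinstance(dev, REMOVABLE)]
    return vim.vm.ConfigSpec(deviceChange=changes)
```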

ESXi 5.5 introduces virtual hardware version 10. By creating virtual machines using this hardware version, or upgrading existing virtual machines to this version, a number of additional capabilities become available. This hardware version is not compatible with versions of ESXi prior to 5.5, however, and thus if a cluster of ESXi hosts will contain some hosts running pre-5.5 versions of ESXi, the virtual machines running on hardware version 10 will be constrained to run only on the ESXi 5.5 hosts. This could limit vMotion choices for Distributed Resource Scheduling (DRS) or Distributed Power Management (DPM).

7. Design a vApp catalog of appropriate VM offerings (e.g., templates, OVFs, vCO).

vApps are useful for packaging applications that have dependencies; a vApp can be converted to OVF and exported.

8. Describe implications of and apply appropriate use cases for vApps.

Simplified deployment of an application for developers; the vApp can be re-packaged and converted to OVF at each stage of the SDLC.

9. Decide on the suitability of using FT or 3rd party clustering products based on application requirements.

Currently limited to 1 vCPU, but… vSphere 6.0 was announced this week, so support for up to 4 vCPUs is here! Awesome! We’ll be seeing a lot more use cases…

From the performance best practice doc.

FT virtual machines that receive large amounts of network traffic or perform lots of disk reads can create significant bandwidth demand on the NIC specified for the logging traffic. This is true of machines that routinely do these things as well as machines doing them only intermittently, such as during a backup operation. To avoid saturating the network link used for logging traffic, limit the number of FT virtual machines on each host, or limit the disk read bandwidth and network receive bandwidth of those virtual machines.

Make sure the FT logging traffic is carried by at least a Gigabit-rated NIC (which should in turn be connected to at least Gigabit-rated network infrastructure).

NOTE: Turning on FT for a powered-on virtual machine will also automatically “Enable FT” for that virtual machine.

Avoid placing more than four FT-enabled virtual machines on a single host. In addition to reducing the possibility of saturating the network link used for logging traffic, this also limits the number of simultaneous live-migrations needed to create new secondary virtual machines in the event of a host failure.
If the secondary virtual machine lags too far behind the primary (which usually happens when the primary virtual machine is CPU bound and the secondary virtual machine is not getting enough CPU cycles), the hypervisor might slow the primary to allow the secondary to catch up. The following recommendations help avoid this situation:
  • Make sure the hosts on which the primary and secondary virtual machines run are relatively closely matched, with similar CPU make, model, and frequency.
  • Make sure that power management scheme settings (both in the BIOS and in ESXi) that cause CPU frequency scaling are consistent between the hosts on which the primary and secondary virtual machines run.
  • Enable CPU reservations for the primary virtual machine (which will be duplicated for the secondary virtual machine) to ensure that the secondary gets CPU cycles when it requires them.

 

10. Determine and implement an anti-virus solution

Basically referring to vShield Endpoint; there are many AV products, and choosing one will come down to the requirements.

 
