
What’s New in VMware vSphere 8 Update 3 (VCF 5.2)

Discover the latest enhancements in VMware vSphere 8 Update 3, including improved lifecycle management, advanced security features, and GPU workload support. Learn how these updates optimize performance, scalability, and resilience for your IT infrastructure.


Introduction

VMware vSphere 8 Update 3 brings a host of new features and enhancements designed to improve performance, security, and operational efficiency. This article provides an in-depth look at the latest updates and how they can benefit your IT infrastructure.

Minimal downtime when updating vCenter: you can quickly address security vulnerabilities and easily roll back if complications arise.

Previously this was limited to single self-managed vCenters, but now it is available for all vCenter topologies.

Administrators also now have the option of a manual or automated switchover; previously, only manual switchover was available.
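Before and after a switchover, it can be useful to confirm which version and build the appliance is actually running. Here is a minimal sketch against the vSphere Automation REST API's session and appliance version endpoints; the hostname and credentials are placeholders.

```python
# Minimal sketch: check the vCenter version/build around a reduced-downtime
# switchover via the vSphere Automation REST API. Hostname and credentials
# below are placeholders.
import requests

VCENTER = "vcenter.example.com"  # placeholder

def get_vcenter_version(host: str, user: str, password: str) -> dict:
    # POST /api/session returns a session token for subsequent calls.
    resp = requests.post(
        f"https://{host}/api/session",
        auth=(user, password),
        verify=False,  # lab only; use proper CA verification in production
    )
    resp.raise_for_status()
    token = resp.json()
    # GET /api/appliance/system/version reports version, build, and type.
    resp = requests.get(
        f"https://{host}/api/appliance/system/version",
        headers={"vmware-api-session-id": token},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json()

info = get_vcenter_version(VCENTER, "administrator@vsphere.local", "****")
print(info["version"], info["build"])
```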

💡
VMware plans to add a scheduled switchover in upcoming releases.

Partial Maintenance Mode is a state an ESXi host is placed into where VMs continue to run, but migrations and new VM creation on the host are disallowed. It locks down the host in terms of VM mobility and VM creation while allowing the current workloads to keep running. It is not a state that users can enter manually: only Lifecycle Manager automatically places the host into Partial Maintenance Mode when it is performing Live Patching. Users do have the option to move the host out of Partial Maintenance Mode if something goes wrong.

Live Patching reduces downtime and maintains continuous operation: patches are applied without rebooting hosts.

Technical details of the process:

  1. The host is moved into Partial Maintenance Mode.
  2. A new copy (a clone / newly provisioned mount) of the area of the ESXi host to be patched is mounted.
  3. The newly mounted area is patched.
  4. VMs take advantage of the newly patched instance by going through Fast Suspend Resume (FSR). This is the same non-disruptive process used for VM reconfiguration (adding a NIC, hard disk, etc.); a hot-add reconfigure of this kind is sketched below.
  5. Once the host is patched, it is automatically moved out of Partial Maintenance Mode.
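For readers curious what such a non-disruptive reconfigure looks like in practice, here is a minimal pyVmomi sketch that hot-adds a VMXNET3 NIC to a running VM; the vCenter address, credentials, VM name, and network name are placeholders.

```python
# Minimal pyVmomi sketch of the kind of non-disruptive reconfigure (hot-adding
# a NIC) that step 4 compares Fast Suspend Resume to. Connection details and
# the VM/network names are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="****", disableSslCertValidation=True)
content = si.RetrieveContent()

# Find the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-test-vm")
view.Destroy()

# Build a device-change spec that hot-adds a VMXNET3 NIC.
nic = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualVmxnet3(
        backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
            deviceName="VM Network")  # placeholder port group
    ),
)
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic]))
Disconnect(si)
```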
💡
There are some limitations:
FSR is not available for VMs configured with Fault Tolerance or direct-access (passthrough) devices.

GPU-enabled VMs, however, are supported!
💡
Live Patching can only be applied between closely adjacent releases, so keep in mind that the feature works only if you stay on the most current release.

Streamlined cluster image definitions maintain compatibility and customization: you can override vendor add-ons and remove third-party components. Sometimes it is useful to override components the vendor recommends applying to the image, or to use older drivers for compatibility and stability. To reduce the image size for locations with very limited network bandwidth, you can remove VMware Tools or some of the Host Client components.
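As a starting point before overriding or removing components, you can inspect a cluster's current image through the vSphere Automation REST API. A minimal sketch follows; the endpoint path is from the ESX Settings API, and the field names follow its schema as I understand it, so verify them against your SDK. The cluster ID and credentials are placeholders.

```python
# Minimal sketch: read a cluster's current vLCM image (base image, vendor
# add-on, components) via the vSphere Automation REST API.
import requests

VCENTER = "vcenter.example.com"  # placeholder
CLUSTER = "domain-c8"            # managed object ID of the cluster (placeholder)

token = requests.post(f"https://{VCENTER}/api/session",
                      auth=("administrator@vsphere.local", "****"),
                      verify=False).json()

software = requests.get(
    f"https://{VCENTER}/api/esx/settings/clusters/{CLUSTER}/software",
    headers={"vmware-api-session-id": token},
    verify=False,
).json()

print("Base image:", software["base_image"]["version"])
print("Add-on:", software.get("add_on"))
print("Components:", list(software.get("components", {})))
```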

vCLS service objects are no longer traditional VMs; they are now based on the CRX runtime. Only 2 vCLS appliances run per cluster, and they are embedded into ESXi directly, so there is no OVA push and none of the problems that came with it. The memory footprint is around 100 MB per appliance, or about 200 MB per cluster.

💡
A cluster auto-converts to the Embedded vCLS installation as soon as one of its hosts is on 8.0 U3, and rolls back to traditional vCLS in case of a downgrade or rollback.
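One quick way to confirm the conversion is to check whether any traditional vCLS VMs remain in the inventory. A minimal pyVmomi sketch follows; the "vCLS" name prefix is the convention traditional vCLS VMs use, and the connection details are placeholders.

```python
# Minimal pyVmomi sketch: list any traditional vCLS VMs still in the inventory.
# After a cluster converts to Embedded vCLS, these should disappear.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="****", disableSslCertValidation=True)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vcls_vms = [v.name for v in view.view if v.name.startswith("vCLS")]
view.Destroy()
Disconnect(si)

print("Traditional vCLS VMs still in inventory:", vcls_vms or "none")
```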

With FPIN (Fabric Performance Impact Notification), the vSphere infrastructure layer can now handle notifications from SAN switches or targets and learn about degraded SAN links, ensuring that only healthy paths to the storage devices are used. FPIN is an industry standard that notifies devices of link problems or other issues with a connection or a possible path through the fabric.

New VMFS API calls have been implemented to allow inflation of blocks on a VMFS disk while the disk is in use. This API is 10x faster than the existing Thin → Eager Zeroed Thick (EZT) conversion on VMFS. Thick Provision Lazy Zeroed, Thick Provision Eager Zeroed, and First Class Disks (FCD) can now be inflated much faster.
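For reference, disk inflation is driven through the long-standing VirtualDiskManager interface; the speedup itself happens inside the platform. A minimal pyVmomi sketch, with placeholder datastore path and connection details:

```python
# Minimal pyVmomi sketch: inflate a thin-provisioned VMDK to its full size via
# VirtualDiskManager.InflateVirtualDisk_Task. The faster in-use block inflation
# described above is handled internally by the platform.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="****", disableSslCertValidation=True)
content = si.RetrieveContent()

# First datacenter in the inventory (placeholder selection).
dc = next(e for e in content.rootFolder.childEntity
          if isinstance(e, vim.Datacenter))

task = content.virtualDiskManager.InflateVirtualDisk_Task(
    name="[datastore1] my-vm/my-vm.vmdk",  # placeholder datastore path
    datacenter=dc,
)
Disconnect(si)
```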

Initial support for SCSI and uniform storage configurations provides active/active storage capabilities for both stretched and non-stretched storage clusters. This enables vVols to support active-active deployment topologies with block-based (FC or iSCSI) access between the two sites.

Pure Storage is the design partner for this solution.

💡
vCenter HA configurations are not supported yet, but support is planned for a later release.

In addition, 8.0 U3 now has manual (CLI) and automatic UNMAP support for vVols on NVMe volumes, which lets you maintain space efficiency without admin intervention. To keep the volume of UNMAP traffic from overwhelming the array's space reclamation, the newest version also lets you define the maximum number of hosts sending UNMAP commands at once, configured per datastore (using a parameter called reclaim-maxhosts, with values between 1 and 128 hosts).

This means that Microsoft WSFC is supported on vVols shared disks, with no need for RDMs.
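One common ingredient of WSFC-style shared disks is a SCSI controller with physical bus sharing, to which the shared disk is then attached. A minimal pyVmomi spec sketch follows; the VM lookup and the shared vVol disk spec itself are omitted, and whether this applies to your array is feature-dependent.

```python
# Minimal pyVmomi sketch: device spec for a ParaVirtual SCSI controller with
# physical bus sharing, as used for WSFC-style shared disks. Only the
# controller spec is shown; attaching the shared disk is omitted.
from pyVmomi import vim

controller_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.ParaVirtualSCSIController(
        busNumber=1,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.physicalSharing,
    ),
)
config_spec = vim.vm.ConfigSpec(deviceChange=[controller_spec])
# vm.ReconfigVM_Task(spec=config_spec) would apply this to each cluster node.
```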

Hosts can now run different types of workloads on a single GPU and share GPU resources among various applications: you can mix and match vGPU profile types and memory sizes across different VMs that use the same physical GPU.

💡
The underlying GPU needs to support multiple different profile types; check the NVIDIA documentation.
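For reference, a vGPU profile is attached to a VM as a PCI passthrough device with a vGPU backing. A minimal pyVmomi sketch follows; the profile string is an example only, and valid names come from your NVIDIA driver documentation.

```python
# Minimal pyVmomi sketch: attach an NVIDIA vGPU profile to a VM. With
# heterogeneous profiles in 8.0 U3, different VMs on the same physical GPU can
# use different profile strings (types and memory sizes).
from pyVmomi import vim

vgpu_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualPCIPassthrough(
        backing=vim.vm.device.VirtualPCIPassthrough.VgpuBackingInfo(
            vgpu="nvidia_a40-12q"  # example profile name; see NVIDIA docs
        )
    ),
)
config_spec = vim.vm.ConfigSpec(deviceChange=[vgpu_spec])
# vm.ReconfigVM_Task(spec=config_spec) applies it (with the VM powered off).
```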

We can also leverage a new piece of hardware called the GPU Media Engine. There is typically only one Media Engine per physical GPU, and this hardware is designed for tasks like video rendering (hardware acceleration for the H.264/H.265 video codecs).

💡
Because there is only one Media Engine, it can be assigned to only one vGPU profile, but that profile can be shared across multiple VMs.

For these devices, a vSphere cluster now has 2 important options: configuring the VM stun time during vMotion, and DRS automation for passthrough VMs.

Finally, we now have support for multiple on-premises and cloud IdPs for enabling SSO, MFA, and modern authentication mechanisms, and PingFederate is a good start.

You can now quickly configure modern TLS ciphers, simplifying host configuration.

💡
This operation requires a reboot of ESXi host!
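A sketch of what this could look like from the ESXi shell (which ships a Python interpreter). The `esxcli system tls server` namespace and the NIST_2024 profile name are my reading of the 8.0 U3 release notes, not something this article confirms, so verify them on your build before use.

```python
# Sketch, run on the ESXi host itself: inspect and set the TLS profile via
# esxcli. Command namespace and profile name are assumptions to verify.
import subprocess

# Show the currently active TLS profile.
subprocess.run(["esxcli", "system", "tls", "server", "get"], check=True)

# Apply the stricter profile; remember this requires a host reboot to take effect.
subprocess.run(["esxcli", "system", "tls", "server", "set",
                "--profile", "NIST_2024"], check=True)
```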
💡
ETCD is a key-value store that serves as the backing store for all cluster data in Kubernetes. It is crucial for the reliability and consistency of Kubernetes clusters.

ETCD uses a quorum-based mechanism to ensure data consistency and reliability. This means that a majority of ETCD nodes (a quorum) must agree on any changes before they are applied. This mechanism helps prevent split-brain scenarios and ensures that the data remains consistent across the cluster.
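To make the quorum arithmetic concrete, here is a short illustration: with n members, quorum is floor(n/2) + 1, so fault tolerance is n minus quorum.

```python
# Quorum arithmetic behind the placement guidance below: with n etcd members,
# quorum is floor(n/2) + 1, and fault tolerance is n - quorum.
def quorum(n: int) -> int:
    return n // 2 + 1

for n in (1, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {n - quorum(n)} failure(s)")

# With 3 members split 2+1 across two sites, losing the 2-member site leaves
# 1 member, which is below quorum(3) = 2 -- so the split buys nothing.
```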

There is no real benefit to spreading a given odd number of Control Plane VMs across 2 sites in an active-active deployment.

Instead, deploy the 3 Supervisor Control Plane VMs on the same site. All Control Plane VMs of the Kubernetes clusters should also be placed on that same site.

Worker nodes can be spread across the 2 sites. For VM placement, you can use Host Affinity rules for each type of VM; a minimal rule sketch follows below.
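Here is a minimal pyVmomi sketch of a "should run on" VM/Host affinity rule that pins control-plane VMs to the hosts of one site. The group and rule names are placeholders, and the cluster, VM, and host lookups are left to the caller.

```python
# Minimal pyVmomi sketch: a non-mandatory ("should run on") VM/Host affinity
# rule pinning control-plane VMs to one site's hosts. Names are placeholders.
from pyVmomi import vim

def pin_vms_to_site(cluster, vms, hosts):
    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(operation="add",
                info=vim.cluster.VmGroup(name="cp-vms", vm=vms)),
            vim.cluster.GroupSpec(operation="add",
                info=vim.cluster.HostGroup(name="site-a-hosts", host=hosts)),
        ],
        rulesSpec=[
            vim.cluster.RuleSpec(operation="add",
                info=vim.cluster.VmHostRuleInfo(
                    name="cp-on-site-a",
                    enabled=True,
                    mandatory=False,  # "should", not "must"
                    vmGroupName="cp-vms",
                    affineHostGroupName="site-a-hosts",
                )),
        ],
    )
    # Apply the rule; modify=True merges with the existing cluster config.
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```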

The limit has been increased to 250 volumes, which will help customers that need additional file share volumes for Kubernetes Persistent Volumes (PV) or Persistent Volume Claims (PVC).

Conclusion


VMware vSphere 8 Update 3 offers significant improvements in lifecycle management, performance, security, and operational efficiency. These enhancements are designed to help organizations optimize their IT infrastructure and achieve greater agility and reliability.

Alexey Koznov
