Hi all, the idea today is to briefly summarize the new features and functionality in the upcoming Nutanix AOS release, v5.17.
Please keep in mind this is a Short Term Support (STS) release, so you should expect to see these features and functionalities available in a future LTS release within the next 10-12 months.
Sync (Metro) Support for AHV
This has to be THE most eagerly awaited feature for AHV.
The caveat is that, for now, the only supported failover method is manual (there is no Witness option).
Near-Sync Support with Leap
A NearSync policy is applied using a protection policy:
- Max 100 VMs per Protection Policy
- RPO in the 1-minute to 14-minute range
- Still requires a PC instance per site
Single PC Support for Leap
Mainly for ROBO Use Cases
RPO >= 1hr
Cross-hypervisor DR (CHDR) is not supported (source and target must run the same hypervisor).
Near-Sync Support with Metro Availability in vSphere Environments
Metro-protected workloads can now also participate in a NearSync schedule to a third site.
MetroAv break / re-enable tasks do not impact NearSync replication, and vice versa.
Network Segmentation for Replication
You now have the ability to create a separate network for external replication using an externally routable VLAN / subnet.
This is not for RF traffic but for DR traffic.
- 3+ nodes per cluster required.
- For Greenfield environments only.
- Not compatible with Leap (yet).
Leap Static IP Mappings
Ability to Offset IP addresses within the same subnet.
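The offset idea can be illustrated with a short Python sketch. This is purely illustrative (the function name and logic are mine, not Nutanix's implementation): the point is that a recovered VM keeps its subnet but gets a host address shifted by a fixed amount.

```python
import ipaddress

def offset_ip(ip: str, offset: int) -> str:
    """Shift a host address by a fixed offset while staying in the same subnet.

    Illustrative only: real Leap static IP mappings are configured in a
    recovery plan, not computed by user code.
    """
    iface = ipaddress.ip_interface(ip)   # e.g. "10.0.0.25/24"
    shifted = iface.ip + offset          # apply the offset to the host address
    if shifted not in iface.network:
        raise ValueError("offset moves the address out of the subnet")
    return str(shifted)

# Map a protected VM's address to its recovery address, keeping the subnet.
print(offset_ip("10.0.0.25/24", 100))  # → 10.0.0.125
```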
Leap Recovery Plan Updates
Ability to add custom scripts to be executed during the power-on sequence of recovered VMs at a DR site (at the guest level).
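As a sketch of what such a guest-level script might do, here is a hypothetical example that re-points an application's config at a DR-site database after failover. Every path and hostname is made up for illustration; the mechanism (Leap invoking an in-guest script at power-on) is what the feature provides, while the script body is up to you.

```python
# Hypothetical guest-level recovery script: after failover, re-point an
# application's config file at the DR-site database. All paths and
# hostnames below are placeholders, not part of any Nutanix API.
from pathlib import Path

PRIMARY_DB = "db.primary.example.com"   # hypothetical production endpoint
DR_DB = "db.dr.example.com"             # hypothetical DR endpoint

def repoint_config(path: Path) -> bool:
    """Rewrite the config in place; return True if a change was made."""
    text = path.read_text()
    if PRIMARY_DB not in text:
        return False                    # nothing to do (safe to re-run)
    path.write_text(text.replace(PRIMARY_DB, DR_DB))
    return True

# Stand-in for the real application config on the recovered VM.
cfg = Path("app.conf")
cfg.write_text(f"db_host={PRIMARY_DB}\n")
repoint_config(cfg)
print(cfg.read_text().strip())  # → db_host=db.dr.example.com
```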
Data Resiliency Widget
The Data Resiliency widget in PE needed to be more informative and accurate. This is what the pre-5.17 widget looks like:
Now you can see the data resiliency of a cluster at a glance, and, more usefully, you can see more clearly whether the displayed state differs from the configured (expected) resiliency state.
You can also see the configured failure domain and FT level of a cluster:
Rack Awareness for Hyper-V
Rack Awareness has been supported for a while now on the AHV and ESXi hypervisors, and now it's Hyper-V's turn to support this feature.
SCSI-3 Persistent Reservations with Volumes
With 5.17, SCSI-3 persistent reservations are supported for Volume Groups that are directly attached to an AHV VM.
This simplifies clustered application deployment by removing the need to set up in-guest iSCSI connections in order to leverage SCSI-3 persistent reservations.
This is supported for Windows and RHEL VMs running on AHV.
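To see why clustered applications care about this, here is a toy model of the SCSI-3 persistent reservation flow (register / reserve / preempt). This is not the AOS implementation, just a sketch of the arbitration semantics that clustering software such as Windows Failover Clustering relies on to fence out failed nodes.

```python
class ToyPRTarget:
    """Toy model of SCSI-3 persistent reservations (exclusive-style).

    Real PR is a SCSI protocol handled by the storage target (here, AOS
    Volumes); this sketch only mimics the register/reserve/preempt flow.
    """
    def __init__(self):
        self.registered = set()   # initiator keys registered on the LUN
        self.holder = None        # key currently holding the reservation

    def register(self, key):
        self.registered.add(key)

    def reserve(self, key):
        if key not in self.registered:
            return False
        if self.holder in (None, key):
            self.holder = key
            return True
        return False              # another node holds the reservation

    def preempt(self, key, victim):
        """A surviving node evicts a failed node's registration."""
        if key not in self.registered or victim not in self.registered:
            return False
        self.registered.discard(victim)
        if self.holder == victim:
            self.holder = key
        return True

lun = ToyPRTarget()
lun.register("node-a")
lun.register("node-b")
assert lun.reserve("node-a")       # node-a owns the shared disk
assert not lun.reserve("node-b")   # node-b is fenced out
lun.preempt("node-b", "node-a")    # node-a fails; node-b takes over
assert lun.holder == "node-b"
```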
Efficiency & Performance
Erasure Coding Interop with AES
Autonomous Extent Store (AES) was released with AOS 5.11, improving sustained performance for AOS workloads by keeping a portion of metadata on the node where a workload resides.
AOS 5.17 enables EC for AES.
Merged vBlock Metadata
Read performance can sometimes be impacted by the presence of long snapshot chains (will we ever learn?), especially when the metadata is not served from cache.
This can happen with large or changing working sets, or when the Stargate service restarts.
Merged vBlocks introduce a new metadata entity that helps improve read performance by 4x during IO ramp-up across all platforms, and also allegedly improves random read performance by 30% for all-flash and 50% for hybrid clusters.
This is currently enabled ONLY for clusters that have a storage capacity between 60TB and 70TB. We can expect that future AOS releases will address this limitation.
Merged vBlock Metadata is also automatically disabled when dedup is enabled on the container, so there are still a few caveats for using this feature.
Object licensing management has now been enabled in PC.
Common Access Card (CAC) authentication is now supported for PC (this method of authentication is mainly used by active-duty US military personnel).
Mercury API Gateway is a new gateway for the v3 APIs, replacing the Aptos API Gateway. The new gateway allegedly performs better than the previous one (more API requests served per second).
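For context, the v3 APIs the gateway serves are POST-with-JSON calls. The sketch below builds a "list VMs" request body; the endpoint path and body fields follow the public v3 API conventions, but treat the PC address as a placeholder and verify the details against the official API reference before use.

```python
import json

# Sketch of a Nutanix v3 "list VMs" request, as served by the API gateway.
# The PC address below is a placeholder; the request is built but not sent.
PC = "https://pc.example.com:9440"
ENDPOINT = f"{PC}/api/nutanix/v3/vms/list"

payload = {
    "kind": "vm",   # v3 list calls take the entity kind...
    "length": 20,   # ...plus paging parameters
    "offset": 0,
}
body = json.dumps(payload)
print(ENDPOINT)
print(body)
```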
Prism Central Scalability
PC now supports up to 25,000 VMs and 300 clusters (depending on the number of nodes per cluster).
PC now also has the ability to spread its load across multiple vDisks, improving Cassandra throughput. This is for Greenfield environments only; existing PC instances cannot be migrated to multiple vDisks.
OK, that's a wrap! I hope this gross oversimplification gives you a good starting point to do your own research on adopting some of these features.
Stay home, stay safe!