VMworld 2017 Roundup: Day 3
The second full day of VMworld was one of the fullest. Onward!
For general session commentary, check out the Live Blog. The super-condensed version:
- Pivotal and VMware announce Pivotal Container Service (PKS), a joint offering that was developed in partnership with Google Cloud (the ‘K’ alludes to Kubernetes),
- VMware showed off the new PKS as well as AppDefense and Wavefront, via a fictitious customer story, and
- VMware Workspace ONE is now available for trial.
NetApp HCI, Simplicity AND Enterprise Scale [STO3308BUS]
Gabe Chapman started off the session by giving an overview of hyper-converged infrastructure (HCI). In theory, HCI is software-defined compute, network and storage in a single package. In reality, there have been (and in some cases still are) challenges with that approach. First-generation HCI solutions had to compromise on performance, flexibility and consolidation to deliver a "single-box" product. This was often rooted in the limited ways in which nodes were engineered to make the various infrastructure components available. If you needed more storage, you were forced to consume more compute, whether you needed it or not, and vice-versa if you just needed more compute and not storage. Depending on your workloads, adding compute can increase your per-CPU licensing requirements (e.g. Oracle, vSphere, Microsoft SQL), which means that extra storage you needed just got much more expensive.
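That licensing knock-on effect is easy to illustrate with a back-of-the-envelope calculation. The function and all prices below are made-up placeholders for the sake of the example, not figures from NetApp or any license vendor:

```python
# Back-of-the-envelope sketch of the per-CPU licensing knock-on cost
# when a tightly coupled HCI design forces you to add compute nodes
# just to get more storage. All numbers are illustrative placeholders.

def extra_licensing_cost(nodes_added: int,
                         cpus_per_node: int,
                         license_cost_per_cpu: float) -> float:
    """Licensing cost incurred by compute capacity you didn't actually need."""
    return nodes_added * cpus_per_node * license_cost_per_cpu

# Say you only needed storage, but the platform forces two more
# compute-bearing nodes with 2 CPUs each, licensed at $10,000 per CPU:
cost = extra_licensing_cost(nodes_added=2, cpus_per_node=2,
                            license_cost_per_cpu=10_000)
print(cost)  # 40000.0
```

Decoupling storage and compute nodes, as NetApp HCI does, is precisely what lets you avoid this term growing when only storage needs to scale.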
NetApp’s HCI offering is built on top of SolidFire Element OS, which takes care of software-defined storage, and VMware vSphere, which handles software-defined compute (i.e. virtualization). These are tied together by the NetApp Deployment Engine, which reduces the steps and effort involved in deploying the solution. If a customer wants to use NetApp HCI to deliver file services, they can deploy a virtualized ONTAP instance (NetApp ONTAP Select vNAS) that makes those services available, while still getting the benefits of block storage on the back-end.
NetApp HCI will be delivered as a four-node chassis. You need at least two compute nodes and two storage nodes to start, but you’re free to add nodes as necessary after that. Nodes are available in “t-shirt” sizes, meaning there are three sizes of compute node and three sizes of storage node to choose from.
Within NetApp HCI, the client can specify performance guarantees per application by setting minimum, maximum and burst IOPS. This lets you create walled gardens around your apps, so that you’re less affected by “noisy neighbour” syndrome. If VVOLs are deployed, these guarantees can be specified per VM; otherwise they’re applied per VMDK. Thinking about the practicality of this, there will likely need to be some sort of show-back or charge-back to business units that ask for high IOPS guarantees. Otherwise, every business service will ask for the highest guarantee.
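For the curious, SolidFire’s Element OS exposes this QoS control programmatically through a JSON-RPC API. Below is a minimal sketch of what a request body that sets min/max/burst IOPS on a volume could look like; the `ModifyVolume` method and `qos` field names follow the Element API as I understand it, while the volume ID, IOPS values and endpoint are placeholders:

```python
import json

def qos_payload(volume_id: int, min_iops: int, max_iops: int,
                burst_iops: int) -> str:
    """Build a SolidFire Element-style JSON-RPC request body that sets
    QoS (min/max/burst IOPS) on a volume. All concrete values here are
    illustrative placeholders."""
    request = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # guaranteed floor
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst ceiling
            },
        },
        "id": 1,
    }
    return json.dumps(request)

# This body would be POSTed to the cluster's JSON-RPC endpoint,
# e.g. https://<cluster-mvip>/json-rpc/<version> (auth omitted here).
print(qos_payload(volume_id=42, min_iops=1000, max_iops=5000,
                  burst_iops=8000))
```

The same min/max/burst triple is what the vCenter plugin surfaces in the UI, so automation and point-and-click administration stay consistent.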
Gabe cautions to watch out for what he calls the “HCI tax”. He’s seen some HCI offerings deploy controller VMs that use anywhere between 16-128 GB of RAM and 4-8 CPUs each. These controller VMs take away from the resources available to workload VMs on each HCI node. Also watch out for HCI solutions that do not support deploying vCenter directly on the HCI solution, meaning you would need to find more infrastructure to house vCenter.
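To put the “HCI tax” in perspective, here is a quick sketch of how much of a node a controller VM can consume. The node specs are assumed for illustration, not taken from any vendor datasheet:

```python
def hci_tax(node_ram_gb: float, node_cpus: int,
            ctrl_ram_gb: float, ctrl_cpus: int) -> tuple[float, float]:
    """Fraction of a node's RAM and CPUs consumed by a controller VM.
    All node and controller specs here are assumed example figures."""
    return ctrl_ram_gb / node_ram_gb, ctrl_cpus / node_cpus

# A worst-case 128 GB / 8-CPU controller VM on an assumed
# 512 GB, 32-core node:
ram_frac, cpu_frac = hci_tax(512, 32, 128, 8)
print(f"{ram_frac:.0%} of RAM, {cpu_frac:.0%} of CPUs")  # 25% of RAM, 25% of CPUs
```

On smaller nodes the fraction gets worse, which is why the overhead matters most at the low end of the sizing range.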
Back to the NetApp Deployment Engine (NDE): it is capable of deploying Element OS, ESXi, vCenter with the relevant HCI plugins, and a single monitoring VM to the NetApp HCI environment. NetApp HCI has administrators manage their daily operations through the familiar vCenter interface, but also provides a REST-based API. The NetApp Data Fabric that is also part of NetApp HCI means the solution can be integrated with AWS & Azure. The NetApp pedigree means that familiar NetApp features, like SnapMirror, can be leveraged to migrate storage/VMs between NetApp HCI and other NetApp-capable targets.
NetApp HCI general availability is targeted for October 27th of this year.